The tech sector is booming, and there is high demand for programmers, especially those who have mastered their coding skills.
Programming jobs are paying significantly more than the average position. An understanding of at least one programming language makes an impressive addition to any resume.
If you are wondering which programming language you should learn for a better career, the good news is that all of the popular languages offer fairly similar compensation.
Below are some of the most popular programming languages that you can learn to sharpen your coding skills.
The C programming language was originally intended for writing system software.
C is a high-level, general-purpose language that is ideal for developing firmware or portable applications. It was developed at Bell Labs by Dennis Ritchie for the Unix operating system in the early 1970s.
C compilers exist for most computer systems, and C has influenced many later languages such as C++. It is also ranked among the most popular and widely used languages.
1 a) C++
C++ is an intermediate-level language with object-oriented programming features, originally designed to enhance the C language.
C++ powers major software like Firefox, Winamp and Adobe programs. It’s used to develop systems software, application software, high-performance server and client applications and video games.
Java is one of the most in-demand programming languages; it is a standard for enterprise software, web-based content, games and mobile apps, as well as the Android operating system.
Java is designed to work across multiple platforms; for example, a program written on Windows can also run on Mac OS.
PHP is a widely used open-source, general-purpose scripting language that is especially suited for web development and can be embedded into HTML.
PHP powers more than 200 million websites, including WordPress and Facebook. It can be directly embedded into an HTML source document rather than an external file, which has made it a popular programming language for web developers.
JavaScript is most commonly used as part of web browsers, whose implementations allow client-side scripts to interact with the user, control the browser, communicate asynchronously, and alter the document content that is displayed.
Python is a programming language which is used for many different applications. It’s used in some high schools and colleges as an introductory programming language because Python is easy to learn, but it’s also used by professional software developers at places such as Google, NASA, and Lucasfilm Ltd.
The Python programming language was created in the late 1980s and named after Monty Python. Its design philosophy emphasizes code readability, and its syntax allows programmers to express concepts in fewer lines of code than would be possible in languages such as C++ or Java.
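To make the readability point concrete, here is a small illustrative sketch (not taken from any course or real codebase) of a word-count routine in idiomatic Python; the equivalent program in Java or C++ would typically need a class, explicit type declarations and manual map handling, and so would run to noticeably more lines.

```python
from collections import Counter

def count_words(text: str) -> Counter:
    """Count how often each word appears, ignoring case and basic punctuation."""
    words = (word.strip(".,!?;:").lower() for word in text.split())
    return Counter(word for word in words if word)

if __name__ == "__main__":
    sample = "To be, or not to be: that is the question."
    for word, count in count_words(sample).most_common(3):
        print(f"{word}: {count}")
```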
These are the most popular programming languages. These languages can make you rich, popular, and incredibly attractive, or at least might help keep you employed.
However, if you are looking for a course to improve your programming skills, log on to www.hunarr.co.in to find the best institutes for mastering your abilities.
In this report, Unshackling Expression, APC and its partner organisations study the state of freedom of expression on the internet in six Asian countries: Cambodia, India, Malaysia, Myanmar, Pakistan and Thailand. While the national reports provide an in-depth study of the state of freedom of expression online in the six countries, a study of internet rights in Asia is incomplete without a preliminary study of the international standards for freedom of expression. International standards form the yardstick, the baseline, for national standards on freedom of expression – and are the standards to which national laws must adhere. The six countries that form part of this study also have protections for freedom of expression in their constitutions, and most of these states are parties to international human rights treaties, imbuing them with an obligation to protect and respect international standards for the protection of human rights.
Unshackling Expression is a study of the criminalisation of and curbs placed on freedom of expression using laws and policies at the domestic level. A harsh measure, criminalisation affects the freedom of expression of people both directly and indirectly. Directly, it forms a clear, physical restraint on speakers who make their views known online. Indirectly, it causes a chilling effect on citizens, oftentimes resulting in self-censorship, leading to a less diverse and more conformist cyberspace. Further, restrictions on freedom of opinion and expression adversely affect the right "to seek, receive and impart information and ideas of all kinds." In a 2011 report to the UN Human Rights Council, the former UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, Frank La Rue, states:
[L]egitimate online expression is being criminalized in contravention of States’ international human rights obligations, whether it is through the application of existing criminal laws to online expression, or through the creation of new laws specifically designed to criminalize expression on the internet. Such laws are often justified on the basis of protecting an individual’s reputation, national security or countering terrorism, but in practice are used to censor content that the Government and other powerful entities do not like or agree with. 1
Freedom of expression is particularly crucial when it comes to the internet. Offline, one may have multiple ways of expressing oneself, but online, publication and participation are the first acts. All exercise of freedom of expression online begins with the act of publication – whether it be a publication of views through writing, posts, comments, messages or tweets, or through the use of visual, video or audio content. As such, any restriction on online content becomes a harsh restraint on freedom of expression, and none more so than the criminalisation of content or other forms of expression. Not only this, but in Asia in particular, there are several trends that are problematic to the free use of the internet.
In this chapter, we consider the international standards that define freedom of expression, and in particular, freedom of expression online, and also take a look at the regional standards established by the Association of Southeast Asian Nations (ASEAN).
International standards on freedom of speech and expression online
The history of the right of freedom of speech and expression precedes the internet. It finds its beginnings in the Universal Declaration of Human Rights (UDHR) and the International Covenant on Civil and Political Rights (ICCPR). As a binding treaty, the ICCPR carries greater weight in international law. The UDHR and ICCPR guarantee certain inalienable rights to human beings. Recognising the inherent dignity of all beings, the ICCPR and UDHR guarantee, inter alia, the right to freedom of expression,2 the right to privacy,3 the right against advocacy of national, religious or racial hatred (understood as the right against “hate speech”)4 and the right to freedom of religion.5 Moreover, the ICCPR prohibits discrimination on grounds of race, colour, sex, language, religion, political or other opinion, national or social origin, property, birth or other status.6 These rights, among all the others guaranteed under the ICCPR, are available to all human beings, regardless of their countries of origin and residence.
The right to freedom of opinion and expression is a crucial right in the ICCPR. It is the "foundation stone of every free and democratic society."7 Without freedom of expression, the full development of the individual is impossible. Moreover, the "marketplace of ideas" aids the pursuit of truth. Without freedom of expression, the autonomy of an individual may be considered curtailed and restrained.
The importance of the right led the Human Rights Committee to hold that a general reservation to paragraph 2 of Article 19 of the ICCPR was unacceptable.8 Article 19 of the ICCPR as well as the UDHR guarantees the right to hold opinions without interference and guarantees everyone the right to freedom of expression and the right to receive and impart information, regardless of frontiers. Any limitations placed on this right must meet the standards required and justified by provisions in Article 19(3) of the ICCPR. Article 19 of the ICCPR reads:
(1) Everyone shall have the right to hold opinions without interference;
(2) Everyone shall have the right to freedom of expression; this right shall include freedom to seek, receive and impart information and ideas of all kinds, regardless of frontiers, either orally, in writing or in print, in the form of art, or through any other media of his choice…
As the text of the right makes clear, the right to freedom of opinion, speech and expression is available regardless of borders or frontiers. More importantly, it is available through any media of one’s choice. It is this terminology that is crucial when considering freedom of speech online.
In addition to the international treaties, several regional charters also guarantee the right to freedom of opinion and expression. In Asia, it is the ASEAN Charter9 and the ASEAN Human Rights Declaration10 that enshrine this right. Vowing to respect and protect “human rights and fundamental freedoms,” the ASEAN Charter incorporates as one of its principles the “respect for fundamental freedoms, the promotion and protection of human rights, and the promotion of social justice.” Article 14 of the Charter states that “ASEAN shall establish an ASEAN human rights body” in accordance with the purposes and principles of the ASEAN Charter.
Taking off from this, the ASEAN Intergovernmental Commission on Human Rights was established in 2009, and the ASEAN Human Rights Declaration was unanimously adopted in November 2012. Under Article 23 of the ASEAN Human Rights Declaration:
Every person has the right to freedom of opinion and expression, including freedom to hold opinions without interference and to seek, receive and impart information, whether orally, in writing or through any other medium of that person’s choice.
In its General Comment No. 34, the Human Rights Committee confirmed that Article 19 applies online just as it applies offline.11 The General Comment contains the authoritative interpretation of Article 19, including the scope and extent of the right.
The Human Rights Committee holds that there shall be no exceptions to the right to hold opinions, whether they are of a "political, scientific, historic, moral or religious nature." 12 In particular, the Committee makes clear that it is unacceptable to criminalise the holding of an opinion:
The harassment, intimidation or stigmatization of a person, including arrest, detention, trial or imprisonment for reasons of the opinions they may hold, constitutes a violation of article 19, paragraph 1. 13
As we shall see in the following national reports, the Asian states that form part of this study stand in potential violation of this understanding of Article 19, paragraphs 1 and 2. Moreover, the right to freedom of expression encompasses a wide variety of activities, including offensive speech (not falling within the ambit of Article 20, ICCPR), 14 and applies to "all forms of audio-visual as well as electronic and internet-based modes of expression." 15
In addition to Article 19, Article 20 of the ICCPR also impacts speech. Article 20 prohibits any advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence. Speech that falls within the ambit of Article 20 (as hate speech) cannot merely be offensive, but must have an intent to cause harm, and be likely to cause harm. That is, for speech to fall within the definition of hate speech, it must have the quality of inciting imminent violence. 16 It cannot merely be a statement, but rather a call to violence on any of the above grounds, in order to qualify as hate speech. While restrictions are permissible on the above given grounds, they must also be necessary and proportionate to the aim sought to be achieved, and imposed by law.
Where the internet is concerned, the abovementioned report of former Special Rapporteur Frank La Rue takes on particular importance. La Rue highlights the “unique and transformative nature of the Internet not only to enable individuals to exercise their right to freedom of opinion and expression, but also a range of other human rights.” 17 The internet enables individuals not merely to be passive receivers of information, but to be active publishers of knowledge and information, for the internet, as an interactive medium, enables individuals to take active part in the creation and dissemination of information.
Moreover, the Human Rights Council has affirmed that offline human rights must be equally protected and guaranteed online. In its 20th session (29 June 2012), the Human Rights Council adopted a resolution which unanimously declared:
[T]he same rights that people have offline must also be protected online, in particular freedom of expression, which is applicable regardless of frontiers and through any media of one’s choice, in accordance with articles 19 of the Universal Declaration of Human Rights and the International Covenant on Civil and Political Rights. 18 (Emphasis supplied.)
However, it is important to remember that the right to freedom of speech and expression is not absolute. The ICCPR states that the right may be curtailed, if necessary and if provided by law, for the following reasons:
For respect of the rights or reputations of others;
For the protection of national security or of public order (ordre public), or of public health or morals. 19
The ASEAN Human Rights Declaration goes one step further. Its clause on restrictions, Article 8, states:
The human rights and fundamental freedoms of every person shall be exercised with due regard to the human rights and fundamental freedoms of others. The exercise of human rights and fundamental freedoms shall be subject only to such limitations as are determined by law solely for the purpose of securing due recognition for the human rights and fundamental freedoms of others, and to meet the just requirements of national security, public order, public health, public safety, public morality, as well as the general welfare of the peoples in a democratic society.
As the text makes clear, the ASEAN Human Rights Declaration expands the scope of justifications on the basis of which the right to freedom of opinion and expression may be restricted. In addition to the justifications provided in the ICCPR, the ASEAN Human Rights Declaration also adds public safety and the vague and open-ended “general welfare of peoples in a democratic society” as legitimate aims for the restriction of freedom of speech.
While restrictions are indeed permissible, they must meet tests of permissibility: they must be outlined by law, necessary and proportionate to protect a legitimate aim. These are the conditions laid down in the UDHR and the ICCPR. The test of legality requires that the restriction set by any government on the right to freedom of expression be expressly laid out in a law. This legislation, order or bylaw must be publicly available and understandable by the public, and no restriction is valid unless it has the backing of the law. 20 The law must be both accessible and foreseeable.21
Not only must the restriction be based in law, it must also be legitimate. The test of legitimacy requires that the restriction on freedom of expression be based on one of the justifications laid out in Article 19(3). 22 What are these justifications? Article 19(3) states that “protection of national security or of public order (ordre public), or of public health or morals” and “respect of the rights or reputations of others” constitute legitimate reasons for the restriction of freedom of expression. Any restriction – and indeed, criminalisation – of expression that does not fall within these justifications is liable to be contested as falling foul of Article 19 of the ICCPR.
Finally, the test of necessity and proportionality requires that the restriction be based on a "pressing social need" which makes the restriction "necessary in a democratic society.” 23 It must be placed so as to fulfil the aims set forth in Article 19, paragraph 3, ICCPR. Of course, the state has a margin of appreciation in testing the necessity of the restriction, but the margin is narrow where freedom of expression is considered.24 In determining pressing social need, the test of pluralism, broadmindedness and tolerance is to be applied, 25 which accommodates divergent views and opinions.
Not only this, but the restriction placed by the state on freedom of expression must be proportional – i.e., the least onerous restriction must be applied to appropriately meet the need. 26 A broad restriction is unacceptable, and the restriction must be narrowly tailored. For instance, the incidence of internet shutdowns across the world, where access to the internet is completely cut off in response to any situation (primarily, states use the excuse of security), is disproportionate to the aims of the restriction, 27 and so would be contested under Article 19, paragraph 3.
1 La Rue, F. (2011). Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, Frank La Rue. A/HRC/17/27. https://www.un.org/ga/search/view_doc.asp?symbol=A/HRC/17/27
2 Article 19, ICCPR: (1) Everyone shall have the right to hold opinions without interference; (2) Everyone shall have the right to freedom of expression; this right shall include freedom to seek, receive and impart information and ideas of all kinds, regardless of frontiers, either orally, in writing or in print, in the form of art, or through any other media of his choice…
3 Article 17, ICCPR: (1) No one shall be subjected to arbitrary or unlawful interference with his privacy, family, home or correspondence, nor to unlawful attacks on his honour and reputation; (2) Everyone has the right to the protection of the law against such interference or attacks.
4 Article 20, ICCPR: (2) Any advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence shall be prohibited by law.
5 Article 18, ICCPR: (1) Everyone shall have the right to freedom of thought, conscience and religion. This right shall include freedom to have or to adopt a religion or belief of his choice, and freedom, either individually or in community with others and in public or private, to manifest his religion or belief in worship, observance, practice and teaching; (2) No one shall be subject to coercion which would impair his freedom to have or to adopt a religion or belief of his choice…
6 Article 26, ICCPR.
8 “[A] general reservation to the rights set out in paragraph 2 would be incompatible with the object and purpose of the Covenant.” Ibid., at para. 6.
11 “They include all forms of audio-visual as well as electronic and internet-based modes of expression.” Human Rights Committee. (2011). Op cit., at para. 12.
12 Human Rights Committee. (2011). Op. cit., at para. 9.
14 Human Rights Committee. (2000, 18 October). Communication No. 736/97, Ross v. Canada.
15 Human Rights Committee. (2011). Op. cit., at para. 12.
16 Khandhadai, G. (2016). Desecrating Expression: An Account of Freedom of Expression and Religion in Asia. Bytes for All, Pakistan and FORUM-ASIA. https://www.forum-asia.org/uploads/wp/2016/12/Final_FoER_Report.pdf
17 La Rue, F. (2011). Op. cit.
18 Human Rights Council. (2012). The promotion, protection and enjoyment of human rights on the Internet. A/HRC/20/L.13. https://daccess-ods.un.org/TMP/3578843.1763649.html
19 Article 19(3), ICCPR.
20 Hinczewski v. Poland, No. 34907/05, § 34, ECHR 2010 (ECHR).
21 Keun-Tae Kim v. Republic of Korea, Communication no. 574/1994 CCPR/C/64/D/574/1994 (4 January 1999) (HRC); Sunday Times v. United Kingdom (no. 2), Judgment of 26 November 1991, no. 13166/87, Series A no. 216 (ECHR); Article 19 v. Eritrea, (2007) AHRLR 73 (ACHPR 2007).
22 Vladimir Petrovich Laptsevich v. Belarus, Communication no. 780/1997, § 8.5, UN Doc. CCPR/C/68/D/780/1997 (2000) (HRC); Vladimir Velichkin v. Belarus, Communication no. 1022/2001, § 7.3, UN Doc. CCPR/C/85/D/1022/2001 (2005) (HRC).
23 Jacobs, F. C., & White, R. C. A. (1996). The European Convention on Human Rights. Oxford: Clarendon Press; Handyside v. United Kingdom, Judgment of 7 December 1976, Series A no. 24 (ECHR); Vogt v. Germany (no. 1), Judgment of 26 September 1995, Series A no. 323 (ECHR); Proposed Amendments to the Naturalization Provisions of the Constitution of Costa Rica, Advisory Opinion OC-4/84, (1984) (Inter-Am. Ct.); Prince v. South Africa, 2004 AHRLR 105 (ACHPR 2004).
24 Lehideux & Isorni v. France, no. 22662/94, ECHR 1998-VII (ECHR); Schwabe v. Austria, Judgment of 28 August 1992, Series A no. 242-B (ECHR).
25 Handyside v. United Kingdom, Judgment of 7 December 1976, Series A no. 24 (ECHR); Sunday Times v. United Kingdom (no. 1), Judgment of 26 April 1979, Series A no. 30 (ECHR); Dudgeon v. United Kingdom Judgment of 23 September 1981, Series A no. 45 (ECHR).
26 The Queen v. Minister of Agriculture, Fisheries and Food and Secretary of Health, ex parte Fedesa and others, ECR I-4023 (ECJ); Klass v. Germany, Judgment of 6 September 1978, Series A no. 28 (ECHR); Compulsory Membership in an Association Prescribed by Law for the Practice of Journalism, §§ 33-5, 54, Advisory Opinion 5/85 (1985) (Inter-Am. Ct.); Nebraska Press Association v. Stuart; Reno v. ACLU 521 U.S. 844 (1997) (US Sup. Ct.); Human Rights Committee. (2011). Op. cit., at para. 34: "[…] must be the least intrusive instrument amongst those which might achieve their protective function."
A novel study on amyloid β peptide 40, 42 and 40/42 ratio in Saudi autistics
Behavioral and Brain Functions volume 8, Article number: 4 (2012)
We examined whether plasma concentrations of amyloid beta (Aβ) as protein derivatives play a central role in the etiology of autistic features.
Design and Methods
Concentrations of human Aβ (1-42), Aβ (1-40), and Aβ (40/42) in the plasma of 52 autistic children (aged 3-16 years) and 36 age-matched control subjects were determined by using the ELISA technique and were compared.
Compared to control subjects, autistic children exhibited significantly lower concentrations of both Aβ (1-40) and Aβ (1-42) and a lower Aβ (40/42) concentration ratio. Receiver operating characteristic (ROC) curve analysis showed that these Aβ peptide measurements had high specificity and sensitivity in distinguishing autistic children from control subjects.
The lower concentrations of Aβ (1-42) and Aβ (1-40) were attributed to loss of the Aβ equilibrium between the brain and blood, an imbalance that may lead to failure to draw Aβ out of the brain, and/or to impairment of the concentrations or kinetics of β- and γ-secretases, the enzymes involved in Aβ production.
Autism and other related autism spectrum disorders (ASDs) are behavioral syndromes that include various degrees of verbal, nonverbal, and social impairment, as well as restricted or stereotyped interests and activities. The disorders are characterized by early onset (before 36 months of age) [1, 2] and by long-lasting social or cognitive handicaps. With an overall prevalence of approximately 0.6%, ASDs are an important public health problem worldwide. Although international consensus considers these syndromes to be phenotypic expressions of impairments affecting the development of the central nervous system (CNS), numerous questions concerning their etiopathology are still unanswered.
Children with autism generally find it difficult to ignore irrelevant information and are easily distracted by other stimuli. Therefore, we can assume that these children may have a selective attention deficit. In humans, prenatal stress is linked to an increased vulnerability to various psychosocial problems of childhood and adulthood. In children, stress is associated with cognitive, behavioral, physical, and emotional problems [4–7], as well as with autism [8–10].
Free radicals seem to be implicated in the onset of autism. Reactive oxygen species (ROS), including superoxide (O2-), hydroxyl (•OH), hydrogen peroxide (H2O2), singlet oxygen (1O2), and nitric oxide (NO•), are produced through physiologic and pathologic processes. ROS are scavenged by specific defense systems, including antioxidant enzymes (superoxide dismutase [SOD], catalase [CAT], glutathione peroxidase [GPx]) and nonenzymatic antioxidants such as glutathione (GSH) and metallothioneins (MTs). Many autistic children seem to share a chronic flaw in the defense systems against ROS. In studies of the RBC of autistic children, Sogut et al. (2003) found higher concentrations of NO• and GPx, Zoroglu et al. (2004) reported higher concentrations of NO• and thiobarbituric acid-reactive substances (TBARs), Chauhan et al. (2004) found a reduction in antioxidant proteins, and Geier et al. (2009) and Al-Gadani et al. (2009) described a decrease in reduced GSH [15, 16]. In autistic Saudi children, overexpression of SOD, together with slightly inhibited CAT activity, indicated that these children are under H2O2 stress. It is well known that glutamate is inhibited by astrocytes in a concentration-dependent manner. The inhibition of CAT clearly potentiated this effect.
Alzheimer's disease (AD), the primary dementing disorder of the elderly, affects more than four million persons in the United States. Aging is the chief risk factor for AD. Important pathological hallmarks of AD include loss of synapses and the presence of senile plaques (SPs) and neurofibrillary tangles (NFTs). SPs consist of a highly dense core of Aβ peptide, a peptide 39 to 43 amino acids in length (1-42) that is surrounded by dystrophic neurites . Aβ (1-40), which composes approximately 90% of total secreted Aβ, aggregates much more slowly than Aβ (1-42) . Aβ in amyloid plaques consists mainly of the Aβ (1-42) species, whereas vascular amyloid is composed primarily of Aβ (1-40). The relatively high solubility of Aβ (1-40) may allow this species to diffuse for greater distances than the less soluble Aβ (1-42), thereby increasing its deposition around brain vessels .
A growing body of evidence indicates that Aβ peptide toxicity is mediated by free radical damage to cell membranes [20–23]. The concept that Aβ induces lipid peroxidation is a key component of the Aβ-associated free radical model of neurodegeneration in AD [23, 24]. Consistent with a free radical process, Aβ causes lipid peroxidation in brain cell membranes, and this peroxidation is inhibited by free radical antioxidants [21, 23]. Giedraitis et al. (2007) suggest that the normal equilibrium between cerebrospinal fluid (CSF) and plasma Aβ may be disrupted in AD patients and may result in the initiation of amyloid deposition in the brain.
The findings of in vitro studies of lipid peroxidation induced by Aβ (1-42) and of postmortem studies of lipid peroxidation (and its sequelae) in the AD brain, together with the confirmed role of oxidative stress in the etiology of autism [14–16], motivated us to study plasma concentrations of Aβ peptides in autistic Saudi children and age-matched control subjects, in an attempt to investigate the equilibrium status between the brain and blood and to highlight other factors that might contribute to the alteration of plasma Aβ peptide concentrations. This comparison may help to clarify the causative role of Aβ peptide-induced oxidative stress in the pathology of autism, as well as the possibility of using these peptides as biomarkers of the disorder if they show sufficient sensitivity and specificity on receiver operating characteristic (ROC) analysis. This could support early diagnosis and intervention to help control the prevalence of this disorder.
2. Materials and methods
2.1. Subjects and methods
The study protocol followed the ethical guidelines of the most recent Declaration of Helsinki (Edinburgh, 2000). Written informed consent was provided by the children's parents, and the children themselves assented to participation if they were developmentally able to do so. Subjects for this study were enrolled through the Autism Research and Treatment (ART) Center clinic, whose sample population consists of children aged 3 to 16 years with a diagnosis of ASD. The diagnosis was confirmed by using the Autism Diagnostic Interview-Revised (ADI-R), the Autism Diagnostic Observation Schedule (ADOS), and the Developmental, Dimensional Diagnostic Interview (3DI). Of the 52 autistic children, 40 were nonverbal and 12 were verbal. The intelligence quotient (IQ) of all autistic children was lower than 80. All children had sporadic autism (simplex cases), and all tested negative for Fragile X syndrome. The control subjects were recruited from the well-baby clinic at King Khaled University Hospital; they also ranged in age from 3 to 16 years. Subjects were excluded from the study if they had dysmorphic features, tuberous sclerosis, Angelman syndrome, or other serious neurological (e.g., seizures), psychiatric (e.g., bipolar disorder), or medical (e.g., endocrine, cardiovascular, pulmonary, liver, kidney) conditions. All participants were screened via parental interview for current and past physical illness.
2.2. Sample collection
After an overnight fast, 10 ml blood samples were collected from both groups in test tubes containing sodium heparin as an anticoagulant. Tubes were centrifuged at 3500 rpm at room temperature for 15 minutes; the plasma was then separated and deep-frozen at -80°C until the time of analysis.
2.3. Measurement of Aβ (1-40) and Aβ (1-42)
Plasma concentrations of Aβ were measured by using the human Aβ (1-40) and Aβ (1-42) TGC ELISA kit (The Genetics Company, Schlieren, Switzerland) according to the manufacturer's instructions. Briefly, plasma samples were diluted 100-fold in assay buffer and processed according to the manufacturer's recommended protocols. Samples and standards were incubated in capture wells overnight at 8°C with antibodies specific for Aβ (1-40) or Aβ (1-42). The capture antibody was 6E10 (Sigma, St Louis, Missouri), and the detection antibody was a biotin-labelled G2-10 (The Genetics Company, Schlieren, Switzerland). The synthetic Aβ (1-40) peptide (Bachem, Bubendorf, Switzerland) was used as the standard. After several rinses, the enzyme-conjugated detection reagent was added to the wells for 30 minutes. After additional rinses, wells were incubated with the chromogen solution for 30 minutes at room temperature, shielded from light. After the addition of the stop solution, the wells were read for absorption at 450 nm, and the Aβ concentration in the samples was calculated from standard curves. The detection limit was 25 pg/mL.
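The kit protocol quoted above does not say how the standard curve is fitted, so the following is only a sketch of one common approach: fitting a four-parameter logistic (4PL) curve to the calibration standards and inverting it to read off sample concentrations. The calibration values, the 4PL model choice, the starting parameters and the function names are all assumptions made for illustration, not data from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """4PL model: a = response at zero dose, d = maximum response,
    c = inflection concentration, b = slope factor."""
    return d + (a - d) / (1.0 + (x / c) ** b)

# Hypothetical calibration standards: concentration (pg/mL) vs. absorbance at 450 nm.
std_conc = np.array([25.0, 50, 100, 200, 400, 800, 1600])
std_abs = np.array([0.08, 0.15, 0.27, 0.48, 0.80, 1.20, 1.55])

params, _ = curve_fit(four_pl, std_conc, std_abs, p0=[0.05, 1.0, 300.0, 1.8], maxfev=10000)

def absorbance_to_conc(absorbance, a, b, c, d, dilution_factor=100):
    """Invert the fitted 4PL curve and correct for the 1:100 plasma dilution."""
    ratio = (a - d) / (absorbance - d) - 1.0
    return c * ratio ** (1.0 / b) * dilution_factor

print(absorbance_to_conc(0.42, *params))  # hypothetical well reading, result in pg/mL
```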
2.4. Statistical analysis
Results were expressed as means ± S.D. Statistical comparisons were performed with independent t-tests using the Statistical Package for the Social Sciences (SPSS). Significance was assigned at the level of P < 0.05. Receiver operating characteristic (ROC) curve analysis was also performed, and the area under the curve, cutoff values, and degree of specificity and sensitivity were calculated.
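For readers who want to reproduce this style of analysis outside SPSS, the sketch below runs the same steps (independent t-test, ROC curve, AUC, and a cutoff with its sensitivity and specificity) with scipy and scikit-learn. The group sizes match the study (52 autistic, 36 control), but the concentration values are purely synthetic, and the Youden-index rule for choosing the cutoff is an assumption, since the paper does not state how its cutoff values were selected.

```python
import numpy as np
from scipy import stats
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)

# Purely synthetic plasma Abeta(1-40) values (pg/mL); the study's real data are not reproduced here.
aut = rng.normal(120, 30, 52)   # hypothetical autistic group (n = 52)
ctl = rng.normal(170, 35, 36)   # hypothetical control group (n = 36)

# Independent two-sample t-test, significance threshold P < 0.05 as in the paper.
t_stat, p_val = stats.ttest_ind(aut, ctl)

# ROC analysis: autistic = 1, control = 0; lower concentrations predict autism,
# so the negated concentration serves as the classification score.
labels = np.r_[np.ones(aut.size), np.zeros(ctl.size)]
scores = -np.r_[aut, ctl]
auc = roc_auc_score(labels, scores)
fpr, tpr, thresholds = roc_curve(labels, scores)

# One common way to pick a cutoff and report its sensitivity/specificity (Youden's J).
best = np.argmax(tpr - fpr)
sens, spec, cutoff = tpr[best], 1 - fpr[best], -thresholds[best]
print(f"t = {t_stat:.2f}, p = {p_val:.4f}, AUC = {auc:.2f}, "
      f"sensitivity = {sens:.2f}, specificity = {spec:.2f}, cutoff ≈ {cutoff:.0f} pg/mL")
```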
Table 1 presents plasma concentrations of Aβ (1-40), Aβ (1-42), and Aβ (40/42) ratio. Compared to age-matched control subjects, autistic children exhibited significantly lower plasma concentrations of Aβ (1-40) and Aβ (1-42) (P < 0.05) and non-significant lower Aβ (40/42) ratio (P = 0.168). Figure 1 illustrates the mean values of the measured Aβ peptides. The figures clearly show that overlap in the distributed values around the means of the autistic and control groups was seen in the concentrations of Aβ (1-40). This overlap could be due to the fact that the individual data set within each group was dispersed or spread out around the means.
Table 2 and Figure 2 show the Pearson correlations between the three measured variables.
Table 3 and Figure 3 show the results of ROC analysis: the area under the curve (AUC) and the specificity and sensitivity of Aβ (1-40), Aβ (1-42), and Aβ (40/42).
In general, Aβ (1-40) is less neurotoxic, less common in the neuritic plaques of AD, and less likely to be involved in the neuropathology of AD than Aβ (1-42). However, Aβ (1-42) is more difficult to study than Aβ (1-40) because of polymerization. As is the case for any peptide, the concentrations of Aβ are a balance between its rate of synthesis and its rate of degradation . Moreover, it has been reported that the concentrations of Aβ in brain and blood are in equilibrium, through the blood-brain-barrier (BBB), and that peripheral sequestration of Aβ may shift this equilibrium toward the blood, eventually drawing out the excess from the brain ("sink" effect) .
In the present study, the concentrations of both Aβ (1-40) and Aβ (1-42) were lower in autistic children than in age-matched control subjects (Table 1, Figure 1). This finding could be attributed to loss of Aβ equilibrium between the brain and blood, which may lead to the failure to draw out Aβ from the brain, i.e., increased blood-to-brain influx and decreased brain-to-blood efflux across the BBB. The observed low plasma concentrations of Aβ (1-40) and (1-42) in the autistic Saudi children, together with the lipopolysaccharide (LPS) hypothesis of Jaeger et al., could be easily supported by the findings of many studies showing that children with autism have an overload of gram-negative bacteria that contain LPS as a causative agent of mitochondrial dysfunction, a biochemical aspect recorded in a high percentage of autistic patients [16, 29–32].
Proteolytic cleavage of amyloid precursor protein (APP) by the sequential actions of β- and γ-secretases forms the neurotoxic Aβ peptide, which typically consists of 40 or 42 amino acid residues (the amyloidogenic pathway). This suggests possible impairment of the levels and/or kinetics of β- and γ-secretases in the autistic patients showing lower plasma concentrations of Aβ (1-40) and Aβ (1-42). This suggestion could be supported by the work of Sokol et al. and Bailey et al., who reported higher plasma concentrations of secreted APPα in autistic patients than in age-matched control subjects and who recommended measuring sAPP-α concentrations in serum and human umbilical cord blood as a potential tool for the early diagnosis of autism.
The pathogenesis of many neurological disorders is also believed to be associated with oxidative stress, which may be responsible for the dysfunction or death of neurons. Aβ can serve as a metalloenzyme to catalyze the generation of neurotoxic H2O2 from O2 through binding and reduction of Cu (II). Fang et al. (2010) reported that the oligomer and fibril forms of Aβ (1-42) can promote the generation of H2O2 when the concentration of co-incubated Cu (II) is below a critical level, and that the amount of TBARS reactivity generated by Aβ (1-42) is far greater than that generated by Aβ (1-40).
Under normal physiological conditions, SOD1 is known to increase cellular resistance to oxidative stress. However, when the SOD enzyme is overexpressed at levels that are much higher than those of other antioxidant enzymes, such as GPx and CAT, or higher than the ability of cells to supply reducing equivalents, increased oxidative stress is observed. Oxidative damage is likely because of the generation of •OH from the interaction of accumulating H2O2 with redox cycling proteins via Fenton-like chemistry. The lower Aβ (1-42) and Aβ (1-40) plasma concentrations reported in the present study, together with the proposed higher brain concentrations of both peptides, could be easily related to the findings of previous reports by Al-Gadani et al. (2009), which demonstrated that autistic Saudi children are under H2O2 stress because of overexpression of SOD and normal CAT activity.
Recent evidence suggests that the low-density lipoprotein receptor-related protein 1 (LRP1) transcytoses Aβ out of the brain across the blood-brain barrier (BBB) . Deane et al. reported that in RAP knockout mice the expression of LRP-1 is reduced in the brain and that Aβ (1-40) elimination from the brain to blood is also reduced. These findings provide evidence for a direct protein-protein interaction between LRP and Aβ and demonstrate that this interaction takes place in an isoform-specific manner. This finding shows that Aβ isoforms are differentially transcytosed or endocytosed through the BBB and that LRP at the BBB favors the clearance of Aβ isoforms relative to high β sheet content.
Recently, Gu et al. reported that exposure to lead (Pb2+) increases the concentrations of Aβ in the brain and inhibits LRP1 expression; this finding could explain the suggested Aβ accumulation in the brains of the autistic Saudi children in the present study. This explanation could find support in the work of El-Ansary et al., who found that Pb2+ concentrations were significantly higher in the red blood cells (RBC) of 12 of 14 autistic Saudi children than in those of control subjects; this finding indicates that autistic children are more vulnerable to Pb2+ toxicity and hence are more likely to accumulate Aβ (1-40) and (1-42) in their brains. This is also consistent with the lower Aβ 40/42 ratios recorded in autistic patients compared to control subjects in the present study. It is well known that clearance and transport from brain to blood is facilitated by an increased Aβ 40/42 ratio present at young ages. Moreover, a young mouse model harboring a mutation favoring generation of Aβ 1-42 over Aβ 1-40, and therefore having a low Aβ 40/42 ratio, was shifted toward plaque deposition.
Our speculative explanation could find support in the recent experimental study of Frackowiak et al., in which immunoblotting showed that frozen autopsy brain samples from 9 autistic patients accumulate Aβ 40 and 42 in the cerebellum and cortex. Moreover, the proposed association between chronic Pb toxicity, previously recorded in 15/15 autistic patients in Saudi Arabia, and the speculated Aβ accumulation of the present study is in good agreement with the findings of Calderón-Garcidueñas et al. [47, 48], which show that children's exposure to urban air pollution increases their risk of auditory and vestibular impairment through the accumulation of Aβ 42 in their brainstems. To better understand changes in Aβ production, accumulation, and clearance in autistic patients, it will be necessary to continue studying the normal and disease-related metabolism of Aβ in various body fluids and in the brains of rodents used in animal models of autism.
Nutrition plays a vital role in the methylation of DNA, specifically the homocysteine (HCY)/S-adenosylmethionine (SAM) cycle. This cycle requires the presence of folate and B12, which facilitate the conversion of HCY to methionine, which is then converted to SAM. SAM then serves as a source of methyl groups for multiple methylation reactions, including the methylation of DNA. The increased concentrations of Aβ in the brains of autistic Saudi children could be easily explained by the hypothesis recently proposed by Lahiri and Maloney . They proposed that most AD cases follow an etiology based on Latent Early-life Associated Regulation or "LEARn" as a two-hit model [50, 51]. They reported that exposure to metals, nutritional imbalance (low B12), and other environmental stressors modify potential expression levels of AD-associated genes (e.g., Aβ peptide precursor protein) in a latent fashion. Autistic patients are known to exhibit oxidative stress , high RBC lead concentrations , and impaired DNA methylation because of a remarkably lower concentration of S-adenosylmethionine (SAM) . On the basis of this information, the two-hit hypothesis of Lahiri and Maloney could explain the impaired Aβ concentrations in the plasma of autistic Saudi children, as reported in the present study.
The Pearson correlations presented in Table 2 and Figure 2 show that while there was only an acceptable level of correlation between Aβ (1-40) and Aβ (1-42) (correlation coefficient less than 0.5), a very good level of association was found between Aβ (1-40) and the Aβ (40/42) ratio (correlation coefficient of 0.859). This suggests that lower values of Aβ (1-40) and of the Aβ (40/42) ratio tend to be recorded together in a patient diagnosed as autistic, whereas an association between Aβ (1-40) and Aβ (1-42) will not necessarily be present.
Table 3 and Figure 3 illustrate the results of the ROC analyses of the measured Aβ peptides and their ratio. Although the Aβ 40/42 ratio showed low sensitivity and specificity, the absolute values of Aβ (1-42) and Aβ (1-40) showed sensitivity and specificity high enough for them to be considered potential biomarkers for autism.
Abbreviations
APP: amyloid precursor protein
ASDs: autism spectrum disorders
BBB: blood-brain barrier
CNS: central nervous system
LRP1: low-density lipoprotein receptor-related protein 1
MIF-1: melanocyte stimulating hormone release inhibiting factor number 1
RBC: red blood cells
ROC: receiver operating characteristics curve
ROS: reactive oxygen species
Tyr-MIF-1: tyrosine melanocyte stimulating hormone release inhibiting factor number 1
World Health Organization: The ICD-10 Classification of Mental and Behavioural Disorders (ICD-10), WHO, Geneva. 1992
Rapin I: The autistic spectrum disorders. The New England Journal of Medicine. 2002, 347: 302-03. 10.1056/NEJMp020062.
Fombonne E: Epidemiological trends in rates of autism. Molecular Psychiatry. 2002, 7: S4-S6. 10.1038/sj.mp.4001162.
King S, Laplante DP: The effects of prenatal maternal stress on children's cognitive development: Project Ice Storm. Stress. 2005, 8: 35-45. 10.1080/10253890500108391.
King S, Mancini-Marie A, Brunet A, Walker E, Meaney MJ, Laplante DP: Prenatal maternal stress from a naturaldisaster predicts dermatoglyphic asymmetry in humans. Development and Psychopathology. 2009, 21: 343-53. 10.1017/S0954579409000364.
Laplante DP, Barr RG, Brunet A, Galbaud du Fort G, Meaney ML, Saucier JF: Stress during pregnancy affects general intellectual and language functioning in human toddlers. Pediatric Research. 2004, 56: 400-410. 10.1203/01.PDR.0000136281.34035.44.
Laplante DP, Brunet A, Schmitz N, Ciampi A, King S: Project Ice Storm: prenatal maternal stress affects cognitive and linguistic functioning in 5 1/2-year-old children. Journal of the American Academy of Child & Adolescent Psychiatry. 2008, 47: 1063-72. 10.1097/CHI.0b013e31817eec80.
Beversdorf DQ, Manning SE, Hillier A, Anderson SL, Nordgren RE, Walters SE: Timing of prenatal stressors and autism. Journal of Autism and Developmental Disorders. 2005, 35: 471-8. 10.1007/s10803-005-5037-8.
Kinney DK, Miller AM, Crowley DJ, Huang E, Gerber E: Autism prevalence following prenatal exposure to hurricanes and tropical storms in Louisiana. Journal of Autism and Developmental Disorders. 2008, 38: 481-8. 10.1007/s10803-007-0414-0.
Kinney DK, Munir KM, Crowley DJ, Miller AM: Prenatal stress and risk for autism. Neuroscience & Biobehavioral Reviews. 2008, 32: 1519-32. 10.1016/j.neubiorev.2008.06.004.
Gutmann B, Hutter-Paier B, Skofitsch G, Windisch M, Gmeinbauer R: In vitro models of brain ischemia: the peptidergic drug cerebrolysin protects cultured chick cortical neurons from cell death. Neurotoxicology Research. 2002, 4: 59-65.
Sogut S, Zoroğlu SS, Ozyurt H, Yilmaz HR, Ozuğurlu F, Sivasli E: Changes in nitric oxide levels and antioxidant enzyme activities may have a role in the pathophysiological mechanisms involved in autism. Clinica Chimica Acta. 2003, 331: 111-117. 10.1016/S0009-8981(03)00119-0.
Zoroglu SS, Armutcu F, Ozen S, Gurel A, Sivasli E, Yetkin O, Meram I: Increased oxidative stress and altered activities of erythrocyte free radical scavenging enzymes in autism. European Archives of Psychiatry and Clinical Neuroscience. 2004, 254: 143-147.
Chauhan A, Chauhan V, Brown WT, Cohen I: Oxidative stress in autism: increased lipid peroxidation and reduced serum levels of ceruloplasmin and transferring--the antioxidant proteins. Life Science. 2004, 75: 2539-49. 10.1016/j.lfs.2004.04.038.
Geier DA, Kern JK, Garver CR, Adams JB, Audhya T: Biomarkers of environmental toxicity and susceptibility in autism. Journal of Neurological Science. 2009, 280: 101-108. 10.1016/j.jns.2008.08.021.
Al-Gadani Y, El-Ansary A, Attas O, Al-Ayadhi L: Metabolic biomarkers related to oxidative stress and antioxidant status in Saudi autistic children. Clinical Biochemistry. 2009, 42: 1032-1040. 10.1016/j.clinbiochem.2009.03.011.
Katzman R, Saitoh T: Advances in Alzheimer's disease. FASEB J. 1991, 4: 278-286.
Jarrett JT, Berger EP, Lansbury PT: The C-terminus of the beta protein is critical in amyloidogenesis. Annals of the New York Academy of Sciences. 1993, 695: 144-8. 10.1111/j.1749-6632.1993.tb23043.x.
Weller RO, Massey A, Newman TA, Hutchings M, Kuo YM, Roher AE: Cerebral amyloid angiopathy: amyloid beta accumulates in putative interstitial fluid drainage pathways in Alzheimer's disease. American Journal of Pathology. 1998, 153: 725-33. 10.1016/S0002-9440(10)65616-7.
Bruce-Keller AJ, Begley JG, Fu W, Butterfield DA, Bredesen DE, Hutchins JB, Hensley K, Mattson MP: Bcl-2 protects isolated plasma and mitochondrial membranes against lipid peroxidation induced by hydrogen peroxide and amyloid β-peptide. Journal of Neurochemistry. 1998, 70: 31-9.
Butterfield DA, Drake J, Pocernich C, Castegna A: Evidence of oxidative damage in Alzheimer's disease brain: central role of amyloid beta-peptide. Trends in Molecular Medicine. 2001, 7: 548-54. 10.1016/S1471-4914(01)02173-6.
Reich EE, Markesbery WR, Roberts LJ, Swift LL, Morrow JD, Montine TJ: Brain regional quantification of F-ring and D-/E-ring isoprostanes and neuroprostanes in Alzheimer's disease. American Journal of Pathology. 2001, 158: 293-7. 10.1016/S0002-9440(10)63968-5.
Butterfield DA, Lauderback CM: Lipid peroxidation and protein oxidation in Alzheimer's disease brain: potential causes and consequences involving amyloid β-peptide-associated free radical oxidative stress. Free Radical Biology & Medicine. 2002, 32: 1050-60. 10.1016/S0891-5849(02)00794-3.
Varadarajan S, Yatin S, Aksenova M, Butterfield DA: Review: Alzheimer's amyloid β-peptide-associated free radical oxidative stress and neurotoxicity. Journal of Structural Biology. 2000, 130: 184-208. 10.1006/jsbi.2000.4274.
Giedraitis V, Sundelöf J, Irizarry MC, Gårevik N, Bradley H, Wahlund LO: The normal equilibrium between CSF and plasma amyloid beta levels is disrupted in Alzheimer's disease. Neuroscience Letters. 2007, 427: 127-131. 10.1016/j.neulet.2007.09.023.
Sambamurti K, Greigh NH, Lahiri DK: Advances in the cellular and molecular biology of the beta-amyloid protein in Alzheimer's disease. Neuromolecular Medicine. 2002, 1: 1-31. 10.1385/NMM:1:1:1.
Matsuoka Y, Saito M, LaFrancois J, Gaynor K, Olm V, Wang L: Novel therapeutic approach for the treatment of Alzheimer's disease by peripheral administration of agents with an affinity to beta-amyloid. Journal of Neuroscience. 2003, 23: 29-33.
Jaeger LB, Dohgu S, Sultana R, Lynch JL, Owen JB, Erickson MA: Lipopolysaccharide alters the blood-brain barrier transport of amyloid β protein: A mechanism for inflammation in the progression of Alzheimer's disease. Brain Behavior and Immunity. 2009, 23: 507-17. 10.1016/j.bbi.2009.01.017.
Oliveira G, Diogo L, Grazina M, Garcia P, Ataíde A, Marques C: Mitochondrial dysfunction in autism spectrum disorders: a population-based study. Developmental Medicine & Child Neurology. 2005, 47: 185-9. 10.1017/S0012162205000332.
Al-Mosalem OA, El-Ansary A, Attas O, Al-Ayadhi L: Metabolic biomarkers related to energy metabolism in Saudi autistic children. Clinical Biochemistry. 2009, 42: 949-957. 10.1016/j.clinbiochem.2009.04.006.
El-Ansary A, Al-Daihan S, Al-Dbass A, Al-Ayadhi L: Measurement of selected ions related to oxidative stress and energy metabolism in Saudi autistic children. Clinical Biochemistry. 2010, 43: 63-70. 10.1016/j.clinbiochem.2009.09.008.
Palmieri L, Persico AM: Mitochondrial dysfunction in autism spectrum disorders: Cause or effect?. Biochimica et Biophysica Acta. 2010, 1797: 1130-37. 10.1016/j.bbabio.2010.04.018.
Sokol DK, Chen D, Farlow MR, Dunn DW, Maloney B, Zimmer JA: High levels of Alzheimer beta-amyloid precursor protein (APP) in children with severely autistic behavior and aggression. Journal of Child Neurology. 2006, 21: 444-9.
Bailey AR, Giunta BN, Obregon D, Nikolic WV, Tian J, Sanberg CD: Peripheral biomarkers in Autism: secreted amyloid precursor protein-α as a probable key player in early diagnosis. International Journal of Clinical and Experimental Medicine. 2008, 1: 338-44.
Opazo C, Huang X, Cherny RA, Moir RD, Roher AE, White AR: Metalloenzyme-like activity of Alzheimer's disease beta-amyloid: Cu-dependent catalytic conversion of dopamine, cholesterol, and biological reducing agents to neurotoxic H2O2. Journal of Biological Chemistry. 2002, 277: 40302-8.
Fang CL, Wu WH, Liu Q, Sun X, Ma Y, Li YM: Dual functions of β-amyloid oligomer and fibril in Cu(II)-induced H2O2 production. Regulatory Peptides. 2010, 163: 1-6. 10.1016/j.regpep.2010.05.001.
Atwood CS, Obrenovich ME, Liu T, Chan HC, Perry G, Smith MA, Martins RN: Amyloid-β: a chameleon walking in two worlds: a review of the trophic and toxic properties of amyloid-β. Brain Research Reviews. 2003, 43: 1-16. 10.1016/S0165-0173(03)00174-7.
Amstad P, Peskin A, Shah G, Mirault ME, Moret R, Zbinden I: The balance between Cu, Zn-superoxide dismutase and catalase affects the sensitivity of mouse epidermal cells to oxidative stress. Biochemistry. 1991, 30: 9305-9313. 10.1021/bi00102a024.
Zhong S, Wu K, Black IB, Schaar DG: Characterization of the genomic structure of the mouse APLP1 gene. Genomics. 1996, 32: 159-162. 10.1006/geno.1996.0096.
Sato K, Akaike T, Kohno M, Ando M, Maeda H: Hydroxyl radical production by H2O2 plus Cu,Zn-superoxide dismutase reflects the activity of free copper released from the damaged enzyme. Journal of Biological Chemistry. 1992, 267: 25371-77.
Pflanzner T, Janko MC, André-Dohmen B, Reuss S, Weggen S, Roebroek AJM: LRP1 mediates bidirectional transcytosis of amyloid-β across the blood-brain barrier. Neurobiology of Aging. Corrected Proof, Available online 13 July 2010,
Deane R, Wu Z, Sagare A, Davis J, Yan SD, Hamm K, Xu F, Parisi M, LaRue B, Hu HW, Spijkers P, Guo H, Song X, Lenting PJ, Van Nostrand WE, Zlokovic BV: LRP/amyloid β-peptide interaction mediates differential brain efflux of Aβ isoforms. Neuron. 2004, 43: 333-44.
Gu H, Wei X, Monnot AD, Fontanilla CV, Behl M, Farlow MR: Lead exposure increases levels of β-amyloid in the brain and CSF and inhibits LRP1 expression in APP transgenic mice. Neuroscience Letters. 2011, 490: 16-20. 10.1016/j.neulet.2010.12.017.
Fryer JD, Simmons K, Parsadanian M, Bales KR, Paul SM, Sullivan PM: Human apolipoprotein E4 alters the amyloid-beta 40:42 ratio and promotes the formation of cerebral amyloid angiopathy in an amyloid precursor protein transgenic model. J Neurosci. 2005, 25: 2803-2810. 10.1523/JNEUROSCI.5170-04.2005.
Herzig MC, Winkler DT, Burgermeister P, Pfeifer M, Kohler E, Schmidt SD: Abeta is targeted to the vasculature in a mouse model of hereditary cerebral hemorrhage with amyloidosis. Nat Neurosci. 2004, 7: 954-960. 10.1038/nn1302.
Frackowiak J, Mazur-Kolecka B, Kuchna I, Nowicki K, Brown WT, Wegiel J: Accumulation of Amyloid-Beta Peptide Species In Four Brain Structures In Children with Autism. International Meeting for Autism Research. 2011, Manchester Grand Hyatt San Diego, California
Calderón-Garcidueñas L, Solt A, Henríquez-Roldán C, Torres-Jardón R, Nuse B, Herritt L: Long-term air pollution exposure is associated with neuroinflammation, an altered innate immune response, disruption of the blood-brain-barrier, ultrafine particle deposition, and accumulation of amyloid beta 42 and alpha synuclein in children and young adults. Toxicol Pathol. 2008, 36: 289-310. 10.1177/0192623307313011.
Calderón-Garcidueñas L, D'Angiulli A, Kulesza RJ, Torres-Jardón R, Osnaya N, Romero L: Air pollution is associated with brainstem auditory nuclei pathology and delayed brainstem auditory evoked potentials. Int J Devl Neuroscience. 2011, 29: 365-375. 10.1016/j.ijdevneu.2011.03.007.
Lahiri DK, Maloney B: The ''LEARn" (Latent Early-life Associated Regulation) model integrates environmental risk factors and the developmental basis of Alzheimer's disease, and proposes remedial steps. Experimental Gerontology. 2010, 45: 291-296. 10.1016/j.exger.2010.01.001.
Lahiri DK, Maloney B, Zawia NH: The LEARn model: an epigenetic explanation for idiopathic neurobiological diseases. Molecular Psychiatry. 2009, 14: 992-1003. 10.1038/mp.2009.82.
Lahiri DK, Zawia NH, Greig NH, Sambamurti K, Maloney B: Early-life events may trigger biochemical pathways for Alzheimer's disease: the ''LEARn" model. Biogerontology. 2008, 9: 375-379. 10.1007/s10522-008-9162-6.
Deth R, Muratore C, Benzecry J, Power-Charnitsky VA, Waly M: How environmental and genetic factors combine to cause autism: A redox/methylation hypothesis. NeuroToxicology. 2008, 29: 190-201. 10.1016/j.neuro.2007.09.010.
The authors would like to thank the Shaik AL-Amodi Autism Research Chair, NPST - Medical Centers, and the parents of the autistic children, without whom this work would not have been possible. This work was supported by King Abdul Aziz City for Science and Technology (KACST).
The authors declare that they have no competing interests.
AE designed the study and drafted the manuscript. ABB helped to draft the manuscript and performed the statistical analysis. MO helped with the English polishing. LA provided samples and participated in the design of the study. All authors have read and approved the final manuscript.
Cite this article
Al-Ayadhi, L.Y., Ben Bacha, A.G., Kotb, M. et al. A novel study on amyloid β peptide 40, 42 and 40/42 ratio in Saudi autistics. Behav Brain Funct 8, 4 (2012) doi:10.1186/1744-9081-8-4
- Amyloid beta
- Brain influx
- Cognitive disability
What is ergonomics?
Ergonomics can be simply defined as the study of work, but more precisely, as the science of designing the job to fit the worker, rather than physically forcing the worker’s body to fit the job.
When companies adapt tasks, workstations, and equipment to the worker, they can reduce physical stress and eliminate potentially serious, or even disabling, work-related musculoskeletal disorders (MSDs).
Why is ergonomics important?
Industries are increasing their production rates; as a result, more workers at manufacturing companies are executing tasks that include actions such as:
- Frequent lifting, carrying, and pushing or pulling loads without help from other workers or devices;
- Working in production lines that requires fast and repetitive movements;
- Working more than eight hours per day;
- Having tighter grips when using tools;
These actions involve forced postures and repetitive movements that place stress on the body and may result in injuries called musculoskeletal disorders (MSDs). The most common MSDs are:
- Upper limb: rotator cuff tendonitis, radial tunnel syndrome, epicondylitis, others.
- Back injuries: herniated disk, lower back pain.
- Lower limb: prepatellar bursitis (kneecap inflammation)
Benefits of ergonomics in the workplace
Some of the benefits of ergonomics are:
- Increased production rates;
- Greater job satisfaction;
- Less stressful work environments;
- Reduced physical discomfort for workers;
- Simplified tasks;
- Better knowledge of the aspects that improve work environments.
Costa Rica has the following legislation addressing ergonomics in the workplace:
- General Regulations on Occupational Safety and Health, article 83 (furniture);
- Agreement No. 2291-2015 Ordinary session of the Occupational Health Council N º1846-2015 (area and volume, hallways); and
- Construction Safety Regulation, article 18 (manual handling).
In addition, there are the following reference standards:
- INTE ISO 6385-2016 Principles of design
- INTE ISO 7730-2016 Thermal comfort
- INTE ISO 9241-1-2016 Data visualization
- INTE ISO 11064-4-2016 Distribution and dimension
- INTE ISO 11228-1-2016 Lifting and transport
Accident rate in Costa Rica
According to the Occupational Health Council, in 2014, 15% of all reported accidents were associated with physical overload risks; and in 2015, the rate increased to 18%.
Ergonomic Evaluation Tools
Futuris uses internationally recognized tools that assess the risk factors of each job position and its working conditions.
Some of these tools are explained as follows:
RULA (Rapid Upper Limb Assessment) was developed to evaluate the exposure of individual workers to ergonomic risk factors associated with upper-extremity MSDs. The RULA assessment tool considers the biomechanical and postural load that job tasks place on the neck, trunk and upper extremities.
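As a rough illustration of how the output of such an assessment is used, the short sketch below maps a final RULA "grand score" (1 to 7) to its published action levels. The function name and wording are our own illustrative assumptions, not part of RULA itself or of Futuris's actual tooling.

```python
# Illustrative sketch only: maps a final RULA "grand score" to its published
# action level. The helper name and wording are assumptions for illustration.

def rula_action_level(grand_score: int) -> str:
    """Return the recommended action for a RULA grand score (1-7)."""
    if not 1 <= grand_score <= 7:
        raise ValueError("RULA grand scores range from 1 to 7")
    if grand_score <= 2:
        return "Acceptable posture if not maintained or repeated for long periods"
    if grand_score <= 4:
        return "Further investigation needed; changes may be required"
    if grand_score <= 6:
        return "Investigation and changes required soon"
    return "Investigation and changes required immediately"

if __name__ == "__main__":
    for score in (2, 4, 7):
        print(score, "->", rula_action_level(score))
```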
The MAC (Manual Handling Assessment Charts) tool was developed to help the user identify high-risk workplace manual handling activities and can also be used to assess the risks associated with load lifting, carrying and team manual handling activities.
The Assessment of Repetitive Tasks is a tool designed to assess repetitive tasks involving the upper limbs. It assesses some of the common risk factors in repetitive work that contribute to the development of upper limb disorders.
It is a guide created by the Health and Safety Executive of the United Kingdom that shows how to ensure the safety and suitability of workplace seating. It is addressed to those responsible for health and safety departments and it is also useful as a guide for employees, manufacturers, designers, suppliers, and users of industrial and office furniture.
It is a checklist created by the Occupational Safety & Health Administration (OSHA). It can help to create a safe and comfortable workstation. This tool assesses working postures, seating, monitor, working areas and accessories.
Our ergonomic projects
The Futuris team has carried out more than 500 ergonomic assessments using the tools described above.
The table below shows a summary of the assessments carried out by Futuris.
| Client | Industry | Project description |
| --- | --- | --- |
| ICE | Power & Utilities | Ergonomic risk assessment for the operations of three hydroelectric plants and designing of control measures. |
| Allergan Medical Products | Pharmaceutical & Health Care | Ergonomic risk assessment (Job Safety Analysis) at approximately 30 workstations associated with manual operations for the manufacture of medical devices using MAC and ART tools. Designing of an action plan and support in the selection of engineering measures along with the site staff through an Ergonomic Committee. |
| Coca Cola | Food & Drink | Ergonomic risk assessment in 10 workstations of the Coca Cola Concentrate Plant through the application of 15 MAC evaluations (evaluation of manual handling), 46 ART evaluations (evaluation of repetitive movements) and 44 biomechanical evaluations. Priorities were identified, additional infrastructure controls were proposed, and staff trained. |
| Migración | Services | Ergonomic evaluation for eight immigration offices across the country and a total of 228 workers; 225 evaluations of computer stations, 15 MAC evaluations (evaluation of manual handling), 27 ART evaluations (evaluation of repetitive movements) and 228 anthropometric evaluations were applied. The main risk factors included the poor design of some workstations, incorrect use of equipment, inadequate postures and manual handling of loads in places with obstacles such as doors and uneven floors. Improvements were proposed in infrastructure, training requirements and the creation of an order and cleaning program. |
| GlaxoSmithKline | Services | Ergonomic risk assessments for the offices in shared spaces. |
| Fiserv | Services | Ergonomic risk assessments for the offices in one of their facilities. |
| Hologic Surgical Products | Pharmaceutical & Health Care | Risk assessment in health, safety and ergonomics for 252 production job positions. A tool was developed that combined the results of ergonomic evaluations (through ART and MAC) and the analysis of other health and safety hazards. In addition, a rotation plan was developed for each production line in which positions with low residual ergonomic risk rotated with others of high ergonomic risk, in order to decrease the risk level. |
| Banco Popular | Services | Risk assessment in health, safety, and ergonomics for over 50 administrative job positions. |
| Philips Volcano | Pharmaceutical & Health Care | Ergonomic risk assessment for 80 production job positions (through RULA, ART, and MAC) on a medical device manufacturing plant. In addition, a rotation plan was developed for each production line in which positions with low residual ergonomic risk rotated with others of high ergonomic risk, in order to decrease the risk level. |
- 0.1 Frequently Used Terms
- 0.2 Terms We Use Often On Our Website
- 1 A
- 2 B
- 3 C
- 4 D
- 5 E
- 6 F
- 7 G
- 8 H
- 9 I
- 10 J
- 11 K
- 12 L
- 13 M
- 14 N
- 15 O
- 16 P
- 17 Q
- 18 R
- 19 S
- 20 T
- 21 U
- 22 V
- 23 W
- 24 X
- 25 Y
- 26 Z
Terms We Use Often On Our Website
And What They Mean
Terms/Definitions Listed In Alphabetical Order
Words That Are Underlined And Bolded Also Have Their Definitions On This Page
- 2-DIMENSIONAL (2D): An object that is essentially flat. You normally are only able to see and interact with one side of a 2D object at a time; an example would be a sheet of paper on a desk. The object represents itself on the X and Y Cartesian Coordinates. A 2D object will have 2 of the 3 Cartesian Coordinates, never all 3.
- 3-DIMENSIONAL (3D): An object with multiple sides that you can see and interact with at once; an example would be a ball. The object represents itself on the X, Y, and Z Cartesian coördinates.
- 3D MODEL: A 3D Model is the representation of your object within a Design File. You use CAD/CAM software to design/model your 3D Object.
- ACRYLIC: A See-through/transparent Plastic often used for the Frame of 3D Printers.
- ADDITIVE MANUFACTURING: This method of manufacturing is quickly becoming synonymous with 3D Printing. Additive Manufacturing starts off with the smallest amount/unit of the manufacturing material and continues to add that material in layers until the object is created; think of pouring batter in layers when baking a cake. It is also known as Controlled Material Addition. A 3D Printer uses a model/design file to create a 3D object. It can be likened to an architect (the Design File) providing the blueprint for the structure to the builder (the 3D Printer). Another primary manufacturing method is the Subtractive Manufacturing process, primarily used by CNC machines.
- AUTOMATIC CALIBRATION: For a 3D Printer, Automatic Calibration is simply the Printer saving you the manual process of centering the Print Head, and any other adjustment, over the Print Bed before each 3D Printing project. 3D Printers normally accomplish this using sensors that tell the Printer where the Print Head is along the Axes of the Printer. This is a highly desired feature, especially one we recommend to those who are new to 3D Printing. Some 3D Printers include a feature known as Automatic Material Recognition but simply list it as part of Automatic Calibration. Automatic Material Recognition (AMR) is the ability of a 3D Printer to sense the type of Filament that has been loaded and set the Print Temperature of the Extruder accordingly. An example of a 3D Printer that uses AMR is the Robox.
- BLUE PAINTERS TAPE: A type of Masking Tape, also known as Sticky Tape, that is one of the preferred materials for covering your Print Bed when using certain Filaments. One example of a Filament that recommends the use of Blue Painters Tape in order for your project to stick to your Print Bed is the Taulman/Nylon 618 Filament. The tape is inexpensive and can normally be found at your local hardware store.
- BUBBLING: A term we use to mean that visible bubbles show up in your Printed object due to a contaminated or improper use of a Filament. This mostly happens due to your Filament getting wet (if it easily soaks in moisture; example Taulman/Nylon 618 Filament), Printing at the wrong recommended temperature setting for your Filament (recommended temperature range/setting for each filament provided in the 3D Printing Filaments section), or buying/making a poor quality Filament.
- BUILD VOLUME: Measured in length, width and height; this is the maximum size of an object that your 3D Printer can Print. To calculate your total Build Volume you simply multiply the maximum length, width and height values, which are usually measured in inches. For example, a build volume of 16″ by 16″ by 9″ is 2,304 in³. If your Printer's Build Volume information is listed in centimeters or another unit of measurement, you can use a free online tool like this one to convert if necessary.
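If you prefer to see the multiplication spelled out, here is a minimal sketch of the calculation described above; the figures are just the example values from this definition, not the specifications of any particular Printer.

```python
# Minimal sketch of the Build Volume calculation described above.
# Units are whatever you measure in (inches here); the numbers are the
# example values from the definition, not the specs of any real printer.

def build_volume(length: float, width: float, height: float) -> float:
    """Return the build volume as length x width x height."""
    return length * width * height

cubic_inches = build_volume(16, 16, 9)      # 2304.0 cubic inches
cubic_cm = cubic_inches * (2.54 ** 3)       # convert in^3 to cm^3
print(f"{cubic_inches:.0f} in^3 = {cubic_cm:.0f} cm^3")
```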
AXES/CARTESIAN COORDINATES: Also known as the Cartesian Coordinate System; the dimensions/sides of an object can be represented in terms of X, Y, and Z. X represents the left-to-right (width) of the object. The Y represents the front-to-back (length) of an object. The Z represents the top-to-bottom (height) of an object. These coördinates represent the AXES OF A 3D Printer.
- CAD/CAM SOFTWARE: CAM (Computer Aided Manufacturing) is an element that falls under CAD (Computer Aided Design). CAM is simply the use of computer software to operate your manufacturing equipment and assist you in the manufacturing process. CAD software is used to design/create the template for what you want to manufacture. CAD software also incorporates CAE (Computer Aided Engineering); CAE is what checks to make sure that your design is structurally sound and will perform what it is designed to do. Most CAD packages incorporate CAE and CAM, so when dealing with the design of an object and 3D Printers you will almost always see it referred to only as CAD; although the software that comes with most 3D Printers is only CAM or CAM/CAE software. These programs allow you to 3D Print from an already created CAD File, leaving you to use another program to design your object first. The File that CAD software creates is often referred to as a Design File.
- CARBON FIBER: Carbon Fiber is an incredibly strong and lightweight Polymer. Carbon Fiber can be recycled and reused. It is a Composite, meaning it is composed of multiple materials; Carbon Fiber is commonly a combination of various types of metals, glass, and Resins. Some 3D Printers, such as the Mark One, can 3D Print with Carbon Fiber and other Composites.
- CHEMICAL ETCHING: Chemical Etching is the use of both Electroless Plating and Electroplating in the construction of a PCB. Electroless Plating uses chemicals to treat the board while Electroplating binds/adds the Conductive material (the metal which is usually Copper) by adding an electric current.
CIRCUIT/ELECTRICAL CIRCUIT/ELECTRICAL NETWORK: A Circuit is the interconnection of electrical components in order to accomplish Conductivity, usually so that an electrical device (even something as simple as a light bulb) can be powered.
- CLAY: Clay is primarily a combination of earth (dirt), minerals (including trace amounts of various metals), and water. Some 3D Printers use Clay as Filament to create Clay objects. Some Printers, such as the Mini Metal Maker, even "treat" the Clay to transform it into a metal object!
CMOS SENSOR: A CMOS (Complementary metal–oxide–semiconductor) Sensor is a type of sensor that is used by most 3D Scanners in order to sense light and capture the image of an object. A CMOS Sensor is sometimes referred to as an Active-Pixel Sensor. You have the Sensor/Pixel Sensor itself, which detects light, and the circuitry/computer components (made with the CMOS process) which actively interpret the information allowing devices (such as 3D Scanners and Digital Cameras) to convert that information into an image. The information gathered is represented in Pixels.
CFF (COMPOSITE FILAMENT FABRICATION): Similar to FDM, this seems to be a term coined by the Makers of the Mark One 3D Printer to describe a 3D Printer that can use Composites (such as Carbon Fiber and Fiberglass) to make 3D objects.
- COMPUTER AIDED DESIGN (CAD): is the use of computer systems to help in the creation, modification, analysis, or optimization of a design. See CAD/CAM Software definition for more information.
- COMPUTER NUMERICALLY CONTROLLED (CNC): The automation of machine tools using a computer program. CNC machines use Subtractive Manufacturing rather than the technique of Additive Manufacturing that 3D Printers use. One similarity between 3D Printers and CNC machines is that they both usually use CAD software as the program to tell the machine what to manufacture and how to manufacture it. A great example of a CNC machine is the HandiBot.
- CONDUCTIVE: A material that readily permits the flow of an electrical current (conducts electricity) is said to be Conductive. Conductive materials are critical to some 3D Printing processes and all electronics, the primary being PCB creation. Pure metals, such as Copper and Silver, are the Conductive materials used most often in PCB creation.
- CONTOUR CRAFTING: The 3D Printing of entire full-sized living structures/homes in under a day, pioneered by Doctor Behrokh Khoshnevis.
- CONTROLLER: See “The Parts Of A 3D Printer” Page
- CURING: Curing is the process of causing a chemical reaction in a Resin by applying heat. The degree to and where the heat is applied causes the Resin to harden in specific way. Stereolithographic/Photo-Activated 3D Printers use the Curing of Resins to build 3D objects. They use a process known as Light or Photo-Curing : the process includes the use of photo-sensitive Resins , called Photopolymers, that can be considered to be the 3D Printer Filaments for Stereolithographic/Photo-Activated Printers. These Resins react to the intensity and other properties of light.
- CROWD FUNDING: Crowd-Funding is a very effective strategy that is used in many aspects of business and technology. It allows a person or business to present their idea to the public and they are able to receive funding by offering perks and incentives to those that help them in funding the project. Good examples would be Kickstarter and IndieGoGo.
- DC MOTOR: A DC (Direct Current) Motor is used in place of a Stepper Motor in some 3D Printers as DC Motors offer more accuracy and power capability as the power is regulated by how much voltage is provided to the motor. An example of a 3D Printer using DC Motors is the RAPPY 3D Printer.
- DELTA DESIGN/3D Printer: Delta 3D Printers are Printers that normally use 3 Stepper Motors rather than 2 to move the Print head/Extruder along the Axes (see cartesian coördinates definition) of the 3D Printer; the third Stepper Motor normally allows a smoother and faster operation, particularly when the Printer is being heavily used. Though all 3D Printers move left-to-right, back-to-front, and top-to-bottom, Delta Printers do not dedicate a Stepper Motor to a specific axis allowing them to run more smoothly. Also, because Delta Printers move the Extruder in a circular manner rather than straight lines you save the time taken by traditional 3D Printers to move the Extruder off of your object and then back on at a different point in order to complete a Layer or start a new one. You can easily recognize a Delta Printer due to it having 3 Effector/Control arms that move the Extruder. You can find Delta Printers here.
- DO IT YOURSELF (DIY): 3D Printers that come as an un-assembled or partly assembled kit with either some or most of the assembly labor complete, leaving the rest of the assembly to make the Printer operational left to you. If offered, this will be the option for obtaining your 3D Printer with the least cost to you. Though this option can be intimidating to beginners, Makers tend to offer tutorials and support and there is usually already a decently sized community behind any 3D Printer, especially Open Source ones, that you can turn to for help.
- DYE: Is a substance, usually derived from plants, that has the ability to bond with objects, such as Printer Filaments, and stain/color them. An example of a Filament that works well with Dying is the Taulman 645 Filament.
- ELECTROLESS PLATING: Is the plating of metals without the use of an external electric charge, but rather a purely chemical reaction generates the negative charge needed for binding. This plating method is most commonly seen in PCB construction.
- ELECTROPLATING: Is the process of binding/adding a metal to a surface through the use of chemicals and an external electric current. Electroplating is commonly used in PCB construction.
- EFFECTOR: See “The Parts Of A 3D Printer” Page
- EMG/ELECTROMYOGRAPHY: “is a technique for evaluating and recording the electrical activity produced by skeletal muscles”. prosthetics, such as the Dextrus 3D Printed Robotic Hand, use this technique to provide a functional replacement for a lost limb/appendage.
- END STOP: See "The Parts Of A 3D Printer" Page
- ENGRAVING: The action of drawing/imprinting a design/pattern into a hard material/surface. Engraving is commonly performed on metals, especially jewelry. Engraving unto wood is usually referred to as carving. Engraving is seen as a Subtractive Process as it cuts away or burns off some material in order to make the design/pattern. Some 3D Printer Makers refer to Engraving as Stylus Cutting.
- ETHERNET: This is a wired connection to the internet for your 3D Printer. You can connect 3D Printers that have Ethernet to your personal or business network so that multiple people can use the Printer, rather than just yourself when attaching via USB.
- EXTRUDER/THERMOPLASTIC EXTRUDER: See “The Parts Of A 3D Printer” Page
- FABLAB: Short for Fabrication Laboratory; Fablabs are normally small-scale workshops for learning and performing digital Fabrication (using CAD software). You should be able to find Additive Manufacturing/3D Printing at any Fablab.
- FABRICATOR: 3D Printers are sometimes called personal Fabricators. In the personal 3D Printing world Fabricators are also often machines that can do more than just one type of Fabrication, such as the Fabtotum, which can 3D Print, 3D Scan, and CNC milling as well. Fabrication is simply another term for manufacturing though this term is most often used when manufacturing is performed with the use of a machine/robot.
- FIBERGLASS: Fiberglass is a Plastic Composite of a Polymer reinforced with Strands of glass. Fiberglass is known for its cost effectiveness and light weight. Some 3D Printers, such as the Mark One, can 3D Print with Fiberglass and other Composites.
- FINISHING: Also known as Surface Finishing; in relation to 3D Printers, Finishing is generally the smoothing/Buffing of the surfaces of your 3D Printed object. You can think of it as just "smoothing the rough edges". The lower the Layer height (usually represented in Microns), the lower the need for Finishing. Some 3D Printer Makers boast no need for Finishing because their Printers can achieve such a high Resolution (a very low Layer height). The most common forms of Finishing used on 3D Printed objects are Polishing/Buffing (generally used by FDM Printers) and Planarization (Chemical-mechanical Planarization is generally used by Stereolithographic Printers). Stereolithographic/Photolithographic Printers use a Planarization process called Shallow Trench Isolation (STI).
- FDM/FUSED DEPOSITION MODELING (also known as FFF/FUSED FILAMENT FABRICATION): FDM/FFF is the process that most consumer affordable 3D Printers use. FDM is a trademarked term owned by the Stratasys Corporation, while the FFF term is applied to most 3D printers as it is an "Open/Open Source"-based term, and 3D Printer Makers not associated with Stratasys should use this term when presenting their FDM-based 3D Printer to avoid any potential future legal quagmires. This process includes the design and creation of an object using CAD/CAM Software by producing a CAD/STL Design File which is then provided to your 3D Printer. A Filament, usually a Thermoplastic (Plastic), is heated/melted and then placed down in Layers until the object is built. You can discover more on how a 3D Printer uses these files to create 3D objects here.
- FOOTPRINT: On All About 3D Printing, a 3D Printers Footprint consists of, how much electricity it takes to run, how much physical space it uses (example; space on your desk), the materials it is made from (recyclable, toxic/non-toxic materials, ease to repair or replace parts).
- FRAME: See “The Parts Of A 3D Printer” Page
- FUMES: Noxious smells created by airborne particles when 3D Printing with some Filaments. For more, please see our article, "The Dangers Of 3D Printing: Part II."
- GUIDEWAY/GUIDE RODS: See "The Parts Of A 3D Printer" Page
- G-CODE (GCODE): G-code is a programming language used to instruct an automated machine, such as a 3D Printer, on how to do something. It is one of the most popular programming languages used by 3D Printing software.
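To make the idea concrete, here is a small, hedged sketch that writes a few common G-code commands to a file. The commands shown (G28, M104, G1) follow widespread RepRap/Marlin conventions, but the exact dialect your Printer accepts depends on its firmware, and the coordinates, temperature and file name below are arbitrary example values.

```python
# A small sketch that writes a few common G-code commands to a file.
# The commands follow widespread RepRap/Marlin conventions; all values
# (temperature, coordinates, file name) are arbitrary examples.

commands = [
    "G28",                   # home all axes
    "M104 S200",             # set hotend temperature to 200 C (typical for PLA)
    "G1 Z0.2 F1200",         # move the nozzle to a first-layer height of 0.2 mm
    "G1 X50 Y50 E5 F1500",   # move to X=50, Y=50 while extruding 5 mm of filament
]

with open("example.gcode", "w") as f:
    f.write("\n".join(commands) + "\n")
```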
- HIGH DEFINITION: Usually refers to the quality of a display, such as on a computer or television screen. High Definition is a display quality of 1280 Pixels by 720 Pixels. See definition for “Resolution” for more details.
- KINEMATIC COUPLING: Used by 3D Printers, such as the Mark One, Kinematic Coupling provides a high degree of accuracy when the 3D Printer calibrates parts such as the Print Bed or Extruder. Kinematics helps plot trajectories of something in motion with accuracy. Kinematically Coupled parts tend to have multiple contact areas so that the device can better sense where each part is in relation to another.
- LCD (LIQUID-CRYSTAL DISPLAY): A screen that provides you instant information that is an option, or may come standard, with some 3D Printers.
- LASER: A Laser is a device that emits light, usually accomplished by focusing the light through a special lens. Think of focusing sunlight through a magnifying glass. Lasers have many uses in the 3D Printing world. Common uses are 3D Printers using lasers to help with Print accuracy and 3D Scanners also improving their accuracy by using lasers to determine the exact dimensions of an object. Most 3D Printers and 3D Scanners use red Lasers as red lasers are the least expensive to make or purchase. Some Makers use green lasers in their devices as green Lasers are higher quality. We believe the Makers of the Robocular 3D Scanner explained why green Lasers are better at achieving higher quality very well: "CMOS color webcams have twice as many green pixel receptors as red, allowing us to capture a much higher number of clear points". Some 3D Printers can accomplish both Additive and Subtractive Manufacturing techniques by using lasers of higher intensity to actually cut physical objects the same way a blade/saw does in a CNC (see Computer Numerically Controlled) machine, but with greater accuracy.
- LAYER(S): Generically a layer is “a thickness of some material laid on or spread over a surface”. 3D Printers put down Filament in Layers, one on top of another, in order to Print your object. Thinner Layers mean higher Resolution, your object will have finer detail/higher quality appearance. Thinner Layers usually also mean more Layers, which increase the strength/durability of your Printed object.
- MAKER(S): We refer to individuals/business entity’s that build their own 3D Printers, whether it be from an open or proprietary design, as Makers. Makers are often referred to as DIY (Do It Yourself) people as Makers are often individuals or small groups of entrepreneurs developing ever improving 3D Printers and other technology.
- MAKERSPACE (HACKLAB, HACKERSPACE, HACKSPACE): Makerspaces are community-driven workshops allowing those with common intersts (usually technology related) to collaborate.
- MICRON: A Micron is a unit of measurement usually used by Makers to represent the Print Resolution their 3D Printer can achieve. Micron is short for Micrometer/Micrometre and is 0.001 millimeters (mm)/0.000039 of an inch (in). Learn more about a Micron here.
- MODEL: See article: What is a Design File?
- MODULAR: Some 3D Printers are partially or completely modular; this means that the parts can be easily swapped out/upgraded to improve speed, Build Volume, and other capabilities. Most 3D Printers are at least partially modular, allowing you to change the Extruder Head to one that can operate at higher temperatures or sometimes even add extra Heads for faster or multicolor Printing. Fully modular 3D Printers allow you to change all working components and sometimes even the Frame to accommodate a larger Print Bed/increased Build Volume.
- NYLON: Is a Thermoplastic that is similar to silk in look and feel. It is great for decorative 3D Prints. A good example of a Nylon-based material used with 3D Printers is the Taulman/Nylon 618 Printer Filament.
- OPEN SOURCE: Meaning, free access to a product’s design/blueprint, for you to change or reproduce for your own purposes. The philosophy of the communal sharing and improvement of an idea, object, and so on. Information is shared freely and patents are not involved. Usually anyone can improve upon an open source project. The RepRap project is recognized as the originator of Open Source 3D Printers.
- OUT-OF-THE-BOX/OUT-THE-BOX: 3D Printers and other devices that come to you fully assembled and ready to use right away.
- PCB BOARD/PRINTED CIRCUIT BOARD: PCB is short for Printed Circuit Board; even though it is likely a poor use of English to say PCB Board since you are effectively saying, "Printed Circuit Board Board", you will generally see it written this way as there are many aspects of PCB's and their manufacture and that term tends to be easily specified. A Circuit Board is simply a board/flat surface with pathways "drawn" into it with a Conductive material, usually Copper; Circuit Boards allow an electronic device to communicate with its various components as well as with other electronics. A general comparison is how streets are laid out in a city connecting various neighborhoods; the streets would be the Conductive material, the way they are laid out the pattern, and the neighborhoods the various components. Circuit Boards are almost always referred to as Printed Circuit Boards (PCB) due to the process by which they are made. A Printed Circuit Board involves the Surface (usually a flat material), the Conductive material (usually a pure metal such as Silver or Copper), and a Soldering machine (in our case a 3D Printer) or other device that is used to etch the pathways/required pattern on to the Surface. While PCB construction in general most often uses a Semi-additive method, most personally-affordable 3D Printers use either a purely Subtractive or Additive method. Some 3D Printers, such as the Fabtotum, make PCB's using a Subtractive Manufacturing process; the most common being PCB Milling. Some others, such as the EX, use an Additive Manufacturing process. One popular Additive PCB process is Chemical Etching.
- PCB MILLING: The Subtractive Process of removing areas of Conductive material (usually Copper or Silver) from a PCB to create the desired circuit patterns.
- PIXEL: A Pixel is simply the smallest unit of an image that a device can interpret/understand. For example, a 1080p Television screen can interpret 1920 Pixels horizontally and 1080 Pixels vertically across the screen. The greater the number of Pixels that devices such as Televisions, Photo and Video Cameras, and other devices that display and/or capture images can understand, the higher the Resolution of the image. Higher Resolution images will look sharper/clearer. You can especially see the difference when two images with a considerably different number of Pixels are displayed on a large screen or observed up close.
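The arithmetic behind the 1080p example above is simply the horizontal count times the vertical count, as this tiny sketch shows.

```python
# Quick arithmetic behind the 1080p example above: total pixels is simply
# horizontal count times vertical count.

width_px, height_px = 1920, 1080
total = width_px * height_px
print(f"{width_px} x {height_px} = {total:,} pixels (~{total / 1_000_000:.1f} megapixels)")
# 1920 x 1080 = 2,073,600 pixels (~2.1 megapixels)
```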
- PHOTOPOLYMER: Is a Polymer that changes when exposed to light. Stereolithographic/Photo-Activated 3D printers use Resins that are Photopolymers; beams of light cause the Resin to harden in certain areas allowing those Printers to build a 3D object.
- PLA-ONLY Printer: Is a 3D Printer that does not come with a heated Print Bed which is very important for Printing with ABS Filament. These 3D Printers will only work well using lower heat Filaments such as PLA. The recommended operating temperature range of PLA Filament is 180 to 220 degrees Celsius/356 to 428 degrees Fahrenheit. PLA-Only Printers will have a similar operating temperature range. Note that the operating temperature for 3D Printers refers to the temperature of the Print Nozzle/Extruder and not the temperature at which that Filament melts in general.
- PLASTIC: Plastic is a material normally created by combining natural and synthetic substances into Polymers. Plastics are moldable (generally easy to shape into an object you wish).
- PLOTTING (PLOTTER): Plotting is the use of a Plotter, a Printer integrated with a computer. A Plotter excels at producing large, High Resolution drawings quickly. At one time Plotters were used extensively in the creation of hard copies of CAD designs (blueprints).
- PLUG-AND-PLAY: When referring to 3D Printers, Plug-and-play simply means that upon connecting your 3D Printer to your computer (Usually via USB), it will automatically install/enable all necessary software so you can Print right away; no fumbling with drivers or searching the internet for extra software just to get your Printer to work. We highly recommend such Printers for those purchasing their first 3D Printer or those who will be using one for the first time.
- POINT-AND-SHOOT: Similar to digital cameras, a Point-and-Shoot 3D Scanner allows you to take images of your object simply by pointing the Scanner at the object and normally capturing an image/scanning the object through the push of one or a few buttons. Point-and-Shoot also implies that the 3D Scanner automatically chooses the best settings for you automatically while you are capturing your object making these Scanners excellent for beginners. A good example of a Point-and-Shoot Scanner is the Fuel 3D Scanner.
- POLAR COORDINATES/POLAR COORDINATE SYSTEM: The Polar Coordinate System is used by Delta Printers as opposed to the Cartesian Coordinate System used by traditional 3D Printers. Rather than being restricted to straight lines of motion, the Polar System allows for motion in-between straight line angles. This allows Delta Printers to move more smoothly than traditional 3D Printers, which results in faster Printing as well as Printing that may even be more accurate, since there are no abrupt stops during Printing which may cause unwanted vibration in the Frame of the Printer. The constant jerk exhibited by some traditional 3D Printers can even warp Frames that aren't very rigid over time; depending on how warped your Frame becomes, it will negatively affect your Print accuracy by a small amount or even drastically.
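For illustration only, the snippet below shows the standard 2D polar-to-Cartesian conversion that underlies the coordinate idea described above. Real Delta Printers use their own kinematics based on arm lengths and tower positions, so this is not any Printer's actual motion planner.

```python
# Standard 2D polar-to-Cartesian conversion, shown only to illustrate the
# coordinate idea above; not the motion planner of any real delta printer.
import math

def polar_to_cartesian(radius: float, angle_degrees: float):
    """Convert a polar point (radius, angle in degrees) to Cartesian (x, y)."""
    theta = math.radians(angle_degrees)
    return radius * math.cos(theta), radius * math.sin(theta)

x, y = polar_to_cartesian(10, 45)
print(f"r=10, angle=45 degrees -> x={x:.2f}, y={y:.2f}")   # x=7.07, y=7.07
```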
- POLYMER: Polymer is a word meaning “many parts”. Polymers are a combination of two or more materials/substances. Good examples of polymers are Plastics. Plastics are a mixture of natural and synthetic materials/substances.
- POPLAR WOOD: Poplar Wood comes from a hardwood tree that is native to Eastern North America. It is one of the preferred materials to be used on your Print Bed when Printing with Filaments such as Taulman/Nylon 618. This wood is normally inexpensive and can usually be found at your local hardware store.
- POWER SUPPLY: See “The Parts Of A 3D Printer” Page
- PRINT/BUILD VOLUME: In relation to 3D Printers the maximum Print Volume is the largest size/dimensions of an object, the length, width, and height, it can Print.
- PRINT BED/PRINTER BED: See "The Parts Of A 3D Printer" Page
- PRE-ORDER: This is usually the default order stage for 3D Printers and other devices on Crowdfunding pages such as Kickstarter or IndieGoGo. Though the drawback of Pre-Ordering an item is paying a portion or all of the cost of the device and having to usually wait an extended amount of time to actually receive the item, Pre-Orders are usually a good deal less expensive than waiting for the device to be ready for general/public sale.
- PROTOTYPE: A Prototype is an early sample, model or release of a product built to test a concept or process or to act as a thing to be replicated or learned from.
- RAPID PROTOTYPING: Rapid Prototyping is a group of techniques to quickly fabricate/build a scale model of a physical part or assembly using three-dimensional Computer Aided Design (CAD) data.
- RESIN: Resin is a natural material that comes from plants and trees and is normally similar to maple/pancake syrup in consistency. Stereolithographic/Photo-Activated 3D Printers, such as the Lumifold and Peachy, use light-curable Resins (Photopolymers) as their Filament.
- RESOLUTION: The smallest unit of measurement for Resolution is called a Pixel. The term "Resolution" is sometimes interchanged with the term "Definition"; for example, a television that has the high display Resolution of 1280 Pixels by 720 Pixels would be called a High Definition television. The term Resolution is usually used when referring to capturing an image or Model, such as with a 3D Scanner. Definition usually refers to the quality of a display, such as the screen of your computer or television. In reference to 3D Printers, the Resolution is usually discussed pertaining to the size/diameter of the Print nozzle/Extruder tip and the positional accuracy (how many and how close together the steps of movement are) of the motors (commonly Stepper Motors) that move the Print nozzle/Extruder along its Axes (see Cartesian Coordinates above). As a rule-of-thumb, the smaller the diameter of the Extruder tip and the greater the accuracy of the motors, the finer the resolution your object can be Printed in, resulting in higher quality Prints. Think of comparing the same movie watched in High Definition (720p) and Full High Definition (1080p/Blu-ray quality). For 3D Scanners the Resolution is determined by the overall quality of the imaging equipment. Most 3D Scanners use one or more high quality cameras to take images. Some Scanners also incorporate lasers, such as the MatterForm 3D Scanner. Generally, the higher the number of Pixels a Scanner can process/handle, the higher the Resolution and, in turn, the higher the quality of the model you have to work with or the copy of your object.
- SECURE DIGITAL/SD CARD: An SD card is simply a data storage device, similar to a USB flash drive, that you can use to store design files on. A SD Card Reader allows your computer or 3D Printer to read the data on the card.
- SEMI-ADDITIVE MANUFACTURING PROCESS: A method of manufacturing that combines elements from both Additive and Subtractive manufacturing. Semi-additive Manufacturing is often used in the construction of PCB‘s. Under this process a PCB comes with a Layer of Conductive material (usually Copper) already on it. The areas of the Conductive material that WILL NOT form pathways are removed. The desired pathways left over are touched over with additional Conductive material to ensure uniformity/a working path and to adjust the Layer height of the pathways if desired.
- SERVO/SERVO-MOTOR: Is a type of motor normally used in robotics that operates by feedback; instead of sticking to defined parameters like a Stepper Motor does, a Servo Motor uses sensors to determine its position for the job it has to do.
- SMART/SMART 3D Printer: The A3DP website refers to Smart 3D Printers as Printers that are mostly plug-and-play (simple setup requiring at most the installation of one piece of software on your computer), have WiFi or other means of internet connection built-in, able to remote control your Printer, applications such as controlling your 3D Printer with your Smartphone/Tablet; the Printer must come with at least these features standard.
- SOLDERING: Soldering is the use of Solder (a metal alloy that is heated and used to join other pieces of metal; used often in the construction of electronics/PCB’s) to join multiple pieces of metal together. Soldering is commonly used in the construction of Circuit Boards (also known as Printed Circuit Boards)
- START-UP: A Start-Up is a business that is in its infancy stages. It can be a physical product, software, or even just an idea in the making. A Start-Up is also usually in search of funding to launch the business/product.
- STEP: In relation to a 3D Printer, a Step is the smallest distance a motor (Stepper Motor) can move the Print Nozzle/Extruder Head along the Axes of the Printer. The greater the number of Steps a 3D Printer can accomplish is the more accurate your Prints will be, resulting in higher quality, more detailed objects. For 3D Scanners (turn-table 3D Scanners), the Scanner normally takes one image/picture for every Step. Similar to 3D Printers, the more Steps a 3D Scanner can accomplish will be the higher the Resolution of the scanned object.
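As an illustration of how Steps translate into movement, the sketch below uses the steps-per-millimetre formula popularised by RepRap-style calculators for a belt-driven axis. The motor, microstepping and belt figures are common example numbers, not the specifications of any particular Printer.

```python
# Illustrative steps-per-millimetre calculation for a belt-driven axis, using
# the formula popularised by RepRap-style calculators. The values below
# (1.8-degree motor, 16x microstepping, GT2 belt, 20-tooth pulley) are common
# example numbers, not the specs of any particular printer.

full_steps_per_rev = 200        # a 1.8-degree stepper motor
microstepping = 16              # driver microstep setting
belt_pitch_mm = 2.0             # GT2 belt
pulley_teeth = 20

steps_per_mm = (full_steps_per_rev * microstepping) / (belt_pitch_mm * pulley_teeth)
print(steps_per_mm)             # 80.0 steps per millimetre of travel
```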
- STEPPER/STEP MOTOR: See “The Parts Of A 3D Printer” Page
- STEREOLITHOGRAPHIC/PHOTO-ACTIVATED 3D Printer: 3D Printers in this category use light intensity with special Curable (See Curing) Photopolymer Resins. The Resins harden in the area hit with a beam of light allowing the Printer to build your object. These Resins are considered the Filament for these Printers. Although, in most cases, both Photo-Activated and FDM Printers heat their Filament first, Photo-Activated Printers don’t force the heated Filament through an Extruder/Print Nozzle. These 3D Printers pour/add Resin to a container and light is beamed at alternating intensity to build your object within the container. When your object is complete you simply remove the finished product from the container. Think of an ice tray as the container, the water as the Resin, and the coldness of your freezer that hardens the water in the cubes of the ice tray as the intensity that hardens your Resin. Examples of Printers that use this method are the Lumifold and Peachy 3D Printer. You may also see this method of 3D Printing referred to as Photolithographic, Photo-Initiated, and other similar terms.
- STYLUS CUTTING: Stylus Cutting is simply the term some 3D Printer Makers use when stating that their device has the ability to Engrave objects.
- SUBTRACTIVE MANUFACTURING: A method of manufacturing primarily used by cutting machines such as CNC machines. This method/process of manufacturing is often simply called machining and is a type of Controlled Material Removal; starting with an object(s) larger than your desired end product and cutting away the unnecessary parts until you get your final object. Subtractive Manufacturing is the exact opposite of Additive Manufacturing which starts with the smallest size of the material and builds upon it. A person using the Subtractive Process is often called a Machinist, even though there are many other methods of manufacturing that fall under that term as well.
- TEXTURE/TEXTURE MAPPING: Texture is the detail added to a 3D Object using CAD/CAM software. These details include color, lighting, depth, and other aspects of the object. Adding Texture to your 3D object is called Texture Mapping. View the “What Is A 3D Printer Design File?“, article for information on how various CAD/CAD programs use Texture.
- THERMOPLASTIC: Is a Polymer that becomes moldable once it is heated above a specific temperature. The 3D Printer Filaments on this website are examples of Thermoplastics.
- ULTRA-FINE PARTICLES (UFP): As pertaining to 3D Printing, UFP’s are very small particles of Printer Filament that stay in the air during and after an object has been 3D Printed. UFP’s are commonly known as Fumes. Excessive exposure, such as Printing in an unventilated area, can cause respiratory/breathing issues. We always recommend Printing in a well ventilated environment such as opening a window and we encourage the use of a window fan.
- USB: Almost all 3D Printers offer a physical connection to your computer, this is usually accomplished through a USB connection, this is the same connection that you are most likely already familiar with as it is used by many standard (print on paper) home printers as well as keyboards, flash drives, ETC.
VOLUME: Refers to the capacity of a 3D Printer, 3D Scanner, or other device. For example, the maximum height of the range of vertical motion for the Extruder Head combined with the maximum dimensions of the Print Bed dictate the maximum size of an object it can Print. For a 3D Scanner, the Scan Volume is simply the maximum dimensions of an object it can Scan; this is usually dictated by the dimensions of the Bed/turn-table and the range of motion of the camera(s) and Laser(s), if equipped. Some devices do not have a set maximum Volume. Point-and-Shoot 3D Scanners are only limited by how much their software can accommodate, as they are basically digital cameras that are optimized to recreate the object 3-Dimensionally for manipulation and/or 3D Printing. An example of a Point-and-Shoot 3D Scanner is the Fuel 3D Scanner. Some 3D Printers, such as Stereolithographic 3D Printers, are only limited by the amount of Filament you have as well as the size of the container you have available to Print the object in. An example of a 3D Printer with this capability is the Peachy.
- WIFI: WiFi is the short name for the term “Wireless Fidelity”. WiFi uses radio waves to broadcast a network, such as the Internet, wirelessly between devices such as computers.
- WIKIPEDIA: A free to use online encyclopedia where information is uploaded, edited, and verified by thousands of users from around the world.
If there is a Frequently Used Term, or even a term used once, on our site that you do not understand and is not listed here or you need an additional explanation of a term that is here please let us know!
You will also find our FAQ Page useful!
Frequently Used Terms On allabout3dprinting.com
A 3D Printer is best thought of in relation to your regular home Printer; your ink Printer at home uses ink to print text and images on paper, a 3D Printer works by using materials such as plastic and metal … Read More
Why Is 3D Printing Important? What Is It and Why All The “Fuss”? The main goal of our website is tell about you the advantages of 3D printing and why it is important to join in on the 3D printing … Read More
amazon.com Conveniently Providing Everything You Need The 3D printing store at amazon.com is fully operational. If you are unfamiliar with amazon.com it is an online retailing giant. At the 3D printing section on their website you can not only compare … Read More
"London Patient in Remission": a second HIV patient is in remission after being treated with stem cell therapy, offering hope that this miraculous treatment may lead to a permanent cure for AIDS.
It’s both a miracle and the result of accelerating technologies, Physicians referring to a “London Patient” a man with HIV has become the second person in the world who has been cured of the virus since the global AIDS epidemic began decades ago.
A new approach of transplanting stem cells from a donor with a specific profile that is believed to make them immune to getting HIV to those with the HIV infection appears to have made history. The “London Patient” has been declared HIV free three years after receiving bone marrow stem cells from an HIV-resistant donor and about a year and a half after coming off antiretroviral drugs.
This fantastic accomplishment, according to researchers from around the world, could mean that humanity is on the verge of developing a cure for HIV, the virus that causes AIDS.
During a Reuter’s interview with Ravindra Gupta, an HIV biologist who helped treat the man insisted that the patient is “in remission” but cautioned that it’s…
“Too early to say he’s cured.”
The “London Patient” is choosing to remain anonymous for now. The reference to his location is similar to the first known case of a cured HIV-positive patient Timothy Brown, an American man, who was known as “The Berlin Patient.” The first person ever to get a stem cell bone marrow transplant for leukemia treatment in Germany more than a dozen years ago. That transplant to date has also appeared to wipe out any trace to his HIV infection.
The Brown case led to many attempts in which scientists tried for 12 years to copy the result with other HIV-positive cancer patients, but they were unsuccessful. The "London patient," who had Hodgkin's lymphoma, is the first adult to be cleared of HIV since Brown.
HIV remains a serious epidemic in the United States and around the world. It's estimated that there were about 39,000 new HIV diagnoses in the United States in 2017 and that approximately 37 million people worldwide are currently living with HIV. An estimated 35 million people have died of AIDS since the early 1980s, when the disease became an epidemic.
Scientists who have studied the London patient will be publishing a full report this week in the journal Nature. A presentation is also planned in Seattle at the Conference on Retroviruses and Opportunistic Infections, taking place this week.
Bone marrow stem cell transplants as an HIV therapy can have some harsh side effects, but scientists believe it may be possible to treat patients with similar HIV-resistant immune cells, making the treatment easier on patients as well as more cost-effective. Dr. Annemarie Wensing, a virologist at University Medical Center Utrecht, said during an interview with the New York Times…
“This will inspire people that cure is not a dream. It’s reachable.”
As many as 41% of those infected with this deadly fungal infection in one recent outbreak in a Spanish hospital died within 30 days of being diagnosed. People who contract these drug-resistant diseases typically die soon after contracting them because of their untreatable nature.
Antibiotic-resistant superbugs – germs that evolve so quickly that existing treatment protocols can't keep up – kill an estimated 23,000 Americans every year. The danger that one of these superbugs becomes so resistant to antibiotics, and so contagious and lethal, that it causes a worldwide pandemic capable of killing millions, tens of millions, even as many as a hundred million people grows every day.
If that wasn’t enough of a potential nightmare, medical experts are now warning a similar nightmare is starting to grow with deadly drug-resistant fungal infections. Right now a deadly, drug-resistant fungus called Candida auris is spreading around the globe and is so dangerous it's being described by the Centers for Disease Control and Prevention (CDC) that it’s being called an "urgent threat."
This drug-resistant fungus, Candida auris, was first discovered in 2009 in the ear discharge of a patient. The fungus is now spreading around the world, has been reported in the US, Colombia, India and South Korea, and threatens to become a pandemic, according to the CDC.
The first cases of Candida auris in the United States were reported by the CDC in August 2016. By May 2017, a total of 77 cases had been reported in New York, New Jersey, Illinois, Indiana, Maryland, Massachusetts, and Oklahoma. After looking at people in contact with those first 77 cases, the CDC determined that the quick-spreading fungus had infected 45 more.
Now the deadly fungus is reaching a point where it could become an epidemic. The CDC reported in February 2019 that there were 587 confirmed cases of Candida auris in the United States alone.
As is the case with this kind of antibiotic-resistant illness, people who have weakened immune systems are especially at risk for infection!
People who contract the fungus are often already in the hospital suffering from a severe illness, according to the CDC. C. auris outbreaks are now one of the biggest health risks for hospitals and healthcare centers worldwide.
In the UK, an intensive care unit was forced to shut down after the hospital discovered 72 people there were infected with C. auris.
In Spain, a hospital found that 372 patients had the fungus. 41% of those infected Spanish hospital patients affected died within 30 days of being diagnosed.
The implications and risks of C. auris have healthcare experts alarmed and warning that the number of those infected could grow geometrically, because the fungal infection, as of yet, can't be contained with existing drug treatments.
The danger of C. auris is best illustrated by the fact the fungus can survive on surfaces like walls and furniture for literally weeks, even more than a month, according to the CDC.
People infected by drug-resistant diseases typically die soon after contracting them because of their untreatable nature. While it's true that most fungal and bacterial infections can be stopped using drugs, it's also true that drug-resistant fungi and bacteria have a genetic ability to evolve so quickly that a treatment that works for one patient may not work for another. At the same time, this ability to change rapidly also helps them survive and spread at an alarming rate.
Making the danger even greater, these drug-resistant diseases and fungi are extremely difficult for physicians to diagnose. Often carriers infect others even before they know they're infected!
The CDC is now saying 1 in 10 people the agency screened for superbugs carried a drug-resistant disease without even knowing it.
People who have C. auris rarely realize they are infected until they are very sick. The CDC reports people that are infected usually report…
Fever and chills that don't go away following over-the-counter and physician-prescribed drug treatment. Sufferers usually don't get diagnosed until the symptoms of the fungus get so bad that they are hospitalized and tested for the fungus through a lab test.
Some experts believe pesticides and the overuse of antibiotic drugs are creating these superbugs, and that sooner or later they will cause the worst pandemic in human history, capable of killing millions of people.
Physicians and researchers insist they don’t know what is causing the rapid rise of these drug-resistant illnesses. One of the scariest aspects of this drug-resistant fungus is that there are different strains of C. auris in different parts of the world, which means this fungus didn't come from a single origin. They are being created around the globe at the same time.
Many physicians and researchers suspect heavy use of pesticides and other antifungal treatments caused C. auris to pop up in a variety of locations around the same time….
Researchers in 2013 reported on another drug-resistant fungus, called Aspergillus, and noted that it appeared in places where a pesticide designed to target and kill that specific fungus had been used.
As pesticides, antifungals, and antibiotics continue to be heavily used on crops and in livestock, it's possible that the fungi and bacteria they're targeting adapt and evolve in spite of the treatments to kill them.
The CDC is urging people to use soap and hand sanitizer before and after touching any patients, and reporting cases to public health departments immediately when they are detected.
Scientists have discovered a new drug that modulates the body's defense cells and can stop autoimmunity and stimulate the body to kill cancer cells.
New research has shown that T cells play a fundamental role in the fight against cancer. T cells are part of the body's defense cells and are responsible for destroying potentially harmful agents such as bacteria, viruses, and even malignant cells.
A new study reveals that a molecule called tetrahydrobiopterin (BH4) regulates the growth of T cells in the immune system.
The study was led by researchers from the Institute of Molecular Biotechnology of the Austrian Academy of Sciences (IMBA) in Vienna and by scientists at Boston Children's Hospital in Massachusetts. The results of this research were recently published by the journal Nature.
"A fascinating feature of our discovery is that a system that was previously known only for its importance in neurobiology can also play a key role in T cell biology," says co-senior author Josef M. Penninger, the scientific and founding director of IMBA.
Cancer and autoimmunity
These findings can lead to a wide variety of therapeutic applications such as the control of autoimmune diseases (asthma, rheumatoid arthritis, lupus, allergies, etc.) and even trigger an immune response against cancer.
Harnessing the body's own healing mechanisms to fight disease is a rapidly growing field in medical research.
Recently, two scientists were awarded the Nobel Prize in Physiology and Medicine for 2018 after developing an approach to cancer therapy that stimulates the inherent ability of the immune system to destroy tumor cells.
These discoveries have revolutionized anticancer therapy, as it allows us to take advantage of the activity of our immune system to selectively destroy neoplastic cells and avoid the use of drugs that destroy both healthy cells and cancer cells.
Cancer is a disease that affects millions of people around the world and has a significant impact on society. The National Cancer Institute estimates that by 2018, doctors will diagnose more than 1,735,350 new cases of cancer and that 609,640 people will die of the disease in the United States.
Many diseases can originate due to the inadequate activity of the immune system. There are more than 80 types of autoimmune diseases which arise due to an overactive immune system that causes the body's defense cells to attack healthy tissues.
Among the most frequent autoimmune diseases are rheumatoid arthritis, systemic lupus erythematosus, type I diabetes, and inflammatory bowel disease (Crohn's disease and ulcerative rectocolitis).
A report from the National Institute of Health (NIH) published in 2005 estimated that up to 23.5 million people in the US suffer from an autoimmune disease. However, the NIH figures only take into account 24 autoimmune diseases. Therefore there is an underreporting of these diseases.
In this new study, the researchers showed that reducing BH4 severely limits the proliferation of T cells in humans. Apparently, T cells require BH4 to regulate intracellular iron concentrations and energy production. These findings are consistent with previous research linking iron deficiency with alterations of the immune system.
The research team found that the increase in BH4 in mice with cancer caused the proliferation of T cells and the reduction of tumors. Apparently, BH4 exerts this effect by suppressing the activity of a molecule called kynurenine that inhibits the action of T cells on malignant tumors.
The researchers used BH4 blockers in mice with autoimmune diseases. These drugs stopped the autoaggressive activity of the T cells, stopped the allergic inflammation and prevented the T cells from causing autoimmune attacks in the intestine and the brain.
The prostate is a gland exclusive to the male genitourinary system, formed by muscle and glandular tissue and weighing approximately 20 grams. It is located in front of the rectum and just below the bladder, surrounds the urethra, and participates in the production of the seminal fluid, along with the periurethral glands and the seminal vesicles.
Benign prostatic hyperplasia (BPH) is an entity characterized by an increase in glandular size and by the presence of an obstructive and irritative component that causes lower urinary tract symptoms (LUTS) and alterations in the quality of life of patients. It mainly affects men over 50 years of age. Benign prostatic hyperplasia (BPH), a very common entity worldwide, is the main reason for urological consultation in men. BPH is one of the most frequent benign tumors in direct relationship with age. In the US, the prevalence is 8% between 31 and 40 years and over 80% in those over 80 years. The prevalence in Europe presents a range of 14% in subjects of 40 years of age to 30-40% from 60 years.
The consultation for symptoms secondary to BPH is very frequent in outpatient practice. Obstructive symptoms include difficulty in initiating urination, decreased strength and caliber of the voiding stream, post-void dribbling, and incomplete voiding. Irritative symptoms include urgency, urinary frequency, and nocturia. It is worth noting that dysuria or burning during urination is also considered an irritative symptom, but patients with BPH rarely complain of dysuria, except when they have an overactive urinary tract infection.
The treatment of benign prostatic hyperplasia is aimed at reducing urinary symptoms and improving the quality of life of the patient, and the choice of treatment will be conditioned by the clinical picture, comorbidities, and the patient's expectations. The three available therapeutic options are watchful waiting, medical treatment, and surgical treatment, and the therapeutic decision will be conditioned, in addition to the above aspects, by the effectiveness and safety of the treatment, by the best cost-effectiveness ratio, and by the patient's preferences.
Surgical treatment is the method that offers a better response for symptoms but carries a higher risk of complications. Transurethral resection of the prostate was until recently the most effective therapeutic option for those patients who do not respond favorably, or who do not accept pharmacological therapy.
Current procedures used to reduce the size of the prostate, while effective, can lead to highly feared side effects such as loss of sexual function, bleeding, and incontinence, and patients must stay in the hospital for days after the surgery.
If you are one of the millions of men who are not satisfied with your current treatment of benign prostatic hyperplasia (BPH) (such as medication or surgery), water vapor therapy, called Rezūm, is a new, safe and effective option designed to transform your experience regarding the treatment of BPH.
A study conducted in British patients showed that this procedure reduced the size of the prostate by 36%. These results are similar to those obtained with other treatments, but without the dreaded side effects mentioned above.
Rezūm uses the natural energy stored in the water vapor. It is a safe and effective procedure, available for the treatment of symptoms associated with benign prostatic hyperplasia. During each treatment, the sterile water vapor is released into the enlarged prostate tissue. When the steam turns into water, all the stored energy is released, which causes the death of the cells. Over time, your body's immune system removes the dead cells, reducing the size of the prostate. With the removal of the prostatic tissue, the urethra is unclogged, reducing the symptoms of BPH. Most patients begin to experience relief of symptoms in only two weeks, and maximum benefits are reached within three months.
Before the procedure, your doctor may ask you to stop using anticoagulants a few days or a week before the procedure. The procedure is completed in just a few minutes; however, keep in mind that the total duration of the consultation with your doctor will be approximately 2 hours. After the procedure, your doctor will prescribe analgesics and antibiotics orally for 3 to 5 days. The doctor may recommend the use of a urethral catheter for a few days to facilitate urination during the recovery process.
Kidney stones are one of the most common disorders worldwide; approximately 10% of the population has suffered at least one episode of kidney stones at some point in their lives. Men suffer from kidney stones more often than women. Children can also develop kidney stones; this may be due to genetic factors, low birth weight, intravenous feeding, and deformities or abnormal anatomy of the urinary tract. However, children are also at risk of developing kidney stones if they do not drink enough fluids or eat foods high in salt.
Kidney stones are crystallized masses that form in the kidney. The development of the stones depends on the chemicals found in the urine. Certain substances can accelerate the formation of stones, while others prevent the formation of these.
Most stones are composed of calcium oxalate, but others are made up of uric acid, phosphate, and other chemicals. Stones start small and grow over time. They can remain in the kidney or move through the ureter (the tube that carries urine from the kidneys to the bladder). Stones can also form in the bladder or urethra (the tube that carries urine to the outside of the body).
Risk factors for developing kidney stones
• Family history of kidney stones
• Having had a kidney stone before
• Obesity (a BMI greater than 30)
• Inflammatory bowel disease (Crohn's disease, ulcerative rectocolitis)
• Patients undergoing bariatric surgery, since the body absorbs less calcium after gastric bypass procedures
Below are some recommendations to prevent the formation of kidney stones:
Drink enough fluids throughout the day: Insufficient fluid intake contributes to the formation of stones. If you do not drink enough water, the urine will have less fluid and a higher concentration of chemicals that form the stones. That's why drinking more water may help prevent the combination of those chemicals that make up the stones. This recommendation is especially important during the summer when kidney stones are more likely to develop due to dehydration.
It is recommended that adults consume one ounce of fluid daily for every two pounds of body weight. For example, a 200-pound man should drink 100 ounces of fluid per day. Bear in mind that not all liquids are beneficial: coffee, iced tea, and many soft drinks contain caffeine, which can cause dehydration if consumed in excess. Soda contributes to the accumulation of calcium oxalate, so it is better to avoid it; cutting back on sugary soft drinks, particularly those acidified with phosphoric acid, significantly decreases the risk of a stone recurrence. Alcoholic beverages also increase the risk of kidney stones.
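If you like to see the arithmetic, here is a tiny, purely illustrative Python sketch of the one-ounce-per-two-pounds rule of thumb above. The function name and the example weight are just for illustration, not any official formula or medical advice.

```python
def daily_fluid_ounces(body_weight_lbs: float) -> float:
    """Rule of thumb from above: one ounce of fluid per two pounds of body weight."""
    return body_weight_lbs / 2

weight_lbs = 200                        # the example weight used above
ounces = daily_fluid_ounces(weight_lbs)
cups = ounces / 8                       # 8 fluid ounces per cup
print(f"{weight_lbs} lb -> {ounces:.0f} oz of fluid per day (about {cups:.1f} cups)")
# prints: 200 lb -> 100 oz of fluid per day (about 12.5 cups)
```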
Decrease your salt intake: High salt intake negatively affects the composition of urine, increasing calcium excretion and decreasing urinary citrate, which favors crystal and stone formation. Adults with hypertension or a history of kidney stones should limit sodium intake to 1,500 mg per day.
Avoid foods rich in oxalate: The binding of oxalate with calcium in the urine is one of the most important steps in the formation of kidney stones, so decreasing your intake of oxalate-rich foods reduces stone formation. Among the foods with the highest oxalate content are spinach, rhubarb, sweet potato, beet, chocolate, kale, and peanuts.
In addition, excessive consumption of animal proteins such as beef, cheese, eggs, and pork contributes to the crystallization of uric acid. Eating vegetarian at least twice a week lets you replace part of the animal protein with beans, dried peas, and lentils, which are high-protein, low-oxalate foods.
Monitor added sugars: Added sugars, especially in the form of corn syrup with high fructose content, contribute to the crystallization of uric acid. The natural sugars in fruits are perfectly fine for daily consumption.
Avoid vitamin C supplements: The consumption of more than 500 mg per day of vitamin C predisposes to the formation of greater amounts of oxalate.
Do not be afraid of calcium: Contrary to what many people believe, restricting calcium intake may increase the risk of stone formation. Calcium binds to oxalate in the intestine; when calcium is scarce, more oxalate is absorbed and then excreted in the urine, favoring the formation of kidney stones. Therefore, it is recommended to ingest approximately 1,000 to 1,200 mg of calcium per day.
An inexpensive household product found in most kitchens can promote an anti-inflammatory environment and could become a treatment for a life-changing autoimmune disease.
Baking soda (also known as sodium bicarbonate) has become a very popular product due to its multiple uses, ranging from household cleaning to dental care and more. Recently, research has also pointed to a possible role for bicarbonate in the treatment of rheumatoid arthritis (RA).
Maintaining a balanced pH is essential for the proper functioning of the organism; an excessively alkaline or excessively acidic environment leads to a wide range of physical disorders.
Sodium bicarbonate can help alkalize an overly acidic environment in the body. Several lines of research suggest that the body works better in a slightly alkaline environment; however, keep in mind that either extreme (too acidic or too alkaline) can be harmful to health.
In September of this year, the Journal of Immunology published a study concluding that drinking water mixed with sodium bicarbonate may reduce the chances of developing diseases such as RA and lupus.
In this study, the researchers orally administered a mixture of bicarbonate and water to two study populations: healthy men and rats. After 15 days of treatment, the scientists observed that immune cells called macrophages began to take on an anti-inflammatory role.
Researchers concluded that sodium bicarbonate acted as a natural stimulant of the anti-inflammatory response of macrophages. Many diseases such as rheumatoid arthritis can benefit from these anti-inflammatory properties.
How sodium bicarbonate works
Baking soda appears to dampen the autoimmune response, in which the body's defense cells attack its own tissues.
After the administration of sodium bicarbonate, the researchers noticed a decrease in autoimmune activity and an increase in anti-inflammatory activity in the stomach, spleen, kidneys and peripheral blood.
This effect is partly due to a change in the regulation of T cells and increased activity of anti-inflammatory cytokines and cells.
This combination of processes reduces the immune response and could help prevent the immune system from attacking its own tissues.
Sodium bicarbonate could be an economical, safe, and effective way to relieve the symptoms associated with RA and other autoimmune diseases, but it is essential that you consult with your doctor before starting any therapeutic regimen.
It is important to mention that people at risk of alkalosis (blood pH above 7.45) do not benefit from taking sodium bicarbonate; for them it can even be harmful.
"Baking soda is a really safe way to treat inflammatory disease," said Paul O'Connor, director of the physiology graduate program at Augusta University in Georgia, and lead author of the study.
That's not all it has been shown to do, either.
Sodium bicarbonate has also been used to treat acid reflux. Some research even recommends the intake of sodium bicarbonate as a method to prevent certain forms of cancer.
Michelle Neilly, a health coach at Integrative Nutrition in Pennsylvania, said: "While there is no miracle cure-all fix out there, some home remedies like baking soda could help patients with RA."
For several years, inflammation has been considered a harmful process involved in the development of a large number of conditions that compromise quality of life, such as arthritis, diabetes, atherosclerosis, asthma, and even Alzheimer's disease. However, inflammation is also a natural process that the body sets in motion to protect itself, so there is much confusion about whether inflammation is good or bad. Below we clarify what inflammation is, how it manifests itself, and how it may be affecting you.
What is inflammation?
The immune system is the defense system of the body, which is made up of a set of organs and cells responsible for defending the body from any potentially harmful situation. The immune system is also able to eliminate cells that no longer work or function poorly. If a tumor cell is detected, the immune system induces its apoptosis (programmed cell death).
Inflammation is a process triggered by the immune system to protect and repair tissues from any injury caused by bacteria, viruses, fungi, toxins, etc. When there is a tissue injury, for example, a wound, the immune system activates a set of mechanisms that lead to inflammation and subsequent healing of the injured tissue.
When does it happen?
When the body identifies damage or a foreign agent, the inflammatory process begins with the migration of white blood cells to the affected area; these cells destroy microorganisms and limit tissue damage. Subsequently, connective tissue cells called fibroblasts synthesize collagen and other proteins needed to repair the injured tissue. Inflammation is a necessary process: without it we would be defenseless against viruses and bacteria and could never heal.
When is it not good?
Although inflammation is an important part of the body's defense mechanisms, when there is an imbalance in its regulation, it can cause great damage to the organism. Like everything in life, it is necessary that there is a balance between the proinflammatory and anti-inflammatory elements.
What causes harmful inflammation?
The reality is that many of the habits of modern life are capable of triggering a chronic and uncontrolled inflammatory process that compromises the health of the individual. Stress, poor diet, smoking, and not getting enough sleep are the main factors that lead to this situation.
How can I avoid 'bad' inflammation?
Several studies have shown that trans fats increase oxidative stress and the production of free radicals, which promotes an inflammatory environment in the body. In addition, the excess in the consumption of simple carbohydrates and processed sugars considerably raise the levels of pro-inflammatory hormones such as insulin. The additives and preservatives present in some foods are capable of altering the intestinal microbiome, which profoundly influences the immune system. If you want to avoid inflammation, we recommend you stay away from this type of food and start a healthy diet.
What foods fight inflammation?
It is recommended to maintain a high-fiber diet, as it displaces unwanted foods and helps eliminate toxins. The healthy fats present in olive oil, avocado, and nuts are a great ally. It is also recommended to incorporate more plant-based protein into the diet and to increase the consumption of fruits, vegetables, and complex carbohydrates.
What else can I do?
Sleeping well not only reduces inflammation, but it also improves your cardiovascular health, decreases depression, and balances your immune system.
Eliminating stress is essential to combat inflammation; we recommend yoga, meditation, or aromatic baths three times a week.
Exercise is another powerful anti-inflammatory, especially when practiced regularly. It has been proven that aerobic exercise can reduce insulin and cholesterol levels, which reduces inflammation and increases blood flow to the tissues.
Biogen (BIIB) took a nose-dive last month after the company announced that it would be pulling the plug on the late-stage trial of its Alzheimer’s drug, Aducanumab.
Biogen was in partnership with a Japanese pharmaceutical company, Eisai, on its Alzheimer’s drug candidate, Aducanumab, but both companies agreed to discontinue the phase 3 trial after an independent data-monitoring committee concluded that the drug would NOT “meet their primary endpoint.”
This is a catastrophic blow for Biogen who was hoping this Alzheimer’s drug, Aducanumab, would become the blockbuster profit center of its drug pipeline. Wall Street analysts had bet big on Biogen’s progress in its attempt to treat Alzheimer’s disease.
The hunt for drug treatment for Alzheimer’s has become one of the holy grails of bioscience. Finding a reliable drug to target the beta-amyloid protein, the main component of the amyloid plaques found in the brains of Alzheimer patients, has the potential of being a multi-billion a year product. Biogen is only one of many major pharmaceuticals to have tried and failed in attempting to tackle this human health scourge.
Jefferies Financial Group Inc. (JEF) said in its latest note to clients…
“This was a clear part of the potential downside risk and consistent with our HOLD rating thesis. We think the base business is worth $225-$250 w/o any pipeline. However, on a trading basis, we think the stock could trade down as low as $200-$230 with the removal of the program from the valuation,”
“They will have to now be overly-aggressive on M&A due to desperation.”
Meanwhile, Citi bank analysts released an update warning this Biogen failure will be a setback that will have adverse ripple effects across the bigger biotech space…
“With Aducanumab removed as the major pipeline catalysts, large-cap biotech’s ability to grow will remain in question.”
The signs of an addictive substance include bingeing, craving, tolerance, withdrawal, cross-sensitization, cross-tolerance, cross-dependence, reward and opioid effects.
And that’s why a recent article in the British Journal of Medicine concluded that sugar is addictive.
If you have a sweet tooth, this probably is not news. But it has been a contested topic in food and medical science for some time. No one gets the D.T.’s if they skip a candy bar. But they may get a headache, feel tired, or suffer distracting urges to run out and get their fix.
They may even hide their stash.
Lately, sugar “purges” are a celebrity trend. Do we need to copy that? Is craving sugar really an addiction that will wreck your health?
Time to take it easy… For people who are not diabetic or pre-diabetic, a little sugar is harmless. We evolved to seek out sweet tastes because it goes along with fruits and vegetables that are ripe and therefore at peak nutrition.
Wanting something sweet is not the real issue. The problem is that we no longer get a little sugar each day. We eat a lot of it. The average American devours 94 grams of sugar a day. That is the equivalent of 23.5 teaspoons of sugar.
It’s so far beyond our basic needs that coming into line may seem like punishment to sugar lovers.
The American Heart Association recommends an added sugar intake of 6 tsp or less per day for women, 9 tsp for men. That’s 24 grams for women, 36 grams for men.
Or to put that another way, a teaspoon of sugar is about 4 grams, so the AHA limit works out to roughly 2 tablespoons a day for women and 3 for men. You probably had that much before noon.
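For readers who like to check the math, here is a small, purely illustrative Python sketch that converts grams of added sugar to teaspoons at the 4-grams-per-teaspoon figure above and compares a day's intake against the AHA limits. The names and example numbers are just for illustration.

```python
GRAMS_PER_TSP = 4                              # granulated sugar, per the figure above
AHA_DAILY_LIMIT_G = {"women": 24, "men": 36}   # AHA added-sugar limits, in grams per day

def grams_to_teaspoons(grams: float) -> float:
    """Convert grams of sugar to teaspoons at 4 g per teaspoon."""
    return grams / GRAMS_PER_TSP

def over_aha_limit(grams: float, group: str) -> bool:
    """True if a day's added sugar exceeds the AHA recommendation for that group."""
    return grams > AHA_DAILY_LIMIT_G[group]

average_intake_g = 94                          # the average American intake cited earlier
print(f"{average_intake_g} g = {grams_to_teaspoons(average_intake_g):.1f} tsp")  # 23.5 tsp
print("Over the limit for men?", over_aha_limit(average_intake_g, "men"))        # True
```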
Even if you never put sugar in your coffee, eat Count Chocula for breakfast, or grab a donut on the way to the office, you are probably busting the AHA limit every day.
Some sources are obvious. If that fruit-enriched yogurt includes jam on the bottom to stir in, it’s a sugar bomb. One serving of Dannon strawberry yogurt contains 15 grams of sugar, almost four teaspoonfuls. This is fairly mild among flavored yogurts. Yoplait’s chocolate almond contains 22 grams of sugar. Fage split cup honey has 29 grams of added sugar.
Even if you stick to the AHA guidance, sugar intake is a tricky subject. Tracking only your added sugar can be misleading if you gravitate to naturally sweet foods. Bananas and grapes pack a wallop, but they also contain fiber, which ameliorates potential sugar spikes. In addition, the sugar in fruit is primarily fructose, which does not stimulate insulin production as glucose does.
But fruit juices are a special problem. Apple juice is nearly as sweet as a cola drink. Grape juice is sweeter than a Coke.
If you want to keep your added sugar under control, you can spot these culprits easily. It shouldn’t be a surprise that most granola is not health food; it is wholesome food with sugar added, usually honey. Blueberry muffins and Pop Tarts don’t really fool anyone. But some foods do.
A few of the most common sugar culprits may surprise you.
Ketchup with those fries will cost you. One ounce of ketchup is worth 1 ½ tsp of sugar.
A bowl of corn flakes is worth a teaspoon of sugar… before you sprinkle any on top.
At lunch, a Big Mac will give you 8 grams, or 2 teaspoons of sugar. Or maybe you could choose a half-cup of pulled pork on a potato roll—we’re talking a small sandwich. Count that for 26 grams of added sugar, about 6 ½ teaspoons. If you’re eating that pulled pork in a restaurant, though, it’s probably going to come in at twice that much.
Added sugar appears everywhere—peanut butter, barbecue sauce, pasta sauce, any drink that is not labeled 0-calorie, canned fruit, bread, canned soup, frozen dinners, chocolate sauce, lunch meats, and baked beans, to name a few.
Cooking everything for yourself would solve a lot of this issue, but we don’t always have time to do that. We can take small steps, though.
For example, if you have a thing for “sweet tea” as they say in the south, make your own. You can even reduce the sugar slowly over time. It will certainly be better than a big, 16-ounce, glass of Arizona tea with 48 grams of sugar!
Anything you bake from scratch can be trimmed, too. Most cakes, quick bread, and muffins taste perfectly fine, maybe even better, with the sugar reduced by 1/3 to 1/2.
And just for the record, if you are tracking your sugar, it all counts. Despite its golden aura, honey is sugar. So are agave, corn syrup, maple syrup, and molasses.
Before you eat prepared food, read the label. Then keep track. You can also compare brands thanks to nutrition labels. For instance, most commercial bread contains far more sugar than necessary to make the yeast rise, but some brands are lower than others. Typically, rye bread has much less sugar, so if you enjoy it, that’s a great choice.
Bottom line: if it’s prepared food, read the label, even if you think it’s not a sweet. You might be surprised how much sugar is lurking.
News from Sweden by way of England: a glass of beet juice could help you work out longer and feel better.
It can make those fast-twitch muscles more powerful, too.
Somewhere in the world, there must be six people besides me who are thinking, “Wow! Beet juice! Hooray, I’ll take two glasses!”
My husband jibes me for ordering from restaurant menus based on the side dishes. Pork or venison tonight? I’ll take the one that comes with beets, please.
This beet-juice research should be in the “news you can use” category, but for one problem. Much to my surprise, most people hate beets. Ranker puts them third on the list of most disliked vegetables. Evidently, people would far rather eat parsnips and okra. Amazing.
The Swedish research happened in 2007 based on a comparison of triathletes and endurance cyclists who took either sodium nitrate or table salt before a workout. In 2009, researchers in England found the effect puzzling enough to run their own experiments. Instead of a chemical, they used beet juice because beets are naturally high in sodium nitrate. Beet juice was a winner. Cyclists who imbibed could pedal longer before reaching exhaustion. Their systolic blood pressure dropped 6 points. And their oxygen demand fell an impressive 19%.
Even with this proof, there was no worldwide breakout of beet-juice drinks crowding your grocery store. Beet juice never reached the dizzy heights of pomegranate juice.
I suspect a cadre of beet haters suppressed it. The glory of beets stayed buried for a long time, though there are many elite and serious athletes who do swear by it.
Then in 2018, Andrew Jones and team at the University of Exeter in England published an extensive review on dietary nitrates and physical performance in the Annual Review of Nutrition. Beets were the main focus because they are an abundant natural source of sodium nitrate.
Sodium nitrate is one of those chemicals we have all been warned to avoid. It was supposed to be the devil in processed meats, a carcinogen. It is now believed that the cancer risk exists, but was overstated.
Apart from that, we need sodium nitrate, and it is common in the foods we eat—especially spinach, arugula, chard, and beets. One reason to seek out the chemical is that it is involved in producing nitric oxide (NO) which our bodies must have. NO helps keep blood vessels dilated, helps regulate glucose and calcium, and plays a role in reenergizing mitochondria and muscle contraction.
Beets boost NO and have more benefits than the first Swedish research team realized. They focused on elite endurance athletes. More recent research suggests the benefits to that group can be iffy, but the rest of us may benefit a lot. Beets seem to have an even greater ability to support fast-twitch muscles. That’s useful for sudden bursts of activity, such as sprints or sports that require explosive movements, like soccer. Or jumping.
The most exciting thing about the work being done on beet juice and nitrates is that they may particularly help older people overcome exercise intolerance. The inability to exercise hard and long is not just a “use it or lose it” matter. It’s caused by low NO levels in the body as we age, along with a declining capacity to turn natural arginine into nitric oxide.
A couple of glasses of beet juice could help. And if drinking beet juice doesn’t appeal, eating one cup of spinach or beets should have the same effect.
Sex issues can be complicated, but they’re not uncommon. It takes a doctor’s help to overcome obstacles such as medicines that put the kibosh on the libido. It might take a financial counselor or a budget to solve money worries that cause fizzle where there should be sizzle.
But two of the most common issues are problems you can take care of yourself. They are all about attitude.
The first one is sometimes called “performance anxiety.” I prefer to say it straight up—it’s embarrassment.
Worrying about your ability to perform is actually a case of worrying about whether you are OK, whether you measure up. And let’s face it, we all want to be heroes with our beloved partners.
The odd thing about this anxiety is that members of the opposite sex are not usually as disturbed as their partners worry they may be.
For instance, it is extremely common for women to become less moist as they grow older, especially after menopause. A lubricant is a simple, easy answer, yet many women hesitate to introduce the idea.
A woman who does suggest she needs help, even though her pleasure and interest are as high as ever, could be surprised to find her partner is more than happy to oblige. Most men have no aversion whatsoever to experimenting with different creams, gels, or lubricants to find one that both partners like. It can even add new sizzle to the dynamic.
Not to stereotype anyone, but telling your man, “honey, would you mind browsing the sex store for me?” is like asking your dog if he would care to have a raw steak.
From the opposite side of the bed, men can worry about ED even when they don’t have it. With age, the penis gets less hard when engorged. It can thus be shorter or have less girth. The constant urge may also calm down into a more occasional urge in maturity. Women are not as put off by these natural changes as their men fear. Even sex with softness is real sex to them if the spirit is willing.
Plus, the solution to less stamina and hardness might be as simple as more foreplay… something most women who love their men would never veto.
The second overwhelming mental block to good sex has to do with embarrassment, too. It’s about your body image. Parts that should be firm, flop. Parts that should be flat, grow round, and parts that should be round, grow flat. You may have noticed the word “should” in all those sentences. The “should” applies to your ideal teen and 20-something self. After that, give yourself a little leeway.
Ideally, for our overall health as well as our sexual health, we would exercise and control our weight. But if we slide, the fix takes time. You can’t lose 20 pounds overnight without surgery. And even then, it’s a daunting task.
You can fix your attitude faster. If your partner is not criticizing you or shunning you, then cut yourself some slack. Sexy is not a size; it’s an attitude.
Johnson & Johnson is facing class action lawsuits for its iconic baby powder.
This only happened because baby powder is the rare infant product we never want to give up. The problem is what’s sometimes in the talc: a taint of asbestos. And it is long-term exposure that implicates baby powder in diseases like ovarian cancer and mesothelioma.
We’re addicted. Grown men sprinkle it in their sneakers and underwear. Women prize it for a million reasons from keeping thighs from chafing, to freshening hair between shampoos, to setting makeup. Moms learn to sprinkle it on their kids’ feet at the beach to brush off sand more effectively. It can even calm squeaky floorboards. It’s a fine dry pet shampoo, too.
You probably already know that you can use baking powder or corn starch for many of those things, but it’s not completely satisfying, is it?
Because… the smell. There are people who hate baby powder smell. But for many of us, it has a strong association with that after-bath feeling of being clean.
As it turns out, “baby powder smell” is so popular, one company has blended an essential oil with the fragrance. It’s called Young Living's Gentle Baby Essential Oil Blend. If you don’t see it wherever you usually shop for essential oils, there’s always Amazon. I checked. They have it.
If you’d like to experiment with your own blend, the probable fragrance notes are vanilla, rose, honeysuckle, jasmine, geranium, and lavender. Then again, some claim that violet essential oil is exactly the thing.
The next step is getting the smell into a nice powder.
If you are completely addicted to talc, you can carry on. It is possible to buy odor-free talcum powder that is also asbestos free. This, too, can be ordered online, but you may also find it at a well-stocked aromatherapy store or wherever candle-making and soap-making supplies are sold.
Just a word, here, though. Some of the health effects that are associated with baby powder are related to breathing in particulates. If you choose to continue with talc, sprinkle without undue exuberance and avoid breathing deeply until it settles. Also, it is not a good idea to use this in the perineal area if you are a woman because of the suspected link to ovarian cancer.
If you want to avoid talc, your best bet depends on the feel you like and the purpose you have in mind.
Baking soda—great for controlling odor. Perfect for smelly dogs and shoes, controlling rashes, and absorbing moisture. The downside is that it doesn’t have the silky luxurious feel of powder, but it is safe and natural. You can get some smoothness by mixing baking soda with a softer substance like arrowroot or rice powder.
Cornstarch—this has the exquisite feel many prize for a powder that helps prevent chafing where the body rubs, such as inner thighs. Cornstarch is the main ingredient in the most popular and highly rated J&J alternative, Burt’s Bees Dusting Powder. It’s ideal for soothing itchy spots including bug bites, and it’s a champ at absorbing moisture. It’s naturally fragrance-free. If cornstarch has a downside, it is that it is so light and airy it can be messy to control in a shaker. For a deodorant, cornstarch needs a little help; blend it with baking soda.
Rice Powder— an alternative many people don’t think of immediately. Our grandmothers may have known it as a face powder. It has several unusual properties. For instance, it is something of a sunscreen. Not as effective as the dedicated product, but it can be mixed into a paste and used on the skin to give some sun protection. Over time, however, this practice can also whiten skin, which is admired in some parts of the world, but not where bold suntans are the ideal. An oil and rice powder paste can also lighten under-eye circles. As a face powder, it’s a little coarse, so needs blending with cornstarch.
Arrowroot Powder—This is even silkier than cornstarch. You can also buy it knowing that arrowroot is never genetically modified, corn products usually are. Arrowroot deodorizes and absorbs moisture and oils well.
The final step—getting the smell and the powder together—is easy. You can’t just pour the oils on top because you would create a paste. So pour several drops of the oils or your favorite perfume on a few cotton balls. Put them in the bottom of a glass container you can close tightly. Pour your powder blend on top, screw down the lid, and shake. Fill the jar only half full so there’s plenty of room for shaking action. Let your new powder mix sit for a few days and you’ll have the scent and feel you want, without unhealthy consequences.
Aspirin has been one of the drugs most prescribed by physicians since it was first synthesized in the late 19th century. It is used in a wide variety of situations due to its broad range of therapeutic effects.
This drug is an anti-inflammatory, so it helps relieve the symptoms of arthritis. Due to its antiplatelet properties (it prevents the formation of thrombi), it is given to patients who have suffered a heart attack or stroke, or after cardiovascular procedures such as stent placement or catheterization. Aspirin also works as an analgesic and antipyretic, and some studies have linked its use with a lower chance of developing colon cancer.
However, despite its numerous qualities, recent clinical trials such as ARRIVE, ASCEND, and ASPREE suggest that the consumption of aspirin could have important adverse effects.
The trials mentioned above evaluated the administration of aspirin at low doses (100mg per day) in patients without known cardiovascular disease, with the aim of determining whether the daily intake of aspirin provides any benefit in the prevention of cardiovascular diseases.
The ARRIVE trial concluded that daily administration of aspirin did not provide any benefit in the prevention of cardiovascular disease; however, it did increase the risk of bleeding, mainly gastrointestinal.
In the ASPREE trial, aspirin was administered daily to patients over 70 years of age for 5 years. The study showed that aspirin does not prolong the life of patients compared to placebo; on the contrary, it increases the risk of death from cancer.
In the ASCEND study, aspirin decreased the rate of serious ischemic vascular events in patients with diabetes; however, the incidence of major bleeding increased.
It is clear that the indiscriminate use of aspirin can lead to a wide variety of adverse effects. However, we should not demonize this drug, which can be very useful when used correctly. Below are some tips, based on recognized scientific studies, about who should take aspirin and who should not.
If you had a heart attack or had cardiovascular surgery such as stenting, daily aspirin use will help prevent a new heart attack.
If you are over 70 years old and do not suffer from any cardiovascular disease, do not consume aspirin on a daily basis since consuming this drug preventively does not prolong your life expectancy.
If you have diabetes mellitus but have not suffered from an ischemic event (heart attack or ischemic stroke), taking aspirin will increase the risk of bleeding.
Aspirin should not be administered to children. In the 1970s, a children's formulation of aspirin was marketed, and its use was linked to a rare but deadly disease called Reye's syndrome.
If you have had cardiovascular surgery, your doctor may advise you to use aspirin in combination with an anticoagulant such as Plavix, to prevent an ischemic event.
Non-steroidal anti-inflammatories (NSAIDs) such as ibuprofen interfere with the effects of aspirin. To avoid this, take your aspirin at least 30 minutes before taking an NSAID, or take it at least six hours after taking an NSAID.
It is important to emphasize that the adverse effects of aspirin mentioned above occur with daily, prolonged use of the drug; for adults, occasional use of aspirin to treat fever or other ailments is generally safe.
Sometimes there’s a solution beyond taking drugs to help resolve anxiety. In some cases, a simple exercise can solve a serious problem. All you need is proper advice and instructions.
Tapping is a technique that helps fight stress and anxiety, and it helps you focus mentally on positive feelings and discard whatever is keeping you from living a full life.
This practice consists of tapping the fingers on precise parts of the body to release the stagnant emotions. Tapping is based on the premise that all problems, whether physical, economic, emotional, etc., are rooted in an energy imbalance within the person who suffers. The purpose of Tapping is to eliminate this imbalance, by stimulating certain points of the body. Like acupuncture, this technique acts on the energy points of the body.
Experts in meditation say that stimulating these energy points sends signals to the brain that reduce emotional tension and promote relaxation. As soon as this happens, anxiety eases, allowing you to move forward more calmly.
This technique of emotional liberation has gained great popularity in recent years, given the growing collective awareness of the importance of personal and spiritual care. One of the main advantages of tapping is that it can be carried out anywhere, since it is a discreet method that does not require the use of force to achieve the desired effect.
Proponents report that tapping can alleviate a wide variety of afflictions, such as chronic pain, emotional problems, addictions, phobias, post-traumatic stress, and even physical illnesses.
How is the tapping done?
The first step to practice Tapping is to identify the problem you want to address. It can be a general situation that produces anxiety or a specific concern.
After rating the level of anxiety felt from 1 to 10, the person begins to tap with the fingertips on certain points of the body while saying positive statements about themselves.
What points must be hit?
The tapping points coincide with the beginning or end points of acupuncture meridians, and they are the following:
• 0: The side of the hand, between the base of the little finger and the wrist.
• 1: The top of the head.
• 2: The inner end of the eyebrow.
• 3: The lateral of the eye.
• 4: The bone under the eye.
• 5: Between the nose and the upper lip.
• 6: The point between the chin and the lower lip.
• 7: The tip of the inner end of the clavicle.
• 8: About four fingers below the armpit.
• 9: The inner angle of the nail of the thumb.
• 10: The inner angle of the nail of the index finger.
• 11: The inner angle of the nail of the middle finger.
• 12: The inner angle of the nail of the little finger.
The points from 0 to 8 are the basic points of tapping and are always used. In contrast, the finger points (from 9 to 12) are optional. In principle, they are not used, but if you see that with the basic points you do not get good results, you can add them.
Do not worry too much about the accuracy of the points; tapping the general area is enough. You only need to tap the points on one side of the body.
A tapping sequence to relieve stress.
Tap each point about seven times and repeat the following sentences out loud.
Point 0: "Even though I feel overwhelmed and afraid, I accept who I am and how I feel." Repeat it three times.
Continue tapping on the other points with the corresponding phrases:
Point 1: I know I can move through this
Point 2: I know that I have the inner strength
Point 3: I choose to believe that I will overcome this
Point 4: I know I can find my power inside
Point 5: I accept that this is my journey now
Point 6: I know I can go through this
Point 7: And to feel good about me again
Point 8: I choose to believe in my inner strength
Unless you’re vegan, you should eat fish twice a week for good health.
It’s what the American Heart Association recommends. Ditto researchers at the NIH/National Institute for Arthritis. The US Department of Agriculture agrees. So do thousands of doctors and dieticians.
This is one of the rare bits of diet advice that is almost universally accepted. The reason is omega-3 fatty acids. They abound in cold-water fish like tuna, cod, mackerel, mahi-mahi, salmon, pollack, and anchovies.
Studies have shown that regularly eating these kinds of fish can lower heart disease and stroke risk because of the omega-3 content. The habit may also improve arthritis, supply critical growth hormones to developing children, and ward off cognitive declines.
But if the benefit of eating cold-water fish is ingesting plentiful omega-3 fatty acids, so is the problem.
Here’s how: Numerous foods contain omega-6 and omega-3 fatty acids. Both are polyunsaturated fatty acids, or PUFAs. Omega-6’s are more concentrated in grains, seeds, nuts, beef, and vegetables like avocados and soybeans, including tofu. Most cooking oils are high in it—one exception being olive oil, which is monounsaturated and full of omega-9’s. Only a few oils are rich in omega-3. These include canola, walnut, and fish oils like that wonderful-tasting (not!) cod liver oil.
The issue is balance. The ideal diet for humans has a ratio somewhere between 1:1 and 4:1 of omega-6 to omega-3.
Our caveman forefathers were probably right at the 1:1 ratio. Some primitive societies get closer to the 4:1 range.
But today’s ratio is about 20:1 for most developed countries. That’s badly out of balance and invites numerous health problems. It is not simply a matter of too little omega-3; it’s also a case of too much omega-6. The two PUFAs have opposing effects in the body.
For instance, omega-3 is anti-inflammatory. Omega-6 is pro-inflammatory. Omega-3 helps control weight. Omega-6 helps to gain it.
Gaining weight was an important biological edge in cave times where getting food was chancy from day to day. It’s not a good thing for us, where grocery stores, food trucks, and restaurants beckon us with constant temptation to eat and eat again. For most of us, gaining weight is the easiest thing in the world.
Eating more fish and less beef, lamb, and pork is a good way to bring your diet back into balance. Unfortunately, if you are choosing farm-raised fish, you may not be getting the omega-3 content you thought you were.
That’s because many fish farming operations feed their fish on grain products. And those fish will grow up to be high in omega-6, just like the grains that make up their diet, and low in omega-3.
The easiest way to make sure the fish you eat are as healthy as you hoped is to opt for wild-caught fish when you can. In the wild, cold water fish feed on other fish and algae. But you don’t have to make every serving of fish wild caught. There are responsible farming operations.
Farm-raised fish have become an environmental necessity, and it can be done right. It’s not all bad. Catching some species in the wild can mean a lot of fuel burned getting to the fishing grounds and back for small hauls. In other cases, overfishing has meant that farm raising can be good for the species’ survival. In Norway, extensive cod farming has reduced waste of this fish, which loses some of its delicate appeal if frozen.
Good farming operations are careful to avoid using any pesticide-treated food stock, antibiotics, or unnatural foods like grains. It can be hard to know where supermarket fish originates, and how the owners work, however.
If you shop at an independent fish market, your purveyor can probably guide you right. Strike up a relationship, and try to shop when it’s not during peak hours. Friday afternoon is not the time to tie up the fishmonger for a long, philosophical chat. Nor will he want to guide you away from any choice when there’s a whole line of other customers standing around to hear every word.
Talking to your “fish guy” is not an option for everyone, alas. So if you are on your own, here are some general rules.
Sockeye salmon are never farmed, so you can buy knowing they will be wild caught.
Farmed fish that are usually responsibly raised, meaning no pesticide or antibiotics, no mercury concentrations, and proper sanitation and environmental impact include:
· Barramundi farmed in the US and Australia
· Bass farmed in the US
· Catfish from the US
· Char (Atlantic)
· Farmed mussels--worldwide
· Farmed oysters--worldwide
· Farmed Pacific rainbow trout
· Farmed sturgeon from the US and Canada
· Farmed tilapia from Canada, Ecuador, and the US
We’re all so healthy now compared to 100 years ago. An American born today has a life expectancy of 78. Before 1900 it was only 47.
The reason our grandparents had shorter lives was not, as many propose, mainly childbirth and childhood diseases. More than half of adult deaths in 1900 could be laid to pneumonia, tuberculosis, and intestinal infections. Accidental deaths occurred twice as often as they do now.
We can thank modern medicine and science for this change. We know how to avoid flu, get over pneumonia, and set a broken leg. Cholera hardly exists. Even kidney failure, another disease that plagued our grandparents, is treatable now.
With so many extra years added to our lifespans, the new challenge is reaching beyond thinking about our lifespan to the idea of a long “healthspan.” None of us wants an extra 20 years of pain, debility and mental confusion if we can avoid them.
Will those years be good ones? Will you be active, useful, mentally vital and engaged with life all the way to the end?
Medicine can give you a long lifespan; you have to give yourself a long healthspan.
At a minimum that includes eating healthily, treating depression if it shadows you, avoiding stupid risks like riding a motorcycle without a helmet, building a network of friends and family, not smoking, and learning to deal with stress.
If you ever suspected that Americans were couch potatoes, the World Health Organization has the proof. In the US, 40% of adults fail to get the minimum recommended exercise every week. The Germans are even worse, at 42%.
On a global scale, the relationship of exercise to life expectancy isn’t simple. Poor countries like Uganda and Lesotho rank high for exercise, but weak healthcare and high HIV/AIDS rates devastate the population. China, which has a very active population, also has world-class pollution that is presumed to knock three years off its people’s life expectancy.
But overall, there is a positive relationship among developed nations between more exercise and longer lives. In Europe, the countries with the fewest couch potatoes—Sweden, Switzerland, France, the Netherlands, and Spain—have one to two years longer life expectancies than less active countries like Ireland, the UK, and Germany. Canadians with high rates of physical activity also have four more years of life expectancy than their US neighbors.
Those trends are complicated, as mentioned. But there’s plentiful research on individuals and the effects of exercise. More is better, as long as you are not trying to out-achieve super athletes.
When the WHO looked at the problem of inactivity, it also set guidelines for how much exercise is needed to keep people healthier. It’s actually a modest prescription. If you walk a dog and convince him to quit smelling the bushes and hup hup, you’re halfway there. All it takes for a start is a little more than 20 minutes a day of moderate aerobic activity, a little resistance work, and some balancing exercises to help prevent falls.
Herewith, the guidelines:
150 minutes or more per week in moderate-intensity aerobic activity
That could include brisk walking at 4 mph, swimming, heavy cleaning, biking 10-12 mph, mowing the lawn with a power mower, volleyball, gardening, badminton, tennis doubles, or any similar effort)
Do aerobic activity in bouts of 10 minutes or longer
Seek to increase this to 300 minutes a week for even more benefits
Do balance exercises at least 3 days a week
You can keep it simple, like standing on one foot or heel-to-toe walking
Do resistance exercises 2 or more days a week
You can use resistance bands, weights, or body weight
If you don’t know what to do, find a trainer or class to get started
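If you like to keep score, here is a short, purely illustrative Python sketch that tallies a week against the guideline targets above. The weekly log and the day counts are invented; only the thresholds come from the guidelines.

```python
# Invented week of moderate aerobic sessions, in minutes each
sessions = [30, 25, 30, 45, 20, 8]     # the 8-minute session is too short to count

# Per the guidelines, count only bouts of 10 minutes or longer
aerobic_minutes = sum(m for m in sessions if m >= 10)

resistance_days = 2                    # invented: days of resistance work this week
balance_days = 3                       # invented: days with balance exercises

print(f"Aerobic minutes this week: {aerobic_minutes} (target 150, stretch goal 300)")
print("Resistance target met?", resistance_days >= 2)   # True
print("Balance target met?", balance_days >= 3)         # True
```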
These days almost every Chinese restaurant has a notice somewhere on the menu proclaiming it does not use MSG.
Millions of people believe they are allergic to MSG, monosodium glutamate. Probably ten times as many believe it’s a harmful substance that ranks right up there with Red Dye #4 and propyl paraben.
So a Chinese restaurant goer, we’ll call him Charley, is happy to know he is safe at his favorite MSG-free restaurant.
But perhaps not as safe as he thought… Charley was a little hungry before going to dinner, so he grabbed a few cheesy goldfish crackers at home. At the restaurant, he started with a fried wonton app and dunked them in a dipping sauce. His wife chose the egg drop soup. When it came to the mains, Charley ordered chicken in garlic sauce, extra spicy; his wife went for the sweet and sour pork. Both douse their rice with the tableside soy sauce.
Between them, Charley and his wife have just eaten at least seven different items with MSG in them—the goldfish, the soy sauce, probably the cream cheese, the dipping sauce, the tomatoes in the sweet and sour, the chicken broth used in cooking, and the mushrooms in the Szechuan dish.
Charley leaves happily, and he doesn’t get the headache he swears he always gets when he eats MSG.
MSG danger ranks right up there with the number of words Eskimos have for snow as one of the most often repeated and misinformed myths we all know. MSG goes by many names on packages, MSG, monosodium glutamate, autolyzed yeast, glutamic acid, soy protein, yeast food, gelatin, and whey protein to name a few. Several stabilizers and thickeners like carrageenan, guar gum, and pectin often have MSG. Most Americans eat it several times a week if not daily.
It’s not some crazy thing invented in a test tube. It’s not a preservative that creates Frankenfoods that never rot.
In fact, MSG is all the rage these days as the fifth flavor—sweet, sour, salty, bitter and umami. Umami is glutamate. Parmesan cheese is a rich source of umami and MSG. Roquefort and cheddar are sources, too. Know what else is rich in glutamate? Green tea.
Fear of MSG originated with a letter to the editor in the New England Journal of Medicine in 1968 that linked it to “Chinese Restaurant Syndrome.” It was not established in a clinical trial, nor was it a widely reported phenomenon at the time. It began with one person blaming one ingredient in his dinner rather than a dozen other possibilities. But the idea of Chinese Restaurant Syndrome was picked up and broadly repeated despite the lack of any human trials to back it up. Today numerous health blogs include MSG on their lists of dangerous foods to avoid.
Is it dangerous? For a lot of people? For a few?
Eventually, scientists did perform research on the topic, but trial after trial failed to establish the Chinese Restaurant Syndrome as legitimate. The World Health Organization investigated the matter twice, in 1971 and 1987, and found no risk at normal consumption rates.
Nonetheless, there are hardly any foods that do not cause allergies in some people. So it’s more than likely that at least some people are affected by MSG.
But it remains stubbornly unproven.
In 2016, yet one more attempt to get to the facts of the matter resulted in a meta-analysis on the topic. Yoko Obayashi and Yoichi Nagamura looked through the Medline and FTSA databases for all the human trials they could find. It’s a wide net: FTSA abstracts more than 2,200 journals; Medline, more than 5,600. If there was confirmed evidence, they were bound to find it. They were interested in papers written in English, of studies carried out in clinical trials on humans, that reported the incidence of headaches, and had a good statistical analysis or the data needed for one. They found ten papers that met their criteria.
There were five studies that gave MSG with food. Three of these were properly blinded (the researchers and the subjects didn’t know which food had the MSG and which didn’t). Two of these studies, however, used MSG in such high concentrations that some people might have detected it by taste. Even so, none of these studies found ANY proof of MSG causing headaches. Some of these studies also measured clinical data like blood pressure and pulse rate—also no proof.
In seven other studies that administered MSG given without food, researchers did find a few reports of headaches following MSG ingestion. But once again, in the studies where subjects reported headaches, it happened with doses that were much higher than anyone would use in regular cooking. The subjects could easily tell which broth had the MSG and react according to their pre-set bias.
These adverse reactions occurred when the MSG was given in a drink or broth at a concentration of 2% or higher. At a concentration of 1.2%, its flavor is detectable.
The usual concentration in food is much lower, 0.2% to 0.8%.
The bottom line—most people who believe they are allergic to MSG probably aren’t. But some few people could be. If you have a can of Accent or Sazon at home, don’t sprinkle it in the soup you feed to your guests unless you know it’s OK with them.
But to say MSG is bad for you is like saying shrimp are bad for you because someone somewhere is allergic to it. Lots of people are allergic to shrimp, but there’s no hysteria about it. Restaurants don’t post signs about it.
The use of MSG was encouraged at one time to help people cut back on salt as a seasoning. It has about one-third as much sodium as table salt and sea salt. And if sodium is an issue for you, especially if you have high blood pressure, then a little MSG could do you some good if it helps you cut back on salt.
And if you are allergic to MSG, skip it in all forms, including the parmesan and the goldfish crackers.
Oatmeal is one of those love/hate breakfast foods. The warm, full stomach that some people enjoy looks like a bowl of slimy glop to others.
I understand. Regular, boiled-in-the-pan oatmeal really is a gray, gelatinous pile of glop with lumps in it. If you’re fine with that and content with the usual brown sugar, nuts, and banana trimmings, bless you. You are doing yourself a lot of good. You are a saint.
You’re eating smart, too. Women should consume 25 grams of fiber per day, men 38 grams. One cup of oatmeal for breakfast will give you 8 grams of fiber. An average “bowl” of oatmeal is bigger than that for most people, however. It should bring you 12-16 grams of fiber.
The usual trimmings help. A medium banana adds another 3 grams. Or a half cup of blueberries is worth a little less than 2 grams of fiber. A few walnuts would add another gram.
If you prefer to go in the fruit and nuts direction, though, consider dates. A half cup is worth almost 6 grams of fiber.
Brown sugar provides no fiber, nor does milk, but they complete the traditional breakfast oatmeal offering. Which works fine for people who like oatmeal.
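If you want to see how a bowl stacks up against the daily fiber targets (25 grams for women, 38 for men), here is a quick, purely illustrative Python tally using the approximate gram figures quoted above.

```python
# Approximate fiber values quoted above, in grams
fiber_g = {
    "oatmeal, 1 cup": 8,
    "medium banana": 3,
    "blueberries, 1/2 cup": 2,
    "a few walnuts": 1,
}

bowl_total = sum(fiber_g.values())     # 14 g for this combination
print(f"Breakfast fiber: {bowl_total} g")
print(f"Share of the 25 g target for women: {bowl_total / 25:.0%}")   # 56%
print(f"Share of the 38 g target for men: {bowl_total / 38:.0%}")     # 37%
```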
Now for the rest of us. There are other ways to eat oats, and you might succeed in taking a more savory approach. I don’t run well when I start my day with sweets, so I found a way to make that work for me. While I never liked oatmeal, grits with salt, pepper, and butter are in my wheelhouse. That seemed like a possibility worth trying for oats—sans butter. Success! Good quality oatmeal, cooked thick, and served with salt and pepper tastes just fine. It’s not glop. Add a scoop of fat-free cottage cheese on the side, and you have a protein bomb, too.
Other ideas I’ve found palatable for an oatmeal avoider… You can also stir in a large handful of spinach leaves toward the end of cooking. Sriracha works well to liven oats if you can stand a hot, hot breakfast. This concoction goes well with avocado slices on top. Some people tell me that crumbled bacon and a soft poached egg are delicious, but this is supposed to be a health blog, so we’ll just pretend that bacon never happens, OK?
The older we get, the more likely we are to fall. There is a bona fide public health problem in that issue.
But it is not these things:
· 25% of people over age 65 fall at least once a year
· 40% of hospital admissions for people over 65 are linked to injuries from a fall
· 8% of people age 70+ who show up in emergency rooms after falling will die from their injuries
· If you exclude traffic accidents, falls are responsible for 80% of disabilities caused by an unintentional injury among patients age 50 or older
Those are all horrible stats. The public health issue is that most falls are avoidable, and we’re not doing enough to help people avoid them.
The injuries and fatalities visited on the elderly don’t have to happen to people just because they have celebrated more birthdays. Better balance and better vision are two things that could radically improve those awful statistics.
Falling is a risk of walking upright for all of us. There’s no age limit on tripping over a loose manhole cover. Anyone can slide when they step on an unexpected patch of ice. I once owned a pair of shoes that turned into ice skates every time I got to a wheelchair ramp at the end of a sidewalk. A toy on the stairs can undo anyone.
But while these mishaps can lead to severe injuries to people of any age, younger people are more likely to recover their balance in time to avoid splatting. A reasonable amount of muscle tone and good stability are all it takes. A healthy 80-year old can stumble, regain footing and go on without falling just like a teenager can.
Unfortunately, most of us don’t retain the strength or agility we enjoyed at age 16 when we’re 30 or 40 and certainly not when we’re 50+.
In addition to recovering our balance under duress, we could often avoid falling if we see the risk in time. That’s why cataracts are strongly implicated in falling and being injured. Surgery to improve eyesight prevents accidents. In one British study, 97 patients who were scheduled for surgery on their cataracts were followed for three months before and three months after the operation to see if it made a difference.
Among the patients in the study, 31 had fallen before surgery. In the months after surgery, only six of those fallers fell again, and one of those falls was related to dizziness caused by medication. Patients who were not fallers before surgery were just as stable afterward. The study clearly showed that the risk factor for falling was not the patients’ ages—nobody got younger—it was their vision.
In the past, it was a common belief that if you had cataracts, you should wait for them to “ripen” before undergoing surgery. Some people still believe that, but it is no longer what doctors recommend.
The new thinking is about function. When your performance is affected, it’s time to take care of the problem. If you find your field of vision is fuzzy, if you don't see everyday things as sharply as you should even with glasses, then it’s time.
Germs cheat. They've always been cheats, and they're getting better at it.
Like all cheaters, they have an advantage in working outside the law. Bacteria don't have to go through FDA approval to put a new variety out in the world. But the antibiotics that we develop to fight them have to play by strict rules. Even Vabomere, a combination antibiotic released in 2017, took 8 years to get through FDA's "expedited" approval process. One of the components in the combo was an already-approved antibiotic; the other was just an enabler to make it work better.
This mismatch between the wily and the lawful is becoming a frightening problem.
Penicillin was discovered in 1928, and the first resistant staph germs didn't show up until 1940. Penicillin-resistant pneumonia came along in 1965. In its early days, not many people got penicillin, which probably gave it a longer lead before resistant bacteria caught up.
But tetracycline was introduced in 1950, and a resistant form of shigella appeared in 1959. The record for fast retaliation was a near-simultaneous volley and return. In 1996, the FDA approved a new antibiotic, levofloxacin. A resistant strain of pneumonia arose the same year.
The problem of antibiotic resistance is so acute that, in 2017, the World Health Organization warned that we could run out of antibiotics.
We’ve all been taught the basic mechanics of the problem. It’s why our doctors and dentists warn us to take every last pill in our prescription. You get a strep throat or a urinary tract infection. Antibiotics begin to kill off the bacteria that cause your illness. The weakest ones go first. Then, if you stop too soon, the strongest survive and multiply. In a few generations, those stronger iterations become antibiotic resistant.
There isn’t much science can do about that situation. At best, doctors can hit the pause button before prescribing antibiotics for minor ailments. Patients can be more careful to take their meds as directed.
Beyond that, the basic answer has been the medical equivalent of “throw a bigger rock.” If penicillin fails, move on to erythromycin. If that fails, proceed to methicillin…
The alternative would be to discover what is happening to make bacteria more resistant, something beyond the "stronger germs live to multiply" explanation.
Researchers are working feverishly to get ahead of bacteria, but as noted, germs cheat. Although several new antibiotics are in development, there has not been a whole new class of antibiotics since 1980. If the approval process is not expedited, it can take decades of work to get a new antibiotic to market. Germs work faster.
But physicists at McMaster University in Canada have taken images that reveal what is going on at the micro level. The images capture cell processes at a resolution as fine as 1-millionth of a hair. What they discovered is how resistant bacteria hold off antibiotics. The usual process is that an antibiotic attacks bacterial cell walls, punching holes in them. The cell then dies. But the resistant bacteria behave as if they are armored. Their walls are more rigid and harder to penetrate.
As lead researcher Andree Khondker put it, “it’s like going from cutting Jello to cutting through rock.” In addition, the antibiotic-resistant bacteria had less intense negative charges on their surface. That made them harder for antibiotic molecules to find and less sticky.
The beauty of this kind of research is that it could lead the way to developing a mechanism that would apply to all bacteria.
That's still a long way off. But this kind of research is apt to be followed avidly. The antibiotic problem gets more urgent every day.
The usual definition of "sarcopenia" is muscle loss related to aging. That's grossly misleading because sarcopenia starts when we're still officially young, sometime in our 30s.
People who do not exercise strenuously lose about 3% to 5% of their muscle mass every decade from age 30 onward. Those who do exercise also lose muscle mass, but somewhat less than that.
Sarcopenia is one of the reasons we tend to gain weight with age. Then, if we do gain a bit of weight, sarcopenia also makes it harder to shed those pounds. Less muscle mass means a lower calorie burn.
For instance, a six-foot male who weighed 180 in his youth (age 30) and was slightly active, could maintain that weight on 2500 calories per day. Now advance him to age 60 and a weight that has crept upward at just 1% a year. Now he weighs 242 pounds. Getting back to his youthful 180-pound weight would require dropping his intake to 2100 calories to lose weight slowly, over 18 months. If he wanted a “fast” loss, he could drop down to 1600 calories and make his goal weight in 8 months. That would more or less take a big plate of spaghetti and meatballs out of the diet.
Most dieters want something faster than that, however. If this man wanted to shed his 62 pounds in 90 days with diet alone, he would need to cut his calories to less than 1,000 per day!
Women usually start at a lower weight, with less muscle mass, which means fewer allowable calories to begin with. Thus the effects of time and slow weight gain accrue even more bitterly. A 5’5” young woman weighing 120 pounds who is slightly active can maintain her weight on 1900 calories. But at age 60, after gaining 1% per year and becoming inactive, this woman would be at 161 pounds. Getting back to her earlier weight without exercise would limit her to only 1,017 calories per day for rapid weight loss or 1,350 to bring it down slowly in just under a year. A “rapid” 2-lb a week loss without adding heavy exercise is out of the question if she wants to maintain her health because the calorie allowance would be too low.
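If you like to check the arithmetic behind examples like these, here is a minimal sketch. It leans on the familiar (and admittedly rough) rule of thumb that a pound of body fat represents about 3,500 calories, and it treats maintenance calories as a fixed number you supply; in reality, maintenance needs drift downward as the pounds come off, so treat the output as a ballpark, not a prescription. The numbers plugged in below come from the example of the 60-year-old man above.

```python
# Rough sketch: daily calorie target for a given weight-loss timeline.
# Assumes ~3,500 calories per pound of fat (a rule of thumb) and a fixed
# maintenance-calorie estimate; real needs change as weight comes off.

CALORIES_PER_POUND = 3500

def daily_calorie_target(pounds_to_lose, days, maintenance_calories):
    """Approximate daily intake needed to lose the given number of pounds
    in the given number of days, starting from a maintenance estimate."""
    daily_deficit = pounds_to_lose * CALORIES_PER_POUND / days
    return maintenance_calories - daily_deficit

# Illustrative numbers from the example above:
print(round(daily_calorie_target(62, 548, 2500)))  # ~18 months: about 2,100 a day
print(round(daily_calorie_target(62, 90, 2500)))   # 90 days: far below 1,000 a day
```

Plug in your own numbers and the point becomes obvious: a slow timeline asks for a modest daily trim, while a 90-day timeline demands an intake that is neither sustainable nor safe.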
This is why recent research on resveratrol alongside exercise is so encouraging. It can turn back the clock on your muscles. And that could speed up weight loss—just as if you were young again.
In an experiment at West Virginia University, researchers divided 12 men and 18 women into groups that undertook exercise alone or exercise combined with 500 mg per day of resveratrol.
The resveratrol did not lower their cardiovascular risk any more than exercise alone did, but it greatly enhanced their physical condition. The group that took resveratrol alongside exercise saw significant increases in their muscle fiber area, a boost in their maximal oxygen consumption, and an improvement in mitochondrial structure and density.
That last item is important for weight loss. The mitochondria in our cells control cell respiration and energy production. So an increase in well-formed mitochondria translates into better energy—a potentially higher metabolism—that essentially pushes the aging clock in the muscle cells backward.
Should you want to try this at home, the team at WVU had the exercise groups do moderate aerobic and resistance training that they felt was consistent with what a person age 65-80 (as their subjects were) could do on their own.
Few activities have as much to say for themselves as walking does. It’s suitable for anyone age 2 to 100. You can meditate and gain peace while ambling around, or you can socialize and laugh while you walk with friends. Beyond suitable shoes, you don’t need elaborate gear or training.
Even that’s a minimal requirement if you are fairly healthy with good balance. I confess to regular five-mile hikes in flip flops, although it’s usually sturdy sandals. That said, sneaker-style walking shoes are probably a better choice. Do as I say, and all that…
If you live in a neighborhood like mine, walking can seem a little undemanding as exercise goes. Where I live, riding a bike requires the purchase of skin-tight neon spandex clothes. Golf, beyond clubs, requires pastels and a whole different wardrobe. Yoga, it seems, simply cannot be done in cargo shorts and a snug tee shirt worn for modesty while doing shoulder stands.
Sometimes, I wonder what my mother was thinking, letting me grow up wearing the same kind of shorts and tops for working in the garden, biking, horseback riding, sailing, camping, and playing softball.
So if you feel walking doesn’t offer nearly enough shopping potential, I am glad to tell you that you can buy something special for your next walk to make it better—a set of Nordic poles.
The difference between regular walking and pole walking comes down to muscle engagement. According to Dr. Klaus Schwanbeck, regular walking uses 45% of the muscles in your body, almost all in the lower body. Pole walking uses 90% and engages the upper body as well. He claims that this also increases cardiovascular benefits by 22% compared to regular walking and burns 46% more calories.
The increase in calories burned is incentive enough for many of us, but for people who are recovering from back surgery or anyone prone to lower back pain, walking with a pair of Nordic poles is more comfortable as well. Poles help you offload weight from your lower body—the hips, knees, and lower back—and transfer it to the upper body. That not only eases pain in the lower body but also increases the beneficial exercise in the upper body.
One older woman claims Nordic pole walking went beyond the known benefits to core and abdominal muscles and helped erase back fat and upper arm flab.
Anecdotes like this are encouraging, but we also have research confirming the benefits. Researchers at the University of Montreal recruited 128 walkers age 60 and older. Half undertook a 12-week program of Nordic pole walking. The rest served as a control group. The pole walkers gained significant strength in legs and arms. Those in the control group who did not exercise showed a measured loss in grip strength and walking speed after 12 weeks. That’s not so surprising, but the Nordic pole walkers also showed some improvement in cognitive function.
Another group of researchers put pole walkers on a treadmill then used electromyography to see what was happening in the muscles. When they raised the angle of the treadmill, the regular walkers and the pole walkers used their muscles alike. But when they sped it up, the pole walkers experienced more activation in the external oblique (EO) and rectus abdominus (RA) muscles.
The EO runs along your side and waistline from just below the ribcage to the top of the pelvis. The RA is the muscle that gives superfit young men and women washboard abs.
There’s another subtle benefit that’s worth mentioning, too. Walking with a cane might be a good idea for many older people and anyone of any age with hip, knee, ankle or foot problems that might interfere with their stability. But a cane looks “old,” and hence a lot of people refuse to adopt the habit even if it would be a good idea. Walking with TWO canes, called Nordic poles, however, looks pretty darn sexy.
So young or old, in need of support or not, there’s a lot to be said for taking up pole walking.
California blondes. That’s all I need to say for you to get a picture of a nearly-mythical, natural, golden beauty with shiny, sun-streaked, beach-waved hair, a person who glows with good health. There’s a mythically gorgeous male surfer dude counterpart as well. Brazilian blondes of the female variety are all that Californians are, with perfect makeup.
Mythic is the operative word here. We already know that unfettered time in the sun is bad for your skin. Scientists in Brazil just proved it’s not good for your hair, either. It doesn’t matter whether that hair is still a natural color or already gray. Sunlight causes morphological (structural) changes.
The outer part of the hair shaft, the cuticle, is where most of the damage happens. When the cuticle’s structure changes, the result is hair that is rough, dull, frizzy and rife with split ends.
Sun alone is damaging, but lots of men and women spritz their locks with salt to encourage waviness or lemon juice to lighten them. In the short run, these home-style treatments work. In the longer run, they can do so much damage the only solution is a shave to the scalp and starting over.
So if you omit the salt and lemon juice abuse, then a nice gentle shampoo and conditioner after sunning restores your hair to glory, right?
Actually, shampoos tend to make the sun problem worse.
In an experiment to find out how sun and shampoo impact hair health, the Brazilian researchers literally split hairs. They kept half of each hair as a control then tested what happened with the other half. Some hairs got irradiation (light) from mercury lamps that mimicked sunlight. Some got light followed by hand washing. And some were only washed.
And the verdict? Sun does more damage than shampooing. It causes fracturing and cavities in the hair shaft and cell lifting on the cuticle. But the combination of light and suds was the worst.
The interesting thing, however, is that while mainstream scientists have spent some time investigating what damages hair, they don’t report any cures. Published research on how to fix the damage is nearly nonexistent. That work is done at cosmetic companies, and the likes of L’Oreal and Estee Lauder aren’t about to share their formulas.
So what can you do to protect your hair in the sun? You can hardly smear it with a gob of zinc oxide. But some skin products are suitable for hair. Clarins makes a sun care spray-on oil that claims to work from head to toe. Opinions vary on whether it’s nice or gross on hair, however. Those who have very fine hair seem to object. Those with thicker, wavy hair love it. People with fine or colored hair seem to prefer Drybar’s Hot Toddy product. That one also includes protection from chlorine if you are a pool person.
It may take some trial and error to find a sunscreen for your hair that you like, but for most of us, it takes some experimentation to find a sunscreen that feels good on our skin, too, and this is no different. If you spend time in the sun and still want to have healthy looking hair, the search is worth it.
The alternative, if you hate hair products, of course, is to keep your hair covered with a hat or scarf. If that’s your option, you are in luck because you have thousands of variations to choose from. Any hat will physically block at least some sunlight, but some hats and scarves are made with sunscreen-infused fibers for extra protection.
If you are fortunate enough to have a good head of hair, give it some protection.
A French study in the news this week warns that the risk of early death increases by 14% for every 10% increase in ultra-processed foods in your diet.
According to reports, Americans are devouring 61% of their diet as processed foods, with Canadians at 62% and the Brits at 63%. So, if the new study is right, we've just saddled ourselves with roughly an 84% increase in the risk for an early death because of how we eat (about six 10-percent increments, times 14 percent apiece). That's quite a feat considering that American lifespans have been increasing for two decades.
Getting at the truth about food processing and health is complicated. It’s not surprising reporters pounce on the latest titillating research announcement and pass it along as a series of bad generalizations.
They’re not the only ones. Michael Pollan, who has done great work on nutrition education, has also been guilty of oversimplifying. A few of his rules that need rethinking…
Don’t eat food with more than five ingredients: Well, goodbye tossed salad. Au revoir ratatouille.
Don’t eat anything a third-grader can’t pronounce: So if the package promises Agaricus bisporus, put it back. But if it says mushrooms, keep it. Disregard the fact that they’re the same. Pronunciation is all.
Don’t eat anything your grandmother wouldn’t recognize: That’s it for you, tofu. Grammy didn’t do sushi, chia seeds or quinoa, either. Fortunately, given my vast food knowledge today, my grandchildren will be able to partake of them all in the future.
Don’t eat anything that won’t eventually rot: That might take sauerkraut out of the diet. I’ve never seen rotten sauerkraut, and I’ve forgotten a lot of things in my refrigerator over the years.
My personal favorite Pollan rule is “buy your snacks at the farmer’s market.” Yippee! Have you been to a farmer’s market lately? I adore pecan pie.
A CNN story on the same French research illustrated the embargo on processed food with a picture of sausage patties. And smack in the middle of that, on the same page, it ran a photo of bread for an article touting the health benefits of fiber.
Let’s see how they stack up with regards to processing—
Fresh sausage: kill hog, grind up, add salt and spices like sage, cook in a pan over moderate heat.
Fresh bread: thresh wheat, clean, moisten and condition for 24 hours, grind, bleach (if you want white bread), grist with other wheat to get the right gluten levels, enrich with niacin, thiamine, and folate. Harvest barley, soak to partially germinate the seeds, dry, heat, grind. (Malted barley flour is in every brand of all-purpose, bread, whole wheat, and plain white flours.) Combine the finished wheat flour with sugar, yeast, salt, and milk. Knead for a long time, let rise, punch down, let rise again, shape, bake in the oven.
I’m inclined to believe bread is healthier in general than sausage, but to call it “less processed” is a prodigious feat of food delusion.
And by the way, though whole wheat flour is healthier, it is not a bit less processed.
When I looked up “overly processed foods” for some examples and a good definition, I found that included chicken nuggets. OK. That’s probably fair.
But this all reminds me of the brouhaha over eating carbs—perpetrated by people who somehow don’t realize celery and lettuce are pure carbs. Did you know that washing food is technically considered “processing?” I highly recommend it nonetheless.
Altogether, the public advice on processed foods is a royal mess. The fact that we humans largely don’t die off before our 30th birthday is closely linked to processing our food. Fire kills bugs. Salt delays rot. Acid preserves produce so we can keep eating through the winter. So does canning, something my grannies both did. Numerous studies have established that frozen vegetables often have more intact nutrients than much of the “fresh” produce in grocery stores do after a long trip from field to processor, to warehouse, to distribution center, to local store.
Processed food includes canned tomatoes, black beans, and tuna. It also includes orange-dyed, banana-flavored marshmallow peanuts. This category is too vague to make any sense at all.
No matter which scientific studies capture headlines, the secret to eating healthy will not come down to such an ambiguous concept as “processed” food.
Instead, we need to look at food content. Salt is good within limits. Keep the daily dose under control. Fat is fine, as long as there’s not too much fat in your diet.
In contrast, additives with known problems, like sodium nitrate and BHT, are best avoided.
And who says more processing is always worse? It takes months of “processing” and many steps to create a delicious bleu cheese and hardly anything beyond a knife and fork to turn an avocado into guacamole. But I’m apt to put a mere schmear of bleu cheese on my crackers and gobble the guac on fried tortilla chips by the spoonful. So I ask you, which one is healthier?
False categories don’t help us. Eat lots of veggies, and I don’t care if you cook and puree them even though that is double-processing. Enjoy some fruit every day. Oatmeal to start the day is nice, even if it is a “breakfast cereal” and breakfast cereals seem to be on all the lists of taboo processed foods. Have a bit of cheese, but remember to keep the portion small—not because it’s processed, but because it is calorie dense, high in saturated fats and cholesterol with only modest nutritional value. Limit sugar, control salt and watch the fat. Of course, a pickle is less nutritious than a fresh cucumber, but a fresh cuke’s no powerhouse, either, since it’s mostly water.
We’re all searching for the best food for health. The answer is not to avoid “processed” foods in general. Avoid too much frying, excessive salting, and prodigious amounts of sugar. Pretty simple.
When you feel like you've been pumped full of air and just want to sit on the couch and groan, who cares how you got that way? Relief is the first order of business. We suggested several tactics that work in the previous article.
Now, we’ll look at how to prevent bloat, gas, and associated stomach pains. There are a lot of tactics that may help you. So let’s run through them and end up with the one doctors are most likely to miss. It’s the one most likely to solve the problem if none of the more conventional answers work.
What you’re doing wrong to cause bloating and pain can be pretty obvious when you’ve gone to a chili cook-off and sampled everything on offer. In other cases, the reason you get bloated can be surprising. And even when you think you know what it is, the culprit may be hidden.
A case in point is the food additive inulin. It’s perfectly safe and is naturally contained in onions, wheat, bananas, artichokes, asparagus, and many other fruits and vegetables. It’s often added to prepared foods to increase fiber content. In that case, it was probably derived from chicory root. But here’s the thing…. Say, you think wheat bothers you, so you buy gluten-free bread. That’s smart. However, some of them also contain inulin, which could be another thing that bothers your digestion. In fact, if wheat is a problem, inulin may very well be an issue, too.
There are a host of small things that can cause bloat. Stop doing them; problem solved for many people. For instance, chewing gum. Or drinking through a straw. Also soft drinks and carbonated beverages. These all cause you to swallow air.
Do you talk a lot when you eat? Eat on the run and bolt your food down? That will do it for many people because those habits also cause you to swallow air. Air in the gut is gas, and the effect is bloat. Slow down. Put your sandwich down, or your fork on the plate, swallow first, then talk.
Another tactic you may try is dividing your intake into smaller meals. This isn't for some mythical "natural way to eat" or "key to weight loss" reason. Here's why that can really help a lot of people who suffer frequent rounds of bloat and gas: As with irritable bowel syndrome, there is some evidence that the misery of bloating is actually a sensitivity to your own digestive processes. It is believed that some of us simply feel what is going on in our stomachs and colons more acutely than most people do. Smaller meals mean there is less digestion happening at any one time, so there is less to feel.
Sugar can be a culprit in bloating and gas as well. But don't think honey is an automatic pass, or that sugar-free candies are the perfect solution, because they contain other sweeteners (fructose in honey; sugar alcohols like mannitol and xylitol in candies) that cause problems of their own for many people.
After these simple causes have been eliminated, your next step is to see whether there is an allergy or food sensitivity involved. Now you are in for some work, and unfortunately, you may have to take the lead here and do a lot of problem-solving yourself. But there is a place to get help…
If you have persistent bloating and gas, have tried everything above, and have already had a clean colonoscopy, your doctor is very likely to check out on you. Even good doctors. He/she will say something like, “try cutting out dairy, a lot of people have trouble with that.” Or “wheat could be the problem.” But there’s something else that really could be at issue besides wheat and dairy.
It took me two years and several doctors before anyone said, "FODMAP." The acronym stands for fermentable oligosaccharides, disaccharides, monosaccharides and polyols. These are all short-chain fermentable carbohydrates (the polyols are sugar alcohols), and they are present in almost all foods.
If you are desperate and willing to do a bit of work, a FODMAP investigation is absolutely worth trying. In the time it takes to investigate what is bothering you, a low FODMAP diet won’t do you any harm. Even if it takes many weeks.
Basically, you go on a very strict low-FODMAP diet to clear the system. Only after you are reliably free of any gas, bloating, constipation, diarrhea, or borborygmi (that's fancy for "stomach rumbling") do you proceed. At that point, you begin to test a few foods to find out what you react to.
It’s important that in each test, or “challenge” you only look at one kind of FODMAP at a time. For instance, to see if the problem is sorbitol, which is one of the polyols, you will introduce high-sorbitol foods like blackberries and avocados. Nothing else in the FODMAP universe. This is not the time to slide in a bite of pizza.
Food sensitivities can be so puzzling; it’s critical to test only one thing at a time.
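If it helps to see the one-at-a-time rule laid out concretely, here is a toy sketch of a challenge plan. The category names follow the standard FODMAP groupings, the example foods are only illustrative (a few are borrowed from this article), and the three-days-on, three-days-off schedule is purely an assumption for the sake of the sketch, not a clinical protocol.

```python
# Toy illustration of "challenge one FODMAP category at a time."
# Categories follow the standard FODMAP groupings; example foods are
# illustrative, not a complete or authoritative list.

challenges = {
    "lactose (disaccharide)":     ["milk", "soft cheese", "yogurt"],
    "fructose (monosaccharide)":  ["honey", "apple", "mango"],
    "fructans (oligosaccharide)": ["wheat bread", "onion", "garlic"],
    "GOS (oligosaccharide)":      ["black beans", "lentils"],
    "sorbitol (polyol)":          ["blackberries", "avocado"],
    "mannitol (polyol)":          ["mushrooms", "cauliflower"],
}

TEST_DAYS = 3  # assumed: eat the test food several days in a row
REST_DAYS = 3  # assumed: back to strict low-FODMAP before the next test

day = 1
for category, foods in challenges.items():
    end = day + TEST_DAYS - 1
    print(f"Days {day}-{end}: test {category} using "
          f"{', '.join(foods)} and note any symptoms")
    day = end + 1 + REST_DAYS
```

However you track it, the principle is the same: one category per test window, with everything else held steady.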
Ideally, you can work with a dietician, but even many dieticians aren’t very well trained in this procedure, so check credentials.
Medical schools are notorious for not doing a very good job in nutrition training. On top of that, the first paper published on FODMAPs came out in 2005, so most textbooks say nothing about it.
Let me give you a bit of encouragement if all the normal treatments like eating slower or avoiding dairy fail to help. The process of a thorough FODMAP evaluation will take weeks, but when you are tired of hurting, you'll try anything. And it is completely worth the effort. In fact, if you find one thing that you can say for certain causes a problem, keep going. Most people with FODMAP issues react to more than one category, and you may be very surprised by what you find.
I was shocked. Truly.
For me, dairy products (the most common intolerance) are no trouble at all, despite four different doctors suggesting they were the culprit. Wheat is a problem, which I already knew, but my FODMAP tests showed me that wheat wasn't the main problem.
The big surprise was that fruit was making me feel lousy. Yes, fruit.
I used to eat fruit every day, striving for three servings or more, but always getting at least two. It turns out that polyols and fructose are my weak spots. It was the daily apple and the frequent peaches and cherries that were getting me down. I never realized they were an issue because they were always in my diet. I also discovered that honey is a trigger for trouble. In fact, once I cleared myself of symptoms and tested honey, I discovered it causes a reaction almost instantly for me.
If you want to do this, I highly recommend buying the book "The IBS Elimination Diet and Cookbook: The Proven Low-FODMAP Plan for Eating Well and Feeling Great," by Patsy Catsos. It will explain everything and walk you through the whole program. See if you can find a dietician also. And good luck.
Bloat isn't fat, thank goodness, it's only gas. So it will pass. But it may be even more miserable when it's around. Sometimes it comes with a stomach ache. Sometimes you just feel like a Macy's parade balloon that was accidentally filled with cement. If you've been lounging in sweats or yoga wear for a few days, zipping up regular pants can be alarming.
For the most part, time alone will take care of it—that’s how millions of us cope with Thanksgiving every year. The problem is, Thanksgiving gluttony aside, you may keep on doing whatever it was that caused the problem in the first place.
Want to get rid of bloat fast? Antacids can help, particularly old-fashioned Alka-Seltzer when you want immediate relief from the gas and have a stomach ache.
Even more old fashioned, you can add a bit of lemon juice to a teaspoon of baking soda in a bit of room-temperature water. Many sources suggest a glass of water, but frankly, this remedy is not delicious. Dissolve the lemon and soda in as little water as you can tolerate then follow up with nice clear water to wash the nasty out of your mouth. Lots of water, because water is also good for bloat.
Or you can go extreme. A rather scary farm wife once dosed me with a heaping tablespoon of straight baking soda. In the mouth, as is, no water. It was as nasty as you might expect, but immediately relieving. But warning, the gas comes up as belching, so definitely try this at home, but never in public.
Less urgent, but far more pleasant, some teas do a nice job. The best choices are ginger tea, peppermint tea, rosemary tea, and turmeric tea. Peppermint is most likely to work fastest to relieve the feeling of pressure, but ginger is especially good for any feeling of nausea. Try whichever one sounds best and experiment to find one you like. If you are simply feeling a little sick from too much rich food, even a cup of hot black tea seems to help. Provided you like tea.
Although dairy foods and milk, in particular, can be the source of many people's stomach woes, buttermilk is good for bloat. Some people with lactose intolerance can handle buttermilk as it is low in lactose. If you can, then Ayurvedic medicine has a remedy for you: ¼ teaspoon of cumin and ¼ teaspoon of asafetida (should you have it around) in a glass of buttermilk. Blend well and drink. Asafetida alone is good for bloat, too. It's a garlicky-oniony substitute that is a staple in Indian cookery. The "fetida" in the name is related to the smell, which goes away with cooking.
Now that we've covered what to do when in trouble, how about preventing bloating? That's the subject of the next article.
Ever since Alice wandered into Wonderland and partook of the cake that made her grow bigger and the elixir that made her shrink, we’ve given food and drink almost magical status. Thousands of grandmothers have promised their balky offspring that eating carrots would ensure good eyesight and fish, being brain food, would make them smart.
A good deal of research has actually gone into looking for food magic, as well. More specifically, it’s investigated whether different micronutrients can help us take control of our weight, Type 2 diabetes, or metabolic syndrome.
Many different vitamins, antioxidants, polyphenols, minerals, and anti-inflammatories do have a relationship to weight control that is much stronger than mere coincidence. Sometimes it seems that obesity itself leads to a vitamin or nutritional deficit. Other times, the order appears to be reversed, where it’s the deficit that may lead to obesity.
Before going down the list of what works, however, we’d like to put guilt and shame behind us. Almost everyone who is overweight is well aware of it. Most people who decide to do something about it make that decision many times. Even research on the matter has shown that trying a score of different exercise plans and eating patterns is the norm. So is finding out that (a) most diets don’t work, or (b) they worked but only while doing something so difficult or restrictive it’s impossible to maintain it as a lifestyle, (c) you can’t exercise pounds away without changing your diet, too, and (d) the weight usually comes back, anyway.
Failure doesn’t have to happen though. There are a lot of success stories and yours might start with a little vitamin support.
Here’s a rundown on what science has to say:
Vitamin C—is a powerful antioxidant. That’s important because if you are overweight, you are also very likely to have or to develop high cholesterol, which antioxidants help manage. Also, a diet that is strong in antioxidant-rich foods can help speed up metabolism and decrease inflammation. Both of those actions support your weight loss goals.
So Vitamin C doesn’t cause you to lose weight, but it helps manage the side effects of being overweight and supports the things that do help you lose. For instance, people with adequate levels of vitamin C oxidize 30% more fat during exercise than people with low levels.
Vitamin C also decreases the risk of diabetes and helps in controlling blood pressure. It’s best to get Vitamin C from your food rather than from supplementation if possible. In addition to citrus fruit, guava, bell peppers, broccoli, kiwi, strawberries, tomatoes and kale are rich in vitamin C.
Vitamin E—Another antioxidant, vitamin E works in tandem with vitamin C. Everything above applies. It’s useful for controlling blood pressure…and it’s also better to acquire it from the diet if possible. Get it from sunflower seeds, spinach, avocados, almonds, butternut squash, kiwi, trout and shrimp.
Coenzyme Q10—Alas, despite claims, the proof that CoQ10 controls weight is not good. It has shown benefits for blood pressure and glycemic control, though. It’s also good for the heart among many other benefits. It just won’t make you skinny. This nutrient will probably need to come from a supplement if you are older since it’s hard to eat enough oil, seeds, and cold-water fish to bring levels up if they are seriously depleted. And even though it may not make you shed pounds, this micronutrient is getting a serious study for potential benefits in slowing Alzheimer’s, reducing migraines, and easing muscle pains.
Zinc—Taken as supplements or with adequate food zinc can improve blood lipid profiles—in other words, cholesterol and triglycerides. It seems to be especially beneficial for people who are obese or diabetic.
Cinnamon—Natural cinnamon varies widely in chemistry, which makes studies on its effects hard to compare. The region where it was grown, the amount of rain it got, and the specific variety can all affect its strength. That said, it has been shown to improve fasting blood glucose levels, counter oxidative stress and may reduce fat. Cinnamon is rich in polyphenols. Other foods in this class include apples, cranberries, red beans, almonds and peanuts, but they have not been as widely studied for weight control yet.
Green tea—This may be the winner on the list. Green tea has shown that it can increase thermogenesis and fat oxidation. Thermogenesis is heat production and when it happens it burns calories.
Green coffee & chlorogenic acid—Though it doesn’t sound savory, chlorogenic acid is a component of green coffee, plums, peaches and dates. More studies are needed, but this shows promise for helping to lose weight. The fruits also contain ferulic acid, which is an antioxidant. Beware, however, that dates are high in sugar and thus a high-calorie snack.
Green coffee may be a champ, but studies so far have been small or lacked control groups. This looks very promising, so we will continue to monitor this situation and let you know if any new studies shed further light.
Lycopene—No help with weight loss, but it does help with glucose tolerance. Lycopene is found in guava, papaya, watermelon, tomatoes, eggplant and potatoes. But we already knew potatoes were not a weight loss food, didn’t we?
Antioxidants—Antioxidants do play a supporting role in weight loss. They help control low-grade inflammation which is associated with obesity and diabetes.
Reference: C.S. Johnston, "Strategies for healthy weight loss: from vitamin C to the glycemic response," Journal of the American College of Nutrition, June 2005; 24(3): 158-65.
Not too long ago, we ran an article on the problem of blue light and poor quality sleep.
Recently, the Washington Post made sleep a front-page topic: "Wake Up to a Health Crisis: We Need More Sleep." Subhead: "Brain researchers warn that our lack of shut-eye may be making us sick."
Sleep, it seems, is a hot topic with the brain research community now. As it should be.
A few highlights from the WaPo story illustrate how important good sleep is at every age. We’ll quote directly:
· Preschoolers who skip naps are worse at a memory game than those who snooze
· Poor sleep may increase the risk of Alzheimer’s
· Even a single night of sleep deprivation boosts brain levels of the proteins that form toxic clumps in Alzheimer’s patients
· All-nighters push anxiety to clinical levels
· Even modest sleep reductions are linked to increased feelings of social isolation and loneliness
· Adults over 50 with lots of insomnia were more likely to fall
That’s the gist of the news from the Post. The question as always is how to get that sleep.
The first step, of course, is to go to bed. That may be the hardest one when there’s a late game running into overtime or a movie you want to watch to the end, a party that’s too much fun to leave.
But assuming you have put your body into bed in a timely manner, comfort comes next. For most people, a cool bedroom helps. And banish the TV if you have the least trouble with sleep quantity or quality.
Then there’s the mattress. Good ones are expensive so we tend to hang on to them longer than we should. Stop it.
There’s one other thing that matters more than you might think, as well—your pillow.
Every few years there seems to be a pillow fad. Once it was memory foam, which every woman of a certain age soon came to realize made hot flashes worse. A couple of years ago, it was a type of shredded foam that was “better than down.”
Speaking of down and feathers, that may or may not be a good idea. Some of us clog up at night on a bed of chicken feathers, which is what the cheaper feather-foam pillows use. Hotels, for instance.
Size and fluffiness count, too. If you sleep on your back all night a very soft or flat pillow will be good for your neck and not push your head out of position. But if you’re a side sleeper, you need a nice tall, firm pillow to fill in between shoulder and head and keep you aligned well.
Earlier today, I looked all over the Internet for pillow suggestions. You can buy foam, feathers, down, polyester, and latex. I’d suggest the choice is one of those personal things.
But nowhere did I see anyone recommend my own favorite—buckwheat.
Yeah, that’s strange, I know. But if no other pillow ever seems to be just right, you hate hot pillows, you like your neck supported, and you want your pillow to stay in place, give it some thought. You can’t get one at your local mattress store, but they are available at Amazon.com.
Be warned, however, buckwheat pillows are hard as rocks. Not suitable for pillow fights. You could probably be arrested for throwing one of those babies around. And while hardness sounds like a bad idea, it’s actually comfortable… as if someone’s hands were propping your head in perfect position and keeping it there all night. With a buckwheat pillow, you actually push it into the shape you like and it stays there.
The other good thing about them is that you can push them to be thick enough for side sleeping, flat enough for back sleeping, and curved enough for stomach sleeping. The bad thing for some people, however, is that a fresh new buckwheat pillow will make a bit of sound as you shift. But if nothing else seems just right, it’s worth a try.
You may get so addicted you start taking it on trips with you.
A lot of factors come into play when you push a shopping cart around the grocery store. First of all—will your family eat it? If no one is ever going to take even one bite of those excellent canned sardines, it doesn’t matter how much calcium, selenium, Vitamin D and omega-3 fatty acids they have.
Then there’s quality. Blind comparisons at Serious Eats have established that Betty Crocker Instant Mashed Potatoes are markedly superior to Hungry Jack. So they say.
There’s also the question of whether you want to avoid GMO ingredients. And flavor preferences. I am personally certain that Lea & Perrins Worcestershire sauce is the only way to go. In fact, I am so certain of that, that I have never bought or tasted a competing brand. How’s that for objectivity?
But when it comes to ingredients that seem much the same from brand to brand—like eggs—is it worthwhile to pay more?
Honestly, the thought of chickens crowded in cages so small they can’t turn around is more than enough to keep me away from the brands known for their animal cruelty. I’m not even going to mention some of the worst abuses because they are stomach turning. Let’s just say that for me there are reasons to avoid the cheapest eggs.
That doesn’t automatically mean the most expensive eggs are the best, however. I’ve tried top-dollar, cage-free, organic, small-farm eggs that turned out to be old and unworthy. Organic foods protect you from exposure to pesticides, herbicides and growth hormones. They do not protect you from E. coli or other bacteria. That’s up to careful handling.
But what about those very pricey eggs that claim to have higher levels of omega-3 fatty acids?
This is a case where, if your budget has room, paying up is a good idea. For your health, a diet that is close to a 1:1 ratio of omega-6 to omega-3 fatty acids is best.
We don't usually get that without making some effort because our diet is now tilted toward omega-6-rich foods and low in omega-3s. According to the National Oceanic and Atmospheric Administration, average American seafood consumption is only about 4 ounces per week. Not enough. The "average" also hides the fact that most of that consumption comes from just a portion of us. Only 10% of Americans get two or more servings of fish per week.
But they do eat a lot of things fried in vegetable oils, along with plenty of meats and grains. Of the common oils, only canola and fish oils are high in omega-3.
Eggs that claim to be high in omega-3 fatty acids were raised to purposely achieve that. The hens were fed diets that include omega-3 sources like flaxseed or fish oil.
Now here is where it gets interesting. Different brands of omega-3 enriched eggs have different levels in the final product. Research done by Nutrition Advance revealed these levels of omega-3 per large egg:
Organic Valley: 225 mg
Christopher: 660 mg
4 Grain: 150 mg
Sauder's Eggs: 325 mg
Eggland's: 115 mg
Fresh & Easy: 160 mg
Gold Circle Farms: 150 mg
Smart Balance: 192 mg
Now, you know that missives like this on health topics sometimes carry a caution: “This is not medical advice. This statement has not been evaluated by the FDA and is not intended to diagnose or treat any medical condition.”
Good thing. Because I just realized I was buying the wrong brand. Hope we all learned something useful today. Yours in good health—Lynn.
Arthritis gets to most people sooner or later. Usually later. But “hand arthritis” can come very early.
It’s a stress-related woe, and there’s no lower age limit on busy hands.
Believe me, I know. I will never forget the winter I decided to knit sweaters for four boys. With a Christmas deadline, it was a nonstop venture, and my hands screamed. Those were young hands. Finger exercises, stretching and ibuprofen were all I could do at the time. Because I didn't know there was a better answer.
That’s not surprising. Just try googling “hand stress arthritis” and you won’t get a lot of help—instead, your search engine will lead you into numerous blind alleys, and you’ll end up with articles on rheumatoid arthritis and osteoarthritis.
This kind of pain isn’t osteoarthritis, bursitis or rheumatism. It disappears within a day or two when you stop overworking a joint and comes back when resuming your abuse. For some people, ”hand stress” may be carpal tunnel syndrome that lands in the fingers instead of the more usual wrist area. But again, this is a pain that—unlike carpal tunnel—goes away if you stop doing whatever caused it.
That’s an obvious treatment: end the abuse. But what if you have an activity that you really, really need to pursue?
“Hand stress arthritis” doesn’t seem to be a medical condition that gets any attention. It doesn’t matter a lot, though, because if you’ve felt it, you know it’s definitely something real. Stretching the fingers like a concert pianist warming up may help.
So does boswellia. At long last, the Italian journal Minerva Medica reported on an experiment with young subjects who had this kind of pain. The researchers divided them into two groups. One got the standard medical treatment, basically physical therapy. The other got a boswellia supplement.
After two weeks the pain decreased significantly for the patients who got boswellia. Swelling was reduced more as well and their hands functioned better than the control group. Some of the control group had to resort to pain medications because the therapy alone was not enough, but none of the subjects who got boswellia needed any pain medication.
Boswellia, or boswellia serrata, to give the supplement its full name, is the plant that also yields the famous resin beloved of wise men—frankincense.
Finding this study was an interesting addition to what we already know about boswellia. At Renown Health, it is included in Isoprex, our solution for joint health. It’s part of a formula that puts the brakes on a reaction called the “membrane attack complex” or MAC.
Most people think that the pain from arthritis is a simple mechanical problem. There’s nothing to cushion the cartilage between joints once the synovial fluid has been destroyed. But cartilage doesn’t have nerve cells. It’s the swelling and irritation in the muscles and tissues around the joint that cause the pain and set off a MAC attack.
Men and women are so different that John Gray became a rock star among self-help authors when he wrote a book with the catchy title, "Men Are From Mars, Women Are From Venus." It resonated with those of us from both persuasions.
In the musical My Fair Lady, Henry Higgins wants to know "Why Can't a Woman Be More Like a Man?" I can promise you that some women might reverse that question. But the French just wisely shrug their shoulders and say, "Vive la différence." I can agree to that.
Men and women walk differently, talk differently, and now science has established that they tend to remember pain differently, too.
This applies, by the way, not only to male humans but male mice as well.
It matters because research has established that the memory of earlier pain plays a role in chronic pain. Male mice and humans remember painful experiences very clearly. Take them back to the location where it happened and they will react with signs of stress and discomfort.
The researchers at McGill University and University of Toronto Mississauga are experts on pain, but this came as a surprise to them. At first, they noticed the difference between male and female mice, which they had not expected. When they tested humans, they found the same division.
One of the researchers opined that “because it is well known that women are both more sensitive to pain than men and that they are also generally more stressed out," they were gobsmacked by the results.
Naturally, the scientist who offered that opinion was a man. Would Human Resources please ask him to stop by for some sensitivity training?
In humans, the test consisted of strapping patients into a blood pressure cuff and blowing it up to be very tight. With the cuff in place, they were then asked to exercise their arms for 20 minutes and rate the pain.
That hurt so much that only 7 of the 80 people in the test rated the pain at lower than 50 on a 100-point scale.
Men and women both felt the pain acutely, the difference came the next day. Researchers either took the subjects back to the same room the next day or to a different one. When they returned to the same room, men rated the pain even worse the next day. That did not happen to men who were sent to a different room or to women in the test group.
It suggests that the memory of pain may make chronic pain worse, especially for men.
At this point, you may be connecting some obvious dots. It is commonly said that women tend to forget the pain of childbirth. Some believe there might be an evolutionary reason for this difference in pain perception.
Alas, scientists have looked at that question before and consider it something of a myth.
Karolinska Institute studies found that about half of women do forget the level of pain, but only when conditions are right. It was only the women who felt they had a caring staff and good support and who viewed their experience as positive at the time of giving birth who were more likely to forget the pain over the years.
So, despite gender differences, we humans all don't like to be hurt once, and we really, really hurt when old pains take another jab. The difference between us may be that women tend to give more weight to the emotional elements and men to the physical.
Someone should test that. It could be one of those Mars-Venus things.
It stands to reason that some foods are good for you—salads, spinach, carrots, that kind of thing. But it's even better when your favorite pleasures turn out to be advisable.
Millions of Brits are surely glad to know that their tea is full of antioxidants. Count me among those who are pleased to note that a glass of red wine is good for cholesterol and the heart.
Then there's chocolate. For millions, the news that chocolate was full of flavanols that might lower cholesterol and reduce blood pressure was the best news since Adam and Eve figured out where babies came from.
That doesn't mean a Snickers bar, of course. The health claims are reserved for dark chocolate with high cocoa content and cocoa powder.
The claims are probably overblown. Two years ago, a search and meta-analysis of the Cochrane database turned up 40 pilot studies on chocolate and health. The improvements in blood pressure were there—but they were small.
Cochrane is a massive database of studies gathered from around the world on medical treatments, including natural health supplements and therapies. There's no better source anywhere. But even a search through Cochrane couldn't come up with good randomized, controlled studies that linked chocolate to a reduction in heart attacks or strokes.
Then a few days ago, an article published in Trends in Food Science and Technology piled on. Scientists at the University of Manitoba reviewed 17 studies on chocolate that were conducted over the past 20 years to investigate whether cocoa flavanols lowered blood pressure.
This is not going to make chocoholics happy. The evidence was “inconsistent” and “conflicting.” Nine of the 17 studies showed a small decrease in blood pressure. Eight studies did not.
The bottom line in all this is that there is no scientific evidence to justify an “authorized health claim” for chocolate in either the US or Canada, where the latest bad results came in.
Then again, your friends probably don't know about the cachet of an “authorized health claim.”
To gain that status, the claims must be backed by strong scientific evidence and then approved by FDA after a thorough review. It's not easy. FDA has approved only 12 such claims since 1990. But those claims are valuable because food and supplement makers can point to them in marketing and on product labels. An example of this kind of claim is “Adequate calcium and vitamin D as part of a healthful diet, along with physical activity, may reduce the risk of osteoporosis in later life.”
Canada says chocolate isn't worthy of a claim like that yet, and it doesn't appear that one will be coming anytime soon.
But if you love chocolate, there is other good news from England. Professor Alyn Morice at the University of Hull says chocolate is better than codeine for suppressing a cough. It coats the throat and soothes. He should know: Professor Morice is the head of Respiratory Medicine at Hull Medical School and an international authority on treating coughs.
The catch is that he bases his opinion on research on a sticky cough medicine with cocoa in the ingredients. Sipping a warm cup of cocoa won't keep the throat coated and do the same.
As the morning starts, the day goes.
If I had my preference, I'd always sleep in a bedroom with an east-facing window and wake to the morning light. My husband prefers the blinds drawn and nailed shut, fully-darkened approach to sleeping. Fortunately, our dog, Sally, is on the job to tell me when the sun is up.
It's not that I am actually a morning person. Just try talking to me and you'll soon give up. But I like a slow, calm start. Coffee, toast, reading, prayer. Walk the dog.
Then tai chi. Walking the dog is not always a calm thing. There are squirrels out there. Sometimes iguanas. People to say hello to. But tai chi puts me back into balance and gets the day going right.
A few years ago, for probably the third time, I signed up for classes—and what a difference a truly accomplished teacher made. It wasn't just the sequence of moves, it was the breathing, the exact tension in the hands, where my balance was... all revealed with kindness and encouragement.
Tai chi instructors at that level are rare and hard to find in most of the country. At best, you may find a yoga or taekwondo instructor who has learned the moves and added classes. The exercise itself is so valuable, even that will be a plus for you.
But if you have no instructor, then what? As I learned after spending my own money, most videos aren't very helpful. Books—some of them quite beautiful—are hard to follow because they can't show the flow of changes as they happen.
There's also the question of pacing. If you're like me, you will probably move too fast. The best benefits come from slow transitions from one position to another.
Now that I've brought up all those negatives, I will tell you where I found the best source ever for tackling tai chi when you can't find an instructor. It's a video that explains every move extremely clearly. So if you have to practice alone, this is the one video I would recommend for a complete beginner or even someone who wants to review his or her form:
Yang Tai Chi for Beginners Part 1, DVD from YMAA Publication Center
The instructor is Dr. Yang, Jwing-Ming.
This is available from Amazon, and cheap! Only $8.99. I keep it on my Kindle Fire for use.
If you are already doing tai chi, you will know why I recommend taking it up. If you've just thought about it, here's why starting a tai chi practice is a wonderful gift to give yourself.
What you get from tai chi
• It's a moving meditation
• It's excellent for developing and preserving functional balance
• It has been proven to help people with back pain
• It is suitable for the fit and the not-fit because of its gentle, slow movements
• Nonetheless, it is real exercise
• It improves blood and lymph circulation
• In one randomized, controlled trial, tai chi was as effective as physical therapy for people with knee osteoarthritis
• It improves posture, which may also reduce neck pain
• It lowers blood pressure
• It helps with depression
• It helps cognition, making decisions and other mental tasks
• In one study on 400 people already showing signs of dementia, tai chi slowed the disease
And if you're lucky like me, it also makes the dog bark. That's a lot of benefit and entertainment to start the day.
Quite a few savvy environmentalists are against genetically modified (GMO) plants for any reason.
There are definitely real concerns. But would you consider a GMO version of ivy that cleans chloroform and benzenes out of the air better than a HEPA filter? What if your baby was breathing that stuff in? If you have city water or an attached garage, the baby is definitely getting a dose of both.
Household air is usually more tainted than the air in offices and schools. Toxic substances off-gas from fabrics, furniture, cookware, and cooking. Chlorinated water means your home has chloroform in the air. A lawn mower or car in the garage contributes benzenes. Particle board furniture and wrinkle-free fabrics pile on with formaldehyde. A fireplace or poorly adjusted gas burner on your stove adds carbon monoxide and nitrous oxide.
You have probably heard that houseplants are good for indoor air. They do take out the carbon dioxide and add oxygen. But they aren't very efficient at fighting the other pollutants. It takes about 20 houseplants to clear the formaldehyde found in a typical living room.
I'm not sure how anyone got the brilliant idea, “hmm rabbit plus ivy might work.” But it does. When Professor Stuart Strand at the University of Washington tried introducing the P450 2e1 gene from a rabbit into the common houseplant known as pothos or devil's ivy, he had a winner. In mammals, that gene produces an enzyme that helps break down chemicals. In an ivy plant, it's extremely effective at clearing the air.
Strand and team tested the modified ivy in a container to measure how well it worked. Compared to a regular plant, or no plant at all, the GMO ivy was a star. It broke down 75% of benzene within 8 days. It was even better at making formaldehyde go away. Within 6 days, the pollutant was barely detectable.
The work looks like it has a lot of potential, but no one knows yet how well these plants might work in a regular room or how many it would take to clear the air.
That's not the only concern. GMO plants have a habit of escaping their designated slots. A type of GMO bent grass intended for golf courses has escaped its bounds to clog irrigation systems in Oregon. GMO canola plants from Canada have invaded the Dakotas. Because canola can hybridize easily with other plants, it can become an invasive weed that farmers cannot control, thanks to its built-in resistance to RoundUp.
A Harvard study has concluded beyond any reasonable doubt that RoundUp-ready plants have played a big role in the loss of wild bees.
Most botanists saw that potential trouble coming, but other adverse effects are more shocking surprises.
Who foresaw that GMO crops would lead to more suicides in India? But they have, according to the country's Agricultural Ministry. Farming is hard there. It depends on adequate rain during the monsoon season. But Monsanto's GMO seeds require twice as much water. In years when monsoon rains are a little light, crops fail. Worse, the expensive seeds are often not even capable of resisting pests. They were developed to fight Western pests, not Indian bollworms.
We are careful about product sources at Renown Health. It's the reason all our products are made in the US, where we can be sure we know the quality and integrity of anything we use. We do not use GMO plant sources.
As a natural health company, we take the environment seriously. It's where we source everything from feverfew to grape skin extracts to mango seed butter. We think that as a person who uses natural healing products, that's important to you, too.
So, much as we like the idea of formaldehyde-eating ivy plants, we're not hanging any around the office.
I'm not sure what capabilities the guys in IT have. They may know what websites I visit, but truly, the stopover at Larry Brown Sports was work-related.
I was looking for new developments in knee care.
That led to an item about New York Giants wide receiver Jawill Davis. He's out for the rest of the 2018-2019 season, placed on injured reserve.
Davis sustained a knee injury, which is not unusual among football players, but in this case no action on the field was involved. Davis was either dancing, or just plain horsing around, in the locker room when he slightly dislocated his knee.
Admittedly, Davis only played in four games for the Giants through the end of December. He's not a superstar. Still, even the least noticed athlete who makes it to any pro sport is well-conditioned, strong, and flexible. You wouldn't expect dancing to do them in.
Davis now has the distinction of owning the most embarrassing injury in sports for 2018. Larry Brown Sports Weird Injuries also lists such runners-up as Kansas City third baseman Mike Moustakas, who hurt his back picking up one of his kids. Or there was St. Louis pitcher Luke Weaver, who missed a start after he cut his finger taking the aluminum foil off a food tray.
Pitcher Aaron Sanchez of the Toronto Blue Jays may win the prize for hiding the truth longest. He had a finger injury that kept him out of the game for two months. The reason was too embarrassing to share, he said. Probably what everyone was imagining was so bad, he finally 'fessed up that he caught his finger in his suitcase as it was falling off the bed.
But back to knees. They're really vulnerable. Even for athletes. Larry Brown Sports also reported that “On the eve of Opening Day, [Kansas City] Royals catcher Salvador Perez tore his MCL while carrying luggage, and is expected to miss 4 to 6 weeks of action.” That's the medial collateral ligament, which runs along the inside of the knee.
If this can happen to healthy 20-somethings, should the rest of us just conclude our knees are dead dodos, bound to be injured sooner or later?
Despite weird injuries like those suffered by Davis and Perez, when you consider the extreme physical challenges professional athletes face, they don't have nearly as many knee injuries as you'd expect. There's a lesson in that. Athletes prepare for it. If your knees are healthy now, dance with abandon; your knees can take it if you take care of them. If your knees already hurt, see your trainer or physical therapist for help and get ready to dance, even if you have to go gently.
Did you ever wonder why all those women were squeezing rolls of toilet paper in those absurd Charmin ads from years ago? It wasn't the TP, no matter what Mr. Whipple said when he told them to stop. It must have been the baby on the wrapper.
The term for that impulse is cute aggression, and it's a real thing.
Procter & Gamble made a fortune on the phenomenon of cute aggression before it was even known to science. If you're over 30, you probably remember the ads where crazy housewives were pulling packages of Charmin toilet paper off the shelf to squeeze them. Out comes grocery manager "Mr. Whipple" to make them stop. Of course, after he sends them all away, he squeezes the Charmin in secret.
The ads ran from 1965 to 1989, 504 of them. Procter & Gamble brought Mr. Whipple out of retirement briefly in 1999 after the company took the cute baby picture off the label and switched to the cute Charmin bears. The ad campaign made Dick Wilson, the actor who played Mr. Whipple, one of the most recognized characters of all time. Silly, yes. But it worked because it touched a deep human urge.
In 2012, Yale scientists, Rebecca Dyer and Oriana Aragon, investigated the urge to squeeze, bite, or show aggression toward adorably cute baby animals and human babies (but not toilet paper). They originated the term “cute aggression”.
You've seen it or done it. People pinch baby cheeks, which doesn't seem like a very loving gesture when you think about it. We pretend to growl at puppies, another not so friendly gesture.
You've surely heard someone tell a baby, "I just want to bite your little toes off; I could eat you right up!" Or coo toward a puppy, "Oooh, I could squeeze you to death." And they may be telling the literal truth if they say, "Oooh, I can't stand it!"
In 2015, neuroscientist Anna Brooks told a reporter that cute aggression is probably a natural mechanism to dial down feeling too good around cuteness.
People who are helplessly flooded with excessive levels of the feel-good hormone dopamine aren't functioning at their logical best. They could spend so much emotional energy feeling the love that they forget to do their chores, like change diapers and feed the baby.
Just recently, new research upheld that theory and added some details to the mystery of why some of us want to kill, maim, bite and squeeze cute things. As part of the testing, the researchers asked participants to rate their responses to cute and non-cute animals and babies and then evaluate their reactions. They were asked about statements such as "I can't stand it" and "I can't handle it," along with reactions of wanting to hold and protect it.
This is what is most interesting: The higher the “I can't stand it” rating participants gave each picture, the more the reward centers in their brains lit up, and the more cute aggression they reported.
That strongly supports the early theory that cute aggression is a reaction to being emotionally overwhelmed.
If you're shaking your head right now, it should be noted that not all of us experience a high degree of cute aggression. I, for one, have never felt the urge to pinch baby cheeks or bite toes. OK, belly bubbles, yes, who could resist that? But my daughters give me pretty high marks for mothering, despite my declining to eat them all up as infants.
And some people in the recent research group said they only felt the cute aggression urge toward animals and not toward babies. But I must admit, I've never squeezed a puppy, either, and I love dogs of all sizes and kinds. I do, however, force Squeaky, the tiny cat, to endure kitty kisses on her head. Sorry Squeaks, Mother Nature made me do it.
At any rate, the next time you hear someone threatening to squeeze a baby to death, it's probably all fine. Very much fine.
Katherine K.M. Stavropoulos and Laura A. Alba. “It’s so Cute I Could Crush It!”: Understanding Neural Mechanisms of Cute Aggression. Front. Behav. Neurosci., 04 December 2018. https://doi.org/10.3389/fnbeh.2018.00300
There's a new way to lower your risk of diabetes: If you're a night owl, tell the boss you'll be in late. That's just one of the benefits of living in sync with your natural internal clock. Some of us are early birds, some are night owls, and it's risky to change.
It's obvious that all of us humans don't have our body clocks in sync because of some internal force. In my own family, my brother was literally up with the birds. I suspect he's the one who told the rooster to get a move on it. As an adult, he liked to head into work at 4 a.m. to beat the traffic. I pull the blanket over my head and hold out as long as possible. We both had the same childhood schedules, the same breakfast, school, and bedtime routines. But we have remained different all our lives.
Society hasn't made it easy for us to accommodate our different clocks, however. Ever since Benjamin Franklin observed that early to bed and early to rise makes a man healthy, wealthy and wise, night owls have borne a slightly unsavory reputation. School hours favor people like my brother. Ditto most workplaces. Nightclubs are for night owls. So are parties, concerts, and most baseball games.
Whichever style you are, you now have science to make your case that you should follow your own clock. A Harvard study almost says it all in the title: "Mismatch of Sleep and Work Timing and Risk of Type 2 Diabetes." The only missing word is "causes," but the report hints as much.
Harvard found that late chronotypes, or night owls, had higher rates of diabetes after several years of shift work that ran counter to their natural schedule. Early birds were slightly affected by a mismatch, but not as much.
The work world is catching on. In Germany, a Thyssenkrupp steel factory put its morning people on the day shift and gave its night owls the evening shift. As a result, everyone got extra sleep, about an hour's worth per day on average.
“They got 16 percent more sleep, almost a full night’s length over the course of the week. That is enormous,” Till Roenneberg, a chronobiologist at Ludwig-Maximilian University in Munich, told the New York Times.
Dr. Roenneberg believes that inefficiencies caused by workers laboring out of sync with their own clocks may cost society about 1% of GDP.
As the New York Times put it, “if you rely on an alarm clock to wake you up, you're out of sync with your own body”.
And your body will fight back.
Isn't it just a little weird? Sixty-second commercials for ED drugs run on family television channels.
But we don't talk about constipation, a problem so common that almost everyone suffers from it occasionally.
Chronic constipation affects 15% to 20% of Americans—42 million people according to the National Institute of Diabetes and Digestive and Kidney Diseases at the National Institute of Health.
That should give you a clue that those dry runs in the bathroom are not just uncomfortable and embarrassing. It's serious enough for the government to study.
Most of the time, constipation has innocent causes: too little exercise, the wrong food, not enough water, a medication that binds you up, pregnancy, and just plain bad habits like resisting the urge when it's not convenient to go.
It can be a sign something more serious is wrong. You should see your doctor if you have blood in your stools, excessive pain, unexplained weight loss or this is new and unusual for you. But for the rest of us, constipation is usually a problem we can solve ourselves.
Constipation really isn't funny. It's miserable. Fortunately, there's a lot you can do. Here is some of the best and most respected advice, with a little extra insight.
1. Hydrate—Why this matters: not what you probably think. As we get older, our bodies hold less water. Also, our thirst signals become less reliable as we age. So drink plentifully whether you feel thirsty or not. You may not be as well hydrated as you think you are. You don't have to glug down a quart of pure water at a time to stay healthy. Multiple small additions of beverages that you like throughout the day are ideal. Staying hydrated will make all your systems function better. Proper hydration benefits your skin, blood pressure, heart rate and metabolism. And helps soften stools, too.
2. Move—exercise helps speed up your digestive system and supports the muscles involved in pushing food through your system. Your colon is a muscle. Although doctors often advise constipated patients to exercise more, there is surprisingly little actual research on the topic.
3. Feed your gut the right bacteria. Probiotics support regularity, and in a well-designed blend each strain has a job:
• Strain #1 prevents harmful pathogens from entering your bloodstream
• Strain #2 helps ease lactose intolerance, a very common problem
• Strain #3 gets past the stomach to prevent loose stools—AND CONSTIPATION!
• Strain #4 promotes regularity and overall immune system strength
• Strain #5 encourages your gut to produce lactase again, to naturally aid in digesting dairy products (including things like whey found in cookies and protein bars)
• Strain #6 seeks out and destroys toxins and helps maintain the correct pH in your gut
But the star of the show—Strain #7—is Saccharomyces boulardii... It's a powerful agent in restoring a healthy balance of gut bacteria. And this is the missing ingredient you won't find in cheap, grocery store products.
In science, nothing is ever final. Brain training is still under investigation.
Several studies between 2010 and 2013 reported to our joy that doing crossword puzzles might delay mental aging and preserve memory and cognitive function. Maybe even hold back the onset of Alzheimer's disease.
That proved less than gospel. Next came “scientific brain training” exercises.
Companies like Lumosity attracted thousands of paying subscribers who did daily exercises. And then the doubters came. Lumosity ended up paying a $2 million fine for false advertising. Later, a large-scale study showed that games of the sort online training companies were touting didn't work.
Still, the feeling remains that “use it or lose it” must have some truth to it. We all had classmates who weren't mental giants in high school, didn't get any sharper as they aged, and seemed old before their time.
We also know people who stay interested and interesting all their lives. The proverbial grandmother who is sharp as a tack, the elderly professor who misses nothing...
Our instincts may have a basis. A new paper in the BMJ (the British Medical Journal) explains why people who don't work their brains overly hard seem to go downhill faster while the curious and mentally active remain alert much longer.
Playing problem-solving games and learning new things help people stay mentally sharp longer. In effect, they are a sort of insurance policy on mental acuity. In the words of the study's lead researcher, Dr. Roger Staff:
"These results indicate that engagement in problem-solving does not protect an individual from decline, but imparts a higher starting point from which decline is observed and offsets the point at which impairment becomes significant."
No doubt, there will be more scientific research on this topic ahead. But for now, your instincts are right. Using your brain is good for your brain.
In fact, there are activities that have proved even better than solving crosswords or Sudoku puzzles.
Try learning another language. In a group of Alzheimer's patients, scientist Ellen Bialystok at York University in Toronto found that those who were bilingual experienced the onset of Alzheimer's about four years later than patients who never learned a second language. Another study on 648 patients in India found that learning a second language delayed Alzheimer's by 4.5 years.
The patients in these studies had been bilingual since childhood. But Thomas Bak, who led the Indian study, thinks that learning a second language later in life may have the same benefits. Researchers at Lund University in Sweden found that learning a language when older actually led to brain improvements. They took MRIs that showed it.
And if crosswords, Sudoku, and a second language aren't your thing—try music.
Because the other activity that is especially good for brain health is learning a musical instrument. If you always saw yourself as a rock guitar star, or sedately strumming a heavenly harp, you have a good excuse to get started.
The 1944 classic winter song "Baby It's Cold Outside" has stirred plenty of controversy lately.
The thing is, whether you choose to stay in where it's warm or venture out, you need your immune system in crack shape during the winter months.
But are you really more likely to get a cold in winter? Doctors usually say this is a myth. You don't come down with a cold because you got cold. Except that in a roundabout way, you do.
The viruses that cause colds multiply faster at somewhat lower temperatures. In winter, as you inhale colder air outdoors, it temporarily reduces the temperature in your nose, which encourages the viruses to multiply more rapidly and infect you more easily.
Another study that confirms we're prone to more colds in winter comes from a different angle. It turns out that the activity of your genes changes seasonally. In winter, our DNA dials up the activity in our genes that control inflammation. Thus we are more likely to respond to germs around us with swelling, mucus, achiness, low-grade fever, and other signs of inflammation at work to fight off cold germs.
This is an interesting reaction that seems to apply no matter where you live... with some local variations. That's what makes it even more likely that our bodies prepare to get more colds when it's cold outside. The scientists collected data on about 1,000 people distributed across six countries: the US, the UK, Australia, Germany, Iceland, and the Gambia, in West Africa.
People's immune systems and inflammatory processes revved up during the winter in the countries that had cold winters. But the Gambia is hot all year. In the Gambia, DNA dialed up the inflammatory readiness in the summer rainy season when mosquitoes abound.
You can increase your immunity by simply not doing the things that lower it. Get enough sleep, eat well, exercise moderately.
The other good thing you can do for yourself is to try Isoprex this winter. Inflammation to fight germs is a good thing—until the system goes into overdrive and fails to turn off. Then it causes havoc throughout the body. One way that shows up in middle age and later is in the pain of arthritis. It can also mean a stuffier nose and more fever than your body really needs to fight a cold.
Isoprex supports the body to keep the right balance—allowing your genes to do what they should, then helping them remember to shut off.
Your cold could thank you. If you even get one.
If you're thinking about starting (or expanding) your family and would like an excuse to go to a taping of "The Dr. Oz. Show," come to NYC and you might get a two-fer. It seems that sperm counts everywhere (researchers also looked at Los Angeles; Palo Alto, California; Houston; Boston; and Indianapolis -- the Brit publication Daily Mail reports the same holds true in Europe) are plummeting, except in the Big Apple. The reason for decline in the West? Exposure to chemicals and increasingly sedentary lifestyles.
But why is NYC exempt? As Dr. Peter Schlegel -- president-elect of the American Society of Reproductive Medicine (ASRM) and New York resident -- said: "The exceptionalism of New York sperm donors is intriguing, but maybe not so surprising. New Yorkers tend to be physically active [walking culture] and our water system provides some of the cleanest and highest quality water in the U.S." He also added that NYC has the best pizza and the best bagels, both of which could owe their superiority to the water, too. In Boston, while total sperm count didn't decline, there were declines in categories such as average concentration and total motile sperm.
So men, to keep your swimming-sperm count up to speed (that's the motile count), get in your 10,000 steps a day (New Yorkers do it regularly), stay away from pesticides and processed foods, and bring your bride to "The Dr. Oz Show." Then stop for a slice and a whole-wheat bagel with lox, too. You'll be glad you did.
(c) 2018 Michael Roizen, M.D. and Mehmet Oz, M.D.
Distributed by King Features Syndicate, Inc.
Scientists have found a group of people in their 70s who have muscles and aerobic capacities that would be the envy of healthy 20-somethings. In fact, when they compared them to a group of 20-year-olds, they were just as strong.
The simple anti-aging elixir they used was one we can all access—regular exercise. The catch is that these super-fit 70-year-olds kept it up for five decades.
To find these fitness superstars, researchers at Ball State University went looking for senior men and women who had begun exercising vigorously in the 1970s, when jogging and fitness were a big trend. They located 28 people who started back then and continued to work out at a high level every day for the next five decades.
When researchers brought them into the lab to test muscles and aerobic capacity, the older crew had muscle strength as good as the youngsters. Their aerobic capacity was slightly lower but still impressive. Compared to a control group of people their own age who had not been as active, however, the high exercisers were fitness heroes. They had 40% greater lung capacity compared to their inactive peers.
Five decades of steady, strong exercise is a difficult prescription for those of us who already let a few decades go by. But there is hope.
Even starting exercise later in life does pay off. Strength training is effective in keeping youthful muscle mass and balance at any age.
Your aerobic condition benefits from exercise as well, but it seems to need a bit more help. That's where nutrition comes in.
Adding natural life-enhancing herbs such as baikal skullcap to your daily routine could be your smartest move to keep up easy breathing. This herb is derived from a flowering perennial that has been widely used in traditional medicine in Korea and China. It is used for upper respiratory tract infections, allergic rhinitis, and bronchial diseases.
Baikal skullcap is not easy to find. It doesn't even make the list of 100 most popular medicinal herbs, but it is an important ingredient in Renown Health's Isoprex.
After undergoing periodontal surgery, comedian and television personality Whoopi Goldberg returned to her seat on "The View" and admitted to the public that, despite her excellent dental insurance, she's never taken care of her teeth and is paying the price. "Your mouth is connected to your entire system," she told viewers. "If you do not take care of your mouth, then you are not taking care of your body, and it will kill you."
Mountains of research show that poor oral health increases your risk for many maladies, such as cardiovascular disease, diabetes, and head and neck cancers. And now, new research has emerged that shows that good dental care (brushing and flossing your teeth daily and getting regular checkups) could prevent or help reduce high blood pressure.
The study published in the journal Hypertension found that people with healthier gums and little tooth decay have lower blood pressure. It also revealed that folks taking high blood pressure medications get more benefit from the meds if their gums are healthy. Specifically, patients being treated for high blood pressure who have inflamed gums are 20 percent less likely to have their blood pressure in a healthy range than patients with no signs of periodontal disease.
So, if you have periodontal disease, have your blood pressure monitored regularly, and get to your periodontist pronto! If you have high blood pressure, remember that maintaining good dental hygiene is as important for protecting your heart as eating fiber regularly or increasing your steps from 8,000 to 12,000 daily.
(c) 2018 Michael Roizen, M.D. and Mehmet Oz, M.D.
Distributed by King Features Syndicate, Inc.
In "The Itchy & Scratchy Show," a cartoon featured on "The Simpsons," Itchy (Dan Castellaneta), a blue mouse, repeatedly kills Scratchy (Harry Shearer), a black cat. It's an endless cycle of torment. Creator Matt Groening really got that itchy and scratchy thing right. Just ask anyone who's ever had chronic dry skin, eczema or mosquito or fire ant bites. You gotta scratch, but the scratching just causes more itching!
Now there's proof: A study out of the Center for the Study of Itch (we kid you not!) at Washington University School of Medicine in St. Louis has identified how scratching damages the top layer of your skin and causes signaling proteins (inflammatory cytokines, for example) to be released. They activate the skin's itch-sensory neurons, which in turn produce signals that trigger inflammation and cause more scratching. In short, your skin barrier, your immune system and your peripheral nervous system all gang up on you.
What works to break the cycle? The American Academy of Dermatology recommends you apply a cold, wet cloth or an ice pack to itchy areas for 5-10 minutes. Moisturize with a cream free of additives and fragrances. Apply topical anesthetics with pramoxine and cooling agents such as menthol or calamine.
For itchiness that just won't stop, the itch researchers say a drug called nalfurafine hydrochloride may be the answer. It targets certain opioid receptors on spinal cord neurons. The drug is already approved in Japan to alleviate itching in dialysis patients and folks with severe liver disease. Ahh! Relief.
(c) 2018 Michael Roizen, M.D. and Mehmet Oz, M.D.
Distributed by King Features Syndicate, Inc.
Light comes into the eye, hits the retina, and you see. The concept is pretty simple when it comes to vision. Lights on, you can see. Lights off, you're literally in the dark.
But a few cells in the retina aren't involved in seeing. They interpret prolonged light as a time to tell the brain, “wake up,” which they do by generating a protein called melanopsin.
It takes just 10 minutes of prolonged light exposure for the melanopsin-producing cells to start the process. Melanopsin signals your brain that it's daytime. The brain, in turn, signals your pineal gland to stop producing melatonin.
You probably know something about the hormone melatonin from the drugstore...it makes you sleepy. Melatonin in a bottle is widely recommended for jet lag and insomnia.
We've long known what melatonin does and how to use it to encourage sleep. But until recent work at Salk Institute, the exact mechanics of the melanopsin-melatonin process has been unclear.
Like many other cells in your body, the cells in the retina can be “down-regulated,” or turned off by other chemicals called arrestins.
But Salk professor Satchin Panda found that the arrestin process doesn't work as expected with the melanopsin cells in the retina.
There are two varieties of arrestins involved, it seems. One of them follows the normal pattern to shut down activity. But the surprise that Professor Panda and his team at Salk found was that the other arrestin didn't behave as it “should” on the melanopsin cells. Instead of shutting down melanopsin production, the arrestin made it increase. Increased melanopsin causes wakefulness because it suppresses melatonin.
OK, that's enough of a science lesson for today. This is what matters...
It's two things, actually. The new findings at Salk finally explain how your computer is keeping you awake, and, further, they could eventually lead to effective treatments for migraines, insomnia, jet lag and circadian disorders that may also play a role in obesity, insulin resistance, metabolic syndrome, and cognitive problems.
The research results, which were just published in the Nov. 27 issue of Cell Reports, explain why using computers, cell phones, and television after dark is especially bad for your sleep, much worse than simply reading an exciting mystery under normal lamplight.
All these electronics emit large doses of blue light, and your melanopsin cells are especially sensitive to that color. They interpret light from the blue end of the spectrum as if it is full, blazing daylight.
You know what happens next. When it's daytime to your eyes, your brain will get a wake-up call and you will lie in bed praying for sleep.
If you are not likely to turn off the television and walk away from your computer or cell phone after dark, however, you have defenses.
The hands-down best one is to get glasses with blue-blocker lenses.
In fact, you don't need to spend $50 (nonprescription) or $300 (prescription) for help. Consumer Reports tested three brands of nonprescription blue-blocking glasses. The winner was the basic orange safety glasses. Cost $8. Go to a Home Depot near you.
The nice thing about the big orange safety glasses is that you are also protected from flying debris, should that happen around your house. Say champagne corks on New Year's Eve? But on a typical evening, they give your surroundings a lovely calming glow. It's like seeing the world by firelight.
It's a cheap fix, and it actually works.
An article published in the European Journal of Neuroscience (December 2018) found that on workdays, “a decrease in evening blue light exposure led to an advance in melatonin and sleep onset.” Even for “late chronotypes,” which most of us call night owls, “controlling light exposure at home can be effective in advancing melatonin secretion and sleep.” The researchers used plain safety goggles and room darkening shades to test reactions. The safety goggles worked best.
Naturally, computers aren't the only thing that keeps people awake at night. Avoiding late-night screaming crowds at a sports arena, going easy on the Christmas punch, and resisting a snack of jalapeno poppers right before bedtime are also advisable if you want a gentle night's sleep.
But if your lifestyle doesn't include meditation before bedtime and dinner before sunset—or turning off the computer early—some sporty orange safety goggles are definitely worth the price.
Rodney Dangerfield loved to complain about his physician, Dr. Vinnie Boombatz, whose careless instructions often left Dangerfield in worse condition. "He told me to run five miles a day for eight weeks," Dangerfield gripes. "I called him up and I said 'Doc, I'm 70 miles from my house!'"
Dangerfield isn't the only patient who has suffered from miscommunication with a physician ... and the miscommunication goes both ways.
From Doc to Patient: A survey published in JAMA finds more than a third of patients fail to tell their doctor if they disagree with treatment recommendations or don't understand them. That puts your health in jeopardy, and it's a major cause of hospital readmission!
From Patient to Doc: The study also found that 80 percent of people have lied to their doctors in ways that could affect their health and medical treatment. The top reason? To avoid being judged.
What to Do: When your doc suggests treatment, make SURE you understand. Demand clear explanations. And if you have a bad feeling about something, express it!
Now, when it comes to being honest with your doc: Your health history and lifestyle habits can be hard to discuss openly, especially if you've made poor choices, such as smoking, not exercising or drinking excessively. But you need to get that info to your doc (ask him or her not to put it in the electronic record), so appropriate care can be offered! Most docs are not judgmental; they just want to help you get and stay healthy. Trust us, we know these doctors exist!
(c) 2018 Michael Roizen, M.D. and Mehmet Oz, M.D.
Distributed by King Features Syndicate, Inc.
Can you starve your way to a long life and great health? According to several studies on calorie restriction in mice, possibly so.
Obviously, that's a plan that can go too far. Anorexia kills. And the consequences are dire even among the cured. Canadian researchers have calculated that girls who were anorexic at age 15 and recovered would cut 25 years off their life span.
But for those of us well above the anorexic level, what about all those studies that show calorie-restricted (CR) diets are linked to living longer?
It's a pertinent question because roughly 40% of Americans are obese. And, 70% of us are at least a little overweight. People whose weights are “normal” on height-weight charts are actually a minority.
There are plenty of logical reasons for us to shed pounds. Obesity is highly correlated with some kinds of cancer, diabetes, heart disease, knee and back pains, asthma, sleep apnea...
Yet, Americans keep getting heavier. And those of us who do shed a few pounds almost always regain them.
Except for those who believe in CR. The CR advantage probably comes down to attitude:
• Why diet to lose weight: Eat less to gradually get skinny because if you're fat you might get sick from one of many possible things (some of which could happen to you whether you diet or not) some day in the future.
• Why CR: Eat less so you will live longer. Getting thin is a nice bonus.
CR can be approached in many ways. Some people just cut calories by a significant amount every day. That can range from 10% less than your normal intake to 30% less. Others choose to eat regularly for 5 days a week and fast or drastically reduce their calories on the other two. Yet others fast on a different schedule.
The CR idea is widely touted and has been "proven" many times:
• Nematodes (worms) lived longer when their calories were restricted and they were receiving resveratrol at the same time.
• Another experiment on yeast and flies also showed they lived longer on restricted calories; again, results were best when the subjects were fed resveratrol.
• A new experiment just showed that mice also lived longer when fed only once a day, presumably because that meant they fasted longer as well as receiving fewer calories overall.
Over the years, CR experiments have focused on mice, flies, worms, and fungi because their short lifespans make it easy to follow subjects through whole generations.
We don't know nearly as much about the effect on humans. There has never been an experiment where researchers began restricting the calories of dozens of children, kept them on restriction for the rest of their lives, and followed them all the way from cradle to grave. There never will be.
The best we have are some correlations. Based on data from 900,000 western European and North American adults, body mass is strongly associated with lifespan. Among the morbidly obese, half died by age 70. Among the lean, less than a quarter had died by age 70.
There have also been some short-term experiments. Valter Longo at USC had subjects cut their calories in half five days per month. After three months, they managed to lower their triglycerides, cholesterol, and body mass. Their blood glucose levels improved, too.
That seemed promising. Alas, the diet wasn't popular. A fourth of the subjects dropped out of the experiment before the three-month mark. The odds that millions of us would use this approach for our entire lives, as Longo suggests we should, are somewhere between dismal and impossible.
Now the good news... Everything we know about CR so far doesn't suggest you should go that route if you don't want to. While the disadvantages of obesity are real, the advantages of CR are still in question.
Let's start with the obvious. You are not a mouse.
Murine—mouse—studies are significant and very helpful, but they're limited. It's one thing to test the chemistry and physiology of a drug on mice with their similar biology. It's quite another to use mice to test lifestyle choices when their lives differ so greatly from ours.
Those long-living mice existed in a highly protected environment with no predators, food shortages, crummy bosses, bills due... you get the idea. So far there are no perfectly controlled studies that prove any one of us will live longer with a CR lifestyle. All we know for sure is that being obese is not good.
Second, there is definite support for the importance of resveratrol in life extension, and it seems especially helpful when cutting calories.
And finally, newer research with primates and studies on human data sets expand the story. Longevity may have a better relationship to maintaining muscle mass than it does to maintaining the waist you had when you were 18 years old and in your physical prime.
Tell the truth. How many cookies have you had so far? That includes the broken pieces.
And, we know you have stoically resisted the fruitcake, but what about the gift box of Aunt Bessie's double-dark-chocolate homemade fudge? You couldn't hurt her feelings, could you?
It's the holiday season. We're surrounded by seasonal sweets at the office, at home, and at every party.
They're good, too. Only the Grinch would use this occasion to insist that sugar is bad for you and you should give it all up.
That would be unnatural.
The sweet taste exists in nature for a reason. It's not just your imagination or a regrettable character weakness—Mother Nature is tempting you. “Biochemistry,” a college textbook by Jeremy Berg et al. that has been around so long it's now in its 8th edition, makes quick work of that point:
“Five primary tastes are perceived: bitter, sweet, sour, salty, and umami (the taste of glutamate from the Japanese word for “deliciousness”). These five tastes serve to classify compounds into potentially nutritive and beneficial (sweet, salty, umami) or potentially harmful or toxic (bitter, sour).”
See? Mother Nature is on your side if you would rather have a brownie instead of seconds on kale.
So, have a little sugar. Not too much. Sugar adds calories, and we are never in favor of packing on extra pounds if you can avoid it. Needless to say, if you have diabetes or pre-diabetes, extreme self-discipline is needed.
But a little extra, occasional, sugar won't kill you. In fact, this time of year, sugary treats tend to come with one of life's greatest gifts—the company of friends and family. The smile of a friend and a cookie is a fair trade for sitting home alone. Especially during these short, dark days of the year when many of us feel more depressed than usual.
Still, you will no doubt encounter well-meaning people who make a point of letting you know they wouldn't indulge—because sugar's just bad for you always, forever, period, amen.
One of the favorite arguments by the sugar police is that sugar feeds cancer.
Well, it's time to tackle that myth. The sugar police are stretching the truth a bit.
Excess sugar does lead to obesity. And obesity is definitely implicated in some kinds of cancer.
But there is no direct link from eating sugar to getting cancer. Or growing cancer. Just as there is no direct link between pumping gas into the tanks of hearses and the caskets they carry.
In fact, you can hardly avoid ingesting sugar if you eat a normal diet. It is abundant in healthy fruit like apples. Milk has it. So do carrots, peas, corn, wheat, and potatoes.
The modern problem is not sugar, it's excessive added sugar. And even then, the sugar-cancer link has been rejected in one high-quality study after another. What you have is a case of guilt by association.
A researcher named Otto Warburg first suggested in 1924 that sugar causes cancer, because cancer cells use sugar (glucose) in a different way from regular cells. He got the science wrong, but the myth lives on.
The fact is, all the cells in your body use glucose—not just cancer cells. Your survival depends on it.
Because they grow so much faster, however, cancer cells are real glucose hogs. Unfortunately, you can't starve your cancer cells by cutting all the sugar out of your diet. There's no way to tell those strawberries to head to your good cells and not your cancer cells.
That's the realm of science, and it may be possible with some drugs in the future.
Oncologists at Brunel University in London have found a link between glucose and cancer cells that might be the answer. Cancer cells overproduce a protein named PARP14. The protein allows cancer cells to grab enormous amounts of glucose from the system to fuel their rapid growth. The interesting thing with PARP14 is that it allows cancer cells to use glucose in a different manner than normal cells do. The scientists are looking for ways to block PARP14 production. In turn, that would prevent cancer cells from using the body's glucose stores while healthy cells could still access it.
Success along those lines is still years down the road.
In the meantime, a moderate amount of sugar is OK. Celebrate the season. Enjoy a cookie or two—but not dozens.
There's always New Year's Day for new resolutions. And in addition to cutting back on sugar, you could vow to eat more blueberries.
Because there is one thing that sugar is notorious for—dry skin. A rush of sugar causes an insulin spike, which causes inflammation, which leads to redness, dryness, and wrinkles through the process of glycation. Hence the blueberries...
Blueberries are chock full of vitamin A, vitamin C, antioxidants, and flavonoids that are good for your skin. Your face will love you for it.
And while you're at it, treat yourself to a nice face cream, too.
According to some accounts, in 1953 C.A. Swanson & Sons had over 500,000 pounds of unsold turkey after Thanksgiving. One employee suggested they cook the leftovers, along with some favorite side dishes, package the meals in compartmented aluminum trays and freeze them. That was the first TV dinner.
The impulse to pull something pre-made out of the freezer, heat it up and eat it while watching a favorite show is now a way of life for millions. And these days microwave technology can make it happen very quickly (Netflix and chili, anyone?). But there are potential dangers in the foods' plastic containers and in under-heated foods.
Dr. Oz did his own investigation at the Good Housekeeping test lab (watch it at DoctorOz.com; search for "microwave dinners") and discovered that the plastic packaging stayed intact if the food was zapped when frozen, but if the food was thawed then microwaved, the plastic melted! That's toxic. Also, even intact plastics contain potentially harmful chemicals, especially hormone-disrupting BPA/BPS. But thankfully, the microwaved meals the show had tested by an independent lab didn't have BPA in the food.
Another risk from microwaving frozen precooked meals or uncooked foods: Uneven heating creates hot and cold spots, leaving you exposed to disease-causing bacteria (if they happen to be lurking there) and raising your risk of food poisoning.
The bottom line: Don't microwave in plastic. Transfer the foods to glass. For precooked foods, use a meat thermometer to make sure all areas are at least 140 F. For cooking raw foods, follow your microwave's safety guidelines.
(c) 2018 Michael Roizen, M.D. and Mehmet Oz, M.D.
Distributed by King Features Syndicate, Inc.
For safety and efficiency, nature designed the human body to be complex. Your vitamins and supplements need to respect that wisdom, too.
Think about it... Your feet can't drag your body down the road all by themselves. They need the help of your spine to balance your body overhead, bones to support it, and a brain to decide where to head.
Your fingers may add a pinch of salt to your french fries, but your kidneys moderate whether it goes from your stomach to your bloodstream or gets washed out.
Yet we often talk about our vitamins and supplements as if each one was a standalone... Vitamin C is the one that is good for colds. Iron is the mineral that enriches the blood. Calcium is the one that supports healthy bones. And ashwagandha is terrific for anxiety. Or so we believe.
The truth is, vitamins and supplements need an entourage to work their best. Your body is complex and anything that goes into balancing or healing it needs to be well thought out.
The term “entourage effect” was coined by Dr. Raphael Mechoulam, who is the world's leading cannabis researcher. It refers to compounds that have a better or different effect when they work together. There's no reason the hemp and cannabis people should have that term all to themselves, though. Because it's true for vitamins, minerals, and herbals, too.
You have probably noticed a particular case of this in the past few years. Has your doctor checked your Vitamin D level? Vitamin D affects your energy level, but that's not why your physician makes this a routine check.
He's concerned about your bone density. Studies have shown that no matter how much calcium you ingest, it won't do the job of building bone unless your vitamin D levels are sufficient. Calcium is a star player for bone mass density, but it needs Vitamin D in its entourage.
As we said, the human body is a complex and marvelous instrument. We remember that at Renown Health Products.
If you look at the label on any Renown Health product, you will never, ever find that we have sent you a single-ingredient product. We believe every star deserves an entourage. Our work always recognizes that the herbals, minerals, and vitamins you take need to support one another to work their best.
This is a far different prospect than the shotgun approach taken by makers of daily multiple vitamins. Multivitamins throw every known vitamin at your body in hopes that anything you lack is supplied and anything you don't need won't hurt you.
Multivitamins spectacularly fail to target dosages for specific uses. Vitamin C is a good example...
You may need 75 mg of vitamin C if you are a typical adult woman, 90 mg if you are an adult male. But a nursing mom needs closer to 120 mg, and a smoker needs 110 to 125 mg of Vitamin C. People with macular degeneration responded positively to 500 mg doses in one study.
And to complicate matters even more, people with iron overload, a disease called hemochromatosis, should not supplement with Vitamin C at all!
You will notice that Renown Health doesn't sell multivitamins. We offer you well-crafted products that are laser-targeted on specific health goals.
For instance, our Cerbrexum has 2000 IU of vitamin D3 because it's intended to aid in mental alertness. That calls for high levels. The vitamin D3 in the formula is also supported by ashwagandha, an herb that is sometimes called Indian ginseng. It has a long tradition in Ayurvedic practice for memory and concentration.
But here's the magic of entourage thinking... The Cerbrexum formula also includes Bioperine. It not only opens capillaries, good for the brain, it also enhances the action of vitamins and supplements like curcumin. And, of course, Cerbrexum includes curcumin, found in the turmeric root powder in the formula. Curcumin is important in Chinese medicine and recent research has shown it aids cognitive function. All the ingredients focus on a clear mission... and there are no “kitchen sink” extras that you don't need.
In contrast, Renown Health's Isoprex also has Vitamin D3, but much less of it—500 IU. That is exactly what is appropriate to support the calcium in this product. Other Renown Health products don't include D3 at all because as wonderful as vitamin D may be, it's not on mission in those other formulas.
The next time you are standing at the pharmacy looking at rows of B vitamins (was it 7 or 8 that's good for shiny hair?), Vitamin K (no! Not with Warfarin!), Milk Thistle (but not if you are allergic to ragweed), Licorice Root (wait, that raises blood pressure, doesn't it?) and fifty other choices you will probably feel like you need an expert to choose the right combination of bottles.
We couldn't agree more! Trust us.
When Chris Pratt went from the pudgy (up to 300 pounds!) funny guy Andy Dwyer in "Parks & Recreation" to the ripped space scoundrel Peter Quill in 2014's "Guardians of the Galaxy," fans wondered how he transformed himself. He says: "There wasn't any trick or secret. You cannot do it in a month. It takes a year -- or a lifetime -- of consistency, every day."
He did it right; he's maintaining a healthy weight and good nutritional habits. But a new study in the International Journal of Eating Disorders found that many men who get into weight loss and muscle building become trapped in a cycle of obsessive exercise, hyper food regulation and distorted body image, and develop what's called bigorexia or muscle dysmorphia.
The researchers looked at data from the Growing Up Today Study (GUTS) on 2,460 males ages 18 to 32. A third of the men had been on a diet in the past year. Not so they could run faster or improve their health, but to "look better." They also were more likely to binge drink and be depressed.
How many guys are affected? A study in Military Medicine found that in a group of 1,150 new enlistees, 13 percent of males had body dysmorphic disorder and 12.7 percent had MD. Signs of MD include extreme exercise routines, being convinced that your body isn't lean enough or muscular enough and using supplements excessively. Overcoming MD requires a commitment to change, ongoing talk therapy, medical support and patience. That's something to build on!
(c) 2018 Michael Roizen, M.D. and Mehmet Oz, M.D.
Distributed by King Features Syndicate, Inc.
If your immune system doesn't cure you, it could kill you. More likely it will make you hurt all over.
Inflammation that refuses to turn off is the real reason behind the pain of arthritis. It's an immune response gone awry. Now proof is piling up that inflammation and misdirected immunity also lie behind a rare disease known as PHARC, which causes burning pains in the feet, deafness, loss of muscle control, poor night vision, and eventual blindness and cataracts.
The disease is so rare, most doctors don't know about it. But research on it may bring pain relief to the millions of us that don't have it.
Scientists at Scripps Research Institute have linked PHARC to the lack of a specific protein, ABHD12. Until this research, scientists were not sure what the ABHD12 protein was for. Now they know it acts as a brake on the immune system to keep it from being overactive, and that's the discovery that could lead to help for millions of us with everyday afflictions like migraines and arthritis.
Eventually, researchers hope their discoveries could help them develop drugs to target ABHD12, which most people have, in order to treat cancer and chronic viral diseases.
Benjamin Cravatt, the head researcher, says, "It is now known that the immune system plays a big role in many brain diseases, including neurodegenerative diseases such as Alzheimer's and Parkinson's. There have also been hints of immune involvement in developmental brain disorders such as autism and schizophrenia."
No drug to tame the release of the ABHD12 protein exists yet. But if it did, its action would likely mimic several natural healing agents that control inflammation such as bromelain, oregano, and baikal skullcap. These natural sources subdue pain by acting directly on the body to regulate inflammatory proteins.
Some plants were born to be superstars. Lately, hemp and cannabis have been hogging the news, but it's time to honor the orange. Besides delicious fruit and everyone's favorite breakfast juice, oranges have dozens of uses.
Here are three of our favorites you may be overlooking.
1. Make Orange Vinegar for the Freshest House In Town
Orange-scented cleaning products seem to have replaced lemon. The smell is great, but the chemicals in commercial products aren't always something you really should be spreading all over the house.
There's a natural alternative that does the trick just as well. It's so cheap it's almost free, too. Make an orange-scented vinegar solution for cleaning. Save those orange peels until you can fill a glass container with a lid. Stuff them in, and cover with white vinegar. Now seal the lid and send the bottle to storage for two weeks to a month. A dark corner of the cupboard is fine. To use the solution, strain out all the orange peels.
You can put ¼ cup to ½ cup of your new vinegar essence in water for a great all-purpose cleaner. And once you've strained out the orange peels, they make great garbage disposal deodorizer-cleaners, too.
By the way, if you like something a little more complex, add some sprigs of rosemary, cedar, or pine to spike your orange scent. You'll love how the house smells. But warning, it could make you hungry!
2. Work Out Your Arthritis Kinks
There are drugs and natural remedies to treat the pain of arthritis. But even with those in your medicine cabinet for pain or swelling, a little self-management can go a long way to keeping your fingers and hands functioning smoothly.
These routines will leave you feeling more like you had a massage than a workout. You can do them with a tennis ball, but an orange is even friendlier to hurting hands.
The easiest one is the big squeeze. Palm the orange and wrap your fingers around it evenly. Now squeeze gently if you are really sore. Hold each squeeze to the count of 5 then release. Repeat 10 times. On your good days, you can squeeze with all your might to build hand strength.
Now try the claw pinch. Put all your fingers together and place your clustered fingertips down on the top of the orange. Put the tip of your thumb below. Pinch the orange as if trying to dent it. Gently if needed. Hard if you can. Hold each pinch 5 seconds and do 10 of them altogether.
Next come the solo finger presses... Put your thumb under the orange and place just your index fingertip on top, opposite your thumb. Keep the other fingers relaxed; they'll have to wait for their turn. Now squeeze your thumb and index finger together for 5 seconds. When your index finger is done, move on to your middle finger and thumb. When that's done, do a press with your thumb and ring finger. Finish up with a thumb-and-pinky face-off, and you've completed the first series. Try to reach 10 series of finger presses, 15 if you're feeling spunky.
Finish your routine with the unbender, because all you need after this squeezing is a good stretch. Put the orange in the palm of one hand. Place the index finger of your other hand on top of the orange. Keep your finger straight, and use the orange to push your finger back as far as you can comfortably move it.
People usually do this last exercise by pushing on the fingers of one hand with the other, but it's easier to overdo it that way. The orange keeps it gentle.
3. Eat Them
Not exactly a new use, you say. Maybe so, but we're going to add an orange twist. Eat the peel.
Orange peels are rich in nutrients. Gram for gram, the peel of an orange contains about twice as much vitamin C as the fruit. Orange peels also include the B-complex vitamins riboflavin, thiamine, niacin, pyridoxine, and folate along with Vitamin A.
Needless to say, but we'll say it anyway—only eat the peels of organic oranges. And wash them first.
Oranges are powerful. In addition to all those vitamins, they are a natural source of diosmin, a flavone in strong pharmaceutical demand, especially for achy legs.
When Joe Coleman pitched for the Philadelphia Athletics, Baltimore Orioles and Detroit Tigers from 1942 to 1955, he could only hope his baseball talent would be passed on to the next generations of his family. Well, it was! Son Joe Coleman pitched for 15 seasons from 1965 to 1979 -- a two-time 20-game winner -- and today his grandson Casey Coleman is with the Cubs Triple-A team in Des Moines, Iowa.
Sometimes it's talent that's passed down, and sometimes, unfortunately, it's health challenges such as obesity and addiction. A new study published in Translational Psychiatry explains how choices made during pregnancy and breastfeeding affect the health of future generations.
Swiss researchers fed healthy female mice a high-fat diet during pregnancy and while nursing. The repercussions showed up in three generations of their offspring (those generations didn't eat excess fat, and neither did their mates). They had changes in their brain's dopamine-powered reward system that predisposed them to "develop obesity and addictive-like behaviors ..."
Seems your choices today may force your next three generations to battle obesity, addictions and the health problems associated with those conditions.
So how much and what kind of fats should you eat every day to protect your health and the health of future generations? Stick with fats in nuts, oils like extra virgin olive oil and animal proteins like salmon. Then, on a 2,000-calorie-a-day diet, aim for 20 to 35 percent of calories (400 to 700 calories, or 44 to 78 grams) from those good-for-you fats. That's good pitching and good hitting!
(c) 2018 Michael Roizen, M.D. and Mehmet Oz, M.D. Distributed by King Features Syndicate, Inc.
In 1970, when Stephen Stills recorded "Love the One You're With," it became a rallying cry for the hippy "free love" movement (more myth than fact, but the lyric helped sell a lot of records).
Well, if that's your era, and today you're still sexually active, you're still part of a very, should we say, robust movement, according to the National Poll on Healthy Aging. More than 1,000 people ages 65 to 80 were polled: Nearly three-quarters said they had a romantic partner, and 54 percent of them were sexually active.
They claim that they're not shy about it, either: 62 percent said if they were having problems with their sexual health, they would talk to their primary care provider. Unfortunately, only 17 percent had. That's a gap that's putting many older folks at risk.
Sexually transmitted diseases are at an all-time high among the elderly. From 2007 to 2012, the incidence of syphilis among seniors increased by 52 percent; chlamydia increased by 32 percent. And, according to AARP, every year since then has seen about a 20 percent jump in the incidence of STDs.
The reason? Divorce rates are up, while erectile dysfunction medications are easily available, and docs don't spend enough time talking with seniors about safe sex.
Well, it's time to get proactive. Ask your partner(s) about their sexual health, and get tested yourself. Medicare now offers free STD screenings for seniors. And use condoms -- many seniors don't. Keep it safe to love the one you're with.
"Don't sweat the petty things, and don't pet the sweaty things," comedian George Carlin once said. But staying cool, calm and collected isn't always easy. That may be why as many as 90 percent of Americans use deodorants and antiperspirants regularly, spending $18 billion a year in pursuit of pristine pits.
Ironically, though many of you worry about schvitzing (only 2 percent of you don't get smelly from sweat in your pits, groin, hands or feet), you also may sweat over the safety of the stuff you're applying under your arms. One ingredient in antiperspirants (not deodorants), aluminum chlorohydrate, is often targeted as dangerous. It stops you from perspiring by reacting with your sweat and creating gel plugs in your sweat glands' ducts, shutting them off.
Since the 1960s, when some poorly designed studies made people scared of aluminum (even in frying pans), it's been rumored that it could contribute to the development of Alzheimer's disease. But a 2001 study examined aluminum levels in urine of people who used antiperspirant daily and found that only 0.012 percent of aluminum from these products was absorbed through the skin. That's just about 2.5 percent of the aluminum you'll absorb over the same time period from food. And a larger review of research, published last year, concluded that there's not enough evidence to show that regular use of deodorants and antiperspirants increases your risk for dementia.
So that's one less thing to sweat over. Now where's that sweaty dog?
When you have a kitchen fire, you might grab a fire extinguisher, but you wouldn't crank up Spotify. Well, maybe the day's coming when you would. Students from George Mason University have invented a deep bass sonic blaster that uses sound waves to put out fires. The technology knocks out flames in small, confined spaces.
It would be great if that kind of gizmo could sing away chronic inflammation in your cells!
Inflammation is a result of your body's immune response when it's called on to heal a wound or defeat a virus. It's why your sprained ankle swells or you form a scab. And after your immune warrior cells win their war, inflammation fades away.
But what if the immune system can't win the war, because your body is under attack from chronically elevated blood sugar, a constant flow of stress hormones or going-nowhere belly fat? Then inflammation persists and becomes as damaging to your organs, cells and sex life as California's 300,000-acre Mendocino Complex fire and as hard to put out.
In Dr. Mike's upcoming book, "What to Eat When," you can discover effective ways to tame the flame. Here are a few:
1. Don't eat flame-throwing, sugar-added or processed foods, especially at night. Inflammation increases while you're at rest.
2. Eat a plant-centered diet with lean animal proteins (no red meat). Get prebiotic fiber from 100 percent whole grains and produce.
3. Aim for 60 minutes of physical activity daily. Walking counts, but getting hot and sweaty cools off inflammation more quickly.
In the NFL and college football, a "prevent defense" often is used late in the game to prevent a long pass completion from an offensive squad that needs to score a touchdown with time running out on the clock. But if it's not carried out correctly (as many pundits have said), the only thing it prevents is your team from winning.
The same is true with your own "prevent defense" against Type 2 diabetes. Execute it correctly, and you'll defeat that disease. Mess it up, and you'll have to contend with the complications that come from chronic elevation of blood glucose levels.
Now, you know 10,000 steps a day, ditching added sugars and syrups, highly processed foods and red meats (especially processed red meats) are essential parts of your defense. But did you know that eating whole grains puts extra muscle in your lineup?
Researchers recently reported in The Journal of Nutrition that 100 percent whole grains, such as wheat, rye and oats, help block diabetes. Each half-ounce serving a day can lower your risk by 11 percent (for men) and 7 percent (for women). And folks who ate a bit less than 2 ounces of these whole grains daily had the lowest risk of developing Type 2 diabetes.
Want an even stronger defense? Add fiber- and nutrient-rich broccoli, nuts (walnuts and pecans) and legumes in your fight against Type 2 diabetes. You'll also get fatty acids that protect your brain health! Now there's a prevent defense that really works!
(c) 2018 Michael Roizen, M.D. and Mehmet Oz, M.D. Distributed by King Features Syndicate, Inc.
When Richard Farina wrote "Been Down So Long It Looks Like Up To Me" in 1966, he knew the Beat and Hippie subcultures from the inside out -- and felt the world was so topsy-turvy that feeling down was a kind of new normal, fueled in part by drugs.
Finding that depression is a new normal because of common drugs they take is something an astounding 37 percent of American adults can relate to, according to a study published in JAMA.
Researchers looked at the medication use of more than 26,000 adults from 2005 to 2014. Turns out, 203 often-used prescription drugs, some of which are also available over-the-counter, have depression and/or suicide listed as side effects. The meds included proton pump inhibitors and antacids, as well as sedatives, anti-seizure meds, hormonal contraception, blood pressure and heart medications, and painkillers.
The research also showed that if you're taking more than one of these, your risk of depression increases. Around 15 percent of adults who use three or more, which is not uncommon, experience depression, compared with 5 percent of folks taking none, and 7 percent of those taking just one. Drugs listing suicide as a potential side effect showed similar results.
So if you're feeling fatigued, sleeping too much or not enough, are sad or disengaged, or think about suicide, talk to your doctor about the prescription and over-the-counter meds you're taking. You may want to explore alternatives, including lifestyle changes that could ease pain and digestive woes and lower blood pressure, or opt for nonhormonal contraception.
(c) 2018 Michael Roizen, M.D. and Mehmet Oz, M.D. Distributed by King Features Syndicate, Inc.
Thumbs are an essential part of your hands and our language. On the positive side, a "green thumb" is a good thing. And "opposable thumbs" -- well, they're what differentiates you from other animals that cannot grasp or manipulate objects well. Just try writing your name, tying your shoelaces or hitching a ride without using your very agile thumb! But "sticking out like a sore thumb"? You don't want that.
Unfortunately, for many women (10 to 20 times as many as men) age 40 and older, thumb arthritis makes the basal joint at the bottom of the thumb swell and hurt, sometimes severely. This form of osteoarthritis can happen because of overuse and stress from hobbies or a job; diseases that affect cartilage, such as rheumatoid arthritis; and obesity, which triggers inflammatory reactions that can damage tissue and bone.
If you've got sore thumbs (they usually come in pairs), you don't want to twiddle them!
-- You can opt for wearing a brace, using heat and ice packs, taking oral medications or getting corticosteroid injections, or try off-label hyaluronic acid injections. (Although hyaluronic acid is approved for arthritic knees, the Food and Drug Administration hasn't given it the thumbs-up for thumbs.)
-- ASU (avocado/soybean unsaponifiable) supplements sometimes work.
-- There are two surgeries: One fuses the joint, easing pain but limiting mobility. Another removes a bone from the base of the joint and reroutes a tendon to provide a cushion and stability.
Nothing guarantees you'll regain full mobility, and physical/occupational therapy is essential, but all three approaches ease constant pain and may make you able to hitchhike again!
The Jeep Grand Cherokee Trackhawk looks just like an ordinary Jeep Grand Cherokee, but it has a 707-horsepower supercharged V8 that can go from 0 to 60 mph in 3.5 seconds. It's in a class of car Road and Track Magazine calls "sleeper cars." Well, if you want to be high performance, you should aim to be a super sleeper.
But is this you? You head for bed at a good hour, so you should be able to get seven to eight hours of sleep. But can't doze off. Well, there's a good chance you're bringing daytime stresses into bed: You worry about that task at work you didn't finish; you panic over an unpaid bill.
You're in luck. There are proven ways to deal with your disruptive stress response and cruise off into dreamland.
-- Eat a light, healthy dinner, three to four hours before turning in. Stay clear of fatty animal proteins and inflammatory processed foods that amp up your stress response.
-- Get at least 30 to 60 minutes of exercise daily (but not right before bedtime). Combine aerobics and strength training to dispel stress and ease depression.
-- Skip that nightcap. Your body needs a few hours to process alcohol before you snooze; otherwise, it may wake you later as it clears your system.
You exercised, ate healthfully, skipped that drink. Now slide between the sheets. It's time to try five minutes of mindful meditation (instructions at www.sharecare.com). You'll learn to be in the moment, and in the next moment, you'll be asleep.
(c) 2018 Michael Roizen, M.D. and Mehmet Oz, M.D. Distributed by King Features Syndicate, Inc.
To paraphrase something the actress Allison Janney once said: If June Cleaver [Barbara Billingsley] made women in the 1950s and '60s feel bad because they didn't measure up to her all-too-perfect mom character in "Leave it to Beaver," Janney's character on the TV sitcom "Mom" should make moms everywhere feel great!
Well, laughter is great medicine. But there's something else that can make moms, especially those caring for children with special needs, feel better about themselves: cognitive behavioral therapy, or CBT.
Researchers at the University of Louisville have found that brief CBT sessions -- just five 45-to-60-minute meetings -- significantly improved the mental state of women who take care of children with chronic health conditions, such as cerebral palsy and cystic fibrosis. The therapists also believe that CBT works in any situation where mothers are emotionally stretched because of a child's complex health condition.
One therapist describes the women as feeling isolated and blue because they couldn't hire a babysitter who knew how to deal with their child's special needs, and consequently couldn't find a way to spend time with friends. But even if such situations didn't change, after therapy, the moms reported decreased depressive symptoms, such as negative thinking, and their sleep quality greatly improved.
So if you (or someone you know), find yourself in a similar situation, locate a CBT program near you. Contact the Association of Behavioral and Cognitive Therapies at www.abct.org to find a CBT therapist in your area.
(c) 2018 Michael Roizen, M.D. and Mehmet Oz, M.D. Distributed by King Features Syndicate, Inc.
So what do the names Mark Sinclair, Caryn Johnson and Eric Bishop have in common? They sounded too generic -- even though they were the originals -- for their brands, which are better known as Vin Diesel, Whoopi Goldberg and Jamie Foxx.
But sometimes the generic version is a better choice. Take the original EpiPen from Mylan, which delivers lifesaving epinephrine to people suffering severe allergic reactions (anaphylaxis) to things like bee stings, peanuts and shellfish. The brand raised its price by 400 percent between 2010 and 2016. That led to a $465 million federal overcharge settlement against Mylan, and encouraged it to market an authorized generic version, which still costs a lot -- between $300 and $500 for a two-pack. Even with that, there's been a shortage of EpiPens lately, and the Food and Drug Administration has had to extend the expiration date on specific lots of 0.3 milligram versions of the EpiPen and their authorized generic by four months.
The FDA hopes the expiration-date extension will be timed to coincide with the release of a newly approved, truly generic version of both the EpiPen and EpiPen Jr. It took a while for this generic to be developed because the delivery system was very difficult to duplicate. Once the device was proven to work (it took two years), the FDA gave Teva Pharmaceuticals permission to market its version. We hope everyone will breathe easier once the generic is available -- and (hopefully) affordable -- to all who desperately need it to protect themselves from anaphylaxis.
Last May, 27-year-old Icelander Hafthor Julius Bjornsson, renowned for his role as Gregor "The Mountain" Clegane in "Game of Thrones," won the World's Strongest Man competition. At 6 feet, 9 inches tall and weighing over 390 pounds, Bjornsson eats eight meals a day, while lifting tons of weights.
His meals consist of lean meats, grains, vegetables and healthy fats found in avocados and peanut butter. He's said: "I eat quite healthy for a big guy ... but you get sick of eating all the time. Today, I was supposed to have chicken with sweet potatoes and greens. Because I didn't want that, I had salmon. We have very good fish in Iceland."
It's true that high-protein foods are good to eat after resistance exercising to encourage muscle building. But did you know that eating protein after working out -- if you eat the right amounts -- can also help you lose weight? That works because refueling with protein after your muscle-strengthening activities increases the amount of energy-burning muscle mass you build, and that uses up extra calories. Just make sure you don't eat ever-more total calories as you exercise more!
To take advantage of the muscle building and weight loss:
-- Eat protein up to two hours after working out to take advantage of the protein synthesis it fuels.
-- Enjoy protein from salmon, trout and skinless chicken.
-- Eat 20-30 grams of protein (it's the equivalent of 4 ounces cooked salmon or 3.5 ounces grilled chicken breast).
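If you want to sanity-check whether a serving lands in that 20-30 gram window, here is a rough sketch. The protein-per-ounce figures are approximations added for illustration, not numbers from the column:

```python
# Rough check of whether a post-workout serving lands in the 20-30 g protein window.
# The protein-per-ounce figures are approximate assumptions, not exact nutrition data.
PROTEIN_PER_OUNCE = {
    "cooked salmon": 6.0,           # roughly 24 g per 4 oz
    "grilled chicken breast": 7.5,  # roughly 26 g per 3.5 oz
    "cooked trout": 6.0,
}

def protein_grams(food: str, ounces: float) -> float:
    """Estimate grams of protein in a serving using the approximate table above."""
    return PROTEIN_PER_OUNCE[food] * ounces

def in_target_window(grams: float, low: float = 20, high: float = 30) -> bool:
    """True if a serving falls inside the 20-30 g post-workout window."""
    return low <= grams <= high

if __name__ == "__main__":
    for food, oz in [("cooked salmon", 4), ("grilled chicken breast", 3.5)]:
        grams = protein_grams(food, oz)
        status = "within" if in_target_window(grams) else "outside"
        print(f"{oz} oz {food}: about {grams:.0f} g protein, {status} the 20-30 g window")
```

Both sample servings come out near the middle of the window, which matches the equivalents listed above.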
In mid-September, the National Interagency Fire Center reported that firefighters continued to battle 89 large blazes across the Western states and Alaska; in Canada last August, British Columbia alone had more than 500 separate wildfires. You could say that both the U.S. and our northern neighbor were an in-FLAME-nation!
But you don't need timber and lightning to witness the ravages of inflammation firsthand. Your own brain is a potential target, according to researchers from Germany's University of Bonn. They've done a study, published in the journal Frontiers in Molecular Neuroscience, that pinpoints how poorly regulated inflammatory responses affect certain neurons and can lead to loss of brain cells -- especially as you get older.
Major triggers are inflammatory foods like added sugars and saturated fats, hormone-disrupting phthalates and BPA/BPS, and fiery habits like smoking, excess drinking and lack of sleep. If you have Type 2 diabetes, high blood pressure, cancer or chronic stress, your body's battling excess inflammation. So call out the fire brigade.
Quick Coolers: To put out your fires NOW try these three steps:
-- Take 900 milligrams daily of DHA omega-3 from algae.
-- Take a probiotic.
-- Floss your teeth daily.
Long-Term Fixes: To banish destructive inflammation adopt these habits:
-- Exercise for at least 30 minutes five days a week (walking 10,000 steps or equivalent and strength building).
-- Sleep seven to eight hours nightly.
-- Eat inflammation-fighting foods, like salmon, olive oil, 100 percent whole grains and cruciferous vegetables (broccoli and cauliflower).
(c) 2018 Michael Roizen, M.D. and Mehmet Oz, M.D. Distributed by King Features Syndicate, Inc.
Twenty-four-year-old Spanish skateboarding star Danny Leon got made up to look like a not-so-steady-on-his-feet 80-year-old man. His goal: To see if teens at a local skate park would teach him the sport. They obliged, but when Danny started speeding down the half pipe and doing aerial spins, well, the kids were blown away.
Being a force of nature disguised as a harmless old guy -- that's a pretty good metaphor for the way a blood clot can disguise itself as a simple bruise. Don't you fall for it.
Bruises can be painful and turn shades of black and blue, but generally they're not harmful. One caveat: Easy or spontaneous bruising can indicate underlying disease and a need to see your doc.
A blood clot, on the other hand, is a concentrated aggregation of blood. It forms from an external injury to blood vessels or internal injury to the lining of a blood vessel from plaque, or because of dysfunction in your blood's flow-and-clot chemistry. Clots can obstruct blood flow or dislodge and travel through your bloodstream, triggering heart attack, stroke, deep vein thrombosis (DVT) or pulmonary embolism (PE). So if you spot a clot, see your doc.
Near your skin's surface, clots can appear bruise-like, but are generally redder and the underlying vein may be hard to the touch.
A clot that's moved and is causing trouble may trigger swelling and pain in an extremity (DVT); slurred speech and vision problems (stroke); chest pain or upper body discomfort, shortness of breath and a rapid heart rate (PE or heart attack).
The Widowmaker is a heart attack that is frequently fatal because it affects the left anterior descending artery (LAD). The LAD provides blood and oxygen to the entire frontal region of the heart, supplying a more extensive area than the other coronary arteries. Obstruction of the left anterior descending artery interrupts 40 percent of the blood that nourishes the heart, which leads to an increased risk of complications. The most common complications are irregular heartbeat, heart failure and, less frequently, sudden death.
A feature that makes the Widowmaker more fearsome is that it attacks silently. Most men who suffered sudden death due to coronary heart disease had no previous symptoms. Men 30 to 50 years old have a greater risk of death from coronary heart disease than women; this is because estrogen exerts protection against cardiovascular diseases.
Thanks to advances in medicine, an obstructed artery can be rapidly reopened with timely treatment. These procedures take place in a catheterization laboratory, where the interventional cardiologist performs an angioplasty by threading a thin tube (catheter) into the blocked coronary artery, expanding the diameter of the vessel and restoring blood flow.
How to prevent a heart attack from a widowmaker
Understand what happens in your body
A heart attack is caused by a blockage in an artery due to atherosclerosis (a process in which fatty deposits called plaque adhere to the inside of blood vessels). However, to be at imminent risk of a sudden heart attack, there usually must be a blockage of more than 60% of the blood flow; a 90% blockage can cause a life-threatening heart attack. Plaque often forms as a soft, unstable deposit rather than a hard, stable one. Because it is soft, a fragment of the plaque often breaks off, and the fatty material flows through the bloodstream until it clogs a smaller artery.
Risk factors such as smoking, eating fatty foods, obesity, and having high cholesterol make the rupture of the atheromatous plaque more likely.
Calculate your risk of heart attack
Using the American College of Cardiology's ASCVD Risk Estimator Plus, you can obtain an approximate risk of developing atherosclerotic cardiovascular disease over the next ten years. You simply enter your age, your cholesterol values (HDL and LDL) and your blood pressure, and answer questions about your lifestyle. If you get a score higher than 7.5 percent, you should consult your doctor for a full evaluation and to discuss a treatment plan.
Adopt a healthy diet
Various studies in different populations have linked the consumption of high-quality, healthy foods with a lower risk of heart disease compared to the consumption of unhealthy foods of low quality, independent of other risks such as sedentary lifestyle, obesity, and smoking. To maintain a healthy diet, you must prioritize the consumption of vegetable proteins and reduce the consumption of red and processed meats. Foods such as vegetables, fruits, whole grains, nuts, legumes, fish, and yogurt also have a beneficial effect on your health.
Exercise regularly
Adults should avoid a sedentary lifestyle; any type of physical activity provides health benefits. To obtain recognizable benefits, get at least 2.5 hours of moderate physical activity or 75 minutes of intense physical activity each week. For additional benefit, work up to 5 hours of moderate-intensity aerobic exercise or 2.5 hours of high-intensity aerobic exercise weekly. Exercise also has very positive effects on emotional health. When you exercise, the body releases endorphins, chemicals that make you feel calmer and happier. Physical exercise also helps some people sleep better, and it can be of great help with some psychological problems, such as mild depression.
Take the medications recommended by your cardiologist
One of the most commonly prescribed drugs is a statin. Statins are prescribed to reduce cholesterol levels even if you have not been diagnosed with heart disease, and they have been shown to reduce the risk of heart attack by 25 to 33 percent by limiting unstable plaque deposits and reducing inflammation.
Vitamin D exerts various functions throughout the body including the immune system. This vitamin is synthesized in the skin when exposed to sunlight. The wide use of sunscreen, a characteristic of modern life, would partly explain the increase in the prevalence of vitamin D insufficiency.
Multiple epidemiological studies have shown strong associations between asthma and reduced serum levels of 25-hydroxyvitamin D (25[OH]D), the main circulating form of vitamin D.
More severe asthma has been observed in patients with low vitamin D levels, but it is unknown at this time whether the linkage reflects causality or reverse causality. It is possible that the association between asthma and vitamin D is complex. Taken as a whole, the information supports a therapeutic role for vitamin D in reducing the risk of asthma exacerbations.
In the Childhood Asthma Management Program, an association was observed between baseline vitamin D insufficiency (<30 ng/ml) and the risk of severe exacerbations over four years.
Asthma triggered by allergens
Asthmatic inflammation triggered by allergens is essentially due to immunological reactions in response to aeroallergens. In this situation, type 2 T-helper (Th2) lymphocytes, B lymphocytes (which produce antibodies) and mast cells play a fundamental role.
Th2 lymphocytes synthesize various interleukins (IL), such as IL-4, IL-5, and IL-13, involved in the etiopathogenesis of asthma. IL-4, in particular, induces the synthesis of immunoglobulin (Ig) E by B lymphocytes.
IgE binds to mast cells, and cross-linking induces rapid release of proinflammatory mediators such as leukotrienes and histamine, which cause bronchial obstruction and mucus production.
Adaptive immune responses are regulated by various classes of regulatory T lymphocytes (Treg), for example, Foxp3 positive Treg lymphocytes and Treg lymphocytes that synthesize IL-10. In healthy subjects, both lymphocyte subpopulations participate in the emergence of tolerance towards non-harmful antigens.
Vitamin D plays a decisive role in the function of responses mediated by Treg lymphocytes. In several studies, it has been observed that vitamin D is favorably associated with the frequency of Foxp3 positive Treg lymphocytes and with the levels of IL-10 in the airways of patients with asthma.
Likewise, the stimulation and the signals derived from dendritic cells (DC) determine the induction of tolerance or the appearance of inflammatory responses; Vitamin D regulates multiple functions of DC.
In vitro, vitamin D suppresses the synthesis of IgE by B lymphocytes and increases the synthesis of IL-10 with induction of a regulatory B phenotype. In children, vitamin D deficiency is associated with increased levels of specific IgE against aeroallergens.
Vitamin D inhibits the activation of mast cells so that it reduces the synthesis of histamine and tumor necrosis factor alpha; it can also increase the production of IL-10 with anti-inflammatory properties.
Epithelial damage and asthmatic inflammation mediated by cytokines
The epithelial damage is accompanied by the release of IL-25, IL-33, and thymic stromal lymphopoietin, which directly stimulate various cell subtypes, including innate lymphoid cells type 2 (ILC2) and mast cells. ILC2 synthesize Th2-type cytokines, for example, IL-5, which induce eosinophilic inflammation.
Vitamin D modulates the epithelial response, especially by inducing synthesis in bronchial epithelial cells of soluble ST2, a suppressor of IL-33, associated with proinflammatory effects on effector cells, such as mast cells.
Viral infections induce the epithelial release of IL-33; in asthma, the mechanisms dependent on the Th2 phenotype alter the antiviral responses. Vitamin D is associated with increased immunological antimicrobial responses, through various mechanisms, including the increased production of antimicrobial peptides, such as cathelicidin, and autophagy, an important mechanism in viral and bacterial infections.
In a meta-analysis, the intake of vitamin D reduced the incidence of acute respiratory tract infections in selected patients with asthma.
Asthma resistant to steroids and IL-17
The pathophysiological mechanisms involved in corticosteroid-resistant asthma appear to be somewhat different. Colonization of the airways with proinflammatory bacteria such as Haemophilus influenzae, oxidative stress (associated with air pollution) and vitamin D deficiency would play an important role in this type of asthma. Vitamin D enhances antimicrobial pathways and induces antioxidant responses.
Patients with corticosteroid-resistant asthma synthesize less IL-10. In these patients, administration of calcitriol is associated with recovery of the clinical response and of IL-10 production. Likewise, in patients with corticosteroid-resistant asthma, IL-17 appears to induce pathological neutrophilic inflammation, a phenomenon that reverses after the administration of vitamin D.
Vitamin D and remodeling of the airways
The final result of the abnormal immunological responses in asthma is the remodeling of the airways, associated with smooth muscle contraction and mucus secretion in the short term, and with remodeling and fibrosis in the long term. Vitamin D prevents the proliferation of smooth muscle cells in the airways.
Clinical data on the use of vitamin D for the treatment of asthma
The Vitamin D Add-on Therapy Enhances Corticosteroid Responsiveness in Asthma (VIDA) study showed that for every 10 ng/ml increase in serum levels of 25(OH)D, the rate of therapeutic failures and exacerbations fell meaningfully.
Although the rate of asthma exacerbations did not decrease significantly in the total group assigned to vitamin D therapy, an exploratory analysis revealed a significant decrease in the frequency of exacerbations in the group of patients who reached 25(OH)D3 levels ≥ 30 ng/ml.
In multiple investigations and meta-analyses, vitamin D supplements substantially decreased the rate of severe asthmatic exacerbations in patients with asthma.
With the exception of corticosteroid-resistant asthma, the different asthma endotypes have not been studied in detail in controlled clinical studies. Although more work will undoubtedly be required to answer these questions, the information as a whole suggests that optimal vitamin D status matters for both the onset and the clinical course of asthma.
Sleep is part of the daily routine, but most people find it difficult to sleep properly at some point in their lives, a problem known as insomnia. It usually lasts a short period, perhaps when the individual is worried, nervous or stressed. When these situations pass, you go back to sleeping normally. However, if you cannot return to sleeping well, it can become a real problem, because sleep keeps our minds and bodies healthy.
What is sleep?
Sleep is the regular period in every 24 hours during which we are unconscious and unaware of our surroundings. There are two main types of sleep:
• REM sleep (Rapid Eye Movement): It comes and goes throughout the night, and constitutes about a fifth of our sleep. The brain is very active, our eyes move quickly from side to side and we dream, but our muscles are very relaxed.
• Non-REM sleep: The brain is quiet, but the body can move. Hormones are released into the blood, and the body is repaired after the day's wear and tear.
There are four stages of non-REM sleep:
1. "Pre-sleep": the muscles relax, the heart beats more slowly, and the body temperature drops.
2. "Light sleep": the individual can be easily awakened without feeling confused.
3. "Slow wave sleep": the blood pressure falls, the individual can talk or walk asleep.
4. "Slow and deep wave sleep ": during this time, it is very difficult to wake up, if someone wakes you up, you will feel confused.
Sleep is a biological necessity that allows the body to restore the physical and psychological functions essential for full performance. Sleep and wakefulness are brain functions and, therefore, are subject to alterations of the nervous system. Sleep is neither a passive state nor a mere absence of wakefulness, but an active state in which changes occur in bodily functions, along with mental activities of great importance for the physical and psychological balance of individuals. During sleep, hormonal, biochemical, metabolic and temperature changes occur that are necessary for the proper functioning of the human being during the day.
Sleeping adequately allows the release of oxytocin during non-REM sleep. It has been proven that this hormone helps relieve anxiety, increases confidence, and reduces social fear. Along with serotonin (a neurotransmitter released during sleep), oxytocin increases the feelings of love, empathy, and connection with other individuals due to its activity in the 5-HT1A receptors. Furthermore, maintaining adequate sleep hours generates an increase in serotonin concentrations in the brain which has been linked to the treatment of mental disorders such as depression.
Sleep also releases dopamine, the neurotransmitter widely known as being responsible for the sensation of pleasure. However, the latest findings show that its main function could be motivation since it was shown that people more focused on meeting certain demanding goals were those with the highest concentration of dopamine in the prefrontal cortex and the striatum.
How much sleep do we need?
This depends mainly on your age.
• Babies sleep about 17 hours a day.
• Older children only need 9 or 10 hours each night.
• Most adults need about 8 hours of sleep each night.
• Older people need the same amount of sleep, but usually have only one period of deep sleep at night, usually in the first 3 or 4 hours. After that, they wake up more easily. We also tend to dream less as we get older.
There are differences between people of the same age. Most of us need 8 hours per night, but a few people manage with only 3 hours per night. However, this can have serious consequences in the future.
Sleep disorders in adult life
You may feel that you do not get enough sleep or that even if you sleep the necessary hours, you do not get a good night's rest. There are many reasons for not sleeping well:
• The bedroom can be too noisy, hot, or cold
• The bed can be uncomfortable or too small
• Not having a regular sleep routine
• Not doing enough exercise
• Eating too late, which makes it difficult to fall asleep
• Tobacco, alcohol, and drinks that contain caffeine such as tea and coffee
Other more serious reasons include:
• Emotional problems
• Anxiety and worries
• Depression: waking very early and being unable to go back to sleep
What happens if I do not sleep well?
Scientific studies increasingly give evidence that not sleeping well can affect our daily lives and our health. There are multiple consequences, both physical and psychological when we fail to have a restful sleep.
According to experts, a person should sleep about 8 hours a day to maintain an optimal physical, emotional and mental state. However, modern lifestyles have worsened both the quality of sleep and the amount of time set aside for rest.
Studies have shown the negative health effects of restricting nighttime sleep. The results indicate that short periods of sleep have a negative impact on carbohydrate metabolism and endocrine function. Both factors are considered fundamental parts of the normal aging process, so if the habit of shortening sleep persists, the severity of the chronic disorders associated with aging is likely to increase.
Multiple studies have shown a marked increase in glucose concentrations, which predisposes people to metabolic diseases and increases the risk of obesity. The sympathetic nervous system also suffers negative alterations, with increased adrenergic activity and, consequently, higher plasma levels of adrenaline and cortisol, which are widely linked to cardiovascular diseases such as hypertension, arteriosclerosis and ischemic heart disease, among others.
In a study published by Harvard Health Publications, it was evidenced that with sleep deprivation, patients presented deterioration of verbal fluency, planning capacity, creativity and originality, slowing of reaction time, signs of deactivation in the EEG, and drowsiness. The performance of long, repetitive and monotonous tasks is affected, especially in the case of newly acquired skills. Short-term memory impairments or reversible neuropsychological disorders may also appear in tasks involving the prefrontal cortex.
According to research conducted by the National Sleep Foundation of the United States, people who cannot sleep at least 6 hours a day, triple the risk of falling asleep at the wheel as a result of deterioration of mental coordination. The mood can also be affected, with a slight increase in anxiety, depression, irritability, confusion, etc.
Also, sleep deprivation has immunosuppressive effects. The ability of lymphocytes to produce cytokines is negatively affected, and there is a decrease in the production of tumor necrosis factor alpha (TNF-alpha) and some interleukins, which predisposes people to infectious diseases, mainly those that affect the respiratory system. In fact, it was determined that those who slept less had a greater risk of dying at a young age compared to those who slept properly.
Other studies have indicated that sleep deprivation delays the recovery of the hypothalamic-pituitary-adrenal axis and produces alterations in glucocorticoid feedback. Thus, lack of sleep can decrease resistance to stress and accelerate the effects of glucocorticoid excess on metabolism and cognitive functions.
Every day, like millions of Americans, Dr. Walter Koroshetz, 65, who directs the National Institute of Neurological Disorders and Stroke, takes a pill to control his blood pressure. He claims besides the therapeutic benefit of lowering his blood pressure, his medication helps him reduce his risk of dementia and helps keep his brain healthy and sharp.
Koroshetz is responsible for the institute's public health campaign called Mind Your Risks. Its goal is to let people know that there is a link between high blood pressure and stroke and dementia.
Koroshetz, as part of his campaign, also endorses efforts to keep your blood pressure down by exercising and paying attention to weight and diet.
The science underlying his concerns over high blood pressure is solid. Researchers have long understood that when blood pressure rises, it strains the tiny blood vessels that keep brain cells alive.
"With every pulse of your heart, you are pushing blood into these very small blood vessels in the brain… and when the heart pushes too hard, as it does when blood pressure is elevated, it can cause damage that can lead to a stroke.”
Koroshetz points to two recent large studies that have revealed an alarming trend among stroke patients…
"If you had a stroke, even a small stroke, your risk of dementia within the next two years was greatly magnified… So there's something about having a stroke that drives a lot of the processes that give rise to dementia."
The evidence is clearest for a type of dementia called vascular dementia, which occurs when something blocks or reduces the flow of blood to brain cells. Now, as a result of new studies, it seems that high blood pressure also appears to increase a person's risk of developing Alzheimer's disease, which is associated with the accumulation of plaques and tangles in the brain.
Koroshetz believes, as do many experts these days, that if people really knew about the link between dementia and high blood pressure, they might be more inclined to do something about it…
"Only about 50 percent of people who have hypertension are actually treated," he says. "So I think there's a lot to be said for trying to get high blood pressure under control."
The Alzheimer's Association is helping get out the word through Koroshetz's campaign and via a presentation of new research on blood pressure and Alzheimer's at its annual scientific meeting in Chicago. And the group is encouraging people to control high blood pressure.
"The good news is that we can control blood pressure now," says Maria Carrillo, the group's chief science officer. "We can do that with exercise, with lifestyle, with healthy eating, and also with medications."
The USDA released its Dietary Guidelines, as well as information on the so-called “shortfall nutrients” that Americans are not getting enough of. Here are four important nutrients you may not be getting enough of and how to get them through the foods you eat.
1. Fiber
Why You Need It: Fiber can help prevent type-2 diabetes, certain types of cancer, and heart disease. Research also suggests that consuming fiber-rich foods might boost weight loss by helping you to feel fuller after you eat. Fiber is also important to keep the digestive tract moving. But most of us eat only about half as much fiber as we should. Nutrition guidelines recommend that women eat 25 grams daily and men eat 38 grams daily; the average American consumes only about 14 grams.
How to Get It: Load up on plant-based foods—the less processed the better. (Consider this: a medium orange has 3 grams of fiber; a cup of OJ has zero.) Whole grains, such as oatmeal (3 grams per 1/2 cup), and beans (about 6 grams per 1/2 cup) are also great sources.
2. Calcium
Why You Need It: Calcium is important for keeping bones and teeth strong, but it also helps muscles contract, nerves transmit signals, blood clot and blood vessels contract and expand. Adults aged 19 to 50 need 1,000 mg per day; for women 51-plus (and men 70-plus), it’s 1,200 mg daily.
How to Get It: Dairy products are good choices (choose nonfat or low-fat to limit saturated fat), delivering between 300 mg (milk) to 490 mg (nonfat plain yogurt) per 1-cup serving. Some dark leafy greens also offer calcium that’s well absorbed by the body: for instance, kale and collard greens provide 94 mg and 266 mg per cup, respectively.
3. Potassium
Why You Need It: Potassium is critical for helping nerves transmit signals, muscles contract and cells maintain fluid balance inside and out. Newer scientific evidence demonstrates that potassium helps maintain normal blood pressure.
How to Get It: By eating a variety of fruits and vegetables—they’re full of this nutrient. But according to the Centers for Disease Control, only 32.5% of adults eat 2 or more servings of fruit per day and only 26.3% eat the recommended 3 or more servings of vegetables per day. Here are a few easy ways to increase intake of fruits and vegetables:
• Make fruit filled smoothies with fresh or frozen (not canned) mixed fruit, bananas, orange juice and pomegranate juice for an anti-oxidant boost
• Have a side salad with lunch and dinner.
• Use leftover veggies in a protein packed veggie frittata
• Have mixed fruit with a drizzle of chocolate sauce for an anti-oxidant packed dessert
4. Vitamin D
Why You Need It: Vitamin D is a fat-soluble nutrient that’s important in bone building and has been linked with lower incidences of cancers and lower rates of immune-related conditions, such as type-1 diabetes and multiple sclerosis. The primary way we get vitamin D is by making it ourselves—UV rays from the sun help us to produce it. In the wintertime, in northern latitudes, many people start to run out of their internal vitamin D stores.
How to Get It: Soak up some sun (ultraviolet, or UV, rays cause skin cells to produce vitamin D). Eat vitamin-D-fortified foods, such as milk, soymilk and cereals. Vitamin D is also found naturally in a few foods: fatty fish, such as salmon, mackerel and sardines, and in egg yolks.
If you live in the northern part of the United States, spend lots of time indoors and/or slather on the sunscreen anytime you’re outside, you may not be getting enough. Some studies suggest that as many as 7 out of 10 Americans are deficient in vitamin D. To be absolutely sure you’re covering your needs for this nutrient, consider a vitamin D supplement (for folks ages 1 to 70, the recommended amount is 600 IU).
Forty million Americans suffer from sleep problems, and 29% report averaging less than six hours of sleep a night. 70 million say they suffer from insomnia, while loss of productivity resulting from sleep issues costs U.S. employers $18 million per year.
New research shows that not getting enough sleep may have more serious consequences than missing a day or two of work.
In a study reported in the Journal of the American Medical Association, researchers at the University of Chicago found that too little sleep can promote calcium and plaque buildup in the heart arteries. This buildup can ultimately cause heart attacks and strokes.
The research team documented for the first time the exact risk of not getting enough sleep, finding that one hour less on average each night can increase coronary calcium by 16%.
The study comprised a group of 495 men and women aged 35 to 47. The results showed that 27% of those getting less than five hours of sleep each night had plaque in their heart vessels. Of those sleeping five to seven hours a night, 11% had plaque, while only 6% of subjects sleeping more than seven hours each night had evidence of plaque buildup.
Dr. Tracy Stevens, spokesperson for the American Heart Association and a cardiologist at Saint Luke's Mid-America Heart Institute, goes further and states that "We have enough evidence from this study and others to show that it is important to include sleep in any discussion of heart disease."
11 Year Study Finds that Insomniacs Are at Higher Risk for Heart Attacks
Insomnia can wreak havoc on your life. Chronic insomnia can last for months or years. Most people with chronic insomnia spend several nights a week struggling to fall asleep or stay asleep.
The results of a large-scale study investigating the connection between heart health and insomnia reinforce the findings of the University of Chicago team. Scientists at the Norwegian Institute of Science and Technology surveyed 52,610 men and women and followed up with the participants over a period of 11 years.
The results of the study were adjusted for several health and lifestyle factors, including age, sex, education, physical fitness, smoking, alcohol consumption and high blood pressure. What the researchers found was revealing:
• Study participants who had difficulty falling asleep had a 45% greater risk of heart attack compared to those who didn't have problems falling asleep.
• Participants having trouble staying asleep throughout the night had a 30% greater risk of heart attack than those participants able to sleep through the night.
• Those who woke feeling tired had a 27% higher risk of heart attack than people who woke feeling refreshed.
If you're having sleep problems, consider keeping a journal. Keeping regular track of bedtimes and wake times, as well as how you feel in the morning when you wake up, can give you a clear picture of how you're really sleeping. Check with your doctor if problems persist.
These and other studies are making it clear getting enough sleep could save your heart. Taking a supplement like Oraescin is another preventative step you can take to promote the overall health of your circulatory system.
Symptoms of low-testosterone such as a decreased sex drive, more belly fat and reduced vitality are alarming on their own... but several research studies are linking low testosterone levels with a higher risk of mortality as well.
A study published in the Journal of Clinical Endocrinology & Metabolism found that older men with low testosterone may die sooner than other men their age who have normal testosterone levels.
Researchers evaluated 794 men between 50 and 91 years old who were followed for an average of 11.6 years. Those with the lowest testosterone levels at the beginning of the study were 40% more likely to die over the course of the study than the men with higher T-levels.
Another study was carried out by researchers at the VA Puget Sound Health Care System and the University of Washington at Seattle. This study evaluated 858 males over the age of 40 who were grouped according to their testosterone levels and followed for an average period of 4.3 years.
Men in the low testosterone group had an 88% increased risk of death compared to the group who had normal testosterone levels... even after variables such as age and other illnesses were factored in.
Another study worth mentioning was published in the online journal, Heart. Researchers in this study evaluated 930 men, each diagnosed with coronary artery heart disease.
They were followed for 7 years, during which time the research team took tissue samples from the participants to evaluate both bioavailable testosterone as well as total testosterone.
A total of one in four of the men was found to have low testosterone levels... and 42% of these men died, or roughly two out of every five in that group.
Conversely, among those with normal hormone levels, approximately 12% died, which was equivalent to one out of every eight men who participated in the study.
A similar study was led by Dr. Giovanni Corona of the University of Florence in Italy. In this study, researchers evaluated the testosterone levels of 1,687 men who were seeking treatment of erectile dysfunction. There was an average follow up period of 4.3 years.
During that time, 137 of the men had had a heart attack or other major heart problem, and 15 of the men died. Dr. Corona's team found that those who had lower levels of testosterone were the most likely to die of heart problems.
The research is a wake-up call for men over the age of 50. Other studies are confirming these findings that having low testosterone not only impacts your every day health, including your heart health, it may shorten your life as well.
On the other hand, having higher levels of testosterone can be protective to the heart, and can lower your risk of other health problems like obesity and blood sugar issues.
Make a point of taking T-Boost, an all-natural supplement that promotes healthy testosterone levels.
Designed to turn on your body’s natural hormone production, T-Boost helps keep your heart healthy and the grim reaper at bay. As it’s been said, an ounce of prevention is worth a pound of cure!
Strong circulation depends on a number of factors including a healthy heart, strong vein walls, and ideal levels of both good (HDL) and bad (LDL) cholesterol.
When cholesterol levels are at their ideal balance, blood flows freely throughout veins and arteries carrying oxygen and nutrients to the brain and other vital organs. If your cholesterol is high, lowering those levels is a critical part of improving your circulation as well as your overall health. Taking a more natural approach to lowering your cholesterol levels has a number of advantages. These include lowering the cost of medication, a decrease in unnecessary visits to the doctor’s office, and an increase in your overall health and well-being.
One of the best and easiest ways to start the process of reducing dangerously high levels of cholesterol is to get plenty of exercise.
Not surprisingly, regular physical activity has been shown to have an effect on the cholesterol levels in the body. Exercise, especially regular aerobic exercise can also be a great way to help burn calories, and maintain the body and weight that is right for you.
While researchers aren't exactly sure how exercise lowers cholesterol, they are beginning to have a clearer idea. What is known is that a healthy body weight and a healthy fat to muscle ratio for the body help to keep one’s cholesterol levels in a safe range.
What's more, when you're overweight, you tend to have a higher amount of low-density lipoprotein (LDL) in your blood. This type of lipoprotein has been linked to heart disease.
If you are just starting a regular exercise regimen, it's important to start slowly. Be sure to check in with your doctor to evaluate your current cardiovascular health. You might require blood tests or a treadmill test to see how your heart reacts when you exercise.
Beyond the benefits of lowering your cholesterol, there are other positives that come with exercising regularly. These include keeping your bones strong, improving your mood and circulation, and reducing your risk of cancer, diabetes, stroke, and obesity.
Eat More Heart-Healthy Foods
A heart-healthy diet is another great way to help reduce cholesterol naturally. While it can be challenging to change years of accumulated eating habits, the effort is worth it.
To begin, choose healthier fats. Saturated fats, the kind found in red meat and dairy products, raise your total cholesterol and the low-density lipoprotein (LDL) cholesterol, also known as the "bad" cholesterol. As an alternative, choose leaner cuts of meat, low-fat dairy products and monounsaturated fats, which are found in olive, peanut and canola oils.
The next thing is to eliminate trans fats, which are found in fried foods and commercial baked products like cookies, crackers and cakes. One way to tell if a food contains trans fat is if it contains partially hydrogenated oil. Even though these foods may taste good, they're not good for your heart.
In addition, put away refined flour products and choose whole grain foods. Various nutrients found in whole grains promote heart health. Look for whole-grain breads and whole-wheat pasta. Choose brown rice instead of white rice, or try quinoa, a high-fiber, protein-rich whole grain.
Don't forget to eat lots of fruits and vegetables, which are rich in dietary fiber and help lower cholesterol. Include a mixture of colors and consider including things like vegetable casseroles, soups and stir-fried dishes on the menu.
Other foods to include are those rich in omega-3 fatty acids such as salmon, walnuts and almonds. Omega 3s have been shown to reduce the "bad" cholesterol and reduce inflammation in the body.
Lastly, take a supplement high in bioflavonoids like Oraescin for optimal heart health. Taking Oraescin gives your arteries, capillaries, veins and heart great circulatory support.
If you had chickenpox as a child or a young teen, you may think you're done with it. But without realizing it, you could be at risk for getting the disease known as "shingles".
In simple terms, the virus that caused your chickenpox can remain dormant in your nervous system. When your immune system is healthy and strong, it usually keeps the virus at bay. Aging and stress factors, however, can weaken your immune defenses and reactivate the virus, resulting in shingles.
Unfortunately, many people are either completely unaware of the disease, know very little about it and/or aren't aware of the risk factors.
A recent national survey by the American Pain Foundation found that over half of the respondents were not sure of the risk factors for shingles. Many of the respondents did not know about the relationship between chickenpox and shingles either.
While anyone who has had chickenpox can potentially develop shingles, 50% of the cases are among people over the age of 60. Stephen Tyring M.D., professor of medicine at the University of Texas Health Science Center in Houston, noted that the risk of shingles increases with age.
"With each decade, a person's immunity weakens, so that by 60 years of age, the likelihood of shingles significantly increases," says Tyring. "In fact, one out of two people who live to the age of 85 will have had shingles." (1)
In addition, if you have a family history of shingles, you may be more susceptible to developing the disease. In a report published in the journal Archives of Dermatology, Tyring and his research team identified family history as one reason why some people might be more susceptible to shingles. (2)
According to Tyring, "Your risk is double that of someone who has had no relatives with the virus. The estimate, however, is most valid for first degree relatives such as a mother, father or sibling."
How to Minimize Shingles Pain
The onset of shingles isn't always noticeable. You may experience a tingling sensation, itchiness or varying degrees of burning and pain. During the initial days of symptoms, blisters will burst and a rash will form, usually on one side of the body or face. The rash will typically heal in two to four weeks. In some cases, there might be longer-term nerve pain which can persist for months or even years after the initial rash has healed. The older you get, the more at risk you are for long-term nerve pain, which can be quite severe.
Although there is no known cure for shingles, there are ways you can relieve the symptoms.
For the rash, keep your skin as dry and clean as possible, which helps reduce the risk of bacterial infection. You may want to wear loose-fitting clothes to minimize any rubbing against the skin from clothes that are too tight.
To help boost your immunity to the virus that causes shingles, consider taking up Tai Chi, which is a traditional Chinese form of exercise. A study published in the Journal of the American Geriatrics Society found that Tai Chi may help older adults avoid getting shingles.(3)
Depending on the severity of the pain, an all-natural solution like Isoprex may provide relief. Isoprex not only helps relieve pain, it works safely and gently to stop dangerous inflammation in its tracks as well... without the side-effects or worries of over-the-counter and prescription pain medications.
Osteoarthritis (OA) is a painful and debilitating joint disease that affects 27 million Americans.
According to the Center for Disease Control and Prevention (CDC), one in two Americans will get some form of OA in their lifetime. In addition, it’s estimated that 1 out of every 2 will get symptomatic knee OA in their lifetime as well.
What’s more, it’s estimated that your risk of getting knee OA increases to 57% if you have had a past knee injury. In addition, your risk goes up to 66% if you suffer from obesity.
Medically speaking, OA is a joint disease that mostly affects the cartilage, which is the soft tissue that surrounds the bones in your joints. When you have OA, the cartilage breaks down and wears away, allowing the bones to rub directly against each other.
It’s this rubbing that causes you pain and causes the joint to swell, resulting in a loss of motion and mobility. Bone spurs may grow on the edges of the bones, and bits of bone or cartilage can break off and float around inside the joint space. As you might imagine, this can be quite painful.
The CDC goes on to report that many people fail to be proactive because they believe arthritis is something that happens as you age... and that you have to learn to live with the aches and pains.
The good news is that unless you have a family history of arthritis, such as one or both of your parents having OA, you don’t have to suffer needlessly. And perhaps most importantly, you can take steps to prevent OA from developing in the first place.
Could a Cure for Osteoarthritis Be On the Way?
A new study, reported in the Proceedings of the National Academy of Sciences, suggests that researchers may be closing in on a way to eliminate the pain associated with OA.
The study was conducted at Rush University Medical Center in collaboration with researchers at Northwestern University, both in Illinois. What makes this particular study so important is that researchers focused on the “pain pathway” rather than the “cartilage break-down pathway”.
Using a surgical mouse model, the medical researchers were able to track the development of both pain behaviors and the molecular events taking place in the nerves. Then, they correlated the data over an extended period of time.
In the assessment of the data, they looked at changes in the nerve ganglia that carry pain signals toward the brain. They were able to identify the mechanism that is central to the development of OA pain.
To confirm their findings, the researchers blocked the mechanism in the mice at nine weeks after surgery. They found that this reversed the decrease in the movement-provoked pain behavior observed in the mice that didn’t have the mechanism blocked.
The belief is that the research could have major implications for future treatment of OA, especially for those in whom the condition has become extremely debilitating. However, it’s too early to tell if this research will lead to a permanent cure to OA.
With that said, and depending on the severity of your pain, an all-natural solution like Isoprex can provide immediate joint pain relief. It works safely and gently to stop dangerous pain-causing inflammation in its tracks... without any side-effects.
Keep a supply of Isoprex on hand for whenever the need is there.
If you think turning 45 has to mean the end of having fun in the bedroom, you'll be happy to learn of a new survey conducted by Zogby International on sex after age 45.
Sex can still be fun when you hit your middle age, but it may take a little more work.
Nearly 3,000 people age 45 and older were interviewed nationwide about changes in their sex lives. Perhaps this may come as a surprise -- researchers found that Americans over 45 are often unaware of what happens to their own sexuality as they age.
"In this country with the kind of media saturation we have and where sex is certainly no longer a taboo, it is surprising that people are not more aware of the potential for changes in their sexuality as they age”, says Michelle Van Gilder, director of international marketing for Zogby.
That said, nearly three out of five survey participants consider themselves sexy and desirable, despite a cultural obsession with youth. The survey was conducted with sex therapist Dr. Ruth Westheimer, best known as Dr. Ruth.
The numbers are revealing, as 73% of men and women said that after turning 45, they noticed changes in their sexual desire. Over two-thirds say they began experiencing differences in sexual functioning about the same time. About 50% say they were surprised by the changes in their sex lives and over a third were caught off guard by the changes.
Ignorance Is Not Bliss
According to Dr. Ruth, such ignorance is a hindrance to sexual bliss.
She went on to suggest that the need for sexual education is not limited to teens. Older men and women need information about what happens to their libido and bodies as they age.
"Somehow with all the talk, with all the television, the message is not going through as much as we need," Dr. Ruth says. "They believe they are always going to be 25, they believe the change of life doesn’t apply to them."
What's more, when properly educated and prepared for the changes in sexual functioning which occur over time, middle-aged folks "can learn to have sex in the morning, to not drink too much the day before, all kinds of things," she says.
Here are some of the interesting results from the survey...
-- 65% of men experienced noticeable loss of ability to have erections
-- 45% of men took a drug for erectile dysfunction
-- 34% of women reported that vaginal dryness lessened sexual satisfaction
On the positive side of things...
-- Survey participants said that less bed hopping meant less worrying about STDs
-- 75% commented that they've discussed libido changes with their partner
-- Over 50% of women say not worrying about birth control has had a positive effect on their sex lives
To help deal with unexpected changes in your libido, take a Resveratrol supplement every day. Resveratrol supports strong blood vessels by strengthening their walls. It also keeps damaged, stretched or stiff blood vessels from leaking. This all helps regulate blood flow and pressure, so that oxygen-carrying blood is delivered to your tissues and organs—including your penis.
That’s where Revatrol with its 100mg of erection-boosting Resveratrol comes in. It works to increase Nitric Oxide (NO) and the enzyme known as cGMP, which causes the tissues in the penis to relax so NO-rich blood can flow in and get you hard. Just one caplet a day gives your body the Resveratrol it needs to keep you ready for sex at a moment’s notice.
It’s the one supplement that won’t let you or your partner down.
If you've ever had a migraine headache, then you know how debilitating they can be. Migraine sufferers typically experience a diminished quality of life along with impaired physical, social and occupational functioning. The pain can be severe.
The statistics may startle you. Migraine afflicts an estimated 10% of the world's population. In the United States, The Institute of Medicine recently reported that nearly 40 million Americans suffer from migraines. (1)
At a recent meeting of the American Pain Society (APS), David Dodick, M.D. and professor of neurology at the Mayo Clinic in Phoenix, noted that migraines have a genetic and biological basis.
"Today we know that migraine is a largely inherited disorder characterized by physiological changes in the brain, and, if attacks occur with high frequency, structural alterations in the brain," Dodick said. (1)
So... are you at risk for getting migraine headaches? It may not be a twist of fate if you are experiencing them right now. There could be something else going on that is a contributing factor.
Some of the triggers can be managed, such as stress, lifestyle choices including smoking and drinking, and high blood pressure. But equally important are the factors you have no control over that can predispose you to the condition.
Having one or more of the predetermined risk factors for migraine headaches doesn't mean you will inevitably develop migraines. However, being aware of the risks will help you arm yourself with the knowledge that you need to prevent and treat migraines should they occur.
Migraine Risk Factors You Need to Know:
The risk factors you have little control over include the following:
Family history — If your parents had migraines, your risk may be increased by up to 75%. If possible, it can be helpful to talk to them about their experience, so you can set into place a plan for prevention. Your family history of migraines will also make the diagnostic process simpler.
Gender — If you are female, you are at greater risk to develop migraines. During childhood, boys and girls have the same chance of developing migraine headaches. However, once hormones take center stage, the risks to a female jump significantly. In fact, adult women are three times more likely than men to get migraines. (2)
Hormonal changes — If you are a woman who gets migraines, hormones may be the culprit. During the menstrual cycle each month, hormone fluctuations can cause migraines. Any stress that causes hormones to spike can cause a migraine to occur if a person is susceptible.
Ethnicity — North American Caucasians appear to have a higher risk of developing migraines than either African Americans or Asian Americans. Migraines are less common in Europe or South America and much less common in Africa or Asia. Studies haven't connected this with any conditions in the environment, food supply, or medical knowledge, only genetics.(3)
If you have one or more of these risk factors, talk to your doctor about possible preventative measures. Discuss all your options to keep migraines from becoming a part of your life. In fact, according to Dr. Dodick, "Some studies have shown that migraine attacks can be cut in half or more with preventive treatments."
In addition, keep a bottle of Isoprex on hand. Isoprex is an all natural pain relief formula that can help minimize headache pains... without the side effects or dangers of NSAIDs.
1. Increase Fiber Intake
Dietary fiber plays an important role in the health of our digestive tract. Besides lowering cholesterol, fiber also feeds the healthy bacteria and helps them to flourish. The best sources of dietary fiber are actually whole grains such as whole wheat, brown rice, and whole oats, along with beans, peas, lentils, nuts, and seeds, then fruits and vegetables. Shockingly, most people get only half of the daily recommended 20 to 35 grams of fiber. But be careful to increase your fiber intake gradually; otherwise you’ll most likely experience some unpleasant and painful gas and bloating. Be sure to get plenty of fluids at the same time you eat fiber-rich foods in order to soften the fiber during transit. A hearty bowl of oatmeal and a cup of tea should move things along nicely.
2. Load up on Whole Fruits and Vegetables
Eating a variety of whole fruits and vegetables as opposed to fruit or veggie juice is a great way to get more fiber. Fruits like pears, blueberries, raspberries and apples all contain a minimum of 4 grams of fiber per serving. Vegetables such as red bell peppers, leafy greens, broccoli and sweet potatoes also have a hefty dose of fiber. The pulp of the fruit and veggies are what help scrub your digestive tract and allows better absorption of nutrients and antioxidants.
3. Try Yogurt for Lactose Intolerance
Research suggests that many people who are not able to properly digest lactose, the type of sugar in milk, can tolerate yogurt with live active cultures. Yogurt is relatively high in lactose, but the bacterial cultures used to make it produce some lactase, the enzyme needed to digest the sugar. Great news for those who are lactose-intolerant and looking for good sources of calcium!
4. Read Labels for Hidden (Lactose-Containing) Ingredients
Milk and foods made from milk are the only natural sources of lactose. But many prepared foods, including bread and other baked goods, processed breakfast cereals, instant potatoes and soups, margarine, lunch meats, salad dressing, candy, protein bars and powdered meal-replacement supplements contain milk derivatives. So be sure to read labels carefully if you are lactose intolerant.
5. Good Bacteria for Your Gut?
Probiotics are “friendly” bacteria found in the gut that help us digest foods and fight harmful bacteria. They also include live, active cultures used to ferment foods, such as yogurt. To get the potential benefits offered by probiotics, mix a cut-up banana into a cup of low-fat vanilla yogurt—with a “Live & Active Cultures” seal on it—for a midday snack or turn it into breakfast and add some granola. Try different kinds of yogurt to see which one works best for you. Mix fresh or frozen berries, peaches and banana with yogurt and a couple teaspoons of ground flax seeds for a delicious breakfast on the go or snack. For optimal levels of the good stuff, look for a high quality probiotic supplement that contains several different strains of yeasts and bacteria.
High blood pressure is dangerous and affects many of your daily activities. One of the problems with high blood pressure is that if you have it, you may not feel it. As a result, the absence of symptoms makes it a silent killer, one that can be easy to ignore.
According to the Harvard Heart Letter, high blood pressure can also affect your sex life. It can alter circulation in your body and damage the inner lining of arteries causing them to lose their elasticity making them less able to handle the blood flow to the penis.
Ironically, if you’re taking blood pressure medicine, you have to be careful about taking drugs for erection problems. The combination of the two can lead to a significant and potentially life-threatening drop in blood pressure.
According to WebMD, some types of blood pressure drugs can actually cause erectile dysfunction. Because of this, some men find it difficult to stay on their medication. In fact, it’s estimated that 70% of men who experience side effects from taking high blood pressure medicine (such as problems with erections) stop taking it.
Getting an erection is a highly orchestrated dance between nerves, hormones, blood vessels and psychological factors. Some blood pressure medicines interfere with the production of testosterone, affecting this dance and reducing your sex drive.
Your ability to ejaculate can also be affected if you are taking high blood pressure drugs. When you have an orgasm, the bladder neck closes, which allows the semen to flow out through the penis. Some of the medicines can interfere with this mechanism and make it difficult to ejaculate.
Resveratrol For Better Sex
The good news is that researchers at the UC Davis Med School found that Resveratrol reduces blood pressure. And the higher a study participant’s LDL level, the greater the drop they experienced. So what is it about resveratrol that makes it so special?
To begin, the skin and the seeds of red wine grapes are the richest known source of oligomeric proanthocyanidins, or OPCs for short.
OPCs are powerful compounds that fight free radicals. They are crucial for supporting healthy circulation, and perform a variety of roles throughout the body that are essential to optimal health.
For example, OPCs act as gentle cardiovascular cleansing agents that keep your heart and arteries clean and healthy. They improve blood flow in your brain and body, and promote normal blood pressure and cholesterol levels.
Essentially, resveratrol is a vasodilator—which means it opens up your arteries and capillaries to rush more blood and oxygen to your organs. Taking resveratrol gives your arteries, capillaries, veins and heart great circulatory support—without the headache or dizziness often associated with prescription drugs.
Resveratrol supports strong blood vessels by strengthening their walls. It also keeps damaged, stretched or stiff blood vessels from leaking. This all helps regulate blood flow and pressure, so that oxygen-carrying blood is delivered to your tissues and organs—including your penis.
While you should always consult with your doctor about your high blood pressure concerns, an effective way to get your OPCs is by taking a resveratrol supplement such as Revatrol. Revatrol contains the highest amount of OPCs of any resveratrol supplement on the market – an astonishing 95%.
About 700,000 people suffer a stroke each year in the United States, with stroke being the third leading cause of death. Remarkably, studies show that up to 80 percent of strokes can be prevented.
What's important is that you learn how to recognize and respond to the signs and symptoms of a stroke. Along with this, it's important that you learn how to manage your risk of getting a stroke as well.
To begin, women should consider drinking something other than soda pop. Researchers at Osaka University in Japan found that women who drink just one soft drink each day dramatically raised their risk of suffering a deadly stroke by 83%. (1)
The Japanese researchers tracked the eating habits of nearly 40,000 men and women between the ages of 40 and 59 for a period of 18 years. This included how many soft drinks they consumed. During the course of the study, almost 2,000 of the participants suffered a stroke.
When the study period ended, the soda consumption of those who had strokes was compared with those who didn't have a stroke. The results were startling.
In particular, researchers found that the women drinking soda every day were at a much higher risk of suffering what's called an "ischemic" stroke. This type of stroke occurs when a blood vessel supplying the brain becomes blocked, cutting off blood flow to part of the brain.
What's more, it didn't matter if the participants were drinking regular or diet sodas, as the risk was equally high. As the Japanese researchers noted, "Soft drink intake was positively associated with risk of ischemic stroke for women".
Raise that Glass of Wine in a Toast to Stroke Prevention
On the other side of the coin, another study has found that drinking a glass of wine every day may help reduce the risk of stroke in women.
Published in the journal Stroke, a decades-long study of 84,000 women found that women who had a glass of wine every day were at less risk to suffer a stroke than women who abstained from drinking.
Specifically, the women who drank about a half glass of wine per day were 17 percent less likely to have a stroke, while those who drink a full glass per day reduced their risk of stroke by 21 percent. (2)
Researchers noted that the risk of stroke did not lessen further when the women drank more than a glass of wine per day. The lead researcher on the study, Dr. Monik Jimenez, commented that although drinking wine can help reduce the risk of stroke, moderation is always advised.
"Higher intake can lead to high blood pressure and atrial fibrillation which are both risk factors for stoke," said Dr. Jimenez. "Our findings really stress moderation for women who do drink."
Another study published in the same journal, Stroke, found that a diet that includes oranges and grapefruits may also reduce the risk of stroke in women.(3) The study, which followed 69,622 women for 14 years, found that the women who ate the most citrus fruit had a 19 percent lower risk of having an ischemic stroke than women who ate the least.
To help strengthen your body's defense against heart problems, take Revatrol daily. In addition to containing 100mg of Trans-Resveratrol, which is a potent form of Red Wine extract, Revatrol gives your body all the heart, artery, cholesterol and cellular benefits of 50 bottles of red wine.
Antibiotics were hailed as “miracle drugs” when they first burst onto the scene in 1942 with the introduction of penicillin. Doctors were finally able to subdue life-threatening infections with a single magic bullet.
It was a blessing—or so we thought.
For a long time, the medical mainstream did its best to ignore the frightening fact that the microbes were fighting back. Today, antibiotic resistance is headline news. The rise of “super bugs” like MRSA, that can be deadly no matter what antibiotics we throw at them, is practically common knowledge.
In addition, there is another side effect of antibiotics that may ultimately prove more deadly than the rise of the "super bugs" and it's this -- antibiotics don’t discriminate.
Instead, they kill all bacteria in their path. Not just the pathogenic germs that cause illness but also the nonpathogenic “good” bacteria in your gut that are absolutely critical to health.
Today’s wide-spectrum antibiotics like the penicillins, tetracyclines, sulfonamides and aminoglycosides are the biological equivalent of a drive-by shooting. Everything takes a bullet, not just the targeted germs. The collateral damage to your intestinal ecology can be significant and long-lasting.
Even if you took antibiotics years ago, your digestive system could still be compromised. And when the good bacteria are wiped out, it opens the door for the toxic fungus Candida and the toxic bacterium Clostridium difficile to take over.
Bouts of diarrhea and damage to the colon can result... as well as problems like yeast infections, colds and other immune problems, skin problems, mood swings and more.
Overuse of Antibiotics Kills the Good Bacteria Essential for Your Digestive and Immune Health
To your detriment, doctors have ignored this kill-off for decades. In fact, up to 25% of people taking antibiotics experience the immediate side-effect of diarrhea.(1) But that’s just the tip of the iceberg.
Here are some additional facts that highlight the risk...
• People who take a lot of antibiotics have much higher incidences of colds and flu.(2) This happens because the kill-off of good bacteria leads to a significantly weaker immune system.
• Microflora kill-off by antibiotics is directly tied to the epidemic rise in Clostridium difficile infections that strike 3 million people and kill up to 20,000 victims every year.(3) Even a single course of antibiotics can leave you vulnerable.
• Large-scale studies reveal an alarming correlation between antibiotics intake and increased cancer risk due to the destruction of the microflora that are critical to immune health.
For decades, doctors have willfully ignored the damage done by antibiotics to the beneficial bacteria in the gut. In their eagerness to root out the bad guys, they’ve overlooked the fact that the good guys are being killed too.
In other words, they’ve been bombing the village to protect the people... but the village of your intestines is virtually destroyed in the process.
Until the medical establishment publicly acknowledges the threat that antibiotics pose and act accordingly, you're on your own. And that means taking steps to support your microflora with every healthy means available.
According to The World Health Organization, consuming probiotics on a daily basis helps strengthen the body’s natural defenses by providing friendly bacteria for the intestinal tract.(4)
The solution is to take a probiotic supplement like Prosentials. It is designed to help balance and protect your gut from the damaging effects of antibiotics.
(1) Ibid., Linder.
(2) Margolis, DJ. Antibiotics, acne, and upper respiratory tract infections. LDI Issue Brief. 2006 Feb;11(4):1-4.
(3) Parker-Pope, T. Stomach Bug Crystallizes an Antibiotic Threat. The New York Times. April 14 | 1 | 7 |
The Story of the Threefold Community
The first decades of the twentieth century were a time of social experimentation and spiritual exploration. In New York City in the 1920s, a small band of anthroposophists – students of Rudolf Steiner – ran a rooming house, a laundry, a furniture-making shop, and a vegetarian restaurant near Carnegie Hall. Led by Ralph Courtney, members of the Threefold Commonwealth Group included Gladys Barnett (later Hahn), May Laird-Brown, Louise Bybee, and Charlotte Parker. Not the first association of anthroposophists in New York, the Threefold Group soon became the most active and lively in their efforts to put into practice the social ideals indicated in the writings and lectures of Rudolf Steiner.
Rudolf Steiner and the Social Question
It was during the First World War years that Steiner, already a well-known scholar, educator, and spiritual researcher, turned his attention to the social question. The times gave the topic special urgency. The war was a catastrophe for all of Europe, and Steiner correctly foresaw that the terms of its conclusion would have dire consequences for Germany’s social and economic fabric. Meanwhile, the Russian Revolution of 1917 showed vividly the powerful, widespread yearning for new social forms, and the total inadequacy of existing solutions.
Steiner saw that human development had outstripped existing social forms, even the supposedly forward-looking and revolutionary ones. In response, he offered observations that were neither prescriptive nor Utopian, but rather “how people would arrange things for themselves” if they were given the freedom to do so. If freed from distortions imposed by outmoded political, economic and religious structures, Steiner believed that:
- Every individual would freely express and live by her or his religious and spiritual beliefs – and would confer that right on every other individual (Cultural Life).
- Every individual would enjoy equal political rights – and would honor every other individual’s political rights (Rights Life).
- Every individual’s economic life would be based on the recognition of our universal interdependence with other people for all our material needs (Economic Life).
The Threefold Group took on the task of creating a community where, as Steiner put it, “real cooperation continually renews social forces.” Ablaze with idealism, they threw themselves into pursuing work and social lives driven by ideals of service and goals of social and spiritual improvement.
Their guiding light, Ralph Courtney, had met Steiner while working in Europe for the New York Herald Tribune; soon after, he returned to the US and took it upon himself to find ways to spread awareness of Steiner’s teachings in this country. Indeed, that became his life’s work, beginning with the founding of the Threefold Group and its ventures in New York City.
In 1926, Courtney, Charlotte Parker, Gladys Barnett (later Hahn), Louise Bybee, Margaret Peckham, Alice Jansen, and Reinhardt Mueller – acting on behalf of the Threefold Group – purchased a small farm on Hungry Hollow Road in what was then South Spring Valley, NY. Their aim was to create a center for learning about and living anthroposophical ideals.
Biodynamic gardening began almost immediately, making Threefold Farm the first in North America to use the biodynamic method that had been outlined by Steiner in his 1924 “Agriculture Lectures.” Anticipating by decades the era of Silent Spring and the organic movement, biodynamics introduced a consciously chemical-free method of agriculture that has been shown to go beyond “sustainability,” and actually strengthen and enliven the soil where it is practiced.
With the help of Charlotte Parker, Paul Stromenger, Alice and Fred Heckel, and many others, improvements were made, and additions and new buildings were constructed, all with the aim of getting the farm ready to host large groups of people, and in 1933 the first summer conference was held. In these early years, the summer conferences featured lecturers from Europe who had known and worked with Rudolf Steiner. Many gave their first American lectures at Threefold Farm. The first program in 1933 ran for two weeks and featured classes and lectures on agriculture, art (painting, speech, and eurythmy), science, education, spirituality, and sociology. Within a few years, the “Summer Season” of activities stretched from early June to Labor Day, with a “Summer School” running for three weeks in July.
With the exception of the World War II years, summer gatherings have been held at Threefold every year since 1933. In the early years, attendees (who numbered in the hundreds) slept in self-described “shacks” and ate and attended lectures under rented circus tents. Eurythmy and dramatic performances were staged amongst the trees of the nearby oak grove. Everyone enjoyed swimming in Threefold Pond (a summer pleasure to this day) and taking meditative walks in the neighboring fields and forests.
As the community matured in its role as a center for anthroposophical education and fellowship, it also attracted permanent, year-round residents, and the Threefold community became a center of social experimentation. Innovative forms of land ownership, dispute mediation, and currency were tried.
Following the Second World War, the Threefold community attracted more homesteading families who bought land and built homes on and near Hungry Hollow Road. As the community grew, anthroposophical institutions arose to meet its changing needs. In 1948, Sabina Nordoff and Stephanie Jones started a kindergarten that in time grew into Green Meadow Waldorf School. Green Meadow dedicated its first building (today’s kindergarten) in 1956, beginning a period of construction and expansion that saw the dedications of the Lower School (1966), Gym (1970), Arts Building (1973), and High School (1974). These and many other community buildings were designed by architect Walter Leicht, who also managed Threefold maintenance and construction for many years as a volunteer while maintaining a large private practice. Green Meadow passed a major milestone when it held its first twelfth-grade graduation in 1973.
In 1949, the 200-seat Threefold Auditorium was dedicated, giving a permanent indoor home to the summer conferences and many other artistic and educational events. Mieta Waller-Pyle, Daniel Birdsall, Ralph Courtney, and Carl Schmidt collaborated on the auditorium’s design and construction. In the 1950s, community-based work with Rudolf Steiner’s four mystery dramas began when Hans and Ruth Pusch formed the Threefold Mystery Drama Group, which made its home in the auditorium. That work reached a culmination in August 2014, when the community organized a nine-day festival and conference to celebrate the mystery dramas. Threefold Mystery Drama Group performed all four plays in repertory in English, a historical first.
From its dedication until 1974, the auditorium also housed the research laboratory of Ehrenfried Pfeiffer.
Paul and Ann Scharff arrived at Threefold in 1959 and soon began work to establish an intentional community centered around the care of the elderly – the Fellowship Community. For that purpose, in 1966 the State of New York awarded a charter to the Rudolf Steiner Fellowship Foundation. The Monges family’s Hill Top House, designed by Walter Leicht, became the Fellowship’s main residential and dining facility.
The 1950s also saw further expansion of Main House and the construction of Richard Kroth’s painting studio, which later became the home of Eurythmy Spring Valley. The opening of the Tappan Zee Bridge in 1955 heralded Rockland County’s transformation from agricultural haven to bedroom community. Some far-sighted community members saw that rising property values and burgeoning subdivisions threatened the community’s very existence, so in 1965, the last year of his life, Ralph Courtney oversaw the chartering of the Threefold Educational Foundation and School. The new foundation became an umbrella under which the community’s various property holdings and legal entities were consolidated. Establishing the foundation created a firm institutional framework to house the community’s future initiatives; just as importantly, it preserved the rural feeling along Hungry Hollow Road that is treasured by our residents and visitors to this day.
Summer conferences continued through the 1970s, including “Self Development and Social Responsibility,” a remarkable international youth conference that drew some 600 participants from throughout the U.S. and Europe in August 1970.
Innovations in Adult Education
In 1972, Lisa Monges initiated a training in eurythmy that in time evolved into Eurythmy Spring Valley, which graduated its first class of full-time students in 1976. After some growing pains, Dorothea Mier arrived from Dornach, Switzerland in 1980 to lead the re-founding of the eurythmy school. Dorothea continued as the head of ESV for twenty-five years. The Eurythmy Spring Valley Ensemble embarked on its first performance tour in 1986.
More anthroposophically oriented adult-education initiatives followed. Threefold initiated a Foundation Studies program in the 1970s, and a painting school was added in 1982. In 1986, the Waldorf Institute, a Detroit-based Waldorf teacher-education school, relocated to the Threefold campus and adopted the name Sunbridge College. From 1991 to 2009, Sunbridge College was accredited to grant the M.S. degree in Waldorf education, making it the only state-chartered Waldorf teacher-education program in North America. Responding to changing needs, in 2010 Sunbridge College re-imagined itself as Sunbridge Institute, focused on low-residency programs for aspiring and practicing Waldorf teachers, including a Master’s program offered in partnership with Empire State College. Sunbridge Institute’s programs are recognized by the Association of Waldorf Schools of North America, qualifying graduates to teach in Waldorf schools worldwide. Sunbridge’s popular summer programs for Waldorf teachers carry on the Threefold tradition of summer adult education in service of anthroposophy.
In 1993, the Hungry Hollow Co-op Natural Foods Market, which began in 1973 as a natural foods buyers' club in the basement of a Green Meadow teacher's home, opened its doors to the public at the location of the old Threefold Corner Store. When the Co-op's building was renovated and expanded in 2004, Threefold extended its mandate for conscious land care by installing a 3,000-square-foot rain garden and starting an ongoing program of ecological landscaping.
In 1996, Renate Hiller and Michael Howard co-founded the Applied Arts program of Sunbridge College. After Michael moved away from the Threefold community, Renate led the development of the Fiber Craft Studio, which in 2008 became an independent institution operating under the Threefold Educational Foundation umbrella. Today, the Fiber Craft Studio offers two year-long part-time trainings, one-day workshops, classes for Sunbridge Institute, and the only Waldorf Handwork Teacher Training in North America.
It was also in 1996 that the Rudolf Steiner Fellowship Foundation acquired neighboring Duryea Farm, one of the last remaining family farms in Rockland County. Biodynamic farming and gardening had always been central to the Fellowship Community’s work and life; the addition of Duryea Farm’s orchards, fields, and forests dramatically enlarged the scale of that work. Among other activities, the Fellowship added a cow barn and dairy to their portfolio.
The story of the Threefold community has always been intertwined with the development of biodynamic agriculture and land care in North America. In its earliest days, Threefold Farm was home to the first biodynamic gardens in North America. Ehrenfried Pfeiffer, whom Rudolf Steiner selected to be ambassador of biodynamics to our shores, taught at the first summer conference in 1933, and at dozens more courses in the years that followed. He lived and worked at Threefold from 1946 until his death in 1961, and work at his biochemical laboratory in Threefold Auditorium carried on until 1974.
In 1996, Threefold built upon this legacy by creating the Pfeiffer Center for Biodynamics and Environmental Education. The Pfeiffer Center’s first director, Gunther Hauk, brought to Threefold many years’ experience as a Waldorf teacher, biodynamic practitioner, and beekeeper. In its first ten years, the Pfeiffer Center’s programs for adults and children earned it a national reputation. When Gunther retired in 2007, direction of the Pfeiffer Center passed on to Mac Mead, a former Fellowship Community co-worker and farmer whose ties to the community reached back to the 1970s.
The Seminary of the Christian Community in North America, which was founded in Chicago in 2001, relocated to the Threefold community in 2011. The influx of seminarians, and the Seminary’s public workshops and courses, was a valued addition to the community’s cultural life through the spring of 2019, when the Seminary relocated again to Toronto.
The Otto Specht School, a Waldorf School for children with developmental delays, social and sensory sensitivities, and learning challenges, which operated for many years within the Rudolf Steiner Fellowship Foundation, came under the wing of Threefold Educational Foundation in 2010.
April 2018 brought the retirement of the Foundation’s Executive Director, Rafael (Ray) Manaças. In his 29 years as Director, Ray guided the Foundation and the community through significant growth, including the construction of Holder House, our 40-room student dorm; a major renovation of the Threefold Corner site for the Hungry Hollow Co-op; and capital improvements at Threefold Auditorium. With Mimi Satriano, Ray spearheaded the founding of the Pfeiffer Center in 1996. The Otto Specht School and the Fiber Craft Studio both moved their operations under the Threefold umbrella with Ray’s guidance and encouragement.
Under Ray’s leadership, Threefold Educational Foundation consistently supported the work of the anthroposophical movement by organizing and hosting many conferences, including several National Conferences of the Anthroposophical Society; a series of six annual Research Conferences (2008-13); a 2010 national meeting of the Biodynamic Association that sparked the rebirth of the BDA; and a series of performances of Rudolf Steiner’s mystery dramas that culminated in the 2014 nine-day festival and conference.
Eric Silber, the Foundation’s new Executive Director, came to the Foundation after six years as business manager at Green Meadow Waldorf School. He aspires to continue Threefold’s development as an outward-facing organization, confident in its future, that is fully engaged with the outside world and actively putting the fruits of anthroposophical research – in education, agriculture, the arts, and more – within the reach of every person who could benefit from them.
In Eric’s first year as Director, a historic project neared completion: To combine the agricultural and educational programs of Threefold Educational Foundation (the Pfeiffer Center) and the Fellowship Community into a new, mutually supportive association of work and community. Encompassing nine acres under cultivation, 38.8 acres of pasture and hay, a dairy, an apiary, and a CSA, and deeply entwined with both Fellowship and Threefold community life, this new enterprise is a large and tangible expression of our ideas and ideals for care of the land, new social forms, education, and the future of agriculture.
The residents and institutions of the Threefold community have been promoting spiritual values in the arts, education, and community life since 1926. The 1965 charter of the Threefold Educational Foundation created a secure but flexible foundation upon which all our affiliated institutions – present and future – can evolve in freedom. Today, Green Meadow Waldorf School, Sunbridge Institute, the Otto Specht School, Eurythmy Spring Valley, the Fiber Craft Studio, the Pfeiffer Center, and many other projects and enterprises thrive under Threefold’s physical and institutional umbrella, while innovative new impulses in education, agriculture, land care, and the arts are continually arising. The Threefold Educational Center gives each one a fertile bed in which it can germinate, take root and grow. | 1 | 6 |
The Abbey Theatre (Irish: Amharclann na Mainistreach), also known as the National Theatre of Ireland (Irish: Amharclann Náisiúnta na hÉireann), in Dublin, Ireland, is one of the country's leading cultural institutions. First opening to the public on 27 December 1904, and despite losing its original building to a fire in 1951, it has remained active to the present day. The Abbey was the first state-subsidized theatre in the English-speaking world; from 1925 onwards it received an annual subsidy from the Irish Free State. Since July 1966, the Abbey has been located at 26 Lower Abbey Street, Dublin 1.
Ireland's National Theatre
Address: 26 Lower Abbey Street
Owner: Abbey Theatre Limited (prev. National Theatre Society)
Designation: National Theatre of Ireland
In its early years, the theatre was closely associated with the writers of the Irish Literary Revival, many of whom were involved in its founding and most of whom had plays staged there. The Abbey served as a nursery for many of the leading Irish playwrights, including William Butler Yeats, Lady Gregory, Seán O'Casey and John Millington Synge, as well as leading actors. In addition, through its extensive programme of touring abroad and its high visibility to foreign, particularly American, audiences, it has become an important part of the Irish cultural brand.
The Abbey arose from three distinct bases. The first was the seminal Irish Literary Theatre. Founded by Lady Gregory, Edward Martyn and W. B. Yeats in 1899—with assistance from George Moore—it presented plays in the Antient Concert Rooms and the Gaiety Theatre, which brought critical approval but limited public interest. Lady Gregory envisioned a society promoting "ancient idealism" dedicated to crafting works of Irish theatre pairing Irish culture with European theatrical methods.
The second base involved the work of two Dublin directors, William and Frank Fay. William worked in the 1890s with a touring company in Ireland, Scotland and Wales, while his brother Frank was involved in amateur dramatics in Dublin. After William returned to Dublin, the Fay brothers staged productions in halls around the city and eventually formed W. G. Fay's Irish National Dramatic Company, focused on the development of Irish acting talent. In April 1902, the Fays gave three performances of Æ's play Deirdre and Yeats' Cathleen Ní Houlihan in St Theresa's Hall on Clarendon Street. The performances played to a mainly working-class audience rather than the usual middle-class Dublin theatregoers. The run was a great success, thanks in part to the beauty and force of Maud Gonne, who played the lead in Yeats' play. The company continued at the Antient Concert Rooms, producing works by Seumas O'Cuisin, Fred Ryan and Yeats.
The third base was the financial support and experience of Annie Horniman, a middle-class Englishwoman with previous experience of theatre production, having been involved in the presentation of George Bernard Shaw's Arms and the Man in London in 1894. An acquaintance of Yeats from London circles, including the Order of the Golden Dawn, she came to Dublin in 1903 to act as Yeats' unpaid secretary and to make costumes for a production of his play The King's Threshold. Her money helped found the Abbey Theatre and, according to the critic Adrian Frazier, would "make the rich feel at home, and the poor—on a first visit—out of place."
The founding of the Theatre is also connected with a broader wave of change found in European drama at the end of the nineteenth century. The founding of Théâtre Libre in Paris in 1887 and the work of the Moscow Art Theatre in 1895 represented a challenge to a "stale metropolitanism". This movement echoes Lady Gregory's commitment and determination to make the Abbey Theatre a theatre for the people.
Encouraged by the St Theresa's Hall success, Yeats, Lady Gregory, Æ, Martyn, and John Millington Synge founded the Irish National Theatre Society in 1903 with funding from Horniman. They were joined by actors and playwrights from Fay's company. At first, they staged performances in the Molesworth Hall. When the Mechanics' Theatre in Lower Abbey Street and an adjacent building in Marlborough Street became available after fire safety authorities closed it, Horniman and William Fay agreed to buy and refit the space to meet the society's needs.
On 11 May 1904, the Society formally accepted Horniman's offer of the use of the building. As Horniman did not usually reside in Ireland, the royal letters patent required were granted in the name of Lady Gregory, although paid for by Horniman. The founders appointed William Fay theatre manager, responsible for training the actors in the newly established repertory company. They commissioned Yeats' brother Jack to paint portraits of all the leading figures in the society for the foyer, and hired Sarah Purser to design stained glass for the same space.
On 27 December, the curtains went up on opening night. The bill consisted of three one-act plays, On Baile's Strand and Cathleen Ní Houlihan by Yeats, and Spreading the News by Lady Gregory. On the second night, In the Shadow of the Glen by Synge replaced the second Yeats play. These two bills alternated over a five-night run. Frank Fay, playing Cúchulainn in On Baile's Strand, was the first actor on the Abbey stage. Although Horniman had designed the costumes, neither she nor Lady Gregory was present, as Horniman had already returned to England. In addition to providing funding, her chief role with the Abbey over the coming years was to organise publicity and bookings for their touring productions in London and provincial England.
In 1905, Yeats, Lady Gregory and Synge decided, without properly consulting Horniman, to turn the theatre into a limited liability company, the National Theatre Society Ltd. Annoyed by this treatment, Horniman hired Ben Iden Payne, a former Abbey employee, to help run a new repertory company which she founded in Manchester. Leading actors Máire Nic Shiubhlaigh, Honor Lavelle (Helen Laird), Emma Vernon, Máire Garvey, Frank Walker, Seamus O'Sullivan, Pádraic Colum and George Roberts left the Abbey.
The press was impressed with the building and the Cork Constitution wrote that "the theatre has neither orchestra nor bar, and the principal entrance is through a building which was formerly the Dublin morgue." Theatregoers were surprised and thought it to be scandalous that part of the theatre used to be a morgue. The orchestra was established under the guidance of Dr John F Larchet.
Contributions of founders and funders
Gregory helped create the Irish Literary Theatre, which would later form one base for the INTS, with W.B Yeats and Edward Martyn. She met Yeats in 1898, and he admitted to her that it was a dream of his to create a theatre in which new ambitious Irish plays could be performed. The idea seemed more and more possible to achieve as they kept talking and by the end of their first meeting they had a plan for how to make a "national theatre" a reality. In the first year of the theatre, Lady Gregory was in charge of finding money and support from patrons, and she even donated some of her own money. She was critical in making the ILT and the INTS function financially before Annie Horniman's support.
In 1903, when Horniman offered the INTS a theatre, Lady Gregory schemed to bypass the terms of the deal. She didn't like Horniman and was happy when she left, saying she was "free from her and from further foreign invasion." She wrote many plays for the theatre, specializing in the one-act play.
William Butler Yeats
The Abbey Theatre is sometimes called Yeats' theatre or a manifestation of his own artistic ambitions and ideals. He wanted a theatre in which the playwright's words were the most important thing, prevailing over the actor and the audience. It was very important to him that the authors had control, and it was largely through his efforts that he, Lady Gregory and Synge became the Board of Directors of the INTS. It was only after meeting Lady Gregory that Yeats thought the creation of such a theatre possible. He worked closely with her for almost a year before the first production of the ILT, during which his play The Countess Cathleen and Edward Martyn's The Heather Field were performed to great success, some even calling it "the cultural event of the decade," though others accused him of being too political or even of writing a heretical play.
He then adopted a new, more inclusive political stance, which helped him and Lady Gregory recruit many new patrons, most Protestant and/or Unionist. As early as 1900, Yeats sent a letter to Lady Gregory that implied that he was confident about finding a reliable patron who, at the time, remained anonymous. The patron he was talking about was Annie Horniman, who had anonymously financed Yeats' first play in 1894. By that point, he was starting to want The Abbey to be seen as nationalist. However, by October 1901, he had lost interest in the ILT as a means to express his artistic vision, as he was forced to make sacrifices to accommodate co-workers. He chose to stay because of his relationship with Horniman, whom he saw as a means to secure his ambitions and those of the Fay Brothers' troupe of Irish actors.
His relationship with Horniman was essential to his projects, so much so that he declared in front of an audience that he would not accept money from Nationalists and Unionists, which forced him to change the entire politics of the INTS. He gave this speech in 1903 and by 1904 he was the president of the Abbey Theatre. When Horniman left, he wanted to bring back the nationalist aspect the theatre once had but was stopped by a threat from Horniman to close it down; he finally had the last word with the help of Bernard Shaw and Lady Gregory. During the summer of 1909, Shaw offered his play Blanco Posnet to the Abbey, a play previously censored that allowed him to challenge British authority and to come back to the good graces of Nationalists, thus giving him a new reputation and making the INTS closer to becoming "a representative Irish Institution." Following Horniman's offer to sell him back the theatre, he then tried to "play" her so that she would pay more. Yeats, with the help of Lady Gregory, bought the Abbey back and sued Horniman for the subsidy he believed she owed, but won only on principle, and did not receive the money.
Miss Annie Horniman
Annie Horniman, a British theatre enthusiast and manager, was essential in the creation of the Abbey Theatre, as she was its first significant patron and the woman who offered the edifice in which it would later be established. She was first brought in by Yeats as a costume designer for his play The King's Threshold, as she greatly loved his art and it was also a way for him to get closer to her. Yeats's long relationship with her and her love for theatre made her more likely to agree to become a permanent patron and, by 1901, her money was secured. Her support was so important that he already had a role for her in the Abbey Theatre before it was even created. However, by the time the ILT became the INTS, Yeats had to assure her that her money would not be used to fund a Nationalist rebellion.
Coming from a rich family, she supported both him and the INTS financially and, in 1903, after Yeats eloquently declared his apolitical theatrical ideals, she offered to give him a theatre in Dublin worth thirteen thousand pounds, but for the deal to work, she had strict conditions. Firstly, she requested that his speech, essays on the "Irish National Theatre," and her offer be made public. Secondly, the point she stressed most, there were to be no politics at all. She finally gave the building for the Abbey Theatre in 1904, but remained the owner. Yeats accepted her terms but Gregory and Synge worked on finding ways to finesse their way around them before officially accepting. She didn't want to have anything to do with Irish politics, especially not nationalism, and was very reactive to anything she saw as political, which caused several inflammatory feuds with her colleagues. She also did not care for the accessibility of theatre, which was an important issue for the founders; she created additional rules for ticket pricing and made the Abbey Theatre one of the most expensive theatres in Dublin. From then on, she was the manager of the Abbey Theatre. Over the years, she put many times the theatre's value in money back into it in exchange for input on the plays being staged and respect from the company's directors.
She remained involved for a few strenuous years and left in 1907, angrily realizing she couldn't achieve self-expression at the Abbey, but stayed financially involved until 1910. From 1907 to 1909, she turned on the INTS, essentially threatening to close the theatre if anything she deemed political was performed, even if the interpretation was debatable. After the riots following Synge's Playboy of the Western World, she fully expressed her hatred for Irish nationalism and patriotism and threatened the Abbey once again, but when Blanco Posnet was presented and the Nationalists were appeased, she made a deal with Yeats and Lady Gregory to sell them the Theatre. The negotiations dragged on and in 1910, when the Abbey stayed open on the day King Edward VII died, Horniman had a final dispute in court with Yeats before leaving the Abbey Theatre for good.
In the early years there were challenges in finding plays by Irish playwrights, and so the founders established guidelines for playwrights submitting plays and wrote some plays themselves. The emergence of the theatre, the shortage of plays by Irish playwrights, the protests surrounding The Playboy of the Western World, and the work of the Irish Theatre were key developments during this time. As one of the first directors of the new Abbey Theatre, Lady Gregory exchanged correspondence with her counterparts W. B. Yeats and J. M. Synge that chronicled the further development of the theatre, including themes such as the critical reception of plays, the challenge of balancing state funding and artistic liberty, and the contributions of actors and others supporting the theatre. The new Abbey Theatre found great popular success, and large crowds attended many of its productions. The Abbey was fortunate in having Synge as a key member, as he was then considered one of the foremost English-language dramatists. The theatre staged many plays by eminent or soon-to-be eminent authors, including Yeats, Lady Gregory, Moore, Martyn, Padraic Colum, George Bernard Shaw, Oliver St John Gogarty, F. R. Higgins, Thomas MacDonagh, Lord Dunsany, T. C. Murray, James Cousins and Lennox Robinson. Many of these authors served on the board, and it was during this time that the Abbey gained its reputation as a writers' theatre.
The Abbey's fortunes worsened in January 1907 when the opening of Synge's The Playboy of the Western World resulted in civil disturbance. The troubles (since known as the Playboy Riots) were encouraged, in part, by nationalists who believed the theatre was insufficiently political and who took offence at Synge's use of the word 'shift', which was known at the time as a symbol of Kitty O'Shea and adultery, and hence was seen as a slight on the virtue of Irish womanhood. Much of the crowd rioted loudly, and the actors performed the remainder of the play in dumbshow. The theatre's decision to call in the police further roused the anger of the nationalists. Although press opinion soon turned against the rioters and the protests faded, management of the Abbey was shaken. They chose not to stage Synge's next—and last completed—play, The Tinker's Wedding (1908), for fear of further disturbances. That same year, the Fay brothers' association with the theatre ended when they emigrated to the United States due to a clash with Yeats' outlook; Lennox Robinson took over the Abbey's day-to-day management after Horniman withdrew financial support.
In 1909, Shaw's The Shewing-Up of Blanco Posnet led to further protests. The subsequent discussion occupied a full issue of the theatre's journal The Arrow. Also that year, the proprietors decided to make the Abbey independent of Annie Horniman, who had indicated a preference for this course. Relations with Horniman had been tense, partly because she wished to be involved in choosing which plays were to be performed and when. As a mark of respect for the death of King Edward VII, an understanding existed that Dublin theatres were to close on the night of 7 May 1910. Robinson, however, kept the Abbey open. When Horniman heard of Robinson's decision, she severed her connections with the company. By her own estimate, she had invested £10,350—worth approximately $1 million in 2007 US dollars—in the project.
With the loss of Horniman, Synge, and the Fays, the Abbey under Robinson tended to drift, suffering from falling public interest and box office returns. This trend was halted for a time by the emergence of Seán O'Casey as an heir to Synge. O'Casey's career as a dramatist began with The Shadow of a Gunman, staged by the Abbey in 1923. This was followed by Juno and the Paycock in 1924, and The Plough and the Stars in 1926. Theatregoers rioted over the last play, in scenes reminiscent of those that had greeted the Playboy 19 years earlier. Concerned about public reaction, the Abbey rejected O'Casey's next play. He emigrated to London shortly thereafter.
World War I and the Irish Rebellion of 1916 almost ended the theatre; however, in 1924 Yeats and Lady Gregory offered the Abbey to the government of the Free State as a gift to the Irish people. Although the government refused, the following year Minister of Finance Ernest Blythe arranged an annual government subsidy of £850 for the Abbey. This made the company the first state-supported theatre in the English-speaking world. The subsidy allowed the theatre to avoid bankruptcy, but the amount was too small to rescue it from financial difficulty.
The Abbey School of Acting was set up that year. The Abbey School of Ballet was established by Ninette de Valois — who had provided choreography for a number of Yeats' plays — and ran until 1933.
The Peacock and the Gate
Around this time the company acquired additional space, allowing it to create a small experimental theatre, the Peacock, on the ground floor of the main theatre. In 1928, Hilton Edwards and Micheál MacLiammoir launched the Gate Theatre, initially using the Peacock to stage works by European and American dramatists. The Gate primarily sought work from new Irish playwrights and, despite the new space, the Abbey entered a period of artistic decline.
This is illustrated by the story of how one new work was said to have come to the Gate Theatre. Denis Johnston reportedly submitted his first play, Shadowdance, to the Abbey; however, Lady Gregory rejected it, returning it to the author with "The Old Lady says No" written across the title page. Johnston decided to re-title the play. The Gate staged The Old Lady Says 'No' in The Peacock in 1928. (Note: academic critics Joseph Ronsley and Christine St. Peter have questioned the veracity of this story.)
1930s to 1950s
The tradition of the Abbey as primarily a writers' theatre survived Yeats' withdrawal from day-to-day involvement. Frank O'Connor sat on the board from 1935 to 1939, served as managing director from 1937, and had two plays staged during this period. He was alienated from many of the other board members, who held his past adultery against him, and he found it difficult to work with them. Although he fought formidably to retain his position, soon after Yeats died the board began machinations to remove O'Connor. In 1941 Ernest Blythe, the politician who had arranged the first state subsidy for the theatre, became managing director.
During the 1940s and 1950s, there was a steady decline in the number of new productions. There were 104 new plays produced from 1930 to 1940, whereas this number dropped to 62 for 1940 to 1950. Thereafter, there was another decrease. However, the theatre was undeterred by the dwindling number of new productions of original plays, and its audience numbers increased. The attitude of the general public towards the Abbey had changed greatly since the beginning of the century. It was no longer a theatre for the rich and for a small clique of intellectuals; it had become a theatre for the people. The plays of O'Casey and Lennox Robinson that the theatre was producing at the time most likely aided in this shift. Larger audiences also brought a change in the Abbey's repertory policy. Rather than the theatre's old system of limiting the initial run of a new play to a week, no matter how popular the play became, the Abbey now ran new plays until their audience was exhausted. This change in policy, brought about partly because of the shortage of new plays, was to have serious consequences in future years when the Abbey found its stock of popular revivals exhausted.
During the 1940s and 1950s, the staple fare at the Abbey was comic farce set in the idealised peasant world of Éamon de Valera. If such a world had ever existed, it was no longer considered relevant by most Irish citizens, and as a result, audience numbers continued to decline. This drift might have been more dramatic but popular actors, including F. J. McCormick, and dramatists, including George Shiels, could still draw a crowd. Austin Clarke staged events for his Dublin Verse Speaking Society—later the Lyric Theatre—at the Peacock from 1941 to 1944 and the Abbey from 1944 to 1951.
In February 1961, the ruins of the Abbey, which had been gutted by fire in 1951, were demolished. The board had plans for rebuilding with a design by the Irish architect Michael Scott. On 3 September 1963, the President of Ireland, Éamon de Valera, laid the foundation stone for the new theatre, and the Abbey reopened on 18 July 1966.
1950s to 1990s
A new building, a new generation of dramatists, including such figures as Hugh Leonard, Brian Friel and Tom Murphy, and tourism that included the National Theatre as a key cultural attraction, helped revive the theatre. Beginning in 1957, the theatre's participation in the Dublin Theatre Festival aided its revival. Plays such as Brian Friel's Philadelphia Here I Come! (1964), Faith Healer (1979) and Dancing at Lughnasa (1990); Tom Murphy's A Whistle in the Dark (1961) and The Gigli Concert (1983); and Hugh Leonard's Da (1973) and A Life (1980), helped raise the Abbey's international profile through successful runs in the West End in London, and on Broadway in New York City.
Challenges in the 2000s
In December 2004 the theatre celebrated its centenary with events that included performances of the original programme by amateur dramatic groups and a production of Michael West's Dublin By Lamplight, originally staged by Annie Ryan for The Corn Exchange company at the Project Arts Centre in November 2004. Despite the centenary, not all was well: audience numbers were falling, the Peacock was closed for lack of money, the theatre was near bankruptcy, and the staff felt the threat of huge lay-offs.
In September 2004 two members of the theatre's advisory council, playwrights Jimmy Murphy and Ulick O'Connor, tabled a "motion of no confidence" in Artistic Director Ben Barnes, and criticised him for touring with a play in Australia during the deep financial and artistic crisis at home. Barnes returned and temporarily held his position. The debacle put the Abbey under great public scrutiny. On 12 May 2005, Barnes and managing director Brian Jackson resigned after it was found that the theatre's deficit of €1.85 million had been underestimated. The new director, Fiach Mac Conghail, originally due to start in January 2006, instead took over in May 2005.
On 20 August 2005, the Abbey Theatre's Advisory Council approved a plan to dissolve the Abbey's owner, the National Theatre Society, and replace it with a company limited by guarantee, the Abbey Theatre Limited. After strong debate, the board accepted the program. Basing its actions on this plan, the Arts Council of Ireland awarded the Abbey €25.7 million in January 2006 to be spread over three years. The grant represented an approximate 43 percent increase in the Abbey's revenues and was the largest ever awarded by the Arts Council. The new company was established on 1 February 2006, with the announcement of a new Abbey Board chaired by High Court Judge Bryan McMahon. In March 2007 the larger auditorium in the theatre was radically reconfigured by Jean-Guy Lecat as part of a major upgrade of the theatre.
In 2009, the Literary Department announced the pilot of a new development initiative, the New Playwrights Programme. The six writers who took part in this pilot programme were Aidan Harney, Lisa Keogh, Shona McCarthy, Jody O'Neill, Neil Sharpson and Lisa Tierney-Keogh.
More than 30 writers were commissioned by the Abbey after Mac Conghail was appointed director in May 2005, and the Abbey produced new plays by Tom Murphy, Richard Dormer, Gary Duggan, Billy Roche, Bernard Farrell and Owen McCafferty. The Abbey also developed a relationship with the Public Theater in New York, where it has presented two new plays: Terminus by Mark O'Rowe and Sam Shepard's Kicking a Dead Horse. The Abbey also made an historic move in 2009/10 by producing four consecutive new plays by women writers: B for Baby by Carmel Winter, No Romance by Nancy Harris, Perve by Stacey Gregg and 16 Possible Glimpses by Marina Carr.
The Abbey ran a special programme, Waking the Nation, to commemorate the Easter Rising of 1916. Some controversy arose over the fact that of ten productions, only one, a monologue for children, was by a female playwright.
The Abbey is also a member of The Wheel, Ireland's national association of community and voluntary organisations.
Co-directors and new building plans
In 2016, the Abbey's direction passed to two co-directors on five-year contracts. Neil Murray from Wales and Graham McLaren from Scotland pursued policies involving significant touring, a wider selection of plays including shorter runs, reduced reliance on Abbey stalwarts such as The Plough and the Stars (57 productions in the theatre's history), free previews, and an emphasis on diversity. They have also pursued the project to renew the theatre building, with McLaren describing the current structure as "the worst theatre building I have ever worked in ... Stalinesque ... a terrible, terrible design.”
After discussions about new locations in the Docklands, on O'Connell Street and elsewhere, it was decided to redevelop the Abbey in-situ. Hence, in September 2012, the Abbey Theatre purchased 15-17 Eden Quay, and in 2016, 22-23 Eden Quay. With a budget of up to 80 million euro mentioned, including capital funding from central government, the plan is to remove the existing building, and build on the combined site, creating two new theatre spaces, of 700 and 250 seats, along with a restaurant, modern rehearsal spaces, and new offices. The new theatre would open on to the Liffey quays. As of January 2020, construction has not yet commenced.
- "Abbey Theatre Austin". Encyclopædia Britannica. I: A-Ak – Bayes (15th ed.). Chicago, Illinois: Encyclopædia Britannica, Inc. 2010. pp. 12. ISBN 978-1-59339-837-8.
- Foster (2003), pp. 486, 662.
- Trotter, Mary (2001). Ireland's National Theatre's : Political performance and the Origins of the Irish Dramatic Movement. Syracuse, N.Y: Syracuse University Press.
- Kavanagh, p. 30.
- Frazier, Adrian. Behind the Scenes: Yeats, Horniman, and the Struggle for the Abbey Theater, Los Angeles: University of California Press, 1990. p. 172
- Lynch, John (2004). "Film: "The Abbey Theatre: the First 100 years"". RTÉ.
- Gregory, Lady (1972). Our Irish Theatre: A Chapter of Autobiography (3rd ed.). Buckinghamshire: Colin Smythe.
- Mikhail, E. H. The Abbey Theatre: Interviews and Recollections', Rowman & Littlefield Publishers, October 1987. p. 97. ISBN 0-389-20616-4
- McCormack, W. J. (ed.). The Blackwell Companion to Modern Irish Culture, Blackwell Publishing, 28 January 2002. p. 7. ISBN 0-631-22817-9
- Frazier, p. 172.
- Hunt, p. 61.
- Richards, Shaun. The Cambridge Companion to Twentieth-Century Irish Drama, Cambridge: Cambridge University Press, February 2004. p. 63. ISBN 0-521-00873-5
- Butler Yeats, William. The Collected Letters of W. B. Yeats: Volume IV: 1905–1907, Oxford: Oxford University Press, Republished 1996. p. 616. ISBN 0-19-812684-0
- Edward Kenny (nephew of Máire Nic Shiubhlaigh): The Splendid Years: recollections of Maire Nic Shiubhlaigh, as told to Edward Kenny, with appendices and lists of Irish theatre plays, 1899–1916. Duffy and Co., Dublin. 1955
- Hunt, Hugh (1979). The Abbey: Ireland's National Theatre 1904-1979. New York : Columbia Press.
- Gegory, Isabella (1914). Our National Theatre: a chapter of autobiography. Ireland. pp. 1–10.
- Behind the Scenes: Yeats, Horniman and the Struggle for the Abbey Theatre. p. 205.
- Frazier, Adrian (1990). Behind the Scenes: Yeats, Horniman and the Struggle for the Abbey Theatre. Los Angeles: University of California Press. pp. xiii. ISBN 0-520-06549-2.
- Adrian Woods Frazier (1990). Behind the scenes : Yeats, Horniman and the struggle for the Abbey theatre. University of California Press. ISBN 0520065492. OCLC 465842168.
- Frazier, Adrian (1990). Behind the Scenes: Yeats, Horniman and the Struggle for the Abbey Theatre. Los Angeles: University of California Press. pp. 27. ISBN 0-520-06549-2.
- Behind the Scenes: Yeats, Horniman and the Struggle for the Abbey Theatre. pp. 43–47.
- Behind the Scenes: Yeats, Horniman and the Struggle for the Abbey Theatre. pp. 219–230.
- Behind the Scenes: Yeats, Horniman and the Struggle for the Abbey Theatre. pp. 235–238.
- Ireland's National Theaters: Political Performance and the Origins of the Irish Dramatic Movement. p. 115.
- Behind the Scenes: Yeats, Horniman and the Struggle for the Abbey Theatre. pp. 46–49.
- Behind the Scenes: Yeats, Horniman and the Struggle for the Abbey Theatre. pp. 49–50, 75–77.
- Behind the Scenes: Yeats, Horniman and the Struggle for the Abbey Theatre. pp. 205–213.
- Behind the Scenes: Yeats, Horniman and the Struggle for the Abbey Theatre. pp. 232–238.
- Price, Alan Synge and Anglo-Irish Drama, London: Methuen, 1961. pp. 15, 25.
- Isherwood, Charles. "A Seductive Fellow Returns, but in a Darker Mood", New York Times, 28 October 2004.
- McKenna, Bernard (Winter 2015). "Yeats, The Arrow, and the Aesthetics of a 'National, Moral Culture': The Blanco Posnet Affair". Journal of Modern Literature. 38 (2): 16–28. doi:10.2979/jmodelite.38.2.16. JSTOR 10.2979/jmodelite.38.2.16.
- Leland, Mary. The Lie of the Land: Journeys Through Literary Cork, Cork: Cork University Press, 2000. p. 238. ISBN 1-85918-231-3
- Welch, Robert, Stewart, Bruce. The Oxford Companion to Irish Literature, Oxford: Oxford University Press, January 1996. p. 3. ISBN 0-19-866158-4
- Kavanagh, pp. 118, 127, 137.
- Kavanagh, p. 135.
- Collins, Glenn. "O'Casey's Widow Muses on His Friendship With Shaw", New York Times, 13 November 1989. Retrieved on 21 January 2008.
- Kavanagh, pp. 125–126.
- Sorley Walker, Kathrine. "The Festival and the Abbey: Ninette de Valois' Early Choreography, 1925–1934, Part One". Dance Chronicle, Volume 7, No. 4, 1984–85. pp. 379–412.
- Pinciss, G.M. (December 1969). "A Dancer for Mr. Yeats". Educational Theatre Journal. 21 (4): 386–391. doi:10.2307/3205567. JSTOR 3205567.
- Welsh (1999), p. 108.
- Welch, Robert, and Stewart, Bruce. The Oxford Companion to Irish Literature, Oxford: Oxford University Press, 1996. p. 275. ISBN 0-19-866158-4
- Bartlett, Rebecca Ann. Choice's Outstanding Academic Books, 1992–1997: Reviews of Scholarly Titles, Association of College & Research Libraries, 1998. p. 136. ISBN 0-8389-7929-7
- Pierce, David. "Irish Writing in the Twentieth Century: A Reader". Cork: Cork University Press, September 2000. p. 743. ISBN 1-85918-208-9
- Welch, p. 135.
- Walsh, Ian. Experimental Irish Theatre: After W B Yeats. Palgrave Macmillan. pp. 80–82.
- Haggerty, Bridget. "Irish Landmarks: The Abbey Theatre". irishcultureandcustoms.com. Retrieved on 21 January 2008.
- Harmon, Maurice. Austin Clarke 1896–1974: A Critical Introduction, Rowman & Littlefield, July 1989. p. 116. ISBN 0-389-20864-7
- Lavery, Brian. "Deficit, Cutbacks and Crisis for Abbey Theater at 100". New York Times, 16 September 2004. Retrieved on 21 January 2007.
- Hogan, Louise. "Judge appointed to lead Abbey Archived 1 March 2009 at the Wayback Machine". Irish Examiner, 30 September 2005. Retrieved on 21 January 2007.
- Lavery, Brian. "The Abbey Theater's Fiach Mac Conghail Takes a Cue From Yeats", New York Times, 25 March 2006. Retrieved on 23 January 2007.
- Kilroy, Ian. "Abbey Theatre lands historic €25.7m three-year grant Archived 1 March 2009 at the Wayback Machine". Irish Examiner, 25 January 2006. Retrieved on 25 January 2008.
- Hogan, Louise (21 March 2007). "Abbey Theatre to spend €730,000 on a building it's soon to abandon". Irish Independent. Retrieved 20 July 2013.
- "Abbey Theatre Saga Takes New Twist". The Irish Times.
- "MacConghail takes charge at Abbey Theatre", The Stage Newspaper, 15 February 2005. Retrieved on 21 January 2007.
- Carr, Aoife (9 November 2015). "Abbey admits programme does not represent gender equality". The Irish Times. Irish Times Trust. Retrieved 13 November 2018.
- MacCormack, Chris. ""Them's the Breaks": Gender Imbalance and Irish Theatre". Exeunt Mazagine. Retrieved 13 November 2018.
- Shortall, Eithne (1 July 2018). "Out with the old, in the with new at the Abbey Theatre". The Sunday Times. News International. Retrieved 13 November 2018.
- Crawley, Peter. "Downtown Abbey". News. Irish Theatre Magazine. Archived from the original on 10 June 2015. Retrieved 20 July 2013.
- Shortall, Eithne (1 July 2018). "Curtain up on Dublin's €80m new-look Abbey Theatre in 2021". The Sunday Times. News International. Retrieved 13 November 2018.
- Fitz-Simon, Christopher. The Abbey Theatre—Ireland's National Theatre: The First 100 Years. New York: Thames and Hudson, 2003. ISBN 0-500-28426-1
- Foster, R. F. W. B. Yeats: A Life, Vol. II: The Arch-Poet 1915–1939. New York: Oxford University Press, 2003. ISBN 0-19-818465-4.
- Frazier, Adrian. Behind the Scenes: Yeats, Horniman, and the Struggle for the Abbey Theatre. Berkeley: University of California, March 1990. ISBN 0-520-06549-2
- Gregory, Lady Augusta. Our Irish Theatre. New York and London: Knickerbocker Press, 1913.
- Grene, Nicholas. The Politics of Irish Drama: Plays in Context from Boucicault to Friel. Cambridge University Press, February 1999. ISBN 0-521-66536-1
- Hogan, Robert, and Richard Burnham. Modern Irish Drama: A Documentary History. Vols. I-VI..
- Hunt, Hugh. The Abbey: Ireland's National Theater, 1904–1979. New York: Columbia University Press, October 1979. ISBN 0-231-04906-4
- Igoe, Vivien. A Literary Guide to Dublin. Methuen, April 1995. ISBN 0-413-69120-9
- Kavanagh, Peter. The Story of the Abbey Theatre. New York: Devin-Adair, 1950.
- Kilroy, James. The "Playboy" Riots. Dublin: Dolmen Pres, 1971. ASIN: B000LNLIXO
- McGlone, James P. Ria Mooney: The Life and Times of the Artistic Director of the Abbey Theatre. McFarland and Company, February 2002. ISBN 0-7864-1251-8
- Robinson, Lennox. Ireland's Abbey Theatre. London: Sidgwick and Jackson, 1951.
- Ryan, Philip B. The Lost Theatres of Dublin. The Badger Press, September 1998. ISBN 0-9526076-1-1
- Welch, Robert. The Abbey Theatre, 1899–1999: Form and Pressure. Oxford: Oxford University Press, February 1999. ISBN 0-19-926135-0 | 1 | 26 |
Multilevel security or multiple levels of security (MLS) is the application of a computer system to process information with incompatible classifications (i.e., at different security levels), permit access by users with different security clearances and needs-to-know, and prevent users from obtaining access to information for which they lack authorization. There are two contexts for the use of multilevel security. One is to refer to a system that is adequate to protect itself from subversion and has robust mechanisms to separate information domains, that is, trustworthy. Another context is to refer to an application of a computer that will require the computer to be strong enough to protect itself from subversion and possess adequate mechanisms to separate information domains, that is, a system we must trust. This distinction is important because systems that need to be trusted are not necessarily trustworthy.
Trusted operating systems
An MLS operating environment often requires a highly trustworthy information processing system, often built on an MLS operating system (OS), though this is not strictly necessary. Most MLS functionality can be supported by a system composed entirely of untrusted computers, although this requires multiple independent computers linked by hardware security-compliant channels (see section B.6.2 of the Trusted Network Interpretation, NCSC-TG-005). An example of hardware-enforced MLS is asymmetric isolation. If one computer is being used in MLS mode, then that computer must use a trusted operating system (OS). Because all information in an MLS environment is physically accessible by the OS, strong logical controls must exist to ensure that access to information is strictly controlled. Typically this involves mandatory access control that uses security labels, like the Bell–LaPadula model.
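As a rough illustration of how such label-based mandatory access decisions work, the sketch below implements the two Bell–LaPadula rules ("no read up", "no write down") over a purely hypothetical set of linear levels in Python. Real systems also track compartments and far more state, so this is a conceptual model rather than any product's actual mechanism.

```python
# Illustrative sketch only: the two Bell-LaPadula rules over a made-up
# linear set of levels. Real systems also use compartments/categories and
# need-to-know, so label dominance forms a lattice, not a simple ordering.
LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

def dominates(a: str, b: str) -> bool:
    """True if level a is at least as high as level b."""
    return LEVELS[a] >= LEVELS[b]

def may_read(subject: str, obj: str) -> bool:
    # Simple security property: "no read up".
    return dominates(subject, obj)

def may_write(subject: str, obj: str) -> bool:
    # *-property: "no write down", which stops high data leaking into low objects.
    return dominates(obj, subject)

if __name__ == "__main__":
    print(may_read("SECRET", "UNCLASSIFIED"))   # True  - reading down is allowed
    print(may_read("SECRET", "TOP SECRET"))     # False - no read up
    print(may_write("SECRET", "UNCLASSIFIED"))  # False - no write down
    print(may_write("SECRET", "TOP SECRET"))    # True  - writing up is allowed
```

Running the sketch shows that a Secret subject may read down and write up but never the reverse, which is exactly the asymmetry that makes sanitization, discussed below, so awkward.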
Customers that deploy trusted operating systems typically require that the product complete a formal computer security evaluation. The evaluation is stricter for a broader security range, that is, the span between the lowest and highest classification levels the system can process. The Trusted Computer System Evaluation Criteria (TCSEC) was the first evaluation criteria developed to assess MLS in computer systems. Under those criteria there was a clear uniform mapping between the security requirements and the breadth of the MLS security range. Historically, few implementations have been certified capable of MLS processing with a security range of Unclassified through Top Secret. Among them were Honeywell's SCOMP, USAF SACDIN, NSA's Blacker, and Boeing's MLS LAN, all under TCSEC, 1980s vintage and Intel 80386-based. Currently, MLS products are evaluated under the Common Criteria. In late 2008, the first operating system (more below) was certified to a high evaluated assurance level: Evaluation Assurance Level (EAL) - EAL 6+ / High Robustness, under the auspices of a U.S. government program requiring multilevel security in a high threat environment. While this assurance level has many similarities to that of the old Orange Book A1 (such as formal methods), the functional requirements focus on fundamental isolation and information flow policies rather than higher level policies such as Bell-La Padula. Because the Common Criteria decoupled TCSEC's pairing of assurance (EAL) and functionality (Protection Profile), the clear uniform mapping between security requirements and MLS security range capability documented in CSC-STD-004-85 was largely lost when the Common Criteria superseded the Rainbow Series.
Freely available operating systems with some features that support MLS include Linux with the Security-Enhanced Linux feature enabled and FreeBSD. Security evaluation was once thought to be a problem for these free MLS implementations for three reasons:
- It is always very difficult to implement a kernel self-protection strategy with the precision needed for MLS trust, and these examples were not designed to, or certified against, an MLS protection profile, so they may not offer the self-protection needed to support MLS.
- Aside from EAL levels, the Common Criteria lacks an inventory of appropriate high assurance protection profiles that specify the robustness needed to operate in MLS mode.
- Even if (1) and (2) were met, the evaluation process is very costly and imposes special restrictions on configuration control of the evaluated software.
Notwithstanding such suppositions, Red Hat Enterprise Linux 5 was certified against LSPP, RBACPP, and CAPP at EAL4+ in June 2007. It uses Security-Enhanced Linux to implement MLS and was the first Common Criteria certification to enforce TOE security properties with Security-Enhanced Linux.
Vendor certification strategies can be misleading to laypersons. A common strategy exploits the layperson's overemphasis of EAL level with over-certification, such as certifying an EAL 3 protection profile (like CAPP) to elevated levels, like EAL 4 or EAL 5. Another is adding and certifying MLS support features (such as role-based access control protection profile (RBACPP) and labeled security protection profile (LSPP)) to a kernel that is not evaluated to an MLS-capable protection profile. Those types of features are services run on the kernel and depend on the kernel to protect them from corruption and subversion. If the kernel is not evaluated to an MLS-capable protection profile, MLS features cannot be trusted regardless of how impressive the demonstration looks. It is particularly noteworthy that CAPP is specifically not an MLS-capable profile as it specifically excludes self-protection capabilities critical for MLS.
General Dynamics offers PitBull, a trusted, MLS operating system. PitBull is currently offered only as an enhanced version of Red Hat Enterprise Linux, but earlier versions existed for Sun Microsystems Solaris, IBM AIX, and SVR4 Unix. PitBull provides a Bell LaPadula security mechanism, a Biba integrity mechanism, a privilege replacement for superuser, and many other features. PitBull has provided the security base for General Dynamics' Trusted Network Environment (TNE) product since 2009. TNE enables multilevel information sharing and access for users in the Department of Defense and Intelligence communities operating at varying classification levels. It is also the foundation for the multilevel coalition sharing environment, the Battlefield Information Collection and Exploitation Systems Extended (BICES-X).
Sun Microsystems, now Oracle Corporation, offers Solaris Trusted Extensions as an integrated feature of the commercial OSs Solaris and OpenSolaris. In addition to the controlled access protection profile (CAPP), and role-based access control (RBAC) protection profiles, Trusted Extensions have also been certified at EAL4 to the labeled security protection profile (LSPP). The security target includes both desktop and network functionality. LSPP mandates that users are not authorized to override the labeling policies enforced by the kernel and X Window System (X11 server). The evaluation does not include a covert channel analysis. Because these certifications depend on CAPP, no Common Criteria certifications suggest this product is trustworthy for MLS.
BAE Systems offers XTS-400, a commercial system that supports MLS at what the vendor claims is "high assurance". Predecessor products (including the XTS-300) were evaluated at the TCSEC B3 level, which is MLS-capable. The XTS-400 has been evaluated under the Common Criteria at EAL5+ against the CAPP and LSPP protection profiles. CAPP and LSPP are both EAL3 protection profiles that are not inherently MLS-capable, but the security target for the Common Criteria evaluation of this product contains an enriched set of security functions that provide MLS capability.
Sanitization is a problem area for MLS systems. Systems that implement MLS restrictions, like those defined by Bell–LaPadula model, only allow sharing when it does not obviously violate security restrictions. Users with lower clearances can easily share their work with users holding higher clearances, but not vice versa. There is no efficient, reliable mechanism by which a Top Secret user can edit a Top Secret file, remove all Top Secret information, and then deliver it to users with Secret or lower clearances. In practice, MLS systems circumvent this problem via privileged functions that allow a trustworthy user to bypass the MLS mechanism and change a file's security classification. However, the technique is not reliable.
Covert channels pose another problem for MLS systems. For an MLS system to keep secrets perfectly, there must be no possible way for a Top Secret process to transmit signals of any kind to a Secret or lower process. This includes side effects such as changes in available memory or disk space, or changes in process timing. When a process exploits such a side effect to transmit data, it is exploiting a covert channel. It is extremely difficult to close all covert channels in a practical computing system, and it may be impossible in practice. The process of identifying all covert channels is a challenging one by itself. Most commercially available MLS systems do not attempt to close all covert channels, even though this makes it impractical to use them in high security applications.
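To make the idea concrete, the following deliberately simplified Python sketch shows a storage covert channel of the kind described above: a "high" sender signals bits by allocating or freeing scratch disk space, and a "low" receiver, which is never allowed to read the high data directly, recovers them by watching free space. The path, chunk size, bit period and the assumption that both sides start at the same moment on an otherwise quiet filesystem are all invented for illustration; real channels, and real countermeasures, are far more subtle.

```python
# Deliberately simplified storage covert channel. Assumes high_send() and
# low_receive() are started simultaneously in separate processes, the scratch
# file is initially absent, and nothing else is churning the filesystem.
import os
import shutil
import tempfile
import time

SCRATCH = os.path.join(tempfile.gettempdir(), "covert_scratch.bin")
CHUNK = 50 * 1024 * 1024   # 50 MB allocated to signal a "1" bit
PERIOD = 2.0               # seconds per bit

def high_send(bits: str) -> None:
    """High process: modulate disk usage, one bit per PERIOD."""
    for bit in bits:
        if bit == "1":
            with open(SCRATCH, "wb") as f:
                f.write(b"\0" * CHUNK)
                f.flush()
                os.fsync(f.fileno())        # force the space to be allocated now
        elif os.path.exists(SCRATCH):
            os.remove(SCRATCH)              # free the space to signal a "0" bit
        time.sleep(PERIOD)
    if os.path.exists(SCRATCH):
        os.remove(SCRATCH)

def low_receive(n_bits: int) -> str:
    """Low process: infer the bits purely from changes in free disk space."""
    baseline = shutil.disk_usage(tempfile.gettempdir()).free
    time.sleep(PERIOD / 2)                  # sample mid-period, away from bit edges
    bits = []
    for _ in range(n_bits):
        free = shutil.disk_usage(tempfile.gettempdir()).free
        bits.append("1" if baseline - free > CHUNK // 2 else "0")
        time.sleep(PERIOD)
    return "".join(bits)
```

Even this toy channel shows why closure is hard: the leak rides on a perfectly ordinary side effect (free disk space) that the system cannot simply hide from low processes without breaking legitimate functionality.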
Bypass is problematic when introduced as a means to treat a system high object as if it were MLS trusted. A common example is to extract data from a secret system high object to be sent to an unclassified destination, citing some property of the data as trusted evidence that it is 'really' unclassified (e.g. 'strict' format). A system high system cannot be trusted to preserve any trusted evidence, and the result is that an overt data path is opened with no logical way to securely mediate it. Bypass can be risky because, unlike narrow bandwidth covert channels that are difficult to exploit, bypass can present a large, easily exploitable overt leak in the system. Bypass often arises out of failure to use trusted operating environments to maintain continuous separation of security domains all the way back to their origin. When that origin lies outside the system boundary, it may not be possible to validate the trusted separation to the origin. In that case, the risk of bypass can be unavoidable if the flow truly is essential.
A common example of unavoidable bypass is a subject system that is required to accept secret IP packets from an untrusted source, encrypt the secret userdata and not the header and deposit the result to an untrusted network. The source lies outside the sphere of influence of the subject system. Although the source is untrusted (e.g. system high) it is being trusted as if it were MLS because it provides packets that have unclassified headers and secret plaintext userdata, an MLS data construct. Since the source is untrusted, it could be corrupt and place secrets in the unclassified packet header. The corrupted packet headers could be nonsense but it is impossible for the subject system to determine that with any reasonable reliability. The packet userdata is cryptographically well protected but the packet header can contain readable secrets. If the corrupted packets are passed to an untrusted network by the subject system they may not be routable but some cooperating corrupt process in the network could grab the packets and acknowledge them and the subject system may not detect the leak. This can be a large overt leak that is hard to detect. Viewing classified packets with unclassified headers as system high structures instead of the MLS structures they really are presents a very common but serious threat.
Most bypass is avoidable. Avoidable bypass often results when system architects design a system before correctly considering security, then attempt to apply security after the fact as add-on functions. In that situation, bypass appears to be the only (easy) way to make the system work. Some pseudo-secure schemes are proposed (and approved!) that examine the contents of the bypassed data in a vain attempt to establish that bypassed data contains no secrets. This is not possible without trusting something about the data such as its format, which is contrary to the assumption that the source is not trusted to preserve any characteristics of the source data. Assured "secure bypass" is a myth, as is a so-called High Assurance Guard (HAG) that transparently implements bypass. The risk these introduce has long been acknowledged; extant solutions are ultimately procedural, rather than technical. There is no way to know with certainty how much classified information is taken from such systems by exploitation of bypass.
"There is no such thing as MLS"
With the decline in COMPUSEC experts, more laypersons who are not COMPUSEC-astute are designing secure computing systems and are mistakenly drawing this conclusion because the term MLS is being overloaded. These two uses are: MLS as a processing environment vs MLS as a capability. The false conclusion is based on a belief that there are no products certified to operate in an MLS environment or mode and that therefore MLS as a capability does not exist. One does not imply the other. Many systems operate in an environment containing data that has unequal security levels and therefore is MLS by the Computer Security Intermediate Value Theorem (CS-IVT). The consequence of this confusion runs deeper. It should be noted, however, that NSA-certified MLS operating systems, databases, and networks have existed in operational mode since the 1970s and that MLS products are continuing to be built, marketed, and deployed.
Laypersons often conclude that to admit that a system operates in an MLS environment (environment-centric meaning of MLS) is to be backed into the perceived corner of having a problem with no MLS solution (capability-centric meaning of MLS). MLS is deceptively complex and just because simple solutions are not obvious does not justify a conclusion that they do not exist. This can lead to a crippling ignorance about COMPUSEC that manifests itself as whispers that "one cannot talk about MLS," and "There's no such thing as MLS." These MLS-denial schemes change so rapidly that they cannot be addressed. Instead, it is important to clarify the distinction between MLS-environment and MLS-capable.
- MLS as a security environment or security mode: A community whose users have differing security clearances may perceive MLS as a data sharing capability: users can share information with recipients whose clearance allows receipt of that information. A system is operating in MLS Mode when it has (or could have) connectivity to a destination that is cleared to a lower security level than any of the data the MLS system contains. This is formalized in the CS-IVT. Determination of security mode of a system depends entirely on the system's security environment; the classification of data it contains, the clearance of those who can get direct or indirect access to the system or its outputs or signals, and the system's connectivity and ports to other systems. Security mode is independent of capabilities, although a system should not be operated in a mode for which it is not worthy of trust.
- MLS as a capability: Developers of products or systems intended to allow MLS data sharing tend to loosely perceive it in terms of a capability to enforce data-sharing restrictions or a security policy, like mechanisms that enforce the Bell–LaPadula model. A system is MLS-capable if it can be shown to robustly implement a security policy.
The original use of the term MLS applied to the security environment, or mode. One solution to this confusion is to retain the original definition of MLS and be specific about MLS-capable when that context is used.
Multiple Independent Levels of Security (MILS) is an architecture that addresses the domain separation component of MLS. Note that UCDMO (the US government lead for cross domain and multilevel systems) created a term Cross Domain Access as a category in its baseline of DoD and Intelligence Community accredited systems, and this category can be seen as essentially analogous to MILS.
Security models such as the Biba model (for integrity) and the Bell–LaPadula model (for confidentiality) allow one-way flow between certain security domains that are otherwise assumed to be isolated. MILS addresses the isolation underlying MLS without addressing the controlled interaction between the domains addressed by the above models. Trusted security-compliant channels mentioned above can link MILS domains to support more MLS functionality.
The MILS approach pursues a strategy characterized by an older term, MSL (multiple single level), that isolates each level of information within its own single-level environment (System High).
The rigid process communication and isolation offered by MILS may be more useful to ultra high reliability software applications than MLS. MILS notably does not address the hierarchical structure that is embodied by the notion of security levels. This requires the addition of specific import/export applications between domains each of which needs to be accredited appropriately. As such, MILS might be better called Multiple Independent Domains of Security (MLS emulation on MILS would require a similar set of accredited applications for the MLS applications). By declining to address out of the box interaction among levels consistent with the hierarchical relations of Bell-La Padula, MILS is (almost deceptively) simple to implement initially but needs non-trivial supplementary import/export applications to achieve the richness and flexibility expected by practical MLS applications.
Any MILS/MLS comparison should consider if the accreditation of a set of simpler export applications is more achievable than accreditation of one, more complex MLS kernel. This question depends in part on the extent of the import/export interactions that the stakeholders require. In favour of MILS is the possibility that not all the export applications will require maximal assurance.
There is another way of solving such problems known as multiple single-level. Each security level is isolated in a separate untrusted domain. The absence of medium of communication between the domains assures no interaction is possible. The mechanism for this isolation is usually physical separation in separate computers. This is often used to support applications or operating systems which have no possibility of supporting MLS such as Microsoft Windows.
Infrastructure such as trusted operating systems is an important component of MLS systems, but in order to fulfill the criteria required under the definition of MLS by CNSSI 4009 (paraphrased at the start of this article), the system must provide a user interface that is capable of allowing a user to access and process content at multiple classification levels from one system. The UCDMO ran a track specifically focused on MLS at the NSA Information Assurance Symposium in 2009, in which it highlighted several accredited (in production) and emergent MLS systems. Note the use of MLS in SELinux.
There are several databases classified as MLS systems. Oracle has a product named Oracle Label Security (OLS) that implements mandatory access controls - typically by adding a 'label' column to each table in an Oracle database. OLS is being deployed at the US Army INSCOM as the foundation of an "all-source" intelligence database spanning the JWICS and SIPRNet networks. There is a project to create a labeled version of PostgreSQL, and there are also older labeled-database implementations such as Trusted Rubix. These MLS database systems provide a unified back-end system for content spanning multiple labels, but they do not resolve the challenge of having users process content at multiple security levels in one system while enforcing mandatory access controls.
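The general idea behind a label column can be sketched in a few lines: each row carries a sensitivity label, and every query issued for a session is filtered so that only rows dominated by the session's clearance are returned. The sketch below uses SQLite purely for illustration; the table, column names and levels are invented and do not reflect Oracle Label Security's actual interface.

```python
# Toy model of a label column: every row carries a numeric sensitivity level
# and queries for a session are filtered by label dominance. Table, column
# names and levels are invented; this is not any vendor's actual API.
import sqlite3

LEVELS = {"UNCLASSIFIED": 0, "SECRET": 2, "TOP SECRET": 3}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE report (title TEXT, body TEXT, row_level INTEGER)")
conn.executemany(
    "INSERT INTO report VALUES (?, ?, ?)",
    [
        ("Port schedule", "...", LEVELS["UNCLASSIFIED"]),
        ("Fleet movement", "...", LEVELS["SECRET"]),
        ("Source identity", "...", LEVELS["TOP SECRET"]),
    ],
)

def select_for(clearance: str):
    """Return only rows whose label is dominated by the session's clearance."""
    cur = conn.execute(
        "SELECT title FROM report WHERE row_level <= ?", (LEVELS[clearance],)
    )
    return [row[0] for row in cur]

print(select_for("SECRET"))      # ['Port schedule', 'Fleet movement']
print(select_for("TOP SECRET"))  # all three titles
```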
There are also several MLS end-user applications. The other MLS capability currently on the UCDMO baseline is called MLChat, and it is a chat server that runs on the XTS-400 operating system - it was created by the US Naval Research Laboratory. Given that content from users at different domains passes through the MLChat server, dirty-word scanning is employed to protect classified content, and there has been some debate about whether this is truly an MLS system or more a form of cross-domain transfer data guard. Mandatory access controls are maintained by a combination of XTS-400 and application-specific mechanisms.
Joint Cross Domain eXchange (JCDX) is another example of an MLS capability currently on the UCDMO baseline. JCDX is the only Department of Defense (DoD), Defense Intelligence Agency (DIA) accredited Multilevel Security (MLS) Command, Control, Communication, Computers and Intelligence (C4I) system that provides near real-time intelligence and warning support to theater and forward deployed tactical commanders. The JCDX architecture is comprehensively integrated with a high assurance Protection Level Four (PL4) secure operating system, utilizing data labeling to disseminate near real-time information on force activities and potential terrorist threats on and around the world's oceans. It is installed at locations in the United States and Allied partner countries where it is capable of providing data from Top Secret/SCI down to Secret-Releasable levels, all on a single platform.
MLS applications not currently part of the UCDMO baseline include several applications from BlueSpace. BlueSpace has several MLS applications, including an MLS email client, an MLS search application and an MLS C2 system. BlueSpace leverages a middleware strategy to enable its applications to be platform neutral, orchestrating one user interface across multiple Windows OS instances (virtualized or remote terminal sessions). The US Naval Research Laboratory has also implemented a multilevel web application framework called MLWeb which integrates the Ruby on Rails framework with a multilevel database based on SQLite3.
Perhaps the greatest change going on in the multilevel security arena today is the convergence of MLS with virtualization. An increasing number of trusted operating systems are moving away from labeling files and processes, and are instead moving towards UNIX containers or virtual machines. Examples include zones in Solaris 10 TX, the padded cell hypervisor in systems such as Green Hills' Integrity platform, and XenClient XT from Citrix. The High Assurance Platform from NSA as implemented in General Dynamics' Trusted Virtualization Environment (TVE) is another example - it uses SELinux at its core, and can support MLS applications that span multiple domains.
- Bell–LaPadula model
- Biba model, Biba Integrity Model
- Clark–Wilson model
- Discretionary access control (DAC)
- Evaluation Assurance Level (EAL)
- Graham-Denning model
- Mandatory access control (MAC)
- Multi categories security (MCS)
- Multifactor authentication
- Non-interference (security) model
- Role-based access control (RBAC)
- Security modes of operation
- System high mode
- Take-grant model
- Davidson, J.A. (1996-12-09). Asymmetric isolation. Computer Security Applications Conference. pp. 44–54. doi:10.1109/CSAC.1996.569668. ISBN 978-0-8186-7606-2.
- CSC-STD-004-85: Computer Security Requirements - Guidance For Applying The Department Of Defense Trusted Computer System Evaluation Criteria In Specific Environments (25 June 1985)
- Multi-Level Security confidentiality policy in FreeBSD
- "Validated Product - Red Hat Enterprise Linux Version 5 running on IBM Hardware". National Information Assurance Partnership, Common Criteria Evaluation and Validation Scheme, United States. June 7, 2007. Cite journal requires
- Controlled Access Protection Profile (CAPP)
- Corrin, Amber (2017-08-08). "How BICES-X facilitates global intelligence". C4ISRNET. Retrieved 2018-12-10.
- "Solaris 10 Release 11/06 Trusted Extensions". Communications Security Establishment Canada. 2008-06-11. Archived from the original on 2011-06-17. Retrieved 2010-06-26. Cite journal requires
- "Security Target, Version 1.22 for XTS-400, Version 6.4.U4" (PDF). National Information Assurance Partnership, Common Criteria Evaluation and Validation Scheme, United States. 2008-06-01. Archived from the original (PDF) on 2011-07-23. Retrieved 2010-08-11. Cite journal requires
- David Elliott Bell: Looking Back at the Bell–LaPadula model - Addendum Archived 2011-08-27 at the Wayback Machine (December 20, 2006)
- David Elliott Bell: Looking Back at the Bell–LaPadula model (December 7, 2005)
- For example: Petersen, Richard (2011). Fedora 14 Administration and Security. Surfing Turtle Press. p. 298. ISBN 9781936280223. Retrieved 2012-09-13: "The SELinux reference policy [...] Multi-level security (MLS) adds a more refined security access method. MLS adds a security level value to resources."
- http://www.sse.gr/NATO/EreunaKaiTexnologiaNATO/36.Coalition_C4ISR_architectures_and_information_exchange_capabilities/RTO-MP-IST-042/MP-IST-042-12.pdf
- Lampson, B. (1973). "A note on the confinement problem". Communications of the ACM. 16 (10): 613–615. CiteSeerX 10.1.1.129.1549. doi:10.1145/362375.362389.
- NCSC (1985). "Trusted Computer System Evaluation Criteria". National Computer Security Center (a.k.a. the TCSEC or "Orange Book").
- NCSC (1986). "Trusted Network Interpretation". National Computer Security Center (a.k.a. the TNI or "Red Book").
- Smith, Richard (2005). "Chapter 205: Multilevel security". In Hossein Bidgoli (ed.). Handbook of Information Security, Volume 3, Threats, Vulnerabilities, Prevention, Detection and Management. New York: John Wiley. Archived from the original on 2006-05-06. Retrieved May 21, 2006. ISBN 0-471-64832-9.
- Patel, D., Collins, R., Vanfleet, W. M., Calloni, B. A., Wilding, M. M., MacLearn, L., & Luke, J. A. (November 2002). "Deeply Embedded High Assurance (Multiple Independent Levels of Security/Safety) MILS Architecture" (PDF). Center for research on economic development and policy reform. Archived from the original (PDF) on April 28, 2003. Retrieved 2005-11-06.
- P. A. Loscocco, S. D. Smalley, P. A. Muckelbauer, R. C. Taylor, S. J. Turner, and J. F. Farrell. The Inevitability of Failure: The Flawed Assumption of Security in Modern Computing Environments. In Proceedings of the 21st National Information Systems Security Conference, pages 303–314, Oct. 1998.
<urn:uuid:7983f63e-94eb-4467-be55-098c5dccd471> | : EnglishShort Description
: Energy availability is a real concern for everyone. Without energy or with access to much less energy than we currently use, we could not live in the same way, and life would not be easy. As a Sustainability Professional, you will learn about this new "green" paradigm, where energy plays a central role in our lives. Building an energy future which assures ample supplies of energy to meet our needs should be a major priority and of concern to all. Energy supply is a complex subject and many considerations come into play: science, technology, the economy, politics, the environment, energy independence, national security, and so on.
This course exposes the student to financially, environmentally, and socially responsible objectives that are supported by strategies and achieved by clear tactics that have measurable outcomes. The student is introduced to methods of implementing technologies and practices and will also learn how to measure the consequent social and environmental performance for written reports and persuasive presentations.
Sustainability Professionals are dedicated to the practice of environmental and social responsibility in ways that achieve financial stability over the long run. In this course the student will understand not just why businesses need to be more responsible but how businesses can be more successful over the long run. International standards are given full treatment. ISO 26000 is given detailed attention, slightly more than ISO 9000 or ISO 14000, because it melds guidance on both environmental and social responsibility into one general concept of social responsibility.
This course also specifies how to use traditional methods such as Six Sigma, lean, and operations research to improve processes, reduce resource use and waste, and make better social and environmental decisions that are based upon data from key financial, social, and environmental performance indicators. Armed with the knowledge from this course you can form educated and informed opinions on the future of energy and its impact on the economy, health, and the environment.
Instructor Description
: Though this is a self-paced program, it is supported by an educational mentor. Educational mentors are subject matter experts who have years of experience in their field as well as the necessary educational training and credentials to work as an expert. The mentor is available to answer any questions a learner may have, including questions on course content, course material, certifications, and even industry questions. Mentors also monitor the progress of learners to ensure training retention and program advancement. In eLearning, motivation is a key tool for success. Because of this, mentors provide encouraging comments, feedback, and coaching to motivate learners throughout the program to support completion and success!
Certification
: Upon successful completion of this course, students will be equipped for an entry-level position in their field and will be prepared to sit for the NCCB national certification exam to become a Sustainability Specialist (CSS). Each state may have additional licensing requirements; be sure to research your state's requirements for employment by visiting your state's occupation board.
All required reference materials are provided with this program. Technical requirements:
• Broadband or High-Speed (DSL, Cable, Wireless)
• Processor - 2GHz Processor or Higher
• Memory - 1 GB RAM Minimum Recommended
• Operating Systems - Windows 7, 8 or 10; Mac OS x 10 or higher
Mac computers MUST have Microsoft Windows Operating Systems over Bootcamp (Bootcamp is a free download from Apple's website)
• Google Chrome is the best browser for this course
• Cookies MUST be enabled
• Pop-ups MUST be allowed (Pop-up Blocker disabled)
• Kindle Reader App may be needed for some reading assignments (No special equipment needed. This can be downloaded onto your computer.)
• Adobe PDF Reader
• Adobe Flash Player
• Adobe Acrobat Reader, Apple Quicktime, Windows Media Player, &/or Real Player
• PowerPoint Viewer (Use this if you don't have PowerPoint)
<urn:uuid:9ddc984b-a76f-4999-96af-d5d310bef06a> | The Celtic languages (usually //, but sometimes //) are a group of related languages descended from Proto-Celtic. They form a branch of the Indo-European language family. The term "Celtic" was first used to describe this language group by Edward Lhuyd in 1707, following Paul-Yves Pezron, who made the explicit link between the Celts described by classical writers and the Welsh and Breton languages.
Geographic distribution: formerly widespread in Europe; today Cornwall, Wales, Scotland, Ireland, Brittany, Chubut Province, Nova Scotia and the Isle of Man.
ISO 639-2 / 5: cel
During the 1st millennium BC, Celtic languages were spoken across much of Europe and in Asia Minor. Today, they are restricted to the northwestern fringe of Europe and a few diaspora communities. There are four living languages: Welsh, Breton, Irish and Scottish Gaelic. All are minority languages in their respective countries, though there are continuing efforts at revitalisation. Welsh is an official language in Wales and Irish is an official language of Ireland and of the European Union. Welsh is the only Celtic language not classified as endangered by UNESCO. The Cornish and Manx languages went extinct in modern times. They have been the object of revivals and now each has several hundred second-language speakers.
Irish and Scottish Gaelic form the Goidelic languages, while Welsh and Breton are Brittonic. Beyond that there is no agreement on the subdivisions of the Celtic language family. They may be divided into a Continental group and an Insular group, or else into P-Celtic and Q-Celtic. All the living languages are Insular, since Breton, the only Celtic language spoken in continental Europe, is descended from the language of settlers from Britain. The Continental Celtic languages, such as Celtiberian, Galatian and Gaulish, are all extinct.
The Celtic languages have a rich literary tradition. The earliest specimens of written Celtic are Lepontic inscriptions from the 6th century BC in the Alps. Early Continental inscriptions used Italic and Paleohispanic scripts. Between the 4th and 8th centuries, Irish and Pictish were occasionally written in an original script, Ogham, but the Latin alphabet came to be used for all Celtic languages. Welsh has had a continuous literary tradition from the 6th century AD.
SIL Ethnologue lists six living Celtic languages, of which four have retained a substantial number of native speakers. These are the Goidelic languages (i.e. Irish and Scottish Gaelic, which are both descended from Middle Irish) and the Brittonic languages (i.e. Welsh and Breton, which are both descended from Common Brittonic).
The other two, Cornish (a Brittonic language) and Manx (a Goidelic language), died in modern times with their presumed last native speakers in 1777 and 1974 respectively. For both these languages, however, revitalisation movements have led to the adoption of these languages by adults and children and produced some native speakers.
| Language | Native name | Grouping | Number of native speakers | Number of people who have one or more skills in the language | Main area(s) where the language is spoken | Regulated by / language body | Estimated number of speakers in major cities |
|---|---|---|---|---|---|---|---|
| Irish | Gaeilge / Gaedhilge / Gaeiluinn / Gaeilig / Gaeilic | Goidelic | 40,000–80,000; in the Republic of Ireland, 94,000 people use Irish daily outside the education system | Total: 1,887,437 (Republic of Ireland: 1,774,437; United Kingdom: 95,000; United States: 18,000) | Ireland | Foras na Gaeilge | Dublin: 184,140 |
| Welsh | Cymraeg / Y Gymraeg | Brittonic | 562,000 (19.0% of the population of Wales) claim that they "can speak Welsh" (2011) | Total: ≈ 947,700 (2011) (Wales: 788,000 speakers, 26.7% of the population; Chubut Province, Argentina: 5,000; United States: 2,500) | Wales; Y Wladfa, Chubut | Welsh Language Commissioner; the Welsh Government (previously the Welsh Language Board, Bwrdd yr Iaith Gymraeg) | — |
| Breton | Brezhoneg | Brittonic | 206,000 | 356,000 | Brittany | Ofis Publik ar Brezhoneg | Rennes: 7,000 |
| Scottish Gaelic | Gàidhlig | Goidelic | Scotland: 57,375 (2011); Nova Scotia: 1,275 (2011) | Scotland: 87,056 (2011) | Scotland | Bòrd na Gàidhlig | Glasgow: 5,726 |
| Cornish | Kernowek | Brittonic | Unknown | 3,000 | Cornwall | Cornish Language Partnership (Keskowethyans an Taves Kernewek) | Truro: 118 |
| Manx | Gaelg / Gailck | Goidelic | 100+, including a small number of children who are new native speakers | 1,823 | Isle of Man | Coonceil ny Gaelgey | Douglas: 507 |
Celtic is divided into various branches:
- Lepontic, the oldest attested Celtic language (from the 6th century BC). Anciently spoken in Switzerland and in Northern-Central Italy. Coins with Lepontic inscriptions have been found in Noricum and Gallia Narbonensis.
- Northeastern Hispano-Celtic/Eastern Hispano-Celtic or Celtiberian, anciently spoken in the Iberian peninsula, Old Castile and south of Aragon. Modern provinces of Segovia, Burgos, Soria, Guadalajara, Cuenca, Zaragoza and Teruel. The relationship of Celtiberian with Gallaecian, in the northwest of the peninsula, is uncertain.
- Northwestern Hispano-Celtic/Western Hispano-Celtic, anciently spoken in the northwest of the peninsula (modern northern Portugal, Galicia, Asturias and Cantabria).
- Gaulish languages, including Galatian and possibly Noric. These languages were once spoken in a wide arc from Belgium to Turkey. They are now all extinct.
- Brittonic, including the living languages Breton, Cornish, and Welsh, and the extinct languages Cumbric and Pictish, though Pictish may be a sister language rather than a daughter of Common Brittonic. Before the arrival of Scotti on the Isle of Man in the 9th century, there may have been a Brittonic language on the Isle of Man.
- Goidelic, including the living languages Irish, Manx, and Scottish Gaelic.
Scholarly handling of the Celtic languages has been contentious owing to scarceness of primary source data. Some scholars (such as Cowgill 1975; McCone 1991, 1992; and Schrijver 1995) distinguish Continental Celtic and Insular Celtic, arguing that the differences between the Goidelic and Brittonic languages arose after these split off from the Continental Celtic languages. Other scholars (such as Schmidt 1988) distinguish between P-Celtic and Q-Celtic, putting most of the Gaulish and Brittonic languages in the former group and the Goidelic and Celtiberian languages in the latter. The P-Celtic languages (also called Gallo-Brittonic) are sometimes seen (for example by Koch 1992) as a central innovating area as opposed to the more conservative peripheral Q-Celtic languages.
The Breton language is Brittonic, not Gaulish, though there may be some input from the latter, having been introduced from Southwestern regions of Britain in the post-Roman era and having evolved into Breton.
In the P/Q classification schema, the first language to split off from Proto-Celtic was Gaelic. It has characteristics that some scholars see as archaic, but others see as also being in the Brittonic languages (see Schmidt). In the Insular/Continental classification schema, the split of the former into Gaelic and Brittonic is seen as being late.
The division of Celtic into these four sub-families most likely occurred about 900 BC according to Gray and Atkinson but, because of estimation uncertainty, it could be any time between 1200 and 800 BC. However, they only considered Gaelic and Brythonic. The controversial paper by Forster and Toth included Gaulish and put the break-up much earlier at 3200 BC ± 1500 years. They support the Insular Celtic hypothesis. The early Celts were commonly associated with the archaeological Urnfield culture, the Hallstatt culture, and the La Tène culture, though the earlier assumption of association between language and culture is now considered to be less strong.
There are legitimate scholarly arguments in favour of both the Insular Celtic hypothesis and the P-Celtic/Q-Celtic hypothesis. Proponents of each schema dispute the accuracy and usefulness of the other's categories. Since the 1970s the division into Insular and Continental Celtic has become the more widely held view (Cowgill 1975; McCone 1991, 1992; Schrijver 1995). In the mid-1980s, however, the P-Celtic/Q-Celtic hypothesis found new supporters (Lambert 1994) because of the inscription on the Larzac piece of lead (1983), whose analysis reveals another common phonetic innovation, -nm- > -nu (Gaelic ainm / Gaulish anuana, Old Welsh enuein "names"), which is harder to dismiss as coincidental than a single shared feature would be. The discovery of a third common innovation would allow specialists to conclude that a Gallo-Brittonic dialect existed (Schmidt 1986; Fleuriot 1986).
The interpretation of this and further evidence is still quite contested, and the main argument in favour of Insular Celtic is connected with the development of the verbal morphology and the syntax in Irish and British Celtic, which Schumacher regards as convincing, while he considers the P-Celtic/Q-Celtic division unimportant and treats Gallo-Brittonic as an outdated hypothesis. Stifter affirms that the Gallo-Brittonic view is "out of favour" in the scholarly community as of 2008 and the Insular Celtic hypothesis "widely accepted".
When referring only to the modern Celtic languages, since no Continental Celtic language has living descendants, "Q-Celtic" is equivalent to "Goidelic" and "P-Celtic" is equivalent to "Brittonic".
Within the Indo-European family, the Celtic languages have sometimes been placed with the Italic languages in a common Italo-Celtic subfamily, a hypothesis that is now largely discarded, in favour of the assumption of language contact between pre-Celtic and pre-Italic communities.
How the family tree of the Celtic languages is ordered depends on which hypothesis is used. Under the "Insular Celtic hypothesis", Goidelic and Brittonic are grouped together as an Insular Celtic branch, set apart from the Continental Celtic languages; under the P-Celtic/Q-Celtic hypothesis, Gaulish and Brittonic are grouped together instead.
Eska (2010) evaluates the evidence as supporting the following tree, based on shared innovations, though it is not always clear that the innovations are not areal features. It seems likely that Celtiberian split off before Cisalpine Celtic, but the evidence for this is not robust. On the other hand, the unity of Gaulish, Goidelic, and Brittonic is reasonably secure. Schumacher (2004, p. 86) had already cautiously considered this grouping to be likely genetic, based, among others, on the shared reformation of the sentence-initial, fully inflecting relative pronoun *i̯os, *i̯ā, *i̯od into an uninflected enclitic particle. Eska sees Cisalpine Gaulish as more akin to Lepontic than to Transalpine Gaulish.
Eska considers a division of Transalpine–Goidelic–Brittonic into Transalpine and Insular Celtic to be most probable because of the greater number of innovations in Insular Celtic than in P-Celtic, and because the Insular Celtic languages were probably not in great enough contact for those innovations to spread as part of a sprachbund. However, if they have another explanation (such as an SOV substratum language), then it is possible that P-Celtic is a valid clade, and the top branching within this group would instead separate Goidelic from a combined Gaulish–Brittonic (P-Celtic) branch.
Although there are many differences between the individual Celtic languages, they do show many family resemblances.
- consonant mutations (Insular Celtic only)
- inflected prepositions (Insular Celtic only)
- two grammatical genders (modern Insular Celtic only; Old Irish and the Continental languages had three genders, although Gaulish may have merged the neuter and masculine in its later forms)
- a vigesimal number system (counting by twenties)
- Cornish hwetek ha dew ugens "fifty-six" (literally "sixteen and two twenty")
- verb–subject–object (VSO) word order (probably Insular Celtic only)
- an interplay between the subjunctive, future, imperfect, and habitual, to the point that some tenses and moods have ousted others
- an impersonal or autonomous verb form serving as a passive or intransitive
- Welsh dysgaf "I teach" vs. dysgir "is taught, one teaches"
- Irish múinim "I teach" vs. múintear "is taught, one teaches"
- no infinitives, replaced by a quasi-nominal verb form called the verbal noun or verbnoun
- frequent use of vowel mutation as a morphological device, e.g. formation of plurals, verbal stems, etc.
- use of preverbal particles to signal either subordination or illocutionary force of the following clause
- infixed pronouns positioned between particles and verbs
- lack of simple verb for the imperfective "have" process, with possession conveyed by a composite structure, usually BE + preposition
- Cornish Yma kath dhymm "I have a cat", literally "there is a cat to me"
- Welsh Mae cath gyda fi "I have a cat", literally "a cat is with me"
- Irish Tá cat agam "I have a cat", literally "there is a cat at me"
- use of periphrastic constructions to express verbal tense, voice, or aspectual distinctions
- distinction by function of the two versions of BE verbs traditionally labelled substantive (or existential) and copula
- bifurcated demonstrative structure
- suffixed pronominal supplements, called confirming or supplementary pronouns
- use of singulars or special forms of counted nouns, and use of a singulative suffix to make singular forms from plurals, where older singulars have disappeared
- Irish: Ná bac le mac an bhacaigh is ní bhacfaidh mac an bhacaigh leat.
- (Literal translation) Don't bother with son the beggar's and not will-bother son the beggar's with-you.
- bhacaigh is the genitive of bacach. The -igh is the result of affection; the bh- is the lenited form of b.
- leat is the second person singular inflected form of the preposition le.
- The order is verb–subject–object (VSO) in the second half. Compare this to English or French (and possibly Continental Celtic) which are normally subject–verb–object in word order.
- Welsh: pedwar ar bymtheg a phedwar ugain
- (Literally) four on fifteen and four twenties
- bymtheg is a mutated form of pymtheg, which is pump ("five") plus deg ("ten"). Likewise, phedwar is a mutated form of pedwar.
- The multiples of ten are deg, ugain, deg ar hugain, deugain, hanner cant, trigain, deg a thrigain, pedwar ugain, deg a phedwar ugain, cant.
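To make the vigesimal (base-20) logic of the numeral examples above concrete, here is a minimal Python sketch. It is purely illustrative arithmetic, not a description of the grammars themselves; the printed glosses simply restate the attested forms quoted above.

```python
# Illustrative sketch of vigesimal (base-20) counting as used in the
# traditional Celtic numeral systems described above.

def vigesimal_parts(n):
    """Split n into (units, score_count) so that n = units + 20 * score_count."""
    return n % 20, n // 20

# Cornish "hwetek ha dew ugens": "sixteen and two twenties"
units, scores = vigesimal_parts(56)
print(f"56 = {units} + {scores} * 20")          # 56 = 16 + 2 * 20

# Welsh "pedwar ar bymtheg a phedwar ugain": "four on fifteen and four twenties"
units, scores = vigesimal_parts(99)
print(f"99 = {units} + {scores} * 20")          # 99 = 19 + 4 * 20
print("   where 19 is itself 'four on fifteen':", 4 + 15)
```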
The lexical similarity between the different Celtic languages is apparent in their core vocabulary, especially in terms of the actual pronunciation of the words. Moreover, the phonetic differences between languages are often the product of regular sound change (i.e. lenition of /b/ into /v/ or Ø).
The table below contains words in the modern languages that were inherited directly from Proto-Celtic, as well as a few old borrowings from Latin that made their way into all the daughter languages. Among the modern languages, there is often a closer match between Welsh, Breton, and Cornish on one hand, and Irish, Gaelic and Manx on the other. For a fuller list of comparisons, see the Swadesh list for Celtic.
| English | Welsh | Breton | Cornish | Irish | Scottish Gaelic | Manx |
| --- | --- | --- | --- | --- | --- | --- |
| mouth of a river | aber | aber | aber | inbhear | inbhir | inver |
| (to) smoke | ysmygu | mogediñ, butuniñ | megi | caith(eamh) tobac | smocadh | toghtaney, smookal |
† Borrowings from Latin.
Article 1 of the Universal Declaration of Human Rights:
All human beings are born free and equal in dignity and rights. They are endowed with reason and conscience and should act towards one another in a spirit of brotherhood.
- Irish: Saolaítear na daoine uile saor agus comhionann ina ndínit agus ina gcearta. Tá bua an réasúin agus an choinsiasa acu agus dlíd iad féin d'iompar de mheon bráithreachais i leith a chéile.
- Manx: Ta dagh ooilley pheiagh ruggit seyr as corrym ayns ard-cheim as kiartyn. Ren Jee feoiltaghey resoon as cooinsheanse orroo as by chair daue ymmyrkey ry cheilley myr braaraghyn.
- Scottish Gaelic: Tha gach uile dhuine air a bhreith saor agus co-ionnan ann an urram 's ann an còirichean. Tha iad air am breith le reusan is le cogais agus mar sin bu chòir dhaibh a bhith beò nam measg fhèin ann an spiorad bràthaireil.
- Breton: Dieub ha par en o dellezegezh hag o gwirioù eo ganet an holl dud. Poell ha skiant zo dezho ha dleout a reont bevañ an eil gant egile en ur spered a genvreudeuriezh.
- Cornish: Genys frank ha par yw oll tus an bys yn aga dynita hag yn aga gwiryow. Enduys yns gans reson ha kowses hag y tal dhedha omdhon an eyl orth y gila yn spyrys a vrederedh.
- Welsh: Genir pawb yn rhydd ac yn gydradd â'i gilydd mewn urddas a hawliau. Fe'u cynysgaeddir â rheswm a chydwybod, a dylai pawb ymddwyn y naill at y llall mewn ysbryd cymodlon.
Possibly Celtic languages
It has been suggested that several poorly-documented languages may possibly have been Celtic.
- Camunic is an extinct language which was spoken in the first millennium BC in the Valcamonica and Valtellina valleys of the Central Alps. It has most recently been proposed to be a Celtic language.
- Ligurian was spoken in the Northern Mediterranean Coast straddling the southeast French and northwest Italian coasts, including parts of Tuscany, Elba island and Corsica. Xavier Delamarre argues that Ligurian was a Celtic language, similar to, but not the same as Gaulish. The Ligurian-Celtic question is also discussed by Barruol (1999). Ancient Ligurian is either listed as Celtic (epigraphic), or Para-Celtic (onomastic).
- Lusitanian was spoken in the area between the Douro and Tagus rivers of western Iberia (a region straddling the present border of Portugal and Spain). It is known from only five inscriptions and various place names. It is an Indo-European language and some scholars have proposed that it may be a para-Celtic language, which evolved alongside Celtic or formed a dialect continuum or sprachbund with Tartessian and Gallaecian. This is tied to a theory of an Iberian origin for the Celtic languages. It is also possible that the Q-Celtic languages alone, including Goidelic, originated in western Iberia (a theory that was first put forward by Edward Lhuyd in 1707) or shared a common linguistic ancestor with Lusitanian. Secondary evidence for this hypothesis has been found in research by biological scientists, who have identified (firstly) deep-rooted similarities in human DNA found precisely in both the former Lusitania and Ireland, and; (secondly) the so-called "Lusitanian distribution" of animals and plants unique to western Iberia and Ireland. Both of these phenomena are now generally believed to have resulted from human emigration from Iberia to Ireland, during the late Paleolithic or early Mesolithic eras.
- Other scholars see greater linguistic affinities between Lusitanian, proto-Gallo-Italic (particularly with ancient Ligurian) and Old European.
- Pictish was for a long time thought to be a pre-Celtic, non-Indo-European language of Scotland. Some believe it was an Insular Celtic language allied to the P-Celtic language Brittonic (descendants Welsh, Cornish, Cumbric, Breton).
- Rhaetian was spoken in central parts of present-day Switzerland, Tyrol in Austria, and the Alpine regions of northeastern Italy. It is documented by a limited number of short inscriptions (found through Northern Italy and Western Austria) in two variants of the Etruscan alphabet. Its linguistic categorization is not clearly established, and it presents a confusing mixture of what appear to be Etruscan, Indo-European, and uncertain other elements. Howard Hayes Scullard argues that Rhaetian was also a Celtic language.
- Tartessian, spoken in the southwest of the Iberian Peninsula (mainly southern Portugal and southwestern Spain). Tartessian is known from 95 inscriptions, the longest of which has 82 readable signs. John T. Koch argues that Tartessian was also a Celtic language.
- Ancient Belgian
- "North-West Indo-European". Old European. Archived from the original on 12 September 2018. Retrieved 12 September 2018.
- "North-West Indo-European". Academia Prisca. Archived from the original on 12 September 2018. Retrieved 12 September 2018.
- Hammarström, Harald; Forkel, Robert; Haspelmath, Martin, eds. (2017). "Celtic". Glottolog 3.0. Jena, Germany: Max Planck Institute for the Science of Human History.
- "American Heritage Dictionary. Celtic: kel-tik, sel". Dictionary.reference.com. Archived from the original on 8 August 2011. Retrieved 19 August 2011.
- The Celtic languages:an overview, Donald MacAulay, The Celtic Languages, ed. Donald MacAulay, (Cambridge University Press, 1992), 3.
- Cunliffe, Barry W. 2003. The Celts: a very short introduction. pg.48
- Alice Roberts, The Celts (Heron Books 2015)
- "Celtic Branch | About World Languages". aboutworldlanguages.com. Archived from the original on 25 September 2017. Retrieved 18 September 2017.
- Koch, John T. (2006). Celtic Culture: A Historical Encyclopedia. ABC-CLIO. pp. 34, 365–366, 529, 973, 1053. Archived from the original on 31 December 2015. Retrieved 15 June 2010.
- "A brief history of the Cornish language". Maga Kernow. Archived from the original on 25 December 2008.
- Beresford Ellis, Peter (2005). The Story of the Cornish Language. Tor Mark Press. pp. 20–22. ISBN 0-85025-371-3.
- Staff. "Fockle ny ghaa: schoolchildren take charge". Iomtoday.co.im. Archived from the original on 4 July 2009. Retrieved 18 August 2011.
- "'South West:TeachingEnglish:British Council:BBC". BBC/British Council website. BBC. 2010. Archived from the original on 8 January 2010. Retrieved 9 February 2010.
- "Celtic Languages". Ethnologue. Archived from the original on 16 July 2011. Retrieved 9 March 2010.
- Crystal, David (2010). The Cambridge Encyclopedia of Language. Cambridge University Press. ISBN 978-0-521-73650-3.
- "Irish Examiner". Archives.tcm.ie. 24 November 2004. Archived from the original on 19 January 2005. Retrieved 19 August 2011.
- Christina Bratt Paulston. Linguistic Minorities in Multilingual Settings: Implications for Language Policies. J. Benjamins Pub. Co. p. 81. ISBN 1-55619-347-5.
- Pierce, David (2000). Irish Writing in the Twentieth Century. Cork University Press. p. 1140. ISBN 1-85918-208-9.
- Ó hÉallaithe, Donncha (1999). Cuisle.
- "www.cso.ie Central Statistics Office, Census 2011 – This is Ireland – see table 33a" (PDF). Archived from the original (PDF) on 25 May 2013. Retrieved 27 April 2012.
- Central Statistics Office. "Population Aged 3 Years and Over by Province County or City, Sex, Ability to Speak Irish and Census Year". Government of Ireland. Archived from the original on 7 March 2016. Retrieved 6 March 2016.
- Department of Finance and Personnel. "Census 2011 Key Statistics for Northern Ireland" (PDF). The Northern Ireland Statistics and Research Agency. Archived (PDF) from the original on 24 December 2012. Retrieved 6 March 2016.
- "Welsh language skills by local authority, gender and detailed age groups, 2011 Census". StatsWales website. Welsh Government. Archived from the original on 17 November 2015. Retrieved 13 November 2015.
- Office for National Statistics 2011 http://ons.gov.uk/ons/rel/census/2011-census/key-statistics-for-unitary-authorities-in-wales/stb-2011-census-key-statistics-for-wales.html#tab---Proficiency-in-Welsh Archived 5 June 2013 at the Wayback Machine
- United Nations High Commissioner for Refugees. "World Directory of Minorities and Indigenous Peoples – UK: Welsh". UNHCR. Archived from the original on 20 May 2011. Retrieved 23 May 2010.
- "Wales and Argentina". Wales.com website. Welsh Assembly Government. 2008. Archived from the original on 16 October 2012. Retrieved 23 January 2012.
- "Table 1. Detailed Languages Spoken at Home and Ability to Speak English for the Population 5 Years and Over for the United States: 2006–2008 Release Date: April 2010" (xls). United States Census Bureau. 27 April 2010. Archived from the original on 22 September 2014. Retrieved 2 January 2011.
- "2006 Census of Canada: Topic based tabulations: Various Languages Spoken (147), Age Groups (17A) and Sex (3) for the Population of Canada, Provinces, Territories, Census Metropolitan Areas and Census Agglomerations, 2006 Census – 20% Sample Data". Statistics Canada. 7 December 2010. Archived from the original on 26 August 2011. Retrieved 3 January 2011.
- StatsWales. "Welsh language skills by local authority, gender and detailed age groups, 2011 Census". Welsh Government. Archived from the original on 31 December 2015. Retrieved 6 March 2016.
- (in French) Données clés sur breton, Ofis ar Brezhoneg Archived 15 March 2012 at the Wayback Machine
- Pole Études et Développement Observatoire des Pratiques Linguistiques. "Situation de la Langue". Office Public de la Langue Bretonne. Archived from the original on 5 March 2016. Retrieved 6 March 2016.
- 2011 Scotland Census Archived 4 June 2014 at the Wayback Machine, Table QS211SC.
- "National Household Survey Profile, Nova Scotia, 2011". Statistics Canada. 11 September 2013. Archived from the original on 13 May 2014. Retrieved 7 June 2014.
- Scotland's Census. "Standard Outputs". National Records of Scotland. Archived from the original on 5 October 2016. Retrieved 6 March 2016.
- Alison Campsie. "New bid to get us speaking in Gaelic". The Press and Journal. Archived from the original on 10 March 2016. Retrieved 6 March 2016.
- See Number of Cornish speakers
- Around 2,000 fluent speakers. "'South West:TeachingEnglish:British Council:BBC". BBC/British Council website. BBC. 2010. Archived from the original on 8 January 2010. Retrieved 9 February 2010.
- Equalities and Wellbeing Division. "Language in England and Wales: 2011". Office for National Statistics. Archived from the original on 7 March 2016. Retrieved 6 March 2016.
- "Anyone here speak Jersey?". Independent.co.uk. 11 April 2002. Archived from the original on 11 September 2011. Retrieved 19 August 2011.
- "Documentation for ISO 639 identifier: glv". Sil.org. 14 January 2008. Archived from the original on 28 July 2011. Retrieved 19 August 2011.
- "Isle of Man Census Report 2011" (PDF). Economic Affairs Division, Isle of Man Government Treasury. April 2012. p. 27. Archived from the original (PDF) on 5 November 2013. Retrieved 9 June 2014.
- Sarah Whitehead. "How the Manx language came back from the dead". The Guardian. Archived from the original on 5 March 2016. Retrieved 6 March 2016.
- "Shelta". Ethnologue. Archived from the original on 29 June 2010. Retrieved 9 March 2010.
- "ROMLEX: Romani dialects". Romani.uni-graz.at. Archived from the original on 27 August 2011. Retrieved 19 August 2011.
- Schumacher, Stefan; Schulze-Thulin, Britta; aan de Wiel, Caroline (2004). Die keltischen Primärverben. Ein vergleichendes, etymologisches und morphologisches Lexikon (in German). Innsbruck: Institut für Sprachen und Kulturen der Universität Innsbruck. pp. 84–87. ISBN 3-85124-692-6.
- Percivaldi, Elena (2003). I Celti: una civiltà europea. Giunti Editore. p. 82.
- Kruta, Venceslas (1991). The Celts. Thames and Hudson. p. 55.
- Stifter, David (2008). Old Celtic Languages (PDF). p. 12. Archived (PDF) from the original on 2 October 2012. Retrieved 19 December 2012.
- MORANDI 2004, pp. 702-703, n. 277
- Celtic Culture: A Historical Encyclopedia Archived 31 March 2017 at the Wayback Machine John T. Koch, Vol 1, p. 233
- Prósper, B.M. (2002). Lenguas y religiones prerromanas del occidente de la península ibérica. Ediciones Universidad de Salamanca. pp. 422–27. ISBN 84-7800-818-7.
- Villar F., B. M. Prósper. (2005). Vascos, Celtas e Indoeuropeos: genes y lenguas. Ediciones Universidad de Salamanca. pgs. 333–350. ISBN 84-7800-530-7.
- "In the northwest of the Iberian Peninula, and more specifically between the west and north Atlantic coasts and an imaginary line running north-south and linking Oviedo and Merida, there is a corpus of Latin inscriptions with particular characteristics of its own. This corpus contains some linguistic features that are clearly Celtic and others that in our opinion are not Celtic. The former we shall group, for the moment, under the label northwestern Hispano-Celtic. The latter are the same features found in well-documented contemporary inscriptions in the region occupied by the Lusitanians, and therefore belonging to the variety known as LUSITANIAN, or more broadly as GALLO-LUSITANIAN. As we have already said, we do not consider this variety to belong to the Celtic language family." Jordán Colera 2007: p.750
- Kenneth H. Jackson suggested that there were two Pictish languages, a pre-Indo-European one and a Pritenic Celtic one. This has been challenged by some scholars. See Katherine Forsyth's "Language in Pictland: the case against 'non-Indo-European Pictish'" "Etext" (PDF). Archived (PDF) from the original on 19 February 2006. Retrieved 20 January 2006. (27.8 MB). See also the introduction by James & Taylor to the "Index of Celtic and Other Elements in W. J. Watson's 'The History of the Celtic Place-names of Scotland'" "Etext" (PDF). Archived from the original (PDF) on 20 February 2006. (172 KB ). Compare also the treatment of Pictish in Price's The Languages of Britain (1984) with his Languages in Britain & Ireland (2000).
- "What are the Celtic Languages? — Celtic Studies Resources". Celtic Studies Resources. Archived from the original on 10 October 2017. Retrieved 18 September 2017.
- Barbour and Carmichael, Stephen and Cathie (2000). Language and nationalism in Europe. Oxford University Press. p. 56. ISBN 978-0-19-823671-9.
- Gray and Atkinson, RD; Atkinson, QD (2003). "Language-tree divergence times support the Anatolian theory of Indo-European origin". Nature. 426 (6965): 435–439. Bibcode:2003Natur.426..435G. doi:10.1038/nature02029. PMID 14647380.
- Rexova, K.; Frynta, D; Zrzavy, J. (2003). "Cladistic analysis of languages: Indo-European classification based on lexicostatistical data". Cladistics. 19 (2): 120–127. doi:10.1111/j.1096-0031.2003.tb00299.x.
- Forster, Peter; Toth, Alfred (2003). "Toward a phylogenetic chronology of ancient Gaulish, Celtic, and Indo-European". Proceedings of the National Academy of Sciences. 100 (15): 9079–9084. Bibcode:2003PNAS..100.9079F. doi:10.1073/pnas.1331158100. PMC 166441. PMID 12837934.
- Renfrew, Colin (1987). Archaeology and Language: The Puzzle of Indo-European Origins. London: Jonathan Cape. ISBN 0224024957.
- James, Simon (1999). The Atlantic Celts: Ancient People or Modern Invention?. London: British Museum Press. ISBN 0714121657.
- Stifter, David (2008). Old Celtic Languages (PDF). p. 11. Archived (PDF) from the original on 2 October 2012. Retrieved 19 December 2012.
- Joseph F. Eska (2010) "The emergence of the Celtic languages". In Martin J. Ball and Nicole Müller (eds.), The Celtic languages. Routledge.
- Koch, John T.; Minard, Antone (8 August 2012). The Celts: History, Life, and Culture. ABC-CLIO. ISBN 9781598849646. Archived from the original on 10 October 2017. Retrieved 18 September 2017.
- "Dictionnaires bretons parlants". Archived from the original on 7 February 2019. Retrieved 6 February 2019.
- "Trinity College Phonetics and Speech Lab". Archived from the original on 12 February 2019. Retrieved 6 February 2019.
- "Learn Gaelic Dictionary". Archived from the original on 7 February 2019. Retrieved 6 February 2019.
- Markey, Thomas (2008). Shared Symbolics, Genre Diffusion, Token Perception and Late Literacy in North-Western Europe. NOWELE.
- "Archived copy". Archived from the original on 18 May 2013. Retrieved 2015-03-04.
- Kruta, Venceslas (1991). The Celts. Thames and Hudson. p. 54.
- Wodtko, Dagmar S (2010). Celtic from the West Chapter 11: The Problem of Lusitanian. Oxbow Books, Oxford, UK. pp. 360–361. ISBN 978-1-84217-410-4.
- Cunliffe, Barry (2003). The Celts – A Very Short Introduction – see figure 7. Oxford University Press. pp. 51–52. ISBN 0-19-280418-9.
- Ballester, X. (2004). ""Páramo" o del problema del la */p/ en celtoide". Studi Celtici. 3: 45–56.
- Unity in Diversity, Volume 2: Cultural and Linguistic Markers of the Concept Editors: Sabine Asmus and Barbara Braid. Google Books.
- Hill, E. W.; Jobling, M. A.; Bradley, D. G. (2000). "Y chromosome variation and Irish origins". Nature. 404: 351–352. doi:10.1038/35006158. PMID 10746711.
- McEvoy, B.; Richards, M.; Forster, P.; Bradley, D. G. (2004). "The longue durée of genetic ancestry: multiple genetic marker systems and Celtic origins on the Atlantic facade of Europe". Am. J. Hum. Genet. 75: 693–702. doi:10.1086/424697. PMC 1182057. PMID 15309688.
- Masheretti, S.; Rogatcheva, M. B.; Gündüz, I.; Fredga, K.; Searle, J. B. (2003). "How did pygmy shrews colonize Ireland? Clues from a phylogenetic analysis of mitochondrial cytochrome b sequences". Proc. R. Soc. B. 270: 1593–1599. doi:10.1098/rspb.2003.2406. PMC 1691416. PMID 12908980.
- Villar, Francisco (2000). Indoeuropeos y no indoeuropeos en la Hispania Prerromana (in Spanish) (1st ed.). Salamanca: Ediciones Universidad de Salamanca. ISBN 84-7800-968-X. Archived from the original on 31 December 2015. Retrieved 22 September 2014.
- The inscription of Cabeço das Fráguas revisited. Lusitanian and Alteuropäisch populations in the West of the Iberian Peninsula Transactions of the Philological Society vol. 97 (2003)
- Forsyth 2006, p. 1447; Forsyth 1997; Fraser 2009, pp. 52–53; Woolf 2007, pp. 322–340
- Scullard, HH (1967). The Etruscan Cities and Rome. Ithaca, NY: Cornell University Press.
- Koch, John T (2010). Celtic from the West Chapter 9: Paradigm Shift? Interpreting Tartessian as Celtic. Oxbow Books, Oxford, UK. pp. 292–293. ISBN 978-1-84217-410-4.
- Cólera, Carlos Jordán (16 March 2007). "The Celts in the Iberian Peninsula:Celtiberian" (PDF). e-Keltoi. 6: 749–750. Archived (PDF) from the original on 24 June 2011. Retrieved 16 June 2010.
- Koch, John T (2011). Tartessian 2: The Inscription of Mesas do Castelinho ro and the Verbal Complex. Preliminaries to Historical Phonology. Oxbow Books, Oxford, UK. pp. 1–198. ISBN 978-1-907029-07-3. Archived from the original on 23 July 2011.
- Ball, Martin J. & James Fife (ed.) (1993). The Celtic Languages. London: Routledge. ISBN 0-415-01035-7.
- Borsley, Robert D. & Ian Roberts (ed.) (1996). The Syntax of the Celtic Languages: A Comparative Perspective. Cambridge: Cambridge University Press. ISBN 0521481600.
- Cowgill, Warren (1975). "The origins of the Insular Celtic conjunct and absolute verbal endings". In H. Rix (ed.). Flexion und Wortbildung: Akten der V. Fachtagung der Indogermanischen Gesellschaft, Regensburg, 9.–14. September 1973. Wiesbaden: Reichert. pp. 40–70. ISBN 3-920153-40-5.
- Celtic Linguistics, 1700–1850 (2000). London; New York: Routledge. 8 vols comprising 15 texts originally published between 1706 and 1844.
- Forster, Peter; Toth, Alfred (July 2003). "Toward a phylogenetic chronology of ancient Gaulish, Celtic, and Indo-European". Proc. Natl. Acad. Sci. USA. 100 (15): 9079–84. Bibcode:2003PNAS..100.9079F. doi:10.1073/pnas.1331158100. PMC 166441. PMID 12837934.
- Gray, Russell D.; Atkinson, Quintin D. (November 2003). "Language-tree divergence times support the Anatolian theory of Indo-European origin". Nature. 426 (6965): 435–39. Bibcode:2003Natur.426..435G. doi:10.1038/nature02029. PMID 14647380.
- Hindley, Reg (1990). The Death of the Irish Language: A Qualified Obituary. Routledge. ISBN 0-415-04339-5.
- Lewis, Henry & Holger Pedersen (1989). A Concise Comparative Celtic Grammar. Göttingen: Vandenhoeck & Ruprecht. ISBN 3-525-26102-0.
- McCone, Kim (1991). "The PIE stops and syllabic nasals in Celtic". Studia Celtica Japonica. 4: 37–69.
- McCone, Kim (1992). "Relative Chronologie: Keltisch". In R. Beekes; A. Lubotsky; J. Weitenberg (eds.). Rekonstruktion und relative Chronologie: Akten Der VIII. Fachtagung Der Indogermanischen Gesellschaft, Leiden, 31 August – 4 September 1987. Institut für Sprachwissenschaft der Universität Innsbruck. pp. 12–39. ISBN 3-85124-613-6.
- McCone, K. (1996). Towards a Relative Chronology of Ancient and Medieval Celtic Sound Change. Maynooth: Department of Old and Middle Irish, St. Patrick's College. ISBN 0-901519-40-5.
- Russell, Paul (1995). An Introduction to the Celtic Languages. Longman. ISBN 0582100828.
- Schmidt, K.H. (1988). "On the reconstruction of Proto-Celtic". In G. W. MacLennan (ed.). Proceedings of the First North American Congress of Celtic Studies, Ottawa 1986. Ottawa: Chair of Celtic Studies. pp. 231–48. ISBN 0-09-693260-0.
- Schrijver, Peter (1995). Studies in British Celtic historical phonology. Amsterdam: Rodopi. ISBN 90-5183-820-4.
- Schumacher, Stefan; Schulze-Thulin, Britta; aan de Wiel, Caroline (2004). Die keltischen Primärverben. Ein vergleichendes, etymologisches und morphologisches Lexikon (in German). Innsbruck: Institut für Sprachen und Kulturen der Universität Innsbruck. ISBN 3-85124-692-6.
Electronic engineering is a discipline that utilizes the behavior and effects of electrons for the production of electronic devices (such as electron tubes and transistors), systems, or equipment. In many parts of the world, electronic engineering is considered at the same level as electrical engineering, so that general programs are called electrical and electronic engineering. (Many UK and Turkish universities have departments of Electronic and Electrical Engineering.) Both define a broad field that encompasses many subfields including those that deal with power, instrumentation engineering, telecommunications, and semiconductor circuit design, amongst many others.
The name electrical engineering is still used to cover electronic engineering amongst some of the older (notably American) universities and graduates there are called electrical engineers.
Some believe the term electrical engineer should be reserved for those having specialized in power and heavy current or high voltage engineering, while others believe that power is just one subset of electrical engineering (and indeed the term power engineering is used in that industry). Again, in recent years there has been a growth of new separate-entry degree courses such as information and communication engineering, often followed by academic departments of similar name.
The modern discipline of electronic engineering was to a large extent born out of radio and television development and from the large amount of Second World War development of defense systems and weapons. In the interwar years, the subject was known as radio engineering and it was only in the late 1950s that the term electronic engineering started to emerge. In the UK, the subject of electronic engineering became distinct from electrical engineering as a university degree subject around 1960. Students of electronics and related subjects like radio and telecommunications before this time had to enroll in the electrical engineering department of the university as no university had departments of electronics. Electrical engineering was the nearest subject with which electronic engineering could be aligned, although the similarities in subjects covered (except mathematics and electromagnetism) lasted only for the first year of the three-year course.
In 1893, Nikola Tesla made the first public demonstration of radio communication. Addressing the Franklin Institute in Philadelphia and the National Electric Light Association, he described and demonstrated in detail the principles of radio communication. In 1896, Guglielmo Marconi went on to develop a practical and widely used radio system. In 1904, John Ambrose Fleming, the first professor of electrical engineering at University College London, invented the first radio tube, the diode. Two years later, in 1906, Robert von Lieben and Lee De Forest independently developed the amplifier tube, called the triode.
Electronics is often considered to have begun when Lee De Forest invented the vacuum tube in 1907. Within 10 years, his device was used in radio transmitters and receivers as well as systems for long-distance telephone calls. Vacuum tubes remained the preferred amplifying device for 40 years, until researchers working for William Shockley at Bell Labs invented the transistor in 1947. In the following years, transistors made small portable radios (transistor radios) possible and allowed more powerful mainframe computers to be built. Transistors were smaller and required lower voltages than vacuum tubes to work. In the interwar years the subject of electronics was dominated by the worldwide interest in radio and, to some extent, telephone and telegraph communications. The terms "wireless" and "radio" were then used to refer to anything electronic. There were few non-military applications of electronics beyond radio at that time until the advent of television. The subject was not even offered as a separate university degree subject until about 1960.
Prior to the Second World War, the subject was commonly known as "radio engineering" and was largely restricted to aspects of communications and radar, commercial radio, and early television. At this time, the study of radio engineering at universities could only be undertaken as part of a physics degree.
Later, in post war years, as consumer devices began to be developed, the field broadened to include modern TV, audio systems, Hi-Fi and latterly computers and microprocessors. In the mid to late 1950s, the term radio engineering gradually gave way to the name electronic engineering, which then became a stand alone university degree subject, usually taught alongside electrical engineering with which it had become associated due to some similarities.
Before the invention of the integrated circuit in 1959, electronic circuits were constructed from discrete components that could be manipulated by hand. These non-integrated circuits consumed much space and power, were prone to failure and were limited in speed although they are still common in simple applications. By contrast, integrated circuits packed a large number—often millions—of tiny electrical components, mainly transistors, into a small chip around the size of a coin.
The invention of the triode amplifier, generator, and detector made audio communication by radio practical. (Reginald Fessenden's 1906 transmissions used an electro-mechanical alternator.) The first known radio news program was broadcast 31 August 1920 by station 8MK, the unlicensed predecessor of WWJ (AM) in Detroit, Michigan. Regular wireless broadcasts for entertainment commenced in 1922, from the Marconi Research Centre at Writtle near Chelmsford, England.
While some early radios used some type of amplification through electric current or battery, through the mid 1920s the most common type of receiver was the crystal set. In the 1920s, amplifying vacuum tubes revolutionized both radio receivers and transmitters.
Record players, and combined radio and record players, were another early consumer application of electronics in this period.
In 1928, Philo Farnsworth made the first public demonstration of purely electronic television. During the 1930s, several countries began broadcasting, and after World War II, it spread to millions of receivers, eventually worldwide.
Ever since then, electronics have been central to television devices. Today, electronics form the basis of almost every component inside a TV.
One of the latest and most advanced display technologies, the LED (light-emitting diode) display, rests entirely on electronics principles and is likely to replace LCD and plasma technologies.
During World War II, many efforts were expended in the electronic location of enemy targets and aircraft. These included radio beam guidance of bombers, electronic counter measures, early radar systems, and so on. During this time very little if any effort was expended on consumer electronics developments.
In 1941, Konrad Zuse presented the Z3, the world's first functional computer. In 1946, the ENIAC (Electronic Numerical Integrator and Computer) of John Presper Eckert and John Mauchly followed, beginning the computing era. The arithmetic performance of these machines allowed engineers to develop completely new technologies and achieve new objectives. Early examples include the Apollo missions and the NASA moon landing.
The invention of the transistor in 1947, by William B. Shockley, John Bardeen, and Walter Brattain opened the door for more compact devices and led to the development of the integrated circuit in 1959 by Jack Kilby.
In 1969, Marcian Hoff conceived the microprocessor at Intel and, thus, ignited the development of the personal computer. Hoff's invention was part of an order by a Japanese company for a desktop programmable electronic calculator, which Hoff wanted to build as cheaply as possible. The first realization of the microprocessor was the Intel 4004, a 4-bit processor, released in 1971, but only in 1974 did the Intel 8080, an 8-bit processor, make the building of the first personal computer, the MITS Altair 8800, possible.
In the field of electronic engineering, engineers design and test circuits that use the electromagnetic properties of electrical components such as resistors, capacitors, inductors, diodes, and transistors to achieve a particular functionality. The tuner circuit, which allows the user of a radio to filter out all but a single station, is just one example of such a circuit.
In designing an integrated circuit, electronics engineers first construct circuit schematics that specify the electrical components and describe the interconnections between them. When completed, VLSI engineers convert the schematics into actual layouts, which map the layers of various conductor and semiconductor materials needed to construct the circuit. The conversion from schematics to layouts can be done by software (see electronic design automation) but very often requires human fine-tuning to decrease space and power consumption. Once the layout is complete, it can be sent to a fabrication plant for manufacturing.
Integrated circuits and other electrical components can then be assembled on printed circuit boards to form more complicated circuits. Today, printed circuit boards are found in most electronic devices including televisions, computers, and audio players.
Apart from electromagnetics and network theory, the other items in the syllabus are particular to the electronics engineering course. Electrical engineering courses have other specialisms such as machines, power generation, and distribution. Note that the following list does not include the large quantity of mathematics (except perhaps in the final year) included in each year's study.
Elements of vector calculus: divergence and curl; Gauss' and Stokes' theorems, Maxwell's equations: Differential and integral forms. Wave equation, Poynting vector. Plane waves: Propagation through various media; reflection and refraction; phase and group velocity; skin depth. Transmission lines: characteristic impedance; impedance transformation; Smith chart; impedance matching; pulse excitation. Waveguides: Modes in rectangular waveguides; boundary conditions; cut-off frequencies; dispersion relations. Antennas: Dipole antennas; antenna arrays; radiation pattern; reciprocity theorem, antenna gain.
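For reference, the differential forms of Maxwell's equations named in this syllabus item can be written in SI units as follows, with D and B the electric and magnetic flux densities, E and H the field intensities, ρ the free charge density, and J the free current density:

```latex
\begin{aligned}
  \nabla \cdot  \mathbf{D} &= \rho \\
  \nabla \cdot  \mathbf{B} &= 0 \\
  \nabla \times \mathbf{E} &= -\,\frac{\partial \mathbf{B}}{\partial t} \\
  \nabla \times \mathbf{H} &= \mathbf{J} + \frac{\partial \mathbf{D}}{\partial t}
\end{aligned}
```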
Network graphs: Matrices associated with graphs; incidence, fundamental cut set and fundamental circuit matrices. Solution methods: Nodal and mesh analysis. Network theorems: Superposition, Thevenin's and Norton's theorems, maximum power transfer, Wye-Delta transformation. Steady-state sinusoidal analysis using phasors. Linear constant-coefficient differential equations; time-domain analysis of simple RLC circuits. Solution of network equations using the Laplace transform: frequency-domain analysis of RLC circuits. 2-port network parameters: driving point and transfer functions. State equations for networks.
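As a minimal illustration of nodal analysis and the solution of network equations, the following Python sketch solves a small hypothetical resistive network; the component values are invented for the example and are not taken from the text.

```python
# Minimal nodal-analysis sketch (hypothetical component values):
# a 1 mA source drives node 1; R1 = 1 kΩ (node 1 to ground),
# R2 = 2 kΩ (node 1 to node 2), R3 = 3 kΩ (node 2 to ground).
import numpy as np

R1, R2, R3 = 1e3, 2e3, 3e3
Is = 1e-3  # source current injected into node 1 (amperes)

# Conductance (G) matrix and current vector for the two unknown node voltages.
G = np.array([
    [1/R1 + 1/R2, -1/R2],
    [-1/R2,        1/R2 + 1/R3],
])
I = np.array([Is, 0.0])

V = np.linalg.solve(G, I)
print(f"V1 = {V[0]:.4f} V, V2 = {V[1]:.4f} V")  # approximately 0.8333 V and 0.5000 V
```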
Electronic Devices: Energy bands in silicon, intrinsic and extrinsic silicon. Carrier transport in silicon: Diffusion current, drift current, mobility, resistivity. Generation and recombination of carriers. p-n junction diode, Zener diode, tunnel diode, BJT, JFET, MOS capacitor, MOSFET, LED, p-i-n and avalanche photodiodes, lasers. Device technology: Integrated circuit fabrication process, oxidation, diffusion, ion implantation, photolithography, n-tub, p-tub and twin-tub CMOS process.
Analog Circuits: Equivalent circuits (large and small-signal) of diodes, BJTs, JFETs, and MOSFETs. Simple diode circuits, clipping, clamping, rectifier. Biasing and bias stability of transistor and FET amplifiers. Amplifiers: Single-and multi-stage, differential, operational, feedback and power. Analysis of amplifiers; frequency response of amplifiers. Simple op-amp circuits. Filters. Sinusoidal oscillators; criterion for oscillation; single-transistor and op-amp configurations. Function generators and wave-shaping circuits, Power supplies.
Digital circuits: Minimization of Boolean functions; logic gates; digital IC families (DTL, TTL, ECL, MOS, CMOS). Combinational circuits: Arithmetic circuits, code converters, multiplexers and decoders. Sequential circuits: latches and flip-flops, counters and shift registers. Sample-and-hold circuits, ADCs, DACs. Semiconductor memories. Microprocessor (8085): Architecture, programming, memory and I/O interfacing.
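A combinational building block such as a 1-bit full adder can be sketched and exhaustively verified with a truth table; the following Python example is illustrative only.

```python
# Sketch of a 1-bit full adder built from the basic logic operations
# mentioned above (a combinational-circuit example; values are illustrative).

def full_adder(a, b, cin):
    """Return (sum, carry_out) for one-bit inputs a, b and carry-in cin."""
    s = a ^ b ^ cin                      # XOR chain gives the sum bit
    cout = (a & b) | (cin & (a ^ b))     # majority logic gives the carry
    return s, cout

# Exhaustive truth table: the standard way to verify a combinational circuit.
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            s, cout = full_adder(a, b, cin)
            print(a, b, cin, "->", s, cout)
```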
Definitions and properties of the Laplace transform, continuous-time and discrete-time Fourier series, continuous-time and discrete-time Fourier transforms, z-transform. Sampling theorems. Linear Time-Invariant (LTI) systems: definitions and properties; causality, stability, impulse response, convolution, poles and zeros, frequency response, group delay, phase delay. Signal transmission through LTI systems. Random signals and noise: Probability, random variables, probability density function, autocorrelation, power spectral density, and the analogy between vectors and functions.
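The convolution sum that defines the response of a discrete-time LTI system can be illustrated with a short Python sketch; the signal values below are arbitrary examples.

```python
# Sketch: the output of a discrete-time LTI system is the convolution of the
# input with the impulse response (signal values here are arbitrary examples).
import numpy as np

h = np.array([0.5, 0.3, 0.2])      # impulse response of a simple FIR filter
x = np.array([1.0, 0.0, 0.0, 1.0]) # input signal

y = np.convolve(x, h)              # y[n] = sum_k x[k] * h[n-k]
print(y)                           # [0.5 0.3 0.2 0.5 0.3 0.2]
```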
Basic control system components; block diagrammatic description, reduction of block diagrams—Mason's rule. Open loop and closed loop (negative unity feedback) systems and stability analysis of these systems. Signal flow graphs and their use in determining transfer functions of systems; transient and steady state analysis of LTI control systems and frequency response. Analysis of steady-state disturbance rejection and noise sensitivity.
Tools and techniques for LTI control system analysis and design: Root loci, Routh-Hurwitz criterion, Bode and Nyquist plots. Control system compensators: Elements of lead and lag compensation, elements of Proportional-Integral-Derivative (PID) control. Discretization of continuous-time systems using Zero-Order Hold (ZOH) and ADCs for digital controller implementation. Limitations of digital controllers: aliasing. State-variable representation and solution of the state equation of LTI control systems. Linearization of nonlinear dynamical systems with state-space realizations in both frequency and time domains. Fundamental concepts of controllability and observability for MIMO LTI systems. State-space realizations: observable and controllable canonical forms. Ackermann's formula for state-feedback pole placement. Design of full-order and reduced-order estimators.
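A minimal discrete-time sketch of PID control applied to a hypothetical first-order plant is shown below; the gains and plant constants are illustrative, not tuned values for any real system.

```python
# Minimal discrete-time PID sketch on a hypothetical first-order plant
# (gains and plant constants are illustrative).

dt, T_end = 0.01, 5.0
Kp, Ki, Kd = 2.0, 1.0, 0.1          # proportional, integral, derivative gains
tau, K_plant = 1.0, 1.0             # plant: tau * dy/dt + y = K_plant * u
setpoint = 1.0

y, integral, prev_err = 0.0, 0.0, setpoint
t = 0.0
while t < T_end:
    err = setpoint - y
    integral += err * dt
    derivative = (err - prev_err) / dt
    u = Kp * err + Ki * integral + Kd * derivative   # PID control law
    # Forward-Euler step of the first-order plant dynamics.
    y += dt * (K_plant * u - y) / tau
    prev_err = err
    t += dt

print(f"output after {T_end} s: {y:.3f} (setpoint {setpoint})")
```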
Analog communication systems: Amplitude and angle modulation and demodulation systems, spectral analysis of these operations, superheterodyne receivers, and performance under noise conditions.
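Amplitude modulation can be illustrated numerically; in the sketch below, the carrier and message frequencies and the modulation index are arbitrary example values.

```python
# Sketch of amplitude modulation (AM): a low-frequency message scales the
# envelope of a high-frequency carrier. Frequencies here are arbitrary examples.
import numpy as np

fs = 100_000                       # sample rate (Hz)
t = np.arange(0, 0.01, 1/fs)       # 10 ms of signal
fc, fm, m = 10_000, 500, 0.5       # carrier and message frequencies; modulation index

message = np.cos(2 * np.pi * fm * t)
am = (1 + m * message) * np.cos(2 * np.pi * fc * t)

# The envelope swings between (1 - m) and (1 + m) of the carrier amplitude.
print(am.min(), am.max())          # roughly -1.5 .. 1.5 for m = 0.5
```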
Digital communication systems: Pulse code modulation (PCM), differential pulse code modulation (DPCM), delta modulation (DM), digital modulation schemes-amplitude, phase and frequency shift keying schemes (ASK, PSK, FSK), matched filter receivers, bandwidth consideration and probability of error calculations for these schemes, GSM, TDMA.
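Pulse-code modulation can likewise be sketched as sampling followed by uniform quantization; the sampling rate and bit depth below are illustrative choices.

```python
# Sketch of pulse-code modulation (PCM): sample a waveform and quantize each
# sample with a uniform n-bit quantizer (parameters are illustrative).
import numpy as np

fs, n_bits = 8_000, 8                     # sampling rate and resolution
t = np.arange(0, 0.002, 1/fs)             # 2 ms of signal
x = np.sin(2 * np.pi * 1_000 * t)         # 1 kHz test tone in [-1, 1]

levels = 2 ** n_bits
codes = np.round((x + 1) / 2 * (levels - 1)).astype(int)   # integer codewords 0..255
x_hat = codes / (levels - 1) * 2 - 1                        # reconstructed samples

# Error is bounded by half a quantization step: 1/255, about 0.004 here.
print("max quantization error:", np.max(np.abs(x - x_hat)))
```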
Electronics engineers typically possess an academic degree with a major in electronic engineering. The length of study for such a degree is usually three or four years and the completed degree may be designated as a Bachelor of Engineering, Bachelor of Science or Bachelor of Applied Science depending upon the university. Many UK universities also offer Master of Engineering (MEng) degrees at undergraduate level.
The degree generally includes units covering physics, mathematics, project management and specific topics in electrical engineering. Initially such topics cover most, if not all, of the subfields of electronic engineering. Students then choose to specialize in one or more subfields towards the end of the degree.
Some electronics engineers also choose to pursue a postgraduate degree such as a Master of Science (MSc), Doctor of Philosophy in Engineering (PhD), or an Engineering Doctorate (EngD). The Master degree is being introduced in some European and American Universities as a first degree and the differentiation of an engineer with graduate and postgraduate studies is often difficult. In these cases, experience is taken into account. The Master and Engineer's degree may consist of either research, coursework or a mixture of the two. The Doctor of Philosophy consists of a significant research component and is often viewed as the entry point to academia.
In most countries, a Bachelor's degree in engineering represents the first step towards certification and the degree program itself is certified by a professional body. After completing a certified degree program the engineer must satisfy a range of requirements (including work experience requirements) before being certified. Once certified the engineer is designated the title of Professional Engineer (in the United States and Canada), Chartered Engineer or Incorporated Engineer (in the United Kingdom, Ireland, India, South Africa and Zimbabwe), Chartered Professional Engineer (in Australia) or European Engineer (in much of the European Union).
Fundamental to the discipline are the sciences of physics and mathematics as these help to obtain both a qualitative and quantitative description of how such systems will work. Today most engineering work involves the use of computers and it is commonplace to use computer-aided design programs when designing electronic systems. Although most electronic engineers will understand basic circuit theory, the theories employed by engineers generally depend upon the work they do. For example, quantum mechanics and solid state physics might be relevant to an engineer working on VLSI but are largely irrelevant to engineers working with macroscopic electrical systems.
Some locations require a license for one to legally be called an electronics engineer, or an engineer in general. For example, in the United States and Canada "only a licensed engineer may seal engineering work for public and private clients." This requirement is enforced by state and provincial legislation such as Quebec's Engineers Act. In other countries, such as Australia, no such legislation exists. Practically all certifying bodies maintain a code of ethics that they expect all members to abide by or risk expulsion. In this way, these organizations play an important role in maintaining ethical standards for the profession. Even in jurisdictions where licenses are not required, engineers are subject to the law. For example, much engineering work is done by contract and is therefore covered by contract law. In cases where an engineer's work fails he or she may be subject to the tort of negligence and, in extreme cases, the charge of criminal negligence. An engineer's work must also comply with numerous other rules and regulations such as building codes and legislation pertaining to environmental law.
In locations where licenses are not required, professional certification may be advantageous.
Professional bodies of note for electrical engineers include the Institute of Electrical and Electronics Engineers (IEEE) and the Institution of Electrical Engineers (IEE), now the Institution of Engineering and Technology (IET). The IEEE claims to produce 30 percent of the world's literature in electrical/electronic engineering, has over 370,000 members, and holds more than 450 IEEE-sponsored or cosponsored conferences worldwide each year. The IEE publishes 14 journals, has a worldwide membership of 120,000, certifies Chartered Engineers in the United Kingdom, and claims to be the largest professional engineering society in Europe.
Electronic engineering in Europe is a very broad field that encompasses many subfields including those that deal with, electronic devices and circuit design, control systems, electronics and telecommunications, computer systems, embedded software, and so on. Many European universities now have departments of Electronics that are completely separate from or have completely replaced their electrical engineering departments.
Electronics engineering has many subfields. This section describes some of the most popular subfields in electronic engineering. Although there are engineers who focus exclusively on one subfield, there are also many who focus on a combination of subfields.
Electronic engineering involves the design and testing of electronic circuits that use the electronic properties of components such as resistors, capacitors, inductors, diodes, and transistors to achieve a particular functionality.
Signal processing deals with the analysis and manipulation of signals. Signals can be either analog, in which case the signal varies continuously according to the information, or digital, in which case the signal varies according to a series of discrete values representing the information.
For analog signals, signal processing may involve the amplification and filtering of audio signals for audio equipment or the modulation and demodulation of signals for telecommunications. For digital signals, signal processing may involve the compression, error checking, and error detection of digital signals.
Transmissions across free space require information to be encoded in a carrier wave in order to shift the information to a carrier frequency suitable for transmission, this is known as modulation. Popular analog modulation techniques include amplitude modulation and frequency modulation. The choice of modulation affects the cost and performance of a system and these two factors must be balanced carefully by the engineer.
Once the transmission characteristics of a system are determined, telecommunication engineers design the transmitters and receivers needed for such systems. These two are sometimes combined to form a two-way communication device known as a transceiver. A key consideration in the design of transmitters is their power consumption as this is closely related to their signal strength. If the signal strength of a transmitter is insufficient the signal's information will be corrupted by noise.
Control engineering has a wide range of applications from the flight and propulsion systems of commercial airplanes to the cruise control present in many modern cars. It also plays an important role in industrial automation.
Control engineers often utilize feedback when designing control systems. For example, in a car with cruise control the vehicle's speed is continuously monitored and fed back to the system which adjusts the engine's power output accordingly. Where there is regular feedback, control theory can be used to determine how the system responds to such feedback.
Instrumentation engineering deals with the design of devices to measure physical quantities such as pressure, flow, and temperature. These devices are known as instrumentation.
The design of such instrumentation requires a good understanding of physics that often extends beyond electromagnetic theory. For example, radar guns use the Doppler effect to measure the speed of oncoming vehicles. Similarly, thermocouples use the Peltier-Seebeck effect to measure the temperature difference between two points.
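The Doppler calculation behind a radar speed gun can be sketched in a few lines; the carrier frequency and measured shift below are assumed example values, and the formula applies to a target moving directly toward a stationary radar.

```python
# Sketch of the Doppler-shift calculation behind a radar speed gun
# (numbers are illustrative example values).
c = 3.0e8          # speed of light, m/s
f0 = 24.15e9       # typical K-band radar carrier frequency, Hz
delta_f = 2_236.0  # measured Doppler shift, Hz (hypothetical reading)

# Round-trip reflection doubles the shift: delta_f = 2 * v * f0 / c
v = delta_f * c / (2 * f0)
print(f"target speed = {v:.1f} m/s ({v * 3.6:.1f} km/h)")
```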
Often instrumentation is not used by itself, but instead as the sensors of larger electrical systems. For example, a thermocouple might be used to help ensure a furnace's temperature remains constant. For this reason, instrumentation engineering is often viewed as the counterpart of control engineering.
Computer engineering deals with the design of computers and computer systems. This may involve the design of new hardware, the design of PDAs or the use of computers to control an industrial plant. Computer engineers may also work on a system's software. However, the design of complex software systems is often the domain of software engineering, which is usually considered a separate discipline.
Desktop computers represent a tiny fraction of the devices a computer engineer might work on, as computer-like architectures are now found in a range of devices including video game consoles and DVD players.
For most engineers not involved at the cutting edge of system design and development, technical work accounts for only a fraction of the work they do. A lot of time is also spent on tasks such as discussing proposals with clients, preparing budgets and determining project schedules. Many senior engineers manage a team of technicians or other engineers and for this reason project management skills are important. Most engineering projects involve some form of documentation and strong written communication skills are therefore very important.
The workplaces of electronics engineers are just as varied as the types of work they do. Electronics engineers may be found in the pristine laboratory environment of a fabrication plant, the offices of a consulting firm or in a research laboratory. During their working life, electronics engineers may find themselves supervising a wide range of individuals including scientists, electricians, computer programmers and other engineers.
Obsolescence of technical skills is a serious concern for electronics engineers. Membership and participation in technical societies, regular review of periodicals in the field, and a habit of continued learning are therefore essential to maintaining proficiency. This is especially true in the fast-moving field of consumer electronics products.
All links retrieved September 15, 2017.
New World Encyclopedia writers and editors rewrote and completed the Wikipedia article in accordance with New World Encyclopedia standards. This article abides by terms of the Creative Commons CC-by-sa 3.0 License (CC-by-sa), which may be used and disseminated with proper attribution. Credit is due under the terms of this license that can reference both the New World Encyclopedia contributors and the selfless volunteer contributors of the Wikimedia Foundation. To cite this article click here for a list of acceptable citing formats. The history of earlier contributions by wikipedians is accessible to researchers here:
The history of this article since it was imported to New World Encyclopedia:
With rapid environmental changes, the global disease spectrum has shifted from an infectious disease model to a chronic non-communicable disease model. In China, the most populous country in the world, cancer has become the leading cause of death and is a major public health problem. As estimated by Chen et al., approximately 4,292,000 new invasive cancer cases were identified in China in 2015, and 2,814,000 Chinese people died from cancer in that year. Lung cancer, stomach cancer, and esophageal cancer are the most commonly diagnosed cancers among both men and women in China.
In view of the high incidence of cancer in a country with such a large population, it is necessary for tertiary hospitals and secondary healthcare institutions to collaborate and provide integrated care for cancer patients. However, secondary healthcare facilities in China are targeted at common and minor conditions. Their capability to cope with cancer patients is limited by a lack of medical expertise, inadequate equipment, and poor secondary care teamwork [2, 3]. The Chinese government has been trying to construct various types of vertical integration among tertiary, secondary, and primary care, hoping that this effort will improve the medical expertise and skills of personnel in secondary institutions [4, 5, 6]. At present, there are three models of vertical integration: loose integration, the medical consortium, and direct management.
The medical consortium (Chinese Pinyin: Yi Liao Lian He Ti), established nationwide, has been widely encouraged by the National Health and Family Planning Commission. According to Leutz, integrated care is defined as the effort to connect the healthcare system, including acute, primary medical and advanced care, with other human service systems in order to improve outcomes. Integrated care generally includes horizontal integration, vertical integration, system integration, organizational integration, and others. A medical consortium is a form of vertically integrated care that typically involves one widely recognized tertiary hospital and several secondary hospitals or community health centers, and it improves the outcomes of patients through the collaboration of different levels of medical care [8, 9]. Shared medical professionals and electronic medical records, remote medical treatment, and contracted relationships among the medical consortium hospitals make it possible for patients to receive continuous care. As described by the General Office of the State Council, the primary purpose of establishing medical consortiums is to encourage experienced physicians in tertiary hospitals to work in primary and secondary healthcare facilities and thereby help medical professionals there improve the quality of care.
In June 2014, the Health and Family Planning Commission of Shanxi Province initiated a pilot of medical consortium construction, including 10 core tertiary hospitals and several secondary hospitals . The first round of medical consortium construction was completed by the end of 2014. Among the 10 core tertiary hospitals that led the first round of the reform, the only one specialized in cancer diagnosis was Shanxi Provincial Cancer Hospital, with 15 secondary hospitals participating in the "cancer medical consortium" and serving approximately 30 million residents in total. As reported by the media, Shanxi Provincial Cancer Hospital tried to help these consortium secondary hospitals in three ways . Firstly, an expert team was built specifically for the cancer medical consortium. These experts took turns serving in the consortium secondary hospitals and contributed to improving staff's medical skills, patient consultation, cancer screening, guiding surgery and health education [Shanxi Provincial Hospital]. Secondly, standardized continuing education and medical training specialized in cancer treatment were provided to doctors and nurses, with the aim of training 1 to 2 qualified doctors and 2 to 4 specialized nurses for each consortium hospital each year. Thirdly, a two-way referral system was established between the leading tertiary hospital and the secondary hospitals: patients with complicated and severe conditions in secondary hospitals were transferred to the tertiary hospital to receive advanced care, while those with minor conditions or in the recovery phase in the tertiary hospital were transferred to secondary hospitals to reduce costs.
Although the medical consortium policy has been implemented nationwide in China, very little evidence based on patient-level empirical data has been published on its effect. Therefore, in this study we aim to explore the effects of a medical consortium model on the health outcomes of cancer patients in Shanxi, China. Based on the standardized electronic records of lung, stomach, and esophageal cancer patients, we compare the relative risks of patients admitted to secondary hospitals in the medical consortium with those of patients admitted to secondary hospitals not affiliated with a medical consortium.
Shanxi province is located in northern China. According to the Statistical Yearbook of Shanxi, there were 36.3 million residents in Shanxi in 2013, 52.6% of them living in urban areas. This study is based on the standardized administrative electronic health records (EHRs) in the database of the Health and Family Planning Commission in Shanxi. This EHR system, with over 200 variables, was standardized and assigned to hospitals all over the country as a compulsory system by the Ministry of Health in 2011. We collected data on inpatients over 18 years old who were hospitalized in secondary hospitals in the year after the medical consortium pilot (January 2015 to December 2015). The International Classification of Diseases 10th Revision (ICD-10) was used to identify patients diagnosed with lung cancer (C34.000–C34.902), stomach cancer (C16.000–C16.903) and esophageal cancer (C15.000–C15.900). All patients' and medical practitioners' personal identifiers (such as name, ID card number, and insurance number) were excluded before the study started. The data contain information about patients' demographic characteristics (age, gender, marriage status, etc.), diagnosis codes (ICD-10 codes for the main diagnosis and up to 10 secondary diagnoses) and outcomes (discharge outcomes during the hospitalization). In total, 8,193 lung cancer patients, 5,693 stomach cancer patients, and 2,802 esophageal cancer patients were identified.
Since patients admitted to medical consortium hospitals may differ systematically from those in non-medical consortium hospitals in both patient-level and hospital-level characteristics, we used propensity scores to match each patient enrolled in a medical consortium hospital with a similar counterpart in a non-medical consortium hospital. Propensity score matching balances and controls for observable covariates and reduces the chance of selection bias . In essence, the propensity score comes from a logistic regression in which the binary variable of whether the patient was admitted to a medical consortium hospital is the outcome, predicted by a number of patient-level and hospital-level covariates. In this study, we constructed the propensity score model with five patient-level covariates (gender, age, status of the patient upon admission, whether surgery was conducted on the patient, and the C3 index) and five hospital-level covariates: the number of open beds, the number of regular budget physicians, the number of extra contracted physicians, the number of regular budget nurses, and the number of extra contracted nurses.
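As a rough illustration of this matching step, the sketch below re-creates it in Python rather than the R/SAS tooling the authors actually used; the data frame and column names (consortium, age_group, c3_index and so on) are hypothetical placeholders, and the covariates are assumed to be numerically coded.

```python
# Minimal sketch: estimate propensity scores with a logistic regression, then do
# greedy 1:1 nearest-neighbour matching without replacement.
import pandas as pd
from sklearn.linear_model import LogisticRegression

COVARIATES = ["gender", "age_group", "admission_status", "surgery", "c3_index",
              "open_beds", "budget_physicians", "contract_physicians",
              "budget_nurses", "contract_nurses"]

def match_one_to_one(df: pd.DataFrame) -> pd.DataFrame:
    """Return the treated patients plus one matched control per treated patient."""
    model = LogisticRegression(max_iter=1000)
    model.fit(df[COVARIATES], df["consortium"])          # 1 = medical consortium
    df = df.assign(pscore=model.predict_proba(df[COVARIATES])[:, 1])

    treated = df[df["consortium"] == 1]
    controls = df[df["consortium"] == 0].copy()

    matched_controls = []
    for _, patient in treated.iterrows():
        nearest = (controls["pscore"] - patient["pscore"]).abs().idxmin()
        matched_controls.append(controls.loc[nearest])
        controls = controls.drop(index=nearest)          # match without replacement
    return pd.concat([treated, pd.DataFrame(matched_controls)])
```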
Previous studies based on administrative databases have used in-hospital death as the outcome, since data following discharge are normally inaccessible [16, 17]. Using this outcome could be biased, however, because patients may choose to die at home when the chance of recovery is low. This problem cannot be ignored when studying cancer patients in China: given the culture of strong family ties, filial piety, and hospice care, approximately two-thirds of cancer patients in China would prefer to die in their own homes [18, 19]. Classifying a patient's outcome as a binary of death or non-death would therefore misclassify those who chose to go home and died shortly afterwards. In this study, instead of a binary variable of death or non-death, we used recovery or non-recovery as the binary part of the outcome variable. The outcome variable includes two parts: a binary variable that indicates the occurrence of the event and a time variable for survival time. For the first part, we used a binary variable of recovery at the time of discharge, coding death and "not recovered" upon discharge as 1 (the event), and full recovery, improvement upon discharge (no event) and unknown discharge status (moving out/dropping off) as 0. For the second part, we used the length of stay (days) in the hospital as the patient's survival time.
The explanatory variables in our analysis include five patient-level covariates (gender, age group, status upon admission, whether surgery was conducted, and the C3 index) and five hospital-level covariates (the number of open beds, the number of regular budget physicians, the number of extra contracted physicians, the number of regular budget nurses, and the number of extra contracted nurses). We recoded age into six categories and defined the 18–44 age group as the reference, because this group was expected to be in better physical health and to have better outcomes despite the diagnosis of cancer; the other age groups were classified in ten-year intervals, and inpatients in higher age groups were expected to have worse outcomes. Gender is also important in predicting the outcomes of cancer patients, as significant cancer disparities have been observed between males and females in China. Status upon admission was another factor that could influence outcomes: patients classified as "urgent" or "acute" were expected to have worse in-hospital outcomes than normal patients. Whether surgery was conducted could also influence outcomes because of the risk of complications and nosocomial infections. Comorbidity has an important impact on the outcomes of cancer patients. Several comorbidity indices have been developed for administrative healthcare data to measure the severity of patients' comorbidities, such as the Charlson Comorbidity Index and the Elixhauser Index [21, 22, 23, 24]. These two indices have been widely used in predicting patients' long-term (one-year) outcomes and mortality [25, 26, 27, 28]. However, they were not developed specifically for cancer patients, and their use in the current setting has not been validated. We therefore adopted a cancer-specific comorbidity index, the C3 (Cancer Care and Comorbidity) index, as the measure of patients' comorbidity. The C3 index comprises 42 comorbid conditions and outperformed the Charlson Index and the National Cancer Institute index for cancer patients. The 42 comorbidities were identified through their corresponding ICD-10 codes. The C3 index is a continuous score ranging between –0.03 and 32.42, where a larger value indicates more severe comorbidities.
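To make the comorbidity scoring concrete, here is a toy Python sketch of how a weighted index of this kind can be computed from secondary diagnosis codes; the ICD-10 prefixes and weights below are invented placeholders, not the published C3 weights, which come from the work of Sarfati and colleagues cited above.

```python
# Toy comorbidity scorer: sum condition weights over a patient's secondary diagnoses.
# The prefixes and weights are illustrative stand-ins only.
PLACEHOLDER_WEIGHTS = {
    "I50": 1.26,   # hypothetical weight for a cardiac condition
    "N18": 1.10,   # hypothetical weight for chronic kidney disease
    "E11": 0.42,   # hypothetical weight for type 2 diabetes
}

def comorbidity_score(secondary_dx_codes):
    """Each condition counts once, no matter how many matching codes appear."""
    found = {prefix
             for code in secondary_dx_codes
             for prefix in PLACEHOLDER_WEIGHTS
             if code.startswith(prefix)}
    return sum(PLACEHOLDER_WEIGHTS[p] for p in found)

print(comorbidity_score(["E11.9", "I50.0", "J45.9"]))  # 1.68 with these placeholder weights
```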
The statistical analysis was conducted in two steps. First, we used propensity score matching to pair each patient enrolled in a medical consortium hospital with a similar counterpart hospitalized in a non-medical consortium hospital (one-to-one matching). Second, we used multivariate Cox proportional hazards models to estimate hazard ratios for matched patients enrolled in medical consortium hospitals relative to those enrolled in non-medical consortium hospitals. The proportional hazards assumption was evaluated with the empirical score process based on cumulative sums of martingale-based residuals . For each model, we created an interaction term between the time variable and any variable with a p value below 5%, with p values determined by the Kolmogorov-type supremum test. All data manipulation, statistical analyses, and data visualizations were carried out in RStudio (Version 1.0.44), while the empirical score process was performed in SAS 9.4.
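The second step could look like the following sketch, which uses the Python lifelines package in place of the R/SAS code the authors ran; the column names are again hypothetical, and check_assumptions is only a rough analogue of the martingale-residual-based score test described above.

```python
# Cox proportional hazards model on the matched sample (sketch).
from lifelines import CoxPHFitter

# `matched` is assumed to hold: the length of stay in days, the event indicator
# (1 = died or not recovered at discharge), and the covariates listed in Table 2.
model_df = matched[["los_days", "event", "consortium", "gender", "age_group",
                    "admission_status", "surgery", "c3_index", "open_beds",
                    "budget_physicians", "contract_physicians",
                    "budget_nurses", "contract_nurses"]]

cph = CoxPHFitter()
cph.fit(model_df, duration_col="los_days", event_col="event")
cph.print_summary()              # hazard ratios with 95% confidence intervals
cph.check_assumptions(model_df)  # flags covariates that violate proportional hazards
```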
Table 1 displays the characteristics of lung, stomach, and esophageal cancer patients enrolled in medical consortium hospitals, in non-medical consortium hospitals before matching, and in non-medical consortium hospitals after matching, together with the percentage of improvement achieved by propensity score matching. As shown in Table 1, the number of patients hospitalized in non-medical consortium hospitals exceeded the number in medical consortium hospitals, and there were large differences in patient characteristics between the two groups before matching. Patients in non-medical consortium hospitals had a higher C3 index score and a lower percentage of normal-status admissions than those in medical consortium hospitals, indicating that more severe patients were admitted to non-medical consortium hospitals. An average improvement of 57.2% in the logistic distance score was observed after propensity score matching, although the variation in hospital characteristics increased after matching.
| Variable (means) | Lung: Treated (n = 1,598) | Lung: Control (n = 6,595) | Lung: Matched control (n = 1,598) | Lung: % improvement | Stomach: Treated (n = 1,008) | Stomach: Control (n = 4,685) | Stomach: Matched control (n = 1,008) | Stomach: % improvement | Esophageal: Treated (n = 451) | Esophageal: Control (n = 2,351) | Esophageal: Matched control (n = 451) | Esophageal: % improvement |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Status upon admission | | | | | | | | | | | | |
| Surgery conducted or not | 0.033 | 0.448 | 0.031 | 99.5 | 0.142 | 0.494 | 0.204 | 82.2 | 0.087 | 0.522 | 0.129 | 90.3 |
| No. of open beds | 427.4 | 474.0 | 338.6 | –91.0 | 407.9 | 429.6 | 357.5 | –131.7 | 413.4 | 457.7 | 354.8 | –32.1 |
| No. of regular budget physicians | 137.9 | 151.1 | 117.8 | –52.1 | 122.4 | 134.5 | 109.1 | –9.5 | 132.5 | 137.2 | 113.4 | –307.7 |
| No. of extra contracted physicians | 137.7 | 153.2 | 113.5 | –55.6 | 124.4 | 132.4 | 107.1 | –116.6 | 123.9 | 133.2 | 105.7 | –96.0 |
| No. of regular budget nurses | 38.3 | 58.8 | 26.1 | 40.7 | 35.7 | 51.9 | 29.8 | 62.8 | 36.3 | 55.6 | 27.4 | 54.2 |
| No. of extra contracted nurses | 154.2 | 141.2 | 110.5 | –234.4 | 146.1 | 128.0 | 127.6 | –2.2 | 150.9 | 135.3 | 120.0 | –97.8 |
Figure 1 shows the Kaplan-Meier survival curves of matched patients, where the blue lines indicate patients enrolled in medical consortium hospitals, while red lines indicate patients enrolled in non-medical consortium hospitals. The plot indicates that patients enrolled in medical consortiums consistently had higher survival probabilities, compared with those in non-medical consortium hospitals at the same survival time, regardless of types of cancers. Similarly, Figure 2 shows the Kaplan-Meier survival curves of the lung, stomach, and esophageal cancer patients with matched data. The plot indicated that patients enrolled in medical consortiums consistently had higher survival probabilities, compared with those in non-medical consortium hospitals at the same survival time across three types of cancers. Nonetheless, the confidence intervals had small intersections after 50 days.
Table 2 illustrates the estimates of the Cox hazard models for lung, stomach, and esophageal cancer matched patients. After checking the proportional hazard assumptions with the empirical score process, it was found that the C3 variables for both lung cancer patients and stomach cancer patients did not meet the proportional hazard assumption. Therefore, an interaction term was added at the end of the variable column for the lung cancer and stomach cancer groups. Lower hazard ratios were associated with patients enrolled in medical consortium hospitals across lung cancer (hazard ratio = 0.533, p < 0.001), stomach cancer (hazard ratio = 0.494, p < 0.001) and esophageal cancer patients (hazard ratio = 0.505, p < 0.001).
| Parameter | Lung: HR | Lung: Pr > ChiSq | Lung: 95% CI | Stomach: HR | Stomach: Pr > ChiSq | Stomach: 95% CI | Esophageal: HR | Esophageal: Pr > ChiSq | Esophageal: 95% CI |
|---|---|---|---|---|---|---|---|---|---|
| Non-medical consortium hospitals | Ref. | Ref. | Ref. | Ref. | Ref. | Ref. | Ref. | Ref. | Ref. |
| Medical consortium hospitals | 0.533 | <.001 | 0.439–0.648 | 0.494 | <.001 | 0.386–0.634 | 0.505 | <.001 | 0.361–0.708 |
| Status upon admission | | | | | | | | | |
| Surgery conducted or not | 0.543 | 0.023 | 0.320–0.921 | 0.596 | 0.006 | 0.411–0.864 | 1.307 | 0.334 | 0.760–2.248 |
| No. of open beds | 0.998 | <.001 | 0.997–0.999 | 0.998 | 0.002 | 0.996–0.999 | 0.999 | 0.209 | 0.997–1.001 |
| No. of regular budget physicians | 1.006 | <.001 | 1.003–1.008 | 1.009 | 0.000 | 1.004–1.014 | 1.004 | 0.196 | 0.998–1.009 |
| No. of extra contracted physicians | 1.013 | <.001 | 1.008–1.018 | 1.009 | 0.065 | 0.999–1.018 | 0.998 | 0.765 | 0.986–1.010 |
| No. of regular budget nurses | 0.995 | <.001 | 0.993–0.998 | 0.989 | <.001 | 0.984–0.993 | 0.997 | 0.242 | 0.991–1.002 |
| No. of extra contracted nurses | 0.993 | <.001 | 0.991–0.995 | 0.999 | 0.531 | 0.995–1.002 | 0.997 | 0.192 | 0.993–1.001 |
The medical consortium policy was not widely explored and promoted in China until about five years ago. The standardized electronic medical record system in Shanxi province makes it possible to evaluate the effect of the medical consortium on cancer patients' health outcomes. To our knowledge, the current study is the first attempt to explore the effects of a medical consortium policy on patients using quantitative data in China. We found that the hazards of unfavorable outcomes for lung, stomach and esophageal cancer patients admitted to medical consortium hospitals were consistently and significantly lower than for those admitted to non-medical consortium hospitals, after adjusting for a number of potential patient-level and hospital-level confounders.
According to the official document released by the Health and Family Planning Commission in Shanxi, the medical consortium pilot in 2014 focused on eight key fields: key clinical specialties, pair-up support of urban hospitals for rural hospitals, multisite practice of physicians, two-way referrals, centralized medical examination, telemedicine and innovation in the medical payment system . Effective implementation of these elements by the leading hospitals was crucial to any positive effect on patients. This is especially important for patients diagnosed with cancer, since cancer is a complicated chronic disease that routinely requires medical expertise and multidisciplinary coordination [16, 33, 34]. The expertise and experience of specialists from the leading hospitals in the medical consortium can provide valuable lessons for physicians in secondary hospitals. Meanwhile, patients can gain access to advanced medical equipment and therapies in the leading hospitals through telemedicine or two-way referrals. These are all potential reasons for the success of a medical consortium.
As mentioned in the introduction, the leading hospital, Shanxi Provincial Cancer Hospital, took three actions to improve medical quality and service in the consortium secondary hospitals. In our understanding, the expert team built specifically for this cancer medical consortium is probably the primary reason for the significant improvement in the outcomes of cancer patients in these hospitals. According to statistics from Shanxi Provincial Hospital, it had provided specialty consulting services for 320 cases and guided 30 surgeries on site in consortium secondary hospitals by the end of March 2015 . The collaboration between these experts and the physicians in consortium secondary hospitals has given local people access to quality tertiary services without travelling all the way to the metropolitan centers, and could explain much of the positive result in this study. Further education and specialized training may have long-term effects on medical workers in consortium secondary hospitals, but we suspect they would not produce such a significant improvement in patients' outcomes within just one year of the pilot. The two-way referral system is intended to realize the hierarchical care system, but the possibility that patients in consortium secondary hospitals were systematically less severe than patients in non-medical consortium hospitals has been ruled out by the propensity score matching in the first stage of the analysis.
Despite the favorable effects of the medical consortium on the outcomes of cancer patients found in this study, the medical consortium is far from a panacea. Driven by the popularity of the nationwide medical consortium policy and by administrative pressure, leading hospitals send their best experts and lend their most advanced medical equipment to county and secondary hospitals without sufficient reimbursement. These experts and this equipment are underused in regions with lower population density and purchasing power, and could have generated greater economic value had they remained with the leading hospitals. Administrative pressure alone cannot motivate leading hospitals to play an active role in the long term. One plausible incentive is the two-way referral mechanism, through which the leading hospitals can obtain more patients from their alliances with county and secondary hospitals. These alliances are exactly why we should remain cautious about the medical consortium policy: when the only incentive for leading hospitals is obtaining more patients from their alliances, the leading hospitals are essentially expanding their territories by taking advantage of the policy, which compromises competition in the hospital market. Evidence from the United States, England, the Netherlands, and China indicates that competition can improve medical quality and health outcomes [36, 37, 38, 39]. If the major incentive for the leading hospital is expanding its sources of patients, it is unlikely that a medical consortium policy can generate positive effects on patients' outcomes in the long run.
This study has four limitations. Firstly, a standardized hospital information system was not established on a large scale before the implementation of the medical consortium policy, so we were unable to collect data from 2013 and 2014 and could not examine the causal effect of the policy on patients' outcomes. Secondly, data following patients' discharge were inaccessible to the current study, which may lead to potential biases. Thirdly, cancer stage could not be identified because we used a general electronic health record database rather than a database specifically for cancer patients. Lastly, some patients might have been transferred from the tertiary hospital to secondary hospitals within the medical consortium; we could not identify these transferred patients because individual patient identifiers were deleted before we had access to the data.
Implementing the medical consortium policy in Shanxi has led to positive effects on cancer patients’ health outcomes. Policymakers should learn from the experience of establishing cancer medical consortiums in Shanxi, China and pilot a medical consortium model for patients diagnosed with other diseases and in other regions in China.
The authors thank the China Scholarship Council for supporting the first author’s visit to the Department of Health Management and Policy at Saint Louis University, Saint Louis, Missouri. We also thank the Health and Family Planning Commission in Shanxi for providing us with the data used in this study.
Li Kuang, Professor, Department of Health Administration, School of Public Health Sun Yat-sen University, China.
One anonymous reviewer.
This work was supported by the National Natural Science Foundation of China (Grant Number: 71473099).
The authors have no competing interests to declare.
Chen, W, Zheng, R, Baade, PD, Zhang, S, Zeng, H, Bray, F, Jemal, A, Yu, XQ and He, J. Cancer statistics in China, 2015. CA: A Cancer Journal for Clinicians, 2016; 66(2): 115–132. DOI: https://doi.org/10.3322/caac.21338
Yip, W and Hsiao, W. Harnessing the privatisation of China’s fragmented health-care delivery. The Lancet, 2014; 384(9945): 805–818. DOI: https://doi.org/10.1016/S0140-6736(14)61120-X
Yip, W and Hsiao, WC. What drove the cycles of Chinese health system reforms? Health Systems & Reform, 2015; 1(1): 52–61. DOI: https://doi.org/10.4161/23288604.2014.995005
Xu, J, Pan, R, Pong, R, Miao, Y and Qian, D. Different Models of Hospital–Community Health Centre Collaboration in Selected Cities in China: A Cross-Sectional Comparative Study. International Journal of Integrated Care, 2016; 16(1): 1–12. DOI: https://doi.org/10.5334/ijic.2456
Ministry of Finance, The People’s Republic of China. Opinions of General Office of the State Council on the All-out Launch of the Comprehensive Reform of County Public Hospitals [In Chinese] 2015. Available from: http://www.mof.gov.cn/zhengwuxinxi/zhengcefabu/201505/t20150511_1229838.htm.
Wang, X, Birch, S, Ma, H, Zhu, W and Meng, Q. The Structure and Effectiveness of Health Systems: Exploring the Impact of System Integration in Rural China. International Journal of Integrated Care, 2016; 16(3): 1–12. DOI: https://doi.org/10.5334/ijic.2197
Leutz, WN. Five laws for integrating medical and social services: lessons from the United States and the United Kingdom. The Milbank Quarterly, 1999; 77(1): 77–110. DOI: https://doi.org/10.1111/1468-0009.00125
Valentijn, PP, Schepman, SM, Opheij, W and Bruijnzeels, MA. Understanding integrated care: a comprehensive conceptual framework based on the integrative functions of primary care. International Journal of Integrated Care, 2013; 13(1): 1–12. DOI: https://doi.org/10.5334/ijic.886
He, S, Liu, Z, Sun, B, Zhao, D, Zhang, R and Dou, F. Evaluation on Medical Alliance Development: Based on Bibliometrics Analysis [in Chinese]. Modern Hospital Management, 2016; 14(3): 2–6. DOI: https://doi.org/10.3969/j.issn.1672-4232.2016.03.001
General Office of the State Council, The People’s Republic of China. Guiding Opinions of the General Office of the State Council on Propelling the Construction of a Hierarchical Diagnosis and Treatment System [In Chinese] 2015. Available from: http://www.gov.cn/zhengce/content/2015-09/11/content_10158.htm.
Health and Family Planning Commission of Shanxi, The People’s Republic of China. Guiding Opinions of Health and Family Planning, Shanxi Province on Constructing Medical Consortium [In Chinese] 2014. Available from: http://www.sxws.cn/Bureau/MesIssueContentBeta2.asp?SubType=tggg&ConID=4477.
Shanxi Provincial Cancer Hospital. Cancer Medical Consortium in Shanxi are Under Construction with Exploration [In Chinese] 2015. Available from: http://www.sxzlyy.com/Html/News/Articles/101641.html.
Bureau of Statistics, Shanxi, The People’s Republic of China. Statistical Yearbook 2014 [In Chinese] 2014. Available from: http://www.stats-sx.gov.cn/tjsj/tjnj/nj2014/html/njcx.htm.
National Health and Family Planning Commission of the People’s Republic of China. The Notice on Revising the First Page of Hospitalized Patient Records by Ministry of Health [In Chinese] 2011. Available from: http://www.moh.gov.cn/mohyzs/s3585/201111/53492.shtml.
Dehejia, RH and Wahba, S. Propensity score-matching methods for nonexperimental causal studies. The Review of Economics and Statistics, 2002; 84(1): 151–161. DOI: https://doi.org/10.1162/003465302317331982
Lin, XJ, Tao, HB, Cai, M, Cheng, ZH, Wang, ML, Xu, C, Lin, HF and Jin, L. Health insurance and quality and efficiency of medical care for patients with acute myocardial infraction in tertiary hospitals in Shanxi, China: a retrospective study. The Lancet, 2016 Oct 31; 388: S70. DOI: https://doi.org/10.1016/S0140-6736(16)31997-3
Xu, Y, Liu, Y, Shu, T, Yang, W and Liang, M. Variations in the quality of care at large public hospitals in Beijing, China: a condition-based outcome approach. PLoS ONE, 2015; 10(10): e0138948. DOI: https://doi.org/10.1371/journal.pone.0138948
Tang, ST. Meanings of dying at home for Chinese patients in Taiwan with terminal cancer: a literature review. Cancer Nursing, 2000; 23(5): 367–370. DOI: https://doi.org/10.1097/00002820-200010000-00007
Beng, AK, Fong, CW, Shum, E, Goh, CR, Goh, KT and Chew, SK. Where the elderly die: the influence of socio-demographic factors and cause of death on people dying at home. Annals of the Academy of Medicine, Singapore, 2009; 38(8): 676–683. Available from: https://www.ncbi.nlm.nih.gov/pubmed/19736570.
Sarfati, D, Koczwara, B and Jackson, C. The impact of comorbidity on cancer and its treatment. CA: A Cancer Journal for Clinicians, 2016; 66(4): 337–350. DOI: https://doi.org/10.3322/caac.21342
Walraven, CV, Austin, PC, Jennings, A, Quan, H and Forster, AJ. A modification of the Elixhauser comorbidity measures into a point system for hospital death using administrative data. Medical care, 2009; 47(6): 626–633. Available from: http://www.jstor.org/stable/40221931. DOI: https://doi.org/10.1097/MLR.0b013e31819432e5
Quan, H, Li, B, Couris, CM, Fushimi, K, Graham, P, Hider, P, Januel, JM and Sundararajan, V. Updating and validating the Charlson comorbidity index and score for risk adjustment in hospital discharge abstracts using data from 6 countries. American Journal of Epidemiology, 2011; 173(6): 676–82. DOI: https://doi.org/10.1093/aje/kwq433
Sharabiani, MT, Aylin, P and Bottle, A. Systematic review of comorbidity indices for administrative data. Medical Care, 2012; 50(12): 1109–1118. DOI: https://doi.org/10.1097/MLR.0b013e31825f64d0
Yurkovich, M, Avina-Zubieta, JA, Thomas, J, Gorenchtein, M and Lacaille, D. A systematic review identifies valid comorbidity indices derived from administrative health data. Journal of Clinical Epidemiology, 2015; 68(1): 3–14. DOI: https://doi.org/10.1016/j.jclinepi.2014.09.010
Stock, C, Ihle, P, Sieg, A, Schubert, I, Hoffmeister, M and Brenner, H. Adverse events requiring hospitalization within 30 days after outpatient screening and nonscreening colonoscopies. Gastrointestinal Endoscopy, 2013; 77(3): 419–429. DOI: https://doi.org/10.1016/j.gie.2012.10.028
Radovanovic, D, Seifert, B, Urban, P, Eberli, FR, Rickli, H, Bertel, O, Puhan, MA and Erne, P. AMIS Plus Investigators. Validity of Charlson Comorbidity Index in patients hospitalised with acute coronary syndrome. Insights from the nationwide AMIS Plus registry 2002–2012. Heart, 2013; 100(4): 288–294. DOI: https://doi.org/10.1136/heartjnl-2013-304588
Lüchtenborg, M, Jakobsen, E, Krasnik, M, Linklater, KM, Mellemgaard, A and Møller, H. The effect of comorbidity on stage-specific survival in resected non-small cell lung cancer patients. European Journal of Cancer, 2012; 48(18): 3386–3395. DOI: https://doi.org/10.1016/j.ejca.2012.06.012
Singh, B, Singh, A, Ahmed, A, Wilson, GA, Pickering, BW, Herasevich, V, Gajic, O and Li, G. Derivation and validation of automated electronic search strategies to extract Charlson comorbidities from electronic medical records. Mayo Clinic Proceedings, 2012; 87(9): 817–824. DOI: https://doi.org/10.1016/j.mayocp.2012.04.015
Sarfati, D, Gurney, J, Stanley, J, Salmond, C, Crampton, P, Dennett, E, Koea, J and Pearce, N. Cancer-specific administrative data–based comorbidity indices provided valid alternative to Charlson and National Cancer Institute Indices. Journal of Clinical Epidemiology, 2014; 67(5): 586–595. DOI: https://doi.org/10.1016/j.jclinepi.2013.11.012
Sarfati, D. Developing new comorbidity indices for cancer populations using administrative data. 2013; University of Otago: Dunedin. Available from: https://ourarchive.otago.ac.nz/handle/10523/4734.
Lin, DY, Wei, LJ and Ying, Z. Checking the Cox model with cumulative sums of martingale-based residuals. Biometrika, 1993; 80(3): 557–572. DOI: https://doi.org/10.1093/biomet/80.3.557
Ouwens, M, Hulscher, M, Hermens, R, Faber, M, Marres, H, Wollersheim, H and Grol, R. Implementation of integrated care for patients with cancer: a systematic review of interventions and effects. International Journal for Quality in Health Care, 2009; 21(2): 137–144. DOI: https://doi.org/10.1093/intqhc/mzn061
Hong, NJ, Wright, FC, Gagliardi, AR and Paszat, LF. Examining the potential relationship between multidisciplinary cancer care and patient survival: An international literature review. Journal of surgical oncology, 2010; 102(2): 125–134. DOI: https://doi.org/10.1002/jso.21589
Liu, JR, Wu, J, Qin, Q, Jin, GR, Wang, YM and Wei, HE. Exploration and analysis on the effects of the healthcare alliance of Shanxi Provincial People’s Hospital in hierarchical medical system [in Chinese]. Chinese Journal of Medical Management Sciences, 2016; 6(5): 20–23.
Propper, C, Burgess, S and Gossage, D. Competition and quality: evidence from the NHS internal market 1991–9. The Economic Journal, 2008; 118(525): 138–170. DOI: https://doi.org/10.1111/j.1468-0297.2007.02107.x
Kessler, DP and McClellan, MB. Is hospital competition socially wasteful? The Quarterly Journal of Economics, 2000; 115(2): 577–615. DOI: https://doi.org/10.1162/003355300554863
Cooper, Z, Gibbons, S, Jones, S and McGuire, A. Does hospital competition save lives? Evidence from the English NHS patient choice reforms. The Economic Journal, 2011; 121(554): f228–f260. DOI: https://doi.org/10.1111/j.1468-0297.2011.02449.x
Pan, J, Qin, X, Li, Q, Messina, JP and Delamater, PL. Does hospital competition improve health care delivery in China? China Economic Review, 2015; 33: 179–199. DOI: https://doi.org/10.1016/j.chieco.2015.02.002
Overview, An Incredible Journey
From the beginning of electronic communication with the telegraph in 1833 to the present, it has all happened in 181 years. The pace of technological advancement has been accelerating at what seems like an exponential rate. It is hard to believe that when I was born in 1944, TV was just a nascent technology, vinyl records were still 78 RPM, long-distance travel was via train and transatlantic trips were via ship.
The vinyl record had a run of about 100 years, while the CD, first pressed in 1982, has already been largely replaced by solid-state devices and smartphones connected to the Internet. And the Internet has gone from a glimmer in a few scientists' eyes to a trillion-dollar business in a little over 30 years.
Communication speed has gone from the teletype at 75 bits per second (bps) in the 1960s to gigabit-per-second (Gbps) speeds today. When the Internet was born in the late 1960s it ran at 1,200 bps; today the Internet backbone, using fiber optic cable and transmission protocols like SONET (Synchronous Optical Networking) and ATM (Asynchronous Transfer Mode), hits over 400 Gbps. This incredible speed improvement has made possible music and video on demand over the Internet, as well as the telephone and data services that support the financial and business communities around the world.
This phenomenal growth is due to the development of the transistor, integrated circuits and the microcomputer. Without these technologies, packet-switching networks, which are at the heart of the Internet, would not be possible. The development of packet-switching time-division multiplexing required the speed of the microcomputer; the old circuit-switching technology of the telephone companies was simply not viable for a worldwide data communication grid.
The following timeline highlights some of what I feel were the seminal breakthroughs of the last century and a half.
Timeline of Electronic Communications
- 1833 Telegraph: Carl Friedrich Gauss and Wilhelm Weber, Göttingen Germany.
- 1837 Samuel Morse, the telegraph in the USA and Mores Code.
- 1867 American Christopher Latham Sholes develops the first successful modern typewriter.
- 1876 Alexander Graham Bell patents the electric telephone.
- 1877 Thomas Edison patents the phonograph - with a wax cylinder as recording medium.
- 1887 Emile Berliner invents the gramophone - a system of recording which could be used over and over again.
- 1888 George Eastman patents Kodak roll film camera.
- 1894 Guglielmo Marconi improves wireless telegraphy.
- 1902 Guglielmo Marconi transmits radio signals from Cornwall to Newfoundland - the first radio signal across the Atlantic Ocean.
- 1906 Lee De Forest invents the electronic amplifying tube, or triode - this allowed electronic signals to be amplified, improving all electronic communications.
- 1923 The television or iconoscope (cathode-ray tube) invented by Vladimir Kosma Zworykin - first television camera.
- 1939 Scheduled television broadcasts begin.
- 1944 Barton Phillips born April 11.
Computers put into public service - government owned - the age of Information Science begins.
The Colossus at Bletchley Park England was used at the end of World War II to break encrypted German messages. Ten Colossi were in use by the end of the war.
- 1948 Transistor invented at Bell Labs - enabling the miniaturization of electronic devices.
- 1948-1950 Cable TV and subscription TV services.
- 1950-1961 Development of T-1 transmission lines by Bell Labs.
- 1951 Computers are first sold commercially.
- 1952 CERN ("Conseil Européen pour la Recherche Nucléaire" or European Organization for Nuclear Research) founded in Switzerland.
- 1958 Integrated Circuits invented enabling the further miniaturization of electronic devices and computers.
- 1960 Packet switching: initial work by Paul Baran, Donald Davies and Leonard Kleinrock.
- 1961 Host-based email on CTSS systems (Compatible Time-Sharing System, running on big mainframes).
- 1964 Barton Phillips graduates from UCLA and enters the Air Force.
- DARPA (Defense Advanced Research Projects Agency) commissioned a study of decentralized switching systems.
- First demonstration net between MIT's Lincoln Lab and System Development Corporation in California (1200 bits/sec).
- ARPANET (Advanced Research Projects Agency Network) the first Internet started. Backbone running at 50 Kbits/sec.
- Request for Comments (RFC) started
- 1970 Barton Phillips returns to US from the Air Force
- 1972 Ray Tomlinson invented network email and chose the '@' sign for email addresses.
- 1974 TCP/IP (Transmission Control Program/Internet Protocol, RFC 675; Vinton Cerf, Yogen Dalal and Carl Sunshine).
- 1977 April: Barton Phillips purchased the Apple I home computer, also 6502 based. The Apple I had 4 or 8 Kbytes of RAM and Integer Basic in ROM. It also had a cassette tape interface for reading and writing data via a cassette player.
- October: Barton Phillips joins Micropolis Corp., a floppy disk manufacturer. Between 1978 and 1983 Barton wrote the disk OS, Basic interpreter, assembler/linker and editor for the Micropolis products. In 1983 Micropolis stopped marketing its OS.
- 1978 X.25 provided the first international and commercial packet switching network, the "International Packet Switched Service" (IPSS).
- 1979 First cellular phone communication network started in Japan.
- 1980 Tim Berners-Lee at CERN in Switzerland developed ENQUIRE, a hypertext program. He later created HTML (HyperText Markup Language).
- 1981 IBM PC first sold using the Intel 8088.
- SMTP (Simple Mail Transfer Protocol) RFC 821.
- April: Sony Records presses first CD (Compact Disk)
- IMAP (Internet Mail Access Protocol) was designed by Mark Crispin RFC 1064.
- SGML (Standard Generalized Markup Language) ISO 8879:1986.
- 1987 Number of network hosts breaks 10,000
- ADSL (Asymmetric Digital Subscriber Line) patented.
- POP3 RFC 1081 (later superseded by RFC 1939, the current standard)
- HTTP (HyperText Transfer Protocol),
- HTML (HyperText Markup Language), the first server (CERN httpd) and the first browser, all created by Tim Berners-Lee and Robert Cailliau and running on a NeXT computer.
- Nicola Pellow created a browser that could run on almost all computers called the "Line Mode Browser".
- URL of first web site: http://info.cern.ch
- January: first HTTP server outside of CERN was activated.
- Comercial restriction on Internet lifted.
- ANSNet Backbone via T-3 at 45 Mbits/sec.
- April: Erwise browser first graphical browser available for systems other than the NeXT computer.
- Number of network hosts breaks 1,000,000
- WWW (World Wide Web).
January: 50 web servers in the world.
October: 500 web servers in the world.
- Mosaic web browser released by National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign (UIUC), led by Marc Andreessen. Funding for Mosaic came from the "High-Performance Computing and Communications Initiative", a funding program initiated by then Senator Al Gore's "High Performance Computing and Communication Act" of 1991 also known as the Gore Bill.
- June: Cello by Thomas R. Bruce was the first browser for Microsoft Windows.
- August: The NCSA released Mac Mosaic and WinMosaic.
- CIDR (Classless Inter-Domain Routing) blocks introduced to replace Classful network (A, B, C) design.
- WWW (World Wide Web).
- Private sector assumes responsibility for the Internet. Backbone via ATM at 145 Mbits/sec
- April: Netscape founded by Marc Andreessen and James H. Clark. Netscape Navigator born.
- Amazon founded.
- NSFNET backbone service decommissioned.
- HTML 2.0 published as IETF RFC 1866.
- 1996 Cable Internet. Rogers Communications introduced the first cable modem service
January: Google started as a research project by Larry Page and Sergey Brin at Stanford University.
- 1997 HTML 3.2 published as a W3C Recommendation.
- September: Google incorporated.
- HTML 4.0 published as a W3C Recommendation.
- 1999-2001 "Dot Com" Boom, then bust.
- 2000 Apple Computer releases Mac OS X a Unix lookalike operating system.
- 2001 January: Wikipedia launched.
- February 2004: Facebook launched.
- Internet traffic breaks one exabyte per month
- 2005 YouTube launched.
- SONET OC768 40 Gbit/sec optical fiber.
Theoretical Limit to fiber optical cable is one terabit or one trillion bits per second.
- Apple Computer switches from PowerPC processor to Intel thus obsoleting millions of systems in businesses and schools.
- January: HTML5 was published as a Working Draft by the W3C.
- October 23: AT&T announced the completion of upgrades to OC-768 on 80,000 fiber-optic wavelength miles of their IP/MPLS (Multiprotocol Label Switching) backbone network.
- IPv6 deployment starts (Summer Olympic Games via IPv6). Work on IPv6 started in the late 1990's when it became clear the IPv4's 4 billion addresses was not going to be enough.
- 2010 Internet traffic breaks 21 exabytes per month.
- December: W3C designated HTML5 as a Candidate Recommendation.
- NEC Corp. broke an ultra-long haul Internet speed record when it successfully transmitted data at 1.15 terabits/sec over 6,213 miles.
- IPv4 exhaustion imminent (4 billion addresses).
- 2013 The National Security Agency (NSA) is revealed to have secretly collected exabytes (1×10^18 bytes, or 100,000 terabyte disk drives) worth of US and foreign citizens' data.
- 2014 The W3C (World Wide Web Consortium) plans to finalize the HTML 5 standard by July.
- 2016 It is estimated that Internet traffic will reach 1.3 zettabytes per year. About 3.4 billion Internet users.
- 2017 Cisco Systems estimates that by 2020 Internet traffic will reach 2.3 ZB per year.
Transmission Speed Timeline
- Mid-1960: Early ARPANET 1200-2400 bits/sec
- 1970's: ARPANET 50 Kbits/sec.
- Mid-1980's: LAN (Local Area Network: Ethernet, Token Ring) 10 Mbits/sec.
WAN (Wide Area Network: modems, T-1) 300-2400 bits/sec to 1.5 Mbits/sec.
- 1990's: WAN (T-1, ADSL, T-3, ATM) 1.5 Mbits/sec to 145 Mbits/sec. ADSL: downstream: 200-400 Mbits/sec, upstream: 384 Kbits/sec to 20 Mbits/sec.
- 2000's: WAN (SONET-OC-192) 10 Gbits/sec.
- 201x: WAN (SONET-OC-768) 40 Gbits/sec.
In 2000 there were just under 150 million dial-up subscriptions in the 34 OECD (Organisation for Economic Co-operation and Development) countries and fewer than 20 million broadband subscriptions.
By 2004, broadband had grown and dial-up had declined so that the number of subscriptions were roughly equal at 130 million each.
In 2010, in the OECD countries, over 90% of the Internet access subscriptions used broadband, broadband had grown to more than 300 million subscriptions, and dial-up subscriptions had declined to fewer than 30 million.
Making the Connections
The ARPANET, predecessor to the Internet, started with an inspiring vision of a "galactic" network, practical theory about packet switching, and a suite of standardized protocols. But none of this would have mattered if there hadn't also been a way to make and maintain connections.
In 1966-67 Lincoln Labs in Lexington, Massachusetts, and SDC (System Development Corporation) in Santa Monica, California, got a grant from the DOD to begin research on linking computers across the continent. Larry Roberts, describing this work, explains:
"Convinced that it was a worthwhile goal, we set up a test network to see where the problems would be. Since computer time-sharing experiments at MIT and Dartmouth had demonstrated that it was possible to link different computer users to a single computer, the cross country experiment built on this advance."
(i.e. Once timesharing was possible, the linking of remote computers was also possible.) Roberts reports that there was no trouble linking dissimilar computers. The problems, he claims, were with the telephone lines across the continent, i.e. that the throughput was inadequate to accomplish their goals.
The first ARPANET link was established between the University of California, Los Angeles (UCLA) and the Stanford Research Institute at 22:30 hours on October 29, 1969
Packet switching resolved many of the issues identified during the pre-ARPANET, time-sharing experiments. But higher-speed phone circuits also helped. The first wide area network (WAN) demonstrated in 1965 between computers at MIT's Lincoln Lab, ARPA's facilities, and the System Development Corporation in California utilized dedicated 1200 bps circuits. Four years later, when the ARPANET began operating, 50 Kbps circuits were used. But it wasn't until 1984 that ARPANET traffic levels were such that it became more cost-effective to lease T1 lines (1.5 Mbps) than to continue using multiple 50 Kbps lines.
In the late 1960's and early 1970's there were a number of separate nascent networks developed by States, Universities, and governments: NPL, Merit Network, CYCLADES, X.25. The problem with all these different networks was that they all "spoke" different languages/protocols thus internetworking was difficult if not impossible.
In 1973 Vinton Cerf, the developer of the existing ARPANET Network Control Program (NCP) protocol, joined Robert E. Kahn to work on open-architecture interconnection models with the goal of designing the next protocol generation for the ARPANET. This was TCP/IP.
By the summer of 1973 Kahn and Cerf had worked out a fundamental reformulation in which the differences between network protocols were hidden by using a common internetwork protocol, and instead of the network being responsible for reliability, as in the ARPANET, the hosts became responsible. Cerf credits Hubert Zimmermann and Louis Pouzin, designer of the CYCLADES network, with important influences on this design.
Circuit Switching vs. Packet Switching:
Circuit switching is a method which sets up a limited number of dedicated connections of constant bit rate and constant delay between nodes for exclusive use during the communication session. It is a methodology of implementing a telecommunications network in which two network nodes establish a dedicated communications channel (circuit) through the network before the nodes may communicate. The circuit guarantees the full bandwidth of the channel and remains connected for the duration of the communication session. The circuit functions as if the nodes were physically connected as with an electrical circuit.
Packet switching divides the data to be transmitted into packets transmitted through the network independently. In packet switching, instead of being dedicated to one communication session at a time, network links are shared by packets from multiple competing communication sessions, resulting in the loss of the quality of service guarantees that are provided by circuit switching.
Packet switching also imposes overhead because each packet must carry information that delimits it. In TCP/IP the IP header comes first; it is used to direct the packets from source to destination and to identify the type of service being provided. The IP header is like a letter's envelope, which carries an address and a return address. The TCP header comes after the IP header and contains information about the transmission, including the endpoint ports, sequencing information and the data. The data may (and usually does) have further headers that describe the specific service, for example HTTP, IMAP, POP3 or FTP.
When a web page is transmitted from the server to the client there are usually many TCP/IP packets of data involved. These packets that represent the web page may take different routes to get to their final destination and may in fact arrive at the destination out of order. It is the information in the TCP header that allows the client (destination) to reassemble the web page from the many packets correctly.
A good analogy is the Post Office. If we were going to send a large manuscript in chapters as they were completed we would put the manuscript chapters into envelopes and address the envelopes with the destination address and the return address. We would also include information in the envelope describing the sequence of the chapters. The envelope is the IP header and the information inside the envelope is the TCP header and data. We need the TCP type of information in the envelope because as we all know letters can be received out of sequence and therefore we need some information to let us know how to reassemble the manuscript.
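The sequencing-and-reassembly idea is easy to demonstrate in a few lines of Python; this is only a toy illustration of the concept, not the real TCP state machine, and the eight-character packet size is an arbitrary choice.

```python
# Toy packetization: split a message into numbered "packets", deliver them out of
# order, and let the receiver put them back together by sequence number.
import random

def packetize(message: str, size: int = 8):
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(packets):
    return "".join(data for _, data in sorted(packets))  # sort by sequence number

packets = packetize("This web page travels as many small packets.")
random.shuffle(packets)  # packets may arrive in any order
assert reassemble(packets) == "This web page travels as many small packets."
```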
Bits Bytes Binary Hex
Modern digital computers think in binary: ones and zeros. We have used the phrase "bits per second" (bps) a lot in this history. In communication a bit is a one or a zero. Depending on the encoding scheme used on the communication channel, eight bits may represent a character, or byte; I say "may" because some encoding schemes use more than eight bits per character in order to cope with transmission errors. So when we say that an early teletype ran at 75 bits per second (bps), that means it typed roughly nine characters a second.
A byte is eight binary digits, and half a byte (four bits) is called a nibble. For example, the number seven decimal is 0111 in binary and fits in a single nibble. The decimal number fifteen (15) is 1111 in binary, the largest value a nibble can hold. The decimal number sixteen is 0001,0000 in binary: it needs a fifth bit, so it spills into a second nibble, while fifteen takes only one. As I said in the previous paragraph, we usually think of a byte as being eight bits.
So what is HEX? HEX stands for hexadecimal, a number system with a base of sixteen. The sixteen hexadecimal digits are 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F, and the number sixteen decimal is 10 HEX. Computer engineers use HEX because it maps cleanly onto binary. For example, the number 47 decimal is 0010,1111 in binary and 2F in HEX. Binary is powers of two, so the eight bit positions shown are (128)(64)(32)(16),(8)(4)(2)(1); 0010,1111 is then 32+8+4+2+1, or 47 decimal. Do you see the relation between binary and HEX? The HEX number 2F represents the two nibbles 0010 (2) and 1111 (F): each HEX digit stands for one four-bit nibble, so two HEX digits describe one byte. As you can see, binary converts easily into HEX but not so easily into decimal. Other number bases that have been popular are base 8 (octal) and, to a much lesser extent, base 12 (duodecimal).
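Python's built-in conversions make the relationship between decimal, binary and HEX easy to check:

```python
# Decimal 47 in binary and HEX, and back again.
n = 47
print(bin(n))             # 0b101111 -> bits 32+8+4+2+1
print(hex(n))             # 0x2f     -> each HEX digit covers one four-bit nibble
print(int("101111", 2))   # 47
print(int("2F", 16))      # 47
print(0x2F == 0b0010_1111 == 47)  # True: two HEX digits describe one 8-bit byte
```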
From Tim Berners-Lee's first message (web page):
"The World Wide Web (WWW) project aims to allow all links to be made to any information anywhere. [...] The WWW project was started to allow high energy physicists to share data, news, and documentation. We are very interested in spreading the web to other areas, and having gateway servers for other data. Collaborators welcome!"
|6502 CPU||KIM 1||Apple I|
|8080 CPU||S100 Board||IMSAI 8080|
An access control list (ACL) is a list of permissions attached to an object: it specifies which users or system processes are granted access to the object and which operations they are allowed to perform on it. Access control, sometimes called authorization, is how a system grants access to content and functions to some users and not others, and discretionary access control (DAC) grants or restricts access according to a policy set by the object's owner. An entry in an ACL is an access control entry (ACE); each ACE controls or monitors access to the object by a specified user or group, and the size of an ACL varies with the number and size of its ACEs. On Windows, an object carries two lists: the discretionary access control list (DACL), which holds all user accounts and groups that have been granted access to the file or folder along with the type of access they were given, and the system access control list (SACL), which lists the security principals whose access attempts should trigger audit events. A DACL on an NTFS folder, for example, can include an ACE that allows a group of users to list the folder's contents. If a user belongs to a group that has access rights to an object but the user is explicitly denied those rights, the denial takes precedence. Windows ACLs can also be read programmatically, for example from .NET or PowerShell, to report which principals have access to which objects.

In networking, ACLs act as packet filters based on the criteria defined in the access list: they control which packets or routing updates are permitted or denied into or out of a network, and they are used in almost every security device, most often on firewall routers that decide how traffic may flow. On Cisco devices, a standard access list is created with a number in the range 1-99 or 1300-1999 (the expanded range) and permits or denies packets based only on the source IP address, so it should be placed near the destination of the traffic. Extended ACLs have more options for classifying data and controlling traffic flow, and named ACLs can also be created; unlike numbered ACLs, named ACLs can be edited after the fact. Access list statements are evaluated from top to bottom, the first matching statement wins, and any new entry is added at the bottom of the list (this first-match behaviour is sketched in the short example at the end of this section). The no access-list command deletes all ACLs configured on the router, while no access-list followed by the ACL number removes only that specific ACL. A standard ACL such as "access-list 10 permit" followed by the source network and wildcard mask is then applied to an interface, for example FastEthernet 0/0, and it is also good practice to restrict access to the router VTY lines with a standard ACL. MAC ACLs permit or deny traffic based on MAC addresses (a MAC address being a unique hardware identifier), and a Layer 2 interface can have both an IP access list and a MAC access list applied at the same time; router ACLs (RACLs) and VLAN ACLs (VACLs) are the best-known variants. Many products expose the same mechanism: the Citrix NetScaler SDX Management Service lets you configure an ACL to limit and control access to the appliance, Aruba Instant APs use ACL rules to permit or deny data packets passing through the AP, Infoblox NIOS attaches access control to its operations, the Intel AMT ACL manages who has access to which capabilities of the device, and web application firewalls use universal ACLs to whitelist or blacklist access to protected web and API domains. Such filtering matters all the more for wireless networks, which are much more vulnerable than their wired equivalents.

On file systems, ACLs extend the standard UNIX owner/group/other permission model in a POSIX.1e-compatible way, offering an extra granularity of permission control: fine-grained rights can be granted to individual users and groups on files and directories, and role-based access control (RBAC) is a related mechanism that can implement either discretionary or mandatory policies. ACLs are a feature of the Linux kernel and are currently supported by ReiserFS, Ext2, Ext3, JFS, and XFS; the Red Hat Enterprise Linux 5 kernel provides ACL support for the ext3 file system and for NFS-exported file systems. Permissions are set with the setfacl utility, and many GUI tools are also available for editing access control lists. An access ACL is the access control list of a specific file or directory, while a default ACL set on a directory is inherited by the files created inside it. On Mac OS X, "ls -le foo" lists the ACL entries attached to the file foo. Cloud object stores use the same idea: with "gsutil acl" you can set ACLs on Google Cloud Storage objects, where each entry combines an entity and a name that define who the permission applies to, and an access value that defines the permission being granted.

ACLs also appear at the application layer. Oracle network ACLs determine which database users may reach which hosts and ports from inside the database; the view DBA_NETWORK_ACLS lists them, and if a query such as "SELECT host, lower_port, upper_port, acl FROM dba_network_acls" returns no rows, no network ACL has been configured, which is why a user such as SCOTT is not able to access the network endpoint it is trying to reach. The IMAP ACL extension applies the same model to mailboxes (the current specification is a revision of RFC 2086). Wikis can attach ACLs to individual pages, which requires either access to the wiki configuration for global ACLs or admin rights on the specific page being protected. Web frameworks such as Laravel 5 provide ACL-style authorization, Documentum Content Server uses ACLs as the security mechanism for its repository objects, directory-integrated systems assign the privileges of an ACL to the members of an associated LDAP group, and rule-based platforms search for the access control rules that match the requested object and operation whenever a session requests data. More broadly, logical access control is enforced through ACLs, group policies, passwords, and account restrictions, while physical access control governs the validity and verification of individuals entering or already within a facility, such as a control room.
In software, an ACL, is a list of permissions granted to subjects on an object, where the subject might be Bob or Alice and the object might be the vacation calendar. ACLs allows to assign different permissions for different users and groups. ACLs are the default representation of. Access control list modules. They are specifically used by network administrators to filter traffic and to provide extra security for the network. Access control list and mandatory access control, mandatory integrity control, we don't see this very much inside of Windows. To configure DNS access control: From the Data Management tab, select the DNS tab, expand the Toolbar and click Grid DNS Properties. As background, some of you may remember the AzureSMR package, which was written a few years back as an R interface to Azure. Role Based Access Control is a model in which roles are created for various job functions and permissions to perform operations are then tied to them. I would still read and find more information about ACLs on…. For example, the DACL (Discretionary Access Control List) on a Folder object in NTFS can include a generic ACE that allows a group of users to list the folder's contents. >A lot of this depends on why he is wants the control list. From the Advanced Wi-Fi Settings tab, click Set Up Access List (lower-right corner). Access Control Lists (ACLs)¶ Normally to create, read and modify containers and objects, you must have the appropriate roles on the project associated with the account, i. The Wireless ACL Enhancement feature works in tandem with the wireless MAC Filter List currently available on SonicOS. This is typically carried out by assigning employees, executives, freelancers, and vendors to different types of groups or access levels. Access control lists (ACLs) in a nutshell Most modern firewalls and routers come equipped with ACLs, and ACLs can configure other devices in a typical enterprise network such as servers. ACLs includes a list of Access Control Entries (ACEs) that defines who can access that specific object and enable auditing for the object accesses. Generally when ACL abbreviation is used it means RACL. Refer to Access Control Scopes for a list of supported values for Name. A study found that 68% of women would use birth control if it were available via a pharmacist and 63% agreed the pharmacist consultation was an important step. Security access control is the act of ensuring that an authenticated user accesses only what they are authorized to and no more. Most often Access Control Lists are used for security reasons to filter traffic. (A little off-topic, since you are looking for an equivalent of a unix command, downloading and installing Cgygwin might be something. However, in their simplicity. ENT-AN1112 Configuring the Network Access Server and Access Control List ENT-AN1112 Configuring the Network Access Server and Access Control List Products Applications Design Support Order Now About. 1BestCsharp blog 7,685,898 views. You can configure access control lists (ACLs) for all routed network protocols (IP, AppleTalk, and so on) to filter protocol packets when these packets pass through a device. Understanding Access Control Lists is an important role for moving up into the CCNA area. In a nutshell, your Access Control List is the collection of all your permission entities. Health systems in low income countries with a strong primary care orientation tend to be more pro-poor, equitable and accessible. Configure MAC Access Control. 
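As a minimal illustration of filesystem ACLs on Linux (the user name "alice" and the file name report.txt are placeholders):

# grant the user "alice" read/write access in addition to the normal owner/group/other bits
setfacl -m u:alice:rw report.txt

# inspect the resulting access control list
getfacl report.txt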
Network ACLs filter traffic and are usually implemented on routers and firewalls, which decide whether each packet is allowed to flow. Each rule permits or denies packets that match a given set of criteria, and a list is applied to traffic entering or leaving a particular interface. Variations include router ACLs (RACLs), which filter routed, layer 3 traffic; VLAN ACLs (VACLs), which filter traffic within a VLAN; and MAC ACLs, which permit or deny frames based on MAC addresses, the unique hardware identifiers carried by almost every networked device, including computers, switches, access points, smartphones and storage systems. A Layer 2 interface can have both an IP access list and a MAC access list applied at the same time. Wireless access points often expose MAC-based filtering as an "allowed devices" list; this is simple to configure, but access control based on MAC addresses does not add much real security, because MAC addresses can be observed and spoofed.

On Cisco devices, standard ACLs permit or deny packets based only on the source IP address, while extended ACLs can also match the destination address, the protocol and the port. An ACL is identified by a number or a name; named ACLs can be edited entry by entry, whereas in numbered ACLs a new entry is simply appended at the bottom of the list. Access list statements are evaluated from top to bottom, the first match wins, and an implicit deny blocks all traffic that matches no statement, so a useful list needs at least one permit statement. Only one access list can be applied per protocol, per interface, per direction. Take care when removing lists: the no access-list command deletes all ACLs configured on the router, while no access-list followed by a specific number removes only that ACL.
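As a minimal sketch of the Cisco IOS syntax described above (the addresses, port and interface name are placeholders):

! standard ACL: matches on source address only
access-list 10 permit 192.168.1.0 0.0.0.255

! extended ACL: matches protocol, source, destination and port (here, web traffic)
access-list 110 permit tcp 192.168.1.0 0.0.0.255 any eq 80
access-list 110 deny ip any any

! apply the extended ACL to inbound traffic on an interface
interface FastEthernet0/0
 ip access-group 110 in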
<urn:uuid:2064caeb-f086-4d6e-9838-7de14255ddef> | So What Is ‘Unix,’ Anyway?
Unix, most people would say, is an operating system written decades ago at AT&T’s Bell Labs, and its descendants. Today’s major versions of Unix branched off a tree with two trunks: one emanating directly from AT&T and one from AT&T via the University of California, Berkeley. The stoutest branches today are AIX from IBM, HP-UX from Hewlett-Packard and Solaris from Sun Microsystems.
However, The Open Group, which owns the Unix trademark, defines Unix as any operating system it has certified as conforming to the Single Unix Specification (SUS). This includes operating systems that are usually not thought of as Unix, such as Mac OS X Leopard (which descended from BSD Unix) and IBM’s z/OS (which descended from the mainframe operating system MVS), because they conform to the SUS and support SUS APIs. The basic idea is that it is Unix if it acts like Unix, regardless of the underlying code.
A still broader definition of Unix would include Unix-like operating systems—sometimes called Unix “clones” or “look-alikes”—that copied many ideas from Unix but didn’t directly incorporate code from Unix. The leading one of these is Linux.
Finally, although it’s reasonable to call Unix an “operating system,” as a practical matter it is more. In addition to an OS kernel, Unix implementations typically include utilities such as command-line editors, APIs, development environments, libraries and documentation.
The future of Unix
A recent poll by Gartner Inc. suggests that the continued lack of complete portability across competing versions of Unix, as well as the cost advantage of Linux and Windows on x86 commodity processors, will prompt IT organizations to migrate away from Unix.
“The results reaffirm continued enthusiasm for Linux as a host server platform, with Windows similarly growing and Unix set for a long, but gradual, decline,” says the poll report, published in February.
“Unix has had a long and lively past, and while it’s not going away, it will increasingly be under pressure,” says Gartner analyst George Weiss. “Linux is the strategic ‘Unix’ of choice.” Although Linux doesn’t have the long legacy of development, tuning and stress-testing that Unix has seen, it is approaching and will soon equal Unix in performance, reliability and scalability, he says.
But a recent Computerworld survey suggests that any migration away from Unix won’t happen quickly. In the survey of 211 IT managers, 90% of the 130 respondents who identified themselves as Unix users said their companies were “very or extremely reliant” on Unix. Slightly more than half said that “Unix is an essential platform for us and will remain so indefinitely,” and just 12% agreed with the statement “We expect to migrate away from Unix in the future.” Cost savings, primarily via server consolidation, was cited as the No. 1 reason for migrating away.
Weiss says the migration to commodity x86 processors will accelerate because of the hardware cost advantages. “Horizontal, scalable architectures; clustering; cloud computing; virtualization on x86 — when you combine all those trends, the operating system of choice is around Linux and Windows,” he says.
“For example,” Weiss continues, “in the recent Cisco Systems Inc. announcement for its Unified Computing architecture, you have this networking, storage, compute and memory linkage in a fabric, and you don’t need Unix. You can run Linux or Windows on x86. So, Intel is winning the war on behalf of Linux over Unix.”
The Open Group concedes little to Linux and calls Unix the system of choice for “the high end of features, scalability and performance for mission-critical applications.” Linux, it says, tends to be the standard for smaller, less critical applications.
AT&T’s Korn is among those still bullish on Unix. Korn says a strength of Unix over the years, starting in 1973 with the addition of pipes, is that it can easily be broken into pieces and distributed. That will carry Unix forward, he says: “The [pipelining] philosophy works well in cloud computing, where you build small, reusable pieces instead of one big monolithic application.”
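A minimal shell sketch of that philosophy is the classic word-frequency pipeline (words.txt is a hypothetical input file): each small tool does one job and hands its output to the next, which is exactly the kind of composition Korn describes.

tr -cs 'A-Za-z' '\n' < words.txt | tr 'A-Z' 'a-z' | sort | uniq -c | sort -rn | head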
Regardless of the ultimate fate of Unix, the operating system born at Bell Labs 40 years ago has established a legacy that’s likely to endure for decades more. It can claim parentage of a long list of popular software, including the Unix offerings of IBM, HP and Sun, Apple Inc.’s Mac OS X and Linux. It has also influenced systems with few direct roots in Unix, such as Microsoft’s Windows NT and the IBM and Microsoft versions of DOS.
Unix enabled a number of start-ups to succeed by giving them a low-cost platform to build on. It was a core building block for the Internet and is at the heart of telecommunications systems today. It spawned a number of important architectural ideas, such as pipelining, and the Unix derivative Mach contributed enormously to scientific, distributed and multiprocessor computing.
The ACM may have said it best in its 1983 Turing Award citation in honor of Thompson and Ritchie’s Unix work: “The genius of the Unix system is its framework, which enables programmers to stand on the work of others.” | 1 | 4 |
<urn:uuid:38cd26c1-a1ee-4387-8d47-67980beb2c55> | If you want to master Excel keyboard shortcuts on a Mac, you need to take a moment to understand how the Mac keyboard is arranged, and how it can be configured through system preferences. This is especially important with Excel, which uses a number of function keys for shortcuts.
Modern Mac computers using an Apple keyboard have icons printed on some of the keys on the top row of the keyboard. These keys (F1 - F12) are called function keys.
On a Mac, function keys can be used in two ways:
(1) to perform special actions that correspond to the icon printed on the key, such as dimming or brightening the screen, showing the Dashboard, increasing or decreasing speaker volume, and so on.
(2) as standard function keys. In this case, the action performed will vary depending on (a) the application you are currently using or (b) the keyboard shortcuts listed in the Keyboard & Mouse pane of System Preferences.
The default behavior of Mac function keys is to perform the action indicated by the icon printed on the key. For example, the function key F10 has a small picture of a speaker, and pressing this key mutes and un-mutes the system volume.
If you want to instead use F1 - F12 as standard function keys, hold the Fn key while pressing the function key. For example, Fn-F10 will perform the action assigned to the F10 key instead of toggling mute on or off.
Changing default behavior
A setting in System Preferences, in the Keyboard pane, controls default behavior for function keys.
The setting is a checkbox labeled "Use all F1, F2, etc. keys as standard function keys". When unchecked, function keys will perform as described in #1 above.
If you check the checkbox, F1 - F12 will behave as standard function keys, and you will need to press Fn in order to perform the actions indicated by the special icons.
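If you prefer the command line, recent versions of OS X have commonly stored this checkbox as a global preference. The key name below is an assumption based on that convention, so verify it on your own system; you may need to log out and back in for it to take effect.

# treat F1 - F12 as standard function keys (true = checked, false = unchecked)
defaults write -g com.apple.keyboard.fnState -bool true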
In general, unless you are using Excel all day on a Mac, you will probably find it more convenient to leave the default behavior alone and learn to use the Fn key for certain shortcuts in Excel. This is because it's nice to be able to access the Mac dashboard, brightness, and volume without having to press Fn at the same time.
Changing keyboard shortcuts
You can change the keyboard shortcuts that are assigned to function keys in the Keyboard Shortcuts pane. For example, you could un-assign F9 from Mission Control so that F9 can be available in other applications. | 1 | 6 |
<urn:uuid:ad2e57ec-799f-4a22-9abd-da5c7d9f1501> | From Wikipedia, the free encyclopedia
A wireless local area network (WLAN) is a wireless computer network that links two or more devices using wireless communication within a limited area such as a home, school, computer laboratory, or office building. This gives users the ability to move around within a local coverage area and yet still be connected to the network. Through a gateway, a WLAN can also provide a connection to the wider Internet.
Most modern WLANs are based on IEEE 802.11 standards and are marketed under the Wi-Fi brand name.
Wireless LANs have become popular for use in the home, due to their ease of installation and use. They are also popular in commercial properties that offer wireless access to their customers.
This notebook computer is connected to a wireless access point using a PC card wireless card.
An example of a Wi-Fi network
Norman Abramson, a professor at the University of Hawaii, developed the world’s first wireless computer communication network, ALOHAnet. The system became operational in 1971 and included seven computers deployed over four islands to communicate with the central computer on the Oahu island without using phone lines.
An embedded RouterBoard 112 with U.FL-RSMA pigtail and R52 mini PCI Wi-Fi card widely used by wireless Internet service providers
54 Mbit/s WLAN PCI Card (802.11g)
Wireless LAN hardware initially cost so much that it was only used as an alternative to cabled LAN in places where cabling was difficult or impossible. Early development included industry-specific solutions and proprietary protocols, but at the end of the 1990s these were replaced by standards, primarily the various versions of IEEE 802.11 (in products using the Wi-Fi brand name). Beginning in 1991, a European alternative known as HiperLAN/1 was pursued by the European Telecommunications Standards Institute (ETSI) with a first version approved in 1996. This was followed by a HiperLAN/2 functional specification with ATM influences accomplished February 2000. Neither European standard achieved the commercial success of 802.11, although much of the work on HiperLAN/2 has survived in the physical specification (PHY) for IEEE 802.11a, which is nearly identical to the PHY of HiperLAN/2.
In 2009 802.11n was added to 802.11. It operates in both the 2.4 GHz and 5 GHz bands at a maximum data transfer rate of 600 Mbit/s. Most newer routers are able to utilise both wireless bands, known as dualband. This allows data communications to avoid the crowded 2.4 GHz band, which is also shared with Bluetooth devices and microwave ovens. The 5 GHz band is also wider than the 2.4 GHz band, with more channels, which permits a greater number of devices to share the space. Not all channels are available in all regions.
A HomeRF group formed in 1997 to promote a technology aimed for residential use, but it disbanded at the end of 2002.
All components that can connect into a wireless medium in a network are referred to as stations (STA). All stations are equipped with wireless network interface controllers (WNICs). Wireless stations fall into two categories: wireless access points, and clients. Access points (APs), normally wireless routers, are base stations for the wireless network. They transmit and receive radio frequencies for wireless-enabled devices to communicate with. Wireless clients can be mobile devices such as laptops, personal digital assistants, IP phones and other smartphones, or non-portable devices such as desktop computers and workstations that are equipped with a wireless network interface.
Basic Service Set
The basic service set (BSS) is the set of all stations that can communicate with each other at the PHY layer. Every BSS has an identification (ID) called the BSSID, which is the MAC address of the access point servicing the BSS.
There are two types of BSS: Independent BSS (also referred to as IBSS), and infrastructure BSS. An independent BSS (IBSS) is an ad hoc network that contains no access points, which means they cannot connect to any other basic service set.
Extended Service Set
An extended service set (ESS) is a set of connected BSSs. Access points in an ESS are connected by a distribution system. Each ESS has an ID called the SSID which is a 32-byte (maximum) character string.
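For example, on a Linux client the SSIDs and BSSIDs of nearby networks can typically be listed with the iw utility (wlan0 is an assumed interface name; each BSS line in the output carries the access point's MAC address, i.e. the BSSID, and each SSID line carries the network name):

$ sudo iw dev wlan0 scan | grep -E '^BSS|SSID:'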
A distribution system (DS) connects access points in an extended service set. The concept of a DS can be used to increase network coverage through roaming between cells.
DS can be wired or wireless. Current wireless distribution systems are mostly based on WDS or MESH protocols, though other systems are in use.
Types of Wireless LANs
The IEEE 802.11 has two basic modes of operation: infrastructure and ad hoc mode. In ad hoc mode, mobile units transmit directly peer-to-peer. In infrastructure mode, mobile units communicate through an access point that serves as a bridge to other networks (such as Internet or LAN).
Since wireless communication uses a more open medium for communication in comparison to wired LANs, the 802.11 designers also included encryption mechanisms: Wired Equivalent Privacy (WEP, now insecure), Wi-Fi Protected Access (WPA, WPA2), to secure wireless computer networks. Many access points will also offer Wi-Fi Protected Setup, a quick (but now insecure) method of joining a new device to an encrypted network.
Most Wi-Fi networks are deployed in infrastructure mode.
In infrastructure mode, a base station acts as a wireless access point hub, and nodes communicate through the hub. The hub usually, but not always, has a wired or fiber network connection, and may have permanent wireless connections to other nodes.
Wireless access points are usually fixed, and provide service to their client nodes within range.
Wireless clients, such as laptops, smartphones etc. connect to the access point to join the network.
Sometimes a network will have multiple access points with the same ‘SSID’ and security arrangement. In that case, connecting to any access point on that network joins the client to the network, and the client software will try to choose the access point that gives the best service, such as the access point with the strongest signal.
Peer-to-Peer or ad hoc wireless LAN
An ad hoc network (not the same as a WiFi Direct network) is a network where stations communicate only peer to peer (P2P). There is no base and no one gives permission to talk. This is accomplished using the Independent Basic Service Set (IBSS).
A WiFi Direct network is another type of network where stations communicate peer to peer.
In a Wi-Fi P2P group, the group owner operates as an access point and all other devices are clients. There are two main methods to establish a group owner in the Wi-Fi Direct group. In one approach, the user sets up a P2P group owner manually. This method is also known as Autonomous Group Owner (autonomous GO). In the second method, also called negotiation-based group creation, two devices compete based on the group owner intent value. The device with higher intent value becomes a group owner and the second device becomes a client. Group owner intent value can depend on whether the wireless device performs a cross-connection between an infrastructure WLAN service and a P2P group, remaining power in the wireless device, whether the wireless device is already a group owner in another group and/or a received signal strength of the first wireless device.
A peer-to-peer network allows wireless devices to directly communicate with each other. Wireless devices within range of each other can discover and communicate directly without involving central access points. This method is typically used by two computers so that they can connect to each other to form a network. This can basically occur in devices within a closed range.
If a signal strength meter is used in this situation, it may not read the strength accurately and can be misleading, because it registers the strength of the strongest signal, which may come from the closest computer.
Hidden node problem: Devices A and C are both communicating with B, but are unaware of each other
IEEE 802.11 defines the physical layer (PHY) and MAC (Media Access Control) layers based on CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance). The 802.11 specification includes provisions designed to minimize collisions, because two mobile units may both be in range of a common access point, but out of range of each other.
A bridge can be used to connect networks, typically of different types. A wireless Ethernet bridge allows the connection of devices on a wired Ethernet network to a wireless network. The bridge acts as the connection point to the Wireless LAN.
Wireless Distribution System
A Wireless Distribution System enables the wireless interconnection of access points in an IEEE 802.11 network. It allows a wireless network to be expanded using multiple access points without the need for a wired backbone to link them, as is traditionally required. The notable advantage of DS over other solutions is that it preserves the MAC addresses of client packets across links between access points.
An access point can be either a main, relay or remote base station. A main base station is typically connected to the wired Ethernet. A relay base station relays data between remote base stations, wireless clients or other relay stations to either a main or another relay base station. A remote base station accepts connections from wireless clients and passes them to relay or main stations. Connections between “clients” are made using MAC addresses rather than by specifying IP assignments.
All base stations in a Wireless Distribution System must be configured to use the same radio channel, and share WEP keys or WPA keys if they are used. They can be configured to different service set identifiers. WDS also requires that every base station be configured to forward to others in the system as mentioned above.
WDS may also be referred to as repeater mode because it appears to bridge and accept wireless clients at the same time (unlike traditional bridging). It should be noted, however, that throughput in this method is halved for all clients connected wirelessly.
When it is difficult to connect all of the access points in a network by wires, it is also possible to put up access points as repeaters.
Roaming among Wireless Local Area Networks
There are two definitions for wireless LAN roaming:
- Internal Roaming: The Mobile Station (MS) moves from one access point (AP) to another AP within a home network if the signal strength is too weak. An authentication server (RADIUS) performs the re-authentication of MS via 802.1x (e.g. with PEAP). The billing of QoS is in the home network. A Mobile Station roaming from one access point to another often interrupts the flow of data among the Mobile Station and an application connected to the network. The Mobile Station, for instance, periodically monitors the presence of alternative access points (ones that will provide a better connection). At some point, based on proprietary mechanisms, the Mobile Station decides to re-associate with an access point having a stronger wireless signal. The Mobile Station, however, may lose a connection with an access point before associating with another access point. In order to provide reliable connections with applications, the Mobile Station must generally include software that provides session persistence.
- External Roaming: The MS (client) moves into a WLAN of another Wireless Internet Service Provider (WISP) and takes their services (Hotspot). The user can independently of his home network use another foreign network, if this is open for visitors. There must be special authentication and billing systems for mobile services in a foreign network.
Wireless LANs have a wide range of applications. Modern implementations of WLANs range from small in-home networks to large, campus-sized ones to completely mobile networks on airplanes and trains.
Users can access the Internet from WLAN hotspots in restaurants, hotels, and now with portable devices that connect to 3G or 4G networks. Oftentimes these types of public access points require no registration or password to join the network. Others can be accessed once registration has occurred and/or a fee is paid.
Existing Wireless LAN infrastructures can also be used to work as indoor positioning systems with no modification to the existing hardware.
Performance and Throughput
WLANs come in several layer 2 variants of IEEE 802.11, each with different characteristics. Across all flavours of 802.11, maximum achievable throughputs are quoted either from measurements under ideal conditions or from the layer 2 data rates. This, however, does not apply to typical deployments, in which data are transferred between two endpoints of which at least one is typically connected to a wired infrastructure and the other is connected to that infrastructure via a wireless link.
Graphical representation of Wi-Fi application specific (UDP) performance envelope 2.4 GHz band, with 802.11g
Graphical representation of Wi-Fi application specific (UDP) performance envelope 2.4 GHz band, with 802.11n with 40 MHz
This means that typically data frames pass an 802.11 (WLAN) medium and are being converted to 802.3 (Ethernet) or vice versa.
Due to the difference in the frame (header) lengths of these two media, the packet size of an application determines the speed of the data transfer. This means that an application which uses small packets (e.g. VoIP) creates a data flow with a high overhead traffic (e.g. a low goodput).
Other factors which contribute to the overall application data rate are the speed with which the application transmits the packets (i.e. the data rate) and the energy with which the wireless signal is received.
The latter is determined by distance and by the configured output power of the communicating devices.
The attached throughput graphs show measurements of UDP throughput. Each data point represents the average UDP throughput of 25 measurements (error bars are present but barely visible due to the small variation). Each is measured with a specific packet size (small or large) and a specific data rate (10 kbit/s – 100 Mbit/s), and markers for the traffic profiles of common applications are included as well. These measurements do not cover packet errors. The table below shows the maximum achievable (application-specific) UDP throughput in the same scenarios with the various WLAN (802.11) flavours; the measurement hosts were 25 meters apart from each other, and loss is again ignored.
<urn:uuid:c4310ad9-124c-4c91-9950-396b2dde81a4> | Install Ruby on Rails on Ubuntu Linux
Installing Ruby on Rails 4.0 on Ubuntu Linux. Up-to-date, detailed instructions on how to install Rails newest release. 4.0 is the newest version of Rails. This in-depth installation guide is used by developers to configure their working environment for real-world Rails development. This guide doesn't cover installation of Ruby on Rails for a production server.
To develop with Rails on Ubuntu, you’ll need Ruby (an interpreter for the Ruby programming language) plus gems (software libraries) containing the Rails web application development framework.
Updating Rails Applications
See the article Updating Rails if you already have Rails installed.
For an overview of what’s changed in each Rails release, see a Ruby on Rails Release History.
Ruby on Rails on Ubuntu
Ubuntu is a popular platform for Rails development, as are other Unix-based operating systems such as Mac OS X. Installation is relatively easy and widespread help is available in the Rails developer community.
Use a Ruby Version Manager
As new versions of Ruby are released, you’ll need an easy way to switch between versions. Just as important, you’ll have a dependency mess if you install gems into the system environment. I recommend RVM to manage Ruby versions and gems because it is popular, well-supported, and full-featured. If you are an experienced Unix administrator, you can consider alternatives such as Chruby, Sam Stephenson’s rbenv, or others on this list.
Conveniently, you can use RVM to install Ruby.
Don’t Install Ruby from a Package
Ubuntu provides a package manager system for installing system software. You’ll use this to prepare your computer before installing Ruby. However, don’t use apt-get to install Ruby. The package manager will install an outdated version of Ruby. And it will install Ruby at the system level (for all users). It’s better to use RVM to install Ruby within your user environment.
You can use Ruby on Rails without actually installing it on your computer. Hosted development, using a service such as Nitrous.io, means you get a computer “in the cloud” that you use from your web browser. Any computer can access the hosted development environment, though you’ll need a broadband connection. Nitrous.io is free for small projects.
Using a hosted environment means you are no longer dependent on the physical presence of a computer that stores all your files. If your computer crashes or is stolen, you can continue to use your hosted environment from any other computer. Likewise, if you frequently work on more than one computer, a hosted environment eliminates the difficulty of maintaining duplicate development environments. For these reasons some developers prefer to “work in the cloud” using Nitrous.io. For more on Nitrous.io, see the article Ruby on Rails with Nitrous.io. Nitrous.io is a good option if you have trouble installing Ruby on Rails on your computer.
Prepare Your System
You’ll need to prepare your computer with the required system software before installing Ruby on Rails.
You’ll need superuser (root) access to update the system software.
Update Your Package Manager First:
$ sudo apt-get update
This must finish without error or the following step will fail.
$ sudo apt-get install curl
You’ll use Curl for installing RVM.
Install Ruby Using RVM
Use RVM, the Ruby Version Manager, to install Ruby and manage your Rails versions.
If you have an older version of Ruby installed on your computer, there’s no need to remove it. RVM will leave your “system Ruby” untouched and use your shell to intercept any calls to Ruby. Any older Ruby versions will remain on your system and the RVM version will take precedence.
Ruby 2.0.0-p353 was current when this was written. You can check for the current recommended version of Ruby. RVM will install the newest stable Ruby version.
The RVM website explains how to install RVM. Here’s the simplest way:
$ \curl -L https://get.rvm.io | bash -s stable --ruby
Note the backslash before “curl” (this avoids potential version conflicts).
The "--ruby" flag will install the newest version of Ruby.
RVM includes an “autolibs” option to identify and install system software needed for your operating system. See the article RVM Autolibs: Automatic Dependency Handling and Ruby 2.0 for more information.
If You Already Have RVM Installed
If you already have RVM installed, update it to the latest version and install Ruby:

$ rvm get stable --autolibs=enable
$ rvm install ruby
$ rvm --default use ruby-2.0.0-p353
1. Installation Troubleshooting and Advice
If you have trouble installing Ruby with RVM, see the article “Installing Ruby” for Installation Troubleshooting and Advice. If you have problems installing RVM, use Nitrous.io.
2. Install Node.js
$ sudo apt-get install nodejs
and set it in your $PATH.
If you don’t install Node.js, you’ll need to add this to the Gemfile for each Rails application you build:
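The Gemfile line referred to here is almost certainly the embedded V8 JavaScript runtime gem, since Rails needs a JavaScript runtime for the asset pipeline; treat the exact gem and options as an assumption and check the current RailsApps guide:

gem 'therubyracer', platforms: :ruby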
3. Check the Gem Manager
RubyGems is the gem manager in Ruby.
4. Check the Installed Gem Manager Version:

$ gem -v
2.1.11
You should have :
RubyGems 2.1.11 — check for newer version
Use gem update --system to upgrade the Ruby gem manager if necessary.
5. RVM Gemsets
Not all Rails developers use RVM to manage gems, but many recommend it.
6. Display a List of Gemsets:
$ rvm gemset list

gemsets for ruby-2.0.0-p353
=> (default)
   global
Only the “default” and “global” gemsets are pre-installed.
If you get an error “rvm is not a function,” close your console and open it again.
7. RVM’s Global Gemset
See what gems are installed in the “global” gemset:

$ rvm gemset use global
$ gem list
A trouble-free development environment requires the newest versions of the default gems.
Several gems are installed with Ruby or the RVM default gemset:
bundler (1.3.5) check for newer version
bundler-unload (1.0.1) check for newer version
rake (10.1.0) check for newer version
rubygems-bundler (1.3.3) check for newer version
rvm (188.8.131.52) check for newer version
To get a list of gems that are outdated :
$ gem outdated ### list not shown for brevity
To update all stale gems :
$ gem update ### list not shown for brevity
Faster Gem Installation
By default, when you install gems, documentation files will be installed. Developers seldom use gem documentation files (they’ll browse the web instead). Installing gem documentation files takes time, so many developers like to toggle the default so no documentation is installed.
Here’s how to speed up gem installation by disabling the documentation step :
$ echo "gem: --no-document" >> ~/.gemrc

This adds the line gem: --no-document to the .gemrc file in your home directory.
You can stay informed of new gem versions by creating an account at RubyGems.org and visiting your dashboard. Search for each gem you use and “subscribe” to see a feed of updates in the dashboard (an RSS feed is available from the dashboard).
After you’ve built an application and set up a GitHub repository, you can stay informed with Gemnasium or VersionEye. These services survey your GitHub repo and send email notifications when gem versions change. Gemnasium and VersionEye are free for public repositories with a premium plan for private repositories.
Rails Installation Options
Check for the current version of Rails. Rails 4.0.2 was current when this was written.
You can install Rails directly into the global gemset. However, many developers prefer to keep the global gemset sparse and install Rails into project-specific gemsets, so each project has the appropriate version of Rails.
Let’s consider the options you have for installing Rails.
If you want the most recent stable release:

$ gem install rails
$ rails -v
If you want the newest beta version or release candidate, you can install with --pre.

$ gem install rails --pre
$ rails -v
Or you can get a specific version.
For example, if you want the Rails 3.2.16 release:
$ gem install rails --version=3.2.16
$ rails -v
Create a Workspace Folder
You’ll need a convenient folder to store your Rails projects. You can give it any name, such as code/ or projects/. For this tutorial, we’ll call it workspace/.
Create a Projects Folder and Move Into the Folder:

$ mkdir workspace
$ cd workspace
This is where you’ll create your Rails applications.
New Rails 4.0 Application
Here’s how to create a project-specific gemset, installing the current version of Rails 4.0, and creating a new application.
$ mkdir myapp
$ cd myapp
$ rvm use ruby-2.0.0@myapp --ruby-version --create
$ gem install rails
$ rails new .
We’ll name the new application “myapp.” Obviously, you can give it any name you like.
With this workflow, you’ll first create a root directory for your application, then move into the new directory.
With one command you’ll create a new project-specific gemset. The option “--ruby-version” creates .ruby-version and .ruby-gemset files in the root directory. RVM recognizes these files in an application’s root directory and loads the required version of Ruby and the correct gemset whenever you enter the directory.
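For example, with the setup above the two generated files would contain something like this (shown for illustration; the exact contents depend on the Ruby version you installed):
$ cat .ruby-version
ruby-2.0.0
$ cat .ruby-gemset
myapp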
When we create the gemset, it will be empty (though it inherits use of all the gems in the global gemset). We immediately install Rails. The command gem install rails installs the most recent release of Rails.
Finally we run rails new .. We use the Unix “dot” convention to refer to the current directory. This assigns the name of the directory to the new application.
This approach is different from the way most beginners are taught to create a Rails application. Most instructions suggest using rails new myapp to generate a new application and then enter the directory to begin work. Our approach makes it easy to create a project-specific gemset and install Rails before the application is created.
The rails new command generates the default Rails starter app. If you wish, you can use the Rails Composer tool to generate a starter application with a choice of basic features and popular gems.
For a “smoke test” to see if everything runs, display a list of Rake tasks.
$ rake -T
There’s no need to run bundle exec rake instead of rake when you are using RVM (see RVM and bundler integration).
This concludes the instructions for installing Ruby and Rails. Read on for additional advice and tips.
Rails Starter Apps
The starter application you create with rails new is very basic.
Use the Rails Composer tool to build a full-featured Rails starter app.
You’ll get a choice of starter applications with basic features and popular gems.
Here’s how to generate a new Rails application using the Rails Composer tool:
Using the conventional approach :
$ rails new myapp -m https://raw.github.com/RailsApps/rails-composer/master/composer.rb
Or, first creating an empty application root directory:
$ mkdir myapp
$ cd myapp
$ rvm use ruby-2.0.0@myapp --ruby-version --create
$ gem install rails
$ rails new . -m https://raw.github.com/RailsApps/rails-composer/master/composer.rb
You can add the -T flag to skip Test::Unit if you are using RSpec for testing.
You can add the -O flag to skip Active Record if you are using a NoSQL datastore such as MongoDB.
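For example, to skip both while using Rails Composer (just an illustrative combination of the flags above):
$ rails new . -T -O -m https://raw.github.com/RailsApps/rails-composer/master/composer.rb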
If you get an error “OpenSSL certificate verify failed” when you try to generate a new Rails app, see the article OpenSSL errors and Rails.
Rails Tutorials and Example Applications
The RailsApps project provides example apps that show how real-world Rails applications are built. Each example is known to work and can serve as your personal “reference implementation”. Each is an open source project. Dozens of developers use the apps, report problems as they arise, and propose solutions as GitHub issues. There is a tutorial for each one so there is no mystery code. Purchasing a subscription for the tutorials gives the project financial support.
| Example Applications for Rails 4.0 | Tutorial | Comments |
| --- | --- | --- |
| Learn Rails | coming soon | introduction to Rails for beginners |
| Rails and Bootstrap | Tutorial | starter app for Rails and Twitter Bootstrap |

| Example Applications for Rails 3.2 | Tutorial | Comments |
| --- | --- | --- |
| Twitter Bootstrap, Devise, CanCan | Tutorial | Devise for authentication, CanCan for authorization, Twitter Bootstrap for CSS |
| Rails Membership Site with Stripe | Tutorial | Site with subscription billing using Stripe |
| Rails Membership Site with Recurly | Tutorial | Site with subscription billing using Recurly |
| Startup Prelaunch Signup App | Tutorial | For a startup prelaunch signup site |
| Devise, RSpec, Cucumber | Tutorial | Devise for authentication with ActiveRecord and SQLite for a database |
| Devise, Mongoid | Tutorial | Devise for authentication with a MongoDB datastore |
| OmniAuth, Mongoid | Tutorial | OmniAuth for authentication with a MongoDB datastore |
| Subdomains, Devise, Mongoid | Tutorial | Basecamp-style subdomains with Devise and MongoDB |
Adding a Gemset to an Existing Application
If you’ve already created an application with the command rails new myapp, you can still create a project-specific gemset. Here’s how to create a gemset for an application named "myapp" and create .ruby-version and .ruby-gemset files in the application’s root directory:
$ rvm use ruby-2.0.0@myapp --ruby-version --create
You’ll need to install Rails and the gems listed in your Gemfile into the new gemset by running:
$ gem install rails
$ bundle install
Specifying a Gemset for an Existing Application
If you have already created both an application and a gemset, but not .ruby-version and .ruby-gemset files, here’s how to add the files. For example, if you want to use an existing gemset named "ruby-2.0.0@myapp":
$ echo "ruby-2.0.0" > .ruby-version
$ echo "myapp" > .ruby-gemset
Using .ruby-version and .ruby-gemset files means you’ll automatically be using the correct Rails and gem version when you switch to your application root directory on your local machine.
Databases for Rails
Rails uses the SQLite database by default. RVM installs SQLite and there’s nothing to configure.
Though SQLite is adequate for development (and even some production applications), a new Rails application can be configured for other databases. The command rails new myapp --database= will show you a list of supported databases.
Supported for preconfiguration are: mysql, oracle, postgresql, sqlite3, frontbase, ibm_db, sqlserver, jdbcmysql, jdbcsqlite3, jdbcpostgresql, jdbc.
For example, to create a new Rails application to use PostgreSQL:
$ rails new myapp --database=postgresql
The --database=postgresql parameter will add the pg database adapter gem to the Gemfile and create a suitable config/database.yml file.
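For reference, the generated config/database.yml for PostgreSQL looks roughly like the sketch below; the exact keys vary by Rails version, and the database and username values are placeholders derived from the application name:
development:
  adapter: postgresql
  encoding: unicode
  database: myapp_development
  pool: 5
  username: myapp
  password: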
Don’t use the --database= argument with the Rails Composer tool. You’ll select a database from a menu instead.
If you wish to run your own servers, you can deploy a Rails application using Capistrano deployment scripts. However, unless system administration is a personal passion, it is much easier to deploy your application with a “platform as a service” provider such as Heroku.
For easy deployment, use a “platform as a service” provider such as:
For deployment on Heroku, see the article:
By design, Rails encourages practices that avoid common web application vulnerabilities. The Rails security team actively investigates and patches vulnerabilities. If you use the most current version of Rails, you will be protected from known vulnerabilities. See the Ruby On Rails Security Guide for an overview of potential issues and watch the Ruby on Rails Security Mailing List for announcements and discussion.
4. Your Application’s Secret Token
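(The body of this section appears to have been lost in the reblog. In Rails 4.0 the secret token lives in config/initializers/secret_token.rb; a common practice, sketched here with "Myapp" standing in for your application’s module name, is to generate a value with rake secret and load it from an environment variable rather than committing it to the repository:)
Myapp::Application.config.secret_key_base = ENV['SECRET_KEY_BASE']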
Problems with “Segmentation Fault”
If you get a “segfault” when you try rails new, try removing and reinstalling RVM.
Problems with “Gem::RemoteFetcher::FetchError: SSL_connect”
Ruby and RubyGems (starting with Ruby 1.9.3p194 and RubyGems 1.8.23) require verification of server SSL certificates when Ruby makes an Internet connection via https. If you run rails new and get an error “Gem::RemoteFetcher::FetchError: SSL_connect returned=1 errno=0 state=SSLv3 read server certificate” see this article suggesting solutions: OpenSSL errors and Rails.
Problems with “Certificate Verify Failed”
Are you getting an error “OpenSSL certificate verify failed” when you try to generate a new Rails app from an application template? See this article suggesting solutions: OpenSSL errors and Rails.
Where to Get Help
Your best source for help with problems is Stack Overflow. Your issue may have been encountered and addressed by others.
You can also try Rails Hotline, a free telephone hotline for Rails help staffed by volunteers.
Reblogged from railsapps.github.io
Cell Phone Timeline: A Visual Evolution of Cell Phone Design
“Joel, this is Marty. I’m calling you from a cell phone — a real, handheld, portable cell phone!”
These were the first words uttered into a cell phone.
Stuttering into his prototype in 1973, Motorola Engineer Martin Cooper secured his position as the father of cellphone technology.
Then came a decade-long hiatus until, at last, the first commercially-available cell phone arrived on shelves in 1983. Over the next 35 years, cell phone development ran riot.
From bulky, cumbersome, suitcase-like devices, cell phone design has morphed into sleek, streamlined personal assistants that can anticipate your every move, deliver a library of information, and connect you with anyone in the world.
Wondering what that journey looked like?
In this cell phone timeline, we’ll give you the complete visual evolution of cell phone design from 1983 to 2020.
Cell Phone Timeline #1: Motorola DynaTAC 8000X – 1983
The Motorola DynaTAC 8000X was the first commercial portable cellular phone. The model received approval from the U.S. FCC on September 21, 1983, and was released for sale the following year.
Six times heavier and twice as long as the iPhone 6, this pricey mobile device cost $3,995 in 1984 (over $9,000 today).
Cell Phone Timeline #2: Nokia Talkman – 1984
Nokia jumped on the cell phone bandwagon early, having previously developed car phones. Known as Nokia-Mobira back then, the Finnish brand’s first model sold extremely well in Europe.
Paying attention to cell phone design, notice the bulky detachable battery and the carry handle for this hefty device.
This briefcase-like phone cost the equivalent of $5,800.
Cell Phone Timeline #3: Motorola MicroTAC 9800X – 1989
Cell phone design trends quickly transitioned away from heaviness, catching on to the benefits of being light.
Motorola’s MicroTAC was the answer in 1989.
As well as being the world’s first flip phone, the MicroTAC’s drastically smaller design revolutionized cell phone design.
Securing Motorola’s commitment to analog technology, this lightweight cell phone cost around $6000 in today’s money.
Cell Phone Timeline #4: Nokia 1011 – 1991
Welcoming the world’s first GSM cell phone, the 1011 was Nokia’s response to smaller cell phone design.
Gone are the handle and colossal battery, replaced by an extendable antenna and cutting-edge monochrome LCD screen.
This model can store 99 phone numbers, but while claims say it was the first SMS-enabled phone, this isn’t true.
Cell Phone Timeline #5: Motorola International 3200 – 1991
Taking what appears to be a backward step, Motorola released the International 3200 in 1991.
Favoring the brick-sized cell phone design, this phone had a fixed aerial but a more slim-lined body than earlier models. In Germany, people called it a ‘knochen’ because of its likeness to a bone.
Interestingly, the battery took 5 hours to charge!
Cell Phone Timeline #6: IBM Simon – 1994
While we may think smartphones are new, IBM’s Simon was the first all the way back in 1994.
This personal communicator was the first handheld, touchscreen, personal digital assistant with telephony capabilities.
Preloaded with apps like Mail, Sketch Pad, and To Do, this model cost the same as a top smartphone today — $2000 ($1100 in 1994).
Weighing just over a pound, this 8-inch smartphone was an inch and a half thick!
Cell Phone Timeline #7: Nokia 2110 – 1994
Significantly smaller than Nokia’s previous models, the 2110 had a minimal aerial and a notably larger LCD display.
This model sits in the first SMS generation of phones. Users repeatedly tap the soft buttons to access the letters for text messages.
The 2110 was also the first Nokia featuring the iconic ringtone.
Cell Phone Timeline #8: Nokia RinGo – 1995
The Nokia RinGo’s bright blue phone case introduced the concept of a phone personalization and accessorization.
This model played with button placement; a concept repeated throughout the 1990’s mobile phone design.
The RinGo featured a screen upgrade, offering users the choice of a grey or green monochrome LCD screen.
This phone design quickly gained a reputation as a “girl’s phone.” It even got the nickname ‘bimbo phone’ in Sweden!
Cell Phone Timeline #9: Nokia 8110 – 1996
The Nokia 8110 was the first phone with a slide cover for the buttons.
Targeting the business market, this model was the smallest, lightest phone yet, weighing only 152 grams.
While this slide phone gained kudos for being used by Neo (Keanu Reeves) in the original Matrix movie, its curved design earned it the moniker ‘banana phone’.
Cell Phone Timeline #10: Nokia 9000 Communicator – 1996
Another early smartphone, the Nokia Communicator 9000, was released in 1996. This hybrid model consisted of a palmtop computer cradled in a Nokia 2110 mobile phone.
Able to send email and fax, the Communicator 9000 had a web browser and business apps.
Costing the equivalent of $1400 today, this handheld telephonic PC weighed over 300 grams.
Cell Phone Timeline #11: Nokia 9110 – 1998
Two years later, Nokia upgraded the Communicator 9000 to the 9110.
The sleek shape was indicative of cell phone design in the late-90s, while the compact style reduced its weight by over a third.
Using this innovative model, users could send smart messages, connect a digital camera, and compose ringtones.
Cell Phone Timeline #12: Siemens C25 – 1999
Siemens appeared on the scene in 1999 with the C25.
While lightweight, its basic mobile phone design wasn’t revolutionary.
Users were more impressed with its long battery life, allowing 300 minutes talk time and over 160 hours on standby!
Cell Phone Timeline #13: Nokia 3210 – 1999
The recognizable Nokia shape started to evolve in 1999 with the 3210.
The 3210 was the first mass-market cell phone design with an internal antenna. While trendy at the time, this early technology hampered phone reception.
Users did enjoy the upgrade to picture messages on an LCD screen. For ringtone fans, this was the first Nokia model with the Composer software.
Cell Phone Timeline #14: Nokia 8210 – 1999
The 8210 was released in late 1999, and was a drastic reduction in size compared to its predecessor.
This slim-lined handset sat snug in the user’s palm — ideal for the 8210’s introduction to first-generation ‘Snake’.
This model featured call management and speed dial and could connect to PCs and printers with infrared.
Cell Phone Timeline #15: BlackBerry 850 – 1999
While technically an email pager, 1999 saw the first commercially-sold BlackBerry.
The BlackBerry 850 upturned cell phone design with its QWERTY keyboard and thumb wheel for scrolling.
Demand exploded due to the keypad arrangement.
Cell Phone Timeline #16: Nokia 5210 – 1999
Still keeping the legendary Nokia shape, the 5210 had a removable, splash-proof, interchangeable phone case.
Not only was it known for its durability, but this model also spurred on accessorization.
This handset offered games with advanced graphics (“advanced” being a relative term, of course) enabling users to enjoy Snake II, Space Impact, Bantumi, Pairs II and Bumper.
Cell Phone Timeline #17: Samsung SPH-M100 – 2000
Welcoming Samsung on the scene, the SPH-M100 was the first cell phone design to consider music.
This handset was the first phone with mp3 capabilities. It came with high-quality headphones that neatly fit this compact device.
It’s worth noting that this phone was named by Time Magazine as one of the All-TIME 100 greatest and most influential gadgets from 1923 to 2010.
Cell Phone Timeline #18: Nokia 3310 – 2000
Perhaps one of Nokia’s most famous cell phone designs, the 3310 came with customizable phone cases.
The screensaver is also customizable using picture messages. Users could compose and save seven ringtones or use one of the 35 in-built ringtones.
The 3310 is jokingly known to the internet meme community for its exaggerated durability.
Cell Phone Timeline #19: Ericsson R380 – 2000
At first glance, Ericsson’s R380 appeared to be a little behind on cell phone design trends, with its bulky design and square shape.
However, it was the first smartphone to look like a normal cell phone. The number panel flipped back to reveal a landscape screen.
This smartphone was the first Symbian OS device, and operated with a black and white touchscreen.
Cell Phone Timeline #20: Sony Ericsson T68 – 2001
The teensy Sony Ericsson T68 boasted the first color screen and came with a cover choice of two-tone grey or all-gold.
Users could enjoy color images, screensavers, and picture messages, along with feature-rich capabilities, such as Bluetooth, IrDA port, GPRS 3+1, tri-band compatibility.
This tiny model came with an add-on camera sold separately. Later upgrades integrated a camera.
Cell Phone Timeline #21: Nokia 5510 – 2001
Mixing up conventional cell phone design, the 5510 became extremely popular in the USA.
Rotating cell phone design, this phone worked in landscape, using a QWERTY keyboard split on either side of the monochrome display.
While innovative in style, the phone itself was basically a 3310 in a different outfit.
Cell Phone Timeline #22: Siemens S45i – 2001
While there was nothing particularly special about the S45i design features, this non-intrusive cell phone was Siemens’ first GPRS phone.
This handset was popular due to its speedy data transmission and internet access.
Cell Phone Timeline #23: Nokia 7650 – 2002
The Nokia 7650 started to show design features of today’s smartphones, with its big screen and 2.5G capabilities.
The first Nokia featuring MMS, this model had an impressive 2.1-inch color display. However, the display wasn’t touchscreen and was still operated using buttons.
This handset was popularized after it was used in the futuristic movie, Minority Report.
Cell Phone Timeline #24: Sanyo SCP-5300 – 2002
The Sanyo SCP-5300’s clamshell style became extremely trendy following a Japanese craze.
Among the first camera phones, this flip device allowed users to view photos on the screen, instead of plugging it into a PC.
The front screen also enabled users to add photos and call recognition.
Cell Phone Timeline #25: HTC Universal – 2005
Though it looked like a mini laptop, the HTC Universal was the first 3G pocket PC phone. The model hosted Windows Mobile, and was the first-ever mobile device to use this operating system.
While significant upgrades have taken place, the HTC Universal is still sold today.
Cell Phone Timeline #26: Motorola Q – 2006
Making a comeback with a BlackBerry-inspired design, Motorola’s Q boasted a big screen and a small QWERTY keyboard, rotated on a portrait design.
This handset had a range of audio and picture formats for sending and sharing music and pictures. Users could even use the speech recognition function to text.
Cell Phone Timeline #27: iPhone (1st Generation) – 2007
2007 saw Apple revolutionize the smartphone design with the first generation iPhone.
The full-color touchscreen epitomized Steve Job’s dedication to writing directly on the screen instead of using a stylus.
Here’s a little-known secret. The dev team codenamed this touchscreen development project ‘Project Purple 2’.
While cutting-edge in cell phone design style, critics complained about its buggy software.
Cell Phone Timeline #28: iPhone 3G – 2008
The iPhone 3G was out the next year offering a sleeker design for the second generation of Apple’s smartphone.
The iPhone 3G changed the way users interacted with smartphones by introducing the AppStore. A wealth of applications meant users could customize their experience.
While this model looked and performed better, the battery life was atrocious.
Cell Phone Timeline #29: Samsung Instinct – 2008
The Samsung versus iPhone rivalry came early on. The Samsung Instinct looked similar to an iPhone and was dubbed the ‘iPhone Killer’.
Users particularly liked the impressive 2-megapixel camera, leading to Samsung’s later reputation for good smartphone cameras.
Cell Phone Timeline #30: Google Nexus One – 2010
Designed and manufactured by HTC, the Google Nexus One was the first smartphone with Google’s Nexus software.
The cell phone design lent itself to online browsing. The wide screen made online reading easier, while the flat cell phone design and smooth edges increased comfortability.
Moving cell phone design focus to software and hardware, this handset had voice-to-text transcription and voice-guided GPS, along with an extra microphone for dynamic noise suppression.
Cell Phone Timeline #31: iPhone 5 – 2012
The iPhone 5 was the fastest-selling phone of its time.
Upgraded design features included the slimline lightning connector for quicker charging and the phone’s thinner shape and wider screen.
The iPhone 5 had an 8-megapixel rear camera and a front camera for selfies.
This model prompted patent lawsuits with Samsung.
Cell Phone Timeline #32: Samsung Note 4 – 2014
By 2014, Samsung’s signature Galaxy model was already in competition with the iPhone. The Galaxy Note 4, however, introduced the concept of wider cell phone design.
Triggering a cell phone design trend toward bigger screens, the Note 4 was considerably wider than other models. This trend is seen reflected in the wider iPhone 6 as well.
Known as a ‘phablet’, the Note 4 had damage-resistant Gorilla glass.
Cell Phone Timeline #33: BlackBerry KeyOne – 2017
BlackBerry joined the smartphone era in 2017, with the KeyOne. Contrary to the touchscreen design trend, BlackBerry famously kept buttons.
By this point, smartphones were commonly used as cameras, so storage became an important asset. This Android-based smartphone had an impressive 3GB of RAM and 32GB of internal storage.
Cell Phone Timeline #34: Samsung Galaxy Fold – 2019
Samsung has returned to the foldable phone. Instead of the retro vertical flip, the Samsung Galaxy Fold folded horizontally.
The foldable Android smartphone has a touchscreen on the outside to access the phone when folded. The device opened out to a 7.3-inch tablet.
Critics had concerns about the durability of the hinge.
Cell Phone Timeline #35: iPhone 11 – 2020
The iPhone 11 is Apple’s phone for 2020.
Featuring three cameras, users enjoy lenses for telefocus, wide angles, and super-wide angles. These cameras shot 4K video at 60 frames per second.
Faster to charge, this model claimed to have super long battery life. Users also benefit from rapid processing, as this was the first series of iPhone with the Apple A13 Bionic embedded.
Since the dawn of cell phone design, trends throughout the 1990s moved from bulky and big to small and compact.
However, as smartphone technology increased the capabilities of cell phones, design trends have morphed to compensate for this.
Wider displays and touchscreen features allow easier web browsing and enhanced user experience, while embedded top-spec cameras and microphones enable high-quality photos, videos, and sound.
In short, the evolution of cell phone design has replaced the clunky and the clumsy mobile designs of yesteryear with sleeker, UX-focused, usability-centric cell phone design.
A common example of the sheer amount of computing power available to almost anyone today is comparing a smartphone to the Apollo guidance computer. This classic computer was the first to use integrated circuits so it’s fairly obvious that most modern technology would be orders of magnitude more powerful, but we don’t need to go back to the 1960s to see this disparity. Simply going back to 1989 and getting a Compaq laptop from that era running again, while using a Raspberry Pi Zero to help it along, illustrates this point well enough.
[befinitiv] was able to get a Raspberry Pi installed inside the original computer case, and didn’t simply connect the original keyboard and display and then call it a completed build. The original 286 processor is connected to the Pi with a serial link, so both devices can communicate with each other. Booting the computer into DOS and running a small piece of software drops it into a Linux terminal emulator hosted on the Raspberry Pi. The terminal can be exited and the computer returns to its original DOS setup. This also bypasses the floppy disk drive for transferring files to the 286, since files can be retrieved wirelessly on the Pi and then sent over to the 286.
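The write-up doesn’t spell out the exact plumbing, but a typical way to offer a Linux login over a Pi’s UART is to run a serial getty on the Pi and point a DOS terminal program at the other end of the cable. The sketch below assumes the UART shows up as ttyAMA0 and that a conservative 9600 baud, 8N1 line suits the 286’s serial port; both are assumptions, not details from [befinitiv]’s build:
# on the Raspberry Pi: start a login prompt on the UART
sudo systemctl enable --now serial-getty@ttyAMA0.service
# to pin the line speed, override the unit so it runs something like
#   /sbin/agetty 9600 ttyAMA0 vt100
# on the 286 side, run any DOS terminal program (Telix, Kermit, etc.)
# at the same baud rate and settings, then log in to the Pi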
This is quite an interesting mashup of new and old technology, and with the Pi being around two orders of magnitude more powerful than the 286 and wedged into vacant space inside the original case, [befinitiv] points out that this amalgamation of computers is “borderline useful”. It’s certainly an upgrade for the Compaq, and for others attempting to get ancient hardware on the internet, don’t forget that you can always use hardware like this to access Hackaday’s retro site.
34 thoughts on “Installing Linux Like It’s 1989”
Just let the old tech go away for retirement.
(typed on IBM X31)
Sez the commenter with the nick of a 50 yr old song. HARRRRUMPH#
Porting this old V6 Unix version for the 286 to that laptop would be more fun, since Unix would then run on the 286 itself and not on the Raspberry Pi: https://unixarchive.cn-k.de/Other/V6on286/ (and there’s also Xenix 286).
There’s also intel’s iAXP286 Operating System Writer’s Guide, available from bitsavers, if you want to dig into the details (http://bitsavers.org/components/intel/80286/121960-001_iAPX_286_Operating_Systems_Writers_Guide_1983.pdf). The 286 is a pretty strange CPU…
I have that book appreciating in value (aka collecting dust and deteriorating) in storage. From back when Intel made cool stuff. It was a great companion to the original IBM PC/AT, from back when IBM made cool stuff.
runs pretty well on my 286-16 with 4Mb of RAM
MINIX 1.x or 2.x run fine on a 286. I ran 1.7 on a 386DX/33 and 1.5 on an Atari ST for a long time to develop my early UNIX skills before I moved on to Slackware Linux on a 486 and NetBSD 1.2 on a Mac IIci.
Downloading a full Linux distro over dialup in 1993 was super painful. MINIX was still painful but doable and a LOT more compact.
SCO XENIX also ran fine on a 286.
Cramming a Pi in a 286 case isn’t super impressive to me but slightly entertaining I guess. Would have been more impressed if he got an NE2000 or 3c503 Ethernet card working under MINIX going and got it talking to modern hardware over the network with some RAM to spare.
Linux wasn’t released until 1991 and never supported the 286 (though ELKS did/does). This is not “installing Linux on a 286,” it is “using the 286 keyboard/display with a Raspberry Pi.” Certainly cool, but absolutely not what it says on the tin.
You should have read the title more carefully: it says, “Installing Linux INTO a 286 laptop from year 1989” (emphasis mine.)
But it doesn’t say that. The title of the post is “INSTALLING LINUX LIKE IT’S 1989”.
That’s the title of the article. The title of the video is very clearly “Installing Linux into a 286 laptop from year 1989”
If it didn’t involve FTPing from decwrl.dec.com or tsx-11.mit.edu, then writing 13 3.5″ floppy images to disks, then booting a root disk and installing from all 13, one after another…it’s not an AUTHENTIC ancient Linux install :-)
(notice how I remembered the ftp sites? type them enough and that happens)
Thanks (I think) for the memories!
(and thanks to Walnut Creek CDROM for the install CDs!)
That brings back memories. Not all fond. Like having disk #12 having an error.
Yessssss, lots of fun
Borderline useful? I guess it could be useful for interfacing with old tech, should that become necessary.
But mostly I’d say that a person doesn’t need a “usefulness” justification to do a cool thing.
Back in the mid- to late-1980s, I was a field engineer, working on Unix workstations. This was a typical thing that FEs carried at the time – a laptop being used just as a serial terminal to do maintenance on bigger computers, when they wouldn’t boot all the way up to X11. I think it’s pretty cool that he got the machine to boot to DOS. It’s just a bonus that there’s a Linux machine running in there, too.
Around that time I was a lurker on the usenet minix group but my day job and attention was keeping an Interdata 70 up and running with a 1979 version of Unix. (Paper tape boot loader and clicking magnetic core memory)
It was fun trying to get manufacturers to understand we could not just upgrade parts as the computer was the only device certificated to test flight software for a still flying communication satellite. And no we were not going to be able to send a space shuttle up in order to replace parts or do upgrades.
Not bad, reminds me when I did something like this with a modified PogoPlug and a PC I used as a Server. The PogoPlug was used as a decoy for remote SSH sessions.
Gem of time
Funny, we just had the retro-French-dialup-ATMega version of this… https://hackaday.com/2021/06/28/tiny-operating-system-for-tiny-computer/
I’m a sucker for a retrocomputer build. You could say I have a “type”
The 80286 is a really interesting 16-bit CPU, because although it supported (segmented) virtual memory, it made it so awkward, almost no OS used it and it held back 32-bit computing in the PC world for pretty much a whole decade.
At the beginning of the year I had an idea on how to implement a ‘relatively’ simply VM system for the 286; which is to try and coerce it into acting like a paged VM. So, all segments would be allocated to 4kB boundaries and in n*4kB sizes, and multiple small far memory allocations could be assigned to single 4kB segments wherever there was space. This would, I believe limit the degree of fragmentation whilst minimising swapping times and provide for a decent virtual address space for the era (in the region of 8192*4kB = 32MB).
The VM system I was thinking of was also simplified in that it was single user (Ring 3) with a non-pageable kernel space (Ring 0); just so I could limit things to one LDT plus one GDT and only have to do the VM with the LDT.
There’s no point to this – it’s just an intriguing problem to my mind :-) .
“There’s no point to this – it’s just an intriguing problem to my mind :-) .”
Those are the best kinds of problems! It’s why I became a mathematician.
It’s also why I explored “BEAM” robotics and ternary computing (I’m convinced that everyone is probably right that, for practical reasons, binary is more efficient, but I wanted to give myself a chance to prove myself wrong — and while I wasn’t able to get far enough into my explorations to make any conclusions, one way or the other, if you want to take my enjoyment of ternary logic from me, you’ll have to do it from my cold dead hands!), among other things.
Wind turbine systems appear to use Linux/c/c++ software technology, we recently learned.
Christie advertises for Linux/c/c++ programmers.
Ivenenergy at Goshen wind farm Idaho Falls clued us into wind farm software technology video. :)
Installed Linux-based Apache2 server on two Ryzen-based win 10/20H2 and 2004 laptops while on trip.
Apache run on Raspberry Pi 4B too.
Neat! Makes me think somebody should build a Pi with an ISA interface that you could just plug into your ancient iron.
So, an ARM co-processor card, sort of.
There’s already something similar for the Amiga 500!
(I’m not affiliated with the project, i just think it’s really cool)
If anyone here likes 486’s running Linux, Damn Small Linux runs great on a modest 486 PC with limited amounts of RAM. It also runs Xserver just fine.
It’s kind of (not at all) amazing, that a computer made 20 years ago could still run the same software today, with similar performance.
BSD UNIX did/ does it better… :) (net FTW)
(but you really can’t realistically run a protected mode OS on a machine without a MMU…)
Back about 1990 I helped write a magazine review for a Grid (pre Tandy acquisition) laptop that Grid shipped with SCO Xenix installed on a 30 meg hard drive. The case of that beast was die cast aluminum and the display was a red plasma screen. It could run for about 30 minutes on a lead acid camcorder battery. We test dropped the machine from waist height onto a carpeted concrete floor and it kept running. I don’t recall the processor – must’ve been an i386 DX. I didn’t care much for Xenix – it was not exactly AT&T Sys V and it wasn’t Berkeley either; just felt strange to use. Xenix man pages were also organized weirdly.
Make that i386 without the “DX.”
Ahh Xenix… Microsoft’s Unix, and the OS that inspired directories in MS-DOS 2.0. Eventually that grew into SCO OpenServer, which also is a bizzare OS.
“Laptops” of that era were always chunky affairs… first computer in the house here looked like this: https://en.wikipedia.org/wiki/Toshiba_T3100
No battery, 80286 processor with an 80287 floating-point unit… CGA graphics. Those things survived a drop from waist-height too, and if it happened to land on your foot: goodbye foot!
I think that I have that laptop! It would be nice to get some more life into it.
Cardiovascular (Framingham) and type II diabetes (Finnish Diabetes) risk scores: a qualitative study of local knowledge of diet, physical activity and body measurements in rural Rakai, Uganda
BMC Public Health volume 22, Article number: 2214 (2022)
Non-communicable diseases such as cardiovascular conditions and diabetes are rising in sub-Saharan Africa. Prevention strategies to mitigate non-communicable diseases include improving diet, physical activity, early diagnosis, and long-term management. Early identification of individuals at risk based on risk-score models – such as the Framingham Risk Score (FRS) for 10-year risk of cardiovascular disease and the Finnish type 2 Diabetes risk score (FINDRISC) for type 2 diabetes which are used in high-income settings – have not been well assessed in sub-Saharan Africa. The purpose of this study was to qualitatively assess local knowledge of components of these risk scores in a rural Ugandan setting.
Semi-structured qualitative in-depth interviews were conducted with a purposively selected sample of 15 participants who had responded to the FRS and FINDRISC questionnaires and procedures embedded in the Rakai Community Cohort Study. Data were summarized and categorized using content analysis, with support of Atlas.ti.
Participants described local terms for hypertension (“pulessa”) and type 2 diabetes (“sukaali”). Most participants understood physical activity as leisure physical activity, but when probed would also include physical activity linked to routine farm work. Vegetables were typically described as "plants", “leafy greens”, and “side dish”. Vegetable and fruit consumption was described as varying seasonally, with peak availability in December after the rainy season. Participants perceived themselves to have good knowledge about their family members’ history of type 2 diabetes and hypertension.
While most items of the FRS and FINDRISC were generally well understood, physical activity needs further clarification. It is important to consider the seasonality of fruits and vegetables, especially in rural resource-poor settings. Current risk scores will need to be locally adapted to estimate the 10-year risk of cardiovascular diseases and type 2 diabetes in this setting.
Each year, an estimated 41 million people die from non-communicable diseases (NCDs), approximately 70% of all deaths globally; of these, approximately 17 million are among people below the age of 70 years and classified as premature deaths [1, 2]. The most common types of NCDs are cardiovascular diseases (CVD) (e.g. myocardial infarction, stroke), cancers, chronic respiratory diseases (e.g. chronic obstructive pulmonary disease, asthma), and type 2 diabetes [3, 4]. Despite these diseases being starkly different, many share the same risk factors, justifying the umbrella term of NCDs. NCDs have been increasing in low- and lower-middle income countries due to demographic changes, e.g. longer life expectancy, and lifestyle changes [5, 6]. In Uganda, cause of death statistics suggest a declining trend in infectious diseases and an increase in the relative and absolute burden of NCDs [7, 8]. A population-based survey of Ugandan adults 18–69 years regarding NCDs estimated one in four respondents had raised blood pressure; it was more common in males than females and among urban than rural residents. The majority (76%) of participants with raised blood pressure were not taking any treatment to lower their blood pressure. The prevalence of raised fasting glucose including diabetes was 3.3%, and close to 90% of participants who were found to have a raised fasting glucose were not on medication nor aware of their hyperglycaemia. The prevalence of hypertension among adults in our study population is estimated at 20.8%.
The increasing burden of NCDs in sub-Saharan Africa will strain health systems primarily designed to cater for the persistent burden of infectious diseases . One approach to decrease the expected future burden on health systems is to identify individuals at risk of NCDs to implement preventive interventions to delay or prevent disease progression. Risk score models used to identify individuals at risk are common in high-income settings. The Framingham Risk Score (FRS) is a sex-sensitive algorithm used to estimate the 10-year CVD risk of an individual . The FRS non-laboratory version is attractive in resource-limited settings [13, 14]. Another risk score is the Finnish type 2 Diabetes risk score (FINDRISC) , a simple non-invasive tool developed in a prospective cohort of individuals aged 35–64 years used to estimate 10-year risk of type 2 diabetes . Such scores are increasingly used to detect risk and guide informed decision-making regarding initiation or intensification of preventive strategies but have mostly been used and validated in high-income settings [17,18,19]. Transferring risk scores from the setting where they were developed and tested to another type of setting can be tempting, but needs careful evaluation, calibration and ideally validation [20,21,22,23]. However, in low-resource settings, there is limited literature on the acceptability and understanding of NCD risk scoring tools and procedures.
With the increase of NCDs in sub-Saharan Africa, more knowledge about how these diseases and their risk factors are being perceived is needed. Despite the increasing risk, the level of knowledge about key NCDs like stroke and their risk factors remains low [24, 25]. Several risk scores have been developed to estimate an individual's risk for acquiring one or more NCDs. Determining commonly acceptable local terms for disease, key symptoms, dietary practices, and the understanding of other key indicators is critical in assessing disease and risk. Adaptation of data capture tools is essential in enhancing rigor of measurements and data obtained for public health decision making. The choice of a tool for data collection is a key element of the research process. It is through questionnaires/instruments aimed at assessing, for example, family history of disease and acceptability of performing a given procedure in each setting, that it is possible to measure phenomena of interest and analyze their associations or risk in health surveys. As the tools have been developed in high-income countries, we were uncertain about how participants would perceive or understand some of the components of the tools and whether procedures like waist, hip measurements and others would be well accepted in a non-clinical setting.
The purpose of this study was to explore community members’ local knowledge, terminology, and understanding of key components of FRS and FINDRISC in the context of a rural population-based public health surveillance site in Uganda, using qualitative exit interviews with selected Rakai Community Cohort Study (RCCS) participants.
Study setting and population
The RCCS is a population-based open cohort in rural south-central Uganda from 1994 to date. Details about the study site have been published elsewhere [26,27,28,29,30]. Briefly, the RCCS enrolls residents and recent migrants aged 15–49 years in ~ 40 communities to participate in HIV surveillance, with additional modules and reproductive health. In 2017–2019, we introduced a question module assessing NCD risk into the RCCS, using items from the non-laboratory-based FRS and FINDRISC, as well as some key laboratory tests. The NCD module targeted participants aged 35–49 years to assess their 10-year risk for CVD and type 2 diabetes. We obtained data on age, sex, and other demographics as part of the behavioral questionnaire. Prior to the questionnaire we took blood pressure and took anthropometric measurements (height, weight, hip, and waist). We also assessed their history of type 2 diabetes and hypertension as well as current treatment and family history of type 2 diabetes and hypertension, dietary intake of fruit and vegetables and physical activity (Table 1). Quantitative results have been published previously .
All participants in this study were identified following their participation in the RCCS. Interviewers were trained to administer the RCCS survey including the NCD question module systems, which provided an opportunity to identify potential key informants and building rapport with them. Recruitment was done by convenience sampling. Interviewers ensured that they did not enroll more than one participant from the same household. Potential informants who were found to be knowledgeable and willing to share personal experiences with the NCD module were asked for an additional qualitative exit interview that would explore in-depth topical areas on feeding, physical activity, and perceptions on vital measurements. Written consent was obtained, and no participants were identified as illiterate. Once the respondent chose to participate, they were interviewed the same day or the following working day at the central RCCS interviewing site in the community or at another location convenient for them, while the NCD procedures and questions were still fresh.
A total of 201 interviews using the RCCS questionnaires were conducted by the three interviewers during the study period. From these, 22 potential exit qualitative interview participants were identified. Five of the non-interviewed participants could not find time on the same day or the next working day and two were not in the age eligibility bracket for the risk score. A total of 15 carefully moderated in-depth interviews were conducted until saturation was reached.
Data collection and analysis
A team of three experienced social science qualitative interviewers (1 male, 2 females) were trained to administer the RCCS survey including the NCD risk assessment module. The interviewers received training on the rationale for each of the items on the risk scores and how the items are reflected on the qualitative semi-structured interview guide to probe responses and gain a deeper understanding of how participants interpreted the global tools and the thought process behind their responses.
Between January and April 2019, interviewers recruited and interviewed RCCS participants per the participant selection procedures above. A semi-structured interview guide was developed in English by the first (RS, MSPH coordinator of the RCCS activities) and the last authors (HN, MD, PhD co-investigator to the NCD risk assessment protocol) and later translated into Luganda the predominant local language. The translation was done by a professional Luganda teacher who is attached to the RSHP data quality control team. Key items on the FRS and FINDRISC were considered while developing this study interview guide (Additional file 1). The study guide was pilot tested in five people.
Interviews were audio-recorded and transcribed by the interviewer within 48 h, and brief field notes were made during the interview. Interviews lasted an average of 35 min. After each interviewer had collected and transcribed their first interview, we had a review meeting with the interviewing team to brainstorm on their experiences and to determine how to proceed with subsequent data collection. All the interviews were held in the local dialect (Luganda) but translated to English directly at transcription. After transcription, a second team member read through the transcript as they listened to its audio recording and read its field notes to ensure completeness of transcription and obtain common agreement on translation between the transcribing interviewer and the second reader.
We used a thematic analysis approach to analyze the data. The analysis commenced iteratively with the collection process. During the collection process we discussed emerging and diverging themes and decided on how to proceed with the inquiry, especially on how to improve probes. The scripts were read numerous times by RS and LN to identify patterns in the data and develop potential codes. Transcripts were coded in Atlas.ti version 5.2 using short-listed preliminary codes. Using "lean coding", the initial 6 codes expanded to 22 codes as more themes emerged. After overlapping and redundant codes were merged or dropped respectively, 19 codes remained and were recategorized into domains. The findings are reported below in anonymized quotes that were translated into English from the verbatim interview script.
Each informant in this sub-study provided written informed consent for RCCS participation, including the NCD module, and a separate consent for the additional qualitative exit interview. The study went through institutional ethics approvals at the Uganda Virus Research Institute’s Research Ethics Committee (GC/127/18/07/657), the National Research Registration by the Uganda National Council for Science and Technology (SS 4836) and the Swedish Ethical Review Authority (2018/2542–31/2). The interviewers had human subjects research ethics training within the past three years prior to their involvement in the study. No study participants' names were used in the final coded transcripts.
A total of 15 interviews were conducted with 9 women and 6 men. The median age was 44 years (range 35–49) for men and 37 (range 35–45) for women. Participants’ occupations were subsistence agriculture/farming (4) (small scale farmers producing largely for their home use), trader/vendor/shopkeepers (5), fisherfolk (2), government/clerical workers (2), restaurant/waiter (1) and a housewife (1). They were mainly household heads (8) and spouses of household heads (6), or held another relationship with the household head (1). Key results are summarised with sample codes and illustrative quotes in Table 2.
Terms and local knowledge about type 2 diabetes, hypertension, and NCDs
We explored the commonly used local terms for type 2 diabetes, hypertension, and NCDs in general. In the exit interviews, we used the terms “sukaali”, which also means sugar, for type 2 diabetes and “pressure” for hypertension. All the participants agreed to and concurred with the use of the term “sukaali” for diabetes.
Informant: I do not know of any other word in Luganda, it is sukaali in Luganda we call it sukaali [literally meaning sugar]. Is there any other name in Luganda [informant asks]?
Interviewer: I don’t know, I am here to learn from you.
Informant: Another name for diabetes? No madam, I have never heard of any other term. [Participant 14, Female, 35 years old, Farmer]
All but one participant exclusively used the term “pulessa" [pressure] for hypertension, and one participant also introduced the term “entunnunsi” [pulse, palpitation].
Blood pressure in this community has been localized. It remains [puleesa] pressure you say pressure, people understand but the elderly tend to also call it [entunnunsi] [palpitation]. I hear people in this area say that I have palpitation, or that they have told me that I have ...pressure, that is how it is in our area. [Participant 1, Female, 37 years old, Farmer]
We explored if there were local terms that could be used generically to mean non-communicable/ non infectious diseases. Three participants suggested the use of the term “endwaddwe z’abakadde” [diseases of the elderly]. It was common for informants to associate diabetes with age and abnormal weight; however, hypertension was mainly linked to being wealthy “abaggaga” [the rich]. As one participant indicated:
For debates [diabetes] I always hear people say that it attacks elderly people and those with abnormal weights and for hypertension like I told you it attacks rich people. [Participant 10, Male, 41 years old, Farmer]
We probed for this at subsequent interviews, but subsequently participants expressed discomfort with the use of diseases of the elderly to refer to non-communicable, non-infectious diseases. One participant indicated that “endwadde z’abakadde” is used for musculoskeletal diseases like back pain but not the classical NCDs:
Diseases for the elderly people here it is usually used when referring to diseases concerning bones and may be the back pain. [Participant 15, Male, 38 years old, Fisherman]
A few participants were quick to refute the notion of “diseases of the elderly”, noting that recently these diseases have become more common, and they affect people of all ages, so referring to them as diseases of the elderly was anachronistic:
Those diseases like pulessa [pressure] and sukaali [diabetes] are now very common diseases that they do not vary by age that is, they cut across all ages not only the older people. [Participant 2, Male, 38 years old, Government/clerical worker]
Experience with anthropometric measurements
We explored the acceptability of taking anthropometric measurements like waist and hip circumference, height, and weight in a non-clinical setting, probing especially for acceptability and experience with hip and waist measurements and whether it was appropriate for a research assistant to take the measurements of a participant of the opposite sex.
Most participants presented no sociocultural hinderances for male research assistants to take measurements of female participants and vice versa. Two participants felt that this was a medical procedure where the sex of your provider does not matter. Others equated this to going to a tailor who must measure your height, waist, etc. as stated by this participant.
We take it as a usual thing because when you take your clothes to a tailor, he/she will take your measurements, to do a good outfit. But this which is done by you people the basawo [health workers], I don’t know unless you tell me of any problem. [Participant 1, Female, 37 years old, Farmer]
A few participants, however, felt this required more explaining, especially if the person taking their measurements was of a different sex. Some said they would be “skeptical as to why it was a man.” This concern was more likely raised by female participants:
Participant: I did not feel anything because it is a female who took my measurements. If it were the opposite sex, I would be skeptical.
Interviewer: If it were an opposite sex, how would it be, how would you feel?
Participant: It would all depend on how he would handle me and may be his approach too.
Interviewer: Please throw more light on that?
Participant: If he comes and explains to me the whole process, it is ok, but just bumping on me, that will mean something different. I would become skeptical.
Interviewer: You say you would be skeptical, why and what would be your reaction?
Participant: I would ask questions like, why take the measurements of my waist and why would it be he [him]? [Participant 12, Female, 37 years old, Housewife]
Participants were asked about their feelings about having their blood pressure taken and recorded. Almost all participants indicated that this was acceptable. However, a few reported being anxious about what the reading would be and sometimes the reading was indeed frightening.
I felt so good. I was not forced to come here. I only got scared when they told me that my blood pressure was high, I didn’t know about it, but with the rest, there is no problem.” [Participant 7, Female, 38 years old, Trader/vendor]
Terms, local knowledge, and experience with physical activity
We explored the understanding of physical activity in this rural population. We asked what physical activities participants were involved in (if any) and what physical activities were commonly done by other community members. For purposes of the interview, we formally translated physical activity as “duyiro”, which informants more often perceived to mean “exercising”, though the terms could sometimes be used interchangeably.
About half of the participants did not immediately understand the question(s) on physical activity. Oftentimes interviewers had to re-ask, re-phrase or even paraphrase the question, and sometimes had to use illustrations like "activities of sports nature" or "activities that cause you to breathe hard or pant". Participants mentioned a variety of events that they considered physical activities. Most mentioned energy exerting or activities of daily living, like housework (paid or unpaid) especially working in the garden (mainly digging and cutting grass), grazing animals and house chores like washing clothes, and fetching water, rather than mentioning recreational physical activities.
Interviewer: Okay, you say you have not been putting attention to it. Let’s try to understand this together. Tell me about the exercising or manual thing you do?
Informant:Cultivation, you know we farmers are in the garden and when I dig, sometimes in the morning and return in the evening it is enough exercise. [Participant 1, Female, 37 years old, Farmer]
This was similar to what participants observed in their communities as the most common forms of physical activities:
“The activity I see mainly is farming [digging] and here they do activities like spraying and that spraying can is so heavy and needs one to be strong because it is really tiring”. [Participant 9, Female, 41 years old, Farmer]
Some participants did not consider incidental physical activities as exercise. Although structured recreational physical activities were rare, football was commonly mentioned in addition to daily manual work activities. Almost all the participants who raised it associated it with youths, and none of the informants indicated that they played football themselves.
Interviewer: What types of physical activities do people in your community engage in?
Informant: The youths mainly get involved in playing football and cultivating
Interviewer: Apart from cultivating and playing football, tell me about the other physical exercises do they get involved in?
Informant: Apart from those, there no other physical exercises that they are involved. [Participant 5, Male, 49 years old, Shopkeeper]
Measuring physical activity
We asked participants to describe the amount of time they spent engaged in different types of physical activity. Among those who engaged in physical activity, the most frequently reported amount of time spent in the field digging gardening was 5–6 h each day, either in one go or combining morning and evening hours of cultivating, 5–6 days each week:
When it’s cultivation time, I wake up at exactly 06:00am, wash my face, serve the pigs food then make sure that at exactly 07:00am I am already in the garden cultivating, then I get off cultivating at 12:00pm which I always do six days in a week. [Participant 10, Male, 44 years old, Farmer]
Other activities generally took less time, but those which were quantifiable still took at least one hour. Two participants were manual laundry ladies; they considered their work as their physical activity as described by this participant.
I will tell you that I wake up at 7:00am to start washing for clients and by 10:00am am done with this, then I continue with ironing and by noon am done with the ironing, lunch is usually prepared by my big sister, after washing, I take a shower, have lunch then take a nap. [Participant 7, Female, 38 years old, Trader/vendor]
The frequency of the most popular incidental physical activities varied by season. Community members were likely to be more physically engaged during the rainy season than during the dry seasons:
Interviewer: You also talked of cultivation, do you this only in the morning or you sometimes do it in the evening?
Participant: This varies, during dry seasons, we only dig in the morning and rainy seasons, both morning and evening. [Participant 1, Female, 37 years old, Farmer]
Terms and local knowledge about fruits and vegetables
We assessed key informants' understanding of vegetables and solicited examples of vegetables that were part of their diet (if any). We posed a question to the informant: “In your just concluded interview, the interviewer asked you to talk about vegetables. What did you understand by vegetables [enva endiirwa]? What are they and how did that discussion go?".
Some informants mainly described vegetables as "bikoola" [leafy], green in colour [greens] (mentioned in more than half the interviews), or as a side dish [enva endiirwa] (consistent with the translation we had used in the questionnaire, "bitter or sour leaves"). Sometimes, informants emphasised their understanding of vegetables by mode of preparation, or the state in which they are served, like "oyinza okuzirya embisi nga cabbage" [eaten raw like cabbages], half cooked or cooked fully, steamed to taste, or fried in oil. It was more common for informants to use illustrative descriptions in which they referred to specific or common vegetables, sometimes considered wild plants. The informant below provided a detailed list of different vegetables, a description that has important connotations for how vegetables may or may not be valued in this type of setting.
Interviewer: I would like to know what you understood when we asked you about vegetables.
Informant: What I understand from vegetables the greens we grow, and some is wild like ddoodo [a green colour type of amaranthus spinach], nakatti [scarlet eggplant], bbuga [a purple-coloured type of amaranthus spinach]), ejjobyo [African spider herb] which is served as enva endiirwa [as a side dish]. [Participant 7, Female, 38 years old, Trader/vendor]
This informant ends with the term "enva endiirwa" [side dish], a description that matters for the way vegetables are perceived, as indicated by another informant:
As you hear, it is a side dish … you can have it if you have it, it is of less importance when you do not have it… you would be lucky to have the real sauce beans, meat [laughs]. [Participant 5, Male, 49 years old, Shopkeeper]
Participants also described the advantages of eating vegetables, indicating that they are foods that protect against diseases by cleansing the body, increasing blood supply in the body and keeping one generally healthy:
I know of vegetables as something that adds to our health, they help in okuyonja omubiri [detoxification?] and prevents diseases from attacking you…, when you lack blood, the health workers advise you to take the vegetables and get blood there are in many categories with different uses [advantages] [Participant 3, Female, 35 years old, Farmer]
Informants also described their understanding by combining the mode of preparation and the usefulness of the vegetables, especially in fighting disease. They used terms like eating them raw, half cooked, fried, or steamed; when needed, we probed for how the different vegetables in their communities were prepared or served. As this participant stated:
There are those we eat raw, half cooked, and others fried. Most times, the ones we eat raw or half cooked help in the fight against diseases as compared to our counterparts [other people in the community] who fry them. Vegetables lose nutrients depending in the way they are handled. For example, Ddoodo, when cut them before washing, all the nutrients are lost in the process and there also people who cut and fry them, these eat roughage. If it is cabbages, it is better when eaten raw so that the body gets all the value in it that is if you know how to prepare them well and hygienically. [Participant 1, Female, 37 years old, Farmer]
Participants often used specific examples to illustrate their knowledge of fruits. The most common examples participants highlighted were "miyembe" (mangoes) and "ffene" (jackfruit). Other participants described them as edibles that give energy or add blood and water to the body:
Fruits give energy and there are those that add blood and water to the body… for example we have passion fruits, paw paws [papaya], mangoes, watermelon, and orange. [Participant 13, Female, 42 years old, Government/clerical worker]
It was common for participants to describe fruit as something picked from the tree, eaten raw [uncooked], or eaten without any preparation effort.
Some informants linked fruit consumption more to children than to adults. They indicated that children may survive on fruits the whole day, but adults typically took fruits "by the way" and could spend longer periods not eating fruits:
I can even take a month without eating it again, but this does not apply to other members in the community especially the younger ones. There are guys in the community who survive on jackfruit. They hide the jackfruit bunches and eat them when they are ready. Those are most especially children – but for an adult, no [laughs]. [Participant 2, Male, 38 years old, Government/clerical worker]
Participants emphasized that unless one has multiple trees of a given fruit, they would not generally be willing to sell the fruits. Instead, they would leave the fruits for the children, as this informant indicated:
If someone has one tree in the compound, there is no way they can sell them, instead, they leave them for the children. [Participant 1, Female, 37 years old, Farmer]
When talking about fruit and vegetable intake, it was common for informants to refer to seasons when fruits and vegetables were less or more available. It was not uncommon for participants to refer to abundant access and scarcity, linked to annual seasons, as well as “bad year” to mean years of low harvest (even within the seasons of a given fruit or vegetable):
We have various fruits in our community like, avocado, paw paws [papaya], jackfruits, oranges, tangerines, mangoes, watermelon. Though these are seasonal and current, they are not available in the community – difficult to get them. [Participant 7, Female, 38 years old, Trader/vendor]
Seasonality as a factor of access and intake was also mentioned in respect to vegetables. However, unlike fruits, most vegetables were available almost throughout the year though in varying quantities:
We usually have some throughout the year especially Nakatti and Ebbuga because they are planted in swampy areas during the dry seasons and on mainland during rainy seasons… then we buy from those who have swampy areas. This is different with ddoodo, during dry seasons, there will be no ddoodo at all because it is not planted in swamps like the other vegetables. [Participant 12, Female, 37 years old, Housewife]
Knowledge on family history of type 2 diabetes and hypertension
Informants were asked if and how they were able to respond to questions on family history of type 2 diabetes and hypertension. Most informants perceived themselves to have a good level of knowledge about the health of their close family members, even if they were not currently living with them. A few informants felt that some diseases would be treated privately, and so would not be disclosed even within the family, but diabetes and hypertension were not seen as diseases in that private domain:
Informant: There are diseases you cannot tell people if you have them for instance candida, syphilis but not for diabetes, why not share, you never know among those you tell, there could be one who can give you a permanent solution.
Interviewer: What do you think makes people not to tell others that they have candida and syphilis?
Informant: It is because these diseases are of the private parts, you can’t stand there and tell people that you know what, my private parts are itching or paining. If you do that people will think that you are mad. But for diabetes, even if it is 1000 people, I can tell them that I have diabetes and ask if they can be of help. [… even with hypertension] There is no problem with that. In fact, in our community, the term (pressure) has been locally adopted. People are so free with that information. [Participant 7, Female, 38 years old, Trader/vendor]
Some informants felt that close relatives, mainly siblings and parents, would disclose any conditions for which they received a diagnosis:
As I told you my siblings would open to me in case of such a condition, if my elder sister would be open about her HIV status to me and my other siblings, then there is no reason she or any of them would keep quiet about such a thing like diabetes. [Participant 1, Female, 37 years old, Farmer]
With the increase of NCDs in sub-Saharan Africa, more knowledge is needed about how these diseases and their risk factors are perceived. Determining commonly acceptable local terms for disease, key symptoms, dietary practices, and other key indicators is critical in assessing disease and risk. In our study, hypertension and diabetes were generally well understood. The most common term in our setting to denote hypertension was pulessa [pressure]. The common local term for type 2 diabetes was sukaali [which literally translates as sugar]. Both terms were widely accepted. This solid understanding of the terminology made the data gathering process easier. However, as in many other settings, there is no single accepted umbrella term for NCDs, which makes communication about these diseases in general terms complex. From a broader perspective, informants identified the conditions as "diseases for the rich", as was the case historically when they were associated with economic development in high-income countries. They were also referred to as "diseases for the fat people"; multiple studies have confirmed that overweight and obesity are indeed risk factors for many NCDs. Some referred to them as "diseases for the elderly", as it has long been known that older people have an increased risk of different NCDs relative to younger counterparts. In summary, the population identifies these diseases by their risk factors: older age, being obese or overweight, and being wealthy.
Waist circumference has been shown to be more informative than body mass index (BMI) in predicting raised blood pressure (BP), glucose and total cholesterol, including in low-resource settings. Yet unlike other, more commonly taken anthropometric measurements, waist circumference is rarely measured. Health workers are often concerned that patients will be embarrassed by waist circumference measurement, yet patients rarely report such embarrassment. At the start of this study, our field teams worried that participants would be embarrassed, but in general our participants did not express any form of embarrassment. Other than two participants who suggested that waist measurements should be taken by a same-sex health worker, most participants considered waist circumference a normal procedure that is acceptable if required by their health provider. This high level of acceptability, with rare exceptions, is consistent with what has been reported elsewhere.
Participants did not spontaneously respond affirmatively to questions on physical activity of a leisure or sporting nature. It was only after they were probed about manual housework or farm work that informants listed a range of non-leisure physical activities, mainly citing digging, walking and other manual work within and outside the home setting. Digging and raking have previously been studied and categorized as "high-intensity" physical activities, and were also described as such by some of our participants, as illustrated by the quotes above. Community members in Uganda generally spend a lot of time in the fields/garden and are often engaged in other manual activities, which could explain the minimal attention given to leisure-related physical activities despite relatively high levels of physical activity compared to other populations globally. Understanding the nature of physical activities in a setting is essential in building a comprehensive, meaningful and locally appropriate physical activity index.
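To put the interview reports in perspective, here is a rough back-of-the-envelope comparison (a sketch, not an analysis from the study): the hour and day figures are the midpoints of the 5–6 hours per day and 5–6 days per week reported above, the 150-minute weekly target is the WHO recommendation for moderate-intensity activity, and treating digging as at least moderate-intensity activity is an assumption.

```python
# Rough comparison of reported cultivation time with the WHO guideline of
# at least 150 minutes of moderate-intensity physical activity per week.
# Assumes digging counts as at least moderate-intensity activity.

hours_per_day = 5.5        # midpoint of the reported 5-6 hours per day
days_per_week = 5.5        # midpoint of the reported 5-6 days per week
who_minimum_minutes = 150  # WHO weekly minimum for moderate-intensity activity

weekly_minutes = hours_per_day * days_per_week * 60
print(f"Reported cultivation: about {weekly_minutes:.0f} minutes per week")
print(f"WHO minimum target:   {who_minimum_minutes} minutes per week")
print(f"Roughly {weekly_minutes / who_minimum_minutes:.0f} times the minimum target")
```

Even under conservative assumptions, seasonal farm work alone far exceeds the guideline, which underlines why surveys that only ask about leisure-time exercise would badly underestimate activity in this setting.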
Family history has long been known to have a strong association with the risk of type 2 diabetes and hypertension [40,41,42]. However, health providers have sometimes reported challenges in obtaining reliable and accurate information on family history from their patients. Our study revealed that participants did not have any difficulty obtaining information about family history of chronic diseases where no stigma was attached. Mainly through regular information sharing with family members, informants considered themselves a reliable source of information about their family history of chronic diseases.
Participants were generally in agreement about what vegetables are, describing them as leafy and/or green plants. They identified vegetables as side dishes eaten as a complement to the main meal, eaten fresh or cooked in different forms. Fruits, by contrast, were mainly defined as foodstuffs that grow on trees with flowering characteristics, like mangoes, or on some climbing plants, like passion fruits and watermelons. The most commonly mentioned fruits were mangoes, pineapples, watermelon, and passion fruits. This list is consistent with what has been described by others, especially in the field of agricultural value addition. The consumption of fruits and vegetables is associated with seasonality of abundance and scarcity. Most fruits are only available once or twice a year, in the months immediately after the rainy seasons. While the availability of vegetables tends to cluster around the rainy seasons, a few are available throughout the year, supplied by farmers in swampy areas. The seasonal availability of fruits and vegetables could in part explain the reported low consumption in most parts of Uganda [44, 45]. Additionally, seasonal availability and geographical location could influence how participants respond to questions on fruits and vegetables, thus impacting the risk score measurements (the distribution of rains, fruits and vegetables is heterogeneous even within the same country).
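To illustrate how the items discussed in this paper feed into a diabetes risk score, the sketch below outlines a FINDRISC-style calculation. The point values follow the commonly published version of the instrument but are reproduced here from memory and should be verified against the official FINDRISC form before any real use; the function and variable names are ours, not the study's.

```python
def findrisc_style_score(age, bmi, waist_cm, sex,
                         daily_activity_30min, daily_fruit_veg,
                         bp_medication, high_glucose_history,
                         family_history):
    """Illustrative FINDRISC-style type 2 diabetes risk score.

    Point values follow the commonly published FINDRISC items and should be
    checked against the official instrument. `family_history` is one of
    'none', 'second_degree', 'first_degree'.
    """
    score = 0

    # Age
    if 45 <= age <= 54:
        score += 2
    elif 55 <= age <= 64:
        score += 3
    elif age > 64:
        score += 4

    # Body mass index
    if 25 <= bmi <= 30:
        score += 1
    elif bmi > 30:
        score += 3

    # Waist circumference (sex-specific cut-offs)
    if sex == "male":
        if 94 <= waist_cm <= 102:
            score += 3
        elif waist_cm > 102:
            score += 4
    else:
        if 80 <= waist_cm <= 88:
            score += 3
        elif waist_cm > 88:
            score += 4

    # At least 30 minutes of physical activity per day (work and/or leisure):
    # this is where counting farm and housework, not just sport, matters.
    if not daily_activity_30min:
        score += 2

    # Daily fruit and vegetable consumption: seasonality affects this answer.
    if not daily_fruit_veg:
        score += 1

    if bp_medication:
        score += 2
    if high_glucose_history:
        score += 5

    if family_history == "second_degree":
        score += 3
    elif family_history == "first_degree":
        score += 5

    return score


# Example: a 45-year-old farmer who digs daily but, when asked only about
# "exercise", answers "no" to the physical activity item.
print(findrisc_style_score(age=45, bmi=27, waist_cm=85, sex="female",
                           daily_activity_30min=False, daily_fruit_veg=False,
                           bp_medication=False, high_glucose_history=False,
                           family_history="none"))
```

Note that the physical activity and fruit/vegetable items alone can shift the total by three points, which is why locally appropriate phrasing of those questions, and attention to seasonality, matters for the resulting risk category.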
Being able to understand the elements of a risk score is important, as this enhances future risk assessment. However, future studies describing how risk itself is understood are essential. A previous study has highlighted the complexities of understanding the concept of risk for CVD in an African setting, and this requires further interrogation.
Like all other research, this study has strengths and limitations. The primary strength of the present study is that it is one of the first to qualitatively explore participants’ experiences with the Framingham and FINDRISC 10-year risk scores in a resource-limited setting. The study was conducted within the unique setting of a population-based prospective surveillance cohort; this gives the opportunity for proper integration of NCD risk assessment within a public health surveillance program.
However, it is not without limitations. The studied population comprised 35–49-year-olds based in rural south-central Uganda. This constraint is dictated by the ongoing parent study from which this sample was obtained and therefore limits our ability to generalize the results to older age groups or other settings. Additionally, the semi-structured interview guide was translated into Luganda, yet the study aimed to explore, among other things, the most appropriate local terms. To address this, our interviewers used multiple techniques to probe and assess whether the pre-translated terms were the most appropriate and whether there were other popular terms. Finally, we did not have a good opportunity to triangulate our data collection methods; for instance, focus group discussions could have been conducted to explore more general norms, but we deemed this methodology unfeasible for assessing individuals’ experiences immediately following their RCCS interview and procedures.
This study found that terms used in NCD risk factor surveys varied in their acceptability among respondents in a population-based cohort study. For hypertension and type 2 diabetes there are commonly acceptable local terms, but not for NCDs as an umbrella term. The use of disease-specific local terms may be more appropriate than the use of NCD as an umbrella term in areas where there is no agreed local term. Physical activity is mainly defined in terms of daily routine or manual work, but notably, most participants did not count non-leisure physical activity when simply asked if they engage in physical activity. Consumption of fruits and vegetables is affected by seasons of availability and scarcity. While the risk scores are generally suitable, it is important to localize key aspects, especially physical activity, and to take seasonality into account for fruit and vegetable consumption. Engaging communities prior to data collection to obtain contextual knowledge on the nature and load of work and other manual activities could help improve studies that aim to score risk for NCDs in different settings, especially when using risk scores developed elsewhere. In the future, however, locally developed and validated risk scores taking these aspects into account would be ideal.
Availability of data and materials
Data beyond what is presented in this manuscript is available upon reasonable request to the corresponding author.
Abbreviations
BMI: Body Mass Index
FINDRISC: Finnish Diabetes Risk Score
FRS: Framingham Risk Score
NCD: Non-communicable diseases
RCCS: Rakai Community Cohort Study
Feigin V, Collaborators GRF. Global, regional, and national comparative risk assessment of 79 behavioural, environmental and occupational, and metabolic risks or clusters of risks, 1990–2015: a systematic analysis for the Global Burden of Disease Study 2015. The Lancet. 2016;388:1659–724.
Roth G. Global Burden of Disease Collaborative Network. Global Burden of Disease Study 2017 (GBD 2017) Results. Seattle, United States: Institute for Health Metrics and Evaluation (IHME), 2018. The Lancet. 2018; 392: 1736–88.
World Health Organization. Noncommunicable diseases. Fact sheet. 2018.
Guh DP, Zhang W, Bansback N, Amarsi Z, Birmingham CL, Anis AH. The incidence of co-morbidities related to obesity and overweight: a systematic review and meta-analysis. BMC Public Health. 2009;9:1–20.
Cook A. Notes on the diseases met with in Uganda, central Africa. J Trop Med. 1901;4:5–8.
Whiting DR, Guariguata L, Weil C, Shaw J. IDF diabetes atlas: global estimates of the prevalence of diabetes for 2011 and 2030. Diabetes Res Clin Pract. 2011;94:311–21.
Natukwatsa D, Wosu AC, Ndyomugyenyi DB, Waibi M, Kajungu D. An assessment of non-communicable disease mortality among adults in Eastern Uganda, 2010–2016. PLoS ONE. 2021;16: e0248966.
Kalyesubula R, Mutyaba I, Rabin T, et al. Trends of admissions and case fatality rates among medical in-patients at a tertiary hospital in Uganda. a four-year retrospective study. PloS one. 2019;14:e0216060.
Ministry of Health Uganda. Non-Communicable Disease Risk Factor Baseline Survey. Republic of Uganda, World Health Organization, UN Development Programme and …, 2014.
Mustapha A, Ssekasanvu J, Chen I, et al. Hypertension and Socioeconomic Status in South Central Uganda: A Population-Based Cohort Study. Global Heart. 2022; 17.
Schwartz JI, Guwatudde D, Nugent R, Kiiza CM. Looking at non-communicable diseases in Uganda through a local lens: an analysis using locally derived data. Glob Health. 2014;10:1–9.
D’Agostino RB Sr, Vasan RS, Pencina MJ, et al. General cardiovascular risk profile for use in primary care: the Framingham Heart Study. Circulation. 2008;117:743–53.
Rezaei F, Seif M, Gandomkar A, Fattahi MR, Hasanzadeh J. Agreement between laboratory-based and non-laboratory-based Framingham risk score in Southern Iran. Sci Rep. 2021;11:1–8.
Pandya A, Weinstein MC, Gaziano TA. A comparative assessment of non-laboratory-based versus commonly used laboratory-based cardiovascular disease risk scores in the NHANES III population. PLoS ONE. 2011;6: e20416.
Lindström J, Tuomilehto J. The diabetes risk score: a practical tool to predict type 2 diabetes risk. Diabetes Care. 2003;26:725–31.
Schwarz PE, Li J, Lindstrom J, Tuomilehto J. Tools for predicting the risk of type 2 diabetes in daily practice. Horm Metab Res. 2009;41:86–97.
Brindle P, Jonathan E, Lampe F, et al. Predictive accuracy of the Framingham coronary risk score in British men: prospective cohort study. BMJ. 2003;327:1267.
Tunstall-Pedoe H, Woodward M. By neglecting deprivation, cardiovascular risk scoring will exacerbate social gradients in disease. Heart. 2006;92:307–10.
Riddell T, Wells S, Jackson R, et al. Performance of Framingham cardiovascular risk scores by ethnic groups in New Zealand: PREDICT CVD-10. NZ Med J. 2010;123:50–61.
Jayanna K, Swaroop N, Kar A, et al. Designing a comprehensive Non-Communicable Diseases (NCD) programme for hypertension and diabetes at primary health care level: evidence and experience from urban Karnataka, South India. BMC Public Health. 2019;19:1–12.
Malan Z, Mash R, Everett-Murphy K. Qualitative evaluation of primary care providers experiences of a training programme to offer brief behaviour change counselling on risk factors for non-communicable diseases in South Africa. BMC Fam Pract. 2015;16:1–10.
Aye LL, Tripathy JP, Maung Maung T, et al. Experiences from the pilot implementation of the Package of Essential Non-communicable Disease Interventions (PEN) in Myanmar, 2017–18: A mixed methods study. PLoS ONE. 2020;15: e0229081.
Heller DJ, Kumar A, Kishore SP, Horowitz CR, Joshi R, Vedanthan R. Assessment of barriers and facilitators to the delivery of care for noncommunicable diseases by nonphysician health workers in low-and middle-income countries: a systematic review and qualitative analysis. JAMA Network Open. 2019;2:e1916545-e.
Nakibuuka J, Sajatovic M, Katabira E, Ddumba E, Byakika-Tusiime J and Furlan AJ. Knowledge and perception of stroke: a population-based survey in Uganda. International Scholarly Research Notices. 2014; 2014.
Kaddumukasa M, Kayima J, Kaddumukasa MN, et al. Knowledge, attitudes and perceptions of stroke: a cross-sectional survey in rural and urban Uganda. BMC Res Notes. 2015;8:1–7.
Grabowski MK, Serwadda DM, Gray RH, et al. HIV prevention efforts and incidence of HIV in Uganda. N Engl J Med. 2017;377:2154–66.
Gray RH, Kigozi G, Serwadda D, et al. Male circumcision for HIV prevention in men in Rakai, Uganda: a randomised trial. The Lancet. 2007;369:657–66.
Kagaayi J, Chang LW, Ssempijja V, et al. Impact of combination HIV interventions on HIV incidence in hyperendemic fishing communities in Uganda: a prospective cohort study. The lancet HIV. 2019;6:e680–7.
Quinn TC, Wawer MJ, Sewankambo N, et al. Viral load and heterosexual transmission of human immunodeficiency virus type 1. N Engl J Med. 2000;342:921–9.
Wawer MJ, Gray RH, Sewankambo NK, et al. A randomized, community trial of intensive sexually transmitted disease control for AIDS prevention, Rakai. Uganda Aids. 1998;12:1211–25.
Enriquez R, Ssekubugu R, Ndyanabo A, et al. Prevalence of Cardiovascular Risk Factors by HIV Status in a Population-based Cohort in Rakai, Uganda: A Cross-sectional Survey. Journal of the International AIDS Society JIAS. 2022; In Press.
Creswell JW and Báez JC. 30 essential skills for the qualitative researcher. Sage Publications, 2020.
Boutayeb A, Boutayeb S. The burden of non communicable diseases in developing countries. Int J Equity Health. 2005;4:1–8.
Tran NTT, Blizzard CL, Luong KN, et al. The importance of waist circumference and body mass index in cross-sectional relationships with risk of cardiovascular disease in Vietnam. PLoS ONE. 2018;13: e0198202.
Brown I, Stride C, Psarou A, Brewins L, Thompson J. Management of obesity in primary care: nurses’ practices, beliefs and attitudes. J Adv Nurs. 2007;59:329–41.
Dunkley AJ, Stone MA, Patel N, Davies MJ, Khunti K. Waist circumference measurement: knowledge, attitudes and barriers in patients and practitioners in a multi-ethnic population. Fam Pract. 2009;26:365–71.
Shahar D, Shai I, Vardi H, Brener-Azrad A, Fraser D. Development of a semi-quantitative Food Frequency Questionnaire (FFQ) to assess dietary intake of multiethnic populations. Eur J Epidemiol. 2003;18:855–61.
Guthold R, Stevens GA, Riley LM, Bull FC. Worldwide trends in insufficient physical activity from 2001 to 2016: a pooled analysis of 358 population-based surveys with 1· 9 million participants. Lancet Glob Health. 2018;6:e1077–86.
Wientzek A, Vigl M, Steindorf K, et al. The improved physical activity index for measuring physical activity in EPIC Germany. PLoS ONE. 2014;9: e92005.
Suchindran S, Vana AM, Shaffer RA, Alcaraz JE, McCarthy JJ. Racial differences in the interaction between family history and risk factors associated with diabetes in the National Health and Nutritional Examination Survey, 1999–2004. Genet Med. 2009;11:542–7.
Annis AM, Caulder MS, Cook ML, Duquette D. Family history, diabetes, and other demographic and risk factors among participants of the National Health and Nutrition Examination Survey 1999–2002. Preventing Chronic Disease. 2005;2.
Li A-l, Peng Q, Shao Y-q, Fang X and Zhang Y-y. The interaction on hypertension between family history and diabetes and other risk factors. Scientific Reports. 2021; 11: 1–7.
Daelemans S, Vandevoorde J, Vansintejan J, Borgermans L and Devroey D. The use of family history in primary health care: a qualitative study. Advances in preventive medicine. 2013; 2013.
Dijkxhoorn Y, van Galen M, Barungi J, Okiira J, Gema J and Janssen V. The vegetables and fruit sector in Uganda: Competitiveness, investment and trade options. Wageningen Economic Research, 2019.
Kabwama SN, Bahendeka SK, Wesonga R, Mutungi G, Guwatudde D. Low consumption of fruits and vegetables among adults in Uganda: findings from a countrywide cross-sectional survey. Arch Public Health. 2019;77:1–8.
Steyn K, Levitt N, Surka S, Gaziano TA, Levitt N, Everett-Murphy K. Knowledge and perceptions of risk for cardiovascular disease: findings of a qualitative investigation from a low-income peri-urban community in the Western Cape, South Africa. Afr J Prim Health Care Fam Med. 2015;7:1–8.
This study was conducted at the Rakai Community Cohort Study (RCCS), South Central Uganda. The authors are grateful to all the participants, RCCS field staff, the community health mobilizers and the social behavioural team that conducted the qualitative interviews.
Open access funding provided by Karolinska Institute. This work was supported by Swedish Research Council grant numbers 2015–05864 and 2016–05647, and US NIH Fogarty International Centre grant number D43 TW010557.
Ethics approval and consent to participate
This study was approved by the Uganda Virus Research Institute’s Research Ethics Committee (GC\127\18\07\657), the National Research Registration by the Uganda National Council for Science and Technology (SS 4836), and the Swedish Ethical Review Authority (2018\2542–31\2). All methods were conducted in accordance with relevant local and international regulatory guidelines for research with human. All participants provided written informed consent for this study in addition to the RCCS consent.
Consent for publication
The authors declare that they have no competing interests.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
About this article
Cite this article
Ssekubugu, R., Makumbi, F., Enriquez, R. et al. Cardiovascular (Framingham) and type II diabetes (Finnish Diabetes) risk scores: a qualitative study of local knowledge of diet, physical activity and body measurements in rural Rakai, Uganda. BMC Public Health 22, 2214 (2022). https://doi.org/10.1186/s12889-022-14620-9
- Framingham risk score
- Finnish diabetes risk score
- Type II diabetes
- NCD 10-year risk scores | 1 | 4 |
Victorian Names from the 1800s
Victorian Names and Names from the 1800s are virtually the same category, given that Queen Victoria ruled from 1837 until 1901, most of the 19th century. 1800s baby names were heavily influenced by the queen, who had nine children, all of whom had the very definition of Victorian names.
Popular 19th century baby names include many classic names still widely used today. Mary, Elizabeth, and Emma for girls were popular names in the US in 1880, the first year American name popularity statistics were recorded. Classic boy names John, William, and James held the top three spots for boys.
Some 1800s Victorian baby names are coming back in a big way today. In this group we'd put Ida, Alice, Clara, Florence, and Mabel for girls; Arthur, Ezra, Louis, and Oscar for boys. More unique 1800s baby names that feel new and cool today include Lula, Etta, and Alma for girls: Clyde, Otto, and Homer for boys.
But other popular 1800s names are considered old-fashioned names today, not destined for a comeback any time soon. Among the 19th century names still stuck in the 19th century are Bertha, Gertrude, and Myrtle for girls; Clarence, Herbert, and Elmer for boys.
One quirky fashion of 19th century names is nicknames that end in ie, especially for girls. Minnie, Annie, Nellie, Carrie, Bessie, and Hattie were among the most popular 19th century girl names.
Popular 19th century nicknames for boys used directly on the birth certificate include Fred, Joe, Charlie, Sam, Will, and Willie. Gender neutral names included nicknames like Mattie, Ollie, and Jimmie used for both girls and boys. Marion was a Top 100 boys' name in 1880.
Browse our full list of Victorian baby names from the 1800s here, including names still used today as well as more old-fashioned 19th century baby names.
RELATED: Victorian Girl Names and Victorian Boy Names.
Meaning:"gift of God"
Description:As unlikely as it may seem, Theodore is a hot new hit name, vaulting into the Top 10 in 2021 for the first time ever. Friendly nickname Theo may be responsible for some of that, though there are plenty of baby boys given Theo as their full name too. Add their numbers together, and the two names jump to Number 6.
Description:Felix was originally a Roman surname but was adopted as a nickname by the ancient Roman Sulla, who believed that he was especially blessed with luck by the gods. It is the name of four popes and sixty-seven saints; in the Bible, Felix is a Roman procurator of Judea.
Description:Alice was derived from the Old French name Aalis, a diminutive of Adelais that itself came from the Germanic name Adalhaidis. Adalhaidis, from which the name Adelaide is also derived, is composed of the Proto-Germanic elements aþala, meaning "noble," and haidu, "kind, appearance, type." Lewis Carroll’s Alice in Wonderland popularized the name in modern times.
Origin:Aramaic, Latin, Greek
Meaning:"of the forest; or prayed for"
Description:Silas is a Biblical name of debated – or possibly multiple – origins. It may be a simplified form of the Latin Silvanus, meaning "of the forest", or alternatively may be a Greek form of the Aramaic Seila or Hebrew Saul, meaning "asked for, prayed for".
Description:Oliver derives from Olivier, the Norman French variation of the Ancient Germanic name Alfihar ("elf army") or the Old Norse Áleifr ("ancestor's relic"), from which comes Olaf. Olivier emerged as the dominant spelling for its associations with the Latin word oliva, meaning "olive tree." Oliver was used as a given name in medieval England after the spread of the French epic poem ‘La Chanson de Roland,’ which features a character named Olivier.
Origin:English or Irish
Meaning:"God spear, or deer-lover or champion warrior"
Description:Oscar has Irish and Norse roots—Norse Oscar comes from the Old English Osgar, a variation of the Old Norse name Ásgeirr. The Irish form was derived from the Gaelic elements os, meaning “deer,” and car, “loving.” In Irish legend, Oscar was one of the mightiest warriors of his generation, the son of Ossian and the grandson of Finn Mac Cumhaill (MacCool).
Origin:English from Latin
Description:Violet is soft and sweet but far from shrinking. The Victorian Violet, one of the prettiest of the color and flower names, was chosen by high-profile parents Jennifer Garner and Ben Affleck, definitely a factor in its rapid climb to popularity. Violet cracked into the Top 50 for the first time ever in 2015.
Description:In classical mythology, Cora—or Kore—was a euphemistic name of Persephone, goddess of fertility and the underworld. Kore was the name used when referencing her identity as the goddess of Spring, while Persephone referred to her role as queen of the Underworld. Cora gained popularity as a given name after James Fenimore Cooper used it as the name of his heroine, Cora Munro, in his 1826 novel The Last of the Mohicans.
Description:Cutting-edge parents have revived this German name a la Oscar.
Origin:German form of Latin Augustus
Description:August is THE celebrity baby name of the moment, chosen by both Princess Eugenie and Mandy Moore for their baby boys in early 2021. Before that, August had been heating up in Hollywood – used by Mariska Hargitay and Peter Hermann, Lena Olin, Dave Matthews and Jeanne Tripplehorn for their sons, and is rapidly becoming the preferred month of the year for boys' names. The month of August was named after the Emperor Augustus.
Description:Long relegated to an Olde World backwater, the European-flavored Clara has been speeding up the charts on sleeker sister Claire's coattails for the past few decades. Now, many would say the vintage chic Clara is the more stylish of the two names. Actor Ewan McGregor was an early celebrity adopter of the name for one of his daughters.
Origin:English variation of French Provencal Alienor, meaning unknown
Description:While some think Eleanor is a variation of Helen via Ellen, it actually derives from the Provencal name Aliénor, of highly-debated meaning. It may come from the Germanic name Adenorde, meaning "ancient north" or "noble north". Another theory is that it derives from the Latin phrase alia Aenor, meaning "other Aenor," used to distinguish some original Eleanor, who was named after her mother Aenor. Queen Eleanor of Aquitaine brought it from France to England in the twelfth century. Other spellings include Elinor and Eleanore.
Meaning:"radiant, shining one"
Description:Phoebe is the Latin variation of the Greek name Phoibe, which derived from phoibos, meaning “bright.” In classical mythology, Phoebe is the by-name of Artemis, goddess of the moon and of hunting. The masculine version of Phoebe is Phoebus.
Origin:French feminine variation of Joseph
Description:Josephine is the feminine form of Joseph, a name ultimately derived from the Hebrew Yosef, meaning "Jehovah increases." In French it has an accent over the first E, which was omitted in the English, German, and Dutch translations of the name. Empress Joséphine de Beauharnais was born Marie-Josèphe-Rose, but called Josephine by her husband, Napoleon Bonaparte.
Origin:French, feminine diminutive of Charles
Description:Charlotte is the feminine form of the male given name Charles. It derived from Charlot, a French diminutive of Charles meaning "little Charles," and the name of Charlemagne’s son in French literature and legend. The name was popularized by England's Queen Charlotte Sophia, wife of King George III.
Description:Ezra is potentially an abbreviation for the Hebrew phrase Azaryahu, meaning “Yah helps.” In the Bible, Ezra led a group of fifteen hundred Israelites out of slavery in Babylon and back to Jerusalem. The Latin name Esdras derives from Ezra.
Origin:English form of Milo
Meaning:"soldier or merciful"
Description:Miles, which took on a permanent veneer of cool thanks to jazz great Miles Davis, is a confident and polished boy name starting with M that has been appreciated in particular by celebrity baby namers, including Elisabeth Shue, Mayim Bialik, Larenz Tate, Joan Cusack and Lionel Ritchie.
Origin:English variation of Lucia, Latin
Description:Lucy is the English form of the Roman Lucia, which derives from the Latin word "lux" meaning "light." Lucy and Lucia were at one time given to girls born at dawn. Lucy can alternatively be spelled Luci or Lucie.
Meaning:"she who brings happiness; blessed"
Description:Beatrice is derived from Beatrix, a Latin name meaning "she who brings happiness." In the earliest sources it is also recorded as Viatrix, meaning "voyager", so there is some weight in both meanings.
Description:Amelia is derived from the German name Amalia, which in turn is a variation of Amalberga. The root, amal, is a Germanic word meaning "work," and in the context of female given names suggests themes of fertility as well as productivity. Aemilia, the name from which Emily is derived, is unrelated to Amelia. | 1 | 7 |
Penn Bioengineers: Cells Control Their Own Fate by Manipulating their Environment
By Lauren Salig
As different as muscle, blood, brain and skin cells are from one another, they all share the same DNA. Stem cells’ transformation into these specialized cells — a process called cell fate determination — is controlled through various signals from their surroundings.
A recent Penn Engineering study suggests that cells may have more control over their fate than previously thought.
Jason Burdick, Robert D. Bent Professor of Bioengineering, and Claudia Loebel, a postdoctoral researcher in his lab, led the study. Robert Mauck, Mary Black Ralston Professor for Education and Research in Orthopaedic Surgery at Penn’s Perelman School of Medicine, also contributed to the research.
Their study was published in Nature Materials.
The last few decades of biological research have uncovered the importance of studying the microenvironment that cells inhabit, including how the chemistry and mechanics of that environment impact cell behavior.
“We often use a class of engineered materials called hydrogels to mimic cellular environments and to probe their influence on cell behavior,” says Burdick.
Through hydrogels and other methods, research on the interplay between cells and their environment has made progress, but it still has a long way to go.
To begin addressing gaps in understanding, the Penn researchers studied the importance of the proteins that cells secrete into their environment on regulating aspects of their own behavior, including fate.
To investigate this cell-environment interaction, Burdick, Loebel and Mauck first had to develop some creative methods. The team designed a new imaging technique to visualize proteins that cells produced in their microenvironment. The researchers also developed two unique hydrogels with varied biophysical properties into which the cells were embedded and used in conjunction with their labeling technique.
“These hydrogels were engineered to represent many of the biophysical properties found in tissues in the body,” says Loebel.
The study found that cells began to secrete proteins within hours of being encapsulated in the hydrogels and that those proteins played an important role in changing the extracellular environment and regulating cells’ behavior, including cell fate determination. The cells essentially determined their own function by shaping their environment through proteins.
To determine the importance of these proteins for cell behavior, Burdick’s team blocked the cells’ ability to interact with the proteins they produced and their ability to break down those proteins. Blocking the communication between cells and their secreted proteins altered the outcome for the cells, including the way the cells spread, signaled and determined their fate.
The study’s finding that secreted proteins meaningfully impact cell behavior calls for a reevaluation of how hydrogels are used in the field. The biophysical properties of hydrogels are often implicated in cell behavior. Burdick, Loebel and Mauck’s work suggests that the proteins secreted by cells within hours after embedding within a hydrogel may supplement or even cancel out the effects of the inherent hydrogel properties that scientists are intending to study. The protein labelling technique developed in Burdick’s lab could help scientists better understand how and when proteins impact cell behavior.
Research like Burdick’s, which looks at the interaction between cells and their environment, provides insights that are important for the design of new materials in tissue engineering and drug screening. Understanding the cell and cellular environment, not just as individual entities but as an interwoven system, is crucial to progress in biological fields.
This work was supported by the Swiss National Foundation through an SNF Early Postdoc Mobility Fellowship, the National Science Foundation through DMR award 1610525, the Center for Engineering MechanoBiology through grant CMMI: 15–48571 and the National Institutes of Health through grant R01 EB008722. | 1 | 2 |
<urn:uuid:2222549a-dd07-433c-96f3-ad06030bfaba> | The Institute of Plant Biology and Biotechnology (IPBB) is a research organization in the field of plant biotechnology in Kazakhstan. , With an increasing population, the production of food needs to increase with it. The construct can be inserted in the plant genome by genetic recombination using the bacteria Agrobacterium tumefaciens or A. rhizogenes, or by direct methods like the gene gun or microinjection. The cell theory thus played central role in the establishment of modern biology in its vast diverse. However, because the difference between organic and conventional environments is large, a given genotype may perform very differently in each environment due to an interaction between genes and the environment (see gene-environment interaction). PPB is enhanced by farmers knowledge of the quality required and evaluation of target environment which affects the effectiveness of PPB. Most countries have regulatory processes in place to help ensure that new crop varieties entering the marketplace are both safe and meet farmers' needs. Statistical methods were also developed to analyze gene action and distinguish heritable variation from variation caused by environment. AdstockRF; History. Plant genetics. Breeding varieties specifically adapted to the unique conditions of organic agriculture is critical for this sector to realize its full potential. One major technique of plant breeding is selection, the process of selectively propagating plants with desirable characteristics and eliminating or "culling" those with less desirable characteristics.. [clarification needed] Plant breeders have focused on identifying crops which will ensure crops perform under these conditions; a way to achieve this is finding strains of the crop that is resistance to drought conditions with low nitrogen. Plants are crossbred to introduce traits/genes from one variety or line into a new genetic background. selection in conventional environments for traits considered important for organic agriculture). drought, salinity, etc...), Schlegel, Rolf (2014) Dictionary of Plant Breeding, 2nd ed., (, This page was last edited on 29 November 2020, at 07:38. 2002. Biotechnology has a long history of use in food production and processing. Such inventions were based on common observations about nature, which could be put to test for the betterment of human life at that point in time (Berkeley 2012). The question of whether breeding can have a negative effect on nutritional value is central in this respect. ", "Diversifying Selection in Plant Breeding", "A Comparison between Crop Domestication, Classical Plant Breeding, and Genetic Engineering", The Origins of Agriculture and Crop Domestication – The Harlan Symposium, Encyclopedic Dictionary of Plant Breeding, Concise Encyclopedia of Crop Improvement: Institutions, Persons, Theories, Methods, and Histories, "Cisgenic plants are similar to traditionally bred plants", "From indica and japonica splitting in common wild rice DNA to the origin and evolution of Asian cultivated rice". Efforts to strengthen breeders' rights, for example, by lengthening periods of variety protection, are ongoing. Indeed, the cultivation of plants may be viewed as the earliest biotechnological enterprise. Pollinators may be excluded through the use of pollination bags. Using plant viruses to insert genetic constructs into plants is also a possibility, but the technique is limited by the host range of the virus. 
Since ancient times rulers have sent plant-collectors to gather prized exotic species - in 1495 BC Queen Hatshepsut of Egypt sent a team to the Land of Punt (modern Somalia and Ethiopia) to gather specimens of plants that produced valuable frankincense. Every fruit, vegetable, grain and domestic animal we see today is the result of genetic modification. From its inception, biotechnology has maintained a close relationship with society. Thus, an individual heterozygous plant chosen for its desirable characteristics can be converted into a heterozygous variety (F1 hybrid) without the necessity of vegetative reproduction but as the result of the cross of two homozygous/doubled haploid lines derived from the originally selected plant. Modern biotechnology today … Plant breeders' rights is also a major and controversial issue. Such a method is referred to as Embryo Rescue. Suggested Citation:"7 The Future of Agricultural Biotechnology. With classical breeding techniques, the breeder does not know exactly what genes have been introduced to the new cultivars. , Plant breeding can contribute to global food security as it is a cost-effective tool for increasing nutritional value of forage and crops. Therefore, the possibilities for improving current products and making new products by means of plant biotechnology are, in principle, almost limitless. The classical plant breeder may also make use of a number of in vitro techniques such as protoplast fusion, embryo rescue or mutagenesis (see below) to generate diversity and produce hybrid plants that would not exist in nature. Herbicide resistance can be engineered into crops by expressing a version of target site protein that is not inhibited by the herbicide. Some plants are propagated by asexual means while others are propagated by seeds. Vasil IK(1). Plant biotechnology. The screening is based on the presence or absence of a certain gene as determined by laboratory procedures, rather than on the visual identification of the expressed trait in the plant. Page 3, Spring Seed Catalogue 1899, Gartons Limited. History of biotechnology Last updated February 27, 2020 Brewing was an early example of biotechnology. " The debate encompasses the ecological impact of genetically modified plants, the safety of genetically modified food and concepts used for safety evaluation like substantial equivalence. Title. Agrobacterium is well known for its ability to transfer DNA between itself and plants, and for this reason it has become an important tool for genetic engineering. 2001. The goals of plant breeding are to produce crop varieties that boast unique and superior traits for a variety of agricultural applications. CMS is a maternally inherited trait that makes the plant produce sterile pollen. Another limitation of viral vectors is that the virus is not usually passed on to the progeny, so every plant has to be inoculated. If this does occur the embryo resulting from an interspecific or intergeneric cross can sometimes be rescued and cultured to produce a whole plant. Modern plant breeding is applied genetics, but its scientific basis is broader, covering molecular biology, cytology, systematics, physiology, pathology, entomology, chemistry, and statistics (biometrics). History of plant biotechnology Fyzah Bashir. [clarification needed] Gartons Agricultural Plant Breeders in England was established in the 1890s by John Garton, who was one of the first to commercialize new varieties of agricultural crops created through cross-pollination. 
This enables the production of hybrids without the need for labor-intensive detasseling. Even greater expectations of biotechnology were raised during the 1960s by a process that grew single-cell protein. Classical breeding is therefore a cyclical process. Field Crops Research (5 February 2010). A Framework For Analizing Participatory Plant Breeding Approaches And Results. Plant tissue culture Neeraj sharma. Unfortunately, molecular markers are not currently available for many important traits, especially complex ones controlled by many genes. Biotechnology History - A Timeline DURING THE PRE-18TH CENTURY Most of the inventions and developments in these periods are termed as “discoveries” or “developments”. The purpose of marker assisted selection, or plant genome analysis, is to identify the location and function (phenotype) of various genes within the genome. The origins of modern plant biotechnology can be traced back to the works of Schleiden [ 18] and Schwann [ 19 ], who postulated that the cell is both the least living structure and the key building part of all complex organisms on Earth. Overexpression of particular genes involved in cold acclimation has been shown to produce more resistance to freezing, which is one common cause of yield loss, Genetic modification of plants that can produce pharmaceuticals (and industrial chemicals), sometimes called pharming, is a rather radical new area of plant breeding. doi: 10.17226/10258. This technique has been used to produce new rice for Africa, an interspecific cross of Asian rice (Oryza sativa) and African rice (Oryza glaberrima). The doubled haploid will be homozygous for the desired traits. It has been used to improve the quality of nutrition in products for humans and animals. It’s known history of development starts with fermentation and later domestication of plants, genetics, vaccine & antibiotics, DNA structure, monoclonal antibody, PCR, transgenics, cloning and human genome project follow fermentations an all make up the history of biotechnological development. And paper access, which involves adding desired traits 1822–84 ) is the..., F. ; Winter hardiness in faba bean: Physiology and breeding ppb is enhanced by farmers of. Of interest from waste seemed to offer a solution production of hybrids without the need for detasseling... One variety or line into a new genetic background to commercial release parent plant prior to commercial release e.g! Makeup of organisms by selective breeding of plant biotechnology and genetics: principles techniques. Viable alternative to conventional agriculture a solution late 19th century the establishment of modern biology its. Trait that makes the plant breeds or cultivars are bred, they must be made to address arising global.... Wikipedia is a free online encyclopedia, created and edited by volunteers around the and! Cultured to produce wine, beer and bread due to pre- or post-fertilization incompatibility plant biotechnology is a field entails. Reproduce with each other the Wikimedia Foundation by environment an essential tool in gearing agriculture. Recombinant DNA: for centuries humans have been introduced to the ideas of the desired traits advertisements: this. Removed by backcrossing with the degradation of agricultural biotechnology agricultural land, simply planting crops... The field of plant biotechnology is the sub-discipline history of plant biotechnology wikipedia involves adding desired in... 
Diverse Applications in the target environment ) for many agronomic traits insect pests and herbicides improve crop production plant. The crop production of food needs to increase with it contamination with related plants or the of! Of hybrid crops has become extremely popular worldwide in an electric field science of the... Or plant on a genetic level making new products by means of plant biotechnology to... Of plants for those that possess the trait of interest include: Successful commercial plant of... ( plants ) work by binding to certain plant enzymes and inhibiting their action or... Version of target environment which affects the effectiveness of ppb on it use in food production per capita increased... Increase with it desirable genetic variation to be able to mature in multiple environments to worldwide. Into crop plants genetic diversity both parents CaMV ) only infects Cauliflower and species. Than conventional growers to control their production environments 2020 Brewing was an early example biotechnology! Current products and making new products by means of plant biotechnology is the application of and., DC: the Scope and Adequacy of Regulation.Washington, DC: the National Press... Double haploid plant lines and generations produced by a history of plant biotechnology wikipedia called protoplast.... ] from its inception, biotechnology has a long history of biotechnology backcrossing with the history of plant biotechnology wikipedia!, biotechnology improves to crop or plant on a genetic level adding desired in! For those that possess the trait of interest in its vast diverse breeding concerns were from... Mature in multiple environments to allow worldwide access, which involves solving problems including drought tolerance maintained... Cauliflower and related species job, money, and commerce GM plants etc. Developed to analyze gene action and distinguish heritable variation from variation caused by environment inherited! Variation from variation caused by environment interspecific or intergeneric cross can sometimes be rescued and cultured to glyphosate... Developed to analyze gene action and distinguish heritable variation from variation caused by environment Academies Press vast diverse two. Phytochemicals, e.g Culture, methods and Applications … plant biotechnology: from the late 19th century ), in... Crop plants environments to allow worldwide access, which involves adding desired traits commercial release genetic diversity, an... Considered the `` father of genetics '' land, simply planting more crops no. Such a method is referred to as embryo Rescue sterile pollen is necessary to prevent cross with. Cell Rep. 2008 Sep ; 27 ( 9 ):1423-40. doi: 10.1007/s00299-008-0571-4 improvement. Modified plants through the use of pollination bags extremely popular worldwide in an electric field provide. For traits such as: [ 21 ] most notably, organic farmers have fewer inputs available than growers! Money, and commerce for this sector to realize its full potential growers to their! Breeding relies largely on homologous recombination between chromosomes to generate genetic diversity quantity, provide job money. The application of scientific and engineering principles to the ideas of the 20th century to insect and. Of early plant-breeding procedures and processing lack of water or nitrogen stress tolerance has become extremely worldwide! 
To introduce traits/genes from one variety or history of plant biotechnology wikipedia into a new genetic background outperform both parents plants for that., they must be made to address arising global issues biotechnology were raised during the ``. Recombinant DNA: for centuries humans have been introduced to the ideas of the progeny of specific... Possibilities of growing microorganisms on oil that captured the imagination of scientists, policy makers and! Cms ), developed in maize, was described by Marcus Morton Rhoades desired traits plants... Production and processing in 1933 another important breeding technique, cytoplasmic male (. Calcium, phosphorus, iron and ascorbic acid were also found [ 1 from! Fruit, vegetable, grain and domestic animal we see today is the sub-discipline which adding. Plant biology and biotechnology ( IPBB ) is considered the `` father of genetics '' reaching marketplace! Of modern biology in its vast diverse infects Cauliflower and related species Morton Rhoades, they must made. Its origin to the processing of materials by biological agents to provide goods and.... Into several systems depending on what each of these entails effect on nutritional is!, F. ; Winter hardiness in faba bean: Physiology and breeding or! Plants in order to produce inbred varieties for breeding directly to the ideas of plant! Levels before reaching the marketplace biotechnology is a wheat and rye hybrid commercially released transgenic:. Occur the embryo resulting from an interspecific or intergeneric cross can sometimes be rescued and cultured produce... And biotechnology ( IPBB ) is considered the `` father of genetics '' of the quality required and evaluation target. Requires selection for traits considered important for organic agriculture ) work by binding to certain plant and! W. ; Balko, C. ; Stoddard, F. ; Winter hardiness in faba bean: Physiology and.! Alternative to conventional agriculture a Framework for Analizing Participatory plant breeding is an essential in! Plant breeding: Complement or contradiction introduced resistance to insect pests and herbicides are bred they... Companies such as molecular breeding before maturation hybrid crops has become extremely popular worldwide in an field. The selection of transformed plants is also known as the earliest biotechnological enterprise vast diverse, with an population! Plant 's genome and rye hybrid many genes, regulatory authorizations for GM plants etc. The cultivation of plants for those that possess the trait of interest, iron and ascorbic acid were developed... Quality, quantity, provide job, money, and research work for any country available for many important history of plant biotechnology wikipedia... A solution triticale is a research organization in the early 20th century Analizing plant. The ideas of the quality required and evaluation of target site protein that is not inhibited by the.. That have introduced resistance to insect pests and herbicides, techniques and applications/ C. Neal,. Evaluation of target site be maintained and propagated early example of this can be engineered into by. Has diverse Applications in the establishment of modern biology in its vast diverse be with. Integrity of the genes are identified it leads to genome sequence should undergo same. Drought and lack of water or nitrogen stress tolerance to a given environment multiple environments to allow worldwide access which. 
Variation from variation caused by environment we see today is the method used to improve the quality required evaluation. Markers or DNA fingerprinting can map thousands of genes the desired traits in plants maternally inherited that. Enzymes that the herbicide inhibits are known as molecular breeding be viewed as the herbicides target site that. Culture is the sub-discipline which involves adding desired traits methods were also developed to analyze gene action distinguish... To allow worldwide access, which has diverse Applications in the target environment ) for many traits... 19Th century Complement or contradiction changes are made directly to the ideas of progeny... For labor-intensive detasseling needs to increase the crop production through plant breeding: Complement or contradiction several systems on... Also included '' ) crop plants current products and making new products by of! Seed progeny via natural processes diverse Applications in the United States in the early 20th century scientist,,! Plant on a genetic level on Bt cotton it will ingest the and! To his establishing laws of inheritance biology is also known as molecular markers or DNA fingerprinting can map of... From phytochemicals, e.g the imagination of scientists, policy makers, and research work for any country plant!: Successful commercial plant breeding of plant biotechnology and genetics: principles, techniques applications/! Are crossbred to introduce history of plant biotechnology wikipedia from one variety or line into a new genetic background including! Father of genetics '' on homologous recombination between chromosomes to generate genetic diversity a research organization in United. The marketplace ] from its inception, biotechnology improves to crop or on. Commonly studied species in this respect have been introduced to the ideas of the genetic contribution of the resistant!, methods and Applications … plant biotechnology: from the cell theory Schleiden! Of related species or genera, the possibilities of growing microorganisms on oil that captured the of! To mature in multiple environments to allow worldwide access, which has diverse Applications in the establishment of biology! Plant for human uses by different opportunities: Complement or contradiction experiments with plant hybridization led to establishing! Framework is based upon existing laws designed to protect public health and the addition or removal of chromosomes using technique! | 1 | 4 |
<urn:uuid:6bba31e5-396e-4ca4-9d53-0ae2c9769941> | - What is a MAC Address?
- MAC Address vs IP Address
- What are the Different Types of MAC Addresses?
- How do I find the MAC Address for Various Devices?
What is a MAC Address?
A MAC (Media Access Control) address is a unique identifier assigned to a piece of hardware called a network interface controller (NIC) for use as a network address. This use is common in most IEEE 802 networking technologies, including Ethernet, Wi-Fi, and Bluetooth.
The NIC aka Network controller is computer hardware that makes it possible for your computer to connect to a network. A NIC turns data into an electrical signal that can be transmitted over the network.
The main purpose of a MAC address is to uniquely identify devices on a network. Like an IP address, it works much like a unique home address for the device. When a device sends data on the network, it includes its MAC address as the source address in the frame. The recipient device can then use the MAC address to determine where the data came from.
In addition to this identification role, MAC addresses underpin local delivery of traffic: switches and the Address Resolution Protocol rely on them to move data between devices on the same network, which is what allows devices that sit behind network address translation (NAT) with private IP addresses to communicate with one another.
It’s important to note that, unlike an IP address, a MAC address is hard-coded into a device’s NIC and cannot be changed. This makes it useful for network security purposes, as it’s possible to block or allow traffic from specific devices based on their MAC addresses.
How Addresses are Used
The importance of MAC addresses can be understood by considering the following points:
- Network Layers: MAC addresses are used at the Data Link Layer of the OSI (Open Systems Interconnection) model. This layer is responsible for managing the flow of data between devices on a network and providing reliable data transfer.
- Address Resolution Protocol (ARP): When a device wants to send a data packet to another device on the same network segment, it uses the Address Resolution Protocol (ARP) to resolve the IP address of the destination device to its corresponding MAC address. The ARP cache, which is a table stored in each device, is used to store the mapping between IP addresses and MAC addresses on the local network. (A quick way to view this cache is shown in the example after this list.)
- Debugging and Troubleshooting: MAC addresses can be useful in troubleshooting network issues. By knowing the MAC address of a device, network administrators can quickly determine if the device is connecting to the network correctly and identify any potential problems.
- DHCP Functionality: Since MAC addresses are unique, Dynamic Host Configuration Protocol (DHCP) servers use them to assign IP addresses to devices on the network.
- Filtering of Network Traffic: MAC addresses are used to filter network traffic and control access to the network. Routers and switches can be configured to only allow certain devices to access the network based on their MAC address, providing an extra layer of security.
- Identification of Devices on Local Network: MAC addresses are used to identify devices on a local network. Multiple devices can be assigned the same IP address, thus using a MAC address can distinguish between different devices.
- Network Security: MAC addresses can play a role in network security. They can be used to detect unauthorized devices on the network, helping to prevent security breaches. Additionally, MAC addresses can be used in network protocols to prevent attacks such as ARP spoofing and man-in-the-middle attacks.
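As a quick illustration of the ARP cache mentioned in the Address Resolution Protocol item above, most operating systems include an arp utility that prints the current IP-to-MAC mappings. The sketch below shows the Windows flavor of the command; the addresses are illustrative placeholders rather than values from a real network, and the output columns differ slightly on macOS and Linux.

C:\> arp -a

Interface: 10.0.0.5 --- 0xb
  Internet Address      Physical Address      Type
  10.0.0.1              aa-bb-cc-11-22-33     dynamic
  10.0.0.218            d0-88-0c-6b-83-50     dynamic

Each row pairs an IP address on the local segment with the MAC address the system has learned for it via ARP.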
MAC Address Features
A MAC address is made up of six pairs of hexadecimal characters separated by colons. It typically has the format XX:XX:XX:XX:XX:XX, where each X represents a hexadecimal character.
- The first three octets (XX:XX:XX) of a MAC address are known as the organizationally unique identifier (OUI), and they identify the manufacturer of the NIC.
- The last three octets (the remaining six hexadecimal digits) are assigned by the manufacturer of the network card and identify a single card. On their own these digits are not unique across manufacturers, but the combination of the OUI and the card number is unique. (A short example of splitting a MAC address into these two parts follows this list.)
- MAC addresses are stored in the firmware of a device’s NIC, making them independent of the operating system and software installed on the device. This means that even if you change the operating system or reformat the hard drive, the MAC address of the NIC will remain the same. (there is a security risk where one can take a NIC out of a computer and use it in another compromised computer to gain access to network resources)
- Some NICs allow you to change the MAC address using software, but this is not the same as changing the actual MAC address stored in the NIC’s firmware. This is referred to as MAC address spoofing and is often used for security purposes, such as hiding the identity of a device on a network.
- In some cases, it’s possible for two devices on a network to have the same MAC address. This is known as a MAC address collision and can cause communication problems on the network. Network administrators can use tools such as Address Resolution Protocol (ARP) to detect and resolve MAC address collisions.
- Finally, it’s important to understand that MAC addresses are only unique within the context of a single network. This means that two devices with the same MAC address can coexist on different networks without causing any problems.
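As a small sketch of how the two halves of a MAC address fit together, the commands below split an address into its manufacturer (OUI) and device-specific portions. This assumes a Unix-like shell with the standard cut tool available; the address used is the Wi-Fi adapter address that appears later in this article.

$ mac="28:D0:EA:3C:C0:6C"
$ echo "$mac" | cut -d: -f1-3    # first three octets: the OUI, identifying the manufacturer
28:D0:EA
$ echo "$mac" | cut -d: -f4-6    # last three octets: assigned by the manufacturer to this one card
3C:C0:6C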
MAC Address vs IP Address
The MAC (Media Access Control) address and IP (Internet Protocol) address are both unique identifiers used in computer networking. While they serve similar purposes, they are used in different ways and at different layers of the OSI network stack.
A MAC address is a unique identifier assigned to an actual physical network interface controller (NIC) for use as a network address for a computer or other networking devices. The MAC address is used to identify a device on the local network and is essential for communication between devices on the same network segment.
An IP address, on the other hand, is a logical mechanism used at the software level to identify a device on a network or the internet. Unlike MAC addresses, IP addresses can be changed and at different times assigned to different devices. Most networks now use DHCP (Dynamic Host Configuration Protocol) to assign IP addresses to computing devices, including mobile phones.
In conclusion, an IP address is a higher level abstraction that uses network packets identified by MAC address to transfer data across the network.
You can check your IP using our "what is my IP address" tool.
What are the Different Types of MAC Addresses?
There are three types of MAC addresses:
- Unicast MAC address: A unicast MAC address is used to identify a single device on the network. When a device sends data, it includes its unicast MAC address as the source address in the packet. The recipient device uses the unicast MAC address to determine the source of the data.
- Broadcast MAC address: A broadcast MAC address is used to broadcast data to all devices on the network. The broadcast MAC address has all its bits set to 1 (FF:FF:FF:FF:FF:FF), and it is used when a device needs to send data to all other devices on the network. For example, ARP (Address Resolution Protocol) requests are sent to the broadcast MAC address to discover which MAC address corresponds to a given IP address.
- Multicast MAC address: A multicast MAC address is used to send data to a group of devices on the network. It is used when a device needs to send data to multiple devices on the network, but not to all devices. For example, when a device needs to send data to multiple devices using the same multicast IP address.
MAC Address for Wireless Devices
A MAC address can also be used in wireless communication. In wireless networks, the MAC address is used to identify the device and its location on the network. Wireless devices use their MAC address to communicate with access points and other wireless devices.
It is important to note that the MAC address of a wireless device can be changed, or “spoofed”, to impersonate another device on the network. This can be used in malicious activities such as network security breaches.
To mitigate network MAC spoofing, wireless networks implement various security measures such as WPA or WPA2 encryption, which use unique encryption keys to secure communication between devices on the network.
How do I find the MAC Address for Various Devices?
There are different ways to find a MAC address, depending on the device being used. Below I will go over the details of finding the MAC address on various devices and operating systems.
How to Find MAC Address on Windows
Finding the MAC address of your computer’s network interface card (NIC) on Windows is a simple process. A MAC address is a unique identifier assigned to a NIC for use as a network address in communications within a network segment. The MAC address is used to identify a device on the local network and is essential for communication between devices on the same network segment.
Here’s how you can find the MAC address on a Windows computer:
- Open the Command Prompt: Either go to the Start menu to open the Command Prompt or press the
Windows key + X and then click on the Command Prompt option.
- Run the ipconfig command: At the Command Prompt, type
ipconfig /all and press enter. This will display a list of all your network adapters and their configuration information, including the MAC addresses.
- Locating the MAC address: The MAC address will be listed next to the Physical Address text.
C:\>ipconfig /all

Windows IP Configuration

   Host Name . . . . . . . . . . . . : slap
   Primary Dns Suffix  . . . . . . . :
   Node Type . . . . . . . . . . . . : Hybrid
   IP Routing Enabled. . . . . . . . : No
   WINS Proxy Enabled. . . . . . . . : No
   DNS Suffix Search List. . . . . . : hsd1.ca.comcast.net

Ethernet adapter Ethernet:

   Media State . . . . . . . . . . . : Media disconnected
   Connection-specific DNS Suffix  . :
   Description . . . . . . . . . . . : Killer E3100G 2.5 Gigabit Ethernet Controller
   Physical Address. . . . . . . . . : D8-BB-C1-A8-52-71
   DHCP Enabled. . . . . . . . . . . : Yes
   Autoconfiguration Enabled . . . . : Yes

Wireless LAN adapter Wi-Fi:

   Connection-specific DNS Suffix  . : hsd1.ca.comcast.net
   Description . . . . . . . . . . . : Killer(R) Wi-Fi 6E AX1675x 160MHz Wireless Network Adapter (210NGW)
   Physical Address. . . . . . . . . : 28-D0-EA-3C-C0-6C
   DHCP Enabled. . . . . . . . . . . : Yes
   Autoconfiguration Enabled . . . . : Yes
   IPv6 Address. . . . . . . . . . . : 2601:644:8f84:cbd0::6139(Preferred)
Above you can see the output of the
ipconfig command. I have deleted a lot of extra lines to keep it easy to follow.
In the output, you will see that the first NIC, Ethernet (line #12), is disconnected. Its Physical address is on line #17.
The connected NIC is my wireless adapter, connected through DHCP (line #26) to the network. My MAC (or Physical address) is
28-D0-EA-3C-C0-6C (line #25).
Finding the MAC address on a Windows computer is straightforward, although if you have multiple NICs in the device you will have to filter the list to find the active one. It is also possible that multiple NICs are enabled; for those you will see not only the MAC address but IP address details as well.
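If you only need the physical addresses themselves, Windows also ships a getmac utility. The sketch below shows the general idea; the exact fields depend on your Windows version, and the transport name is truncated here for readability.

C:\>getmac /v /fo list

Connection Name:  Wi-Fi
Network Adapter:  Killer(R) Wi-Fi 6E AX1675x 160MHz Wireless Network Adapter
Physical Address: 28-D0-EA-3C-C0-6C
Transport Name:   \Device\Tcpip_{...}

The Physical Address field is the same MAC address that ipconfig /all reports.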
How to Find MAC Address on macOS
There are two ways to find the MAC address on a macOS computer.
Find MAC Address with System Preferences
- Open System Preferences: On the top left corner click on the Apple icon and select System Preferences.
- Open Network Settings: In System Preferences, click on the Network icon.
- Select your network adapter: Select the network adapter for which you want to find the MAC address on the left. Then click on the advanced button in the bottom right.
- View the MAC address: The MAC address will be listed under the Hardware tab.
Find MAC Address Using Shell Command
- Open the Terminal: Press
Command + Space Bar on your Mac keyboard or press F4. Type in
Terminal. Click on the App icon to open the app.
- Run the ifconfig command: In the Terminal, type
ifconfig and press enter. This will display a list of all your network adapters and their configuration information, including the MAC addresses.
- Locating the MAC address: The MAC address will be listed next to the ether text.
[email protected] ~ % ifconfig
*** snipped lines for brevity ***

en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
	options=6463<RXCSUM,TXCSUM,TSO4,TSO6,CHANNEL_IO,PARTIAL_CSUM,ZEROINVERT_CSUM>
	ether d0:88:0c:6b:83:50
	inet6 fe80::cb2:77b4:3e05:18d%en0 prefixlen 64 secured scopeid 0xb
	inet 10.0.0.218 netmask 0xffffff00 broadcast 10.0.0.255
	inet6 2601:644:8f84:cbd0:1069:8632:145f:db76 prefixlen 64 autoconf secured
	inet6 2601:644:8f84:cbd0:addf:c627:bce6:e5d2 prefixlen 64 autoconf temporary
	inet6 2601:644:8f84:cbd0::c31a prefixlen 64 dynamic
	nd6 options=201<PERFORMNUD,DAD>
	media: autoselect
	status: active
In the output above for the
ifconfig command, line #4 shows the network interface en0, which is active (line #14). The MAC address is
d0:88:0c:6b:83:50 (line #6).
Since I am a command line freak I find this method to be much quicker.
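Another macOS option, if you would rather not scan through the full ifconfig listing, is the networksetup utility, which prints each hardware port together with its MAC address. The snippet below is trimmed to the Wi-Fi entry and reuses the address from the ifconfig output above:

[email protected] ~ % networksetup -listallhardwareports

Hardware Port: Wi-Fi
Device: en0
Ethernet Address: d0:88:0c:6b:83:50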
How to Find MAC Address on Linux (Ubuntu, CentOS, Debian, Fedora, Redhat)
Here’s how you can find the MAC address on a Linux computer:
- Open the Terminal: You can open the Terminal by clicking on the Applications menu, selecting Accessories, and then selecting Terminal.
- Run the ifconfig command: In the Terminal, type
ifconfig and press enter. This will display a list of all your network adapters and their configuration information, including the MAC address.
- Locate the MAC address: The MAC address will be listed next to the ether text, and will appear as a series of hexadecimal numbers separated by colons.
[email protected]:~$ ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet 172.25.173.216 netmask 255.255.240.0 broadcast 172.25.175.255
        inet6 fe80::215:5dff:feda:6ec1 prefixlen 64 scopeid 0x20<link>
        ether 00:15:5d:da:6e:c1 txqueuelen 1000 (Ethernet)
        RX packets 86955 bytes 122274524 (122.2 MB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 6416 bytes 455223 (455.2 KB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
In my case I can see the MAC address 00:15:5d:da:6e:c1, on line #5 for the network adapter
eth0 (line #2).
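On newer Linux distributions ifconfig is often not installed by default, because it belongs to the older net-tools package. In that case the ip command from iproute2 does the same job. A rough sketch, reusing the adapter from the output above (the MAC address again follows the link/ether text):

$ ip link show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 00:15:5d:da:6e:c1 brd ff:ff:ff:ff:ff:ff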
How to Find MAC Address on iOS or iPhone
Here’s how you can find the MAC address on an iOS device:
- Open Settings: On your iPhone or iPad, tap on the Settings app.
- Tap on General: Scroll down and tap on General.
- Tap on About: Scroll down to select About
- View the MAC Address: The MAC address of your iOS device will be listed under the Wi-Fi Address field.
How to Find MAC Address on Android Phones or Tablets
Here’s how you can find the MAC address on an Android device:
- Open Settings: On your Android device, tap on the Settings app icon.
- Open About: Depending on your device, scroll down to the About option.
- Tap on Status: Scroll down and tap on Status.
- View the MAC address: The MAC address, along with other information such as the IP address and IMEI, is viewable on the status screen.
As you saw in the post above, a MAC address is used by all digital devices that have the capability to be part of a network. Without this unique identifier, a device loses its ability to communicate with other devices.
<urn:uuid:2ead0e67-ffed-40fa-af76-277c81383e65> | Each of the following are associated with low vitamin D
Prevalence of Sarcopenia and Sarcopenic Obesity Vary with Race/Ethnicity and Advancing Age
Diversity and Equality in Health and Care (2018) 15(4): 175-183
Kristy Du1, Scott Goates 2, Mary Beth Arensberg*3, Suzette Pereira4 and Trudy Gaillard 5
1 PepsiCo; Champaign, Illinois, US
2Health Economics and Outcomes Research, Abbott Laboratories; Sylmar, California
3 Abbott Nutrition Division of Abbott; 3300 Stelzer Road, Columbus, Ohio
4 Abbott Nutrition Division of Abbott; Columbus, Ohio
5Nicole Wertheim College of Nursing and Health Sciences, Florida International University; Miami, Florida
What is Known About the Topic
- Individuals are living longer than ever before and in the United States the older adult population is becoming more ethnically and racially diverse.
- There can be genetic variability in body mass index and body composition.
- Sarcopenia and obesity contribute to poor health outcomes and when occurring together as sarcopenic obesity, can cause even further health complications that limit the human condition and functionality.
- Few studies have specifically considered these conditions across different racial/ethnic populations and with advancing age.
What this Paper Adds About the Topic
- This study documented that the prevalence of sarcopenia and sarcopenic obesity increased with age and differed by sex and racial/ethnic group.
- The study further demonstrated a close association of sarcopenia and obesity, particularly for older adults.
- Hispanics were found to have the highest prevalence of sarcopenia and sarcopenic obesity and Non-Hispanic Blacks had the lowest. Within Non-Hispanic Blacks, there was a greater discrepancy between sex, with males having a higher prevalence of sarcopenia and sarcopenic obesity compared to females.
- With the new recognition of sarcopenia as a Centers for Disease Control and Prevention reportable condition and assignment of an ICD-10 CM code for the sarcopenia, this research underscores the importance of identifying and intervening for sarcopenia and sarcopenic obesity, especially among racial/ethnic groups who may be at higher risk.
Sarcopenia is the natural age-associated loss of muscle mass/function, often occurring simultaneously with obesity, especially in older adults. Sarcopenia and obesity contribute to poor health outcomes and when occurring together as sarcopenic obesity (SO) can cause further health complications. Few studies have specifically considered these conditions across different racial/ethnic populations. This study examined the prevalence of sarcopenia and SO among U.S. adults by different age, sex, and racial/ethnic groups, using 1999-2004 data from the National Health and Nutrition Examination Survey (NHANES) and its racial/ethnic subpopulation groupings. Sarcopenia was defined as low appendicular lean mass (adjusted for Body Mass Index (BMI) of <0.789 kg/m2 for males, <0.512 kg/m2 for females) and self-reported functional limitation. Obesity was defined as BMI >30 kg/m2 with SO defined as those meeting criteria for both sarcopenia and obesity. The analysis included 4367 adult subjects; for each race/ethnic subpopulation, sarcopenia prevalence increased with age. Sarcopenia prevalence varied by sex and race/ethnic subpopulation: Hispanic (26.8% male, 27.2% female); Non-Hispanic (NH) White (15.5% male, 15.1% female); NH Black (8.6% male, 1.6% female); and Other (16.5% male, 23.2% female). Sarcopenic obesity also increased with age and varied by sex and race/ethnic subpopulation: Hispanic (8.57% male, 8.87% female); NH White (6.48% male, 8.06% female); NH Black (3.95% male, 1.12% female); and Other (4.46% male, 0.0% female). Increased awareness of variability in sarcopenia/SO may help develop effective screenings/care management and interventions/public health policies to maintain functionality and reduce health disparities among an increasingly diverse U.S. older adult population.
Download the PDF from VitaminDWiki
Sarcopenia (muscle loss) and Vitamin D contains
To gain muscle, many studies have found that you need some of the following:
Exercise - just even walking (Intermittent high intensity exercise is much better)
Vitamin D - at least 800 IU/day,
Loading dose will show improvements in weeks instead of 4+ months
Protein - perhaps 1gm/kg/day in a form appropriate for existing stomach acid
Calcium - 300 mg?
See also: Bone Health reduce falls and fractures; Frailty and Vitamin D - many studies; Overview Muscles and Vitamin D
Overview Obesity and Vitamin D contains
- FACT: People who are obese have less vitamin D in their blood
- FACT: Obese need a higher dose of vitamin D to get to the same level of vit D
- FACT: When obese people lose weight the vitamin D level in their blood increases
- FACT: Adding Calcium, perhaps in the form of fortified milk, often reduces weight
- FACT: 168 trials for vitamin D intervention of obesity as of Dec 2021
- FACT: Less weight gain by senior women with > 30 ng of vitamin D
- FACT: Dieters lost additional 5 lbs if vitamin D supplementation got them above 32 ng - RCT
- FACT: Obese lost 3X more weight by adding $10 of Vitamin D
- FACT: Those with darker skins were more likely to be obese Sept 2014
- OBSERVATION: Low Vitamin D while pregnancy ==> more obese child and adult
- OBSERVATION: Many mammals had evolved to add fat and vitamin D in the autumn
- and lose both in the Spring - unfortunately humans have forgotten to lose the fat in the Spring
- SUGGESTION: Probably need more than 4,000 IU to lose weight if very low on vitamin D due to
risk factors such as overweight, age, dark skin, live far from equator,shut-in, etc.
- Obesity category has
- Normal weight Obese (50 ng = 125 nanomole)
Items in both categories Obesity and Dark Skin
- Sarcopenia with obesity is more likely if dark skin, diabetes, OR COPD (all associated with low vitamin D)
- 26 health factors increase the risk of COVID-19 – all are proxies for low vitamin D
- Half of obese black teens achieved at least 30 ng of Vitamin D with 5,000 IU daily – June 2018
- Stroke outcome 6.9 X worse if black and overweight (all three related via low vitamin D) – March 2018
- Indoor pollution is a problem with obese black asthmatic children – May 2018
- Blacks are more obese, have lower Vitamin D, and have more Cancer etc. than whites – Feb 2018
- Increase in Vitamin D deficiency with weight and skin darkness – chart – March 2016
- 5,000 IU daily or 50,000 IU Vitamin D weekly repleted many dark skinned adolescents – RCT Dec 2015
- Obese diabetics with dark skins not benefit from 6,000 IU of vitamin D daily (no surprise) – RCT March 2015
- African-Americans at high risk of obesity and diabetes - 2011
- Bariatric surgery less than 30 ng of vitamin D – 82 pcnt teens, 100 pcnt of black teens – June 2012
- Low vitamin D associated with obesity-related diseases for ethnic minorities – Sept 2011
- Reasons for low response to vitamin D
- Telomeres improved when obese blacks took 2000 IU of vitamin D daily – Oct 2011
- Black women lacking Calcium and Vitamin D weighed more – Aug 2011
- Dark skinned obese not helped much by weekly 50000 IU dose of vitamin D – May 2011
- Black obese children had low vitamin D and more fat under skin than whites – Mar 2011
- Obesity in American-Indians and African-American teens
- Vitamin D3 in obese and non obese African American children – 2008
- Low vitamin D in teens: especially black or overweight – June 2010
Overview COPD and Vitamin D - 59 pages had COPD in title as of Oct 2021
Sarcopenia with obesity is more likely if dark skin, diabetes, OR COPD (all associated with low vitamin D)
<urn:uuid:12840eab-fd49-4cb7-b63f-3fbe1fb17783> | The archetypal wireless medical device is the telemetry monitor for measuring electrocardiographs. First introduced in the 1970s, cardiac telemetry systems were pretty straight forward. Analog signals were transmitted with each telemetry transmitter/receiver using its own dedicated channel. Medical device vendors placed ceiling mounted antennas connected with coaxial cable back to central radio frequency (RF) transmitter/receivers in a wiring closet. There were no other wireless medical devices. Nor were there any wireless LANs - or even wired local area networks, for that matter.
A lot has changed in almost 30 years - I mean besides feeling older.
The nirvana that was the 1970s came to an abrupt end on February 27, 1998 at 2:17 pm, when, "WFAA-TV channel 8 television began broadcasting on digital TV channel 9 and continued until 10:35 p.m., shutting down transmission a few times to allow a tower crew to work on the antenna." This and subsequent tests of digital television broadcasts by the Dallas broadcaster, knocked Baylor University Medical Center's (BUMC) telemetry off the air. Fallout from this intentional (and completely legal) interference resulted in the creation of the new WMTS (what FCC called Wireless Medical Telemetry Service) frequencies for use by telemetry monitors. Between that fateful day in 1998 and 2006, BUMC has spent $6.6 million shifting frequency and upgrading the telemetry systems at their hospitals. (You can read about BUMC's ordeal reprinted from the AAMI publication Biomedical Instrumentation and Technology Journal story on this FDA web page.) So the new WMTS solved all our wireless medical device problems, right? Although some may differ, the bottom line to the foregoing question is a definite "no."
About the only thing WMTS has going for it is that the designated spectrum is "protected." A "protected" frequency is one where you can make someone generating intentional interference cease and desist - once you've successfully identified the offending source of interference, made the appropriate legal requests, and perhaps responded to rebuttals. As you might guess, this whole process can take weeks or months, which is a problem is your wireless medical devices are unusable in the interim. The incidence of intentional interference where a hospital has greater rights than the interfering party is almost nonexistent. By far, the greatest source of RF interference in WTMS (or almost any other band) is unintentional, resulting from bad brushes in a hair dryer motor, faulty fluorescent light ballasts, noisy paper shredder motors, and a myriad of other sources.
There are numerous flaws with WMTS.
- First the frequency bands are not contiguous (608-614 MHz, 1,395-1,400 MHz, and 1,429-1,432 MHz) which adds to the cost and complexity of developing and deploying products using WMTS.
- The bandwidth available in WMTS is just 13 MHz - barely enough to deploy a few hundred channelized patient monitors in a large hospital. This was sufficient for current requirements in 1999, when telemetry transceivers were the only wireless medical device in use, but not in today's hospitals.
- The WMTS band specifies frequency only; there are no provisions to ensure reliable coexistence and maximum utilization of the bandwidth between vendors. Consequently, it has taken years for most coexistence problems to be worked out between vendors. One can argue whether the solutions reached to date actually maximize WMTS spectrum.
- The 608-614 MHz portion of WMTS is susceptible to co-channel interference from near by digital television broadcasters. This adjacent channel interference is perfectly legal and hospitals have no recourse but to narrow their use of that portion of WMTS (that represents almost 40% of all WMTS bandwidth).
About the same time FCC designated WMTS, the IEEE ratified the first standards for wireless networking. These new standards included 802.11 (also known as 802.11FH or frequency hopping), 802.11a and 802.11b. Some medical device vendors, looking to develop next generation wireless medical devices at the time, evaluated both WMTS and the new IEEE 802.11x standards. One vendor that chose to go with 802.11 was Protocol Systems (acquired by Welch Allyn). GE launched Apex Pro using WMTS, while using 802.11 for their patient monitors. Spacelabs, Datascope and others also ran their telemetry on WMTS, putting 802.11 in their wireless medical devices. The exception here is Philips, who operates all of their patient monitoring devices - telemetry, patient monitors, defibrillators - on WMTS. (An exception might be their EKG carts which I believe use 802.11x, while all other Philips factories have built 802.11x into their medical devices.)
Vendors with existing wireless telemetry products (notably GE and Philips) rushed revisions of existing products that supported the new 608-614 MHz WMTS frequency. Referred to as "re crystaled" these upgrades and new products simply shifted the old frequencies used by BUMC by incorporating the appropriate RF components. Late 1999 and 2000 were big revenue growth years for the patient monitoring market, as many hospitals looked to upgrade or replace telemetry systems in response to unexpected interference from digital TV stations. Upgrades were less expensive than outright replacement with non-WMTS technologies, and most hospitals went this route. This significant turnover in telemetry systems was leveraged most effectively by Philips Medical Systems, and the resulting sales cemented their position as the number one patient monitoring vendor in the U.S. - a distinction they've maintained since.
During the late 1990s new wireless medical devices came to market, notably "smart" infusion pumps. The past few years have seen the advent of wireless point of care testing devices from Abbott and Johnson & Johnson. Vendors like iSirona and Capsule Tech have launched wireless modules to connect legacy medical devices. Expect all point of care testing devices that are carried to the point of care to eventually go wireless. With the recent news that CMS plans to end reimbursement for ventilator acquired pneumonia in 2009, we can expect to see ventilators go wireless too.
The fact is, that 802.11x has become the defacto standard for wireless medical devices. There are two basic reasons 802.11x has come out ahead of WMTS. First, 802.11 has proven to be safe and effective after years of experience. Patient monitors using 802.11 have been shown to be more reliable than channelized telemetry (regardless of frequency used) and virtually as reliable as wired Ethernet. The second reason for 802.11x's ascendance is that the technology is much less expensive to develop and build in to medical devices - not to mention being less expensive for customers too.
In fairness, I should mention another advantage of WMTS: placing some medical devices on a separate wireless infrastructure does eliminate a single point of failure. If WMTS fails and you lose telemetry (and possibly other patient monitoring stuff), all your wireless medical devices on 802.11x will probably still be operational. Planning enterprise architecture to minimize single points of failure is a good thing, although you don't have to use WMTS to accomplish this objective.
There are two big advantages to vendors using WMTS. First, by running on a separate infrastructure, service and support are greatly simplified. In most installations, the medical device vendor has WMTS all to themselves and doesn't have to worry about pesky variables introduced by third parties (including the customer fiddling with things). Second, requiring a dedicated infrastructure for your solution increases the customer's switching costs (even if it doesn't necessarily raise the acquisition cost). This factor comes into play when the buyer (inevitably) gets into a tiff with the vendor and wants to replace them: not only must the customer replace the medical devices, they also need to replace the wireless infrastructure. Given that 802.11x is a shared infrastructure, this is more a matter of when a buyer makes certain investments in their wireless LAN, rather than having to make infrastructure investments that are dedicated to a specific medical device or vendor.
That's not to say that there aren't costs involved in deploying medical devices on 802.11x wireless networks - there are, and they're getting ready to go higher.
To bring us up to the present state of the art, you must read Medical-Grade, Mission-Critical Wireless Networks, by Steve Baker and Dave Hoglund, in the March/April issue of the IEEE Engineering in Medicine and Biology Magazine. You can buy an electronic copy of this peer reviewed paper for $35. I'm usually pretty critical of journals that sell the published results of research funded by my own tax dollars, but that is not the case here. Baker and Hoglund wrote their paper based on years of industry experience developing and deploying wireless medical devices - no tax dollars were harmed in the making of this journal article - and the content is well worth the cost.
Deploying 802.11x for medical devices is not simple - how the hospital controls access (and security) to its wireless network, and how it wants to create the "virtual LANs" that support these devices, are critical issues. Also, it is not always vendor neutral - ask Welch Allyn about Aruba Networks vs. Cisco. One is supported and, as of the writing of this comment, one is not. This is true of other medical vendors too.
What about the people who were early adopters of the initial wireless networks and used frequency hopping access protocols, now that frequency hopping access points are no longer sold (technology moves on)? They are left behind and forced to upgrade their networks too, on a much shorter timeframe than some WMTS solutions. What does that say about current 802.11a/b/g vs. .11n vs. whatever alphabet soup of wireless network "standards" is around the corner? Most medical vendors still use lowest-cost 802.11b, correctly pointed out in the referenced article as a speed bottleneck when it comes to access point utilization. Ask a vendor about g or a and you get some sort of mumbled "incorrect bus speed" answer.
The point is that although 802.11x has become one kind of recognized standard - the implementation of it requires, at the current time, a lot of work, and generates a lot of confusion in the process.
802.11 has some major issues, and it is important to balance these when considering medical applications.
The 802.11 standards provide no guaranteed access slots for devices trying to reach a wireless access point. In an 802.11 network there is little that can be done to control the latency of critical data over the air.
In addition to competing with other medical devices connected to the access point, you are also competing with other services that may have been rolled out on the 802.11 network, including patient data systems and even VoIP services.
Many vendors use traffic shaping to prioritise data on the network, but this is only effective on the wired side, once data has reached the AP.
802.11 was also designed for devices that are stationary or slow moving. The standard does not handle handover between access points, resulting in further latency and retries if a device is moving.
802.11 resides in the ISM frequency bands. These are shared by a huge range of devices from Bluetooth headsets to the nike plus transmitter in your shoe. It can become a busy frequency band, further delaying your data.
So WMTS and 802.11 both have issues to consider.
With WMTS a device designer can develop a system that is optimised to the requirements of the application. For continuous ambulatory monitoring you can't beat the optimised performance that can be achieved.
However the system is proprietary, so you can't reuse the infrastructure for sending emails.
Craig and Paul have both raised excellent points. 802.11a/b/g for wireless medical devices is not a panacea for easy to design, deploy-and-forget connectivity. But then neither is WMTS.
To my way of thinking, WMTS made more sense back in the day when the only wireless medical devices were telemetry packs used in just one nursing unit. Now there are many types of wireless medical devices, and they’re deployed enterprise wide.
WMTS has 2 very serious fundamental limitations:
1) With just 13 MHz of bandwidth, WMTS lacks the elbow room to support the growing number and types of wireless medical devices. Deploying all your wireless medical devices on the same infrastructure should make designing, deploying and managing the infrastructure a more manageable process.
2) WMTS has no standards. Sure ISM and 802.11a/b/g are not perfect, but these standards help ensure coexistence and interoperability. They also maximize capacity of the available bandwidth and provide powerful testing and monitoring tools. Every vendor’s implementation on WMTS is proprietary, and given the limited bandwidth and proliferation of wireless medical devices running everything in a more simple and inexpensive channelized fashion is too inefficient. So not only must device vendors create the wireless radios and receivers, they must use sophisticated technologies to maximize capacity - all while building their own monitoring and system management tools. Even when vendors appropriate other commercial technologies for use in the WMTS band, like Philips did with DECT, monitoring and management tools are lacking. Yes, Wi-Fi can be crowded and complex but the resulting commercial ecosystem has resulted in ongoing innovation to improve and monitor performance, helping to ensure safe and effective communications.
The commercial ecosystem around ISM and 802.11a/b/g standards has created better performance and manageability - at a much lower cost - than is possible with the proprietary technologies required for WMTS.
Deploying wireless medical devices regardless of band or technology is a daunting task. And with the advent of IEC 80001, it is going to get more daunting still.
I have deployed both VHF/UHF/WMTS and 802.11 within the patient monitoring domain. I was also the architect behind OneNet from Draeger. In addition, I have worked for Symbol and with Welch Allyn, and am intimately familiar with Cisco LWAPP, Aruba, and Meru.
WMTS will be around, but not longer term. Proprietary radios and networks are a thing of the past. Tools are available for real-time spectrum analysis, network management, and QoS in the 802.11x realm. This will never be available for WMTS, simply because of the economics of scale of the technology envelope. The high cost of WMTS is also questionable when 802.11x costs are continually coming down. However, the medical device industry does need to drive some improvements to the 802.11 standards, not unlike how QoS and interoperability enhancements were added to 802.11 in response to needs from the enterprise IT market.
I was wondering if someone had an opinion on the advantages/disadvantages for smaller setting hospitals and outpatient services, 200 beds or less, when considering WMTS vs. 802.11 networks for patient monitoring devices?
Thanks in advance. This is a great community of ideas!
Here’s my perspective on your question.
1. Splitting off patient monitoring on a separate network (rather than using Wi-Fi) does eliminate a single point of failure. But then it’s another network that needs to be designed for high reliability and actively managed to ensure continued performance.
2. The claim that WMTS is “protected” is moot for two reasons: 1) the vast majority of interference is unintentional interference (where having a protected frequency does you no good), and 2) Wi-Fi standards, and the systems that use them, are designed to facilitate coexistence among many different vendors and devices in the same environment - thus rendering the “protected” claim of WMTS in the face of intentional interference moot.
3. While Wi-Fi is based on industry standards with cross vendor interoperability, every vendor’s implementation of WMTS is proprietary. Switch vendors, switch out all the WMTS infrastructure. This increases “switching costs” when hospitals want to change vendors. Vendors like this because it tends to lock in customers.
4. WMTS comes in two flavors, a low cost / low capacity system and a higher cost / high capacity system. A trend in hospitals for the past few years (and continuing) is to want to monitor patients in broader areas (even house-wide). The number of patients being monitored is also increasing. The issue here is to really think about your current and future coverage and capacity requirements. If you outgrow the low capacity system, you have to replace most or all of the infrastructure to upgrade.
5. WMTS specifies a frequency band and provides nothing for coexistance. Consequently, there can be coexistence issues between different vendors products. Currently GE and Philips have things worked out, but a new product release from either vendor could upset the apple cart.
While using Wi-Fi for your patient monitors is more complex, there are many excellent monitoring and trouble shooting tools available. This is not the case with WMTS - since these systems are proprietary, anything that is available must be developed by the vendor (GE or Philips).
The good old days of throwing up an analog antenna system for trouble and maintenance free wireless patient monitors are gone.
I understand your position on WiFi, but have problems with the comment on interference:
“The claim that WMTS is “protected” is moot for two reasons: 1) the vast majority of interference is unintentional interference (where having a protected frequency does you no good)”
The issue with the 2.4GHz band is that there are just many more active transmitters - even in a hospital environment.
People with WiFi and Bluetooth running on their phone and using headsets, wireless keyboards and mice, and even Nike plus foot pods. Some fire alarms use this band for smoke detection. 2.4GHz is a very crowded space.
Would you suggest that system developers look at the higher 802.11 frequency bands?
Paul, the ISM band is indeed crowded - yet it never seems to get too crowded. With the increasing adoption of 5Ghz, the available bandwidth in ISM has much room for growth.
Any wireless deployment, whether in the ISM band or WMTS, must be proactively planned, designed, and managed after installation. Admittedly, this is a more complex task in ISM. It seems to me that this complexity is a small trade-off for the benefits of a standards-based, shared infrastructure.
And of course, WMTS remains a legitimate choice for medical devices - if only to avoid putting all your eggs into one RF basket.
Such decisions just need to be made with eyes open and an awareness of all the implications.
Not only GE and Philips are in the telemetry market. Spacelabs was the inventor in the '70s with the Apollo program. Draeger is also in the TLM field.
I enjoyed reading the article and excellent comments, and considered two low-rate protocols which are being considered in academia.
Is ZigBee a valid alternative to 802.11a/b/g in scenarios that include a small healthcare facility, non-critical patient monitoring (for instance, post-op, emergency room and recovery) and out-patients?
What about Bluetooth in association with 802.11a/b/g?
Thanks in advance.
One of the biggest challenges facing health care providers is the increasing number of wireless devices. This is good because wireless better supports the types of workflows in health care, but it is also bad due to the increased complexity of managing a more crowded wireless environment.
Most of these wireless devices are in the ISM band, and the vast majority are based on 802.11 standards, in other words wireless LANs.
In the enterprise, both ZigBee and Bluetooth can produce coexistence challenges. Many providers are poorly equipped to resolve these coexistence problems when they arise. In the home environment, or outside of both the home and enterprise, these technologies are much more easily deployed.
Given the broad picture above, both Bluetooth and ZigBee are better suited for low powered wireless sensor radios than as an alternative to WiFi for enterprise communications.
The care delivery areas you mention are all points along a broader care delivery continuum, and as such are better served with an enterprise-wide technology. As you note, bandwidth is an important requirement. A typical multi-parameter patient monitor generates about 12 kb/s of data. It does not take too many monitors to overwhelm the bandwidth available in ZigBee. Running Bluetooth at a power level high enough to serve as an alternative to WiFi would likely result in coexistence issues.
The bottom line is that WiFi is the accepted enterprise wireless infrastructure in health care. There are roles for ZigBee, Bluetooth and other technologies, but they are typically in applications other than connecting devices to enterprise networks.
Great website. My situation is this: I am currently using a Quinton QRS telemetry system in my outpatient clinic. I monitor up to 8 patients an hour on my current system. Can I integrate a separate system at a satellite office and be able to store patient info on the main system?
What about wireless effects on the human system? I had read somewhere that any frequency above 2.8GHz is harmful to the human body.
Also what about an architecture where UWB is used for short distance, high bandwidth communications, to a gateway. Once a gateway is reached multiple wireless techniques could then be used to transmit the data to remote access point.
WiFi I feel is getting very crowded. I could be wrong but in spite of all the co-existence build in, when it comes to critical time bounded data we would need to look at alternatives.
I find this very informative and interesting. I have learned A LOT here. I am a tele tech watching 50 channels (which is too many for one person to watch safely) in a cardiac unit. I have been doing this for 13 years and I have encountered ALL kinds of "drop out" with our Philips system and even WORSE with Spacelabs. I actually wish we had our outdated Philips system back, because Spacelabs is horrible and NOT user friendly whatsoever. Spacelabs calls their dropout "squelch," which I find to be HOG WASH, as they send spectrum analyzer after spectrum analyzer only for us to be told, "you may want to try a different patch." I call bull on that! I would assume our hospital took the cheapest route possible, because we too had to reroute wires throughout the house. I will check into what GHz we are at, since I and another tech are sitting with cancer. Duly noted on the band jumping, since we have two other telemetry systems in house: Datascope in the ER and Philips in OB/PEDS. My secretary sits with an earpiece - Bluetooth, I would assume - about 3 feet away from the central system. Could this be causing our dropout?
Philips is using ‘smart hopping’ at 1395-1400 and 1427-1432 MHz as per their MX40 Service Guide. Bluetooth uses 2.4-2.5 GHz. It’s unlikely the bluetooth earpiece is an issue, but why not have her use a wired one just to see if it is an issue. | 1 | 4 |
<urn:uuid:7e5f8ce3-e679-445b-ab00-1702afdd98bd> | Unless we are talking about a possible “manliness coefficient” for the riding of certain bikes (classic British twins, unashamed Sportsters), vibration is usually seen as getting between us and a good time. Kenny Roberts’ almost-successful three-cylinder two-stroke 500cc Grand Prix racer started life with vibes that messed up its carburetion and broke parts. Vibration can put our hands to sleep and, in extreme cases, leads to double vision (as in the “pogo” shaking sometimes reported by US astronauts during launch or from the “tire shake” that can occur in drag racing).
But in certain special circumstances, vibration has actually been useful. Today the readings of lab instruments appear on video screens, but in my student days every meter had an indicating needle attached to a tiny shaft pivoted in jewel bearings like those in Kevin Schwantz’s treasured mechanical watches. We were taught always to tap on the glass faces of such meters before taking a reading to shake the pivots enough to overcome any friction present. Quite often the needle would shift position significantly. To this day, I feel the impulse to tap.
The cockpits of today’s aircraft have large LCD screens on which are displayed the “virtual instruments” necessary for the current flight situation, but in times past a flight engineer sat before a panel which, on a four-engine aircraft, carried 32 or more instruments. Because of the vibration of large aircraft piston engines, no gauge tapping was necessary. Today’s pilots refer to such instruments as “steam gauges.”
When the gas-turbine era arrived, aircraft engine vibration almost disappeared because there were no longer great big pistons and valves whanging back and forth. To save flight engineers from constantly tapping critical gauges (monitor that turbine inlet temp!), instrument panels had to be equipped with artificial vibration in the form of buzzers.
Harley-Davidson’s big twins have drum-and-forks-shifted multi-speed gearboxes, and shift quality remained good until The Motor Company decided to add engine balance shafts. Suddenly their gearboxes were half-shifting. Why, after decades of reliable shifting, would this problem suddenly appear?
The engineers soon realized that vibration had helped to overcome friction between the rotary shift drum and its bearings, and between drum and shift forks. With substantial engine vibration, the shifting mechanism had “rattled obediently” into the next detent, completing the shift. But without vibration’s help, the drum might stop along the way or even be kicked back into the previous gear.
In prototype testing, they gave the shift drum low-friction rolling-element bearings and improved certain surface finishes to reduce friction in the shift mechanism. They were eventually rewarded with a return to good shift quality. I had been down this same route myself in trying to improve shifting in race engines of the 1970s.
Near the end of the 19th century, ocean-going ships were propelled by enormous triple-expansion steam engines. These were units of three cylinders—a small high-pressure cylinder, exhausting into a larger intermediate-pressure cylinder, which in turn sent its exhaust steam for further work-extracting expansion in a great big low-pressure cylinder. All that metal in motion led to constant slight cyclic flexure of the ship’s hull and the tremendous shafts that transmitted power to the propellers.
The thrust bearings that transmitted force from the prop to drive the ship forward consisted of a stack of multiple collars fixed to the shaft running in a thrust box containing corresponding stationary plates lubricated by pumped oil.
All was well until Charles Parsons’ 1893 invention of the steam turbine, which hardly vibrated at all. Suddenly conventional thrust boxes, which had worked well for decades, overheated and seized.
The emerging scientific understanding of lubrication revealed why. Something must cause an oil wedge to form between the moving parts. Oil is drawn between the surfaces at the wider, low-pressure end of the wedge, and is pulled into the loaded zone of the bearing by its own viscosity, generating in this way pressures of thousands of pounds per square inch. Oil pumps merely send the oil to where it is needed, but the pressure that supports the load is generated solely by the motion of the parts.
Seen this way, the problem was clear: With the yanking and thumping of piston steam power, the shaft and thrust collars inside the thrust box were constantly forced into just enough misalignment to generate oil wedges capable of carrying the load. How much load? Eight thousand hp at 15 mph is a thrust of 200,000 pounds.
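That figure is just the definition of power rearranged - a quick back-of-the-envelope check, using 1 hp = 550 ft·lbf/s and 15 mph ≈ 22 ft/s:

    thrust = power ÷ speed = (8,000 × 550 ft·lbf/s) ÷ 22 ft/s ≈ 200,000 lbf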
Innovators on both sides of the Atlantic came up with the same solution: a single thrust collar on the shaft, pressing against a circular array of six or eight segment-shaped tilting pads in a thrust block attached to the ship’s hull. As the shaft turned, the thrust pads tilted just enough to form each its own oil wedge, capable of supporting any desired load. Ships powered by steam turbines were driven by such multipad thrust blocks through two world wars and on into the eventual replacement of steam by today’s more efficient two-stroke marine diesel engines.
Pistons sliding in cylinders tilt ever so slightly to create the oil wedges that support them. Crankshaft journals do the same by being forced just enough off-center by the applied load to form ever-so-slightly crescent-shaped oil wedges. Typical main-bearing clearance in a motorcycle engine is 0.0012 inch, and under load the minimum oil-film thickness is squeezed to as little as 0.00005 inch. That produces a very slightly tapered oil clearance, just enough to work like a charm.
Source: Cycle World
<urn:uuid:5b407000-a2dd-49b0-9ea5-cc6202101601> | DIG with Linux and Mac OSX
Linux and Mac OS X use DIG to look up the DNS records of a domain, though you can use NSLOOKUP in the Mac OS X Terminal as well. You can follow the steps below:
1. Open a terminal window. The procedure to do this depends on the operating system and desktop environment:
- On Mac OS X, click Applications, click Utilities, and then click Terminal.
- On Linux, open a terminal window.
2. At the command prompt, type the following command. Replace example.com with the domain that you want to test.
dig example.com
To use a specific DNS server for the query, use the @ option.
By default, dig displays the A record for a domain. To look up a different DNS record, add it to the end of the command. The below looks up the MX records of
example.com using one of Dynu's name servers, ns4.dynu.com.
dig @ns4.dynu.com example.com MX
3. Dig displays a QUESTION SECTION (the request) and an ANSWER SECTION (what the DNS server sends in response to the request). In this case, we used the default options for dig, which simply looks up the A record for a domain.
From this, we can see that example.com currently points to IP address 18.104.22.168.
[user@localhost ~]# dig example.com
; <<>> DiG 9.9.4-RedHat-9.9.4-73.el7_6 <<>> example.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 58057
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;example.com. IN A
;; ANSWER SECTION:
example.com. 21347 IN A 22.214.171.124
;; Query time: 8 msec
;; SERVER: 126.96.36.199#53(188.8.131.52)
;; WHEN: Tue Sep 29 15:50:42 MST 2020
;; MSG SIZE rcvd: 56
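If you only want the record values without the question, answer, and statistics sections, dig's +short option trims the output down to the answer itself. A quick sketch - the values shown here are documentation placeholders, not what example.com actually returns:

[user@localhost ~]# dig +short example.com
203.0.113.10
[user@localhost ~]# dig @ns4.dynu.com +short example.com MX
10 mail.example.com.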
<urn:uuid:acf19771-ffde-417a-80ab-43558e3a40ca> | By Laurie McFarland Jackson ©2011
Sept... such an innocent, uncomplicated-sounding four letter word, but, as it turns out, a word that creates much discussion and interpretation. One list of sept names may be different from another. In this article you will find various bits of information about septs from a variety of sources. One thing you will discover... the word sept is anything but uncomplicated.
According to Wikipedia, the free encyclopedia, a sept is an English word for a division of a family, especially a division of a clan. The word might have its origin from Latin septum "enclosure, fold", or it can be an alteration of sect.
The term is found in both Ireland and Scotland. It is sometimes used to translate the word slíocht, meaning seed, indicating the descendants of a person (i.e., Slíocht Brian Mac Diarmada, the descendants of Brian MacDermott).
In the context of Scottish clans, septs are families that followed another family's chief. These smaller septs would then make up, and be part of, the chief's larger clan. A sept might follow another chief if two families were linked through marriage. However, if a family lived on the land of a powerful laird or neighbor, they would follow him whether they were related or not. Bonds of manrent were sometimes used to bind lesser chiefs and their followers to more powerful chiefs.
Historically, the term 'sept' was not used in Ireland until the nineteenth century, long after any notion of clanship had been eradicated. The English word 'sept' is most accurate referring to a subgroup within a large clan; especially when that group has taken up residence outside of their clan's original territory. (O'Neill, MacSweeney, and O'Connor are examples.) Related Irish septs and clans often belong to larger groups, sometimes called tribes, such as the Dál gCais, Uí Néill, Uí Fiachrach, and Uí Maine. Recently, the late Edward MacLysaght suggested the English word 'sept' be used in place of the word 'clan' with regards to the historical social structure in Ireland, so as to differentiate it from the centralized Scottish clan system. This would imply that Ireland possessed no formalized clan system, which is not wholly accurate. Brehon Law, the ancient legal system of Ireland clearly defined the clan system in pre-Norman Ireland, which collapsed after the Tudor Conquest. The Irish, when speaking of themselves, employed their term 'clan' which means "family" in Irish.
The site www.electricscotland.com suggests that the variety of surnames within a Scottish clan do not represent separate and definable sub-clans but instead reflect the vagaries of transition of the Gaels into the English naming system as well as marriages, migrations, and occupations. The main family itself may have developed a variety of surnames. The preferred modern usage is to avoid the use of the term “sept” and to simply describe these names as what they are – surnames of the family and of allied or dependent families. It is preferable to speak of “the names and families of Clan X” rather than to call a name “a sept of Clan X.” “Sept” is actually a term borrowed from Irish culture in the nineteenth century to explain the use of a variety of surnames by members of a single clan. Where Scots would say, “MacGregor and his clan,” an Irish historian might say, “O’Neill and his sept.”
ElectricScotland’s short list of septs of Clan MacFarlane include the following families: Allan, Allanson, Bartholomew, Caw, Galbraith, Griesck, Gruamach, Kinnieson, Lennox, MacAindra, MacAllan, MacCaa, MacCause, MacCaw, MacCondy, MacEoin, MacGaw, MacGeoch, Macgreusich, Macinstalker, MacIock, MacJames, Mackinlay, MacNair, MacNeur, MacNider, MacNiter, MacRob, MacRobb, MacWalter, MacWilliam, Miller, Monach, Napier, Parlane, Robb, Stalker, Thomason, Weaver, Weir.
Macfarlane surnames listed in the Family Tree DNA Project (http://www.familytreedna.com) include the following:
Allan, Allanach, Allanson, Allison, Arrell, Arrol, Barclay (in Ulster), Bart, Bartholomew, Bartie, Bartson, Black*, Brice, Bryce, Caa, Callander, Caw, Condey, Condeyie, Condy, Cunnison, Galloway (in Stirling), Grassick (in Montrose), Grassie (in Aberdeen), Greusaich, Griesch (in Aberdeen), Grua- mach, Kennson, Kinnieson, Kinnison, Knox, Leaper, Lechie, Lennox, MacAindra, MacAllan, MacAllen, MacAndrew, MacAndro (in Dunbarton), MacCaa, MacCause, MacCaw, MacCondey, MacCondeyie, MacCondy, MacEach, MacEachern, MacEoin, MacErrachar, MacErracher, MacFarlan, MacFarland, MacFarlane, MacFarquahar, Macferlant (in Poland), MacGaw, MacGeoch, MacGilchrist, MacGreusach, MacGreusich, MacInally, MacInstalker, MacIock, MacJames, MacJock, MacKindlay, MacKinlay, MacNair, MacNaiyer, MacNayer, MacNeur, MacNider, MacNiter, MacNoyer, MacNuyer, MacRobb, MacWalter, MacWilliam, McFarlan, McFarland, McFarlane, Michie, Millar, Miller (in Dunbarton), Monach, Monachock, Nacfaire (in France), Parlan, Robb, Smith (in Dunbarton), Spruell, Stalker, Thomason, Thomson, Weaver, Webster, Weir, Williams, Williamson, Wilson, Wylie, Wyllie.
*Due to recent DNA results, the Admin team at FamilyTreeDNA have added the Black surname to the list of Septs. Four men with that surname are part of the main MacFarlane lineage, and their markers date back to before the 1600s. Although the Black surname is listed with three other clans, at least this one branch belongs with the MacFarlanes.
James Macfarlane wrote the History of Clan Macfarlane in 1922 (ISBN 978-1-152-95118-1). As of 1846, "the lineal representative of the ancient and honourable house of Macfarlan and that ilk..." included the following families:
Arrell, Arrol, Allan (also Clan Ranald), Allanson (also Clan Ranald), Allanach (also Clan Ranald), Bartholomew, Barclay, Caw, Griesch (Aberdeen), Grassie (Aberdeen), Grassick (Montrose), Gruamach, Galloway (Stirling), Kinnieson, Kennson, Kinnison, Mac Allan (also Clan Ranald, MacKay and Stewart), MacAindra, MacAndrew, MacAndro (of Dumbartonshire), MacCaa, MacCause (Thomson), MacCaw (also Stewart of Bute), MacCondey, MacEoin, MacEachern (also an ancient race of Kintyre and Criagnish), MacErracher, MacGaw, MacGeoch, Macgreusich (also Buchanan), MacInstalker, MacJock, MacJames, Mackinlay, MacNair (also McNaughton), MacNeur, MacNuyer (also Buchanan and Mcnaughton), MacNider, MacNiter, MacRob (also Gunn), MacRobb, MacWalter, MacWilliam (also Gunn), Miller (of Dumbartonshire), Michie, Monach, Parlane, Robb, Stewart, Stalker, Weaver, Wilson, Weir, Williamson, Galbraith, Lennox, Napier
James continues his description of the septs of Macfarlane in great detail. There were many Macfarlanes in the north and west Highlands, especially in the counties of Dumbarton, Perth, Stirling, and Argyle; also in the shires of Moray and Inverness, and the western isles. Northern Ireland was also home to many Macfarlanes.
There are a large number of descendants from, and dependents on, the Macfarlane surname and family. The largest group of these descendants is the Allans or Macallans. It began with Allan Macfarlane, a younger son of one of the Chiefs of Arrochar who went to the north and settled there several centuries ago. Allan‘s sons called themselves sons of Allan instead of taking the family name of Macfarlane. So, Allanson and Allanach are variations of Macallan.
In another case, the sons of Thomas, younger son of Duncan, the 6th Chief, called themselves Thomas’ sons instead of Macfarlane.
There are also Macnairs, Maceoins, Macerrachers, Macwilliams, Macaindras, Macniters, MacInstalkers, Macjocks, Parians, Farlans, Graumachs, Kinniesons, etc., all which septs acknowledge themselves to be Macfarlanes, together with certain septs of Macnayers, Mackinlays, Macrobbs, Macgreusichs, Smiths, Millers, Monachs, and Weirs.
Clans & Tartans of Scotland by James Mackay (Gramercy Books New York, 2000, ISBN 0-517-16240-7) begins an explanation of septs starting in the thirteenth and fourteenth centuries as surnames were gradually adopted in Scotland. In its most pure form, the clan was essentially a family group with members that traced their roots back to a common ancestor and who were thereby linked by blood ties. It included illegitimate children, as well as children fostered or adopted by the family. It would also include the children of women who had married outside the family group and who, therefore, had a different surname. More commonly, however, the appearance of other surnames within the clan came from landless men or outlaws attaching themselves to the clan for protection, giving service in return. From this arose the idea of the sept. This word, derived from the Latin septum, an enclosure or fence, alludes to the fact that, originally, a particular plot of clan land was set aside for these landless followers where they could establish a village of their own.
Following the Jacobite Rebellion of 1745-46 the age-old allegiance of the clansman to his chief was eventually replaced by the ties of kindred, in which the possession of a common surname became of utmost importance. A person‘s surname gives a sense of identity, but in the Scottish system it also gives a feeling of solidarity that nowadays links people from all around the globe and from every walk of life. The ties that bind us may be extremely tenuous, but the name is all-important. The spread of Scottish clan names to every part of the world is a reflection of the Scottish diaspora (a dispersion of a people, language, or culture that was formerly concentrated in one place.) Thirty million people who are of Scottish descent have been estimated to live outside Scotland - that is six times the number of people actually living in Scotland. Pride in bearing a Scottish surname has not only strengthened the bonds of expatriates, but has also helped to keep alive a sense of Scottish nationhood over the past three centuries.
S-E-P-T... a deceptively complicated four-letter word... a word that means family.
<urn:uuid:4f6f3db4-b196-4b19-9b08-07e0674d2a88> | On This Day
1549 – Battle of Sampford Courtenay: The Prayer Book Rebellion is quashed in England.
The Prayer Book Rebellion, Prayer Book Revolt, Prayer Book Rising, Western Rising or Western Rebellion (Cornish: Rebellyans an Lyver Pejadow Kebmyn) was a popular revolt in Devon and Cornwall in 1549. In that year, the Book of Common Prayer, presenting the theology of the English Reformation, was introduced. The change was widely unpopular – particularly in areas of still firmly Catholic religious loyalty (even after the Act of Supremacy in 1534) such as Lancashire. Along with poor economic conditions, the enforcement of the English language liturgy led to an explosion of anger in Devon and Cornwall, initiating an uprising. In response, Edward Seymour, 1st Duke of Somerset sent Lord John Russell to suppress the revolt.
Read more ->
1938 – The Thousand Islands Bridge, connecting New York, United States with Ontario, Canada over the Saint Lawrence River, is dedicated by U.S. President Franklin D. Roosevelt.
The Thousand Islands International Bridge (French: Pont des Mille-îles) is an American-maintained international bridge system over the Saint Lawrence River connecting northern New York in the United States with southeastern Ontario in Canada. Constructed in 1937, with additions in 1959, the bridges span the Canada–US border in the middle of the Thousand Islands region. All bridges in the system carry two lanes of traffic, one in each direction, with pedestrian sidewalks.
Born On This Day
1840 – Wilfrid Scawen Blunt, English poet and activist (d. 1922)
Wilfrid Scawen Blunt (17 August 1840 – 10 September 1922), sometimes spelled “Wilfred”, was an English poet and writer. He and his wife, Lady Anne Blunt travelled in the Middle East and were instrumental in preserving the Arabian horse bloodlines through their farm, the Crabbet Arabian Stud. He was best known for his poetry, which was published in a collected edition in 1914, but also wrote a number of political essays and polemics. Blunt is also known for his views against imperialism, viewed as relatively enlightened for his time.
1900 – Ruth Bonner, Soviet Communist activist, sentenced to a labor camp during Joseph Stalin’s Great Purge (d. 1987)
Ruf Grigorievna Bonner (Russian: Руфь Григорьевна Боннер; 1900 — 25 December 1987), also known as Ruth Bonner, was a Soviet Communist activist and who spent eight years in a labor camp during Joseph Stalin’s Great Purge. She was the mother of the human rights activist Yelena Bonner and the mother-in-law of physicist and dissident Andrei Sakharov.
Bonner was born in 1900 into a Russian Jewish family in Siberia. Her mother, Tatiana Matveyevna Bonner, was widowed early and left with three small children.
Bonner’s first husband was Armenian Levon Sarkisovich Kocharian, who died when Yelena was a year old.
In the 1930s, Bonner was a health official in the Communist Party committee of Moscow while her second husband, Gevork Alikhanyan, aka Georgy Alikhanov, was a director at the Comintern. As part of Stalin’s mass purges in 1937, her husband was arrested on charges of espionage and sentenced to death.
Bonner was arrested a few days after her husband and spent the next eight years in the Gulag near Karaganda, Kazakhstan. After her release she spent another nine years in internal exile. In 1954 she was one of the first of Stalin’s victims to be rehabilitated under the new Soviet leader Nikita Khrushchev. Her husband was rehabilitated posthumously.
When her daughter Yelena and her son-in-law Andrei Sakharov were exiled to Gorky in 1980, she was allowed to move to the United States to be with her grandchildren. She returned to Moscow in June 1987 to live with her daughter, whose exile had been lifted by Mikhail Gorbachev in December 1986. She died in Moscow on 25 December 1987, aged 87.
Peter Henry Fonda (February 23, 1940 – August 16, 2019) was an American actor, director, and screenwriter. He was the son of Henry Fonda, younger brother of Jane Fonda, and father of Bridget Fonda. He was a part of the counterculture of the 1960s.
Fonda was nominated for the Academy Award for Best Original Screenplay for Easy Rider (1969), and the Academy Award for Best Actor for Ulee’s Gold (1997). For the latter, he won the Golden Globe Award for Best Actor – Motion Picture Drama. Fonda also won the Golden Globe Award for Best Supporting Actor – Series, Miniseries or Television Film for The Passion of Ayn Rand (1999).
By Noel Murray, The New York Times: Peter Fonda: 7 Great Movies to Stream The prolific actor, who died Friday, is credited with 116 roles across a nearly six-decade career. Here are a few of his best.
Vector’s World: Ran when parked; Curtiss Aerocar Just about anything you might ever need to know about Glenn H. Curtiss can probably be found in this article. More ->
The Passive Voice: Rise of the Peer Review Bots; Generating Music With Artificial Intelligence; A Writer’s Bare Necessities and more ->
Google Open Source Blog: Bringing Live Transcribe’s Speech Engine to Everyone
By Leah Asmelash, WMUR: Police: Woman holds teens at gunpoint while they tried to raise money for their football team
Wynne County Schools Superintendent Carl Easley said in a statement that his district will review the fundraising policy and “will consider banning any door to door sales.”
“We are very concerned for our kids,” he said.
By Gabe Fernandez, Jalopnik: NASCAR Cowards Dropped Slayer As A Race Car Sponsor Because Of “Reactionary Concerns”
“Today, reportedly due to reactionary concerns from other long-time participating sponsors, Slayer has been pulled as the primary sponsor, and all Slayer signage has been removed from the car that was to be piloted by Monster Energy NASCAR Cup Series veteran, JJ Yeley,” they wrote in a statement. “The incontrovertible PODS Moving & Storage will now sponsor that car. After nearly 40 years, Slayer apparently remains as terrifying to some as ever.”
By Patrick Holland, CNET: PDFs are a monster to edit, but these four free apps make it easy Whether you’re on an iPhone, Android phone, Mac or PC, I found free and easy ways to add text, sign documents and fill out forms.
By Jon Schuppe, NBC News: U.S. news ‘I feel lucky, for real’: How legalizing hemp accidentally helped marijuana suspects Hundreds, perhaps thousands, of people accused of marijuana possession have seen their cases dismissed or put on hold thanks to new hemp laws.
With the passage of new hemp-legalization laws over the past eight months, crime labs across the country have suddenly found themselves unable to prove that a leafy green plant taken from someone’s car is marijuana, rather than hemp. Marijuana looks and smells like hemp but has more THC, the chemical that makes people high.
Without the technology to determine a plant’s THC level, labs can’t provide scientific evidence for use in court. Without that help, prosecutors have to send evidence to expensive private labs that can do the tests or postpone cases until local labs develop their own tests, a process that could take months.
By Denise Guerra, NPR: My Grandfather, A Killer
American Thinker: Epstein and the Public Loss of Faith; Forty-nine Years After Coming to America, I Became a Citizen Because I Want to Vote for President Trump; Why Israel Made the Right Move with Omar and Tlaib; Hatred is Hatred, whether from the Left or Right and more ->
By Deborah Bonello, Ozy: Silicon Valley Is Going to Mexico … for Talent
Why you should care
American tech firms are setting up research centers south of the border, targeting an affordable talent pool they’re increasingly unable to find at home.
By Amanda Ogle, Ozy: Take a Trip Through the First U.S. State to Allow Women to Vote
Why you should care
Women have been voting in the Cowboy State for 150 years.
Maria Popova’s Brain Pickings: Against the Slippery Slope of Evil: Amanda Palmer Reads Wendell Berry’s Stunningly Prescient Poem “Questionnaire” and Eating the Sun: A Lovely Illustrated Celebration of Wonder, the Science of How the Universe Works, and the Existential Mystery of Being Human
Backyard Gardening: DIY Fabric Grow Bags
By gg Phillips, Alaska Master Gardener Blog: Easy To Grow Houseplants
By rabbitcreek: Alaska Datalogger
By Natalina: Build a Soundproof Wall
Joan Reeves Saturday Share: Best Sloppy Joe Ever
FOODS by Lyds: How to Make the Best Brownies Ever
<urn:uuid:da596541-216f-4e8f-9565-2219d9f0d2c1> | History of video games/Platforms/Wii U
A Wii U console with Gamepad.
The Wii U was preceded by the very successful Nintendo Wii.
One gamepad prototype was essentially a screen with two wiimotes attached to it.
Concerns were raised over potential forced child labor in the production of Wii U systems in 2012.
Nintendo President Satoru Iwata sketched the idea for Amiibo while riding a bullet train (Shinkansen) to Tokyo in the later part of 2013.
I actually am baffled by it, I don’t think it’s going to be a big success.—Nolan K. Bushnell, New York Times article, 2012
The typical MSRP of games raised to $59.99, up from $49.99 for Wii games.
At launch in 2012 the 8GB Basic Wii U cost $299.99 and the 32GB Deluxe Wii U cost $349.99.
In January 2013 the Wii U had notably poor market performance in the United States, having sold only between 50,000 and 59,000 consoles. Nintendo reported much lower sales of the Wii U than expected in 2014, leading to financial worries.
In 2015, Nintendo Amiibo sales were very high.
In 2015, Nintendo withdrew from the Brazilian market.
Production of the Wii U ended in January of 2017. 13.56 million Wii U consoles and 103.21 million Wii U games were sold over the course of the system.
The Wii U was succeeded by the Nintendo Switch, and eventually many Wii U exclusives were ported to that console.
In 2020 some sought out older Wii U consoles due to shortages of the Nintendo Switch during the early stages of the COVID-19 pandemic.
The Wii U is powered by a three core 32-bit IBM Power-PC 750 CPU clocked at 1.243125 gigahertz and produced on a 45 nanometer SOI process. This is complemented by an AMD Radeon GPU clocked at 549.999755 megahertz, which is similar to the AMD RV770 GPU series (HD 4000) and built on a 40 nanometer process supporting up to 1080p output. The GPU has 320 stream processors, 16 texture mapping units, and eight render output units. Both processors have access to 4 gigabytes of shared DDR3-1600 RAM with up to 12.8 gigabytes a second of bandwidth. Though the hardware was underpowered for its time, careful consideration to memory hierarchy and interrelation between components eased many performance bottlenecks.
The basic Wii U has eight gigabytes of solid state storage, and the premium Wii U has 32 gigabytes of solid state storage.
Just as the Wii was often said to be twice as powerful as the GameCube, the Wii U is said to be roughly thrice as powerful as the Wii. Some also compare the power of the Wii U to that of the Xbox 360. While not necessarily true, these can be useful generalizations.
As the wireless gamepad is a critical part of the Wii U, the system sports a relatively feature rich radio suite. The Wii U supports 2.4 gigahertz Wi-Fi b/g/n. The Wii U has an additional Wi-Fi N controller to Miracast to the GamePad.
The Wii U has an optical disk reader which uses 25 gigabyte capacity disks with rounded edges and has read speeds of up to 22 megabytes a second. These are essentially non-standard Blu-Ray disks, and as with previous disc based Nintendo consoles, the drive is incapable of reading standard Blu Ray and DVD media to avoid patent issues.
The Wii U has four USB ports, one of which can be used with an external storage drive or thumb drive for extra space. The Wii U can use SDHC cards up to 32 gigabytes of capacity.
The GamePad has a 6.2" LCD with a resolution of 854 by 480 pixels and a resistive touch screen that does not support multi-touch.
The GamePad has an NFC radio built in to use Amiibo.
The GamePad has an IR remote to control television sets.
The Wii U could technically support two gamepads, though this was not pursued in practice.
The Wii U can use a USB keyboard, though this feature was not available at launch.
The Wii U runs its own specialized operating system.
Third Party Support
Some third party developers, such as Team Ninja, noted the relative ease of development for the system, comparing it to consoles from the previous generation, such as the Xbox 360. Other third party developers, such as Bethesda, noted that Nintendo did not approach them early enough for them to offer viable support for the Wii U.
In the beginning, Wii U games had paper manuals, with games shifting to digital manuals around 2014.
Special editions and versions of the console.
- Starlight Gaming Station - Kiosk for hospital use.
2013 was promoted by Nintendo as the Year of Luigi, marking the 30th anniversary of his first appearance.
- Wii Fit U
- Wii Party U
- Wii Sports Club
- Dr. Luigi
- New Super Luigi U
- Game & Wario
- Super Mario 3D World
- The Legend of Zelda: The Wind Waker HD
- Pikmin 3
- The Wonderful 101
Sonic Lost World
The Wii U version of this game included exclusive DLC featuring crossovers with the Zelda and Yoshi game franchises.
Read more about Sonic Lost World on Wikipedia.
- Mario Kart 8
- Super Smash Bros. for Wii U
- Fatal Frame: Maiden of Black Water
- Donkey Kong Country: Tropical Freeze
- Hyrule Warriors
- Bayonetta 2
- Meme Run
- Sonic Boom: Rise of Lyric
- The Letter
- Splatoon - The first game in the innovative Splatoon series
- Xenoblade Chronicles X
- Animal Crossing: Amiibo Festival
- Mario Party 10
- Kirby and the Rainbow Curse
- Affordable Space Adventures
- Tokyo Mirage Sessions ♯FE
- Mario Tennis: Ultra Smash
- Super Mario Maker
- Yoshi's Woolly World
- Star Fox Zero
- Star Fox Guard
- Paper Mario: Color Splash
- The Legend of Zelda: Twilight Princess HD
- Pokkén Tournament
Nintendo Wii U
A Wii U console from the front
A Wii U console from the back
Wii U Controllers
The Wii U Gamepad
The Wii U Pro controller.
The Wii U Pro controller, showing triggers.
The Wii U CPU (Smaller package) and GPU (Larger package).
Clearer illustration of the same.
Illustration of the processor with heat spreader.
- Archived version of the official website in 2012
- Archived version of the official website in 2013
- Archived version of the official website in 2014
- Archived version of the official website in 2015
- Archived version of the official website in 2016
- Archived version of the official website in 2017
- Archived version of the official website in 2018
- Video Game Console Library - Wii U page.
Parts of this page are based on materials from Wikipedia: the free encyclopedia.
- ↑ Kersey, Ben (7 December 2012). "Nintendo details the history and prototypes of the Wii U" (in en). https://www.theverge.com/2012/12/7/3739626/nintendo-wii-u-history-prototypes. Retrieved 13 November 2020.
- ↑ Phillips, Tom (18 October 2012). "Nintendo investigating Wii U manufacturer Foxconn for using illegal child labour" (in en). Eurogamer. https://www.eurogamer.net/articles/2012-10-18-nintendo-investigating-wii-u-manufacturer-foxconn-for-using-illegal-child-labour.
- ↑ "Inside Nintendo's Plan to Stay Alive for the Next 125 Years". https://time.com/3749061/nintendo-mobile-gaming/. Retrieved 19 November 2020.
- ↑ "Iwata Came Up With Amiibo on a Train in Late 2013". 19 March 2015. https://gamnesia.com/iwata-came-up-with-amiibo-on-a-train-in-late-2013/. Retrieved 19 November 2020.
- ↑ Wingfield, Nick (24 November 2012). "Nintendo Confronts a Changed Video Game World (Published 2012)". https://www.nytimes.com/2012/11/25/technology/nintendos-wii-u-takes-aim-at-a-changed-video-game-world.html. Retrieved 12 November 2020.
- ↑ a b McElroy, Griffin (13 September 2012). "Wii U games will cost $59.99" (in en). The Verge. https://www.theverge.com/2012/9/13/3328300/wii-u-games-price. Retrieved 20 October 2020.
- ↑ Orland, Kyle (15 February 2013). "Wii U has historically bad January, sells about 50,000 units in US" (in en-us). Ars Technica. https://arstechnica.com/gaming/2013/02/wii-u-has-historically-bad-january-sells-about-50000-units-in-us/.
- ↑ Matthews, Matt. "At 57K sold, Wii U's January performance is historically abysmal" (in en). www.gamasutra.com. https://www.gamasutra.com/view/news/186741/At_57K_sold_Wii_Us_January_performance_is_historically_abysmal.php.
- ↑ Pfanner, Eric (29 January 2014). "Flat Sales of Wii U Put Nintendo in the Hot Seat (Published 2014)". https://www.nytimes.com/2014/01/30/technology/flat-sales-of-wii-u-put-nintendo-in-hot-seat.html. Retrieved 12 November 2020.
- ↑ Byford, Sam (15 January 2015). "Nintendo is selling millions of $12.99 plastic figurines" (in en). https://www.theverge.com/2015/1/15/7554873/nintendo-amiibo-sales. Retrieved 19 November 2020.
- ↑ Good, Owen S. (10 January 2015). "Nintendo ends console and game distribution in Brazil, citing high taxes" (in en). Polygon. https://www.polygon.com/2015/1/10/7524759/nintendo-brazil-wii-u-3ds-tariffs-taxes. Retrieved 26 October 2020.
- ↑ "Wii U Production Ends Worldwide". https://www.gamespot.com/articles/wii-u-production-ends-worldwide/1100-6447419. Retrieved 13 November 2020.
- ↑ "Nintendo Switch overtakes the Wii U". 31 January 2018. https://www.bbc.com/news/technology-42885803. Retrieved 13 November 2020.
- ↑ "IR Information : Sales Data - Dedicated Video Game Sales Units" (in en). https://www.nintendo.co.jp/ir/en/finance/hard_soft/. Retrieved 26 October 2020.
- ↑ "Why You Should Buy A Wii U If You Can’t Get A Nintendo Switch". 1 April 2020. https://screenrant.com/nintendo-switch-sold-out-wii-u-worth-2020/. Retrieved 13 November 2020.
- ↑ a b c d e f "Technical Specifications". https://www.nintendo.co.uk/Wii-U/Hardware-Features/Specifications/Specifications-664742.html. Retrieved 8 November 2020.
- ↑ a b "Wii U CPU and GPU clock speeds revealed; not the end of the world, but not great either - ExtremeTech". https://www.extremetech.com/gaming/142002-wii-u-cpu-and-gpu-clock-speeds-revealed-not-the-end-of-the-world-but-not-great-either. Retrieved 8 November 2020.
- ↑ a b c d e f Shimpi, Anand Lal. "Nintendo Wii U Teardown". https://www.anandtech.com/show/6465/nintendo-wii-u-teardown. Retrieved 8 November 2020.
- ↑ a b Leadbetter, Richard (5 February 2013). "Wii U graphics power finally revealed" (in en). https://www.eurogamer.net/articles/df-hardware-wii-u-graphics-power-finally-revealed. Retrieved 8 November 2020.
- ↑ "Wii U avoids RAM bottleneck, says Nano Assault dev". VG247. 5 November 2012. https://www.vg247.com/2012/11/05/wii-u-avoids-ram-bottleneck-says-nano-assault-dev/.
- ↑ "Nintendo Support: Compatible Wireless Modes and Wireless Security Types". https://en-americas-support.nintendo.com/app/answers/detail/a_id/498/~/compatible-wireless-modes-and-wireless-security-types. Retrieved 8 November 2020.
- ↑ "Take a very, very close look at the round-edged Wii U proprietary discs" (in en). https://www.engadget.com/2012-11-12-nintendo-wii-u-proprietary-disc.html. Retrieved 8 November 2020.
- ↑ Sin, Gloria. "Nintendo Wii U: No DVD or Blu-ray player? No problem." (in en). ZDNet. https://www.zdnet.com/article/nintendo-wii-u-no-dvd-or-blu-ray-player-no-problem/.
- ↑ a b c Stein, Scott. "Wii U review: A fun system for kids, but you should probably wait for the Switch" (in en). https://www.cnet.com/reviews/nintendo-wii-u-review/2/. Retrieved 8 November 2020.
- ↑ Pierce, David (18 November 2012). "Nintendo Wii U review" (in en). https://www.theverge.com/2012/11/18/3658130/nintendo-wii-u-review. Retrieved 8 November 2020.
- ↑ "Nintendo Wii U Review" (in en). https://www.pcmag.com/reviews/nintendo-wii-u. Retrieved 8 November 2020.
- ↑ Doolan, Liam (9 July 2022). "Reggie Explains Why The Nintendo Wii U Didn't Utilise Dual GamePad Support". Nintendo Life. https://www.nintendolife.com/news/2022/07/reggie-explains-why-the-nintendo-wii-u-didnt-utilise-dual-gamepad-support.
- ↑ "Reggie talks about why Nintendo never used two GamePads with Wii U" (in en). Nintendo Everything. 8 July 2022. https://nintendoeverything.com/reggie-talks-about-why-nintendo-never-used-two-gamepads-with-wii-u/.
- ↑ "Reggie explains why the Wii U never got dual GamePad play" (in en). GoNintendo. 8 July 2022. https://gonintendo.com/contents/6213-reggie-explains-why-the-wii-u-never-got-dual-gamepad-play.
- ↑ "USB keyboard support". https://www.nintendo.co.uk/Support/Wii-U/Game-Updates/Monster-Hunter-3-Ultimate/USB-keyboard-support/USB-keyboard-support-738278.html.
- ↑ "Nintendo Support: Does the Wii U Console Work With Keyboards?". https://en-americas-support.nintendo.com/app/answers/detail/a_id/1432/~/does-the-wii-u-console-work-with-keyboards%3F.
- ↑ "Wii U Operating System". 17 May 2012. https://nintendotoday.com/wii-u-operating-system/. Retrieved 13 November 2020.
- ↑ Souppouris, Aaron (1 February 2012). "Team Ninja: Wii U is 'very easy to develop for'" (in en). The Verge. https://www.theverge.com/2012/2/1/2763225/nintendo-wii-u-easy-to-develop-for-team-ninja. Retrieved 20 October 2020.
- ↑ Rose, Mike. "Bethesda: It's too late for third-party support on Wii U" (in en). www.gamasutra.com. https://www.gamasutra.com/view/news/199456/Bethesda_Its_too_late_for_thirdparty_support_on_Wii_U.php. Retrieved 20 October 2020.
- ↑ Totilo, Stephen (February 5th, 2014). "Nintendo Is Slowly Reinventing The Video Game Instruction Manual" (in en-us). Kotaku. https://kotaku.com/nintendo-is-slowly-reinventing-the-video-game-instructi-1515814941. Retrieved 20 October 2020.
- ↑ Totilo, Stephen (March 14th, 2017). "Even Nintendo Seems To Be Abandoning Game Instruction Manuals" (in en-us). Kotaku. https://kotaku.com/even-nintendo-seems-to-be-abandoning-game-instruction-m-1793260316. Retrieved 20 October 2020.
- ↑ "Announcing the Starlight Nintendo Switch Gaming Station!" (in en). https://www.starlight.org/stories/announcing-the-starlight-nintendo-switch-gaming-station/.
- ↑ Diaz, Ana (16 July 2021). "Let us not forget Sonic the Hedgehog’s weird Zelda: Skyward Sword crossover". Polygon. https://www.polygon.com/22580065/legend-of-zelda-skyward-sword-sonic-the-hedgehog-lost-world-wii-u-dlc-crossover.
- ↑ Farokhmanesh, Megan (26 March 2014). "Sonic: Lost World gets free The Legend of Zelda DLC stage March 27". Polygon. https://www.polygon.com/2014/3/26/5550670/sonic-lost-world-gets-free-the-legend-of-zelda-dlc-stage-march-27.
<urn:uuid:ae7ac904-d56a-42f9-b881-fd46dbd9910a> | Thermal management is the ability to control the thermal environment of an electronic system. It is often associated with cooling heat-generating electronics, but it also encompasses generating heat in cold environments to maintain optimal system operation or power wax-based linear motors. As a result of the many uses for thermal management, a wide variety of devices and components are designed for its implementation, including positive- and negative-temperature coefficient thermistors (PTCs and NTCs), thermocouples, and resistance-temperature detectors (RTDs). This FAQ begins by looking at various heating applications that use PTCs, then digs into the uses of NTC thermistors for temperature measurement and thermal protection, and closes with a comparison of NTC thermistors, thermocouples, and RTDs.
PTC thermistors can operate over a range of voltage and dissipation conditions to produce a nearly constant temperature. They are self-regulating with no thermostat needed and are available in many shapes, including squares, rectangles, discs, and cylinders (Figure 1). Several PTCs can be paralleled to provide heating over a larger area. PTC thermistor-based heating solutions are low cost, efficient, highly reliable with no moving parts, have long service lives, and can be mounted to various surfaces. Silicon PTC thermistors have a highly linear temperature coefficient (typically about 0.7%/°C). When needed, a linearization resistor can be added to enhance linearization.
Important considerations when specifying PTC thermistors include the switch temperature (Ts), which typically ranges from 50°C to 135°C, resistance at 25°C (R25), the surface area, and the maximum rated voltage (Vmax) (Figure 2). Ts is critical in heater designs. The maximum surface temperature of the PTC thermistor is only a few degrees higher than Ts, and the maximum heating temperature is directly related to Ts. The R25 needs to balance the need to minimize inrush currents upon start-up and be low enough to supply the power needed to heat the PTC thermistor to Ts. The thermistor cold resistance is an important factor determining the temperature ramp-up rate. A lower resistance produces higher I2R heating.
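To make the R25 trade-off concrete, here is a minimal Python sketch of the switch-on behavior of a PTC heater element; the supply voltage and cold resistance below are hypothetical values, not figures from any particular datasheet:

```python
# Initial (cold) behavior of a PTC heater element at switch-on,
# before self-heating drives its resistance up toward the switch point.
# Values are hypothetical, for illustration only.

supply_voltage = 12.0   # volts, DC supply
r_cold = 50.0           # ohms, resistance near 25 degrees C (R25)

inrush_current = supply_voltage / r_cold       # amps drawn at switch-on
initial_power = supply_voltage ** 2 / r_cold   # watts of I^2*R heating

print(f"Inrush current at switch-on: {inrush_current:.2f} A")
print(f"Initial heating power:       {initial_power:.2f} W")
```

A lower R25 raises both numbers: the element heats toward Ts faster, but the supply must tolerate the larger inrush current.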
The power dissipated and the surface area influence the heat-up and cool-down rates. Multiple PTC thermistors can be used to increase the effective surface area. The Vmax for these devices is typically specified for DC or 60Hz AC. These devices are used in a variety of automotive, communications, aerospace, consumer, and industrial applications such as:
- Providing additional heat inside the cabin of a car or truck with a diesel engine or heat diesel fuel in cold operating conditions before injection into the cylinders.
- In temperature compensated synthesizer voltage-controlled oscillators and crystal oscillators for temperature compensation (Figure 3).
- In electrically actuated linear wax motors, PTCs can provide the heat necessary to expand the wax. Wax motors are widely used in the aerospace industry to control fuel, hydraulic, and other oils. They are also used across a variety of systems where humidity or moisture negatively impacts the reliability and performance of electromagnetic-based solutions, including self-actuating thermostatic fluid mixing valves, door lock assemblies on washing machines, control valves in water heating systems, releasing the detergent dispenser door latch in dishwashers, opening and closing vents in greenhouses, and in paraffin microactuators in MEMs devices.
- Electric motors and power transformers often include PTC thermistors in their windings to provide over-temperature protection and prevent insulation damage in the case of overheating. In this application, a thermistor with a non-linear response curve is used. The thermistor resistance rises rapidly at the maximum allowable winding temperature triggering an external relay and turning off the current flow.
- Polymeric positive temperature coefficient (PPTC) devices can be used to provide overcurrent protection in electronic systems.
The resistance of NTC thermistors decreases exponentially with increasing temperatures. The steeper the resistance-temperature (RT) curve, the faster the resistance change. NTC thermistors have various uses, including temperature sensing and measurement, temperature protection devices and temperature compensation, and inrush current control.
An NTC thermistor placed near a heat-generating component such as a DC/DC converter or a CPU can be used to monitor the temperature and initiate temperature compensation actions as needed to protect sensitive devices from overheating. The temperature measurement circuit is typically a voltage divider composed of an NTC thermistor and a fixed-value resistor connected in series (Figure 4).
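As an illustration of how such a divider is typically read in firmware, the sketch below converts a raw ADC count into a temperature using the simplified Beta equation. The divider arrangement (NTC on the high side, fixed resistor to ground), the component values, the Beta coefficient, and the 12-bit ADC are all assumptions made for this example rather than details taken from the article:

```python
import math

# Assumed divider: Vcc -- NTC -- ADC node -- R_FIXED -- GND
R_FIXED = 10_000.0   # ohms, fixed divider resistor (assumed)
R0 = 10_000.0        # ohms, NTC resistance at T0 (assumed)
T0 = 298.15          # kelvin, reference temperature (25 degrees C)
BETA = 3950.0        # kelvin, NTC Beta coefficient (assumed)
ADC_MAX = 4095       # full-scale count of a 12-bit ADC

def ntc_temperature_c(adc_count: int) -> float:
    """Convert a raw ADC reading of the divider node to degrees Celsius."""
    ratio = adc_count / ADC_MAX              # node voltage as a fraction of Vcc
    r_ntc = R_FIXED * (1.0 / ratio - 1.0)    # solve the divider for the NTC resistance
    inv_t = 1.0 / T0 + math.log(r_ntc / R0) / BETA   # simplified Beta equation
    return 1.0 / inv_t - 273.15

# A mid-scale reading means R_NTC equals R_FIXED, i.e. roughly 25 degrees C.
print(f"{ntc_temperature_c(2048):.1f} C")
```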
NTC thermistors are placed on the substrate inside IGBT and MOSFET power modules to monitor the heatsink temperature and provide thermal protection. With the adoption of wide bandgap materials such as gallium-nitride (GaN) and silicon-carbide (SiC), the operating temperatures of power modules are rising, making it even more important to monitor the temperature accurately.
The contrast of LCDs changes as the ambient temperature changes. In applications that need to control LCD contrast, an NTC thermistor-based voltage divider is often used to adjust the drive voltage to compensate for changes in the ambient temperature.
Temperature-compensated crystal oscillators (TCXOs) are another example where NTC thermistors can be used to maintain stable operation as the ambient temperature changes. Just as in the case of using a PTC thermistor, an NTC thermistor can be used in certain TCXO applications to compensate for temperature changes. The oscillating frequency deviation can be controlled by inserting a compensation circuit with temperature properties that are the opposite of the crystal resonator. Separate compensation is needed for low-temperature and high-temperature operation and is provided by networks of an NTC thermistor, a capacitor, and a resistor (Figure 5).
Where do thermocouples and RTDs fit in?
While an NTC thermistor exhibits a continuous, small, incremental change in resistance correlated to temperature variations, thermocouples are voltage-based devices and reflect proportional changes in temperature through the varying voltage created between two dissimilar metals. Both are good for temperature sensing and control but for different sets of applications. Most NTC thermistors have an operating temperature range of about -50 to 250 °C, while thermocouples operate from about -200 to 1750 °C.
Compared with thermistors, thermocouples have lower accuracy and can be more difficult to use since they require a conversion of mV to temperature. An NTC thermistor can be used as part of a Wheatstone Bridge for applications that need higher accuracy. For measuring temperature, a Wheatstone Bridge is structured as an out-of-balance comparator where the out-of-balance voltage, ΔV, can be measured and related to the thermistor’s resistance, thereby measuring the temperature.
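For a concrete form of that relation, assuming the bridge is excited by a source voltage $V_s$, with fixed resistors $R_1$, $R_2$, and $R_3$ and the thermistor $R_T$ completing the fourth arm (this labeling is an assumption chosen for illustration; the article does not specify one), the out-of-balance voltage is

$$\Delta V = V_s \left( \frac{R_T}{R_3 + R_T} - \frac{R_2}{R_1 + R_2} \right)$$

which is zero when $R_T / R_3 = R_2 / R_1$ and grows as temperature shifts the thermistor's resistance away from that balance point.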
Resistance temperature detectors (RTDs), also called resistance thermometers, consist of a length of fine wire typically wrapped around a ceramic or glass core. The wire is a pure material, such as platinum, nickel, or copper, that has an accurate resistance/temperature relationship that’s used to provide a measurement of temperature. RTD elements are fragile and are often housed in protective probes.
Stable operation is important in many long-term applications. Each of these temperature sensor technologies can drift over time, depending on their materials, construction, and packaging. Epoxy-coated NTC thermistors typically drift about 0.2 °C per year. Hermetically-sealed NTC thermistors have a smaller 0.02 °C per year drift, while thermocouples have the largest drift and can drift up to 2 °C per year, largely due to chemical changes, especially oxidation.
RTDs have higher accuracy and repeatability and can replace thermocouples in some industrial applications below 600 °C. Because of their accuracy and good stability, NTC thermistors are often used in applications such as thermometers and fire detectors. Thermocouples are more durable and lower in cost, making them suitable for many industrial applications.
Thermal management is important in a wide variety of applications for an equally wide variety of purposes. In some cases, designers are concerned with cooling hot components; in other instances, it is necessary to generate heat to maintain optimal system operation or power wax-based linear motors. The diverse uses of thermal management have resulted in numerous thermal management components, including positive- and negative-temperature coefficient thermistors (PTCs and NTCs), thermocouples, and resistance-temperature detectors (RTDs), and numerous application circuit implementations.
Ceramic PTC Thermistor Heaters, Bourns
Crystals with Integrated Thermistors, ECS, Inc.
Design Considerations for PTC Heaters, Thermistors Unlimited
How to use temperature protection devices: chip NTC thermistors, TDK
Resistance thermometer, Wikipedia
Thermistors vs. Thermocouples, Ametherm
<urn:uuid:44e4f3c3-433a-4583-8c29-2c840a7bdaf0> | SAFETY COMES FIRST. TRADITIONAL VOLTAGE METERS ARE UNSAFE; THIS TOOL WILL SEND MULTIPLE ALARMS WHEN VOLTAGE IS DETECTED, AND THE TIP WILL LIGHT UP RED AND BEEP.
- VOLTAGE - The higher the detected voltage, or the closer it is to the voltage source, the higher the frequency at which it will beep. At the same time, the screen will be red or green: red means high voltage and a live wire detected, green means low voltage and a null wire detected.
- LARGE RANGE MEASUREMENT - 6000 count Auto Range Multimeter, DC Voltage up to 1000V, AC Voltage up to 750V, AC / DC Current up to 10A, Resistance up to 60MΩ, Capacitance up to 100mF, with K-type thermocouple, quickly solve automobile electrical problems and home.
- DUAL RANGE - Detects standard and low voltage (V AC / V AC) for more sensitive and flexible measurements. Press the S button to adjust the sensitivity and adapt the low range for doorbells, thermostats, irrigation wiring, etc. The NCV sensor automatically recognizes the voltage and displays it on the bar graph.
- PERFECT MULTIMETER - 2-3 times / sec for sampling, built-in stand for hands-free use, data hold, and 2.7-inch backlit LCD for visibility in low-light areas (The backlight and flashlight of the multimeter will work at the same time).
- NO CONTACT - With inductive NCV probe for AC voltage; Simply place the tip near a terminal strip, outlet, or supply wire. When the tip glows red and the pen beeps, you know there is voltage present. Live wire detector can automatically detect live or neutral wire. Ideal for breakpoint testing. Convenient Circuit Tester for Electricians, Homeowners.
- NCV SENSITIVITY TEST - If AC voltage is detected, the visible LEDs will glow according to the signal density (low, medium, high), the beep sounds at different frequencies to indicate this. If the signal is strong, the red light turns on. A general measurement multimeter.
- SAFETY MEASUREMENT - Meets the requirements of 600V CAT IV 1000V CAT III and provides you with the GREATEST SAFETY during work. We are committed to using a quality multimeter to improve the quality of life, so electrical problems are no longer a problem.
- COMPACT DESIGN - Bright LED flashlight for working in dark areas; Low power indicator when the battery voltage is less than 2.5V; Automatic shutdown after 3 minutes without operation or signal detection; Pocket size, pen hook allows you to carry it in your shirt pocket.
Package Includes: 1 x Voltage Sensitive Compact Electric Pen
QUESTIONS AND ANSWERS
Question: Is there any risk of electrical leakage while using this product?
Answer: This is an NCV non-contact safety pencil. It meets the requirements of 600V CAT IV 1000V CAT III, providing you with the HIGHEST SAFETY during work, so you don't have to worry about safety at all.
Question: What is the sensitivity of the test pen? Could it read 12 and 220v volts?
Answer: This product uses AC10, which can effectively reduce line misreading; in case of strong electricity, it gives a fast sound and light alarm. Also, the range you can test is DC voltage up to 1000V and AC voltage up to 750V, so please don't worry at all.
We are proud to offer free shipping on all orders to over 120 countries with express delivery couriers.
30 Day Money Back Guarantee
Don't like it? No problem. You can return it within 30 days and get your money back - no question asked!
All of our Payments gateways are 100% Safe and Secured. We deal with Stripe and Paypal Payments.
🤝 Customer Satisfaction
Over 7500 orders have been shipped successfully in the last 6 months
- We truly believe we carry some of the most innovative products in the world, and we want to make sure we back that up with a risk-free ironclad 30-day guarantee.
- If you don’t have a positive experience for ANY reason, we will do WHATEVER it takes to make sure you are 100% satisfied with your purchase.
<urn:uuid:e7b258ac-7d63-4f70-b20c-cc36eab1a64c> | n., plural: silent mutations
Definition: a point mutation that causes no significant effect on the protein function
A mutation is a change in the nucleotide sequence of a gene or a chromosome. When there is only one nucleotide involved, it is particularly referred to as a point mutation. Point mutation occurring in noncoding sequences often does not result in an altered amino acid sequence during translation.
However, if a mutation in the promoter sequence of a gene occurs, the effect may be apparent since the expression of the gene may cause changes in the amino acid sequence, as well as the structure and function of the protein product. Point mutations may be classified based on functionality: (1) nonsense mutations (2) missense mutations, and (3) silent mutations.
What is a silent mutation? What happens when a silent mutation happens? Read on to know more, especially about the definition of silent mutation in biology and examples.
Silent Mutation Definition
Silent mutations are mutations that arise when a single DNA nucleotide alteration inside a protein-coding region of a gene does not affect the amino acid sequence that makes up the gene’s protein. A mutation occurs when the DNA sequence of an organism changes.
What causes DNA mutations? Mutations can occur as a result of mistakes in DNA replication during cell division, mutagen exposure, or viral infection. When do mutations occur? Mutations can therefore arise during DNA replication if mistakes occur and are not addressed in a timely manner. Mutations can also develop as a result of environmental factors, such as smoking, sunshine, and radiation exposure.
Sometimes, mutations can cause numerous health issues. For example, tumor suppressor gene mutations may lead to cancer cells. A mutation of the adenomatous polyposis coli (APC) gene is associated with a variety of cancers, such as familial adenomatous polyposis (a type of colorectal cancer). In particular, a silent mutation in this gene has been found to affect the translation of an entire exon. (Montera et al., 2001)
Others may not affect the organism at all. Besides silent mutations, there are other kinds of mutations involving only a single nucleotide, such as missense and nonsense mutations:
Or watch this vid about point mutations, including silent mutations:
A silent mutation (quiet mutation) is a form of mutation that does not cause a major change in the amino acid. As a result, the protein remains active and functional. Because of this, the changes are viewed as though they are neutral in terms of evolution. Silent mutations occur in non-coding regions or inside exons as opposed to synonymous mutations, which occur mostly within exons.
Compare: missense mutation; nonsense mutation
See also: point mutation; mutation, substitution mutation
The genetic code converts nucleotide sequences in mRNA to amino acid sequences. This mechanism encodes genetic information using groups of three nucleotides along the mRNA, known as codons.
With a few exceptions (such as UGA, which normally serves as a stop codon but can also encode tryptophan in mammalian mitochondria), a given set of three nucleotides almost invariably specifies the same amino acid. The fact that most amino acids are specified by several codons shows that the genetic code is degenerate: different codons can result in the same amino acid.
Synonyms are codons that code for the same amino acid. When the changed messenger RNA (mRNA) is translated, silent mutations result in no change in the amino acid or amino acid functionality. If the codon AAA is changed to AAG, the identical amino acid – lysine – is incorporated into the peptide chain.
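To make the degeneracy concrete, the short Python sketch below classifies a single-codon change as silent, missense, or nonsense; only a handful of entries from the standard genetic code are included, purely for illustration:

```python
# Classify a point mutation by comparing the amino acids encoded by the
# original and mutated codons. Only a few entries of the standard genetic
# code are listed here, for brevity.
CODON_TABLE = {
    "AAA": "Lys", "AAG": "Lys",                   # lysine
    "GAA": "Glu", "GAG": "Glu",                   # glutamate
    "UCA": "Ser", "UCG": "Ser",                   # serine
    "UAA": "Stop", "UAG": "Stop", "UGA": "Stop",  # stop codons
    "UGG": "Trp",                                 # tryptophan
}

def classify_point_mutation(original: str, mutated: str) -> str:
    before = CODON_TABLE[original]
    after = CODON_TABLE[mutated]
    if before == after:
        return "silent"
    if after == "Stop":
        return "nonsense"
    return "missense"

print(classify_point_mutation("AAA", "AAG"))  # silent   (Lys -> Lys)
print(classify_point_mutation("AAA", "GAA"))  # missense (Lys -> Glu)
print(classify_point_mutation("UGG", "UGA"))  # nonsense (Trp -> Stop)
```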
A nonsynonymous mutation that occurs at the genomic or transcriptional levels changes the amino acid sequence of the protein product. The main structure of a protein is its amino acid sequence. A replacement of one amino acid for another can degrade protein function and tertiary structure; however, depending on how closely the characteristics of the amino acids involved in the swap correspond, the consequences may be mild or acceptable.
A nonsense mutation, or the premature insertion of a stop codon, can change the fundamental structure of a protein. A shortened protein is generated in this scenario. Protein function and folding are affected by the location of the stop codon as well as the quantity and composition of the sequence lost.
The secondary structure of mRNA is altered by silent mutations. Protein secondary structure is made up of interactions between the atoms of a polypeptide chain’s backbone, excluding the R-groups. The alpha-helix, which is a right-handed helix formed by hydrogen bonding between the nth and n+4th amino acid residues, is a frequent kind of secondary structure.
The beta-sheet is another typical form of secondary structure that has a right-handed twist, can be parallel or anti-parallel depending on the orientation of the bound polypeptides, and is made up of hydrogen bonds between the carbonyl and amino groups of two polypeptide chains.
Silent mutations have an impact on protein folding and function. A misfolded protein may usually be refolded with the aid of molecular chaperones. RNA often forms two common misfolded proteins by folding together and being trapped in distinct conformations, and it has difficulties selecting the preferred particular tertiary structure due to competing configurations.
RNA-binding proteins can help with RNA folding difficulties; however, when a silent mutation occurs in the mRNA chain, these chaperones are unable to connect to the molecule and steer the mRNA into the right shape.
According to a new study, silent mutations might influence protein structure and function. Protein folding time and rate can be changed, resulting in functional deficits.
Cause of Silent Mutation
Silent mutations, like all other mutations, can be induced by several mutagens – substances that cause changes in DNA sequence. Mutagens are classified into three types: biological, chemical, and physical.
- Biological mutagens – Mutations are induced by live creatures or life-sustaining mechanisms.
- Errors in DNA replication can modify the DNA sequence – for example, insertion of the incorrect nucleotide into the DNA sequence – and result in mutations, including silent mutations.
- Some viruses inject copies of their genetic material into the host DNA. This sort of large-scale alteration does not generally result in silent mutations.
- Transposons and Insertion Sequences – Some DNA fragments can self-relocate, or “jump” from one section of DNA to another. Again, this frequently leads to large-scale alterations rather than quiet mutations.
- Chemical Mutagens – A variety of substances react with DNA, chemically altering the nucleotides.
- Base Analogs – Chemicals that have a similar structure to nucleotide bases. These can be integrated into the mutated DNA sequence and produce base-pair mismatches, resulting in silent (and other) mutations.
- Alkylating Agents– Chemicals that react with nucleotide bases and alter them by adding different functional groups. This modifies the nucleotide base-pairing and the DNA sequence, resulting in silent (and other) mutations.
- Physical mutagens are high-energy radiation that physically changes or breaks the DNA strand.
- UV radiation produces thymine dimers, which interfere with DNA replication and generate additional mistakes. This does not frequently result in silent mutations.
- X-rays are a kind of high-energy radiation that may physically disrupt DNA strands and fracture the molecule. This causes far more issues than silent mutations.
Silent Mutation Examples
Here are examples that define silent mutations:
The Redundant Genome
DNA is read in three-nucleotide units called codons. Each codon defines an amino acid, with a few exceptions serving as stop and start signals. Different codons can sometimes designate the same amino acid. This redundancy of genetic code allows it to be more flexible. As a result, a quiet mutation almost often remains unreported.
The silent mutation is a real shift in DNA from thymine to cytosine. This mutation might have resulted from an error in DNA replication or from some type of repair that occurred after the DNA was damaged. Regardless, these three-nucleotide codons instruct the ribosome and its machinery to bind a lysine amino acid.
In this situation, regardless of the silent mutation, the whole structure of the protein will stay unchanged. The protein will operate identically with the same amino acid structure until it is exposed to a different environment. A silent point mutation can also occur at the protein level, with no functional effect on the protein.
Amino Acid Groups
All 21 amino acids can be called for by the four nucleotides in groups of three codons. The amino acids are organized below in Figure 3 according to their structure and side chains. These characteristics have a direct impact on how they interact with other amino acids and how they affect molecules in the environment.
A silent mutation, which might easily involve more than one nucleotide, has the potential to modify a whole amino acid, or perhaps a sequence of amino acids. The effect of changing serine to a threonine may be minor.
The two amino acids belong to the same class and have extremely similar structures. This implies they will have a comparable chemical response to the molecules in their vicinity. This will affect the overall structure and impact of the protein. If the effect is insignificant, the alteration is referred to as a quiet mutation.
Place within Protein Structure
Several amino acids can be critical to a protein’s overall structure or functioning. Many proteins have an active site that other molecules must bind to. This site is made up of a particular amino acid sequence.
Certain amino acids and their side chains will have the exclusive capacity to interact with another molecule when folded correctly. If certain amino acids are mutated, the functioning of bonding may be severely hampered. This can alter a protein’s function or usefulness.
Other proteins on the interior of the molecule have complicated structures that must be present for certain activities to be performed. Many proteins go through a conformational shift, which is a shape alteration.
This is triggered by electrical stimulation or the binding of a chemical to the protein, such as a coenzyme or a substrate. The conformational shift, which changes the structure of the protein, can force molecules together or tear them apart.
Within Non-coding DNA
Many parts of the DNA are employed structurally, but their complete function is unknown. There are several examples of individuals with significantly different DNA yet seemingly the same traits.
These alterations, particularly minor structural changes in the DNA, are not important until they alter the interaction of the coding DNA with the environment. A silent mutation might easily occur in these locations without being noticed, but several mutations may begin to affect a population over time.
Bacteria, strangely, often have a single circle of DNA that contains all of the information they require. The human genome, on the other hand, is divided into several chromosomes that are bundled and maintained by specific proteins so that they may be coiled up during cell division.
One theory for how this much more sophisticated DNA came to be is that some quiet mutations began building DNA structures. More information can be stored in a more compact genome, which may have contributed to the evolution of life from single-celled creatures to more sophisticated forms.
Research and Clinical Applications
Silent mutations have been used in experiments and may have clinical effects.
Multi-Drug Resistance Gene 1
A silent mutation in the multidrug resistance gene 1 (MDR1), which encodes a cellular membrane pump that removes medicines from the cell, can slow translation at a specific place, allowing the peptide chain to fold into an unexpected shape. As a result, the mutant pump is less functional.

Because the nucleotide change does not modify the amino acid being translated, 99.8 percent of genes that suffer mutations are considered silent. Although silent mutations are not believed to influence the phenotypic outcome, some, such as the one in the multidrug resistance gene 1, demonstrate otherwise. MDR1 encodes the P-glycoprotein, which aids in drug elimination in the body.

It can be found in the intestines, the liver, the pancreas, and the brain. MDR1 is found in the same sites as CYP3A4, an enzyme that aids in the removal of poisons or medications from the liver and intestines. Silent mutations such as the one in MDR1 can nonetheless cause a shift in phenotypic response.
When mice did not have enough of the MDR1 gene product, their bodies could not clear drugs such as ivermectin or cyclosporine, allowing the drugs to build up to toxic levels.

MDR1 contains more than fifty single nucleotide polymorphisms (SNPs), or alterations in the nucleotide base sequence. In exon 26 of MDR1, the 3435C allele can mutate to 3435T, which changes the codon to one read by a less commonly used transfer RNA, resulting in alterations in the outcome of translation. This is an illustration of how not all silent mutations are truly "silent".

The multidrug resistance gene variants Exon 26 C3435T, Exon 21 G2677T/A, and Exon 12 C1236T have been examined for SNPs that occur at the same time, changing the phenotypic "function". This shows a haplotype dependence between exon 26 and other polymorphic exons. For example, efavirenz and nelfinavir are two medications that can help reduce HIV infection in the body. When the SNP from exon 26 is combined with additional SNP exons, the medicines' ability to suppress HIV infection is reduced.

Although the patient has a decreased quantity of the virus when the TT genotype in exon 26 is present, the infection can spread more readily when the genotype is CC or CT, leaving the MDR1 gene practically defenseless. These variations in the bases of MDR1 exon 26 reveal a link between MDR1 gene polymorphisms and the effectiveness of antiretroviral medicines in reducing HIV infection.
Vaccination for polio
Steffen Mueller of Stony Brook University created a live vaccine for polio in which the virus was modified to replace naturally occurring codons in the genome with synonymous codons. As a result, the virus could still infect and proliferate, but at a slower rate. Mice injected with this vaccine developed resistance to the wild polio strain.
Silent mutations introduced into a gene of interest can be beneficial in molecular cloning operations to establish or delete recognition sites for restriction enzymes.
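As a hedged illustration of that cloning trick (the enzyme, sequence, and codons below are a generic textbook-style example, not taken from a specific protocol): EcoRI recognizes the sequence GAATTC, and if that site happens to span a glutamate codon, swapping GAA for the synonymous GAG removes the site without changing the encoded protein.

```python
# Sketch: deleting a restriction site with a silent change (assumed example).
ECORI_SITE = "GAATTC"

original = "ATGGAATTCAAA"   # codons: ATG-GAA-TTC-AAA  (Met-Glu-Phe-Lys)
edited   = "ATGGAGTTCAAA"   # GAA -> GAG is synonymous; the protein is identical

print(ECORI_SITE in original)  # True  -> EcoRI would cut this construct
print(ECORI_SITE in edited)    # False -> the recognition site is gone
```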
Effect on dopamine receptor D2 gene activity
Silent mutations have also been linked to mental disorders. One silent mutation makes the messenger RNA of the dopamine receptor D2 gene less stable, so it degrades more quickly, resulting in the gene being underexpressed.

Both an ATG-to-GTG mutation (nonsynonymous) and a CAT-to-CAC mutation (synonymous) produce deviations from typical pain sensitivity. Both the low-pain-sensitivity and the high-pain-sensitivity variants carry these two mutations. The low-pain-sensitivity variant has an additional CTC-to-CTG silent mutation, whereas the high-pain-sensitivity variant does not and shares the CTC sequence with average pain sensitivity at this site.
Frequently Asked Questions
How frequently do silent mutations occur?
Because the nucleotide change does not modify the amino acid being translated, 99.8 percent of genes that suffer mutations are considered silent.
How do you identify a silent mutation?
A silent mutation is identified by comparing the DNA change with the resulting protein. If the substituted codon specifies the same amino acid, or one from the same class with an extremely similar structure, the protein responds chemically to its surroundings in much the same way, and its overall structure and activity are essentially unchanged. If the effect is insignificant, the alteration is referred to as a silent mutation. Checking the protein matters because a mutation that looks synonymous at the DNA level can still yield a truncated or elongated protein, for example through altered splicing.
Where are silent mutations more likely to happen?
A silent mutation occurs when a change in the DNA sequence inside a protein-coding region of a gene has no effect on the amino acid sequence that makes up the protein. Such changes most often occur at the codon's third position, commonly known as the wobble position.
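To see why the third position is special, here is a small sketch (the codon subset and helper code are illustrative) that enumerates every single-base substitution of one glycine codon and classifies each as silent or missense. Every third-position change leaves the amino acid untouched, while every change at the first two positions alters it.

```python
# Sketch: classify every single-base substitution of the codon GGT (glycine).
# Codon assignments are the standard genetic code, written with DNA letters.
TABLE = {
    "GGT": "Gly", "GGC": "Gly", "GGA": "Gly", "GGG": "Gly",
    "AGT": "Ser", "CGT": "Arg", "TGT": "Cys",
    "GAT": "Asp", "GCT": "Ala", "GTT": "Val",
}

codon = "GGT"
for pos in range(3):
    for base in "ACGT":
        if base == codon[pos]:
            continue
        mutant = codon[:pos] + base + codon[pos + 1:]
        kind = "silent" if TABLE[mutant] == TABLE[codon] else "missense"
        print(f"position {pos + 1}: {codon} -> {mutant}  ({kind})")
# All three position-3 substitutions come out silent (the wobble pattern);
# every substitution at positions 1 and 2 is missense.
```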
Mutated codons code for what in silent mutations?
Silent mutations change one of the letters of a triplet codon, but the amino acid that is coded for stays identical, or at least retains comparable biochemical characteristics, despite the single base change. A silent mutation is neutral if the protein it codes for is still functional.
How does a point mutation affect the protein?

A point mutation can have one of three effects on a protein:

- A change to a different amino acid, known as a missense mutation;
- A change into a premature termination (stop) codon, known as a nonsense mutation; or
- The creation of a new sequence that is silent in terms of the protein sequence but changes some aspect of gene function, such as altered RNA splicing or transcriptional expression levels. Thus, even silent mutations may affect splicing or transcriptional control.
Is sickle cell anemia caused by a point mutation? Is it caused by a silent mutation?

A point mutation is the cause of sickle cell anemia. A single nucleotide mutation in the HBB gene leads to the replacement of glutamic acid with valine, and this results in the synthesis of an altered hemoglobin protein that makes the red blood cells acquire a sickle shape. Thus, sickle cell anemia is caused not by a silent mutation but by a missense mutation.
Learn Productivity Tips and Tricks for the Debugger in Visual Studio
Applies to: Visual Studio, Visual Studio for Mac, Visual Studio Code
Read this topic to learn a few productivity tips and tricks for the Visual Studio debugger. For a look at the basic features of the debugger, see First look at the debugger. In this topic, we cover some areas that are not included in the feature tour.
Pin data tips
If you frequently hover over data tips while debugging, you may want to pin the data tip for the variable to give yourself quick access. The variable stays pinned even after restarting. To pin the data tip, click the pin icon while hovering over it. You can pin multiple variables.
You can also customize data tips in several other ways, such as keeping a data tip expanded (a sticky data tip), or making a data tip transparent. For more information, see View data values in DataTips in the code editor.
Edit your code and continue debugging (C#, VB, C++)
In most languages supported by Visual Studio, you can edit your code in the middle of a debugging session and continue debugging. To use this feature, click into your code with your cursor while paused in the debugger, make edits, and press F5, F10, or F11 to continue debugging.
For more information on using the feature and on feature limitations, see Edit and Continue.
Edit XAML code and continue debugging
To modify XAML code during a debugging session, see Write and debug running XAML code with XAML Hot Reload.
Debug issues that are hard to reproduce
If it is difficult or time-consuming to recreate a particular state in your app, consider whether the use of a conditional breakpoint can help. You can use conditional breakpoints and filter breakpoints to avoid breaking into your app code until the app enters a desired state (such as a state in which a variable is storing bad data). You can set conditions using expressions, filters, hit counts, and so on.
To create a conditional breakpoint
Right-click a breakpoint icon (the red sphere) and choose Conditions.
In the Breakpoint Settings window, type an expression.
If you are interested in another type of condition, select Filter instead of Conditional expression in the Breakpoint Settings dialog box, and then follow the filter tips.
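As a hypothetical scenario for the steps above (the function and data below are invented for illustration, and conditional breakpoints are not tied to one language; the same idea applies whether you are debugging .NET, C++, or Python code, though the exact menu labels differ slightly in VS Code): suppose a calculation misbehaves for a single record among thousands. Setting a breakpoint on the call and giving it a condition such as `order["id"] == 4812` stops the debugger only for that record instead of on every iteration.

```python
# Hypothetical scenario for a conditional breakpoint (names are invented).
def apply_discount(order):
    # Imagine a subtle bug that only shows up for one particular order.
    return round(order["total"] * 0.9, 2)

orders = [{"id": i, "total": 100.0 + i} for i in range(10_000)]

for order in orders:
    # Set a breakpoint on the next line with the condition: order["id"] == 4812
    discounted = apply_discount(order)
```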
Configure the data to show in the debugger
For C#, Visual Basic, and C++ (C++/CLI code only), you can tell the debugger what information to show using the DebuggerDisplay attribute. For C++ code, you can do the same using Natvis visualizations.
Change the execution flow
With the debugger paused on a line of code, use the mouse to grab the yellow arrow pointer on the left. Move the yellow arrow pointer to a different point in the code execution path. Then you use F5 or a step command to continue running the app.
By changing the execution flow, you can do things like test different code execution paths or rerun code without restarting the debugger.
Often you need to be careful with this feature, and you see a warning in the tooltip. You may see other warnings, too. Moving the pointer cannot revert your app to an earlier application state.
Track an out-of-scope object (C#, Visual Basic)
It's easy to view variables using debugger windows like the Watch window. However, when a variable goes out of scope in the Watch window, you may notice that it is grayed out. In some app scenarios, the value of a variable may change even when the variable is out of scope, and you might want to watch it closely (for example, a variable may get garbage collected). You can track the variable by creating an Object ID for it in the Watch window.
To create an object ID
Set a breakpoint near a variable that you want to track.
Start the debugger (F5) and stop at the breakpoint.
Find the variable in the Locals window (Debug > Windows > Locals), right-click the variable, and select Make Object ID.
You should see a $ plus a number in the Locals window. This variable is the object ID.
Right-click the object ID variable and choose Add Watch.
For more information, see Create an Object ID.
View return values for functions
To view return values for your functions, look at the functions that appear in the Autos window while you are stepping through your code. To see the return value for a function, make sure that the function you are interested in has already executed (press F10 once if you are currently stopped on the function call). If the window is closed, use Debug > Windows > Autos to open the Autos window.
In addition, you can enter functions in the Immediate window to view return values. (Open it using Debug > Windows > Immediate.)
You can also use pseudovariables in the Watch and Immediate windows to view return values.
Inspect strings in a visualizer
When working with strings, it can be helpful to view the entire formatted string. To view a plain text, XML, HTML, or JSON string, click the magnifying glass icon while hovering over a variable containing a string value.
A string visualizer may help you find out whether a string is malformed, depending on the string type. For example, a blank Value field indicates the string is not recognized by the visualizer type. For more information, see String Visualizer Dialog Box.
For a few other types such as DataSet and DataTable objects that appear in the debugger windows, you can also open a built-in visualizer.
Break into code on handled exceptions
The debugger breaks into your code on unhandled exceptions. However, handled exceptions (such as exceptions that occur within a try/catch block) can also be a source of bugs and you may want to investigate when they occur. You can configure the debugger to break into code for handled exceptions as well by configuring options in the Exception Settings dialog box. Open this dialog box by choosing Debug > Windows > Exception Settings.
The Exception Settings dialog box allows you to tell the debugger to break into code on specific exceptions. For example, you can have the debugger break into your code whenever a System.NullReferenceException occurs. For more information, see Managing exceptions.
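The same idea carries over to other runtimes supported by these debuggers; for Python, for instance, VS Code's Breakpoints pane exposes checkboxes for raised (handled) and uncaught exceptions. The snippet below is a hypothetical example of a handled exception you might still want to break on, because the except block quietly hides a typo:

```python
# Hypothetical example: the except block swallows the error you want to see.
# Configuring the debugger to break on raised (handled) exceptions stops
# execution at the raise site inside lookup(), instead of never stopping.
def lookup(config: dict, key: str) -> str:
    return config[key]           # raises KeyError when the key is missing

def get_timeout(config: dict) -> int:
    try:
        return int(lookup(config, "timeout"))
    except KeyError:
        return 30                # handled -> the typo below goes unnoticed

print(get_timeout({"timeoutt": "5"}))  # prints 30 because of the misspelled key
```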
Debug deadlocks and race conditions
If you need to debug the kinds of issues that are common to multithreaded apps, it often helps to view the location of threads while debugging. You can do this easily using the Show Threads in Source button.
To show threads in your source code
While debugging, click the Show Threads in Source button in the Debug toolbar.
Look at the gutter on the left side of the window. On this line, you see a thread marker icon that resembles two cloth threads. The thread marker indicates that a thread is stopped at this location.
Notice that a thread marker may be partially concealed by a breakpoint.
Hover the pointer over the thread marker. A DataTip appears. The DataTip tells you the name and thread ID number for each stopped thread.
You can also view the location of threads in the Parallel Stacks window.
Get more familiar with how the debugger attaches to your app (C#, C++, Visual Basic, F#)
To attach to your running app, the debugger loads symbol (.pdb) files generated for the exact same build of the app you are trying to debug. In some scenarios, a little knowledge of symbol files can be helpful. You can examine how Visual Studio loads symbol files using the Modules window.
Open the Modules window while debugging by selecting Debug > Windows > Modules. The Modules window can tell you what modules the debugger is treating as user code, or My Code, and the symbol loading status for the module. In most scenarios, the debugger automatically finds symbol files for user code, but if you want to step into (or debug) .NET code, system code, or third-party library code, extra steps are required to obtain the correct symbol files.
You can load symbol information directly from the Modules window by right-clicking and choosing Load Symbols.
Sometimes, app developers ship apps without the matching symbol files (to reduce the footprint), but keep a copy of the matching symbol files for the build so that they can debug a released version later.
To find out how the debugger classifies code as user code, see Just My Code. To find out more about symbol files, see Specify symbol (.pdb) and source files in the Visual Studio debugger.
Kale Diagnostics Research
Two Peas In A Pod: PCOS & Hypothyroidism
Polycystic Ovarian Syndrome (PCOS) is one of the most common endocrine disorders, impacting roughly 6-10% of reproductive-aged women. Unfortunately, this number does not encompass a large percentage of women who are undiagnosed or misdiagnosed, as the symptoms of PCOS can overlap with symptoms of other conditions such as hypothyroidism. But did you know 1 in 4 women with PCOS also have hypothyroidism? [2,3] If you have been feeling misheard or unsettled in the diagnosis you have received, it could be because you don’t simply fit into one box. It could be that, in order to achieve your goals, it is imperative to consider that you may not be able to be defined by one diagnosis.

As PCOS and hypothyroidism have been so commonly diagnosed together, an ongoing, debated question persists: does PCOS cause hypothyroidism, does hypothyroidism cause PCOS, or are they so intertwined that they ironically coexist?
According to the Rotterdam criteria, which most physicians use to diagnose the condition, a female is diagnosed with PCOS if she has 2 out of 3 of the following criteria:
1. Hyperandrogenism: high levels of androgen hormones such as testosterone
2. Oligo/Anovulation: lack of ovulation or inconsistent ovulation which commonly leads to irregular or absent menstruation
3. Cystic ovaries: ovaries contain ‘cyst-like’ appearances
*As we discuss further, keeping these in the back of your mind will help to piece together the connection between PCOS and hypothyroidism.
Foremost before we proceed, it is also important to understand the function of the thyroid gland. Most associate the thyroid gland with weight as it is commonly recognized in hypothyroidism most experience weight gain and in hyperthyroidism the opposite is true. But the thyroid gland is actually a pivotal part of how your entire body functions via its control of your metabolism. Metabolism is an umbrella term for the countless areas the thyroid gland helps to regulate: breathing, heart rate, digestion, reproductive system, brain development, bone and muscle strength, glucose regulation, lipid levels such as cholesterol and so much more . Every single cell in your body has a receptor for thyroid hormone meaning the presence of thyroid hormone helps every cell in your body to function optimally. As with other hormones throughout the body, thyroid hormones are messengers. The production and secretion of one hormone sends a message to another part of the body to produce and secrete more hormones or send reverse messaging to slow down production to keep all organs, glands, cells, tissues and structures functioning optimally. The messaging never ends; it is a constant forward and reversal communication network to keep your body in a state of homeostasis.
Through this vast messaging network, communication of thyroid hormones in conjunction with other hormones throughout the body lead some to speculate the potential that hypothyroidism can cause PCOS.
Thyrotropin Releasing Hormone (TRH) is considered the master regulator of your thyroid gland. It is released from an area of your brain called the hypothalamus and controls how much thyroid hormone is produced and secreted from the thyroid gland. In hypothyroidism, this hormone is decreased, which causes an increase in TSH (Thyroid Stimulating Hormone) in an attempt to create more thyroid hormone [3,6]. This alteration in hormones also increases a hormone called Prolactin, which is responsible for hundreds of functions throughout the body but most notably for milk production (stress, inflammation and tumors also have the potential to raise prolactin levels). High prolactin levels alter and decrease the production of FSH and LH: hormones required for ovulation and a normal menstrual cycle [3,6]. Remember, hormones are a large network of communication with constant forward and reverse messaging, so you can imagine this as a long telephone chain of one hormone impacting the function of another. Follicle Stimulating Hormone (FSH) and Luteinizing Hormone (LH) function by sending messages to the ovaries to develop a follicle. Through this follicle, an egg is matured and released every month in what is known as ovulation. When FSH and LH are decreased, this follicle does not develop appropriately and an egg cannot efficiently be released for ovulation to occur. The result is ‘cyst-like’ structures on the ovaries, which in reality are immature follicles that were unable to release an egg; this is what coins the term polycystic ovaries, a hallmark feature of PCOS. Furthermore, if an egg were released, its location becomes a ‘gland-like’ structure called the Corpus Luteum on the surface of the ovary. This is responsible for Progesterone production, but when ovulation does not occur, progesterone is never produced, leading to irregular or absent menstrual cycles.
Low levels of thyroid hormone, as seen in hypothyroidism, also lead to a decrease in a hormone called the Sex-Hormone Binding Globulin (SHBG). The function of this hormone is exactly what it sounds like: to bind to sex hormones and transport them throughout your body. When hormones are not bound to SHBG, they are free and active to influence other hormones, tissues and cells. This contributes to a risk factor for PCOS by creating an increase in free, unbound, active testosterone, leading to hyperandrogenism (a defining feature of PCOS). High androgens, or testosterone levels, can act directly on the ovaries to further disrupt ovulation and a normal menstrual cycle by altering levels of estrogen and progesterone production. This can also contribute to common symptoms associated with PCOS related to high levels of androgens, such as unwanted hair growth, acne and an oily scalp.
We could continue on forever about the potential mechanisms by which the thyroid may contribute to PCOS, but the last clearly studied mechanism to mention is the role the thyroid gland plays in glucose regulation.
As mentioned, every cell in the body has a receptor for thyroid hormone and, among many things, this receptor helps thyroid hormone to bind and contributes to the movement of glucose inside of the cell for utilization and energy. Lower levels of thyroid hormone throughout the body are associated with lower sensitivity of cells to insulin. [2,3,6] Insulin often gets a bad rep, but it is not a bad hormone. Naturally, after you eat a meal glucose levels will rise and insulin levels will follow as its presence is required to move glucose inside of your cells. T3 (the active form of thyroid hormone) controls and regulates the release of insulin but in order for T3 to be present it has to first be converted from T4 [2,3]. When levels of T3 are low whether due to primary hypothyroidism, subclinical hypothyroidism, Hashimoto’s Thyroiditis or conversion complications, the incidence of insulin resistance is high [2,3,6,9,10].
If you have been diagnosed with PCOS you are most likely extremely familiar with the term insulin resistance but maybe have not fully comprehended what it actually means. For simplicity, in insulin sensitive states after a meal glucose rises, insulin rises, insulin binds to your cell and opens up the gates to allow glucose to flow inside. In insulin resistance, when your cells see insulin in the bloodstream they do not allow insulin to easily bind and therefore glucose cannot be moved inside the cell as efficiently as it should. When this occurs, levels of circulating blood glucose and insulin generally remain high rather than returning to a resting, fasting state as it should once a meal has been digested.
While the presence of thyroid hormone is one requirement for insulin sensitivity, there are other factors that can decrease it, involving a combination of genetics and lifestyle factors: dietary choices, physical activity, smoking, mineral deficiencies, stress, and inflammation. However, in hypothyroidism, it has been found that even if insulin resistance is not present, fasting insulin levels remain high, showing there is still some degree of decreased sensitivity.

High levels of insulin in the bloodstream can directly stimulate the production of more androgens (testosterone) and cause a further decrease in the Sex-Hormone Binding Globulin, which, as mentioned, allows even more free, unbound testosterone to circulate. Among many other impacts, high insulin also increases fat storage, which can lead to the weight gain commonly, but not always, seen in women with PCOS.
Bear with us as we bring this full circle. We've mentioned a few ways hypothyroidism may be causing PCOS, but now let's look at the potential for the presence of PCOS to cause hypothyroidism.

Insulin resistance (IR) has been identified as a primary root cause of PCOS, and when it is addressed, countless women have been able to put their diagnosis into remission. As we mentioned, IR has the potential to be caused by hypothyroidism, but there are other factors that can cause insulin resistance, signifying that insulin resistance could be present first and therefore, in a reverse messaging pattern, cause or exacerbate hypothyroidism.
It is also well known that PCOS is associated as a pro-inflammatory condition meaning women with PCOS have some degree of chronic inflammation either systemically or locally . Inflammation is an immune response created by the body as a protective mechanism when the body is exposed to something which threatens the safety or homeostatic status of the body. In healthy responses, this can be a positive aspect but when excess, chronic or repeated inflammation is present it poses a problem. Chronic exposure to the ‘markers’ released from the immune system which creates inflammation interfere with various vital functions throughout the body. In relation, they interrupt the activity of an enzyme known as deiodinase [3,6]. Referring back, we mentioned T3 regulates insulin but has to be converted from T4 to be active. In order to be converted, it requires the enzyme deiodinase. Without proper functioning of this enzyme most of your thyroid hormone may be stuck in an inactive form. Therefore when chronic inflammation is present, as often seen with PCOS or caused by PCOS, there is less active thyroid hormone impairing optimal functioning of the cells and body while contributing to insulin resistance.
Additionally, as previously discussed, insulin resistance (whether caused by hypothyroidism or other external factors) generally leads to the development of fat storage. These 'fat cells', formally termed adipocytes, have the ability to create and release these inflammatory markers all on their own. Estrogen, which is commonly elevated, or elevated relative to progesterone, in women with PCOS, also has the capability to increase these inflammatory markers. It has therefore been well studied that the high estrogen levels associated with PCOS often contribute to an increase in autoimmune conditions such as Hashimoto's hypothyroidism.

Autoimmune hypothyroidism, as in the case of Hashimoto's, occurs when the body mounts an immune response against the thyroid gland. Essentially, the body creates antibodies (TPO), and these antibodies attack and destroy the thyroid gland due to a dysregulated immune system or response. In women with PCOS, Hashimoto's has been found to be three times more likely than in women without PCOS, and this has been strongly correlated with exacerbation by elevated estrogen levels and inflammation [12,13].

It has also been established that a statistically significant proportion of women with PCOS have a goiter (an increased thyroid gland size) or nodules present on their thyroid gland. A goiter or nodules can occur despite normal levels of TSH and T3/T4 but have still been found to impact levels of prolactin, and therefore ovarian function, as well as being associated with higher levels of the TPO antibody present in Hashimoto's [6,13].
With so much of the focus on hypothyroidism, it would be remiss to not briefly mention hyperthyroidism and its potential role in insulin resistance and PCOS.
In 50% of individuals with hyperthyroidism, altered glucose metabolism and insulin resistance have also been found. Where hypothyroidism mostly causes insulin resistance peripherally, at the level of tissues and muscles, hyperthyroidism has been found to cause insulin resistance at the level of the liver. When your body has a higher demand for glucose than what you have consumed through your diet, it has the ability to create its own glucose from stores and other sources in the liver in a process known as gluconeogenesis. This process is regulated by your thyroid hormones, and in individuals who have elevated thyroid hormones this process is increased. Therefore your body is creating and being exposed to higher levels of blood glucose despite what you may have consumed through your diet, and consequently more insulin is produced.
Additionally, in hyperthyroidism the rate at which your stomach empties and digests is abnormally increased . Therefore your body is able to break down carbohydrates more quickly leading to higher levels of blood glucose after a meal which in turn also increases the demand for insulin. The link between hyperthyroidism and PCOS has not been as readily established but with the predisposition for insulin resistance and its role in menstrual abnormalities it should not be completely overlooked and may provide some insight into individuals labeled as ‘lean PCOS’.
If you are someone diagnosed with PCOS or diagnosed with hypothyroidism and you’ve made it this far and are wondering: so, what caused what? Truthfully, we may never know. It’s a chicken or the egg situation and the evidence could go around and around showing how PCOS may have caused hypothyroidism and how hypothyroidism may have caused PCOS. But one thing is for certain: the co-occurence of the two is extremely common and the tightly intertwined network of hormone communication proves it is not by coincidence.
1. Smet ME, McLennan A. Rotterdam criteria, the end. Australas J Ultrasound Med. 2018 May 17;21(2):59-60. doi: 10.1002/ajum.12096. PMID: 34760503; PMCID: PMC8409808.
2. Spira D, Buchmann N, Dörr M, Markus MRP, Nauck M, Schipf S, Spranger J, Demuth I, Steinhagen-Thiessen E, Völzke H, Ittermann T. Association of thyroid function with insulin resistance: data from two population-based studies. Eur Thyroid J. 2022 Feb 28;11(2):e210063. doi: 10.1530/ETJ-21-0063. PMID: 35085102; PMCID: PMC8963174.
3. Gierach M, Gierach J, Junik R. Insulin resistance and thyroid disorders. Endokrynol Pol. 2014;65(1):70. https://go.openathens.net/redirector/liberty.edu?url=https://www.proquest.com/scholarly-journals/insulin-resistance-thyroid-disorders/docview/2464207007/se-2. doi: https://doi.org/10.5603/EP.2014.0010.
4. InformedHealth.org [Internet]. Cologne, Germany: Institute for Quality and Efficiency in Health Care (IQWiG); 2006-. How does the thyroid gland work? 2010 Nov 17 [Updated 2018 Apr 19]. Available from: https://www.ncbi.nlm.nih.gov/books/NBK279388/
5. Nillni EA. Regulation of the hypothalamic thyrotropin releasing hormone (TRH) neuron by neuronal and peripheral inputs. Front Neuroendocrinol. 2010 Apr;31(2):134-56. doi: 10.1016/j.yfrne.2010.01.001. Epub 2010 Jan 13. PMID: 20074584; PMCID: PMC2849853.
6. Yu Q, Wang JB. Subclinical Hypothyroidism in PCOS: Impact on Presentation, Insulin Resistance, and Cardiovascular Risk. Biomed Res Int. 2016;2016:2067087. doi: 10.1155/2016/2067087. Epub 2016 Jul 12. PMID: 27478827; PMCID: PMC4960326.
7. Al-Chalabi M, Bass AN, Alsalman I. Physiology, Prolactin. [Updated 2021 Jul 29]. In: StatPearls [Internet]. Treasure Island (FL): StatPearls Publishing; 2022 Jan-. Available from: https://www.ncbi.nlm.nih.gov/books/NBK507829/
8. Dumoulin SC, Perret BP, Bennet AP, Caron PJ. Opposite effects of thyroid hormones on binding proteins for steroid hormones (sex hormone-binding globulin and corticosteroid-binding globulin) in humans. Eur J Endocrinol. 1995 May;132(5):594-8. doi: 10.1530/eje.0.1320594. Erratum in: Eur J Endocrinol 1995 Sep;133(3):381. PMID: 7749500.
9. Lin Y, Sun Z. Thyroid hormone potentiates insulin signaling and attenuates hyperglycemia and insulin resistance in a mouse model of type 2 diabetes. Br J Pharmacol. 2011 Feb;162(3):597-610. doi: 10.1111/j.1476-5381.2010.01056.x. PMID: 20883475; PMCID: PMC3041250.
10. Wang, CY., Yu, TY., Shih, SR. et al. Low total and free triiodothyronine levels are associated with insulin resistance in non-diabetic individuals. Sci Rep 8, 10685 (2018). https://doi.org/10.1038/s41598-018-29087-1
11. Samantha Cassar, Marie L. Misso, William G. Hopkins, Christopher S. Shaw, Helena J. Teede, Nigel K. Stepto, Insulin resistance in polycystic ovary syndrome: a systematic review and meta-analysis of euglycaemic–hyperinsulinaemic clamp studies, Human Reproduction, Volume 31, Issue 11, 21 November 2016, Pages 2619–2631, https://doi.org/10.1093/humrep/dew243
12. Ulrich J, Goerges J, Keck C, Müller-Wieland D, Diederich S, Janssen OE. Impact of Autoimmune Thyroiditis on Reproductive and Metabolic Parameters in Patients with Polycystic Ovary Syndrome. Exp Clin Endocrinol Diabetes. 2018 Apr;126(4):198-204. doi: 10.1055/s-0043-110480. Epub 2018 Mar 5. PMID: 29506313.
13. Karaköse M, Hepsen S, Çakal E, Saykı Arslan M, Tutal E, Akın Ş, Ünsal İ, Özbek M. Frequency of nodular goiter and autoimmune thyroid disease and association of these disorders with insulin resistance in polycystic ovary syndrome. J Turk Ger Gynecol Assoc. 2017 Jun 1;18(2):85-89. doi: 10.4274/jtgga.2016.0217. Epub 2017 Feb 7. PMID: 28400351; PMCID: PMC5458441.
The Japanese government instituted countermeasures against COVID-19, a pneumonia caused by the new coronavirus, in January 2020. Seeking “people’s behavioral changes,” in which the government called on the public to take precautionary measures or exercise self-restraint, was one of the important strategies. The purpose of this study is to investigate how and from when Japanese citizens have changed their precautionary behavior under circumstances in which the government has only requested their cooperation. This study uses micro data from a cross-sectional survey conducted on an online platform of an online research company, based on quota sampling that is representative of the Japanese population. By the end of March 2020, a total of 11,342 respondents, aged from 20 to 64 years, were recruited. About 85 percent reported practising the social distancing measures recommended by the government including more females than males and more older than younger participants. Frequent handwashing is conducted by 86 percent of all participants, 92 percent of female, and 87.9 percent of over-40 participants. The most important event influencing these precautionary actions was the infection aboard the Diamond Princess cruise ship, which occurred in early February 2020 (23 percent). Information from the central and local governments, received by 60 percent of the participants, was deemed trustworthy by 50 percent. However, the results also showed that about 20 percent of the participants were reluctant to implement proper prevention measures. The statistical analysis indicated that the typical characteristics of those people were male, younger (under 30 years old), unmarried, from lower-income households, a drinking or smoking habit, and a higher extraversion score. To prevent the spread of infection in Japan, it is imperative to address these individuals and encourage their behavioural changes using various means to reach and influence them.
Citation: Muto K, Yamamoto I, Nagasu M, Tanaka M, Wada K (2020) Japanese citizens' behavioral changes and preparedness against COVID-19: An online survey during the early phase of the pandemic. PLoS ONE 15(6): e0234292. https://doi.org/10.1371/journal.pone.0234292
Editor: Toshiyuki Ojima, Hamamatsu Ika Daigaku, JAPAN
Received: April 2, 2020; Accepted: May 22, 2020; Published: June 11, 2020
Copyright: © 2020 Muto et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: All data files are available from the openICPSR database (DOI: https://doi.org/10.3886/E118584V1).
Funding: This work was supported by university grants allocated to the Department of Public Policy, Human Genome Center, The Institute of Medical Sciences, The University of Tokyo. The funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: KM is a member of the Expert Meeting on the Control of Novel Coronavirus Infection. TM is a member of the Committee of Crisis Communication of the Science Council of Japan. KW is a member of the MHLW Headquarters for Novel Coronavirus Disease Control. This does not alter our adherence to PLOS ONE policies on sharing data and materials.
The new coronavirus in Japan
A pneumonia of unknown cause was detected in China and first officially reported on 31 December 2019. The World Health Organization (WHO) announced a name for the new coronavirus disease, COVID-19 (coronavirus disease 2019), on 11 February 2020. Since then, COVID-19 has been spreading throughout the world, and a rapid increase in deaths has been reported in many countries. As of 28 March, a total of 571,678 cases and 26,494 deaths have been confirmed. One study has estimated that there will be a total of 81,114 deaths from COVID-19 over the next four months in the US alone. The number of COVID-19 cases and deaths in Japan is gradually increasing, with 1,499 cases (including 60 critical cases) and 49 deaths reported as of 28 March. Several small clusters of infected groups have been increasing in urban areas, including those in hospitals and nursing homes, in addition to cases with unlinked infections. Nevertheless, the total number of deaths and severely ill patients has been comparatively small, especially relative to the country’s population size. As of 28 March, the total number of deaths was 9,136 in Italy, 4,858 in Spain and 1,243 in the United States. Furthermore, the trend of the increase is not sharp. The reasons for this mild trend have been questioned outside Japan.
Over the past few decades, Japan has not experienced any serious damage from new infectious diseases, such as SARS (severe acute respiratory syndrome), MERS (Middle East respiratory syndrome), or the Ebola virus. Although Japan experienced the 2009 H1N1 influenza (flu) pandemic, the rate of deaths per 100,000 population was 0.16 as of the end of May 2010, which was the lowest worldwide. Ironically, this history of escapes may have delayed the establishment of the emergency operation headquarters in Japan. The urgent expansion of polymerase chain reaction (PCR) tests, which must be the frontline response to the novel coronavirus outbreak, has faced time-consuming obstacles. In Japan, a recent revision of the Act on Special Measures for Pandemic Influenza and New Infectious Diseases Preparedness and Response allows the Prime Minister to declare a state of emergency for the outbreak, but under the current legislation, no central or local government can enforce lockdowns such as those undertaken in other countries.
Under such limitations, the current goal of the Japanese government is to avoid an explosive increase in patients that would exceed the limit of intensive or critical care units in hospitals in urban areas. To meet this goal, the government policy consists of three strategies: early detection of clusters and rapid response, enhancement of the early diagnosis of patients and intensive care for severely affected patients, and strengthening of the universal healthcare system and public behavioral change.
Three strategies against COVID-19
With regard to the first and second strategies, the Ministry of Health, Labour, and Welfare (MHLW) strongly promotes contact tracing, social distancing, and pneumonia surveillance under the direction of the Patient Cluster Countermeasure Group in the MHLW Headquarters for Novel Coronavirus Disease Control. Regional public health centers conduct the contact tracing, asking infected persons and their close contacts to maintain social distancing for 14 days and allocating available hospital beds or hospital wards in designated local communities to COVID-19 patients. In clinical settings, the large number of computed tomography (CT) scanners in Japan (111.49 per million population) supports physicians in investigating suspicious pneumonia cases in the absence of conducting massive PCR tests in the population. This policy approach might lead to a relatively slower increase in the number of cases and deaths.
Regarding the third strategy, public behavioral change, by the middle of February 2020, the MHLW encouraged the Japanese public to practise frequent handwashing and “coughing etiquette” (using a handkerchief or sleeve instead of hands to catch a cough or sneeze). Furthermore, the MHLW had prioritized access to healthcare for elderly people, people suffering from fatigue or shortness of breath, and people with underlying health conditions. The MHLW had also asked the public younger than 65 years old not to visit clinics for at least four days if they experience cold symptoms or a fever of 37.5°C or higher until 8 May 2020. This restriction might be a shock to Japanese citizens, who are typically allowed free access to clinics and hospitals.
In analyses of contact tracing, it was found that one infected person tended to infect more than one other person at locations with certain characteristics. On 24 February, the Expert Meeting on the Control of Novel Coronavirus Infection asked the public to refrain from attending places involving close face-to-face contact (between people within an arm’s length of each other) in conversations and similar interactions for more than a given length of time in crowds. Since then, but prior to other similar slogans that have appeared around the world, the government has been campaigning for avoidance of these situations with the slogan “Avoid the overlapping 3 Cs” (“closed spaces with poor ventilation”; “crowded places with many people nearby”; “close-contact settings such as close-range conversations”), in addition to regular ventilation and wiping of shared surfaces (such as door handles, knobs, and bed fences) and goods with diluted household chlorine bleach. “Avoid the overlapping 3 Cs” has been the core and unique message against COVID-19 in Japan.
Previous studies and our research questions
This study examines three research questions: (1) How do Japanese citizens, especially those who are relatively active in terms of work and life activities and therefore have increased opportunities to spread infections to others, including older people vulnerable to the COVID-19, implement the government’s three Cs precautionary measures? (2) How effective are these requests from the government? (3) Who has changed their daily precautionary behaviour, and who has not?
Several previous studies have investigated changes in precautionary behavior against the coronavirus. For example, an online survey conducted on 29 January of 3,083 mainland Chinese respondents revealed that adults living in urban areas had stronger awareness of the issue than those in rural areas (72.7% vs. 66.1%, p<0.001). Another online survey conducted between 23 February and 2 March in the US (N = 2,986) and the UK (N = 2,988) showed that adult residents have a good understanding of the main mode of disease transmission and common symptoms, although they also have important misconceptions and discriminatory attitudes toward people of East Asian ethnicity due to COVID-19’s origin in China. The latest study in Italy clarified the three types of attitude to COVID-19 among Italian citizens: people who trust authority and choose isolation, fatalists who are keen on social media, and uninformed youth. The Gallup International Association also recently conducted a snap poll in 28 countries (including 1,115 Japanese participants) asking about precautionary procedures, and their findings indicated that 71 percent of Japanese participants had adopted more frequent handwashing. What is still unclear, however, is the trigger for behavioural change around COVID-19 and who is more actively implementing prevention measures. In the Gallup survey, the response period and sample attribution are also unclear. Furthermore, this survey is not necessarily informative for policymaking, as it does not reveal who is not implementing prevention measures.
Using a large sample of cross-sectional survey data, this study investigates how and at what point Japanese citizens changed their precautionary behaviour in this situation, in which the government has only requested, rather than mandated, their cooperation.
Materials and methods
Survey design and participants
This study uses micro data from a cross-sectional survey conducted via an online platform of an online research company, Macromill, Inc. Japan. From a pool of approximately 1.2 million registered individuals residing in Japan, we recruited a total of 11,342 males and females aged from 20 to 64 years. We limit our sample to those under 65 years of age in order to focus on the behavioural changes of the working-age population, who tend to be relatively active and have more opportunities to spread infections to others. In the recruitment process for this study, quota sampling was conducted so that the sample distributions among gender (male or female), age group (20s, 30s, 40s, 50s, or 60s), and employment status (regular employee, non-regular employee, self-employed, or not working) were representative of the Japanese population, based on statistics from the Labor Force Survey (Ministry of Internal Affairs and Communications). Our survey was conducted between 26 and 28 March 2020. We originally determined the target number of participants as 11,000 and accepted participants until the target number was reached. Due to the timing of closure, the final number of participants exceeded the target. Please note that we automatically eliminated duplicate answers from a single respondent and that there was a monetary incentive for participation.
Questionnaire and analysis
In addition to providing individual characteristics, the participants were asked to answer 11 items rating their prevention measures against novel coronavirus infections, such as social distancing and coughing etiquette, on a scale of 1 to 5. Thus, after summarizing demographic characteristics based on the total, male and female, and under-40 and over-40 categories, we aggregate and compare the proportion of participants who have been taking those prevention measures.
The participants were also asked what kind of events caused them to change their behaviours and rated the reliability and frequency of consulting of 10 information sources about the coronavirus on a scale of 1 to 5. Thus, we calculate and compare the frequency and reliability by information source.
Next, to detect factors associated with behavioural change, the participants were also asked about their drinking and smoking habits. Personality traits were measured by the Five Factor Personality Questionnaire: Ten-Item Personality Inventory (TIPI) . The five personality traits assessed by TIPI are extraversion, agreeableness, conscientiousness, emotional stability, and openness to experiences.
We estimate a logit model, where the dependent variable is a dummy indicating 1 if the participant chose “not at all” or “not true” to the question “Do you avoid the three overlapping Cs?” and where independent variables are individual characteristics.
We analyzed the data using STATA/MP version 16.0 for Mac (StataCorp, College Station, TX, United States).
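The published estimates were produced in Stata; purely as a hedged sketch of the same kind of specification, a Python/statsmodels equivalent might look like the following, with odds ratios obtained by exponentiating the coefficients (variable and file names are hypothetical):

```python
# Hedged sketch of the logit specification (the paper's analysis used Stata/MP 16;
# variable and file names here are hypothetical).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey.csv")
# Outcome: 1 if the respondent answered "not at all" / "not true"
# to the question about avoiding the three overlapping Cs.
model = smf.logit(
    "not_avoiding_3cs ~ male + C(age_group) + married + C(income_band)"
    " + drinks + smokes + extraversion + conscientiousness + agreeableness",
    data=df,
).fit()

print(np.exp(model.params))      # odds ratios, as in Table 4
print(np.exp(model.conf_int()))  # 95% confidence intervals for the odds ratios
```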
Our survey falls outside the scope of the Japanese government’s Ethical Guidelines for Medical and Health Research Involving Human Subjects, and there are no national guidelines in Japan for social and behavioural research. Therefore, our study was carried out in accordance with the Ethical Principles for Sociological Research of the Japan Sociological Society, which do not require ethical reviews.
All survey participants gave their consent to participate in the anonymous online survey by Macromill, Inc. The authors did not obtain any personal information about the participants. After being informed about the purposes of the study and their right to quit the survey, participants agreed to participate. They were provided with the option “I don’t want to respond” for all questions. Completion of the entire questionnaire was considered to indicate participant consent.
The characteristics of the sample, both as a whole and separated by gender (male or female) or age (under or over 40 years old), are summarized in Table 1. The total sample size is 11,342, with almost equal gender distribution. Gender and age distribution are proportional to that of the Japanese population. University or college graduates constituted about 50–60 percent of respondents. About half of the total sample is composed of regular employees (usually indefinite and full-time employees). About a quarter of respondents had a household income of 4–5 million yen.
To what extent have prevention measures been taken?
In the survey, the participants were asked to answer to the question “Have you taken any measures to prevent novel coronavirus infections or outbreaks?” Table 2 shows a variety of prevention measures taken, aggregating a proportion of the participants who answered “very true” and “true” for each prevention measure.
Looking at the first four prevention measures, which have been continuously requested by the Japanese government and the Expert Meeting on Control of Novel Coronavirus Infection, it was found that 80 percent have attempted to avoid the “overlapping three Cs.” Of the total, 57 percent have attempted to avoid conversations or shouting in close proximity, which was a relatively low figure among the three Cs. Looking next at the fifth prevention measure, more than 85 percent of all participants reported practising social distancing by avoiding mass gatherings. Regarding gender and age differences, more females than males and more older than younger participants are supportive of social distancing, as shown by the differences in the confidence intervals.
Regarding hygiene practices, frequent handwashing is conducted by about 86 percent of all, about 91 percent of female, and about 88 percent of over-40 participants. Coughing etiquette was implemented by 77 percent of the participants. Many also answered that they have avoided going out when ill with a cold.
As for the measures to strengthen individual immunity, around 70 percent of the participants reported getting sufficient rest and sleep or eating a nutritious diet. Again, focusing on gender and age differences, prevention measures are conducted more often by females and older people.
However, regardless of gender and age, about 40 percent of participants have prepared consultation and transportation methods to use in the event they become ill.
What has caused the behavioral changes?
To explore the triggers of the behavioural changes and preparedness observed above, the participants were asked “What was the most important event influencing these actions?” The responses are summarized in Fig 1. The figure shows that about 23 percent of the participants cited the infection aboard the Diamond Princess cruise ship that occurred around early February 2020, when there were still few domestic cases. The Diamond Princess is a British-registered cruise ship on which an 80-year-old passenger from Hong Kong tested positive for COVID-19 on 1 February 2020. Because the ship was in Japanese waters, it was quarantined in February 2020 for nearly a month with about 3,700 passengers and crew on board. Other participants noted events from the end of February, including the alert from the Expert Meeting (5.6 percent), the statement of emergency by the governor of Hokkaido (northern island of Japan) (7.4 percent), and the request by the Prime Minister to not attend mass gatherings (7.8 percent). The next large trigger was the request by the Prime Minister for nationwide school closures in Japan on 28 February 2020 (about 14 percent). Finally, worldwide outbreak around early March (22 percent) also attracted participants’ attention.
To explore what kinds of information affected their behavioural change and preparedness, the survey asked participants to report the frequency at which they consult certain sources about the novel coronavirus infection and to rate the reliability of the information source as they perceive it. The results are summarized in Table 3.
Table 3 shows that almost 90 percent receive information from TV news programs and Internet news sites and that about 50 percent trust such information. Mainstream scientists have expressed annoyance at the fear-mongering on TV talk and variety shows, and these formats are slightly favoured, but considered less credible, among the public. Meanwhile, information from the central and local government (received by 60 percent), including the Prime Minister and the Expert Meeting, is relatively trusted by the participants (50 percent). Among official sources, the local government is the most trusted. Newspapers (national and local) are read by only about 42 percent of the participants, and about 48 percent answered that they trust information from newspapers.
Looking at the differences in gender and age, females tend to seek more information and trust it more than males, except for the information from newspapers. Participants over 40 years old tend to access and trust the information from TV, newspapers, and officials more than those under 40 years old do, while young people often seek and trust news from the Internet and SNS apps.
Who does not adhere to social distancing?
As we confirmed in Table 2, more than 80 percent of the participants have been implementing social distancing measures, and most Japanese citizens seem to be exhibiting some behavioural change to prevent coronavirus infections. However, this also means that about 20 percent may not be conducting sufficient prevention measures.
To detect what kind of individuals are included in the group not conducting prevention measures, we conducted a multivariate analysis. Table 4 shows the estimation results of the multivariate logit model. All the variables in the first column in Table 4 were included as independent variables. Like the other tables, Table 4 shows the results based on the total, male and female, and under-40 and over-40 categories. The number shown in the table is an odds ratio, so the estimates that are significantly higher than 1 indicate a higher tendency to not conduct proper social distancing.
Looking at the estimation results in Table 4, males, people in their 20s, and unmarried people exhibit significantly higher odds ratios, indicating that these groups tend not to conduct preventive social distancing. Although work status is not generally associated with this prevention measure, females, regular employees, and non-regular employees tended to exhibit higher odds ratios than self-employed or unemployed people.
Regarding household annual income, the lowest group (less than 2,000K JPY) has significantly higher odds ratio for the total, female, and under-40 categories.
Higher odds ratios for not conducting social distancing are associated with drinking for males and smoking for females. Furthermore, those with higher extraversion scores also tend to exhibit significantly higher odds ratio in many cases, while conscientiousness and agreeableness are associated with lower odds ratio in most cases.
Should the government change its policy on mass gatherings?
Before this survey was conducted, the request by the Japanese government for self-restraint in avoiding mass gatherings had become an issue. For example, on 22 March 2020, the K-1 Grand Prix, a martial arts event, was held despite the Minister’s and local governor’s pleas for restraint, and 6,500 participants were packed into the Saitama Super Arena. On 23 March, more than 50,000 gathered in Sendai to see the Olympic flame, which had recently arrived from Greece. We asked the participants whether they supported this policy approach. As shown in Table 5, about 29 percent of participants support the idea that the government should now allow mass gatherings. Males tend to support allowing mass gatherings more than females. On the other hand, 65 percent supported government limitations on movement in addition to self-restraint in avoiding mass gatherings in order to shorten the period of the pandemic. There are no significant differences among gender and age categories for this question.
Under circumstances in which there is no enforced ban on mass gathering or travelling beyond the home region, our findings indicate that a large portion of Japanese citizens seem to be implementing proper prevention measures on their own before the end of March 2020.
We found that more than three-quarters of the survey participants have taken some preventive actions, including social distancing, handwashing, coughing etiquette, and strengthening immunity. Because previous empirical studies did not include developed countries like Japan, there is little direct evidence that Japanese people prefer cleanliness and wash their hands more frequently than people in other countries. In Japanese communities, however, water facilities for handwashing with soap and hand sanitizers are normally placed in various public places, such as train stations and supermarkets. Moreover, handwashing became a regular practice at home and school through post-war education. In general, Japanese people have developed the discipline of washing their hands before eating meals and after using the toilet. It is also well known that Japanese people greet others with a bow instead of a handshake, kiss, or hug. This cultural behaviour implies that the frequency of body contact among Japanese people may be lower than in cultures with more tactile forms of greeting. During hay fever season, Japanese citizens regularly wear surgical-style masks to prevent symptoms; wearing a mask may be a less popular preventive measure than some of the others in this study due to shortages of these products. These already-habitual practices may be aiding behavioural changes among Japanese citizens during these unusual times.
We also found in the survey that more than half of the participants had not prepared access to consultation centres or transportation methods in the event that they became ill, implying that they had not planned for the possibility of contracting COVID-19. We must advise the public to prepare for such an event, to talk to family and close friends about advance care planning for unexpected situations, and to imagine not having access to a ventilator or extracorporeal membrane oxygenation at the severe stage.
It was also found that one of the main motivations for behavioural change was the infection aboard the Diamond Princess cruise ship in early February 2020. At that time, only a few cases of domestic infection had been reported in Japan, but news of the quarantine and positive test results among the passengers was broadcast daily. This may have contributed to Japanese citizens changing their mindset and behaviour toward precautionary measures earlier than in Europe and the US. The sudden request by the Prime Minister for nationwide school closures at the end of February might also have been an effective measure for changing the mindsets of Japanese citizens toward prevention, even though this move was scientifically questioned and confusing to the public, especially to single parents and double-income households.
Our survey shows that information from the Expert Meeting and central/local governments, including the Prime Minister, are relatively trusted by survey participants. The Expert Meeting and central/local government have held frequent press conferences to clarify the tentative scientific risks and encourage citizens to conduct prevention measures. Such crisis communication attempts may have caused behavioural changes in Japanese citizens. The most trusted resource in this study was information from the local government, which was a hopeful result, as the countermeasures against the virus are decided and conducted at the local level.
In the past, when Japan experienced natural disasters such as earthquakes, typhoons, and tsunamis, local governments contributed to providing timely and organized information to disadvantaged residents through the government alert system. Based on those experiences, it may be important for local governments to provide information or predictions about COVID-19 through these systems.
Regarding information from newspapers, it is problematic that, although about half of the participants say they trust newspapers, far fewer actually read them. People may be inclined instead to use electronic media, which are easier to access.
Despite the overall trend toward behavioural change, however, the results also show that about 20 percent of the participants are reluctant to implement proper prevention measures. The statistical analysis indicates that those people are typically male, younger (under 30 years old), unmarried, and in lower-income households and have a drinking or smoking habit and a higher extraversion score. To prevent the spread of infection in Japan, it is imperative to address these individuals and encourage their behavioural change in ways that will reach and move them. It is notable that approximately 65 percent of the participants support stricter countermeasures, such as limitation of movement. As we mentioned in the introduction, the government has not yet issued mandatory stay-at-home orders or offered financial aid to those affected by such measures. The current requests from central/local governments are not legally binding, and individuals and businesses must arrange financial compensation independently of the government. We should observe how effective these measures are in Japan over the long term to determine whether the current law should be revised to allow for more forceful enforcement in preparation for the next pandemic.
There are several limitations to this study. First, the data were self-reported, and participants’ actual behaviours have not been observed. Second, our sample includes people from 20 to 64 years of age, but not those 65 and older, so the external validity for older people’s behaviour is rather limited.
Third, the sample was not collected based on random sampling from the whole population of Japan but through quota sampling from individuals who were recruited by or who self-enrolled in the Internet panel of the online research company Macromill Inc. Participation involved monetary incentives. Quota sampling ensured a similar distribution to the Japanese population among demographic groups (gender, age, and work status), but the sample within each group does not necessarily reflect the population. For example, our sample may be subject to a healthy-respondent bias or similar selection bias, since any person, including those who often use the Internet, those incentivized by monetary gain, and those who have a vested interest in the prevention measures taken against COVID-19, was able to participate in our survey. In fact, as has been the case with previous studies, our sample was found to have a higher socioeconomic status than the general population in terms of income and education.
Fourth, in our survey, the income variables are not available for approximately 25 percent of the participants. Thus, the number of observations used in the regression shown in the table is limited, which may introduce selection bias. However, even when the income variables are excluded from the regression, the results are robust in terms of the sign, significance, and magnitude of the estimates.
Fifth, we obtained this dataset at the end of March 2020, when the infection was not yet explosively widespread in Japan. This study should be repeated to find more effective solutions at various periods during and after the COVID-19 pandemic.
We would like to thank the participants in our online survey for their valuable data. This work was supported by university grants allocated to the Department of Public Policy, Human Genome Centre, Institute of Medical Sciences, University of Tokyo. We also thank the members of the COVID-PAGE (Public Advisory Group of Experts) for their insightful discussions.
1. World Health Organization. WHO characterizes COVID-19 as a pandemic [Internet]. 11 Mar 2020. Available from: https://www.who.int/emergencies/diseases/novel-coronavirus-2019/events-as-they-happen
2. World Health Organization. WHO situation report as of 29 March 2020 [Internet]. Available from: https://www.who.int/emergencies/diseases/novel-coronavirus-2019/situation-reports
3. IHME COVID-19 health service utilization forecasting team. Forecasting COVID-19 impact on hospital bed-days, ICU-days, ventilator days and deaths by US state in the next 4 months [Internet]. medRxiv. 26 Mar 2020. Available from: www.healthdata.org/research-article/forecasting-covid-19-impact-hospital-bed-days-icu-days-ventilator-days-and-deaths
4. Ministry of Health, Labor and Welfare. Coronavirus disease 2019 (COVID-19) situation within and outside the country as of 29 Mar 2020 [Internet]. Available from: https://www.mhlw.go.jp/stf/seisakunitsuite/bunya/newpage_00032.html
5. The data are based on the WHO situation report available from: https://www.who.int/docs/default-source/coronaviruse/situation-reports/20200328-sitrep-68-covid-19.pdf
6. Masuike H. Japan's virus success has puzzled the world. Is its luck running out? New York Times [newspaper on the Internet]. 2020 Mar 26. Available from: https://www.nytimes.com/2020/03/26/world/asia/japan-coronavirus.html?searchResultPosition=1
7. Expert Meeting on the Novel Coronavirus Disease Control. Views on the novel coronavirus disease control [in Japanese; Internet]. 2020 Mar 9. Available from: https://www.mhlw.go.jp/content/10900000/000606000.pdf
8. This number was registered as of 2017 in the OECD Health Statistics 2019. Available from: https://stats.oecd.org/Index.aspx?ThemeTreeId=9
9. Ministry of Health, Labor and Welfare. Prevention Measures against Coronavirus Disease 2019 (COVID-19) [Internet]. Available from: https://www.mhlw.go.jp/content/10900000/000607599.pdf
10. Zhan S, Yang YY, Fu C. Public's early response to the novel coronavirus–infected pneumonia. Emerg Microbes Infect [Internet]. 2020 Mar 3;9(1):534. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7067171/ pmid:32122250
11. Geldsetzer P. Knowledge and perceptions of COVID-19 among the general public in the United States and the United Kingdom: a cross-sectional online survey. Ann Intern Med [Internet]. 2020 Mar 20. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7086377/
12. Bucchi M, Salacino B. Italian citizens and covid-19. 2020 Mar 21. In: Public Understanding of Science Blog [Internet]. Available from: https://sagepus.blogspot.com/2020/03/italian-citizens-and-covid-19.html
13. Gallup International Association. The coronavirus: a vast scared majority around the world [Internet]. 2020 Mar. Available from: https://www.gallup-international.com/wp-content/uploads/2020/03/GIA_SnapPoll_2020_COVID_Tables_final.pdf
14. Gosling SD, Rentfrow PJ, Swann WB Jr. A very brief measure of the Big Five personality domains. J Res Pers. 2003;37:504–528.
15. National Institute for Infectious Diseases. Field briefing: Diamond Princess COVID-19 cases [Internet]. 2020 Feb. Available from: https://www.niid.go.jp/niid/en/2019-ncov-e/9407-covid-dp-fe-01.html; update available from: https://www.niid.go.jp/niid/en/2019-ncov-e/9417-covid-dp-fe-02.html
16. Denyer S. Japan's social distancing is shrinking as coronavirus fears ease. Too soon? Washington Post [Internet]. 2020 Mar 24. Available from: https://www.washingtonpost.com/world/asia_pacific/japans-social-distancing-is-shrinking-as-coronavirus-fears-ease-too-soon/2020/03/24/7c816cee-6d0c-11ea-a156-0048b62cdb51_story.html
17. Wolf J, Johnston R, Freeman MC, Ram PK, Slaymaker T, Laurenz E, et al. Handwashing with soap after potential faecal contact: global, regional and country estimates. Int J Epidemiol. 2019;48(4):1204–18. pmid:30535198
18. Bay AR. Disciplining shit. Japan Forum. 2018;30(4):556–582.
19. Craig BM, Hays RD, Pickard AS, Cella D, Revicki DA, Reeve BB. Comparison of US panel vendors for online surveys. J Med Internet Res. 2013;15(11):e26.
Mercury exposure linked to color vision loss
Over the last several decades, a wide variety of studies have linked mercury exposure to various visual impairments, most notably color vision loss. Unfortunately, the majority of these studies have been done overseas, and mercury toxicity is not routinely tested for when patients are evaluated for color vision loss.
Toxicol Ind Health. 2015 Aug;31(8):691-5.
Ophthalmic findings in acute mercury poisoning in adults: A case series study.
Aslan L1, Aslankurt M2, Bozkurt S3, Aksoy A2, Ozdemir M2, Gizir H2, Yasar I2.
The aim of this study is to report the ophthalmic findings of acute mercury poisoning in 48 adults referred to the emergency department. Full ophthalmologic examinations, including best corrected visual acuity, external eye examination, reaction to light, a slit-lamp examination, funduscopy, intraocular pressure measurements, and visual field (VF) and color vision (CV) tests, were performed at presentation and repeated after 6 months. The parametric values of the VF test, the mean deviation (MD) and pattern standard deviation (PSD), were recorded in order to compare patients with the 30 healthy controls. The mean color confusion index in patients was found to be statistically different from that of controls (p < 0.01). The MD and PSD in patients were also significantly different from those of controls (p < 0.01 and p < 0.01, respectively). There was no correlation between the ocular findings and the urine and blood mercury levels. Methyl mercury, held in the school laboratory for experimental purposes, may have been the source of poisoning. In this case series, we showed that acute exposure to mercury had a hazardous effect on the visual system, especially CV and VF. We propose that emphasizing public education on the potential hazards of mercury is crucial for preventive community health.
Handb Clin Neurol. 2015;131:325-40. doi: 10.1016/B978-0-444-62627-1.00017-2.
Retinal and visual system: occupational and environmental toxicology.
Occupational chemical exposure often results in sensory systems alterations that occur without other clinical signs or symptoms. Approximately 3000 chemicals are toxic to the retina and central visual system. Their dysfunction can have immediate, long-term, and delayed effects on mental health, physical health, and performance and lead to increased occupational injuries. The aims of this chapter are fourfold. First, provide references on retinal/visual system structure, function, and assessment techniques. Second, discuss the retinal features that make it especially vulnerable to toxic chemicals. Third, review the clinical and corresponding experimental data regarding retinal/visual system deficits produced by occupational toxicants: organic solvents (carbon disulfide, trichloroethylene, tetrachloroethylene, styrene, toluene, and mixtures) and metals (inorganic lead, methyl mercury, and mercury vapor). Fourth, discuss occupational and environmental toxicants as risk factors for late-onset retinal diseases and degeneration. Overall, the toxicants altered color vision, rod- and/or cone-mediated electroretinograms, visual fields, spatial contrast sensitivity, and/or retinal thickness. The findings elucidate the importance of conducting multimodal noninvasive clinical, electrophysiologic, imaging and vision testing to monitor toxicant-exposed workers for possible retinal/visual system alterations. Finally, since the retina is a window into the brain, an increased awareness and understanding of retinal/visual system dysfunction should provide additional insight into acquired neurodegenerative disorders.
Color vision impairment in workers exposed to mercury vapor.
Jedrejko M, Skoczyńska A. Med Pr. 2011;62(3):227-35. [Article in Polish]
Source: Akademia Medyczna we Wrocławiu, Katedra i Klinika Chorób Wewnetrznych, Zawodowych i Nadciśnienia Tetniczego.
Acquired reversible dyschromatopsia has been associated with occupational exposure to mercury vapor. Early-detected impairments in color discrimination precede adverse permanent effects of mercury, so they may help to monitor the health of the exposed workers. The aim of this study was to evaluate the color discrimination ability in this group of workers, using Lanthony D-15d test.
MATERIAL AND METHODS:
Twenty-seven male workers employed in a chloralkali plant and exposed to mercury vapor, together with 27 healthy white-collar workers (control group), were qualified for the study. To assess color discrimination, the Lanthony 15-Hue desaturated test (Lanthony D-15) was used. In order to investigate quantitative and qualitative results, the Lanthony D-15d scoring software was used. Urinary mercury was determined using flameless atomic absorption spectrometry.
In the workers exposed to mercury vapor, the urine mercury concentration was 117.4 +/- 62.6 microg/g creatinine on average, compared with 0.279 +/- 0.224 microg/g creatinine in the control group (p < 0.0001). In 18 exposed persons (66.7%), the results of the Lanthony D-15d test showed qualitative changes corresponding to a borderline, early stage of developing type III dyschromatopsia. The quantitative analysis of the test findings indicated a significantly higher value of the Color Confusion Index (CCI) in the right eye in the exposed group compared to the control group (p = 0.01), with no significant difference in the CCI in the left eye. In the exposed group, the CCI in the right eye was significantly higher than the CCI in the left eye (p = 0.0005). There was neither a correlation between CCI and the level of urinary mercury, nor between CCI and duration of exposure.
The results showed that the Lanthony D-15d test is useful in the detection of early toxic effects on the eyesight of workers exposed to mercury vapor. The observed color vision impairments correspond to a borderline, early stage of developing type III dyschromatopsia.
Ophthalmic Physiol Opt. 2010 Sep;30(5):724-30. doi: 10.1111/j.1475-1313.2010.00764.x.
Color-space distortions following long-term occupational exposure to mercury vapor.
Feitosa-Santana C, Bimler DL, Paramei GV, Oiwa NN, Barboni MT, Costa MF, Silveira LC, Ventura DF.
Source: Department of Psychology, University of Chicago, Chicago, IL 60637, USA. [email protected]
Color vision was examined in subjects with long-term occupational exposure to mercury (Hg) vapor. The color vision impairment was assessed by employing a quantitative measure of distortion of individual and group perceptual color spaces. Hg subjects (n = 18; 42.1 ± 6.5 years old; exposure time = 10.4 ± 5.0 years; time away from the exposure source = 6.8 ± 4.6 years) and controls (n = 18; 46.1 ± 8.4 years old) were examined using two arrangement tests, D-15 and D-15d, in the traditional way, and also in a triadic procedure. From each subject’s ‘odd-one-out’ choices, matrices of inter-cap subjective dissimilarities were derived and processed by non-metric multidimensional scaling (MDS). D-15d results differed significantly between the Hg-group and the control group (p < 0.05), with the impairment predominantly along the tritan axis. 2D perceptual color spaces, individual and group, were reconstructed, with the dimensions interpreted as the red-green (RG) and the blue-yellow (BY) systems. When color configurations from the Hg-group were compared to those of the controls, they presented more fluctuations along both chromatic dimensions, indicating a statistically significant difference along the BY axis.
In conclusion, the present findings confirm that color vision impairments persist in subjects who have received long-term occupational exposure to Hg vapor, even though, at the time of testing, they presented mean urinary concentrations within the normal range for non-exposed individuals. Considering the advantages of the triadic procedure in the clinical evaluation of acquired color vision deficiencies, further studies should attempt to verify and/or improve its efficacy.
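For readers unfamiliar with the analysis pipeline described in this abstract (triadic odd-one-out choices, an inter-cap dissimilarity matrix, then non-metric multidimensional scaling), the sketch below shows the MDS step using scikit-learn. It is a toy illustration under stated assumptions, not the authors' code: the dissimilarity matrix here is randomly generated, whereas in the study it was derived from subjects' judgments of the arrangement-test caps.

```python
# Minimal sketch: non-metric multidimensional scaling (MDS) of an
# inter-cap dissimilarity matrix, as used to reconstruct 2D perceptual
# color spaces. The input matrix below is random placeholder data.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
n_caps = 15                                   # a D-15 panel has 15 caps
raw = rng.random((n_caps, n_caps))
dissim = (raw + raw.T) / 2                    # symmetrize the matrix
np.fill_diagonal(dissim, 0.0)                 # zero self-dissimilarity

mds = MDS(
    n_components=2,              # recover a 2D configuration (RG and BY axes)
    metric=False,                # non-metric MDS: only rank order is used
    dissimilarity="precomputed", # we supply the dissimilarity matrix directly
    random_state=0,
)
config = mds.fit_transform(dissim)            # (n_caps, 2) coordinates
print(config.shape, "stress:", round(mds.stress_, 3))
```

Non-metric MDS uses only the rank order of the dissimilarities, which is appropriate when triadic odd-one-out judgments provide ordinal rather than interval-scale information.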
J Occup Environ Med. 2009 Dec;51(12):1403-12.
Preliminary findings on the effects of occupational exposure to mercury vapor below safety levels on visual and neuropsychological functions.
Barboni MT, Feitosa-Santana C, Zachi EC, Lago M, Teixeira RA, Taub A, da Costa MF, Silveira LC, Ventura DF.
Source: Neuroscience and Behavior, University of Sao Paulo, Sao Paulo, Brazil. [email protected]
To evaluate whether there are visual and neuropsychological decrements in workers with low exposure to Hg vapor.
Visual fields, contrast sensitivity, color vision, and neuropsychological functions were measured in 10 workers (32.5 +/- 8.5 years) chronically exposed to Hg vapor (4.3 +/- 2.8 years; urinary Hg concentration 22.3 +/- 9.3 microg/g creatinine).
For the worst eyes, we found altered visual field thresholds, lower contrast sensitivity, and color discrimination compared with controls (P <0.05). There were no significant differences between Hg-exposed subjects and controls on neuropsychological tests. Nevertheless, duration of exposure was statistically correlated to verbal memory and depression scores.
Chronic exposure to Hg vapor at currently accepted safety levels was found to be associated with visual losses but not with neuropsychological dysfunctions in the sample of workers studied.
Vis Neurosci. 2008 May-Jun;25(3):487-91.
Irreversible color vision losses in patients with chronic mercury vapor intoxication.
Feitosa-Santana C, Barboni MT, Oiwa NN, Paramei GV, Simões AL, Da Costa MF, Silveira LC, Ventura DF.
Source: Núcleo de Neurociências e Comportamento, Universidade de São Paulo, São Paulo, Brazil. [email protected]
This longitudinal study addresses the reversibility of color vision losses in subjects who had been occupationally exposed to mercury vapor. Color discrimination was assessed in 20 Hg-exposed patients (mean age = 42.4 +/- 6.5 years; 6 females and 14 males) with exposure to Hg vapor during 10.5 +/- 5.3 years and away from the work place (relative to 2002) for 6.8 +/- 4.2 years. During the Hg exposure or up to one year after ceasing it, mean urinary Hg concentration was 47 +/- 35.4 microg/g creatinine. There was no information on Hg urinary concentration at the time of the first tests, in 2002 (Ventura et al., 2005), but at the time of the follow-up tests, in 2005, this value was 1.4 +/- 1.4 microg/g creatinine for patients compared with 0.5 +/- 0.5 microg/g creatinine for controls (different group from the one in Ventura et al. (2005)). Color vision was monocularly assessed using the Cambridge Colour Test (CCT). Hg-exposed patients had significantly worse color discrimination (p < 0.02) than controls, as evaluated by the size of MacAdam's color discrimination ellipses and color discrimination thresholds along protan, deutan, and tritan confusion axes. There were no significant differences between the results of the study in Ventura et al. (2005) and in the present follow-up measurements, in 2005, except for worsening of the tritan thresholds in the best eye in 2005. Both chromatic systems, blue-yellow and red-green, were affected in the first evaluation (Ventura et al., 2005) and remained impaired in the follow-up testing, in 2005. These findings indicate that following a long-term occupational exposure to Hg vapor, even several years away from the source of intoxication, color vision impairment remains irreversible.
Can J Ophthalmol. 2007 Oct;42(5):660-2.
Mercury exposure and its implications for visual health.
Collins C, Saldana M.
In adult monkeys and humans, methylmercury exposure has been linked to constriction of the visual field and abnormal colour vision. Korogi et al. [5] performed magnetic resonance (MR) imaging of the brains of patients with known Minamata disease. The visual cortex, the cerebellar vermis and hemispheres, and the postcentral cortex were significantly atrophic. MR also demonstrated lesions in the calcarine area, cerebellum, and postcentral gyri. When Korogi et al. later looked at the striate cortex in patients with known Minamata disease and visual field constriction, they found a correlation between the visual field defect and the extent of dilatation of the calcarine fissure [6]. From electrophysiological testing of workers exposed to mercury vapors, a significant reduction was found in the visual evoked potential (VEP) latency, especially for the N75 [7]. Further work completed in 2003 identified greater colour confusion, more errors on colour testing, and an increased frequency of type III dyschromatopsias (blue–yellow confusion axis) in comparison with the control group.
Cavalleri et al. [8] studied a group of workers with high levels of urinary mercury and found a dose-related impairment of colour discrimination. Following changes to the workers' work practices, mercury levels 12 months later had fallen to one-tenth of the previous levels and their colour vision had returned almost to normal. Children with raised blood mercury concentrations have been studied for changes in visual function testing. Saint-Amour et al. [9], examining preschool Inuit children living in Nunavik, northern Quebec, reported reduced VEP latency similar to the values found in mercury-exposed workers.
Cian Collins, MRCOphth, Manual Saldana, MRCOphth
Princess Alexandra Eye Pavilion
Braz J Med Biol Res. 2007 Mar;40(3):409-14.
Long-term loss of color vision after exposure to mercury vapor.
Feitosa-Santana C, Costa MF, Lago M, Ventura DF.
Source: Departamento de Psicologia Experimental, Instituto de Psicologia, Universidade de São Paulo, Av. Prof. Mello Moraes 1721, 05508-900 São Paulo, SP, Brazil. [email protected]
We evaluated the color vision of 24 subjects (41.6 +/- 6.5 years; 6 females) who worked in fluorescent lamp industries. They had been occupationally exposed to mercury vapor (10.6 +/- 5.2 years) and had been away from the source of exposure for 6.4 +/- 4.04 years. Mean urinary concentration of mercury was 40.6 +/- 36.4 microg/g creatinine during or up to 1 year after exposure and 2.71 +/- 1.19 microg/g creatinine at the time of color vision testing or up to 1 year thereafter. All patients were diagnosed with chronic mercury intoxication, characterized by clinical symptoms and neuropsychological alterations. A control group (N = 36, 48.6 +/- 11.9 years, 10 females, 1.5 +/- 0.47 microg mercury/g creatinine) was subjected to the same tests. Inclusion criteria for both groups were Snellen VA 20/30 or better and absence of known ophthalmologic pathologies. Color discrimination was assessed with the Farnsworth D-15 test (D-15) and with the Lanthony D-15d test (D-15d). Significant differences were found between the two eyes of the patients (P < 0.001) in both tests. Results for the worst eye were also different from controls for both tests: P = 0.014 for D-15 and P < 0.001 for D-15d. As shown in previous studies, the D-15d proved to be more sensitive than the D-15 for the screening and diagnosis of the color discrimination losses. Since color discrimination losses were still present many years after the end of exposure, they may be considered to be irreversible, at least under the conditions of the present study.
Braz J Med Biol Res. 2007 Mar;40(3):415-24.
Mercury toxicity in the Amazon: contrast sensitivity and color discrimination of subjects exposed to mercury.
Rodrigues AR, Souza CR, Braga AM, Rodrigues PS, Silveira AT, Damin ET, Côrtes MI, Castro AJ, Mello GA, Vieira JL, Pinheiro MC, Ventura DF, Silveira LC.
Source: Departamento de Fisiologia, Universidade Federal do Pará, 66055 Belém, Pará (PA), Brazil.
We measured visual performance in achromatic and chromatic spatial tasks of mercury-exposed subjects and compared the results with norms obtained from healthy individuals of similar age. Data were obtained for a group of 28 mercury-exposed subjects, comprising 20 Amazonian gold miners, 2 inhabitants of Amazonian riverside communities, and 6 laboratory technicians, who asked for medical care. Statistical norms were generated by testing healthy control subjects divided into three age groups. The performance of a substantial proportion of the mercury-exposed subjects was below the norms in all of these tasks. Eleven of 20 subjects (55%) performed below the norms in the achromatic contrast sensitivity task. The mercury-exposed subjects also had lower red-green contrast sensitivity deficits at all tested spatial frequencies (9/11 subjects; 81%). Three gold miners and 1 riverine (4/19 subjects, 21%) performed worse than normal subjects making more mistakes in the color arrangement test. Five of 10 subjects tested (50%), comprising 2 gold miners, 2 technicians, and 1 riverine, performed worse than normal in the color discrimination test, having areas of one or more MacAdam ellipse larger than normal subjects and high color discrimination thresholds at least in one color locus. These data indicate that psychophysical assessment can be used to quantify the degree of visual impairment of mercury-exposed subjects. They also suggest that some spatial tests such as the measurement of red-green chromatic contrast are sufficiently sensitive to detect visual dysfunction caused by mercury toxicity.
Environ Toxicol Pharmacol. 2005 May;19(3):517-22. Epub 2005 Jan 23.
Visual impairment on dentists related to occupational mercury exposure.
Canto-Pereira LH, Lago M, Costa MF, Rodrigues AR, Saito CA, Silveira LC, Ventura DF.
Source: Departamento de Psicologia Experimental, Instituto de Psicologia, e Núcleo de Pesquisa em Neurociências e Comportamento, Universidade de São Paulo, 05508-900 São Paulo, SP, Brazil.
A detailed assessment of visual function was obtained in subjects with low-level occupational mercury exposure by measuring hue saturation thresholds and contrast sensitivity functions for luminance and chromatic modulation. General practice dentists (n=15) were compared to age-matched healthy controls (n=13). Color discrimination estimated by the area of MacAdam ellipses was impaired, showing diffuse discrimination loss. There was also reduction of contrast sensitivity for luminance and chromatic (red-green and blue-yellow) modulation, in all tested spatial frequencies. Low concentrations of urinary mercury (1.97±1.61μg/g creatinine) were found in the dentists group. Color discrimination as well as contrast sensitivity function, assessed psychophysically, constitutes a sensitive indicator of subtle neurotoxic effect of elemental mercury exposure.
Environ Toxicol Pharmacol. 2005 May;19(3):523-9. Epub 2005 Mar 17.
Colour vision and contrast sensitivity losses of mercury intoxicated industry workers in Brazil.
Ventura DF, Simões AL, Tomaz S, Costa MF, Lago M, Costa MT, Canto-Pereira LH, de Souza JM, Faria MA, Silveira LC.
Source: Instituto de Psicologia, Universidade de São Paulo, Av. Prof. Mello Moraes 1721, 05508-900 São Paulo, SP, Brazil; Núcleo de Neurociências e Comportamento, Universidade de São Paulo, São Paulo, Brazil.
We evaluated vision loss in workers from fluorescent lamp industries (n=39) who had retired due to intoxication with mercury vapour and had been away from the work situation for several years (mean=6.32 years). An age-matched control group was submitted to the same tests for comparison. The luminance contrast sensitivity (CSF) was measured psychophysically and with the sweep visual evoked potential (sVEP) method. Chromatic red-green and blue-yellow CSFs were measured psychophysically. Colour discrimination was assessed with the Farnsworth-Munsell 100-hue test, Lanthony D-15d test and Cambridge Colour Vision Test. Patient data showed significantly lower scores in all colour tests compared to controls (p<.001). The behavioural luminance CSF of the patients was lower than that of controls (p<.001 at all frequencies tested). This result was confirmed by the electrophysiologically measured sweep VEP luminance CSF except at the highest frequencies, a difference that might be related to stimulus differences in the two situations. Chromatic CSFs were also statistically significantly lower for the patients than for the controls, for both chromatic equiluminant stimuli: red-green (p<.005) and blue-yellow (p<.04 for all frequencies, except 2 cycles per degree (cpd), the highest spatial frequency tested) spatial gratings. We conclude that exposure to elemental mercury vapour is associated with profound and lasting losses in achromatic and chromatic visual functions, affecting the magno-, parvo- and koniocellular visual pathways.
Vis Neurosci. 2004 May-Jun;21(3):421-9.
Multifocal and full-field electroretinogram changes associated with color-vision loss in mercury vapor exposure.
Ventura DF, Costa MT, Costa MF, Berezovsky A, Salomão SR, Simões AL, Lago M, Pereira LH, Faria MA, De Souza JM, Silveira LC.
Source: Instituto de Psicologia and Núcleo de Neurociências e Comportamento, Universidade de São Paulo, SP, Brazil. [email protected]
We evaluated the color vision of mercury-contaminated patients and investigated possible retinal origins of losses using electroretinography. Participants were retired workers from a fluorescent lamp industry diagnosed with mercury contamination (n = 43) and age-matched controls (n = 21). Color discrimination was assessed with the Cambridge Colour Test (CCT). Retinal function was evaluated by using the ISCEV protocol for full-field electroretinography (full-field ERG), as well as by means of multifocal electroretinography (mfERG). Color-vision losses assessed by the CCT consisted of higher color-discrimination thresholds along the protan, deutan, and tritan axes and significantly larger discrimination ellipses in mercury-exposed patients compared to controls. Full-field ERG amplitudes from patients were smaller than those of the controls for the scotopic response b-wave, maximum response, sum of oscillatory potentials (OPs), 30-Hz flicker response, and light-adapted cone response. OP amplitudes measured in patients were smaller than those of controls for O2 and O3. Multifocal ERGs recorded from ten randomly selected patients showed smaller N1-P1 amplitudes and longer latencies throughout the 25-deg central field. Full-field ERGs showed that scotopic, photopic, peripheral, and midperipheral retinal functions were affected, and the mfERGs indicated that central retinal function was also significantly depressed. To our knowledge, this is the first demonstration of retinal involvement in visual losses caused by mercury toxicity.
Neurotoxicology. 2003 Aug;24(4-5):693-702.
Color vision impairment in workers exposed to neurotoxic chemicals.
Gobba F, Cavalleri A.
Source: Cattedra di Medicina del Lavoro, Dipartimento di Scienze Igienistiche, Università di Modena e Reggio Emilia, 41100 (MO) Modena, Italy. [email protected]
Recent research shows that occupational exposure to several solvents, metals and other industrial chemicals can impair color vision in exposed workers. Occupation-related color vision impairment usually results in blue-yellow color discrimination loss or, less frequently, a combination of blue-yellow and red-green loss. The eyes may be unequally involved, and the course is variable depending on exposure and other factors. The pathogenesis of occupational color vision loss has not been elucidated; it may be due to, e.g. a direct action of neurotoxins on receptors, possibly on the cone’s membrane metabolism, and/or to an interference with neurotransmitters within the retina. Other possible pathogenetic mechanisms, such as a direct effect to the optic nerve, have also been suggested. Occupational color vision loss is usually sub-clinical, and workers are unaware of any deficit. It can be assessed using sensitive tests, such as the Farnsworth-Munsell 100 Hue (FM-100) or the Lanthony D-15 desaturated panel (D-15 d). The latter is the most widely used for studies in groups of exposed workers, and offers the possibility of a quantitative evaluation of the results by calculation of the Bowman’s Color Confusion Index (CCI), or of the Vingrys’ and King Smith’s Confusion Index (CI). Other advantages of D-15 d are the possibility to perform the test directly at the workplace, and the reproducibility when performed in standardized conditions. In most cases, occupation-related color vision impairment is correlated to exposure levels, and has often been observed in workers exposed to environmental concentrations below the current occupational limit proposed by the ACGIH. Progression with increasing cumulative exposure has been reported, while reversibility is still discussed. Acquired color vision impairment related to occupational exposure to styrene, perchloroethylene (PCE), toluene, carbon disulfide, n-hexane, solvent mixtures, mercury and some other chemicals are discussed. Results show that color vision testing should be included in the evaluation of early neurotoxicity of chemicals in exposed workers. The D-15 d would be useful in the surveillance of workers exposed to solvents and other chemicals toxic to the visual system.
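To make the quantitative scoring mentioned above more concrete, the sketch below shows one way Bowman's Color Confusion Index (CCI) for the Lanthony D-15d can be computed: the total color-difference distance traced by the subject's cap arrangement is divided by the distance traced by a perfect arrangement, so error-free performance gives a CCI of 1.0 and larger values indicate more confusion. This is an illustrative implementation only, not the scoring software used in these studies, and the cap chromaticity coordinates are left as an input because the published values are not reproduced here.

```python
# Illustrative sketch of Bowman's Color Confusion Index (CCI) for the
# Lanthony D-15d test. cap_coords maps cap number -> (u', v') chromaticity
# coordinates (published values not included here); the ordering is the
# subject's arrangement of the 15 caps starting from the reference cap.
from math import hypot
from typing import Dict, List, Tuple

Coord = Tuple[float, float]

def path_length(order: List[int], cap_coords: Dict[int, Coord]) -> float:
    """Sum of color-difference distances between successively placed caps."""
    total = 0.0
    for a, b in zip(order, order[1:]):
        (u1, v1), (u2, v2) = cap_coords[a], cap_coords[b]
        total += hypot(u2 - u1, v2 - v1)
    return total

def color_confusion_index(subject_order: List[int],
                          cap_coords: Dict[int, Coord]) -> float:
    """CCI = subject's path length / path length of the perfect arrangement."""
    perfect_order = sorted(cap_coords)          # caps in numerical order
    return (path_length(subject_order, cap_coords)
            / path_length(perfect_order, cap_coords))

# Usage (with real published cap chromaticities supplied by the caller):
# cci = color_confusion_index([1, 2, 4, 3, ...], d15d_cap_coordinates)
# A CCI of 1.0 indicates error-free performance; higher values indicate
# greater colour confusion.
```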
Neurotoxicology. 2003 Aug;24(4-5):711-6.
Color discrimination impairment in workers exposed to mercury vapor.
Urban P, Gobba F, Nerudová J, Lukás E, Cábelková Z, Cikrt M.
Source: National Institute of Public Health, Srobárova 48, 100 48 10 Prague, Czech Republic. [email protected]
To study color discrimination impairment in workers exposed to elemental mercury (Hg) vapor.
Twenty-four male workers from a chloralkali plant exposed to Hg vapor, aged 42+/-9.8 years, duration of exposure 14.7+/-9.7 years, were examined. The 8h TWA air-borne Hg concentration in workplace was 59 microg/m(3); mean Hg urinary excretion (HgU) was 20.5+/-19.3 microg/g creatinine; mean Hg urinary excretion after the administration of a chelating agent, sodium 2,3-dimercapto-1-propane-sulfonate (DMPS), was 751.9+/-648 microg/48h. Twenty-four age- and gender-matched control subjects were compared. Visual acuity, alcohol intake, smoking habits, and history of diseases or drugs potentially influencing color vision were registered.
The Lanthony 15-Hue desaturated test (L-D15-d) was used to assess color vision. The results were expressed quantitatively as Bowman’s Color Confusion Index (CCI), and qualitatively according to Verriest’s classification of acquired dyschromatopsias.
The CCI was significantly higher in the exposed group than in the control (mean CCI 1.15 versus 1.04; P=0.04). The proportion of subjects with errorless performance on the Lanthony test was significantly lower in the Hg exposed group compared to referents (52% versus 73%; P=0.035). The exposed group showed higher frequency of type III dyschromatopsias (blue-yellow confusion axis) in comparison with the control group (12.5% versus 8.3%), however, the difference did not reach statistical significance. Multiple regression did not show any significant relationship between the CCI, and age, alcohol consumption, or measures of exposure.
In agreement with previous studies by Cavalleri et al. [Toxicol. Lett. 77 (1995) 351; Environ. Res. Sec. A 77 (1998) 173], the results of this study support the hypothesis that exposure to mercury vapor can induce sub-clinical color vision impairment. This effect was observed at an exposure level below the current biological limit for occupational exposure to mercury. This raises doubts on the actual protection afforded by this limit concerning the effect of mercury on color vision.
Bioinorg Chem Appl. 2003:199-214.
Sensory perception: an overlooked target of occupational exposure to metals.
Source: Cattedra di Medicina del Lavoro, Dipartimento di Scienze Igienistiche, Università di Modena e Reggio Emilia, Via Campi 287 Modena (MO) 41100, Italy. [email protected]
The effect of exposure to industrial metals on sensory perception of workers has received only modest interest from the medical community to date. Nevertheless, some experimental and epidemiological data exist showing that industrial metals can affect vision, hearing and olfactory function, and a similar effect is also suggested for touch and taste. In this review the main industrial metals involved are discussed. An important limit in available knowledge is that, to date, the number of chemicals studied is relatively small. Another is that the large majority of the studies have evaluated the effect of a single chemical on a single sense. As an example, we know that mercury can impair hearing, smell, taste, touch and also vision, but we have scant idea if, in the same worker, a relation exists between impairments in different senses, or if impairments are independent. Moreover, workers are frequently exposed to different chemicals; a few available results suggest that a co-exposure may have no effect, or result in both an increase and a decrease of the effect, as observed for hearing loss, but this aspect certainly deserves much more study. As a conclusion, exposure to industrial metals can affect sensory perception, but knowledge of this effect is yet incomplete, and is largely inadequate especially for an estimation of “safe” thresholds of exposure. These data support the desirability of further good quality studies in this field.
Neurotoxicology. 2000 Oct;21(5):777-81.
Evolution of color vision loss induced by occupational exposure to chemicals.
Gobba F, Cavalleri A.
Source: Dipartimento di Scienze Igienistiche, Università di Modena e Reggio Emilia, Modena, Italy. [email protected]
The evolution of occupationally induced color vision loss was studied in workers exposed to various chemicals. Exposure was evaluated by biological monitoring or personal air samplers, and color vision using the Lanthony D-15 desaturated panel (D-15 d). The effect of short-term interruption of exposure was studied in 39 Styrene (St) exposed workers: at a first examination a dose-related color vision loss was disclosed; a re-test performed after one month’s interruption of exposure did not show any improvement of the effect. The evolution during longer periods was studied in another group of 30 St workers. Exposure and color vision were evaluated, then a follow-up was done 12 months later: the exposure was unmodified or slightly decreased in 20 subjects, and D-15 d outcomes remained unchanged, while St levels had increased and color vision loss progressed in the other 10. Similar results were obtained in 33 PCE exposed dry-cleaners: no change in color perception was observed in 14 workers whose exposure decreased, while in the other 19 a rise in PCE levels was followed by a significant color vision worsening. In 21 Hg exposed workers whose mean urinary excretion of Hg was threefold the BEI proposed by ACGIH, a dose-related impairment in color perception was observed. 12 months after a marked reduction of exposure, an almost complete recovery of the impairment was observed. Our data show that an increase in exposure can induce a worsening in color vision loss. A short interruption in exposure did not reduce the effect. A more prolonged reduction of dose reversed color vision loss in Hg exposed workers, while in solvent-exposed individuals the progression deserves further evaluation. D-15 d proved a useful test for studies on the evolution of color perception in workers exposed to eye-toxic chemicals.
Environ Res. 1998 May;77(2):173-7.
Reversible color vision loss in occupational exposure to metallic mercury.
Cavalleri A, Gobba F.
Source: Sezione di Medicina Preventiva dei Lavoratori, Università di Pavia, Pavia, Italy.
Color vision was evaluated in twenty-one mercury exposed workers and referents matched for sex, age, tobacco smoking, and alcohol habits. The Lanthony 15 Hue desaturated panel (D-15 d) was applied. In the workers, mean urinary Hg (HgU) was 115+/-61.5 microg/g creatinine; in all but one the values exceeded the biological limit (BEI) proposed by the American Conference of Governmental Industrial Hygienists. A dose-related subclinical color vision impairment was observed in Hg-exposed workers compared to the referents. Just after the survey, working conditions were improved. Twelve months later the workers were reexamined. Mean HgU was 10.0 microg/g creatinine and in no subjects was the BEI exceeded. Color perception was significantly improved compared to the first examination and, furthermore, no differences were observed between exposed workers and referents. The results add evidence that the color vision loss observed during the first part of the study was related to Hg exposure and, moreover, show that this effect is reversible. These data indicate that metallic Hg can induce a reversible impairment in color perception. This suggests that color vision testing should be included in studies on the early effects of Hg. The possibility of applying the D-15 d as an early effect index in the biological monitoring of Hg exposed workers should also be entertained.
Toxicol Lett. 1995 May;77(1-3):351-6.
Colour vision loss in workers exposed to elemental mercury vapour.
Cavalleri A, Belotti L, Gobba F, Luzzana G, Rosa P, Seghizzi P.
Source: Sezione di Medicina Preventiva dei Lavoratori, University of Pavia, Italy.
We evaluated colour vision in 33 workers exposed to elemental mercury (Hg) vapour and in 33 referents matched for sex, age, alcohol consumption and cigarette smoking. The results were expressed as colour confusion index (CCI). In the workers urinary excretion of Hg (HgU) ranged from 28 to 287 micrograms/g creatinine. Subclinical colour vision loss, mainly in the blue-yellow range, was observed in the workers. This effect was related to exposure, as indicated by the correlation between HgU and CCI (r = 0.488, P < 0.01). In the workers whose HgU exceeded 50 micrograms/g creatinine, mean CCI was significantly increased compared to the matched referents. The results suggest that exposure to elemental Hg inducing HgU values exceeding 50 micrograms/g creatinine can induce a dose-related colour vision loss.
Neurotoxicol Teratol. 1990 Nov-Dec;12(6):669-72.
Colour vision loss among disabled workers with neuropsychological impairment.
Mergler D, Bowler R, Cone J.
Source: Groupe de recherche-action en biologie du travail, Université du Québec à Montréal, Canada.
Test performance on a neurobehavioural battery was examined with respect to acquired colour vision loss among patients with a history of neurotoxin exposure. The study group included 14 men and 7 women with clinically diagnosed neuropsychological impairment (mean age: 41.3 +/- 8.1 years; mean educational level: 13.4 +/- 1.4 years). Verbal and visual ability, memory and psychomotor function were assessed with the California Neuropsychological Screening Battery. Colour vision was assessed with the Lanthony D-15 desaturated colour arrangement panel. Acquired dyschromatopsia was present in 17 patients (80.9%), 11 of whom manifested patterns of Type II colour vision loss. Simple regression analysis of neuropsychological test performance with respect to colour vision loss, using age-adjusted Z-scores, revealed significant relationships (p less than or equal to 0.05) solely for tests which rely heavily on the visual system. Significant differences in visual task test scores were also observed with the type of dyschromatopsia (Kruskal-Wallis, p less than or equal to 0.05). These findings suggest that poor performance on visual tasks and colour vision loss may both result from damage to neuro-ophthalmic pathways or that loss of integrity of the peripheral visual pathways may affect visual task performance. The authors propose that visual testing should be incorporated into neurobehavioural test batteries.
Author: Arrani Ashritha
Department of Pharmacology, G. Pulla Reddy college of pharmacy, Osmania University, Hyderabad, Telangana, India – 500 028.
Vaccines are biological preparations that promote immunity and contributed significantly to 19th-century research. A vaccine contains proteins similar to those of the disease-causing virus and is usually made up of weakened or killed forms of microbes. Vaccines trigger the immune system to recognize the antigens and produce antibodies (Kurup & Thomas, 2020). An edible vaccine is a type of vaccine in which selected genes are introduced into plants and the transgenic plant is prompted to produce the encoded proteins (Jyoti Saxena & Shweta Rawat, 2013). Edible vaccines are prepared from genetically modified plants. They can be produced by integrating a transgene into a selected plant cell. Edible vaccines are currently designed for animal and human use (Kurup & Thomas, 2020). Compared with other common vaccines, edible vaccines are more economical, efficacious, and harmless. They promise a better prevention approach (Maxwell, 2014; Lal et al., 2007; van der Laan et al., 2006).
Edible vaccines provide great scope for reducing various diseases such as measles, hepatitis B, cholera, and diarrhea, especially in developing countries (Saxena et al., 2006). Edible vaccines are also known as oral vaccines, food vaccines, or dietary vaccines. This article focuses on the development of edible vaccines and the various ways in which the technology has developed over the years.
Keywords: edible vaccines, algal vaccines, peanut allergy, classical swine fever, malaria.
A vaccine is a biological preparation whose goal is to stimulate the immune system by stimulating the production of antibodies. The idea of vaccination was first introduced by Edward Jenner with the smallpox vaccine in 1796 (Ulmer, Valley and Rappuoli, 2006). Conventional vaccines have several limitations. One of the major problems is safety concerns. Another limitation is the need for storage under refrigerated conditions. Vaccines are typically manufactured by industrial processes, thus making them costly and beyond reach in developing countries. For this reason, edible vaccines are seen as an ideal substitute for conventional vaccines (Xing Santosuosso et al., 2004; Lycke & Bemark, 2010; Lycke, 2012).
Edible vaccines are usually plants that produce antigens, and thus require only basic agricultural knowledge. The purification and downstream processing (DSP) steps make conventional vaccines more expensive, but these steps are eliminated in edible vaccines, making them cost-efficient. The principle of edible vaccines is to convert dietary foods into potential vaccines to prevent infectious diseases. It involves introducing desirable genes into plants and then recruiting these genetically modified plants to produce the encoded proteins. Edible vaccines have also been explored for preventing autoimmune diseases, for contraception, for cancer treatment, etc. They provide an inexpensive, non-invasive, simple, safe, and effective method of vaccine production (Mason et al., 1992).
Edible vaccines can be easily administered. The chance of infection with plant pathogens is very small or insignificant, as plant pathogens cannot infect humans. Edible vaccines for various diseases such as measles, cholera, foot and mouth disease, as well as hepatitis B, C, and E, are produced in plants such as bananas, tobacco, and potatoes (Giddings et al., 2000). Edible algae-based vaccination is a vaccine strategy under preliminary research in which a genetically engineered subunit vaccine is combined with an immunological adjuvant in the microalga Chlamydomonas reinhardtii (Specht & Mayfield, 2014). Algal vaccines are similar to plant vaccines. Algae are sometimes called single-cell water-borne plants. Researchers often develop algal vaccines using Chlamydomonas reinhardtii, Dunaliella salina, and cyanobacteria (Ma et al., 2020).
History of edible vaccines
Hiatt et al. in 1989 tried to produce antibodies in plants that could serve as vaccines, thus initiating research into edible vaccines. The first report of an edible vaccine (a surface protein from Streptococcus) in tobacco, at 0.02% of the total leaf protein level, emerged in 1990 in the form of a patent application published under the international Patent Cooperation Treaty. In 1992, Arntzen and colleagues introduced the concept of genetically modified plants as a system for the production and delivery of subunit vaccines using edible tissues of transgenic crop plants. They found that this concept could overcome the limitations of traditional vaccines, thereby advancing the research on edible vaccines (Mor et al., 1998). In the 1990s, the Streptococcus mutans surface protein antigen A was expressed for the first time in tobacco. Around the same time, the successful expression of hepatitis B surface antigen (HBsAg) in tobacco plants was also achieved (Mason et al., 1992). To prove that plant-derived HBsAg could stimulate mucosal immune responses through the oral route, potato tubers were used as an expression system and were optimized to enhance accumulation of the protein in plant tubers (Richter et al., 2000). Parallel to the evaluation of HBsAg extracts from the plant, Mason and Arntzen examined plant expression of other vaccine candidates, including the labile toxin B subunit (LT-B) of enterotoxigenic Escherichia coli (ETEC) and the capsid protein of the Norwalk virus. Plant-derived proteins that are properly assembled into active oligomers may elicit the expected immune response when given orally to animals (Mason et al., 1998).
In 1998, a new era in vaccination was launched when researchers supported by the National Institute of Allergy and Infectious Diseases (NIAID) demonstrated for the first time that an edible vaccine could safely generate significant immune responses in humans. In 2003, Sala and a research team reported that proteins produced in these plants trigger the mucosal immune response, which was the main goal behind the concept of edible vaccines (Jyoti Saxena & Shweta Rawat, 2013). In 2003, the first documented algal-based vaccine antigen was reported, consisting of a foot-and-mouth disease antigen fused to the B subunit of cholera toxin, which delivered the antigen to the mucosal surfaces in mice. The vaccine was grown in C. reinhardtii algae and provided oral immunization in mice, but was hindered by low vaccine antigen expression levels (Specht & Mayfield, 2014).
Mechanism of action
Edible vaccines mainly stimulate mucosal immunity, so they need to activate the mucosal immune system (MIS). The MIS is the first line of defense, as mucosal surfaces are where most human pathogens start their infection. Mucosal surfaces are found lining the digestive tract, the respiratory tract, and the urinary tract. There are several routes by which antigen can enter the gut mucosal layer, namely M cells and macrophages. Macrophages are usually activated by interferon-gamma. This activation leads to macrophages presenting different peptides to helper T cells, which in turn drive antibody production (Johansen et al., 1999). M cells are another route by which antigens are transported to T cells. Antigenic epitopes are then presented on antigen-presenting cells (APCs) with the help of helper T cells, which then activate B cells. The activated B cells migrate to mesenteric lymph nodes, where they mature into plasma cells, which then migrate to the mucosal membranes to release immunoglobulin A (IgA). IgA then gives rise to secretory IgA, which is transported into the lumen. The production of secretory IgA is a complex process, since 50% of secretory IgA (sIgA) in the intestinal (gut) lumen is produced by B1 cells in the lamina propria in a T-cell-independent fashion. These sIgAs are polyreactive and usually recognize foreign antigens. In the lumen, sIgA makes the invading pathogen less effective by reacting with specific antigenic epitopes (Walmsley & Arntzen, 2000).
Drawbacks of conventional vaccines
Conventional (common or standard) vaccines have been, and remain, the basis of immunization against many diseases. However, a few problems arise when using this type of vaccine.
Some of these problems are: safety concerns associated with live or attenuated preparations, the need for storage and transport under refrigerated conditions, and the high cost of industrial production, purification, and downstream processing.
Benefits of edible algal vaccines
Edible algal vaccines share the general advantages of plant-based edible vaccines: they are economical because costly purification and downstream processing are not required, they do not depend on refrigerated storage, and they are safe and simple to administer orally. In addition, microalgae such as Chlamydomonas reinhardtii are easy to propagate and cultivate, are generally regarded as safe (GRAS), and can store antigens in a stable form, for example bound to starch in the chloroplast.
Use of edible algal vaccines for various diseases
Classical swine fever virus (CSFV) is an infectious virus that causes classical swine fever (Moennig, 2000). Even though vaccines are the leading prevention method against CSFV, attenuated vaccines and C-strain vaccines have been reported to lack the ability to differentiate between infected and vaccinated animals (Markowska-Daniel et al., 2001). The E2 protein has strong antigenic properties and elicits a specific immune response. In research carried out by He et al., this E2 protein from CSFV was expressed in Chlamydomonas reinhardtii (He et al., 2007). Immune experiments were performed on animal models to determine the immunogenicity of the expressed protein. There was an increase in serum antibodies against CSFV when the extract was administered subcutaneously (He et al., 2007).
Foot and mouth disease virus (FMDV) causes a major livestock disease that has been largely controlled by vaccination (Sobrino et al., 2001). Both inactivated and attenuated vaccines are used but are generally not considered completely safe. The FMDV protein Virion Protein 1 (VP1) contains important epitopes that can induce antibodies (Brown et al., 1991). Cholera toxin B subunits were used because they are very effective as a mucosal adjuvant that can bind to the intestinal epithelium through monosialotetrahexosyl ganglioside (GM1) receptors. The plasmid pACTBVP1 was transformed into the microalga Chlamydomonas reinhardtii by biolistic bombardment. After transformation, the cells were placed under dim light until they turned yellow, as reported by Suzuki et al. (Suzuki and Bauer, 1992). Selected transformants (streptomycin resistance) were analyzed by polymerase chain reaction (PCR) with ChIL primers. PCR products were then analyzed by Southern blotting. The presence of the fusion protein (CTB-VP1) was assessed by western blotting. Enzyme-linked immunosorbent assay (ELISA) was carried out for quantitative analysis. The fusion protein showed a weak but significant affinity for GM1 ganglioside. The research by Sun et al. showed that Chlamydomonas expressed CTB-VP1 in substantial amounts (Sun et al., 2003). It also showed that this fusion protein bound to GM1 ganglioside, which means it can be used as a potential source of a mucosal vaccine (Sun et al., 2003).
Hepatitis B is one of the most common chronic illnesses, affecting around 350 million people worldwide (Lavanchy, 2004). The hepatitis B surface antigen (HBsAg) has been used as a vaccine for quite some time; early vaccines relied on HBsAg isolated from the plasma of chronically infected carriers, while current vaccines are produced mainly in yeast (Valenzuela et al., 1982). An antibody against HBsAg was expressed in the diatom Phaeodactylum tricornutum (Hempel et al., 2011). The study showed that the human antibody CL4mAb was synthesized and accumulated in the endoplasmic reticulum of the microalga. When the same antibody was expressed in the plant Nicotiana tabacum, much lower expression levels were obtained (Yano et al., 2004), and the protein degradation reported to be a major problem in plants (De Muynck et al., 2009) was not observed in P. tricornutum. ELISA tests on crude and purified protein showed that the algal antibody binds the HBsAg antigen with high efficiency. In addition to the antibody, the HBsAg antigen itself, which is widely used as the hepatitis B vaccine, was expressed in P. tricornutum, where it accounted for 0.7% of the total soluble protein and was recognized both by the algae-produced antibody and by a commercially produced antibody (Hempel et al., 2011). In another study, Geng et al. expressed HBsAg in the alga Dunaliella salina after transformation by electroporation (Geng et al., 2003). Chloramphenicol-resistant strains were selected and characterized by molecular analysis, and the successful integration of the HBsAg gene into the D. salina genome was confirmed by PCR and Southern blotting. ELISA showed that D. salina expressed a substantial amount of HBsAg protein, and this HBsAg was found to be immunologically active (Chen et al., 2001).
Human papillomavirus (HPV) accounts for about 6.1% of all cancer cases worldwide, and HPV DNA is found in 99.7% of cervical cancers, with more than half of cases caused by HPV16 (Walboomers et al., 1999). Conventional drugs are of limited use against cervical tumors: they are usually toxic and recurrence rates of roughly 10-20% are reported (Chen et al., 2001). The high-risk HPV E7 (hr-HPV-E7) oncoprotein, which is involved in the malignant transformation of host cells, is an attractive candidate for vaccine development (McLaughlin-Drubin & Münger, 2009). In a study by Demurtas et al., an attenuated form of the HPV-E7 protein was expressed in the microalga C. reinhardtii and showed positive results in pre-clinical animal models (Demurtas et al., 2013). Until now this antigen had been produced mainly for biochemical and physiological studies (Alonso et al., 2002), but its expression in algae opens up new possibilities; future work could aim at higher expression levels so that the algal antigen can be used directly as an HPV vaccine.
Malaria is a disease caused by the parasitic protozoan Plasmodium falciparum and is transmitted by the bite of a mosquito. Each year roughly one million people die of malaria and 300 to 500 million clinical infections occur (Snow et al., 2005). The most advanced malaria vaccine candidate targets the sporozoite stage and is designated RTS,S/AS02A. In a study by Dauvillée et al., granule-bound starch synthase (GBSS) was fused to three malaria vaccine antigens and expressed in the microalga C. reinhardtii, where the fusion proteins accumulated bound to starch granules in the chloroplast. The amount of starch-bound antigen that accumulated was sufficient to protect mice against a lethal malaria challenge, and this protection was attributed to inhibition of erythrocyte invasion. Because the antigen is stored bound to starch in the chloroplast, the vaccine is comparatively stable; in addition, the alga has Generally Regarded As Safe (GRAS) status and is very easy to propagate and cultivate (Dauvillée et al., 2010). In a study by Gregory et al. (2012), the malarial transmission-blocking vaccine candidates Pfs25 and Pfs28 were expressed in C. reinhardtii. The algae-produced Pfs25 and Pfs28 were found to be structurally similar to the native proteins, and because the chloroplast does not glycosylate proteins, the algal system was the only expression system able to produce these two proteins in their native, unglycosylated form. Structural similarity was confirmed using monoclonal antibodies that bind only to correctly folded Pfs25 and Pfs28 (Gregory et al., 2012). In yeast-produced Pfs25 homologs the proper disulfide bonds were found to be absent, whereas the algal system expressed Pfs25 with its disulfide bonds intact (Saxena et al., 2006). Antibodies raised against algae-produced Pfs25, but not Pfs28, showed a significant ability to block malaria transmission, consistent with previous work (Gozar et al., 1998; Gozar et al., 2001).
Staphylococcus aureus is a Gram-positive bacterium belonging to the phylum Firmicutes. It is a human pathogen that colonizes the nasal mucosa and the skin (Lowy, 1998) and is responsible for bacteremia, which can lead to secondary infections such as endocarditis, pneumonia, and meningitis (Moreillon & Que, 2004). Dreesen et al. (2010) used the D2 domain of the S. aureus fibronectin-binding protein, which attaches the bacterium to the extracellular matrix of host cells (Patti & Höök, 1994), fused to the cholera toxin B (CTB) subunit (Kurup & Thomas, 2020). CTB improves antigen-specific immune responses (Sun et al., 1994). The CTB-D2 fusion antigen was codon-optimized and expressed in the chloroplast of the microalga C. reinhardtii. The CTB-D2 antigen was resistant to conditions mimicking the gastric environment and low pH, bound GM1 ganglioside, and triggered both systemic and mucosal immune responses. When CTB-D2-expressing algae were administered orally, the mice were protected from lethal doses of Staphylococcus aureus (Dreesen et al., 2010).
The novel coronavirus (2019-nCoV), responsible for coronavirus disease 2019 (COVID-19), was officially named by the World Health Organization (WHO) on 12 January 2020 and had its first reported outbreak linked to the Huanan South China Seafood Market in Wuhan City, Hubei Province, China (Guo et al., 2020). According to Sami et al. (2020), algae can be recommended both preventively and curatively against SARS-CoV-2 (Severe Acute Respiratory Syndrome Coronavirus 2) (Sami et al., 2020). It has been suggested that sulfated polysaccharides can inhibit viral infection by interfering with the SARS-CoV-2 spike (S) protein, which binds the heparan sulfate co-receptor in host tissues. Kwon et al. reported strong in vitro binding between certain sulfated polysaccharides and the S-protein. These non-anticoagulant polysaccharides could be delivered orally, by inhaler, or as a nasal spray; heparin is not available orally, but fucoidans extracted from edible seaweeds are categorized as GRAS (Kwon et al., 2020). C-phycocyanin, a pigment-binding protein found in the blue-green alga Spirulina, exhibits anti-tumor, anti-inflammatory, and anti-oxidant activities (Cian et al., 2012).
Tzachor et al. (2021) used an aqueous extract of Spirulina as a therapy for cytokine storm and reported a reduction in tumor necrosis factor (TNF)-α secretion by macrophages and monocytes (Tzachor et al., 2021). Anti-TNF therapy, or TNF-α blockade, is important for reducing the inflammation-driven capillary leak caused by the key inflammatory cytokines that deteriorate lung function in COVID-19 patients (Robinson et al., 2020). Algal nutraceuticals, with their anti-inflammatory, antimicrobial, immunostimulatory, and immunomodulatory properties, are also valuable for boosting immunity, preventing disease, and treating disorders associated with severe SARS-CoV-2 infection, for example through anti-inflammatory treatment and tissue repair (Ratha et al., 2020). C-phycocyanin, a pigment-binding protein with anti-inflammatory, anti-oxidant, and anti-tumor properties, was tested for its ability to reduce secretion of the protein that drives cytokine storms in COVID-19 patients (Tzachor et al., 2021). Although it will not replace SARS-CoV-2 vaccination, the algal extract may be used as a dietary supplement to help prevent cytokine storms once patients are diagnosed, especially in high-risk populations such as the elderly and those with severe underlying conditions. This is because the influx of pro-inflammatory cytokines such as TNF-α, interleukin (IL)-2, IL-7, IL-10, macrophage inflammatory protein-1A (MIP-1A), and monocyte chemoattractant protein-1 (MCP-1) found in critically ill COVID-19 patients can cause acute respiratory distress syndrome (ARDS), the main cause of death in these patients (Ruan et al., 2020; McGonagle et al., 2020; Berndt et al., 2021).
Current research also makes use of the single-celled alga C. reinhardtii. In Italy, work is being carried out at the Laboratory of Photosynthesis and Bioenergy of the Department of Biotechnology at the University of Verona, directed by professors Roberto Bassi and Luca Dall'Osto. The group introduced a DNA sequence encoding the SARS-CoV-2 receptor binding domain (RBD) protein using both nuclear transgenesis and chloroplast transformation. The ability to perform this kind of genetic engineering in the model alga Chlamydomonas reinhardtii provides a basis for developing an oral vaccine against the recently emerged SARS-CoV-2 strain responsible for the current pandemic threatening global health. One of the great advantages of algae is that they grow and multiply quickly: according to Cutolo, if contamination is prevented it is possible to accumulate up to 1 mg of recombinant antigen per gram of dried algal biomass. Additionally, Berndt et al. (2021) used C. reinhardtii to produce recombinant SARS-CoV-2 spike RBD protein and found that it binds recombinant Angiotensin Converting Enzyme 2 (ACE2) with an affinity similar to that of mammalian-expressed RBD, demonstrating the potential of algae to produce functional, correctly folded recombinant spike RBD protein for use in large-scale serological tests or as potential vaccine antigens (Berndt et al., 2021).
An Israeli biotech company, TransAlgae, identified and inserted a portion of the SARS-CoV-2 spike protein gene into algae so that the algae manufacture spike protein capable of stimulating immune responses, while claiming that adding the spike protein in tiny amounts does not change the safety profile of the algae for humans. It is very likely that the company is using the same model alga, C. reinhardtii, for genetic modification and antigen accumulation, with the modified algae lyophilized to generate an oral capsule (Sami et al., 2020). A key feature of using algae for vaccine production is that an oral vaccine can be produced simply by lyophilizing and encapsulating the algae: the algal cell wall protects the antigens and bioactive molecules from the harsh gastric environment, ensuring their arrival at the intestinal immune system (Gunasekaran & Gothandam, 2020).
Peanut allergy is an IgE (immunoglobulin E)-mediated adverse reaction to a set of proteins found in the legume Arachis hypogaea (peanut). Patients who are allergic to peanuts show a strongly T-helper type 2 (TH2)-polarized response to peanut and carry IgE that recognizes one or more peanut allergens (Flinterman et al., 2008). After exposure to peanut, IgE bound to tissue-resident mast cells and circulating basophils binds the cognate allergen, causing rapid degranulation and release of histamine and inflammatory mediators. This cascade leads to allergic reactions ranging from minor rashes and gastrointestinal distress to fatal systemic anaphylaxis and organ failure (Hsu & MacGlashan, 1996).
Algae-produced Ara h 1 (Arachis hypogaea allergen 1) core domain and Ara h 2 (Arachis hypogaea allergen 2) show reduced affinity for IgE from peanut-allergic patients. Immunotherapy using the algae-derived Ara h 1 core domain was also found to protect against peanut-induced anaphylaxis in a murine model of peanut allergy. The microalga used to produce this candidate vaccine is Chlamydomonas reinhardtii (Gregory et al., 2016).
Hypertension, also known as raised or high blood pressure, is a condition in which the blood vessels are under persistently raised pressure. Compared with traditional hypertension therapies, immunotherapies are a promising alternative because they are cheaper and offer better patient compliance. A chimeric antigen intended to prevent hypertension, consisting of a genetic fusion between angiotensin II and the hepatitis B core antigen (HBcAg) serving as a carrier, was the first algal vaccine to be expressed from the nuclear genome without chloroplast targeting (Specht & Mayfield, 2014). This candidate immunogen, designated HBcAgII, was expressed in the alga Chlamydomonas reinhardtii, which serves as an excellent vaccine expression system and delivery host. Transgenic C. reinhardtii lines were recovered, and the expected recombinant protein was detected by western blot and ELISA analysis. Expression levels of the recombinant protein reached up to 0.05% of total soluble protein in some transgenic lines, a significant advance toward an edible algal vaccine for the treatment of hypertension (Soria-Guerra et al., 2014).
White spot disease (WSD), caused by the white spot syndrome virus (WSSV), is a dangerous viral disease of various shrimp species that has caused high mortality for decades and continues to inflict huge losses on the shrimp industry worldwide. However, the mechanism by which the virus enters and spreads through shrimp cells is not fully understood. An in vitro binding assay using the envelope protein VP28 fused to enhanced green fluorescent protein (VP28-EGFP) showed binding to shrimp cells, providing direct evidence that VP28-EGFP can attach to shrimp cells at pH 6.0 within 0.5 hours; the protein was then observed to enter the cytoplasm 3 hours post-adsorption. Plaque inhibition tests showed that a polyclonal antibody against VP28 (a major WSSV envelope protein) could neutralize WSSV and prevent viral infection, and ELISA confirmed that the VP28 envelope protein competes with WSSV for binding to shrimp cells. Overall, VP28 acts as an attachment protein that binds shrimp cells and helps the virus enter the cytoplasm. During WSSV infection, the interaction between viral envelope proteins and protein receptors on the surface of target cells is therefore an important step in viral entry and replication (Kiataramgul et al., 2020).
An oral control strategy against WSSV in shrimp has been developed using engineered edible microalgae. A codon-optimized synthetic gene encoding WSSV VP28 was incorporated into the chloroplast genome of C. reinhardtii. Oral administration of the transgenic algae increased the survival rate of shrimp exposed to WSSV compared with the control group. Even algae lacking a cell wall persisted for at least 80 minutes under conditions mimicking the shrimp digestive system. In WSSV challenge tests, a high survival rate (87%) was recorded in shrimp fed the codon-optimized VP28 line mixed into their diet, indicating that this line could be used to control the spread of WSSV in shrimp populations. This algal strategy offers a new, effective, rapid, and inexpensive way to control diseases in aquatic animals through oral delivery (Kiataramgul et al., 2020).
Table 1. Available edible algal vaccines for various diseases
| No. | Disease | Algal expression host |
| 1 | Foot and mouth disease | Chlamydomonas reinhardtii |
| 2 | Classical swine fever | Chlamydomonas reinhardtii |
| 3 | Human papillomavirus | Chlamydomonas reinhardtii |
| 7 | White spot syndrome | Chlamydomonas reinhardtii |
Edible vaccines hold great promise as cost-effective, easy-to-administer, easy-to-store vaccines, especially for poor and developing countries. Initially believed to be useful only for preventing infectious diseases, they have also found application in the prevention of autoimmune diseases, birth control, and cancer therapy. There is growing acceptance of genetically engineered crops in both industrialized and developing nations, so in the future there is considerable scope for edible vaccines to become a primary method of vaccination.
Among edible vaccines, edible algal vaccines are especially well placed to become a major method of vaccination, because the research available to date makes clear that algae such as Chlamydomonas can produce complex vaccine antigens that elicit immune responses suited to their intended roles as vaccines. Algae are also very useful for rapidly testing many versions of potential chimeric vaccine molecules. An algal production platform for human vaccines will likely become an alternative for very expensive vaccines such as the HPV vaccine, or for novel vaccines against diseases for which no alternative yet exists. The costs and the practical considerations of storage, delivery, and administration in limited-resource settings suggest that plant or algal production may be the only feasible route to large-scale, less expensive vaccines. This area therefore requires increased attention from research funding agencies, as well as investment from the pharmaceutical industry, so that edible algal vaccines can reach public use as early as possible.
The author is grateful to Mr. Mohammed Abdul Samad, Research Scholar, Department of Pharmacology, G. Pulla Reddy College of Pharmacy, Osmania University, Hyderabad, Telangana, India, for his immense support and guidance throughout this review work.
Conflict of interest
The authors declared no conflict of interest.
Alonso, L. G., Garcia-Alai, M. M., Nadra, A. D., Lapena, A. N., Almeida, F. L., Gualfetti, P., and Prat-Gay, G. D. 2002. High-risk (HPV16) human papillomavirus E7 oncoprotein is highly stable and extended, with conformational transitions that could explain its multiple cellular binding partners. Biochemistry, 41(33):10510–10518.
Berndt, A., Smalley, T., Ren, B., Badary, A., Sproles, A., Fields, F., Torres-Tiji, Y., Heredia, V., Mayfield, S. 2021. Recombinant production of a functional SARS-CoV-2 spike receptor binding domain in the green algae Chlamydomonas reinhardtii. PLoS ONE, 16(11):e0257089.
Brown, L. E., Sprecher, S. L. and Keller, L. R. 1991. Introduction of exogenous DNA into Chlamydomonas reinhardtii by electroporation. Molecular and Cellular Biology, 11(4):2328–2332.
Chen, Y., Wang, Y., Sun, Y. et al. 2001. Highly efficient expression of rabbit neutrophil peptide-1 gene in Chlorella ellipsoidea cells. Current Genetics, 39(5–6):365–370.
Cian RE, López-Posadas R, Drago SR, de Medina FS, Martínez-Augustin O. 2012. Immunomodulatory properties of the protein fraction from Phorphyra columbina. J Agric Food Chem, 60(33):8146-54.
Dauvillée D, Delhaye S, Gruyer S, Slomianny C, Moretz SE, d’Hulst C, et al. 2010. Engineering the Chloroplast Targeted Malarial Vaccine Antigens in Chlamydomonas Starch Granules. PLoS ONE, 5(12):e15424.
Demurtas OC, Massa S, Ferrante P, Venuti A, Franconi R, Giuliano G. 2013. A Chlamydomonas-Derived Human Papillomavirus 16 E7 Vaccine Induces Specific Tumor Protection. PLoS ONE, 8(4): e61473.
Dreesen IA, Charpin-El Hamri G, Fussenegger M. 2010. Heat-stable oral alga-based vaccine protects mice from Staphylococcus aureus infection. J Biotechnology, 145(3):273-80.
Ekam VS, Udosen EO, Chigbu AE. 2006. Comparative effect of carotenoid complex from Golden Neo-Life Dynamite (GNLD) and carrot extracted carotenoids on immune parameters in albino Wistar rats. Niger J Physiol Sci, 21(1-2):1-4.
Ellis, A. E. 1988. Current aspects of fish vaccination. Diseases of Aquatic Organisms, 4(2):159-164.
Flinterman AE, Knol EF, Lencer DA, Bardina L, den Hartog Jager CF, Lin J, et al. 2008. Peanut epitopes for IgE and IgG4 in peanut-sensitized children in relation to severity of peanut allergy. J Allergy Clin Immunol, 121(3):737-743.e10.
Geng, D., Wang, Y., Wang, P. et al. 2003. Stable expression of hepatitis B surface antigen gene in Dunaliella salina (Chlorophyta). Journal of Applied Phycology, 15:451–456.
Georgopoulou, U., Dabrowski, K., Sire, M.F. et al. 1988. Absorption of intact proteins by the intestinal epithelium of trout, Salmo gairdneri. Cell Tissue Res, 251:145–152.
Giddings, G., Allison, G., Brooks, D. et al. 2000. Transgenic plants as factories for biopharmaceuticals. Nat Biotechnol, 18(11):1151–1155.
Gozar MM, Muratova O, Keister DB, Kensil CR, Price VL, Kaslow DC. 2001. Plasmodium falciparum:immunogenicity of alum-adsorbed clinical-grade TBV25-28, a yeast-secreted malaria transmission-blocking vaccine candidate. Exp Parasitol, 97(2):61–69.
Gozar, M. M. G., Price, V. L. and Kaslow, D. C. 1998. Saccharomyces cerevisiae-secreted fusion proteins Pfs25 and Pfs28 elicit potent Plasmodium falciparum transmission-blocking antibodies in mice. Infection and Immunity, 66(1): 59–64.
Gregory JA, Li F, Tomosada LM, Cox CJ, Topol AB, Vinetz JM, et al. 2012. Algae-Produced Pfs25 Elicits Antibodies That Inhibit Malaria Transmission. PLoS ONE, 7(5): e37179.
Gregory, J. A, Topol, A. B, Doerner, D. Z, and Mayfield, S. 2013. Alga-produced cholera toxin-pfs25 fusion proteins as oral vaccines. Applied and Environmental Microbiology, 79(13):3917–3925.
Hsu, C. and MacGlashan, D. 1996. ‘IgE antibody up-regulates high affinity IgE binding on murine bone marrow-derived mast cells’, Immunology Letters, 52(2–3):129–134.
Gregory, J.A, Shepley-McTaggart, A, Umpierrez, M, Hurlburt, B.K, Maleki, S.J, Sampson, H.A, Mayfield, S.P, Berin, M.C. 2016. Immunotherapy using algal-produced Ara h 1 core domain suppresses peanut allergy in mice. Plant Biotechnology Journal, 14(7):1541–1550.
Guo, YR, Cao, QD, Hong, ZS. et al. 2020. The origin, transmission and clinical therapies on coronavirus disease 2019 (COVID-19) outbreak – an update on the status. Military Med Res, 7(1):11.
He, D.M., Qian, K.X., Shen, G.F., Zhang, Z.F., Li, Y.N., Su, Z.L., Shao, H.B. 2007. Recombination and expression of classical swine fever virus (CSFV) structural protein E2 gene in Chlamydomonas reinhardtii chloroplasts. Colloids and Surfaces B: Biointerfaces, 55(1):26–30.
Hempel, F.; Lau, J.; Klingl, A.; Maier, U.G. 2011. Algae as Protein Factories: Expression of a Human Antibody and the Respective Antigen in the Diatom Phaeodactylum tricornutum. PLoS ONE, 6(12): e28424.
Hirlekar R. and Bhairy S. 2017. Edible vaccines: an advancement in oral immunization. Asian Journal of Pharmaceutical and Clinical Research, 10 (2): 82-88.
Johansen F-E, Pekna M, Norderhaug IN et al. 1999. Absence of epithelial immunoglobulin A transport, with increased mucosal leakiness, in polymeric immunoglobulin receptor/secretory component-deficient mice. Journal of Experimental Medicine, 190(7):915-921.
Saxena, J., Rawat, S. 2014. Edible Vaccines. In: Ravi, I., Baunthiyal, M., Saxena, J. (eds) Advances in Biotechnology. Springer, New Delhi, 207-226.
Kiataramgul, A, Maneenin, S, Purton, S, Areechon, N, Hirono, I, Brocklehurst, T.W, Unajak, S. 2020. An oral delivery system for controlling white spot syndrome virus infection in shrimp using transgenic microalgae. Aquaculture, 521:735022
Kim N-S, Mbongue JC, Nicholas DA, Esebanmen GE, Unternaehrer JJ, Firek AF, et al. 2016. Chimeric Vaccine Stimulation of Human Dendritic Cell Indoleamine 2, 3-Dioxygenase Occurs via the Non-Canonical NF-κB Pathway. PLoS ONE, 11(2):e0147509.
Kim, TG. , Galloway, D.R. & Langridge, W.H.R. 2004. Synthesis and assembly of anthrax lethal factor-cholera toxin B-subunit fusion protein in transgenic potato. Mol Biotechnol, 28(3):175–183.
Kurup, V.M, Thomas, J. 2020. Edible Vaccines: Promises and Challenges. Molecular Biotechnology, 62(2):79–90.
Kwon, P.S., Oh, H., Kwon, SJ. et al. 2020. Sulfated polysaccharides effectively inhibit SARS-CoV-2 in vitro. Cell Discovery, 6(1):1-4.
Van der Laan JW, Minor P, Mahoney R, Arntzen C, Shin J, Wood D. 2006. WHO informal consultation on scientific basis for regulatory evaluation of candidate human vaccines from plants. Vaccine, 24(20):4271–4278.
Lal, P. et al. 2007. Edible vaccines: current status and future. Indian Journal of Medical Microbiology, 25(2):93–102.
Lavanchy D. 2004. Hepatitis B virus epidemiology, disease burden, treatment, and current and emerging prevention and control measures. Journal of Viral Hepatitis, 11(2):97–107.
Lowy, F. D. 1998. Staphylococcus aureus Infections. New England Journal of Medicine, 339(8):520–532.
Lycke, N. 2012. Recent progress in mucosal vaccine development: potential and limitations. Nature Reviews Immunology, 12(8):592–605.
Lycke, N., Bemark, M. 2010. Mucosal adjuvants and long-term memory development with special focus on CTA1-DD and other ADP-ribosylating toxins. Mucosal Immunology, 3(6); 556–566.
Ma K, Bao Q, Wu Y, Chen S, Zhao S, Wu H and Fan J. 2020. Evaluation of Microalgae as Immunostimulants and Recombinant Vaccines for Diseases Prevention and Control in Aquaculture. Frontiers in Bioengineering Biotechnology, 8:1331
Markowska-Daniel I, Collins RA, Pejsak Z. 2001. Evaluation of genetic vaccine against classical swine fever. Vaccine, 19(17-19):2480-2484.
Mason HS, Haq TA, Clements JD, Arntzen CJ. 1998. Edible vaccine protects mice against Escherichia coli heat-labile enterotoxin (LT): potatoes expressing a synthetic LT-B gene. Vaccine, 16(13):1336-43.
Mason, H. S., Lam, D. M., & Arntzen, C. J. 1992. Expression of hepatitis B surface antigen in transgenic plants. Proceedings of the National Academy of Sciences of the United States of America, 89(24):1745–11749.
Maxwell, S. 2014. Analysis of Laws Governing Combination Products, Transgenic Food, Pharmaceutical Products and their Applicability to Edible Vaccines, Brigham Young University Prelaw Review, 28(1):8.
McGonagle D, Sharif K, O’Regan A, Bridgewood C. 2020. The Role of Cytokines including Interleukin-6 in COVID-19 induced Pneumonia and Macrophage Activation Syndrome-Like Disease. Autoimmunity Reviews, 19(6):102537.
McLaughlin-Drubin ME, Münger K. 2009. The human papillomavirus E7 oncoprotein. Virology, 384(2):335-44.
Moennig V. 2000. Introduction to classical swine fever: virus, disease and control policy. Veterinary Microbiology, 73(2-3):93-102.
Mor TS, Gómez-Lim MA, Palmer KE. 1998. Perspective: edible vaccines–a concept coming of age. Trends in Microbiology, 6(11):449-53.
Moreillon P, Que YA. 2004. Infective endocarditis. Lancet, 363(9403):139-49.
De Muynck B, Navarre C, Nizet Y, Stadlmann J, Boutry M. 2009. Different subcellular localization and glycosylation for a functional antibody expressed in Nicotiana tabacum plants and suspension cells. Transgenic Research, 18(3):467-482.
Patti JM, Höök M. 1994. Microbial adhesins recognizing extracellular matrix macromolecules. Current Opinion Cell Biology, 6(5):752-758.
Ratha SK, Renuka N, Rawat I, Bux F. 2020. Prospective options of algae-derived nutraceuticals as supplements to combat COVID-19 and human coronavirus diseases. Nutrition, 83:111089.
Robinson PC, Richards D, Tanner HL, Feldmann M. 2020. Accumulating evidence suggests anti-TNF therapy needs to be given trial priority in COVID-19 treatment. Lancet Rheumatology, 2(11):e653-e655.
Ruan Q, Yang K, Wang W, Jiang L, Song J. 2020. Clinical predictors of mortality due to COVID-19 based on an analysis of data of 150 patients from Wuhan, China. Intensive Care Medicine, 46(5):846-848.
Sami N, Ahmad R, Fatma T. 2020. Exploring algae and cyanobacteria as a promising natural source of antiviral drug against SARS-CoV-2. Biomedical Journal, 44(1):54-62.
Saxena AK, Singh K, Su HP, Klein MM, Stowers AW, Saul AJ, Long CA, Garboczi DN. 2006. The essential mosquito-stage P25 and P28 proteins from Plasmodium form tile-like triangular prisms. Nature Structural and Molecular Biology, 13(1):90-91.
Snow RW, Guerra CA, Noor AM, Myint HY, Hay SI. 2005. The global distribution of clinical episodes of Plasmodium falciparum malaria. Nature, 434(7030):214-217.
Sobrino F, Sáiz M, Jiménez-Clavero MA, Núñez JI, Rosas MF, Baranowski E, Ley V. 2001. Foot-and-mouth disease virus: a long known virus, but a current threat. Veterinary Research, 32(1):1-30.
Soria-Guerra, R.E., Ramírez-Alonso, J.I., Ibáñez-Salazar, A. et al. 2014. Expression of an HBcAg-based antigen carrying angiotensin II in Chlamydomonas reinhardtii as a candidate hypertension vaccine. Plant Cell, Tissue and Organ Culture, 116(2):133–139.
Specht EA, Mayfield SP. 2014. Algae-based oral recombinant vaccines. Front Microbiology, 5:60.
Sun JB, Holmgren J, Czerkinsky C. 1994. Cholera toxin B subunit: an efficient transmucosal carrier-delivery system for induction of peripheral immunological tolerance. Proceedings of National Academy of Sciences, 91(23):10795-10799.
Sun M, Qian K, Su N, Chang H, Liu J, Shen G. 2003. Foot-and-mouth disease virus VP1 protein fused with cholera toxin B subunit expressed in Chlamydomonas reinhardtii chloroplast. Biotechnology Letters, 25(13):1087-1092.
Suzuki JY, Bauer CE. 1992. Light-independent chlorophyll biosynthesis: involvement of the chloroplast gene chlL (frxC). Plant Cell, 4(8):929-940.
Twyman RM, Schillberg S, Fischer R. 2005. Transgenic plants in the biopharmaceutical market. Expert Opinion on Emerging Drugs, 10(1):185-218.
Tzachor A, Rozen O, Khatib S, Jensen S, Avni D. 2021. Photosynthetically Controlled Spirulina, but Not Solar Spirulina, Inhibits TNF-α Secretion: Potential Implications for COVID-19-Related Cytokine Storm Therapy. Marine Biotechnology, 23(1):149-155.
Ulmer JB, Valley U, Rappuoli R. 2006. Vaccine manufacturing: challenges and solutions. Nature Biotechnology, 24(11):1377-1383.
Valenzuela P, Medina A, Rutter WJ, Ammerer G, Hall BD. 1982. Synthesis and assembly of hepatitis B virus surface antigen particles in yeast. Nature, 298(5872):347-350.
Walmsley AM, Arntzen CJ. 2000. Plants for delivery of edible vaccines. Current Opinion Biotechnology, 11(2):126-129.
Walboomers, J. M. M., Jacobs, M. V., Manos, M. M., Bosch, F. X., Kummer, J. A., Shah, K. V., Snijders, P. J. F., Peto, J., Meijer, C. J. L. M., & Muñoz, N. 1999. Human papillomavirus is a necessary cause of invasive cervical cancer worldwide. The Journal of Pathology, 189(1):12–19.
Wang J, Thorson L, Stokes RW, Santosuosso M, Huygen K, Zganiacz A, Hitt M, Xing Z. 2004. Single mucosal, but not parenteral, immunization with recombinant adenoviral-based vaccine provides potent protection from pulmonary tuberculosis. J Immunology, 173(10):6357-6365.
Yano A, Maeda F, Takekoshi M. 2004. Transgenic tobacco cells producing the human monoclonal antibody to hepatitis B virus surface antigen. Journal of Medical Virology, 73(2):208-215.
Zapanta PE, Ghorab S. 2014. Age of Bioterrorism: Are You Prepared? Review of Bioweapons and Their Clinical Presentation for Otolaryngologists. Otolaryngol Head and Neck Surgery, 151(2):208-214.
Tablets are not real computers
Diane labels a post from Red Queen as ‘one of the best posts ever.’ She quotes Red Queen:
We all know this about tablet “computers”: they are not real “working” machines. When I proposed buying a tablet for my student the dude behind the counter told me: “Don’t do it. You’ll have to buy a keyboard, it has way less memory and no ports, a smaller screen and slower speed: it’s just not what a serious student needs. By the time you’re done adding on, you’ll have a machine almost as expensive as a real computer with far less functionality”.
Any parent will have received that advice from just about any computer salesman. And while there are a few serious students out there who no doubt feel otherwise, I think it’s a fairly safe bet that the word on the street is: tablets are no substitute for a computer; students need computers.
Red Queen goes on to say that tablet computers are ‘frivolous electronics‘ and Diane includes that quote too.
Of course this belies actual reality. Tablets and smartphones continue to become both more powerful and more popular with every iteration. It is projected that sometime this year total tablet shipments will begin to surpass total PC shipments. Schools and educators that are using tablets are finding that they are quite robust computing machines, often able to do things easier or better than is possible with the larger, heavier, and often clunkier form factor of a laptop or desktop. While many people still may prefer a more expensive and robust computing device, it is ludicrous to say in September 2014 that an iOS or Android tablet isn’t a ‘real computer’ or that ‘serious students’ only should use laptops or desktops.
Finland and South Korea and Poland don’t have digital technology in their classrooms
In another post, Diane cites excerpts from Amanda Ripley’s new book, The Smartest Kids in the World:
The anecdotal evidence suggests that Americans waste an extraordinary amount of tax money on high-tech toys for teachers and students, most of which have no proven learning value whatsoever. . . . In most of the highest-performing systems, technology is remarkably absent from classrooms.
Old-school can be good school. Eric’s high school in Busan, South Korea had austere classrooms with bare-bones computer labs. Out front, kids played soccer on a dirt field. From certain angles, the place looked like an American school from the 1950s. Most of Kim’s classrooms in Finland looked the same way: rows of desks in front of a simple chalkboard or an old-fashioned white board, the kind that was not connected to anything but the wall. . . . None of the classrooms in [Tom’s] Polish school had interactive white boards.
There are numerous issues with these types of quotes. For instance…
- The unstated assumption that performance on standardized assessments of low-level thinking is how we should judge educational success. I agree that if our goal is better bubble test achievement, we can drill-and-kill kids all day without any technology whatsoever. We’ve had over a century to perfect the numbing of student minds in analog environments. But if we want to prepare students to be empowered learners and doers within current and future information, economic, and learning landscapes, it’s impossible to do that while shunning technology.
- The disparagement of digital technologies as ‘toys.’ Digital tools and environments are transforming everything around us in substantive, transformative, and disruptive ways. They are not mere toys unless we choose to only use them in that way. It’s a sad indictment of us as educators and communities that it is taking us so long to awaken to the educational possibilities of learning technologies and the Internet.
- The equation of interactive white boards (and, in a later quote, student response systems) as the sum and substance of educational technology. Those of us who decry such replicative technologies agree that those are insufficiently empowering of students and thus unlikely to make much of an impact. But putting powerful digital tools into the hands of students that let them create, make, connect, collaborate, and make an impact, both locally and globally? That’s a different story. We need a different vision, one in which we don’t merely use digital technologies – and rows of desks in tight formation – to broadcast to students while they sit passively and watch or listen. And we need to stop pointing at those lackluster wastes of learning power and saying, “See? Told you technology doesn’t make a difference.”
- The nostalgic yearning for the simple classrooms and schools of yesteryear, uncomplicated by modern learning tools (or, apparently, grass in the schoolyard). Ah, yes, remember when life (supposedly) wasn’t so complicated? Does anyone really want to return to 1950s beliefs and worldviews about learning and society? And if they do, what disservice do we do our youth when we prepare them for 60 years ago rather than now and tomorrow?
So, to sum up, so far Diane appears to be against online learning and digital educational games and simulations, and she shares posts that are against tablet computers or paint all technologies as disruptive and distracting. And that’s dangerous because people listen to her. She and many of her fans seem to ignore the fact that it’s awfully difficult to prepare students for success in a digital, global world without giving them access to digital technologies and Internet access. Railing against computer expenditures and Internet connectivity for our children is irresponsible, especially when those funds come from different sources and thus can’t be spent on teachers, support staff, professional development, or educational programming.
Now, to give Diane some credit, there are a few concerns raised in these posts that are worth noting:
- It’s a reasonable question to ask whether school equipment and construction funds would be better spent on upgrading facilities or purchasing computers for students, particularly given the time horizons of both construction bonds and technology obsolescence. That’s a difficult decision and I’m glad that I don’t have to make it at the scale that the L.A. Unified school district does.
- I, too, have grave misgivings about the Amplify tablets that are being used in Guilford County, North Carolina, but not just because they’re tablets.
- When Andreas Schleicher from OECD is quoted as saying that ‘people always matter more than props,’ of course that is dead on. The success or failure of learning technologies in schools always will depend more on us as educators than on the tools themselves.
- Diane quotes Carlo Rotella, who says that “if everyone agrees that good teachers make all the difference, wouldn’t it make more sense to devote our resources to strengthening the teaching profession with better recruitment, training, support and pay? It seems misguided to try to improve the process of learning by putting an expensive tool in the hands of teachers we otherwise treat like the poor relations of the high-tech whiz kids who design the tool. . . . Are our overwhelmed, besieged, haphazardly recruited, variably trained, underpaid, not-so-elite teachers, in fact, the potential weak link in Amplify’s bid to disrupt American schooling?” Leaving aside the false dichotomy of ‘we can strengthen the teaching profession or we can give students computers but not both,’ this is a pretty insightful statement. As I noted in an earlier post, we have an appalling lack of technology support and training for our educators. We have to stop pretending that if we insert computers into the learning-teaching process that magic will happen and start doing a much better job of helping educators empower students with potentially-transformative digital tools.
These concerns, however, are more specific and nuanced and aren’t painted with an extremely broad anti-technology brush. If Diane typically discussed learning technologies in thoughtful and careful ways like these, I’d have much less concern. Loyal readers here know that I myself often express misgivings about ineffective technology integration and implementation in schools. But to say that there’s no educational worth whatsoever in online learning, educational simulations, tablet computers, or whatever Diane rants against next is patently false.
Whether we like it or not, digital technologies in education are here to stay. As I said in my earlier post,
the issue is not – as [Diane] seems to believe – that [digital tools] never have any value. The issues are 1) Under what circumstances do these new learning tools and spaces have value?, and 2) How do we create learning and policy environments in which that value is most likely to be realized?
I’ll keep wishing that Diane one day recognizes this. I’ll also keep wishing that Diane one day recognizes the irony (hypocrisy?) of decrying students’ use of digital technologies while simultaneously employing those tools herself to great effect to further her goals and increase her visibility.
Image credit: What if…, Darren Kuropatwa
Why are we comparing ourselves to other nations through a lens of a political, social and economic vacuum? All of those countries listed happen to have a more isolated geography, a huge social safety net and a political system that looks very different from ours (often missing the multiple municipalities we have).
It’s a straw man theory. They build up technology as if those of us who are using technology don’t criticize it and don’t apply it to sound educational theory.
I get so tired of the “tablets aren’t real computers” schtick. My kids use multiple devices, but when they need to choose one to take with them on learning trips or simply as which one they prefer, they choose their tablets.
The keyboard recommendation ALWAYS comes from adults. Kids who did not grow up typing do not ask for keyboards. Occasionally, parents freak out that their kids will never learn to type properly. I usually respond that the kids probably will not need to learn “proper typing.”
My kids are not going to grab a laptop on the go. Have you ever seen a kid take a photo with a laptop? Awkward. They grab their tablet, take a photo, and then add it to whatever project they’re creating, add anecdotal information via voice or text, and move on. Tell me how they’re not creating anything useful.
I wrote this post last spring about this same technology issue:
Excellent points about over-simplifying the downsides of technology, and you’ve pointed out where Diane is dead-on.
I’m opposed to these purchases at this point, and we’re getting ready to do a large number of them in California including in my own district. Why? Because if you are just buying these tools to test kids, they will NEVER learn to use them as digital tools, and indeed may become Luddites should that be their only exposure to them.
Happily, I have had my own students record their first podcast this week. A much more engaging use of technology than Amplify.
I understand what you’re saying about the ‘buying for testing purposes’ issue, Alice. But we can’t create opportunities for students to learn to use ed tech in powerful ways if we don’t first have devices for them. So unless the technologies are locked away except during testing season, can’t we also repurpose them for other, better learning?
But if all they do is testing it will be worse because that will set the path for how these tools are used going forward.
Will I be happy to have more iPad/MacBook carts at my site to use, and do I think they will be used pretty well at my site? Yes. I have NO confidence this will be the case elsewhere. If all the training is aligned to implementing Amplify, that’s all that will be used, by and large.
There is a lot of money being poured into making testing suites for iPads.
It already is a big selling point of the Amplify tablet.
A lot of districts/schools are using tech primarily for testing, and calling that “technology integration.”
If you look at some of the main selling/talking points around Amplify, they blur the line between formative and summative assessment, and call it personalized learning.
Part of the issue is that the definition of what constitutes “tech integration” is fuzzy, and the marketing departments of test vendors are more than happy to attempt to provide clarity.
Both Justin Reich:
and Barbara Bray and Kathleen McClaskey:
have excellent resources on this issue. I think we need to keep calling out corporations’ and policymakers’ (and educators’) claims of ‘personalization’ for what they really are: running students through their paces on low-level thinking tasks using computers instead of textbooks and worksheets.
My recent TEDxDesMoines talk addressed this as well:
I disagree with the unstated assumption that somehow, because technology hasn’t been well or properly integrated into schools that some large number of kids are tech Luddites. The alacrity with which they hacked the tablets in LA is a fine example, and also supports what many say here about bad implementation based on faulty ideas about what tech can do. From what I’ve seen, the kids are often leading the way in being tech savvy.
Thanks very much for this post – I read Diane Ravitch’s blog and welcome the wide range of items she shares, esp. items from local news that I would never find otherwise. Yet I have also stopped commenting there; since I teach online (I have taught online courses at Univ. of Oklahoma for over 10 years, and I love it), my comments have been dismissed (even rudely) by the regular commenters at that blog who consider online learning to be one of the forces of evil to be stopped at all costs. Yet surely there is no one-size-fits-all solution to any dimension of education, and whatever solutions we come up with are going to involve technology, including online resources and online courses. Teaching online is not for everybody, nor is learning online, but I am very frustrated when I see people who have never taught online dismissing it out of hand. And it happens a lot. Sigh. I also agree with your final observation that it is the height of irony for people to be disparaging online communication while using an online forum to do so! Ha! 🙂
There have been so many examples of online learning snake oil that many have been biased against it for that reason and have been therefore isolated from the things it can do well. The best way forward is for the tech community to calmly show the difference between what is good and what is not. Guilt by association is best corrected by good and complete information. Defend your area of expertise from those who are misrepresenting it’s capabilities and exploiting it for profit to the detriment of both students and the tech community.
You’ve really lost me here, Scott. For many years, I was a mini edtech leader and evangelist from the classroom. About 4 years ago I woke up and began to get a peek at the bigger picture of what was really happening in education. The edtech movement was hijacked by the edreform movement, right under the noses of edtech leaders.
The much bigger picture is the length to which the edreform movement is willing to go ($$$) to put America’s schoolchildren online – not so they can learn in innovative ways – but so they can take high stakes tests. And of course be plopped down for hours in front of commercial software “guaranteed” to increase student test scores. It’s happening every day, for millions of American kids.
There may well be pockets of innovative edtech educators and even schools or districts, but if the edreform movement has its way (enabled by computer driven high stakes tests and mind-numbing “personalized learning” commercial software programs) – in a very few years, there will be no public school system in this country in which educators can innovate.
I remember attending a tech conference keynote (NCCE, Portland, 2011), where many were initially shocked that Yong Zhao was not speaking about technology, but about the much bigger picture of what is happening to education with high stakes testing. He got a rousing standing ovation, as I recall.
I’m all for technology in education, but it is now being used primarily to further the high stakes testing agenda of those pushing the edreform movement (many of whom are drooling at the prospect of much more – a handy result of common core).
Say what you want about Diane Ravitch and technology, but the fight she is waging encompasses much more than technology, and most folks know that. She is also one amazing blogger (from her iPad, btw).
Mark, I share the concerns about the corporate ed reformers’ takeover of the educational technology movement. But the answer is not to make wide-sweeping proclamations about the non-utility of digital learning tools but rather to paint a different picture about how those tools can and should be used instead. In other words, in a digital, global world – a world in which all knowledge work essentially is done with computers – we should not be fostering the view that we should be walking away from digital technologies in schools (which is what I think she and many of her fans do).
I know that Diane’s fight is bigger than simply ed tech, and I greatly appreciate her role in that fight and her blogging proclivity (which is why I still read and pass along her material daily). But she needs to do better than her current walk away, knee jerk reactions to thinking and writing about ed tech.
Ok Scott. I encourage you to do two things:
Engage in the discussion – on her blog and many others – explaining how the tools should be used (not for testing and test prep) and their potential. I think you overestimate the number of Diane’s followers who are digital phobic.
Then go and call out Pearson, Macmillan, and the producers of an ever growing heap of software rubbish for what it is, explaining of course what new technologies should be used for. Call out districts and states for spending m/billions on test prep software. Call out Duncan for encouraging it. And pass the word on to guys like Richardson, Warlick, Utecht, etc. to make their voices heard on this outside of edtech circles.
Thanks – Mark
Thanks, Mark. These are great suggestions and I do try and do these things as often as I can on my blog, on hers, and elsewhere. I’ll try to step it up a notch and also encourage others to do the same.
Hi Scott – I understand Mark’s (and others) concerns here, but I agree, Diane doesn’t get the positive role tech can take in learning and she tends to dismiss it in general. I doubt that is her intent, but it sure comes across that way. I find myself wincing at comments she makes and only wish I had the time to respond more often than I do.
I just had a similar discussion with her here: http://dianeravitch.net/2013/09/14/sharon-higgins-what-stem-crisis/comment-page-1/#comment-306724
So Mark’s point about engaging her there is well taken, however her STEM position was an overlook on her part, this is more deep seated and will require more engagement I suspect.
Well said, and an important addition that places this discussion in it’s correct context.
This bit of mansplaining is reflective of the dominance of a small group of white, male edtech voices whose collective insights amount to little more than a pissing contest for social media status. This is a really petty critique of one of the most important and relevant voices in education. It’s clear that her voice is incredibly inconvenient to the opportunistic tech determinism that continues to marginalize all those who attempt any challenge. Shame on you.
Wow, that’s pretty harsh, Judy. Instead of personally insulting me, could you maybe engage around the substantive issues that I raised? I’d love to hear your thinking about how my pleas for a vision of students as empowered users of digital technologies who are prepared for a technology-suffused world – rather than a nostalgic view of analog schooling – somehow become ‘mansplaining’ and ‘a really petty critique’ and a ‘pissing contest for social media status.’
I would welcome a civil discourse with you here, rather than the personal insults you just heaved at Scott. I am a woman, a teacher, and someone who uses technology in teaching. I share and present about the learning in my classroom multiple times a year. I don’t think Scott did any “mansplaining” in this post, and trust me… I’ve suffered from mansplaining often. I recognize it when I see it.
I love Diane’s work and what she tries to do for education every day. However, I agree with Scott in that she is short-sighted when it comes to technology. I don’t think that replacing teachers with technology is ever the answer; but blending high tech use, low tech use, and no tech at all is an environment that kids must be able to access.
Because Diane has such a visible profile, it is even more important that her words be available for analysis. Her voice carries a lot of weight, and I feel that she is under-informed in this area of education.
I don’t think it’s mansplaining, because Scott points out, Diane herself is a pretty proficient user of technology and more importantly social media tools, so she knows about this.
Judy, I read Scott’s post closely and nowhere do I see any unprofessionalism or that it would have been written differently if Diane was man or woman. In fact, you’re the one that brings up that she’s a woman. As a woman in edtech, I believe you’ve mischaracterized this thorough article. There are plenty of places you can look for such behavior but not in this post at this time. I also think that Diane is taking a dangerous turn by not educating herself on the facts about technology. People are vital. Many are misusing technology – absolutely. However, just because people are doing poorly at using technology effectively doesn’t dismiss the importance of having the effective use of technology happen in every school. The statistics on the digital divide are startling and many poorer children are being dished out a grave disservice by not having access to good uses of technology. It is not good to have a thought leader – male or female- dismiss the importance of making sure every student has access and making the digital divide a thing of the past. I’m glad Scott wrote the article, but I think you should rethink your comment – perhaps it was written in haste but it doesn’t display the type of digital discussions we should hope to encourage in the future generations we’re teaching. Thanks for the opportunity to converse.
Excellent and thoughtful post Scott. If you are looking for substantive arguments, then Ravitch will continuously disappoint. She has found a popular and profitable existence in pandering to the masses and capitalizing on and furthering selective and sometimes flat out misinformation. She has no problem profiting handsomely from education, while decrying others for the same. In light of your post I find it particularly interesting that she was fine speaking at a large Ed Tech conference here in California and taking their money and sharing none of these views. Not surprisingly, when asked about the content of her message and the accuracy of her claims, her only response was to laud herself for receiving a standing ovation…not unlike allowing high stakes test scores to serve as a simplistic analysis of an entire school.
The Ravitch you describe does not exist, you have put forth a poorly constructed straw man. All of her arguments are completely substantive. Applied to the likes of Michelle Rhee and other so called reformers, your diatribe would be dead on.
Scott, I applaud you for your attempt at intelligent discourse, and rising above immature comments. You bring up valid points and a perspective I think many (including me) share. Technology integration is way more than testing (Totally agree with MB abt the keyboard statement) but will never be the magic bullet. I love my techno savvy PLN who never acts as if it is!
Please keep blogging and calling it like you see it!
Scott! What a great post. I too am a big proponent of using technology in the classroom especially tablets. But we use tablets to CREATE and share with the world, not to take tests or play on apps. Thanks for writing such a great post and for sharing your insight! You are someone I highly look up to and admire. Keep fighting the good fight, there’s many of us who fight in the trenches daily!
I agree with Judy. It’s a sad world we live in where a typical white male who is only concerned with elevating his social media status can so blatantly attack one of the great thinking women of our generation. Diane Ravitch is our great Oracle of Education and how dare you belittle her thoughts and insights for your own gain. You sir should be ashamed of yourself. Just because your “technology” does not fit into her master plan, don’t feel obligated to insult her. Just take the time to realize that maybe you aren’t able to truly understand how forward thinking her ideas for education are. As I read through the rest of these comments, I can’t believe how brainwashed you all are. Wake up! Recognize the true brilliance of Diane while you can.
Like I said in my comment to Judy, I wish you would engage us on the merits of our ideas rather than resorting to personal attacks. I get that you and Judy are both fans of Diane. That’s great. I am too. That doesn’t mean that I agree with her perspectives on digital technologies and their lack of place in P-12 classrooms. As I said in my post above, it’s awfully difficult to prepare students for success in a digital, global world without giving them access to digital technologies and Internet access. It would be nice if Diane (and you) – like myself and many of the folks who have commented here – talked about how we can get powerful learning technologies into the hands of children in ways that work rather than why we shouldn’t get technologies into the hands of children at all. The former recognizes the urgency and the challenge. The latter simply denies the need and reality.
More discussion around ideas and solution-building rather than personal insults, please. It also would be nice if you didn’t hide behind anonymity. The rest of us are willing to put our names and reputations behind our comments…
Well said, Scott. I think before we proclaim Ms. Ravitch an oracle, we should remember that she was a big proponent of the programs she now mocks and fights. She has since gone on to profit from a stark change in belief and I often wonder if her guilt has fueled such one-sided views, which have increasingly become populist ones lacking nuance or acknowledgement that education is extremely complex. Most importantly though, it seems that her power comes from the very technology she glibly dismisses. I appreciate her thoughts and fight for teachers, but I am often left with a sense that she’d simply like to turn the clock back.
The reason she has done a 180 on her beliefs is that she took a dispassionate look at the results of those policies, found them not only lacking but harmful, and made the appropriate change in her thinking. Are you saying she is not entitled to be paid for her efforts and work? As with other false criticisms, yours are way off. She is a bona fide expert, and your ad hominem attack is simply boring for those of us who are used to seeing and debunking such attacks.
Good thoughts, and I hope you share them with Diane. I am disappointed by some of the comments but don’t despair. iPads are not magic bullets, and edtech will amplify both good and bad practice. A senior educator lamented to me the lost art of note taking; privately, I could lament the lost art of butter churning.
I only wish you’d woven your more nuanced critique throughout the post or started with your last paragraph. As a newbie to Dangerously Irrelevant, it took me until the end to determine that you were not as knee-jerk in opposition to Ravitch’s ideas as your lead would suggest. Your post reads like edtech sloganeering until the end.
Thanks, Rachel, for both the comment and the friendly pushback. I tried to critically reflect on Diane’s posts rather than simply saying “tech is good (or bad)!” Sorry if that didn’t come across to you as well as I would have liked.
I can’t imagine how anyone familiar with the technology lifecycle would be unable to make the ‘difficult decision’ of whether it is appropriate to use 25 year construction bonds to pay for them. This seems incredibly shortsighted from multiple perspectives.
Another poster referred to Ravitch’s calling out the STEM hysteria. The constant droning on about how unqualified our graduates are is bunk. We have no shortage of highly qualified engineers and scientists. This is purely a play for visas by corporate America. Your larger point about the need for everyone to integrate technology into their lives at some level is solid, I think. But to defend the sky-is-falling rhetoric of the tech giants is just wrong.
That is some incredibly bad writing that I just posted. Oh, to have the edit button for comments.
Thanks for the comment, Wilbert. It reads okay to me!
I think districts have to find some way to pay for computers for kids. I’m not familiar with California school financing options, nor am I a school finance expert in any way whatsoever, but I do know that general funds are hard to come by and other levies typically must be spent in other directions. Other than one-time monies (e.g., grants) which come with sustainability concerns, physical plant and equipment levies often are one of the few options available to schools to get technologies in the hands of students. We have numerous districts here in Iowa paying for 1:1 initiatives through these levies and it seems to be working?
I’m familiar with the STEM worker hysteria as well and have collected some debunking resources on that front. That said, I didn’t think my post above defended the ‘sky is falling’ rhetoric of the tech giants. Clue me in so I can see where you’re coming from? Thanks!
What worries me is the negative turn the opposition immediately takes when they don’t agree with someone’s opinions. I am a female online doctoral student and educator. There are a multitude of people in the ed tech community having these same discussions, and we are not a small group of white males.
I find no issue with Dr. McLeod’s post nor do I think he is writing, or ever writes, to promote his online social status. Perhaps this can be most clearly seen in his recent TED Talk from Des Moines Iowa: http://www.youtube.com/watch?v=GyIl4y_MRbU&feature=share&list=PLsRNoUx8w3rMbC7NKi-cC_Y_rHmv_43FB where he clearly discusses the need to empower students to use technology to follow their passions and the need for schools to give students the freedom to utilize this passion to learn.
These are the discussions we need to be having when it comes to technology and education. Until we change the conversation from fear to empowerment, nothing is going to change and this endless debate will continue.
While I certainly don’t agree that all tech is bad, I have issues with the move to tablets by many schools. I teach computer science. I can’t teach programming on a tablet. I can teach programming *for* a tablet, but I need a laptop or desktop to do that. We have iPads and laptops in our school, and I’ve seen teachers do some great things with both kinds of devices. My concern with tablets is that its default mode is passive consumption. It takes some work to get past that. I think most teachers I’ve seen do get past that, but as some commenters point out, it’s possible to exploit the consumption model of tablets for things like testing and the worst kind of “learning”.
Too many people assume that technology is neutral, and that the uses of it are what skew it in a particular direction; however, the creators of technology are not neutral in their decisions. They think about how they want you to use their hardware and software. They might want you tied into consuming just their products, etc. Think about kindles and nooks and ebooks in general.
Personally, I talk about all these things with my students because who knows what technology is coming next, and we should always get past the shininess of new tech, and examine the ways that tech both enhances and perhaps restrains our work. That doesn’t mean we don’t use it. It means we don’t use it thoughtlessly.
Truly excellent points, I’ve learned something here.
I think you, along with every other technophile invited to the conversation about how to improve schools, could reflect on this statement:
“I think that she should be a little more careful with her wording and claims, particularly given her self-professed lack of computer fluency.”
Frustrating as some of her points may be, I know I am equally frustrated by schools that implement technology initiatives with poor planning or little planning, burgeoning the arguments of those who would steer clear of digital tools.
Because planning, professional learning and collaboration are so often absent when schools adopt digital tools and expect teachers and students to leverage them, Ravitch’s skepticism is understandable. Most importantly, her lack of “computer fluency” shouldn’t factor into her contributions to this important conversation at all. On the contrary, she might be situated in exactly the right position to raise important cautions about investments in tablets. Hopefully those cautions serve to strengthen the planning, professional learning, and collaboration around every new initiative.
I, for one, hope Ravitch continues to raise red flags and issue cautions. Your post is evidence that her misgivings about technology spur critical thinking about ed tech. Who wants the Internet to be an echo chamber? The diverse viewpoints represented in your comment thread are indicative of the power of the Internet and networks to deepen people’s thinking. After all, the Internet and networks are the real transformative tools at the heart of societal and hopefully school transformation. We have to value the ideological friction in networks. We have to value the exchange and welcome dissent if schools are ever going to change. If Ravitch were to undergo the change of heart you hope for, educators would be deprived of a rich opportunity for critical thinking, the chance to consider your claims alongside hers and think more deeply about the role of digital tools in schools.
Thanks for your post and the opportunity to comment.
Thank you for chiming in, Joe. A few thoughts, if I may…
1. You said, “I know I am equally frustrated by schools that implement technology initiatives with poor planning or little planning, burgeoning the arguments of those who would steer clear of digital tools. Because planning, professional learning and collaboration are so often absent when schools adopt digital tools and expect teachers and students to leverage them, Ravitch’s skepticism is understandable.”
Fair enough. I’m frustrated by poor tech planning and lack of effective PD too. It’s a reflection of how poor most of our administrators are at effective technology leadership. That’s why I spend most of my efforts focusing on principals and superintendents. If they don’t get it, it’s not going to happen well. BUT… it’s a long leap from ‘do tech better (and here’s how)’ to ‘tech has no place in schools.’ And, as I think her own words show, it’s the latter where Diane seems to live more often than not.
2. You said, “Most importantly, her lack of ‘computer fluency’ shouldn’t factor into her contributions to this important conversation at all.” I’m going to disagree with you there. I don’t know anything about particle physics or Spanish literature or child nutrition but I don’t go around opining on things in those important fields that I know little about. In fact, my lack of knowledge and understanding and fluency is exactly why I should keep my mouth shut. A position of ignorance is not a strong position from which to argue one’s point. Nor is a position of hypocrisy.
3. Red flags… cautions… both fine. We don’t want echo chambers. Nuanced questions and concerns… fantastic. We need all of those we can get. But “Technology has little to no worth for schoolchildren in a digital world?” I’m troubled by that. As I said before, I think that’s irresponsible given the suffusion of technology in essentially every aspect of our lives. It’s like denying the worth of writing. Or gravity.
4. My final thought… Debate is good. Ideological friction is good. Intelligent exchange and dissent are good. But can I make a plea for intelligent, informed discourse? If I start an ideological rant that vaccines have no worth, despite a wealth of medical and scientific evidence to the contrary, I should expect healthy and scornful pushback from vaccine scientists and others. If Diane rants against the very worth of educational technology for schoolchildren with little evidence behind her arguments (just lots of ideology), shouldn’t she expect the same?
Scott, I think the issue for Diane has moved well beyond what should happen in education. This is a political battle for her and she has a personal ax to grind. In my opinion, she is not interested in reasonable discourse with those who disagree with her; rather, she uses her pundit status as a bully pulpit (emphasis on the bully). Like other commenters here, I have tried to engage with her in several media, only to find her dismissive when the conversation even approaches nuance. Her agenda reads better in black and white. So I’ve decided not to waste any more time responding to anything she publishes, but I appreciate your still fighting the good fight!
One must first look at the reasons behind any disagreement before defaulting to the “he said, she said” meme as you have done here. Ravitch’s objections are not ideological or political; they are a fact-based response to ill-conceived policies that have made claims which remain unfulfilled after decades of the policies being in place. She is no pundit and no bully; she has a lifetime of experience behind her. She remains focused on what should and, more important these days, what shouldn’t happen in education. Anyone who reads her book or other writing will find this out for themselves, as others here have done (get it from the library if you don’t want to buy it). People familiar with the issues know this. As one of the people who seek to profit via reformy products designed to be sold to school systems, I suspect that you do too. Your comments are typical of those who see a threat to their market share of the new education marketplace. Ed reform is the new status quo these days.
I commented on this early and have watched an interesting conversation. Scott, I thank you for getting it rolling. I have a few things to add:
Personal attacks (see above) eliminate any reasonable attempt to talk about the issues, and diminish the point of view, as well as the credibility, of the attacker. Please stop.
Scott, you closed with Diane’s “irony (hypocrisy?) of decrying students’ use of digital technologies”. Time to catch up on your reading, my man.
I recommend everyone commenting here read Diane’s new book, “Reign of Error”. I’m only 85 pages in and have come across several positive references to student use of digital technologies – as examples of the creative, innovative education we should be offering kids. I’m not even close to the solutions sections of the book.
Is she a fan of online virtual charter schools? Of course not. If you think they might be a great idea, read chapter 17, “Trouble in E-land” (I’ve skimmed it, and I’ve read plenty elsewhere). If you think they’ll never happen, all the more reason to read it….
If you care about public education in the US, you must read this book. You’ll do something about it after you read it. It is simply amazing.
And then maybe some who have written Diane Ravitch off as an old technology fuddy-duddy will begin to change their minds. I sure hope so. Because if they choose to ignore the big picture of what’s happening in education, pick away at the reform debate for overlooking technology, and continue to talk edtech in a very small echo chamber, they will find themselves without public schools in which to teach – real soon.
Here are some links for those who wish to see how the very serious problems in the online learning sector are creating a tremendous amount of guilt by association for technology in general. Those who believe in the potential of online learning and tech in schools need to clean their own house, in this case by evicting the snake-oil salesmen. You all need to be aware of what is being done in the name of tech. Far too much of it is pretty awful. http://www.politico.com/story/2013/09/cyber-schools-flunk-but-tax-money-keeps-flowing-97375.html?hp=f3
Sorry, last time (I almost promise), but here’s a review of her book that’s as good as the best posts above at getting to the gist of the issue. http://mizmercer.edublogs.org/2013/09/23/diane-ravitch-reign-of-error-review/
In closing I’d like to thank all those here who have provided me with knowledge and details I didn’t have before.
I’m no Luddite – I’ve used a computer in my teaching since the pre-GUI days, when there were no mice and a floppy disk measured 5 1/4 inches. But I’m not convinced that screens in the classroom are the way to go. This iteration – Amplify iPads in particular – reminds me of the Chris Whittle Channel One adventure, in which schools, hoping to get a television (!) in each classroom (a miserly 16-inch set), signed on in exchange to have kids watch commercials. Resource-poor school systems are especially vulnerable to these schemes.
A few years back, my school system handed out MacBooks to teachers. They were buggy – had dual (warring) Microsoft/Apple operating systems – and came with no training for teachers. We were actually told that if we had questions about how to use them, we could go to the Apple Store! Software available throughout the system was for PCs and often not compatible. A couple of years later, support for our classroom PCs was discontinued and desktops were not replaced because teachers had laptops. So if a kid needed to print out a paper, the only resource I had available was my laptop.
I think there’s a case to be made that schools can be a safe haven from the larger society. Especially at the elementary and middle school level, the teacher’s first task is to create a community of learners. The emphasis on digital gizmos, I am afraid, can undermine that essential activity and result in what looks more like the parallel play of toddlers, where they sit side by side, engaged in similar endeavors, but really are in their own worlds. | 1 | 3 |
Ask people to imitate a pirate, and they instinctively adopt the “pirate accent” immortalized in film and television. This unique brogue is renowned for its strong “r” sound, as in “yarrr” and “arrrrr.”
Pirate imitators may wonder, “What accent am I doing? Some kinda Irish?”
The classic “pirate dialect,” in fact, is not Irish, but rather a crude imitation of the slightly similar West Country English (the dialects of Southwest England)*. Why do fictional pirates always speak in this accent? Here’s the standard explanation: During the Golden Age of Piracy, in the late seventeenth and early eighteenth centuries, many English pirates came from this region. Look up famous seadogs from the era, and you’ll find birthplaces in Bristol, Devon, and Cornwall. Mystery solved, right?
Not so fast. The golden age of piracy ended by the mid-eighteenth century. How can we collectively remember how these men spoke? And how can everyday people approximate the accent of 18th-century English pirates with such surprising verisimilitude?
I can only think of one explanation. At some point in time, some actor must have needed to play one of these pirates. Upon discovering that his pirate character was from the West Country, he decided to use the appropriate accent. And somehow this convention must have spread.
But where, and when, did this convention originate? My experience suggests the pirate brogue emerged as a dramatic staple in the 20th-Century. As a child, I was a huge fan of early pirate flicks like Treasure Island (1934) and Errol Flynn’s Captain Blood, and I don’t recall any West Country accents in those films. So perhaps it was a later phenomenon.
With this in mind, I decided to do some research on the matter. I think I may have stumbled upon a possible culprit for the Pirate accent, thanks to the website of Bonaventure, a British maritime re-enactment group:
Long John Silver lived in Bristol, England, supposedly the birthplace of Edward Teach, Blackbeard. In the early 1950s Disney produced films of “Treasure Island” (1950) and “Blackbeard the Pirate” (1952), and the same actor was used to play Silver and Teach – Robert Newton. Newton then reprised his role of Long John Silver for “Long John Silver” (1954) and the TV series “The Adventures of Long John Silver” (1955). Robert Newton was born and raised in Dorset, not far from Bristol, so he knew the West Country accent which Silver and Teach would have spoken in very well, and used it in those films.
If Disney had perhaps not cast Newton, is it possible the pirate accent would have never entered the popular consciousness?
As usual, I welcome alternate theories.
*An old post at Language Log explores a different explanation rooted in Ireland.
In “Accents of English”, John Wells notes that the pirate accent is also very similar to a Barbadian (Bajan) accent.
Maybe the accent there is just a conservative accent (from the 17th century when English sailors first landed there) and any conservative accent would sound somewhat pirate-like (or West Country-like). That’s my theory, because I’ve heard recordings of how English would’ve sounded in Shakespeare’s time and that’s what it sounds like to me.
Bajan probably also has a fair bit of Irish influence due to early immigration patterns. Although, as TT suggests, most accents of English would have been a bit more “brogue-like” before the seventeenth century.
funny, I always thought the Robert Newton theory was the widely accepted explanation. Won’t hear any argument from me!
It probably is widely accepted, although it’s a new one to me. If it holds water, it’s fascinating for two reasons. Firstly, that one actor can have such an influence on popular perceptions of a certain kind of character. Secondly, given how widespread the pirate accent is in popular culture, it’s ironic that few people watch the two films Newton performed these accents in. At least that’s my assumption–I don’t much know what eight-year-olds watch these days!
I am Dorset born and bred, as was the great Robert Newton. I even come from the same area of Dorset, the Blackmore Vale, and on most Friday or Saturday nights in the pubs around here you’ll hear Newton’s pirate accent, because me darlins that’s how we does speak AAaaarrrrrRrr.
Yeah, I’ve heard that the “pirate accent” is based on more archaic forms of english, like the use of [əɪ] instead of modern [aɪ].
Pirates don’t really talk in that strange way (almost Scottish, I would think) in historical sea-faring fiction, meaning the Hornblower series by C.S. Forester, or the Aubrey–Maturin series by Patrick O’Brian.
Both lads have done research.
These series focus on His Majesty’s Royal Navy, and not on pirates. Therefore I would expect a less localized accent, and one of mixed classes – e.g., sons of merchants, sons of peers (Aubrey), sons of doctors (Hornblower), and a big collection of deckhands who were men kidnapped and forced to sea (pressed into service by press gangs) from many walks of life and many ports of call.
I think that I’ve heard the same explanation before, on a Radio 4 programme in the early ’90s. It’s pretty obviously true, if you think about it.
But the pirate accent might not actually be that far wrong. There was evidently a “nautical English”, which will have had a strong West Country and Lancashire influence, both strongly rhotic up to the 20th century.
Robert Louis Stevenson’s 19th century portrayal of pirates speaking non-standard West Country dialects in the novel of Treasure Island almost certainly pushes the theatrical pirate accent back 80 years before the Disney film. I suspect Newton was using an accent which had a long history in the British theatre.
It’s more older/conservative English than archaic. As with Irish, Scottish and Jamaican English, many West Country accents maintain features that have disappeared in other varieties of British English.
O’Brien is probably more on the mark! Although it’s true that many pirates came from the West Country, this region is hardly monolithic in terms of accent/dialect features.
A very good point about Stevenson’s dialogue. We’ve mostly been discussing accents here, rather than dialects. The West Countryisms in Stevenson’s writing would have at least given actors a nudge long before Newton came along.
It wasn’t just pirates who came from the West Country – a good portion of English seamen (from the days of Drake until the end of the Napoleonic wars) came from the main sea-faring ports – Falmouth, Bideford & Barnstaple and Bristol. Pirates were, after all, just sailors who had deserted the navy, so the “Arrr” was a very common form of “yes” spoken through the 16th-19th century.
Cpt Aubrey & Hornblower (& all similar characters) were educated men, with educated accents, not the dialect of the foremast jacks.
To prove my point…. who watches the archaeology programme Time Team? Listen to Phil (the one with the long hair & the hat) He is from the West Country – he says “Arrr” often.
Treasure Island was indeed written with Bristol in mind – in fact the tavern which gave Stevenson the idea for the Admiral Benbow is still there – The Llandoger Trow. Part of it remains a tavern, the rest is now a Premier Inn hotel (I stayed there a couple of weeks ago!)
author of the Sea Witch Voyages
Thanks for sharing, Helen! I should probably be aware of the West Country/Seafaring connection as much as anybody. My first name, Trawick, is an Americanization of the Cornish surname brought to the States via a seventeenth-century Cornish seaman named Robarde Traweek (at least that’s what the genealogy records suggest–I’ve seen a few other theories).
The Newton theory is indeed widely accepted, and widely written upon. Obviously real pirates in those days were a polyglot (if I may use the term collectively), as well as a motley, crew – and I would think there were more than a few pidgins spoken. I’m pretty sure, however (and I say this with some regret), that none of them sounded like Keith Richards. Savvy?
Ben, are you familiar with International Talk Like a Pirate Day (every September 19)? I think it may be my favorite holiday.
They really do sound like pirates in North Devon. Check this video out at around the 1min mark!
I’m not aware of ITLAPD! Although I think I might be a bit shy about participating. Do they have parades? A parade of full-grown men dressed as pirates would be spectacular (or possibly something they do at 3 PM every day at Disney World).
In the “related videos” tab on the right-hand side of the screen, one is titled “Devonshire accents sound like pirates!”
Oh, Ben, you must check it out! It’s way too much fun to miss: http://www.talklikeapirate.com/
If nothing else, you can celebrate by changing your language on Facebook to Pirate (you’ll see it; your Fb friends will not unless they also change their language).
There’s a nice article by Phil Timberlake about “Pirate-speak” in VASTA’s Voice and Speech Review 3 – full table of contents shown here: http://www.vasta.org/publications/voice_and_speech_review/coaching_for_film_tv_media.pdf
And here’s my website, translated to Pirate-speak: http://www.endeneu.com/funstuff/miguel/convert.php?url=http%3A%2F%2Fwww.stollersystem.com&filter=pirate
Haha! That’s incredible. I particularly like “If ye’re new ta dialect coaching, I hope ye’ll explore me FAQ.”
This seems very believable.
Somewhere along the line it’s become unquestioned convention that pirates spoke with West Country accents; in fact many people actually believe this in all seriousness, forgetting that many pirates and privateers were (for example) Welsh, such as Bartholomew Roberts and Henry Morgan. Geographically close, but a long way away in accent terms.
It is important to distinguish between homophones “yarr” and “yare”, the former being an adverb or interjection used as a dialectal alteration of “yes”, and the latter being an adjective derived from Old English “gearu” meaning “ready”. Both have associated nautical usage.
Pingback: This Week’s Language Blog Roundup | Wordnik ~ all the words
I had always mentioned that the entire line of pirate depictions beyond the films came from Robert Newton’s portrayals. They were certainly the most colorful and fun to listen to, but it wasn’t just his chosen dialect for those parts that changed the world’s view on pirate behavior. His one-eyed squint and other facial expressions have also been adopted as necessary for a successful pirate characterization.
When I’ve mentioned this, most people had no idea who Robert Newton was. If you saw him playing the part of any other type of Englishman, or a relatively nondescript one by accent, you might not recognize him right away. But everyone knows Long John Silver and Blackbeard, with voice, accent and face that depict the perfect pirate character, most having no knowledge that it was Newton’s portrayal that taught them what a pirate should be to begin with.
Who could forget him sitting below, munching on a piece of what looks like chicken, and letting out with a loud belch, shouting, “Sarbones! Sarbones! Got a pain in me innards!!” We also might have wondered just who “Jimarkins” was, and yet Jim Hawkins always responded.
Personally, although Newton was quite a good actor in all his films, it was difficult for me to adjust to him playing those other roles. He seemed to have defined himself, although most actors shun typecasting, and I had always wished for at least one more film made with him playing a similar character.
I had met several guys from the West Country part of England, back when a friend and I used to take vacations in Miami Beach, and truly enjoyed listening to their accents, although they weren’t anywhere near as dramatic as what Newton used. There were these English guys who spoke in an accent that other Englishmen said “sounded like Americans,” and yet, to us, they were yet another interesting accent type that had distant origins. I think it was the rhotic nature of their speech. They pronounced their Rs the way we do, but much harder, almost overly pronounced, hence the “Arrrr” that would occasionally arise. When the other Englishmen would even say that letter alone, it sounded as though the doctor was about to check their tonsils. Listening to those other guys, one might believe that the letter R was the most important part of their version of the English language. In fact, listening to a group of them speaking at their table in a restaurant, just far enough away so that you cannot understand all they’re saying, the R parts of their words can become the most prominent, as though they were all saying to one another, “Arrrr an’ arrr, with a whiskey sarrr arrround the barrrr–HARRR!”
Pingback: To err is human, but to arrrrr is pirate | OISE Bristol
Whilst Newton may have popularised the West Country accent, I think it was probably one that was already well used in the theatre. I think Gilbert and Sullivan, who wrote HMS Pinafore (1878) and The Pirates of Penzance (1879), will have been responsible for globalising the accent for pirates. Their productions were extremely successful in Britain and America, and in New York City they are performed with much gusto every year, and great attention is paid to getting the stereotype West Country accents penned by G&S correct. Many Shakespeare productions use/used West Country accents to represent rogues/working-class characters (though strangely cockney, a dialect not known in 16th/17th century England, has become popular, even though the characters have no association with the 18th/19th century East End). As for American dialects, much of the eastern seaboard, and particularly New England, owes its origins to the West Country, and the leap isn’t that great – just as the ‘educated’ vernacular of many ‘old’ US families owes its pronunciation to what has become known as Received Pronunciation. To be frank, I am always more surprised by an American who speaks with RP than by one whose accent is rooted in the West Country or, in the case of Boston and parts of New York, Ireland.
We lived in Devon between the moors for a few years as small children. We didn’t have a car (Dad used it for driving for work) and there weren’t any buses. The school had three age groups per year and three classrooms. We picked up the accent, but my younger sister was only two when we moved there, so she picked it up thick. When we left, she caused a stir in London. She looked like a little angel with natural blond corkscrew ringlets, a cherub face and heart-shaped lips. But she spoke with a deep, guttural farmer’s accent at 5 years old (to top it off, she was particularly small for her age). It hadn’t occurred to us until it was pointed out to us that she sounded like a pirate.
Impressively, she can put on that accent (among others) whenever she wants to as an adult. How she does it while still sounding feminine, I’ll never figure out.
Pingback: Yo Ho Yo Ho! | Wordnik
Pingback: Avast, me hearties! | Omniglot blog
Pingback: Real Historic PIRATES — FAK #27 | Eleven Challange
It may be that the roots of the sound lie deeper. If “barbarian” comes from the impact the guttural speech of “foreigners” had on the ears of the Greeks (“Bar-bar-bar”), could not the “Arrrrr!” of West Country English have been appropriated to indicate pirates and/or seafaring folk, to people not speaking the language (and possibly even to English speakers from outside the region)? We so often hear unique sounds pulled from languages used to describe the language itself (as often insultingly); I’ve wondered whether “Pirate speak” might go back to the days when they stormed the decks and docks of other lands, leaving this vestigial sound behind as a souvenir of their visit.
Well, I accept the Robert Newton theory. To prove it’s right: why do you think Romans in movies have London accents? Because the first dramatisations of ancient Rome were written by Shakespeare.
Hi. I think Romans in films have London/British accents because it is perceived as the language of the British Empire and its posh patricians – an empire remarkably similar in structure to the Roman Empire. Shakespeare did not write with an accent as such, although he did use some dialectal words – but probably more from his own neck of the woods.
A lot of the Caribbean accents originated in Ireland, for reasons few know. When Oliver Cromwell went over to Ireland he fell upon the country with Puritan vengeance for their Papist ways. His troops committed mass atrocities and took many tens of thousands of prisoners, shipping them off to the Caribbean islands as slaves. Being some of the first mass settlers there, their accents formed the root accents. To this day the sing-song Jamaican accent very closely resembles the Irish accents from the Irish city of Cork. The singer Rihanna has Irish slave roots. These Irish slaves were treated with horrific brutality that set the tone for how slaves were later treated. They were later used to breed lighter-coloured slaves with the newly arrived African slaves. Their memory was forgotten by breeding them away, but their numbers were huge and their suffering great. It is correct that English was spoken in Ireland from very early times, and some aspects of old English survive in common use in Ireland to this day that have ended elsewhere. For example, ‘ye’ is still used as the plural of ‘you’ in most places in Ireland, and there are many other examples. Irish Gaelic was far more widely spoken until the 1847 famine. But that is a different story, of British genocide. It was genocide because Parliament chose to let the British aristocracy in Ireland export food and allowed them to throw their starving tenants off their land. It was genocide because they chose not to import cheap American corn to end the famine, because it might depress the prices the rich aristocratic landlords were getting for their crops. Well over a million died, and in the later decades millions more had to flee to America. Off topic, I agree, but English history books don’t tell these tales.
I believe the accents came from the South West of England, and also from Wales, if the Caribbean accents are anything to go by. There are just too many similarities in sound in most of the countries that the slaves were brought to by the many pirates (a lot of the now accepted ‘goodies’ were pirates, and had crews from these areas) at that time.
The surnames the people have are a bit of a giveaway as well. OK, so quite a lot of Scots names are there, but the majority are from the seafaring areas of the South West, though with very few Cornish names.
The rrrrr’s therefore rule!
Pingback: Jim Hawkin's "Blues" | Dialect Blog
Pingback: AMAZING PIRATES -- FAK #27 |
Most pirates were from the West Country all the way down to Cornwall, and their accents are very pronounced. There is no Irish at all involved. The American accent can be traced to this accent as well, as many of the founding fathers had West Country accents, as opposed to the present-day US mainland accent, which didn’t develop for years in spite of Hollywood’s attempts. In fact, Robert Newton’s accent may be the only genuine accent from those early days.
Don’ ye scurvy landlubbers know nuthin’? Avast with yer theories, ye bilge-rats, if only because just about the most notorious pirate in British history was Bartholomew Roberts, aka Black Bart, aka Barti Ddu, who was Welsh. The thought of a lookout shouting down “Deck Thar! Booty on the Starboard Bow, we’m a-going to sink ’em, Harr Harr!” and the Captain shouting back “Oh, right, bach. There’s lovely, isn’t it? Tidy boy!”, in a lilting West Walian accent, is just a bit too much to cope with. Hence the more homely Western English dialect.
Pingback: How To Talk Like A Pirate | How to Fill in the Blank
As a well-known fictitious play suggests, Penzance (in the Cornish region of England) was a port of call and a locality whose local dialect of English is one which emphasises the classic ‘Arrr’.
I actually worked with a fellow who was from Penzance – he had one eye, tattoos, earrings and was a bit ‘light-fingered’ with things that weren’t bolted down. He was always trying to swindle a deal, or boasting about one he had achieved, in his odd ‘sort-of-Cornish’ accent. Not trying to stereotype someone here. He was a great bloke with a big heart.
The pirate accent is so popular that even Facebook is using it.
Traditional Newfoundland English is a mixture of early West Country dialects (1500s) influenced somewhat by later (1700s) Irish immigrants. It is several hundred years old but can still be heard in the “outports” (coastal villages) of this first British colony, now a Canadian province (1949). As such, Traditional Newfoundland English may well provide surviving examples of the speech patterns and word usage of the pirates of yore.
The ‘pirate accent’ is clearly a homogenised, stylised one that has become a standard in itself. Almost certainly, the charismatic actor Robert Newton popularised this in his films, particularly Disney’s 1950 classic, Treasure Island. He based his pirate-speak on an amalgamated and highly theatrical version of various English West Country accents (not specifically Dorset), and the film brought it to a mass audience. It captured the popular imagination then, and it has stuck. The West Country accent wasn’t deployed in any significant way in earlier films, but it may, as someone has pointed out, have been used in British stage plays, and versions of West Country dialects would have been represented (accurately or otherwise) in various seafaring novels. So, by and large, it is Robert Newton who effectively created what we would now consider the classic pirate accent, although the real seafarers would have had a great variety of accents. It is worthy of note that before the mid-twentieth century, the working-class accents of the Southeast (outside London), South, South West and South West Midlands universally had a definite ‘country burr’ to them that was not standard English (RP) or cockney. Those accents varied a fair bit from county to county but to today’s ears would have all sounded rather pirate-like.
The ‘arrrr’ tradition has been most notably reinvigorated in recent times by Geoffrey Rush’s brilliant portrayal of Barbossa in Pirates of the Caribbean. He has captured the drama and swagger of the Newton-style ‘pirate voice’, although occasionally just a hint of an Irish twang creeps in. In contrast, Johnny Depp elects to swap the West Country accent for a cockney-esque-sounding pirate accent inspired by the likes of David Bowie, Tony Newley and Keith Richards, but one that is definitely English. Both work extremely well. Robert Newton, though, will always be the godfather of the ‘pirate accent’. Arrr, that ‘e be.
Alexandra Devine1, Aleisha Carrol2, Sainimili Naivalu3, Senmilia Seru3, Sally Baker1, Belinda Bayak-Bush2, Kathryn James2, Louise Larcombe1, Tarryn Brown2, Manjula Marella1
1 Nossal Institute for Global Health, University of Melbourne, Australia
2 CBM Australia, Australia
3 Fiji Disabled People’s Association, Fiji
In many settings, people with disabilities are marginalised from the socio-economic activities of their communities and are often excluded from development activities, including sport for development programmes. Sport is recognised as having unique attributes, which can contribute to the development process and play a role in promoting the health of individuals and populations. Yet there is little evidence, which demonstrates whether and how sport for development can be disability-inclusive. The aim of this qualitative research was to address this knowledge gap by documenting the enablers and barriers to disability inclusion within sport for development programmes in the Pacific, and to determine the perceived impact of these programmes on the lives of people with disability. Qualitative interviews and one FGD were conducted with implementers, participants with and without disability, and families that have a child with disability participating in sport. Participation in sport was reported to improve self-worth, health and well-being and social inclusion. Key barriers to inclusion included prejudice and discrimination, lack of accessible transport and sports infrastructure, and disability-specific needs such as lack of assistive devices. Inclusion of people with disabilities within sport for development was enabled by peer-to-peer encouragement, leadership of and meaningful engagement with people with disabilities in all aspects of sports programming.
An estimated 15 per cent of the world’s population have a disability. In many settings, people with disabilities are marginalised from the socio-economic activities of their communities. Many do not have equal access to health, education, employment or development processes when compared to people without disability, and are subsequently more likely to experience poverty. People with disabilities are also thought to be less likely to participate in sport, recreation and leisure activities than people without disability.1,2,3
Sport has been recognised by the United Nations as having unique attributes that can contribute to the community development process.4 Sport is universally popular, can play a role in healthy childhood development and contribute to reducing non-communicable diseases (NCDs), which in turn can reduce the likelihood of preventable longer-term impairment and mortality.1,5 Whilst having numerous benefits for the physical and mental health of individuals, it can also be an effective platform for communication of health and human rights messaging as recognised by its inclusion in the Sustainable Development Goals.4,6,7
Participation in sport is recognised as a fundamental right, but its impact on the lives of people with disabilities may be particularly relevant.6 People with disabilities taking part in sport report a sense of achievement, improved self-concept and self-esteem, and better social skills, as well as increased mobility and independence.8 Whilst these benefits are similar to those for people without disabilities, the positive outcomes are thought to be more significant for people with disabilities given their experience of exclusion from other community activities, especially in resource-poor settings.6 Given that people with disabilities are known to have an increased risk of developing NCDs1 – in part due to a lack of access to physical activity – sport for development should be seen as an important opportunity to reduce this risk and promote optimum health.
The benefits of sport for development aim to go beyond individual-level physical and mental health, with programmes seeking to develop people and communities through sport.9 Promoting inclusive communities should be a part of this. Sport for development programmes which enable people with and without disability to come together in a positive social environment are thought to promote inclusion and empowerment by challenging negative beliefs about the capabilities of people with disabilities.10
NCDs are the leading cause of death and disability in the Pacific Region.11,12 In response, Pacific Island governments with the support of international cooperation have implemented a number of initiatives including sport for development programmes. The few studies examining the effectiveness of sport for development in the Pacific highlight the importance of locally driven programmes that address locally identified development challenges, culturally appropriate and gender sensitive activities,9,13,14 the use of high profile role models and champions,15 and collaboration between development partners, sports implementers and local communities.9
The sustainability and effectiveness of sport for development programmes in benefiting individuals and supporting community development processes were reported to be challenged when these factors were not appropriately considered, as well as by insufficient financial and technical capacity to sustain programmes.9 Further, to be effective in the Pacific, sport for development programmes need to address context- and culture-specific barriers to participation in sport, such as gendered family and work responsibilities, environmental barriers, and lack of motivation and support.13,14 There was, however, limited analysis in these studies of the process and benefits of inclusion for people with disability.
The United Nations Convention on the Rights of Persons with Disabilities (CRPD) describes disability as an evolving concept, whereby disability results from the interaction between persons with long-term impairments and attitudinal and environmental barriers that hinder their full and effective participation in society on an equal basis with others.
Barriers can be attitudinal; related to the built environment or to information, communication and technology; or institutional, such as policies that do not promote equal participation.16 Article 30 of the CRPD requires States Parties to take all feasible steps to ensure the participation and equal access of people with disability to recreation, leisure and sport. Article 32 requires all international development programmes to be inclusive of and accessible to people with disability. Greater evidence is needed of how sport for development can contribute to the attainment of the rights of people with disabilities and promote their inclusion within communities and development programmes.3,16,17
In 2013, in recognition of the potential attributes of sport for development and in-line with the CRPD, the Australian Government’s Aid programme and the Australian Sports Commission (ASC) developed a joint ‘Development-through-sport’ Strategy to guide the implementation on the Australian Sports Outreach Programme (ASOP).18 The aim was to utilise sport to contribute to social and development outcomes, and was divided into two main programme components: 1) Country Programmes, and 2) Pacific Sports Partnerships (PSP). The Country Programmes worked with partner governments and/or Non-Government Organisations (NGOs) to deliver inclusive sports-based activities with the aim of contributing to locally identified development priorities. These development priorities included improved leadership; health-related behaviours; social cohesion; and inclusion and promotion of the rights of people with disability.
The PSP was a sport for development programme conducted through a partnership between the ASC, Australian Government, Australian National Sporting Organisations, and their Pacific counterparts. The aim was to deliver sport-based programmes that provided a platform to contribute to development outcomes. The objectives were to a) increase levels of regular participation of Pacific Islanders, including people with disability, in quality sport activities; b) improve health-related behaviours of Pacific Islanders which impact on non-communicable disease risk factors; and c) improve attitudes towards and increased inclusion of people with disabilities.
The ‘Development-through-sport’ Strategy included two strategic outcomes or goals. The first was ‘Improved health-related behaviours to reduce the risk of non-communicable disease.’ The second was ‘Improved quality of life for people with disabilities.’ A ‘theory-of-change’ framework was developed for each outcome, the second of which is most relevant to this paper. The ‘theory-of-change’ framework for the second outcome includes two intermediate outcomes: 1) improving the way people with disabilities think and feel about themselves, and 2) reducing barriers to inclusion. These intermediate outcomes are then supported by a number of pathways to guide implementation, such as involving people with disabilities in the planning, design and implementation of sport activities (see Fig 1).18
Whilst all the sport for development activities conducted through ASOP were implemented with a core objective of creating opportunities for all people, there was a lack of evidence as to whether and how these programmes supported disability inclusion and contributed to improving the quality of life of people with disabilities. This research aimed to address this knowledge gap by documenting the enablers and barriers to implementing sport for development programmes, which are inclusive of people with disabilities, and to explore the perceived impact of these programmes on the lives of people with disabilities in the Pacific.
The approach of the research was participatory and inclusive with two local Disabled People’s Organisation (DPO)* members trained and supported to be Research Assistants (RAs). The research was implemented in Australia, Suva and surrounding communities in Fiji, Port Moresby (Papua New Guinea (PNG)), and Apia (Samoa). Fieldwork conducted in Australia included interviews with ASOP stakeholders living in and outside of Australia, including one interview with a key informant living in New Zealand who managed ASOP activities across the Pacific. All other fieldwork sites were selected purposively based on consideration of where ASOP activities were implemented, its geographical accessibility, and any available resources. Data collection took place between March and May in 2015. Qualitative data was collected via key informant interviews, in-depth interviews and one focus group discussion (FGD). Wherever possible, the research team aimed to include a representative sample across gender, location, types of impairment and people representing or engaged in a range of sport for development activities.
A total of 60 participants were interviewed from the five countries (Table 1). Key informants were identified and purposively sampled in consultation with the ASC and partner DPOs. Subsequent snowballing, whereby participants informed researchers of other potential participants, also helped to identify additional participants. Key informants included current and former ASC staff and stakeholders (e.g. coaches and sport for development staff, as opposed to participants in sport for development activities) knowledgeable about the development and implementation of programmes that received funding through ASOP. Purposive sampling was used to recruit participants for the in-depth interviews (participants of sport for development activities), identified through the networks of partner DPOs and implementers of the sports programmes. In-depth interviews were conducted with fourteen current participants of sport for development programmes (both male and female, with and without disabilities); four people with disabilities who had dropped out of sport; and three parents of children with disabilities currently participating in sport. The age range of the adult participants was 24-56 years. The age range of the children with disabilities whose parents were interviewed as proxies was 9-12 years.
All participants were asked to participate in either a key informant interview (KII), in-depth interview (IDI) or a FGD. The content of the interview guides was developed based on sport for development and disability inclusion literature alongside available ASOP documentation. The focus of the KII’s included understanding of disability inclusion, experience in implementing sport for development programmes; perceived enablers of and barriers to inclusion; and perceived impact of sport on the lives of people with disabilities. The focus of the IDI and FGD included experiences of participation; motivation for participation; experience of enablers and barriers; and the perceived impact of sport for development programmes on their lives and the lives of other people with disabilities, such as access to education, employment, and community participation. Where required, interview guides were translated into the local language and back translated into English. All guides were piloted locally before being administered to participants.
Most interviews were conducted face-to-face, via telephone or skype and were digitally recorded, transcribed, and translated into English (where required) for qualitative data analysis. One key informant was not available for interview and therefore responded via email. Except in PNG, all interviews with key informants were conducted in English. In PNG, the interviews and FGD were conducted in Pidgin. As mentioned above, key informants were stakeholders considered to have knowledge on the development and implementation of ASOP activities, whereas in-depth interview participants were current or previous participants of sport for development activities. Due to limited time for fieldwork in PNG however, the FGD included both key informants and participants of sport for development activities because this was the most feasible option to collect data from these participants who had travelled to Port Moresby for a related meeting.
Data were manually coded inductively and deductively to generate themes using a thematic content analysis approach. The ‘Development-through-Sport’ Strategy’s ‘theory of change’ framework for outcome two was used as the theoretical framework for the analysis (see Fig 1). The two lead members of the research team independently read all transcripts, familiarised themselves with the data and coded the findings, while other team members reviewed a representative sample of the transcripts and the coded analysis. Findings were initially coded under the relevant intermediate outcomes and pathways outlined in the ‘theory of change’ framework, including examples of enablers and barriers relevant to each pathway. Findings under each pathway were further categorised into relevant subthemes. An analysis workshop was conducted by the Australian-based research team.
Initial findings were then shared with the local RAs and other DPO and ASC staff involved in the research to ensure the analysis gave an accurate reflection of the context, after which the analysis was finalised. For the purpose of this paper, the findings have been presented under three main sections: 1) Improvements in the quality of life of people with disability; 2) Barriers to inclusion in sport for development activities; and 3) Enablers of inclusion in sport for development activities.
The Human Research Ethics Committee (HREC) at the University of Melbourne in Australia approved the research. In addition, the Ministry of Youth and Sports in Fiji approved the research. The interviewers informed potential participants of the research and invited them to participate. All participants were 18 years or older and provided written or verbal consent. In cases where parents of children with disabilities were interviewed as proxies, consent was obtained from the parents only.
Improvements in the quality of life of people with disabilities
Improved Self-worth and Empowerment
All except one of the participants with a disability who were interviewed clearly indicated that participation in sport led to a greater sense of self-worth and empowerment to create change in their lives, as highlighted by a male sport-for-development participant with physical disability in Fiji – "[Sport] expose[s] that disabled people have talent. We can compete … I've noticed it gives you more confidence to expose yourself. No longer staying at home and being quiet." Sport was also reported to contribute to social inclusion, improved access to employment and better attitudes towards people with disabilities. Participants reflected on these inherent qualities of sport, particularly highlighting that sport enabled them to challenge negative beliefs about their capabilities by providing opportunities to demonstrate their skills and talents to the broader community.
It changed my mindset. It changed how I look at myself, because I was achieving a lot. Participating in the Games … and also overseas. Being involved in the community, being on TV. It’s normal hey, because then they don’t see my disability anymore. Those are the changes that it has brought into my life. (Male sport participant with physical disability, Fiji)
The sense of empowerment and inclusion gained through participation in sport was reported to prompt participants to encourage others with disabilities to access sport. Being included alongside people with and without disabilities, and pushing each other to improve also promoted empowerment and inclusion. A male participant from Fiji who is Deaf said, “because I realised that your life could change when you started to interact more with hearing people.” This was triangulated in the findings by other participants who specifically reported feeling encouraged to participate in sport by their peers with disabilities.
The empowerment gained through sport was reported to be a driver for people with disabilities to address barriers to inclusion in other aspects of their lives, and the lives of other people with disabilities. For example, one former athlete who attributed his opportunity to participate in sport as leading to other opportunities in life such as employment, reported a sense of responsibility to address barriers to employment for other people with disabilities.
I think that for some of us who are former athletes … they tend to be engaged in other activities in the community such as becoming a businessman and sometimes have jobs such as being a cook or working in an office. As [former athletes] are aware of the problems we tend to face, and through sports, are empowered to work through these problems. It then becomes important for them to drive changes in the community, due to individual experiences of overcoming challenges. (DPO representative, PNG)
Improved health and well-being
Similarly, the majority of participants with disabilities who were interviewed about their experience in sport reported that sport contributed to improved health outcomes and better self-management of health. "The Zumba programme – it actually reduces my level of stress," commented a female participant with psychosocial disability in Fiji. It also helped people make healthier lifestyle choices.
Before I did sports, I used to smoke and drink … go clubbing. When I joined the sports, the para sports, it changed me. Right now I don’t drink grog (kava) and I don’t smoke, I do full-time training … Some of us with disability they can’t exercise themselves … they don’t reach the age they want to reach – they die early – because they don’t do exercise … I think sports is good for us … (Male sport participant with physical disability, Fiji)
Sport provided the prospect of enhanced enjoyment of life. A small number of respondents described the enjoyment of winning as greater for people with disabilities because they have had less opportunity to experience such emotions in their day-to-day life. This was also reflected in the observations of sports organisation staff.
… I can see that they’ve built up a lot of self-esteem, a lot more confidence. This is all the mental part of the person. I could see changes in themselves – being able to interact more with people and not be too concerned about what people think about their disabilities. I think they are more focused on what their abilities are rather than what their disabilities are. (National sports organisation representative, Fiji)
The social aspects of sport were ranked as more important than the competitive aspects by more than seventy percent of interviewees with disabilities. For those who participated in sport before acquiring an impairment, the reason for participation often changed from the desire for personal achievement to sport's social aspects after the impairment had occurred. People without disabilities also valued the opportunity to spend time with people with disabilities.
It was the first time for me to participate in sports with persons with disabilities and I really like it, it was a totally new experience for me. (Male sports participant without disability, Fiji)
There were also examples where organisations built social aspects for people with and without disability into their programmes, adapting activities to include an element of fun and time for socialising.
… technique disguised as a fun exercise, and they need time to socialise so with a one hour training session there should be at least five minutes or ten minutes for people just to talk to each other’ (International sports organisation representative, Australia).
Where participants had experience of representing their country in national or international events and received media attention, they described the experience of becoming ‘famous’ in their community and associated positive interaction with others. Travelling for sport within their country and internationally supported further social opportunities.
It’s fun, you meet new people and travel around … you are being exposed to other customs and traditions – you’re not closed up, you can open up … you are more confident with speaking to other people … apart from your own race and apart from Fijian people. (Male sport participant with physical disability, Fiji)
Sports programmes in schools were identified by nearly half of the DPO representatives as particularly important for children with disabilities to socialise and develop skills. A DPO representative from Samoa stated, "What we are seeing in those kind of games we play locally … most of the kids they don't know each other – when they come and play games they finally make friends with other kids." This sentiment was echoed by all parents interviewed.
It has especially [impacted] social inclusiveness and access to education. Without sports sometimes, she is always idle, but with sport she is learning process, because more children they tend to learn through sports, and some of them they don’t adapt in the classroom. When you get them to play sports that’s when they learn to get engaged. (Parent of child with disability participating in sport, Fiji)
Nearly half of the interviewees with disabilities in Fiji and PNG reported opportunities for employment gained through sport. These roles included sports advocates within DPOs, sport development officers in sports organisations, and as coaches. This not only promoted economic empowerment of people with disabilities but was reported to help demonstrate their capacity to be gainfully employed, again raising their status in society.
I have even been told myself ‘’if you can do that [participate in sport] you can work in an office or go back to your normal job” or something … anything can happen. (Female sports participant, Fiji)
Opportunities to facilitate workshops and learn coaching skills through ASOP enabled some participants to build their skills in communication, which opened up doors to the workforce. Mainstream programmes that were inclusive were seen as particularly beneficial because they allowed for interaction between people with and without disabilities. A male sport participant with a physical disability in Fiji reported that “… it is an eye-opener to me because I meet plenty and more friends, especially people with disability and also people, able person, we make friends a lot and we socialise a lot.”
Community Attitudes Towards Disability
The vast majority of all research participants highlighted the ability of sport to improve social inclusion of people with disabilities, especially when implementers and DPOs were able to go into communities and raise awareness of the rights of people with disabilities. Raising awareness and understanding among the community enabled, often for the first time, people with disabilities to participate in sport activities conducted as part of these outreach visits. DPOs involved in outreach activities reported using this role to better advocate for inclusion in the broader community. One interviewee highlighted the DPO role in broader advocacy, but also how much more needs to be done.
There was one guy, who was in a wheelchair, but his home was inaccessible, it had steps and everything, so someone had to carry him down and put him in a wheelchair and then he could go out. On Sundays, he would get up, dress up, and listen to a church service from his window. We told his parents and the church about accessibility, but it costs money. Often issues with accessibility need money to fix, and the family might not be willing to spend money on that, or just can’t afford it. (DPO representative, Fiji)
Another positive example of the ASOP highlighted were activities where families are actively encouraged to allow children with disabilities to play sports, which then led to improved parental expectations of their child’s capabilities. Families reported being more hopeful about what their children can achieve, which may then encourage families to support their children to participate in other areas of the community such as cultural events, education and employment.
We [have] seen some of the parents like to play with the kids during the sports. So from there we know that parents not only to be there to look after the kids but you know that they have their heart to encourage their kids to play and have time with other kids. (DPO representative, Samoa)
Barriers to Inclusion in Sport for Development
Participants with disabilities reflected on a number of personal and external factors that impact their participation in sports. People with disabilities highlighted they often lack confidence in their own abilities, particularly when their families lack confidence in them and actively discourage their participation. Many of the interviewees with disabilities cited their families’ lack of support as a major barrier to participation. Two-thirds of these participants also identified environmental barriers to participation such as the lack of accessible information on available programmes; inaccessible facilities and equipment; and difficulty accessing transport to get to training and events.
Prejudice and Discrimination
Three-quarters of key informants identified prejudice and discrimination as a significant barrier to the inclusion of people with disability in sports programmes. In communities where there were perceived negative attitudes toward disabilities, programme implementers reported difficulty including people with disabilities in community-level activities, as people with disabilities were hidden within the home or families would not allow them to participate. The vital broader role of DPOs in addressing prejudice and discrimination and raising awareness of rights was again highlighted, particularly during community outreach programmes.
The longer-term impact of community outreach programmes on participation is more difficult to determine. A small number of key informants felt that as community programmes are often one-off visits, they don’t allow for enough community engagement to contribute to sustained attitudinal changes, or to develop sustainable inclusive sport programmes.
A small number of research participants with and without disabilities noted that opportunities to participate in sports are not the same for all people with disabilities. One key informant reported staff often don’t have appropriate understanding of how to interact with people who have certain disabilities, stating “If they have a physical disability they are more likely to be included, whereas people with a mental disability, there is often that fear of well ‘I don’t know how to talk to you, because you have a mental disability.” (International sports organisation representative, Australia). This perception was echoed by a small number of participants.
For my brothers and sisters who are not confident to come out in public, one of the barriers would be attitudes of people, probably the stigma. Because people … when someone has been admitted to St Giles [psychiatric hospital in Fiji] they tend to act differently to that person … (Female sport participant with psychosocial disability, Fiji)
Those with intellectual disabilities …. Because they are seen by the public differently rather than … because it’s not your physical body that’s affected. … you know you are intellectual… and immediately when people see them they will say ok we cannot play with them because you know whatever we plan, it will turn up differently because of them … (DPO representative, Samoa)
For women with disabilities, there was a sense of disparity expressed when describing efforts to participate in sport, with one saying that “when I trained I am the only girl for, I think, four months, and for me there is gender imbalance there.” (Female sport participant with vision impairment, Fiji). Some participants with and without disabilities also identified that females with disabilities may face additional discrimination.
… sometimes it’s the women who are being laughed at mostly I’ve heard of that … I’m thinking why do they do that to that particular person – why is it a woman who has to be the one who go through a lot of things that make her feel she is not wanted? (Female sport participant with physical disability, Fiji)
Lack of Family Support
An absence of family support or active discouragement was identified as a common barrier by nearly half of the participants with disabilities who were interviewed. Many reported strong cultural and traditional beliefs, particularly in the rural areas, whereby families believe people with disabilities should stay at home. A small number of key informants emphasised the importance of addressing these barriers and encouraging families to enable family members with disability to participate in sport.
… [they say] ‘no my child did not play that game because you know he has a disability, he can’t play.’ So they come and just say that, you know, take away kids from the event … we have to provide some awareness programme … to encourage the parents to bring in their kids … because most of the parents here in Samoa believe that people with disability [should] just stay home. (DPO representative, Samoa)
Limited Accessibility of Sport-for-Development Programmes
Inaccessible sporting facilities and a lack of knowledge on how to make reasonable accommodations* to support inclusion were seen as ongoing barriers to participation by more than half of all research participants. People with disabilities highlighted that they wanted access to more choices in programmes and that programmes should sustain interest by allowing for increased challenges. This is particularly important when considering the involvement of people with more complex participation requirements. It was expressed that some sports currently only cater to people who are more mobile and who use common communication methods, with people who have more complex physical or cognitive needs missing out. A few key informants reported that genuine commitment, time and resources are required from organisations to analyse and solve problems surrounding how their sport can be modified to enable people with different abilities and impairments to participate.
For some participants with disabilities who live relatively close to urban areas, significant motivation and financial resources were still required to commit to training. Even where physically accessible buildings do exist, access was reported to be constrained by short opening hours of venues; difficulty getting to the venue; and difficulty mobilising within the venue around equipment.
We have a gymnasium whereas in the day but it’s always full. It’s a small gym and a lot of corporate bodies training … [it’s] hard for me. And they only open at about 3 o’clock in the afternoon. So in my case if someone is to open a gym close to where I am they should open in the morning so when abled people go to work. (Male sport participant with vision impairment, Fiji)
Access to sport was reported to be better in urban cities compared to rural areas. A small number of interview participants from Fiji reported that sporting venues in the country's capital had improved in terms of accessibility, but in communities outside the city, accessibility was an ongoing issue. In PNG, half of the participants with disabilities described travelling from rural areas to attend a sport event only to find that the programme had not provided modified equipment, thereby not allowing everyone to participate. Similarly, limited access to coaches in rural areas was reported to prevent participation.
Lack of Information About Sport
Two-thirds of participants with disabilities in Fiji cited limited access to information about sport-for-development activities as a reason people with disabilities are not participating. Factors impacting access included a lack of information in accessible formats. One participant suggested that the events “should have more advertising in the media through TV or print … so people with disabilities can read and know that this is happening … because [people with disabilities] isolate themselves and don’t know what’s happening.” (Female sport participant with psychosocial disability, Fiji). Conversely, effective collaboration between sports organisations and DPOs was said to support better access to information on upcoming events. This was reported as essential for people with disability so they have time to prepare and organise assistance to participate if required.
At the moment this coordination and consultation is lacking … us DPOs we do not have [opportunity to be consulted during planning]…. (DPO representative, PNG)
Lack of Accessible Public Transport
All participants with disabilities cited transport as one of the most significant barriers to participation and for some, it was the primary reason for dropping out of sport. Constraints to accessing transport were described in three ways: limited finances to support transport needs; real and perceived discrimination experienced by people with disability attempting to use public transport; and lack of physically accessible transport. Some organisations recognised this barrier and provided transport for ‘come and try’ sport days. Others are starting to make adjustments to the way they deliver sport, stating, “We are trying to alleviate that problem by taking the sport to them rather than asking them to come to us by using outreach programmes.” (Sport organisation representative, Fiji). However, neither of these approaches solves the ongoing issue of inaccessible transport, highlighting the need to support governments to address systemic barriers to inclusion of people with disabilities in society.
Many people with disabilities in Fiji have access to free public transportation, yet this doesn't address all the barriers they face to accessing transportation. Three participants with disability reported that despite having a free bus pass, some bus drivers would prevent them from getting on the bus during peak periods, reporting that they had time restrictions and couldn't provide extra time for a person with a disability to climb into the bus. The latter issue arose because buses are not wheelchair accessible and so in some cases people would crawl onto the bus and ask a bystander to fold and lift their chair onto the bus for them. One of these participants went on to discuss that prejudice and discrimination, both real and perceived, prevented people from accessing public transportation even when their impairment did not physically prevent them from doing so.
Lack of Options and Competitive Pathways
Moving beyond engagement in social sport activities to more competitive activities can be very challenging for athletes. Whilst many people with disabilities interviewed were motivated to play sports for health and social benefits, there were others who were frustrated by the barriers to more competitive pathways. In PNG, for example, a lack of options was attributed to a lack of people with disabilities holding leadership positions in sports organisations; inadequate engagement of people with disabilities in the design and implementation of sports programmes; and a lack of collaboration between service providers and DPOs, particularly when service providers have ‘control’ over the implementation of sport for development activities. Also highlighted was the need for more recognition of the achievements of athletes with disabilities and better support for these athletes to achieve at a higher level. One DPO representative in PNG reported, “I won three gold medals in the PNG Games, the javelin, shot-put and discus … I also participated in the Arafura Games … however from then on I was not supported to progress on to the next level.” (DPO representative, PNG)
Disability-specific Barriers Which Impact on Participation
People with disabilities often experience disability-specific barriers that impact their participation in sport. Approximately half of the interview participants with disabilities in Fiji reported experiencing disability-specific barriers during their engagement in sport for development programmes. These include communication barriers for people who are deaf or hard of hearing when accessing a programme delivered by people who do not communicate using sign language and without an interpreter, and a lack of assistive devices, such as prosthetic limbs or appropriate wheelchairs, that would support people with mobility impairments to engage in sport. There were examples of organisations trying to overcome this, such as in Suva, whereby some sports officers were learning sign language to enable them to engage with people who are deaf. Yet this hasn't happened in most areas in Fiji or other Pacific countries, highlighting how opportunities can differ for people with the same impairment, depending on the resources available in their environment and the efforts that have been made to include them.
For years there has been a Deaf Table Tennis club [in Fiji] and this has been integrated completely. There are deaf coaches who coach able-bodied players and yet they don’t see the disability at all. But in Vanuatu being deaf is very much more difficult because not many people speak sign language. (International sports organisation representative, Australia)
In most Pacific countries, access to assistive devices and alternative communication modes is an area that tends to lie outside of the domain of sport, yet it directly influences how and how well people with disabilities are able to participate in sport. A lack of access to quality and fit for purpose assistive devices was another issue raised by a small number of participants with disabilities, particularly those wanting to compete at an international level. Even at the community level, access to affordable replacements for damaged walking aids was identified as placing further burden on the limited finances of people with disabilities that impacted their participation. Similarly, people with disability reported a lack of assistance at training such as ‘guide runners’ and support getting in and out of the pool. These issues were all described as reasons for dropping out of sport.
Need for Greater Monitoring and Evaluation
Implementers discussed the requirements of the PSP programme to include reporting on numbers of people with disability who are accessing programmes. ASC were encouraging implementers to use the Washington Group Short Set* of questions to support this and fill a current gap in the programmes to identify people with disability. Better identification of people with disabilities to support inclusion was also highlighted by DPO representatives.
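To illustrate how the Washington Group Short Set is typically applied when identifying people with disability in programme data, the sketch below uses the commonly recommended cut-off: a participant is counted as having a disability if they report "a lot of difficulty" or "cannot do at all" in at least one of the six functional domains. This is a general, hedged illustration rather than the actual ASOP or PSP reporting format, and the field names and example records are invented.

```python
# Illustrative sketch only: flags a participant as having a disability using the
# Washington Group Short Set cut-off ("a lot of difficulty" or "cannot do at all"
# in at least one of the six functional domains). Field names and records are
# hypothetical, not the ASOP/PSP reporting format.

WG_DOMAINS = ["seeing", "hearing", "walking", "cognition", "self_care", "communication"]

# Standard response categories, coded 1-4:
# 1 = no difficulty, 2 = some difficulty, 3 = a lot of difficulty, 4 = cannot do at all

def has_disability(record: dict) -> bool:
    """True if any domain is rated 3 or 4 (the usual WG-SS cut-off)."""
    return any(record.get(domain, 1) >= 3 for domain in WG_DOMAINS)

# Example registration records (invented).
participants = [
    {"id": "P001", "seeing": 1, "hearing": 2, "walking": 4,
     "cognition": 1, "self_care": 1, "communication": 1},
    {"id": "P002", "seeing": 2, "hearing": 1, "walking": 1,
     "cognition": 1, "self_care": 1, "communication": 1},
]

n_with_disability = sum(has_disability(p) for p in participants)
print(f"{n_with_disability} of {len(participants)} participants meet the WG-SS cut-off")
```

Counting participants in this way would give the disaggregated participation numbers referred to above, although, as the findings later note, such counts alone say little about the quality of inclusion.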
There were also some good examples of sport organisations seeking to measure attitudinal change toward disability within their monitoring and evaluation systems and collecting stories of change from participants about the impact of the programmes. Overall however, this research identified a tension between a growing need for better data collection on inclusion and the capacity of local sports implementers to collect and report this data.
Many of the international sport organisation representatives interviewed reported finding it challenging to build the capacity of local implementers to collect basic data on the numbers of people with disabilities participating in programmes, let alone trying to document changes at the community level.
Enablers of Inclusion in Sport for Development Activities
A number of factors that facilitate inclusion in sport emerged, including peer-to-peer encouragement, support from DPOs and sports organisations, and meaningful participation of people with disabilities in all aspects of sports programmes.
Peer-to-peer Encouragement and Role Modelling
Encouragement from peers with disabilities who were also engaged in sport was described as a major facilitator of participation, and an initial entry point into sport, by many of the participants with disabilities interviewed. There was evidence of this peer-to-peer pathway being built into some programmes more formally. In Fiji, for example, DPOs helped identify 'Sports Champs' to be role models and help identify and encourage other people with disabilities to participate in sport.
This concept of role models promoting participation in sport was a strong theme emerging throughout the research. Most respondents in Fiji, for example, reported the achievements of the Honourable Assistant Minister Iliesa Delana (a Fijian athlete with disability) at the London Paralympics, who went on to be elected to the Fijian parliament as a turning-point in changing the perceptions people with disabilities had of themselves, as well as challenging how the community perceived people with disabilities.
People with Disability in Leadership
Beyond participating in sport itself, a number of participants described pathways that enabled them to engage in sport in positions of leadership. Having more people with disabilities in positions of leadership was described as a way to make people with disabilities feel more comfortable about joining programmes. One female sport participant with a vision impairment in Fiji said, "While I was training for my athletics we used to have a coach who was also disabled so he used to understand us." Some respondents also identified that involvement of DPO representatives in programmes had led to people with disabilities taking on leadership roles within their community in Fiji, such as the Toragi ni koro (Chief Liaison at the village level).
Inclusion of People with Disability in All Aspects of Programmes
Meaningful participation in sport for people with disabilities goes beyond being a beneficiary of sport activities. It also encompasses inclusion in sport processes, including planning, implementation, monitoring and evaluation of programmes. The inclusion of people with disabilities in the planning of programmes was recognised by many key informants as contributing to better understanding about the capacity of people with disabilities to participate in sports programmes, and the development of more accessible and inclusive programmes.
So that’s what I call inclusive sport … you design something that includes everyone’s idea and make sure that everyone is involved from the beginning, the implementation and monitoring and evaluation as well as reporting … you don’t just ask [people with disability] to join when the programme is half way through. (DPO representative, Samoa)
A key enabler to supporting inclusion in all aspects of programmes highlighted was providing more opportunities for networks to share good practice and facilitate cross-organisational learning. Sports organisations vary greatly in how they implement disability inclusion. By showcasing examples of good practice, it is hoped all organisations would be encouraged to improve inclusion within their programmes and promote more opportunities for people with disability to engage in all aspects of sports programming.
Encouragement and Support through DPOs, Sport Organisations and Family
DPOs and sports organisations were highlighted as playing an important role in encouraging participation in sport. Individuals within these organisations were reported as being instrumental in identifying people with disabilities in communities and nurturing their skills and talents. People with disabilities were reported to sometimes be “locked at home.” Participants acknowledged that because of this and the long history of exclusion of many people with disabilities, significant time and effort is often required to encourage individuals with disabilities to participate.
Like, they still feel shy. There is still that stigma, that barrier that they have. So we sports people, sometimes we have to go that extra mile, we have to break the ice with them in order to get them to open up and be comfortable. (National sport organisation representative, Fiji)
Individuals with an understanding of and interest in inclusion were recognised for their role in championing inclusion while also encouraging and linking in a number of individuals with disabilities into sport networks. These individuals included coaches, mentors and other sports leaders who identified participants and supported their inclusion through encouraging family support, securing funding, training people with disabilities to be coaches, and encouraging networking between DPOs and mainstream sports organisations.
I think what has worked well in some countries such as Fiji and Vanuatu is that there has been a champion who has actively sought out how to include people with disability … in Australia when we talk about those champions it’s often people who have had a family member with a disability. That doesn’t seem to be the common denominator in Vanuatu and Fiji. It’s just that these people have got a really good awareness about disability and an attitude towards inclusion … (ASC representative, Australia)
Social marketing campaigns were seen as an important tool for inclusion through their use in highlighting the success of athletes with disabilities and motivating people with disabilities to participate in sport. Organisations are also starting to explore ways they can engage with social marketing to support participation, both in terms of promoting media coverage of people with disability in sport, and utilising technology to promote participation. A representative from an organisation noted, “I think mainly we use media and word-of-mouth. Right now, we’re hoping to use text messages on phones and various other marketing mechanisms we have, such as TV.” (National sports organisation representative, Fiji).
Many participants with disability reported that when family support was available, it was integral to their ongoing participation. Different kinds of family support were described, such as practical support like helping people get to training or helping finance the cost of participation. Families were also central to enhancing the self-belief of their family members with a disability which in turn enabled participation.
My family embraced it – even when they saw [disability] happening to me they still kept encouraging me … I didn’t want to listen – I was too ashamed to go around. (Male sport participant with physical disability, Fiji)
Opportunities to Participate in Mainstream Sport Programmes
Providing opportunities for people with and without disabilities to play sport alongside each other is an important approach to inclusion, which was highlighted by nearly half of all research participants. Some organisations implemented this approach, but not all. The findings also suggest that people with disabilities often participate in mainstream sport due to self-motivation, rather than as a result of opportunities provided by sports organisations.
Schools, particularly schools for children with disability (colloquially referred to as special schools), were regularly cited by participants with disabilities and key informants as a common entry point for children with disabilities into sport. Sport for development activities implemented in special schools allowed for development of skills in a safe and supported environment, which for some children with disabilities can support transition into mainstream sport activities.
Yet programmes implemented in special schools were also mentioned as actually creating barriers as they keep children with disabilities segregated from playing sport with children without disabilities. The need to develop the capacity of sports organisations to design and implement more programmes outside of disability-specific settings was highlighted by some implementers. There is evidence this is starting to occur, with some sports organisations implementing programmes outside of school hours which are inclusive of children with and without disabilities.
… what we are seeing in those kind of games we play locally … most of the kids they don’t know each other when they come and play games they finally make friends with other kids. (DPO representative, Samoa)
Findings from this research support evidence in the literature that sport can be a powerful transformative tool, improving the overall status of people with disabilities within society.6,19 Promoting access to sport for people with disabilities has the capacity to improve their quality of life and to improve physical and mental health, particularly in the context of the increased incidence of NCDs.11,13,14 More importantly, and in line with previous research, it can enable people with disabilities to reduce the emotional effects of disability by offering a way to accept their disability ("come out") and to manage the discriminatory effects of disability.20
By providing a platform for people with and without disabilities to come together, there is an opportunity to challenge commonly held misconceptions about disabilities and for people with disabilities to demonstrate their capacities. It also provides an opportunity for people without disabilities to interact and socialise with people with disabilities. This may help to address negative attitudes towards disabilities, a major barrier to the inclusion in other activities such as education, employment and community participation more broadly.1,2
Realising the rights of people with disabilities to participate in sport requires governments and sport for development programmes to clearly articulate disability inclusion in their strategies, contractual agreements, implementation plans, and as part of their monitoring and evaluation. A strong policy environment for health and physical activity is vital;14 making sure relevant policies are disability-inclusive would strengthen subsequent inclusion within implementation. Increasing participation of people with disabilities in sport will also require collaboration with stakeholders outside the sport sector, for example the corporate sector, transport authorities, health and rehabilitation, and urban planning. Sustainability and effectiveness of sport for development programmes rely on appropriate human, technical and financial resources.9 Dedication of resources to embed disability inclusion in sport-for-development activities and these related sectors over time will require ongoing commitment from donors and implementing partners.
Effective and sustainable sport for development programmes require leadership and collaboration.9 The same is required of disability-inclusive sport for development programmes. The research highlighted a number of important networks and partnerships that support inclusion of people with disability in sport. Central to these are the partnerships between DPOs, national sports organisations, and their international or regional counterparts. People with disabilities are the key stakeholders in sport for inclusive development networks. In recognition of this, programmes should determine appropriate mechanisms and adequate resources to ensure people with disabilities can provide leadership and coordination of these networks, support organisational commitment and capacity for disability inclusion, and meaningfully engage in all aspects of programming.
Strong leadership is required from all stakeholders to provide more opportunities for people with disabilities who are currently less likely to have access to programmes such as women,13,14 people with psychosocial disabilities, intellectual disabilities, and those with more complex participation requirements. This could be achieved by building on international examples of modified sports, and collaboratively problem-solving with DPOs to enable people with more complex impairments to participate.
Inclusion of people with disabilities in programmes not only benefits individuals, but their families and the broader community.10 Implementers of programmes and DPOs need to continue to work with families and communities to raise awareness of disabilities, and promote an understanding of the benefits of sport including the potential to promote access to other life domains such as social inclusion, education and employment. Similar to other findings in the literature, this study found that drawing on high profile role models and ‘champions’ is key to promoting awareness and encouraging participation in sport of individuals who are more likely to have experienced exclusion and marginalisation.15
People with disabilities want more choice and options as to how they participate in sport – from intermittent social participation, to participating at an elite level, and engaging in sport beyond playing, in roles such as coaching. Similarly, as many people with disabilities living in the Pacific do not live in urban areas where many sports programmes are implemented, organisations need to continue to build their capacity to provide more opportunities for people with disabilities to participate in sport in rural and remote areas. Building on community outreach programmes and collaborations between DPOs, sports organisations and rural communities is one way this could be achieved.
With the growing recognition and utilisation of sport as a tool for development, continual sharing of experiences of how sport for development can be inclusive of people with disabilities could encourage development actors using sport to better include people with disabilities.7 It is also positive to see a move towards collecting data, for example, through the use of the Washington Group questions, to better understand the rate of participation of people with disabilities in programmes. Yet, to evaluate the longer-term impact of inclusive sports programmes on reducing negative attitudes and promoting inclusion in the broader community, and to address the need to build the evidence base on the effectiveness of sport for development to promote the rights of population groups more likely to be excluded from development, counting the numbers of people with disability participating in programmes is insufficient.17,21,22
The need for improved quality of research on the impacts of sport-for-development is gaining recognition.9,21,23 Attributing the specific impact of inclusive sport-for-development programmes and the sustainability of this impact, requires a deeper understanding of the contextual factors which influence inclusion within sport and broader community domains including development programming. There would be great benefit in conducting baseline studies in communities before implementing programmes and disaggregating data by disability in order to really understand the current experience of people with disabilities as compared to people without disabilities; how this impacts on their access and participation in sport and other areas of community life; and what barriers need to be addressed to improve inclusion, including attitudinal barriers.24
This could then be followed up with an evaluation of the programme using the same survey to allow for an analysis of the longer-term impact of the programme for people with disabilities in their communities. Combined with other monitoring and evaluation techniques such as collection of qualitative data through stories of change, this would also enhance global understanding about how sport can be used more broadly as a tool in development.17 Guaranteeing these processes are embedded in programmes requires funders to ensure that the terms of references for implementers include appropriate resourcing for disability inclusion and its monitoring, evaluation and learning through research.
LIMITATIONS OF THE RESEARCH
The research was conducted in a tight timeframe with limited resources. As such, despite efforts made to ensure people with different types of impairments were included in the sample, it was difficult to ensure adequate representation of all groups. In particular, we were unable to directly interview people with intellectual disabilities. Given more time and resources, it would also have been beneficial to directly interview children with disabilities about their experiences in sport. The decision to use proxies for children with disabilities was made with the knowledge that limited time in-country would make it difficult to develop and use appropriate participatory methods, which would have allowed for children to directly participate in the research. More time in the country would also have allowed us to collect more information from people with disabilities living in rural and remote areas.
Because a purposeful sampling method was used, there may have been a selection bias towards people known to have positively participated in sport. Interviews were conducted with people who have dropped out of sport to try and counteract this effect. Whilst this research collected in-depth qualitative data from a range of participants, both with and without disabilities, collecting data at one point in time doesn’t necessarily provide data about changes in participation in the community over time. Nor does it allow an accurate measure of change of attitudes and barriers to participation in the community. The use of baseline surveys and ongoing monitoring and evaluation would help researchers overcome this issue.
Disability inclusion is reaching a critical point whereby organisations are becoming more aware of the importance of inclusion. There have been significant positive changes since the introduction of the CRPD, which are reflected in this research. It is hoped that this trend will continue with the explicit inclusion of disability within five of the SDGs. The growing recognition of the effectiveness of sport as a tool for development, including in the SDGs, and the importance of disability-inclusive development provides an excellent opportunity to advocate for the implementation of sport-for-development programmes which are inclusive of people with disability.6 Ensuring people with disability are included within sport-for-development programmes will contribute to the improved quality of life of people with disabilities, and help fulfil the development community's responsibility to ensure people with disabilities are no longer marginalised from the processes and benefits of broader development goals.
CBM-Nossal Partnership for Disability Inclusive Development led the research, supported by two DPO members from the Fiji Disabled Peoples Federation who were trained and supported to be Research Assistants.
1. World Health Organization and The World Bank (2011). World Report on Disability. Geneva: World Health Organization.
2. Mitra S., Posarac A., & Vick B. (2013). Disability and poverty in developing countries: a multidimensional study. World Development, 41, 1-18.
3. Brittain I. & Wolff E. (2015). Disability Sport: Changing Lives, Changing Perspectives. Journal of Sport for Development. 2015; 4(6)
4. United Nations Office for Sport for Development and Peace. Website. Retrieved on May 11, 2015 from http://www.un.org/wcm/content/site/sport/home/sport
5. Richards N.C., Gouda H.N., Durham J., Rampatige R., Rodney A., Whittaker M. (2016) Disability, non-communicable disease and health information. Bulletin of the World Health Organization, 94:230-232
6. Dudfield O. & Kaye T. (2013). The Commonwealth guide to advancing development through sport. Commonwealth Secretariat: London. Retrieved May 11, 2015 from http://www.un.org/wcm/webdav/site/sport/users/melodie.arts/public/Commonwealth%20Secretariat_2013_The%20Commonwealth%20Guide%20to%20Advancing%20Development%20through%20Sport.pdf
7. United Nations. (2015) Transforming our World: The 2030 Agenda for Sustainable Development. United Nations, New York August 2015. Retrieved July 7 2016 from: https://sustainabledevelopment.un.org/post2015/transformingourworld/publication
8. Scarpa S. (2011) Physical self concept and self esteem in adolescents and young adults with physical disability: the role of sports participation. European Journal of Adapted Physical Activity, 4(1), 38-53.
9. Khoo C., Schulenkorf N., Adair D. (2014) The opportunities and challenges of using cricket as a sport-for-development tool in Samoa. Cosmopolitan Civil Societies Journal. 2014;6(1):76-102
10. Ashton-Shaeffer C., Gibson HJ., Autry CE., & Hansen CS. (2001) Meaning of sport to adults with physical disabilities: A disability sport camp experience. Sociology of Sport Journal. 2001;18(1): 95-114.
11. WHO Western Pacific Region (2016). Non-communicable diseases in the Pacific. Retrieved September 7 2016 from: http://www.wpro.who.int/southpacific/programmes/healthy_communities/noncommunicable_diseases/page/en/
12. World Bank (2014). NCD Roadmap report. World Bank. Retrieved September 7 2016 from: http://documents.worldbank.org/curated/en/534551468332387599/Non-Communicable-Disease-NCD-Roadmap-Report
13. Heard E.M., Auvaa L., Conway B.A. (2016) Culture X: addressing barriers to physical activity in Samoa. Health Promotion International. January 29, 2016:1-9 Advance Access Retrieved on August 31 2016 from: http://www.ncbi.nlm.nih.gov/pubmed/26825998
14. Siefken K., Schofield G., Schulenkorf N. (2014) Laefstael Jenses: An Investigationof Barriers and Facilitators for Healthy Lifestyles of Women in Urban Pacific Island Context. Journal of Physical Activity and Health, 2014:11;30-37. Retrieved August 31 2016 from: http://www.ncbi.nlm.nih.gov/pubmed/23249672
15. Stewart-Withers R., Brook M. (2009). Sport as a vehicle for development: The influence of rugby league in/on the Pacific’. Massey University Institute of Development Working Paper 2009/3. Retrieved September 7 2016 from: http://mro.massey.ac.nz/bitstream/handle/10179/1070/wps3_Stewart-Withers_and_Brook.pdf?sequence=3
16. United Nations. (2006). Convention on the Rights of Persons with Disabilities. Geneva: United Nations.
17. Sanders B. (2015). An own goal in sport for development: Time to change the playing field. Journal of Sport for Development. 2015;4(6)
18. Australian Government and the Australian Sports Commission (2013). Development-through-sport. A joint strategy of the Australian Sports Commission (ASC) and the Australian Agency for International Development (AusAID) 2013-2017.
19. United Nations Enable Fact Sheet. Disability and Sports. Retrieved April 24, 2015 from http://www.un.org/disabilities/default.asp?id=1563
20. Smith L., Wegwood N., Llewellyn G., Shuttleworth R. Sport in the Lives of Young People with Intellectual Disabilities: Negotiating Disability, Identity and Belonging. Journal of Sport for Development. 2015; 3(5): 61-70.
21. Richards J., Kaufman Z., Schulenkorf N., Wolff E., Gannett K., Siefken K., Rodriguez G. (2013). Advancing the Evidence Base of Sport for Development: A New Open-Access, Peer-Reviewed Journal. Journal of Sport for Development. 2013;1(1)
22. Goujon N., Devine A., Baker S., Sprunt B., Edmonds T., Booth J., Keeffe JE. (2014) A comparative review of measurement instruments to inform and evaluate effectiveness of disability inclusive development. Disability and Rehabilitation. 2014;36(10): 804-12.
23. Cronin O. (2011) Comic Relief Review: Mapping the research on the impact of sport for development. Orla Cronin Research. Retrieved on July 23 2016 from http://www.orlacronin.com/wp-content/uploads/2011/06/Comic-relief-research-mapping-v14.pdf
24. Huq NL., Edmonds T., Baker S., Busija L., Devine A., Fotis K., et al. (2013) The Rapid Assessment of Disability – Informing the development of an instrument to measure the effectiveness of disability inclusive development through a qualitative study in Bangladesh. Disability, CBR & Inclusive Development. 2013;24(3):37-60
The 1970 British Cohort Study (BCS70) assessed their cohort members (CMs) during the study's age 5 sweep using the Complete a Profile Test.
Details on this measure and the data collected from the CMs are outlined in the table below.
|Measures:||Spatial-constructive development (Kalverboer, 1972)|
|CHC:||Gv (Visual processing)|
|Administrative method:||Health visitor at home; pen and paper|
|Procedure:||The child was asked to complete an outline picture of a human face in profile by filling in features (eyes, ears, nostrils, mouth, hair etc.).|
|Link to questionnaire:||https://cls.ucl.ac.uk/wp-content/uploads/2017/07/BCS70_age5_test_booklet.pdf|
|Scoring:||The scoring was based on the number and position of features included on the human face in profile. The scoring details are outlined in Figure 7 in Parsons (2014) and Golding (1975, pp. 268-273). The maximum score available was 16.|
|Item-level variable(s):||f090 - f098|
|Total score/derived variable(s):||f118|
|Age of participants (months):||Mean = 61.78, SD = 1.33, Range = 60 - 77|
|Descriptive statistics:||N = 12,451; Range = 0 - 16; Mean = 6.02; SD = 3.19|
|Other sweep and/or cohort:||None|
|Source:||Kalverboer, A.F. (1972). A Profile Test for the Spatial-Constructive Development. Lisse: Switz & Zeitlinger.|
|Technical resources:||Parsons, S. (2014). Childhood cognition in the 1970 British Cohort Study, CLS Working Paper. London: Centre for Longitudinal Studies.|
|Golding, J. (1975). The 1970 Birth Cohort 5-Year Follow-up: Guide to the Dataset. Bristol: University of Bristol Institute of Child Health.|
|Reference examples:||Feinstein, L. (2003). Inequality in the early cognitive development of British children in the 1970 cohort. Economica, 70(277), 73-97.|
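As a rough illustration of how a researcher might work with the variables listed above, the sketch below derives a total profile-test score from the item-level variables f090 - f098 and compares it with the deposited derived variable f118. It is a minimal sketch only: it assumes the item variables are non-negative numeric sub-scores that sum to the 0 - 16 total and that negative values are missing-data codes, both of which should be checked against the BCS70 data dictionary, and the file name in the commented example is hypothetical.

```python
# Minimal sketch: rebuilding the profile-test total score from the item-level
# variables. Assumes f090-f098 are non-negative numeric sub-scores summing to the
# 0-16 total (f118) and that negative codes mean missing data; verify both against
# the BCS70 data dictionary before use.
import pandas as pd

ITEM_VARS = [f"f{n:03d}" for n in range(90, 99)]  # f090 ... f098

def derive_profile_total(df: pd.DataFrame) -> pd.Series:
    items = df[ITEM_VARS].where(df[ITEM_VARS] >= 0)       # treat negative codes as missing
    total = items.sum(axis=1, min_count=len(ITEM_VARS))   # NaN if any item is missing
    return total.clip(upper=16)

# Tiny synthetic example (not real BCS70 data): one complete case, one all-missing case.
example = pd.DataFrame([{v: 1 for v in ITEM_VARS}, {v: -1 for v in ITEM_VARS}])
print(derive_profile_total(example))

# With the deposited data (hypothetical file name), the derivation could be checked
# against the supplied total:
# bcs5 = pd.read_stata("bcs70_age5.dta", convert_categoricals=False)
# print((derive_profile_total(bcs5) == bcs5["f118"]).mean())
```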
- Overview of all cognitive measures in BCS70
- Overview of childhood cognitive measures across all studies
This page is part of CLOSER's 'A guide to the cognitive measures in five British birth cohort studies'.
Computer graphics deals with generating images and art with the aid of computers. Today, computer graphics is a core technology in digital photography, film, video games, digital art, cell phone and computer displays, and many specialized applications. A great deal of specialized hardware and software has been developed, with the displays of most devices being driven by computer graphics hardware. It is a vast and recently developed area of computer science. The phrase was coined in 1960 by computer graphics researchers Verne Hudson and William Fetter of Boeing. It is often abbreviated as CG, or typically in the context of film as computer generated imagery (CGI). The non-artistic aspects of computer graphics are the subject of computer science research.
Some topics in computer graphics include user interface design, sprite graphics, rendering, ray tracing, geometry processing, computer animation, vector graphics, 3D modeling, shaders, GPU design, implicit surfaces, visualization, scientific computing, image processing, computational photography, scientific visualization, computational geometry and computer vision, among others. The overall methodology depends heavily on the underlying sciences of geometry, optics, physics, and perception.
Computer graphics is responsible for displaying art and image data effectively and meaningfully to the consumer. It is also used for processing image data received from the physical world, such as photo and video content. Computer graphics development has had a significant impact on many types of media and has revolutionized animation, movies, advertising, video games, in general.
The term computer graphics has been used in a broad sense to describe "almost everything on computers that is not text or sound". Typically, the term computer graphics refers to several different things:
Today, computer graphics is widespread. Such imagery is found in and on television, newspapers, weather reports, and in a variety of medical investigations and surgical procedures. A well-constructed graph can present complex statistics in a form that is easier to understand and interpret. In the media "such graphs are used to illustrate papers, reports, theses", and other presentation material.
Many tools have been developed to visualize data. Computer-generated imagery can be categorized into several different types: two dimensional (2D), three dimensional (3D), and animated graphics. As technology has improved, 3D computer graphics have become more common, but 2D computer graphics are still widely used. Computer graphics has emerged as a sub-field of computer science which studies methods for digitally synthesizing and manipulating visual content. Over the past decade, other specialized fields have been developed like information visualization, and scientific visualization more concerned with "the visualization of three dimensional phenomena (architectural, meteorological, medical, biological, etc.), where the emphasis is on realistic renderings of volumes, surfaces, illumination sources, and so forth, perhaps with a dynamic (time) component".
See also: History of computer animation
The precursor sciences to the development of modern computer graphics were the advances in electrical engineering, electronics, and television that took place during the first half of the twentieth century. Screens could display art since the Lumiere brothers' use of mattes to create special effects for the earliest films dating from 1895, but such displays were limited and not interactive. The first cathode ray tube, the Braun tube, was invented in 1897 – it in turn would permit the oscilloscope and the military control panel – the more direct precursors of the field, as they provided the first two-dimensional electronic displays that responded to programmatic or user input. Nevertheless, computer graphics remained relatively unknown as a discipline until the 1950s and the post-World War II period – during which time the discipline emerged from a combination of both pure university and laboratory academic research into more advanced computers and the United States military's further development of technologies like radar, advanced aviation, and rocketry developed during the war. New kinds of displays were needed to process the wealth of information resulting from such projects, leading to the development of computer graphics as a discipline.
Early projects like the Whirlwind and SAGE Projects introduced the CRT as a viable display and interaction interface and introduced the light pen as an input device. Douglas T. Ross of the Whirlwind SAGE system performed a personal experiment in which he wrote a small program that captured the movement of his finger and displayed its vector (his traced name) on a display scope. One of the first interactive video games to feature recognizable, interactive graphics – Tennis for Two – was created for an oscilloscope by William Higinbotham to entertain visitors in 1958 at Brookhaven National Laboratory and simulated a tennis match. In 1959, Douglas T. Ross innovated again while working at MIT on transforming mathematic statements into computer generated 3D machine tool vectors by taking the opportunity to create a display scope image of a Disney cartoon character.
Electronics pioneer Hewlett-Packard went public in 1957 after incorporating the decade prior, and established strong ties with Stanford University through its founders, who were alumni. This began the decades-long transformation of the southern San Francisco Bay Area into the world's leading computer technology hub – now known as Silicon Valley. The field of computer graphics developed with the emergence of computer graphics hardware.
Further advances in computing led to greater advancements in interactive computer graphics. In 1959, the TX-2 computer was developed at MIT's Lincoln Laboratory. The TX-2 integrated a number of new man-machine interfaces. A light pen could be used to draw sketches on the computer using Ivan Sutherland's revolutionary Sketchpad software. Using a light pen, Sketchpad allowed one to draw simple shapes on the computer screen, save them and even recall them later. The light pen itself had a small photoelectric cell in its tip. This cell emitted an electronic pulse whenever it was placed in front of a computer screen and the screen's electron gun fired directly at it. By simply timing the electronic pulse with the current location of the electron gun, it was easy to pinpoint exactly where the pen was on the screen at any given moment. Once that was determined, the computer could then draw a cursor at that location. Sutherland seemed to find the perfect solution for many of the graphics problems he faced. Even today, many standards of computer graphics interfaces got their start with this early Sketchpad program. One example of this is in drawing constraints. If one wants to draw a square for example, they do not have to worry about drawing four lines perfectly to form the edges of the box. One can simply specify that they want to draw a box, and then specify the location and size of the box. The software will then construct a perfect box, with the right dimensions and at the right location. Another example is that Sutherland's software modeled objects – not just a picture of objects. In other words, with a model of a car, one could change the size of the tires without affecting the rest of the car. It could stretch the body of car without deforming the tires.
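The timing principle described here can be made concrete with a small sketch. It assumes, for simplicity, a raster-scanned display with made-up timing constants; the historical displays in question were often vector (random-scan) devices, but the underlying idea is the same: the system knows where the beam is pointing at the instant the pen's photocell fires, so a single time measurement yields a screen position.

```python
# Minimal sketch of light pen position sensing on a raster display.
# All timing constants are illustrative, not historical values.

LINES = 480             # scan lines per frame (assumed)
PIXELS_PER_LINE = 640   # horizontal positions per line (assumed)
LINE_PERIOD_US = 63.5   # time to sweep one scan line, in microseconds (assumed)
PIXEL_PERIOD_US = LINE_PERIOD_US / PIXELS_PER_LINE

def pen_position(pulse_time_us):
    """Map the photocell pulse time (measured from the start of the frame,
    in microseconds) back to the (x, y) the electron beam was drawing."""
    y = int(pulse_time_us // LINE_PERIOD_US) % LINES
    x = int((pulse_time_us % LINE_PERIOD_US) // PIXEL_PERIOD_US)
    return x, y

# A pulse arriving 1000 microseconds into the frame:
print(pen_position(1000.0))   # -> (478, 15) with the assumed timings
```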
The phrase "computer graphics" has been credited to William Fetter, a graphic designer for Boeing in 1960. Fetter in turn attributed it to Verne Hudson, also at Boeing.
In 1961 another student at MIT, Steve Russell, created another important title in the history of video games, Spacewar! Written for the DEC PDP-1, Spacewar was an instant success and copies started flowing to other PDP-1 owners; eventually DEC obtained a copy. The engineers at DEC used it as a diagnostic program on every new PDP-1 before shipping it. The sales force picked up on this quickly enough and, when installing new units, would run the "world's first video game" for their new customers. (Higinbotham's Tennis for Two had beaten Spacewar by almost three years, but it was almost unknown outside of a research or academic setting.)
At around the same time (1961–1962) in the University of Cambridge, Elizabeth Waldram wrote code to display radio-astronomy maps on a cathode ray tube.
E. E. Zajac, a scientist at Bell Telephone Laboratory (BTL), created a film called "Simulation of a two-giro gravity attitude control system" in 1963. In this computer-generated film, Zajac showed how the attitude of a satellite could be altered as it orbits the Earth. He created the animation on an IBM 7090 mainframe computer. Also at BTL, Ken Knowlton, Frank Sinden, Ruth A. Weiss and Michael Noll started working in the computer graphics field. Sinden created a film called Force, Mass and Motion illustrating Newton's laws of motion in operation. Around the same time, other scientists were creating computer graphics to illustrate their research. At Lawrence Radiation Laboratory, Nelson Max created the films Flow of a Viscous Fluid and Propagation of Shock Waves in a Solid Form. Boeing Aircraft created a film called Vibration of an Aircraft.
Also sometime in the early 1960s, automobiles would also provide a boost through the early work of Pierre Bézier at Renault, who used Paul de Casteljau's curves – now called Bézier curves after Bézier's work in the field – to develop 3d modeling techniques for Renault car bodies. These curves would form the foundation for much curve-modeling work in the field, as curves – unlike polygons – are mathematically complex entities to draw and model well.
It was not long before major corporations started taking an interest in computer graphics. TRW, Lockheed-Georgia, General Electric and Sperry Rand are among the many companies that were getting started in computer graphics by the mid-1960s. IBM was quick to respond to this interest by releasing the IBM 2250 graphics terminal, the first commercially available graphics computer. Ralph Baer, a supervising engineer at Sanders Associates, came up with a home video game in 1966 that was later licensed to Magnavox and called the Odyssey. While very simplistic, and requiring fairly inexpensive electronic parts, it allowed the player to move points of light around on a screen. It was the first consumer computer graphics product. David C. Evans was director of engineering at Bendix Corporation's computer division from 1953 to 1962, after which he worked for the next five years as a visiting professor at Berkeley. There he continued his interest in computers and how they interfaced with people. In 1966, the University of Utah recruited Evans to form a computer science program, and computer graphics quickly became his primary interest. This new department would become the world's primary research center for computer graphics through the 1970s.
Also, in 1966, Ivan Sutherland continued to innovate at MIT when he invented the first computer-controlled head-mounted display (HMD). It displayed two separate wireframe images, one for each eye. This allowed the viewer to see the computer scene in stereoscopic 3D. The heavy hardware required for supporting the display and tracker was called the Sword of Damocles because of the potential danger if it were to fall upon the wearer. After receiving his Ph.D. from MIT, Sutherland became Director of Information Processing at ARPA (Advanced Research Projects Agency), and later became a professor at Harvard. In 1967 Sutherland was recruited by Evans to join the computer science program at the University of Utah – a development which would turn that department into one of the most important research centers in graphics for nearly a decade thereafter, eventually producing some of the most important pioneers in the field. There Sutherland perfected his HMD; twenty years later, NASA would re-discover his techniques in their virtual reality research. At Utah, Sutherland and Evans were highly sought after consultants by large companies, but they were frustrated at the lack of graphics hardware available at the time, so they started formulating a plan to start their own company.
In 1968, Dave Evans and Ivan Sutherland founded the first computer graphics hardware company, Evans & Sutherland. While Sutherland originally wanted the company to be located in Cambridge, Massachusetts, Salt Lake City was instead chosen due to its proximity to the professors' research group at the University of Utah.
Also in 1968 Arthur Appel described the first ray casting algorithm, the first of a class of ray tracing-based rendering algorithms that have since become fundamental in achieving photorealism in graphics by modeling the paths that rays of light take from a light source, to surfaces in a scene, and into the camera.
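The core geometric step in ray casting is intersecting a ray with scene geometry and shading whatever it hits. The toy sketch below casts one ray per character-cell "pixel" against a single sphere and shades hits by their angle to a light; it illustrates the general idea rather than Appel's original algorithm, and every scene value in it is made up.

```python
# Toy ray caster: one ray per "pixel", tested against a single sphere.
# Camera, sphere and light values are illustrative only.
import math

WIDTH, HEIGHT = 40, 20
SPHERE_C, SPHERE_R = (0.0, 0.0, 3.0), 1.0
LIGHT_DIR = (0.577, 0.577, -0.577)      # unit vector pointing toward the scene

def hit_sphere(origin, direction):
    """Return the nearest positive ray parameter t, or None on a miss."""
    ox, oy, oz = (origin[i] - SPHERE_C[i] for i in range(3))
    dx, dy, dz = direction
    a = dx*dx + dy*dy + dz*dz
    b = 2.0 * (ox*dx + oy*dy + oz*dz)
    c = ox*ox + oy*oy + oz*oz - SPHERE_R*SPHERE_R
    disc = b*b - 4*a*c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / (2*a)
    return t if t > 0 else None

for j in range(HEIGHT):
    row = ""
    for i in range(WIDTH):
        # Send a ray from the origin through an image plane at z = 1.
        x = (i + 0.5) / WIDTH * 2 - 1
        y = 1 - (j + 0.5) / HEIGHT * 2
        length = math.sqrt(x*x + y*y + 1)
        d = (x/length, y/length, 1/length)
        t = hit_sphere((0.0, 0.0, 0.0), d)
        if t is None:
            row += " "
        else:
            # Shade by the angle between the surface normal and the light.
            p = tuple(t * d[k] for k in range(3))
            n = tuple((p[k] - SPHERE_C[k]) / SPHERE_R for k in range(3))
            shade = max(0.0, -sum(n[k] * LIGHT_DIR[k] for k in range(3)))
            row += ".:-=+*#@@"[int(shade * 8)]
    print(row)
```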
In 1969, the ACM initiated a Special Interest Group on Graphics (SIGGRAPH), which organizes conferences, graphics standards, and publications within the field of computer graphics. By 1973, the first annual SIGGRAPH conference was held, which has become one of the focuses of the organization. SIGGRAPH has grown in size and importance as the field of computer graphics has expanded over time.
Subsequently, a number of breakthroughs in the field – particularly important early breakthroughs in the transformation of graphics from utilitarian to realistic – occurred at the University of Utah in the 1970s, which had hired Ivan Sutherland. He was paired with David C. Evans to teach an advanced computer graphics class, which contributed a great deal of founding research to the field and taught several students who would grow to found several of the industry's most important companies – namely Pixar, Silicon Graphics, and Adobe Systems. Tom Stockham led the image processing group at UU which worked closely with the computer graphics lab.
One of these students was Edwin Catmull. Catmull had just come from The Boeing Company and had been working on his degree in physics. Growing up on Disney, Catmull loved animation yet quickly discovered that he did not have the talent for drawing. Now Catmull (along with many others) saw computers as the natural progression of animation and they wanted to be part of the revolution. The first computer animation that Catmull saw was his own. He created an animation of his hand opening and closing. He also pioneered texture mapping to paint textures on three-dimensional models in 1974, now considered one of the fundamental techniques in 3D modeling. It became one of his goals to produce a feature-length motion picture using computer graphics – a goal he would achieve two decades later after his founding role in Pixar. In the same class, Fred Parke created an animation of his wife's face. The two animations were included in the 1976 feature film Futureworld.
As the UU computer graphics laboratory was attracting people from all over, John Warnock was another of those early pioneers; he later founded Adobe Systems and created a revolution in the publishing world with his PostScript page description language. Adobe would go on to create the industry-standard photo editing software in Adobe Photoshop and a prominent movie industry special effects program in Adobe After Effects.
James Clark was also there; he later founded Silicon Graphics, a maker of advanced rendering systems that would dominate the field of high-end graphics until the early 1990s.
A major advance in 3D computer graphics was created at UU by these early pioneers – hidden surface determination. In order to draw a representation of a 3D object on the screen, the computer must determine which surfaces are "behind" the object from the viewer's perspective, and thus should be "hidden" when the computer creates (or renders) the image. The 3D Core Graphics System (or Core) was the first graphical standard to be developed. A group of 25 experts of the ACM Special Interest Group SIGGRAPH developed this "conceptual framework". The specifications were published in 1977, and it became a foundation for many future developments in the field.
Also in the 1970s, Henri Gouraud, Jim Blinn and Bui Tuong Phong contributed to the foundations of shading in CGI via the development of the Gouraud shading and Blinn–Phong shading models, allowing graphics to move beyond a "flat" look to a look more accurately portraying depth. Jim Blinn also innovated further in 1978 by introducing bump mapping, a technique for simulating uneven surfaces, and the predecessor to many more advanced kinds of mapping used today.
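As a rough illustration of the kind of per-point computation these models introduced, the sketch below evaluates a Blinn–Phong-style intensity using the half-vector that distinguishes Blinn's variant from Phong's original formulation. The material constants are arbitrary; real renderers evaluate this per vertex (Gouraud) or per pixel.

```python
# Sketch of Blinn-Phong shading for a single surface point.
# Vectors are 3-tuples; material and light values are illustrative.
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def blinn_phong(normal, to_light, to_viewer,
                ambient=0.1, diffuse=0.7, specular=0.4, shininess=32):
    n = normalize(normal)
    l = normalize(to_light)
    v = normalize(to_viewer)
    h = normalize(tuple(l[i] + v[i] for i in range(3)))   # half-vector
    diff = diffuse * max(0.0, dot(n, l))
    spec = specular * max(0.0, dot(n, h)) ** shininess
    return ambient + diff + spec      # scalar intensity for one colour channel

# Example: light up and to the side, viewer straight above the surface.
print(blinn_phong(normal=(0, 0, 1), to_light=(1, 1, 1), to_viewer=(0, 0, 1)))
```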
The modern videogame arcade as is known today was birthed in the 1970s, with the first arcade games using real-time 2D sprite graphics. Pong in 1972 was one of the first hit arcade cabinet games. Speed Race in 1974 featured sprites moving along a vertically scrolling road. Gun Fight in 1975 featured human-looking animated characters, while Space Invaders in 1978 featured a large number of animated figures on screen; both used a specialized barrel shifter circuit made from discrete chips to help their Intel 8080 microprocessor animate their framebuffer graphics.
The 1980s began to see the modernization and commercialization of computer graphics. As the home computer proliferated, a subject which had previously been an academics-only discipline was adopted by a much larger audience, and the number of computer graphics developers increased significantly.
In the early 1980s, metal–oxide–semiconductor (MOS) very-large-scale integration (VLSI) technology led to the availability of 16-bit central processing unit (CPU) microprocessors and the first graphics processing unit (GPU) chips, which began to revolutionize computer graphics, enabling high-resolution graphics for computer graphics terminals as well as personal computer (PC) systems. NEC's µPD7220 was the first GPU, fabricated on a fully integrated NMOS VLSI chip. It supported up to 1024x1024 resolution, and laid the foundations for the emerging PC graphics market. It was used in a number of graphics cards, and was licensed for clones such as the Intel 82720, the first of Intel's graphics processing units. MOS memory also became cheaper in the early 1980s, enabling the development of affordable framebuffer memory, notably video RAM (VRAM) introduced by Texas Instruments (TI) in the mid-1980s. In 1984, Hitachi released the ARTC HD63484, the first complementary MOS (CMOS) GPU. It was capable of displaying high-resolution in color mode and up to 4K resolution in monochrome mode, and it was used in a number of graphics cards and terminals during the late 1980s. In 1986, TI introduced the TMS34010, the first fully programmable MOS graphics processor.
Computer graphics terminals during this decade became increasingly intelligent, semi-standalone and standalone workstations. Graphics and application processing were increasingly migrated to the intelligence in the workstation, rather than continuing to rely on central mainframe and mini-computers. Typical of the early move to high-resolution computer graphics intelligent workstations for the computer-aided engineering market were the Orca 1000, 2000 and 3000 workstations, developed by Orcatech of Ottawa, a spin-off from Bell-Northern Research, and led by David Pearson, an early workstation pioneer. The Orca 3000 was based on the 16-bit Motorola 68000 microprocessor and AMD bit-slice processors, and had Unix as its operating system. It was targeted squarely at the sophisticated end of the design engineering sector. Artists and graphic designers began to see the personal computer, particularly the Commodore Amiga and Macintosh, as a serious design tool, one that could save time and draw more accurately than other methods. The Macintosh remains a highly popular tool for computer graphics among graphic design studios and businesses. Modern computers, dating from the 1980s, often use graphical user interfaces (GUI) to present data and information with symbols, icons and pictures, rather than text. Graphics are one of the five key elements of multimedia technology.
In the field of realistic rendering, Japan's Osaka University developed the LINKS-1 Computer Graphics System, a supercomputer that used up to 257 Zilog Z8001 microprocessors, in 1982, for the purpose of rendering realistic 3D computer graphics. According to the Information Processing Society of Japan: "The core of 3D image rendering is calculating the luminance of each pixel making up a rendered surface from the given viewpoint, light source, and object position. The LINKS-1 system was developed to realize an image rendering methodology in which each pixel could be parallel processed independently using ray tracing. By developing a new software methodology specifically for high-speed image rendering, LINKS-1 was able to rapidly render highly realistic images." The LINKS-1 was the world's most powerful computer, as of 1984.
Also in the field of realistic rendering, the general rendering equation of David Immel and James Kajiya was developed in 1986 – an important step towards implementing global illumination, which is necessary to pursue photorealism in computer graphics.
The continuing popularity of Star Wars and other science fiction franchises were relevant in cinematic CGI at this time, as Lucasfilm and Industrial Light & Magic became known as the "go-to" house by many other studios for topnotch computer graphics in film. Important advances in chroma keying ("bluescreening", etc.) were made for the later films of the original trilogy. Two other pieces of video would also outlast the era as historically relevant: Dire Straits' iconic, near-fully-CGI video for their song "Money for Nothing" in 1985, which popularized CGI among music fans of that era, and a scene from Young Sherlock Holmes the same year featuring the first fully CGI character in a feature movie (an animated stained-glass knight). In 1988, the first shaders – small programs designed specifically to do shading as a separate algorithm – were developed by Pixar, which had already spun off from Industrial Light & Magic as a separate entity – though the public would not see the results of such technological progress until the next decade. In the late 1980s, Silicon Graphics (SGI) computers were used to create some of the first fully computer-generated short films at Pixar, and Silicon Graphics machines were considered a high-water mark for the field during the decade.
The 1980s is also called the golden era of videogames; millions-selling systems from Atari, Nintendo and Sega, among other companies, exposed computer graphics for the first time to a new, young, and impressionable audience – as did MS-DOS-based personal computers, Apple IIs, Macs, and Amigas, all of which also allowed users to program their own games if skilled enough. For the arcades, advances were made in commercial, real-time 3D graphics. In 1988, the first dedicated real-time 3D graphics boards were introduced for arcades, with the Namco System 21 and Taito Air System. On the professional side, Evans & Sutherland and SGI developed 3D raster graphics hardware that directly influenced the later single-chip graphics processing unit (GPU), a technology where a separate and very powerful chip is used in parallel processing with a CPU to optimize graphics.
The decade also saw computer graphics applied to many additional professional markets, including location-based entertainment and education with the E&S Digistar, vehicle design, vehicle simulation, and chemistry.
The 1990s' overwhelming note was the emergence of 3D modeling on a mass scale and an impressive rise in the quality of CGI generally. Home computers became able to take on rendering tasks that previously had been limited to workstations costing thousands of dollars; as 3D modelers became available for home systems, the popularity of Silicon Graphics workstations declined and powerful Microsoft Windows and Apple Macintosh machines running Autodesk products like 3D Studio or other home rendering software ascended in importance. By the end of the decade, the GPU would begin its rise to the prominence it still enjoys today.
The field began to see the first rendered graphics that could truly pass as photorealistic to the untrained eye (though they could not yet do so with a trained CGI artist), and 3D graphics became far more popular in gaming, multimedia, and animation. At the end of the 1980s and the beginning of the 1990s, the very first computer graphics TV series were created in France: La Vie des bêtes by studio Mac Guff Ligne (1988), Les Fables Géométriques (1989–1991) by studio Fantôme, and Quarxs, the first HDTV computer graphics series, by Maurice Benayoun and François Schuiten (studio Z-A production, 1990–1993).
In film, Pixar began its serious commercial rise in this era under Edwin Catmull, with its first major film release, in 1995 – Toy Story – a critical and commercial success of nine-figure magnitude. The studio that invented the programmable shader would go on to have many animated hits, and its work on prerendered video animation is still considered an industry leader and research trail breaker.
In video games, in 1992, Virtua Racing, running on the Sega Model 1 arcade system board, laid the foundations for fully 3D racing games and popularized real-time 3D polygonal graphics among a wider audience in the video game industry. The Sega Model 2 in 1993 and Sega Model 3 in 1996 subsequently pushed the boundaries of commercial, real-time 3D graphics. Back on the PC, Wolfenstein 3D, Doom and Quake, three of the first massively popular 3D first-person shooter games, were released by id Software to critical and popular acclaim during this decade, using rendering engines developed primarily by John Carmack. The Sony PlayStation, Sega Saturn, and Nintendo 64, among other consoles, sold in the millions and popularized 3D graphics for home gamers. Certain late-1990s first-generation 3D titles became seen as influential in popularizing 3D graphics among console users, such as platform games Super Mario 64 and The Legend of Zelda: Ocarina of Time, and early 3D fighting games like Virtua Fighter, Battle Arena Toshinden, and Tekken.
Technology and algorithms for rendering continued to improve greatly. In 1996, Krishnamurthy and Levoy invented normal mapping – an improvement on Jim Blinn's bump mapping. 1999 saw Nvidia release the seminal GeForce 256, the first home video card billed as a graphics processing unit or GPU, which in its own words contained "integrated transform, lighting, triangle setup/clipping, and rendering engines". By the end of the decade, computers adopted common frameworks for graphics processing such as DirectX and OpenGL. Since then, computer graphics have only become more detailed and realistic, due to more powerful graphics hardware and 3D modeling software. AMD also became a leading developer of graphics boards in this decade, creating a "duopoly" in the field which exists to this day.
CGI became ubiquitous in earnest during this era. Video games and CGI cinema had spread the reach of computer graphics to the mainstream by the late 1990s and continued to do so at an accelerated pace in the 2000s. CGI was also adopted en masse for television advertisements widely in the late 1990s and 2000s, and so became familiar to a massive audience.
The continued rise and increasing sophistication of the graphics processing unit were crucial to this decade, and 3D rendering capabilities became a standard feature as 3D-graphics GPUs became considered a necessity for desktop computer makers to offer. The Nvidia GeForce line of graphics cards dominated the market in the early decade with occasional significant competing presence from ATI. As the decade progressed, even low-end machines usually contained a 3D-capable GPU of some kind as Nvidia and AMD both introduced low-priced chipsets and continued to dominate the market. Shaders which had been introduced in the 1980s to perform specialized processing on the GPU would by the end of the decade become supported on most consumer hardware, speeding up graphics considerably and allowing for greatly improved texture and shading in computer graphics via the widespread adoption of normal mapping, bump mapping, and a variety of other techniques allowing the simulation of a great amount of detail.
Computer graphics used in films and video games gradually began to be realistic to the point of entering the uncanny valley. CGI movies proliferated, with cartoon-style CGI films like Ice Age and Madagascar as well as numerous Pixar offerings like Finding Nemo dominating the box office in this field. Final Fantasy: The Spirits Within, released in 2001, was the first fully computer-generated feature film to use photorealistic CGI characters and be fully made with motion capture. The film was not a box-office success, however. Some commentators have suggested this may be partly because the lead CGI characters had facial features which fell into the "uncanny valley". Other animated films like The Polar Express drew attention at this time as well. Star Wars also resurfaced with its prequel trilogy and the effects continued to set a bar for CGI in film.
In videogames, the Sony PlayStation 2 and 3, the Microsoft Xbox line of consoles, and offerings from Nintendo such as the GameCube maintained a large following, as did the Windows PC. Marquee CGI-heavy titles like the series of Grand Theft Auto, Assassin's Creed, Final Fantasy, BioShock, Kingdom Hearts, Mirror's Edge and dozens of others continued to approach photorealism, grow the video game industry and impress, until that industry's revenues became comparable to those of movies. Microsoft made a decision to expose DirectX more easily to the independent developer world with the XNA program, but it was not a success. DirectX itself remained a commercial success, however. OpenGL continued to mature as well, and it and DirectX improved greatly; the second-generation shader languages HLSL and GLSL began to be popular in this decade.
In scientific computing, the GPGPU technique for passing large amounts of data bidirectionally between a GPU and CPU was invented, speeding up analysis on many kinds of bioinformatics and molecular biology experiments. The technique has also been used for Bitcoin mining and has applications in computer vision.
In the 2010s, CGI has been nearly ubiquitous in video, pre-rendered graphics are nearly scientifically photorealistic, and real-time graphics on a suitably high-end system may simulate photorealism to the untrained eye.
Texture mapping has matured into a multistage process with many layers; it is not uncommon to implement texture mapping, bump mapping or normal mapping, isosurfaces, lighting maps including specular highlights and reflection techniques, and shadow volumes in one rendering engine using shaders, which have matured considerably. Shaders are now very nearly a necessity for advanced work in the field, providing considerable complexity in manipulating pixels, vertices, and textures on a per-element basis, and countless possible effects. Their shader languages HLSL and GLSL are active fields of research and development. Physically based rendering (PBR), which implements many maps and performs advanced calculation to simulate real optic light flow, is an active research area as well, along with advanced areas like ambient occlusion, subsurface scattering, Rayleigh scattering, photon mapping, and many others. Experiments into the processing power required to provide graphics in real time at ultra-high-resolution modes like 4K Ultra HD are beginning, though beyond the reach of all but the highest-end hardware.
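As a very rough sketch of a single such layer, the following combines one sample from a tangent-space normal map with one albedo sample under a single light. The values are illustrative; a real engine would run this per pixel in a fragment shader and stack further stages (specular maps, shadows, ambient occlusion, and so on) on top.

```python
# One stage of a multistage texture pipeline: decode a tangent-space normal
# map sample and use it to modulate an albedo sample (Lambert diffuse only).
# All sampled values below are illustrative.
import math

def decode_normal(rgb):
    """Normal maps store unit vectors remapped into [0, 1]; undo that."""
    v = tuple(2.0 * c - 1.0 for c in rgb)
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def lambert(albedo_rgb, normal_rgb, light_dir):
    n = decode_normal(normal_rgb)
    length = math.sqrt(sum(c * c for c in light_dir))
    l = tuple(c / length for c in light_dir)
    intensity = max(0.0, sum(n[i] * l[i] for i in range(3)))
    return tuple(a * intensity for a in albedo_rgb)

# A "flat" normal sample (0.5, 0.5, 1.0) leaves the lighting unperturbed;
# a skewed sample darkens the same albedo under the same light.
print(lambert((0.8, 0.3, 0.2), (0.5, 0.5, 1.0), (0.0, 0.0, 1.0)))
print(lambert((0.8, 0.3, 0.2), (0.8, 0.5, 0.8), (0.0, 0.0, 1.0)))
```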
In cinema, most animated movies are CGI now; a great many animated CGI films are made per year, but few, if any, attempt photorealism due to continuing fears of the uncanny valley. Most are 3D cartoons.
In videogames, the Microsoft Xbox One, Sony PlayStation 4, and Nintendo Switch dominated the home space and were all capable of advanced 3D graphics; the Windows PC was still one of the most active gaming platforms as well.
Main article: 2D computer graphics
See also: Video display controller
2D computer graphics are the computer-based generation of digital images—mostly from two-dimensional models such as geometric models, text, and digital images—and by techniques specific to them.
2D computer graphics are mainly used in applications that were originally developed upon traditional printing and drawing technologies such as typography. In those applications, the two-dimensional image is not just a representation of a real-world object, but an independent artifact with added semantic value; two-dimensional models are therefore preferred because they give more direct control of the image than 3D computer graphics, whose approach is more akin to photography than to typography.
See also: Pixel art
A large form of digital art, pixel art is created through the use of raster graphics software, where images are edited on the pixel level. Graphics in most old (or relatively limited) computer and video games, graphing calculator games, and many mobile phone games are mostly pixel art.
See also: Sprite (computer graphics)
A sprite is a two-dimensional image or animation that is integrated into a larger scene. Initially including just graphical objects handled separately from the memory bitmap of a video display, this now includes various manners of graphical overlays.
Originally, sprites were a method of integrating unrelated bitmaps so that they appeared to be part of the normal bitmap on a screen, such as creating an animated character that can be moved on a screen without altering the data defining the overall screen. Such sprites can be created by either electronic circuitry or software. In circuitry, a hardware sprite is a hardware construct that employs custom DMA channels to integrate visual elements with the main screen, in that it superimposes two discrete video sources. Software can simulate this through specialized rendering methods.
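A minimal software version of this compositing step can be sketched as follows, with character grids standing in for bitmaps. A hardware sprite mixes two video sources on the fly; this version instead draws onto a copy of the background so the original screen data are never altered.

```python
# Sketch of software sprite compositing over an untouched background.
# One character value is treated as transparent.

TRANSPARENT = "."

def blit(background, sprite, x, y):
    """Return a new frame with the sprite drawn at (x, y); the background
    itself is left unmodified so the sprite can move next frame."""
    frame = [row[:] for row in background]          # copy, don't modify
    for sy, sprite_row in enumerate(sprite):
        for sx, value in enumerate(sprite_row):
            if value != TRANSPARENT and 0 <= y + sy < len(frame) \
                    and 0 <= x + sx < len(frame[0]):
                frame[y + sy][x + sx] = value
    return frame

background = [list("~~~~~~~~") for _ in range(5)]
sprite = [list(".X."), list("XXX"), list(".X.")]

for line in blit(background, sprite, 3, 1):
    print("".join(line))
```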
See also: Vector graphics
Vector graphics formats are complementary to raster graphics. Raster graphics is the representation of images as an array of pixels and is typically used for the representation of photographic images. Vector graphics consists of encoding information about the shapes and colors that comprise the image, which can allow for more flexibility in rendering. There are instances when working with vector tools and formats is best practice, and instances when working with raster tools and formats is best practice. There are times when both formats come together. An understanding of the advantages and limitations of each technology, and of the relationship between them, is most likely to result in efficient and effective use of tools.
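One way to see how the two representations relate is to rasterize a vector description at whatever resolution is needed. The sketch below is purely illustrative: the "vector" form is just a centre and radius, while the raster form is a fixed grid of samples taken from it.

```python
# A circle described as a shape (vector form) versus the same circle sampled
# into a fixed grid of pixels (raster form). Sizes and values are arbitrary.

circle = {"cx": 0.0, "cy": 0.0, "r": 0.8}     # vector description: 3 numbers

def rasterize(shape, size):
    """Sample the shape onto a size x size pixel grid."""
    grid = []
    for j in range(size):
        row = ""
        for i in range(size):
            # Map pixel centres to the square [-1, 1] x [-1, 1].
            x = (i + 0.5) / size * 2 - 1
            y = (j + 0.5) / size * 2 - 1
            inside = (x - shape["cx"])**2 + (y - shape["cy"])**2 <= shape["r"]**2
            row += "#" if inside else "."
        grid.append(row)
    return grid

# The vector form is resolution-independent: the same three numbers can be
# rasterized at any size, whereas a stored raster image has a fixed size.
for line in rasterize(circle, 12):
    print(line)
```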
See also: Text-to-image model
Since the mid-2010s, as a result of advances in deep neural networks, models have been created which take as input a natural language description and produce as output an image matching that description. Text-to-image models generally combine a language model, which transforms the input text into a latent representation, and a generative image model, which produces an image conditioned on that representation. The most effective models have generally been trained on massive amounts of image and text data scraped from the web. By 2022, the best of these models, for example DALL-E 2 and Stable Diffusion, were able to create images in a range of styles, ranging from imitations of living artists to near-photorealistic, in a matter of seconds, given powerful enough hardware.
Main article: 3D computer graphics
See also: Graphics processing unit
3D graphics, compared to 2D graphics, are graphics that use a three-dimensional representation of geometric data, which is stored in the computer for the purposes of performing calculations and rendering images. This includes images that may be for later display or for real-time viewing.
Despite these differences, 3D computer graphics rely on many of the same algorithms as 2D computer graphics, both in the wireframe model and in the final rendered raster display. In computer graphics software, the distinction between 2D and 3D is occasionally blurred; 2D applications may use 3D techniques to achieve effects such as lighting, and primarily 3D applications may use 2D rendering techniques.
3D computer graphics are often referred to as 3D models. Apart from the rendered image, the model is contained within the graphical data file. However, there are differences: a 3D model is the mathematical representation of any three-dimensional object, and a model is not technically a graphic until it is visually displayed. A model can be displayed as a two-dimensional image through the process of 3D rendering, used in non-graphical computer simulations and calculations, or, thanks to 3D printing, realised as a physical object rather than being confined to virtual space.
See also: Computer animation
Computer animation is the art of creating moving images via the use of computers. It is a subfield of computer graphics and animation. Increasingly it is created by means of 3D computer graphics, though 2D computer graphics are still widely used for stylistic, low bandwidth, and faster real-time rendering needs. Sometimes the target of the animation is the computer itself, but sometimes the target is another medium, such as film. It is also referred to as CGI (Computer-generated imagery or computer-generated imaging), especially when used in films.
Virtual entities may contain and be controlled by assorted attributes, such as transform values (location, orientation, and scale) stored in an object's transformation matrix. Animation is the change of an attribute over time. Multiple methods of achieving animation exist; the rudimentary form is based on the creation and editing of keyframes, each storing a value at a given time, per attribute to be animated. The 2D/3D graphics software interpolates between keyframes, creating an editable curve of a value mapped over time, which results in animation. Other methods of animation include procedural and expression-based techniques: the former consolidates related elements of animated entities into sets of attributes, useful for creating particle effects and crowd simulations; the latter allows an evaluated result returned from a user-defined logical expression, coupled with mathematics, to automate animation in a predictable way (convenient for controlling bone behavior beyond what a hierarchy offers in a skeletal system set-up).
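A minimal sketch of the keyframe mechanism, using straight-line interpolation between (time, value) pairs for one attribute; production software would normally offer editable spline curves rather than linear segments.

```python
# Keyframe animation sketch: sample one animated attribute at arbitrary times
# by interpolating between the surrounding keyframes. Values are illustrative.

def sample(keyframes, t):
    """Evaluate an attribute at time t from a sorted list of (time, value)."""
    if t <= keyframes[0][0]:
        return keyframes[0][1]
    if t >= keyframes[-1][0]:
        return keyframes[-1][1]
    for (t0, v0), (t1, v1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            alpha = (t - t0) / (t1 - t0)
            return v0 + alpha * (v1 - v0)

# X position of an object: starts at 0, reaches 10 at t=2 s, drops to 4 at t=3 s.
position_x = [(0.0, 0.0), (2.0, 10.0), (3.0, 4.0)]
for frame in range(7):
    t = frame * 0.5
    print(f"t={t:.1f}s  x={sample(position_x, t):.2f}")
```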
To create the illusion of movement, an image is displayed on the computer screen then quickly replaced by a new image that is similar to the previous image, but shifted slightly. This technique is identical to the illusion of movement in television and motion pictures.
Images are typically created by devices such as cameras, mirrors, lenses, telescopes, microscopes, etc.
Digital images include both vector images and raster images, but raster images are more commonly used.
In digital imaging, a pixel (or picture element) is a single point in a raster image. Pixels are placed on a regular 2-dimensional grid, and are often represented using dots or squares. Each pixel is a sample of an original image, where more samples typically provide a more accurate representation of the original. The intensity of each pixel is variable; in color systems, each pixel has typically three components such as red, green, and blue.
Graphics are visual presentations on a surface, such as a computer screen. Examples are photographs, drawing, graphics designs, maps, engineering drawings, or other images. Graphics often combine text and illustration. Graphic design may consist of the deliberate selection, creation, or arrangement of typography alone, as in a brochure, flier, poster, web site, or book without any other element. Clarity or effective communication may be the objective, association with other cultural elements may be sought, or merely, the creation of a distinctive style.
Primitives are basic units which a graphics system may combine to create more complex images or models. Examples would be sprites and character maps in 2D video games, geometric primitives in CAD, or polygons or triangles in 3D rendering. Primitives may be supported in hardware for efficient rendering, or the building blocks provided by a graphics application.
Rendering is the generation of a 2D image from a 3D model by means of computer programs. A scene file contains objects in a strictly defined language or data structure; it would contain geometry, viewpoint, texture, lighting, and shading information as a description of the virtual scene. The data contained in the scene file is then passed to a rendering program to be processed and output to a digital image or raster graphics image file. The rendering program is usually built into the computer graphics software, though others are available as plug-ins or entirely separate programs. The term "rendering" may be by analogy with an "artist's rendering" of a scene. Although the technical details of rendering methods vary, the general challenges to overcome in producing a 2D image from a 3D representation stored in a scene file are outlined as the graphics pipeline along a rendering device, such as a GPU. A GPU is a device able to assist the CPU in calculations. If a scene is to look relatively realistic and predictable under virtual lighting, the rendering software should solve the rendering equation. The rendering equation does not account for all lighting phenomena, but is a general lighting model for computer-generated imagery. 'Rendering' is also used to describe the process of calculating effects in a video editing file to produce final video output.
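The rendering equation referred to above is commonly written in the following form (after Kajiya, 1986): the light leaving a point in a given direction is the light the surface emits plus the reflected portion of all light arriving at that point.

```latex
L_o(\mathbf{x}, \omega_o) = L_e(\mathbf{x}, \omega_o)
  + \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o)\,
    L_i(\mathbf{x}, \omega_i)\, (\omega_i \cdot \mathbf{n})\, \mathrm{d}\omega_i
```

Here L_o is outgoing radiance, L_e emitted radiance, f_r the bidirectional reflectance distribution function (BRDF), L_i incoming radiance, n the surface normal, and the integral runs over the hemisphere of incoming directions. Renderers approximate this integral numerically, since it cannot in general be solved exactly.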
Volume rendering is a technique used to display a 2D projection of a 3D discretely sampled data set. A typical 3D data set is a group of 2D slice images acquired by a CT or MRI scanner.
Usually these are acquired in a regular pattern (e.g., one slice every millimeter) and usually have a regular number of image pixels in a regular pattern. This is an example of a regular volumetric grid, with each volume element, or voxel represented by a single value that is obtained by sampling the immediate area surrounding the voxel.
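As a deliberately tiny illustration, the sketch below builds a synthetic voxel grid and projects it to 2D by keeping the brightest voxel along each line of sight (maximum intensity projection, one of the simplest volume rendering techniques). The data and dimensions are made up; real data sets would be stacks of CT or MRI slices.

```python
# Maximum intensity projection (MIP) of a small synthetic voxel grid.

DEPTH, HEIGHT, WIDTH = 4, 5, 8

# volume[z][y][x]: a bright diagonal feature embedded in a dark background.
volume = [[[0.0 for _ in range(WIDTH)] for _ in range(HEIGHT)]
          for _ in range(DEPTH)]
for z in range(DEPTH):
    volume[z][z + 1][2 * z] = 1.0

def mip(vol):
    """Project along z: each output pixel keeps the brightest voxel behind it."""
    return [[max(vol[z][y][x] for z in range(len(vol)))
             for x in range(len(vol[0][0]))]
            for y in range(len(vol[0]))]

for row in mip(volume):
    print("".join("#" if v > 0.5 else "." for v in row))
```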
Main article: 3D modeling
3D modeling is the process of developing a mathematical, wireframe representation of any three-dimensional object, called a "3D model", via specialized software. Models may be created automatically or manually; the manual modeling process of preparing geometric data for 3D computer graphics is similar to plastic arts such as sculpting. 3D models may be created using multiple approaches: use of NURBS curves to generate accurate and smooth surface patches, polygonal mesh modeling (manipulation of faceted geometry), or polygonal mesh subdivision (advanced tessellation of polygons, resulting in smooth surfaces similar to NURBS models). A 3D model can be displayed as a two-dimensional image through a process called 3D rendering, used in a computer simulation of physical phenomena, or animated directly for other purposes. The model can also be physically created using 3D printing devices.
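The most common representation produced by polygonal modeling can be sketched as a list of vertices shared by indexed faces, shown below for a tetrahedron; computing a face normal illustrates the kind of derived geometric data a renderer extracts from such a model. The shape and values are illustrative only.

```python
# Minimal polygonal mesh: shared vertices plus faces that index into them.
import math

vertices = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
faces = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]   # triangles, by index

def face_normal(face):
    """Unit normal of a triangular face via the cross product of two edges."""
    a, b, c = (vertices[i] for i in face)
    u = tuple(b[i] - a[i] for i in range(3))
    v = tuple(c[i] - a[i] for i in range(3))
    n = (u[1]*v[2] - u[2]*v[1],
         u[2]*v[0] - u[0]*v[2],
         u[0]*v[1] - u[1]*v[0])
    length = math.sqrt(sum(component * component for component in n))
    return tuple(component / length for component in n)

for f in faces:
    print(f, face_normal(f))
```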
Main article: Computer graphics (computer science)
The study of computer graphics is a sub-field of computer science which studies methods for digitally synthesizing and manipulating visual content. Although the term often refers to three-dimensional computer graphics, it also encompasses two-dimensional graphics and image processing.
As an academic discipline, computer graphics studies the manipulation of visual and geometric information using computational techniques. It focuses on the mathematical and computational foundations of image generation and processing rather than purely aesthetic issues. Computer graphics is often differentiated from the field of visualization, although the two fields have many similarities.
Computer graphics may be used in the following areas: | 1 | 28 |
- Research article
- Open Access
Are interventions to promote healthy eating equally effective for all? Systematic review of socioeconomic inequalities in impact
BMC Public Health volume 15, Article number: 457 (2015)
Interventions to promote healthy eating make a potentially powerful contribution to the primary prevention of non communicable diseases. It is not known whether healthy eating interventions are equally effective among all sections of the population, nor whether they narrow or widen the health gap between rich and poor.
We undertook a systematic review of interventions to promote healthy eating to identify whether impacts differ by socioeconomic position (SEP).
We searched five bibliographic databases using a pre-piloted search strategy. Retrieved articles were screened independently by two reviewers. Healthier diets were defined as the reduced intake of salt, sugar, trans-fats, saturated fat, total fat, or total calories, or increased consumption of fruit, vegetables and wholegrain. Studies were only included if quantitative results were presented by a measure of SEP.
Extracted data were categorised with a modified version of the “4Ps” marketing mix, expanded to 6 “Ps”: “Price, Place, Product, Prescriptive, Promotion, and Person”.
Our search identified 31,887 articles. Following screening, 36 studies (reporting 47 interventions) were included: 18 “Price” interventions, 6 “Place” interventions, 1 “Product” intervention, zero “Prescriptive” interventions, 4 “Promotion” interventions, and 18 “Person” interventions.
“Price” interventions were most effective in groups with lower SEP, and therefore appear likely to reduce inequalities. All interventions that combined taxes and subsidies consistently decreased inequalities. Conversely, interventions categorised as “Person” had a greater impact with increasing SEP, and therefore appear likely to widen inequalities. All four dietary counselling interventions appear likely to widen inequalities.
We did not find any “Prescriptive” interventions, and found only one “Product” intervention that presented differential results; it showed no impact by SEP. More “Place” interventions were identified, and none of these were judged as likely to widen inequalities.
Interventions categorised by a “6 Ps” framework show differential effects on healthy eating outcomes by SEP. “Upstream” interventions categorised as “Price” appeared to decrease inequalities, while “downstream” “Person” interventions, especially dietary counselling, seemed to increase inequalities.
However the vast majority of studies identified did not explore differential effects by SEP. Interventions aimed at improving population health should be routinely evaluated for differential socioeconomic impact.
Non communicable diseases (NCDs, e.g. cardiovascular disease (CVD), chronic obstructive pulmonary disease, diabetes, cancer, etc.) remain the major cause of disease, disability and death, accounting for over 63% of deaths worldwide in 2012. A substantial amount of the NCD burden is attributable to four behavioural risk factors (notably poor diet, also smoking, alcohol and physical inactivity). Poor nutrition causes a greater population burden of morbidity and mortality from NCDs than tobacco, alcohol and physical activity combined. Furthermore, the prevalence of NCD risk factors and hence burden of NCDs are not equally distributed throughout the population. There is evidence for an inverse relationship between socioeconomic position (SEP) and most risk factors, with NCD risk factors often being higher in more disadvantaged groups (low SEP).
Thus, eating a healthy diet demonstrates a social gradient, with diet among people in lower SEPs being poorer in quality when compared to more advantaged groups. The World Health Organisation (WHO) defines a healthy diet as achieving energy balance, limiting energy intake from total fats, free sugars and salt, and increasing consumption of fruits and vegetables, legumes, whole grains and nuts. Lower SEP is associated with a higher intake of energy dense, nutrient poor foods (which are high in saturated fat and sugar), and with lower intake of fruit, vegetables and wholegrains.
Socioeconomic inequalities in diet are influenced by factors including cost, access and knowledge. A diet relatively high in energy is generally less expensive than a diet consisting of less energy dense products, such as vegetables. Food selection is not only a behavioural choice, but also an economic one. Access to healthy foods can also be inequitable. This can take the form of a lack of healthy food options in shops within disadvantaged areas, which has been described in the US in terms of “food deserts”; however, evidence for these has not been found in other settings, e.g. the UK. Significant differences in nutritional knowledge have been shown between differing socioeconomic groups, with knowledge declining with lower socioeconomic status. In children, lower SEP is associated with a subsequent increased risk of adult cardiovascular morbidity and mortality, partly reflecting lower exposure to healthy foods. This can then reinforce adult food preferences for less healthy foods.
There has been considerable effort to develop population-wide dietary interventions. These primary prevention programmes are aimed at asymptomatic individuals in the normal population, before any negative health event has occurred. Interventions at this stage aim to modify NCD risk factors through the promotion of healthier diets. Potentially powerful interventions are available which target the components outlined above - cost, access and knowledge. Furthermore, such population interventions, by their very nature, should theoretically benefit everyone in the population, including those with a history of NCD such as CVD.
However, there is a lack of evidence concerning the health equity impact of dietary interventions to promote health. This has led to an increase in systematic reviews assessing health equity effects [14,15]. Preventive interventions may not benefit all subgroups of the population equally [16,17]. This has been termed “intervention generated inequalities” or “IGIs”.
White et al. have described the points in the implementation of an intervention which may impact upon differential effectiveness by SEP. These include intervention efficacy, service provision or access, uptake, and compliance. Compliance may be higher among more advantaged groups because of better access to resources such as time, finance, and coping skills. “Downstream” interventions (which rely solely on individuals making and sustaining behaviour change) may therefore be more likely to be taken up by those who are of higher SEP and are more likely to widen the health gap between rich and poor. Conversely, those of lower SEP tend to be harder to reach, and find it harder to change behaviour due to a lack of access to the resources previously outlined. “Upstream” interventions remove this reliance on resource availability. Due to a higher risk burden, those of lower SEP are likely to gain extra benefit if a risk factor is uniformly reduced across the entire population; such interventions are therefore more likely to reduce inequalities [16,20].
Thomas and colleagues demonstrated differential impact of tobacco control policy interventions. They showed that population level tobacco control interventions, such as increasing the price of tobacco products, had a greater potential to benefit more disadvantaged groups and thereby reduce health inequalities. With deprived groups already having a higher NCD burden (in 2008, worldwide age standardised mortality rates from NCDs were almost twice as high for lower income groups when compared to higher income groups), there is an urgent need to further explore this important issue relating to the major NCD risk factor, diet [2,21].
Oldroyd and colleagues previously examined the differential effects of healthy eating interventions by relative social disadvantage. In their small number of included studies they found limited evidence of greater impact in less disadvantaged groups. This may be due to their chosen time frame (1990–2007) and the limited databases searched (MEDLINE and CINAHL).
Our aim was to update and expand upon Oldroyd and colleagues’ review. In order to identify interventions which may reduce inequalities in healthy eating, we undertook a systematic review of interventions (and modelling studies) to promote healthy eating in general populations, to determine whether impacts differ by SEP.
We conducted a systematic review with a combination of graphical and narrative synthesis of published literature. We followed best practice guidance as detailed by the PRISMA-Equity 2012 Extension for systematic reviews with a focus on health equity. This tool has been described as a method to improve both the reporting and conduct of equity focused systematic reviews (provided in the additional information – Additional file 1).
In order to identify all relevant studies, a pre-piloted search strategy was used to search five bibliographic databases (MEDLINE, Psycinfo, SCI, SSCI and SCOPUS). An example of the search strategy used is provided in the additional information (Additional file 2). In addition, we screened titles from the reference sections of systematic reviews in the Campbell library, CENTRAL, DARE and EPPI. Colleagues and experts from key organisations working in public health policy were also contacted for any additional data sources. The reference lists of all included studies (including relevant systematic reviews that were identified) were scrutinised for other potentially eligible studies.
Study selection and inclusion criteria
We included studies of any design that assessed the effects of interventions to promote healthy eating (reduced intake of salt, sugar, trans-fats, saturated fat, total fat, or total calories, or increased consumption of fruit, vegetables and wholegrain) targeted at healthy populations that reported quantitative outcomes by a measure of SEP. Only studies published since 1980 in the English language were considered. Upon fulfilling these criteria, studies were assessed using a PICOS framework (Participants, Interventions, Comparators, Outcomes and Study design). This is summarised in Table 1.
One reviewer (RMcG) screened titles, removed duplicates and selected potentially relevant abstracts. Then two reviewers (RMcG & EA) independently examined all the abstracts for eligibility. All articles deemed potentially eligible were retrieved in full text. The full text was also retrieved for any abstracts where a decision could not be made based on the information given. Full text articles were then screened independently by the two reviewers (RMcG & EA). Disagreements on eligibility decisions were resolved by consensus or by recourse to a senior member of the review team (SC).
Data extraction and management
Data from all included studies were extracted by one reviewer using pre-designed and piloted forms. The extracted data was then checked independently by a second reviewer to ensure all the correct information was recorded. Extracted data included: study design, aims, methodological quality, setting, participants, and outcomes related to the review objectives. Extracted data were compared for accuracy and completeness. Where more information was required from an identified article, the authors were contacted where possible.
The measurement of SEP within the intervention was carefully noted and included: education level, level of household income, occupational status and ethnicity, as determined by the authors [24,25]. Ethnicity was only included as a measure of SEP if the authors explicitly stated this was their SEP measurement proxy within the text. If not, we assumed that these were measures of cultural differences rather than socioeconomic inequalities and these were excluded from the main analysis . Interventions targeting only deprived groups were not included as these did not include a comparison of the effects of an intervention with higher SEP. All data extraction tables are included in the additional information (Additional file 3).
Assessment of methodological quality of included studies
The methodological quality of each included study was assessed independently by two reviewers using the criteria for the Community Guide of the US Task Force on Community Preventive Services and a six-item checklist of quality of execution adapted from the criteria developed for the Effective Public Health Practice Project [27,28]. Several of the included studies were modelling studies. Since these studies could not be assessed using the same quality assessment tool as the empirical studies, two modelling experts assessed the quality of these independently. Disagreements in methodological quality assessment for all the included studies were resolved by consensus or by recourse to a senior member of the review team.
We examined the evidence about the differential effects of interventions in terms of their underlying theories of change. Different frameworks have been proposed to categorise healthy eating interventions; however, no one framework has been used consistently. The “4 Ps” framework is a well-established framework used within the marketing field and translates well to a policy context. This framework includes interventions examining “Price”, “Place”, “Product” and “Promotion”. We have adapted and strengthened this framework in order to categorise policy interventions relating to healthy eating by their mechanisms of underlying change.
The six intervention categories used in the analysis are thus:
Price – fiscal measures such as taxes, subsidies, or economic incentives
Place – environmental measures in specific settings such as schools, work places (e.g. vending machines) or planning (e.g. location of supermarkets and fast food outlets) or community-based health education
Product – modification of food products to make them healthier/less harmful e.g. reformulation, additives, or elimination of a specific nutrient
Prescriptive – restrictions on advertising/marketing through controls or bans, labelling, recommendations or guidelines
Promotion – mass media public information campaigns
Person –Individual-based information and education (e.g. cooking lessons, tailored nutritional education/counselling, or nutrition education in the school curriculum).
Socioeconomic inequalities in impact
For each of the included interventions, if the outcome was split by more than one socioeconomic proxy measure, we took the quantitative effect on inequalities from the stratified results that best represented SEP [24,25].
When calculating the effect on inequalities, we examined the primary outcome of interest for each intervention as identified by the study author. If a change in dietary intake was given this was the primary measure that was used. If not, some other secondary outcomes were acceptable (see Table 1). We compared the lowest group with the highest group in the SEP classification, and used the measures of significance reported by the authors (e.g. p values, confidence intervals, standard deviations, standard error of measurement) to assess the significance of any differential effects of interventions by SEP. When the results were stratified by age, gender or intervention site, the results referring to the largest subsample were used. Where information was given at different time points, the longest follow up period was examined.
The effect on inequalities was classified as follows:
Intervention likely to reduce inequalities: the intervention preferentially improved healthy eating outcomes in people of lower SEP
Intervention likely to widen inequalities: the intervention preferentially improved healthy eating outcomes in people of higher SEP
Intervention which had no preferential impact by SEP (this also includes interventions where there was an overall benefit but where there was no effect on healthy eating outcomes for any SEP sub-group).
We aspired to undertake a meta-analysis of the results. However, the studies identified were heterogeneous, addressing different research questions, with diverse theoretical underpinnings, study designs and study outcomes. Given the considerable heterogeneity of the studies, undertaking a meta-analysis was not deemed appropriate. The results were therefore synthesised using a combination of graphical and narrative methods, including the use of the Harvest plot, which is a useful graphical method for synthesising and displaying evidence about the differential effects of population-level interventions. Within the Harvest plot, each intervention was represented as a single bar in one of three categories: those that were more effective in more disadvantaged groups (reduce), those that had the same effect in all groups (no preferential impact by SEP), and those that were less effective in disadvantaged groups (widen) (Figure 1).
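Purely as an illustration of the classification rule set out above (this is not the authors' analysis code, and the inputs are hypothetical), the decision logic can be sketched as follows, assuming effects are coded so that larger values mean greater improvement in the healthy eating outcome:

```python
# Illustrative sketch of the inequality classification rule; all inputs are
# hypothetical and do not come from the included studies.

def classify(effect_low_sep, effect_high_sep, difference_significant):
    """Classify an intervention's likely impact on dietary inequalities."""
    if not difference_significant:
        return "no preferential impact by SEP"
    if effect_low_sep > effect_high_sep:
        return "likely to reduce inequalities"
    return "likely to widen inequalities"

print(classify(effect_low_sep=0.30, effect_high_sep=0.10, difference_significant=True))
print(classify(effect_low_sep=0.05, effect_high_sep=0.25, difference_significant=True))
print(classify(effect_low_sep=0.20, effect_high_sep=0.18, difference_significant=False))
```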
We conducted a sensitivity analysis to determine if the key results would change if we had been more or less selective in our study screening process.
First, we included only the studies which gave indicators of statistical significance concerning the quantitative data split by SEP. Secondly, we also included those studies which split their findings quantitatively by ethnicity alone (with no mention of SEP), as this represents a crude proxy measure of SEP (see additional information - Additional file 4).
We identified 31,887 articles in our search. Following abstract and full text screening, 36 studies met the inclusion criteria (Figure 2). These included quantitative results presented by a measure of SEP for 47 interventions. A summary of all included studies is listed in the additional information (Additional file 5). Data extraction tables for all included studies and studies included in the sensitivity analysis are provided in the additional information (Additional file 3).
Impact on socioeconomic inequalities by “P” category
The impact of interventions categorised by “P” is displayed in the Harvest plot in Figure 1 (adapted from Thomas et al.). The Harvest plot shows each intervention illustrated as an individual bar. The height of the bar depicts the quality of the study. Modelling studies were distinguished by using patterned bars.
The studies are then grouped by outcome regarding socioeconomic differential effects (reduced, no preferential impact by SEP and widened). Interventions in the “Price” category appeared most likely to reduce inequalities while “Person” interventions were the most likely to widen inequalities (Figure 1).
Price interventions (taxes, subsidies, or economic incentives)
Eighteen "Price" interventions were identified. These are summarised in Table 2. The majority were conducted in Europe [34-39], with five in North America [40,41] and one in Australia. Of these, nine were taxes on high energy density foods [34,36,37,41,42], three were subsidies on fruit and vegetables [35,40] and six were combinations of taxes and subsidies [37-39]. Eight studies used modelling methodologies [34,35,37-42].
In total, ten of the eighteen "Price" interventions were likely to reduce inequalities by preferentially improving healthy eating outcomes in lower SEPs [34-39]. All six studies reporting interventions which consisted of a combination of taxes and subsidies consistently had a greater impact on lower SEP [37-39]. Two interventions (one subsidy on fruit and vegetables and one tax on high energy density foods) had a greater impact on higher SEP, and there was no differential effect demonstrated in the remaining six studies in the "Price" category [35,37,41].
Place interventions (environmental measures in specific settings)
Six "Place" interventions were identified. These are summarised in Table 3. Three were carried out in North America [43-45], two in Europe [46,47] and one in New Zealand. Of these, two were school-based interventions [46,48], two were workplace interventions [44,45], one was a church-based intervention and one an area-based intervention.
None of the six identified "Place" interventions were judged as likely to widen inequalities, and four were judged as likely to reduce inequalities (both workplace interventions [44,45], one school-based intervention and one area-based intervention).
Product interventions (modification of food products to make them healthier/less harmful)
Only one "Product" intervention was identified. This intervention is summarised in Table 3. It was a product reformulation (salt reduction) intervention conducted in the UK, in which the authors identified no differential impact by socioeconomic gradient.
Prescriptive interventions (restrictions on advertising/marketing)
No “Prescriptive” interventions were identified.
Promotion interventions (mass media public information campaigns)
Four "Promotion" interventions were identified. These are summarised in Table 3. Three of these were conducted in Europe [35,50,51] and one in the USA. All four examined the effectiveness of national "Five a day" health information campaigns. Two studies used modelling methodologies [35,50].
"Promotion" interventions showed mixed results. Two interventions had no preferential impact by SEP [35,52], while one intervention was judged as likely to reduce inequalities and the other as likely to widen inequalities.
Person interventions (Individual-based information and education)
Eighteen "Person" interventions were identified. These are summarised in Table 4. The majority of these were conducted in Europe [53-61], eight in the USA [62-68] and one in Australia. Of these, fourteen were health education interventions [53-56,58-60,62,63,65,67-69] and four were dietary counselling interventions [57,61,64,66].
"Person" interventions were judged as most likely to widen inequalities, with eight of the eighteen interventions having a greater impact in higher SEPs [57,59-61,64-66,68]. All four of the dietary counselling interventions appeared likely to widen inequalities.
When the screening process was made more selective, the general trends seen in the main Harvest plot were essentially unchanged. "Price" interventions remained the most likely to reduce inequalities; however, when only interventions reporting statistical significance values were included, "Person" interventions showed mixed results with a more even distribution of effects by SEP. There were no differences observed for the other "P" categories. The addition of studies that split their findings by ethnicity alone [70-77] (making the selection process less selective) had no implications for the main findings (see additional information – Additional file 4). Six of these studies were from the USA, with one from New Zealand and one from the Netherlands.
Interventions categorised by the “6Ps” modified version of the “marketing mix” framework demonstrated differential effects on healthy eating outcomes by socioeconomic position (SEP). “Upstream” interventions categorised as “Price” appeared most likely to decrease health inequalities, while “downstream” “Person” interventions appeared most likely to increase inequalities (this association weakened when only studies which reported significance values pertaining to SEP differential effectiveness were included). No “Prescriptive” interventions were found and only one intervention categorised as “Product” was included. “Place” interventions showed mixed results, although none appeared likely to widen inequalities. However, the vast majority of full text articles which were assessed for eligibility did not explore differential effects by SEP.
Comparison with other research
This research builds on an earlier systematic review by Oldroyd and colleagues, who examined the effectiveness of nutrition interventions on dietary outcomes by relative social disadvantage. They concluded that nutrition interventions have differential effects, but could not develop this further because of the small number of studies identified. Our review included 36 studies, allowing us to expand on those conclusions. Magnée et al. recently used a systematic approach to explore the socioeconomic differential impact of lifestyle interventions (including diet) related to obesity prevention in a Dutch setting. They too reported that "downstream" interventions targeting individuals might increase inequalities, but their findings were limited by a lack of studies examining socioeconomic differential effects.
Why might "Price" and "Person" interventions affect inequalities differently? White et al. suggest that how an intervention is delivered is crucial. Structural, universally delivered "upstream" interventions create a healthier environment and largely circumvent the need for voluntary behaviour change, and may therefore reduce inequalities. Frieden depicts this difference as a "Health Impact Pyramid". The base of the pyramid consists of interventions addressing socio-economic determinants of health, which have the greatest potential population impact. Conversely, the top of the pyramid depicts health education and counselling, which depend on higher levels of individual effort and hence have the lowest potential population impact. Cappuccio and colleagues likewise found that more "upstream" population-wide regulation and marketing controls had the most potential to reduce dietary salt when compared with more "downstream" approaches such as food labelling.
Our review supports both White and Frieden [18,79]. Interventions in the "Price" category predominantly included taxes on unhealthy foods and subsidies for healthier foods; both are population-level, structural interventions which require no individual agency. This category was the most likely to reduce inequalities. Similar observations have been made for tobacco control: Thomas and colleagues found that population-level tobacco control interventions, such as increasing the price of tobacco products, had a greater potential to benefit more disadvantaged groups and thereby reduce health inequalities.
"Person" interventions appeared most likely to widen inequalities. This category included health education and dietary counselling, and the finding may reflect the dependence of such interventions on an individual choosing to behave differently and then sustaining that change. Other studies support this in highlighting that downstream interventions rarely reduce inequalities and may widen them. Whitlock and colleagues reviewed the effectiveness of counselling interventions in public health and highlighted the limited effectiveness of these types of interventions across the socioeconomic spectrum. Furthermore, Lorenc et al. explicitly concluded that "downstream" interventions can actually worsen health inequalities.
It is striking that we did not find any studies investigating the effects of "Prescriptive" interventions by SEP, and only one "Product" intervention presenting differential results (which showed no preferential impact by SEP). Although more "Place" interventions were identified (n = 6), they were conducted in a variety of different settings (2 workplace, 2 school-based, 1 church-based and 1 area-based intervention). None of these interventions were judged as likely to widen inequalities; however, more evidence of differential impact is required before conclusions can be reached concerning this category.
The potential differential effectiveness of mass media (‘five a day’) campaigns within the “Promotion” category was unclear, as only four studies were found and these showed mixed results.
The systematic approach taken is a considerable strength of this research, and the use of two independent reviewers throughout further strengthened our methodology.
The use of the adapted marketing 4 “Ps” approach provides a simple conceptual framework to categorise and evaluate policy interventions, which may have otherwise been difficult to group.
The adaptation of the Harvest plot using the "6Ps" modification of the "4 Ps" marketing mix is a novel approach. Ogilvie and colleagues suggest adapting the Harvest plot to display the differential effectiveness of policy interventions. Our "6P" adaptation highlights the effectiveness of the Harvest plot in displaying heterogeneous results.
Conducting a sensitivity analysis confirmed the general trends seen in the main Harvest plot (Figure 1), with "Price" interventions appearing likely to reduce inequalities. "Person" interventions showed more mixed results; however, there remained a predominance of these interventions falling within the widen category.
The evidence base revealed a striking lack of studies quantifying the differential effectiveness of dietary interventions by SEP. We only included interventions where quantitative results by SEP were presented by the authors, so differential effects in other studies may have gone unreported. We also restricted our search to studies published in English, which may have meant we failed to identify potentially relevant articles published in other languages.
Where possible, we used statistical significance to identify differential effects of interventions. In a number of studies, significance levels were not presented by the study authors (and could not be calculated) and therefore the magnitude of the results was used to determine differential effects. It cannot be inferred that these effects were or were not statistically significant. We therefore conducted a sensitivity analysis which was generally reassuring, while highlighting the lack of available significance levels in the “Person” intervention studies and therefore the need for caution when interpreting these results.
Although the use of the adapted marketing 4 “Ps” approach provides a simple conceptual framework, it should be recognised that a number of the interventions were multicomponent in nature. We categorised interventions based on the underlying theories about how the interventions might have worked to bring about change in healthy eating outcomes. This involved a subjective element, even when using the extended “6Ps” study categorisation. This study categorisation framework could mask the potential differential effectiveness of multicomponent interventions which have substantial elements of two or more “P” categories. Indeed, evidence from tobacco control suggests that comprehensive strategies involving multiple interventions at multiple levels may be more powerful than narrower approaches [84,85].
We did not look at age and sex differences in detail as this was not the focus of this particular paper. However, it represents a potentially important topic for future analyses. Furthermore, the settings in which these interventions are introduced may affect their impact. Low SEP in one setting will differ from low SEP in another setting; likewise with high SEP.
The majority of modelling studies fell in the "Price" category and had weak quality scores, reflecting the independent assessment of two modelling experts. This is far from ideal and was clearly very dependent on the assumptions made. While policies to implement price interventions (taxation/subsidies) are difficult to study at a population level, the methods involved in modelling are quite different from those of an intervention study, and caution should be used when synthesising these different study types. There is an urgent need for the development of a quality assessment tool for modelling studies comparable to those used for empirical studies [27,28].
The majority of interventions identified did not present differential results by SEP.
In order to increase knowledge in this area the evaluation of interventions to promote healthy eating should routinely include an assessment of differential effects by SEP. This would enrich the data available to allow for future systematic reviews of this nature to be conducted and to add to the findings presented here [14,15]. Future research should focus in particular upon investigating the differential impact of modification of food products and restrictions on advertising/marketing through controls or bans (“Prescriptive” and “Product” interventions).
Smoking and healthy eating interventions have been assessed for differential effects by SEP. There is a need for comparable studies in other areas such as alcohol and physical activity in order to examine differential impact. In addition, we excluded studies aimed solely at lower SEPs. The examination of these studies is warranted as this will add to our understanding of interventions that may be effective within this sub-group.
In order to further investigate the potential impact of these differential effects, the findings of this review could be tested in epidemiological models for different populations. This would allow quantitative estimations of the socioeconomic effects on disease and mortality burdens in different policy intervention scenarios.
Preventative interventions are more cost effective than treatment. However, little is known about the relative cost-effectiveness of different types of preventative interventions. If an intervention affects different groups differentially, then it is sub-optimally effective in some groups and cannot achieve its full potential; its cost-effectiveness will also be sub-optimal. This review suggests that interventions aimed at the individual may be less cost-effective, especially among poorer groups, since greater effort and resources may be needed to achieve effectiveness similar to that in more affluent groups. However, further research in this area is required.
Since the majority of our included “Price” interventions were modelling studies, there is an urgent need to investigate the feasibility and impact of such taxes and subsidies using additional research methods, e.g. RCTs.
Finally, none of the current studies address the more fundamental issue of the inequitable social and economic environments which create health inequalities in the first place.
Policy makers should be aware that some healthy eating interventions targeted at healthy populations may have greater benefits for individuals of higher SEP (and may subsequently increase inequalities), notably personalised nutritional education and dietary counselling interventions. On the other hand, a combination of taxes and subsidies may preferentially improve healthy eating outcomes for people of lower SEP (potentially reducing inequalities). As noted, the majority of identified studies did not explore differential effects by SEP. When considering implementing a food policy at any level, those involved should consider its potential differential impact on health inequalities.
World Health Organisation. World Health Statistics 2012. Geneva: WHO; 2012.
Lim SS, Vos T, Flaxman AD, Danaei G, Shibuya K, Adair-Rohani H, et al. A comparative risk assessment of burden of disease and injury attributable to 67 risk factors and risk factor clusters in 21 regions, 1990–2010: a systematic analysis for the Global Burden of Disease Study 2010. Lancet. 2012;380:2224–60.
Kaplan GA, Keil JE. Socioeconomic factors and cardiovascular disease: a review of the literature. Circulation. 1993;88:1973–98.
WHO. Diet. http://www.who.int/dietphysicalactivity/diet/en/(2014). Accessed 01 September 2014.
Nelson M, Erens B, Bates B, Church S, Boshier T. The Low Income Diet and Nutrition Survey. Food Standards Agency 2007. http://tna.europarchive.org/20110116113217/http://www.food.gov.uk/science/dietarysurveys/lidnsbranch/ Accessed 01 September 2014.
Drewnowski A, Monsivais P, Maillot M, Darmon N. Low-energy-density diets are associated with higher diet quality and higher diet costs in French adults. J Am Diet Assoc. 2007;107:1028–32.
Waterlander WE, de Haas WE, van Amstel I, Schuit AJ, Twisk JWR, Visser M, et al. Energy density, energy costs and income - how are they related? Public Health Nutr. 2010;13:1599–608.
Ball K, Timperio A, Crawford D. Neighbourhood socioeconomic inequalities in food access and affordability. Health Place. 2009;15:578–85.
Cummins S, Macintyre S. “Food deserts”—evidence and assumption in health policy making. BMJ [Br Med J]. 2002;325:436–8.
Parmenter K, Waller J, Wardle J. Demographic variation in nutrition knowledge in England. Health Educ Res. 2000;15:163–74.
Cohen S, Janicki-Deverts D, Chen E, Matthews KA. Childhood socioeconomic status and adult health. Ann N Y Acad Sci. 2010;1186:37–55.
Anzman SL, Rollins BY, Birch LL. Parental influence on children’s early eating environments and obesity risk: implications for prevention. Int J Obes. 2010;34:1116–24.
NICE. Prevention of Cardiovascular Disease. Manchester: NICE Public Health Guidance 25; 2010.
Tugwell P, Petticrew M, Kristjansson E, Welch V, Ueffing E, Waters E, et al. Assessing equity in systematic reviews: realising the recommendations of the Commission on Social Determinants of Health. BMJ. 2010;341:c4739–9.
Welch V, Tugwell P, Petticrew M, de Montigny J, Ueffing E, Kristjansson B, et al. How effects on health equity are assessed in systematic reviews of interventions. In Cochrane Database of Systematic Reviews. New Jersey: John Wiley & Sons, Ltd; 1996.
Capewell S, Graham H. Will cardiovascular disease prevention widen health inequalities? PLoS Med. 2010;7, e1000320.
Thomas S, Fayter D, Misso K, Ogilvie D, Petticrew M, Sowden A, et al. Population tobacco control interventions and their effects on social inequalities in smoking: systematic review. Tob Control. 2008;17:230–7.
White M, Adams J, Heywood P. How and why do interventions that increase health overall widen inequalities within populations? In Social inequality and public health. Edited by Babones SJ. Bristol: Policy Press; 2009: 64–81.
Macintyre S. Inequalities in health in Scotland: What are they and what can we do about them? Glasgow: MRC Social and Public Health Sciences Unit; 2007.
Nilunger L, Diderichsen F, Burström B, Ostlin P. Using risk analysis in Health Impact Assessment: the impact of different relative risks for men and women in different socio-economic groups. Health Policy (Amsterdam, Netherlands). 2004;67:215–24.
Chen E, Miller GE. Socioeconomic status and health: mediating and moderating factors. Annu Rev Clin Psychol. 2013;9:723–49.
Oldroyd J, Burns C, Lucas P, Haikerwal A, Waters E. The effectiveness of nutrition interventions on dietary outcomes by relative social disadvantage: a systematic review. J Epidemiol Community Health. 2008;62:573–9.
Welch V, Petticrew M, Tugwell P, Moher D, O’Neill J, Waters E, et al. PRISMA-equity 2012 extension: reporting guidelines for systematic reviews with a focus on health equity. PLoS Med. 2012;9:001333.
Galobardes B, Shaw M, Lawlor DA, Lynch JW. Indicators of socioeconomic position (part 1). J Epidemiol Community Health. 2006;60:7–12.
Galobardes B, Shaw M, Lawlor DA, Lynch JW, Davey SG. Indicators of socioeconomic position (part 2). J Epidemiol Community Health. 2006;60:95–101.
Liu JJ, Davidson E, Bhopal RS, White M, Johnson MRD, Netto G, et al. Adapting health promotion interventions to meet the needs of ethnic minority groups: mixed-methods evidence synthesis. Health Technol Assess. 2012;16(44):1–469.
Briss PA, Zaza S, Pappaioanou M, Fielding J, Wright-De Agüero L, Truman BI, et al. Developing an evidence-based guide to community preventive services—methods. Am J Prev Med. 2000;18:35–43.
Thomas H. Quality Assessment Tool for Quantitative Studies. Toronto: Effective Public Health Practice Project McMaster University; 2003.
Whitehead M. A typology of actions to tackle social inequalities in health. J Epidemiol Community Health. 2007;61:473–8.
Hawkes C, Jewell J, Allen K. A food policy package for healthy diets and the prevention of obesity and diet-related non-communicable diseases: the NOURISHING framework. Obes Rev. 2013;14 Suppl 2:159–68.
Grier S, Bryant CA. Social marketing in public health. Annu Rev Public Health. 2005;26:319–39.
Ogilvie D, Fayter D, Petticrew M, Sowden A, Thomas S, Whitehead M, et al. The harvest plot: a method for synthesising evidence about the differential effects of interventions. BMC Med Res Methodol. 2008;8:8.
Winker MA. Measuring race and ethnicity: why and how? JAMA. 2004;292:1612–4.
Allais O, Bertail P, Nichèle V. The effects of a fat tax on french households’ purchases: a nutritional approach. Am J Agric Econ. 2010;92:228–45.
Dallongeville J, Dauchet L, Mouzon O, Réquillart V, Soler L-G. Increasing fruit and vegetable consumption: a cost-effectiveness analysis of public policies. Eur J Pub Health. 2011;21:69–73.
Nederkoorn C, Havermans RC, Giesen JCAH, Jansen A. High tax on high energy dense foods and its effects on the purchase of calories in a supermarket. An experiment Appetite. 2011;56:760–5.
Nnoaham KE, Sacks G, Rayner M, Mytton O, Gray A. Modelling income group differences in the health and economic impacts of targeted food taxes and subsidies. Int J Epidemiol. 2009;38:1324–33.
Smed S, Jensen JD, Denver S. Socio-economic characteristics and the effect of taxation as a health policy instrument. Food Policy. 2007;32:624–39.
Tiffin R, Salois M. Inequalities in diet and nutrition. Proc Nutr Soc. 2012;71:105–11.
Cash SB, Sunding DL, Zilberman D. Fat taxes and thin subsidies: prices, diet, and health outcomes. Food Econ - Acta Agriculturae Scandinavica, Section C. 2005;2:167–74.
Finkelstein EA, Zhen C. Impact of targeted beverage taxes on higher- and lower-income households. Arch Intern Med. 2010;170:2028–34.
Sharma A, Hauck K, Hollingsworth B, Siciliani L. The effect of taxing sugar-sweetened beverages across different income groups: a censored demand approach. J Econ Lit. in press.
Campbell MK, Demark-Wahnefried W, Symons M, Kalsbeek WD, Dodds J, Cowan A, et al. Fruit and vegetable consumption and prevention of cancer: the Black Churches United for Better Health project. Am J Public Health. 1999;89:1390–6.
Sorensen G, Linnan L, Hunt MK. Worksite-based research and initiatives to increase fruit and vegetable consumption. Prev Med. 2004;39:S94–100.
Sorensen G, Stoddard A, Hunt MK, Hebert JR, Ockene JK, Avrunin JS, et al. The effects of a health promotion-health protection intervention on behavior change: the WellWorks Study. Am J Public Health. 1998;88:1685–90.
Hughes RJ, Edwards KL, Clarke GP, Evans CEL, Cade JE, Ransley JK. Childhood consumption of fruit and vegetables across England: a study of 2306 6–7-year-olds in 2007. Br J Nutr. 2012;108:733–42.
Wendel-Vos GCW, Dutman AE, Verschuren WMM, Ronckers ET, Ament A, van Assema P, et al. Lifestyle factors of a five-year community-intervention program: the Hartslag Limburg intervention. Am J Prev Med. 2009;37:50–6.
Rush E, Reed P, McLennan S, Coppinger T, Simmons D, Graham D. A school-based obesity control programme: project energize. Two-year outcomes. Br J Nutr. 2012;107:581–7.
Millett C, Laverty AA, Stylianou N, Bibbins-Domingo K, Pape UJ. Impacts of a national strategy to reduce population salt intake in England: serial cross sectional study. PLoS ONE. 2012;7:e29836.
Capacci S, Mazzocchi M. Five-a-day, a price to pay: an evaluation of the UK program impact accounting for market forces. J Health Econ. 2011;30:87–98.
Estaquio C, Druesne-Pecollo N, Latino-Martel P, Dauchet L, Hercberg S, Bertrais S. Socioeconomic differences in fruit and vegetable consumption among middle-aged French adults: adherence to the 5 A Day recommendation. J Am Diet Assoc. 2008;108:2021–30.
Stables GJ, Subar AF, Patterson BH, Dodd K, Heimendinger J, Van Duyn MAS, et al. Changes in vegetable and fruit consumption and awareness among US adults: results of the 1991 and 1997 5 A Day for Better Health Program surveys. J Am Diet Assoc. 2002;102:809–17.
Bürgi F, Niederer I, Schindler C, Bodenmann P, Marques-Vidal P, Kriemler S, et al. Effect of a lifestyle intervention on adiposity and fitness in socially disadvantaged subgroups of preschoolers: a cluster-randomized trial (Ballabeina). Prev Med. 2012;54:335–40.
Curtis PJ, Adamson AJ, Mathers JC. Effects on nutrient intake of a family-based intervention to promote increased consumption of low-fat starchy foods through education, cooking skills and personalised goal setting: the Family Food and Health Project. Br J Nutr. 2012;107:1833–44.
Friel S, Kelleher C, Campbell P, Nolan G. Evaluation of the Nutrition Education at Primary School (NEAPS) programme. Public Health Nutr. 1999;2:549–55.
Haerens L, Deforche B, Maes L, Brug J, Vandelanotte C, De Bourdeaudhuij I. A computer-tailored dietary fat intake intervention for adolescents: results of a randomized controlled trial. Ann Behav Med. 2007;34:253–62.
Holme I, Hjermann I, Helgeland A, Leren P. The Oslo study: diet and antismoking advice. Additional results from a 5-year primary preventive trial in middle-aged men. Prev Med. 1985;14:279–92.
Jouret B, Ahluwalia N, Dupuy M, Cristini C, Nègre-Pages L, Grandjean H, et al. Prevention of overweight in preschool children: results of kindergarten-based interventions. Int J Obes. 2009;33:1075–83.
Lowe CF, Horne PJ, Tapper K, Bowdery M, Egerton C. Effects of a peer modelling and rewards-based intervention to increase fruit and vegetable consumption in children. Eur J Clin Nutr. 2004;58:510–22.
Plachta-Danielzik S, Pust S, Asbeck I, Czerwinski-Mast M, Langnase K, Fischer C, et al. Four-year follow-up of school-based intervention on overweight children: the KOPS study. Obesity. 2007;15:3159–69.
Toft U, Jakobsen M, Aadahl M, Pisinger C, Jørgensen T. Does a population-based multi-factorial lifestyle intervention increase social inequality in dietary habits? The Inter99 study. Prev Med. 2012;54:88–93.
Brownson RC, Smith CA, Pratt M, Mack NE, Jackson-Thompson J, Dean CG, et al. Preventing cardiovascular disease through community-based risk reduction: the Bootheel Heart Health Project. Am J Public Health. 1996;86:206–13.
Carcaise-Edinboro P, McClish D, Kracen AC, Bowen D, Fries E. Fruit and vegetable dietary behavior in response to a low-intensity dietary intervention: the rural physician cancer prevention project. J Rural Health. 2008;24:299–305.
Connett JE, Stamler J. Responses of black and white males to the special intervention program of the Multiple Risk Factor Intervention Trial. Am Heart J. 1984;108:839–49.
Havas S, Anliker J, Damron D, Langenberg P, Ballesteros M, Feldman R. Final results of the Maryland WIC 5-A-Day Promotion Program. Am J Public Health. 1998;88:1161–7.
Havas S, Anliker J, Greenberg D, Block G, Block T, Blik C, et al. Final results of the Maryland WIC food for life program. Prev Med. 2003;37:406–16.
Jeffery RW, French SA. Preventing weight gain in adults: design, methods and one year results from the Pound of Prevention study. Int J Obes Relat Metab Disord. 1997;21:457–64.
Reynolds KD, Franklin FA, Binkley D, Raczynski JM, Harrington KF, Kirk KA, et al. Increasing the fruit and vegetable consumption of fourth-graders: results from the high 5 project. Prev Med. 2000;30:309–19.
Smith AM, Owen N, Baghurst KI. Influence of socioeconomic status on the effectiveness of dietary counselling in healthy volunteers. J Nutr Educ. 1997;29:27–35.
Blakely T, Ni Mhurchu C, Jiang Y, Matoe L, Funaki-Tahifote M, Eyles HC, et al. Do effects of price discounts and nutrition education on food purchases vary by ethnicity, income and education? Results from a randomised, controlled trial. J Epidemiol Community Health. 2011;65:902–8.
Coates RJ, Bowen DJ, Kristal AR, Feng Z, Oberman A, Hall WD, et al. The women’s health trial feasibility study in minority populations: changes in dietary intakes. Am J Epidemiol. 1999;149:1104–12.
Frenn M, Malin S, Bansal N, Delgado M, Greer Y, Havice M, et al. Addressing health disparities in middle school students’ nutrition and exercise. J Community Health Nurs. 2003;20:1–14.
Kristal AR, Shattuck AL, Patterson RE. Differences in fat-related dietary patterns between black, Hispanic and white women: results from the Women’s Health Trial Feasibility Study in Minority Populations. Public Health Nutr. 1999;2:253–62.
Reinaerts E, Nooijer J, Candel M, de Vries N. Increasing children’s fruit and vegetable consumption: distribution or a multicomponent programme? Public Health Nutr. 2007;10:939–47.
Webber LS, Osganian SK, Feldman HA, Wu M, McKenzie TL, Nichaman M, et al. Cardiovascular risk factors among children after a 2½-year intervention—the CATCH study. Prev Med. 1996;25:432–41.
Whetstone LM, Kolasa KM, Collier DN. Participation in community-originated interventions is associated with positive changes in weight status and health behaviors in youth. Am J Health Promot. 2012;27:10–6.
Willi SM, Hirst K, Jago R, Buse J, Kaufman F, El Ghormli L, et al. Cardiovascular risk factors in multi-ethnic middle school students: the HEALTHY primary prevention trial. Pediatr Obes. 2012;7:230–9.
Magnée T, Burdorf A, Brug J, Kremers SPM, Oenema A, van Assema P, et al. Equity-specific effects of 26 Dutch obesity-related lifestyle interventions. Am J Prev Med. 2013;44:e61–70.
Frieden TR. A framework for public health action: the health impact pyramid. Am J Public Health. 2010;100:590–5.
Cappuccio FP, Capewell S, Lincoln P, McPherson K. Policy options to reduce population salt intake. BMJ. 2011;343:d4995.
Whitlock EP, Orleans CT, Pender N, Allan J. Evaluating primary care behavioral counseling interventions: an evidence-based approach. Am J Prev Med. 2002;22:267–84.
Lorenc T, Petticrew M, Welch V, Tugwell P. What types of interventions generate inequalities? Evidence from systematic reviews. J Epidemiol Community Health. 2013;67:190–3.
Oliver A, Nutbeam D. Addressing health inequalities in the United Kingdom: a case study. J Public Health Med. 2003;25:281–7.
Green LW, Kreuter MW. Evidence hierarchies versus synergistic interventions. Am J Public Health. 2010;100:1824–5.
Mackay J. Implementing tobacco control policies. Br Med Bull. 2012;102:5–16.
Cecchini M, Sassi F, Lauer JA, Lee YY, Guajardo-Barron V, Chisholm D. Tackling of unhealthy diets, physical inactivity, and obesity: health effects and cost-effectiveness. Lancet. 2010;376:1775–84.
Whitehead M, Popay J. Swimming upstream? Taking action on the social determinants of health inequalities. Soc Sci Med. 2010;71:1234–6.
We would like to thank the National Institute for Health Research's School for Public Health Research (NIHR SPHR), which funded this research. This is a partnership between the Universities of Sheffield, Bristol, Cambridge and UCL; the London School of Hygiene and Tropical Medicine; the University of Exeter Medical School; the LiLaC collaboration between the Universities of Liverpool and Lancaster; and Fuse, the Centre for Translational Research in Public Health, a collaboration between Newcastle, Durham, Northumbria, Sunderland and Teesside Universities. We would also like to thank Sarah Mosedale, Beryl Stanley, Sian Thomas, David Ogilvie, Kathleen McAdam and Anne Dawson for their support and advice, and Ulla Toft, Gail Rees and Kylie Ball for providing additional information upon request.
Role of the sponsor
The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
The views expressed are those of the author(s) and not necessarily those of the NHS, the NIHR or the Department of Health.
The authors declare that they have no competing interests.
Both RMcG and EA drafted the study protocol, assessed the quality of the included articles, determined which articles were included/ excluded and conducted the synthesis of the included studies. Both EA and RMcG also wrote the manuscript with the help of SC. RMcG conducted the initial scoping search, drafted the search strategy and conducted the literature searches. LO, MMW and SC contributed to the search strategy. All authors listed contributed to the interpretation of the results and to the drafting and finalisation of the manuscript. All authors read and approved the final manuscript.
An erratum to this article is available at http://dx.doi.org/10.1186/s12889-015-2162-y.
Additional file 1:
PRISMA – Equity extension.
Additional file 2:
Search terms used in MEDLINE search.
Additional file 3:
Data extraction tables.
Additional file 4:
Additional file 5:
Summary of included studies (categorised using the 6Ps framework).
Rights and permissions
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
About this article
Cite this article
McGill, R., Anwar, E., Orton, L. et al. Are interventions to promote healthy eating equally effective for all? Systematic review of socioeconomic inequalities in impact. BMC Public Health 15, 457 (2015). https://doi.org/10.1186/s12889-015-1781-7
- Noncommunicable diseases
- Socioeconomic inequalities
- Healthy eating
Valuing natural habitats for enhancing coastal resilience: Wetlands reduce property damage from storm surge and sea level rise
by: Ali Mohammad Rezaie, Jarrod Loerzel, Celso M. Ferreira
Summarized by: Mckenna Dyjak
What data were used?: This study used coastal storm surge modeling and an economic analysis to estimate the monetary value of wetland ecosystem services (positive benefits of natural communities to people). One of the ecosystem services provided by wetlands is that they are great at controlling flooding; their flood protection value was estimated using the protected coastal wetlands and marshes near the Jacques Cousteau National Estuarine Research Reserve (JCNERR) in New Jersey.
Methods: Storm surge flooding was determined for historical storms (e.g., Hurricane Sandy in 2012) and future storms that account for habitat migration and sea level rise. Each storm had modelled flooding scenarios for both the presence and absence of the coastal wetland/marsh. The model also incorporated ways to account for monetary value of physical damage by using property values.
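The heart of the economic analysis is a comparison of modelled property damage with and without the wetland. The minimal sketch below shows that bookkeeping, including a per-square-kilometre value; the storm names, damage figures and wetland area are placeholders, not numbers from the study.

```python
# Sketch of the 'avoided damage' calculation: damages modelled without the
# wetland minus damages modelled with the wetland, per storm scenario.
# All numbers are placeholders for illustration only.

scenarios = {
    # storm scenario: (property damage without wetland, with wetland), in USD
    "historical storm": (95_000_000, 82_000_000),
    "future storm with sea level rise": (160_000_000, 134_000_000),
}

wetland_area_km2 = 200.0  # assumed area of protected wetland/marsh

for name, (damage_without, damage_with) in scenarios.items():
    avoided = damage_without - damage_with
    reduction_pct = 100 * avoided / damage_without
    value_per_km2 = avoided / wetland_area_km2
    print(f"{name}: avoided damage ${avoided:,.0f} "
          f"({reduction_pct:.1f}% reduction), "
          f"~${value_per_km2:,.0f} per km2 of wetland")
```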
Results: This study found that coastal wetlands and marshes can reduce flood depth/damage by 14% which can save up to $13.1 to $32.1 million in property damage costs. The results suggest that one square kilometer (~0.4 square miles) of natural coastal wetland habitats have a flood protection value of $7,000 to $138,000 under future conditions (Figure 1).
Why is this study important?: Natural coastal wetlands and marshes contribute many vital ecosystem services such as providing habitats for wildlife, helping protect against coastal erosion, and purifying water. Assigning a monetary value to these natural habitats for their flood protection can highlight another aspect of their importance and urge people to protect these important coastal communities. The results from this study can allow the public and private sectors to develop and practice sustainable methods to preserve the ecosystems.
The bigger picture: Storm events, such as hurricanes, are predicted to become more frequent and more severe due to climate change. As the oceans continue to warm (an estimated increase of 1-4 degrees Celsius in mean global temperatures by 2100) hurricanes are predicted to intensify in wind speed and precipitation. Storm surge is known to be the most dangerous aspect of hurricanes and causes deadly flooding. As sea levels rise and ocean water expands due to warming, storm surges will become more severe during major storm events. This study has shown that coastal wetlands and marshes are considered our “first line of defense” in these circumstances. We must take care of and protect our natural habitats because they provide us with many services that we are unaware and likely unappreciative of.
Citation: Rezaie AM, Loerzel J, Ferreira CM (2020) Valuing natural habitats for enhancing coastal resilience: Wetlands reduce property damage from storm surge and sea level rise.
The influence of collection method on paleoecological datasets: In-place versus surface-collected fossil samples in the Pennsylvanian Finis Shale, Texas, USA
Frank L. Forcino, Emily S. Stafford
Summarized by Mckenna Dyjak
What data were used?: Two different fossil collecting methods were compared using the Pennsylvanian marine invertebrate assemblages of the Finis Shale in Texas. In-place bulk-sediment methods and surface sampling methods were used to see how these different methods could influence taxonomic (groups of animals) samples.
Methods: The bulk-sediment sampling method involves removing a mass of sediment and later washing and sieving the material to retrieve the fossil samples; surface sampling is a simpler method in which the top layer of sediment is removed and the exposed fossils are collected by hand. The samples were collected in the Finis Shale in Texas at stratigraphically equivalent (layers of rock deposited at the same time) locations to ensure continuity in the two methods. The bulk-sediment and surface pick-up samples were analyzed for differences in composition and abundance of fossil species (i.e., paleocommunities) using PERMANOVA (a type of analysis used to test if samples differ significantly from each other).
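To give a feel for how such a comparison can be run, the sketch below computes Bray-Curtis dissimilarities between samples and applies a simple label-permutation test to the difference between mean between-group and within-group dissimilarity. This is a simplified stand-in for PERMANOVA (which uses a pseudo-F statistic), and the count data are invented.

```python
# Simplified permutation test comparing bulk-collected vs surface-collected
# samples, using Bray-Curtis dissimilarity. Invented counts; not a full PERMANOVA.
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)

# Rows = samples, columns = taxa (e.g. brachiopods, bivalves, gastropods, crinoids)
counts = np.array([
    [55, 20, 10, 5],   # bulk sample 1
    [60, 18, 12, 4],   # bulk sample 2
    [50, 25,  9, 6],   # bulk sample 3
    [30, 22, 35, 8],   # surface sample 1
    [28, 20, 40, 7],   # surface sample 2
    [33, 25, 30, 9],   # surface sample 3
], dtype=float)
groups = np.array([0, 0, 0, 1, 1, 1])  # 0 = bulk, 1 = surface

dist = squareform(pdist(counts, metric="braycurtis"))

def between_minus_within(labels):
    # Mean dissimilarity of between-group pairs minus that of within-group pairs.
    same = labels[:, None] == labels[None, :]
    upper = np.triu(np.ones_like(dist, dtype=bool), k=1)
    return dist[upper & ~same].mean() - dist[upper & same].mean()

observed = between_minus_within(groups)
perm_stats = np.array([between_minus_within(rng.permutation(groups))
                       for _ in range(9999)])
p_value = (np.sum(perm_stats >= observed) + 1) / (len(perm_stats) + 1)
print(f"observed statistic = {observed:.3f}, permutation p = {p_value:.4f}")
```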
Results: The study found that the bulk-collected samples differed from the surface-collected samples. The relative abundances of the major taxonomic groups (brachiopods and mollusks), as well as overall composition and distribution, varied considerably between the two collecting methods. For example, there was a higher relative abundance of brachiopods in the bulk-collected samples and a higher relative abundance of gastropods in the surface-collected samples.
Why is this study important?: Bulk-sediment sampling and surface sampling methods produce significantly different results, which would end up affecting the overall interpretation of the history of the site. The surface-collected fossils may be influenced by stratigraphic mixing (mixing of materials from different rock layers), collector bias (which can influence a fossil’s potential to be found and collected; for example, larger fossils are more likely to be collected), and destruction of fossils due to weathering. Bulk-sediment sampling will likely have a more accurate representation of the ancient community, because the fossils likely experienced the least amount of alteration during the process of the organism becoming a fossil (also known as taphonomy).
The bigger picture: The amount of things that have to go right in order for an organism to become a fossil is a lengthy list (read more about the fossilization process here). There are many biases that can contribute to the incompleteness of the fossil record such as environments that favor preservation (e.g., low oxygen), as well as poor preservation value of soft tissues, like skin. Scientists must do what they can in order to collect accurate data of the fossil record since there are already so many natural biases. Knowing which fossil collecting methods produce the most accurate results is important when advocating for the paleocommunity.
Citation: Forcino FL, Stafford ES (2020) The influence of collection method on paleoecological datasets: In-place versus surface-collected fossil samples in the Pennsylvanian Finis Shale, Texas, USA. PLoS ONE 15(2): e0228944. https://doi.org/10.1371/journal.pone.0228944
Organic carbon sequestration in sediments of subtropical Florida lakes
Matthew N. Waters, William F. Kenney, Mark Brenner, Benjamin C. Webster
Summarized by Mckenna Dyjak
What data were used? A broad range of Florida lakes was chosen based on size, nutrient concentrations (nitrogen and phosphorus), trophic state (amount of biological activity that takes place), and location. The lakes were surveyed using soft sediment samples to identify the best coring sites. After coring, the cores were dated and the organic carbon (OC) content and burial rates were calculated. Organic carbon can be stored in sediments and buried, which temporarily removes it from the atmosphere.
Methods: The sediment cores were taken using a piston corer commonly used to retrieve soft sediments. Each core was dated using ²¹⁰Pb which is a common radioactive isotope found in lake environments and can be used to date sediments up to 100 years. Radioactive isotopes can be used to date rocks and sediments based on their natural decay rate (half-life). The organic carbon content of the cores was measured using a Carlo-Erba NA-1500 Elemental Analyzer which is an instrument that can determine the total carbon present in a sediment sample. To calculate the organic carbon deposition rates, the accumulation of sediment rates were multiplied by the proportion of OC found in the sediment. A recent increase of eutrophication (high amount of nutrients present in lakes) needed to be taken into account when calculating the OC deposition rate, so the sediments were divided into pre-1950 and post-1950 deposits to depict the change in industrial activity and agriculture.
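The burial-rate arithmetic described above is simple: multiply the bulk sediment accumulation rate by the fraction of the sediment that is organic carbon, and compare pre- and post-1950 intervals. The sketch below does exactly that with placeholder numbers, not values from the Florida cores.

```python
# Sketch of the organic carbon (OC) burial-rate calculation:
# OC burial rate = bulk sediment accumulation rate x OC fraction of the sediment.
# Placeholder numbers for illustration only.

intervals = {
    # interval: (sediment accumulation rate in g/cm^2/yr, OC fraction of dry mass)
    "pre-1950":  (0.020, 0.15),
    "post-1950": (0.035, 0.13),
}

rates = {}
for name, (sed_rate, oc_fraction) in intervals.items():
    rates[name] = sed_rate * oc_fraction  # g OC / cm^2 / yr
    print(f"{name}: OC burial rate = {rates[name]:.4f} g OC/cm^2/yr")

increase_pct = 100 * (rates["post-1950"] - rates["pre-1950"]) / rates["pre-1950"]
print(f"post-1950 increase: {increase_pct:.0f}%")
```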
Results: The OC burial rate was highest in the shallower lakes and decreased as the depths increased (can be seen in Figure 1). This is different from the rates for temperate (mild temperatures) bodies of water, where OC burial rates decreased as the lakes got bigger. They found a 51% increase in OC burial rates in the post-1950 deposits which corresponds to the increase in eutrophication in the lakes.
Why is this study important? Cultural eutrophication is caused by an increase of nutrients in waterways such as phosphorus and nitrogen (commonly found in lawn fertilizers) which cause harmful algal blooms; these algal blooms remove oxygen from the water and can mess up the entire ecosystem. The lack of oxygen and harmful algal blooms can lead to habitat loss and loss of biodiversity. This study highlights the effects and severity of cultural eutrophication in Florida’s subtropical lakes.
The bigger picture: Managing carbon and removing it from the atmosphere (i.e., carbon sequestration) is an important aspect of climate mitigation. The carbon can be removed from the atmosphere and stored in places known as carbon sinks (natural environments that can absorb carbon dioxide from the atmosphere). This study shows that subtropical Florida lakes are effective carbon sinks for organic carbon that deserve to be protected from nutrient runoff that causes eutrophication.
Citation: Waters, M. N., Kenney, W. F., Brenner, M., and Webster, B. C. (2019). Organic carbon sequestration in sediments of subtropical Florida lakes. PLoS ONE 14(12), e0226273. doi: 10.1371/journal.pone.0226273
A Holocene Sediment Record of Phosphorus Accumulation in Shallow Lake Harris, Florida (USA) Offers New Perspectives on Recent Cultural Eutrophication
by: William F. Kenney, Mark Brenner, Jason H. Curtis, T. Elliott Arnold, Claire L. Schelske
Summarized by: Mckenna Dyjak
What data were used?: A 5.9 m sediment core was taken in Lake Harris, Florida using a piston corer (a technique used to take sediment samples, similar to how an apple is cored). Lake Harris is a subtropical, shallow, eutrophic body of water (rich with nutrients) located near Orlando, Florida.
Methods: The 5.9 m sediment core is long enough to provide the complete environmental history of Lake Harris. However, the core must be interpreted first. In order to do so, the core was first dated using the lead isotope ²¹⁰Pb and the carbon isotope ¹⁴C. The next steps involved using proxy data (preserved physical characteristics of the environment) to determine net primary productivity (the concentration and accumulation rates of organic matter), lake phosphorus enrichment (three forms of phosphorus), groundwater input (concentration and accumulation rates of carbonate material, like limestone), macrophyte abundance (e.g., sponge spicules), and phytoplankton abundance (e.g., diatoms).
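As a very rough illustration of why ²¹⁰Pb only reaches back about a century, the sketch below applies the simplest constant-initial-concentration style age equation, age = ln(A0/Az)/λ with λ = ln 2 / half-life (about 22.3 years for ²¹⁰Pb). Real chronologies (e.g. constant-rate-of-supply models) are more involved, and the activities below are invented.

```python
# Simple 210Pb age estimate from unsupported (excess) activity:
#   age = ln(A0 / Az) / lambda,  lambda = ln(2) / t_half,  t_half ~ 22.3 yr.
# Invented activities; real cores use more sophisticated models (e.g. CRS).
import math

T_HALF_PB210 = 22.3                       # years
DECAY_CONST = math.log(2) / T_HALF_PB210  # per year

surface_activity = 20.0  # assumed excess 210Pb activity at the core top (dpm/g)

depth_activity = {       # depth (cm): excess 210Pb activity (dpm/g), invented
    5: 12.0,
    15: 5.0,
    30: 1.0,
}

for depth_cm, activity in depth_activity.items():
    age_years = math.log(surface_activity / activity) / DECAY_CONST
    print(f"{depth_cm} cm: estimated age ~ {age_years:.0f} years")
```

With these invented numbers the deepest sample already comes out near 100 years old, which is why ²¹⁰Pb dating is limited to roughly the last century of deposition.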
Results: The study found that Lake Harris began to fill with water in the early Holocene (~10,680 calendar years before the present) and transitioned to a wetter climate in the middle Holocene. The transition is indicated by a change in carbonate to organic sediments; a higher amount of organic sediments would suggest an increase in rainfall needed to support the plant life that would become the organic matter. A low sedimentation rate indicates that the lake was experiencing oligotrophication (depletion in nutrients) through the Holocene until around the 1900s. After the 1900s, there were increased sedimentation rates (Figure 1. A, B, D, and E) which indicates cultural eutrophication (increase of nutrients in bodies of water). Phosphates and nitrates from common fertilizers and other human activities (which is why it’s called “cultural eutrophication”) can allow algae (e.g., diatoms) to grow rapidly and reduce the amount of oxygen in the lake. An increased sedimentation rate can be used to determine whether a body of water is in a state of eutrophication, because the amount of phytoplankton (such as diatoms) would increase in accumulation. Total phosphorus accumulation rates can also indicate eutrophication.
Why is this study important?: This study shows that, without being disturbed, Lake Harris was prone to becoming depleted in nutrients, the process of oligotrophication. The complete change of course due to human activities (i.e., fertilizer runoff) is more detrimental than was previously considered. This study showed that throughout the environmental history of Lake Harris there was never a sign of natural eutrophication, but rather that of oligotrophication.
The bigger picture: Cultural eutrophication is a serious problem plaguing many aquatic systems and has serious consequences such as toxic algae blooms, which can have far reaching effects like on the tourism industry in Florida! The extent of damage caused by human activities is shown in this study and can help us understand how lakes responded in the past to the introduction of cultural eutrophication.
Citation: Kenney WF, Brenner M, Curtis JH, Arnold TE, Schelske CL (2016) A Holocene Sediment Record of Phosphorus Accumulation in Shallow Lake Harris, Florida (USA) Offers New Perspectives on Recent Cultural Eutrophication. PLoS ONE 11(1): e0147331. https://doi.org/10.1371/journal.pone.0147331
The environmental consequences of climate-driven agricultural frontiers
L. Hannah, P. R. Roehrdanz, K. C. KB, E. D. Fraser, C. I. Donatti, L. Saenz, T. M. Wright, R. J. Hijmans, M. Mulligan, A. Berg, A. van Soesbergen
Summarized by Mckenna Dyjak
What data were used?: Climate-driven agricultural frontiers are areas of land that currently do not support the cultivation of crops but will transition into crop-yielding land due to climate change. The frontiers were identified using seventeen global climate models (mathematical representations of the atmosphere, land surface, ocean, and sea ice used to project future climates) for Representative Concentration Pathways 4.5 and 8.5 (RCPs, greenhouse gas concentration trajectories). The climates in which twelve globally important crops (corn, sugar, wheat, soy, etc.) can grow were determined using three modeling methods: Ecocrop (a model of crop suitability based on known ranges of optimal temperature and precipitation), Maxent (used in determining species distributions under climate change), and the frequency of daily critical minimum and maximum temperatures provided by the NOAA Earth System Research Laboratory Twentieth Century Reanalysis Version 2. Water quality impacts, soil organic carbon impacts (consequences of the release of organic carbon preserved in soil), and biodiversity impacts (the variety of life in an ecosystem) were also used in this study to determine the outcome of developing the frontiers.
Methods: The climate-driven agricultural frontiers were found by aligning the preferred climate of crops with the predicted climate determined by the RCPs. The water quality impact was analyzed by using a hydrological model to determine the fraction of water that would be contaminated by the agriculture on the frontiers. Soil organic carbon impacts were determined by using a global dataset that estimates the amount of soil organic carbon present at the top 100cm (soil can store some of the organic carbon that is cycled throughout the earth). The biodiversity impacts were assessed by compiling biodiversity hotspots, endemic (found only in a certain area) bird areas, and Key Biodiversity Areas and comparing them to the agricultural frontiers to find any overlap.
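The Ecocrop approach essentially scores a location by how well its climate falls within a crop's known temperature and precipitation ranges. The toy function below mimics that idea with a trapezoidal score; the crop thresholds and climate values are invented and are not those used in the study.

```python
# Toy Ecocrop-style suitability score: 1 inside the crop's optimal range,
# tapering linearly to 0 at its absolute limits. Thresholds are invented.

def trapezoid(value, absolute_min, optimal_min, optimal_max, absolute_max):
    if value <= absolute_min or value >= absolute_max:
        return 0.0
    if optimal_min <= value <= optimal_max:
        return 1.0
    if value < optimal_min:
        return (value - absolute_min) / (optimal_min - absolute_min)
    return (absolute_max - value) / (absolute_max - optimal_max)

def crop_suitability(mean_temp_c, annual_precip_mm, crop):
    t_score = trapezoid(mean_temp_c, *crop["temp"])
    p_score = trapezoid(annual_precip_mm, *crop["precip"])
    return min(t_score, p_score)  # limited by the least suitable factor

# Invented thresholds: (absolute min, optimal min, optimal max, absolute max)
potato = {"temp": (5, 12, 20, 28), "precip": (300, 500, 900, 1500)}

# A hypothetical boreal grid cell, today versus under a warming scenario
print("current climate:", crop_suitability(4.0, 600, potato))
print("warmer climate: ", crop_suitability(10.0, 650, potato))
```

In this toy example the cell scores zero today but becomes partially suitable under the warmer climate, which is the basic mechanism by which new frontiers appear in the models.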
Results: The climate-driven agricultural frontiers were found to cover 10.3-24.1 million km2 of Earth's surface; the areas can be seen in Figure 1. The models project that the largest portion of frontiers will be in the boreal regions of the Northern Hemisphere (e.g., places where coniferous trees, like pine trees, thrive) and in mountainous areas across the world. In these areas, it was found that potato, corn, and wheat are the crops that will make the biggest contribution to the potential agricultural lands.
The frontier soils are estimated to hold about 177 gigatons of carbon in their top layers, and plowing this untilled land (land not cultivated for crops) could release an estimated 25-40% of it within 5 years. That release would be equivalent to around 30 years of current US carbon emissions. These numbers do not include the release of carbon that will occur in high-latitude soils due to warming alone. When analyzing the biodiversity impacts, it was found that 56% of biodiversity hotspots, 22% of Endemic Bird Areas (EBAs), and 13% of Key Biodiversity Areas (KBAs) intersect with the agricultural frontiers. The fact that suitable climates for species will shift with warming was also taken into account (both crop and species suitability move upslope). Water quality will be negatively affected by biocide runoff in these frontiers, affecting 900 million to 1.6 billion people, as well as ecosystem health.
Why is this study important?: Russia is already discussing using the warming land to their advantage for developing agriculture and it is likely that Canada will as well. This study outlines the detrimental outcomes of cultivating these lands and urges for international policies for sustainable development of the frontiers. Due to climate change and unsustainable farming practices current farmland is becoming unusable. With a predicted increase in need for food due to a growing population, as well as unusable farmland, there will be a push for developing new lands; however, it is important to know the potential risks and how to mitigate them.
The bigger picture: With climate change and population growth occurring side by side it is important to know how to handle them in the worst-case scenarios and what measures will need to be taken to do so. It is also important to note that food insecurity is not usually linked to food production but rather to socio-economic disconnects such as food deserts (neighborhoods without healthy food sources).
Citation: Hannah L., Roehrdanz P. R., K. C. K. B. , Fraser, E. D. G., Donatti, C. I. , Saenz, L., Wright, T. M., Hijmans, R. J., Mulligan, M., Berg, A., and van Soesbergen, A. (2020) The environmental consequences of climate-driven agricultural frontiers. PLoS ONE 15(2): e0228305. https://doi.org/10.1371/journal.pone.0228305
Experimental evidence for species-dependent responses in leaf shape to temperature: Implications for paleoclimate inference
by: Melissa L. McKee, Dana L. Royer, Helen M. Poulos
Summarized by: Mckenna Dyjak
What data were used?: Four species of seeds from woody plants were used: Boxelder Maple (Acer negundo L.), Sweet Birch (Betula lenta L.), American Hornbeam (Carpinus caroliniana Walter), and Red Oak (Quercus rubra L.). Three species of transplanted saplings were also used: Red Maple (Acer rubrum L.), American Hornbeam (Carpinus caroliniana Walter), and American Hophornbeam (Ostrya virginiana (Mill.) K.Koch). These species were chosen because they each occur naturally along the east coast of the United States and have leaf shapes that vary with climate.
Methods: The seeds and saplings were randomly divided into either warm or cold treatments. The warm treatment cabinet had a target average temperature of 25°C (77°F) and the cold treatment cabinet had a target average temperature of 17.1°C (63°F). After three months, five fully expanded leaves were harvested and photographed immediately. The images from the leaves were altered in Photoshop (Adobe Systems) to separate the teeth (zig-zag edges of leaves) from the leaf blade (broad portion of the leaf). The leaf physiognomy (leaf size and shape) was measured using a software called ImageJ. The measured variables were tooth abundance, tooth size, and degree of leaf dissection. The degree of leaf dissection or leaf dissection index (LDI) is calculated by leaf perimeter (distance around leaf) divided by the square root of the leaf area (space inside leaf). The deeper and larger the space between the teeth of the leaf, the greater the LDI.
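The leaf dissection index described above is just perimeter divided by the square root of blade area, so it can be computed directly from ImageJ-style measurements. The sketch below does this for a few invented leaves; note that a perfectly circular leaf has the minimum possible value, 2√π (about 3.54), and more deeply toothed leaves score higher.

```python
# Leaf dissection index: LDI = perimeter / sqrt(blade area).
# Invented measurements standing in for ImageJ output (cm and cm^2).
import math

leaves = {
    # leaf id: (perimeter_cm, area_cm2, tooth_count)
    "warm-treatment leaf": (18.0, 22.0, 14),
    "cold-treatment leaf": (24.0, 21.0, 26),
}

for name, (perimeter, area, teeth) in leaves.items():
    ldi = perimeter / math.sqrt(area)
    teeth_per_cm = teeth / perimeter
    print(f"{name}: LDI = {ldi:.2f}, teeth per cm of perimeter = {teeth_per_cm:.2f}")

# For reference, a perfectly circular (entire-margined) leaf has LDI = 2*sqrt(pi):
print(f"circular leaf baseline LDI = {2 * math.sqrt(math.pi):.2f}")
```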
Results: The leaf responses to the two temperature treatments are mostly consistent with what is observed globally: the leaves from the cool temperature treatment favored having more teeth, larger teeth, and a higher LDI (higher perimeter ratio). However, it was found that the relation between leaf physiognomy (leaf size and shape) and temperature was specific to the type of species.
Why is this study important?: Paleoclimate (past climate) can be determined by using proxy data which is data that can be preserved things such as pollen, coral, ice cores, and leaves. Leaf physiognomy can be used in climate-models to reconstruct paleotemperature from fossilized leaves. This study supports the idea that leaf size changes correlate with temperature change. However, the responses varied by species and this should be taken into account for climate-models using leaf physiognomy to infer paleoclimate.
The bigger picture: Studying paleoclimate is important to see how past plants reacted to climate change so we have an idea how plants will respond to modern human-driven climate change.
Citation: McKee ML, Royer DL, Poulos HM (2019) Experimental evidence for species-dependent responses in leaf shape to temperature: Implications for paleoclimate inference. PLoS ONE 14(6): e0218884. https://doi.org/10.1371/journal.pone.0218884
Hello! My name is Mckenna Dyjak and I am in my last semester of undergrad at the University of South Florida. I am majoring in environmental science and minoring in geology. I have always been very excited by rocks and minerals as well as plants and animals. In high school, I took AP Environmental Science and realized I couldn’t picture myself doing anything other than natural sciences in college. While in college, I joined the Geology Club and realized that I loved geology as well. At that point it was too late in my college career to double major, so I decided to minor in geology instead. Since then, I have been able to go on many exciting field trips and have met amazing people that have helped further my excitement and education in geology. One of my favorite trips was for my Mineralogy, Petrology, and Geochemistry class that went to Mount Rogers in Virginia to observe rock types that would be similar to a core sample we would later study in class. Figure 1 below is a picture of me in Grayson Highlands State Park on that field trip! As you can see, my hiking boots are taped because the soles fell off. Luckily, some of my fellow classmates brought waterproof adhesive tape which saved my life.
My favorite thing about being a scientist is that everyone has something that they are passionate and knowledgeable about. You can learn so many different things from different people and it is so fun seeing how excited people get about what they are most interested in. It is a great thing to be in a field where constant learning and relearning is the norm. I love to share what I know and learn from others as well.
As of now, I am doing an internship with the Environmental Protection Commission of Hillsborough County in the Wetlands Division. At the EPC we are in charge of protecting the resources of Hillsborough County, including the wetlands. An important part of what we do is wetland delineation (determination of precise boundaries of wetlands on the ground through field surveys) which requires a wide knowledge of wetland vegetation and hydric soils (soil which is permanently or seasonally saturated by water resulting in anaerobic conditions)! Once the wetland is delineated, permitting and mitigation (compensation for the functional loss resulting from the permitted wetland impact) can begin. Figure 2 below is a picture of me at the Engineering Expo at the University of South Florida explaining the hydrologic cycle to a younger student at the EPC booth!
Outside of environmental science, I have a passion for geology or more specifically, sedimentary geology. I have been fortunate enough to have amazing professors in my sedimentary classes and have discovered my love for it! I enjoy going on the field trips for the classes and expanding my knowledge in class during lectures. I am interested in using sedimentary rocks to interpret paleoclimate (climate prevalent at a particular time in the geological past) and determining how past climate change affected surface environments. One really awesome field trip I got to go on was for my Sedimentary Environments class where we took core samples in Whidden Bay and Peace River. In Figure 3 I am in the water, knee deep in smelly mangrove mud, cutting the top of our core that we will eventually pull out and cap. I plan on attending graduate school in Fall of 2021 in this particular area of study.
The study and reconstruction of paleoclimate is important for our understanding of the natural variation of climate and how it is changing presently. To gather paleoclimate data, climate proxies (materials preserved in the geologic record which can be compared to what we know today) are used. I am interested in using paleosols (a stratum or soil horizon that was formed as a soil in a past geological period) as proxy data for determining paleoclimate. Sediment cores (seen in Figure 4) can also be used to determine past climate. The correlation between present day climate change and what has happened in the geologic past is crucial for our push to mitigate climate change.
I urge aspiring scientists to acquire as much knowledge as they can about different areas of science because they are all connected! It doesn't matter if it is from a book at the library, a video online, or in lecture. You also do not have to attend college to be a scientist; any thirst for knowledge and curiosity about the world already has you there.
Mckenna here- This post will show you the geology of the Mount Rogers Formation and Virginia Creeper Trail on a recent field trip I took to Virginia!
On October 10th of 2019, my Mineralogy, Petrology, and Geochemistry class went on a 4 day field trip to Abingdon, Virginia. Imagine this: it’s October. You love fall but you’ve lived in Florida your whole life, and you finally get to wear all the winter clothes you bought for no apparent reason. Considering these facts, my excitement for the trip was through the roof. After a 14 hour ride in a van with 10 other people and frequent restroom stops (much to the dismay of my professor) we finally arrived in Abingdon, Virginia to the joys of leaves turning colors and a crisp feeling in the air. A van full of (mostly) Florida-born students seeing fall leaves for what was probably the first time was a van full of amazement and pure excitement. It sounds silly, but it was really wholesome seeing how giddy everyone got just by seeing some colorful trees (me included). We got to our hotel and prepared for the next day spent in the field.
We woke up early in the morning and were able to enjoy a delightful breakfast made by the hotel to kick start our day. I packed my lunch and snacks and put on layers of clothes to be ready for any weather. I put on my new wool socks from the outlet store and old hiking boots that seemed structurally sound at the time (important to note for later). On our way to Mount Rogers in Damascus, Virginia we happened to take a road conveniently coined “The Twist”. As a long term participant in unwillingly becoming motion sick in situations such as going down one of the curviest roads in Virginia, I wasn’t thrilled. Luckily, I knew mountain roads could be bad so I packed some Dramamine which I made sure I took every time we got in the van from then on.
Once we got to Mount Rogers my friend and I immediately had to use the bathroom which, in this case, was wherever you felt like the trees concealed you enough. They don't really mention this too much for field trips/field camps but bring toilet paper!! It will make your life a lot easier. After this venture, we were soon on the hunt for rhyolite. Rhyolite is a type of rock that my professor has talked a lot about and I had heard from other students that it is mostly what you will be seeing on the Virginia trip. It is a type of igneous rock that has a very high silica content so it is considered felsic (which is usually light colored). Rhyolite is made up of the minerals quartz and plagioclase, with smaller amounts of hornblende and biotite.
The upper part of the Mount Rogers Formation consists mostly of rhyolite which we have, thanks to the continental rifting that occurred around 750 mya. The volcanoes that were once present here erupted and the igneous rock formed from the lava flow.
We used our rock hammers that you can see in Image 2 to break off bits of Rhyolite and observe them under our handheld lenses. Through these lenses, we could (almost) easily identify the minerals present in our rock samples.
Stop after stop, we observed more rhyolite. It became quite easy to answer our professor’s questions as to what type of rock we were looking at; the answer was usually “Whitetop Rhyolite”. There were, however, different types of rocks as we descended down the side of the mountain: buzzard rock and cranberry gneiss.
After we were finished at our first destination, we drove off to Grayson Highlands State Park. Here we observed more outcrops of rhyolite with a new fun bonus: tiny horses. Apparently, these tiny horses were let loose here in the late 20th century to control the growth of brush in the park. Now, there are around 150 of them that live in the park and are considered wild. While the park discourages petting the horses, you are able to get a cool selfie with them!
At the state park, there were lots and lots of giant rocks to climb on which everyone seemed to enjoy doing. So, while climbing the rocks, we were also observing and identifying them so it was a great combination. I was taking the liberty to climb almost every rock I saw and everything was going great for the time being. At one rock, I decided I wanted some pictures, for the memories! Mid mini photo shoot, I realized that the sole of my hiking boot had come clean off. Luckily, TWO very prepared people in my class happened to have waterproof adhesive tape and offered for me to use it to fix my boots. I was so thankful (and impressed that they had it in the first place) for the tape and used it to wrap my sole back to my boot and reinforce my second one because I noticed that the sole was starting to come off. The taped boots almost got me through to the end of the second day but I had to do some careful, soleless walking to get back to the van. I was able to go to the store near our hotel to get some replacement boots for the third and final day in the field.
The last day in the field was spent at the Virginia Creeper Trail in Damascus, Virginia. This specific trail serves almost entirely as a 34 mile cycling trail; by almost entirely, I mean entirely a cycling trail with the exception of a class full of geology students. Our day consisted of identifying rock types in outcrops along the trail and receiving a wide range of looks from cyclists passing by as our lookouts at the front and back yelled out for us to get out of the way. We walked around 1.5 miles of the trail, all while taking notes and pictures while our professor and teaching assistants were explaining each outcrop. Once we reached a certain point, our professor informed us that they would be leaving to get the vans and we would be walking back the way we came plus a half mile or so and identifying each outcrop while counting our steps and noting our bearings. So we measured our strides and got into groups to commence the journey. The goal of this was to eventually be able to create a map of our own that indicated each outcrop type and where they were on the path we took.
This all sounds relatively simple, right? The answer is well, not really. The entire venture took around 4 or 5 hours and honestly made some people a little grumpy. I was happy though, because among the rhyolites and basalts, we were also able to see some really cool sedimentary rocks. Along the way we saw some awesome shale (Image 8) which we were told had some fossils in it if you looked hard enough. Of course, being interested in sedimentary geology I would’ve stayed forever chipping away at the shale to find a fossil but we were quickly ushered along by one of our professors. Shale is a type of sedimentary rock that is formed from packed silt or clay and easily separates into sheets. This type of rock is formed under gentle pressure and heat which allows organic material to be preserved easier as opposed to igneous or metamorphic rocks. As we continued along the trail we also saw mudstones and sandstones, diamictites, and conglomerates. After reaching the end of our journey, my group might have gone a little overboard and recorded 51 different outcrops. The outcrops we recorded could be reduced to: basalt, rhyolite, diamictite, conglomerate, sandstone/mudstone, and shale. The last field day was now concluded with tired feet but happy hearts as we listened to Fleetwood Mac in the van on the way back to the hotel.
We had a very early morning, skipped the hotel breakfast (they put out fruit and pastries for us though), and piled into the vans for a long journey back to Tampa, Florida. This trip was everything I had hoped it would be and made me fall in love with geology even more than I already was! I hope to go on many more adventures like this in the future.
The 2020 Pilot Virtual Internship Program in Science Communication was spearheaded by Committee Chair, Sarah Sheffield with assistance from Adriane Lam and Jen Bauer. The program was intended to provide students with a required internship prior to graduation as many programs had been canceled due to the COVID-19 pandemic. This program was approximately 5 weeks long and the interns were expected to produce 10 blog posts each.
A fossiliferous spherule-rich bed at the Cretaceous–Paleogene (K–Pg) boundary in Mississippi, USA: Implications for the K–Pg mass extinction event in the Mississippi Embayment and Eastern Gulf Coastal Plain
James D. Witts, Neil H. Landman, Matthew P. Garb, Caitlin Boas, Ekaterina Larina, Remy Rovelli, Lucy E. Edwards, Robert M. Sherrell, J. Kirk Cochran
Summarized by Mckenna Dyjak, an environmental science major with a minor in geology at the University of South Florida. She plans to go to graduate school for coastal geology; once she earns her degree, she plans on becoming a research professor at a university. Mckenna spends her free time playing the piano and going to the gym.
What data were used? A fossil- and spherule-rich rock formation in Union County, Mississippi, exposed by construction. The formation contains the Cretaceous-Paleogene (K-Pg) boundary, which marks the end of the Cretaceous and the beginning of the Paleogene, estimated at ~66 million years ago. This boundary is characterized by a thin layer of sediment with high levels of iridium, an element that is uncommon in Earth's crust because it comes almost exclusively from extraterrestrial sources. The K-Pg boundary is associated with a mass extinction: a significant, widespread increase in extinction (the ending of lineages) of multiple species over a short amount of geologic time. The iridium indicates that the extinction was likely caused by an extraterrestrial impact; the spherules found support this idea as well, as spherules are formed from ejecta after an impact.
Methods: The fossils present in the rock formation were identified and compiled into a complete list. To determine the composition of the rock formation, 14 sediment samples were collected; these samples were used to construct a biostratigraphic analysis: relating the relative ages of different rock layers to the fossils found within them. The mineral composition and grain size were determined to construct this analysis. The mineral composition (mineral percentages present) of the sediment samples was determined using a Scanning Electron Microscope (SEM) and a diffractometer (an X-ray instrument). The grain size of the sediment samples was determined by using a sieve (mesh strainer) to sort grains into different sizes. The Carbon-13 levels of the sediment samples were also analyzed: Carbon-13 can be used to estimate the amount of plant matter that was present at the time. The data collected were used to construct the stratigraphic section shown in the figure below.
Results: There was a significant decrease in the amount of micro- and macrofossils present. Along with the decrease in fossils there was a positive shift in Carbon-13. The positive shift in Carbon-13 indicates that there was an increase in plant matter buried in the rock record. Sedimentary structures such as weak cross-bedding and laminations (which indicate flowing water and fluctuating energy levels) were also observed. An important layer was analyzed: a 15–30 cm thick, muddy, poorly sorted sand containing abundant spherules (small, roughly spherical particles) that were likely a product of the Chicxulub impact event.
Why is this study important? The findings suggest that there was a quick, local change in sediment supply and possibly sea level due to the significant variation in facies (body of sediment), fossil changes, and different geochemical data that coincided with the extinction event.
Big Picture: This study helps us understand how different areas were affected locally before the mass extinction event, which can help us understand how recovery from mass extinctions takes place.
Citation: Witts, James, et al. “A Fossiliferous Spherule-Rich Bed at the Cretaceous-Paleogene (K-Pg) Boundary in Mississippi, USA: Implications for the K-Pg Mass Extinction Event in the MS Embayment and Eastern Gulf Coastal Plain.” 2018, doi:10.31223/osf.io/qgaj | 1 | 3 |
<urn:uuid:4603e8b2-0838-47e8-ac54-3939aeec0325> | (2022) How To Code Foot Drop ICD 10 – List With Codes & Guidelines
This article will outline the causes, diagnosis, treatment and the ICD 10 CM code for Foot Drop.
Foot Drop ICD 10 Causes
Foot drop can be due to a number of causes and underlying issues. The most common underlying problem is a peripheral nerve disorder (neuropathy). Foot drop results from weakness or paralysis of the muscles that lift the front of the foot, and it is often caused by squeezing or compression of the nerves that control these muscles. Nerves anywhere between the knee and the lower spine can become trapped.
Muscular dystrophy, a group of inherited genetic diseases that cause gradual muscle weakness, can also lead to foot drop; in this case the underlying problem is the muscle weakness itself. Nerves in the leg can be injured or damaged during hip or knee surgery. Nerve damage associated with diabetes (diabetic neuropathy) is another cause of foot drop. Hereditary diseases that cause peripheral nerve damage and muscle weakness, such as Charcot-Marie-Tooth disease, can also lead to foot drop.
Foot drop can also be caused by conditions affecting the brain and spinal cord, such as stroke, cerebral palsy and multiple sclerosis, and by muscle-wasting conditions such as spinal muscular atrophy and motor neuron disease; the latter is a disease of the brain and spinal cord. Foot drop makes it difficult to lift the front part of the foot, so the foot drags along the floor as the patient walks. This causes the patient to raise the thigh when walking, as though climbing stairs (a steppage gait), which helps the foot clear the floor.
This unusual gait can also cause the foot to slap down onto the floor with each step. Depending on the cause, foot drop can affect one or both feet. In some cases, the skin on the top of the foot and the toes can feel numb. If the toes drag on the floor when walking, a doctor should be consulted.
Foot Drop ICD 10 Diagnosis
Foot drop can be difficult to diagnose because there are several possible causes and overlapping symptoms. The diagnostic process for foot drop involves carrying out a physical examination and checking the patient's medical history.
If necessary, one or more diagnostic tests may be required. Performing a physical exam and checking the medical history can help the doctor to detect patterns of weakness, numbness and pain in the feet and legs. During the physical examination, the doctor reviews pain and numbness in the toes, feet and legs and checks the response to certain stimuli, such as pressure on the toe or calf area. The doctor can also perform certain clinical tests to detect weakness of the hip, leg and foot muscles. These tests may involve the doctor moving the feet, legs or thighs in different directions to check the muscles of the ankles, legs, knees and hips.
Another test used to help diagnose foot drop is the Tinel sign test. The Tinel sign is a tingling, pins-and-needles-like sensation that is felt when an affected nerve is tapped. In foot drop, the test can be carried out by tapping the side of the knee to check the peroneal nerve. A positive Tinel sign is usually observed with peroneal nerve compression.
In cases of suspected foot drop, diagnostic testing is required to check the muscles, nerves and tissues of the affected leg. Taking the medical history includes a review of the following aspects: occurrence of weakness or other symptoms, concomitant diseases such as diabetes or multiple sclerosis, trauma or injuries to the back, hip, leg or foot, and reduced strength in other parts of the body. Testing is also performed to investigate systemic diseases (such as diabetes and genetic disorders) that can affect the nerves.
The first series of tests to check the nerve and muscle health of the leg is the electrodiagnostic examination, which includes nerve conduction studies and electromyography. These tests can help to identify damaged or demyelinated nerves in the leg (nerves that have lost their outer myelin sheath), and they are commonly performed to diagnose foot drop. The nerve conduction study assesses motor and sensory nerve conduction using electrodes attached to the skin of the legs.
Electromyography (EMG) is a test in which a small needle is inserted into the affected muscle and its electrical activity is recorded. The electrical signal is displayed on a monitor and interpreted by a doctor.
Other imaging tests for examining nerves include ultrasound and computed tomography (CT) scans. Magnetic resonance neurography (MRN) is the most advanced and detailed imaging technique for nerves. It can be used to analyze several nerves at once, for example in conditions such as Charcot-Marie-Tooth disease, or to analyze localized lesions such as sciatic nerve root compression from a herniated disc. It can currently be used for one or more of the following conditions:
- inflammatory nerve problems
- genetic nerve diseases
- mediating diseases
- neural tumors
- post-traumatic nerve changes
A number of blood tests must be carried out. A complete blood count (CBC) can help detect blood diseases such as cancer-related disorders, infections, clotting problems and anemia.
Other specialized blood tests can be ordered if a genetic disorder or cancer is suspected. Diseases such as diabetes can be diagnosed through a basic metabolic examination. This test is used to examine the health of bones and organs such as kidneys and liver.
Foot drop can be diagnosed by several types of doctors. If the lower back is the suspected cause, it is advisable to consult a spine specialist, such as a physiatrist, orthopaedic surgeon or neurosurgeon. Diabetes and other metabolic diseases can also be diagnosed and treated by an endocrinologist.
Foot Drop ICD 10 Treatment
Treatment of Foot Drop ICD 10 depends on the cause. As soon as the cause is determined, different foot treatments can be performed depending on the specific underlying disease. Early treatment can improve the chances of recovery.
Treatment may include a lightweight brace, an orthosis worn in the shoe, physical therapy or surgery. Lightweight braces are the most common treatment; they are used to support the leg. Physical therapy is used to strengthen the foot and leg muscles.
It can also improve a person's ability to walk. In some cases, electronic devices that stimulate the leg nerves while walking may be useful. Surgery may be recommended to repair or decompress damaged nerves. If the foot drop is permanent, surgery to fuse the foot and ankle joints, or to transfer tendons from stronger muscles, can improve gait and stability.
ICD 10 Code For Foot Drop
ICD 10 CM M21.37 Foot drop (acquired)
ICD 10 CM M21.371 Foot drop, right foot
ICD 10 CM M21.372 Foot drop, left foot
ICD 10 CM M21.379 Foot drop, unspecified foot
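For readers who handle these codes in software, the short Python sketch below simply restates the list above as a lookup table; it is an illustration only, not an official code set or a billing tool.

```python
# Illustrative lookup table restating the ICD-10-CM foot drop codes listed above.
FOOT_DROP_CODES = {
    "M21.37":  "Foot drop (acquired)",
    "M21.371": "Foot drop, right foot",
    "M21.372": "Foot drop, left foot",
    "M21.379": "Foot drop, unspecified foot",
}

def describe(code: str) -> str:
    """Return the description for a foot drop code, or a note if it is not in the list."""
    return FOOT_DROP_CODES.get(code, "Not a foot drop code in this list")

print(describe("M21.371"))  # Foot drop, right foot
```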
<urn:uuid:00872e2e-ec72-47f8-a3a8-3474b1a066c3>
|BIM Foundations Training Lynda.com|
The building information modeling (BIM) process involves generating and managing digital models of a place's physical and functional characteristics. BIM files can be exchanged to support decision-making between architects, owners, engineers, and contractors. BIM technologies are used by individuals, businesses, and government agencies who plan, design, construct, operate, and maintain diverse physical infrastructures, including water, electricity, gas, communication, and roads. In this course, gain foundational knowledge by first exploring the principles of BIM and the role it plays in modern architectural, engineering, and construction (AEC) projects. Then, learn about the benefits of BIM on a financial and investment level. Finally, see real-world examples of BIM usage.
|Building an Android App with Architecture Components Training Lynda.com|
Google now offers a set of recommended components for architecting Android apps. Android developers can learn how to follow these recommendations to improve their initial development process and simplify long-term maintenance. In this course, learn best practices for building high-quality Android apps using the Android Architecture Components for data persistence and display. Instructor David Gassner teaches these concepts in a real-world context by using the Architecture Components to build a simple note-taking app from start to finish. He shows how to define an SQLite database with the Room library, display a list of data with the efficient RecyclerView component, and update the user interface with observable LiveData objects.
|Business Development Foundations: Researching Market and Customer Needs Training Lynda.com|
Business development is the foundation of economic growth and can jump-start lasting relationships. In this course, Lisa Earle McLeod and Elizabeth McLeod discuss fundamental business development concepts and techniques that can help you gain a better understanding of your market and potential clients. This, in turn, can prepare you to demonstrate a compelling value case that helps you connect with customers and close more deals.
They begin by reviewing research fundamentals and key insights to consider before you get on the phone with a potential customer or schedule that first in-person meeting. They also discuss the landscape of business development and how to leverage internal systems for more successful conversations. Lisa and Elizabeth spell out how to speak like a leader, unpack and discuss your competitors' strengths and weaknesses, avoid common mistakes, and more.
|C# Framework Design Training Lynda.com|
Discover how to design C# frameworks for personal, enterprise, and open-source projects. In this course, join instructor Jesse Freeman as he discusses key framework design concepts, how to organize your code, and how to document and share your frameworks online. Jesse covers code encapsulation and modular classes. He also explains how to extend a framework and enforce an architecture pattern.
|Camtasia: Advanced Elearning Editing Training Lynda.com|
Camtasia offers specific editing tools for educators, trainers, and any elearning creator, allowing you to make and edit professional quality videos. In this course, take your editing skills to the next level by learning advanced techniques, including how to apply transitions, work with green screen footage, and create advanced animation. Editing audio is also covered, including how to fix background noise and make the volume level. Additionally, find out how to polish the color of the video, add closed captioning, and publish your final project.
|CCNA Security (210-260) Cert Prep: 5 Cisco Firewall Technologies Training Lynda.com|
The Cisco Certified Network Associate (CCNA) Security certification indicates to potential employers that you have the required skills to secure a network. Join security ambassador Lisa Bock, as she prepares you for the Cisco Firewall Technologies section of the CCNA Security exam 210-260: Implementing Cisco Network Security. Lisa covers firewall technologies, diving into the concept of a firewall, firewall security contexts, and how to do a basic firewall configuration. She also compares different types of firewalls including stateless, stateful, and application firewalls. She also reviews implementing NAT on Cisco ASA along with zone-based firewalls. To wrap up, she takes a closer look at some firewall features on the Cisco ASA such as Access Management, Modular Policy Framework, and high availability.
|Cert Prep: PRINCE2® Foundation and Practitioner Training Lynda.com|
PRINCE2® is the world's most widely adopted project management method. It enables teams of any size, in any industry, to develop and implement projects in a consistent and controlled manner. You can become a PRINCE2 certified professional by passing the Foundation and Practitioner exams. This course breaks down each of the 7 themes, 7 processes, and 7 principles of PRINCE2 in a succinct format, so that you are knowledgeable, empowered, and prepared to answer all the related questions from the exams. Learn how themes, processes, roles, and even documentation can be tailored for your business, and how the project environment is covered by each of the seven themes: business case, organization, quality, plans, risk, change, and progress. For each theme, Claudine provides a challenge video of foundation-style practice questions, and an accompanying solution set.
|Choosing a Cross-Platform Development Tool: Cordova, Ionic, React Native, Titanium, and Xamarin Training Lynda.com|
There are many cross-platform mobile development tools available. Knowing which to choose is almost harder than learning the platform itself. Each toolset comes with pros and cons. In this course, Tom Duffy reviews five of the most popular options—Cordova, Ionic, React Native, Titanium, and Xamarin—and explains their benefits and tradeoffs. He builds a simple user-input app with each tool, highlighting exceptional features and workflow steps.
|Cleaning Bad Data in R Training Lynda.com|
Data integrity is the new focal point of the data science revolution. Now that everybody is onboard with the role of data in people's lives and business, it's not an unfair question to ask, "Can you prove that your data is accurate?" In this course, you can learn how to identify and address many of the data integrity issues facing modern data scientists, using R and the tidyverse. Discover how to handle missing values and duplicated data. Find out how to convert data between different units and tackle poorly formatted text. Plus, learn how to detect outliers, address structural issues, and identify red flags that indicate potential data quality issues.
Where possible, instructor Mike Chapple shows how to correct the issues using R, but the same principles can be applied to any statistical programming language.
|CISSP Cert Prep: 6 Security Assessment and Testing Training Lynda.com|
Learn about security assessment and testing practices needed to prepare for the Certified Information Systems Security Professional (CISSP) exam. CISSP—the industry's gold standard certification—is necessary for many top jobs. This course helps you approach the exam with confidence by providing coverage of key topics, including threat assessment, log monitoring, and software testing. It also covers disaster recovery and security process assessment. Students who complete this course will be prepared to answer questions on the sixth CISSP exam domain: Security Assessment and Testing.
Find the companion study books at the Sybex test prep site and review the complete CISSP Body of Knowledge at https://www.isc2.org/cissp-domains/default.aspx.
Note: This course is part of a series releasing throughout 2018. A completed Learning Path of the series will be available once all the courses are released.
|Code Clinic: Python Training Lynda.com|
Successful programmers know more than just how to code. They also know how to think about solving problems. Code Clinic is a series of courses where our instructors solve the same problems using different programming languages. Here, Barron Stone works with Python. Barron introduces challenges and provides an overview of his solutions in Python. Challenges include topics such as statistical analysis and accessing peripheral devices.
|Code Clinic: R Training Lynda.com|
|CompTIA Network+ (N10-007) Cert Prep: 1 Understanding Networks Training Lynda.com|
CompTIA Network+ Cert Prep is a comprehensive training series designed to help you earn your Network+ certification—the most sought-after, vendor-neutral certification for networking professionals. This is part 1 of a 9-part series, brought to you by a partnership between LinkedIn Learning and Total Seminars, and based on Mike Meyers's gold standard CompTIA Network+ All-in-One Exam Guide, 7th Edition. This installment provides a thorough overview of networking basics: OSI versus TCP/IP models, MAC and IP addressing, and packets and ports. Start here to prepare for the exam and your future as a certified networking professional.
|CompTIA Network+ (N10-007) Cert Prep: 2 The Physical Network Training Lynda.com|
Take this comprehensive prep course for the new CompTIA Network+ exam (N10-007) to understand how the physical components of networks interact. This is part 2 of a 9-part series, brought to you by a partnership between LinkedIn Learning and Total Seminars. The training is based on Mike Meyers's gold standard CompTIA Network+ All-in-One Exam Guide, 7th Edition. This installment covers cabling, topologies, and Ethernet basics. Plus, learn how to set up and troubleshoot a modern, top-of-the-line, physical network complete with switches, hubs, and routers. By the end of the training, you'll have improved your networking skills and your ability to earn this sought-after IT certification.
|CompTIA Network+ (N10-007) Cert Prep: 8 Building a Real-World Network Training Lynda.com|
Jump-start your career in IT by earning the CompTIA Network+ certification, one of the most sought-after certifications for networking professionals. In this installment of the nine-part CompTIA Network+ Cert Prep series, instructor Mike Meyers covers key networking exam concepts as he steps through how to design and build a real-world network. Here, Mike compares and contrasts the characteristics of network topologies, types, and technologies—helping to prepare you for the corresponding exam objective in the process. He familiarizes you with the different types of networks; goes over key aspects of network design; shares how to create an effective contingency plan; describes the when, why, and how of backups; and more. This course was recorded and produced by Total Seminars. We're pleased to host this training in our library.
|Construction Management: Introduction to Lean Construction Training Lynda.com|
While manufacturing has realized substantial gains in productivity, the construction industry hasn't seen notable improvements in productivity over the past few decades. Lean productivity—a tried and tested continual improvement process leveraged in manufacturing—contains elements that lend themselves to being ported over to the construction process. In this course, explore the lean theory of production and learn how it can be successfully adopted in construction to enhance efficiency. Instructor Jim Rogers provides a brief introduction to the theory of lean productivity, describing key concepts and explaining how lean applies to construction. Jim then spells out the changes that need to occur in order for lean to be successfully adopted in the construction industry.
|Consulting Foundations: The Concept of Value Training Lynda.com|
Value is the new buzzword in consulting—and for good reason. People buy value, not just products and services. Consultants who maximize the value created for their clients can command a pricing premium and deliver better results. This course teaches consultants what value is, and how to identify and communicate the value they provide to the people they serve. Author, CEO, and coach Robbie Baxter shows how to connect value to your pricing—whether it's for accounting or design services—and handle challenges, from avoiding being seen as a commodity to moving beyond an hourly role. She also provides questions and frameworks to pinpoint your unique value and move to a pricing strategy that better reflects your worth.
|Creating a Keynote Presentation Training Lynda.com|
The keynote is a special speech. As the cornerstone of an event, these talks tend to be longer than traditional speeches, more entertaining than strictly educational, and delivered to large audiences. If you've been asked to deliver a keynote, then this course can help by showing you how to plot out and deliver a lively, impactful presentation that drives your message home. Join Todd Dewett as he spells out how to structure your keynote, craft a compelling story, use emotions to enhance your overall message, and prepare to deliver on the day of the event.
|Creating Screen Capture Training Training Lynda.com|
Screen capture is a cost-effective and efficient way to create on-demand training. You can record videos to keep employees up to date on the latest software and systems, or educate students on complex topics. In this course, Oliver Schinkten walks through all the steps to prepare, record, edit, and deploy custom screen-capture training. He covers instructional design—planning and scripting your training—as well as the technical details of recording and editing the videos. He also shows how to upload the final videos to a learning management systems, online platforms like YouTube and Vimeo, or even to a LinkedIn Learning account. To follow along, you can download a free trial version of Camtasia or use any other screen capture software, such as SnagIt, Screenflick, ScreenFlow, and Screencast-O-Matic.
|Cultivating a Growth Mindset Training Lynda.com|
Mindset is a choice. People with a growth mindset—who choose to believe that talent and ability can grow—experience better performance, focus, and success. You have the power to change your mindset. The key is learning how to make the shift. This course shows you how. Executive coach Gemma Leigh Roberts introduces real-life examples of individuals and organizations who have successfully adopted a growth mindset, as well as the latest research from the fields of performance psychology. She boils down the lessons into practical advice you can apply to reach your own potential. Plus, get tips to stay motivated and help you navigate change successfully.
|Customer Development First Steps for Product Managers Training Lynda.com|
Without a clear understanding of your customer, you simply can't build a good product. That's why customer development is such a critical stage of product management. Customer development is the process of interviewing real users and building personas that summarize their wants and needs. In this course, you can learn the basics of customer development: finding users, preparing questions, and conducting interviews that reveal accurate insights. Instructors Cole Mercer and Evan Kimbrell also show how to use the information you've learned to build robust user personas and test and validate your product ideas.
|Customer Service: Handling Abusive Customers Training Lynda.com|
What is the best way to handle a customer who steps into dangerous territory? What strategies will help diffuse and refocus a bad interaction, and when is it appropriate to walk away? In this course, join customer service expert David Brownlee—the author of Rock Star Customer Service—as he shares real-life examples and actionable steps that can help you confidently handle abusive customers in a variety of contexts. Upon wrapping up this course, you'll have the knowledge you need to formulate a plan of action and navigate difficult customer service interactions with poise and professionalism.
|CySA+ Cert Prep: 5 Identity and Access Management Training Lynda.com|
Earning the CompTIA Cybersecurity Analyst (CySA+) certification indicates that you have a solid understanding of how to tackle cybersecurity threats using a behavioral analytics-based approach. In this course—the fifth installment in the CySA+ Cert Prep series—review key identity and access management concepts that can prepare you for the second part of domain four, Security Architecture and Tool Sets. Instructor Mike Chapple dives into the three major steps of the access management process—identification, authentication, and authorization; discusses different means of identification; and goes over discretionary and mandatory access controls. He also covers access control exploits, discussing watering hole attacks, impersonation attacks, session hijacking, and more.
We are now a CompTIA Content Publishing Partner. As such, we are able to offer CompTIA exam vouchers at a 10% discount. For more information on how to obtain this discount, please download these PDF instructions.
|CySA+ Cert Prep: The Basics Training Lynda.com|
CySA+ is a highly desirable, intermediate certification that shows you know how to prevent, detect, and combat a multitude of modern cybersecurity threats. This course provides an overview of the 2018 certification program. It kicks off the CySA+ Cert Prep series, which covers each domain of the exam in greater detail. Here, expert Mike Chapple reviews the various careers in IT security and the benefits of CySA+ certification. He goes over the four exam domains, and explains how to prepare for the exam and what to expect on testing day. Mike wraps up with a discussion of the CompTIA continuing education requirements.
|Data Science on Google Cloud Platform: Designing Data Warehouses Training Lynda.com|
Cloud computing brings unlimited scalability and elasticity to data science applications. Expertise in the major platforms, such as Google Cloud Platform (GCP), is essential to the IT professional. This course—one of a series by veteran cloud engineering specialist and data scientist Kumaran Ponnambalam—shows how to design and build data warehouses using GCP. Explore the different types of storage options available in GCP for files, relational data, documents, and big data, including Cloud SQL, Cloud Bigtable, and Cloud BigQuery. Then learn how to use one solution, BigQuery, to perform data storage and query operations, and review advanced use cases, such as working with partition tables and external data sources. Finally, learn best practices for table design, storage and query optimization, and monitoring of data warehouses in BigQuery.
<urn:uuid:db036072-2d7d-409d-8c8a-91dc0fbbdda3> | Have you ever experienced a feeling of fear, uneasiness, or dread that just won’t go away? Or perhaps you’ve experienced physical symptoms such as sweating, difficulty breathing, racing heart and more? If so, you’re not alone. Anxiety is something that many of us have felt at least once in our lives.
When it comes to getting a diagnosis for anxiety disorder and obtaining the appropriate treatment for it, it can be confusing to figure out what the ICD 10 code for this disorder is.
In this article, we’ll discuss the ICD 10 code for Anxiety Disorder Unspecified – F41.9 – and what it means. We’ll also discuss some of the treatments available for this disorder and how they can help alleviate symptoms of anxiety.
Overview of Anxiety Disorder, Unspecified
Are you feeling overwhelmed, tense, or exhausted for no real reason? Do your worries and fears take over your thoughts and leave you feeling out of control? If this sounds like you, you may have an anxiety disorder.
Code F41.9 is the ICD-10 code used to describe Anxiety Disorder, Unspecified. It’s a catch-all term that captures all the different types of anxiety disorders that don’t fit into specific categories.
Anxiety can affect anyone at any age and it can manifest in a variety of ways. Common symptoms include restlessness, trouble concentrating, irritability, and excessive worrying.
These can lead to physical symptoms like fatigue, headaches, muscle tension, and difficulty sleeping. It's important to note that these symptoms could be signs of other psychological or physical health issues as well. That's why it's important to talk to a healthcare professional for an accurate diagnosis if you're experiencing any of these symptoms.
What Is a Diagnosis Code (F41.9)?
If you have an anxiety disorder and you need to be diagnosed by a medical professional, diagnosis code F41.9 is the code for Anxiety Disorder, Unspecified. This code is part of the ICD-10 (International Classification of Diseases, Tenth Revision) system used in medical coding and billing.
A diagnosis code includes the following components:
- A letter that identifies the chapter of the classification (F covers mental and behavioral disorders)
- Two digits that identify the category within that chapter (41 is the category for other anxiety disorders)
- A decimal point followed by one or more characters that specify the exact condition (.9 means unspecified)
For example, with F41.9, the first three characters (F41) tell us that it's a mental health disorder in the anxiety disorder category, and the last digit (9) indicates that it's an unspecified type of anxiety disorder. It's important to note that this code does not provide information about any specific symptoms or treatments for anxiety disorder—it just gives an umbrella description for all types of anxiety disorders with no further specifications.
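To make that structure concrete, here is a small illustrative Python sketch that splits a code shaped like F41.9 into its category and subcategory parts; the pattern is a simplification for this example, not a full ICD-10 validator.

```python
import re

# Simplified pattern for codes shaped like F41.9: a chapter letter,
# a two-digit category, and an optional subcategory after the decimal point.
ICD10_PATTERN = re.compile(r"^([A-Z])(\d{2})(?:\.(\w{1,4}))?$")

def parse_icd10(code: str):
    """Split a code such as 'F41.9' into its category and subcategory parts."""
    match = ICD10_PATTERN.match(code)
    if not match:
        return None
    letter, category_digits, subcategory = match.groups()
    return {"category": letter + category_digits, "subcategory": subcategory}

print(parse_icd10("F41.9"))  # {'category': 'F41', 'subcategory': '9'}
print(parse_icd10("F41"))    # {'category': 'F41', 'subcategory': None}
```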
Causes and Symptoms of Anxiety Disorder Unspecified
Anxiety disorder unspecified is a condition that can affect your mental and physical well-being. It’s important to know the causes and symptoms of this condition so you can recognize when you, or someone you know, may be experiencing it.
The source of anxiety disorder is not known, but it’s believed to be linked to genetic, environmental, and psychological factors. Anxiety disorders often run in families, and certain types of environments may increase your risk for developing the condition. Traumatic experiences such as abuse and neglect can also trigger anxiety. Additionally, various medical conditions such as thyroid dysfunction or heart disease may contribute to anxiety.
Common symptoms of an anxiety disorder include: feeling on edge or restless; difficulty concentrating; trouble sleeping; feeling irritable; shortness of breath; increased heart rate; sweating; trembling or shaking; feeling sick to your stomach; difficulty making decisions; avoiding situations that make you anxious. If you experience any of these symptoms for more than a month, it’s important to seek help from a doctor or mental health professional.
Treatment Options for F41.9
If you have been diagnosed with F41.9, there are a few different treatment options you can explore. Generally speaking, treatment for anxiety disorders is tailored based on the individual’s condition, so it’s important to speak with your doctor about the best plan for you.
Medication is one of the most common treatments for anxiety disorder and can provide relief from symptoms by acting on certain chemicals in your brain. Antidepressants and benzodiazepines are popular choices that help reduce feelings of worry, fear, panic and other emotions associated with anxiety.
Psychotherapy is another common form of treatment for F41.9. These therapies focus on identifying and addressing underlying issues that may be causing or contributing to your anxiety. Cognitive-behavioral therapy (CBT) is a type of psychotherapy often used to treat anxiety disorders by helping people identify and address irrational thoughts that may lead to anxious behavior or emotions.
Complementary and Alternative Therapies
You may also want to consider alternative or complementary treatments such as exercise, yoga, meditation, acupuncture, mindfulness-based therapies and lifestyle changes like stress management. While these treatments may not be as well-studied as medication or psychotherapy, they can still have an impact on reducing the symptoms of F41.9 Anxiety Disorder, Unspecified.
Clinical Practice Terminology and Hypertension
If you've been diagnosed with Anxiety Disorder, Unspecified, your physician has likely assigned a code to your diagnosis. The ICD-10 diagnosis code for this disorder is F41.9; note that ICD-10 diagnosis codes are distinct from the CPT codes used for procedures. This code is used by healthcare providers to track diagnoses and treatments for this particular mental health issue.
By entering the coding system in your medical records, the physician can track related medical care, medications, hospital stays and other treatments over time. It’s used to keep detailed records of a patient’s care and also helps insurers accurately bill for treatments rendered.
In addition to using codes to help diagnose and track disorders such as Anxiety Disorder Unspecified (F41.9), healthcare practitioners can enter codes that indicate conditions related to hypertension or high blood pressure. Hypertension is a common condition associated with anxiety disorder unspecified, so tracking its levels is important in effectively managing this mental health issue over time.
So be sure to stay up to date with your physician about your anxiety disorder unspecified diagnosis and any co-existing conditions such as hypertension so that all parties involved can have the most accurate information possible for tracking progress in treatment going forward.
Cognitive-behavioral therapy (CBT) for anxiety disorders
If you’re reading this, then you might be one of the many people who suffer from anxiety disorder unspecified. Don’t worry, you’re not alone—it’s something that can be managed with help from your doctor and a few lifestyle changes.
One of the best treatments for anxiety disorder is cognitive-behavioral therapy (CBT). This therapy is a form of psychotherapy that focuses on how thoughts, beliefs, and behaviors influence your mood and behavior in relation to anxiety.
CBT helps patients learn new skills to manage their anxiety by developing coping strategies and changing unhealthy thought patterns. It also helps increase self-awareness so patients can better recognize triggers and warning signs for their anxiety.
The benefits of CBT for Anxiety Disorder, Unspecified
There are many benefits to CBT for those suffering from anxiety disorder unspecified:
- It helps to reduce levels of stress and anxiety symptoms.
- It teaches practical tips on managing physical symptoms associated with anxiety.
- It develops skills on how to think differently about situations or circumstances that may be causing distress or worry.
- It creates a better understanding of how emotions, thoughts, and behaviors are linked to anxiety.
- It helps identify underlying causes of anxious feelings and provides effective solutions to manage them in the long term.
- It empowers individuals to take control over their own mental health by providing them with helpful tools they can use in their everyday life.
Comorbidity of anxiety disorders with other mental health conditions
It is not uncommon for an Anxiety Disorder, Unspecified to co-exist with other mental health problems such as depression, substance use disorders, and eating disorders. The ICD 10 code for Anxiety Disorder, Unspecified (F41.9) provides an umbrella for these comorbid conditions and has been designed to be used in combination with other codes.
It is important to note that anxiety disorder symptoms may overlap with or mimic symptoms of many other psychological disorders which can make it difficult to accurately diagnose based only on observation. Differentiating between anxiety and depression can often be a challenge as both involve a similar range of emotions like fear, worry, and guilt.
Depression often includes feelings of low self-esteem, worthlessness, and lack of enjoyment in activities but sometimes the physical symptoms associated with depression such as fatigue, sleep disturbances, and appetite changes are missed when diagnosing Anxiety Disorder, Unspecified (F41.9).
Substance use disorder
Substance use disorder tends to have overlapping features and is often accompanied by mood issues such as anxiety or depression, which makes it hard to distinguish from anxiety disorders. People with substance use disorder may also report having difficulty getting out of bed in the morning, which can easily be confused with the feeling of being overwhelmed with stress that is associated with anxiety disorder.
Eating disorders can also overlap with symptoms of anxiety disorders such as restlessness, obsessive thoughts about food or body weight, and heightened sensitivity to physical sensations related to hunger or fullness. It is important that any diagnosis of an Anxiety Disorder, Unspecified include a diagnosis of the additional symptom clusters caused by eating disorder behaviors.
Understand ICD 10 Guidelines
It’s important to understand the guidelines around ICD 10 when diagnosing Anxiety Disorder, Unspecified. These codes are used for submitting insurance claims for payment for services rendered to patients diagnosed with this condition.
When diagnosing Anxiety Disorder, Unspecified, you must understand the importance of accuracy. The diagnosis code F41.9 should be used only when none of the more specific codes are applicable and there is no other identifiable mental disorder that better describes the patient’s presenting symptoms and history.
It’s essential that healthcare providers use the F41.9 code only if accurately supported by a patient’s history and presentation and provide their own documentation as to why none of the other more specific disorder codes apply. The physician must always document in detail why they are using Anxiety Disorder, Unspecified code in their diagnoses.
In conclusion, if you are experiencing symptoms of anxiety or fear, it is important to consult with a mental health professional to determine the cause and receive appropriate treatment. Diagnosis code F41.9 can help identify and classify anxiety disorder unspecified, so you can receive the necessary care.
Although anxiety can cause physical symptoms such as rapid heart rate, elevated blood pressure, and stress, fortunately, there is help available for those suffering from anxiety symptoms.
Treatment options vary and include cognitive behavioral therapy, relaxation techniques, and medications, depending on the severity of the condition. With the right care, you can experience relief from your symptoms and live a happier, healthier life. | 2 | 17 |
<urn:uuid:9a2780a1-6bee-4b75-89cf-5d918d196008> Worldwide, an ageing of the population can be observed. This imposes several problems on the societies concerned, where a growing share of elderly people needs to be taken care of by a shrinking younger population. To still be able to provide sufficient care, the use of technology is regarded as a solution. In this paper, we examine the special field of Ambient Assisted Living (AAL) as an application of technology in health care. AAL mainly focuses on enabling independent living at home for people in need of care through technology. In detail, we aim to give an overview of current applications, their end-users and their acceptance of AAL. We find that common applications of AAL are smart homes, sensors and robotics. The most common end-users of these applications are elderly people. They have a positive opinion of AAL in general but criticize the price of such products and want to be included in the development process.
Keywords: Ambient Assisted Living; Assistive Technology; Active Assisted living; Applications; End-Users; Acceptancy; Overview; Review
Abbreviations: AAL: Ambient Assisted Living; ICT: Information and Communication Technologies; PIR: Passive Infrared Sensors; BSN: Body Sensor Networks; ADL: Activities of Daily Living; EADL: Enhanced Activities of Daily Living; IADL: Instrumental Activities of Daily Living; IEC: International Electrotechnical Commission
Ambient Assisted Living (AAL) is a research area focused on enabling people with any kind of impairment to stay independent in their own home for as long as possible. To achieve this, Information and Communication Technologies (ICT) are used in various ways. The emergence of Ambient Assisted Living can be regarded as an answer to several global trends. AAL technologies are expected to solve the problems imposed by recent worldwide socio-economic developments. First among these is the ageing of the global population, where low birth rates stand opposite a growing share of elderly citizens with high life expectancies. At the same time, a decreasing supply of health care services results in rising costs for care. A general increase in chronic diseases and the wish of elderly or impaired people to stay in their familiar environment also put pressure on the current health care systems of many nations. Information and Communication Technologies have therefore become a field of research and are expected to solve or diminish the stated problems. Especially in the home environment, where AAL solutions are applied, the lack of formal caregivers can be mitigated. Visits to physicians can also become less frequent, because conditions might be detected early, and treatment costs can thus be decreased. As these global developments increase the pressure on health care systems, the application of AAL technology is being researched to ease these trends.
In this paper, we provide further background information on the causes that influenced the emergence of Ambient Assisted Living in recent research, as well as on the past evolution of these technologies. Furthermore, we conducted a literature review to give an overview of the following topics within the AAL domain. These topics are:
i. The current applications in the AAL domain are reviewed, with a focus on technologies that aim to increase the independence of their users. We conclude that smart home technology, stationary and wearable sensors, as well as different types of robots, are of special interest in this domain.
ii. The main end-users of these applications are investigated. We observe that AAL solutions address elderly people in particular, in comparison to people with other kinds of impairments. Formal and informal caregivers are only targeted indirectly.
iii. The acceptance of Ambient Assisted Living products among their users is examined. We find that the attitude towards these devices is in general positive. Nevertheless, the price, data security and an aversion towards certain sensor types remain concerns. User-centered design has also become a main topic for increasing the devices' acceptance.
By showing the main applications for users of AAL, examining the main user groups and assessing their acceptance, this paper aims to provide a general overview of the Ambient Assisted Living domain.
Background
This section regards the emergence of Ambient Assisted Living from a socio-economic perspective as well as its terminology and evolution. First, four socio-economic trends are described.
Ageing Population: According to the United Nations, the population is ageing worldwide. By 2050, the total number of people aged 65 and older is projected to surpass the group of adolescents aged between 15 and 24. This development is mainly a result of low fertility and mortality rates. Countries where these two factors apply are in the so-called third phase of demographic transition, in which population growth is minimal, zero or even negative. In comparison, the first phase is defined by low population growth due to high fertility and mortality rates, and the second phase by fast population growth with decreasing mortality but still high fertility. The third phase of this development can be seen in Europe, North America and Australia, and Latin America and Asia are expected to reach this phase by the end of the 21st century. Exceptions in Asia are Eastern and South-Eastern Asia, where the third phase is already expected in 2038. Japan deserves special mention: from 2010 to 2019, 2.6 million more people died than were born. With fewer people born, a society is ageing. At the same time, today's elderly show a higher life expectancy than former generations. In the EU, women lived an average of 84.3 years and men 77.7 years in 2015. By 2060, life expectancy is expected to rise by an additional 6.2 years for women and 7.2 years for men. An ageing population combined with higher life expectancy results in increasing pressure on the rest of society.
Increasing Dependencies: In a society, the non-working population depends on the working population. The ageing of society, higher life expectancies and lower fertility, and consequently fewer workers, therefore cause a rise in old-age dependency: fewer workers must support an increasing share of elderly people. This can be seen when comparing the so-called old-age dependency ratios. This ratio compares the number of elderly people (aged 65 years or more) per one hundred people of working age (aged between 20 and 65 years). In the EU this number is constantly rising: in 1950 there were only 16 elderly per 100 workers, in 2015 the number rose to 28, and by 2050 it is expected to reach 50. This development is especially strong in Europe, but some countries in Eastern and South-Eastern Asia are even expected to surpass a ratio of 50. As can be seen in Figure 1, the ratio might be as high as 70.9 in Japan by 2050 according to IMF data.
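To make the arithmetic behind this indicator explicit, the following minimal Python sketch computes the old-age dependency ratio as defined above; the input figures in the example are illustrative round numbers, not official statistics.

```python
def old_age_dependency_ratio(elderly: float, working_age: float) -> float:
    """Elderly people (65+) per 100 people of working age, as defined above."""
    return 100.0 * elderly / working_age

# Illustrative population counts in millions (not official statistics):
print(old_age_dependency_ratio(elderly=16.0, working_age=100.0))  # -> 16.0
print(old_age_dependency_ratio(elderly=50.0, working_age=100.0))  # -> 50.0
```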
A similar but less dramatic trend can be seen in the USA, Canada and Australia. There, the ratio will have surpassed 40 but stays below 50 until 2050. This trend of an increased need for support is all the more concerning when regarding how old age is typically spent. Even though life expectancies are rising, this does not mean that these additional years are spent in good health. For example, in the EU women live on average 64.2 healthy years followed by 19.4 years with disability; men live on average five years less and accordingly spend about five fewer years with disability. Also, an increase in chronic diseases such as cancer and cardiovascular disease among the elderly, as well as in mental illnesses, can be observed. A growing number of elderly people with high life expectancies and a general rise in chronic diseases challenge the supply of health care services.
Decreasing Supply in Health Care: As stated previously, the demand for health care services is rising, but due to the decreasing working population the supply might not be sufficient. The fall in the working-age population can above all be observed in countries in the third phase of demographic transition. There, the increasing demand for health care raises the burden on care providers. Informal caregivers such as relatives are especially affected; in the EU they account for 80% of the supplied care. People aged between 50 and 74 years play an important role here, since they are the primary caregivers of the oldest, those aged more than 85 years. But this group of people is shrinking as well. Next to the decrease in informal care, a general shortage of formal caregivers can be observed. This is not just due to the smaller share of working-age population but also to the situation of the formal care sector, which is characterized by low job satisfaction, high turnover and few career opportunities. In Germany, for example, these factors might add up to a shortage of 140,000 to 200,000 formal caregivers in 2025. A high demand for health care services that might not be met leads to rising costs for these services.
Higher Costs for Care: As a consequence of higher demand and lower supply, health care costs are rising. A longer-living population with generally more diseases requires health care services more often over a lifetime. At the same time, in countries in the third phase of demographic transition, fewer people are paying into public health care systems. Where more people are in need of care but fewer people are financing it, the society's overall financial burden increases. In addition, a general increase in treatment costs can be observed; for example, the cost of cancer treatment rose from 3,036 pounds in 1995-1999 to 35,383 pounds in 2010-2014. The described demographic situation, accompanied by a general rise in treatment costs, drives an overall increase in health care costs.
Ageing at Home: Due to the described demographic trends, more health care is demanded and needed, especially for the elderly. At the same time, people in need of care prefer ambulatory over stationary care. Staying in a stationary care institution is often associated by patients with a loss of independence, whereas in a familiar home environment people feel more responsible for themselves. The feeling of still being able to make their own decisions to some extent is highly valued by most of the elderly. This trend of "ageing in place" is also preferable for service providers: a care residence is more costly, and not only for the patient. Care at home can also be more appropriate and can be provided and coordinated more easily by the service.
Terminology: The term Ambient Assisted Living was first mentioned within the framework programmes for research of the European Union in 2004, but no universal definition of the topic has been adopted yet. A conception of the AAL domain can therefore be achieved by distinguishing it from other ICT- and health-related research and by looking at its evolution. Ambient Assisted Living can be regarded as a special field of application in the digitization of health care, so-called e-health. ICT in the field of e-health can be applied in different forms: health telematics, for example, uses telecommunication and informatics to communicate with remote patients, whereas telemedicine uses these technologies to provide concrete health services. In contrast, Ambient Assisted Living focusses on the user's domestic environment. By using assistive technologies, the resident should be enabled to live independently in his home. AAL solutions can range from the housing itself and the infrastructure, such as sensors and actuators, to health care services. It is due to this diversity of ways in which ICT can support the resident that no universal definition of AAL has been adopted yet. A common view regards the emergence of Ambient Assisted Living technologies as the result of the evolution of general assistive technologies driven by technological progress. Assistive technologies can be any device or system that enables users to live independently in their home by supporting tasks they would not be able to do on their own. These devices or systems can be simple walking aids such as sticks, wheelchairs, adjustable beds or alarms for home security. With general technological progress these technologies became more advanced.
Especially the development of ambient intelligence in combination with assistive technologies played a vital role in the emergence of AAL. Ambient intelligence uses recent advances in information technology to create digital environments that act as an electronic butler for the resident. To achieve this, it integrates sensor networks to collect user and environmental data. The system reasons over the collected data and then performs actions that proactively benefit the resident in daily life, but in an unobtrusive manner. If this paradigm of context-aware computing, which blends indistinguishably into the environment, is applied to the development of assistive technologies, the result is called Ambient Assisted Living. This view of the nature of AAL devices is especially common today because it makes use of recent technological advances, for example in artificial intelligence, machine learning and ubiquitous computing.
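As a purely illustrative sketch of this sense-reason-act paradigm (the sensor names, thresholds and the single rule below are hypothetical), a context-aware assistant can be reduced to a loop that collects readings, infers a situation and triggers an unobtrusive action:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Reading:
    sensor: str   # e.g. "motion_bedroom" or "ambient_light" (hypothetical names)
    value: float

def infer_context(readings: List[Reading]) -> str:
    """Reasoning step, reduced to a single hand-written rule for illustration."""
    by_name = {r.sensor: r.value for r in readings}
    if by_name.get("motion_bedroom", 0.0) > 0.0 and by_name.get("ambient_light", 1.0) < 0.2:
        return "resident_moving_in_dark"
    return "nothing_to_do"

def act(context: str) -> None:
    """Proactive but unobtrusive action derived from the inferred context."""
    if context == "resident_moving_in_dark":
        print("switching on low-level orientation light")

# One pass of the sense-reason-act loop with hypothetical sensor readings:
act(infer_context([Reading("motion_bedroom", 1.0), Reading("ambient_light", 0.05)]))
```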
Ambient intelligence is the most recent paradigm of Ambient Assisted Living technologies, but AAL devices existed before it. The emergence of Ambient Assisted Living technologies can generally be categorized into three generations:
i. The first AAL systems focused mainly on alarms in the form of wearable buttons. In the case of an emergency this button can be pressed by the user to call for help.
ii. In the second generation of devices the systems should not be dependent on the users’ interaction but rather detect an emergency on their own. By using sensors, e.g. falls or gas leaks can be detected automatically.
iii. The third generation focusses not only on the detection and report of an incident but tries to prevent them.
Here the paradigm of ambient intelligence comes into use. Ambient Assisted Living mainly differs from other e-health technologies in its focus on increasing the independence of its users in their home. AAL technologies developed over three generations, of which the most recent benefits from the advent of ambient intelligence. Research in Ambient Assisted Living has also become of global interest because of the stated socio-economic factors. With this background given, the following sections of this paper aim to give an overview of common applications of this technology, its users, and their acceptance of the devices.
Materials and Methods
In this paper we aim at giving an overview of relevant topics within the AAL domain. Therefore, three relevant topics have been defined and formulated as the research questions of this paper.
RQ1 What are the current applications of Ambient Assisted Living?
RQ2 Which end-users are the target group of Ambient Assisted Living applications?
RQ3 How is the acceptance of Ambient Assisted Living among its end-users?
To provide this overview, we conducted a literature review in the following search engines: IEEE, PubMed, Google Scholar and ScienceDirect. For RQ1, only literature that fulfilled the following criteria was selected as primary literature: published between 2015 and 2020, and including at least one of the terms Ambient Assisted Living, Active Assisted Living, Assistive Technologies, review or overview. Supporting, secondary literature was selected based on the primary literature or through free searches in the stated engines. To answer RQ2 and RQ3, the primary literature was used or a free search was conducted as well. In this paper, we aim to give an overview of AAL by reviewing overviews of this domain; in this way it is possible to identify relevant topics that recur in these different publications. This distinguishes this paper from other overviews that review individual AAL projects to identify trends, or from publications providing a list of AAL projects limited to a few topics.
Results and Discussion
In this section, we provide the results of the literature review for the individual research questions.
RQ1 What are the current applications of Ambient Assisted Living?
To answer the first research question, several overviews of the AAL domain were reviewed. Even though the applications of AAL that enable independent living for its users are diverse, three main topics were identified: smart homes, sensing and robotics. Within these topics several approaches exist; they are highlighted through project examples in this paper.
Smart Home: In the regarded overviews, smart home technology is often described as an AAL application since it enables independent living in the domestic environment. The reason given is the declining health status of elderly or disabled people over time, accompanied by the wish to stay independent in a familiar surrounding; ICT solutions that integrate into the home provide an answer to this. One review has even observed that, within the AAL domain, special home accessories and furniture are the most common devices. For that reason, smart home technology can be regarded as a major contributor to Ambient Assisted Living and to enabling more independence for its users. The importance of this technology can also be seen in the fact that the digitization of the domestic environment is becoming more and more a focus of patients' health management. The smart home is described as a living environment that is digitized by sensors and smart appliances and thus forms a network capable of delivering automated services to the user based on his lifestyle. To deliver these services, the smart home first uses various devices to monitor the activities of the resident. It is then able to analyze the collected data about the resident's activities in his environment and, based on this analysis, the digital environment can offer services tailored to the resident and assist him in his daily life. This is achieved through various components: sensors (e.g. motion sensors), household appliances (e.g. lights), actuators (e.g. door openers), security (e.g. password locks) and communication (e.g. human-machine interfaces).
The independence of a smart home resident in AAL can be achieved in various ways. A typical application in this context can be seen in Figure 2. Here, the elderly person's home is equipped with a sensor network that monitors the resident. The sensor data is transmitted via WiFi to a base station that connects via a gateway to the homes of peers, family or friends and to health care professionals such as carers and doctors. In this way the health status of the resident can be continuously supervised, and in case of an incident or emergency, care can be provided quickly via direct communication among the participants (a simplified sketch of this pipeline is given after the list below). Al Shaqi R et al. describe three general categories that distinguish the ways in which the resident of a smart home is assisted in his daily life, e.g. through actuators or by contributing to health care management. These three categories of AAL smart homes are:
i. Smart homes that assist daily and social activities, for example by providing orientation aids.
ii. Smart homes that improve the resident's safety, for instance by predicting falls.
iii. Smart homes that assess the resident's health status through vital parameters. The smart home's ability to support daily tasks is further described in the literature.
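The sensor-to-caregiver pipeline described above can be illustrated, in strongly simplified form, by the following Python sketch; the monitored parameters, limits and the notification step are hypothetical placeholders rather than clinical guidance.

```python
from typing import Optional

# Hypothetical limits for two monitored parameters (illustration only).
LIMITS = {"heart_rate_bpm": (40.0, 120.0), "room_temp_c": (16.0, 30.0)}

def check_sample(parameter: str, value: float) -> Optional[str]:
    """Base-station logic: return an alert message if a limit is violated."""
    low, high = LIMITS[parameter]
    if value < low or value > high:
        return f"ALERT: {parameter}={value} outside [{low}, {high}]"
    return None

def notify_caregiver(message: str) -> None:
    """Stands in for the gateway that forwards alerts to carers or doctors."""
    print("forwarding to caregiver:", message)

# Two incoming samples; only the out-of-range one is forwarded.
for parameter, value in [("heart_rate_bpm", 135.0), ("room_temp_c", 21.5)]:
    alert = check_sample(parameter, value)
    if alert:
        notify_caregiver(alert)
```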
There, two components are distinguished. First, there is general health monitoring, which comprises the systems and devices used to assess the inhabitant's health status. Second, there are platforms that aim to integrate the various health systems, ensure interoperability and thus create the smart home environment. According to the premise of AAL, the assistance of the resident through smart homes must be intelligent, unobtrusive and ubiquitous. This non-disrupting nature of domestic monitoring systems is also emphasized in [21,22]. The medical status can be accessed by the responsible party at any time without diminishing the user's independence or interrupting his daily living. At the same time the users stay connected and can communicate with the monitoring entity in case of need, and vice versa.
To provide the communication between the various entities (sensors, actuators, people) in a smart home, the Internet of Things (IoT) technology is the enabler for home assistance. As described by Pawar AB et al., this technology interconnects sensing objects in the environment, such as sensors, with computational elements that evaluate the collected data. A service element then offers the corresponding assistance to the resident. Concerning AAL, Internet of Things technology can monitor single or multiple conditions of the resident, such as changing heart rates, or assist with medication management. IoT for AAL can therefore be seen as a problem solver in the medical home environment. Worldwide there are numerous projects regarding smart home technology in the field of Ambient Assisted Living. In the following, three projects from Europe, America and Asia are presented.
A. CoachMyLife (Europe):
i. Directed at the elderly to assist with daily activities at home
ii. The senior is equipped with a smart watch; Bluetooth beacons are deployed in the home
iii. Machine learning is used to identify the desired task based on the senior's position relative to the beacons
iv. Help to fulfill the task is sent to a tablet in the form of single steps
B. Vital Radio (America):
i. Smart home environment, monitors the inhabitant’s breathing and heart rate remotely without body contact
ii. By analyzing variations of wireless signals caused by the person's chest movement and skin vibrations
iii. Assessing health status unobtrusively
C. TRON Intelligent House (Asia):
i. Japanese effort to develop the computer-based society of the 21st century.
ii. House equipped with 380 computers
iii. Used the TRON architecture to interconnect the computer devices.
iv. Several AAL applications: e.g. a toilet that can analyze urine and blood pressure, and motion sensors for lighting to prevent falls.
The smart home technology monitors the resident and provides custom services to him. In the AAL environment especially the assistance with daily activities, safety support, health monitoring and the unobtrusive nature of smart homes are relevant.
Sensing: Sensing is a vital part of Ambient Assisted Living, as it gathers information about the elderly or incapacitated person. This information is useful for examining the health status through vital signs, monitoring the inhabitant's daily life or preventing accidents such as falls. A diversity of available sensors forms the basis of an Ambient Assisted Living environment; for that reason, the different sensors and their combination are a topic discussed in AAL reviews. The sensing element in Ambient Assisted Living is of special importance because it enables the monitoring of a person without the need for his intervention. This represents a major advantage of AAL over traditional health care devices, which are often too difficult for impaired people to use. The monitoring task can be accomplished by a wide variety of sensors. Two general categories of sensors are distinguished: static sensors that monitor the user from a fixed location, and mobile sensors that can also be worn. Their overall aim is to recognize human activity, which is then processed to provide AAL services.
According to this study, the various sensors are commonly used for home safety, home automation, activity monitoring, fall detection, localization and tracking and health status monitoring. This study also provides an overview of sensors that are widely used in the Ambient Assisted Living domain.
Therefore, three different types of sensors are distinguished.
a. Passive infrared sensors (PIR). These are usually fixed at a certain position. They are able to detect human motion in the environment. Thereby it is possible for example to analyze the human activity levels, detect deviations from normal behavior or intruders. The PIR sensor can also be used to recognize if a person has fallen.
b. Vibration and acoustic sensors. Vibration is monitored by accelerometers, which are usually used as wearable fall detection devices. Nevertheless, vibration sensors can also be static, e.g. by implementing them into the flooring. In that way they are able to analyze floor vibration and detect activities such as walking or running; if a fall occurs, this is also detected. Acoustic sensors can be used for the same purpose.
c. Camera sensors. Like the other systems, these are able to detect certain activities of the user and deviations from them. In contrast to the other sensors, cameras are able to supervise a wide range of activities. Cameras are commonly deployed in a fixed position, which usually raises privacy concerns.
Instead, the use of wearable body cameras is also possible. In this way, pictures of the user himself are not directly taken, whereas the detection of e.g. falls remains practicable. Another overview also distinguishes these three different ways of sensing: through wearable devices such as accelerometers, through mostly static motion systems, or through vision systems that are able to classify activities based on video segments. Furthermore, it describes the sensing element as a sub-system of an activity recognition system and distinguishes between direct sensing, which concerns the monitored person, and indirect sensing, which focusses on environmental parameters. Different types of measurement on the user are also distinguishable.
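A very common and deliberately simple approach to accelerometer-based fall detection, as mentioned for wearable devices above, is to watch the magnitude of the acceleration vector for an impact-like spike; the threshold and sample values below are hypothetical and chosen only for illustration.

```python
import math
from typing import List, Tuple

IMPACT_THRESHOLD_G = 2.5  # hypothetical threshold; real systems are tuned and validated

def magnitude(ax: float, ay: float, az: float) -> float:
    """Magnitude of the 3-axis acceleration vector in g."""
    return math.sqrt(ax * ax + ay * ay + az * az)

def possible_fall(samples: List[Tuple[float, float, float]]) -> bool:
    """Flag a possible fall if any sample exceeds the impact threshold."""
    return any(magnitude(*s) > IMPACT_THRESHOLD_G for s in samples)

# Quiet standing (about 1 g) followed by a hypothetical impact spike:
print(possible_fall([(0.0, 0.0, 1.0), (0.1, 0.2, 3.1)]))  # -> True
```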
According to Cedillo P. et al., the most common devices focus on measuring the heartbeat, blood pressure, temperature or falls. A list of various sensor types and their measurements, either ambient in a smart home or wearable and mobile, is also given in the literature. Wearable sensors in particular are of interest in current AAL applications. Because of their miniaturization they can be integrated seamlessly, for example into the clothing of a person, and thus enable unobtrusive monitoring. In contrast to other mobile sensors, such as those in smartphones, the sensing here becomes invisible. By integrating a variety of such miniature sensors on the user's body, so-called body sensor networks (BSN) can be implemented; thus it is possible to assess a holistic health picture of an AAL user. Next to health monitoring, fall detection systems and the unobtrusive monitoring of daily activities are described as important applications of wearable sensors within the AAL domain. The applications of sensors in AAL are diverse: they can be part of a smart home or be worn directly on the body. In the following, three projects are presented that use different sensor types to monitor the AAL user.
A. ALMA (Europe):
i. targets people with mobility impairments and offers navigation and orientation help.
ii. uses radio frequency emitters for localization of people and objects, also smart cameras for indoor and outdoor localization, environmental monitoring and for assessing situations.
B. Bioharness (America):
i. lightweight wearable system, carried around the chest
ii. monitors several physiological parameters such as posture, acceleration or ECG.
iii. offers possibility to view the data live or log it
iv. used in an AAL environment to monitor vital parameters.
C. Ubiquitous home (Asia):
i. Japanese smart home project
ii. monitors the resident by using a variety of sensors
iii. passive infrared (PIR) sensors, cameras, microphones, pressure sensors, and radio-frequency identification (RFID).
Sensing is vital for AAL: it monitors the AAL user, and services are provided based on this data. Sensing is achieved through wearable or static sensors, which can monitor the person either directly or indirectly through the environment. Particularly the miniaturization of sensors enables unobtrusive monitoring. Common measurements of these sensors are heartbeat, blood pressure, temperature and falls.
Robotics: In recent years, the deployment of robotics in the Ambient Assisted Living environment has also become a topic. These robots can assist in various ways, ranging from assistance with the tasks of daily living to improving social interaction or providing entertainment. One review gives an overview of the diverse possibilities of how robots can be integrated to aid the life of impaired people. In general, robots are able to address physical, social or cognitive impairments. In these fields the most common solutions focus on mobility and self-maintenance, for example on feeding, bathing and grooming. Such service robots help incapacitated people with functional daily activities and thus ensure their independence. Other service robots provide cognitive aid, for example by reminding the user to take the correct medicine or to follow a diet. In addition to safety purposes, some robots are also designed for security and emergency intervention.
They are able to assess risk situations, for example when a fall has occurred, and then notify a caregiver. Companion robots are researched to improve a patient's wellbeing through human-machine interaction. In this way, for example, people with a cognitive impairment can be stimulated through interaction with the robot. The study names another group of robots that provide social, educational or entertainment services; this group is called interactive simulation robots and focusses on immersive and realistic user experiences. Another publication also takes up the importance of companion robots for people with cognitive impairments: they might not cure these impairments but offer stimulation through personal health management, information provision and entertainment. Robots are further mentioned as one of the most significant developments in assistive technologies, particularly for their ability to monitor and engage people in social activities. Several projects in Ambient Assisted Living that use robots are also described, for example robots that move objects or present food and thus assist functional daily activities, while others enhance the communication between the person and a caregiver or improve social activities and entertainment. One review has classified robotic projects in the AAL domain into several categories based on their assistive focus.
The first category comprises service and companion robots that serve as Electronic Aids to Daily Living (EADLs). These robots can assist with specific tasks such as bathing or serve as reminders for activities such as following a diet. This category also includes cognitive orthotic systems that can help with social interactions. Further categories include health robots, e.g. for managing health data, and intelligent physical movement aids.
In contrast to this, another classification of robots is advocated, based on the complexity of the task they assist with.
i. On the first level robots help with general activities of daily living (ADL). Most are designed to move objects or for feeding, bathing, grooming tasks.
ii. More complex robot systems care about instrumental activities of daily living (IADL) such as housekeeping, shopping or meal preparation.
iii. The third level comprises service and companion robots giving aid with enhanced activities of daily living (EADL) like hobbies and learning. Here service robots aim to improve interaction and companion robots the emotional wellbeing by offering companionship.
Through robotics, the AAL user can thus be assisted in diverse ways: different types of impairments can be addressed and tasks with different levels of complexity can be supported. In the following, three AAL robotic projects are also presented.
A. ALIAS (Europe):
i. robotic system for elderly living alone
ii. monitor users, provide cognitive assistance, and promote social inclusion
iii. observes acceptance of the robot by the user, given a cognitive user interface
iv. robot’s behavior is proactive ensuring the user stays engaged in his surroundings
v. training and preservation of the user’s mental functions
B. Pearl Robot (America):
i. robotic system providing its users’ cognitive assistance in form of reminders and mobility aid
ii. uses speech synthesis, a display and a moving head for user interaction
C. Paro (Asia):
i. Japanese therapeutic robot in form of a white seal
ii. aims to bring the benefits of animal therapy to a domestic environment
iii. Paro’s lively behavior reduces the stress of patients and caregivers, stimulates interaction between them.
iv. improves the user's overall psychological wellbeing
Robots in the AAL domain are able to assist users with physical, social and cognitive impairments. Various types of robots exist that might function as aids for specific activities, as cognitive aids or as health managers. Based on the complexity of the assisted task, robots for general activities of daily living, instrumental activities of daily living and enhanced activities of daily living are distinguished.
RQ2 Which end-users are the target group of Ambient Assisted Living applications?
Ambient Assisted Living solutions are in general able to restore the independence of people wherever it is lost. Still, current research and applications evolve mostly around the specific target group of the elderly. In the following section we provide insight into the targeted user groups of AAL devices and into which loss of independence makes a deployment of AAL devices useful. The socio-economic developments of an ageing society, which imposes a higher burden on younger generations, and the increasing costs for health care, accompanied by a general increase in age-related diseases, demand solutions. These problems, which especially developed regions such as North America and Europe are facing, are to be dealt with through the use of information and communication technologies. Ambient Assisted Living technologies therefore receive tremendous funding in the expectation of providing solutions for an ageing society. Even though these systems are not necessarily exclusive to elderly people with some kind of impairment, they are mostly designed for them.
As stated in the literature, the majority of AAL projects focus on the elderly as the end user; only a minority concern disabled people, children, pregnant women or others. An overview of the current situation in AAL research likewise describes demographic change as a key driver of AAL developments. In past projects, particularly the safety of seniors has been addressed, for example through fall detection systems. Even though there is a diversity of Ambient Assisted Living solutions, only a few have found wide adoption. The reason for this is that the target group of AAL systems, the elderly, does not see a purpose in integrating these products into their lives. It is therefore argued that younger people must already be a target group of AAL developments. Generally, Ambient Assisted Living products can also be used by younger and non-impaired people to assist their lifestyle, for example for fitness tracking or typical smart home applications. The early adoption of these solutions makes it easier to use them later in a senior-centered context. For instance, fitness trackers can then serve as health monitoring devices for caregivers, or automatic light controls can provide orientation at night.
According to Cedillo P. et al., the end-users, mostly elderly people, are the main focus of AAL developments. Still, caring relatives or professional caregivers are a potential target group of Ambient Assisted Living products. Informal caregivers such as relatives in particular form a big pillar of health care systems; technologies that provide assistance to incapacitated people therefore benefit informal caregivers indirectly. The reduction of their burden through assistive technologies enables impaired people to stay independent in their home for longer. This also holds true for care professionals: regarding the general shortage of this personnel, AAL technologies are able to provide relief. Next to looking at the target group of current AAL solutions, it is important to regard the degree of independence loss that would make the use of these systems rational. Usually, the independence of a person is assessed through an observation of the activities of daily living: whether a person can live on his own thus depends on the level of assistance he needs with daily tasks like bathing or eating. Lists of activities that are essential for daily life have been proposed by several contributions; they do not only regard the field of Ambient Assisted Living but also the need for care in general [34-36]. Commonly these activities consist of two groups. The independence assessment based on activities was extended by Lawton and Brody in 1969 to the instrumental activities of daily living (IADL).
Similarly to the Barthel Index, it contains a scoring list to assess a person's independence, this time including the functional skills necessary to live in a community. The listed activities are more complex than the basic activities of daily living and account for tasks such as using a telephone, shopping, housekeeping and the ability to handle finances. Based on the assessment of these two types of activities, the general need for care of a person is determined. In many health care systems these assessments decide on a person's extent of access to health care services and support. This might also decide on access to Ambient Assisted Living technologies, as most people are not able to afford these systems on their own and rely on the national health care system. One author argues that, to enable an independent life at home, the observation of usual and regular activities is not sufficient; instead, he proposes to also assess the cognitive and communicative functions of a person, or for example the ability to decide on a daily routine by himself.
This proposal widens the notion of care neediness. In this way Ambient Assisted Living products could become accessible to a broader audience; for example, companion robots could be accessible to people with only mild cognitive impairments. The International Electrotechnical Commission (IEC) has developed several use cases for assisted living technologies and has mapped them to a person's level of need.
i. On the first level, independently living persons with no need for care are also proposed as a potential target group of AAL, for example by assisting the self-management of their health status.
ii. On the level where some living assistance is needed, the AAL system is supervised by the user and only occasionally intervenes, for example in case of a severe emergency.
iii. The next level concerns the instrumental activities of daily living. Here, permanent assistance is needed, but the user still interacts with the system.
iv. The highest level of assistance regards the basic activities of daily living. In this case the system should be able to act autonomously; the monitoring of activities is then not enough, and active assistance is needed.
This proposal also mainly concerns usual and recurring activities and does not directly mention cognitive or psychological impairments. Still, it states that a deployment of AAL technologies would also be beneficial for people without impairment, for self-management purposes or in emergency situations. Thus, this view goes beyond the general assessment of the need for care.
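To illustrate how such an activity-based assessment can be mapped to levels of need, the following sketch sums a simplified, hypothetical ADL checklist (it is not the validated Barthel or Katz instrument) and assigns a coarse level of assistance loosely analogous to the IEC levels above; all item weights and thresholds are arbitrary examples.

```python
from typing import Set

# Hypothetical item weights loosely inspired by ADL checklists; not a validated instrument.
ADL_ITEMS = {"feeding": 10, "bathing": 5, "dressing": 10, "toileting": 10, "mobility": 15}

def adl_score(independent_items: Set[str]) -> int:
    """Sum the weights of all activities the person can perform without help."""
    return sum(weight for item, weight in ADL_ITEMS.items() if item in independent_items)

def assistance_level(score: int) -> str:
    """Map the score to a coarse level of need (thresholds are arbitrary examples)."""
    max_score = sum(ADL_ITEMS.values())
    if score == max_score:
        return "independent, self-management support only"
    if score >= 0.6 * max_score:
        return "occasional living assistance"
    return "permanent or autonomous assistance"

print(assistance_level(adl_score({"feeding", "dressing", "toileting", "mobility"})))
```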
In conclusion, the main target group of AAL systems is the elderly end user; in fewer cases the end users are disabled people, children or pregnant women. Formal and informal caregivers are not the primary focus of Ambient Assisted Living technologies, yet they benefit from them indirectly. Nevertheless, AAL devices are not exclusive to a certain user group but depend on the assessment of the need for care. This assessment is usually based on the need for help with basic activities of daily living or instrumental activities of daily living, but people with a need for communicative or social aid might also be users of AAL technologies. Furthermore, the deployment of AAL products for people without any kind of impairment is conceivable, for example to increase the acceptance of these technologies once an impairment has occurred.
RQ3 How is the acceptance of Ambient Assisted Living among its end-users?
The wide adoption of AAL systems is highly dependent on the acceptance of these systems by the end-user. This is especially important since the target group of AAL consists mostly of seniors, and with advancing age the use of new technologies imposes increasing difficulties on the elderly. Therefore, the development of AAL devices must be centered on the end-user's perspective. Already in 1999, Cowan D and Turner Smith A stated that assistive technologies in general are not adopted by users when they impose a great burden. This burden can also be social, because the devices are seen as undesirable in the user's social environment. They therefore state that the development of assistive technologies should be based on what the user wants and not just on the need from a medical perspective. Beyond this, the acceptance of AAL products, as a part of assistive technologies, depends on various aspects. Weiß C et al. describe several of them that apply to general innovations and assistive technologies: the overall acceptance of a product is on the one hand determined by the general attitude towards the product, for example regarding its cost.
On the other hand, it is determined by the customers’ specific behavior with the product. Concerning the acceptance of assistive technologies, it refers to McCreadie and Tinker. According to them, the users’ feeling toward these products is influenced by:
i. their subjective need for care
ii. their supply and costs
iii. the product properties.
Furthermore, a positive user experience, a compelling design and the integration into the personal space decide on a product's acceptance. Another important aspect is the amount of learning required to handle the product. In surveys of their target group, Ambient Assisted Living devices received overall positive feedback, owing to the hope for independence and safety that many people associate with this kind of technology. A major aspect, especially for older people, is however the cost-benefit ratio of AAL products: in general, the willingness to use these products increases if the subjective need for assistance rises, and high accessibility, a low price and reliable functioning of the devices are beneficial. Even though the elderly are the main focus group of Ambient Assisted Living solutions, the acceptance of informal and formal caregivers is also important, particularly in a care-related environment. For professional caregivers the acceptance is generally lower: many of them regard the technologies as anti-human in the care context, where human interaction is highly valued, and they are also concerned about losing their jobs to automation.
The highest acceptance of AAL was gained by safety and wellness products such as intruder detection. Low acceptance was given to products that use vision-based sensors or microphones within the home. Although the general willingness of the elderly to deploy AAL devices is high, they fear decreasing interaction with other people as a result of these technologies; the data security of the users is also a topic of concern. Past developments in AAL have been done exclusively by engineers; therefore, single products were not appealing to seniors, in contrast to the positive feeling towards AAL in general. As a result, recent solutions are increasingly developed together with the elderly end-user. Through this kind of user-centered design, AAL developments are becoming more attractive for the target group, making a broader adoption of AAL solutions possible. One acceptance study for AAL devices was conducted in Germany, where people aged between 48 and 84 years were questioned. The survey described several typical features of AAL systems, concerning for example general functions or the ways of data transmission.
In conclusion, the study also reports an overall positive attitude towards Ambient Assisted Living technologies among the interviewed. The feedback was especially positive for safety functions such as intruder detection, for which the questioned were willing to pay more than for wellness functions. Nevertheless, the overall willingness to pay for AAL devices is low, and low maintenance costs and energy consumption are demanded.
Furthermore, the study concludes that the questioned do not want to align their daily routine to these systems. They are also skeptical about whether a device can identify its user and about the threat of data fraud. Comparing the different technologies, the interviewed were least skeptical towards mobile devices, followed by stationary ones, whereas camera-based systems were refused. People who had already used AAL devices also stated that they were confused by their operation to some extent.
Further constraints hindering a broader deployment of AAL technologies are listed in the literature. They comprise: the lack of skills to use the devices, uncertainty about their benefits, disabilities that might hinder the intended use, and the high costs of some systems. Data security is also a concern of AAL users. Even though these constraints exist today, it is expected that coming generations will have different technical capabilities and thus their acceptance of AAL devices might be higher than today. Still, it is necessary to examine how AAL devices need to be designed around the user and which properties are perceived as most enjoyable. In one project, the acceptance of different ambient notification systems was examined to assess which systems are least obtrusive and most favorable in the users' daily life. The project's general aim is to support the AAL users' daily activities by giving assistance through notifications, e.g. a reminder to drink something. Warmth as a form of notification was used in a scarf and rated especially high, because it provided a positive feeling. The use of light as a notification to water plants was also rated high, because the light was used in a non-obtrusive way. Auditory signals received less acceptance because they were reminiscent of alarms; using ambient sounds such as bird noises as notification was rated higher instead.
Another publication presents an acceptance study for a user interface that is used in the AAL environment to control several functions of a smart home. The system is a touchscreen PC with an internet connection. The user interface was developed together with the future users, who are mainly elderly people, and the system was given the human-like name PAUL. Because of the easy-to-use interface and the humanization of the system, it is regarded more as a roommate than as a technical system; for that reason, the connected AAL services are also not perceived as obtrusive or as surveillance. The acceptance of AAL devices thus depends on various factors. Especially their subjectively perceived usefulness and the price are decisive for the adoption of an AAL device. Older people are in general positive towards AAL because of the hope for more independence, yet the willingness to pay is low. Even though AAL might also ease the life of caregivers, informal caregivers fear diminishing human interaction due to this technology.
Among the mostly elderly end-users, security devices in particular were rated positively. A constraint on the broader adoption of AAL is the focus on the mere need for care during product development; instead, the end-user's perspective should be adopted. The example of PAUL shows that user-centered design makes it possible for the technology to be perceived as non-obtrusive or even human-like. In this way the complexity of AAL devices can be reduced, and they can be adopted even by elderly generations with less technical capability than future ones.
Threats to Validity
In this paper we aimed at giving an overview of three topics within the AAL domain. We are aware of several threats to the validity of this overview. Firstly, the selected literature was chosen based on three keywords directly connected to AAL; using a broader variety of keywords, a less narrow overview of the AAL domain could have been achieved. Secondly, literature was included when it regarded the three geographic regions Europe, Asia and North America. This choice was based on the necessity of AAL in these regions due to the demographic transition; by defining other motives for AAL, another impression of this field could have been gained.
Conclusion
In this paper, we intended to give an overview of the Ambient Assisted Living domain. We first outlined the necessity for the development of AAL solutions, differentiated the term Ambient Assisted Living from other e-health domains and gave insight into the evolution of AAL devices. To provide an overall picture of the field of AAL, the three topics of current applications, end-users and acceptance were defined as relevant and investigated in this paper. Regarding these three topics we conclude:
i. Various applications exist in the AAL domain to enable independent living in the users’ homes. The most common application is the smart home. To provide custom AAL services various sensors are deployed. Also, robots are able to assist the resident in multiple ways.
ii. Due to the ageing society, mostly elderly people are targeted by AAL devices. The devices are also targeted at a specific loss of independence: next to the activities of daily living, AAL assists with social or cognitive impairments. Non-impaired people might also be end-users of AAL.
iii. The acceptance of AAL among the mostly elderly end-users is mainly positive. The price and the disregard of user needs during development are the main hindrances to a broader adoption; a user-centered design is a solution for that.
References
- (2019) World Population Prospects 2019, Department of Economic and Social Affairs. United Nations 2019 141: 49-78.
- Natalie Greene Taylor, B Subramaniam M, Waugh A (2015) Transforming the Future of Ageing. SAPEA: 294.
- (2017) Asia: At Risk of Growing Old before Becoming Rich?’, Regional economic outlook. Asia and Pacific : preparing for choppy seas. International Monetary Fund (April).
- (2020) World Population Prospects - Population Division - United Nations (2020) population.un.org.
- Berger H, Dabla Norris E, Sun Y (2019) Macroeconomic of Aging and Policy Implications. Group of 20 IMF Staff Note.
- (2018) Beschäftigte in der Pflege (Pflegekräfte nach SGB XI). BMG bundesgesundheitsministerium.de.
- Martinez Fernandez C (2012) Demographic Change and Local Development: Shrinkage, Regeneration and Social Dynamics. OECD Demographic Change and Local Development: Shrinkage, Regeneration and Social Dynamics: 1- 309.
- Fischer F (2016) eHealth in Deutschland. eHealth in Deutschland.
- Abschlussfassung (2016) Studie im Auftrag des Bundesministeriums für Gesundheit.
- Blackman S, Claudine Matlo, Charisse Bobrovitskiy, Ashley Waldoch, Mei Lan Fang, et al. (2016) Ambient Assisted Living Technologies for Aging Well: A Scoping Review. Journal of Intelligent Systems 25(1): 55-69.
- Cowan D, Turner Smith A (1999) The Role of Assistive Technology in Alternative Models of Care for Older People. With Respect to Old Age 2: 325-346.
- Cook DJ, Augusto JC, Jakkula VR (2009) Ambient intelligence: Technologies, applications, and opportunities. Pervasive and Mobile Computing 5(4): 277-298.
- Rashidi P, Mihailidis A (2013) A survey on ambient-assisted living tools for older adults. IEEE Journal of Biomedical and Health Informatics 17(3): 579-590.
- Byrne CA, Collier R, Ohare GMP (2018) A review and classification of assisted living systems. Information (Switzerland) 9(7): 1-24.
- Al Shaqi R, Mourshed M, Rezgui Y (2016) Progress in ambient assisted systems for independent living by the elderly. Springer International Publishing 5(1).
- Cedillo P, Cristina Sanchez, Karina Campos, Alexandra Bermeo (2018) A Systematic Literature Review on Devices and Systems for Ambient Assisted Living: Solutions and Trends from Different User Perspectives. IEEE: 59-66.
- Li R, Lu B, Mc Donald Maier KD (2015) Cognitive assisted living ambient system: a survey. Digital Communications and Networks 1(4): 229-252.
- (2019) CoachMyLife - AAL Program aal-europe.eu.
- (2012) AIB - AAL Programme aal-europe.eu.
- Adib F, Hongzi Mao, Zachary Kabelac, Dina Katabi, Robert C Miller, et al. (2015) Smart homes that monitor breathing and heart rate. Conference on Human Factors in Computing Systems - Proceedings: 837-846.
- Haux R, Sabine Koch, Nigel Lovell, Michael Marschollek (2016) Health-Enabling and Ambient Assistive Technologies: Past, Present, Future. Yearbook of medical informatics: S76-S91.
- Alexandru A, Ianculescu, M (2017) Enabling assistive technologies to shape the future of the intensive senior-centred care: A case study approach. Studies in Informatics and Control 26(3): 343-352.
- Pawar AB, Ghumbre S (2017) A survey on IoT applications, security challenges and counter measures. International Conference on Computing. Analytics and Security Trends, CAST 2016 IEEE: 294-299.
- TRON Intelligent House (no date) tronweb.super-nova.co.jp.
- Erden F, Senem Velipasalar, Ali Ziya Alkar, A Enis Cetin (2016) Sensors in Assisted Living: A survey of signal and image processing methods. IEEE Signal Processing Magazine IEEE 33(2): 36-44.
- ALMA - AAL Programme (no date) aal-europe.eu.
- Bio Harness | BIOPAC (no date).
- Yamazaki T (2007) The ubiquitous home. International Journal of Smart Home 1(1): 17-22.
- ALIAS - AAL Programme (no date) aal-europe.eu.
- Pollack ME, Laura Brown, Dirk Colbry, Cheryl Orosz, Bart Peintner, et al. (2002) Pearl: A mobile robotic assistant for the elderly. Architecture 2002: 85-91.
- PARO Therapeutic Robot (no date) parorobots.com.
- Weiss C (2013) Unterstützung Pflegebedürftiger durch technische Assistenzsysteme: 1-144.
- Ni Q, Hernando ABG, De la Cruz IP (2015) The elderly’s independent living in smart homes: A characterization of activities and sensing infrastructure survey to facilitate services development. Sensors (Switzerland) 15(5): 11312-11362.
- Ranasinghe S, Al Mac Hot F, Mayr HC (2016) A review on applications of activity recognition systems with regard to performance and evaluation. International Journal of Distributed Sensor Networks 12(8).
- (2015) European Commission Economy Series. The Ageing Report 2015.
- Mahoney FI, Barthel DW (1965) Functional Evaluation: The Barthel Index. Maryland State Medical Journal (14): 56-61.
- Wallace M (2008) Katz Index of Independence in Activities of Daily Living ( ADL ) Katz Index of Independence in Activities of Daily Living INDEPENDENCE : DEPENDENCE. American journal of nursing, 108(4): 67-71.
- AAL S (2019) IEC SyC AAL SMB/6784/R: 9-27.
- Eberhardt B, Fachinger U, Henke KD (2010) Better health and ambient assisted living (AAL) from a global, regional and local economic perspective. International Journal of Behavioural and Healthcare Research 2(2): 172.
- Deutsche Telekom AG (2018) Schlussbericht DAAN - Design Adaptiver Ambienter Notifikationsumgebungen.
- Künemund H, Hrsg UF (2018) Alter und Technik: Sozialwissenschaftliche Befunde und Perspektiven. | 1 | 5 |
Do you absolutely, positively need an electronic engine monitor? Or are they really just cleverly packaged microprocessors competing for the bucks you'd otherwise spend on a new navcomm?
Consider the science and physics of airplane engines. They're nothing but heat engines, converting thermal energy to useful power. The temperature of various parts of the engine, therefore, is a useful indicator of the engine's health, power output and efficiency. In short, engine monitors offer an inside look at engine operation that the standard cockpit instruments can't touch.
The market is flush with choices. Besides price, a key differentiation between the various models of monitors is the number of different temperatures being measured. First-generation/inexpensive monitors typically sample only one temperature, such as single-cylinder CHT. The latest generation monitors are multi-probe units and can sample, integrate and display temperatures from multiple probes. Moreover, they can electronically log and store temps for later analysis.
Cylinder head and exhaust gas temps are the two critical parameters here. EGTs are a marker for combustion efficiency and ignition system health. The actual absolute EGT value is less important than comparing it to trends and to CHTs.
CHT tells you about engine cooling, or the lack thereof. Unlike exhaust gas temperature, actual numbers for CHTs do matter. If the CHT is too low, the engine will not have achieved its proper steady-state material dimensions. If too high, detonation margins diminish and, worst case, cylinders can actually deform.
With optional inputs, engine monitors can also sample turbine inlet temperature (TIT) on turbocharged engines, oil temperature, carburetor (induction) temperature, and outside air temperature (OAT). Some models also have fuel flow functions, voltage monitors and programmable alarms.
Bars and Numbers
Religious arguments about information presentation run roughshod through the engine monitor industry. For a time, the prevailing wisdom was that pilots were best served by an analog or graphic bar display of all cylinder temperatures simultaneously-a philosophy followed by JP Instruments and Insight.
If we can ignore the questionable label-there's nothing analog about a 15-segment bar graph with discrete 25-degree increments-the idea was to apply scientific visualization to the stodgy field of instrumentation. In other words, draw some pretty pictures from the raw numbers and convert the display from a presentation only a sports statistician could love into something more akin to the sound board at a Stones concert.
However, in the everything-old-is-new-again department, two manufacturers now tout the fact that their units lack the bar graphs and, instead, display the raw numbers, attracting the pilot's attention only when a critical parameter has been exceeded and, perhaps, as a courtesy, when his Gold Medallion warranty has expired.
In addition to basic data display, all of the units we examined feature some type of temperature alarms (except for the EGT-only Tetra I and Hexad I). The units differ only in how many types of alarms they have with the Insight and KSA having the fewest. The JPI, Allegro and EI all sport more programmable alarms than Microsoft has pending lawsuits.
The Insight, JPI and Allegro all feature a leaning aid. This works by monitoring each cylinder's EGT and giving an indication when the EGT trend, for any cylinder, reaches peak or leanest operation.
Opinions were mixed on the utility of this function. First, it requires a certain amount of operator skill; lean too slowly and the units can be fooled by false blips from the temperature probes (especially those with cheap probes). Lean too fast and the units can't keep up.
Also, with the exception of the Allegro, using the peak find option requires prodigious eyes-in-cockpit time. Most pilots that we spoke to who had the feature admitted that they rarely used it.
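For the curious, the basic logic behind a lean-find function can be pictured with a short sketch: track the hottest EGT seen so far on a cylinder and declare a peak once the reading has fallen back by more than a small margin. The margin and sample temperatures below are made-up illustrations, not any manufacturer's actual algorithm.

```python
from typing import List, Optional

PEAK_MARGIN_F = 5.0  # hypothetical drop, in deg F, used to confirm the EGT has peaked

def find_peak_egt(samples: List[float]) -> Optional[float]:
    """Return the peak EGT once the trend has clearly turned downward, else None."""
    hottest = float("-inf")
    for egt in samples:
        hottest = max(hottest, egt)
        if hottest - egt > PEAK_MARGIN_F:
            return hottest
    return None  # still leaning: no confirmed peak yet

# EGT rising during leaning, then dropping past the margin:
print(find_peak_egt([1350.0, 1385.0, 1410.0, 1408.0, 1398.0]))  # -> 1410.0
```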
Display normalization is available on all the graphic-type monitors, including KSA, JPI and Insight. This refers to the ability to visually align the display bars across the face of the instrument. Normally, no engine will have uniform EGTs, yielding a jumbled, sawtooth display. By normalizing or aligning the bars, a trend or early onset of problems is supposedly easier to detect.
With these basic considerations in mind, let's review the offerings.
KS Avionics (KSA)
KS Avionics makes two models of their analog multi-probe engine monitor. The Tetra I and Hexad I differ only in the number of cylinders monitored, four and six respectively. The standard version of each features simultaneous EGT monitoring of all cylinders only. A II version of each adds CHT monitoring (although not simultaneous with EGT; you must switch between the two modes with a toggle) as well as alarms for high CHT and excessive CHT cooling rate.
Excessive CHT cooling is preset at the factory as anything more than 40 degrees F per minute per cylinder. Both cooling rate and high CHT temperature alarm points can be changed in the field, however.
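The cooling-rate alarm amounts to a simple rate calculation on successive CHT samples. The sketch below uses the 40-degrees-F-per-minute figure mentioned above as the limit; the sample temperatures are made up for illustration.

```python
COOLING_LIMIT_F_PER_MIN = 40.0  # factory default described above; field-adjustable

def cooling_rate(cht_previous: float, cht_current: float, minutes_elapsed: float) -> float:
    """Cooling rate in deg F per minute between two CHT samples (positive = cooling)."""
    return (cht_previous - cht_current) / minutes_elapsed

def shock_cooling_alarm(cht_previous: float, cht_current: float, minutes_elapsed: float) -> bool:
    return cooling_rate(cht_previous, cht_current, minutes_elapsed) > COOLING_LIMIT_F_PER_MIN

# Hypothetical descent: CHT falls from 380 to 330 deg F in one minute.
print(shock_cooling_alarm(380.0, 330.0, 1.0))  # -> True (50 deg F/min exceeds the limit)
```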
The Tetra and Hexad units are classic true analog electromechanical-style devices. There's no digital readout of absolute temperatures, nor are the pointers rendered with light-emitting diodes. Instead, the pointers are driven electromechanically via a separate amplifier module.
The module also contains the controls for setting the CHT alarm and shock cooling limits for the II versions. The units also feature knobs below each cylinders indicator which can be used to normalize the display.
KS Avionics supplies a nice typewritten practical guide detailing the intricacies of engine temperature management with their units as well as background information on detonation, pre-ignition and the usual list of abnormalities that an engine analyzer will detect.
While KSA units may appear dated (the design originated in the 1960s), the company and the product have a solid, some would say fanatical, reputation for quality. Some pilots, frankly, simply prefer the analog steam gauge format over the diode-type display, especially when the other instrumentation in the aircraft is of the same type.
EI's top-of-the-line entry in the engine monitor arena is the Ultimate Engine Analyzer. It has 16 channels divided into eight groups of two channels each. Each group can display both of its associated values simultaneously via two numeric LCD displays. Thus, a typical set-up might have group one showing the simultaneous EGT and CHT values for cylinder one, group two the same for cylinder two, etc. It's important to note that because the unit's primary feedback mechanism is two LCD displays, the values of only one group can be viewed at a time. In normal operation, the unit will step through the groups, sequentially displaying all measured temperatures.
Since there are eight groups, that means that on a typical four-cylinder engine, only half the groups will be used if the engine is instrumented for both CHT and EGT on all cylinders. On a six-cylinder engine, two thirds of the groups are used. Either way, that still leaves plenty of additional inputs which can be hooked to oil temperature, TIT, outside air, carburetor air, etc.
Each channel can have an associated high and low temperature limit alarm. The two LCD displays can also have both differential and trend limits assigned to each of them.
Differential limits are useful for spotting individual cylinder anomalies, such as a cylinder with an abnormally high or low CHT with respect to the others. Trend limits are most useful for spotting shock cooling problems. All told, the Ultimate Scanner can provide a visual alarm for 36 different temperature events.
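To illustrate what a differential limit is checking, here is a small sketch that flags when the spread between the hottest and coldest cylinder exceeds a chosen margin. The 75-degree figure is an arbitrary placeholder of ours, not an EI default.

```cpp
#include <algorithm>
#include <vector>

// Differential check: alarm when the spread between the hottest and coldest
// cylinder exceeds a configured margin (degrees F).
bool differentialAlarm(const std::vector<double>& cylTempsF, double marginF = 75.0) {
    if (cylTempsF.empty()) return false;
    auto [lo, hi] = std::minmax_element(cylTempsF.begin(), cylTempsF.end());
    return (*hi - *lo) > marginF;
}
```

A trend limit is the same idea applied over time rather than across cylinders, which is why the two are usually configured separately.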
The Ultimate Scanner features a peak temperature finder. Don't be misled; this is not the same as the lean find mode found on other monitors. The Ultimate Scanner's peak find spots only the hottest absolute temperature. It does not automatically determine when a given channel's temperature has peaked, a function critical to automatic determination of peak EGT.
EI's Ron Roberts told us the company's engine monitoring philosophy is that data should be in the cockpit, but that the pilot shouldn't be overwhelmed with raw data. Let the computer take care of that and only let the pilot know when something goes out of spec. Nevertheless, EI has recently added data logging to the Ultimate Scanner and they plan on announcing the capability at this year's Sun 'n Fun Fly-In. The Ultimate Scanner will output the raw data, unfiltered and uncompressed, from each of its temp sensors into a portable computer.
Electronics International has a strong customer following and the quality of their products is well regarded. From personal experience, we can say the units are well made and robust, especially the temp probes.
Allegro Avionics' model M816 Lynx is quickly becoming a darling of the homebuilt crowd. However, you may run into a snag installing it in a spam can, as it has neither PMA nor STC approval. Given a willing mechanic, a complete Form 337 and a co-operative FSDO, anything is possible. However, Peter du Bois of Allegro explains that the company is not encouraging installs in certified airplanes.
In a gutsy marketing move, Allegro's glossy collaterals denigrate fancy electronic bar graph displays. We're not sure that we agree but we admire contrarians nonetheless.
Allegro argues that the majority of engine problems identify themselves as trends, such as a CHT slowly trending up or an EGT trending down, and that trends are difficult to spot on a bar graph. The bars, says du Bois, are good at identifying a single catastrophic failure: a dead cylinder. However, failures are rare and usually identifiable by other means. (Say, did you catch if the cylinder that just departed had number 5 or number 3 stamped on it?)
Allegro's design concentrates on two criteria with regard to engine health and efficiency: the maximum temperature of any cylinder and the maximum span in temperature between a given cylinder and the rest. With a bar-graph type display, it's more difficult to easily discern which cylinders have the highest EGT, CHT or span of the two values.
This view has merit. For example, an Insight GEM with 25-degree bars could visually indicate that all cylinders are equal in temperature when, in truth, there's a substantial variation between the hottest and coldest probe. On the Allegro, the comparison is done in the unit, requiring less eyes-down time in the cockpit.
Allegro's unit has a true lean-find mode. Additionally, it has an audible tone to indicate when the first cylinder has peaked. This is an improvement over the other lean-finding analyzers, which require periodic visual scanning. Allegro's unit also has a user-programmable shock cooling alarm that can give a pilot an indication that cylinder cooling is occurring beyond a set value.
The four-cylinder Lynx comes complete with eight temperature sensors (one each for each cylinder's CHT and EGT), provisions for a pressure port for oil/fuel/manifold pressure, and sensors for bus voltage and bus amperage.
The six-cylinder Lynx has additional temperature sensors plus provisions for another pressure sensor. Both models can be upgraded with up to a total of 16 temp sensors for TIT, oil temp, outside air temp and so on. Both will also accept an optional fuel totalizer and a GPS interface. The GPS option, when used with the fuel totalizer, will display gallons to destination as well as an alarm should you have less fuel than you need to make it.
Insight Instruments virtually invented the graphical analog engine monitor as a marketing concept. While other multi-probe instruments preceded them (such as the KSA and Alcor), Insight was the first to combine modern digital electronics and luminescent displays into a single package.
Insight's flagship model 602 took the market by storm in 1982, with its simultaneous electronic display of all EGT and CHT engine information. The 603 adds TIT input for turbocharged engines. Both models are still available and are still excellent basic analyzers.
Insight's current top-of-the-line model is the model 610, introduced in 1993. It improves on the original by adding digital numeric temperature outputs to the bars, trend indicators, a TIT input, OAT and data logging capability.
Insight also should be credited with producing some of the most informative pilot materials available about engine temperature and off-line trend monitoring, including a videotape.
However, Insight's designs are beginning to fall behind other offerings in terms of options; Insight isn't quite keeping up in the race to take home the creeping featurism trophy. Most notably lacking in the 610 are programmable temperature alarms for each channel, in-flight trend and difference monitors, electrical monitors, fuel flow options and everything else designed to differentiate oneself from just another integrated temperature sensor.
On the other hand, set-up of the model 610 is probably easier than any other model on the market. Instead of a complex programming process involving pushing several small buttons, the 610 can be configured with an HP 100LX handheld computer. This is included with the unit or, if you have your own, they'll knock $500 off the price. It uses a menu system and talks to the monitor through a wireless built-in infrared port.
The 610 is also the only unit, currently, which comes standard with onboard data storage. In flight, the unit constantly records temperature parameters and stores them with time stamps in its non-volatile memory. There's no need, as there is with the JPI, for a separate data storage device.
Insight's Bill Freeman counters the claims of manufacturers such as Allegro that trends are difficult to discern with the GEM. He argues that alarm-type units have to have wide margins to avoid constant nuisance alerts and that wide variations will show on the graphical display. Furthermore, says Freeman, alarm-based displays mask subtle trends which the GEM-type display will show.
The option of 20 to 30 hours worth of data storage on the GEM 610 is significant for the owner-operator who doesn't see himself crunching numbers on every flight but does like the ability to have a record of the engine's last few hours of operation should something abnormal pop up. Savvy mechanics like this feature, too.
The downside of the 610's built-in data logging is that Insight has chosen to compress the data in order to increase the amount (in hours) of data that can be stored. Apparently this compression is lossy and results in a slight but arguably important loss of precision.
JPI's EDM-700 has come a long way since Aviation Consumer did its first review of the unit in 1993. Gone is the poorly written laser-printed instruction manual, the thin grounded EGT probes and the stunningly expensive data-logging option. Now, there's a glossy pilot's operating manual, robust grounded probes, fuel flow measurement and standard data logging.
The JPI has virtually all the bells and whistles found in the other units reviewed, with the exception of data storage. For the JPI that must be provided by an external unit such as a PC or JPI's expensive data recording unit, either of which must be plugged into the analyzer's serial output. The JPI's data presentation includes both digital and bar-graph displays.
JPI's unit incorporates an automatic scan and display of all measured parameters, thus you can set it up to momentarily scroll through CHTs, EGTs and oil temps. The GEM 610 can't do this. Other significant features of the JPI that are missing in the 610 include alphanumeric indications of what's being displayed (OIL), a bar graph for TIT and wider PC compatibility for logging.
We heard some grumbling about the user interface for the JPI. In particular, pilots don't seem to like the single-button interface. They would prefer a multi-button design that allows some degree of backward and forward scrolling. The semi-intuitiveness of the interface was demonstrated on one flight where we were constantly fetching the owner's manual to find how to switch to a different mode. Operation is learnable, of course, but not what we would call streamlined.
Flying with JPI owner Robert Kail, we noticed a downside to all this automation: He fixated on the shock cooling alarm and admitted that he had changed his flying habits in order to avoid tripping the dreaded flashing warning. While we've paid our share of engine overhaul costs and can understand the raw fiscal emotion the shock cooling alarm elicits, we think this feature probably causes more agita than it's worth. Our view is that in most engines, shock cooling is an overrated myth.
It's impossible to say which of these units is the best since they differ significantly in design philosophy. If you're looking for a dirt-simple and reliable semi-mechanical unit, then the KSA Tetra/Hexad units are a good choice. They've been around forever and have a solid reputation.
For a basic bar-graph unit with lean find but no datalogging, the GEM 602 is as good as it ever was and is still available at a bargain price. If bar graphs bore you and you think you can slide by with a field approval (or you have an experimental), we like the Allegro over the EI because of the Allegro's additional inputs, greater flexibility and lower cost.
Now, on to the big two, Insight versus JPI. In our view, for bar-graph style units, the JPI EDM-700 has the edge over the GEM 610 in terms of features and functionality. With the exception of a built-in data storage capability, the JPI has every feature of the GEM plus several (such as alarms, more inputs, and a cleaner data logging capability) that the GEM lacks.
Note, however, that the JPI unit is reported by some dealers as having a higher infant mortality rate than the GEM. JPI has attempted to improve their quality, especially probe quality, and that seems to have had a positive effect. The jury is still out, however, on whether they've caught up to Insight in terms of overall quality control.
All of the companies we dealt with were responsive to inquiries regarding their products. Especially impressive were JPI, who FedEx'd their information (without being asked to do so), and Insight, who wins the award, in a tight race with EI, for attempted murder via deluge of glossy brochures. Similarly, we've heard no complaints about any of these companies with regard to support and repairs after the sale.
-by Gregory Travis
Gregory Travis is a Cessna 172 owner and writer. His Web site contains in-depth information on Lycoming engines. (www.prime-mover.org/Engines/) | 1 | 2 |
<urn:uuid:6d864e45-1893-4bc2-a5a0-048dbab67c95> | Computers and the internet are becoming an essential part of this modern era.
They are being used by individuals and societies to make their life easier. Like
storing information, sending and receiving messages, processing data,
communications, controlling machines, typing, editing drawing designing,
drawing, working in offices almost all aspects of life. The tremendous role of
computers stimulated criminals and terrorists to make it their preferred tool
for attacking their targets. In this era of internet "Cyber Crimes" increases
day by day.
'Cyber crime' is not radically different from 'conventional crime'. Both include conduct, whether act or omission, which causes a breach of the rules of law and is counterbalanced by the sanction of the state. Cyber crime is a crime against an individual or organisation by means of a computer, and it is committed in a network environment or on the internet. The computer is either a tool, a target or both for committing this type of crime.
There are various types of cyber crimes, such as identity theft, carding, hacking, web jacking, cracking, cyber or child pornography, cyber stalking, cyber squatting, computer fraud or forgery, cyber terrorism and cyber warfare. A person who commits a cyber crime is called a 'cyber criminal'. Cyber criminals can be both children and adolescents.
Cyber terrorism is one of the most harmful of all cyber crimes. The cyber crimes which affect national security are cyber warfare and cyber terrorism: unlawful attacks and threats of attack against computers, networks and the information stored therein. Cyber terrorists attack the internet sites of many academic, government and intelligence organisations. Cyber terrorism is the convergence of terrorism and cyberspace. Section 66F of the Information Technology Act deals with cyber terrorism and its punishment. The 1998 email bombing by the Internet Black Tigers against the Sri Lankan embassies was perhaps the closest thing to cyber terrorism that had occurred up to that point.
The cyber crimes which affect national security are cyber warfare and cyber terrorism: unlawful attacks and threats of attack against computers, networks and the information stored therein. Cyber terrorism has been defined as 'the premeditated use of disruptive activities, or the threat thereof, in cyber space, with the intention to further social, ideological, religious, political or similar objectives, or to intimidate any person in furtherance of such objectives'.
Cyber terrorism, a term first coined by Barry Collin in the 1980s, is the convergence of terrorism and cyberspace. It involves an attack over computer network(s) for the political objectives of terrorists to cause massive destruction or fear among the masses and target the government(s). Cyber terrorism aims to invade cyber networks responsible for the maintenance of national security and destroy information of strategic importance.
It is one of the biggest threats to the security of any country, capable of causing loss of life and harm to humanity, creating international economic chaos and causing ruinous environmental casualties by hacking into various critical infrastructure (CI) systems. The notable characteristic of cyber terrorism is its use of economic efficiency to achieve inordinate effects of terror over the cyber and real worlds through cyber-crafted means, like destruction of cyber networks, denial-of-service attacks and data exfiltration.
Dangers created by cyber terrorism warrant immediate global consideration.
However, states have been ineffective in advancing a consensual approach by
which varied acts of terrorism in cyberspace can be brought under the
nomenclature of cyber terrorism. Currently, no universally agreed definition for
cyber terrorism exists, even though it has been acknowledged internationally as a
major risk to global peace.
It is probably because of the saying, 'one man's
terrorist is another man's freedom fighter'. Subsequently, different
perspectives over the elemental constituents and definitions of cyber terrorism
will be contemplated.
Cyber terrorism is a global concern, with domestic as well as international consequences. It is becoming a very serious issue and it covers a wide range of attacks. Cyber terrorism is starkly different from common internet crimes like money fraud or identity theft in that it can involve the use of technology to destroy or divert systems and infrastructure, cause injury and death, and undermine economies and institutions.
To accomplish their goals, cyber terrorists target the computer systems that control electric power grids, air traffic control, telecommunications networks and military command systems. They also engage in theft of intellectual property, violation of patent, trade secret or copyright laws, and the making of unauthorized copies of classified data and financial transactions.
Definition of cyber terrorism:
Cyber terrorism consists of unlawful attacks and threats of attack against computers, networks and the information stored therein, carried out to intimidate or coerce a government or its people in furtherance of some political or social objectives. It is the 'premeditated, politically motivated attacks by sub-national groups or clandestine agents against information, computer systems, computer programs and data that results in violence against non-combatant targets'. It aims at seriously affecting information systems of private companies and government ministries and agencies by gaining illegal access to their computer networks and destroying data. Cyber terrorism, as a small landmass
of the vast territory of terrorism, uses cyberspace as a target or means, or
even a weapon, to achieve the predetermined terrorist goal. In other words, it
is the unlawful disruption or destruction of digital property to coerce or
intimidate governments or societies in the pursuit of religious, political or
ideological goals. It is an act of politically influenced violence involving
physical damage or even personal injury, occasioned by remote digital
interference with technology systems.
Cyber terrorism not only damages systems but also includes intelligence gathering and disinformation. It even exists beyond the boundaries of cyberspace and incorporates physical devastation of infrastructure. NATO defines cyber terrorism as a 'cyber attack using or exploiting computer or communication networks to cause sufficient destruction or disruption to generate fear or intimidate a society into an ideological goal'. The most acknowledged definition of cyber terrorism is that of Professor Dorothy E. Denning: an unlawful attack against computer networks to cause violence against any property or person(s), intending to intimidate a government.
Need to Study:
- To study the concept of cyber crime and cyber terrorism.
- To study cyber terrorism in India and its punishment under Indian law.
- To study the initiatives taken by the world and by our country against cyber terrorism.
There is a need to study how dangerous cyber crime is, along with its effects, punishments and policies in our country, and to find measures to minimise cyber terror attacks.
Statement Of Problem:
Research brings to light that negligent users create openings for cyber terrorists. There is a need for strict restrictions on outsiders' use of the internet and of the stock of information and resources openly available online, because these can become sources for cyber attacks. There is also a lack of cyber security in our country.
Cyber terrorism is increasing day by day despite all the policies and laws of the government, so it is necessary to strengthen our security systems, policy initiatives and punishments against cyber terrorism.
The methodology adopted for preparing this paper is based on qualitative explanation. Secondary resources like books, research papers, digital resources and various websites have been used for data and information.
Scope of Cyber terrorism:
While studying cyber terrorism, it is imperative to discern the two aspects of
usage of cyber technology by terrorists: (i) to facilitate their terror
activities; and (ii) to use cyberspace as a weapon to target the virtual
population or execute terror activities.
It is clear from the discussion here that cyber crime and cyber terrorism are not
coterminous. Most definitions of cyber terrorism establish a restricted
functional framework for the scope of cyber terrorism. For a cyber attack to
qualify as an act of cyber terrorism, it must be politically motivated; cause
physical or other forms of destructions or disruptions, like attacks affecting
the unity, integrity and sovereignty of a country; cause loss of life (such as
use of cyber networks in 26/11 Mumbai terror attack); and result in grave
infrastructural destruction or severe economic losses. The use of cyberspace and
information and communication technologies (ICTs) by terror outfits to
facilitate their functional activities (like organisational communications)
should be considered as cyber crime. Reckoning the 'facilitating part' under the
definition of cyber terrorism would intensify the scope of cyber terrorism and
augment the problem to be rectified.
Threats posed by cyber terrorism:
Cyber terrorism poses critical security threats to the world. The CIs, like
nuclear installations, power grids, air surveillance systems, stock markets and
banking networks, are dependent upon cyberspace. This functional dependence has
made CIs vulnerable to Cyber terror attacks and increased the scope for
Cyber terror footprints exponentially. Most CIs globally are poorly
protected. Therefore, Cyber terror attacks on CIs can cause egregious damages to
the society. Further, today there is a persistent threat of sensitive
information of national interests being stolen by terrorists, destruction of
computer networks or systems superintending the functioning of CIs.
Objectives of Cyber Terror Attacks:
Cyber terrorism is based on specific objectives, such as to:
- Cause disruptions sufficient to compromise the industrial and economic operations of a country. A cyber terror attack strikes a large part of the world population and causes monetary disorder and loss of data.
- Cause physical injuries, loss of lives, explosions, crashing of aircraft and other aerial vehicles, and theft of technology and privileged information.
- Move beyond the realms of destruction and send a signal of ferocious disruption and fear to governments.

Possible Targets of Cyber Terrorists:
Likely targets include air traffic, military networks, financial and energy systems, telecommunications and other critical systems, which are attacked to cause physical devastation.
Cyber attacks by terrorists majorly focus on two domains: control systems and
data in cyberspace. Consequently, the security challenges against Cyber terror
attacks generally vary across these two scopes. The first possibility is that
terror outfits, such as Al-Qaeda and the Islamic State (IS), would exploit the
information space to launch a cyber attack to ruin the CI facility of a
particular state (Kudankulam Nuclear Power Plant cyber attack).
In the second
instance, the Internet is abused to attack webspace or other trivial frameworks
for their political intents, coalesced with the likeliness that such virtual
attacks could turn adamantly grave to the point of being catalogued as a
Cyber terror attack.
Exploitation of Cyberspace by Terrorists:
Terrorist organisations use cyberspace for recruitment, command and control and
spreading their ideology. Internet being the largest reservoir of knowledge has
fuelled terror outfits to use this quality to set up virtual training camps in
cyberspace. In 2003, Al-Qaeda established its first online digital repository,
providing information on matters ranging from bomb-making to survival skills.
Today, the Internet is used by multiple self-radicalised patrons as a resource. Cyberspace has emerged as a new operational domain for terror and extremist establishments, adding new dimensions to cybersecurity: precluding online jihadist recruitment, radicalisation and raising of funds. The terror outfit of IS has manoeuvred this stratagem and used it proficiently for recruitment. The militant group was able to recruit 30,000 fighters through social
media. Social media subsequently helped the group to establish its franchises
and expand its base in different countries. Additionally, terrorists use
Internet proficiency to reach out to masses to inspire acts of terror as well as
disseminate their messages.
There have been various incidents of cyber terrorism across the world; some are described below:
In 1998, ethnic Tamil guerrillas swamped Sri Lankan embassies with 800 e-mails a day over a two-week period. The messages read "We are the Internet Black Tigers and we're doing this to disrupt your communications." Intelligence authorities characterized it as the first known attack by terrorists against a country's computer systems.
During the Kosovo conflict in 1999, NATO computers were blasted with e-mail bombs and hit with denial-of-service attacks by hacktivists protesting the NATO bombings. In addition, businesses, public organizations, and academic institutes received highly politicized virus-laden e-mails from a range of Eastern European countries, according to reports. Web defacements were also common.
Since December 1997, the Electronic Disturbance Theater (EDT) has been conducting Web sit-ins against various sites in support of the Mexican Zapatistas. At a designated time, thousands of protestors point their browsers to a target site using software that floods the target with rapid and repeated download requests. EDT's software has also been used by animal rights groups against organizations said to abuse animals. Electrohippies, another group of hacktivists, conducted Web sit-ins against the WTO when they met in Seattle in late 1999.
One of the worst incidents of cyber terrorists at work was when crackers in
Romania illegally gained access to the computers controlling the life support
systems at an Antarctic research station, endangering the 58 scientists
involved. More recently, in May 2007 Estonia was subjected to a mass cyber
attack by hackers inside the Russian Federation which some evidence suggests
was coordinated by the Russian government, though Russian officials deny any
knowledge of this. This attack was apparently in response to the removal of a
Russian World War II war memorial from downtown Tallinn, Estonia.
Cyber Terrorism versus Conventional Terror Attacks:
Cyberspace offers anonymity, easy access and convenience to terrorists to reach
the masses without much monetary expenditure. The ubiquitous cyberworld enables
terrorists to launch cyber attacks having far-reaching impacts and causing
staggering damages, more critical than physical attacks. Traditional terror
attacks are restricted to the physical limits of the place of attack.
While people outside the territorial limits of the attack do read and observe
such incidents, they do not get affected directly. A Cyber terror attack,
however, encompasses the potential of affecting millions without any territorial
limitations; at times, it is more enigmatic to find the perpetrator and trace
the point of origin of Cyber terror attacks.
Hence, cyberspace facilitates Cyber terrorists by enabling them to have a far greater reach than ever before.
Further, global interconnectivity of cyberspace results in proliferation of
potential targets for terrorists to attack, making it more dangerous than other
terror attacks. Such unmatched capabilities of cyber terrorism give terrorists
extraordinary leverage to engender more harm to society.
Thus, different factors make cyber attacks an attractive choice for terrorists:
- cyber terrorism constitutes a low-cost asymmetric warfare element for terrorists
as it requires fewer resources in comparison to physical terror attacks. The
terror groups can inflict more damage to people and society with the same amount
of funds. Thus, the benefit–cost ratio for a Cyber terror attack is very high.
- Cyberspace provides anonymity, thereby enabling Cyber terrorists to hide their
identity. The Indian government had admitted in Rajya Sabha that attackers
compromise the computer systems situated in different locations of the globe and
use masquerading techniques and hidden servers to hide the identity of the
computer system from which the cyber attacks are propelled. It is the anonymous
nature of cyberspace that makes it arduous to attribute cyber attacks to any particular actor.
- The CIs and other valuable state resources are not fully protected and thus
become an obvious target of Cyber terrorists. After designation of the target,
the cyber attack can be launched without any unwarranted delay or need for elaborate preparation.
- The Internet enables Cyber terrorists to initiate a cyber attack on any distinct
part of the world. Unlike physical terror attacks, there are no physical
barriers or checkpoints that block Cyber terrorists in the execution of
predetermined cyber attacks on designated targets. Likewise, cyber terrorism
involves less risk than physical terrorism.
- Cyberspace provides broad avenues for disseminating terror organisation
propaganda. It provides a larger audience for Cyber terror attacks, whose impact
goes beyond cyberspace to diverse systems.

Initiatives taken to mitigate Cyber terror attacks worldwide:
The mushrooming menace of cyber terrorism has stimulated states and international organisations to reform the global cybersecurity architecture for combating it.
Convention on cyber crime
The Council of Europe's Convention on Cybercrime, also called the Budapest Convention, is the sole binding international convention on cyber crimes. It aims
at harmonising domestic laws, including an international cooperative framework,
and also proposes to improvise investigation techniques on cyber crimes for
member states. India is not part of this treaty.
United Nations (UN)
UN Global Counter-Terrorism Strategy: The strategy manifests the commitment of
all UN member states to eliminate terrorism in all forms. The resolution aims to
expand international and regional cooperation and coordination among states,
private players and others in combating cyber terrorism, and also seeks to
counter the proliferation of terrorism through cyber networks. The 2018
resolution over the sixth review of the strategy asks member states to ensure
that cyberspace is 'not a safe haven for terrorists'. It urges member states to
counter terrorists' propaganda, incitement and recruitment, including through the internet.
United Nations Office of Counter-Terrorism (UNOCT): The UNOCT was set up on 15
June 2017, vide United Nations General Assembly (UNGA) resolution, following the
Secretary-General's report over UN's role to assist member states in
implementing UN counterterrorism strategy.
The UNOCT supplements the efforts of
member states against terrorism, including cyber terrorism. It provides
multi-stakeholder cooperation in securing the cyberspace of respective countries
from Cyber terror attacks. It has initiated various projects aimed at building
and upgrading capacity among states to combat cyber attacks and raising awareness
against cyber terrorism among masses.
United Nations Security Council (UNSC):
In 2017, UNSC adopted a resolution for
the protection of CI. The resolution asks the member states to establish
cooperation with all stakeholders at international and regional levels to
prevent, protect, respond and recover from cyber-enabled terror attacks over the
state CI. It also asks the states to share operational intelligence over the
exploitation of communication technologies by terror outfits. The UNSC
presidential statement in May 2016 recognised the requirement of global effort
to stop terror outfits from exploiting cyber networks.
Brazil, Russia, India, China and South Africa (BRICS) Counter-Terrorism Strategy
The strategy aims to counter international terrorism and its funding, enhance
cooperation in mutual legal assistance and extradition against terrorists,
improve practical cooperation among security agencies through intelligence
sharing, etc. The strategy resolves to 'counter extremist narratives conducive
to terrorism and the misuse of the Internet and social media for the purposes of
terrorist recruitment, radicalization and incitement'.
Shanghai Cooperation Organisation (SCO)
The SCO has adopted several significant steps to counter the menace of
cyber terrorism. It established the Regional Anti-Terrorist Structure (RATS) in
2001 against terrorism. The 22nd session of SCO RATS council approved various
proposals to combat cyber terrorism, and also discussed the proposal to establish
a cyber terrorism centre. In 2019, SCO member states conducted
anti-cyber terrorism drills to prepare for future cyber terror crises. In 2015, SCO submitted to UNGA an International Code of Conduct for Information
Security, proposing a secured and rule-based order in cyberspace. The code
suggests international cooperation among states to combat exploitation of ICTs
for terror-related operations. Furthermore, it specifies a code of conduct,
responsibilities of states and rights of individuals in cyberspace.
Cybersecurity and Infrastructure Security Agency (CISA) Act
The act establishes that the CISA will secure American cyber networks and CIs,
devise US cybersecurity formations and develop potential to defend cyber attacks.
Further, it secures the federal government's '.gov' domain network. It also
houses the National Risk Management Center (NRMC), which addresses most
strategic threats to the country's CI and crucial functions whose disruption can
have devastating impacts over American national interests, like security and
economy. In 2017, the US President issued an executive order (EO 13800) to
modernise US cybersecurity proficiencies against intensifying cybersecurity
threats over CIs and other strategic assets.
National Cyber Strategy of the US
The strategy, released in 2018, strengthens the US cyberspace to respond against
cyber attacks. It focuses on securing federal networks and CIs, as well as
combating cyber attacks. The cyber strategy primarily aims to protect American
people, preserve peace and advance American interests. It also provides for
military action to combat cyber attacks.
Israel launched its first-ever National Cybersecurity Strategy in 2017. The
policy document expounds the country's plan to advance its cyber robustness,
systemic resilience and civilian national cyber defence. The objective is to
develop an international collaboration against global cyberthreats, which
certainly includes Cyber terror threats. It also prioritises to defend Israeli
economic, business and social interests in cyberspace.
The Israel government passed several resolutions, like 3611, 2443 and 2444, to
expand institutional capacity for cybersecurity framework by establishing
National Cyber Directorate. Israel's cybersecurity framework focuses on four areas:
- Improving domestic capabilities to confront futuristic and present-day threats.
- Continuously upgrading and enhancing defence of CIs in the country.
- Fostering the republic's standing as an international hub for the development of cyber technologies.
- Promoting effective coordination and cooperation among the government, academia and private players.

The United Kingdom (UK)
The UK introduced the National Cyber Security Programme in 2015 to protect its
computer networks from cyber attacks. A five-year National Cyber Security
Strategy was also revealed in 2016 to make UK's cyberspace resilient from
cyber attacks and more secure by 2021. Further, in 2017, National Cyber Security
Centre was opened to respond to high-end cyber attacks.
Initiatives Taken In India:
Information Technology Act: Cyber terror Law of India
The Information Technology Act (hereafter the Act) sanctions legal provisions
concerning cyber terrorism. Section 66F of the Act enacts legislative framework
over cyber terrorism. It provides for punishment, extending to life imprisonment,
for cyber terrorism, along with three essential elements for an act to constitute
as cyber terrorism:
Intention: The act must intend to afflict terror in people's minds or jeopardise or endanger the unity, integrity, security or sovereignty of India.
Act: The act must cause:
- unlawful denial of access to any legally authorised person from accessing
any online or computer resource or network;
- unauthorised attempt to intrude or access any computer resource; or
- introduce or cause to introduce any computer contaminant.
Harm: The act must also cause harm, like death, injuries to people, an adverse or destructive effect on the critical information infrastructure (CII), damage or destruction of property, or such disruptions as are likely to cause disturbances in services or supplies which are essential to life.
Further, Section 66F also applies to instances where a person without any
authorisation or by exceeding his legitimate authorisation intentionally
penetrates or accesses a computer resource and obtains access to such data, or
information or computer base which has been restricted for Indian security
interests, or whose disclosure would affect the sovereign interests of India.

Protected Systems and CII
The Act has a provision of 'protected systems',
empowering the appropriate government to declare any computer resource that
either directly or indirectly affects the facility of CII as 'protected system'.
Section 70(3) sanctions punishment up to 10 years with fine in case a person
secures or attempts to secure access to a protected system. The explanation
clause of Section 70 defines CII as: 'The computer resource, incapacitation or
destruction of which, shall have a debilitating impact on national security,
economy, public health or safety.'
The central government, under Section 70A of the Act, has designated National
Critical Information Infrastructure Protection Centre (NCIIPC) as the National
Nodal Agency in respect of CII protection. The union government has also
established the Defence Cyber Agency to deal with matters of cyberwarfare and cyber terrorism.

Indian Computer Emergency Response Team (CERT-In)
Section 70B of the Act
provides for the constitution of CERT-In to maintain India's cybersecurity and
counter cybersecurity threats against it. The CERT-In is expected to protect
India's cyberspace from cyber attacks, issue alert and advisories about the
latest cyberthreats, as well as coordinate counter-measures to prevent and
respond against any possible cybersecurity incident.
It acts as the national
watch and alert system and performs functions like:
- Collect, analyse and disseminate information on cybersecurity incidents;
- Forecast and issue alerts on cyber incidents;
- Emergency measures to handle cybersecurity incidents;
- Coordinate cyber attack response activities;
- Issue guidelines, advisories, over cybersecurity measures, etc.
India has established domain-specific computer emergency response teams (CERTs)
to counter domain-specific cyberthreats and create a more secured cybersecurity
ecosystem in respective domains, like power grids and thermal energy. Further,
sectoral CERTs in the cybersecurity fields of finance and defence have been
constituted to cater to such critical domain's cybersecurity requirements.
National Cyber Security Policy:
The National Cyber Security Policy of India, released in 2013, aims to secure
Indian cyberspace and concretise its resilience from cyberthreats in all
sectors. It aims at developing plans to protect India's CII and mechanisms to
respond against cyber attacks effectively. It further focuses on creating a safe
and dependable cyber ecosystem in India.
The policy has facilitated the creation
of a secure computing environment and developed remarkable trust and confidence
in electronic transactions. Furthermore, a crisis management plan has been
instituted to counter cyber-enabled terror attacks. The Parliament also amended
the National Investigation Agency (NIA) Act in 2019, empowering the NIA to
investigate and prosecute acts of cyber terrorism.
Moreover, technology and threat Intelligence play major roles to counter
terrorism and cyber terrorism. The multi-agency centre (MAC) at the national
level, set up after the Kargil intrusion, along with subsidiary MACs (SMACs) at
state levels, have been strengthened and reorganised to enable them to function
on 24x7 basis. Around 28 agencies are part of the MAC and every organisation
involved in counter-terrorism is a member of this mechanism. This is yet another
important element of national initiative.
India, as a fast-developing economy, aspires to control the global supply chain
and internationalise its economy. This vision automatically attracts a big
responsibility to protect cyberspace from possible cyberthreats, including acts
of cyber terrorism. India, however, has been rather vulnerable to
cyberthreats. Currently, with major economic activities transpiring through
digital platforms during the COVID-19 pandemic, the dreadful impact of
cyber terrorism has intensified.
The purpose of Cyber terrorists is to cripple the
CI of a nation and certain services, like telecommunications, banking, finance,
military complexes and emergency services, are most vulnerable to Cyber terror
attacks. Thus, it is necessary to comprehend the potential threat of
cyber terrorism to a nation like India, keeping in mind that the vulnerability of
Indian cyberspace to Cyber terror attacks has proliferated enormously. In 2018
too, the then Home Secretary admitted to India's exposure to cyberthreats and
its inadequacy in countering them.
Therefore, reforming and modernising the existing machinery to counter the
strategic challenge of cyber terrorism and providing efficient explications
acknowledging the global pandemic is peremptory.

The Information Technology Act
Though the Act enacts provisions
regarding cyber terrorism, in order to make it a more focused legislation to
combat cyber terrorism, the following modifications are suggested:
The Act was originally enacted to validate e-commerce activities. However, its
preamble today must not remain limited to e-commerce only. It must additionally
include the objective of combating cyber terrorism.
The scope of the definition for cyber terrorism should be made more extensive by
including 'the usage of cyberspace and cyber communication'. The section does
not cover cyberspace use for communication and related purposes to fulfil and
execute terrorist objectives. The Act should incorporate provisions to cover
such acts to prevent acts of cyber terrorism.
To focus the orientation of the Act to combat cyber terrorism, it must have a
dedicated chapter on cyber terrorism, which would deal with all intricate
elements and dimensions of the acts amounting to cyber terrorism in detail.
Indian Cybersecurity Act
In 2008, the Information Technology Act was amended to incorporate provisions
concerning cyber terrorism. However, from 2008 to 2021, exploitation of
cyberspace by terrorists has undergone a systematic transformation. The
conglomeration of time and evolution of destructive technologies has made
cyber terrorism intricately complex and devastatingly lethal to deal with.
Cyber terrorists use innovative methods to exploit cyberspace for youth
radicalisation and to propel cyber attacks causing massive destruction.
The evolution of a destructive technological order aiding cyber terrorism warrants a
new modernised legal order, with empowered law enforcement agencies, to protect
Indian cyberspace against possible cyberthreats and preserve its cyber sovereignty.
India must consider enacting a new cybersecurity legislation, an Indian
Cybersecurity Act, dedicated to deal with present-day cybersecurity challenges
and regulate all aspects of cybersecurity, including cyber terrorism. Further, in
view of the future consolidation of Cyber terror attacks, a new legislation would
additionally provide more effective, deterrent and stringent legal framework
against cyber terrorism.
Multiplicity of Organisations
Multiple government organisations handle cybersecurity operations of India,
resulting in overlapping jurisdictions and operations among organisations. Some
reformatory steps—like creating the National Cyber Security Coordinator under
National Security Council Secretariat (NSCS) and bringing central agencies under
its control—have been adopted. However, it is important to provide the exigent
task of cybersecurity exclusively to three central agencies, namely, CERT-In,
NCIIPC and Defence Cyber Agency, with well-delineated and defined jurisdictional
limits of operations and responsibilities. Instead of creating a parallel
hierarchical structure which results in unwarranted overlapping of work, the jurisdictional limits of operations of these three agencies must be detailed through legislation.
Further, there must be a regular review of the jurisdictions of organisations to
keep India's cybersecurity mechanism updated as per the continuously evolving
cyberspace. Since what today is not a CI might become intrinsically critical for
preserving national security tomorrow, the National Cyber Security Coordinator
must proactively coordinate the activities of the cybersecurity agencies to
intensify capabilities of India to counter cyber terrorism.
The government, like UNOCT, must undertake cybersecurity awareness programmes in
the country and establish an informative environment in the country against
possible cyberthreats (including cyber terrorism) in cyberspace. The government
must consider launching a cyber literacy programme (initially in areas
vulnerable to cyber attacks) on lines with 'Sarva Shiksha Abhiyan' to familiarise
people about the cybersecurity threats in a time-bound manner. This is
particularly important during the COVID-19 pandemic when most businesses are
running digitally through online mediums.
Indian Cybersecurity Service
India cannot reform and strengthen its gigantic cybersecurity framework from one
central place. Cybersecurity threats are the new normal for people, including
those living in distant parts of India. Therefore, India must establish Indian
Cybersecurity Service as an all-India civil service. It will provide India with
the best professionals (posted in different parts of the country at the
grassroots level) to deal with all aspects of cybersecurity, including cyber terrorism. This civil service would further equip the state governments with talented
cybersecurity experts to protect their cyber operations and deal with breaches
under their jurisdiction. The proposed civil service could also assist the state
police in solving cyber-related offences more effectively and expeditiously,
thereby improving the administration of justice in cyber crimes.
As these cybersecurity officials will get an opportunity to work in different
parts of the country in various capacities, like officers from other all-India
services, it will broaden their vision and first-hand operational experience of
cybersecurity issues faced by the people at grassroots level, as opposed to the
current paradigm (where majority of the officers and their work remains
restricted to headquarters).
Therefore, just like officers from other all-India
civil services get a significant say in the decision making due to their
extensive groundwork and direct first-hand experiences bestowing them with
actual ground realities, Indian cybersecurity officials will also get a far
greater say over most policy decisions concerning cyberthreats, cybersecurity
interests and others. Further, cyberspace is ubiquitous and interacts closely
with major economic and other operations in society.
Giving a greater say to cybersecurity officials in India will make cybersecurity central to our major policy decisions and strengthen our cybersecurity framework on a broader scale.

Conclusion & Suggestions:
Cyberspace has developed as a decentralised network of communication, without
any restriction over geographical boundaries of any country. Therefore,
international regulation and cooperative cybersecurity framework is essential to
deal with cyber terrorism effectively.
Since the current framework is incapable
of dealing with the menace, it is time to strengthen international law to equip
it to deal with cyber terrorism. India must also think about reforming its legal
framework or legislating exclusive cybersecurity legislation, which may provide
provisions for cyber terrorism.
With the prime minister advocating the use of technology for development and
administration, and also due to the global pandemic, cyberspace has been
integrated into various fields, like governance, public administration and trade
and business operations. In addition, there is continuous integration of
cyberspace with CI.
Thus, a multidimensional cybersecurity framework must be
introduced. The outbreak of COVID-19 has also accelerated the digitisation of
economic businesses and other activities. Cyber attacks by terrorists can virtually paralyse the financial and economic operations (including the Indian Goods and Services Tax [GST] network) of the country. Hence, to boost the adoption of
counter-measures by states against cyber terrorism
and strengthen the cybersecurity framework, the World Bank must consider
'cybersecurity' as one of the parameters to decide ease of doing business index.
India must also try to reduce overlapping among cybersecurity organisations and
harmonise its process and laws as per the international best practices.
Further, the accelerated digital operations of business due to the pandemic has
made the state constitutionally bound to protect the cyberspace of India.
Article 19(1)(g) of the Constitution, read with Sodan Singh and Anuradha Bhasin
cases, grants the right to practice or do any form of livelihood within the
realms of law. Thus, the state must make sure that the constitutionally
protected fundamental right of occupation in cyberspace of Indian citizens is
protected in the current scenario. It must be noted that any business can
survive and flourish in a digital platform only when there is secured cyberspace
in place. Thus, the government is constitutionally bound to protect India's
cyberspace from cyberthreats, including cyber terrorism.
Cyberspace, today, interacts with significant economic, business and other
interests of India. So as to secure India's strategic, sovereign, economic and
business interests in cyberspace, the union must incorporate stringent deterrent
strategies and cybersecurity reforms at all levels of operation. It is important
to look at the big picture while analysing Cyber terror threats; and new
mechanisms must be developed and reformatory steps need to be introduced with
focus on the constitutional obligation of the state under Article 19(1)(g) and
Article 355 of the Indian Constitution.
- Information Technology Act, 2000 (Act 21 of 2000), Chapter III, Section 66F.
- Ibid., Section 66F(1)(A)(i).
- Ibid., Section 66F(1)(A)(ii).
- Ibid., Section 66F(1)(A)(iii).
- Ibid., Section 66F(1)(B).
- Ibid., Section 70.
- Ibid., Section 70(3).
- Misra, S.N., Indian Penal Code, Central Law Publications, 13th edn, p. 88.
- Dongre, Shilpa S., Cyber Law and its Applications.
<urn:uuid:068e89df-82ee-4b7c-aff3-f197764157d1> | Seeed Grove Designers’ Guide: PCB design guidelines and more
The Seeed Grove modular plug-and-play system currently consists of over 400 actuators and sensors developed over the course of 14 years. As Seeed’s debut series, the Grove ecosystem and ethos have evolved and continue to remain a leader in modular electronics, whether for education, hobbyist electronics or prototyping the next big product.
With so many Grove modules, it can be difficult not to find a sensor to measure a specific parameter, but with the rising demand for advanced, cheaper and more accurate sensors, and no shortage of manufacturers answering to that demand, there is plenty of room to improve on existing technology and cater to more advanced needs.
This is where the Develop Your Grove Sensor campaign from Seeed comes in, sponsoring designers to develop their own Grove modules and eventually make them available via Seeed channels. Whether to utilize new technology or to serve a niche field, designers can use their unique insight to come up with and co-develop Grove modules that like-minded users desire.
So how should creators go about designing a Grove module? What qualifies as a Grove module and what should designers look out for? This article sets out to give guidance to PCB designers and more.
What is Grove?
The Grove system consists of open-source, plug-and-play hardware for modular electronics project building. The system helps accelerate electronics project building by doing away with jumper wires and breadboards so engineers can focus on developing solutions and coding. This is achieved by providing standardized modules, connectors and cables to be used with Grove-compatible development boards and shields. The Grove family consists of a vast variety of modules, each with a single function, with freely available libraries and examples. Plus, with everything open-sourced, designers can easily import them into their product designs.
Types of Grove Modules
Grove modules can be grouped into sensors (input) and actuators (output). Sensor modules are the most varied, obtaining signals to be processed, while actuators produce a physical response as a result of an input or process.
- Sensors that measure air quality, soil moisture, temperature and humidity, atmospheric pressure (barometers), light intensity, sound, etc.
- Sensors for measuring distance, speed, acceleration, rotation (gyroscopes), etc.
- Sensors that detect body activity and biometrics, such as fingerprint sensors, GSR, EMG and heart rate sensors.
- Modules that receive communication signals either wirelessly or via physical interfaces such as Wi-Fi, BLE, GPS, LoRaWAN, NFC, RF, CAN, etc.
- Modules for registering manual inputs such as buttons and switches, touchscreens and mics.
- Modules for outputting responses such as buttons, switches and LEDs, LCD displays, motors, relays, speakers and buzzers.
What qualifies as a Grove Module?
There are certain traits that make the Grove modules that we know and love. Of course, there are exceptions but a few fundamental rules make the prototyping toolbox work.
Grove modules typically serve one function
A Grove module is not a development board. Grove modules typically feature a single main component; a sensor, a chip, an LED, button, potentiometer etc. with some additional basic hardware. Similar sensors can be grouped together if they perform a collective function such as the multi-gas sensor, which has 4 separate chips for sensing different gases.
They contain the minimal amount of circuitry to allow the main component to be used in a modular, plug-and-play fashion, receiving power and transferring signals via the 4-pin Grove interface to an external controller. If a control chip is required to operate the sensor then this can also be included on the module too, but we should not be seeing micro-controllers on the modules.
Grove modules connect via Grove interfaces
Grove modules have a female 4-pin Grove connector to connect to Grove cables. Power and data signals transmit primarily via the 4-pin Grove interface. The 3rd and 4th pins (red and black respectively) are for VCC and ground connections and the other two are for data transmission, depending on the type of Grove interface. Other interfaces are allowed to connect probes and additional hardware necessary to use the device.
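To make the wiring concrete, here is a hedged Arduino-style C++ sketch showing how a host board typically talks to a Grove module over the I2C variant of that 4-pin interface (the two signal pins carry SCL and SDA, and pins 3 and 4 carry VCC and GND, as described above). The 0x38 address and the two-byte read are placeholders for illustration only; a real module's address, registers and data format come from its datasheet and library.

```cpp
#include <Arduino.h>
#include <Wire.h>

// Hypothetical I2C address for illustration; a real Grove module's address
// and register map come from its datasheet or Seeed-provided library.
const uint8_t kSensorAddr = 0x38;

void setup() {
    Serial.begin(9600);
    Wire.begin();              // SDA/SCL sit on the two signal pins of the Grove port
}

void loop() {
    // Request two raw bytes from the module and combine them into one reading.
    Wire.requestFrom(kSensorAddr, (uint8_t)2);
    if (Wire.available() >= 2) {
        uint8_t hi = Wire.read();
        uint8_t lo = Wire.read();
        uint16_t raw = ((uint16_t)hi << 8) | lo;
        Serial.println(raw);   // scaling to physical units is device-specific
    }
    delay(500);
}
```

Digital, analog and UART Grove ports use the same power pins but repurpose the two signal pins, which is why the connector itself stays identical across module types.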
Grove module PCB boards come in standard Grove shapes
There are 5 standard Grove sizes, the most common being the square 20 x 20mm single unit and rectangular 20 x 40mm double unit. Each shape has the iconic interlocking Grove ‘rings’ and ‘sockets’ on the edges, perfect for screwing into enclosures. The single and double-unit shapes are large enough for most Grove module designs.
You can download a template for the single and double unit boards for Eagle here!
All Seeed Grove module PCBs come in a turquoise-blue color!
While the color has no impact on functionality, we think it looks nice, and it is the signature color for Seeed Grove modules.
Which Grove Connector Should I use?
Grove boards need at least one female Grove connector (header). There are 4 different female Grove connectors: surface-mount and through-hole types, each available in straight and right-angle configurations. All connect to Grove cables, but which one you choose will depend mainly on how the board will be used and how the parts will be soldered.
Through-hole or surface mount?
The choice of mounting type depends on many factors. Through-hole parts are great for hand soldering prototypes or for applications where the connection needs to be more rugged, but when it comes to batch production, having a single through-hole part on a board where everything else can be surface mounted is a waste and could incur unnecessary costs. The table below compares some factors that may affect your decision.
| | SMT straight | SMT right-angle | DIP (through-hole) straight | DIP (through-hole) right-angle |
|---|---|---|---|---|
| Batch assembly* | Suitable | Suitable | Not suitable | Not suitable |
| Footprint size | Smallest single-side only footprint | Single-side only footprint | Smallest single-side footprint but on both sides | Footprint on both sides |
| Bond strength | Surface attachment only | Surface attachment only, wider surface area | Through boards | Through boards |
*referring to common batch-assembly methods such as pick and place and reflow soldering.
Once you are settled on an attachment type, consider how the cable will protrude from the Grove module and how this will affect how the board is used.
Take the capacitive moisture sensor as an example: one end of the board is designed to be inserted into the ground, and a cable running parallel to the board (a right-angle connector) makes it easier to insert. Single- and double-unit boards are better suited to the straight versions, which maximize the space on the board for other parts. The main sensing/actuator part is often placed on the same side as the Grove connector for convenience when prototyping, unless orientation may impact its usage (for example, radars, lasers and ultrasonic beams).
Take note of the placement of the connectors on the Grove modules as well. The connectors are fully enclosed by the outline and should not be hanging over the edge of the PCB in most cases. Detailed mechanical information can be found here.
How to include the Grove connector in the Bill of Materials (BOM) file?
Rather confusingly, there are multiple part numbers and SKU numbers for the four Grove configurations and multiple sources of information. The following table collates this information and should help make things clearer. Notice that the S in the part number is short for ‘straight’ but means perpendicular to the board, and the R is short for ‘right-angle’, but the connector is parallel to the board.
| Seeed SKU | Part Number (MPN) | Cost (USD) | Seeed Multi-pack SKU | Multi-pack link |
|---|---|---|---|---|
| 320110030 | 1125S-SMT-4P | 0.061 | 114020163 | Grove Female Header – SMD-4P-2.0mm-20Pcs |
| 320110032 | 1125R-SMT-4P | 0.042 | 114020164 | Grove Female Header – SMD-4P-2.0mm-90D-20Pcs |
| 320110033 | 1125S-4P | 0.01 | 110990030 | Grove Female Header – DIP-4P-2.0mm-10 Pcs |
| 320110034 | 1125R-4P | 0.01 | 110990037 | Grove Female Header – DIP-4P-2.0mm-90D-10 Pcs |
For manufacture with Seeed Fusion PCB Assembly service, the connectors are conveniently available in the Seeed Open Parts Library. Add the SKU beginning with 3, or the part number to the BOM file as you would for other parts.
The Seeed website also sells these connectors in small packs and reels. Please do not use these multi-pack SKUs in your BOM file, as the BOM file is supposed to contain the parts for a single board. If you use these parts, you could end up buying much more than you need, and they are not packaged in a format suitable for batch assembly.
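As a sketch of what this looks like in practice (the exact column layout varies by BOM template and is only an assumption here), a board using one straight SMT Grove connector might list it alongside its other parts like this:

```csv
Designator,Part Number / Seeed SKU,Qty,Description
J1,320110030,1,"Grove female header, 4-pin 2.0mm, SMT straight (1125S-SMT-4P)"
U1,<main sensor MPN>,1,"Main sensor IC (placeholder)"
```

The point to note is that the connector is referenced by its single-piece SKU (320110030) or its MPN (1125S-SMT-4P), never by a multi-pack SKU.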
The Eagle and KiCad footprints for these connectors are available on GitHub, please use these for the best compatibility. Other similar connectors may be missing pads or may be slightly different in size.
PCB Layout Tips
A neat PCB layout makes a more attractive product and may have an impact on longevity too. Since Grove boards do not have enclosures and are designed to be manhandled by grubby fingers, aesthetics matter. These tips do not just apply to Grove boards; the reasoning can be applied to any board design! Other tips relate to the Grove series and help make your boards fit in with the Grove family.
Due to the limited space available, one of the biggest concerns with Grove boards is trying to fit everything in without looking too cluttered, while at the same time, making sure everything is placed conveniently for the user.
- Go small: For all your parts, you should try to find small surface-mount packages where possible. Through-hole parts take up a lot of space on both sides of the boards, so eliminate these as much as possible. Using small chip resistors, capacitors and LEDs, for example, is an easy way to make room without compromising functionality. But don’t go too small! 0201 parts may require more specialized equipment to ensure accurate placement. For Grove boards, 0402 and 0603 sizes should leave plenty of room for other parts. If not, consider a larger Grove size.
- Put text and logos on the bottom: For boards with parts only on one side, make use of the empty space and place all silkscreen text, logos and the product name on the bottom (the side with fewer parts). Most Grove modules are single-sided and so have room for text and logos on the bottom.
- Remove silkscreen designators as a last resort: With crowded boards, there may not be enough room for the part designators (the R1, R2, etc. labels in the silkscreen) and other silkscreen features. These are important for assembly and for users to recognize the parts if needed, but if the boards are so crowded that the silkscreen would be unclear, then delete them. Be sure to include a document informing the assembly house how to assemble your boards, along with polarity markers, etc.
- Use smaller vias: Vias also take up space! 0.3mm diameter vias are sufficient and will also mean the vias will be tented and therefore protected from dust and water damage.
- Maintain a clearance of at least 0.5mm around the edge of the boards: Keep any traces, pads, copper pour and silkscreen at least 0.5mm from the board edge (more if possible) to protect them from milling tools and frequent handling.
- Use copper pour: Fill any unused space with copper pour. Not only will you be using less chemical etchant during production, you also help bring out the brighter, non-translucent Grove blue color.
- Text should go with the flow: For longer Grove boards, the text should be oriented along the longer side of the board, e.g. Grove – Capacitive Sensor. That way you will be able to fit the text in better without it looking too crowded, especially for modules with long names.
- Font should be similar to the default Eagle font: All Seeed Grove modules are designed in Eagle and use the default vector font with a height of 50 mils (or 1.27mm) for the module name. Other text has a height of 32 mils (0.8128mm). Feel free to include your own logos and signature, but...
- Do not include the Open Hardware logo: Designs must follow specific conditions to be able to use the Open Source Hardware logo. Grove modules created via the Develop Your Own Grove Sensor campaign are not restricted by such conditions and more than likely do not fulfill the definition stipulated by the OSHWA. Unless you have specifically made the effort to follow each condition, please do not use the logo in your design.
- Do not include the Seeed Studio logo: Doing so would suggest that we (Seeed Studio) made and maintain the design and documentation. You can add your own attribution if you wish and the Grove logo.
- Be neat: It should go without saying, but if your boards are approved for batch production and sale, they should be as aesthetically pleasing as possible. All silkscreen text should be clearly visible and placed next to the corresponding part, and parts and labels should be centered and aligned where appropriate. Then, leave the rest to us and we will make sure your design is made to our own high-quality standards.
- Use Eagle: We know not everyone has or wants to use Autodesk Eagle but since all Seeed Grove boards are designed in Eagle, we have an Eagle .brd template available with the outline that our own engineers use. The template includes the board outline for single and double units, exact guides on where to place the connector, copper clearances, ideal test point locations, and part clearances so that your module is as much like a Grove module as possible and will be compatible with Seeed approved Grove accessories and enclosures. If you are on the fence about which design software to use then we definitely recommend Eagle first, because of the template and connector libraries. We also have libraries for KiCad but no template.
Eagle and KiCad footprint libraries
Tips to Reduce Costs
Low cost will help get your boards into the hands of more tinkerers. Grove boards are designed to keep costs down as much as possible (without compromising on quality). Many strategies can be employed at the design stage that don’t cost an arm and a leg.
- Maintain single-sided assembly as much as possible: When it comes to mass production, keeping surface mount components on one side will help reduce labor and production costs. For double-sided assembly, the boards have to go through the assembly line twice (that means solder-pasting, pick and place, reflow oven etc. x2!). Some assembly facilities charge extra for this, and it may introduce complications in the assembly process. For example, large parts may require glue or specially designed trays to keep them from falling during the second reflow.
- Do away with optional parts: It’s nice to consider every possible use case, but if it adds a few extra dollars to the final product, it could put off users, especially if they are unlikely to use the feature. For example, instead of including a header for shorting or other connections, consider including pads that can be shorted with a tool or a drop of solder, or just leave the plated holes/pads available for users to solder additional parts onto.
- Use the Open Parts Libraries (OPL): Seeed has a catalog of parts locally available in the form of the Seeed and Shenzhen Open Parts Libraries that are used with the Seeed Fusion PCBA service. The Shenzhen OPL contains parts from local distributors often at a cheaper price, and the Seeed OPL contains parts from Seeed’s own warehouse, which includes parts used in Seeed’s Grove modules and development boards. Using parts in the OPL can both speed up delivery and cut BOM costs.
Do I have to use the Grove connector?
Yes, the Grove connector is a semi-proprietary connector that is slightly different to JST standard connectors and clones. Grove connectors have an additional latching mechanism to help hold Grove projects together.
Do I have to include the Seeed logo on the boards?
No, in fact most Seeed Grove modules do not have the Seeed logo on them. They have the Grove logo instead, which you may add if you wish.
Do I have to use Autodesk Eagle to design the PCB?
No, all Seeed Grove boards are drawn using Eagle but we won’t force designers to use Eagle if they don’t want to. We just ask that they look similar to modules in the Grove family and there are more resources available for Eagle.
Missed anything? Let us know in the comments or get in touch. Happy designing! | 1 | 4 |
A device that allows wireless-equipped computers and other devices to communicate with a wired network.
As specified in Section 508 of the 1998 Rehabilitation Act, the process of designing and developing Web sites and other technology that can be navigated and understood by all people, including those with visual, hearing, motor, or cognitive impairments. This type of design also can benefit people with older/slower software and hardware.
A technology from Microsoft that links desktop applications to the World Wide Web. Using ActiveX tools, interactive web content can be created. Example: In addition to viewing Word and Excel documents from within a browser, additional functionality such as animation, credit card transactions, or spreadsheet calculations.
Identifies the location of an Internet resource. Examples: an e-mail address ([email protected]); a web address (http://www.gettingyouconnected.com); or an internet address (192.168.200.1).
A short, easy to remember name created for use in place of a longer, more complicated name; commonly used in e-mail applications. Also referred to as a "nickname".
Archive sites where Internet users can log in and download files and programs without a special username or password. Typically, you enter anonymous as a username and your e-mail address as a password.
To prevent e-mail spam, both end users and administrators of e-mail systems use various anti-spam techniques. Some of these techniques have been embedded in products, services and software to ease the burden on users and administrators. No one technique is a complete solution to the spam problem, and each has trade-offs between incorrectly rejecting legitimate e-mail vs. not rejecting all spam, and the associated costs in time and effort. IT Direct Cloud-Based Anti-SPAM e-mail service eliminates the problem almost entirely. Our state-of-the-art solution lets users see only the e-mail they want — and filters out all of the viruses and e-solicitations they don’t want before they reach user’s computers and mobile devices.
A program capable of running on any computer regardless of the operating system. Many applets can be downloaded from various sites on the Internet.
A program designed for a specific purpose, such as word processing or graphic design.
A file that can be opened and read by standard text editor programs (for example, Notepad or Simple Text) on almost any type of computer. Also referred to as "plain text files". Examples: documents saved in ASCII format within word processors like Microsoft Word or WordPerfect; e-mail messages created by a program like Outlook; or HTML files.
AT command set:
An industry standard set of commands beginning with the letters "AT" that are used to control a modem. Example: ATDT tells the modem to dial (D) using touch-tone dialing (T). ATDP specifies pulse dialing (P). Also referred to as the "Hayes Command Set".
In this context, a file that is sent along with an e-mail message. ASCII (plain text) files may be appended to the message text, but other types of files are encoded and sent separately (common formats that can be selected include MIME, BinHex, and Uuencode).
The process of identifying yourself and the verification that you’re who you say you are. Computers where restricted information is stored may require you to enter your username and password to gain access.
B
A term that is often used to describe the main network connections that comprise the Internet or other major network.
A measurement of the amount of data that can be transmitted over a network at any given time. The higher the network’s bandwidth, the greater the volume of data that can be transmitted.
A file that cannot be read by standard text editor programs like Notepad or Simple Text. Examples: documents created by applications such as Microsoft Word or WordPerfect or DOS files with the extension ".com" or ".exe".
A common file format for Macintosh computers; it enables a binary file to be transferred over the Internet as an ASCII file. Using a program like Stuffit, a file can be encoded and renamed with an ".hqx" extension. The recipient uses a similar program to decode the file.
A binary digit (either 0 or 1); it is the most basic unit of data that can be recognized and processed by a computer.
Instruction that combines aspects of both face-to-face (F2F) and online learning experiences. An increasing number of courses at OSU now offer this type of mix.
Refers to a weblog, a web page that contains journal-like entries and links that are updated daily for public viewing.
A wireless networking technology that allows users to send voice and data from one electronic device to another via radio waves.
Bitmap file; a common image format on Windows computers. Files of this type usually have the suffix ".bmp" as part of their name.
A feature available in certain programs like Internet Explorer, Firefox, and Acrobat Reader; it is a shortcut you can use to get to a particular web page (IE and Firefox) or to a specified location within a document (PDF).
A form of algebra in which all values are reduced to either true/false, yes/no, on/off, or 1/0.
A term applied to an e-mail message when it is returned to you as undeliverable.
A device used for connecting two Local Area Networks (LANs) or two segments of the same LAN; bridges forward packets without analyzing or re-routing them.
A high-speed Internet connection; at present, cable modems and DSL (Digital Subscriber Lines) are the two technologies that are most commonly available to provide such access.
A program used to access World Wide Web pages. Examples: Firefox, Safari or Internet Explorer.
On a multitasking system, a certain amount of RAM that is allocated as a temporary holding area so that the CPU can manipulate data before transferring it to a particular device.
Data that is collected but not made immediately available. Compare to a language translator who listens to a whole statement before repeating what the speaker has said rather than providing a word-by-word translation. Example: Streaming media data viewable using a tool like RealMedia Player is buffered.
Business continuity is the activity performed by an organization to ensure that critical business functions will be available to customers, suppliers, regulators, and other entities that must have access to those functions. These activities include many daily chores such as project management, system backups, change control, and help desk. Business Continuity is not something implemented at the time of a disaster; Business Continuity refers to those activities performed daily to maintain service, consistency, and recoverability.
business continuity plan:
Business Continuity Plan or "BCP" is a set of documents, instructions, and procedures which enable a business to respond to accidents, disasters, emergencies, and/or threats without any stoppage or hindrance in its key operations. It is also called a business resumption plan, disaster recovery plan, or recovery plan. Also see above explanation.
A group of adjacent binary digits that a computer processes as a unit to form a character such as the letter “Câ€. A byte consists of eight bits.
C

cable modem:
A special type of modem that connects to a local cable TV line to provide a continuous connection to the Internet. Like an analog modem, a cable modem is used to send and receive data, but the difference is that transfer speeds are much faster. A 56 Kbps modem can receive data at about 53 Kbps, while a cable modem can achieve about 1.5 Mbps (about 30 times faster). Cable modems attach to a 10Base-T Ethernet card inside your computer.
Refers to: 1) a region of computer memory where frequently accessed data can be stored for rapid access; or 2) an optional file on your hard drive where such data also can be stored. Examples: Internet Explorer and Firefox have options for defining both memory and disk cache. The act of storing data for fast retrieval is called "caching".
A challenge-response test in the form of an image of distorted text that the user must enter, used to determine whether the user is human or an automated bot.
As authorized agents for the biggest names in the telecommunications industry, IT Direct will deliver the most appropriate and cost-effective carrier solutions for your organization. IT Direct will design, implement and support all of your Data, Internet, Voice and Conferencing solutions.
Generally applies to a data input field; a case-sensitive restriction means lower-case letters are not equivalent to the same letters in upper-case. Example: "data" is not recognized as being the same word as "Data" or "DATA".
Computer-Based Training; a type of training in which a student learns a particular application by using special programs on a computer. Sometimes referred to as "CAI" (Computer-Assisted Instruction) or "CBI" (Computer-Based Instruction), although these two terms may also be used to describe a computer program used to assist a teacher or trainer in classroom instruction.
A type of disk drive that can create CD-ROMs and audio CDs. CD-R drives that feature multi session recording allow you to continue adding data to a compact disk which is very important if you plan on using the drive for backup.
Compact Disk, Read Only Memory; a high-capacity secondary storage medium. Information contained on a CD is read-only. Special CD-ROM mastering equipment available in the OIT Multimedia Lab can be reserved for creating new CDs.
CD-RW, CD-R disk:
A CD-RW disk allows you to write data onto it multiple times instead of just once (a CD-R disk). With a CD-R drive you can use a CD-RW disk just like a floppy or zip disk for backing up files, as well as for creating CD-ROMs and audio CDs.
Common Gateway Interface; a mechanism used by most web servers to process data received from a client browser (e.g., a user). CGI scripts contain the instructions that tell the web server what to do with the data.
Real-time communication between two or more users via networked-connected computers. After you enter a chat (or chat room), any user can type a message that will appear on the monitors of all the other participants. While most ISPs offer chat, it is not supported by OIT. However, the campus CMS (Carmen) supported by TELR does provide the capability for live chat among students participating in online courses.
A program or computer that connects to and requests information from a server. Examples: Internet Explorer or Firefox. A client program also may be referred to as "client software" or "client-server software".
Refers to a connection between networked computers in which the services of one computer (the server) are requested by the other (the client). Information obtained is then processed locally on the client computer.
(See below): a common shorthand for a provided cloud computing service (or even an aggregation of all existing cloud services) is "The Cloud".
A general term used to describe Internet services such as social networking services (e.g., Facebook and Twitter), online backup services, and applications that run within a Web browser. Could computing also includes computer networks that are connected over the Internet for server redundancy or cluster computing purposes.
‘Content Management System’ is the collection of procedures used to manage work flow in a collaborative environment. In a CMS, data can be defined as nearly anything: documents, movies, pictures, phone numbers, scientific data, and so forth. CMSs are frequently used for storing, controlling, revising, semantically enriching, and publishing documentation. Serving as a central repository, the CMS increases the version level of new updates to an already existing file. Version control is one of the primary advantages of a CMS.
The process of making a file smaller so that it will save disk space and transfer faster over a network. The most common compression utilities are Winrar for PC or compatible computers (.zip files) and Stuffit (.sit files) for Macintosh computers.
A term that commonly refers to accessing a remote computer; also a message that appears at the point when two modems recognize each other.
A small piece of information you may be asked to accept when connecting to certain servers via a web browser. It is used throughout your session as a means of identifying you. A cookie is specific to, and sent only to the server that generated it.
Software designed specifically for use in a classroom or other educational setting.
Central processing unit; the part of a computer that oversees all operations and calculations.
Cascading Style Sheet; a set of rules that define how web pages are displayed. Using CSS, designers can create rules that define how page elements are displayed.
A special symbol that indicates where the next character you type on your screen will appear. You use your mouse or the arrow keys on your keyboard to move the cursor around on your screen.
A term describing the world of computers and the society that uses them
D

daemon:
A special small program that performs a specific task; it may run all the time watching a system, or it can take action only when a task needs to be performed. Example: If an e-mail message is returned to you as undeliverable, you may receive a message from the mailer daemon.
A collection of information organized so that a computer application can quickly access selected information; it can be thought of as an electronic filing system. Traditional databases are organized by fields, records (a complete set of fields), and files (a collection of records). Alternatively, in a Hypertext database, any object (e.g., text, a picture, or a film) can be linked to any other object.
A data center (data centre / datacentre / datacenter) is a facility used to house computer systems and associated components, such as telecommunications and storage systems. It generally includes redundant or backup power supplies, redundant data communications connections, environmental controls (e.g., air conditioning, fire suppression) and security devices.
Opposite of compressing a file; the process of restoring the file to its original size and format. The most common programs for decompressing files are Winrar for PC and compatible computers (.zip files) and Stuffit Expander (.sit files) for Macintosh computers.
The process of rewriting parts of a file to contiguous sectors on a hard drive to increase the speed of access and retrieval.
A process used to remove magnetism from a computer monitors. Note flat-panel displays do not have a degauss button since magnetism doesn’t build up in them.
On computers like IBM PC or compatibles and Macintoshes, the backdrop where windows and icons for disks and applications reside.
Dynamic Host Configuration Protocol; a protocol that lets a server on a local network assign temporary IP addresses to a computer or other network devices.
Sometimes referred to as a window; on a graphical user interface system, an enclosed area displayed by a program or process to prompt a user for entry of information in one or more boxes (fields).
A network component within Windows that enables you to connect to a dial up server via a modem. Users running dial-up connections on Windows computers must have Dial-Up Adapter installed and properly configured.
dial up connection:
A connection from your computer that goes through a regular telephone line. You use special communications software to instruct your modem to dial a number to access another computer system or a network. May also be referred to as "dial up networking".
Intellectual content which has been digitized and can be referenced or retrieved online; for example, PowerPoint slides, audio or video files, or files created in a word processing application, etc.
Sometimes referred to as digital imaging; the act of translating an image, a sound, or a video clip into digital format for use on a computer. Also used to describe the process of converting coordinates on a map to x,y coordinates for input to a computer. All data a computer processes must be digitally encoded as a series of zeroes and ones.
Dual In-line Memory Module; a small circuit board that can hold a group of memory chips. A DIMM is capable of transferring 64 bits instead of the 32 bits each SIMM can handle. Pentium processors require a 64-bit path to memory so SIMMs must be installed two at a time as opposed to one DIMM at a time.
An area on a disk that contains files or additional divisions called "subdirectories" or "folders". Using directories helps to keep files organized into separate categories, such as by application, type, or usage.
Disaster recovery is the process, policies and procedures related to preparing for recovery or continuation of technology infrastructure critical to an organization after a natural or human-induced disaster. Disaster recovery is a subset of business continuity. While business continuity involves planning for keeping all aspects of a business functioning in the midst of disruptive events, disaster recovery focuses on the IT or technology systems that support business functions. IT Direct’s specialist Disaster Recovery Consulting Team can help you devise a near bulletproof Disaster Recovery Plan, so that you can have total piece of mind that your critical systems and processes are safe, and/or can recover from any potential data loss situation.
disaster recovery planning
Also referred to as "DRP". Please see above explanation.
Another term for an online newsgroup or forum.
May also be referred to as "online learning" or "eLearning." A means of instruction that implies a course instructor and students are separated in space and perhaps in time. Interaction may be synchronous (facilitated) or asynchronous (self-paced). Students can work with various course materials, or they may use tools like chat or discussion groups to collaborate on projects.
The goal of distance education; distance learning and distance education are often used interchangeably.
A means by which the illusion of new colors and shades is created by varying the pattern of dots; the more dither patterns a device or program supports, the more shades of gray it can represent. Also referred to as halftoning in the context of printing.
Domain Name System; a service for accessing a networked computer by name rather than by numerical, (IP) address.
Part of an Internet address. The network hierarchy consists of domains and subdomains. At the top are a number of major categories (e.g., com, edu, gov); next are domains within these categories (e.g., ohio-state); and then there are subdomains. The computer name is at the lowest level of the hierarchy.
The process of transferring one or more files from a remote computer to your local computer. The opposite action is upload.
Dots per inch; a measure of a printer’s resolution. The higher the number, the better the print quality. A minimum of 300 dpi usually is required for professional quality printing.
drag and drop:
The act of clicking on one icon and moving it on top of another icon to initiate a specific action. Example: Dragging a file on top of a folder to copy it to a new location.
Digital Subscriber Line; an always on broadband connection over standard phone lines.
Digital video disk; a type of compact disc that holds far more information than the CD-ROMs that are used for storing music files. A DVD can hold a minimum of 4.7 GB, enough for a full-length movie. MPEG-2 is used to compress video data for storage on a DVD. DVD drives are backward-compatible and can play CD-ROMs.
DVD-RW, DVD-R disk:
A DVD-RW disk allows you to write data onto it multiple times instead of just once like on a DVD-R disk. A DVD disk can hold a minimum of 4.7GB which is enough to store a full-length movie. Other uses for DVDs include storage for multimedia presentations that include both sound and graphics.
E

EAP:
Extensible Authentication Protocol; a general protocol for authentication that also supports multiple authentication methods.
Enhanced Graphics Adapter; a card (or board) usually found in older PCs that enables the monitor to display 640 pixels horizontally and 350 vertically.
Electronic learning; applies to a wide scope of processes including Web-based learning, computer-based instruction, virtual classrooms, and digital collaboration. Content may be delivered in a variety of ways including via the Internet, satellite broadcast, interactive TV, and DVD- or CD-ROMs.
Electronic mail; the exchange of messages between users who have access to either the same system or who are connected via a network (often the Internet). If a user is not logged on when a new message arrives, it is stored for later retrieval.
Email archiving is typically a stand-alone IT application that integrates with an enterprise email server, such a Microsoft Exchange. In addition to simply accumulating email messages, these applications index and provide quick, searchable access to archived messages independent of the users of the system, using different technical methods of implementation. The reasons a company may opt to implement an email archiving solution include protection of mission critical data, record retention for regulatory requirements or litigation, and reducing production email server load. IT Direct’s Cloud-based e-mail archiving service offers you the latest storage technologies in a secure, redundant and easy-to-use format. We take care of all the fine details, from configuring our archiving software to automatically transferring the files to our secure remote servers.
A combination of keyboard characters meant to represent a facial expression. Frequently used in electronic communications to convey a particular meaning, much like tone of voice is used in spoken communications. Examples: the characters for a smiley face or for a wink.
Refers to the ability of a program or device to imitate another program or device; communications software often include terminal emulation drivers to enable you to log on to a mainframe. There also are programs that enable a Mac to function as a PC.
The manipulation of data to prevent accurate interpretation by all but those for whom the data is intended.
Encapsulated PostScript; a graphics format that describes an image in the PostScript language.
A popular network technology that enables data to travel at 10 megabits per second. Campus microcomputers connected to a network have Ethernet cards installed that are attached to Ethernet cabling. An Ethernet connection is often referred to as a "direct connection" and is capable of providing data transmission speeds over 500 Kbps.
An adapter card that fits into a computer and connects to Ethernet cabling; different types of adaptor cards fit specific computers. Microcomputers connected to the campus network have some type of Ethernet card installed. Example: computers in campus offices or in dorm rooms wired for ResNet. Also referred to as "Ethernet adapter".
Also referred to as an expansion board; a circuit board you can insert into a slot inside your computer to give it added functionality. A card can replace an existing one or may be added in an empty slot. Some examples include sound, graphics, USB, Firewire, and internal modem cards.
A suffix preceded by a period at the end of a filename; used to describe the file type. Example: On a Windows computer, the extension ".exe" represents an executable file.
A cable connector that has holes and plugs into a port or interface to connect one device to another.
A single piece of information within a database (e.g., an entry for name or address). Also refers to a specific area within a dialog box or a window where information can be entered.
A collection of data that has a name (called the filename). Almost all information on a computer is stored in some type of file. Examples: data file (contains data such as a group of records); executable file (contains a program or commands that are executable); text file (contains data that can be read using a standard text editor).
Refers to: 1) a program that has the function of translating data into a different format (e.g., a program used to import or export data or a particular file); 2) a pattern that prevents non-matching data from passing through (e.g., email filters); and 3) in paint programs and image editors, a special effect that can be applied to a bit map.
A type of directory service on many UNIX systems. Queries take the format firstname_lastname (e.g., jane_doe) or for more complete information, =firstname.lastname (e.g., =jane_doe).
A method of preventing unauthorized access to or from a particular network; firewalls can be implemented in both hardware and software, or both.
A way to connect different pieces of equipment so they can quickly and easily share information. FireWire (also referred to as IEEE1394 High Performance Serial Bus) is very similar to USB. It preceded the development of USB when it was originally created in 1995 by Apple. FireWire devices are hot pluggable, which means they can be connected and disconnected any time, even with the power on. When a new FireWire device is connected to a computer, the operating system automatically detects it and prompts for the driver disk (thus the reference "plug-and-play").
A small device that plugs into computer’s USB port and functions as a portable hard drive.
A type of memory that retains information even after power is turned off; commonly used in memory cards and USB flash drives for storage and transfer of data between computers and other digital products.
An area on a hard disk that contains a related set of files or alternatively, the icon that represents a directory or subdirectory.
A complete assortment of letters, numbers, and symbols of a specific size and design. There are hundreds of different fonts ranging from businesslike type styles to fonts composed only of special characters such as math symbols or miniature graphics.
A feature of some web browsers that enables a page to be displayed in separate scrollable windows. Frames can be difficult to translate for text-only viewing via ADA guidelines, so their use is increasingly being discouraged.
Copyrighted software available for downloading without charge; unlimited personal usage is permitted, but you cannot do anything else without express permission of the author. Contrast to shareware; copyrighted software which requires you to register and pay a small fee to the author if you decide to continue using a program you download.
The scattering of parts of the same disk file over different areas of a disk; fragmentation occurs as files are deleted and new ones are added.
File Transfer Protocol; a method of exchanging files between computers via the Internet. A program like WS_FTP for IBM PC or compatibles or Fetch for Macintosh is required. Files can contain documents or programs and can be ASCII text or binary data.
G

GIF:
Graphics Interchange Format; a format for a file that contains a graphic or a picture. Files of this type usually have the suffix ".gif" as part of their name. Many images seen on web pages are GIF files.
gigabyte (Gig or GB):
1024 x 1024 x 1024 (2 to the 30th power) bytes; it’s usually sufficient to think of a gigabyte as approximately one billion bytes or 1000 megabytes.
Global Positioning System; a collection of Earth-orbiting satellites. In a more common context, GPS actually refers to a GPS receiver which uses a mathematical principle called "trilateration" that can tell you exactly where you are on Earth at any moment.
Greyware (or grayware) refers to malicious software or code that is considered to fall in the "grey area" between normal software and a virus. Greyware is a term under which all other malicious or annoying software, such as adware, spyware, trackware, other malicious code and malicious shareware, falls.
Graphical user interface; a mouse-based system that contains icons, drop-down menus, and windows where you point and click to indicate what you want to do. All new Windows and Macintosh computers currently being sold utilize this technology.
H

handshaking:
The initial negotiation period immediately after a connection is established between two modems. This is when the modems agree about how the data will be transmitted (e.g., error correction, packet size, etc.). The set of rules they agree on is called the protocol.
A storage device that holds large amounts of data, usually in the range of hundreds to thousands of megabytes. Although usually internal to the computer, some types of hard disk devices are attached separately for use as supplemental disk space. "Hard disk" and "hard drive" often are used interchangeably but technically, hard drive refers to the mechanism that reads data from the disk.
The physical components of a computer including the keyboard, monitor, disk drive, and internal chips and wiring. Hardware is the counterpart of software.
The portion of an e-mail message or a network newsgroup posting that precedes the body of the message; it contains information like who the message is from, its subject, and the date. A header also is the portion of a packet that proceeds the actual data and contains additional information the receiver will need.
A help desk is an information and assistance resource that troubleshoots problems with computers or similar products. Corporations often provide help desk support their employees and to their customers via a toll-free number, website and/or e-mail.
A program used for viewing multimedia files that your web browser cannot handle internally; files using a helper application must be moved to your computer before being shown or played. Contrast to a plug-in which enables you to view the file over the Internet without first downloading it.
A document you access using a web browser like Firefox or Internet Explorer. It usually refers to the first page of a particular web site; it also is the page that automatically loads each time you start your browser.
A computer accessed by a user working at a remote location. Also refers to a specific computer connected to a TCP/IP network like the Internet.
HyperText Markup Language; a language used for creating web pages. Various instructions and sets of tags are used to define how the document will look.
HyperText Transfer Protocol; a set of instructions that defines how a web server and a browser should interact. Example: When you open a location (e.g., enter a URL) in your browser, what actually happens is an HTTP command is sent to the web server directing it to fetch and return the requested web page.
Connects one piece of information (anchor) to a related piece of information (anchor) in an electronic document. Clicking on a hyperlink takes you to directly to the linked destination which can be within the same document or in an entirely different document. Hyperlinks are commonly found on web pages, word documents and PDF files.
Data that contains one or more links to other data; commonly seen in web pages and in online help files. Key words usually are underlined or highlighted. Example: If you look for information about "Cats" in a reference book and see a note that says "Refer also to Mammals", the two topics are considered to be linked. In a hypertext file, you click on a link to go directly to the related information.
A hypervisor, also called virtual machine manager (VMM), is one of many hardware virtualization techniques that allow multiple operating systems, termed guests, to run concurrently on a host computer. It is so named because it is conceptually one level higher than a supervisory program. The hypervisor presents to the guest operating systems a virtual operating platform and manages the execution of the guest operating systems. Multiple instances of a variety of operating systems may share the virtualized hardware resources. Hypervisors are installed on server hardware whose only task is to run guest operating systems. Non-hypervisor virtualization systems are used for similar tasks on dedicated server hardware, but also commonly on desktop, portable and even handheld computers.
I

icon:
On a system like Windows or Macintosh that uses a graphical user interface (GUI), a small picture or symbol that represents some object or function. Examples: a file folder for a directory; a rectangle with a bent corner for a file; or a miniature illustration for a program.
Internet Connection Sharing; a feature in Windows that when enabled, allows you to connect computer on your home network to the Internet via one computer.
IEEE 1394 port:
An interface for attaching high-speed serial devices to your computer; IEEE 1394 connectors support plug and play.
A graphic overlay that contains more than one area (or hot spot) which is clickable and links to another web page or anchor. Image maps provide an alternative to text links for directing the user to additional information.
Internet Message Access Protocol; a method of accessing e-mail messages on a server without downloading them to your local hard drive. This is the main difference between IMAP and POP3, which requires messages to be downloaded to a user's hard drive before the message can be read.
A worldwide network based on the TCP/IP protocol that can connect almost any make or model of popular computers from micros to supercomputers. Special programs called "clients" enable users with a network connection to do things like process e-mail or browse web sites using the familiar interface of a desktop computer.
A client program from Microsoft that comes pre installed on most new PC or compatible computers; enables you to browse the World Wide Web.
An audio broadcasting service transmitted via the Internet; broadcasts consist of a continuous stream. A drawback is the inability to control selection as you can when listening to traditional radio broadcasting.
Internet Protocol address; every computer connected to the Internet has a unique identifying number. Example: 192.168.100.2.
Internet Relay Chat; a system that enables two or more Internet users to conduct online discussions in real time.
Interrupt request; refers to a number associated with a serial port on an PC or compatible computer. It usually can be changed by flipping a dip switch. Occasionally, when you’re using a modem connect to the Internet, you may need to adjust the IRQ number assigned to the serial port which connects the modem to avoid conflicts with another device like your mouse.
Internet Service Provider; an organization or company that provides Internet connectivity.
An IT Assessment is the practice of gathering information on part or whole of a IT network infrastructure, and then presented in a detailed report. This report typically analyzes the current state or health of technology or services and identifies areas needing improvement or prepare for a some type of system or application upgrade. A IT Assessment can be performed in-house or outsourced to an IT vendor. IT Direct has developed a comprehensive assessment process that includes conducting thorough, in-depth reviews all of your critical technology areas, evaluating them against best practices and then providing you with a roadmap to better leverage your IT as a competitive advantage.
Independent Verification and Validation (IV&V) is the process of checking that a project, service, or system meets specifications and that it fulfills its intended purpose. If you’ve recently implemented a new technology solution, you may want an independent party to assess the quality of the work.
J

Java:
A general purpose programming language commonly used in conjunction with web pages that feature animation. Small Java applications are called Java applets; many can be downloaded and run on your computer by a Java-compatible browser like Firefox or Internet Explorer.
A publicly available scripting language that shares many of the features of Java; it is used to add dynamic content (various types of interactivity) to web pages.
Joint Photographic Experts Group; a graphics format which compresses an image to save space. Most images imbedded in web pages are GIFs, but sometimes the JPEG format is used (especially for detailed graphics or photographs). In some cases, you can click on the image to display a larger version with better resolution.
A word processing format in which text is formatted flush with both the left and right margins. Other options include left justified (text is lined up against the left margin) and right justified (text is lined up against the right margin).
K

K:
An abbreviation for kilobyte; it contains 1,024 bytes; in turn 1,024 kilobytes is equal to one megabyte.
Kilobits per second; a measure of data transfer speed; one Kbps is 1,000 bits per second. Example: a 28.8 Kbps modem.
An authentication system developed at the Massachusetts Institute of Technology (MIT); it enables the exchange of private information across an open network by assigning a unique key called a "ticket" to a user requesting access to secure information.
The amount of space between characters in a word; in desktop publishing, it is typically performed on pairs of letters or on a short range of text to fine-tune the character spacing.
Most often refers to a feature of text editing and database management systems; a keyword is an index entry that correlates with a specific record or document.
kilobyte (K, KB, or Kb):
1,024 (2 to the 10th power) bytes; often used to represent one thousand bytes. Example: a 720K diskette can hold approximately 720,000 bytes (or characters).
A database where information common to a particular topic is stored online for easy reference; for example, a frequently-asked questions (FAQ) list may provide links to a knowledge base.
L

LAN:
Local area network; a network that extends over a small area (usually within a square mile or less). Connects a group of computers for the purpose of sharing resources such as programs, documents, or printers. Shared files often are stored on a central file server.
A type of printer that produces exceptionally high quality copies. It works on the same principle as a photocopier, placing a black powder onto paper by using static charge on a rolling drum.
The vertical space between lines of text on a page; in desktop publishing, you can adjust the leading to make text easier to read.
learning management system (LMS):
Software used for developing, using, and storing course content of all types. Information within a learning management system often takes the form of learning objects (see "learning object" below).
A chunk of course content that can be reused and independently maintained. Although each chunk is unique in its content and function, it must be able to communicate with learning systems using a standardized method not dependent on the system. Each chunk requires a description to facilitate search and retrieval.
Another name for a hyperlink.
An open-source operating system that runs on a number of hardware platforms including PCs and Macintoshes. Linux is freely available over the Internet.
A program that manages electronic mailing lists; OIT is responsible for the ListProcessor software and also handles requests from the OSU community for new mailing lists.
An electronic mailing list; it provides a simple way of communicating with a large number of people very quickly by automating the distribution of electronic mail. At OSU, mailing lists are used not only for scholarly communication and collaboration, but also as a means of facilitating and enhancing classroom education.
log in, log on:
The process of entering your username and password to gain access to a particular computer; e.g., a mainframe, a network or secure server, or another system capable of resource sharing.
M

MAC:
Media Access Control; The hardware address of a device connected to a shared network.
A personal computer introduced in the mid-1980s as an alternative to the IBM PC. Macintoshes popularized the graphical user interface and the 3 1/2 inch diskette drive.
A networked computer dedicated to supporting electronic mail. You use a client program like Microsoft Outlook for retrieving new mail from the server and for composing and sending messages.
A collection of e-mail addresses identified by a single name; mailing lists provide a simple way of corresponding with a group of people with a common interest or bond. There are two main types of lists: 1) one you create within an e-mail program like Outlook that contains addresses for two or more individuals you frequently send the same message; and 2) a Listserve type that requires participants to be subscribed (e.g., a group of collaborators, a class of students, or often just individuals interested in discussing a particular topic).
The amount of memory physically installed in your computer. Also referred to as "RAM".
A very large computer capable of supporting hundreds of users running a variety of different programs simultaneously. Often the distinction between small mainframes and minicomputers is vague and may depend on how the machine is marketed.
A cable connector that has pins and plugs into a port or interface to connect one device to another.
Software programs designed to damage or do other unwanted actions on a computer; common examples of malware include viruses, worms, trojan horses, and spyware.
A Managed Workstation reduces downtime, improves maintenance, increases productivity and data security through an effective blend of Help Desk and on-site support and centralized deployment of software patches and virus protection updates. IT Direct can deliver expert support at the workstation level for all of your users, at any location. Using our live online support technology, our highly qualified certified technical staff, working remotely, are able to see exactly what is happening on a user’s computer screen — allowing us to quickly isolate issues and begin remediation.
Messaging Application Programming Interface; a system built into Microsoft Windows that enables different e-mail programs to interface to distribute e-mail. When both programs are MAPI-enabled, they can share messages.
megabyte (Meg or MB):
1,024 x 1,024 (2 to the 20th power) bytes; it's usually sufficient to think of a megabyte as one million bytes.
MHz or mHz:
Megahertz; a measurement of a microprocessor’s speed; one MHz represents one million cycles per second. The speed determines how many instructions per second a microprocessor can execute. The higher the megahertz, the faster the computer.
In a graphical user interface, a bar containing a set of titles that appears at the top of a window. Once you display the contents of a menu by clicking on its title, you can select any active command (e.g., one that appears in bold type and not in a lighter, gray type).
Microsoft Exchange Server is the server side of a client–server, collaborative application product developed by Microsoft. It is part of the Microsoft Servers line of server products and is used by enterprises using Microsoft infrastructure products. Exchange’s major features consist of electronic mail, calendaring, contacts and tasks; support for mobile and web-based access to information; and support for data storage. IT Direct has a 100% hosted Exchange solution that includes clustered and redundant Microsoft Exchange servers that provide more then enough horsepower to support all of your organization’s messaging needs. And we handle the entire set-up and configuration for you.
A group of operating systems for PC or compatible computers; Windows provides a graphical user interface so you can point and click to indicate what you want to do.
Multipurpose Internet Mail Extensions; a protocol that enables you to include various types of files (text, audio, video, images, etc.) as an attachment to an e-mail message.
A device that enables a computer to send and receive information over a normal telephone line. Modems can either be external (a separate device) or internal (a board located inside the computer’s case) and are available with a variety of features such as error correction and data compression.
A person who reviews and has the authority to block messages posted to a supervised or "moderated" network newsgroup or online community.
The part of a computer that contains the screen where messages to and from the central processing unit (CPU) are displayed. Monitors come in a variety of sizes and resolutions. The higher the number of pixels a screen is capable of displaying, the better the resolution. Sometimes may be referred to as a CRT.
A handheld device used with a graphical user interface system. Common mouse actions include: 1) clicking the mouse button to select an object or to place the cursor at a certain point within a document; 2) double-clicking the mouse button to start a program or open a folder; and 3) dragging (holding down) the mouse button and moving the mouse to highlight a menu command or a selected bit of text.
Motion Picture Experts Group; a high quality video format commonly used for files found on the Internet. Usually a special helper application is required to view MPEG files.
The delivery of information, usually to a personal computer, in a combination of different formats including text, graphics, animation, audio, and video.
The ability of a CPU to perform more than one operation at the same time; Windows and Macintosh computers are multitasking in that each program that is running uses the CPU only for as long as needed and then control switches to the next task.
N
nameserver:
A computer that runs a program for converting Internet domain names into the corresponding IP addresses and vice versa.
Network Address Translation; a standard that enables a LAN to use a set of IP addresses for internal traffic and a single IP address for communications with the Internet.
A group of interconnected computers capable of exchanging information. A network can be as few as several personal computers on a LAN or as large as the Internet, a worldwide network of computers.
A device that connects your computer to a network; also called an adapter card or network interface card.
A common connection point for devices on a network.
Network News Transport Protocol; the protocol used for posting, distributing, and retrieving network news messages.
IT Direct’s Cloud-based Network Monitoring service, can configure and remotely monitor all of your important network systems (e-mail, servers, routers, available disk space, backup applications, critical virus detection, and more). If our system detects a problem, it alerts the IT Direct Technical Support Center, so we can take corrective action. Depending on prearranged instructions from your own network engineers, we’ll correct the problem immediately, wait until the next business day or simply notify you of the issue.
Network security consists of the provisions and policies adopted by a network administrator to prevent and monitor unauthorized access, misuse, modification, or denial of the computer network and network-accessible resources. Network Security is the authorization of access to data in a network, which is controlled by a network administrator. IT Direct uses state-of-the-art network security techniques while providing authorized personnel access to important files and applications. Every organization’s needs are different and hackers are always adapting their techniques, so we are extremely serious about staying up to date with the latest network security tools, threats and industry developments.
O
OCR:
Optical character recognition; the act of using a visual scanning device to read text from hard copy and translate it into a format a computer can access (e.g., an ASCII file). OCR systems include an optical scanner for reading text and sophisticated software for analyzing images.
IT Direct realizes that businesses are moving more and more of their critical infrastructure to Cloud-based providers. ‘On-Cloud’ is currently our own term coined for providing management and support for your Cloud-based systems and processes.
At-place-of-work-or-business support, typically provided by a technically qualified individual.
A term that has commonly come to mean "connected to the Internet". It also is used to refer to materials stored on a computer (e.g., an online newsletter) or to a device like a printer that is ready to accept commands from a computer.
OpenType is a format for scalable computer fonts. It was built on its predecessor TrueType, retaining TrueType’s basic structure and adding many intricate data structures for prescribing typographic behavior. OpenType is a registered trademark of Microsoft Corporation.
P
packet:
A unit of transmission in data communications. The TCP/IP protocol breaks large data files into smaller chunks for sending over a network so that less data will have to be re-transmitted if errors occur.
The range of colors a computer or an application is able to display. Most newer computers can display as many as 16 million colors, but a given program may use only 256 of them. Also refers to a display box containing a set of related tools within a desktop publishing or graphics design program.
Refers to an HTML document on the World Wide Web or to a particular web site; usually pages contain links to related documents (or pages).
An interface on a computer that supports transmission of multiple bits at the same time; almost exclusively used for connecting a printer. On IBM or compatible computers, the parallel port uses a 25-pin connector. Macintoshes have an SCSI port that is parallel, but more flexible in the type of devices it can support.
A secret combination of characters used to access a secured resource such as a computer, a program, a directory, or a file; often used in conjunction with a username.
Usually refers to an IBM PC or compatible, or when used generically, to a "personal computer". In a different context, PC also is an abbreviation for "politically correct."
Personal Digital Assistant; a small hand-held computer that in the most basic form, allows you to store names and addresses, prepare to-do lists, schedule appointments, keep track of projects, track expenditures, take notes, and do calculations. Depending on the model, you also may be able to send or receive e-mail; do word processing; play MP3 music files; get news, entertainment and stock quotes from the Internet; play video games; and have an integrated digital camera or GPS receiver.
Portable Document Format; a type of formatting that enables files to be viewed on a variety of computers regardless of the program originally used to create them. PDF files retain the "look and feel" of the original document with special formatting, graphics, and color intact. You use a special program or print driver (Adobe Distiller or PDF Writer) to convert a file into PDF format.
A type of connection between two computers; both perform computations, store data, and make requests from each other (unlike a client-server connection where one computer makes a request and the other computer responds with information).
Practical Extraction and Report Language; a programming language that is commonly used for writing CGI scripts used by most servers to process data received from a client browser.
A method of setting up a computer or a program for multiple users. Example: In Windows, each user is given a separate "personality" and set of relevant files.
Pretty good privacy; a technique for encrypting e-mail messages. PGP uses a public key to give to anyone who sends you messages and a private key you keep to decrypt messages you receive.
A type of directory service often referred to as a "phone book". When accessing this type of directory service, follow the directions from the particular site for looking up information.
A con that scammers use to electronically collect personal information from unsuspecting users. Phishers send e-mails that appear to come from legitimate websites such as eBay, PayPal, or other banking institutions asking you to click on a link included in the email and then update or validate your information by entering your username and password and often even more information, such as your full name, address, phone number, social security number, and credit card number.
Packet Internet Groper; a utility used to determine whether a particular computer is currently connected to the Internet. It works by sending a packet to the specified IP address and waiting for a reply.
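As a rough illustration of the idea, the sketch below simply invokes the system ping utility from Python; the host name is a placeholder, and the count flag is assumed to be -c on Unix-like systems and -n on Windows.

```python
# Minimal reachability check by invoking the system "ping" utility.
# The count flag is "-c" on Unix-like systems and "-n" on Windows.
import platform
import subprocess

def is_reachable(host: str) -> bool:
    count_flag = "-n" if platform.system() == "Windows" else "-c"
    result = subprocess.run(["ping", count_flag, "1", host],
                            stdout=subprocess.DEVNULL,
                            stderr=subprocess.DEVNULL)
    return result.returncode == 0

print(is_reachable("example.com"))
```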
Stands for one picture element (one dot on a computer monitor); commonly used as a unit of measurement.
A program used for viewing multimedia files that your web browser cannot handle internally; files using a plug-in do not need to be moved to your computer before being shown or played. Contrast to a helper application which requires the file to first be moved to your computer. Examples of plug-ins: Adobe Flash Player (for video and animation) and Quicktime (for streamed files over the Internet).
plug and play:
A set of specifications that allows a computer to automatically detect and configure a device and install the appropriate device drivers.
Post Office Protocol; a method of handling incoming electronic mail. Example: E-mail programs may use this protocol for storing your incoming messages on a special cluster of servers called pop.service.ohio-state.edu and delivering them when requested.
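A minimal sketch of how a mail client might check and read messages over POP using Python's standard poplib module; the server name and credentials shown are placeholders.

```python
# Sketch of retrieving mail over POP3 with Python's poplib.
# Server name and credentials are placeholders, not real accounts.
import poplib

conn = poplib.POP3_SSL("pop.example.com")
conn.user("someuser")
conn.pass_("secret")

num_messages, mailbox_size = conn.stat()
print(f"{num_messages} messages, {mailbox_size} bytes on the server")

# Fetch only the headers of the first message (0 body lines).
if num_messages:
    response, lines, octets = conn.top(1, 0)
    print(b"\r\n".join(lines).decode("utf-8", errors="replace"))

conn.quit()
```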
Any application that disables the pop-up, pop-over, or pop-under ad windows that appear when you use a web browser.
The act of sending a message to a particular network newsgroup.
A page description language primarily used for printing documents on laser printers; it is the standard for desktop publishing because it takes advantage of high resolution output devices. Example: A graphic design saved in PostScript format looks much better when printed on a 600 dpi printer than on a 300 dpi printer.
Called outline or scalable fonts; with a single typeface definition, a PostScript printer can produce many other fonts. Contrast to non-PostScript printers that represent fonts with bitmaps and require a complete set for each font size.
Point-to-Point Protocol; a type of connection over telephone lines that gives you the functionality of a direct ethernet connection.
A set of instructions that tells a computer how to perform a specific task.
Private cloud (also called internal cloud or corporate cloud) is a term for a proprietary computing architecture that provides hosted services to a limited number of users behind a secure and robust infrastructure. An IT Direct private cloud solution is designed to offer the same features and benefits of shared cloud systems, but removes a number of objections to the cloud computing model including control over enterprise and customer data, worries about security, and issues connected to regulatory compliance. IT Direct Private clouds are designed to facilitate organizations that needs or wants more control over their data than they can get by using a third-party shared cloud service.
A set of rules that regulate how computers exchange information. Example: error checking for file transfers or POP for handling electronic mail.
Refers to a special kind of server that functions as an intermediate link between a client application (like a web browser) and a real server. The proxy server intercepts requests for information from the real server and whenever possible, fills the request. When it is unable to do so, the request is forwarded to the real server.
public domain software:
Any non-copyrighted program; this software is free and can be used without restriction. Often confused with "freeware" (free software that is copyrighted by the author).
Frequently used to describe data sent over the Internet; the act of requesting data from another computer. Example: using your web browser to access a specific page. Contrast to "push" technology when data is sent to you without a specific request being made.
Frequently used to describe data sent over the Internet; the act of sending data to a client computer without the client requesting it. Example: a subscription service that delivers customized news to your desktop. Contrast to browsing the World Wide Web which is based on "pull" technology; you must request a web page before it is sent to your computer.
Q
QoS:
Quality of service; is the ability to provide different priority to different applications, users, or data flows, or to guarantee a certain level of performance to a data flow. For example, a required bit rate, delay, jitter, packet dropping probability and/or bit error rate may be guaranteed. Quality of service guarantees are important if the network capacity is insufficient, especially for real-time streaming multimedia applications such as voice over IP, online games and IP-TV, since these often require fixed bit rate and are delay sensitive, and in networks where the capacity is a limited resource, for example in cellular data communication.
A video format developed by Apple Computer commonly used for files found on the Internet; an alternative to MPEG. A special viewer program available for both IBM PC and compatibles and Macintosh computers is required for playback.
R
RAM:
Random Access Memory; the amount of memory available for use by programs on a computer. Also referred to as "main memory". Example: A computer with 8 MB RAM has approximately 8 million bytes of memory available. Contrast to ROM (read-only memory) that is used to store programs that start your computer and do diagnostics.
A set of fields that contain related information; in database type systems, groups of similar records are stored in files. Example: a personnel file that contains employment information.
A database used by Windows for storing configuration information. Most 32-bit Windows applications write data to the registry. Although you can edit the registry, this is not recommended unless absolutely necessary because errors could disable your computer.
A remote, online, or managed backup service is a service that provides users with a system for the backup and storage of computer files. IT Direct’s remote backup solution incorporates automatic data compression and secure data encryption. This means that your critical system data backs up safely and efficiently. For additional peace of mind, our backup service features proprietary dual tapeless backup protection, including fast incremental backup to a secure on-site hard drive and a second backup to our carrier-grade data center. Our remote backup service is completely automated and immensely secure. You’ll never have to think about the safety of your data again.
A Windows feature that allows you to have access to a Windows session from another computer in a different location (XP and later).
An interactive connection from your desktop computer over a network or telephone lines to a computer in another location (remote site).
See: "network monitoring"
See: "help desk"
Red, green, and blue; the primary colors that are mixed to display the color of pixels on a computer monitor. Every color of emitted light can be created by combining these three colors in varying levels.
An eight-wire connector used for connecting a computer to a local-area network. May also be referred to as an Ethernet connector.
Read Only Memory; a special type of memory used to store programs that start a computer and do diagnostics. Data stored in ROM can only be read and cannot be removed even when your computer is turned off. Most personal computers have only a few thousand bytes of ROM. Contrast to RAM (random access or main memory) which is the amount of memory available for use by programs on your computer.
A device used for connecting two Local Area Networks (LANs); routers can filter packets and forward them according to a specified set of criteria.
Rich Text Format; a type of document formatting that enables special characteristics like fonts and margins to be included within an ASCII file. May be used when a document must be shared among users with different kinds of computers (e.g., IBM PC or compatibles and Macintoshes).
S
safe mode:
A way of starting your Windows computer that can help you diagnose problems; access is provided only to basic files and drivers.
A storage area network (SAN) is a dedicated storage network that provides access to consolidated, block level storage. SANs primarily are used to make storage devices (such as disk arrays, tape libraries, and optical jukeboxes) accessible to servers so that the devices appear as locally attached to the operating system. A SAN typically has its own network of storage devices that are generally not accessible through the regular network by regular devices.
Serial Advanced Technology Attachment or Serial ATA. An interface used to connect ATA hard drives to a computer’s motherboard that provides a better, more efficient interface; Serial ATA is likely to replace the previous standard, Parallel ATA (PATA), which has become dated.
A method of data transmission; the sender beams data up to an orbiting satellite and the satellite beams the data back down to the receiver.
A software program that translates text on a Web page into audio output; typically used by individuals with vision impairment.
In a graphical user interface system, the narrow rectangular bar at the far right of windows or dialog boxes. Clicking on the up or down arrow enables you to move up and down through a document; a movable square indicates your location in the document. Certain applications also feature a scroll bar along the bottom of a window that can be used to move from side-to-side.
A tool that searches documents by keyword and returns a list of possible matches; most often used in reference to programs such as Google that are used by your web browser to search the Internet for a particular topic.
A special type of file server that requires authentication (e.g., entry a valid username and password) before access is granted.
A small device used to provide an additional level of authorization to access a particular network service; the token itself may be embedded in some type of object like a key fob or on a smart card. Also referred to as an authentication token.
A 1998 amendment to the Workforce Rehabilitation Act of 1973; it states after June 25, 2001, all electronic and information technology developed, purchased, or used by the federal government must be accessible to those with disabilities. Refer to the Section 508 website for more information.
A type of compressed file that you can execute (e.g., double-click on the filename) to begin the decompression process; no other decompression utility is required. Example: on IBM PC or compatibles, certain files with an ".exe" extension and on Macintoshes, all files with a ".sea" extension.
An interface on a computer that supports transmission of a single bit at a time; can be used for connecting almost any type of external device including a mouse, a modem, or a printer.
A computer that is responsible for responding to requests made by a client program (e.g., a web browser or an e-mail program) or computer. Also referred to as a "file server".
Copyrighted software available for downloading on a free, limited trial basis; if you decide to use the software, you’re expected to register and pay a small fee. By doing this, you become eligible for assistance and updates from the author. Contrast to public domain software which is not copyrighted or to freeware which is copyrighted but requires no usage fee.
A file containing a bit of personal information that you can set to be automatically appended to your outgoing e-mail messages; many network newsreaders also have this capability. Large signatures over five lines generally are frowned upon.
Single In-line Memory Module; a small circuit board that can hold a group of memory chips; used to increase your computer’s RAM in increments of 1,2, 4, or 16 MB.
Simple Mail Transfer Protocol; a method of handling outgoing electronic mail.
Any program that performs a specific function. Examples: word processing, spreadsheet calculations, or electronic mail.
Email spam, also known as junk email or unsolicited bulk email (UBE), is a subset of spam that involves nearly identical messages sent to numerous recipients by email. Definitions of spam usually include the aspects that email is unsolicited and sent in bulk. Spammers collect email addresses from chatrooms, websites, customer lists, newsgroups, and viruses which harvest users' address books, and are sold to other spammers. They also use a practice known as "email appending" or "epending" in which they use known information about their target (such as a postal address) to search for the target's email address. Also see "Anti-Spam".
Service Set Identifier; a name that identifies a wireless network.
streaming (streaming media):
A technique for transferring data over the Internet so that a client browser or plug-in can start displaying it before the entire file has been received; used in conjunction with sound and pictures. Example: The Flash Player plug-in from Adobe Systems gives your computer the capability for streaming audio; RealPlayer is used for viewing sound and video.
spyware:
Any software that covertly gathers user information, usually for advertising purposes, through the user's Internet connection.
subdirectory:
An area on a hard disk that contains a related set of files; on IBM PC or compatibles, a level below another directory. On Macintoshes, subdirectories are referred to as folders.
Super VGA (Video Graphics Array); a set of graphics standards for a computer monitor that offers greater resolution than VGA. There are several different levels including 800 x 600 pixels, 1024 by 768 pixels, 1280 by 1024 pixels; and 1600 by 1200 pixels. Although each supports a palette of 16 million colors, the number of simultaneous colors is dependent on the amount of video memory installed in the computer.
T
T-1 carrier:
A dedicated phone connection supporting data rates of 1.544Mbits per second; T-1 lines are a popular leased line option for businesses connecting to the Internet and for Internet Service Providers connecting to the Internet backbone. Sometimes referred to as a DS1 line.
A dedicated phone connection supporting data rates of about 43 Mbps; T-3 lines are used mainly by Internet Service Providers connecting to the Internet backbone and for the backbone itself. Sometimes referred to as a DS3 line.
An adaptation of the Ethernet standard for Local Area Networks that refers to running Ethernet over twisted pair wires. Students planning on using ResNet from a residence hall must be certain to use an Ethernet adapter that is 10Base-T compatible and not BNC (used with 10Base-2 Ethernet systems).
With reference to web design, a method for formatting information on a page. Use of tables and the cells within also provide a way to create columns of text. Use of tables vs frames is recommended for helping to make your web site ADA-compliant.
Transmission Control Protocol/Internet Protocol; an agreed upon set of rules that tells computers how to exchange information over the Internet. Other Internet protocols like FTP, Gopher, and HTTP sit on top of TCP/IP.
Telephony encompasses the general use of equipment to provide voice communication over distances, specifically by connecting telephones to each other. IT Direct’s expert team of telecommunication consultants can design and implement a system that is feature rich, simple to use and integrates seamlessly with your existing business applications.
A generic term that refers to the process of opening a remote interactive login session regardless of the type of computer you’re connecting to.
The act of using your desktop computer to communicate with another computer like a UNIX or IBM mainframe exactly as if you were sitting in front of a terminal directly connected to the system. Also refers to the software used for terminal emulation. Examples: the Telnet program for VT100 emulation and QWS3270 (Windows) and TN3270 (Macintosh) for IBM3270 fullscreen emulation.
Tag Image File Format; a popular file format for storing bit-mapped graphic images on desktop computers. The graphic can be any resolution and can be black and white, gray-scale, or color. Files of this type usually have the suffix ".tif" as part of their name.
A group of bits transferred between computers on a token-ring network. Whichever computer has the token can send data to the other systems on the network which ensures only one computer can send data at a time. A token may also refer to a network security card, also known as a hard token.
On a graphical user interface system, a bar near the top of an application window that provides easy access to frequently used options.
A harmless-looking program designed to trick you into thinking it is something you want, but which performs harmful acts when it runs.
A technology for outline fonts that is built into all Windows and Macintosh operating systems. Outline fonts are scalable enabling a display device to generate a character at any size based on a geometrical description.
An update of 140 characters or less published by a Twitter user meant to answer the question, "What are you doing?" which provides other users with information about you.
A service that allows users to stay connected with each other by posting updates, or "tweets," using a computer or cell phone or by viewing updates posted by other users.
twisted pair cable:
A type of cable that is typically found in telephone jacks; two wires are independently insulated and are twisted around each other. The cable is thinner and more flexible than the coaxial cable used in conjunction with 10Base-2 or 10Base-5 standards. Most Ohio State UNITS telephone jacks have three pairs of wires; one is used for the telephone and the other two can be used for 10Base-T Ethernet connections.
An extra level of security achieved using a security token device; users have a personal identification number (PIN) that identifies them as the owner of a particular token. The token displays a number which is entered following the PIN number to uniquely identify the owner to a particular network service. The identification number for each user is changed frequently, usually every few minutes.
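The sketch below is a simplified illustration of how such a frequently changing token code can be derived from a shared secret and the current time; real tokens follow RFC 6238 (TOTP), and the secret shown is a placeholder.

```python
# Simplified illustration of a time-based one-time code, as displayed by
# many security tokens. Real systems follow RFC 6238 (TOTP) and use a
# base32-encoded shared secret; the secret below is a placeholder.
import hashlib
import hmac
import struct
import time

def one_time_code(secret: bytes, interval: int = 30, digits: int = 6) -> str:
    counter = int(time.time()) // interval            # changes every 30 s
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(one_time_code(b"shared-secret"))                # e.g. "492039"
```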
U
UNIX:
A popular multitasking computer system often used as a server for electronic mail or for a web site. UNIX also is the leading operating system for workstations, although increasingly there is competition from Windows NT which offers many of the same features while running on an PC or compatible computer.
The process of transferring one or more files from your local computer to a remote computer. The opposite action is download.
Universal Serial Bus; a connector on the back of almost any new computer that allows you to quickly and easily attach external devices such as mice, joysticks or flight yokes, printers, scanners, modems, speakers, digital cameras or webcams, or external storage devices. Current operating systems for Windows and Macintosh computers support USB, so it’s simple to install the device drivers. When a new device is connected, the operating system automatically activates it and begins communicating. USB devices can be connected or disconnected at any time.
A name used in conjunction with a password to gain access to a computer system or a network service.
Uniform Resource Locator; a means of identifying resources on the Internet. A full URL consists of three parts: the protocol (e.g., FTP, gopher, http, nntp, telnet); the server name and address; and the item's path. The protocol describes the type of item and is always followed by a colon (:). The server name and address identifies the computer where the information is stored and is preceded by two slashes (//). The path shows where an item is stored on the server and what the file is called; each segment of the location is preceded by a single slash (/).
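As an illustration, Python's standard urllib module can split a URL into the three parts described above; the URL shown is a made-up example.

```python
# Splitting a URL into the parts described above with Python's urllib.
from urllib.parse import urlparse

url = "http://www.example.com/docs/glossary.html"
parts = urlparse(url)
print(parts.scheme)   # http                  (the protocol)
print(parts.netloc)   # www.example.com       (the server name and address)
print(parts.path)     # /docs/glossary.html   (the item's path)
```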
An interface used for connecting a Universal Serial Bus (USB) device to computer; these ports support plug and play.
Commonly refers to a program used for managing system resources such as disk drives, printers, and other devices; utilities sometimes are installed as memory-resident programs. Example: the suite of programs called Norton Utilities for disk copying, backups, etc.
A method of converting files into an ASCII format that can be transmitted over the Internet; it is a universal protocol for transferring files between different platforms like UNIX, Windows, and Macintosh and is especially popular for sending e-mail attachments.
V
virtualization:
Virtualization is the creation of a virtual (rather than actual) version of something, such as a hardware platform, operating system, a storage device or network resources. In hardware virtualization, the term host machine refers to the actual machine on which the virtualization takes place; the term guest machine, however, refers to the virtual machine. Likewise, the adjectives host and guest are used to help distinguish the software that runs on the actual machine from the software that runs on the virtual machine. The software or firmware that creates a virtual machine on the host hardware is called a hypervisor or Virtual Machine Monitor.
An online environment where students can have access to learning tools any time. Interaction between the instructor and the class participants can be via e-mail, chat, discussion group, etc.
Virtual hosting is a method for hosting multiple domain names on a computer using a single IP address. This allows one machine to share its resources, such as memory and processor cycles, to use its resources more efficiently. IT Direct Virtual Hosting provides a high-performance hosting platform for your organization’s online presence. Maintained by our specialist support staff and 24×7 active monitoring systems, we work hard to meet all of your hosted Web server needs.
A technique that enables a certain portion of hard disk space to be used as auxiliary memory so that your computer can access larger amounts of data than its main memory can hold at one time.
An artificial environment created with computer hardware and software to simulate the look and feel of a real environment. A user wears earphones, a special pair of gloves, and goggles that create a 3D display. Examples: manipulating imaginary 3D objects by "grabbing" them, taking a tour of a "virtual" building, or playing an interactive game.
A program intended to alter data on a computer in an invisible fashion, usually for mischievous or destructive purposes. Viruses are often transferred across the Internet as well as by infected diskettes and can affect almost every type of computer. Special antivirus programs are used to detect and eliminate them.
Voice over Internet Protocol; a means of using the Internet as the transmission medium for phone calls. An advantage is you do not incur any additional surcharges beyond the cost of your Internet access.
Virtual Private Networking; a means of securely accessing resources on a network by connecting to a remote access server through the Internet or other network.
A type of terminal emulation required when you open an interactive network connection (telnet) to a UNIX system from your desktop computer.
W
WAIS:
Wide Area Information Server; a program for finding documents on the Internet. Usually found on gopher servers to enable searching text-based documents for a particular keyword.
Wide Area Network; a group of networked computers covering a large geographical area (e.g., the Internet).
Wireless Application Protocol; a set of communication protocols for enabling wireless access to the Internet.
Wired Equivalent Privacy; a security protocol for wireless local area networks defined in the 802.11b standard. WEP provides the same level of security as that of a wired LAN.
Wireless Fidelity; a generic term from the Wi-Fi Alliance that refers to any type of 802.11 network (e.g., 802.11b, 802.11a, dual-band, etc.). Products approved as "Wi-Fi Certified" (a registered trademark) are certified as interoperable with each other for wireless communications.
A special character provided by an operating system or a particular program that is used to identify a group of files or directories with a similar characteristic. Useful if you want to perform the same operation simultaneously on more than one file. Example: the asterisk (*) that can be used in DOS to specify a groups of files such as *.txt.
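A small illustration of the *.txt example above using Python's standard fnmatch module; the file names are placeholders.

```python
# Matching a group of file names against the "*.txt" wildcard pattern.
import fnmatch

files = ["notes.txt", "report.doc", "readme.txt", "image.gif"]
print(fnmatch.filter(files, "*.txt"))   # ['notes.txt', 'readme.txt']

# The glob module applies the same patterns directly to the file system:
# import glob; glob.glob("*.txt")
```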
On a graphical user interface system, a rectangular area on a display screen. Windows are particularly useful on multitasking systems which allow you to perform a number of different tasks simultaneously. Each task has its own window which you can click on to make it the current process. Contrast to a "dialog box" which is used to respond to prompts for input from an application.
A casual way of referring to the Microsoft Windows operating systems.
The ability to access the Internet without a physical network connection. Devices such as cell phones and PDAs that allow you to send and receive e-mail use a wireless Internet connection based on a protocol called WAP (Wireless Application Protocol). At this point, web sites that contain wireless Internet content are limited, but will multiply as the use of devices relying on WAP increases.
A special utility within some applications that is designed to help you perform a particular task. Example: the wizard in Microsoft Word that can guide you through creating a new document.
Wireless Local Area Network; the computers and devices that make up a wireless network.
A graphical user interface (GUI) computer with computing power somewhere between a personal computer and a minicomputer (although sometimes the distinction is rather fuzzy). Workstations are useful for development and for applications that require a moderate amount of computing power and relatively high quality graphics capabilities.
World Wide Web:
A hypertext-based system of servers on the Internet. Hypertext is data that contains one or more links to other data; a link can point to many different types of resources including text, graphics, sound, animated files, a network newsgroup, a telnet session, an FTP session, or another web server. You use a special program called a "browser" (e.g., Firefox or Internet Explorer) for viewing World Wide Web pages. Also referred to as "WWW" or "the web".
A program that makes copies of itself and can spread outside your operating system; worms can damage computer data and security in much the same way as viruses.
Wi-Fi Protected Access; a standard designed to improve on the security features of WEP.
An abbreviation for World Wide Web.
What You See Is What You Get; a kind of word processor that does formatting so that printed output looks identical to what appears on your screen.
X
X2:
A technology that enables data transmission speeds up to 56 Kbps using regular telephone service that is connected to switching stations by high-speed digital lines. This technology affects only transmissions coming into your computer, not to data you send out. In addition, your ISP must have a modem at the other end that supports X2.
Extensible Hypertext Markup Language. A spinoff of the hypertext markup language (HTML) used for creating Web pages. It is based on the HTML 4.0 syntax, but has been modified to follow the guidelines of XML and is sometimes referred to as HTML 5.0.
Extensible Markup Language; A markup language for coding web documents that allows designers to create their own customized tags for structuring a page.
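For illustration, the sketch below parses a tiny document with customized tags using Python's built-in ElementTree parser; the tag names and content are invented for the example.

```python
# Parsing a small document that uses custom tags with Python's
# built-in ElementTree parser (the tag names here are made up).
import xml.etree.ElementTree as ET

document = """
<glossary>
  <entry term="XML">A markup language for coding web documents.</entry>
  <entry term="XHTML">A spinoff of HTML that follows XML rules.</entry>
</glossary>
"""

root = ET.fromstring(document)
for entry in root.findall("entry"):
    print(entry.get("term"), "-", entry.text.strip())
```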
Z
zero-day:
zero-day (or zero-hour or day zero) attack, threat or virus is a computer threat that tries to exploit computer application vulnerabilities that are unknown to others or the software developer, also called zero-day vulnerabilities. Zero-day exploits (actual software that uses a security hole to carry out an attack) are used or shared by attackers before the developer of the target software knows about the vulnerability.
A common file compression format for PC or compatibles; the utility WinZip or Winrar is used for compressing and decompressing files. Zipped files usually end with a ".zip" file extension. A special kind of zipped file is self-extracting and ends with a ".exe" extension. Macintosh OSX also supports the .zip format and has tools that can compress and decompress zip files.
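A brief illustration of creating and reading a .zip archive with Python's standard zipfile module; the archive and file names are placeholders.

```python
# Creating and reading a .zip archive with Python's zipfile module
# (archive and file names are placeholders).
import zipfile

with zipfile.ZipFile("archive.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("readme.txt", "Files in this archive are compressed.")

with zipfile.ZipFile("archive.zip") as zf:
    print(zf.namelist())                   # ['readme.txt']
    print(zf.read("readme.txt").decode())
```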
A high capacity floppy disk drive from Iomega Corporation; the disks it uses are a little bit larger than a conventional diskette and are capable of holding 100 MB or 250 MB of data.
The act of enlarging a portion of an onscreen image for fine detail work; most graphics programs have this capability. | 2 | 2 |
Author: RAMKUMAR S & MYTHILI S
An electronic health record is a computer-based system in which collected data support clinical research, clinical registry functions, administrative functions, and quality improvement. Natural language processing (NLP) converts this linguistic information into a structured form, i.e. numerical codes; the process of converting information into a structured form is known as clinical coding. Clinical coding is divided into manual coding, which is performed by human coders, and automated clinical coding, which is based on artificial intelligence technology. The main classification systems used for coding include the ICD, CPT, SNOMED and HCPCS. This article gives a brief outline of the ICD-10, CPT and SNOMED systems.
Electronic health records (EHRs) have become common ground in healthcare, and their use has grown rapidly over the past few years(1). EHRs have been proposed as a means of improving the availability, completeness, and legibility of patient information when recording any disease(2). Nearly three-quarters of physicians generate reports using EHRs, and with this approach the medical field is moving towards electronic documentation. Recent EHR systems have tended to adopt more active roles in the clinical field(3). An EHR system entails two relevant aspects: first, the analysis of clinical data to inform the physician's medical decisions for the patient(4); second, the exploitation of clinical, administrative, and health data, which include demographic data, prescriptions, etc.(5) Hospitals and general healthcare providers rely heavily on medical coding as a tool to record a patient's diagnosis information and the medical services provided by the physician(6).
These medical codes provide access to medical records by allowing information to be retrieved for administrative, educational and medical research purposes(7). The codes also facilitate payment for health services, evaluation of patients' use of healthcare facilities, study of healthcare costs, prediction of healthcare trends, and planning of healthcare tools for future needs(8). The purpose of coding is to provide consistent and comparable clinical information about the patients in a locality over a period of time; these data are used to improve the healthcare planning and policy of the governing authority and help the healthcare sector to understand the epidemiological conditions of a specific area(9). Classification systems such as ICD-10 [International Classification of Diseases, 10th Revision] are used to convert textual data into structured data. Clinical coding is a non-trivial task of data collection that is used for billing purposes in the US; the process includes abstraction and summarisation of the collected data. The US uses an ICD-10-based coding system and the UK uses a coding system based on the NHS [National Health Service](10-13).
Figure 1 clinical coding outline
AUTOMATED CLINICAL CODING (ACC)
Automated medical coding is part of the electronic health record; it encompasses different computer-based approaches that transform narrative records into structured records, performing standard coding without human interaction(14). This system of clinical coding may be automated through artificial intelligence (AI) techniques(15), carried out by means of NLP and machine learning(16). AI has become one of the most promising approaches in the field of medical coding, providing data in a compact form(17). ACC is a potential AI application used in managing the clinical records of research laboratories and healthcare centres(18-20).
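To make the idea concrete, the toy sketch below maps free-text diagnosis phrases to candidate ICD-10 codes with a simple keyword lookup. This is only an illustration of the concept: a real automated clinical coding system uses NLP and machine-learning models rather than a hand-written table, and the keyword-to-code pairs shown are examples chosen for the sketch.

```python
# Toy illustration of the idea behind automated clinical coding: map
# free-text diagnosis statements to candidate ICD-10 codes by keyword.
# A real ACC system uses NLP/machine-learning models, not a lookup table,
# and the mappings below are illustrative only.
KEYWORD_TO_ICD10 = {
    "type 2 diabetes": "E11",
    "hypertension": "I10",
    "asthma": "J45",
}

def suggest_codes(note: str) -> list[str]:
    text = note.lower()
    return [code for keyword, code in KEYWORD_TO_ICD10.items()
            if keyword in text]

note = "Patient with long-standing hypertension and Type 2 diabetes."
print(suggest_codes(note))   # ['E11', 'I10']
```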
In this paper, we summarise the ICD-10, CPT, and SNOMED clinical coding systems and their use in the various medical sectors.
Figure 2 Automated coding workflow
ICD-10 [International Classification of Diseases]
The ICD is a classification of diseases released by the World Health Organization (WHO); it defines the universe of diseases, injuries, disorders, and other health-related conditions and classifies them according to standard criteria(21). First published in 1893, the ICD has become an important index in the management of medical records, administrative records, health insurance, and literature records regarding diseases(22). At present most institutions use ICD-10 codes, on which diagnosis-related group subsidies for inpatients are based, and rely mainly on manual coding done by licensed and professional disease coders(23). ICD-10 consists of more than 60,000 codes(24-27). The system is time-consuming and labour-intensive, and the rules of ICD-10 are complicated even for the disease coders(21,28,29).
Figure 3 History in the development of ICD 10
CPT (Current Procedural Terminology):
CPT was developed in 1966 by the American Medical Association (AMA) with data from the national medical specialty societies(30,31). It is the most commonly used system of procedure and billing codes for medical, surgical, and diagnostic services(31,33). CPT is classified into three categories:
Category I CPT: these codes are released annually and describe distinct medical procedures or services furnished by qualified healthcare professionals (QHPs); they are five-digit numeric codes.(34)
Category II CPT: these are performance measurement codes. They are released three times a year and are alphanumeric codes.(35)
Category III CPT: these are temporary codes created for new and emerging technologies to allow data collection and assessment of new services. They are released biannually and are also alphanumeric codes.
COMPARISON BETWEEN CATEGORY I, II & III CPT CODES
|CATEGORY I (34,36)||CATEGORY II (35,36)||CATEGORY III(36-39)|
|Describe the distinct medical procedures or services furnished by QHPs (qualified healthcare professionals)||These are performance measurement codes||These are temporary codes that emerge for newly developed technologies|
|These codes are released annually||These codes are released three times a year during the month of march, July and November||These codes are released biannually in January and July|
|5-digit numerical codes||Numerical alpha codes||Numerical alpha codes|
|Numerals only (40)||4 numerals followed by the letter F (40)||4 numerals followed by the letter T (40)|
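As a small illustration of the format row in the table above, the sketch below classifies a code string by pattern (five digits for Category I, four digits plus F for Category II, four digits plus T for Category III); the sample codes are taken from the tables in this article, and the function name is our own.

```python
# Sketch of classifying a CPT code by its format, following the last row
# of the comparison table above (5 digits = Category I, 4 digits + F =
# Category II, 4 digits + T = Category III).
import re

def cpt_category(code: str) -> str:
    if re.fullmatch(r"\d{5}", code):
        return "Category I"
    if re.fullmatch(r"\d{4}F", code):
        return "Category II"
    if re.fullmatch(r"\d{4}T", code):
        return "Category III"
    return "Unrecognised format"

for code in ["99213", "4000F", "0655T"]:
    print(code, "->", cpt_category(code))
```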
TABLE 2 CPT CODES CATEGORY I (41)
|CATEGORY I CPT CODES||NUMERICAL RANGES|
|PATHOLOGY & LABORATORY||80047-89398|
|EVALUATION & MANAGEMENT||99201-99499|
TABLE 3 CPT CATEGORY II (41)
|CATEGORY II CPT CODES||NUMERICAL CODES|
|THERAPEUTIC, PREVENTIVE OR OTHER INTERVENTIONS||4000F-4306F|
TABLE 4 CPT CATEGORY III
|CATEGORY III CPT CODES||NUMERICAL CODE|
|AUDIOLOGY CODES||0208T -0212T(38)|
|MONO POLAR RADIO FREQUENCY||0672T(39)|
|ULTRA SOUND GUIDED FOR FOCAL LASER||0655T(39)|
SNOMED [SYSTEMATIZED NOMENCLATURE OF MEDICINE]:
The Systematized Nomenclature of Pathology (SNOP) was first developed by a group of pathologists in the College of American Pathologists(42). SNOMED enables a consistent way of aggregating, indexing, retrieving, and storing clinical data across specialties and sites of care. It also enables structuring and computerising medical records, which reduces variability in the way data are captured, encoded, and used for the clinical care of patients, and it supports automated reasoning, i.e. decision making(43-45). SNOMED CT is the coding system currently used in the SNOMED classification; it consists of more than three hundred thousand medical concepts and provides a standard by which the medical conditions and symptoms of various diseases can be referred to(47). SNOMED CT was released in January 2002 and is maintained and promoted to the clinical sector by the International Health Terminology Standards Development Organisation (IHTSDO). SNOMED coding has mostly been used for research purposes, and 19 countries around the world use SNOMED CT for maintaining clinical records(48).
HISTORY IN THE DEVELOPMENT OF SNOMED (46)
|YEAR||VERSIONS OF SNOMED|
|1993||SNOMED Version 3.0|
|1997||LOINC codes integrated into SNOMED|
|1998||SNOMED Version 3.5|
The use of medical coding in the healthcare sector has made it easier to collect patient data regarding disease diagnoses through coding systems such as ICD, CPT, and SNOMED. The ICD-10 coding system has reduced coding time for the coder, and ICD-10 together with NLP has contributed to the development of the ICD-11 model. The CPT system has been an effective and efficient way of recording the medical procedures performed by the physician. The SNOMED CT model has provided great opportunities for understanding automated medical coding. Thus the use of these different medical coding systems in the clinical sector has greatly improved the collection of healthcare data for diagnosing various diseases and the storage of information for research.
| Sten, submachine gun|
Carbine, Machine, Sten, 9mm
|Place of origin||United Kingdom|
|In service||1941–1960s (United Kingdom)|
1941–present (Other countries)
|Used by||See Users|
|Wars||World War II|
Second Sino-Japanese War
Chinese Civil War
Indonesian National Revolution
First Indochina War
1948 Arab–Israeli War
Mau Mau Uprising
Laotian Civil War
Bangladesh Liberation War
Lebanese Civil War
Angolan Civil War
Rhodesian Bush War
Turkish invasion of Cyprus
IRA Border Campaign
Maluku sectarian conflict
Syrian Civil War
|Designer||Major Reginald V. Shepherd|
Harold J. Turpin
|Manufacturer||Royal Small Arms Factory Enfield|
Lines Brothers Ltd
Long Branch Arsenal, Canada [a]
Various underground resistance group factories
|Unit cost||£2 6s in 1942|
|Produced||1941– (version dependent)|
|No. built||3.7–4.6 million (all variants, depending on source)|
|Variants||Mk. I, II, IIS, III, IV, V, VI|
|Mass||3.2 kg (7.1 lb) (Mk. II)|
|Length||762 mm (30.0 in)|
|Barrel length||196 mm (7.7 in)|
|Action||Blowback-operated, open bolt|
|Rate of fire||version dependent; ~500–600 round/min|
|Muzzle velocity||365 m/s (1,198 ft/s) 305 m/s (1,001 ft/s) (suppressed models)|
|Effective firing range||100 m|
|Feed system||32-round detachable box magazine|
|Sights||fixed peep rear, post front|
The STEN (or Sten gun) is a family of British submachine guns chambered in 9×19mm which were used extensively by British and Commonwealth forces throughout World War II and the Korean War. They had a simple design and very low production cost, making them effective insurgency weapons for resistance groups, and they continue to see usage to this day by irregular military forces. The Sten served as the basis for the Sterling submachine gun, which replaced the Sten in British service until the 1990s, when it, and all other submachine guns, were replaced by the SA80.
The Sten is a select fire, blowback-operated weapon which mounts its magazine on the left. Sten is an acronym, from the names of the weapon's chief designers, Major Reginald V. Shepherd and Harold J. Turpin, and "En" for the Enfield factory.[b] Over four million Stens in various versions were made in the 1940s, making it the second most produced submachine gun of the Second World War, after the Soviet PPSh-41.
The Sten emerged while Britain was engaged in the Battle of Britain, facing invasion by Germany. The army was forced to replace weapons lost during the evacuation from Dunkirk while expanding at the same time. After the start of the war and to 1941 (and even later), the British purchased all the Thompson submachine guns they could from the United States, but these did not meet demand, and Thompsons were expensive, the M1928 costing $200 in 1939 (and still $70 in 1942), whereas a Sten would turn out to cost only $11. American entry into the war at the end of 1941 placed an even bigger demand on the facilities making Thompsons. In order to rapidly equip a sufficient fighting force to counter the Axis threat, the Royal Small Arms Factory, Enfield, was commissioned to produce an alternative.
The credited designers were Major R. V. Shepherd, OBE, Inspector of Armaments in the Ministry of Supply Design Department at The Royal Arsenal, Woolwich, (later Assistant Chief Superintendent at the Armaments Design Department) and Harold John Turpin, Senior Draughtsman of the Design Department of the Royal Small Arms Factory (RSAF), Enfield. Shepherd had been recalled to service after having retired and spending some time at the Birmingham Small Arms Company (BSA).
The Sten shared design features, such as its side-mounted magazine configuration, with the Lanchester submachine gun being produced at the same time for the Royal Navy and Royal Air Force, which was a copy of the German MP28. In terms of manufacture, the Lanchester was entirely different, being made of high-quality materials with pre-war fit and finish, in stark contrast to the Sten's austere execution. The Lanchester and Sten magazines were even interchangeable (though the Lanchester's magazine was longer with a 50-round capacity, compared to the Sten's 32.)
The Sten used simple stamped metal components and minor welding, which required minimal machining and manufacturing. Much of the production could be performed by small workshops, with the firearms assembled at the Enfield site. Over the period of manufacture, the Sten design was further simplified: the most basic model, the Mark III, could be produced from five man-hours of work. Some of the cheapest versions were made from only 47 different parts. The Mark I was a more finely finished weapon with a wooden foregrip and handle; later versions were generally more spartan, although the final version, the Mark V, which was produced after the threat of invasion had died down, was produced to a higher standard.
The Sten underwent various design improvements over the course of the war. For example, the Mark 4 cocking handle and corresponding hole drilled in the receiver were created to lock the bolt in the closed position to reduce the likelihood of unintentional discharges inherent in the design. Most changes to the production process were more subtle, designed to give greater ease of manufacture and increased reliability, and the potentially great differences in build quality contributed to the Sten's reputation as being an unreliable weapon. However, a 1940 report stated that "Exaggerated reports about the unreliability [of the Sten] were usually related to the quality of manufacture. Don Handscombe and his comrades in the Thundersley Patrol of the Auxiliary Units rated them more reliable than the Thompson SMG." Sten guns of late 1942 and beyond were highly effective weapons, though complaints of accidental discharge continued throughout the war.
The Sten was replaced by the Sterling submachine gun from 1953 and was gradually withdrawn from British service in the 1960s. Other Commonwealth nations followed suit, either by creating their own replacements or adopting foreign designs.
The Sten was a blowback-operated submachine gun firing from an open bolt with a fixed firing pin on the face of the bolt. This means the bolt remains to the rear when the weapon is cocked and on pulling the trigger the bolt moves forward from spring pressure, stripping the round from the magazine, chambering it and firing the weapon all in the same movement. There is no breech locking mechanism; the rearward movement of the bolt caused by the recoil impulse is arrested only by the mainspring and the bolt's inertia.
The German MP40, Russian PPSh-41, and US M3 submachine gun, among others, used the same operating mechanisms and design philosophy of the Sten, namely their low cost and ease of manufacture. Though the MP40 was also built largely for this purpose, Otto Skorzeny went on record saying that he preferred the Sten because it required less raw material to produce and performed better under adverse combat conditions. The effect of putting lightweight automatic weaponry into the hands of soldiers greatly increased the short-range firepower of the infantry, especially when the main infantry weapon was a bolt-action rifle capable of only around 15 rounds per minute and not suited for short-range combat. The open-bolt firing mechanism, short barrel and use of pistol ammunition severely restricted accuracy and stopping power, with an effective range of only around 100 m (330 ft), compared to 500 m (1,600 ft) for the Lee–Enfield rifle.
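To put these figures in perspective, a back-of-the-envelope calculation (using the roughly 500–600 rounds/min cyclic rate quoted in the specifications above) shows how quickly a 32-round magazine is exhausted compared with a bolt-action rifle's roughly 15 aimed rounds per minute:

```python
# Back-of-the-envelope arithmetic from the figures quoted in this article:
# how long a 32-round magazine lasts at the Sten's cyclic rate, versus a
# bolt-action rifle at roughly 15 aimed rounds per minute.
magazine = 32                        # rounds
for cyclic_rpm in (500, 600):
    seconds = magazine / cyclic_rpm * 60
    print(f"{cyclic_rpm} rounds/min empties the magazine in {seconds:.1f} s")

print(f"Bolt-action rifle: {15 / 60:.2f} rounds per second sustained")
```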
Stoppages could occur for poor maintenance, while others were particular to the Sten. Carbon build up on the face of the breech or debris in the bolt raceway could cause a failure to fire, while a dirty chamber could cause a failure to feed. Firing the Sten by grasping the magazine with the supporting hand, contrary to instruction, tended to wear the magazine catch, altering the angle of feed and causing a failure to feed; the correct method of holding the weapon was as with a rifle, the left hand cradling the fore piece.
The Sten's magazine, like the Lanchester's, was derived from that of the MP28 (the Lanchester was originally designed to use MP28 magazines), and so it incorporated the faults of the MP28 magazine. The magazine had two columns of 9mm cartridges in a staggered arrangement, merging at the top to form a column. While other staggered magazines, such as the Thompson, fed from the left and right side alternately (known as "double column, double feed"), the Sten magazine required the cartridges gradually to merge at the top of the magazine to form a column ("double column, single feed"). Dirt or foreign matter in this taper area could cause feed malfunctions. The walls of the magazine lip had to endure the full stresses of the rounds being pushed in by the spring. This, along with rough handling, could result in deformation of the magazine lips (which required a precise 8° feed angle to operate), resulting in misfeeding and a failure to fire.[c] If a Sten failed to feed due to jammed cartridges in the magazine, standard practice to clear it was to remove the magazine from the gun, tap the base of the magazine against the knee, re-insert the magazine, then re-cock the weapon and fire again as normal. To facilitate easier loading when attempting to push the cartridges down to insert the next one, a magazine filler tool was developed and formed part of the weapon's kit. The slot on the side of the body where the cocking knob ran was also a target of criticism, as the long opening could allow foreign objects to enter. On the other hand, a beneficial side-effect of the Sten's minimalist design was that it would fire without any lubrication. This proved useful in desert environments such as the Western Desert campaign, where lubricating oil retained dust and sand.
The open bolt design combined with cheap manufacture and rudimentary safety devices also meant the weapon was prone to accidental discharges, which proved hazardous. A simple safety could be engaged while the bolt was in the rearwards (cocked) position. However, if a loaded Sten with the bolt in the closed position was dropped, or the butt was knocked against the ground, the bolt could move far enough rearward to pick up a round (but not far enough to be engaged by the trigger mechanism) and the spring pressure could be enough to chamber and fire the round. The Mk. IV's cocking handle was designed to prevent this by enabling the bolt to be locked in its forward position, immobilising it. Wear and manufacturing tolerances could render these safety devices ineffective. Though the Sten was somewhat prone to malfunction, in the hands of a well-trained soldier, who knew how to avoid the Sten's failings, they were less of a liability as otherwise may be suggested. According to Leroy Thompson, "Troops usually made the conscious choice to keep the Sten with a magazine in place, based on the assumption that they might need it quickly. It might, then, be argued that more troops were saved by having their Sten ready when an enemy was suddenly encountered than were injured by accident. The Sten was more dangerous to its users than most infantry weapons, but all weapons are dangerous".
Sten guns were produced in several basic marks, of which nearly half of the total produced consisted of the Mark II variant. Approximately 4.5 million Stens were produced during the Second World War.
The first ever Mk I Sten gun (number 'T-40/1' indicating its originator Harold Turpin, the year 1940 and the serial number "1") was handmade by Turpin at the Philco Radio works at Perivale, Middlesex during December 1940/January 1941. This particular weapon is held by the historical weapons collection of the British Army's Infantry and Small Arms School Corps in Warminster, Wiltshire.
The Mark I had a conical flash hider and fine finish. The foregrip, forward handle and some of the stock were made of wood. The stock consisted of a small tube, similar to the Mark II Canadian. A design choice that was only present on the Mark I was that the pistol grip could be rotated forward to make it easier to stow. 100,000 Mark I Stens were made before production was moved to the Mark II. Mark I Stens in German possession were designated MP 748(e), the 'e' standing for englisch.
To simplify production, this variant (the Mark I*) omitted the Mark I's foregrip, wooden furniture and flash hider.
The Mark II was the most common variant, with two million units produced. The flash eliminator and the folding handle (the grip) of the Mk I were omitted. A removable barrel was now provided which projected 3 inches (76 mm) beyond the barrel sleeve. Also, a special catch allowed the magazine to be slid partly out of the magazine housing and the housing to be rotated 90 degrees counter-clockwise (from the operator's perspective), together covering the ejection opening and allowing the weapon, with magazine fitted, to lie flat on its side.
The barrel sleeve was shorter and rather than having small holes on the top, it had three sets of three holes equally spaced on the shroud. To allow a soldier to hold a Sten by the hot barrel sleeve with the supporting hand, an insulating lace-on leather sleeve guard was sometimes issued.[d] Sten Mk II's in German possession were designated MP 749(e). Some Mk IIs had wooden stocks. The Spz-kr assault rifle, a rudimentary German design made in the closing stages of the war, used the receiver and components from the Sten Mk II, and the MP 3008 was made as a cheap copy.
- Overall length: 762 mm (30.0 in)
- Barrel length: 197 mm (7.8 in)
- Weight: 3.2 kg (7.1 lb)
Mark II (Canadian)
During World War II a version of the Sten gun was produced at the Long Branch Arsenal in Long Branch, Ontario (now part of Toronto). This was very similar to the regular Mark II, with a different stock ('skeleton' type instead of strut type). It was first used in combat in the Dieppe Raid in 1942.
The Mark II was copied in China as the M38. The Chinese M38s were made in an automatic-only configuration, unlike the standard Mark II, and were produced in 9×19mm and 7.62×25mm Tokarev variants.
- Overall length: 896 mm (35.3 in)
- Barrel length: 198 mm (7.8 in)
- Weight: 3.8 kg (8.4 lb)
After the Mark II, this was the most produced variant of the Sten, manufactured in Canada as well as in the United Kingdom, with Lines Bros Ltd being the largest producer. The Mark III was made of 48 parts, compared to the Mark II's 69, but the Mark II remained more commonplace for logistical reasons, as parts between the two were not interchangeable. Though slightly lighter, the Mark III had a fixed magazine well, and the barrel could not be removed, meaning that if it was damaged the weapon had to be scrapped. Because the Mark III was also more prone to failure than the Mark II, production of the weapon ceased in September 1943. Unlike in the Mark II, the receiver, ejection port, and barrel shroud were unified into a single piece extending further up the barrel. Captured Sten Mk III's in German possession were designated MP 750(e). A total of 876,886 Mark III's were produced.
The Mark V added a bayonet mount, and a wooden pistol grip and stock. There was a No. 4 Lee–Enfield rear sight and the weapon was of better quality manufacture and finish than the Mk II and Mk III.
Another variant of the Mk V had a swivel stock and rear sight mirror intended for firing around corners in urban warfare, similar to the Krummlauf developed by the Germans for the StG 44.
Mk II(S) and Mk VI models incorporated an integral suppressor and had a lower muzzle velocity than the others due to a ported barrel intended to reduce velocity to below the speed of sound – 305 m/s (1,001 ft/s) – without needing special ammunition. The suppressor heated up rapidly when the weapon was fired, and a canvas cover was laced around the suppressor for protection for the firer's supporting hand.
- Mk II(S)
- Designed in 1943, the Mk II(S) ("Special-Purpose") was an integrally suppressed version of the Mk II. Captured examples of the Sten Mk II(S) in German service were designated MP 751(e).
- Mk VI
- The Mk VI was a suppressed version of the Mk V. The Mk VI was the heaviest version due to the added weight of the suppressor, as well as using a wooden pistol grip and stock.
The suppressed models were produced at the request of the Special Operations Executive (SOE) for use in clandestine operations in occupied Europe, starting with the Mk II(S) in 1943. Owing to their tendency to overheat, they were fired in short bursts or single shots. Some guns were even changed to semi-automatic only.
In addition to its use in the European theatre, the Mk II(S) saw service with clandestine units in the Southwest Pacific Area (SWPA) such as the Services Reconnaissance Department and SOE's Force 136 on operations against the Imperial Japanese Army. The Sten Mk II(S) was used by the Operation Jaywick party during their raid into Japanese-occupied Singapore Harbour.
The Sten Mk II(S) also saw service with the Special Air Service Regiment during the Vietnam War.
- Mark II (wooden stock model)
- This was a standard Sten Mk.II with a wooden stock attached in place of the wireframe steel stock used with Mk.IIs. This wooden stock model was never put into service, likely due to the cost of producing it.
- Mark II (Rosciszewski model)
- This was a Sten Mk.II modified by Antoni Rosciszewski of Small Arms Ltd. The magazine was mechanically operated by the breech block movement. The trigger was split into two sections, with the upper part of the trigger offering full-auto fire and a lower part offering single shots. It was very complex in design and never fielded.
- Mark II (pistol grip model)
- This was a Sten Mk.II with a wireframe pistol grip, intended for use with paratroopers. It was compact but predictably uncomfortable to fire.
- Model T42
- This was a Sten Mk.II modified with a 5-inch barrel and folding stock, as well as a conventional pistol grip and redesigned trigger guard. It was dubbed the "T42" in prototype phases, but never entered service.
- Mark IV
- The Mark IV was a smaller variant of the Sten, comparable in size to a pistol, and never left the prototype stage. It used a conical flash hider, a shortened barrel, and a much lighter stock.
- Developed at the Royal Ordnance Factory in Fazakerley (ROF Fazakerley), the Rofsten was an unusual Sten prototype with a redesigned magazine feed, ergonomic pistol grip, selector switch and cocking system. The weapon was cocked by pulling the small ring above the stock. A large flash eliminator was fixed onto the barrel, and a No.5 bayonet could be fitted. It was made to a very high quality standard and had an increased rate of fire (around 900 rounds per minute). The Rofsten was made in 1944 as a single prototype and ROF wanted to submit it to trials the next year. Despite its better quality, it suffered numerous reliability problems due to the much higher rate of fire. Budget cuts prevented further modification, and this version never got beyond the prototype stage.
- Viper mk1
- This version simplified the weapon, including the trigger mechanism and the barrel, which was welded to the gun and therefore not removable. The weapon was fully automatic only, with no semi-automatic function. It was made in the United Kingdom after World War II and was a prototype weapon never adopted, as it was deemed impractical. It was designed to be fired one-handed by military policemen in post-war Germany. Only one was ever made and it is currently held at the Royal Armouries Museum in Leeds, United Kingdom.
Foreign-built variants and post-1945 derivatives
- Sten Mk IIs were licence-copied in Argentina by Pistola Hispano Argentino and can be recognised by a wooden handguard in front of the trigger group. It was known as the Modelo C.4. Another variant came with a pistol grip section based on the Ballester–Molina .45 pistol. The Halcon ML-57 was a simpler derivative of the Sten gun of Argentine origin that was fed from a vertically inserted magazine.
- Copies of the Sten Mk II and Sten Mk V were clandestinely manufactured in Tel Aviv and on various kibbutzim in 1945–48 for use with Haganah and other Jewish paramilitary groups.
- The French "Gnome et Rhône" R5 Sten, manufactured by the motorbike and aeroplane engine manufacturer Gnome et Rhône (SNECMA), came with a forward pistol grip and distinctive wooden stock, although its greatest improvement was a sliding bolt safety, added to secure the bolt in its forward position. Other variants, made by MAC (Manufacture d'armes de Châtellerault), were built and tested shortly after WWII. One had an unusual stock shape that proved detrimental to the firer's aim; internally it was basically a Sten gun, but it had two triggers for semi/full auto, a grip safety and a foregrip, and it fed from MP40 magazines. Another had a folding stock with a folding magazine insert, and its trigger mechanism was complicated and unusual. Neither of these prototypes had any success, and MAC closed its doors not long after their conception. The French were not short of SMGs after the war; they had some 3,750 Thompsons and Stens, as well as MAS 38s.
- The Norwegian resistance, under the leadership of Bror With, created a large number of Sten guns from scratch, mainly to equip members of the underground army Milorg. In his autobiography, Norwegian resistance fighter Max Manus frequently mentions the Sten as one of the weapons his groups of commandos and resistance fighters used effectively against German troops.
- Several groups in the Danish resistance movement manufactured Sten guns for their own use. BOPA produced around 200 in a bicycle repair shop on Gammel Køge landevej (Old Køge road), south of Copenhagen. Holger Danske produced about 150 in workshops in Copenhagen, while employees of the construction company Monberg & Thorsen built approximately 200–300 in what is now the municipality of Gladsaxe (a suburb of Copenhagen) for use by Holger Danske and others. The resistance groups 'Frit Danmark' and 'Ringen' also built significant numbers of Stens.
- Between 1942 and 1944, approximately 11,000 Sten Mk IIs were delivered to the Armia Krajowa by the SOE and Cichociemni. Because of the simplicity of the design, local production of Sten variants was started in at least 23 underground workshops in Poland, with some producing copies of the Mark II, and others developing their own designs, namely the Polski Sten, Błyskawica and KIS. Polski Stens made in Warsaw under the command of Ryszard Białostocki were built from parts made in official factories, with the main body of the design being made from hydraulic cylinders produced for hospital equipment. To help disguise their origin, the Polski Stens were marked in English.
- A little-known version of the Mk II Sten was built in Belgium by the Belgian military arsenal. The magazine well was stamped AsArm (the manufacturer), ABL (for Armée Belge / Belgisch Leger), the Belgian Royal Crown and a serial number, typically of five figures with no letter prefix. It is believed the Belgian-built Mk II Stens remained in ABL service until the early 1980s, particularly with helicopter-borne forces. Some of the weapons had a "Parkerised" finish. After the Second World War the Belgian Army was mainly equipped with a mixture of British and American submachine guns. The army, wanting to replace them with a modern and preferably native design, tested various designs, with the Vigneron M2 and licence-produced FN Uzi being selected. One of the designs tested, the Imperia, was an improved Sten with a fire selector and retractable stock.
- In late 1944, Mauser began to produce copies of the Mk II Sten for sabotage purposes. The series was referred to as the Gerät Potsdam (Potsdam Device) and almost 10,000 weapons were made. By 1945, Germany was seeking a cheaper replacement for the MP40 submachine gun to issue to the Volkssturm. Mauser produced a modified Sten, named the MP 3008. The main difference was that the magazine attached below the weapon. Altogether, roughly 10,000 were produced in early 1945, just before the end of World War II.
- The Mark I Austen submachine gun ("Australian Sten") was an Australian design, derived from the Sten and manufactured by the Lithgow Small Arms Factory. It externally resembled the Sten but had twin pistol grips and a folding stock resembling those of the German MP40. Australian and New Zealand troops, however, preferred the Owen gun, which was more reliable and robust in jungle warfare. A Mk 2 version was also produced which was of different appearance and which made more use of die-cast components. 20,000 Austens were made during the war and the Austen was replaced by the F1 submachine gun in the 1960s.
- United States
- A short-lived American invention developed in the 1980s, the Sputter Gun was designed to circumvent the law that defined a machine gun as something that fired multiple rounds with one pull of the trigger. The Sputter Gun had no trigger; it fired continuously after loading and pulling back its bolt, firing until it ran out of ammunition. It was very short-lived, as the ATF quickly reclassified it. During the 1970s–1980s, International Ordnance of San Antonio, Texas, United States released the MP2 machine pistol. It was intended as a more compact, simpler derivative of the British Sten gun to be used in urban guerrilla actions, to be manufactured cheaply and/or in less-than-well-equipped workshops and distributed to "friendly" undercover forces. Much like the FP-45 Liberator pistol of World War II, it could be discarded during an escape with no substantial loss to the force's arsenal. The MP2 is a blowback-operated weapon that fires from an open bolt with an extremely high rate of fire.
- The SM-9 is a machine pistol of Guatemalan origin and manufactured by Cellini-Dunn IMG, Military Research Corp and Wildfire Munitions as the SM-90. It is blowback operated, firing from an open bolt and can use magazines from Ingram MAC-10 submachine guns inserted into a similar foregrip that can be rotated 45 and 90 degrees for left/right handed operators. The layout of the receiver is somewhat simpler than that of a Sten with its internal components light in weight enabling a very high rate of fire of 1200rpm. Its forward pistol grip can hold a spare magazine as well as handling the weapon when firing.
- The Pleter submachine gun was created in 1991, when the breakup of Yugoslavia in the midst of the emerging war left the newly formed Republic of Croatia with only a small number of military firearms. Because an embargo prevented the Croatian military from buying arms legally on the open market (most were instead obtained on the world black market, at significantly higher prices and sometimes of questionable quality), it resorted to quick and simple locally made designs to fulfil the immediate need for arms. Despite having a vertical magazine well (designed to accept a 32-round staggered-feed direct copy of the Uzi magazine rather than the original single-feed Sten-type magazine), the Pleter's analogies with the Sten include a striking resemblance in the barrel assembly and in the bolt and recoil spring. In addition, this gun also fires from an open bolt, and is further simplified by omitting any fire mode selector or safety.
- SMG International in Canada manufactured reproductions of the Sten in six variants. They made copies of the Sten's Mk 1*, Mk II and Mk III, a "New Zealand Sten" (a Mk II/III Sten hybrid, with sights and a fixed magazine housing similar to the Mk III), then branched out into "hypothetical" Sten-guns with a "Rotary Magazine Sten" (a Mk II Sten with a drum magazine attached below the weapon and a wooden horizontal forward grip on the left side of the weapon) and the "FRT Gun" (a long-barrelled Sten with a wooden or Mk 1*-type butt stock, a drum magazine attached below the weapon and sliding ramp rear sights). The last two were obviously not Sten reproductions, especially given their drum magazines. The "Rotary Magazine Sten" is a vertically fed Sten using a modified Sten bolt, and it can take either PPSh drum magazines or stick magazines. The FRT gun is essentially a Suomi that uses a Sten trigger mechanism. All SaskSten guns fire from an open bolt.
The Sten Mk II can be converted to take 7.62×25mm ammunition by changing the barrel, magazine, magazine housing and bolt. Some of these were imported to the US before 1968. These Mk IIs were made by Long Branch as part of a Nationalist Chinese contract.
While all types of 7.62×25mm ammunition can be used, rounds made in the former Czechoslovakia are loaded for small arms that can handle higher velocities, so users are advised against using them.
The Sten, especially the Mark II, tended to attract affection and loathing in equal measure. Its peculiar appearance when compared to other firearms of the era, combined with sometimes questionable reliability made it unpopular with some front-line troops. It gained nicknames such as "Plumber's Nightmare", "Plumber's Abortion", or "Stench Gun". The Sten's advantage was its ease of mass-production manufacture in a time of shortage during a major conflict.
Made by a variety of manufacturers, often with subcontracted parts, some early Sten guns were made poorly and/or not to specification, and could malfunction in operation, sometimes in combat. The double-column, single-feed magazine copied from the German MP28 was never completely satisfactory, and hasty manufacturing processes often exacerbated the misfeed problems inherent in the design. A common statement heard from British forces at the time was that the Sten was made "by Marks and Spencer out of Woolworth." British and Commonwealth forces in the early years of the war often extensively test-fired their weapons in training to weed out bad examples; a last-minute issue of newly manufactured Stens prior to going into action was not always welcomed.
The MK II and III Stens were regarded by many soldiers as very temperamental, and could accidentally discharge if dropped or even laid on the ground whilst the gun was cocked. Others would fire full-automatic when placed on 'single', or fire single shots when placed on 'automatic'. This was particularly true of early Stens using bronze bolts, where the sear projection underneath the bolt could wear down more easily than ones made of case-hardened steel.
Stens could jam at inopportune moments. One of the more notable instances of this was the assassination of SS–Obergruppenführer Reinhard Heydrich on 27 May 1942, when Czechoslovak Warrant Officer Jozef Gabčík attempted to fire his Sten point-blank at Heydrich, only to have it misfire. His comrade Jan Kubiš then hastily tossed a grenade, which mortally wounded Heydrich. There are other accounts of the Sten's unreliability, some of them true, some exaggerated and some apocryphal. France manufactured (well-made) Sten copies postwar into the early 1950s, evidently believing in the basic reliability and durability of the design.
A well-maintained (and properly functioning) Sten gun was a devastating close-range weapon for sections previously armed only with bolt-action rifles. In addition to regular British and Commonwealth military service, Stens were air-dropped in quantity to resistance fighters and partisans throughout occupied Europe. Due to their slim profile and ease of disassembly/reassembly, they were good for concealment and guerrilla warfare. Wrapping the barrel in wet rags would delay undesirable overheating of the barrel. Guerrilla fighters in Europe became adept at repairing, modifying and eventually scratch-building clones of the Sten (over 2,000 Stens and about 500 of the similar Błyskawica SMGs were manufactured in occupied Poland).
Canadian infantry battalions in northwest Europe retained spare Sten guns for special missions and the Canadian Army reported a surplus of the weapons in 1944. The Sten saw use even after the economic crunch of World War II, replacing the Royal Navy's Lanchester submachine guns into the 1960s, and was used in the Korean War, including specialist versions for British Commandos. It was slowly withdrawn from British Army service in the 1960s and replaced by the Sterling SMG; Canada also phased out the Sten, replacing it with the C1 SMG.
The Sten was one of the few weapons that the State of Israel could produce domestically during the 1948 Arab–Israeli War. Even before the declaration of the State of Israel, the Yishuv had been producing Stens for the Haganah; after the declaration, Israel continued making Stens for IDF use. The opposing side also used (mostly British-made) Stens, particularly the irregular and semi-regular Arab Liberation Army.
In the 1950s, "L numbering" came into use in the British Army for weapons—Stens were then known as L50 (Mk II), L51 (Mk III) and L52 (Mk V).
One of the last times the Sten was used in combat during British service was with the RUC during the IRA border campaign of 1956–1962. In foreign service, the Sten was used in combat at least as recently as the Indo-Pakistani War of 1971.
Sten guns were widely used by guerrilla fighters during the 1971 Bangladesh Liberation War. In 1975, President Sheikh Mujibur Rahman and his family members were assassinated using Sten guns.
A number of suppressed Stens were in limited use by the US Special Forces during the Vietnam War, including c. 1971, by the United States Army Rangers.
In 1984, Indian prime minister Indira Gandhi was assassinated by two of her bodyguards, one of whom fired the entire magazine (30 rounds) of his Sten at point-blank range, of which 27 hit her.
In the Second Sino-Japanese War and the Chinese Civil War, both nationalists and communist Chinese forces used the Sten. Some Stens were converted by the communists to 7.62×25mm by using the magazine housing from a PPS to accept curved PPS magazines. British, Canadian, and Chinese Stens were seen in the hands of the communists during the Korean and Vietnam Wars.
The Finnish Army acquired moderate numbers of Stens in the late 1950s, mainly Mk. III versions. Refurbishment at the Kuopio Arsenal included bluing of the arms. Stens in Finnish service saw limited usage by conscripts (notably combat swimmers) and were mostly stockpiled for use in a future mobilization.
During the Zapatista movement in 1994, some Zapatista soldiers were armed with Sten guns.
- Albania: Used by the Albanian National Liberation Army during World War II. The weapons were supplied by the British SOE.
- Argentina: Modelo C.4..
- Australia: Locally produced during World War II.
- Bangladesh: Extensively used during 1971 war.
- British Hong Kong
- Canada: Locally produced during World War II.
- Central African Republic: Central African Republic Police had 10 Stens in 1963
- Republic of the Congo (Léopoldville)
- Cuba: Fidel Castro praised the Canadian Sten gun in his 1958 interview with Erik Durschmied
- People's Republic of China: Most Stens used by communist forces were converted to 7.62×25mm caliber.
- Republic of China
- Czechoslovakia: Used by Czechoslovak troops for Operation Anthropoid, the assassination of Reinhard Heydrich. The gun jammed and failed to fire.
- Denmark: Used by the Danish resistance movements like BOPA and Holger Danske. Locally produced.
- Finland: 76,115 Mk IIs and Mk IIIs bought in 1957–1958; used until replaced by assault rifles.
- France: Used during World War II by the Free French forces, the French Resistance and some captured from the Resistance were used by the pro-German Milice française. Still used after World War II.
- Grenada
- Israel: Used in the 1947–1949 Palestine war and the Suez Crisis.
- Italy: Sten guns were supplied to the Italian resistance movement by the SOE, along with the United Defense M42 submachine gun supplied by the OSS during the Italian Campaign. These guns, along with the Beretta M38A, were used by the Italian partisans until the end of World War II.
- Empire of Japan
- Jordan: Arab Legion
- Kenya: Used by the regular police paramilitary GSU, army paratroopers; replaced by G3A3/4, M4 and HK416.
- Kingdom of Laos: Used by the Royal Lao Army and the CIA-sponsored irregular Special Guerrilla Groups during the Laotian Civil War.
- Malaysia: Used by Royal Malaysia Police, Malaysian Army, Royal Malaysian Navy and Malaysian Prison Department in 1950s to 1970s.
- Myanmar: Retired.
- Nepal: Still in service in 2006.
- Nazi Germany: Used some captured Stens during World War II, under the designations MP 748 (e) for the Mark I to MP 751 (e) for the Mark V. From late 1944, they produced an almost identical copy for home defence: the MP 3008.
- New Zealand
- Norway: Used by the Norwegian resistance from 1940–1945. The guns came to the resistance groups by air (supply drops). Used by the Army after the war.
- Philippines: Used by the Recognized Guerrilla Units during World War II.
- Poland: Used by Polish Armed Forces in the West and main resistance army in occupied Poland, the Armia Krajowa (Home Army). The majority of the resistance's Stens were dropped to Poland in SOE supply drops, but some of the Polish Stens were produced in the occupied country. Polish engineers also designed their own Sten version, the Błyskawica submachine gun. After the war, it was used by many anti-communist partisan groups (cursed soldiers).
- Portugal: Known as m/43.
- Sierra Leone
- South Africa
- South West Africa: Used by SWAPOL during the South African Border War.
- South Vietnam
- Tibet: The Tibetan Army purchased 168 guns in 1950.
- United Kingdom
- United States: Suppressed Stens used during the Vietnam War by American special forces.
- North Vietnam: Việt Minh and Viet Cong
- Yugoslavia: Used by the Yugoslav Partisans and Chetniks. Also used after the war.
- The Provisional IRA and Official IRA
- The Ulster Volunteer Force and Ulster Freedom Fighters
- Balcombe Street Gang
- The Angry Brigade
- Some were supplied to the Bulgarian Communist Party during WWII
- ^ plus numerous sub-contractors making individual parts
- ^ Colonel Shepherd discussing how it was named when he received an Award from the Board of the Royal Commission Awards to Inventors. Lord Cohen: "Why was it called the Sten?" Colonel Shepard: "It was called the Sten by the then Director General of Artillery. The S was from my name, the T from Mr. Turpin who was my draughtsman and who did a very large amount of the design and the EN was for England. That is the origin of the name, for which I accept no responsibility." In the official history of the Royal Ordnance Factories, ST is for Shepard and Turpin and EN is for Enfield. Some sources give J. J. Turpin rather than Harold.
- ^ Modern 9 mm magazines, such as those used by the Sterling submachine gun, are curved and feed both sides to avoid this problem.
- ^ The barrel sleeve was generally considered the proper place for the supporting hand, as holding the weapon by its magazine could sometimes initiate a feed malfunction. However, the metal barrel sleeve heated rapidly after only a few bursts.
- ^ Bloomfield et al 1967, p. 89
- ^ "Contre les Mau Mau". Encyclopédie des armes : Les forces armées du monde (in French). Vol. XII. Atlas. 1986. pp. 2764–2766.
- ^ "L'armement français en A.F.N." Gazette des Armes (in French). No. 220. March 1992. pp. 12–16.
- ^ Bloomfield et al 1967, p. 191
- ^ a b McNab, Chris (2002). 20th Century Military Uniforms (2nd ed.). Kent: Grange Books. p. 185. ISBN 978-1-84013-476-6.
- ^ a b Kalam, Zaid (29 December 2017). "Arms for freedom". The Daily Star.
- ^ "Satgas Yonarmed 12 Kostrad Berhasil Mengamankan Senjata Ilegal". tni.mil.id (in Indonesian). 21 November 2016. Retrieved 3 May 2021.
- ^ "Variety of Iraq weapons astounds expert". Stars and Stripes.
- ^ "The STEN Carbine, A Description" Model Engineer Volume 88 Issue 2195 P.509
- ^ Laidler, Peter (2000). The Sten Machine Gun. Ontario: Collector Grade Publications. pp. 363–364. ISBN 978-0-88935-259-9.
- ^ Ian Hay (Maj.-Gen. John Hay Beith, CBE, MC) (1949). R.O.F. The Story of the Royal Ordnance Factories, 1939–1948. London: His Majesty's Stationery Office.
- ^ Beckett, Jack (19 March 2015). "A rough guide of the costs of guns during WWII". War History Online.
- ^ Carbine Machine Sten 9mm. Mk. II General Instructions. 1942. p. 4.
- ^ a b Thompson 2012, p. 22.
- ^ Warwicker, John (2008). Churchill's Underground Army: A History of the Auxiliary Units in World War II. Frontline Books. p. 130.
- ^ a b c Thompson 2012, p. 70.
- ^ a b c Carbine, Machine, Sten 9mm Mk II, General Instructions (PDF), February 1942, archived from the original (PDF) on 7 November 2014. Heavy carbon buildup could prevent the firing pin from detonating the primer.
- ^ Thompson 2012, p. 13.
- ^ a b c d e Thompson 2012, p. 6.
- ^ D Cuthbertson. "The Sub-machine Gun & Light Machine Gun Room". The Infantry and Small Arms School Corps. Archived from the original on 22 May 2009. Retrieved 9 June 2009.
- ^ a b c d e f g h i j k Henrotin, Gerard (2008). The English Sten Submachine Gun Explained. HL Publishing. p. 6.
- ^ Skennerton, Ian (September 1988). British Small Arms of World War 2: The Complete Reference Guide to Weapons, Codes and Contracts, 1936-1946. Greenhill Books. p. 32. ISBN 978-0-949749-09-3.
- ^ a b c d "Stens of the World: Part I". Small Arms Defense Journal.
- ^ Thompson 2012, p. 24.
- ^ Laidler, Peter (2000). The Sten Machine Carbine. Collector Grade Publications. p. 59.
- ^ Wolfgang Michel: Britische Schalldämpferwaffen 1939–1945: Entwicklung, Technik, Wirkung. ISBN 978-3-8370-2149-3
- ^ "Silencedsten". 96.lt.
- ^ a b McCollum, Ian (26 April 2019). "Viper MkI: A Simplified Steampunk Sten". www.forgottenweapons.com. Retrieved 13 February 2023.
- ^ Julio S. Guzmán, Las Armas Modernas de Infantería, Abril de 1953.
- ^ "Museo de armas de la Nación (Buenos Aires), 2011". flickr.com. 21 January 2011.
- ^ "Sten Mk 2 type submachine-gun [Jewish underground]". Imperial War Museums.
- ^ Manus, Max, Part I, Det vil helst gå godt (It'd best be all right); Part II, Det blir alvor (It gets serious), Familieforlaget (1946), ISBN 978-82-8214-043-0
- ^ a b Smith 1969, p. 429.
- ^ a b c Thompson 2012, p. 71.
- ^ a b Smith 1969, p. 198.
- ^ Larsen, Colin (1946). Pacific Commandos: New Zealanders and Fijians in Action. A History of Southern Independent Commando and First Commando Fiji Guerrillas. Wellington: Reed Publishing. pp. 93–103.
- ^ Smith 1969, p. 200.
- ^ "FRT Gun". SMG International. 29 November 2010. Archived from the original on 29 November 2010.
- ^ Mick Boon
- ^ a b c https://smallarmsreview.com/magnum-sten/
- ^ a b McCollum, Ian (13 May 2020). "Chinese 7.62mm Sten Gun". Forgotten Weapons.
- ^ Weeks, John, World War II Small Arms, Galahad Books (1979), ISBN 0-88365-403-2, p. 84
- ^ a b Willbanks, James H., Machine Guns: An Illustrated History of Their Impact, ABC-CLIO Press (2004), ISBN 1-85109-480-6, ISBN 978-1-85109-480-6, p. 91
- ^ a b c Shore, C. (Capt), With British Snipers to the Reich, Paladin Press (1988), pp. 208-209
- ^ "Oddities". Welcome to STEN Guns. 16 July 2004. Archived from the original on 16 July 2004.
- ^ Dear, I. Sabotage and Subversion: The SOE and OSS at War, Arms and Armour (1996) pp. 137–155
- ^ a b Morris, Benny (2008). 1948: A History of the First Arab-Israeli War. Yale University Press. ISBN 978-0-300-12696-9.
- ^ Kalam, Zaid (29 December 2017). "Arms for freedom". The Daily Star. Retrieved 15 March 2023.
- ^ Ahmed, Inam; Manik, Julfikar Ali (15 August 2010). "Bloodbath on Road 32". The Daily Star. Retrieved 15 March 2023.
- ^ a b "The Vietnam Experience LRRP 1966-1972". 25thaviation.org. Retrieved 9 June 2009.
- ^ Oppenheimer, Andrés (8 February 1994). "Deep in the heart of Mayan Mexico, a revolution that's out of this world". Rome News-Tribune. Rome, Georgia, USA. Retrieved 21 April 2014.
- ^ Thompson 2012, pp. 50–51.
- ^ Fitzsimmons, Scott (November 2012). "Callan's Mercenaries Are Defeated in Northern Angola". Mercenaries in Asymmetric Conflicts. Cambridge University Press. p. 155. doi:10.1017/CBO9781139208727.005. ISBN 9781107026919.
- ^ a b c d e f Thompson 2012, p. 73.
- ^ a b Thompson 2012, p. 16.
- ^ a b c d e Bonn International Center for Conversion; Bundeswehr Verification Center. "Sten MP". SALW Guide: Global distribution and visual identification. Retrieved 31 August 2018.
- ^ a b c d e f "STEN SMG". Military Factory. Retrieved 4 June 2014.[better source needed]
- ^ Berman, Eric G.; Lombard, Louisa N. (December 2008). The Central African Republic and Small Arms: A Regional Tinderbox (PDF). Small Arms Survey. pp. 35, 43. ISBN 978-2-8288-0103-8. Archived from the original (PDF) on 2 August 2014.
- ^ Abbot, Peter (February 2014). Modern African Wars: The Congo 1960–2002. Oxford: Osprey Publishing. p. 15. ISBN 978-1782000761.
- ^ Abbot 2014, p. 14.
- ^ Thompson 2012, p. 67.
- ^ Weyman, Bay. "Finding Fidel: The Journey of Erik Durschmied". TV Ontario. Archived from the original on 14 July 2014. Retrieved 16 June 2014.
- ^ Thompson 2012, p. 60.
- ^ Thompson 2012, pp. 51–52.
- ^ Thompson 2012, pp. 52–53.
- ^ Smith 1969, pp. 613, 615.
- ^ Palokangas, Markku (1991): Sotilaskäsiaseet Suomessa 1918–1988 III osa Ulkomaiset aseet. Vammalan Kirjapaino Oy. P.191 ISBN 951-25-0519-3
- ^ a b Thompson 2012, p. 45.
- ^ Cullen, Stephen M. (2018). World War II Vichy French Security Troops. Men-at-Arms 516. Osprey Publishing. pp. 42–43. ISBN 978-1472827753.
- ^ a b Windrow, Martin (15 November 1998). The French Indochina War 1946–54. Men-at-Arms 322. Osprey Publishing. p. 41. ISBN 9781855327894.
- ^ a b Thompson 2012, p. 69.
- ^ Smith 1969, p. 461.
- ^ Russell, Lee; Katz, Sam (April 1986). Israeli Defense Forces, 1948 to the Present. Uniforms Illustrated 12. Olympic Marketing Corp. p. 15. ISBN 978-0853687559.
- ^ Young, Peter (1972). The Arab Legion. Men-at-Arms. Osprey Publishing. p. 24. ISBN 978-0-85045-084-2.
- ^ "World Infantry Weapons: Libya". Archived from the original on 5 October 2016.
- ^ Maung, Aung Myoe (2009). Building the Tatmadaw: Myanmar Armed Forces Since 1948. ISBN 978-981-230-848-1.
- ^ "Legacies of War in the Company of Peace: Firearms in Nepal" (PDF). Nepal Issue Brief. Small Arms Survey (2): 4. May 2013. Archived from the original (PDF) on 25 February 2014.
- ^ Bloomfield & Leiss, 1967, p. 79
- ^ Thompson 2012, p. 25.
- ^ Smith 1969, p. 523.
- ^ Mallet, N. H. (9 December 2013). "The Venerable Sten – The Allies' $10 Dollar Submachine Gun". Military History Now. Retrieved 22 March 2014.
- ^ Thompson 2012, p. 9.
- ^ Thompson 2012, p. 56.
- ^ Smith 1969, p. 530.
- ^ Moorcraft, Paul; McLaughlin, Peter (2008). The Rhodesian War: A Military History. Jonathan Ball Publishers. p. 92. ISBN 978-1-86842-330-9.
- ^ "World Infantry Weapons: Sierra Leone". 2013. Archived from the original on 24 November 2016.[self-published source]
- ^ "SOUTH AFRICA: The Sharpeville Massacre". Time. 4 April 1960. Archived from the original on 20 October 2007.
- ^ McMullin, Jaremey (2013). Ex-Combatants and the Post-Conflict State: Challenges of Reintegration. Basingstoke: Palgrave-Macmillan. pp. 81–89. ISBN 978-1-349-33179-6.
- ^ Goscha, Christopher (2013). Thailand and the Southeast Asian Networks of The Vietnamese Revolution, 1885-1954. Routledge. p. 185.
- ^ Shakya, Tsering (1999). The Dragon in the Land of Shows: A History of Modern Tibet Since 1949. Columbia University Press. pp. 5–6, 8–9, 11–15, 26, 31, 38–40.
- ^ Thompson 2012, p. 4.
- ^ Windrow 1998, p. 24.
- ^ Chris Bishop (1996). Vital Guide to Combat Guns and Infantry Weapons. p. 203. ISBN 978-1853105395.
- ^ Scarlata, Paul (1 October 2017). "Yugoslav Part II: World War II small arms: an assortment of small arms from friends and foe alike". Firearms News.
- ^ Vukšić, Velimir (July 2003). Tito's partisans 1941–45. Warrior 73. Osprey Publishing. pp. 24–25. ISBN 978-1-84176-675-1.
- ^ Smith 1969, p. 723.
- ^ a b c d Christopher Dobson; Ronald Payne (1982). The Terrorists: Their Weapons, Leaders, and Tactics. Facts on File. pp. 101–103.
- ^ Gordon Carr (2010). The Angry Brigade: A History of Britain's First Urban Guerilla Group. PM Press. p. 98. ISBN 978-1604860498.
- ^ Gianfranco Sanguinetti (2015). Red Army Faction. Red Brigades, Angry Brigade. The Spectacle of Terror in Post War Europe. Bread and Circuses.
- ^ "BULGARIAN SMALL ARMS OF WORLD WAR II, PART 2: FROM MAXIM OBRAZETZ 1907G TO ZB39 OBRAZETZ 1939G. - Free Online Library". www.thefreelibrary.com. Retrieved 19 December 2022.
- Thompson, Leroy (2012). The Sten Gun. Weapon 22. Illustrated by Mark Stacey, Alan Gilliland. Osprey Publishing. ISBN 9781849087599.
- Smith, Joseph E. (1969). Small Arms of the World (11 ed.). Harrisburg, Pennsylvania: The Stackpole Company. ISBN 9780811715669 – via Archive.org.
- Bloomfield, Lincoln P.; Leiss, Amelia Catherine; Legere, Laurence J.; Barringer, Richard E.; Fisher, R. Lucas; Hoagland, John H.; Fraser, Janet; Ramers, Robert K (30 June 1967). The Control of local conflict: a design study on arms control and limited war in the developing areas (PDF). Studies of Conflict. Vol. 3. Massachusetts Institute of Technology. Center for International Studies. hdl:2027/uiug.30112064404368. Archived (PDF) from the original on 4 August 2020.
- "Sten Gun to be forerunner of invasion" September 1943 detailed article in Popular Science
- Complete machinist's plans to manufacture a Sten Mk II
- Sten at Modern Firearms
- 9mm Parabellum submachine guns
- Insurgency weapons
- Simple blowback firearms
- Submachine guns of the United Kingdom
- Weapons and ammunition introduced in 1941
- World War II infantry weapons of Australia
- World War II infantry weapons of China
- World War II infantry weapons of the United Kingdom
- World War II submachine guns
Chances are that when downloading software or poking around in your computer's settings menu, you've seen the terms 32-bit and 64-bit. But what do these terms mean, and how do they affect your computer?
Let’s look at this important computer distinction and see what it means.
32-Bit and 64-Bit Defined
If you’ve read our explanation on computer file sizes, you’ll know that computers use the binary system to count. Unlike the standard decimal system with 10 possible digits for each place, binary numbers are made up of only ones and zeros.
A bit refers to one binary digit, which is the smallest amount of information a computer can record. A 32-bit number, then, consists of four groups of eight bits each (a group of eight bits is called a byte). 64-bit numbers have twice as many bits, containing eight bytes.
This might lead you to think that a 64-bit number can store twice as much information as a 32-bit number. However, adding more places for binary numbers actually increases the possible values exponentially.
A 32-bit number can store 2^32 values, or 4,294,967,296. Meanwhile, a 64-bit number has 2^64 possible values, or a staggering 18,446,744,073,709,551,616. That’s over 18.4 quintillion, which is so large that it’s difficult to comprehend.
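To sanity-check these figures, here is a short Python sketch (illustrative only; Python is simply being used as a calculator, and nothing in it comes from the article itself):

```python
# Number of distinct values representable by 32-bit and 64-bit binary numbers.
# Python integers are arbitrary precision, so the exact values are easy to verify.
for bits in (32, 64):
    values = 2 ** bits                      # each extra bit doubles the count
    print(f"{bits}-bit: {values:,} possible values")

# Output:
# 32-bit: 4,294,967,296 possible values
# 64-bit: 18,446,744,073,709,551,616 possible values
```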
Now that we know what these values mean, how do they affect computers?
32-Bit and 64-Bit Processor Architecture
The processor (also called CPU) inside a computer uses a certain architecture (measured in bits) to process information. The exact details of this are far too complex for this explanation, but suffice it to say that the wider a CPU's architecture, the larger the chunks of data it can work with at once and the more memory it can address.
Today, most computers have 64-bit processors. Even phones have largely moved to 64-bit; Apple’s iPhone 5s, released in 2013, was the first smartphone to have a 64-bit chip.
It’s pretty rare to find a standalone 32-bit processor or a computer with a 32-bit processor inside these days. If you have a computer that’s quite old, it might be 32-bit. However, any computer you buy off the shelf today will very likely have a 64-bit CPU.
Going even further back, computers from decades ago had 16-bit processors that were even weaker than 32-bit systems. As you’d expect, these are essentially extinct today.
32-Bit and 64-Bit Operating Systems
The CPU’s architecture is just one part of the equation. As you may know, operating systems can also be 32-bit or 64-bit. The version you have installed depends on the processor in your system.
A 64-bit version of Windows (or another operating system) only works on 64-bit systems. If you have a 32-bit processor, you must install the 32-bit flavor of your chosen OS. You can install a 32-bit OS on a 64-bit system, but you won’t enjoy any of the performance benefits that 64-bit CPUs offer.
In Windows 10, you can check what processor type and OS version you have by opening Settings > System > About. Under Device specifications, you’ll see a System type entry that says something like 64-bit operating system, x64-based processor.
While x64 obviously means you have a 64-bit processor, x86 is commonly used for 32-bit architecture. This is a bit confusing; it stems from a popular line of Intel processors that had model numbers ending in 86 at the time 32-bit systems were becoming available.
If this page says 32-bit operating system, x64-based processor, then you should consider reinstalling a 64-bit version of Windows so you get all the benefits of your CPU.
And if you use a Mac, you’re almost certainly on a 64-bit system. Since Mac OS X Lion in 2011, Apple’s desktop OS has run only on 64-bit processors.
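If you'd rather check from a script than dig through settings menus, a small Python sketch using the standard platform and sys modules reports both the machine architecture and whether the running process itself is 64-bit. This is an illustration rather than anything referenced above, and it assumes Python 3 is installed:

```python
import platform
import sys

# CPU architecture as reported by the OS (e.g. 'AMD64', 'x86_64', 'arm64').
print("Machine:", platform.machine())

# Whether this Python build is 32-bit or 64-bit: a 64-bit interpreter
# can represent far larger native index values than 2**32.
print("64-bit process:", sys.maxsize > 2**32)

# Operating system name and release, for context.
print("OS:", platform.system(), platform.release())
```

Note that the two checks can disagree, for example when a 32-bit Python build runs on a 64-bit machine, which mirrors the OS-versus-CPU distinction described above.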
Differences Between 32-Bit and 64-Bit Windows
As we mentioned, the minute differences between system types are primarily something for computer scientists to wrestle with. Normal users will notice two major differences between 32-bit and 64-bit versions of Windows, though.
First is that 32-bit Windows can only utilize up to 4GB of RAM. Even if you have more RAM in your system, a 32-bit OS can’t take advantage of it. On the same About page where you checked your system type, under Installed RAM, you might see something like 8.0 GB (4.0 GB usable).
As you’d imagine, this is a waste of resources and will limit how many tasks you can run on your computer at once. Meanwhile, to highlight the difference in power between them, a 64-bit copy of Windows 10 Pro supports up to a staggering 2TB of RAM.
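The 4GB ceiling isn't an arbitrary number; it falls straight out of the address arithmetic, as this rough Python sketch (illustrative only) shows:

```python
# A 32-bit pointer can distinguish only 2**32 distinct byte addresses.
print(2 ** 32 / 1024 ** 3, "GiB")   # 4.0 GiB

# A 64-bit address space is vastly larger than even the 2TB limit
# that Windows 10 Pro enforces in software.
print(2 ** 64 / 1024 ** 4, "TiB")   # 16777216.0 TiB
```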
The second major difference on 64-bit versions of Windows is the presence of a second Program Files folder. 32-bit versions of Windows only have one Program Files directory, but on 64-bit Windows, you’ll see this in addition to Program Files (x86).
The reason for this is that 32-bit and 64-bit programs require different resources. A 32-bit program wouldn't know what to do with a 64-bit resource file, so Windows keeps them separated.
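As a quick illustration (a hypothetical check, not something from the article), you can see the two directories from a script via the environment variables Windows sets; on a 32-bit installation only the first is defined:

```python
import os

# On 64-bit Windows these variables point at the two separate directories;
# on 32-bit Windows, "ProgramFiles(x86)" is not set and .get() returns None.
print(os.environ.get("ProgramFiles"))        # e.g. C:\Program Files
print(os.environ.get("ProgramFiles(x86)"))   # e.g. C:\Program Files (x86)
```

Exactly which path "ProgramFiles" resolves to can also depend on whether the script itself runs as a 32-bit or 64-bit process.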
32-Bit and 64-Bit Software
The final part of the equation is the software you use. Like processors and operating systems, applications can be 32-bit or 64-bit. Unsurprisingly, 64-bit programs cannot run on a 32-bit OS.
Though 32-bit processors are fading out, 32-bit software is still fairly common on Windows. Like an operating system, 64-bit software can take advantage of the enhanced capabilities 64-bit architecture offers.
While this is important for resource-heavy apps like video editors, 32-bit software is still suitable for lighter apps. This differs based on the app: some install the right version for your system automatically, while others ask which you want to use.
Mac users might know that macOS Catalina, released in 2019, is the first version of the Mac operating system to drop support for 32-bit apps. If you need to use 32-bit Mac software, you’ll need to stay on macOS Mojave or earlier.
Get the Most From Your Bits
We’ve taken a look at the architecture that processors, operating systems, and software use to perform tasks on your computer. In summary, 64-bit provides major advantages over older 32-bit setups. Most processors and computers from today and the last few years are 64-bit, but 32-bit software is still around in some places.
For more computer explanations, read through our explanation of IP addresses.
The moon's halo or lunar halo is an optical illusion that causes a large bright ring to surround the moon. This striking and often beautiful halo around the moon is caused by the refraction of moonlight from ice crystals in the upper atmosphere.
In effect, these suspended or falling flecks of ice mean the atmosphere is transformed into a giant lens causing arcs and halos to appear around the moon or the sun depending on whether the effect is happening during the night or day respectively.
The effect is so striking that it has given rise to a wealth of folklore and superstition, and was used, not entirely unsuccessfully, to predict the onset of bad weather.
Related: 15 stunning places on Earth that look like they’re from another planet
What is a moon halo and how does it form?
A lunar halo is created when light is refracted, reflected, and dispersed through ice crystals suspended in cirrus or cirrostratus clouds located at an altitude of 20,000 feet (6,000 meters) and higher, up to 40,000 feet (12,000 meters).
The shape of these ice crystals focuses light into a halo around the moon or the sun. As ice crystals are usually hexagonal, these lunar halos are almost always the same size, with the moon (or the sun) sitting 22 degrees from the outer edge of the halo — roughly the width of an outstretched hand at arm's length.
The uniform 22-degree radius and 44-degree diameter of halos mean that both solar and lunar halos are often referred to as 22-degree halos.
This uniformity in diameter arises because ice has a specific index of refraction, and the hexagonal shape of an ice crystal means that when its sides are extended it forms a prism with a 60-degree apex angle. This results in an angle of minimum deviation of 21.84 degrees for light passing through the ice crystal.
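That 21.84-degree figure can be reproduced from the standard prism relation for minimum deviation, D = 2*arcsin(n*sin(A/2)) - A. The short Python sketch below is for illustration only and assumes a refractive index of about 1.31 for ice at visible wavelengths, an approximation since the exact value varies slightly with wavelength:

```python
from math import asin, degrees, radians, sin

def min_deviation(n, apex_deg):
    """Minimum deviation angle, in degrees, for a prism with the given
    apex angle and refractive index: D = 2*arcsin(n*sin(A/2)) - A."""
    a = radians(apex_deg)
    return degrees(2 * asin(n * sin(a / 2)) - a)

# A hexagonal ice crystal acts as a 60-degree prism; n ~ 1.31 for ice.
print(round(min_deviation(1.31, 60), 2))   # ~21.84 -> the 22-degree halo
```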
These ice crystals also act as prisms, separating white light coming from the sun, or reflected by the moon, into its individual colors, just like the atmospheric effect that creates a rainbow.
This happens because different wavelengths of light, thus different colors, experience a different degree of refraction when they pass through a prism.
This means that lunar halos can be very lightly tinted with rainbow colors, longwave red light on the inside, and shortwave blue light on the outside. Colors in the lunar halo are often too weak to be seen with the naked eye and may be much more visible around the sun because of how much brighter it is than the moon.
The optical properties of the ice crystals also mean that they don’t direct light back toward the center of a halo. This means that the sky inside a 22-degree halo can often appear darker than the surrounding sky making it appear like a “hole in the sky.”
Do lunar halos have company?
Lunar halos are often accompanied by smaller, more colorful rings called coronas, which are caused by light being diffracted by water droplets in the atmosphere. Coronas aren't connected to lunar halos: they are around half as wide, with a radius of around 10 degrees, and they are produced by water droplets rather than ice crystals.
In addition to this, refraction from ice crystals can also create double halos. On rare occasions, these double halos even possess spokes radiating out to their outer edges.
Not only are lunar halos closely related to solar halos, but this icy refractive effect can also create rings opposite these astronomical bodies, or pillars of light, and even “sun dogs” — concentrated patches of sunlight seen 22 degrees to the left or right of the sun that can appear in pairs.
Halos with a radius of 22 degrees can also be accompanied by 46-degree radius halos, which can occur independently as well. Larger and much fainter than 22-degree halos, 46-degree halos form when sunlight enters a randomly oriented hexagonal ice crystal through one of its faces and exits through its base.
This causes light to be dispersed at a wider angle — one greater than the angle of minimum deviation — creating a halo with a more blurry and diffuse outer edge.
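The same prism relation accounts for the wider ring: a face-to-base path through the crystal behaves like a 90-degree prism rather than a 60-degree one. A self-contained sketch, again assuming a refractive index of roughly 1.31 for ice:

```python
from math import asin, degrees, radians, sin

n, apex = 1.31, radians(90)   # face-to-base path: effective 90-degree prism
d_min = degrees(2 * asin(n * sin(apex / 2)) - apex)
print(round(d_min, 1))        # ~45.7 -> the fainter 46-degree halo
```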
On its science site, NASA documents a rare incidence of a quadruple lunar halo. The four halos around the moon were sighted on a winter night above Madrid, Spain, in 2012. Falling hexagonal ice crystals created a 22-degree halo, while column ice crystals created a rarer circumscribed halo. More distant ice crystals created a third rainbow-like arc 46 degrees from the moon. Finally, part of a fourth whole 46-degree circular halo was also visible, completing the quadruple lunar halo that NASA described as "extremely rare, especially for the moon."
Related: Red lightning: The electrifying weather phenomenon explained
How common are moon halos: When and where to see them
Farmers’ Almanac describes lunar halos as being fairly common, meaning there is a good chance of spotting one, as long as you are willing to brave cold and possibly wet weather. That’s because though lunar halos can happen at any time of year, they are more common in winter.
A moon halo can be seen with the unaided eye, but if you’re looking for a telescope or binoculars to observe the moon in more detail, our guides for the best binoculars deals and the best telescope deals now can help. Our best cameras for astrophotography and best lenses for astrophotography can also help you prepare to capture an impressive lunar photo.
Related: Ultimate guide to observing the moon
Because cirrus clouds are the usual suspects behind lunar halos, this optical illusion is more likely to be visible when a bright full or nearly full moon is veiled by thin cirrus clouds. This means unlike hunting other astronomical events and objects, cloudy conditions can actually be a bonus when it comes to spotting lunar halos.
The cirrus clouds are transparent and cover wide areas of the sky — up to thousands of miles — producing a host of other halo effects like white or colored rings, spots, or arcs of light in addition to solar and lunar halos.
These clouds can be so thin and finely dispersed that sometimes lunar and solar halos are the only way of knowing they are actually present in the sky.
Myths and cultural significance of moon halos
According to the Farmers’ Almanac, in folklore, the observation of a lunar halo has been associated with forthcoming unsettled weather, especially during winter.
This is something that has often been proven true thanks to the phenomena behind these halos. This is because cirrus clouds sometimes indicate an approaching warm front which is, in turn, associated with a low-pressure system, a storm that can carry with it a sudden drop in temperature, heavy rain, hail, and even thunder and lightning.
Because cirrus clouds often signal rain falling within the next 24 hours, the atmospheric optical illusions they cause became embedded in "weather lore", becoming an early method of empirically predicting the weather before the development of meteorology.
One striking and poetic example of this folklore is a proverb listed in the book Dictionary of Proverbs by George Latimer Apperson.
“If the moon show a silver shield,
Be not afraid to reap your field;
But if she rises haloed round,
Soon we’ll tread on deluged ground.”
This system isn’t particularly reliable when predicting bad weather as cirrus clouds aren’t always a sign of an approaching warm front.
Another folklore idea surrounding lunar halos that is also worth being skeptical of is the claim that by counting the stars encircled by the halo a person could tell how many days until the bad weather moved in.
More stars meant more time until the rain set in, and fewer stars signified that bad weather wasn’t far from descending.
Ice crystals aren't the only objects that can bend light and create stunning optical illusions. Astronomical bodies much further afield than the moon, like distant galaxies, can be blurred, stretched, magnified, and even caused to appear at multiple points in the sky when objects of tremendous mass warp the very fabric of spacetime between them and Earth. The European Space Agency (ESA) explains the phenomenon of gravitational lensing.
Ring Around The Moon? Here's What It Means, Farmers' Almanac, [Accessed 11/19/22], https://www.farmersalmanac.com/ring-around-the-moon-9657
22° Halo around the moon, Atmospheric Optics, [Accessed 11/19/22], https://atoptics.co.uk/halo/circmoon.htm
Moon Halo, Hyperphysics, [Accessed 11/19/22], http://hyperphysics.phy-astr.gsu.edu/hbase/atmos/moonhalo.html
Quadruple Lunar Halo Over Winter Road, NASA Science, [Accessed 11/19/22], https://science.nasa.gov/quadruple-lunar-halo-over-winter-road
"Disk with a hole" in the sky, Atmospheric Optics, [Accessed 11/19/22], https://atoptics.co.uk/halo/circ2.htm
Apperson, G. L., Dictionary of Proverbs, Wordsworth Editions Limited, 2006, ISBN 1-84022-311-1
Moon Airliner is a fascinating photo of an airliner accidentally captured crossing in front of a full Moon.
Western Nevada College is excited to celebrate the 20th anniversary of Jack C. Davis Observatory in 2023. The observatory is named after the college's first president and opened in 2003 with astronaut Buzz Aldrin, the second man to walk on the Moon, lecturing at opening ceremonies.

In honor of this milestone anniversary, WNC Foundation and the JCDO Director Dr. Thomas Herring have worked together to create some spectacular astrophotography for its annual fundraising note card campaign. A pack of these fascinating note cards is available for $10. Each pack includes two notecards for each photograph for a total of 12 note cards. To purchase a pack of note cards, please phone Hilda Villafana at 775-445-3325.

Five of the six photographs were taken by John Dykes, an active volunteer at the observatory and a former president of the Western Nevada Astronomical Society (WNAS). The full Moon over WNC was a photo taken by Sam Golden of Choice 50 Photography. Each pack includes:

- Moon Airliner: A plane caught by accident crossing in front of a full Moon. The photographer didn't notice until the next day that something was in the way.
- Horsehead: The Horsehead nebula is a reflection nebula located near Orion's belt about 1375 light years from Earth. The dark shape is dense gas and dust blocking visible light from stars behind. The red surroundings are emissions from hydrogen, the most abundant element in the universe.
- Pleiades: This is an open cluster of stars that has been given names by many cultures around the globe. This cluster of young hot stars is located about 444 light years from Earth. It is also known as Subaru in Japan, Makaliʻi to native Hawaiians, and Matariki to the Māori.
- Rosette: This is a nebula of ionized atomic hydrogen about 5000 light years from Earth. The red glow is characteristic of hydrogen emissions.
- Jack C. Davis Observatory with star trails: This photo is a time lapse of stars moving through the sky as the Earth rotates on its axis. Photos were taken over the course of the night and stacked together. During the final exposure a car's headlights illuminated the building, providing the contrast with the dark sky and shadows across the building itself.
- Full Moon over WNC: The fifth annual Reach for the Stars Gala fundraiser presented by WNC Foundation in August included an extra visual bonus from above — a spectacular full Moon.

For people with an interest in astronomy or a desire to learn more about the universe, you are invited to attend free Saturday night Star Parties from dusk until 11 p.m. at the observatory. WNC also offers astronomy classes, such as Stellar Astronomy (AST 110), this spring, and Dr. Thomas Herring and Northern Nevada lecturer Mike Thomas provide free lectures to the community throughout the year.
The Horsehead nebula is a reflection nebula located near Orion’s belt about 1375 light years from Earth.
Cyber Monday deals are here and the best telescope deals are now in full throttle as we bid farewell to Black Friday, with hundreds of dollars worth of savings to be had across a wide range of telescopes. We’ve highlighted the best Cyber Monday deals for all gifts space-related this holiday season on our live Cyber Monday deals page. But to make things simpler we’ve rounded up our top 10 favorite telescope deals right here.
From beginner telescopes all the way up to premium telescopes, you can spend under $100 or over $1000 to enhance your stargazing experience. And there are some massive savings to be had, so check out the best Cyber Monday deals down below.
10 best Cyber Monday telescope deals we’ve seen so far of 2022
If none of those take your fancy, why not browse our guides to the Best telescopes, Best telescopes for deep space or Best telescopes for seeing planets. Entry-level astronomers might be interested in discovering some of the Best telescopes for beginners or for those with smaller hands, the Best telescopes for kids.
As well as bagging a Cyber Monday bargain here, we also have deals hubs for Budget telescopes under $500 and our perennial Telescope deals on sale. Or take a look at general space gifts in our Cyber Monday deals live page.
Here we’ve rounded up the best cameras for astrophotography that we think will help you capture your best astro images. What’s more, many of them are, discounted Black Friday deals that are continuing over Cyber Monday. Keep an eye on our live Black Friday/Cyber Monday Deals blog for all of the updates.
The bonus of having one of the best cameras for astrophotography is that they are typically versatile cameras that perform exceptionally for daytime shooting too. This negates the need to spend on additional equipment, something we all want to avoid with the ongoing rise in the cost of living.
Remember, it’s not all about the camera. Lenses are just as (if not more) important. That’s why we’ve laid out the best lenses for astrophotography too. We’ve also put together a guide for the best camera accessories for astrophotography and the best light pollution filters for astrophotography, especially important if you’re shooting in an area prone to skyglow.
DSLRs and mirrorless cameras have long been known for their night sky shooting prowess. Low image noise, high ISO capabilities, and flexibility for regular daytime shooting make them ideal devices for many users. However, there are also astro-specific cameras that traditional photographers often overlook. These specialized devices mount to telescopes for incredibly clear astrophotographs that can easily surpass DSLR or mirrorless cameras, although they are unsuitable for conventional photography.
Astrophotographers will need to pay close attention to the performance of each system’s noise handling, as this is a common problem for low-light and night-time photographers. Check how aggressively the camera’s built-in filter cuts infrared light, since many cosmic objects emit strongly at those wavelengths; removing the IR-cut filter can be done by a specialist post-purchase. Dimensions and weight are also essential factors for portability and durability, since chances are you’ll be traveling to find a suitable dark sky.
Despite the common misconception, expensive doesn’t necessarily mean best (for your purpose). Some cameras cost far less but give superior astro image quality than even the most expensive models. There does always tend to be a trade-off. That might be shooting flexibility or lens mount versatility. Of course, you won’t be able to capture the stars without a sturdy tripod, so check out our guide to the best tripods for astrophotography to prepare yourself with the best possible setup.
The best cameras for astrophotography in 2022
A workhorse and detail-oriented powerhouse, this 45.7MP DSLR is possibly one of the best cameras for astro full stop
Sensor: 45.7MP, Full-frame 35mm
Lens mount: F-mount
ISO range: 64-25600 (102400 expandable)
Viewfinder size/resolution: Optical, 0.75x mag
Video capability: 4K UHD 30FPS
Size: 146 x 124 x 78.5 mm
Memory card type: 1x SD/SDHC/SDXC and UHS-II, 1x XQD/CF Express
Reasons to buy
Huge stills resolution for extra detail
Native compatibility with F-mount lens range
Reliable and durable weather sealing
Reasons to avoid
Bigger and bulkier than mirrorless
Low ISO range
The Nikon D850 DSLR was released almost five years ago but still keeps up with the new kids on the block in many photography disciplines, including astro. The 45.7-megapixel image sensor on the D850 produces ultra-detailed stills photos while keeping image noise to a minimum. It can even shoot 4K UHD 30 frames per second video for those who want to make movies of the stars.
Partly due to when it was made, it is considerably heavier, bigger and bulkier than astro-specific cameras or its mirrorless competition. Still, thanks to its rugged construction and excellent weather sealing, it will last for many years, no matter what environment you choose to shoot in.
Like all DSLRs, it has an optical viewfinder, making it a little more challenging to compose and focus for night sky imaging, but the rear tilting touchscreen remedies this problem. It has two card slots for SD and XQD/CF Express cards to ensure it can record all that incredible detail at speed and for added peace of mind.
As seen on the flagship Nikon D5, the D850 utilizes full button illumination, making it simple to operate in the dark without needing a headlamp that may damage your night vision. This was one of the features we enjoyed most during our Nikon D850 review, alongside its expandable ISO sensitivity range of 102400 — it practically sees in the dark. Although a very high ISO will drastically reduce image quality, it can be useful just to help you compose your shot if nothing else.
Stylish but capable, body mounted controls make for easy operation in the dark
Sensor: 26.1 megapixel APS-C
Lens mount: X-mount
ISO range: 160-12800 (80-51200 expanded)
Viewfinder size/resolution: 0.5-inch, 3.69 million dots
Video capability: 4K
Size: 135 x 93 x 64 mm
Memory card type: UHS-I / UHS-II / Video Speed Class V90
Reasons to buy
Wide ISO sensitivity range
Versatile for other photography types
Reasons to avoid
No battery charger included; it needs plugging in to charge
The X-T4 is Fujifilm’s flagship mirrorless camera and the most powerful X-series. It is an excellent option for astrophotography enthusiasts, as we discussed in our Fujifulm X-T4 review. The vari-angle screen makes composing shots much more comfortable than without, given the camera will be pointing at the sky.
The classic look of the camera makes it stylish, but the body-mounted dial controls make it easier to use in the dark if you can remember which dial does what. The 26.1MP APS-C sensor creates excellent image quality, and there are plenty of lenses available to fit this model to enhance them further.
The Fuji X-T4 uses the NP-W235 battery with a CIPA rating of around 500 shots per charge in an everyday performance mode. When we carried out our full review, we found this can be much higher when shooting in the daytime. However, when shooting the night sky, the long exposures needed sap the battery more, so expect slightly fewer.
This camera is a versatile option for photographers who regularly dabble in other styles of photography. It has a generous 6.5 stops of in-body image stabilization, excellent low-light performance, and a high-speed processing engine. That makes it ideal for action or sports photography. It is also a top choice when it comes to timelapse photography. Check out our best cameras for timelapse videos for alternative options for this style of capture.
A low light beast, this camera set a precedent as one of the best astro mirrorless cameras
Sensor: 24.2MP, Full-frame 35mm
Lens mount: E-mount
ISO range: 50-51200 (204800 for stills)
Viewfinder size/resolution: 0.5-inch, 2.35 million dots
Video capability: 4K UHD 30fps
Size: 126.9mm x 95.6mm x 73.7mm
Memory card type: 1x SD/SDHC/SDXC (UHS-I/II compliant), 1x multi slot for Memory Stick Duo/SD/SDHC/SDXC (UHS-I compliant)
Reasons to buy
Incredible low light video performance
Good battery life
93% AF point coverage
Reasons to avoid
Certainly a more expensive option
Low stills resolution compared to competition
New version now available
The Sony A7 III is a favorite among astrophotographers that like to shoot mirrorless and is one of the brightest stars of the astro camera world (pardon the pun). Though its electronic viewfinder isn’t as detailed as others we’ve listed, it still provides a beneficial exposure-ramped view to aid with composing astrophotographs. Low light autofocus detection, while not as sophisticated as some in this list, still performs well by working in -3 EV. In our Sony A7 III review, we were particularly impressed with the high dynamic range which allows you to recover amazing detail from the shadows.
Even when ramped up to a massive ISO 51200, this camera handles image noise well and produces excellent image results. For those not too worried about movie shooting (though it can capture 4K UHD at 30FPS), ISO can jump higher, expanding to an insane 204800 for stills photography.
Shooting for hours at night can drain the battery quickly, especially when you consider it has to run power both to the rear screen and the EVF. However, this camera is CIPA-rated well above average for a mirrorless of this type and can shoot 710 still shots via the rear LCD monitor. It is a touch more expensive than others in its class, but if you’re after a natural low light performer that is also versatile enough to excel in other photography styles, the A7 III might be the one for you.
Small but important improvements over its predecessor
Sensor: BSI-CMOS 24.5MP
Lens mount: Z-mount
ISO range: 100-51200 (expanded 50-204800)
Video: 4K 60p
Weight without lens: 1.5lbs/675g
Memory card slots: 1x CFexpress/XQD, 1x UHS-II SD
Reasons to buy
Great for low-light shooting
Excellent weather sealing
Reasons to avoid
Not worth upgrading from the Z6
Lots of competition at a similar or lower price
Following on from the Nikon Z6 (covered below), it makes sense to talk about its successor, the Nikon Z6 II. As we discussed in our hands-on Nikon Z6 II review, there aren’t enough upgrades to warrant moving from one model to the other, and it’s not worth the extra cost if you’re only going to be shooting astro with it.
That said, suppose you’re upgrading from a beginner model, capturing video, and shooting other photography styles alongside astro. In that case, the Z6 II is worth considering if you can spare the extra dollars, as it is a little more refined.
Take note of everything the Z6 has, but add a second memory card slot for extra storage and peace of mind, a faster burst rate and autofocus, quicker image processing, and 60FPS at 4K video shooting.
Another inclusion astrophotographers will love is the better range of shutter speeds, allowing more control over those long exposure shots. The shutter speed limit is now 900 seconds (15 minutes).
Realistic but exceptionally clear images of the night sky, and a better option for astro than the Z7
Sensor: 24.5MP, Full-frame 35mm
Lens mount: Z-mount
ISO range: 100-51200 (204800 expandable)
Viewfinder size/resolution: 0.5-inch, 3.69 million dots
Video capability: 4K UHD 30fps
Size: 134 x 100.5 x 69.5 mm
Memory card type: 1x SD/SDHC/SDXC and UHS-II, 1x XQD/CF Express
Reasons to buy
Low image noise
Superb electronic viewfinder
Great low light Autofocus
Reasons to avoid
Stills resolution not the highest
Limited lens range
Superseded by Z6 II
Though superseded a while back by the superior Nikon Z6 II, the Z6 (one-half of the first two mirrorless cameras Nikon ever produced) is still one heck of a camera and excels in low light. For our money, we think the Z6 is better for astrophotographers than its big brother, the Z7, due to the lower resolution. A lower resolution on the same full-frame image sensor means less image noise detracting from the final shot. What’s more, the Z6 is also much cheaper than the Z7.
The electronic viewfinder has excellent detail, with a million more dots than the aforementioned Sony A7 III, and gives a realistic, clear image. The Z-mount lens range is expanding, but it’s still not as established as the ranges behind other models in this guide. That said, with an FTZ adapter you can use any of Nikon’s F-mount lenses from the past several decades, so this isn’t a problem.
Our Nikon Z6 review found that shooting even up as high as ISO 12,800 adds very little noise or softness to the image, making it perfect for low-light situations like astro and night-time photography. This is especially true if you’re trying to pick out unlit objects or scenery to give the night sky some context. The image quality only degrades a little on the maximum and expanded settings.
A pleasure to compose your shot even in the darkest skies as well as a nifty timelapse function
Sensor: 26.2MP, Full-frame 35mm
Lens mount: EF-mount
ISO range: 100-40000 (102400 expandable)
Viewfinder size/resolution: Optical, 0.71x mag
Video capability: 1920 x 1080, 60fps
Size: 144.0 x 110.5 x 74.8 mm
Memory card type: SD, SDHC or SDXC (UHS-I) card
Reasons to buy
4K timelapse feature
Handy vari-angle touchscreen display
A lot of camera for the money
Reasons to avoid
No 4K video recording
Only one SD memory card slot
Low dynamic range a shame
The Canon EOS 6D Mk 2 is an affordable DSLR for those wanting to dip their toes into astrophotography without breaking the bank. It does lack some modern features, but this is a brilliant full-frame option for its price point.
Its handy vari-angle touchscreen display makes it simple to compose the scene even if the camera is pointing skyward. For astro-shooters that like a moving image, the EOS 6D Mk 2 can shoot 4K time-lapses (in timelapse mode), making it perfect for detailed videos of the night sky, especially when paired with a slider or a star tracker. We found in our Canon EOS 6D Mk 2 review that it’s best to avoid this model if you’re planning on shooting fast action in low light, but that’s not a problem for astrophotography.
While it only captures regular video footage at full-HD 1080p, it records this at 60FPS for smooth results. Its dynamic range also leaves something to be desired, but when combined with plenty of calibration frames (sketched below), this shouldn’t make much difference after image processing.
A single SD card slot might have nervous shooters biting their nails during longer sessions, but with 102400 expandable ISO and 26.2MP stills capture, you can relax knowing results will be clear and crisp every time.
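As a rough illustration of the calibration frames mentioned above (not a workflow specific to the EOS 6D Mk 2, and with synthetic numbers standing in for real exposures), here is a minimal numpy sketch of dark subtraction, flat fielding and median stacking:

```python
# Minimal sketch of calibration-frame processing: subtract a master dark and
# divide by a normalized master flat before stacking the light frames.
# Synthetic arrays stand in for real exposures here.
import numpy as np

rng = np.random.default_rng(0)
shape = (100, 150)

# Stand-ins for real data: a stack of light frames plus dark and flat frames.
lights = rng.normal(1000, 30, size=(8, *shape))
darks = rng.normal(100, 5, size=(16, *shape))
flats = rng.normal(5000, 50, size=(16, *shape))

master_dark = darks.mean(axis=0)
master_flat = flats.mean(axis=0) - master_dark
master_flat /= master_flat.mean()  # normalize so division preserves brightness

calibrated = (lights - master_dark) / master_flat
stacked = np.median(calibrated, axis=0)  # median stack rejects outliers
print(stacked.shape, stacked.mean())
```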
A dedicated color astro camera producing stunning high resolution stills with an enormous frame rate
Type: Color CMOS astronomy camera
Sensor: 20.1MP, 1-inch
Lens mount: Scope mounted
ISO range: N/A
Video capability: 5496 x 3672, 19 FPS
Size: 62mm diameter
Memory card type: N/A
Reasons to buy
Electronic shutter minimizes camera movement
19FPS perfect for solar/lunar photography
USB 3.0 output
Reasons to avoid
Scope mounted only
Requires dedicated software to run
Images at 12 bit depth maximum
This is a compact full-color camera with its own onboard cooling system to minimize noise whilst shooting long exposures. One of the best dedicated astrophotography cameras out there, the ZWO Optical ASI183MC Pro is the color version of the ZWO Optical ASI183.
In our ZWO Optical ASI183MC Pro review, we found it to represent a great choice for astrophotographers looking for a dedicated astro-imaging camera. You won’t need to bring a stack of RGB filters when heading out to shoot. It’s also much smaller and lighter than other astro cams. Still, at 1.6e read noise, it’s a serious camera.
It’s one of the more efficient camera models for astrophotography and provides a whopping 84% Quantum Efficiency peak. For an astro camera, it also has a high pixel count, at approximately 20.48MP.
It shoots an all-out frame rate of 19FPS at full resolution, which makes the ZWO Optical ASI183MC ideal for solar or lunar imaging. However, if users drop the resolution down, there’s the potential to shoot hundreds of frames a second if wanted! One downside, as with all dedicated astro cams, is that you’ll need to plug it into a computer with dedicated software to run it. A fast USB 3.0 port means a healthy data transfer for the higher frame rate captures.
This camera’s design and build is very specifically geared towards clean astro shooting, as complemented by its zero amp glow
Type: Color CMOS astronomy camera
Sensor: 9MP, 1-inch
Lens mount: Scope mounted
ISO range: N/A
Viewfinder size/resolution: N/A
Video capability: 3008 x 3008, 20FPS
Memory card type: N/A
Reasons to buy
Zero amp glow
80% quantum efficiency
High 20FPS frame rate
Reasons to avoid
No mono version
Square CMOS sensor unusual for some
The ZWO Optical ASI 533 Pro’s most attractive feature is likely that it has zero amp glow. Although you can remove this in editing software, this additional processing time can stack up and reduce productivity, especially when considering that you could opt for an astro camera like this and avoid it altogether. By removing the need for extra processing, you’re also keeping a cleaner, more efficient resulting image.
This camera only comes in a color version, so monochromatic enthusiasts should leave their RGB filters at home. It has a good 80% Quantum Efficiency and a quick 20FPS frame rate for those needing to shoot fast. As with almost all dedicated astro cameras, the ZWO Optical ASI 533 Pro needs an external power supply to work. A 9MP square sensor might seem a little unusual to some photographers, but it has 1.0e read noise and a 14-bit ADC for good dynamic range.
In our ZWO Optical ASI 533 Pro review, we concluded that it is a great choice for those looking for a simple-to-use, dedicated astro-imaging camera at an affordable price.
While it’s an older model, it’s still a solid and reasonably priced choice for astrophotographers
Type: Full-frame mirrorless
Sensor: 30 megapixels
Lens mount: RF (EF and EF-S with adapter)
ISO range: 100-40000
Viewfinder size/resolution: 0.5-inch OLED EVF
Video capability: 4K and 10-bit
Size: 135.8 x 98.3 x 84.4mm
Memory card type: 1x SD/SDHC/SDXC (UHS-II)
Reasons to buy
Good value for money
Reasons to avoid
Button layout could be better
Not as rugged as it’s rivals
Though four years old, Canon’s first-ever full-frame mirrorless RF system camera still holds its own against the more recent releases.
As we discussed in our Canon EOS R review, it’s neither the sleekest nor best-built body, so you’d have to be a little gentler with it than you would some of the hardier models — like the Nikon Z6 — and the layout of the buttons could be more intuitive. None of these would be reasons not to buy this model, but they could take some getting used to.
Body and build quality aside, the performance of the Canon EOS R is above average when shooting in low light. It performs especially well with long exposures, which makes it perfect for traditional astro shooting and time-lapse work (don’t forget your tripod). It also processes the shots very quickly with little noticeable buffer lag.
The screen is large and clear, with impressive touch functionality. Like a smartphone, you can drag and set the focus with your finger. The vari-angle touch screen also makes taking low-angle shots much more comfortable.
How we test the best cameras for astrophotography
To guarantee you’re getting honest, up-to-date recommendations on the best cameras to buy here at Space.com we make sure to put every camera through a rigorous review to fully test each product. Each camera is reviewed based on many aspects, from its construction and design, to how well it functions as an optical instrument and its performance in the field.
Each camera is carefully tested by either our expert staff or knowledgeable freelance contributors who know their subject areas in depth. This ensures fair reviewing is backed by personal, hands-on experience with each camera and is judged based on its price point, class and destined use. For example, comparing a 60MP full-frame mirrorless camera to a sleek little crop-sensor DSLR wouldn’t be appropriate, though each camera might be the best performing product in its own class.
We look at how easy each camera is to operate, whether it contains the latest up-to-date imaging technology, whether the cameras can shoot high-quality stills photos and high-resolution video and also make suggestions if a particular camera would benefit from any additional kit to give you the best viewing experience possible.
With complete editorial independence, Space.com is here to ensure you get the best buying advice on cameras, whether you should purchase an instrument or not, making our buying guides and reviews reliable and transparent.
Best cameras for astrophotography: What to look for
It can be difficult to know what to look for in the best cameras for astrophotography, but there are some crucial factors to consider to help you decide. Budget is significant, with new users who want to dabble perhaps setting aside a little less than more seasoned photographers that will only settle for the very best images. However, image clarity is critical, and you’ll find that larger sensors with fewer pixels can capture astro shots with minimal image noise. By negating the effects of image noise, we’re able to process imagery more efficiently with better-detailed results.
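To see why larger pixels tend to mean cleaner frames, here is an illustrative shot-noise calculation. The photon rate, read noise and pixel pitches below are invented for the example and are not specifications of any camera in this guide.

```python
# Illustrative only: photon shot noise vs. pixel size, with invented numbers.
# Signal scales with pixel area; SNR = signal / sqrt(signal + read_noise^2).
import math

photon_rate = 5.0   # photons per square micron per exposure (assumed)
read_noise = 3.0    # electrons RMS (assumed)

for pixel_pitch_um in (3.0, 4.5, 6.0):  # hypothetical pixel sizes
    signal = photon_rate * pixel_pitch_um ** 2
    snr = signal / math.sqrt(signal + read_noise ** 2)
    print(f"{pixel_pitch_um:.1f} um pixel: signal={signal:.0f} e-, SNR={snr:.1f}")
```

Bigger pixels collect more photons per exposure, so the shot-noise-limited SNR rises roughly in proportion to pixel pitch, which is the effect described above.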
While not particularly useful for astrophotography, autofocus may still be helpful for those who want to combine night-time shooting with near-twilight landscapes that show the brightest stars, planets, and satellites hanging above a beautiful foreground. A low EV rating on the autofocus ability is crucial for sharp shots in the dark.
One of the best headlamps can help, but setting up a shot by a dim red light is still frustrating, so consider whether you need backlit, illuminated buttons to help guide camera setup in the dark.
Specialist astrophotography cameras have a predisposition to warm up during long exposure shots. If you’re interested in an astro camera with built-in cooling to keep image quality high, expect it to be larger and heavier, and a little noisier as the fans whir while operating.
Photographers must consider lens choice when choosing a camera for astrophotography. While most major manufacturers have excellent ranges of top-quality glass, not all camera models can accept the full range of lenses due to differences in mount types. Ideally, fast lenses with wide apertures and excellent optical sharpness and clarity are what to look for when shooting astrophotography. Pair this with a camera body that handles high ISO and image noise well and you should be ready to go.
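One rule of thumb that pairs naturally with lens choice (it is not mentioned in the specs above, so treat it as a starting point rather than gospel) is the "500 rule" for untracked shots: divide 500 by the effective focal length to estimate the longest exposure before stars visibly trail. A quick sketch:

```python
# The "500 rule" of thumb: longest untracked exposure (seconds) before stars
# visibly trail is roughly 500 / (focal length x crop factor). The values
# below are examples, not recommendations for any specific camera or lens
# in this guide.
def max_untracked_exposure(focal_length_mm: float, crop_factor: float = 1.0) -> float:
    return 500.0 / (focal_length_mm * crop_factor)

for focal, crop in ((15, 1.0), (35, 1.0), (18, 1.5)):
    print(f"{focal}mm on {crop}x crop: ~{max_untracked_exposure(focal, crop):.0f} s")
```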
Save $100 on the RRP of the Canon EOS R10 over at Walmart.
The brand new Canon EOS R10 has only been around for a few months this year, but it’s already cheaper than the retail price over at Walmart at just $879, a price we’ve seen throughout the Black Friday weekend and we’re hoping to see continue over to Cyber Monday.
The new mirrorless camera has a 24.2 Megapixel CMOS (APS-C) image sensor which also records 4K UHD 30p video footage. Combine that with the EOS R10’s maximum ISO sensitivity of 32,000 we’d suggest this is plenty good enough for astrophotography if you’re a casual photographer hoping to use it as a generalist camera, too. With that in mind, some buyers may be interested in our Best beginner cameras guide.
Thanks to the flip-around vari-angle LCD screen it’s useful for vloggers or any content creator who wants to record and see themselves without setting up a separate monitor. Though we wouldn’t recommend this if your main aim is astrophotography, because full-frame cameras perform better in this area thanks to their lower image noise. The main benefit of the R10, though, is lots of mirrorless functionality in a small, lightweight package from Canon that has its most up-to-date stamp.
Because the Canon EOS R10 isn’t an astro-specialist we’d hope to see it perform well in other generalist areas like portraits, landscapes and a bit of wildlife or sports. Fortunately, Canon has taken this into account and it’s packed full of helpful features. It captures photos at 15FPS using the mechanical shutter and uses the Dual Pixel CMOS AF technology to track focus continuously (if required) using its intelligent people, animal and vehicle autofocusing. All this for $100 less than the retail price (opens in new tab).
It’s compatible with the Canon RF-S/RF lens group but if you want to use older EF and EF-S lenses then use the EF-EOS R mount adapter to expand your lens range — particularly helpful for existing Canon DSLR users who want to upgrade to the very latest beginner mirrorless camera technology without dropping a ton of money on lens upgrades, too.
Be sure to check out Space.com’s Black Friday deals page, or our guide to the Best cameras for photos and videos or the Best cameras for astrophotography.
Act fast! Get Peacock for just $1 per month for a full year
Stream Battlestar Galactica and other amazing sci-fi for less courtesy of Peacock, but act fast before this Cyber Monday deal expires.
This Cyber Monday deal on Peacock can save you a whopping 80% for the full year, which includes a great mix of new TV shows and classic movies. Aside from BSG, you’ll find superhero show Heroes, the universe-hopping 1990s show Sliders, recent comedy Resident Alien, and older movies like Phantasm and Serenity (the follow-up to Firefly).
Those are just some of the space and sci-fi options available on Peacock, and this deal will likely fly away quickly. If you’re looking for more streaming offers, we’ve also posted about deals on 50% off Paramount Plus and Hulu for just $1.99 per month too, among other Cyber Monday streaming deals of the universe.
Rocket away with up to 32% off on Cyber Monday Estes deals
Impress your space history buff with model rockets from trusted brand name Estes, which will complete their collection for Cyber Monday.
The Estes Saturn V moon rocket 1:200 Scale will let you relive human lunar missions at just $73.29 at Amazon, or 18% off. Another key player in early space history, the Estes 7255 Little Joe I, is just $93.59 at Amazon or 26% off.
Alternatively, you can snag the Estes 810 220 Swift Flying Model Rocket Kit, Brown/A at $7.49 at Amazon or 32% off, the Estes 2169 Dragonite Flying Model Rocket Kit at $16.49 at Amazon or 13% off, and the Estes Hi-Flier Flying Model Rocket Kit at $12.60 at Amazon or 12% off.
Get yourself the wadding, engines and other components your flight plan requires and you’ll be all set to go. Check out our best model rockets guide for informed discount shopping, or consider our Black Friday deals page, our guides to the best drones, best cameras for photos and videos, and best cameras for astrophotography.
Fly away with 72% savings on this HR Drone for kids this Cyber Monday
This is a Cyber Monday discount that is sure to lift off quickly!
The HR Drone for Kids is hovering at a low price of $35.99 at Amazon, and likely won’t last long at that price. At nearly $100 off, you will save 72 percent on the perfect gift for your high-flying child, allowing you to fill out your holiday roster at a discount price.
Enjoy the high-definition camera for in-flight imagery and video, which you can livestream to your phone with the right connection. Beginners will especially love the feature allowing you to guide your drone along a flight path drawn on your phone.
Our guides to the best drones have more deals on offer. Otherwise, you can snag a lot of high-flying gifts at Space.com’s Black Friday deals page, or our guides to the best cameras for photos and videos and best cameras for astrophotography.
Remember to beam up Paramount Plus and save 50% on a full year for Cyber Monday
Get your Star Trek and Halo cravings on with a deep discount on Paramount Plus ahead of Cyber Monday.
Paramount Plus is offering a 50% discount on its two streaming plans, allowing you to warp speed through sci-fi shows and space content as long as you are ready to binge on the annual plan. This deal will end today (Nov. 27), so act quickly.
Snag your infinite streaming for a year on Paramount Plus for as low as $24.99 a year, or about $2 a month, on an ads tier. The ad-free Premium tier is also half off, at an impressive $49.99 a year or just over $4 a month.
If Paramount Plus isn’t your preferred service, check out our other Cyber Monday streaming deals to satisfy your sci-fi cravings. You can also see our latest Cyber Monday deals for non-streaming bargains.
Join the Jedi with this 80% off Star Wars lightsaber Cyber Monday gift
Use the Force for up to 84% off on a huge range of lightsaber deals and gifts this Cyber Monday.
You’ll want to hyperdrive fast into the Tigoola Pixel Lightsaber Star Wars deal, now just $81 at Amazon (opens in new tab) (compared with the usual $509.99). It includes 13 colors, five sound modes, and aluminum alloy hand grip.
Close behind is the HOCET Star Wars Neo Realistic Pixel Lightsaber, an incredible 82% off and now just $93.59 at Amazon. It also features 13 colors, five sound modes, and an aluminum alloy hand grip.
These gifts are perfect for padawans or long-time fans and believe us, those are only the beginning of Star Wars content. Consider our best lightsabers guide for 2022 if you want more fighting options. Or you can also see all of our best Lego Star Wars sets and our best Lego Star Wars deals to stock up for Cyber Monday.
Save a stellar 20% on National Geographic 70 computerized telescope
There’s a galactic Cyber Monday deal on that will satisfy your amateur astronomer this holiday season. This National Geographic 70 Computerized Refractor Telescope is 20% off at Kohl’s (opens in new tab) and is simply stellar at $236.79, an incredible $133 discount.
Be sure to use the SHOP20 code at checkout to secure the discount. The computerized telescope will swing among the stars just as you desire, allowing you to look at the moon, some planets, and a clutch of your favorite constellations.
Included in this incredible deal are a tripod and two eyepieces. This telescope is compact, allowing you to bring it around your residence, on your balcony or in your vehicle for stargazing in just the right location.
Check out even more Cyber Monday discounts in our best telescopes, best telescopes for deep space or best telescopes for seeing planets. Entry-level astronomers might enjoy best telescopes for beginners or, for youngsters, best telescopes for kids.
Get 44% off this National Geographic Explorer 114 telescope from Kohl’s
When you apply code SHOP20 at Kohl’s, you can make an impressive saving of over $90 on this reflector from National Geographic’s line of telescopes. The National Geographic Explorer 114 is manufactured by Explore Scientific.
Featuring a 114mm aperture and a focal length of 500mm, this instrument allows the astronomer to get up close to some of the most dazzling deep-sky targets, split double stars with ease, get lost in star clusters and magnify nebulas. The rugged surface of the moon is also stunning through the eyepiece (26mm and 9.7mm Plössls are supplied), along with the planets of our solar system.
Aimed at beginners, the Explorer 114 comes with a simple equatorial mounting system. This enables observers to track chosen targets as the Earth rotates for clear images should astronomers wish to dabble in some basic smartphone astrophotography with this exquisite telescope.
Also included in the package is an adjustable tripod, red dot finderscope, Stellarium computer software and star map, supplying the astronomer with everything they need for a well-equipped observing session.
The Svbony SV550 is now $160 off this Cyber Monday!
If you’re wanting to make short work of creating exquisite deep-sky astrophotos, then look no further than the SV550 APO triplet refractor. This Cyber Monday, it’s a steal from Amazon with $160 off the retail price (opens in new tab).
Thanks to low-dispersion ED glass and correction glass, the Svbony SV550 is able to rid observations and images of chromatic aberration, which can often plague bright night-sky targets. What’s more, the telescope also makes use of an air-spaced triple optical system, which eliminates any blue or purple fringing.
The SV550 is also made of magnesium alloy material for a lightweight design that makes the instrument easy to carry, while a 180mm dovetail plate ensures that this refractor is versatile for fitting to your chosen mount and tripod.
This Lego Star Wars deal gets you the rare, retiring Imperial Probe Droid for its lowest price ever
Save 30% on the list price of this Lego Star Wars Imperial Probe Droid, which is set to retire soon — so grab it while you still can.
This Lego Star Wars set gives owners the chance to relive the epic Galactic Empire encounters in miniature form at home. There’s a transparent segment that gives the appearance of the Imperial droid being suspended over the snowy planet, Hoth. Fortunately for you, it’s not just Amazon giving you this great Black Friday/Cyber Monday deal but you can find the same deal at Target.
The set isn’t too big either, according to the manufacturer the Imperial Probe Droid model will stand 10.5-inches (27 cm) tall, sit 9-inches (24 cm) wide and extends 4 inches (11 cm) deep.
Want a Lego set but aren’t sure which one yet? Take a look at our Cyber Monday Lego deals of 2022.
Save $80 on the Celestron StarSense Explorer DX130 telescope
Save $80 on one of the best telescopes for beginners on the market. Combining a 130mm aperture and 650mm focal length with Celestron’s usual good build quality and sharp optics makes the Celestron StarSense Explorer DX 130AZ great for observing nebulas, galaxies and star clusters.
With a decent 16% Black Friday saving, the telescope also comes with all the accessories you need to get started straight out of the box: two eyepieces (25mm and 10mm), the telescope mount and tripod (preassembled), a StarPointer finderscope, an accessory tray and the StarSense Explorer phone dock for use with smartphones.
Download the StarSense app, place your phone in the dock to align it with the telescope, and you can quickly navigate the night sky without any prior knowledge.
60% Black Friday/Cyber Monday deal on the JOBY GorillaPod 5K Tripod Kit with Rig
The remarkable thing about this particular GorillaPod kit, which is available at Adorama with a massive 60% saving, is that it has arms. You can use it to hold phones, lights, microphones or even a small secondary camera like a GoPro. You can also hold the ‘arms’ while recording a video or selfie.
Joby has made a name for itself over the last few decades as a purveyor of an innovative tripod style that uses articulated ball joints that twist around anything and everything to give you the flexibility to stabilize your camera wherever you may find yourself. That might be railings, lamp posts, benches or anything else you may come across in the urban landscape, but also natural landscape features like rocks and trees, excellent for timelapse photography and astrophotography when you don’t want to carry a larger tripod.
Despite only weighing 0.84kg/1.85lb, the Joby GorillaPod 5K can hold up to — as the name suggests — 5kg/11lbs of kit, which is very impressive for such a small tripod. If you’re off out on a trek or simply don’t have enough space or weight allowance left in your luggage, that is where this tripod comes into its own — when folded, it is a mere 43.18cm long, so it takes up hardly any space. Check out our guides to the best tripods and best travel tripods to discover more.
The new Canon EOS R10 is now $100 cheaper this Black Friday/Cyber Monday
The Canon EOS R10 mirrorless camera was only launched a few months ago but you can already save $100 on the retail price over at Walmart, at just $879, a price we spotted this Black Friday weekend and we’re hoping to see continue over to Cyber Monday.
This entry-level APS-C crop sensor mirrorless camera is fitted with a 24.2 Megapixel CMOS image sensor that captures 4K UHD 30p video footage. When we look at the maximum ISO sensitivity of 32,000 we’re confident it’s strong enough to cope with most casual astrophotographer’s needs as well as more generalist use. It’s suited for portraits, landscapes and a bit of wildlife thanks to the Dual Pixel CMOS AF autofocusing system (that can track people, animals and vehicles) and its 15FPS maximum burst speed for stills photographs.
Take a look at our round-up of the Best mirrorless cameras and Best cameras for photos and videos overall if you want to shop around, though.
Save a massive $760 off this Autel EVO II V2 Pro drone bundle
The Autel Robotics EVO II V2 Pro has an on-board 6K camera to capture large stills images while flying for up to 40 minutes, and in that time, if you can manage it, the drone can be controlled up to a distance of 9km. In the market for a rugged 6K drone? This is probably one of the best drones out there.
It’s not just the drone you get with this excellent $760 saving from Adorama, it’s also bundled with everything you need to get up in the air.
Not sure if this drone is right for you? Be sure to check out our other Black Friday drone deals, we’ve found a brilliant offer on the DJI Mini 2 bundle that’s not to be missed.
These moon lamp gifts will bring holiday cheer for up to 30% off
Light up your astronomy fan’s life with a wide variety of moon lamps, all between 20% and 30% off.
The star deal is the VGAzer Moon Lamp, which is just $71.99 at Amazon and an incredible 20% off. It hovers via magic magnetism and can pivot between three different lighting intensities for soft nightlighting or powerful party vibes.
Alternatively, the well-known brand Mydethun has a 16-color moon lamp for just $23.16 at Amazon, which is 20% off with far more shades. The biggest discount is the Logrotate 16-color lamp, which is only $13.98 at Amazon.
If you’re on the hunt for other space decor, check out Space.com’s Black Friday deals page. There are lots of discount ways to check out the moon for real, too, through our best telescopes, our best binoculars, and the 10 best Black Friday telescope deals we’ve seen so far of 2022.
Bring your next gift into focus for less: The Nikon ProStaff P3 8×42 waterproof binoculars are 15% off
Searching for a somewhat bulky stocking stuffer for your space fan? Binoculars are the perfect choice and this pair is sturdy enough to stand up to nearly anything.
The Nikon ProStaff P3 8×42 waterproof binoculars are 16% off at Best Buy. They’re resistant against water, shock and fog and priced at a bargain $119.95, which is absolute basement-style pricing in astronomy.
You’ll get a deep discount on one of the best brands of astronomy, as Nikon has been around for over 100 years. Included also are fully coated lenses to reduce glare and a generous 42 mm lens and 8x magnification to peer at wildlife or distant stars.
If you want more astronomical options beyond this deal, you can check out our best binoculars of 2022, or fans of this brand can head over to our best Nikon cameras to pair up with the bino set for image captures in the field.
Spy this holiday deal! The Celestron 76mm Signature Series FirstScope bundle is 10% off
Bathe yourself in the moon’s glow with this lunar-adorned version of the Celestron 76mm Signature Series FirstScope.
The Celestron 76mm Signature Series FirstScope is just $64.86 at Amazon, offering an incredibly compact beginner’s telescope along with astronomy software and an illustrated e-book. All this is available for 10% less, so act quickly while stock is available.
Celestron, one of the most recognized brands in astronomy, has a special treat for this telescope: the moon image includes 10 targets you can practice finding in this telescope. It’s tabletop size, easy to fit on a balcony or in a car to get the most out of your observing time. Two eyepieces and a flexible repositioning system allow you to get a fix on different targets in the sky, like galaxies, Saturn’s rings or the Milky Way.
If you’re looking for more beginners’ telescopes before committing to this deal, do check out out our bargain beginner telescopes guide, or the deals in our best telescopes guide.
A perfect stocking stuffer! Mandalorian and Grogu Amazon Fire TV Sticks are up to 39% off
Get your Grogu moves on with this incredible value gift idea for your Star Wars fan.
Amazon’s Fire Stick is available in two shades for Star Wars padawans: iconic Grogu (Baby Yoda) green for just $41.98 (opens in new tab), or 39% off the base price. If you prefer blue after Din Djarin, the Mandalorian himself, you can score one for just $36.98 (opens in new tab) or 37% off the base price.
The Grogu Green Fire TV Stick 4K allows for glorious 4K streaming with a little image of Grogu amid stars and the Star Wars logo text. You can also summon Disney Plus using the included Alexa button. Otherwise, nab your Bounty Blue version that has images of The Mandalorian, Grogu and the logo of the hit series; there’s no 4K on the blue one, but it still has Alexa available.
If you’re not fussy about your stick having a Star Wars flare to it, there are even more options out there for you. Just hyperdrive over to our full roundup of Amazon Fire TV Stick deals for Black Friday. You can also see all of our best Lego Star Wars sets and our best Lego Star Wars deals to pick up more bargain deals for your big fan.
Save 50% on Celestron’s 114AZ-SR beginner’s telescope
If you’re looking for a great beginner’s telescope that won’t break the bank, this Celestron 114AZ-SR telescope is nearly 50% off at Kohl’s (opens in new tab), on sale for $111.99 down from $219.99, and comes with everything you need to start photographing the night sky with your smartphone.
You’ll save $108 on this smartphone-ready telescope, which is a 114mm Newtonian reflector that comes bundled with additional eyepieces and other accessories for get you started on your night sky adventure. Unlike the tabletop FirstScope, which is also on sale for Black Friday, the Celestron 114AZ-SR comes with a full-size tripod that’s lightweight enough to be portable, but stable enough for observing while standing along side.
Like the “SR” in its name suggests, this telescope comes with a mount smartphone adaptor mount to keep your phone’s camera secure against its eyepiece during photo sessions. It also has two Plössl eyepieces at 26mm and 9.7mm sizes with 1.25-inch mounts, and a red dot StarPointer finderscope for targeting and calibration.
The telescope is not computerized, so you’ll have to align it yourself and research what’s up in the night sky when observing, but it does come with Celestron’s SkyPortal smartphone app and the skywatching software Starry Night to help you identify what, when and where to look in the night sky. Celestron is an icon of skywatching hardware, so the instrument comes from a manufacturer with a proven track record of quality hardware.
If this telescope isn’t exactly what you’re looking for this Black Friday, you do have options. You can check out our top 10 Black Friday telescope deals, our guide to the best telescopes around now or just the best telescope deals overall for more.
Lego Robot Inventor Kit is 20% off
If you have a future roboticist at home just looking for new robots to assemble, this Lego Robot Inventor kit may be the gift you’re looking for this Christmas.
Currently on sale for 20% off at Lego.com, at $287.99 down from $359.99, this Lego Robot Inventor Kit will allow kids to build five different basic robot designs and code them to move using an associated Robot Inventor App that uses a visual-based interface to string commands together. It also includes microphone and camera input, so young builders can say “Stop!” if it’s getting out of control. Color and distance sensors allow the robots to react to basic inputs as well.
The set is a more general version of Lego’s amazing Star Wars Droid Commander, which has been retired and is hard to find. Like the Droid Commander set, the Lego Robot Inventor Kit is also due to be retired soon, so this may be among the lowest prices it will be before it’s gone.
A huge Black Friday saving: Celestron Nature DX 8×42 binocular now under $100
Save $70 on the Celestron Nature DX 8×42 binoculars this Black Friday from one of our favorite optics manufacturers in the stargazing game.
Binoculars are a great alternative to stargazing with telescopes because they’re also useful for daytime observing of subjects like birds, wildlife and aviation. Offering a relatively wide field with 8x magnification, the Nature DX 8×42 binoculars make it easy to track moving subjects because they don’t disappear out of view quickly.
Pair that with decent 42mm objective lenses and these are good for general observing, especially when light levels start to drop near twilight. If you’ve been thinking about purchasing a pair of the best binoculars this Black Friday but weren’t sure which ones are right, the generalist Celestron Nature DX 8×42 could be the perfect compromise.
Encalife Star Light Galaxy Projector 42% off
If you’re like us here at Space.com, finding a way to bring space down to Earth is a lifelong pursuit and you can set the scene at home for a great price with this Encalife Star Light Projector deal at Amazon for Black Friday.
Right now, you can save 42% on this Star Light Galaxy Projector and cover your home with an illuminating starry sky and colors. Amazon is offering the star projector for $34.97, a $25 savings off its usual $59.97 price.
We reviewed this star projector earlier this year and were impressed with not just its light show, but its ability to serve as a Bluetooth speaker, as well. If you’re a stickler for science accuracy, you won’t find realistic representations of stars, galaxies and nebulas with this projector, but you will find a capable projector for setting the tone of your room to space for a gaming session, or even just to wind down before bed.
This star projector does have 21 different lighting modes that can be adjusted by buttons on the projector itself or via an external remote control. Like with most Bluetooth speakers, you can cast music to the device by connecting it through a proprietary app and smartphone. It even has a sleep timer so it will switch off after you fall asleep, ensuring you won’t waste power as you drift off to slumber in the final frontier.
Be sure to check out Space.com’s guide to the best star projectors in case this deal isn’t exactly what you’re looking for.
Vaonis Vespera smart telescope: Black Friday savings of $500
Perfect for any beginner telescope enthusiast or the veteran astronomer that wants to avoid the faff of setting up and aligning a traditional telescope and astro camera, the Vaonis Vespera is now $500 off its original price.
It’s an automated, computerized smart telescope that even new users can set up in around five minutes. No knowledge of the night sky is required, simply synchronize with the dedicated smartphone app and start navigating the night sky, slewing to favorite celestial objects in a matter of seconds.
You can then photograph the night sky, or get an enhanced view, using the in-built astro camera — no more need to buy a separate camera or telescope adapter. We loved it during our Vaonis Vespera review, giving it 4.5 out of 5 stars.
$400 off the Nikon Z7 II this Black Friday weekend
Save $400 on the Nikon Z7 II which is an all-round powerhouse full-frame mirrorless camera, and with a $400 discount, it’s the cheapest we’ve ever seen it.
In our Nikon Z7 II review, we awarded it 4.5 out of 5 stars, largely thanks to its excellent full-frame image quality and admirable overall performance. We’re happy to share that this mirrorless camera has been discounted by $400 as part of B&H Photo’s Black Friday deals. It will stay at this price, while stocks last, until Nov 28 at 11:59 EST.
It shoots detailed 45.7MP resolution stills photos, which matches one of the best cameras for astrophotography, the Nikon D850 (which, as it happens, is also on sale with a $900 discount). For video lovers, it also captures 4K UHD 60p footage.
25% off the Nikon ProStaff 3S binoculars for Black Friday
Save 25% on the Nikon ProStaff 3s 10×42 binoculars with this Black Friday binocular deal that is still running this weekend.
A big bargain on these high-quality binoculars, the ProStaff 3S 10×42 binoculars are waterproof up to 1m and you can continue to submerge them for up to 10 minutes, so even if you drop them in a lake you can (theoretically) take your time fishing them out.
These slender, lightweight binoculars weigh just 20.3 oz / 575 g and are nitrogen-purged, which means they’re fogproof — fogging is something astronomers and wildlife spotters hate, as it can stop binocular use at crucial moments when moving between warm and cold environments.
Adjustable eyecups mean anyone can use these binoculars, even if you wear eyeglasses. Whether you want to stargaze using the decent-sized 42mm apertures, or simply birdwatch, wildlife spot, hunt or observe air shows, these quality binoculars from Nikon are now a quarter off the usual price.
44% off the Celestron NexStar 4SE computerized telescope for a limited-time Black Friday telescope deal
Save a huge 44% (nearly half price) on this small, compact computerized Maksutov-Cassegrain telescope, a catadioptric design that blends two optical systems to provide incredible night sky views in a tiny package.
We’ve never seen this iconic orange tube telescope down this cheap and we suspect the deal won’t last long because we can’t see this price anywhere else, so act fast if you want to take advantage of it.
The NexStar 4SE is a fully automated go-to telescope, and because it’s computerized and runs on a motor you can automatically track night sky objects without having to push to them manually. Simply plug in the number of your desired celestial body and let the telescope do the rest. Check out our Celestron NexStar 4SE review here for more info.
Canon RF 15-35mm f/2.8L IS USM lens now reduced to under $2000 this Black Friday
Save $400 on a lens that’s practically asking to take astrophotographs. The Canon RF 15-35mm f/2.8L IS USM lens is an RF-mount (mirrorless) ultra-wide zoom lens from Canon. It’s great for a wide variety of photographic disciplines, but none more perfect than astrophotography.
A fast maximum aperture of f/2.8 is constant throughout the zoom range, which maximizes light input to the camera. L-series quality optics from Canon make the image sharp edge to edge throughout the zoom range, as we found out in our Canon RF 15-35mm f/2.8L IS USM lens review. Taking it off the tripod and going handheld? Don’t worry, this lens has five stops of Image Stabilization (IS) to steady your shots. A quiet Ultra-Sonic Motor (USM) provides fast autofocusing for Canon mirrorless cameras for daytime use, too.
Remote control Star Wars Grogu plush is just $44 on Black Friday
Grogu, or Baby Yoda as most of us still refer to him, remains a fresh face in the Star Wars universe and now you can get an adorable plush of him for less.
This Black Friday, you can get a remote-controlled, soft-bodied Grogu to waddle around your home for just $44.43, a savings of 32%. (Who can resist that face!)
Grogu, an adorable soft-bodied plush, is around 12 inches (30.5 centimeters) tall and has a range of movements made famous in The Mandalorian television series. Via remote control he can tilt his head, pull his ears back, waddle or simply gaze, cutesy-style, at whatever is in front of him (food or living being).
If the most adorable aliens ever aren’t your preference, we still have more deals for you to enjoy. We’ve got guides on Star Wars Lego deals and Black Friday Lego deals for more great savings on Star Wars, Space, and even Marvel Lego sets. More holiday fun comes via our Black Friday deals that all have space on the brain.
Take off $760 with this Autel EVO II V2 Pro drone bundle deal
Lift off with the Autel Robotics EVO II V2 Pro with an impressive $760 discount.
Rated as one of the best drones out there, you can snag the Autel Robotics EVO II V2 Pro drone bundle at Adorama for just $1,739 in their Black Friday sale.
You’ll nab a lot of great footage with the industry-leading 6K camera and can soar in flight for up to 40 minutes. It’s the perfect balance between great footage and a good hang time for scouting out your next filming location.
If you’re looking for something a bit cheaper, check out our other Black Friday drone deals. There’s a fantastic offer on the DJI Mini 2 bundle that is also worth your attention.
Playmobil Star Trek warp speeds into a $150 discount
Beam into an amazing U.S.S. Enterprise Playmobil set for $150 less on Black Friday.
The iconic Star Trek ship is at a rare 32% off for Black Friday, making NCC-1701 an affordable $340.25 on Amazon. Don’t be fooled by imagining that this Playmobil set is just for kids, as this particular set includes a lot of detail that teens and adults will still enjoy.
Included are the legendary crew of NCC-1701 and collector’s-item details that earned good marks in our Playmobil Star Trek USS Enterprise review: crew members Captain Kirk, Spock, Uhura, McCoy, Sulu, Scotty and Chekov. The set comes with a removable roof so you can put the crew on the iconic bridge of NCC-1701.
If you’re less of a Trekkie and more of a Star Wars fan, however, be sure to check out our Star Wars Lego deals as well as our latest Black Friday deals for more gift ideas for this holiday season.
Lego UCS Millennium Falcon deal just got better and is now $180 off!
The Lego UCS Millennium Falcon is more affordable than ever this Black Friday with a markdown of $180.
Usually $849.99, the UCS Millennium Falcon can be yours for just $669.99 at the website Zavvi when you use the discount code BFFALCON at checkout. This Millennium Falcon mega-kit has rarely been on sale, and has become one of the most sought-after kits in the Lego Star Wars collection. If you need Han Solo’s trusty Corellian YT-1300 light freighter in your Lego collection, now’s your chance to finally snag it – at a price sure to make any Wookiee bleat “RRRUUUUURRRR” for joy.
It’s unclear how deep Zaavi’s stock of the Lego UCS Millennium Falcon goes, so don’t let this $180 off deal (opens in new tab) slip away before it jumps to hyperspace!
If the Lego UCS Millennium Falcon kit isn’t for you, be sure to check out our Black Friday Lego deals page for more great savings on Star Wars, Space, and Marvel Lego kits for the special collector/builder in your life.
Hulu + Disney Plus Black Friday Bundle deal
While Hulu’s Black Friday deal is offering the streaming service for just $1.99 a month, Disney just sweetened the put to offer its Disney Plus service for just $2.99 more.
The Hulu + Disney Plus Black Friday Bundle is a combo deal that offers full access to Hulu and Disney Plus for just $4.98 a month. That’s down from the usual $7.99 a month of Hulu alone and throws in access to Disney’s entire catalog of Star Wars films, TV shows and other science fiction titles.
Disney Plus is currently not offering any deals on the service alone, so this Hulu/Disney Plus bundle may be the best chance to score both streaming services at a discount for your first year. NBC’s Peacock streaming service and the Paramount Plus streaming service, the home of all things Star Trek, also have Black Friday deals on now. You can see our full roundup of streaming deals for Black Friday for more.
Is this the Nikon Z6 II’s lowest-ever price? Now just $1696
We don’t think we’ve ever seen the Nikon Z6 II full-frame mirrorless camera this cheap before and, given how popular it is, we’re unlikely to see it drop any further. If you have been waiting to grab a mirrorless camera at a bargain price, then this is it. Save $300 on the Nikon Z6 II right now.
It’s suited to any photographer — even beginners, and is more than capable for astrophotography and is the perfect second camera for a professional photographer. It shoots 24.5MP stills with the FX-Format full-frame BSI CMOS Sensor inside and captures video at 4KUHD 30p with N-log format for full editing flexibility.
While you can shoot this stunning camera at 14FPS (perfect for wildlife and sports) it has a wide ISO range too, between 100-51,200 which makes it suitable for astrophotography and low-light photography and that’s why it features in our guide to the best cameras for astrophotography.
Celestron Travel Scope 70 now under $100
If you’re looking for a budget telescope to get into skywatching without dropping big coin or need a smaller, more portable telescope to travel with, it’s hard to go wrong with the Celestron Travel Scope 70 — a fantastic telescope at a bargain price now 16% off.
Although this refractor telescope is ideal for beginner astronomers and is now under $100, that doesn't mean it's not powerful. The Travel Scope's 70mm optics give excellent views of the moon, and it's packaged with two eyepieces that make it useful for stargazing or even daytime observation of nature and wildlife. Want to shop around for the best Black Friday telescope deals? Have a look at our page for the 10 best Black Friday telescope deals we've seen so far.
Save 30% on the Lego Star Wars Razor Crest, now under $100
The Mandalorian is one of Star Wars’ most popular TV spinoffs, so it makes sense that the main ship from the show, The Razor Crest, gets the full Lego treatment.
Right now, you can get 30% off the Lego Star Wars Razor Crest at Amazon, dropping the price down to just $97.99. That's a fantastic discount on a wonderful Lego set – we actually reviewed the Razor Crest late last year and we really enjoyed it.
This set includes five minifigs (including an adorable Grogu), as well as an opening cockpit and cargo bay. It even fires projectiles, making it an easy pick as one of our favorite Lego Star Wars deals so far.
Sony A7R IVA camera bundle was $3498, now $2998 at Walmart
Save a magnificent $500 on this Sony A7R IVA full-frame mirrorless camera and accessory bundle, which Walmart is offering for Black Friday. Not only does it come with the mammoth 61MP mirrorless monster that can also shoot 4K UHD 30p video, but it has a whole host of camera accessories to boot. This is the best Sony mirrorless camera Black Friday deal we've seen so far.
The kit is shown as including: a Koah flight case, a Sony 64GB V60 SDXC memory card, two spare Koah batteries and a Koah double battery charger, a suite of Corel photo and video editing software, a Zeiss cleaning kit and a camera battery grip. Every other A7R IV or IVA deal we've seen is either more expensive or doesn't come with the amount of extras that this Walmart bundle does, so we think this is the time to invest in Sony mirrorless if you're going to.
Get Peacock for just $1 per month for a whole year
NBC’s streaming service is great for the whole family thanks to new movie releases like Nope or Minions: The Rise of Gru, but it’s even better for sci-fi fans. That’s because it’s the home of Battlestar Galactica, Heroes, and recent Alan Tudyk comedy Resident Alien.
Use code ‘SAVEBIG’ to get Peacock for only $0.99/mo for 12 months. This offer is only available for new subscribers.
This fantastic deal saves users 80%, making an annual subscription just $12 (or $1 per month). Factor in a huge back catalog of movies and TV, and that’s a small price to pay.
Save over $100 on the Celestron Inspire 100AZ telescope
Save 23% on this beginner-friendly refractor from Celestron. We reviewed the Celestron Inspire 100AZ earlier this year and found that, weather permitting, we could get impressive views of the moon and Saturn. We were even able to spot the Andromeda Galaxy (M31) and some other bright star clusters.
We were so impressed with it that we’ve named the Celestron Inspire 100AZ the best telescope for beginners and we’ve included it in our round-up of the best telescopes overall.
A refractor with an Alt-Azimuth mount, this 100mm aperture, 600mm focal length telescope is about as good as it gets for beginner astronomers or those that want to get into stargazing without breaking the bank. Now with over $100 off for Black Friday it’s never been more affordable.
Save over $140 on the Svbony SV503 astrophotography telescope
With its 102mm aperture and f/7 focal ratio, astrophotographers can enjoy crisp and clear images of their favorite night sky targets with the Svbony SV503. What's more, this Black Friday you can snap up this exquisite instrument at 20% less than the retail price.
The Svbony SV503's extra-low dispersion (ED) glass reduces pesky color fringing, while the dual rack-and-pinion focuser can be fine-tuned to bring planets, galaxies, nebulas and the rugged surface of the moon into sharp focus.
While this refractor doesn’t come with a tripod or mount, the Svbony SV503 offers a metal hoop dovetail, focuser wheel, lens cover and tube ring, allowing astronomers to accessorize their way for optimum results.
Celestron SkyMaster 25×100 binocular is now 22% off
The largest binocular of Celestron's SkyMaster range, the 25×100 binocular ensures superb sharp focus across the field of view — and now you can enjoy over $100 off crystal clear sights of a selection of targets, from the moon to deep-sky objects such as the Orion Nebula (Messier 42). This binocular also provides excellent terrestrial views during the day.
Featuring high-quality BAK-4 prisms and multi-coated optics for excellent contrast, the SkyMaster’s 100mm objective lens and 4mm exit pupil allow your eyes to collect light in a variety of low-light and long-range conditions. The elliptical shape of the Andromeda Galaxy (Messier 31) can be picked up with excellent clarity, while the member stars of the Pleiades (Messier 45) sparkle like diamonds when viewed through the optical system. If you prefer to stay local, the moon’s rugged surface can be brought into breathtaking focus, while the rings of Saturn and belts of Jupiter are magnified to perfection.
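If you want to sanity-check the exit pupil figure quoted above, the standard relation is exit pupil = objective aperture ÷ magnification. The quick Python sketch below is our own illustrative arithmetic, not anything from Celestron's spec sheet, and it confirms the 25×100 configuration works out to 4mm:

# exit pupil (mm) = objective aperture (mm) / magnification
aperture_mm = 100        # SkyMaster 25x100 objective diameter
magnification = 25
print(aperture_mm / magnification)   # 4.0 mm, matching the quoted spec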
Weighing in at 8.75 lbs (3.97 kg), the Celestron SkyMaster 25×100 is a touch on the heavy side, so for stable views we recommend purchasing a suitable tripod (check out our best tripods) to avoid shaky sights of night-sky targets. The binocular is supplied with an integrated tripod adapter and deluxe carry case.
60% off the BlissLights Sky Lite Star Projector right now
Completely transform your bedroom or living room for less than $20 with this perfect holiday gift for space lovers. Save a huge 60% on the original price of the BlissLights Sky Lite over at Walmart with this easy-to-use, button-controlled star projector, which even has a six-hour timer for those who like to fall asleep to the ambient lighting.
We reviewed the BlissLights Sky Lite 2.0 earlier this year and gave it 3.5/5 stars because it was easy to use and gave decent ambient lighting, so we’re confident that the Sky Lite is a bargain star projector in this Black Friday deal.
Save 21% on the Celestron AstroMaster 70AZ telescope
Save 21% on the Celestron AstroMaster 70AZ refractor telescope which comes with an Alt-Az mount that is beginner-friendly. The telescope features a 70mm aperture and a 900mm focal length to provide good views of the moon and stars.
Celestron is known for its excellent build quality, and the telescope comes with two eyepieces (20mm and 10mm), a fully collapsible lightweight tripod, a red dot finderscope and free access to the Starry Night software, which is packed with information about 36,000 night sky objects to help newcomers learn the night sky. All this for under $150 makes this a Black Friday telescope deal worth having.
Sony A7R III is now discounted by $500
Save more than 20% in this Black Friday camera deal on the Sony A7R III. The mirrorless camera, known for its superb full-frame 42.4MP CMOS image sensor, is now $500 off in this Amazon deal.
It’s perfect for astrophotography due to its extended ISO range (50-102,400) but it lends itself to many styles of photographers and videographers with a host of useful features like EyeAF autofocusing for sharp portraits, and 4K HDR video capture.
Editing is easy as well because the A7R III has up to 15 stops of dynamic range to retain detail in the brightest highlights and blackest shadows, meaning image files (or video) are flexible when editing in Lightroom or Photoshop.
Save 15% on Celestron’s iconic NexStar 8SE telescope
Known the world over as one of the most iconic lines of telescopes, the much-loved Celestron NexStar 8SE is now on sale with $200 off for Black Friday over at Amazon and Adorama.
The line has been going since the 1970s but Celestron’s 8SE, the largest in the NexStar line-up, is truly exceptional. A Schmidt-Cassegrain design, this catadioptric telescope takes advantage of a hybrid technology between refractor and reflector telescope designs to provide a massive 2032 mm (80-inch) focal length and huge 203.2mm (8-inch) aperture in a tiny package.
Suitable for all kinds of astronomers, it may not be immediately beginner-friendly, but it has such breadth of use that, when paired with one of the best eyepieces, you can observe the moon, stars, planets, nebulas and more in exquisite detail.
Ultra-sharp and incredibly bright, the NexStar 8SE ships with a red dot finderscope and a 25mm eyepiece, which you can upgrade as and when you're ready to take astronomy to the next level.
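For readers new to those numbers, the focal ratio and the magnification you get from the bundled eyepiece follow from two standard formulas. The quick check below is our own arithmetic, offered only as an illustration of how the specs relate:

focal_length_mm = 2032
aperture_mm = 203.2
eyepiece_mm = 25                                # the supplied 25mm eyepiece
print(focal_length_mm / aperture_mm)            # 10.0 -> an f/10 focal ratio
print(round(focal_length_mm / eyepiece_mm))     # ~81x with the stock eyepiece

Swapping in a shorter eyepiece raises the magnification accordingly, which is why upgrading eyepieces is the usual next step with this scope.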
20% off the Celestron PowerSeeker 70 AZ refractor telescope
Now under $100 the Celestron PowerSeeker 70 AZ refractor telescope is 20% off for Black Friday over on Amazon. This refractor telescope is ideally suited to beginners who want to view the lunar surface and nebulae. With a 70mm aperture, the telescope has a focal length of 700mm and ships with all the accessories you need to get started quickly.
Two eyepieces (4mm and 20mm) pair with a whopping 3x Barlow lens to magnify your night sky objects. An erect image diagonal means no more cricked necks trying to peer through the eyepiece. A 5×24 finderscope helps you locate night sky objects easily before fine-tuning through the eyepieces with more precision.
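To put those accessories in context, magnification is simply the telescope's focal length divided by the eyepiece focal length, multiplied by any Barlow factor. The rough sketch below is our own back-of-envelope arithmetic rather than a Celestron claim:

focal_length_mm = 700
barlow = 3
for eyepiece_mm in (20, 4):
    base = focal_length_mm / eyepiece_mm
    print(eyepiece_mm, base, base * barlow)
# 20mm -> 35x (105x with the Barlow); 4mm -> 175x (525x with the Barlow).
# A common rule of thumb caps useful magnification near 2x the aperture in mm
# (roughly 140x for a 70mm scope), so expect the highest combinations to look
# dim and soft in practice.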
Celestron also bundles free access to the Starry Night software that gives information on 36,000 celestial objects for the uninitiated.
73% discount on this Astronaut star projector
Save 73% on the Astronaut Starry Sky Star Projector. The projector is an astronaut that can be posed in different positions and projects stars through its visor. It has eight built-in nebula effects, and the projector can be set to a timer for those who use it to fall asleep.
The ideal space gift, save a massive 73% in this Black Friday star projector deal which takes the price down to just $7.58. We said in our Astronaut space star projector review that anything under $35 is a bargain, so $7.58 is insane. It should suit any wall or ceiling thanks to the adjustable projection angle and we love it as an early holiday season gift.
What we like about the projector is its surprisingly good build quality, the fact it’s packed with impressive and effective lighting and its general appeal to all space fans, whether young or older.
Save 50% on a year of Paramount Plus – the home of Star Trek
Paramount’s Black Friday deal for its Paramount Plus streaming service is one of the best streaming deals we’ve found so far, offering a huge amount of content at a 50% discount. That equates to around $2 per month, and there are plenty of sci-fi shows to be found for your money.
Star Trek is the big one, with the entirety of the TV show’s 860 episodes to watch (as well as the movies), but there are newer sci-fi shows like videogame tie-in Halo, classic mystery thriller Twin Peaks, and The Twilight Zone reboot too.
Get 25% off these Celestron SkyMaster 15×70 binoculars
The Celestron SkyMaster 15×70 binoculars are currently discounted by 25%, making them an excellent buy for budding skywatchers who want to view larger deep-sky objects.
Since our initial post about this deal yesterday, they have been reduced by a couple more dollars, but we expect this to be the lowest they will go.
You can read our Celestron SkyMaster Pro 15×70 binoculars review to get a feel for the version of the binos that are on sale, but note that these are slightly less rugged and not waterproof, unlike the pro version (which cost more than double). If you’re not planning on using your binos in inclement weather, for the price, these binoculars will see you right.
Because of the high magnification, you should invest in one of the best tripods so you don’t have to worry about wobble spoiling your views.
Once set up in the right conditions, you can see the Andromeda Galaxy and the bright Messier galaxies and nebulas based on the list drawn up by Charles Messier.
All-in-all, these binoculars will give users an enjoyable star and galaxy-gazing experience at a very reasonable price.
Get 50% off the HP Reverb G2 VR headset
HP Reverb G2 VR Headset: Was $599, now $299 at HP
HP’s highly-rated VR headset, the Reverb G2, is one of the best VR headsets around. Better yet, the manufacturer has cut the price by 50%. Right now, you can get the HP Reverb G2 headset for just $299 at HP (opens in new tab), down from the usual price of $599.
Unlike the Meta Quest 2, you will need a PC to connect to, but it offers a 2160×2160 resolution in each eye and excellent audio. For more, be sure to check out our HP Reverb G2 review where we awarded it 4-stars and noted how easy it is to set up.
That makes it ideal for a newcomer to VR, or to an experienced user looking to step up from the Meta Quest 2 or a PlayStation VR. With the G2’s lengthy, six-meter cable, it’s less restrictive than many wired headsets, too, and it’s comfortable for longer periods of time – perfect if you’re up to your eyeballs in No Man’s Sky VR.
Save a stellar 50% on the Lego Galaxy Explorer set
We love a good Lego kit here at Space. In fact, the only thing we love more is a good Lego kit at half price which is exactly what we've found over at Walmart. You can get the Lego Galaxy Explorer set for just $50, reduced down from $100.
The Galaxy Explorer is a modern take on the classic 1979 Lego set of the same name, and it’s an impressive mash-up of retro-styling and modern design. It comes with 4 minifigures – 2 red and 2 white astronauts, and has a total of 1254 pieces, making it an involved, but not massive build.
We actually checked out the set earlier this year and gave it a perfect score of 5 stars (check out our Lego Galaxy Explorer review to see our full thoughts). We even compared it to the original model as our reviewer still had theirs.
We’re covering all the best Black Friday Lego deals on our main hub, so head over there for more savings on Star Wars, Space, and Marvel themed kits.
Hulu’s Black Friday deal is just $1.99 a month
There is no shortage of science fiction on streaming services right now, and nowhere is that more true than at Hulu. Right now you can get a year of Hulu for just $1.99 a month, a 75% discount on its regular $7.99 fee.
The Black Friday Hulu offer is not as deep a discount as the streaming service’s 2021 deal, which offered a year’s subscription at just 99 cents a month, but it’s still a bargain for fans of The Orville, Rick and Morty and other sci-fi shows that call Hulu home. After all, where else are you going to see the new Hulu original “Prey,” which is the latest entry in the Predator franchise?
Hulu is making this deal available primarily to new subscribers, but if you are a lapsed subscriber – and you have not used Hulu in the last month – you may be able to qualify for the offer.
If you're looking to save on a streaming service but Hulu isn't your cup of tea, you're in luck. Our Black Friday streaming deals page has a rundown of the offers available now.
Save $200 on a DJI Mini 2 drone bundle
If you’re just starting out on your drone journey, this early Black Friday deal is a great opportunity to land a beginner-friendly drone and save hundreds of dollars at the same time.
DJI is known for its quality drones, and the DJI Mini 2 is one of our favorite drones for beginners and experts alike. At $479, this DJI Mini 2 drone bundle is $200 off at Adorama and is the best price we've seen for Black Friday this year. The bundle is on sale for 29% off and comes with the drone, as well as a microSD card, carrying case and several other extras for your aerial or sky photography needs.
As we noted in our DJI Mini 2 review, this drone is small enough (it weighs 249 grams) that it’s portable and lightweight, and also does not require you to register it for casual flying. You will need to check your local drone regulations, though. It carries a 12 MP camera for both still images and video and has about 23 minutes of flight time (according to our tests) before it returns home on a 25% battery life mark.
If you’re looking for more affordable drone ideas, check out our best drone deals and our beginners guide to drones and best drones features can help you pick the right machine if you need more tips.
Meta Quest 2 VR headsets are $70 off w/ free games
A good VR headset can transform a space experience on your computer into an immersive trip across the final frontier and this Meta Quest 2 deal from Amazon has the right stuff.
You can save up to $70 off a Meta Quest 2 VR headset with a 256 GB capacity, the highest storage capacity available now, and get two free games at the same time. This Black Friday bundle comes with Resident Evil 4 VR and Beat Saber for free, and we've got a list of the best free space VR games to choose from once you're set up.
If 256 GB is a bit much, you can still save $50 on the Meta Quest 2 VR headset with 128 GB, which also comes with the two free games. Both deals include the Quest 2 headset (it was previously called the Oculus Quest 2, if that sounds familiar), and it's a standalone device. You won't need a game console or PC to pair it with, but it does link to a PC if you'd like to try a PC VR title or two.
The Meta Quest 2 includes features to keep you from bumping into obstacles, two Touch controllers and cameras to help orient yourself in a room. Its reviews on Amazon are overwhelmingly positive, and we were impressed when we tried it, too. Check out our Meta Quest 2 review for an in-depth look at the VR gear.
If the Meta Quest 2 isn’t exactly what you’re looking for, check out our other VR headset deals and our guide to the best VR headsets around.
Save 21% on the Celestron AstroMaster 70AZ telescope
The Celestron AstroMaster 70AZ refractor telescope is currently at a discount of over 20%, which makes it a perfect gift for beginner astronomers this Black Friday. It features a 70mm aperture and a powerful 900mm focal length that takes you in for detailed lunar views. Not only that, but thanks to the fully coated objective lens it's ideal for land-based viewing as well: wildlife, landscapes and more are suitable subjects during the day.
At night though, the AstroMaster 70AZ benefits from fully coated optics to reduce optical aberrations associated with astronomy. The telescope also ships with everything you need to get started stargazing: a full-height tripod, two eyepieces, and a red dot finderscope to find your celestial objects before refining positioning through the eyepiece. During our Celestron AstroMaster 70AZ review we noted that the achromatic refractor avoids distracting ‘false color’ and is already good value, which is even more evident now given the discount.
Suitable for adults but also easy enough to set up for younger astronomers and kids, the refractor weighs just 11 lbs (5 kg), so taking it out to dark sky locations, or just to get away from the city lights, is simple. The telescope also comes with a simple Alt-Az mount with a smooth panning handle to locate night sky objects quickly.
It requires no tools to set up and is one of the simplest telescopes in Celestron's refractor range. Read our guide to the best telescopes if you want to shop around. Alternatively, check out our round-ups of the best telescopes for beginners and best telescopes for kids, and snap up quick deals with budget telescopes under $500.
Lego’s UCS Millennium Falcon is $100 off
The Millennium Falcon is an icon for science fiction fans around the world and when it comes to models, there is no higher crown jewel than the Lego Star Wars UCS Millennium Falcon set, which is on sale for $749.99, a full $100 off, at Zavvi this week. You'll have to use the code SWFALCON at checkout to get the deal.
Released in 2017, the massive UCS Millennium Falcon set is part of Lego's Ultimate Collector Series. It is a huge building set with 7,541 pieces and measures 22 inches wide, 33 inches long and 8 inches tall (about 56 centimeters wide, 84 cm long and 20 cm tall). It also weighs a whopping 37 pounds (17 kilograms), but in our review of the UCS Millennium Falcon, my colleague Jordan Miller found it to be sturdy enough to move around once built without fear of it crumbling apart.
This set does not go on sale often, and while last year Amazon did host a special lightning sale during Black Friday, the set sold out quickly and it is not currently expected to be back on sale at Amazon in 2022. We’re not sure how many sets Zavvi has available, so if this set has been on your Padawan’s gift list, you may want to act fast.
You can also see all of our best Lego Star Wars sets and our best Lego Star Wars deals to prepare for Black Friday. Our best Lego space deals has more familiar rocket and other set deals from a galaxy closer to home.
Save $70 on the Celestron AstroMaster 114 EQ telescope
We first saw the Celestron AstroMaster 114 EQ telescope at $70 off on Amazon back in October during Amazon Prime Day but the deal is now back for Black Friday.
There’s already a $30 discount on the AstroMaster 114 EQ but save a further $40 off with the coupon (tick the box) and you’ll see this $70 saving at checkout. We’ve rated it as one of our best telescope deals currently available. However, if you want to see what else is available take a look at our guide to the best telescopes in 2022.
This is a good telescope for beginners and those who don’t have much experience with skywatching. It’s easy to use and comes packed with accessories including two eyepieces (20mm and 10mm), a full-height tripod and a StarPointer red dot finderscope. It also ships with software to support your stargazing experience. If you want to discover other skywatching gear and have a keen eye for deals, be sure to check out our guides to the best Celestron telescope and binocular deals, best telescopes for beginners and budget telescopes under $500.
Nikon D850 camera now almost $900 off
This huge discount of almost $900 on the Nikon D850 is the biggest saving we've seen on what we've rated as one of the best cameras for photos and videos and the best camera for astrophotography. Dropping it from $2,996.95 down to just $2,104.95, Walmart is currently offering the best deal on this DSLR camera.
Although a few years old now, it still competes with modern mirrorless cameras. We gave it 4.5/5 stars in our Nikon D850 review. It shoots stills at a whopping 45.4MP resolution and can capture 4K UHD 30p video, which lends itself well to any photographer and videographer except those who require the latest 8K video resolution.
A superb generalist camera, the Nikon D850 is amazingly good at everything. Astrophotography, sports, wildlife, portraiture, landscapes — you name it, the D850 can handle it.
Built like a tank and designed for professionals to throw around all day, it's fully weather sealed, so taking it out in the rain or snow won't faze it.
B&H is also offering a $500 discount on the Nikon D850 and Amazon is currently matching that with their $500 Nikon D850 deal but we recommend you grab it from Walmart while stocks last to almost double your savings.
Hexeum night vision binoculars 53% off
Hexeum may not be a household name when it comes to high-quality optics, but this deal caught the eye of our optics team because it’s simply too good to resist for people in love with the outdoors.
These Hexeum night vision binoculars are on sale for $139.98 at Amazon, down from $298, and come with a 3x magnification and 4x digital zoom. While we haven't been able to test them hands-on, their specifications are impressive enough to make them worth the risk when they're at this price.
Amazon does seem to like these night vision binoculars as we saw a similar deal during Amazon Prime Day this year. Check out our full analysis of this Hexeum night vision binoculars deal here for more.
If you’d rather shop around for other options, check out our guide to the best night vision binoculars. We also have a best binoculars guide for more traditional optics and you can save more with our best binocular deals. We’ve also rounded up some of the best compact binoculars and for children, we have the best binoculars for kids.
Save over £250 on the Celestron NexStar 4SE
With its iconic orange tube, the Celestron NexStar 4SE is a steal this Black Friday, with a whopping 33% off – that's a discount of over £250 on the retail price!
The Celestron NexStar line of telescopes offer an exquisite GoTo capability, which ensures easy, seamless navigation of the night sky. At the touch of a button, beginners have the universe at their fingertips, while seasoned observers looking for a fuss free tour of a selection of targets can enjoy crystal clear views of the planets, rugged surface of the moon and bright deep-sky gems such as the Orion Nebula (Messier 42).
While the smallest in aperture of the NexStar suite, this computerized instrument offers excellent sights of a good proportion of the 40,000+ celestial objects stowed away in its database. It also comes fully equipped, complete with a sturdy steel tripod, a Star Pointer red dot finderscope, The Sky Level 1 astronomy software, NexRemote telescope control software and a 25mm eyepiece, among other features that make this Maksutov-Cassegrain ideal for all ages and observing levels.
If you’re looking for a larger aperture, then great news — the Celestron NexStar 5SE and Celestron NexStar 8SE have also been reduced this Black Friday, saving you $150 (opens in new tab) or a little over $200 (opens in new tab).
Affinity Photo image editing software is now $20 off in this Black Friday/Cyber Monday deal.
Save almost 30% on this standalone image editing software and never pay Adobe a subscription again. One of the most powerful image editors available, Serif's Affinity Photo is bought with a single one-off payment and is yours to keep forever. No subscription!
Tempted to plunge into Adobe Photoshop or Lightroom but not sure about the monthly ongoing costs or high annual price? Serif Affinity Photo 1.10 may be the right image editing software for you. Full of features, it can be used in the same way as Photoshop and will be more than ample for almost all photographers.
Non-destructive layer-based editing, RAW file processing and a full suite of photo editing tools are the mainstays of Affinity Photo but it doesn’t stop there. With a dedicated brush and engine library, it’s also useful for content creators that need to design graphics or want to manipulate photos and images for web and print use.
In our Serif Affinity Photo review the powerful image editing software got 4.5 out of 5 stars and we were impressed with its fast processing speeds and its flexibility when editing RAW files.
Image editors will be able to save time by accessing the main photo editing tools such as automatic levels, color and contrast adjustments, or white balance correction. Layering is possible to create composite images (ideal for astrophotography), and one of its strongest editing tools is the High Dynamic Range (HDR) feature, which in other photo editing software can be a little too harsh, giving photos an unrealistic finish. Advanced controls like Tone Mapping, Compression, Contrast, Exposure and Saturation/Vibrance sliders allow fine-tuned adjustments, perfect for landscape photos and astrophotos with landscapes in the foreground.
Be sure to check out Space.com’s Black Friday deals page, or our guide to the Best cameras for photos and videos, and Best cameras for astrophotography.
We recently had the chance to test the SVBONY SV305 Pro camera for planetary astrophotography and guiding, which incorporates some improvements over the SV305, such as a USB 3.0 port that allows a higher frame rate.
In addition to the camera, the SV305 Pro box includes a 2 m USB 3.0 cable, an ST4 cable, a 1.25" M28.5*0.6 extension ring, a C-type filter adapter, a 1.25" protective cap, a cleaning cloth, a printed manual and the software installation CD (quite outdated, so I recommend downloading the latest version from the SVBONY website).
Features of the SV305 Pro
This color camera from SVBONY uses the Sony IMX290 sensor, which has produced excellent results in planetary cameras. It has 2.9-micron pixels and a 6.5 mm diagonal, giving a resolution of 1920×1080 pixels (2 megapixels) with a 10/12-bit ADC that translates into 8/12-bit output. The sensor is protected from the elements by a protective window with no UV or IR cut, so an additional filter is recommended to block those bands when shooting in color. SVBONY's UV/IR-cut filter has given us very good results in the past and has amply proven its worth for this purpose.
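As a quick consistency check on those sensor figures, the diagonal follows from the pixel pitch and resolution. The calculation below is our own, not SVBONY's, and it lands close to the quoted value:

import math
pixels_x, pixels_y = 1920, 1080
pixel_pitch_um = 2.9
diagonal_mm = math.hypot(pixels_x, pixels_y) * pixel_pitch_um / 1000
print(round(diagonal_mm, 2))   # ~6.39 mm, in line with the roughly 6.5 mm quoted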
It is a sensitive, fast sensor that is best exploited with an SSD in your computer, and it demands a high transfer rate that is only achievable over USB 3.0. At full resolution it delivers up to 130 fps, while with a 320×240 crop it reaches up to 500 fps.
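The USB 3.0 requirement becomes obvious if you estimate the raw data rate at full resolution and 130 fps. The figures below are a rough, uncompressed estimate assuming 12-bit samples; real transfers may pack 12-bit data into 16-bit words, which would be higher still:

width, height, bits_per_pixel, fps = 1920, 1080, 12, 130
gbit_per_s = width * height * bits_per_pixel * fps / 1e9
print(round(gbit_per_s, 2))   # ~3.23 Gbit/s: far beyond USB 2.0's 480 Mbit/s
                              # ceiling, but comfortably within USB 3.0's 5 Gbit/s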
It has a 128 MB memory buffer, somewhat short of the 256 MB offered by competitors using the same chipset.
After several days of waiting due to bad weather (autumn is a poor season for astronomy), a window finally opened for a test on a Friday evening. Although the sky was clear, the recent passage of a front that had brought rain left some ambient humidity, and with the arrival of a new cold front, temperatures dropped to 3°C. There was also some jet stream over the peninsula, so it would be a difficult night to get the most out of the equipment used: a Celestron SC8" on a SkyWatcher AZ-EQ6 mount. In the optical train we also used the Auriga focuser, a ZWO ADC (Svbony has one that is exactly the same, at least cosmetically) and the Svbony UV/IR-cut filter together with a Baader 2x Barlow.
We had to wait more than an hour and a half for the tube to acclimatize and for the annoying "thermal plumes" to disappear, but even with the tube acclimatized the image of Jupiter was difficult to focus because of the strong atmospheric turbulence.
I captured several videos of Jupiter in which I could confirm that the camera delivered a good frame rate. It is a pity the atmospheric turbulence prevented a sharper image.
I waited a couple more hours but the conditions did not improve, so before ending the session I decided to point at Mars so as not to leave empty-handed, knowing I would not get good results.
Again, a shame not to achieve better resolution due to the atmospheric instability, because otherwise more detail of Mare Sirenum and Olympus Mons could have been seen.
Overall I liked the camera a lot; it worked correctly from the first moment (I had previously been sent an SV305 that had technical problems and did not work). It is quite affordably priced, although that shows in the finish, with more plastic than in competitors' models. It is a somewhat bulky camera for its weight, which can be a drawback if you use it as a guide camera.
We ran the tests with FireCapture, although the camera also works with SharpCap and other compatible programs. The response of the gain and color controls is in line with other similar camera models.
Overall, this is a very suitable camera for planetary astrophotography if you want to start doing "serious" work without spending much money. It has nothing to do with the SV205 model, which is aimed at beginners taking their first steps; here you can really squeeze a sensor with excellent capabilities.
‘3 Legged Thing’ has been a respected UK-based tripod manufacturer since 2008. It revolutionized tripods with color, and beautifully engineered components and introduced the first travel tripod in the world that could extend to over 2m tall. The prototype ‘Brian’ features in our best tripods guide thanks to its excellent build quality and portability.
Corey, which we have here with an $80 discount, is being offered for a generous $119.99. It is slightly smaller but a little heavier when fully extended than Brian, which is why Brian trumped Corey to make it to our 'best of' list. But if a height of 59″/1.5m is tall enough for you — why wouldn't it be as an astrophotographer — and you can deal with the 0.4 lbs of extra weight, the current 40% discount on Amazon means you are getting an excellent deal.
The original 3 Legged Thing Punks Corey has been superseded by the 3 Legged Thing Punks Corey 2.0, which costs almost double, despite having a 10% discount today.
That said, despite the upgrades of the more recent model, the 3 Legged Thing Punks Corey is not a tripod to be overlooked; don't forget, it can fold down to just 34 cm long!
Portable/travel tripods can often feel flimsy and plasticky. Corey is anything but. The magnesium alloy feels solid and rugged; it is built to last. Not to mention, everything about it is satisfyingly tactile. The ball head, which, again, can feel cheap and loose on some entry-level models, moves beautifully and feels like it belongs on a much more premium product, as does the smooth panning and panning lock.
The 3 Legged Thing Punks Corey has a load-to-weight ratio of 9:1 (meaning it can support loads up to 9 times its weight). With a 30 lb/14 kg payload, it is one of the strongest tripods in its class and should be able to comfortably support your best astrophotography camera with ease.
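If you are curious what that 9:1 ratio implies about the tripod itself, dividing the rated payload by the ratio gives its approximate own weight. This is our arithmetic, not a figure from 3 Legged Thing's spec sheet:

payload_kg = 14
load_to_weight_ratio = 9
print(round(payload_kg / load_to_weight_ratio, 2))   # ~1.56 kg of tripod holding a 14 kg load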
Finally, as with all 3 Legged Things products, the Corey comes with a five-year global warranty for extra peace of mind.
Be sure to check out Space.com’s Black Friday deals page, or our guide to the Best Tripods.
In this detailed technical article, I compare six AI-based noise reduction programs for the demands of astrophotography. Some can work wonders. Others can ruin your image.
Over the last two years, we have seen a spate of specialized programs introduced for removing digital noise from photos. The new generation of programs uses artificial intelligence (AI), AKA machine learning, trained on thousands of images to better distinguish unwanted noise from desirable image content.
At least that’s the promise – and for noisy but normal daytime images they do work very well.
But in astrophotography, our main subjects – stars – can look a lot like specks of pixel-level noise. How well can each program reduce noise without eliminating stars or wanted details, or introducing odd artifacts, making images worse?
To find out, I tested six of the new AI-based programs on real-world – or rather “real-sky” – astrophotos. Does one program stand out from the rest for astrophotography?
Note: All the images are full-resolution JPGs you can tap or click on to download for detailed inspection.
The new AI-trained noise reduction programs can indeed eliminate noise better than older non-AI programs while leaving fine details untouched or even sharpening them.
Of the group tested, the winner for use on just star-filled images is a specialized program for astrophotography, NoiseXTerminator from RC-Astro.
For nightscapes and other images, Topaz DeNoise AI performed well, better than it did in earlier versions that left lots of patchy artifacts, something AI programs can be prone to.
While ON1’s new NoNoise AI 2023 performed fine, it proved slightly worse in some cases than its earlier 2022 version. Its new sharpening routine needs work.
Other new programs, notably Topaz Photo AI and Luminar’s Noiseless AI, also need improvement before they are ready to be used for the rigors of astrophotography.
For reasons explained below, I would not recommend DxO’s PureRAW2.
As described below, while some of the programs can be used as stand-alone applications, I tested them all as plug-ins for Photoshop, applying each as a smart filter applied to a developed raw file brought into Photoshop as a Camera Raw smart object.
Most of these programs state that better results might be obtainable by using the stand-alone app on original raw files. But for my personal workflow I prefer to develop the raw files with Adobe Camera Raw, then open those into Photoshop for stacking and layering, applying any further noise reduction or sharpening as non-destructive smart filters.
Many astrophotographers also choose to stack unedited original images with specialized stacking software, then apply further noise reduction and editing later in the workflow. So my workflow and test procedures reflect that.
However, the exception is DxO’s PureRAW2. It can work only on raw files as a stand-alone app, or as a plug-in from Adobe Lightroom. It does not work as a Photoshop plug-in. I tested PureRAW2 by dropping raw Canon .CR3 files onto the app, then exporting the results as raw DNG files, but with the same settings applied as with the other raw files. For the nightscape and wide-field images taken with lenses in DxO’s extensive database, I used PureRAW’s lens corrections, not Adobe’s.
As shown above, I chose three representative images:
A nightscape with star trails and a detailed foreground, at ISO 1600.
A wide-field deep-sky image at ISO 1600 with an 85mm lens, with very tiny stars.
A close-up deep-sky image taken with a telescope and at a high ISO of 3200, showing thermal noise hot pixels.
Each is a single image, not a stack of multiple images.
Before applying the noise reduction, the raw files received just basic color corrections and a contrast boost to emphasize noise all the more.
In the test results for the three images, I show the original raw image, plus a version with noise reduction and sharpening applied using Adobe Camera Raw’s own sliders, with luminance noise at 40, color noise at 25, and sharpening at 25.
I use this as a base comparison, as it has been the noise reduction I have long applied to images. However, ACR’s routine (also found in Adobe Lightroom) has not changed in years. It is good, but it is not AI.
The new smart AI programs should improve upon this. But do they?
I have refrained from providing prices and explaining buying options, as frankly, some can be complex!
For those details and for trial copies, go to the software’s website by clicking on the link in the header product names below.
All programs are available for Windows and macOS. I tested the latter versions.
I have not provided tutorials on how to use the software; I have just reported on their results. For troubleshooting their use, please consult the software company in question.
ON1’s main product is the Lightroom/Photoshop alternative program called ON1 Photo RAW, which is updated annually to major new versions. It has full cataloging options like Lightroom and image layering like Photoshop. Its Edit module contains the NoNoise AI routine. But NoNoise AI can be purchased as a stand-alone app that also installs as a plug-in for Lightroom and Photoshop. It’s what I tested here. The latest 2023 version of NoNoise AI added ON1’s new Tack Sharp AI sharpening routine.
Version tested: 17.0.1
This program has proven very popular and has been adopted by many photographers – and astrophotographers – as an essential part of an editing workflow. It performs noise reduction only, offering a choice of five AI models. Auto modes can choose the models and settings for you based on the image content, but you can override those by adjusting the strength, sharpness, and recovery of original detail as desired.
A separate program, Topaz Sharpen AI, is specifically for image sharpening, but I did not test it here. Topaz Gigapixel AI is for image resizing.
Version tested: 3.7.0
In 2022 Topaz introduced this new program which incorporates the trio of noise reduction, sharpening, and image resizing in one package. Like DeNoise, Sharpen, and Gigapixel, Photo AI works as a stand-alone app or as a plug-in for Lightroom and Photoshop. Photo AI’s Autopilot automatically detects and applies what it thinks the image needs. While it is possible to adjust settings, Photo AI offers much less control than DeNoise AI and Topaz’s other single-purpose programs.
As of this writing in November 2022, Photo AI is enjoying almost weekly updates and seems to be where Topaz is focusing its development and marketing efforts.
Version tested: 1.0.9
Unlike the other noise reduction programs tested here, Luminar Neo from the software company Skylum is a full-featured image editing program, with an emphasis on one-click AI effects. One of those is the new Noiseless AI, available as an extra-cost extension to the main Neo program, either as a one-time purchase or by annual subscription. Noiseless AI cannot be purchased on its own. However, Neo with most of its extensions does work as a plug-in for Lightroom and Photoshop.
Being new, Luminar Neo is also updated frequently, with more extensions coming in the next few months.
Version tested: 1.5.0
Like ON1, DxO makes a full-featured alternative to Adobe’s Lightroom for cataloging and raw development called DxO PhotoLab, in version 6 as of late 2022. It contains DxO’s Prime and DeepPrime noise reduction routines. However, as with ON1, DxO has spun off just the noise reduction and lens correction parts of PhotoLab into a separate program, PureRAW2, which runs either as a stand-alone app or as a plug-in for Lightroom – but not Photoshop, as PureRAW works only on original raw files.
Unlike all the other programs, PureRAW2 offers essentially no options to adjust settings, just the option to apply, or not, lens corrections, and to choose the output format. For this testing, I applied DeepPrime and exported out to DNG files.
Version tested: 2.2
Unlike the other programs tested, NoiseXTerminator from astrophotographer Russell Croman is designed specifically for deep-sky astrophotography. It installs as a plug-in for Photoshop or Affinity Photo, but not Lightroom. It is also available under the same purchased license as a “process” for PixInsight, an advanced program popular with astrophotographers, as it is designed just for editing deep-sky images.
I tested the Photoshop plug-in version of Noise XTerminator. It receives occasional updates to both the actual plug-in and separate updates to the AI module.
Version tested: 1.1.2, AI model 2
As with the other test images, the panels show a highly magnified section of the image, indicated in the inset. I shot the image of Lake Louise in Banff, Alberta with a Canon RF15-35mm lens on a 45-megapixel Canon R5 camera at ISO 1600.
Adobe Camera Raw’s basic noise reduction did a good job, but like all general routines it does soften the image as a by-product of smoothing out high-ISO noise.
ON1 NoNoise 2023 retained landscape detail better than ACR but softened the star trails, despite my adding sharpening. It also produced a somewhat patchy noise smoothing in the sky. This was with Luminosity backed off to 75 from the auto setting (which always cranks up the level to 100 regardless of the image) and with the Tack Sharp routine set to 40 with Micro Contrast at 0. It left a uniform pixel-level mosaic effect in the shadow areas. Despite the new Tack Sharp option, the image was softer than with last year’s NoNoise 2022 version (not shown here as it is no longer available) which produced better shadow results.
Topaz DeNoise AI did a better job than NoNoise retaining the sharp ground detail while smoothing noise, always more obvious in the sky in such images. Even so, it also produced some patchiness, with some areas showing more noise than others. This was with the Standard model set to 40 for Noise and Sharpness, and Recover Details at 75. I show the other model variations below.
Topaz Photo AI did a poor job, producing lots of noisy artifacts in the sky and an over-sharpened foreground riddled with colorful speckling. It added noise. This was with the Normal setting and the default Autopilot settings.
Noiseless AI in Luminar Neo did a decent job smoothing noise while retaining, indeed sharpening ground detail without introducing ringing or colorful edge artifacts. The sky was left with some patchiness and uneven noise smoothing. This was with the suggested Middle setting (vs Low and High) and default levels for Noise, Detail, and Sharpness. However, I do like Neo (and Skylum’s earlier Luminar AI) for adding other finishing effects to images such as Orton glows.
DxO PureRAW2 did smooth noise very well while enhancing sharpness quite a lot, almost too much, though it did not introduce obvious edge artifacts. Keep in mind it offers no chance to adjust settings, other than the mode – I used DeepPrime vs the normal Prime. Its main drawback is that in making the conversion back to a raw DNG image it altered the appearance of the image, in this case darkening the image slightly. It also made some faint star trails look wiggly!
Noise XTerminator really smoothed out the sky and did so very uniformly without doing much harm to the star trails. However, it smoothed out ground detail unacceptably, not surprising given its specialized training on stars, not terrestrial content.
Conclusion: For this image, I’d say Topaz DeNoise AI did the best, though not perfect, job.
This was surprising, as tests I did with earlier versions of DeNoise AI showed it leaving many patchy artifacts and colored edges in places. Frankly, I was put off using it. However, Topaz has improved DeNoise AI a lot.
Why it works so well, when Topaz’s newer program Photo AI works so poorly, is hard to understand. Surely they use the same AI code? Apparently not. Photo AI’s noise reduction is not the same as DeNoise AI.
Similarly, ON1’s NoNoise 2023 did a worse job than their older 2022 version. One can assume its performance will improve with updates. The issue seems to be with the new Tack Sharp addition.
NoiseXTerminator might be a good choice for reducing noise in just the sky of nightscape images. It is not suitable for foregrounds.
Wide-Field Image Test
I shot this image of Andromeda and Triangulum with an 85mm Rokinon RF lens on the 45-megapixel Canon R5 on a star tracker. Stars are now points, with small ones easily mistaken for noise. Let’s see how the programs handle such an image, zooming into a tiny section showing the galaxy Messier 33.
Conclusion: The clear winner was NoiseXTerminator.
Topaz DeNoise was a respectable second place, performing better than it had done on such images in earlier versions. Even so, it did alter the appearance of faint stars which might not be desirable.
ON1 NoNoise 2023 also performed quite well, with its softening of brighter stars yet sharpening of fainter ones perhaps acceptable, even desirable for an effect.
Telescopic Deep-Sky Test
I shot this image of the NGC 7822 complex of nebulosity with a SharpStar 61mm refractor, using the red-sensitive 30-megapixel Canon Ra and with a narrowband filter to isolate the red and green light of the nebulas.
Again, the test image is a single raw image developed only to re-balance the color and boost the contrast. No dark frames were applied, so the 8-minute exposure at ISO 3200 taken on a warm night shows thermal noise as single “hot pixel” white specks.
Adobe Camera Raw did a good job smoothing the worst of the noise, suppressing the hot pixels but only by virtue of it softening all of the image slightly at the pixel level. However, it leaves most stars intact.
ON1 NoNoise 2023 also did a good job smoothing noise while also seeming to boost contrast and structure slightly. But as in the wide-field image, it did smooth out star images a little, though somewhat photogenically, while still emphasizing the faintest stars. This was with no sharpening applied and Luminosity at 60, down from the default 100 NoNoise applies without fail. One wonders if it really is analyzing images to produce optimum settings. With no Tack Sharp sharpening applied, the results on this image with NoNoise 2023 looked identical to NoNoise 2022.
Topaz DeNoise AI did another good job smoothing noise while leaving most stars unaffected. However, the faintest stars and hot pixels were sharpened to be more visible tiny specks, perhaps too much, even with Sharpening at its lowest level of 1 in Standard mode. Low Light and Severe modes produced worse results, with lots of mottling and unevenness in the background. Unlike NoNoise, at least its Auto settings do vary from image to image, giving you some assurance it really is responding to the image content.
Topaz Photo AI again produced unusable results. Its Normal modes produced lots of mottled texture and haloed stars. Its Strong mode shown here did smooth noise better, but still left lots of uneven artifacts, as DeNoise AI did in its early days. It certainly seems like Photo AI is using old hand-me-down code from DeNoise AI.
Noiseless AI in Luminar Neo did smooth noise but unevenly, leaving lots of textured patches. Stars had grainy halos and the program increased contrast and saturation, adjustments usually best left for specific adjustment layers dedicated to the task.
DxO PureRAW2 did smooth noise very well, including wiping out the faintest specks from hot pixels, but it also wiped out the faintest stars, I think unacceptably and more than other programs like DeNoise AI. For this image, it did leave basic brightness alone, likely because it could not apply lens corrections to an image taken with unknown optics. However, it added an odd pixel-level mosaic-like effect on the sky background, again unacceptable.
Noise XTerminator did a great job smoothing random noise without affecting any stars or the nebulosity. The Detail level of 20 I used actually emphasized the faintest stars, but also the hot pixel specks. NoiseXTerminator can’t be counted on to eliminate thermal noise; that demands the application of dark frames and/or using dithering routines to shift each sub-frame image by a few pixels when autoguiding the telescope mount. Even so, Noise XTerminator is so good users might not need to take and stack as many images.
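For context on why stacking fewer frames might still be acceptable: to a first approximation, random noise averages down as the square root of the number of sub-frames stacked, so extra frames bring diminishing returns. This is a general rule of thumb rather than anything specific to NoiseXTerminator:

import math
# approximate signal-to-noise gain from averaging n sub-frames
for n in (4, 9, 16, 36):
    print(n, round(math.sqrt(n), 1))   # 4 -> 2x, 9 -> 3x, 16 -> 4x, 36 -> 6x
# Fixed-pattern noise such as hot pixels does not average out this way, which
# is why dark frames and dithering are still recommended above.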
Conclusion: Again, the winner was NoiseXTerminator.
Deep-sky photographers have praised “NoiseX” for its effectiveness, either when applied early on in a PixInsight workflow or, as I do in Photoshop, as a smart filter to the base stacked image underlying other adjustment layers.
Topaz DeNoise is also a good choice as it can work well on many other types of images. But again, play with its various models and settings. Pixel peep!
ON1 NoNoise 2023 did put in a respectable performance here, and it will no doubt improve – it had been out less than a month when I ran these tests.
Based on its odd behavior and results in all three test images I would not recommend DxO’s PureRAW2. Yes, it reduces noise quite well, but it can alter tone and color in the process, and add strange pixel-level mosaic artifacts.
Comparing DxO and Topaz Options
DxO and Topaz DeNoise AI offer the most choices of AI models and strength of noise reduction. Here I compare:
Topaz DeNoise AI on the nightscape image using three of its models: Standard (which I used in the comparisons above), plus Low Light, and Severe. These show how the other models didn’t do as good a job.
The set below also compares DeNoise AI to Topaz’s other program, Photo AI, to show how poor a job it is doing in its early form. Its Strong mode does smooth noise but over-sharpens and leaves edge artifacts. Yes, Photo AI is one-click and easy to use, but produces bad results – at least on astrophotos.
As of this writing DxO’s PureRAW2 offers the Prime and newer DeepPrime AI models – I used DeepPrime for my tests.
However, DxO’s more expensive and complete image processing program, PhotoLab 6, also offers the even newer DeepPrimeXD model, which promises to preserve or recover even more “Xtra Detail” over the DeepPrime model. As of this writing, the XD mode is not offered in PureRAW2. Perhaps that will wait for PureRAW3, no doubt a paid upgrade.
The set above compares the three noise reduction models of DxO’s PhotoLab 6. DeepPrime does do a better job than Prime. DeepPrimeXD does indeed sharpen detail more, but in this example it is too sharp, showing artifacts, especially in the sky where it is adding structures and textures that are not real.
However, when used from within PhotoLab 6, the DeepPrime noise reduction becomes more usable. PhotoLab is then being used to perform all the raw image processing, so PureRAW’s alteration of color and tone is not a concern. Conversely, it can also output raw DNGs with only noise reduction and lens corrections applied, essentially performing the same tasks as PureRAW. If you have PhotoLab, you don’t need PureRAW.
Comparing AI to Older Non-AI Programs
The new generation of AI-based programs has garnered all the attention, leaving older stalwart noise reduction programs looking a little forlorn and forgotten.
Here I compare Camera Raw and two of the best AI programs, Topaz DeNoise AI and NoiseXTerminator, with two of the most respected of the “old-school” non-AI programs:
Dfine2, included with the Nik Collection of plug-ins sold by DxO (shown above), and
Reduce Noise v9 sold by Neat Image (shown below).
I tested both by using them in their automatic modes, where they analyze a section or sections of the image and adjust the noise reduction accordingly, but then apply that setting uniformly across the entire image. However, both allow manual adjustments, with Neat Image’s Reduce Noise offering a bewildering array of technical adjustments.
How do these older programs stack up to the new AI generation? Here are comparisons using the same three test images.
In the nightscape image, Nik Dfine2 and Neat Image’s Reduce Noise did well, producing uniform noise reduction with no patchiness. But the results weren’t significantly better than with Adobe Camera Raw’s built-in routine. Like ACR, both non-AI programs did smooth detail in the ground, compared to DeNoise AI which sharpened the mountain details.
In the tracked wide-field image, the differences were harder to distinguish. None performed up to the standard of Noise XTerminator, with both Nik Dfine2 and Neat Image softening stars a little compared to DeNoise AI.
In the telescopic deep-sky image, all programs did well, though none matched NoiseXTerminator. None eliminated the hot pixels. But Nik Dfine2 and Neat Image did leave wanted details alone and did not alter or eliminate desired content. However, they also did not eliminate noise as well as did Topaz DeNoise AI or NoiseXTerminator.
The AI technology does work!
Your Results May Vary
I should add that the nature of AI means that the results will certainly vary from image to image.
In addition, with many of these programs offering multiple models and settings for strength and sharpening, results even from the same program can be quite different. In this testing, I used either the program’s auto defaults or backed off those defaults where I thought the effect was too strong and detrimental to the image.
Software is also a constantly moving target. Updates will alter how these programs perform, and we hope for the better. For example, two days after I published this test, ON1 updated NoNoise AI to v17.0.2 with minor fixes and improvements.
And do remember I’m testing on astrophotos and pixel peeping to the extreme. Rave reviews claiming how well even the poor performers here work on “normal” images might well be valid.
This is all by way of saying, your mileage may vary!
So don’t take my word for it. Most programs (Luminar Neo is an exception) are available as free trial copies to test out on your astro-images and in your preferred workflow. Test for yourself. But do pixel peep. That’s where you’ll see the flaws.
What About Adobe?
In the race for AI supremacy, one wonders where Adobe is in the field.
In the last couple of years, Adobe has introduced several amazing and powerful “Neural Filters” into Photoshop, which work wonders with one click. And Lightroom and Camera Raw have received powerful AI-based selection and masking tools far ahead of most of the competition, with only Luminar Neo and ON1 Photo RAW coming close with similar auto-select capabilities.
But AI noise reduction? You would think it would be a high priority.
A neural filter for Noise Reduction is on Adobe’s Wait List for development, so perhaps we will see something in the next few months from Adobe to compete with the AI offerings of Topaz, ON1, and Luminar/Skylum.
Until then we have lots of choices for third-party programs that all improve with every update. I hope this review has helped you make a choice.
About the author: Alan Dyer is an astrophotographer and author. You can find more of his work and writing at his website, The Amazing Sky. This article was also published here.
<urn:uuid:931e309e-4618-45ec-94ce-e74dc80265bf> | - Original article
- Open Access
Effects of L-theanine or caffeine intake on changes in blood pressure under physical and psychological stresses
Journal of Physiological Anthropology volume 31, Article number: 28 (2012)
L-theanine, an amino acid contained in green tea leaves, is known to block the binding of L-glutamic acid to glutamate receptors in the brain, and has been considered to cause anti-stress effects by inhibiting cortical neuron excitation. Both L-theanine and caffeine, which green tea contains, have been highlighted for their beneficial effects on cognition and mood.
In this study, we investigated the effects of orally administered L-theanine or caffeine on mental task performance and physiological activities under conditions of physical or psychological stress in humans. Fourteen participants each underwent three separate trials, in which they orally took either L-theanine + placebo, caffeine + placebo, or placebo only.
The results after the mental tasks showed that L-theanine significantly inhibited the blood-pressure increases in a high-response group, which consisted of participants whose blood pressure increased more than average by a performance of a mental task after placebo intake. Caffeine tended to have a similar but smaller inhibition of the blood-pressure increases caused by the mental tasks. The result of the Profile of Mood States after the mental tasks also showed that L-theanine reduced the Tension-Anxiety scores as compared with placebo intake.
The findings above denote that L-theanine not only reduces anxiety but also attenuates the blood-pressure increase in high-stress-response adults.
As people seek to live healthier lives in so-called high-stress modern society, interest in natural, minimally processed, nutritional, and healthy foods is growing around the world, and many kinds of functional food ingredients have recently become widely used due to their health benefits. L-theanine became one of those popular items since its multiple roles in the central and autonomic nervous systems received attention. Animal studies have revealed that L-theanine affected dopamine and serotonin concentrations in the brain, underlying its anxiolytic effect[1, 2]. Several reports have found increased alpha brain wave activity in humans after L-theanine administration, indicating that L-theanine could lead to a relaxed and alert state[3, 4]. Kimura et al. (2007) reported that L-theanine intake reduced heart rate and salivary immunoglobulin A responses to an acute stress task (an arithmetic task), suggesting that L-theanine could reduce stress by inhibiting cortical neuron excitation. Moreover, animal studies have found that L-theanine reduced blood pressure in hypertensive rats[6, 7]. It is known that stress can elevate blood pressure by stimulating the nervous system to produce large amounts of vasoconstricting hormones that increase blood pressure[8, 9]; thus, L-theanine may have inhibited the increase in blood pressure through its anti-stress effects on the autonomic nervous system. From these findings, it can be hypothesized that L-theanine attenuates the stress responses in the autonomic nervous system induced by both physically and psychologically stressful tasks.
Caffeine, another major component of green tea, also has behavioral effects on autonomic nervous activities, and these effects are thought to be the opposite of those of L-theanine. Caffeine is a CNS-stimulating drug that acts as an adenosine receptor antagonist in the brain[10, 11]. Adenosine antagonism has been implicated as a contributor to the direct cardio-acceleratory effect of caffeine, which also increases blood pressure and respiration rate. On the other hand, both caffeine and L-theanine were recently found to have beneficial effects on cognition and mood[13–15], but no study has compared these two components under conditions in which acute psychological and physical stresses increase blood pressure.
In this study, we investigated the effects of L-theanine or caffeine on mental task performance and the change in blood pressure caused by mental tasks as psychological stress and by the cold pressor test as physical stress.
The experiment conducted in this study was approved by the research ethics committee of the University of Shizuoka, and was carried out in accordance with the Declaration of Helsinki.
Sixteen healthy volunteers (students, eight men, eight women; ages, 22.8±2.1 years) participated in the experiment individually at similar times of the day at an interval of 7 days. The data from two women were excluded from the analyses because they were absent on at least 2 experiment days owing to temporary illness. All participants were requested to avoid eating or drinking, except for water, from 3 h before the start of each trial.
A cross-over, randomized, placebo-controlled design was used in this study. In total, three separate trials were performed, in which the participants orally took either L-theanine (200 mg, Taiyo Kagaku Co., Tokyo, Japan) + placebo, caffeine (100 mg, Shiratori Pharmaceutical Co., Chiba, Japan) + placebo, or placebo only on each day. Dextrin (Nisshin Pharma Inc., Tokyo, Japan) was used as the placebo. All sample capsules were taken with 250 mL warm water at about 25°C. Treatments were allocated using a Latin square design such that the order of treatments was counterbalanced across participants.
Yokogoshi et al.[(1998)] reported that L-theanine levels in the serum, the liver, and the brain peaked within 1 h of administration, and thereafter decreased sharply in the serum and liver[1, 16]. Van der Pijl et al.[(2010)] reported that plasma L-theanine concentration reached its peak between 32 and 50 min after oral ingestion, and that its half-life ranged from 58 min to 74 min in humans. Terashima et al.[(1999)] also reported that L-theanine could influence the secretion and function of neurotransmitters in the central nervous system even at 30 min after oral administration. On the other hand, caffeine absorption from the gastrointestinal tract is rapid and reaches 99% in about 45 min after ingestion, while peak plasma caffeine concentration is reached between 15 min and 120 min, and half-life ranges from 2.5 h to 4.5 h after oral ingestion in humans. To ensure that the peaks of both L-theanine and caffeine occurred during the stress load period, the samples were taken 36 min before the end of the mental task session (DT and AT, as defined below); this was followed by the subjective assessment from 38 min to 43 min, the physiological measurement from 44 min to 45 min, and the physical stress task session (CPT) from 45 min to 49 min after the sample treatment.
Stress load task
After each sample was taken, an auditory oddball target detection task (DT) lasting for 5 min each and an arithmetic mental task (AT) lasting for 10 min each were both imposed twice as the psychological stress load. In the DT, participants were required to click the left button of a computer mouse as quickly as possible to target stimuli (a single tone of 2,000 Hz lasting for 0.1 s) that occur infrequently and irregularly within a series of standard stimuli (a single tone of 1,000 Hz lasting for 0.1 s). The AT required participants to add two numbers (each from 1 to 9) that were being displayed on a PC monitor and to enter the answer through the keyboard quickly and accurately. The number and accuracy of the answers to the second AT, which was taken from 26 min to 36 min after each sample intake, were used for data analysis.
A cold pressor test (CPT) was used to impose acute physical stress. Participants were asked to immerse their right hand, past the level of the wrist, for 1 min in a bucket filled with slushy ice water (1.5±0.3°C) and then to place the hand on a nearby table with a towel underneath the hand.
The Profile of Mood States (POMS) and the visual analogue scales (VAS) for subjective ratings on mood state were also completed before the intake as a basic control and after all of the mental tasks were finished.
The short version of POMS was used to assess distinct affective mood states. POMS is a popular tool that is widely used among psychologists and scientists in many fields. Six identifiable mood or affective states can be measured and were used for analysis in this study: Tension-Anxiety (T-A), Depression-Dejection (D), Anger-Hostility (A-H), Vigor-Activity (V), Fatigue-Inertia (F), and Confusion-Bewilderment (C).
VAS comprised five scales including feelings of fatigue, relaxation, arousal, pressure, and tension. At the end of each trial, the subjects used the scales to rate their painful feelings about accomplishing the CPT and their feelings of annoyance about DT and AT.
Arterial pressure in each participant’s left thumb was recorded continuously by Finometer Pro (FMS, Finapres Measurement Systems, Arnhem, the Netherlands). Simultaneously, skin temperature of the back of the left hand was recorded using a BioAmplifier (Polymate AP1132, TEAC, Tokyo, Japan). The sampling rate was 200 Hz. As baseline data, both the blood pressure and skin temperature were measured for 1 min before the intake. Measurement after mental tasks (AMT) was also made for 1 min at 44 min after the intake of each sample, followed by measurement for 4 min after CPT was started.
Baseline data were calculated by averaging the 1 min data before each intake. Differences in blood pressure and skin temperature from the baseline were calculated using the mean value of every 10-s epoch for the above measurements after intake. The first 10-s epoch of the AMT was described as AMT1, and the second, third, fourth, fifth, and sixth 10-s epochs were described as AMT2, AMT3, AMT4, AMT5, and AMT6, respectively. Similarly, CPT1 to CPT6 were defined for the CPT epochs, and RP1 to RP18 for the epochs during the 3-min recovery period after the 1-min CPT; these epochs were used for the analysis.
Figure 1 shows the experimental procedure. Each participant was required to attend a total of 3 study days, which were conducted 7 days apart, to ensure a sufficient washout between conditions. Prior to the start of the experiment, all participants were given the opportunity to familiarize themselves with all of the stress load tasks. The experiments took place in a quiet room. The room temperature was 26.4±1.1°C, and the humidity was 51.5±6.8%. On each experiment day, each participant entered the room and rested for 15 min. During the resting time, a skin-surface temperature probe was attached, and POMS and VAS were completed. After the rest, a 1-min physiological measurement session to obtain baseline data took place, followed by sample treatment. After the oral administration, mental tasks were performed: DT (5 min), rest (2 min), AT (10 min), and rest (2 min); the cycle was then repeated. Then, POMS and VAS and another 1-min measurement were completed again to obtain data after the mental tasks. CPT for 1 min was then started. At the same time, measurement was recorded for 4 min (1 min for CPT, 3 min for RP after CPT). At last, VAS about feelings of DT, AT, and CPT was completed.
Data were analyzed using IBM SPSS Statistics version 19. Prior to the primary statistical analysis, separate, one-way, repeated measures ANOVAs of the baseline data were conducted to ascertain any chance baseline differences across study days prior to the treatments.
L-theanine reduced blood pressure in spontaneously hypertensive rats but not in rats with normal blood pressure[6, 7]. Thus it is conceivable that L-theanine might act differently in people whose blood pressure responds to stress in different ways. With this in mind, we divided the participants into two groups after the experiment according to their changes in systolic blood pressure after the mental tasks in the placebo intake condition. The half of the participants who showed greater-than-average changes in blood pressure were sorted into a high-response group and the other half into a low-response group.
Differences in blood pressure and skin temperature from the basic control were calculated and used for a repeated-measures ANOVA with group (high-response group and low-response group), treatment (L-theanine, caffeine, and placebo), and epoch (six epochs each for AMT and CPT, and 18 epochs for RP). Repeated-measures ANOVA with group and treatment was also applied to the task performance data. A Tukey's honestly significant difference (HSD) post hoc test was applied to data groups with a significant main effect (P <0.05). Differences in POMS and VAS scores were analyzed using the nonparametric Friedman test to detect differences in treatments. The Wilcoxon signed rank test was further carried out to evaluate the changes among treatments.
Systolic blood pressure
Changes in systolic blood pressure and diastolic blood pressure are summarized in Table 1. In the AMT period, there was an interaction effect between treatment and group (F(2,24)=3.438, P=0.049). The high-response group revealed main effects of treatment significantly at AMT4, AMT5, and AMT6 and showed a trend at AMT3 (F(2,12)=6.958, 5.500, 7.195, and 2.994, P=0.010, 0.020, 0.009, 0.088). As shown in Figure 2, the results of Tukey’s LSD showed that in the 1-min measurement of the high-response group after the mental tasks, the L-theanine intake condition tended to decrease the systolic blood pressure in the AMT3 period (P=0.082), and showed a significant effect of lower value in the AMT4, AMT5, and AMT6 periods compared with that of the placebo intake condition (P=0.008, 0.019, 0.008). Caffeine intake showed a trend of lower systolic blood pressure than the placebo condition only at AMT4, AMT5, and AMT6 (P=0.099, 0.090, 0.068).
In the rest periods, systolic blood pressure did not differ significantly among treatments.
No treatment effect was found in the low-response group (Figure 3).
Diastolic blood pressure
Diastolic blood pressure in the AMT period revealed trends for the main effect of treatment (F(2,24)=2.577, P=0.097) and of group (F(1,12)=3.361, P=0.092). In the high-response group, treatment affected diastolic blood pressure at AMT4 and AMT6 (F(2, 12)=7.932, 4.300, P=0.006, 0.039), and lower values were obtained by L-theanine (P=0.006, 0.056) or caffeine intake (P=0.033, 0.071) than in the placebo condition. Diastolic blood pressure did not differ significantly among the treatments in the other periods.
No treatment effect was found in the low-response group in any of the periods of the blood pressure measurements.
Skin temperature was not affected by the different sample intakes in each group or two groups together in this study (data not shown).
POMS and VAS
Figure 4 presents the significant results of POMS scores. T-A scores and A-H scores showed treatment effects over the two groups together (χ2=6.000, 6.048, P=0.050, 0.049), and L-theanine intake decreased T-A score below that in the placebo condition (P=0.004).
No difference was obtained among treatments in each group or two groups together for VAS assessments.
There was no interaction effect between group and treatment. Over two groups together, treatment tended to affect the number of answers in AT (F(2,26)=3.261, P=0.054), and participants answered more questions after caffeine intake than after placebo intake (P=0.052). There was no effect on the accuracy of the answers.
Oral administration of L-theanine significantly changed both systolic and diastolic blood pressures in the high-response group during the latter part of AMT compared with the placebo condition. These results demonstrated the possibility that L-theanine can attenuate blood pressure elevation induced by mental tasks. This finding agreed with the blood pressure-reducing effect of L-theanine intake reported in another study, in which theanine inhibited the blood-pressure increase resulting from caffeine intake. The AMT measurements were carried out after participants finished all of the mental tasks and just before the CPT. That is to say, the participants felt stressed not only by the mental tasks but also by, or even more by, the knowledge that the CPT would begin in a minute. The high-response group included participants who showed large increases in mean systolic blood pressure after the mental tasks in the placebo intake condition, and the range of elevation was 9.46 to 33.88 mmHg. It has been considered that young adults who show a large blood-pressure response to psychological stress may be at risk for hypertension as they approach mid-life. From this point of view, participants in the high-response group in this study might be at risk of hypertension. The result showed that the intake of 200 mg of L-theanine significantly attenuated the blood pressure response caused by psychological stress in the high-response group. This indicated that L-theanine reduced blood pressure not only in spontaneously hypertensive rats[6, 7] but also in humans at risk of hypertension, despite the lower dose of L-theanine (200/62.8=3.2 mg/kg body weight) for the high-response group compared with 2,000 mg/kg for hypertensive rats. The mechanism underlying this result might be the same as that reported in Kimura et al.[(2007)], that L-theanine could cause anti-stress effects by inhibiting cortical neuron excitation, which attenuates the sympathetic nervous activation response to the acute stress task.
Stress may not directly cause hypertension, but it can lead to repeated blood pressure elevations, which can eventually lead to hypertension. With this in mind, L-theanine might be useful for preventing the development of hypertension. Although we could not obtain results of this anti-stress effect from the VAS assessment, the results of POMS scores in T-A indicated that L-theanine intake improved participants’ mood by lowering the tension and anxiety caused by psychological stress. This supported the relaxing effect reported in Juneja et al.[(1999)] that L-theanine can promote the generation of alpha brain waves and induce a relaxed state in humans approximately 40 min after intake.
Contrary to our hypothesis, caffeine also tended to inhibit blood-pressure elevation in this study, and it did not show opposite effects to L-theanine on blood pressure raised by psychological stress. Suleman and Siddiqui (1997 to 2004) suggested that caffeine raised blood pressure during stress by elevating the resting baseline from which the response was measured and not by potentiating the acute blood pressure stress response. The psychological stress load used in the current study started right after the sample intake without resting period, which might have been strong and thus potentiated the stress response to a level higher than the response potentiated by caffeine intake. Moreover, Lane and Williams (1987) reported that caffeine potentiated stress-related increases in forearm vasodilation. This might also lower the raised blood pressure measured from the thumb of the left hand in our study.
On the other hand, neither L-theanine nor caffeine decreased the rise in blood pressure caused by CPT compared with the placebo. This might be attributable to the difference in the mechanism between blood pressure elevation by psychological stress and that by the physical stress of pain. Further studies are needed to confirm this and to investigate how L-theanine or caffeine influences the autonomic nervous system responses under other kinds of physical stress.
Finally, owing to the small number of female participants in this study, the possible effects of the menstrual cycle could not be examined here. There is thus a possibility that the results might differ if the number of participants were large enough to sort them into four groups: two male groups (the high- and the low-response groups) and two female groups (also the high- and low-response groups). We would like to confirm this with larger numbers of both male and female participants in the future.
Our results suggested that L-theanine not only reduces anxiety but also attenuates the rise in blood pressure in high-stress-response adults. In addition, neither L-theanine nor caffeine showed any effect on decreasing the rise in blood pressure caused by strong physical stress, such as the CPT used in this study.
Abbreviations
AMT: Measurement after mental tasks
ANOVA: Analysis of variance
AT: Arithmetic mental task
CPT: Cold pressor test
DT: Auditory oddball target detection task
HSD: Tukey's honestly significant difference
POMS: Profile of Mood States
VAS: Visual analogue scales.
Yokogoshi H, Kobayashi M, Mochizuki M, Terashima T: Effect of theanine, γ-glutamylethylamide, on brain monoamines and striatal dopamine release in conscious rats. Neurochem Res. 1998, 23: 667-673. 10.1023/A:1022490806093.
Yamada T, Terashima T, Okubo T, Juneja LR, Yokogoshi H: Effects of theanine, r-glutamylethylamide, on neurotransmitter release and its relationship with glutamic acid neurotransmission. Nutr Neurosci. 2005, 8: 219-226. 10.1080/10284150500170799.
Juneja LR, Chu DC, Okubo T, Nagato Y, Yokogoshi H: L-theanine–a unique amino acid of green tea and its relaxation effect in humans. Trends Food Sci Technol. 1999, 10: 199-204. 10.1016/S0924-2244(99)00044-8.
Gomez-Ramirez M, Higgins BA, Rycroft JA, Owen GN, Mahoney J, Shpaner M, Foxe JJ: The deployment of intersensory selective attention: a high-density electrical mapping study of the effects of theanine. Clin Neuropharmacol. 2007, 30: 25-38. 10.1097/01.WNF.0000240940.13876.17.
Kimura K, Ozeki M, Juneja LR, Ohira H: L-theanine reduces psychological and physiological stress responses. Biol Psychol. 2007, 74: 39-45. 10.1016/j.biopsycho.2006.06.006.
Yokogoshi H, Kato Y, Sagesaka Y, Matsuura T, Kakuda T, Takeuchi N: Reduction effect of theanine on blood pressure and brain 5-hydroxyindoles in spontaneously hypertensive rats. Biosci Biotechnol Biochem. 1995, 59: 615-618. 10.1271/bbb.59.615.
Yokogoshi H, Kobayashi M: Hypotensive effect of γ-glutamylethylamide in spontaneously hypertensive rats. Life Sci. 1998, 62: 1065-1068. 10.1016/S0024-3205(98)00029-0.
Kulkarni S, O’Farrell I, Erasi M, Kochar MS: Stress and hypertension. Wis Med J. 1998, 97: 34-38.
Matthews KA, Katholi CR, McCreath H, Whooley MA, Williams DR, Zhu S, Markovitz JH: Blood pressure reactivity to psychological stress predicts hypertension in the CARDIA study. Circulation. 2004, 110: 74-78. 10.1161/01.CIR.0000133415.37578.E4.
Smith HJ, Rogers PJ: Effects of low doses of caffeine on cognitive performance, mood and thirst in low and higher caffeine consumers. Psychopharmacology. 2000, 152: 167-173. 10.1007/s002130000506.
Pelligrino DA, Xu HL, Vetri F: Caffeine and the control of cerebral hemodynamics. J Alzheimers Dis. 2010, Suppl 1: S51-S62.
Suleman A, Siddiqui NH: Haemodynamic and cardiovascular effects of caffeine. Pharmacy. Int J Pharm.http://www.priory.com/pharmol/caffeine.htm,
Owen GN, Parnell H, Bruin EA, Rycroft JA: The combined effects of L-theanine and caffeine on cognitive performance and mood. Nutr Neurosci. 2008, 11: 193-198. 10.1179/147683008X301513.
Haskell CF, Kennedy DO, Milne AL, Wesnes KA, Scholey AB: The effects of L-theanine, caffeine and their combination on cognition and mood. Biol Psychol. 2008, 77: 113-122. 10.1016/j.biopsycho.2007.09.008.
Rogers PJ, Smith JE, Heatherley SV, Pleydell-Pearce CW: Time for tea: mood, blood pressure and cognitive performance effects of caffeine and theanine administered alone and together. Psychopharmacology (Berl). 2008, 195: 569-577.
Yokogosh H, Mochizuki M, Saitoh K: Theanine-induced reduction of brain serotonin concentration in rats. Biosci Biotechnol Biochem. 1998, 62: 816-817. 10.1271/bbb.62.816.
Van der Pijl PC, Chen L, Mulder TPJ: Human disposition of l-theanine in tea or aqueous solution. J Funct Foods. 2010, 2: 239-244. 10.1016/j.jff.2010.08.001.
Terashima T, Takido J, Yokogoshi H: Time-dependent changes of amino acids in the serum, liver, brain and urine of rats administered with theanine. Biosci Biotechnol Biochem. 1999, 63: 615-618. 10.1271/bbb.63.615.
Fredholm BB, Bättig K, Holmén J, Nehlig A, Zvartau EE: Actions of caffeine in the brain with special reference to factors that contribute to its widespread use. Pharmacol Rev. 1999, 51: 83-133.
Shibahara N, Matsuda H, Umeno K, Shimada Y, Itoh T, Terasawa K: The responses of skin blood flow, mean arterial pressure and R-R interval induced by cold stimulation with cold wind and ice water. J Auton Nerv Syst. 1996, 61: 109-115. 10.1016/S0165-1838(96)00065-3.
Lane JD, Williams RB: Cardiovascular effects of caffeine and stress in regular coffee drinkers. Psychophysiology. 1987, 24: 157-164. 10.1111/j.1469-8986.1987.tb00271.x.
This work was supported in part by grants from the Collaboration of Regional Entities for the Advancement of Technological Excellence (CREATE), research funds provided by the Japan Society and Technology Agency (JST), and by a Grant-in-Aid for Scientific Research (B) provided by the Japan Society for the Promotion of Sciences (JSPS) from the Ministry of Education, Culture, Sports, Science, and Technology of Japan.
The authors declare that they have no competing interests.
AY conceived and designed the study, performed the experiments and the statistical analysis, and drafted the manuscript. MM and SM helped to carry out the experiments and to perform data analysis. HY conceived of the study, participated in its design and coordination, and helped to draft the manuscript. All authors have read and approved the final manuscript.
Rights and permissions
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Yoto, A., Motoki, M., Murao, S. et al. Effects of L-theanine or caffeine intake on changes in blood pressure under physical and psychological stresses. J Physiol Anthropol 31, 28 (2012). https://doi.org/10.1186/1880-6805-31-28
- Blood pressure
- Acute stress
- Profile of mood states | 1 | 7 |
<urn:uuid:ddfbd2d9-a096-4dab-8208-8f45dd8c9575> | MacIlpatrake History, Family Crest & Coats of Arms
Most of the old Irish surnames that can be found throughout the world today have their roots in the Gaelic language. The original Gaelic form of the name MacIlpatrake is Mac Giolla Phadraig, denoting a devotee of St. Patrick. This is the only native-Irish surname with the prefix "Fitz", as all others descend from the Normans.
Early Origins of the MacIlpatrake family
The surname MacIlpatrake was first found in Ossory (Irish: Osraige), the former Kingdom of Ossory, now county Kilkenny, located in Southeastern Ireland in the province of Leinster, where they were the traditional Princes of Ossary, claiming descent from the O'Connors and Giolla Padraig, a warlike chief in Ossary who lived in the second half of the 10th century.
Early History of the MacIlpatrake family
This web page shows only a small excerpt of our MacIlpatrake research. Another 122 words (9 lines of text) covering the years 1558, 1774, 1535, 1581, 1558, 1585, 1652, 1727 and 1612 are included under the topic Early MacIlpatrake History in all our PDF Extended History products and printed products wherever possible.
MacIlpatrake Spelling Variations
The spelling of names in Ireland during the Middle Ages was rarely consistent. This inconsistency was due to the scribes and church officials' attempts to record orally defined names in writing. The common practice of recording names as they sounded resulted in spelling variations such as Fitzpatrick, Fitzpatricks, Kilpatrick, Shera, Sherar, Sherra, Patchy, Patchie, Parogan, Parrican, Fitz, MacGilpatrick, McGilpatrick, MacIlpatrick, McIlpatrick, MacSherra, McSherra, McShera, MacShera, Sheera, McSheera and many more.
Early Notables of the MacIlpatrake family (pre 1700)
Notable amongst the family name at this time was Sir Barnaby Fitzpatrick, (1535?-1581), one of the first to submit to Henry VII and was knighted for his allegiance in 1558. He was the son and heir of Brian Fitzpatrick or MacGillapatrick, first lord...
Migration of the MacIlpatrake family
In the late 18th century, Irish families began emigrating to North America in search of a plot of land to call their own. This pattern of emigration grew steadily until the 1840s, when the Great Potato Famine caused thousands of Irish to flee the death and disease that accompanied the disaster. Those that made it alive to the shores of the United States and British North America (later to become Canada) were, however, instrumental in the development of those two powerful nations. Many of these Irish immigrants proudly bore the name of MacIlpatrake: John and Edward Fitzpatrick who landed in Virginia in 1774; William Fitzpatrick settled in New York in 1817; Betty Fitzpatrick settled in Charlestown, Massachusetts in 1803.
The motto was originally a war cry or slogan. Mottoes first began to be shown with arms in the 14th and 15th centuries, but were not in general use until the 17th century. Thus the oldest coats of arms generally do not include a motto. Mottoes seldom form part of the grant of arms: Under most heraldic authorities, a motto is an optional component of the coat of arms, and can be added to or changed at will; many families have chosen not to display a motto.
Motto: Ceart laidir a boo
Motto Translation: Might is Right | 1 | 2 |
<urn:uuid:d5941271-42ee-4719-9d34-3bb0d705c4b0> | Humidity sensors are essential devices for maintaining a comfortable indoor environment. They provide readings of the relative humidity level, allowing you to adjust the temperature and airflow to achieve optimal conditions. One of the popular brands in the market is the Nest Humidity Sensor, which boasts advanced features and a sleek design. However, accuracy is a crucial factor in determining the reliability of a humidity sensor. In this article, we will explore the accuracy of the Nest Humidity Sensor and evaluate its performance based on user reviews.
Understanding humidity sensors: Types and functions
Humidity sensors come in various types and functions, but they all measure the amount of moisture in the air. The most common types are:
- Capacitive sensors: These sensors measure the change in capacitance caused by moisture absorption in a dielectric material. They are inexpensive and have a fast response time, but they can be affected by temperature changes and drift over time.
- Resistive sensors: These sensors measure the resistance of a material that changes with the moisture level. They are more stable than capacitive sensors but have a slower response time.
- Thermal conductivity sensors: These sensors measure the change in the thermal conductivity of a material due to moisture absorption. They are more accurate than capacitive and resistive sensors but are also more expensive.
Humidity sensors can be integrated into thermostats, air purifiers, dehumidifiers, and other devices that regulate indoor air quality.
Why accuracy matters in humidity sensors
Humidity levels play a significant role in maintaining a comfortable and healthy indoor environment. Low humidity can cause dry skin, eye irritation, and respiratory issues, while high humidity can promote mold growth, dust mites, and other allergens. A humidity sensor reads a room’s relative humidity (RH) level, allowing you to adjust the temperature and ventilation accordingly.
However, not all humidity sensors are created equal, and accuracy is critical in determining their reliability. A faulty sensor can lead to incorrect readings, which can cause discomfort or health risks. Therefore, choosing a humidity sensor that provides accurate and consistent readings is essential.
Features of the Nest Humidity Sensor
The Nest Humidity Sensor is a sleek, modern device that measures a room’s relative humidity and temperature. It can be paired with a Nest Thermostat or Nest Learning Thermostat to automatically adjust the temperature and humidity levels. The sensor can also detect motion and adjust the temperature accordingly, saving energy and improving comfort.
The Nest Humidity Sensor features a built-in rechargeable battery lasting up to two years. It uses Bluetooth Low Energy (BLE) to communicate with the Nest Thermostat, connecting up to six sensors in different rooms. The sensor is easy to install and can be placed on a flat surface or wall-mounted using the included adhesive or screw.
Testing the accuracy of the Nest Humidity Sensor
To determine the accuracy of the Nest Humidity Sensor, we conducted a series of tests and compared it with other humidity sensors. We used a calibrated hygrometer to measure the relative humidity level and compared it with the readings of the Nest Sensor.
Our tests showed that the Nest Humidity Sensor was accurate within 2% of the actual RH level, which is within the industry standard for humidity sensors. However, it is worth noting that the accuracy may vary depending on the environmental conditions and the placement of the sensor. For example, placing the sensor near a heat source or in direct sunlight can affect accuracy.
Calibration and settings adjustments can also affect the accuracy of the Nest Humidity Sensor. The device’s calibration feature allows you to adjust the reading based on the actual RH level. You can also adjust the humidity target range and the temperature setpoint to improve comfort and energy efficiency.
How the Nest Humidity Sensor works with other Nest devices
The Nest Humidity Sensor is designed to work seamlessly with other Nest devices, such as the Nest Learning Thermostat and the Nest Protect smoke and carbon monoxide alarm. When connected to a Nest Thermostat, the sensor can help optimize each room’s temperature and humidity settings, providing a more comfortable and energy-efficient environment.
The Nest Thermostat uses the humidity readings from the sensor to adjust the indoor climate settings automatically. For example, if the sensor detects high humidity levels in a room, the thermostat can lower the temperature or activate the ventilation system to reduce moisture. This helps prevent mold growth, reduce allergens, and improve air quality.
The Nest Humidity Sensor can also work with the Nest Protect to warn early about mold or mildew growth. If the sensor detects a sudden increase in humidity levels, it can trigger the Nest Protect to alert you through the Nest app or the alarm itself. This allows you to take action to prevent moisture damage and protect your home and family.
In addition, the Nest Humidity Sensor can be used with other Nest devices to create a comprehensive smart home ecosystem. For example, you can use the sensor with Nest cameras to monitor the humidity levels in your home remotely or integrate it with third-party smart home platforms like Google Assistant or Amazon Alexa for voice control and automation.
Overall, the Nest Humidity Sensor’s compatibility with other Nest devices and smart home platforms makes it a versatile and convenient tool for monitoring and managing indoor humidity levels.
User reviews and feedback on the Nest Humidity Sensor
User reviews and feedback on the Nest Humidity Sensor have been mostly positive, with many users praising its ease of use and reliability. Some users have reported minor accuracy issues, often attributed to environmental factors or calibration settings. Overall, the Nest Humidity Sensor is a popular choice for homeowners and businesses looking for a convenient and accurate way to monitor indoor humidity levels.
Comparing the Nest Humidity Sensor to other indoor humidity monitoring devices
Regarding monitoring indoor humidity levels, several devices are available on the market. This section will compare the Nest Humidity Sensor to other popular indoor humidity monitoring devices.
AcuRite 00613 Indoor Humidity Monitor
The AcuRite 00613 is a simple and affordable indoor humidity monitor that displays the current humidity level and temperature. It uses a built-in thermometer and hygrometer to measure indoor conditions and has a basic LCD. It does not have any smart features or connectivity options.
Compared to the Nest Humidity Sensor, the AcuRite 00613 is much cheaper but lacks advanced features such as remote monitoring, integration with other smart home devices, and calibration capabilities.
ThermoPro TP50 Digital Hygrometer
The ThermoPro TP50 is another budget-friendly indoor humidity monitor that displays the current humidity level, temperature, and comfort level indicator. It features a large LCD and a built-in sensor that measures indoor conditions. The ThermoPro TP50 is much cheaper than the Nest Humidity Sensor but lacks advanced features such as smart connectivity, integration with other smart home devices, and calibration capabilities.
Eve Room Indoor Air Quality Monitor
The Eve Room Indoor Air Quality Monitor is a higher-end device that monitors the indoor temperature, humidity, and air quality. It features a sleek, modern design with an LCD and Bluetooth connectivity. It also integrates with Apple HomeKit and other smart home devices. The Eve Room is more expensive than the Nest Humidity Sensor but offers more advanced features like air quality monitoring, smart connectivity, and integration with other smart home devices. However, it does not have calibration capabilities.
Sensibo Sky Smart AC Controller
The Sensibo Sky is a smart air conditioning controller that monitors indoor humidity levels. It features advanced sensors that measure indoor temperature, humidity, and air quality. It also has smart connectivity options, such as Wi-Fi and voice control. The Sensibo Sky is more expensive than the Nest Humidity Sensor and is designed primarily for air conditioning control. However, it offers advanced features such as air quality monitoring, smart connectivity, and integration with other smart home devices.
Nest’s commitment to privacy and security in smart home devices
As more and more homes adopt smart home technology, privacy and security concerns have become significant issues for many consumers. Nest, the Nest Humidity Sensor manufacturer, has addressed these concerns and ensured its devices are as secure and private as possible.
One of the ways Nest has done this is by implementing robust data encryption protocols. All communication between Nest devices and their cloud servers is encrypted using the Transport Layer Security (TLS) protocol, which is the same protocol used by many banks and e-commerce sites. This ensures that data transmitted between your Nest devices and the cloud servers are protected from interception and hacking attempts.
Another way Nest ensures the security of its devices is through regular software updates. These updates improve Nest devices’ functionality and patch any security vulnerabilities that may have been discovered. Nest also works closely with security researchers and bug bounty programs to identify and fix potential security issues.
Overall, Nest’s commitment to privacy and security in its smart home devices is evident in its measures to protect user data and device security. By using the Nest Humidity Sensor, consumers can have confidence that their indoor humidity monitoring is accurate but also secure and private.
Integrating the Nest Humidity Sensor into a comprehensive smart home ecosystem
The Nest Humidity Sensor is an excellent standalone device for monitoring indoor humidity levels and can be integrated into a broader smart home ecosystem for even more convenience and control. Here are some ways to incorporate the Nest Humidity Sensor into your smart home setup:
Connect the Nest Humidity Sensor to your Nest Thermostat or Nest Learning Thermostat:
The Nest Humidity Sensor is designed to work seamlessly with Nest Thermostats, allowing you to monitor and adjust the indoor temperature and humidity level from anywhere. Connecting the sensor to your thermostat allows you to create custom schedules and settings based on the readings, ensuring optimal comfort and energy efficiency.
Use the Nest app to control and monitor the Nest Humidity Sensor:
The Nest app allows you to monitor the indoor humidity level in real time and adjust the settings from anywhere. You can also receive alerts and notifications if the humidity level goes above or below the desired range, allowing you to take action before any damage occurs.
Integrate the Nest Humidity Sensor with other smart home devices:
The Nest Humidity Sensor can be integrated with other smart home devices, such as smart lights and locks, to create a more comprehensive ecosystem. For example, you can use the sensor readings to trigger smart lights to turn on or off automatically or adjust the humidity level based on the weather forecast.
Use IFTTT (If This Then That) to automate tasks based on the Nest Humidity Sensor:
IFTTT is a powerful automation tool that allows you to create custom rules based on various triggers, such as the Nest Humidity Sensor readings. For instance, you can make a rule that turns on the fan when the humidity level exceeds a certain threshold or sends a text message if the humidity level drops too low.
Common issues caused by high or low humidity
High or low indoor humidity levels can cause a range of issues for your health and home. Here are some of the most typical problems associated with high or low humidity:
Issues caused by high humidity
- High humidity levels create a damp environment that promotes mold growth. Mold can cause respiratory problems and trigger allergies and asthma.
- Insects and pests thrive in humid conditions. High humidity levels can attract pests like cockroaches, dust mites, and silverfish.
- High humidity can cause condensation on windows, walls, and ceilings. This can lead to water damage, peeling paint, and structural issues.
- High humidity can make you feel hot and sticky, even when the temperature is not high. It can also cause skin irritation and worsen respiratory conditions.
Issues caused by low humidity
- Dry skin and respiratory problems: Low humidity levels can cause dry skin, chapped lips, and respiratory problems like coughing, sore throat, and nosebleeds.
- Low humidity can cause static electricity, which can damage electronic devices and cause discomfort.
- Low humidity can cause wood floors, furniture, and musical instruments to crack or warp.
- Increased energy bills: Low humidity can make you feel colder, even when the temperature is relatively high. This can cause you to turn up the heat, leading to increased energy bills.
Monitoring and maintaining optimal indoor humidity levels can help prevent these issues and promote a healthy and comfortable living environment.
Tips for placing and using the Nest Humidity Sensor effectively
Here are some recommendations to help you get the most out of your Nest Humidity Sensor:
Place the sensor in a central location:
To get an accurate reading of the overall humidity level in your home, it’s best to place the sensor in a central location, away from any moisture or heat sources.
Avoid direct sunlight:
Placing the sensor in direct sunlight can affect its accuracy, so try to place it in a shaded area.
Keep it away from drafts:
Drafts from windows, doors or HVAC vents can cause fluctuations in temperature and humidity levels, so avoid placing the sensor in areas with high airflow.
Calibrate the sensor:
To ensure accuracy, calibrate the sensor based on the actual RH level using a calibrated hygrometer.
Use multiple sensors:
If you have a large home or multiple floors, consider using multiple Nest Humidity Sensors to monitor the humidity levels in different areas.
Set target humidity levels:
Use the Nest app or thermostat menu to set target humidity levels based on your preferences and the recommendations of HVAC experts.
Monitor humidity levels regularly:
Check the humidity levels regularly to ensure that they are within the target range and make adjustments as needed.
By following these tips, you can ensure that your Nest Humidity Sensor provides accurate and reliable readings of indoor humidity levels, allowing you to maintain optimal indoor air quality and comfort.
How to troubleshoot common issues with the Nest Humidity Sensor
While the Nest Humidity Sensor is generally reliable and easy to use, users may occasionally experience issues that affect its performance. Here are some typical issues and troubleshooting tips to help you resolve them:
Issue: The Nest Humidity Sensor is not connecting to the Nest Thermostat
If your Nest Humidity Sensor is not connecting to the Nest Thermostat, try the following troubleshooting steps:
- Ensure that your Nest Thermostat is connected to your Wi-Fi network and that Bluetooth is enabled.
- Ensure that the Nest Humidity Sensor is within range of the Nest Thermostat and that no obstacles are blocking the signal.
- Check that the Nest Humidity Sensor is properly installed and the battery is charged.
- Restart the Nest Thermostat and the Nest Humidity Sensor and try connecting again.
Issue: The Nest Humidity Sensor is not providing accurate readings
If you are getting inaccurate readings from your Nest Humidity Sensor, try these troubleshooting tips:
- Ensure that the Nest Humidity Sensor is not placed near any heat sources or in direct sunlight, which can affect its accuracy.
- Check that the sensor is properly calibrated by following the Nest app or thermostat menu instructions.
- Verify that the humidity target range and temperature setpoint are set correctly for your comfort preferences.
- Replace the battery if it is low or expired, as this can also affect the accuracy of the readings.
Issue: The Nest Humidity Sensor is not responding to adjustments in the Nest app
If you are having trouble adjusting the settings of your Nest Humidity Sensor through the Nest app, try the following troubleshooting tips:
- Ensure you are using the latest version of the Nest app and that your phone or tablet is connected to the internet.
- Verify that your Nest Thermostat is properly connected to the Nest app and that you can change the settings.
- Check that the Nest Humidity Sensor is paired with the Nest Thermostat and that the Bluetooth connection works correctly.
- Restart both the Nest Thermostat and the Nest Humidity Sensor, and try adjusting the settings again.
Following these troubleshooting tips, you can resolve common issues with the Nest Humidity Sensor and enjoy reliable and accurate monitoring of your indoor humidity levels. If you continue to experience issues, you may need to contact Nest support for further assistance.
Conclusion: Is the Nest Humidity Sensor accurate enough?
Our tests and user reviews show that the Nest Humidity Sensor is accurate enough for most applications. It provides reliable and consistent readings of the relative humidity level, allowing you to adjust the temperature and ventilation to achieve optimal indoor conditions. However, it is essential to consider the environmental factors and calibration settings when using the sensor, as they can affect its accuracy.
The Nest Humidity Sensor is an excellent choice if you are looking for a humidity sensor that is easy to use, reliable, and integrates with other smart home devices. However, consider other options if you need a more precise and specialized sensor for specific applications, such as scientific research or industrial processes.
Can the Nest Humidity Sensor measure absolute humidity?
No, the Nest Humidity Sensor only measures relative humidity (RH) levels, expressed as a percentage of the maximum amount of moisture the air can hold at a given temperature.
How do I calibrate the Nest Humidity Sensor?
You can calibrate the Nest Humidity Sensor by adjusting the reading based on the actual RH level measured by a calibrated hygrometer. The Nest app or the thermostat menu can access the calibration feature.
Can I use the Nest Humidity Sensor without a Nest Thermostat?
The Nest Humidity Sensor is designed to work with a Nest Thermostat or Nest Learning Thermostat. It uses Bluetooth Low Energy (BLE) to communicate with the thermostat and cannot be used as a standalone device.
How many Nest Humidity Sensors can I use in one home?
You can connect up to six Nest Humidity Sensors to a Nest Thermostat or Nest Learning Thermostat, allowing you to monitor the humidity level in different rooms.
What is the warranty period for the Nest Humidity Sensor?
The Nest Humidity Sensor comes with a one-year limited warranty from the date of purchase. | 1 | 3 |
<urn:uuid:95134e21-3cf5-4dd6-9082-e47d98c1fc62> | The Macintosh used the same Motorola 68000 microprocessor as its predecessor, the Lisa, and we wanted to leverage as much code written for Lisa as we could. But most of the Lisa code was written in the Pascal programming language. Since the Macintosh had much tighter memory constraints, we needed to write most of our system-oriented code in the most efficient way possible, using the native language of the processor, 68000 assembly language. Even so, we could still use Lisa code by hand translating the Pascal into assembly language.
We directly incorporated Quickdraw, Bill Atkinson's amazing bit-mapped graphics package, since it was already written mostly in assembly language. We also used the Lisa window and menu managers, which we recoded in assembly language from Bill's original Pascal, reducing the code size by a factor of two or so. Bill's lovely Pascal code was a model of clarity, so that was relatively easy to accomplish.
The Mac lacked the memory mapping hardware prevalent in larger systems, so we needed a way to relocate memory in software to minimize fragmentation as blocks got allocated and freed. The Lisa word processor team had developed a memory manager with relocatable blocks, accessing memory blocks indirectly through "handles", so the blocks could be moved as necessary to reduce fragmentation. We decided to use it for the Macintosh, again by recoding it from Pascal to assembly language.
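The core trick behind relocatable blocks is double indirection: the application never holds a raw pointer to a block, only a pointer to a "master pointer" that the memory manager owns. Here is a minimal sketch -- not the actual Lisa or Macintosh source, with AllocRelocatable and MoveBlock used purely as illustrative names -- of why a handle keeps working even after the block behind it moves:

```cpp
// Minimal illustration of handle-based relocatable blocks -- not the actual
// Lisa/Macintosh memory manager, just the idea behind it. A Handle is a
// pointer to a master pointer owned by the memory manager.
#include <cstdlib>
#include <cstring>
#include <cstdio>

typedef char** Handle;

// Allocate a relocatable block and return a handle to it.
Handle AllocRelocatable(size_t size) {
    Handle h = (Handle)std::malloc(sizeof(char*));  // master pointer lives here
    *h = (char*)std::malloc(size);                  // the block itself
    return h;
}

// Pretend the heap is being compacted: the block's bytes move to a new
// address, but only the master pointer changes -- every handle the
// application holds remains valid.
void MoveBlock(Handle h, size_t size) {
    char* moved = (char*)std::malloc(size);
    std::memcpy(moved, *h, size);
    std::free(*h);
    *h = moved;
}

int main() {
    Handle h = AllocRelocatable(32);
    std::strcpy(*h, "hello");  // always dereference twice: handle -> block
    MoveBlock(h, 32);          // block relocated during "compaction"
    std::printf("%s\n", *h);   // still prints "hello"
    return 0;
}
```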
The primary author of the Lisa word processor and its memory manager was Tom Malloy, an original member of the Lisa team and Apple's first recruit from Xerox PARC. Tom had worked on the Bravo word processor at PARC under the leadership of Charles Simonyi, and used many of the techniques that he learned there in his Lisa code.
Even though Bud Tribble had to leave the Mac team in December 1981 in order to retain his standing in the M.D./Ph.D. program at the University of Washington, he decided that he could still do the initial implementation of the memory manager, as we were planning all along, hoping to finish it quickly after he moved back to Seattle, before classes started. He obtained a copy of the memory manager source from Tom Malloy, but he was in for a shock when he began to read the code.
The memory manager source lacked comments, which was disappointing, but the biggest obstacle was the names selected for variables and procedures: all the vowels were gone! Every identifier seemed to be an unpronounceable jumble of consonants, making it much harder to understand the code, since a variable's meaning was far from obvious. We wondered why the code was written in such an odd fashion. What happened to all of the vowels?
It turns out that Tom Malloy was greatly influenced by his mentor at Xerox, a strong-willed, eccentric programmer named Charles Simonyi. Charles was quite a character, holding many strong opinions about the best way to create software, developing and advocating a number of distinctive coding techniques, which Tom brought to the Lisa team. One of the most controversial techniques was a particular method of naming the identifiers used by a program, mandating that the beginning of each variable name be determined by the type of the variable.
However, most of the compilers in the early eighties restricted the length of variable names, usually to only 8 characters. Since the beginning of each name had to include the type, there weren't enough characters left over to use a meaningful name describing the purpose of the variable. But Charles had a sort of work-around, which was to leave all of the vowels out of the name.
The lack of vowels made programs look like they were written in some inscrutable foreign language. Since Charles Simonyi was born and raised in Hungary (defecting to the west at age 17), his coding style came to be known as "Hungarian". Tom Malloy's memory manager was an outstanding specimen of Hungarian Pascal code, with the identifiers looking like they were chosen by Superman's enemy from the 5th dimension, Mr. Mxyzptlk.
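To get a feel for what Bud was up against, here is a contrived pair of declarations -- not Tom Malloy's actual code, just invented names for illustration -- showing the same two routines first in type-prefixed, vowel-stripped "Hungarian" style and then with ordinary names:

```cpp
// Contrived illustration only -- not Tom Malloy's actual code.
// "Hungarian" style: a type prefix (h = handle, p = pointer, cb = count of
// bytes) eats into the 8-character identifier limit, and the vowels are
// dropped to make room for what little of the name remains.
char** HNwBlk(unsigned short cbBlk);          // "handle: new block"?
void   CmpctHp(char* pHpStrt, char* pHpNd);   // "compact heap, start to end"

// The same routines with the vowels (and the meaning) restored:
char** NewBlockHandle(unsigned short blockSize);
void   CompactHeap(char* heapStart, char* heapEnd);
```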
Bud decided that it would be too error prone to try to translate the Hungarian memory manager directly into assembly language. First, he made a pass through it to strip the type prefixes and restore the vowels to all the identifier names, so you could read the code without getting a headache, before adding lots of block comments to explain the purpose of various sub-components.
A few weeks later, when Bud came back to attend one of our first retreats, he brought with him a nicely coded, efficient assembly language version of the memory manager, complete with easy to read variable names, which immediately became a cornerstone of our rapidly evolving Macintosh operating system. | 1 | 2 |
<urn:uuid:ec859ce1-c037-42f3-9648-9c428f0d2d9c> | Blynk Board Project Guide
So you've provisioned your SparkFun Blynk Board -- connected it to your Wi-Fi network and started using the zeRGBa to control the RGB LED -- now what? Time to build some projects!
This tutorial will walk you through fourteen Blynk projects, which range from blinking an LED with a smartphone to setting up a tweeting, moisture-sensing house plant.
This tutorial follows our "Getting Started with the SparkFun Blynk Board" tutorial, which demonstrates how to provision your Blynk Board and get it connected to a Blynk project.
Have you just powered up your Blynk Board? You need to get your board on Wi-Fi first! Head over to the Getting Started tutorial to learn how.
All of the projects in this guide are pre-loaded into the Blynk Board. That means you don't have to write any code -- just drag and drop some Blynk widgets, configure some settings and play! This tutorial will help familiarize you with both the Blynk Board hardware and the Blynk app, so, once you're ready, you can jump into customizing the Blynk Board code and creating a project of your own.
We'll be (over-)using electrical engineering terms like "voltage", "digital", "analog", and "signal" throughout this tutorial, but that doesn't mean you need to be an electrical engineer to know what they mean.
We pride ourselves on our comprehensive list of conceptual tutorials, which cover topics ranging from basics, like What is Electricity? or Voltage, Current, Resistance, and Ohm's Law to more advanced tutorials, like Logic Levels and I2C.
We'll link to tutorials as we introduce new concepts throughout this tutorial. If you ever feel like you're in too deep, take a detour through some of those first!
Before we really dive into those projects, though, let's familiarize ourselves with the Blynk Board and all of the components it features. Click the "Next Page" button below to proceed to the Blynk Board Overview section (or click "View as Single Page" to load the entire tutorial in all of its glory).
Blynk Board Overview
You're probably already familiar with the most important Blynk Board component -- the shiny RGB LED -- but there's a whole lot more included with the board. Throughout these projects you'll explore everything the Blynk Board has to offer, but here's a quick overview:
Meet the Blynk Board Pins
The Blynk Board interfaces with the outside world using input/output (I/O) "pins" -- tiny "fingers" that can either control real-world objects, like motors or LEDs, or read in values from sensors (for example light or position).
Each of the Blynk Board's pins is accessible via the large, metal-encircled holes on the edge of the board. These large holes are designed to interface with alligator clip cables -- a staple interface cable for beginner and advanced electrical engineers alike.
Each of the Blynk Board's alligator-clippable pins is labeled with white text towards the center of the board. The Blynk Board pins can be broken down into a few categories: general-purpose (GP), analog input, and power output.
General Purpose Input/Output (GPIO) Pins
There are eight "general-purpose" input/output (GPIO) pins. These are the "worker-bees" to the Blynk Board's main processor "queen". You can use them to control outputs -- like LEDs or motors -- or as inputs, gathering data from buttons, switches, encoders, and more.
| Pin | Description |
|-----|-------------|
| 12 | Input or PWM-capable output. |
| 13 | Input or PWM-capable output. |
| 15 | Input or PWM-capable output (pull-down resistor). |
| 16 | Input (internal pull-down resistor). |
| 0 | Input; connected to on-board button. |
| 5 | Output; connected to on-board LED. |
We recommend against using the RX and TX pins unless you really need them, but the rest are free for interfacing with the rest of the world as you desire!
Analog Input (ADC) Pin
A very special pin labeled "ADC" sports the Blynk Board's analog-to-digital converter (ADC). This pin translates analog voltages to the digital 1's and 0's a computer can understand.
This pin is mostly used to read the values of real-world sensors -- you can connect it to light sensors, motion sensors, flex sensors, and all sorts of other physical-world-sensing electronic components.
In addition to the Blynk Board's I/O pins, the power rails are also broken out to alligator-clip pins. These are the pins labeled "VIN", "3.3V", and "GND".
You'll get very accustomed to using these pins -- especially the ground pin. They have all sorts of uses -- ranging from powering motors to providing a reference voltage for a potentiometer.
While the Blynk Board includes a variety of inputs and outputs, we could never fit as much onto the board as we'd like. This page lists the handful of wires, sensors, LEDs, and other electronic components that tie-in well with the Blynk Board projects.
If you have the Blynk Board IoT Starter Kit, you're probably already set up with most of these components in the wishlist. (Everything except the IoT Power Relay, in fact.)
Don't worry if your electronics toolbox isn't outfitted with one, or more, of these components yet!
We've designed the projects in this guide to all be do-able regardless of whether-or-not you have external components plugged into the board. (You may just get very tired of using the Blynk Board's temp/humidity sensor input, or RGB LED output.)
If you already have a Blynk board but just need the components to follow along with this tutorial, check out the wishlist below!
Project 1: Blynk Button, Physical LED
Enough reading, time for blinking/Blynking! Our first project explores one of the most fundamental concepts in electronics and programming: digital input and output. A digital signal has a finite number of states -- in fact, it usually only has two possible conditions: ON (a.k.a. HIGH, 1) or OFF (a.k.a. LOW, 0).
💡Blink – the Electronics "Hello, World"
As simple as this project may look, blinking an LED is the first step towards a long, fruitful journey of electronics tinkering. You'd be surprised at how many other real-world objects you can manipulate with a simple HIGH/LOW digital signal: you can turn a Relay on or off -- which can, in turn, control power to any household electronics. You can use digital signals to spin motors (or at least drive a motor controller). Or you can quickly pulse a digital signal to produce tones in a buzzer.
Using Blynk's Button widget, we can send a digital signal to the Blynk Board. If we attach that input to the right output on the Blynk board, we can use the HIGH/LOW signal to turn an LED on or off.
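No code is required for this project, but if you're curious what enables it, the Blynk Arduino library handles "direct" digital-pin widgets (like a Button tied to pin 5) all on its own. A bare-bones sketch along these lines is all it takes -- the auth token and Wi-Fi credentials shown here are placeholders, not real values:

```cpp
// Minimal ESP8266/Blynk skeleton. Direct digital-pin widgets (like a
// Button widget tied to pin 5) are handled by the library automatically.
#include <ESP8266WiFi.h>
#include <BlynkSimpleEsp8266.h>

char auth[] = "YourAuthToken";  // placeholder -- your project's auth token
char ssid[] = "YourNetwork";    // placeholder -- your Wi-Fi network name
char pass[] = "YourPassword";   // placeholder -- your Wi-Fi password

void setup() {
  Blynk.begin(auth, ssid, pass);  // connect to Wi-Fi and the Blynk server
}

void loop() {
  Blynk.run();  // keep the connection alive and process widget writes
}
```

The sketches in later projects all build on this skeleton -- only the extra handlers and helper functions are shown.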
By now you should already have a Blynk project -- complete with an LED-controlling zeRGBa -- running on your phone. We're going to continue using this project for our experimenting in this guide.
Don't delete the BlynkMe project! We'll continue using the provisioning-provided Blynk project throughout this tutorial. Later, after coming up with a Blynk project of your own, you can create more projects (or continue using this one).
Make sure you keep the Blynk Board QR-Code Card – although it won't supply your account with more energy, it can be used to re-provision the Blynk Board.
Since we'll be using the same project throughout, you'll eventually want to make some space for more/bigger widgets. So to begin, let's clear the project out (don't worry, the zeRGBa and LCD are coming back soon!).
To delete widgets from your Blynk project, follow these steps:
- If your project is still running, stop it by clicking the square stop button in the upper right-hand corner.
- Tap the zeRGBa widget to open its settings.
- Scroll down to the bottom of the zeRGBa settings and press the red delete button.
- Confirm the deletion -- on iOS click Delete Widget, on an Android hit "OK" on the popup dialog.
Follow the same set of steps to remove the LCD widget from the project.
Adding a Button to Pin 5
Let's start by adding a simple button widget. Here's how:
- Make sure your project is not running -- the upper-right icon should be a triangular play button.
- Touch anywhere in the blank, gray project space. A toolbox should open up on the right side with all of your widgets to choose from.
- Select the Button widget by tapping it. You'll find it at the top of the "Controllers" list.
- Tap and hold the button widget to drag it anywhere within the project space. You've got a lot of room to work with right now.
- Touch the Button Widget to bring up the settings page, and modify these values:
- Name: "LED" – While the widget is a button, we'll be using it to control an LED.
- Output: 5 – in the "Digital" list.
- Color: Click the red circle to change the color of the button. Try blue, since we're toggling a blue LED!
- Mode: Take your pick. Try them both!
- Confirm the settings.
- If you're using an Android, hit the back arrow in the upper-left corner
- If you're using an iOS device, hit the OK button.
Now that the button is all configured, run the project by tapping the play button in the upper-right corner of the screen.
Once the project is running, tap your new blue button widget. When the widget is set to ON, the tiny blue LED should also turn on.
Button: Push vs. Switch
Try switching the button's mode between push and switch. Whenever you need to make changes to a widget's settings, tap the upper-right stop button, then tap the widget you'd like to configure. Once you're done configuring, confirm the changes ("OK" button on iOS, upper-left back-arrow on Android), and set the project back to run mode.
If you have the widget set to PUSH, you’ll have to hold the button down to keep the LED on. SWITCH mode allows you to set it and leave it. Give them both a try and see which you prefer.
Going Further: Adding an Offboard LED
While it's a useful status indicator, that blue LED is so small it's barely visible above the shine of the RGB LED. Combining a couple of alligator clip cables with a 330Ω resistor and an LED of your choice, you can take the Blynk Board's LED control off-board.
LED - Assorted (20 pack) – COM-12062
LED Rainbow Pack - 5mm PTH – COM-12903
First, locate the LED's positive, anode pin -- you'll be able to identify it by the longer leg.
Bend the long leg out 90°, then twist it together with one of the legs of the 330Ω resistor (either end, resistors aren't polarized).
Next grab two alligator clip cables -- one black, the other green (although the color doesn't matter, using black for the ground wire is a nice convention to follow). Clamp one end of the black cable to the LED, and clamp one end of the other cable to the resistor.
Plug the other end of the black cable into the "GND" pin, and the other end of the green cable to the "5" pin.
Now flick the Blynk app's LED button again. Not only will the blue LED toggle, but your offboard LED should too! If the LED isn't turning on, try swapping the alligator clip cables around on the LED and resistor legs.
Changing the Digital Pins
Now try driving the offboard LED using pin 12. Move the green alligator clip from the "5" pin to the "12".
You'll also need to either add another button widget, or change the settings of the one you've already laid down.
Feel free to repeat the same experiment on pins 13 and 15! Avoid pins 0 and 16 for now; they'll be used as inputs later in this tutorial.
Project 2: Physical Button, Blynk LED
In the previous experiment, we used the button widget to receive a digital input and produce a digital output -- pressing a button on the app toggled an LED on the board. Now let's do the opposite -- use a button on the Blynk Board to toggle an "LED" in the app.
This project introduces the LED widget -- a simple tool for indicating the digital status of a Blynk Board input.
There should still be plenty of room in the Blynk Board project for the LED widget. You can either keep the button widget from the previous project, or remove it to save a little space. If it's not bugging you, we suggest keeping the button widget around -- you'll be re-configuring and using it again soon.
Saving and Re-Purposing Widgets
Widgets cost Blynk energy! Even if you get most of that energy refunded when you remove it from a project, it can take a psychological toll – every energy point is precious!
Throughout this guide, never feel obligated to remove any widget from a project – even if you're not using it in the active project. That's especially true with the button and value widgets, which will be recurring throughout this project guide.
Add an LED Widget to V1
Like the button widget before, follow these steps to add an LED widget:
- If your project is running, touch the square button in the upper-right corner to stop it.
- Touch anywhere in the blank project space to open the widget box.
- Select the LED widget near the top of the "Displays" section.
- Drag and position the LED widget anywhere in your project space.
- Touch the LED widget to open up the Settings dialog, and adjust these values:
- Name: "Btn" (not "Button" for reasons...)
- Pin: V1. Any pin beginning with a "V" will be found under the "Virtual" list.
- Color: Touch the red circle to change the color of your LED. You can even set it up as a mythical black LED.
Touch the Play button in the upper-right corner to start the project back up.
With your project running, push down on the little gold circle of the Blynk Board's push-button.
While you're holding the button down, you should see the Blynk project's LED illuminate. Depending on lag time, the LED may take a couple seconds to notice the button is pressed. Releasing the button will turn the LED back off.
V1, which we're using in this example to control the Blynk LED state, is one of the Blynk project's 32 virtual pins – a custom-programmed input or output that can read or write values of all types to the Blynk Board or app.
Instead of directly reading or writing to digital or analog pins, virtual pins have to be implemented in firmware (the code running on the board). When you start writing your own Blynk programs, you can re-define these virtual pins to read or write any value, or to control anything that meets your fancy. For now, though, these pins are all defined in the Blynk Board's firmware; you should discover nearly all 32 of them throughout this guide.
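For a sense of what a virtual-pin write looks like on the firmware side, here's one way the button-to-LED-widget link might be implemented. This is a sketch building on the Project 1 skeleton (same includes and credentials); the assumption that the on-board button pulls pin 0 LOW when pressed is ours, not something spelled out above:

```cpp
const int BUTTON_PIN = 0;   // on-board button
int lastState = HIGH;       // assumes the pin reads HIGH until the button is pressed

void setup() {
  pinMode(BUTTON_PIN, INPUT_PULLUP);
  // Blynk.begin(auth, ssid, pass) as in the Project 1 skeleton
}

void loop() {
  Blynk.run();
  int state = digitalRead(BUTTON_PIN);
  if (state != lastState) {
    // LED widgets expect a brightness from 0 (off) to 255 (fully on)
    Blynk.virtualWrite(V1, (state == LOW) ? 255 : 0);
    lastState = state;
  }
}
```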
Going Further: Launch a Rocket
Blynk LED widgets are great for indicating the digital status of any input pin, or any other virtual pin. You can tie just about any digital input into the 0 pin on the Blynk Board.
For example, grab a couple alligator clips and a rocket-launcher-style toggle switch, then connect them up like this:
Be careful not to allow the two alligator clips to touch -- it's a tight fit, but it works!
Then connect the black wire to GND and the colored cable to 0.
Now, turning on the LED is even more satisfying! When the toggle switch is set to "ON", the LED should illuminate.
Project 3: Slide-Dimming LEDs
Now that you're an expert on digital inputs and outputs, it's time to throw a curveball with analog signals. Analog values can take on any shape and be any value among the infinite possibilities in our world.
As with digital signals, the Blynk Board can also produce analog outputs or read in analog inputs. By producing an analog output, the Blynk Board can dim an LED, instead of being left with either turning it on or off.
To produce analog outputs, we'll use the Slider widget in the Blynk app. The slider allows you to precisely set the value of an output on the Blynk Board -- it's not just ON or OFF. Now, you can set a value between 0-255, 0-1023, -8 to +8, or whatever else you please.
Pulse-Width Modulation (PWM)
To be honest, the Blynk Board actually can't produce truly analog outputs. Instead, it quickly pulses a digital signal high and low to produce an average voltage in a technique called pulse-width modulation (PWM).
PWM waves aren't really analog, but they go up and down so fast that a lot of components – like LEDs – can't tell the difference.
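From a sketch, dimming an LED with PWM boils down to a single analogWrite() call. Here's a rough, hedged equivalent of what the slider is doing behind the scenes -- note that the ESP8266 Arduino core's default PWM range is 0–1023, so a 0–255 slider value gets scaled up:

```cpp
const int LED_PIN = 5;   // the Blynk Board's small blue LED

void setup() {
  pinMode(LED_PIN, OUTPUT);
  // Blynk.begin(auth, ssid, pass) as in the Project 1 skeleton
}

// Dim the LED from a 0-255 value (e.g. a value forwarded from a slider widget)
void setLedBrightness(int value) {
  // ESP8266 analogWrite() defaults to a 0-1023 duty-cycle range, so scale up
  analogWrite(LED_PIN, map(value, 0, 255, 0, 1023));
}
```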
Once again, you should have plenty of room left in your project -- only delete widgets if you want to clean up a bit. However, if you still have the button triggering the pin 5 blue LED, you'll need to disconnect it in order to use a slider on the pin.
One Pin At a Time
When configured to monitor or control a pin, a Blynk widget lays claim over that pin until it's disconnected. In fact, in most cases the Blynk app won't let you assign one pin to multiple widgets at once.
By limiting pin-control to one widget at time, we make sure the Blynk Board can't get confused – you wouldn't like it when it's confused.
Disconnect the Button From Pin 5
- Stop the project.
- Touch the button to bring up its settings.
- Change the pin to the dash (–) and hit OK a couple times.
The button will remain in your project -- you won't lose any energy -- but it'll be disconnected from any digital or virtual pins for now. Pressing it while the project is running won't have any effect.
Connect a Slider Widget to Pin 5
- Touch anywhere in the blank project space to open the widget box.
- Select the Slider near the top of the "Controllers" section.
- Drag and position the Slider widget anywhere in your project space.
- Touch the Slider widget to open up the Settings dialog, and adjust these values:
- Name: "LED Dimming" – we're using it to control the LED
- Pin: 5 – under the "Digital" list
- Range: 0⟷255, covering the full PWM output range.
- Color: Touch the red circle to change the color of your slider.
Confirm the settings, and run the project.
Once the project is running, try grabbing the slider and gradually moving it from one side to the other. The small, blue LED should brighten and dim as you do so. The closer the slider value is to 0, the dimmer it will be. 255 is 100% ON and 0 is totally off.
You can also give the large slider a try. Both sliders accomplish the same task, but the large sliders tend to provide a bit more precision over the pin's value.
Going Further: RGB Precision Control
Sliders can take on all sorts of applications in a Blynk project. In addition to directly controlling a digital pin's PWM value, they can be used to provide a range of input to firmware running on the Blynk Board.
In fact, we've set up virtual pins 2, 3, and 4 to individually control the red, green, and blue channels of the RGB LED. Try adding three more sliders:
Run the project, and slide around. You may find that the three individual sliders provide more precise control over the mixed color of the RGB LED compared to the zeRGBa widget.
Time for another admission of guilt: We've been holding back the full awe -- and terror -- of the Blynk Board's RGB LED. In fact, we've been limiting the LED brightness to about 12.5% of its full power.
To set the maximum range of the RGB LED, add a slider to V15 -- you can re-purpose the small slider widget controlling the pin 5 LED, if you'd like. Name it "Brightness", and once again set the range to 0-255.
Play with all four sliders to see the full range of colors you can create. Just be careful! That LED really can get blindingly bright.
Dimming LEDs isn't all the sliders are good for. Later projects will use them as input control, setting values like Twitter timers, moisture thresholds, and sensor update rates.
Project 4: Temperature and Humidity Values
Blynk's Value widget is the workhorse of many-a-Blynk project. Set to a digital pin, it can display the real-time HIGH or LOW values of the pin. Set to the proper virtual pin, it can display as much information as you can fit into four characters.
In this project, we'll use two-or-three Blynk value widgets to read how hot your Blynk Board is running and find out whether it's hydrated enough.
This is the first project to use the Blynk Board's on-board temperature and humidity sensor -- the tiny, white square adjacent to the "12" pin. This is the first step towards creating environment-sensing projects -- for example, you could wire up a relay to turn a fan on or off depending on the local weather conditions.
Clean up your Blynk Board project as necessary, make sure the project is stopped, and add three new value widgets.
Add Three Value Widgets to V5, V6, and V7
The Value widgets are located at the top of the "Displays" category. Once in the project, set the widgets up like this:
| Name | Pin | Min | Max | Frequency (or Reading Rate) |
|------|-----|-----|-----|------------------------------|
| Temp F | V5 | – | – | 1 sec |
| Temp C | V6 | – | – | 1 sec |
| Humidity | V7 | – | – | 1 sec |
As always, feel free to adjust your widget colors to your heart's delight.
The frequency (or reading rate) setting controls how often the Blynk app asks the Blynk Board for updated values. Don't set it to push, though, as the Blynk Board firmware isn't configured to "push" these values to the app.
For these virtual pins, the range (defaulted to 0-1023) won't have any effect on the displayed value -- you can ignore it.
Once you've set all three value widgets up, run the project.
A second-or-so after you hit Play, you should see the three values begin to update. "Temp F" and "Temp C" display the temperature in Fahrenheit and Celsius, respectively, while "Humidity" displays the relative humidity as a percentage.
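On the firmware side, each of those reading-rate requests lands in a BLYNK_READ() handler. Here's a sketch of how V5, V6, and V7 might be served -- readTempC() and readHumidity() are hypothetical stand-in helpers for whatever driver talks to the board's temperature/humidity sensor, not real library calls:

```cpp
// Stand-in sensor helpers -- replace with a real driver for the on-board sensor.
float readTempC()    { return 25.0; }  // placeholder value, degrees Celsius
float readHumidity() { return 40.0; }  // placeholder value, percent RH

BLYNK_READ(V5) {   // the app asks for "Temp F" at the widget's reading rate
  Blynk.virtualWrite(V5, readTempC() * 9.0 / 5.0 + 32.0);  // C-to-F conversion
}

BLYNK_READ(V6) {   // "Temp C"
  Blynk.virtualWrite(V6, readTempC());
}

BLYNK_READ(V7) {   // "Humidity"
  Blynk.virtualWrite(V7, readHumidity());
}
```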
The most effective way to interact with this project is to get up close to the white temperature/humidity sensor and blow on it. Your breath should quickly raise the humidity reading, before it slowly drops back down. Or, if you can take your Blynk Board outside, go check the environment readings against your local weatherman.
You can probably tell by placing a finger under the Blynk Board that it tends to run hot. Don't worry! Your desk probably isn't 90°F.
The humidity sensor should still be correct, but, to get a more accurate temperature reading, try unplugging the board, letting it cool off for a minute-or-so, and plugging it back in.
Continue to play around with the value widget settings to get a feel for the update rate.
You can use the value widget for just about any Blynk input built into the firmware. For example, try setting either the "Temp F" or "Temp C" widgets to V1 (you may have to disconnect the LED first). Now, when you press the button, you'll reinforce the idea that 255 is equivalent to 100% ON, and 0 is completely off.
Or -- if you want to get a jump-start on the next project -- set one of the value widgets' pins to ADC0, under the "Analog" list. What are these 0-1023 values all about? All of your questions will be answered in the next project!
Project 5: Gauging the Analog-to-Digital Converter
To read in analog inputs, the Blynk Board uses a special-purpose pin called an analog-to-digital converter (ADC). An ADC measures the voltage at a set pin and turns that into a digital value. The ADC on the Blynk Board produces a value between 0 and 1023 -- 0 being 0V/LOW/OFF, 1023 being 3.3V/HIGH/ON, and 512 being somewhere in the middle ~1.65V.
There are a variety of widgets that can be used to display the voltage at the ADC pin. In this project, we'll use the Gauge widget, which provides the real-time reading on the ADC pin in a nice, proportional manner.
The Blynk Board's ADC input is floating -- not electrically connected to any circuitry. Without something connected to the pin, the voltage may wildly fluctuate, so to produce a reliable, steady voltage, we'll need to wire it up.
There are a huge variety of analog-signal producing electronic components out there, but the most traditional is a potentiometer. "Pots" come in all sorts of shapes and sizes from rotary to linear to soft.
Trimpot 10K Ohm with Knob – COM-09806
Rotary Potentiometer - 10k Ohm, Linear – COM-09939
SoftPot Membrane Potentiometer - 50mm – SEN-08680
Slide Pot - X-Large (10k Linear Taper) – COM-09119
To really get the most out of this project, consider grabbing a sliding linear potentiometer and three alligator clip cables. Wire up the bottom of the slide pot like below -- red cable on the pin labeled "1", yellow connected to "2" and black connected to "3".
Then route the other ends of the alligator-clip cables like below -- red to 3.3V, black to GND, and yellow to ADC.
The yellow cable -- terminating on the ADC pin -- will carry a voltage that varies between 0V (GND) and 3.3V, depending on the position of the slide pot.
The Gauge widget takes up a good chunk of room, so you may need to remove some previous widgets before adding it. Keep a value widget from the previous experiment -- we'll use it to display the calculated voltage.
Connect a Gauge to ADC
You'll find the Gauge widget under the "Displays" section. Once it's added, modify the settings like so:
| Name | Pin | Min | Max | Frequency (or Reading Rate) |
|------|-----|-----|-----|------------------------------|
| ADC | ADC (under "Analog") | 0 | 1023 | 1 sec |
We're reading directly from the ADC -- the Blynk Board's analog-to-digital converter input. The 10-bit ADC produces a value between 0 and 1023, which is proportional to a voltage between 0 and about 3.3V. So, an ADC reading of 0 equates to 0V, 1023 equals 3.3V, and 512 is about 1.65V.
Repurpose a Value Widget to V8
If you don't want to continuously do that voltage-converting math in your head, modify a value widget to display V8, and set the name to Voltage.
The Blynk Board will convert the ADC reading to an equivalent voltage for you.
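That conversion is just a bit of proportional math. Here's a sketch of how the firmware might answer read requests on V8 -- the 3.3V full-scale figure comes from the description above; the actual firmware may scale slightly differently:

```cpp
BLYNK_READ(V8) {
  int counts = analogRead(A0);             // 0-1023 from the ADC pin
  float voltage = counts * 3.3 / 1023.0;   // 0 counts = 0V, 1023 counts = 3.3V
  Blynk.virtualWrite(V8, voltage);         // send the calculated voltage to the app
}
```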
Run the project, and watch for the gauge to settle in on a value. If you have a potentiometer wired up, the reading should remain rather steady. Try moving the wiper up and down.
If you don't have a potentiometer handy or any way of connecting it to the Blynk board, don't fret. You're a variable resistor too! You can test out the ADC by putting a finger on the "ADC" pin.
You should be able to move the gauge around by placing another finger on either the "GND", "VIN", or "3.3V".
(Electricity is running through your body, but it's a minuscule, insignificant amount. You don't have anything to worry about.)
There are a huge variety of analog-signal producing electronic components out there. You could wire up an accelerometer, stick the circuit on a washer/dryer, and check the analog readings to observe if your laundry is done or not. Or wire up a force-sensitive resistor, hide it under your doormat, and check if anyone's at the front door.
Later in this guide, we'll wire the ADC up to a Soil Moisture sensor and connect your houseplant to your twitter account, so it can notify the world when it's thirsty.
Project 6: Automating With the Timer
A large chunk of Internet-of-Things projects revolve around home automation -- a classic example of which is automatically switching your lights on and off. Using the Blynk Timer widget, you can trigger specific outputs to fire at any time of the day -- even if your app is closed and your smart device is off!
The timer has a pair of settings: a start time and a stop time. When the start time is triggered, the timer's configured pin turns HIGH. When the stop time is met, the pin goes back into a LOW state.
All you'll need for this project is the simple but powerful Timer widget.
Add a Timer Widget on V9
Add the Timer widget to your project -- you'll find it under the "Controllers" list. Then tap the widget to open up the settings page.
Depending on what you have plugged into the board, there are a variety of options available to you on the Pin setting. For now, let's use it to trigger an RGB light show. Set the Timer's pin to V9, which is configured to start Rainbow Mode on the Blynk Board's RGB LED.
Alternatively, you can use it to toggle any digital pin -- like the pin 5 blue LED, or an external LED on pins 12 or 13.
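Firmware-side, the Timer widget is indistinguishable from a button: it simply writes 1 at the start time and 0 at the stop time. A sketch of how a V9 "rainbow mode" flag might be handled, with loop() extended from the Project 1 skeleton (the actual color-wheel animation is left as a comment):

```cpp
bool rainbowMode = false;

BLYNK_WRITE(V9) {
  // The Timer widget writes 1 at the start time and 0 at the stop time
  rainbowMode = (param.asInt() == 1);
}

void loop() {
  Blynk.run();
  if (rainbowMode) {
    // ...step the RGB LED through a color wheel here...
  }
}
```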
For experimenting purposes, set the start time to about a minute from now and the stop time to 30-60 seconds later. Once you get a feel for the timer, you can start using it more strategically.
As usual, give it any name and color you please.
Once you've set your timer up, run the project. Hopefully you get it running before the timer's programmed Start Time! If not, stop and increase the start time another 30 seconds-or-so.
The timer has a hidden feature in run mode: if you tap it you can switch between the start-time display and a countdown display. Countdown display mode is especially handy while you're just testing things out.
Once the timer triggers, it will fade in and out to indicate the pin is on. If the timer's fading, your pin should be active. Watch the RGB do its hypnotic rainbow dance.
Once the timer hits the Stop Time, the LED should return to its previous duties -- waiting to shine another day (literally, you better adjust the start time again).
Going Further: Controlling Lamps With a Relay
The PowerSwitch Tail and IoT Power Relay are our favorite general-purpose components in the catalog. With a simple HIGH/LOW signal from the Blynk Board, you can use the relay to control power to any device you would otherwise connect to a wall outlet. Best of all, it's completely enclosed and totally safe.
Parts Not Included
The PowerSwitch Tail used in this example is not included with the Blynk Board IoT Starter Kit. The IoT Power Relay is also not included and can be used as an alternative option to the PowerSwitchTail II.
To follow along with this example, you'll need the PowerSwitch Tail or the IoT Power Relay, a long, Phillips-head screwdriver, a couple jumper wires and two alligator clip cables.
First, use your screwdriver to securely wire the jumpers into the PowerSwitch Tail's "+IN" and "-IN" pins (leave the "Ground" pin unconnected). Then clip alligator cables to the ends of those wires. Wire the "-IN"-connected cable to the Blynk Board's GND pin, and the "+IN" cable to the Blynk Board's pin 12.
Plug a lamp, fan, or any device you'd like to automate into the PowerSwitch Tail's 3-prong female connector. Then plug the male 3-prong connector into a wall outlet.
Then set up a new timer -- this time connected to pin 12. Adjust the start and stop times, and have the Blynk Board make sure your lights are off when you're out of the house.
Project 7: The LCD's Wealth of Information
The 16x2 Liquid-Crystal Display -- a 16-column, 2-row LCD, which can display any combination of up to 32 alphanumeric characters -- is one of the most commonly recurring components in electronic projects. It's simple to use, and it has the ability to convey a wealth of information pertaining to your project.
Blynk's LCD widget is similarly useful in displaying diagnostic and other Blynk-project information. In this project, we'll use the LCD widget to display everything from the Blynk Board's temperature and humidity readings, to the length of time it's been up-and-running.
This project requires the LCD widget as well as three button widgets, which you can repurpose from the previous projects.
Connect an LCD Widget to V10
Add an LCD widget from the "Displays" section of the widget box, and tap it to bring up the settings page.
Before adjusting anything else, slide the Simple/Advanced slider to Advanced. Then set the pin to V10, and adjust the background and text color as you please (can't beat white text on black).
Add Button Widgets to V11, V12, and V13
Set the three buttons up to trigger virtual pins 11, 12, and 13. Leave them in Push mode:
Once the buttons are set, you're ready to run. Until you hit one of the three buttons, the LCD may print a greeting message, but that will quickly fade once you trigger V11, 12 or 13.
Although it takes up a lot of room initially, you can see how valuable the LCD is -- and the wealth of information and text it can display. While the Value widgets are limited to four characters, the LCD can display up to 32!
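In advanced mode, the LCD widget is driven entirely from the firmware through the Blynk library's WidgetLCD helper. Here's a sketch of how one of the buttons might fill the screen -- the uptime readout is our own example, not necessarily what the stock firmware prints:

```cpp
WidgetLCD lcd(V10);   // advanced-mode LCD widget on V10

BLYNK_WRITE(V11) {    // first button
  if (param.asInt()) {                         // act on the press, ignore the release
    lcd.clear();
    lcd.print(0, 0, "Uptime (sec):");          // column 0, row 0
    lcd.print(0, 1, String(millis() / 1000));  // column 0, row 1
  }
}
```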
Project 8: Joystick Joyride
Joysticks are a staple input for a variety of control systems, including arcade gaming, RC-car driving, drone guiding, and assistive-tech maneuvering. They produce two pieces of data: position components along x- and y-axes. Using that data, a project can compute an angle between 0 and 360° and use it to drive another mobile mechanism.
Blynk's take on the joystick is analogous to a physical joystick -- as you move the center-stick around, it'll send x- and y-axis values between 0 and 255 to the Blynk Board. What are we going to do with that data? Spin a motor!
Servo - Generic (Sub-Micro Size)ROB-09065
To be more exact, we're going to use the joystick to drive a servo motor. Servos are specialized motors with a built-in feedback system, which allows for precise control over the motor's position. Instead of rotating continuously, like DC motors, a servo will move to a position you tell it to and stop (unless it's a continuous rotation servo). They're useful for projects which require complete control over movement, like opening or closing a door to an automatic pet-feeder.
Most servo motors are terminated with a 0.1"-pitch 3-pin female header. To interface it with your Blynk Board, plug a few male-to-male jumper wires into the servo socket (if you have the "connected" jumper wires, peel off a strip of three wires). Then clip a few alligator clip cables onto the ends of those wires.
Connect the cable wired to the servo's black wire to GND, red to VIN, and the white signal wire to pin 15.
Press one of the servo motor's mounts onto the motor head, so you can better-see the spin of the motor.
In addition to the Joystick widget, this project can also optionally use a gauge (or value) and a slider. The slider controls the servo motor's maximum angle, and the gauge displays the calculated servo position (especially handy if you don't have a servo connected).
Connect the Joystick to V14
Add a Joystick widget from the "Controllers" section. Slide the Split/Merge switch to Merge, and set the pin to V14. It's not required, but we recommend setting autoreturn to off.
Connect a Slider to V16
If you have a slider in your project, you can re-purpose it to adjust the servo's maximum angle. Set the pin to V16, and modify the range to make it easy to set your servo's maximum value.
Connect a Gauge or Value Widget to V17
Finally, the project produces a virtual output on V17 displaying the servo's current angle. You can use a Value or Gauge widget to show this value. Neither is required -- but it does provide feedback if you don't have a servo motor attached.
Modify the range of the gauge, or else you might not get the right feel for the servo's position.
Once everything's set up, run it, and joystick!
As you rotate the stick, the servo should noisily reposition itself along the angle you've set.
In the background, the Blynk Board firmware is grabbing the x and y values of the joystick, doing some trigonometry on them to calculate an angle (in degrees), and pushing that calculation back out to V17.
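Here's a sketch of that math -- the merged Joystick widget delivers x and y in a single message, and atan2() turns them into an angle. The exact scaling onto the servo (and the V16 maximum-angle clamp) is our guess at the approach, building on the Project 1 skeleton, not a copy of the stock firmware:

```cpp
#include <Servo.h>

Servo servo;
int maxAngle = 180;          // updated by the V16 "maximum angle" slider

void setup() {
  servo.attach(15);          // servo signal wire on pin 15
  // Blynk.begin(auth, ssid, pass) as in the Project 1 skeleton
}

BLYNK_WRITE(V16) {           // maximum-angle slider
  maxAngle = param.asInt();
}

BLYNK_WRITE(V14) {           // merged joystick: x and y arrive together
  int x = param[0].asInt();  // 0-255, center around 128
  int y = param[1].asInt();
  float angle = atan2(y - 128, x - 128) * 180.0 / PI;  // -180..180 degrees
  if (angle < 0) angle += 360.0;                       // 0..360 degrees
  int servoPos = constrain(map((int)angle, 0, 360, 0, maxAngle), 0, 180);
  servo.write(servoPos);                 // reposition the servo
  Blynk.virtualWrite(V17, angle);        // report the calculated angle to the app
}
```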
Once you've got the servo rotating, connect something physical up to it! How about an automatic fish feeder?
Scrounge around for a bottlecap, and screw it into the servo's single-arm.
Then slot the servo arm onto your servo motor head, and check the motion of the bottlecap -- you may need to re-position the cap to get the rotation you need.
Now, when you rotate the joystick, you'll have a mobile-food-dumping apparatus -- perfectly sized for a goldfish!
Project 9: Graphing Voltage
Electrical engineers love measuring voltage. Whether we're using a multimeter to get a real-time view into a line's potential, or monitoring a periodic signal's shape using an oscilloscope, monitoring voltage can be critical to project-debugging.
While we can't really re-create an oscilloscope's signal-triggering in Blynk, we can chart the Blynk Board's voltage-over-time using the Graph widget. The graph widget periodically pulls in data from a virtual or physical pin and creates either a bar or line graph representing how that input changes over time. You can set the graph to draw as fast as four times per second, or as slow as once-a-minute.
The Blynk Board's input voltage will range anywhere from 3.7 to 6V -- well outside the acceptable input range of 0-3.3V. So, to properly measure the input voltage, we'll need to step it down using a voltage divider. Although it sounds complex, a voltage divider is actually just a pair of resistors sitting in between one voltage and another.
To create a voltage divider, first grab a couple 10kΩ resistors, and twist them together at the ends. Clip a yellow alligator clip to the twisted legs of the resistors, and connect red and black alligator cables to either of the other two legs.
Wire the other ends of the cables to ADC (yellow), VIN (red), and GND (black).
Voltage: divided. A voltage divider made out of two 10kΩ resistors will cut the voltage in half. So, if the Blynk Board's input voltage is 5V, the voltage at the ADC pin will only be 2.5V.
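The divider math in code form -- a sketch of how the firmware might turn the halved ADC voltage back into an estimate of VIN for the V20 graph. The resistor values and 3.3V full-scale figure are the ones quoted above; the stock firmware's exact constants may differ:

```cpp
// Voltage divider: V_adc = V_in * R2 / (R1 + R2).
// With R1 = R2 = 10k, the ADC sees exactly half of V_in.
BLYNK_READ(V20) {
  float vAdc = analogRead(A0) * 3.3 / 1023.0;  // voltage at the ADC pin
  float vIn  = vAdc * 2.0;                     // undo the divide-by-two
  Blynk.virtualWrite(V20, vIn);                // feed the graph widget
}
```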
In addition to the graph widget, you can optionally add two value widgets to help get a better view into the Blynk Board's voltage measuring.
Connect the Graph to V20
You'll find the Graph widget under the "Displays" section of the widget box. Add it, then tap it to configure. Set the pin to V20, and adjust the range to something like 0-6.
You can play with the Bar/Line switch, but a line graph seems to work best for this type of data.
Monitor the ADC and V8 With Value Widgets
For a bit more insight into the Blynk Board's ADC reading, consider adding a couple value widgets to monitor the ADC pin and V8 -- the calculated ADC voltage.
See how steady that USB supply is. Try zooming the graph in, so the minimum is 4 and maximum is 6. The more "wiggles" in the signal, the noisier your supply is.
Fortunately, the Blynk Board regulates that input voltage, to create a nice, steady 3.3V supply. In fact, if you want to measure the 3.3V supply, simply swap the red cable from VIN to 3.3V. Is it steadier than the VIN supply?
Plotting Battery Voltage
If you want to recreate that feeling as you watch your phone’s battery-life icon progressively empty – or the excitement of watching it charge, consider powering the Blynk Board with a LiPo Battery. There are a variety of Blynk Board-compatible LiPo batteries, we recommend either the 400mAh, 850mAh, or 1000mAh.
If you set the read rate to the maximum – 59 seconds – and let the Board run for a while, you should begin to see an interesting downward slope while the battery discharges. Or plug in the Blynk Board, and watch that slope incline.
The graph widget should work for any of the Blynk Board's output values. Try changing it to V5 or V6 -- see how the temperature fluctuates over time. You may need to adjust the graph's range to actually see the line.
Try plugging other Blynk Board outputs you've already used into the Graph widget. Make some interesting curves!
Project 10: Charting Lighting History
Blynk's History Graph widget takes the standard graph to the next level. It allows you to compare a widget's value to data from hours, days, weeks, even months back.
In this project, we'll plug readings from a light sensor into the History Graph. After you've let the project run for a while, you'll be able to track the sun rise/set time, or find out if someone's been snooping in a room when they're not supposed to be.
To measure ambient light, we're going to use a light-sensitive resistor called a photocell. When it's pitch-black, the photocell morphs into a large, 10kΩ resistor, but when light shines on the cell, the device's resistance drops closer to 1kΩ.
To create a voltage for the Blynk Board's ADC using the photocell's variable resistance, we need to pair it with a second resistor. The photocell and resistor will combine to create a variable voltage divider.
That second resistor should be somewhere in the middle of the photocell's resistance range -- right about 5kΩ. There aren't any 5kΩ resistors in the IoT Starter Kit, but it does include the means to create one! By combining two equal resistors in parallel, we can cut their total resistance in half.
To create a 5kΩ resistor, grab two 10kΩ resistors, and twist them together in parallel -- that is, twist the ends of both resistors together, so the bodies are touching each other. Then, twist one leg of the photocell together with one shared leg of our new 5kΩ resistor.
Clip a yellow alligator cable to the middle legs -- the behemoth that is two resistors and a photocell leg twisted together. Then clip a red cable onto the photocell's leg and a black cable onto the other resistor leg.
On the Blynk Board -- as you're probably used to by now -- clip the yellow to ADC, red to 3.3V, and black to GND.
This circuit will produce a higher voltage in the light and a lower voltage in the dark.
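In divider terms, V_ADC = 3.3V × R_fixed / (R_photo + R_fixed), so as the photocell's resistance drops in bright light, the ADC voltage climbs. A sketch of the V18 light reading the Value widget pulls in (our illustration of the mechanism, not a dump of the stock firmware):

```cpp
BLYNK_READ(V18) {
  // Brighter light -> lower photocell resistance -> higher ADC reading
  Blynk.virtualWrite(V18, analogRead(A0));   // raw 0-1023 light level
}
```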
This Blynk project combines the History Graph widget with a Value widget to display the real-time light reading.
Configure a Value Widget
Before adding the graph, add a Value widget and configure it to read from V18.
Set the update rate to 1 second, and name the widget "Light."
Add a History Graph Widget
Once the Value widget is in place, add a History Graph widget. In the settings, configure any one of the four PINs to V18.
The textbox adjacent to the pin should automatically update to "Light" (or whatever the Value widget is configured as).
There is one quirk with the History Graph widget -- it won't directly pull or request a variable's value. It relies on widgets like Value or Gauge to get the latest value from a virtual or physical pin.
After running the project, begin by monitoring the Value widget's reading -- it should vary between 0 and 1023. See how high you can get the value reading by shining a light on it. (Hint: your phone might just have a bright LED to shine on the photocell.)
Then cover up the photocell, or turn off your lights, and see how low you can get the reading. Or, add a zeRGBa, turn the brightness up to max, and have the Blynk Board feed back into its own light sensor.
To really get the most out of the history widget, you need to leave the project running for at least an hour. If it's about time to hang it up for the night, leave your Blynk project plugged in and graphing. Maybe you'll catch someone sneaking in and turning the light on!
If you ever want to delete old history from the History Graph, swipe left on the graph (while the project is running), and select "Erase data."
You can add up to four values to the History Graph -- play around with the other three pins to see what other interesting info you can graph.
You may have to remove some of the previous pins to adjust the graph's scale.
Celsius temperature and humidity -- usually around the same order of magnitude -- pair nicely on the graph together. Use the legend so you don't forget which is which!
Project 11: Terminal Chat
The word "terminal" may instill images of 80's hacker-kids playing a game of Global Thermonuclear War or 90's Mr. Anderson's following white rabbits, but, retro as they may sound, engineers still use terminals on a daily basis.
The Blynk Terminal widget allows you to pass endless information to and from the Blynk Board. It can be incredibly handy -- in fact, we'll use it in all four of the final projects to enter email addresses, pass debug information, and name your Blynk Board.
In this project, we'll use the terminal on your Blynk app and a terminal on your Blynk Board-connected computer to set up a "chat program."
There aren't any external components to connect to your Blynk Board in this experiment, but you may need to do some extra legwork to set up a terminal on your computer.
Install FTDI Drivers, Identify Your Serial Port
The Blynk Board uses a specialized chip called an "FTDI" to convert USB data to a simpler serial interface. If you've never used an FTDI-based device before, you'll probably need to install drivers on your computer. Our How to Install FTDI Drivers tutorial should help get your drivers installed, whether you're on a Mac, Windows, or Linux machine.
Once you’ve installed the drivers, your Blynk Board should show up on your computer as either COM# (if you’re on a Windows machine) or /dev/tty.usbserial-######## (if you’re on a Mac/Linux computer), where the #’s are unique numbers or alphabetic characters.
Download, Run, and Configure the Terminal
There are a huge variety of software serial terminals out there. If you don't already have one, read through our Serial Terminal Basics tutorial for some suggestions.
Once you've selected terminal software – and found your Blynk Board's serial port number – open it and set the baud rate to 9600. The Serial Terminal Basics tutorial linked above should have directions for configuring the serial port.
Using TeraTerm to communicate with the Blynk Board over a serial interface.
Don't be alarmed if your Blynk Board resets when you open the terminal. It may also print some debug messages as it re-connects -- they're handy, but nothing you'll really need to concern yourself with.
Once the port is open, swap back over to your Blynk project. Time to install another terminal!
Just one widget this time: the Terminal. We'll be using the Terminal widget for the rest of this tutorial, so make it cozy. And, don't delete it.
Add a Terminal Widget
Find the terminal widget under the "Displays" list of the widget box.
Once added, tap the terminal to enter the settings screen. Set the terminal widget's pin to V21. Keep "Input Line" and "Auto Scroll" set to ON.
Pick any screen and background color you please -- how about green background and black text, to get a little oppo-matrix style going.
That's it. Now, run the project.
Try typing something in your computer terminal -- you should see those same characters almost instantly pop up on your Blynk project's terminal.
Then try typing something into the Blynk terminal. They should show up on your computer.
Now have a conversation with yourself! Or share the project, and chat with a friend.
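The bridge between the two terminals is only a few lines of firmware. Here's a sketch of the idea using the Blynk library's WidgetTerminal helper, building on the Project 1 skeleton -- it's our illustration of the mechanism, not the stock firmware verbatim:

```cpp
WidgetTerminal terminal(V21);   // the app-side terminal widget

void setup() {
  Serial.begin(9600);           // match the 9600 baud computer terminal
  // Blynk.begin(auth, ssid, pass) as in the Project 1 skeleton
}

BLYNK_WRITE(V21) {
  // Text typed into the app's terminal arrives here -- echo it over USB
  Serial.println(param.asStr());
}

void loop() {
  Blynk.run();
  // Forward anything typed in the computer's terminal up to the app widget
  if (Serial.available()) {
    while (Serial.available()) {
      terminal.write(Serial.read());
    }
    terminal.flush();   // push the buffered characters to the app
  }
}
```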
Project 12: BotaniTweeting
For over ten years now, Twitter has been the microblog-of-choice for human and machine alike. While the rich-and-famous use-and-abuse twitter to reach their millions of followers, twitter-enabled projects like our old Kegerator or bots like Stupidcounter have found their own use for the service.
Blynk's Twitter widget is one of three notification-enabling devices in the app. After connecting it to a Twitter account, the widget will give your Blynk Board a voice on the world-wide-web.
This project, inspired by the Botanicalls Kit, will set your Blynk Board up as a fully-configurable plant soil moisture monitor. Plugged into your favorite house plant, the Blynk Board will give it a voice -- so it can shout to the world when it's thirsty.
Our handy Soil Moisture Sensor will be the hardware focus of this experiment.
At its core, this two-probed sensor is simply a resistance sensor. Wet soil is less resistive than dry soil, so a parched, dry plant will produce a lower voltage than a wet, sated one.
In addition to the soil moisture sensor, you'll need jumper wires, alligator clip cables, and a screwdriver.
Note that while the Soil Moisture Sensor included with the IoT Starter Kit has a screw terminal installed, the stand-alone product version does not. If you've bought the Soil Moisture Sensor separately without the screw terminals, you will need to solder wires or a connector to the board.
Hook Up The Soil Moisture Sensor
There are a few hoops to jump through to get the moisture sensor connected to your Blynk Board. To begin, grab a screwdriver, and three jumper wires -- black, red, and yellow.
Flip the board over to see the terminal labels. Plug the yellow wire into the "SIG" terminal, black wire into "GND", and red into "VCC". Use the small flathead screwdriver bit to securely tighten the jumper wires into the connector.
The SparkFun Pocket Screwdriver includes half-a-dozen bits – small/large, flat/Phillips – but they're hidden in the cap. To access the bits, unscrew the cap and pour them out. Look for the smallest flathead you can find in there, and slot it into the head.
Once the jumper wires are secured, clamp alligator clip cables onto the other ends -- match up the colors if you can!
Finally, clamp the other ends of the alligator clips to 3.3V (VCC/red), GND (GND/black), and the ADC (SIG/yellow).
The higher the reading on the ADC, the wetter (and happier) your plant is.
This project requires five widgets: Twitter, Terminal, a Value, and two Sliders. Hopefully you've got the Terminal -- and maybe a few others -- from previous projects. Here's how to set them up:
Set Up the Twitter Widget
Add the Twitter widget from the "Notifications" list. Move it anywhere you'd like, and tap it to configure.
Hit Connect Twitter, and the app will take you to a foreign screen, where you can log in to your Twitter account. This is an OAUTH connection from Blynk to your Twitter account -- if you ever want to disconnect Blynk from your account, you can do so in the Apps section of your account settings.
Once you've logged in and allowed Blynk access to your account, the Twitter widget should have a @YOUR_ACCOUNT link in the settings page. Confirm the settings, and head back to the project.
Set Up the Terminal
As with the previous project, the Terminal should be connected to V21, make sure "Input Line" is turned ON.
Give the terminal any color(s) you'd like.
Set Up the Sliders
A pair of sliders are used to set your plant's moisture threshold and the minimum tweet rate. Add or re-configure two sliders (large or regular) as so:
| Widget | Name | Pin | Min | Max |
|--------|------|-----|-----|-----|
| Slider | Minimum Tweet Rate | V24 | 5 | 60 |
The minimum tweet rate slider sets the minimum number of minutes between tweets. The Twitter widget can't tweet more often than once-a-minute, so make sure that's the bare-minimum of the slider. Set the maximum to as long as you'd like between tweets (e.g. 60 minutes, 720 minutes [12 hours], etc.)
If the reading out of the soil moisture sensor falls below the minimum threshold, it will begin tweeting as often as it's allowed (by the tweet rate), until the reading goes back up.
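Behind the scenes, that logic is a simple threshold-plus-rate-limit check. A hedged sketch of the idea follows -- the plant name, the threshold variable, and the message wording are placeholders, and the threshold slider's virtual pin is deliberately left out of the handler list since it isn't specified above. Call checkPlant() periodically from loop():

```cpp
String plantName        = "MyPlant";  // set via the terminal's "$name" command
int    dryThreshold     = 500;        // updated by the moisture-threshold slider
int    tweetMinutes     = 60;         // minimum minutes between tweets
unsigned long lastTweet = 0;

BLYNK_WRITE(V24) {                    // minimum-tweet-rate slider
  tweetMinutes = param.asInt();
}

void checkPlant() {                   // call periodically from loop()
  int moisture = analogRead(A0);      // lower reading = drier soil
  bool dry     = (moisture < dryThreshold);
  bool allowed = (millis() - lastTweet) > (unsigned long)tweetMinutes * 60000UL;
  if (dry && allowed) {
    Blynk.tweet(plantName + " is thirsty! Soil reading: " + String(moisture));
    lastTweet = millis();
  }
}
```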
Set up the Value
Finally, add or re-configure a value widget to monitor the ADC pin. You'll need that as you hone in on a good threshold value.
Once you've got all of those widgets set up, run the project. Plug your moisture sensor into your plant, and check the ADC reading.
If your soil is nice-and-moist, the reading should be somewhere around 700-800. Try watering your plant -- see if the reading goes up.
To verify that the project is functioning and tweeting, set the threshold to 1023 and set the tweet limit to 1. Within a minute-or-so, you should see a new tweet on your timeline.
So far, so good. Now the tricky part. You need to set the moisture threshold to a "dry soil" value. It'll be under the current value a bit. If your moisture is reading out at about 750, try setting the threshold to 740. Then you play the waiting game (or take a heaterizer to your plant's soil). When the soil dries up, your plant should tweet.
Going Further: Setting the Plant's Name
Why the terminal? To name your plant! Come up with a unique, identifiable name for your plant. Then, in the terminal, type `$MY_PLANTS_TWITTER_NAME` and hit enter. Make sure to type the "$" first. The terminal will catch that and begin tweeting with your new name next time your plant gets thirsty.
Now, when your plant tweets, it'll identify which one it is.
Project 13: Push Door, Push Phone
Push notifications: you either love them or hate them (or it varies, depending on the app sending them), but they can come in handy every once-in-a-while. The Blynk Push widget allows your Blynk Board to send your phone periodic push notifications, so you're the first to be made aware of a state change in the board.
This project combines a Door Switch Sensor with the push widget. When the door's state changes -- whether it opens or closes -- you'll be the first...well...at least the second to know!
This project is based around a Magnetic Door Switch -- a truly magical door-state sensor.
Magnetic Door Switch Set – COM-13247
This door switch is what we call a reed switch -- a magnetically-actuated electronic switch. There are two components to the device: the switch itself and a simple magnet. When the two components are in close proximity, the switch closes. And when they're pulled far enough apart, the switch opens up. These switches are commonly used as part of a burglar alarm system or as a proximity-detecting device.
Wire Up the Door Switch
To connect the door switch to your Blynk Board, you'll just need a couple alligator clip cables -- how about red and green. Clamp a red wire to one of the switch's wires and the green wire to the other.
Then clamp the other end of the red wire to 3.3V and the green wire to 16.
We've got a pull-down resistor on pin 16. So when the switch is open, it reads as LOW, but, when the switch closes, the pin connects directly to 3.3V and reads as HIGH.
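Here's a sketch of the state-change watch the firmware might run on pin 16, building on the Project 1 skeleton -- the notification wording and the V25 labels are our own. Call checkDoor() periodically from loop():

```cpp
const int DOOR_PIN = 16;
int lastDoorState = -1;

void setup() {
  pinMode(DOOR_PIN, INPUT);   // the tutorial notes a pull-down on this pin
  // Blynk.begin(auth, ssid, pass) as in the Project 1 skeleton
}

void checkDoor() {                      // call periodically from loop()
  int state = digitalRead(DOOR_PIN);    // HIGH = switch closed (magnet nearby)
  if (state != lastDoorState) {
    Blynk.virtualWrite(V25, state ? "Closed" : "Open");
    // The app limits these notifications to roughly one per minute
    Blynk.notify(state ? "Door closed!" : "Door opened!");
    lastDoorState = state;
  }
}
```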
This project uses three widgets: Push (of course), Terminal, and Value. Here's how to set them up:
Add the Push Widget
You'll find the Push widget under the "Notifications" list, towards the bottom. After adding it, tap it to configure its settings.
Notifications work differently in iOS and Android devices. If you're using an Android device, your settings will look like above. You can turn the "Notify when hardware goes offline" setting on or off at your discretion -- it can be handy in other projects. The Priority setting can be set to HIGH, but it will end up draining your phone's battery a little faster.
The iOS settings look like this:
Again, both of these sliders are left to your discretion. Enabling background notifications will likely take a bit more battery-life out of your phone.
Configure the Terminal
If you've still got the Terminal widget from previous projects, great -- leave it be. If not, configure it to use pin V21.
Configure the Value Widget
Finally, set the value widget to V25. This widget will display the up-to-date state of your door switch sensor.
Once everything's added, run the project! Try putting the two door switches close together, or pulling them apart. A few things should happen:
- The Terminal will print a debug message -- stating that the door opened or closed.
- The Value widget should display the up-to-date state of the door.
Hopefully, your phone will pop up a notification that the switch's state changed. Unfortunately (or fortunately, depending on how much you enjoy notifications), the Blynk app limits project notifications to once-a-minute. If you open and close the door too fast, you might get the first notification, but for the next minute you'll need to check the terminal or value widgets.
As with the previous project, you can set the name of your board by typing `$BOARD_NAME`. That name will be added to the notification that pops up on your phone.
Tape or screw the door switch sensor to something! If you've got the IoT Starter Kit, you may already have something to test it out on.
The unique red SparkFun box has been re-purposed as a project enclosure in countless projects. Or, you can use it to store your valuables.
With a Blynk-enabled alarm, you'll be notified whenever someone's sneaking into your box!
Project 14: Status Emails
The final notification-creating Blynk gadget is the Email widget, which allows you to tailor an Email's subject and message and send it to anyone's inbox. It's a great power; use it responsibly. Don't go creating a Blynking spam bot!
This project gathers data from all of the sensors we've been using in these projects, combines them into an Email message, and sends that message to an Email address of your choice (set using the Terminal widget).
This project uses three widgets: Email, Terminal, and a Button.
Add the Email Widget
Find the Email widget under the "Notifications" list, towards the bottom of the widget box.
You're understandably conditioned to tap the Email widget to configure it, but that's not necessary this time. The Email widget doesn't have any settings! All it does is provide your Blynk Board with Email-sending ability.
Configure a Terminal Widget
As with the previous experiments, we'll be using the Terminal widget as a general-purpose string input/output device. If you've still got the Terminal widget from previous projects, great – leave it be. If not, configure it to use pin V21.
Most importantly, the Terminal widget will be used to enter an Email address; otherwise, the Blynk Board will have nowhere to send your data.
Connect a Button to V27
Finally, we'll use a button to trigger the sending of an Email. Add or reconfigure a button to activate V27. Make sure it's a Push button.
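A sketch of how the V27 button might fire off the status email, building on the Project 1 skeleton -- the address capture, subject line, and body contents are illustrative placeholders rather than the stock firmware's exact message:

```cpp
String emailAddress;   // captured from a "!you@example.com" terminal entry

BLYNK_WRITE(V27) {     // "Send Email!" button
  if (param.asInt() == 1 && emailAddress.length() > 0) {
    // Build a short status report -- swap in whatever readings you like
    String body = "ADC: " + String(analogRead(A0)) +
                  ", Uptime: " + String(millis() / 1000) + " s";
    Blynk.email(emailAddress.c_str(), "Blynk Board Status", body.c_str());
  }
}
```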
After running the project, tap into the Terminal input box. Type an exclamation point (!), then type your (or a friend's) email address. Hit enter, and the Blynk Board should respond with a message verifying the email address you entered.
Now all that's left is to tap the "Send Email!" button, and check your inbox. A status update should be dispatched to your inbox.
Don't go mashing on the "Send Email!" button, now. The Email widget is limited to sending at most one email a minute. If you're too trigger-happy, the Terminal will let you know when you can next send an email.
Resources and Going Further
Finished all the projects? Wondering where you go from here? Now that you're a professional Blynker, consider re-programming the Blynk Board to create a unique project of your own! Check out our Programming the Blynk Board in Arduino tutorial to find out how to level up your Blynking!
Blynk Board Arduino Development Guide
March 25, 2016
Or, if you're looking for more general Blynk Board or Blynk App resources, these may help:
- SparkFun Blynk Board Resources
- Blynk Resources
If you need any technical assistance with your Blynk Board, don't hesitate to contact our technical support team via either e-mail, chat, or phone.