Mexican America - Introduction
"Mexican America" is a sampling of objects from the collections of the National Museum of American History. The stories behind these objects reflect the history of the Mexican presence in the United States. They illustrate a fundamentally American story about the centuries-old encounter between distinct (yet sometimes overlapping) communities that have coexisted but also clashed over land, culture, and livelihood.
Who, where, and what is Mexico? Over time, the definitions and boundaries of Mexico have changed. The Aztec Empire and the area where Náhuatl was spoken—today the region surrounding modern Mexico City—was known as Mexico. For 300 years, the Spanish colonizers called it New Spain.
When Mexico was reborn in 1821 as a sovereign nation, its borders stretched from California to Guatemala. It was a huge and ancient land of ethnically, linguistically, and economically diverse regions that struggled for national unity. Texas (then part of the Mexican state of Coahuila y Tejas) was a frontier region far from the dense cities and fertile valleys of central Mexico, a place where immigrants were recruited from the United States. The immigrants in turn declared the territory an independent republic in 1836 (it later became a U.S. state), making Texas the first cauldron of Mexican American culture. By 1853, the government of Mexico, the weaker neighbor of an expansionist United States, had lost what are today the states of California, Nevada, Utah, Arizona, New Mexico, Texas, and parts of Colorado and Wyoming. In spite of the imposition of a new border, the historical and living presence of Spaniards, Mexicans, indigenous peoples, and their mixed descendants remained a defining force in the creation of the American West.
"Mexican America" is a sample of objects drawn from the various collections of the National Museum of American History. These objects reflect the history of the Mexican presence in the United States and illustrate a fundamentally American story about the centuries-old encounter between distinct communities that have coexisted, but have also clashed, in the struggle over land, culture, and livelihood.
Who, where, and what is Mexico? Over time, the definitions and boundaries of Mexico have changed. The Aztec Empire and the whole area where Náhuatl was spoken, today the region surrounding Mexico City, was known as Mexico. For 300 years the Spanish colonizers referred to it as New Spain. When Mexico was reborn as a sovereign nation in 1821, its borders stretched from California to Guatemala. It was then a vast and ancient territory made up of ethnically, linguistically, and economically diverse regions that struggled for national unity. Texas (then part of the Mexican state of Coahuila y Tejas) was a frontier region far from the dense cities and fertile valleys of central Mexico, a place where immigrants were recruited from the United States. In 1836 this Mexican territory declared itself an independent republic (and later became a U.S. state), becoming the first cauldron of Mexican American culture. By 1853 the government of Mexico, the weak neighbor of an expanding United States, had lost the territory of the present-day states of California, Nevada, Utah, Arizona, New Mexico, Texas, and parts of Colorado and Wyoming. Despite the imposition of a new border, the historical and living presence of Spaniards, Mexicans, and indigenous peoples, together with their mixed descendants, would remain a defining influence in the development of the American West.
"Mexican America - Introduction" showing 1 items.
- This print depicts American forces attacking the fortress palace of Chapultepec on Sept. 13th, 1847. General Winfield Scott, in the lower left on a white horse, led the southern division of the U.S. Army that successfully captured Mexico City during the Mexican American War. The outcome of American victory was the loss of Mexico's northern territories, from California to New Mexico, by the terms set in the Treaty of Guadalupe Hidalgo. It should be noted that the two countries ratified different versions of the same peace treaty, with the United States ultimately eliminating provisions for honoring the land titles of its newly absorbed Mexican citizens. Despite notable opposition to the war from Americans like Abraham Lincoln, John Quincy Adams, and Henry David Thoreau, the Mexican-American War proved hugely popular. The United States' victory boosted American patriotism and the country's belief in Manifest Destiny.
- This large chromolithograph was first distributed in 1848 by Nathaniel Currier of Currier and Ives, who served as the "sole agent." The lithographers, Sarony & Major of New York (1846-1857), copied it from a painting by "Walker." Unfortunately, the current location of the original painting is unknown; however, when the print was made, the original painting was owned by a Captain B. S. Roberts of the Mounted Rifles. The original painting has previously been attributed both to William Aiken Walker and to Henry A. Walke. William Aiken Walker (ca. 1838-1921) of Charleston did indeed do work for Currier and Ives, though not until the 1880s, and he would have been only 10 years old when this print was copyrighted. Henry Walke (1808/9-1896) was a naval combat artist during the Mexican American War who also worked with Sarony & Major and is best known for his Naval Portfolio.
- Most likely the original painting was done by James Walker (1819-1889), who created the "Battle of Chapultepec" (1857-1862) for the U.S. Capitol. This image differs from the painting commissioned for the U.S. Capitol by depicting the troops in regimented battle lines, with General Scott in a more prominent position in the foreground. James Walker was living in Mexico City at the outbreak of the Mexican War and joined the American forces as an interpreter. He was attached to General Worth's staff and was present at the battles of Contreras, Churubusco, and Chapultepec. The original painting's owner, Captain Roberts, was assigned by General Winfield Scott to assist Walker in recreating the details of the battle of Chapultepec. When the painting was complete, Roberts purchased it. By 1848, James Walker had returned to New York and had a studio in New York City in the same neighborhood as the print's distributor, Nathaniel Currier, and the lithographers Napoleon Sarony and Henry B. Major.
- This popular lithograph was one of several published to visually document the war while engaging the imagination of the public. Created before photography was in general use, these prints were meant to inform the public while generally omitting the more gory details. Historians have been able to use at least some prints of the Mexican War for study and to corroborate the traditional written forms of documentation. As an eyewitness, Walker could claim accuracy of detail in the narrative of his painting. The battle is presented in the grand, historic, heroic style, with the brutality of war not portrayed. The print is quite large for a chromolithograph of the period. In creating the chromolithographic interpretation of the painting, Sarony & Major used at least four large stones to produce the print "in colours," making the most of their use of color. They also defined each figure with precision by outlining each in black. The expert and collector Harry T. Peters considered this print one of the finest ever produced by Sarony & Major.
- Currently not on view
- Associated names: Currier, Nathaniel; Scott, Winfield; Sarony & Major; Walker, James
- Data source: National Museum of American History, Kenneth E. Behring Center
Tornadoes are the most intense storms on the planet, and they’re never discussed without at least some mention of the term wind shear. Many of us sitting at home, though, have no idea what wind shear is, or if we do, how it affects tornado production.
What Is Wind Shear?
Wind shear, although it might sound complex, is a simple concept. Wind shear is merely the change in wind with height, in terms of both direction and speed. I think we all understand that the wind is generally stronger in the atmosphere over our heads than it is here on the ground, and if we think of the atmosphere in the three dimensions it has, it should not be surprising that the wind above us might also be blowing from a different direction than the wind at the ground. When that happens, when wind speed and direction vary with height, wind shear is occurring.
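To put a number on it, here is a minimal sketch in Python of how you might compute the "bulk shear" between the surface wind and the wind roughly 6 kilometers up, one common way of quantifying wind shear when judging supercell potential. The wind values and the 6 km level are made-up numbers for illustration, not data from a real sounding.

```python
import math

def wind_to_uv(speed_kt, direction_deg):
    """Convert speed and meteorological direction (the direction the wind
    blows FROM) into u (east-west) and v (north-south) components."""
    rad = math.radians(direction_deg)
    return -speed_kt * math.sin(rad), -speed_kt * math.cos(rad)

# Hypothetical sounding: a light southerly wind at the surface and a much
# stronger westerly wind near 6 km; both speed and direction change.
sfc_u, sfc_v = wind_to_uv(10, 180)     # 10 knots from the south
top_u, top_v = wind_to_uv(50, 270)     # 50 knots from the west

# Bulk shear is simply the vector difference between the two levels.
du, dv = top_u - sfc_u, top_v - sfc_v
print(f"0-6 km bulk shear: {math.hypot(du, dv):.0f} knots")  # about 51 knots
```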
Wind Shear and Supercell Thunderstorms
This wind shear is an important part of the process in the development of a supercell thunderstorm, from which the vast majority of strong tornadoes form.
All thunderstorms are produced by a powerful updraft, a surge of air that rises from the ground into the upper levels of the atmosphere. When this updraft forms in an area where wind shear is present, it is influenced by the different speed and direction of the wind above, which pushes on the rising column of air and starts it rotating.
Rain’s Influence on Tornado Production
Needless to say, thunderstorms typically produce very heavy rain, and rain-cooled air is much heavier than the warm air of the updraft, so the rain-cooled air produces a compensating downdraft (what comes up must come down). This downdraft pushes downward the part of the rotating air that was forced in its direction by the stronger wind aloft, and the result is a horizontal column of rotating air.
That’s Not a Tornado!
I know what you're thinking: you've seen enough TLC or Discovery Channel shows to know that a horizontal column of air is NOT a tornado; you need a vertical column of air.
This Can Be a Tornado
You’re right, but remember the updraft that is driving the thunderstorm is still working, and it’s able to pull the horizontal, spinning column of air into the thunderstorm, resulting in a vertical column of spinning air.
(NOAA image showing vertical column of air in a supercell thunderstorm)
The result is a rotating thunderstorm capable of producing a tornado, and it would not be possible without wind shear.
(NOAA image showing tornado formation in supercell thunderstorm)
On this day in 1951, more than six years after the end of World War II in Europe, President Harry S. Truman signed a proclamation officially ending U.S. hostilities with Germany.
The official end to the war came nine years, 10 months and 13 days after Congress had declared war on Nazi Germany. The lawmakers had responded to a declaration of war issued by the Third Reich in the aftermath of the Dec. 7, 1941, Japanese attack on Pearl Harbor and other U.S. bases in the Pacific.
The president explained why he had waited so long after the fighting had ended to act: It had always been America’s hope, Truman wrote, to create a treaty of peace with the government of a united and free Germany, but the postwar policies pursued by the Soviet Union “made it impossible.”
After the war, the United States, Britain, France and the Soviet Union divided Germany into four zones of occupation. Berlin, while located wholly within the Soviet zone, was jointly occupied by the wartime allies and also subdivided into four sectors because of its symbolic importance as the nation’s historic capital and seat of the former Nazi government.
The three western zones were merged to form the Federal Republic of Germany in May 1949, and the Soviets followed suit in October 1949 with the establishment of the German Democratic Republic.
The East German regime began to falter in May 1989, when the removal of Hungary's border fences punched a hole in the Iron Curtain, allowing tens of thousands of East Germans to flee to the West. Despite the grants of general sovereignty to both German states in 1955, neither of the two German governments held unrestricted sovereignty under international law until after they were reunified in October 1990.
Uveitis is inflammation of the uvea, which is made up of the iris, ciliary body and choroid. Together, these form the middle layer of the eye between the retina and the sclera (white of the eye).
The eye is shaped like a tennis ball, with three different layers of tissue surrounding the central gel-filled cavity, which is called the vitreous. The innermost layer is the retina, which senses light and helps to send images to your brain. The outermost layer is the sclera, the strong white wall of the eye. The middle layer between the sclera and retina is called the uvea.
The uvea contains many blood vessels — the veins, arteries and capillaries — that carry blood to and from the eye. Because the uvea nourishes many important parts of the eye (such as the retina), inflammation of the uvea can damage your sight.
There are several types of uveitis, defined by the part of the eye where it occurs.
- Iritis affects the front of your eye. Also called anterior uveitis, this is the most common type of uveitis. Iritis usually develops suddenly and may last six to eight weeks. Some types of anterior uveitis can be chronic or recurrent.
- If the uvea is inflamed in the middle or intermediate region of the eye, it is called pars planitis (or intermediate uveitis). Episodes of pars planitis can last between a few weeks to years. The disease goes through cycles of getting better, then worse.
- Posterior uveitis affects the back parts of your eye. Posterior uveitis can develop slowly and often lasts for many years.
- Panuveitis occurs when all layers of the uvea are inflamed.
Marion Levine teaches English, Literature and Film Production at Los Angeles Center for Enriched Studies, Los Angeles, CA
Measure for Measure, Act 4 or 5
What's On for Today and Why
Students will choose a character from Measure for Measure and create a "back story" for that character. This will encourage students to read the text closely, looking for clues regarding a specific character's history. Students will re-read a portion of the text and then write about what has happened to the character before the play begins. They will then create an artifact, such as a diary or journal entry, written by the character they have selected. This will allow them the opportunity to think like the character and to view the events of the play from a specific point of view.
This lesson will take two 40 minute class periods.
What You Need
Measure for Measure, Folger Edition
What To Do
1. Explain the concept of a "back story" as the important events that occur to a character before the play begins. You may need to prompt students with questions such as:
What was the character like as a child?
In what situation did he/she grow up?
Students will need to show how the script supports their choices.
2. Have the students write a one or two page back story in either the first or third person.
3. Divide students into small groups of 4 or 5 and have them re-read Act 4 or Act 5, combing through the text for character details.
4. Have students write a letter, diary or journal entry from their selected character's point of view (first person). This artifact should concern one or more characters in the play.
5. For increased authenticity, appropriate for an "Extra-Extended" book, students could write their letter, diary, or journal entry using calligraphy, a handwriting font, or on a piece of yellowed paper.
6. Allow students time to read their pieces and share their artifacts with the class.
How Did It Go?
Were students able to justify their choices with reference to the text? Did their artifacts accurately portray character traits that can be interpreted from the text? Were students able to convey a sense of the character's perspective through this activity?
This lesson could be applied to any fictional text that the students read in class. Through close reading and attention to a specific character, students are able to identify with, and understand the concerns of, a character on a deeper level. Possible choices could include Jay Gatsby, Hester Prynne, and Atticus Finch.
If you used this lesson, we would like to hear how it went and about any adaptations you made to suit the needs of YOUR students.
Mercury in the Morning
The planet Mercury -- the planet closest to the Sun -- is just peeking into view in the east at dawn the next few days. It looks like a fairly bright star. It's so low in the sky, though, that you need a clear horizon to spot it, and binoculars wouldn't hurt.
Mercury is a bit of a puzzle. It has a big core that's made mainly of iron, so it's quite dense. Because Mercury is so small, the core long ago should've cooled enough to form a solid ball. Yet the planet generates a weak magnetic field, hinting that the core is still at least partially molten.
The solution to this puzzle may involve an iron "snow" deep within the core.
The iron in the core is probably mixed with sulfur, which has a lower melting temperature than iron. Recent models suggest that the sulfur may have kept the outer part of the core from solidifying -- it's still a hot, thick liquid.
As this mixture cools, though, the iron "freezes" before the sulfur does. Small bits of solid iron fall toward the center of the planet. This creates convection currents -- like a pot of boiling water. The motion is enough to create a "dynamo" effect. Like a generator, it produces electrical currents, which in turn create a magnetic field around the planet.
Observations earlier this year by the Messenger spacecraft seem to support that idea. But Messenger will provide much better readings of what's going on inside Mercury when it enters orbit around the planet in 2011.
Script by Damond Benningfield, Copyright 2008
For more skywatching tips, astronomy news, and much more, read StarDate magazine.
Black holes growing faster than expected
Black hole find: Existing theories on the relationship between the size of a galaxy and its central black hole are wrong, according to a new Australian study.
The study, by Dr Nicholas Scott and Professor Alister Graham of Melbourne's Swinburne University of Technology, found that smaller galaxies have far smaller central black holes than previously estimated.
Central black holes, millions to billions of times more massive than the Sun, reside in the core of most galaxies, and are thought to be integral to galactic formation and evolution.
However astronomers are still trying to understand this relationship.
Scott and Graham combined data from observatories in Chile, Hawaii and the Hubble Space Telescope to develop a database listing the masses of 77 galaxies and their central supermassive black holes.
The astronomers determined the mass of each central black hole by measuring how fast stars are orbiting it.
Existing theories suggest a direct ratio between the mass of a galaxy and that of its central black hole.
"This ratio worked for larger galaxies, but with improved technology we're now able to examine far smaller galaxies and the current theories don't hold up," says Scott.
In a paper to be published in the Astrophysical Journal, they found that for each ten-fold decrease in a galaxy's mass, there was a one hundred-fold decrease in its central black hole mass.
"That was a surprising result which we hadn't been anticipating," says Scott.
The study also found that smaller galaxies have far denser stellar populations near their centres than larger galaxies.
According to Scott, this also means the central black holes in smaller galaxies grow much faster than their larger counterparts.
Black holes grow by merging with other black holes when their galaxies collide.
"When large galaxies merge they double in size and so do their central black holes," says Scott.
"But when small galaxies merge their central black holes quadruple in size because of the greater densities of nearby stars to feed on."
Somewhere in between
The findings also solve the long standing problem of missing intermediate mass black holes.
For decades, scientists have been searching for something in between stellar mass black holes formed when the largest stars die, and supermassive black holes at the centre of galaxies.
"If the central black holes in smaller galaxies have lower mass than originally thought, they may represent the intermediate mass black hole population astronomers have been hunting for," says Graham.
"Intermediate sized black holes are between ten thousand and a few hundred thousand times the mass of the Sun, and we think we've found several good candidates."
"These may be big enough to be seen directly by the new generation of extremely large telescopes now being built," says Graham. | <urn:uuid:e617c5fd-d556-4d43-be1f-042e7e7f2c60> | CC-MAIN-2013-20 | http://www.abc.net.au/science/articles/2013/01/17/3671551.htm?topic=enviro | 2013-05-18T06:23:22 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.948663 | 552 | 4.25 | 4 |
Hoodoos may be seismic gurus
Hoodoo prediction: Towering, chimney-like sedimentary rock spires known as hoodoos may provide an indication of an area's past earthquake activity.
The research, led by scientists including Dr Rasool Anooshehpoor of the United States Nuclear Regulatory Commission, may provide a new tool to test the accuracy of current hazard models.
Hoodoo formations are often found in desert regions, and are common in North America, the Middle East and northern Africa.
They are caused by the uneven weathering of different layers of sedimentary rock, which leaves boulders or thin caps of hard rock perched on softer rock.
By knowing the strengths of different types of sedimentary layers, scientists can determine the amount of stress needed to cause those rocks to fracture.
The United States Geological Survey (USGS) uses seismic hazard models to predict the type of ground motion likely to occur in an area during a seismic event. But, according to Anooshehpoor, these models lack long-term data.
"Existing hazard maps use models based on scant data going back a hundred years or so," says Anooshehpoor. "But earthquakes have return periods lasting hundreds or thousands of years, so there is nothing to test these hazard models against."
The researchers examined two unfractured hoodoos within a few kilometres of the Garlock fault, which is an active strike-slip fault zone in California's Red Rock Canyon.
Their findings are reported in the Bulletin of the Seismological Society of America.
"Although we can't put a precise age on hoodoos because of their erosion characteristics, we can use them to provide physical limits on the level of ground shaking that could potentially have occurred in the area," says Anooshehpoor.
The researchers developed a three-dimensional model of each hoodoo and determined the most likely place where each spire would fail in an earthquake.
They then tested rock samples similar to the hoodoo pillars to measure their tensile strength and compared their results with previously published data.
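The study's three-dimensional models are more sophisticated than this, but the underlying idea can be sketched with a heavily simplified, quasi-static calculation like the Python example below. The column dimensions, rock density, and tensile strength are assumed round numbers, not values from the paper: the point is only that, for a given spire geometry and strength, there is a level of sustained horizontal acceleration above which the base would crack, so an unbroken hoodoo caps the shaking its site can have experienced.

```python
import math

# Assumed, illustrative properties of a hoodoo modelled as a uniform cylinder.
height = 4.0               # m
radius = 0.5               # m
density = 2000.0           # kg/m^3, weakly cemented sandstone (assumed)
tensile_strength = 0.5e6   # Pa (assumed)
g = 9.81                   # m/s^2

def base_tension(accel):
    """Peak tensile stress at the base under a steady horizontal acceleration."""
    mass = density * math.pi * radius ** 2 * height
    moment = mass * accel * height / 2            # lateral load acts at mid-height
    section_modulus = math.pi * radius ** 3 / 4   # circular cross-section
    bending = moment / section_modulus
    overburden = density * g * height             # compression from the spire's own weight
    return bending - overburden

# Step up the acceleration until the base would crack in tension.
accel = 0.0
while base_tension(accel) < tensile_strength:
    accel += 0.01
print(f"This spire survives up to roughly {accel / g:.2f} g of sustained shaking")
```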
USGS records suggest at least one large magnitude earthquake occurred along the fault in the last 550 years, resulting in seven metres of slip, yet the hoodoos are still standing.
This finding is consistent with a median level of ground motion associated with the large quakes in this region, says Anooshehpoor.
"If an earthquake occurred with a higher level of ground motion, the hoodoos would have collapsed," he says.
"Nobody can predict earthquakes, but this will help predict what ground motions are associated with these earthquakes when they happen."
Dr Juan Carlos Afonso from the Department of Earth and Planetary Sciences at Sydney's Macquarie University says it's an exciting development.
"In seismic hazard studies, it's not just difficult to cover the entire planet, it's hard to cover even small active regions near populated areas," says Afonso.
"You need lots of instruments, so it's great if you can rely on nature and natural objects to help you."
He says while the work is still very new and needs to be proven, the physics seems sound.
Science Fair Project Encyclopedia
The chloride ion is formed when the element chlorine picks up one electron to form the anion (negatively charged ion) Cl−. The salts of hydrochloric acid HCl contain chloride ions and are also called chlorides. An example is table salt, which is sodium chloride with the chemical formula NaCl. In water, it dissolves into Na+ and Cl− ions.
The word chloride can also refer to a chemical compound in which one or more chlorine atoms are covalently bonded in the molecule. This means that chlorides can be either inorganic or organic compounds. The simplest example of an inorganic covalently bonded chloride is hydrogen chloride, HCl. A simple example of an organic covalently bonded chloride is chloromethane (CH3Cl), often called methyl chloride.
Other examples of inorganic covalently bonded chlorides which are used as reactants are:
- phosphorus trichloride, phosphorus pentachloride, and thionyl chloride - all three are reactive chlorinating reagents that are used in the laboratory.
- Disulfur dichloride (S2Cl2) - used for the vulcanization of rubber.
Chloride ions have important physiological roles. For instance, in the central nervous system the inhibitory action of glycine and some of the action of GABA relies on the entry of Cl− into specific neurons.
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
Fun Classroom Activities
The 20 enjoyable, interactive classroom activities that are included will help your students understand the text in amusing ways. Fun Classroom Activities include group projects, games, critical thinking activities, brainstorming sessions, writing poems, drawing or sketching, and more that will allow your students to interact with each other, be creative, and ultimately grasp key concepts from the text by "doing" rather than simply studying.
1. A Year from Now
Where will Bone be and how will she be feeling a year from now? Write a one page description of Bone's life a year after the end of the book from Bone's perspective.
2. The Monster Within
When Bone's anger is described, it seems to grow and even take form. Take one of the descriptions for Bone's anger and rage and draw it.
3. Bone's Poetry
Write a poem as if you are Bone. The poem can be...
In the American electoral system, a primary election is an election that determines the nominee for each political party, who then competes for the office in the general election. A presidential primary is a state election that picks the delegates committed to nominate particular candidates for president of the United States. A presidential caucus, as in Iowa, requires voters to meet together for several hours in face-to-face meetings that select county delegates, who eventually pick the delegates to the national convention. No other country uses primaries; they choose their candidates in party conventions.
Primaries were introduced in the Progressive Era in the early 20th century to weaken the power of bosses and make the system more democratic. In presidential elections, they became important starting in 1952, when the first-in-the-nation New Hampshire Primary helped give Dwight D. Eisenhower the Republican nomination, and knocked Harry S. Truman out of the Democratic race because of his poor showing. In 1964, Lyndon B. Johnson ended his reelection campaign after doing poorly in New Hampshire.
After 1968, both parties changed their rules to emphasize presidential primaries, although some states still use the caucus system.
In recent decades, New Hampshire holds the first primary a few days after Iowa holds the first caucus. That gives these two states enormous leverage, as the candidates and the media focus there. New Hampshire and Iowa receive about half of all the media attention given all primaries.
The primary allows voters to choose between different candidates of the same political party, perhaps representing different wings of the party. For example, a Republican primary may choose between a range of candidates from moderate to conservative. Gallup's 2008 polling data indicated a trend in primary elections towards more conservative candidates, despite the more liberal result in the general election.
In recent years the primary season has come earlier and earlier, as states move up to earlier dates in the hope it will give them more leverage. For example, Barry Goldwater won the 1964 nomination because he won the last primary, in California. The logic is faulty--in highly contested races the later primaries have more leverage. Thus in 2008 California gave up its traditional last-in-the-nation role and joined 20 other states on Super Tuesday. Neither the candidates nor the voters paid it much attention. Michigan and Florida moved up their primaries in defiance of national Democratic Party rules and were penalized. The result is that the primary season is extended and far more expensive, and no state gains an advantage--except Iowa and New Hampshire, which now have dates in early January.
In late 2009 the two national parties are meeting to find a common solution.
by I. Peterson
Unlike an ordinary, incandescent bulb, a laser produces light of a single wavelength. Moreover, the emitted light waves are coherent, meaning that all of the energy peaks and troughs are precisely in step.
Now, a team at the Massachusetts Institute of Technology has demonstrated experimentally that a cloud consisting of millions of atoms can also be made coherent. Instead of flying about and colliding randomly, the atoms display coordinated behavior, acting as if the entire assemblage were a single entity.
According to quantum mechanics, atoms can behave like waves. Thus, two overlapping clouds made up of atoms in coherent states should produce a zebra-striped interference pattern of dark and light fringes, just like those generated when two beams of ordinary laser light overlap.
By detecting such a pattern, the researchers proved that the clouds' atoms are coherent and constitute an "atom laser," says physicist Wolfgang Ketterle, who heads the MIT group. These matter waves, in principle, can be focused just like light.
Ketterle and his coworkers describe their observations in the Jan. 31 Science.
The demonstration of coherence involving large numbers of atoms is the latest step in a series of studies of a remarkable state of matter called a Bose-Einstein condensate. Theory predicted that, chilled to temperatures barely above absolute zero, the atoms would collectively enter the same quantum state and behave like a single unit, or superparticle, with a specific wavelength.
First created in the laboratory in 1995 by Eric A. Cornell and his collaborators at the University of Colorado and the National Institute of Standards and Technology, both in Boulder, Bose-Einstein condensates have been the subject of intense investigation ever since (SN: 7/15/95, p. 36; 5/25/96, p. 327).
At MIT, Ketterle and his colleagues cool sodium atoms to temperatures below 2 microkelvins. The frigid atoms are then confined in a special magnetic trap inside a vacuum chamber.
To determine whether the atoms in the resulting condensate are indeed as coherent as photons in a laser beam, the researchers developed a novel method of extracting a clump of atoms from the trap.
In effect, they manipulate the magnetic states of the atoms to expel an adjustable fraction of the original cloud; under the influence of gravity, the released clump falls. The method can produce a sequence of descending clumps, with each containing 100,000 to several million coherent atoms.
The apparatus acts like a dripping faucet, Ketterle says. He and his colleagues describe the technique in the Jan. 27 Physical Review Letters.
To demonstrate interference, the MIT group created a double magnetic trap so that two pulses of coherent atoms could be released at the same time. As the two clumps fell, they started to spread and overlap. The researchers could then observe interference between the atomic waves of the droplets.
"The signal was almost too good to be true," Ketterle says. "We saw a high-contrast, very regular pattern."
"It's a beautiful result," Cornell remarks. "This work really shows that Bose-Einstein condensation is an atom laser."
From the pattern, the MIT researchers deduced that the condensate of sodium atoms has a wavelength of about 30 micrometers, considerably longer than the 0.04-nanometer wavelength typical of individual atoms at room temperature.
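For a rough sense of where such numbers come from, the short Python sketch below evaluates the standard thermal de Broglie wavelength, lambda = h / sqrt(2 * pi * m * k * T), for sodium atoms at room temperature and at the roughly 2-microkelvin temperatures quoted earlier. The formula and constants are textbook values, not anything specific to the MIT experiment; the 30-micrometer figure above is the measured coherence length of the condensate itself, which is longer still.

```python
import math

h = 6.626e-34          # Planck constant, J*s
k_B = 1.381e-23        # Boltzmann constant, J/K
m_Na = 23 * 1.66e-27   # mass of a sodium atom, kg

def thermal_de_broglie(temperature_k):
    """Thermal de Broglie wavelength in metres."""
    return h / math.sqrt(2 * math.pi * m_Na * k_B * temperature_k)

for label, T in [("room temperature, 300 K", 300.0),
                 ("trap temperature, 2 microkelvin", 2e-6)]:
    print(f"{label}: {thermal_de_broglie(T) * 1e9:.3g} nm")

# At 300 K the wavelength is a few hundredths of a nanometre, far smaller than
# the spacing between atoms; near 2 microkelvin it grows to a few hundred
# nanometres, so the matter waves of neighbouring atoms begin to overlap.
```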
Ketterle and his colleagues are already planning several improvements to their primitive atom laser, including getting more atoms into the emitted pulses and going from pulses to a continuous beam.
Practical use of an atom laser for improving the precision of atomic clocks and for manipulating atoms is still distant, however, Cornell notes.
July 31, 1998
Explanation: Do you recognize the constellation Orion? This striking but unfamiliar looking picture of the familiar Orion region of the sky was produced using survey data from the InfraRed Astronomical Satellite (IRAS). It combines information recorded at three different invisible infrared wavelengths in a red, green, and blue color scheme and covers about 30x24 degrees on the sky. Most of Orion's visually impressive stars don't stand out, but bright Betelgeuse does appear as a small purplish dot just above center. Immediately to the right of Betelgeuse and prominent in the IRAS skyview, expanding debris from a stellar explosion, a supernova remnant, is seen as a large bright ring-shaped feature. The famous gas clouds in Orion's sword glow brightly as the yellow regions at the lower right. No longer operational, IRAS used a telescope cooled by liquid helium to detect celestial infrared radiation.
Authors & editors:
NASA Technical Rep.: Jay Norris. Specific rights apply.
A service of: LHEA at NASA/ GSFC
&: Michigan Tech. U. | <urn:uuid:f2519e47-47f4-4694-91cc-e23c91d5d788> | CC-MAIN-2013-20 | http://apod.nasa.gov/apod/ap980731.html | 2013-05-21T10:34:25 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.889232 | 227 | 4.0625 | 4 |
Combined Gas Law
The Combined Gas Law combines Charles's Law, Boyle's Law, and Gay-Lussac's Law. The Combined Gas Law states that for a fixed amount of gas, pressure x volume / temperature = constant.
Alright. In class you should have learned about the three different gas laws, the first one being Boyle's law, which talks about the relationship between pressure and volume of a particular gas. The next one should be Charles's law, which talks about the volume and temperature of a particular gas. And the last one should be Gay-Lussac's law, which talks about the relationship between pressure and temperature of a particular gas. Okay. But what happens when you have pressure, volume and temperature all changing? Well, we're actually going to combine these gas laws to form one giant gas law called the combined gas law. Okay.
If you notice, in these three gas laws the pressure and volume are always in the numerator, so we're going to keep them in the numerator: p1v1. And notice the temperature is in the denominator, over t1. So all these things are just squished into one, and then p2v2 over t2. Okay. So this is what we're going to call the combined gas law. So let's actually get an example and do one together.
Alright, so I have a problem up here that says a gas at 110 kilopascals and 30 degrees Celsius fills a flexible container with an initial volume of two litres, okay? If the temperature is raised to 80 degrees Celsius and the pressure is raised to 440 kilopascals, what is the new volume? Okay. So notice we have three variables. We're talking about pressure, temperature and volume. Okay, so now we're going to employ this combined gas law dealing with all three of these variables. So we're going to look at our first number, 110 kilopascals, and that is a unit of pressure, so we know that's p1. Our p1 is 110 kilopascals, at 30 degrees Celsius. I don't like things in Celsius, so I'm going to change this to kelvin: I'm going to add 273 to that, which makes it 303 kelvin. That's our temperature. And my initial volume is two litres, so I'm going to say v1 = 2 litres. Okay, then I continue reading. If the temperature is raised to 80 degrees Celsius, again we want it in kelvin, so we're going to add 273, making it 353. So our t2 is 353 kelvin. And the pressure increased to 440 kilopascals, so the pressure p2 is equal to 440 kilopascals, and I'm glad everything is already in kilopascals. I've got to make sure these units are the same, because pressure can be measured in several different units. And what is the new volume? So our v2 is our variable, what we're trying to find. Okay.
So let's basically plug all these variables into our combined gas law to figure out what the new volume would be. Okay. So I'm going to erase this and say our pressure one is 110 kilopascals. Our volume one is two litres. Our temperature one is 303 kelvin. Our pressure two is 440 kilopascals. We don't know our volume two, so we're just going to say v2, over 353 kelvin. Okay. When I'm looking for a variable I'm going to cross multiply these guys. So I'm going to say 353 times 110 times 2, and that should give me 77,660 if you put that in a calculator. So I just cross multiplied those. And I cross multiply the others: 303 times 440 times v2 gives me 133,320 v2. Okay, so then I want to isolate my variable, so I'm going to divide both sides by 133,320. And I find that my new volume is 0.58 litres. And that is how you do the combined gas law.
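As a quick check on that arithmetic, here is a short Python sketch that solves the same problem by rearranging the combined gas law for the final volume; the inputs are the values from the worked example above.

```python
def combined_gas_law_v2(p1, v1, t1, p2, t2):
    """Solve p1*v1/t1 = p2*v2/t2 for the final volume v2."""
    return p1 * v1 * t2 / (t1 * p2)

p1 = 110.0          # kilopascals
v1 = 2.0            # litres
t1 = 30.0 + 273.0   # degrees Celsius converted to kelvin
p2 = 440.0          # kilopascals
t2 = 80.0 + 273.0

print(f"New volume: {combined_gas_law_v2(p1, v1, t1, p2, t2):.2f} litres")  # about 0.58
```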
Woodrow Wilson, as described in the introductory section of the text, was the leader of the immediate post-war period and the architect of an internationalist vision for a new world order. Yet, as discussed in the paragraphs below, he was not able to persuade the other Allied leaders at the peace settlement negotiations in Paris to embrace his vision. But it was not just the opposition of Clemenceau and Lloyd George to some of his ideas that moved the conference away from Wilson's vision. Wilson became so blindly caught up in his vision, thinking that everything he advocated was what democracy and justice demanded, that he completely alienated the other negotiators in Paris, and they stopped listening to him. Another historian points to a different problem: that Wilson himself stopped listening to his earlier vision, having become convinced that a harsh peace was justified and desirable. Even if that historical view is accurate, Wilson was probably still more moderate in his conception of a harsh peace than were Clemenceau and Lloyd George. But as the conference dragged on and the departure from Wilsonianism became more and more pronounced, Wilson clung to his proposal for the League of Nations. In fact, he seemed to place all his faith in his pet project, believing it would solve all the evils the negotiators were unable to solve during the conference. Unfortunately, Wilson made it clear that the League was his primary objective, and it came to be his only bargaining chip. He then compromised on numerous issues that had no corollary in his vision in order to maintain support for the creation of the League. Thus, though Wilson was full of good intentions and a vision for a just and peaceful future, his arrogance and ineffective negotiating skills contributed greatly to the downfall of that vision. Finally, it must be mentioned that Wilson's inability to negotiate with the Senate during its consideration of the ratification of the Treaty of Versailles caused the Senate to reject the Treaty, leaving the United States noticeably absent from the newly created League of Nations, which greatly undermined the effectiveness and importance of Wilson's principal goal. Nonetheless, Wilson was awarded the 1919 Nobel Peace Prize for his efforts to secure a lasting peace and for his role in the creation of the League of Nations.
David Lloyd George, the British Prime Minister, entered the negotiations in Paris with the clear support of the British people, as evidenced by his convincing win in the so-called khaki election of December 1918. During the weeks leading up to the election, though, he had publicly committed himself to work for a harsh peace against Germany, including obtaining payments for war damages committed against the British. These campaign promises went against Lloyd George's personal convictions. Knowing that Germany had been Britain's best pre-war trading partner, he thought that Britain's best chance to return to its former prosperity was to restore Germany to a financially stable situation, which would have required a fairly generous peace with respect to the vanquished enemy.

Nonetheless, his campaign statements showed Lloyd George's understanding that the public did not hold the same convictions as he did, and that, on the contrary, the public wanted to extract as much as possible out of the Germans to compensate them for their losses during the war. So Lloyd George and Clemenceau were in agreement on many points, each one seeming to support the other in their nationalist objectives, and thereby scratching each other's back as the "game of grab" of Germany's power played itself out. But most historians do not attribute to Lloyd George a significant role in the Treaty negotiations.
In their defense, Clemenceau and Lloyd George were only following popular sentiment back home when they fought for harsh terms against Germany. It is clear from historical accounts of the time that after seeing so many young men not return from the trenches on the Western front, the French and British wanted to exact revenge against the Germans through the peace settlement, to ensure that their families would never again be destroyed by German aggression. In that respect, democracy was clearly functioning as it is intended in a representative democracy. In fact, Lloyd George is the quintessential example of an elected leader serving the interests of his people, putting his personal convictions second to British public opinion. Yet it was that same public opinion (in France and Britain) that Wilson had believed would support his internationalist agenda, placing Germany in the context of a new and more peaceful world order which would prevent future aggression. Wilson's miscalculation was one of the single greatest factors leading to the compromise of his principles and the resulting harsh and, in the eyes of many, unjust treatment of Germany within the Treaty of Versailles.
[See also the biographies of the Big Three listed on the Links page.]
1. James L. Stokesbury, A Short History of World War I, 1981, p. 309.
2. Manfred F. Boemeke, "Woodrow Wilson's Image of Germany, the War-Guilt Question, and the Treaty of Versailles," in The Treaty of Versailles: A Reassessment After 75 Years, Ch. 25, Boemeke, Feldman & Glaser, eds., 1998, pp. 603-614.
3. Robert H. Ferrell, Woodrow Wilson and World War I: 1917-1921, 1985, p. 146.
4. Lawrence E. Gelfand, "The American Mission to Negotiate Peace: An Historian Looks Back," in The Treaty of Versailles: A Reassessment After 75 Years, Ch. 8, Boemeke, Feldman & Glaser, eds., 1998, p. 191.
5. See Ferrell, supra note 3, Ch. 10, "The Senate and the Treaty."
6. Information from this paragraph is taken from Ferrell, supra note 3, at 142, 144, 151.
7. Id. at 151.
8. Stokesbury, supra note 1, at 311-312.
Introduction / History
Jews represent the oldest monotheistic religion of modern times. Because of the uniqueness of their history and culture, all Jews have a strong sense of identity. Persecution of and discrimination against the Jews have been the historical reasons for their migrations and settlements around the world.
The Jews of Europe arrived on the continent at least 2,000 years ago during the early days of the Roman empire. Since then, they have been a significant influence in the history and culture of Europe. Much of what is considered "Jewish" today finds its roots among the European Jews.
One of the unique features among European Jews is the distinction between the Ashkenazic Jews and the Sephardic Jews. The word Ashkenaz is derived from a Biblical word for the larger Germanic region of Europe. Therefore, Ashkenazim Jews are those whose ancestry is linked to that area. This group traditionally speaks the Yiddish language, which is a German dialect that has Hebrew and Slavic elements. The word Sephard was the name used by Jews in medieval times for the Iberian peninsula. Sephardim Jews, then, are the descendants of the Jews who lived in Spain or Portugal prior to expulsion in 1492 by King Ferdinand and Queen Isabella. Sephardim also have a distinctive language called Ladino, or Judeo-Spanish. This is a dialect of Castilian Spanish with Hebrew and Turkish elements.
What are their lives like?
During the last few centuries, Eastern Europe had the largest Jewish population in the world. National attitudes toward the Jews were ambivalent, depending on the usefulness of the Jewish inhabitants to the nations' rulers. Anti-Semitism was prevalent and frequently led to either persecution or expulsion. The Holocaust of World War II was the climax of Jewish persecution in Europe, leading to the extermination of six million Jews. Many Eastern European countries lost the majority of their Jewish population in this tragedy.
As a result of the Holocaust, thousands of Jewish survivors and their descendants have emigrated from Eastern Europe to Israel, the United States, or Western Europe. The recent memories of the Holocaust as well as the centuries of discrimination and persecution play a strong part in modern Jewish identity. European Jews are strong supporters of "Zionism," a revival of Jewish culture and support of Israel as a national, secure, Jewish homeland.
Since the dissolution of the Soviet empire, former Soviet Jews no longer live under oppressive government rule. Anti-Semitism is still a concern, but Jewish life has been revitalized in newly independent countries like Ukraine. Synagogues are functioning and kosher (traditional, acceptable) food is once again available.
The Jewish emigration from Eastern Europe is cause for concern among the remaining aged Jewish population. As the older Jews die, the Jewish community dwindles. Many of the younger Jews are unlearned in their Jewish identity. They are either non-observant or have assimilated into the prevailing culture. However, strong efforts are being made to maintain a Jewish presence and clarify their identity. Jewish schools are being opened and Judaic studies are being promoted in universities. Jewish hospitals and retirement homes are being built. Community centers also promote cultural events such as the Israeli dance, theater, Yiddish and Hebrew lessons, and sports.
Western Europe now has the largest concentration of European Jewish residents. The Netherlands received a large influx of Sephardic Jews from Portugal in the late 1500's, and another contingent of Ashkenazic Jews after World War II. They have been very influential in the development of Dutch commerce. England's Jews are concentrated in the Greater London area and have been politically active for over 100 years. They have been avid supporters of Zionism and solidly committed to the settlement of Diaspora Jews in Israel. A large percentage of England's Jews are affiliated with the traditional Orthodox synagogues. Italy's Jewish population is primarily Sephardic due to its absorption of Spanish Jews in the 1500's. France's Ashkenazic community received 300,000 Sephardic Jews from North Africa in recent decades.
What are their beliefs?
For religious Jews, God is the Supreme Being, the Creator of the universe, and the ultimate Judge of human affairs. Beyond this, the religious beliefs of the Jewish communities vary greatly. European Jews are extremely diverse in religious practice. The Ashkenazic Jews are the most prevalent, representing the Orthodox, ultra-Orthodox, Conservative, and Reform movements. The unusual and adamantly traditional Hasidic movement was born in Poland and has gained a strong following in the United States and Israel. The Sephardic denomination is similar to the Orthodox Ashkenazic, but is more permissive on dietary rules and some religious practices. Each Jewish denomination maintains synagogues and celebrates the traditional Jewish holiday calendar. While most European Jews are religiously affiliated, there is a significant minority which is not religious.
What are their needs?
The Jews have a wonderful understanding of their connection with the Abrahamic covenant. However, they also have a history of rejecting Jesus Christ as Messiah, the one who has fulfilled that covenant. Pray that as the Gospel is shared, it will not be viewed as anti-Semitic, but rather as the fulfillment of what God promised through Abraham centuries ago.
Prayer Points
* Ask the Lord of the harvest to send forth loving Christians to work among the Jewish communities.
* Ask the Holy Spirit to grant wisdom and favor to the missions agencies that are focusing on the European Jews.
* Pray that the Jewish people will understand that Jesus is the long-awaited Messiah.
* Ask the Lord to soften the hearts of the Jews towards Christians so that they might hear and receive the message of salvation.
* Pray that the Lord Jesus will reveal Himself to the Jews through dreams and visions.
* Pray that God will grant Jewish believers favor as they share their faith in Christ with their own people.
* Pray that strong local churches will be raised up in each Jewish community.
* Pray for the availability of the Jesus Film in the primary language of this people.
After the British Pyrrhic (costly) victory at Bunker Hill in 1775, British General William Howe decided a lethal blow needed to be delivered to the Patriot cause. Howe proposed to launch an attack on New York City using tens of thousands of troops. He began mobilizing the massive fleet in Halifax, Nova Scotia. Meanwhile, American Commander-in-Chief George Washington had ordered General Charles Lee to prepare for the defense of the city. That June, Howe and 9,000 troops set sail for New York.
Howe's initial fleet arrived in New York Harbor and began landing troops on Staten Island. On August 27, 1776, British forces engaged the Americans at the Battle of Brooklyn Heights (also called the Battle of Long Island). Howe's army successfully outflanked Washington's, eventually causing the Patriots, after some resistance, to withdraw to Manhattan under the cover of darkness, thereby avoiding a potentially costly siege at the hands of the British.
After failed peace negotiations, the British Army next struck at Lower Manhattan, where 12,000 British troops quickly overtook the city. Most of the Continental Army had retreated to defensible positions at Harlem Heights and then to White Plains, well north of the city, but some soldiers remained at Fort Washington in Manhattan. Howe’s army chased Washington and the Continental Army into positions north of White Plains before returning to Manhattan. There, Howe set his sights on Fort Washington, the last Patriot stronghold on the island. In a furious, three-pronged attack, British forces easily took the fort, capturing nearly 3,000 American prisoners and at least 34 cannons in the process. Most of the prisoners were taken to squalid British prison ships, where all but 800 or so died of disease or starvation. General Washington, now at Fort Lee, directly across the Hudson River from Fort Washington, witnessed the fort’s fall. Following the fall of Fort Washington, British forces ferried up the Hudson River in barges toward Fort Lee. Washington ordered the evacuation of the fort’s 2,000 soldiers across the Hackensack River at New Bridge Landing, and would go on to lead his army across New Jersey and over the Delaware River into Pennsylvania.
Following the events in and around New York City, the outlook was bleak for the Continental Army. Morale in the army was extremely low, enlistments were ending, and desertions were commonplace. Even General Washington admitted his army’s chances of success were slim. Meanwhile, General Howe ordered his army into their winter quarters that December and established several outposts from New York City south to New Brunswick, New Jersey. | <urn:uuid:d9f94478-8f2d-45d3-b081-710593609b23> | CC-MAIN-2013-20 | http://mrnussbaum.com/history-2-2/new_york_battles/ | 2013-05-21T10:28:04 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.962022 | 564 | 4 | 4 |
The Solar and Heliospheric Observatory (SOHO) spacecraft is expected to discover its 1,000th comet this summer.
The SOHO spacecraft is a joint effort between NASA and the European Space Agency. It has accounted for approximately one-half of all comet discoveries with computed orbits in the history of astronomy.
"Before SOHO was launched, only 16 sun grazing comets had been discovered by space observatories. Based on that experience, who could have predicted SOHO would discover more than 60 times that number, and in only nine years," said Dr. Chris St. Cyr. He is senior project scientist for NASA's Living With a Star program at the agency's Goddard Space Flight Center, Greenbelt, Md. "This is truly a remarkable achievement!"
About 85 percent of the comets SOHO discovered belong to the Kreutz group of sun grazing comets, so named because their orbits take them very close to Earth's star. The Kreutz sun grazers pass within 500,000 miles of the star's visible surface. Mercury, the planet closest to the sun, is about 36 million miles from the solar surface.
SOHO has also been used to discover three other well-populated comet groups: the Meyer, with at least 55 members; Marsden, with at least 21 members; and the Kracht, with 24 members. These groups are named after the astronomers who suggested the comets are related, because they have similar orbits.
Many comet discoveries were made by amateurs using SOHO images on the Internet. SOHO comet hunters come from all over the world. The United States, United Kingdom, China, Japan, Taiwan, Russia, Ukraine, France, Germany, and Lithuania are among the many countries whose citizens have used SOHO to chase comets.
Almost all of SOHO's comets are discovered using images from its Large Angle and Spectrometric Coronagraph (LASCO) instrument. LASCO is used to observe the faint, multimillion-degree outer atmosphere of the sun, called the corona. A disk in the instrument is used to make an artificial eclipse, blocking direct light from the sun, so the much fainter corona can be seen. Sun grazing comets are discovered when they enter LASCO's field of view as they pass close by the star.
"Building coronagraphs like LASCO is still more art than science, because the light we are trying to detect is very faint," said Dr. Joe Gurman, U.S. project scientist for SOHO at Goddard. "Any imperfections in the optics or dust in the instrument will scatter the light, making the images too noisy to be useful. Discovering almost 1,000 comets since SOHO's launch on December 2, 1995 is a testament to the skill of the LASCO team."
SOHO successfully completed its primary mission in April 1998. It has enough fuel to remain on station to keep hunting comets for decades if the LASCO continues to function.
For information about SOHO on the Internet, visit:
| <urn:uuid:78cbe1bd-1849-4138-b59a-5521e93122a3> | CC-MAIN-2013-20 | http://phys.org/news4969.html | 2013-05-21T10:13:56 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.943417 | 663 | 4 | 4 |
Teaching Strategies: Effective Discussion Leading
While lecturing is a fast and direct way to communicate a body of knowledge, discussion encourages students to discover solutions for themselves and to develop their critical thinking abilities. They learn how to generate ideas, consider relevant issues, evaluate solutions, and consider the implications of these solutions. Thus, although discussion is not as efficient as lecture in conveying facts, it helps students learn how to think better and more clearly about the facts that they should learn from their reading and their lectures.
Leading a discussion, however, offers its own set of challenges: participants can spend too much time exploring small, sometimes irrelevant issues, forget that they are progressing toward an identifiable goal, and become bored. The leader must guide the conversation carefully without stifling creativity and students' initiative and without surrendering to some students' desire for answers that they can write down and memorize.
Here are four strategies that can help faculty and TAs encourage students to explore issues themselves:
We all know that creating a fine lecture requires research and planning; we sometimes forget that leading a good discussion requires the same research and planning and demands spontaneous responses in the classroom. The beauty of the extra demand is that developing the skills for intervening and directing discussions leads to exciting, productive exchanges that help students learn to think clearly and creatively, while simultaneously inspiring you to teach more thoroughly and carefully.
"Discussions: Leading and Guiding, but Not Controlling," The
Teaching Professor VI, 8 [October 1992].) | <urn:uuid:03dc16ec-33ae-4c39-a06b-93924571a72e> | CC-MAIN-2013-20 | http://trc.virginia.edu/Publications/Teaching_Concerns/Fall_1993/TC_Fall_1993_Teaching_Strategies.htm | 2013-05-21T10:13:33 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.954276 | 304 | 4.03125 | 4 |
Presenting - 'Amasia', The Next Supercontinent!
Ever since Earth has existed, supercontinents have repeatedly formed and broken apart. Pangaea, which existed between 150 and 300 million years ago, is the best known; before it came Nuna (1.8 billion years ago), Rodinia (1 billion years ago) and many more that cannot be verified, because 2-billion-year-old rocks containing evidence of magnetic fields are hard to find.
And while most scientists agree that Rodinia, Nuna and Pangaea did exist, there is very little consensus on which continents they comprised. Some experts believe they were the same ones, while others think the wandering landmasses reassembled on opposite sides each time, about 180° away from where the previous supercontinent had come together.
Now, a group of geologists led by Yale University graduate student Ross Mitchell has a new theory. They think that each supercontinent came together about 90° from its predecessor: the geographic center of Rodinia was about 88° away from the center of Nuna, while the center of Pangaea, believed to have been located near modern-day Africa, was about 88° away from the center of its supergiant predecessor, Rodinia.
These calculations, reported earlier this year, were based not only on the paleolatitude (the latitude of a place at some time in the past, measured relative to the Earth's magnetic poles of the same period) of the ancient supercontinents, but also, for the first time, on their paleolongitude, which Mitchell measured by estimating how the locations of the Earth's magnetic poles have changed through time.
While the theory is interesting, what is even more so is that the team has also come up with a model of the next supercontinent. If their estimates are accurate, over the next few hundred million years, the tectonic plates under the Americas and Asia will both drift northward and merge. This means that modern day North and South America will come together and become one giant landmass, displacing the Caribbean Sea completely. A similar movement in Eurasia (Australia and South Eastern Asia) will cause the Arctic Ocean to disappear causing the continents to fuse with Canada. The result? A ginormous continent that they call 'Amasia'. The one thing that is not too clear is if Antarctica will be part of this or just be left stranded.
While many researchers believe that the Yale team's theory is quite feasible, nobody will ever know for sure - Because unfortunately, none of us are going to be around few 100 million years from now - But it's sure fun to envision the new world, isn't it? | <urn:uuid:2d0e9c93-cfc6-4a81-aac7-dc1b77fe6e90> | CC-MAIN-2013-20 | http://www.dogonews.com/2012/10/18/presenting-amasia-the-next-supercontinent | 2013-05-21T10:12:42 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.965343 | 567 | 4.3125 | 4 |
This is a measure of the brightness of a celestial object. The lower the value, the brighter the object, so magnitude -4 is brighter than magnitude 0, which is in turn brighter than magnitude +4. The scale is logarithmic, and a difference of 5 magnitudes means a brightness difference of exactly 100 times. A difference of one magnitude corresponds to a brightness difference of around 2.51 (the fifth root of 100).
The system was started by the ancient Greeks, who divided the stars into one of six magnitude groups with stars of the first magnitude being the first ones to be visible after sunset. In modern times, the scale has been extended in both directions and more strictly defined.
Examples of magnitude values for well-known objects are:
|Sun||-26.7 (about 400 000 times brighter than full Moon!)|
|Brightest Iridium flares||-8|
|Venus (at brightest)||-4.4|
|International Space Station||-2|
|Sirius (brightest star)||-1.44|
|Limit of human eye||+6 to +7|
|Limit of 10x50 binoculars||+9|
|Limit of Hubble Space Telescope||+30| | <urn:uuid:a13e5774-8a15-4ad6-bc01-def7c66a2edb> | CC-MAIN-2013-20 | http://www.heavens-above.com/glossary.aspx?term=magnitude&lat=38.895&lng=-77.037&loc=Washington&alt=0&tz=EST | 2013-05-21T10:27:14 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.854211 | 260 | 4.25 | 4 |
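To make the logarithmic definition concrete, here is a minimal Python sketch (not part of the original glossary) that turns a magnitude difference into a brightness ratio. The Sirius and naked-eye values come from the table above; the full-Moon magnitude of about -12.7 is a commonly quoted figure that is only implied by the table's Sun entry.

```python
def brightness_ratio(mag_brighter, mag_fainter):
    """How many times brighter the first object is than the second.

    A difference of 5 magnitudes is defined as exactly 100x, so one
    magnitude corresponds to a factor of 100**(1/5), about 2.51.
    """
    return 100 ** ((mag_fainter - mag_brighter) / 5)

print(brightness_ratio(-26.7, -12.7))  # Sun vs. full Moon (~-12.7): ~400,000x
print(brightness_ratio(-1.44, 6.0))    # Sirius vs. naked-eye limit: ~900x
```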
Scientists get further evidence that Mars once had oceans
Mars, our neighbor, has long filled the dreams of science fiction writers and astronomers alike: the writers imagining the life that could once have lived on Mars, and still might; the astronomers seeking to prove that there might actually have been life on the red planet eons ago.
Part of proving that idea is being able to show that there was water on the surface of Mars, water that would have been the foundation of life, just as it is here on earth.
To help find out whether there was, or even still is, water on Mars, the European Space Agency (ESA) Mars Express spacecraft, which houses the Mars Advanced Radar for Subsurface and Ionosphere Sounding (MARSIS), has detected sediment on the planet, the type of sediment you would find on the floor of an ocean.
It is within the boundaries of features tentatively identified in images from various spacecraft as shorelines that MARSIS detected sedimentary deposits reminiscent of an ocean floor.
“MARSIS penetrates deep into the ground, revealing the first 60 – 80 meters (197 – 262 ft) of the planet’s subsurface,” says Wlodek Kofman, leader of the radar team at the Institut de Planétologie et d’Astrophysique de Grenoble (IPAG). “Throughout all of this depth, we see the evidence for sedimentary material and ice.”
The sediments detected by MARSIS are areas of low radar reflectivity, which typically indicates low-density granular materials that have been eroded away by water and carried to their final resting place.
Scientists are interpreting these sedimentary deposits, which may still be ice-rich, as another indication that there was once an ocean in this spot.
At this point scientists have proposed that there were two main oceans on the planet: one around 4 billion years ago, and a second around 3 billion years ago.
For the scientist the MARSIS findings provide some of the best evidence yet that Mars did have large bodies of water on its surface and that the water played a major role in the planet’s geological history. | <urn:uuid:40e4be34-8172-4949-b887-cd566fea95cb> | CC-MAIN-2013-20 | http://www.inquisitr.com/192264/scientists-gets-further-evidence-that-mars-once-had-oceans/ | 2013-05-21T10:06:29 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.96167 | 460 | 4.03125 | 4 |
The knowledge, skills and understandings relating to students’ writing have been drawn from the Statements of Learning for English (MCEECDYA 2005).
Students are taught to write a variety of forms of writing at school. The three main forms of writing (also called genres or text types) that are taught are narrative writing, informative writing and persuasive writing. In the Writing tests, students are provided with a ‘writing stimulus' (sometimes called a prompt – an idea or topic) and asked to write a response in a particular genre or text type.
In 2013, students will be required to complete a persuasive writing task.
The Writing task targets the full range of student capabilities expected of students from Years 3 to 9. The same stimulus is used for students in Years 3, 5, 7 and 9. The lines in the response booklet for Year 3 students are more widely spaced than for Years 5, 7 and 9 and more capable students will address the topic at a higher level. The same marking guide is used to assess all students' writing, allowing for a national comparison of student writing capabilities across these year levels.
Assessing the Writing task
Students’ writing will be marked by assessors who have received intensive training in the application of a set of ten writing criteria summarised below. The full Persuasive Writing Marking Guide ( 5.7 MB) and the writing stimulus used to prompt the writing samples in the Marking Guide are both available for download.
Descriptions of the Writing criteria
|Criterion|Description of marking criterion|
|---|---|
|Audience|The writer’s capacity to orient, engage and persuade the reader|
|Text structure|The organisation of the structural components of a persuasive text (introduction, body and conclusion) into an appropriate and effective text structure|
|Ideas|The selection, relevance and elaboration of ideas for a persuasive argument|
|Persuasive devices|The use of a range of persuasive devices to enhance the writer’s position and persuade the reader|
|Vocabulary|The range and precision of contextually appropriate language choices|
|Cohesion|The control of multiple threads and relationships across the text, achieved through the use of grammatical elements (referring words, text connectives, conjunctions) and lexical elements (substitutions, repetitions, word associations)|
|Paragraphing|The segmenting of text into paragraphs that assists the reader to follow the line of argument|
|Sentence structure|The production of grammatically correct, structurally sound and meaningful sentences|
|Punctuation|The use of correct and appropriate punctuation to aid the reading of the text|
|Spelling|The accuracy of spelling and the difficulty of the words used|
The Narrative Writing Marking Guide (used in 2008 - 2010 ) is also available.
Use of formulaic structures
Beginning writers can benefit from being taught how to use structured scaffolds. One such scaffold that is commonly used is the five-paragraph argument essay. However, when students become more competent, the use of this structure can be limiting. As writers develop their capabilities they should be encouraged to move away from formulaic structures and to use a variety of different persuasive text types, styles and language features, as appropriate to different topics.
Students are required to write their opinion and to draw on personal knowledge and experience when responding to test topics. Students are not expected to have detailed knowledge about the topic. Students should feel free to use any knowledge that they have on the topic, but should not feel the need to manufacture evidence to support their argument. In fact, students who do so may undermine the credibility of their argument by making statements that are implausible.
Example topics and different styles:
City or country (see example prompt )
A beginning writer could write their opinion about living in either the city or country and give reasons for it. A more capable writer might also choose to take one side and argue for it. However, this topic also lends itself to a comparative style response from a more capable writer. It can be argued there are benefits and limitations to living in the city and living in the country. A writer could also choose to introduce other options, for example living in a large country town that might have the benefits of city and rural life. Positions taken on this topic are likely to elicit logical, practical reasons and anecdotes based on writers’ experiences.
Books or TV (see example prompt )
A beginning writer could write about their opinion of one aspect and give reasons for it. However, this topic lends itself to a comparative style response from a more capable writer. It can be argued there are benefits and limitations to both books and TV. The reasons for either side of the topic are likely to elicit logical, practical reasons and personal anecdotes based on the writer's experiences of both books and TV.
It is cruel to keep animals in cages and zoos (see example prompt )
A beginning writer could take on one side of the topic and give reasons for it. However, this topic lends itself to be further redefined. For example, a more capable writer might develop the difference between open range zoos and small cages and then argue the merits of one and limitations of the other. The animal welfare issues raised by this topic are likely to elicit very empathetic and emotive arguments based on the writer's knowledge about zoos and animals.
More information on persuasive writing can be found in the FAQ section for NAPLAN - Writing test.
National minimum standards
The national minimum standards for writing describe some of the skills and understandings students can generally demonstrate at their particular year of schooling. The standards are intended to be a snapshot of typical achievement and do not describe the full range of what students are taught or what they may achieve.
For further information on the national minimum standards see Performance Standards. | <urn:uuid:817d308c-adeb-427a-9b89-415a8f96d2ec> | CC-MAIN-2013-20 | http://www.nap.edu.au/naplan/about-each-domain/writing/writing.html | 2013-05-21T10:13:37 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.92963 | 1,150 | 4.5 | 4 |
OurDocuments.gov. Featuring 100 milestone documents of American history from the National Archives. Includes images of original primary source documents, lesson plans, teacher and student competitions, and educational resources.
In 1866 the Russian government offered to sell the territory of Alaska to the United States. Secretary of State William H. Seward, enthusiastic about the prospects of American Expansion, negotiated the deal for the Americans. Edouard de Stoeckl, Russian minister to the United States, negotiated for the Russians. On March 30, 1867, the two parties agreed that the United States would pay Russia $7.2 million for the territory of Alaska.
For less than 2 cents an acre, the United States acquired nearly 600,000 square miles. Opponents of the Alaska Purchase persisted in calling it “Seward’s Folly” or “Seward’s Icebox” until 1896, when the great Klondike Gold Strike convinced even the harshest critics that Alaska was a valuable addition to American territory.
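The per-acre figure is easy to verify with a quick calculation; this sketch uses the price and area quoted above plus the standard conversion of 640 acres per square mile (not stated in the article):

```python
price_dollars = 7_200_000      # purchase price agreed in 1867
area_sq_miles = 600_000        # "nearly 600,000 square miles"
acres = area_sq_miles * 640    # 640 acres per square mile

print(price_dollars / acres)   # ~0.019 dollars, i.e. just under 2 cents an acre
```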
The check for $7.2 million was made payable to the Russian Minister to the United States Edouard de Stoeckl, who negotiated the deal for the Russians. Also shown here is the Treaty of Cession, signed by Tzar Alexander II, which formally concluded the agreement for the purchase of Alaska from Russia. | <urn:uuid:8182aa95-78e2-42b3-a86d-30bb1a0fa8f8> | CC-MAIN-2013-20 | http://www.scoop.it/t/on-this-day/p/3018291670/our-documents-check-for-the-purchase-of-alaska-1868 | 2013-05-21T10:21:24 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.934167 | 279 | 4.03125 | 4 |
Filed under: Foundational Hand
After studying the proportions of the Foundational Hand letters, the next step is to start writing the letters.
Each letter is constructed rather than written. The letters are made up of a combination of pen strokes, which are only made in a top-to-bottom or left-to-right direction. The pen is never pushed up.
When we studied the proportions of the Foundational Hand we could group the letters according to their widths. Now, we can group them according to the order and direction of the pen strokes.
You may find it useful to look at the construction grid whilst studying the order and direction of the letters.
The first group consists of the letters c, e, and o.
These letters are based on the circle shape. This shape is produced with two pen strokes. Visualise a clock face and start the first stroke at approximately the 11, and finish it in an anti-clockwise direction at 5. The second stroke starts again at the 11 and finishes in a clockwise direction on the 5 to complete the letter o.
The first pen-stroke for the letters c and e are the same as the first of the letter o. The second pen-stroke on the c and e are shorter and finish around the 1 position on the imaginary clock face.
Finally, the letter e has a third stroke, starting at the end of the second stroke and finishes when it touches the first stroke.
The next group of letters are d, q, b and p. All these letters combine curved and straight pen strokes. When writing these letters it can be useful to think of the underlying circle shape, which your pen will leave or join at certain points depending upon which letter is being written.
The first stroke of the b starts at the ascender height of the letter, which can be eyed in at just under half the x-height (body height of letters with no ascender or descender). Continue the ascender stroke of the b until it ‘picks up’ the circle shape, follow round the circle until the pen reaches the 5 on the imaginary clock face. The second stroke starts on the first stroke following the circle round until it touches the end of the first stroke.
The letter d is similar to the c except it has a third stroke for the ascender, which touches the ends of the first and second strokes before finishing on the write-line.
Letter p starts with a vertical stroke from the x-height down to the imaginary descender line, which is just under half the x-height below the write-line. The second and third strokes are curved, starting on the descender stroke and following round the imaginary circle.
The letter q is almost the same as the d, except it has a descender stroke rather than an ascender stroke.
Letters a, h, m, n, r
All these letters combine curved and straight pen strokes. Once again, think of the underlying circle shape, which your pen will leave or join at certain points depending upon the letter being written.
The Letter h consists of two pen strokes. The first is a vertical ascender stroke. The second stroke starts curved, follows the circle round, then leaves it and becomes straight.
The letter n is produced exactly the same way as the letter h, except the first stroke is not so tall as it starts on the x-height line. The first two pen strokes of the letter m are the same as the letter n. Then a third stroke is added which is identical to the second stroke.
The letter r is also written the same way as the letter n except the second stroke finishes at the point where the circle would have been left and the straight is picked up.
The first stroke of letter a is the same as the second stroke of the letters h, m and n. The second stroke follows the circle. Finally, the third stroke starts at the same point as the second stroke, but is a straight line at a 30° angle and touches the first stroke.
The next group of letters are l, u and t. These letters are straight-forward. The letter l is the same as the first stroke of letter b.
The letter u is also similar to the first stroke of letter b except it starts lower down. The second stroke starts on the x-height line and finishes on the write-line.
Letter t has the same first stroke as letter u. It is completed by a second horizontal stroke.
The following letters k, v, w, x, y and z are made of at least one diagonal pen stroke.
The letter k starts with a vertical ascender stroke, then a second stroke diagonal stroke which joins the vertical stroke. The final stroke is also diagonal and starts where the first and second stroke meet and stops when it touches the write-line. If you look closely you will see it goes further out than the second stroke. This makes the letter look more balanced. If the end of these two pen-strokes lined up the letter would look like it is about to fall over.
Letter v is simply two diagonal strokes and these are repeated to produce the letter w.
The letter y is the same as the v except the second stroke is extended to create a descender stroke.
Letter x is a little different, you need to create it in such a way that the two stroke cross slightly above the half-way mark on the x-height. This means the top part will be slightly smaller than the bottom which will give the letter a better balance.
Finally, in this group is letter z. The easiest way to produce this is with the two horizontal pen strokes, then join these two strokes with a diagonal pen-stroke to complete the letter.
Now for the hardest letters: f, g and s. Out of these three letters, f is the simplest. It starts with a vertical ascender stroke – except this is not as tall as the other ascender strokes we have produced so far. This is because we have to allow for the second curved stroke. The overall height of these two strokes should be the same as other letters that have an ascender. Finally, we need a horizontal stroke to complete the letter.
Which will you find the hardest letter g or s? These are trickier because unlike all the other letters we have written they do not relate so well to the grid.
The letter g is made of a circle shape, with an oval/bowl shape under the write-line. You can see the letter g is made of three pen-strokes. The first stroke is just like the first stroke of the letter o for example, except it is a smaller. The second stroke starts like the second stroke of the letter o, but when it joins the first stroke it continues and changes direction in the gap between the bottom of the shape and the write-line. The third stroke completes the oval shape. Finally, we have a little fourth stroke to complete the letter.
The letter s is made up of three strokes. The first stroke is sort of an s shape! The second and third strokes complete the letter s. These are easier to get right than the first stroke because they basically follow the circle shape on our construction grid. The secret to this letter is to make both ‘ends’ of the first stroke not too curved. Because the other two strokes are curved they will compensate and give the overall correct shape.
Finally, we are left with the letters i and j, which are made from one pen-stroke. You just need to remember to curve the end of the stroke when writing the letter j. | <urn:uuid:ebc9b632-c27d-4adb-85bd-b11864ab1adf> | CC-MAIN-2013-20 | http://www.scribblers.co.uk/blog/tag/starting-calligraphy/ | 2013-05-21T10:35:15 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.946402 | 1,563 | 4.15625 | 4 |
In mathematics, hyperbolic functions are analogs of the ordinary trigonometric, or circular, functions. The basic hyperbolic functions are the hyperbolic sine "sinh" (typically pronounced /ˈsɪntʃ/ or /ˈʃaɪn/), and the hyperbolic cosine "cosh" (typically pronounced /ˈkɒʃ/), from which are derived the hyperbolic tangent "tanh" (typically pronounced /ˈtæntʃ/ or /ˈθæn/), etc., in analogy to the derived trigonometric functions. The inverse hyperbolic functions are the area hyperbolic sine "arsinh" (also called "asinh", or sometimes by the misnomer of "arcsinh") and so on.
Just as the points (cos t, sin t) form a circle with a unit radius, the points (cosh t, sinh t) form the right half of the equilateral hyperbola. Hyperbolic functions occur in the solutions of some important linear differential equations, for example the equation defining a catenary, and Laplace's equation in Cartesian coordinates. The latter is important in many areas of physics, including electromagnetic theory, heat transfer, fluid dynamics, and special relativity.
Hyperbolic functions were introduced in the 18th century by the Swiss mathematician Johann Heinrich Lambert.
The hyperbolic functions are defined in terms of the exponential function: sinh x = (e^x − e^(−x))/2, cosh x = (e^x + e^(−x))/2, tanh x = sinh x / cosh x, coth x = 1/tanh x, sech x = 1/cosh x, and csch x = 1/sinh x.
Via complex numbers the hyperbolic functions are related to the circular functions as follows: sin(ix) = i sinh x and cos(ix) = cosh x,
where i is the imaginary unit, defined by i² = −1.
Note that, by convention, sinh²x means (sinh x)², not sinh(sinh x); similarly for the other hyperbolic functions when used with positive exponents. Another notation for the hyperbolic cotangent function is ctnh x, though coth x is far more common.
Hyperbolic sine and cosine satisfy the identity cosh²x − sinh²x = 1,
which is similar to the Pythagorean trigonometric identity.
It can also be shown that the area under the graph of cosh x from A to B is equal to the arc length of cosh x from A to B.
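A quick numerical sketch (not in the original article) makes this area/arc-length coincidence easy to check; both quantities equal sinh(B) − sinh(A), here approximated with a midpoint Riemann sum:

```python
import math

A, B, n = -1.0, 2.0, 100_000
dx = (B - A) / n
midpoints = [A + (k + 0.5) * dx for k in range(n)]

area = sum(math.cosh(x) for x in midpoints) * dx
arc_length = sum(math.sqrt(1 + math.sinh(x) ** 2) for x in midpoints) * dx

print(area, arc_length)              # both ~ 4.8021
print(math.sinh(B) - math.sinh(A))   # exact value of each: sinh(2) - sinh(-1)
```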
For a full list of integrals of hyperbolic functions, see list of integrals of hyperbolic functions
For example, ∫ sinh x dx = cosh x + C and ∫ cosh x dx = sinh x + C. In these expressions, C is called the constant of integration.
It is possible to express the above functions as Taylor series: sinh x = x + x^3/3! + x^5/5! + ... and cosh x = 1 + x^2/2! + x^4/4! + ..., with both series converging for all x.
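A small numerical sketch (not from the original article) comparing a truncated Taylor series with Python's built-in hyperbolic functions shows how quickly the series converge:

```python
import math

def sinh_series(x, terms=8):
    """Truncated Taylor series: sinh x = x + x^3/3! + x^5/5! + ..."""
    return sum(x ** (2 * n + 1) / math.factorial(2 * n + 1) for n in range(terms))

def cosh_series(x, terms=8):
    """Truncated Taylor series: cosh x = 1 + x^2/2! + x^4/4! + ..."""
    return sum(x ** (2 * n) / math.factorial(2 * n) for n in range(terms))

x = 1.5
print(sinh_series(x), math.sinh(x))   # both ~ 2.12928
print(cosh_series(x), math.cosh(x))   # both ~ 2.35241
# The fundamental identity cosh^2 x - sinh^2 x = 1 also checks out numerically:
print(math.cosh(x) ** 2 - math.sinh(x) ** 2)   # ~ 1.0
```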
A point on the hyperbola xy = 1 with x > 1 determines a hyperbolic triangle in which the side adjacent to the hyperbolic angle is associated with cosh while the side opposite is associated with sinh. However, since the point (1,1) on this hyperbola is a distance √2 from the origin, the normalization constant 1/√2 is necessary to define cosh and sinh by the lengths of the sides of the hyperbolic triangle.
The parametrization (cosh t, sinh t) traces only the right branch of the unit hyperbola, because cosh²t − sinh²t = 1 and cosh t ≥ 1 for all t.
The hyperbolic functions are periodic with complex period 2πi (πi for hyperbolic tangent and cotangent).
The parameter t is not a circular angle, but rather a hyperbolic angle which represents twice the area between the x-axis, the hyperbola and the straight line which links the origin with the point (cosh t, sinh t) on the hyperbola.
The function cosh x is an even function, that is symmetric with respect to the y-axis.
The function sinh x is an odd function, that is −sinh x = sinh(−x), and sinh 0 = 0.
The hyperbolic functions satisfy many identities, all of them similar in form to the trigonometric identities. In fact, Osborn's rule states that one can convert any trigonometric identity into a hyperbolic identity by expanding it completely in terms of integral powers of sines and cosines, changing sine to sinh and cosine to cosh, and switching the sign of every term which contains a product of 2, 6, 10, 14, ... sinhs. This yields for example the addition theorems sinh(x + y) = sinh x cosh y + cosh x sinh y and cosh(x + y) = cosh x cosh y + sinh x sinh y,
the "double angle formulas" sinh 2x = 2 sinh x cosh x and cosh 2x = cosh²x + sinh²x = 2 cosh²x − 1,
and the "half-angle formulas" sinh²(x/2) = (cosh x − 1)/2 and cosh²(x/2) = (cosh x + 1)/2.
The derivative of sinh x is cosh x and the derivative of cosh x is sinh x; this is similar to trigonometric functions, albeit the sign is different (i.e., the derivative of cos x is −sin x).
The Gudermannian function gives a direct relationship between the circular functions and the hyperbolic ones that does not involve complex numbers.
The graph of the function a cosh(x/a) is the catenary, the curve formed by a uniform flexible chain hanging freely under gravity.
From the definitions of the hyperbolic sine and cosine, we can derive the following identities: e^x = cosh x + sinh x and e^(−x) = cosh x − sinh x.
These expressions are analogous to the expressions for sine and cosine, based on Euler's formula, as sums of complex exponentials.
Since the exponential function can be defined for any complex argument, we can extend the definitions of the hyperbolic functions also to complex arguments. The functions sinh z and cosh z are then holomorphic.
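As a quick check of these complex-argument relationships (a sketch, not part of the original text), Python's cmath module evaluates both sides numerically:

```python
import cmath

z = 0.7 + 0.3j

# cosh z and sinh z are defined for any complex argument via the exponential:
print(cmath.cosh(z), (cmath.exp(z) + cmath.exp(-z)) / 2)   # equal
print(cmath.sinh(z), (cmath.exp(z) - cmath.exp(-z)) / 2)   # equal

# Relation to the circular functions: cos(iz) = cosh z and sin(iz) = i*sinh z
print(cmath.cos(1j * z), cmath.cosh(z))                    # equal
print(cmath.sin(1j * z), 1j * cmath.sinh(z))               # equal
```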
Relationships to ordinary trigonometric functions are given by Euler's formula for complex numbers, e^(ix) = cos x + i sin x, from which cosh(ix) = cos x and sinh(ix) = i sin x. | <urn:uuid:34eefbfb-968b-4240-9caa-0182a3ca0559> | CC-MAIN-2013-20 | http://www.thefullwiki.org/Hyperbolic_tangent | 2013-05-21T09:59:49 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.893241 | 1,119 | 4.0625 | 4 |
Surface area is a two-dimensional property of a three-dimensional figure. Cones are similar to pyramids, except they have a circular base instead of a polygonal base. Therefore, the surface area of a cone is equal to the sum of the circular base area and the lateral surface area, calculated by multiplying half of the circumference by the slant height. Related topics include pyramid and cylinder surface area.
If you want to calculate the surface area of a cone, you only need to know 2 dimensions: the first is the slant height l and the second is the radius. So what we're going to do is separate this into two pieces. The first is the base, which is a circle with radius r, and the second is this slant height l. So if I took a scissors, cut the cone part open and fanned it out, it would look like a sector. Well, what I could do here is rearrange this sector into a parallelogram. So again, if I cut this into really tiny pieces then I'll be able to organize it into a parallelogram where I would be able to calculate its area. And the way that we'll calculate its area is first by saying, well, what are these lines that are going out?
Well those lines are going to be your l, your slant height and this side right here is going to be half of your circumference and half of a circumference is pi times r because the whole circumference is 2 pi r. So this down here is pi times r, so if our height l and our base is pi times r then the area of this is equal to pi times r times l. So the surface area of a cone which I'm going to write over here is equal to the base pi r squared plus this lateral area which is found using your slant height. So that's going be pi times r times l, so you only need to know 2 dimensions the radius and the slant height and you can calculate the surface area of any cone. | <urn:uuid:8c57b621-6116-4614-a9fc-c31bd7ee9c11> | CC-MAIN-2013-20 | http://www.brightstorm.com/math/geometry/area/surface-area-of-cones/ | 2013-05-23T18:31:13 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703682988/warc/CC-MAIN-20130516112802-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.955102 | 416 | 4.125 | 4 |
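To make the result concrete, here is a short numerical sketch of the formula just derived, surface area = πr² + πrl (the radius and slant height values are made up for illustration):

```python
import math

def cone_surface_area(r, l):
    """Total surface area of a cone: base (pi*r^2) plus lateral area (pi*r*l)."""
    return math.pi * r ** 2 + math.pi * r * l

print(cone_surface_area(3, 5))   # radius 3, slant height 5 -> ~75.40
print(math.pi * 3 * (3 + 5))     # same result from the factored form pi*r*(r + l)
```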
In this lecture you will learn how to solve quadratic systems. First you will start with linear-quadratic systems and their solutions, before you move into quadratic-quadratic systems and their solutions. Lastly, you will learn how to solve systems of quadratic inequalities.
- For a linear-quadratic system, use substitution to solve.
- For a quadratic-quadratic system, use elimination to solve.
- For inequalities, remember the conventions about graphing boundaries using either solid or dotted lines.
- If possible, check your solutions to systems of equations by graphing.
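As a quick illustration of the substitution approach for a linear-quadratic system (a made-up example, not taken from the lecture), consider y = x² together with y = x + 2:

```python
import math

# Substituting y = x**2 into y = x + 2 gives x**2 - x - 2 = 0.
a, b, c = 1, -1, -2
disc = b * b - 4 * a * c
x_roots = [(-b + s * math.sqrt(disc)) / (2 * a) for s in (1, -1)]

solutions = [(x, x + 2) for x in x_roots]   # back-substitute into the line
print(solutions)                            # [(2.0, 4.0), (-1.0, 1.0)]
```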
Solving Quadratic Systems
Lecture Slides are screen-captured images of important points in the lecture. Students can download and print out these lecture slide images to do practice problems as well as take notes while watching the lecture. | <urn:uuid:11f102ce-459d-4c2d-8912-6980537bf6dc> | CC-MAIN-2013-20 | http://www.educator.com/mathematics/algebra-2/fraser/solving-quadratic-systems.php | 2013-05-23T18:46:10 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703682988/warc/CC-MAIN-20130516112802-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.920276 | 178 | 4.0625 | 4 |
An earthquake is a sudden vibration or trembling in the Earth. More than 150,000 tremors strong enough to be felt by humans occur each year worldwide (see Chance of an Earthquake). Earthquake motion is caused by the quick release of stored potential energy into the kinetic energy of motion. Most earthquakes are produced along faults, tectonic plate boundary zones, or along the mid-oceanic ridges (Figures 1 and 2).
Figure 1: Distribution of earthquake epicenters from 1975 to 1995. Depth of the earthquake focus is indicated by color. Deep earthquakes occur in areas where oceanic crust is being actively subducted. About 90% of all earthquakes occur at a depth between 0 and 100 kilometers. (Source: U.S. Geologic Survey, National Earthquake Information Center)
Figure 2: Distribution of earthquakes with a magnitude less than 5.0 relative to the various tectonic plates found on the Earth's surface. Each tectonic plate has been given a unique color. This illustration indicates that the majority of small earthquakes occur along plate boundaries. (Source: PhysicalGeography.net)
At these areas, large masses of rock that are moving past each other can become locked due to friction. Friction is overcome when the accumulating stress has enough force to cause a sudden slippage of the rock masses. The magnitude of the shock wave released into the surrounding rocks is controlled by the quantity of stress built up because of friction, the distance the rock moved when the slippage occurred, and the ability of the rock to transmit the energy contained in the seismic waves. The San Francisco earthquake of 1906 involved a six meter horizontal displacement of bedrock. Sometime after the main shock wave, aftershocks can occur because of the continued release of frictional stress. Most aftershocks are smaller than the main earthquake, but they can still cause considerable damage to already weakened natural and human-constructed features. Earthquakes that occur under or near bodies of water can give rise to tsunamis, which in cases like the December 26, 2004 Sumatra-Andaman Island earthquake resulted in far greater destruction and loss of life than the initial earthquake.
Earthquakes are a form of wave energy that is transferred through bedrock. Motion is transmitted from the point of sudden energy release, the earthquake focus (hypocenter), as spherical seismic waves that travel in all directions outward (Figure 3). The point on the Earth's surface directly above the focus is termed the epicenter. Two different types of seismic waves have been described by geologists: body waves and surface waves. Body waves are seismic waves that travel through the lithosphere. Two kinds of body waves exist: P-waves and S-waves. Both of these waves produce a sharp jolt or shaking. P-waves or primary waves are formed by the alternate expansion and contraction of bedrock and cause the volume of the material they travel through to change. They travel at a speed of about 5 to 7 kilometers per second through the lithosphere and about 8 kilometers per second in the asthenosphere. The speed of sound is about 0.30 kilometers per second. P-waves also have the ability to travel through solid, liquid, and gaseous materials. When some P-waves move from the ground to the lower atmosphere, the sound wave that is produced can sometimes be heard by humans and animals.
Figure 3: Movement of body waves away from the focus of the earthquake. The epicenter is the location on the surface directly above the earthquake's focus. (Source: PhysicalGeography.net)
S-waves or secondary waves are a second type of body wave. These waves are slower than P-waves and can only move through solid materials. S-waves are produced by shear stresses and move the materials they pass through in a perpendicular (up and down or side to side) direction.
Surface waves travel at or near the Earth's surface. These waves produce a rolling or swaying motion causing the Earth's surface to behave like waves on the ocean. The velocity of these waves is slower than body waves. Despite their slow speed, these waves are particularly destructive to human construction because they cause considerable ground movement.
Earthquake Magnitude and Energy
Table 1: Relationship between Richter Scale magnitude and energy released.
|Magnitude|Energy released (joules)|Description|
|---|---|---|
|2.0|1.3 x 10^8|Smallest earthquake detectable by people.|
|5.0|2.8 x 10^12|Energy released by the Hiroshima atomic bomb.|
|6.0 - 6.9|7.6 x 10^13 to 1.5 x 10^15|About 120 shallow earthquakes of this magnitude occur each year on the Earth.|
|6.7|7.7 x 10^14|Northridge, California earthquake January 17, 1994.|
|7.0|2.1 x 10^15|Major earthquake threshold. Haiti earthquake of January 12, 2010 resulted in an estimated 222,570 deaths.|
|7.4|7.9 x 10^15|Turkey earthquake August 17, 1999. More than 12,000 people killed.|
|7.6|1.5 x 10^16|Deadliest earthquake in the last 100 years. Tangshan, China, July 28, 1976. Approximately 255,000 people perished.|
|8.3|1.6 x 10^17|San Francisco earthquake of April 18, 1906.|
|9.0||Japan earthquake March 11, 2011.|
|9.1|4.3 x 10^18|December 26, 2004 Sumatra earthquake which triggered a tsunami and resulted in 227,898 deaths spread across fourteen countries.|
|9.5|8.3 x 10^18|Most powerful earthquake recorded in the last 100 years. Southern Chile on May 22, 1960. Claimed 3,000 lives.|
The strength of an earthquake can be measured by a device called a seismograph. When an earthquake occurs this device converts the wave energy into a standard unit of measurement such as the Richter scale. In the Richter scale, units of measurement are referred to as magnitudes. The Richter scale is logarithmic: each unit increase in magnitude represents a tenfold increase in measured wave amplitude, and roughly a thirtyfold increase in the energy released. Table 1 describes the relationship between Richter scale magnitude and energy released. The following equation can be used to approximate the amount of energy released from an earthquake in joules when Richter magnitude (M) is known:
Energy in joules = 1.74 x 10^(5 + 1.44*M)
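A small sketch applying this approximation (using the formula exactly as given above; outputs are rounded):

```python
def quake_energy_joules(magnitude):
    """Approximate energy released, in joules, for a given Richter magnitude."""
    return 1.74 * 10 ** (5 + 1.44 * magnitude)

for m in (2.0, 5.0, 6.7, 8.3, 9.5):
    print(m, f"{quake_energy_joules(m):.1e} J")   # matches the values in Table 1
# Each whole-magnitude step multiplies the energy by 10**1.44, roughly 28 times.
```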
Figures 4 and 5 describe the spatial distribution of small and large earthquakes respectively. These maps indicate that large earthquakes have distributions that are quite different from small events. Many large earthquakes occur some distance away from a plate boundary. Some geologists believe that these powerful earthquakes may be occurring along ancient faults that are buried deep in the continental crust. Recent seismic studies in the central United States have discovered one such fault located thousands of meters below the lower Mississippi Valley. Some large earthquakes occur at particular locations along the plate boundaries. Scientists believe that these areas represent zones along adjacent plates that have greater frictional resistance and stress.
Figure 4: Distribution of earthquakes with a magnitude less than 5 on the Richter Scale. (Image Source: PhysicalGeography.net)
Figure 5: Distribution of earthquakes with a magnitude greater than 7 on the Richter Scale. (Image Source: PhysicalGeography.net)
The Richter Scale Magnitude, while the best known, is one of several measures of the magnitude of an earthquake. The most commonly used are:
- Local magnitude (ML), commonly referred to as "Richter magnitude;"
- Surface-wave magnitude (Ms);
- Body-wave magnitude (Mb); and
- Moment magnitude (Mw).
Scales 1 to 3 have limited range and applicability and do not satisfactorily measure the size of the largest earthquakes. The moment magnitude (Mw) scale, based on the concept of seismic moment, is uniformly applicable to all sizes of earthquakes but is more difficult to compute than the other types. All magnitude scales should yield approximately the same value for any given earthquake.
The severity of an earthquake can be expressed in terms of both intensity and magnitude. However, the two terms are quite different, and they are often confused.
Intensity is based on the observed effects of ground shaking on people, buildings, and natural features. It varies from place to place within the disturbed region depending on the location of the observer with respect to the earthquake epicenter while magnitude is related to the amount of seismic energy released at the hypocenter of the earthquake.
Although numerous intensity scales have been developed over the last several hundred years to evaluate the effects of earthquakes, the one currently used in the United States is the Modified Mercalli (MM) Intensity Scale. The lower numbers of the intensity scale generally deal with the manner in which the earthquake is felt by people. The higher numbers of the scale are based on observed structural damage. Structural engineers usually contribute information for assigning intensity values of VIII or above.
The following is an abbreviated description of the 12 levels of Modified Mercalli intensity.
I. Not felt except by a very few under especially favorable conditions.
II. Felt only by a few persons at rest, especially on upper floors of buildings. Delicately suspended objects may swing.
III. Felt quite noticeably by persons indoors, especially on upper floors of buildings. Many people do not recognize it as an earthquake. Standing motor cars may rock slightly. Vibration similar to the passing of a truck. Duration estimated.
IV. Felt indoors by many, outdoors by few during the day. At night, some awakened. Dishes, windows, doors disturbed; walls make cracking sound. Sensation like heavy truck striking building. Standing motor cars rocked noticeably.
V. Felt by nearly everyone; many awakened. Some dishes, windows broken. Unstable objects overturned. Pendulum clocks may stop.
VI. Felt by all, many frightened. Some heavy furniture moved; a few instances of fallen plaster. Damage slight.
VII. Damage negligible in buildings of good design and construction; slight to moderate in well-built ordinary structures; considerable damage in poorly built or badly designed structures; some chimneys broken.
VIII. Damage slight in specially designed structures; considerable damage in ordinary substantial buildings with partial collapse. Damage great in poorly built structures. Fall of chimneys, factory stacks, columns, monuments, walls. Heavy furniture overturned.
IX. Damage considerable in specially designed structures; well-designed frame structures thrown out of plumb. Damage great in substantial buildings, with partial collapse. Buildings shifted off foundations.
X. Some well-built wooden structures destroyed; most masonry and frame structures destroyed with foundations. Rails bent.
XI. Few, if any (masonry) structures remain standing. Bridges destroyed. Rails bent greatly.
XII. Damage total. Lines of sight and level are distorted. Objects thrown into the air.
Earthquake Damage and Destruction
Earthquakes are a considerable hazard to humans. They can cause destruction through structural damage to buildings and dwellings, and through fires, tsunamis, and mass wasting (see Figures 6 to 10). Earthquakes can also take human lives. The amount of damage and loss of life depends on a number of factors. Some of the more important factors are:
- Time of day. Higher losses of life tend to occur on weekdays between the hours of 9:00 AM to 4:00 PM. During this time interval many people are in large buildings because of work or school. Large structures are often less safe than smaller homes in an earthquake.
- Magnitude of the earthquake and duration of the event.
- Distance from the earthquake's focus. The strength of the shock waves diminishes with distance from the focus.
- Geology of the area affected and soil type. Some rock types transmit seismic wave energy more readily. Buildings on solid bedrock tend to receive less damage. Unconsolidated rock and sediments have a tendency to increase the amplitude and duration of the seismic waves increasing the potential for damage. Some soil types when saturated become liquefied (Figure 6).
- Type of building construction. Some building materials and designs are more susceptible to earthquake damage (Figure 7).
- Population density. More people often means greater chance of injury and death.
The greatest loss of life because of an earthquake this century occurred in Tangshan, China in 1976 when an estimated 250,000 people died. In 1556, a large earthquake in the Shanxi Province of China was estimated to have caused the death of about 1,000,000 people.
A common problem associated with earthquakes in urban areas is fire (Figure 8). Shaking and ground displacement often causes the severing of electrical and gas lines leading to the development of many localized fires. Response to this problem is usually not effective because shock waves also rupture pipes carrying water. In the San Francisco earthquake of 1906, almost 90% of the damage to buildings was caused by fire.
In mountainous regions, earthquake-provoked landslides can cause many deaths and severe damage to built structures (Figure 9). The town of Yungay, Peru was buried by a debris flow that was triggered by an earthquake that occurred on May 31, 1970. This disaster engulfed the town in seconds with mud, rock, ice, and water and took the lives of about 20,000 people.
Another consequence of earthquakes is the generation of tsunamis (Figure 10). Tsunamis, or tidal waves, form when an earthquake triggers a sudden movement of the seafloor. This movement creates a wave in the water body which radiates outward in concentric shells. On the open ocean, these waves are usually no higher than one to three meters in height and travel at speed of about 750 kilometers per hour. Tsunamis become dangerous when they approach land. Frictional interaction of the waves with the ocean floor, as they near shore, causes the waves to slow down and collide into each other. This amalgamation of waves then produces a super wave that can be as tall as 65 meters in height.
The US Geological Survey estimates that at least 1,783 deaths worldwide resulted from earthquake activity in 2009. In 2010, the number rose to 226,729, largely as the result of the 222,570 people killed by the January 12, 2010 earthquake in Haiti.
The deadliest earthquake of 2009 was a magnitude 7.5 event that killed approximately 1,117 people in southern Sumatra, Indonesia on Sept. 30, according to the U.S. Geological Survey (USGS) and confirmed by the United Nations Office for Coordination of Humanitarian Affairs (OCHA). However, the number of earthquake-related fatalities in 2009 was far less than the 2008 count of over 88,000. The high number of fatalities in 2008 was primarily due to the devastating magnitude 7.9 earthquake that occurred in Sichuan, China on May 12.
Although unrelated, the Sept. 30 Indonesian earthquake occurred a day after the year’s strongest earthquake, a magnitude 8.1 on Sept. 29 in the Samoa Islands region. Tsunamis generated by that earthquake killed 192 people in American Samoa, Samoa and Tonga. A magnitude 6.3 earthquake hit the medieval city of L’Aquila in central Italy on April 6, killing 295 people.
Overall, earthquakes took the lives of people in 15 countries on four continents during 2009, including Afghanistan, Bhutan, China, Costa Rica, Greece, Indonesia, Italy, Kazakhstan, Honduras, Japan, Malawi, Samoa, South Africa and Tonga, as well as the U.S. territory of American Samoa. Earthquakes injured people in 11 additional countries, including the mainland United States, where a magnitude 4.4 earthquake on May 2 injured one person in the Los Angeles area.
The biggest 2009 earthquake in the 50 United States was in the Aleutian Islands of Alaska. The magnitude 6.5 earthquake occurred in the Fox Islands on Oct. 13. It was felt at the towns of Akutan and Unalaska, but caused no casualties or damage. The greatest earthquake for the year in the contiguous United States was a magnitude 5.2 event on October 2 in the Owens Valley southeast of Lone Pine, California. Because of the sparse population in the epicentral area, this quake caused no damage although it was felt as far away as Merced and Los Angeles, California and Las Vegas, Nevada.
A magnitude 9.1 Sumatra-Andaman Island earthquake and subsequent tsunami on December 26, 2004 killed 227,898 people, which is the fourth largest casualty toll for earthquakes and the largest toll for a tsunami in recorded history. As a consequence of that earthquake, the USGS has significantly improved its earthquake notification and response capabilities. Improvements include the addition of nine real-time seismic stations across the Caribbean basin, a seismic and tsunami prone region near the U.S. southern border, implementation of a 24x7 earthquake operations center at the USGS National Earthquake Information Center (NEIC), and development of innovative tools for rapid evaluation of population exposure and damage to potentially damaging earthquakes.
The USGS estimates that several million earthquakes occur throughout the world each year, although most go undetected because they hit remote areas or have very small magnitudes. The USGS NEIC publishes the locations for about 40 earthquakes per day, or about 14,500 annually, using a publication threshold of magnitude 4.5 or greater worldwide or 2.5 or greater within the United States. On average, only 18 of these earthquakes occur at a magnitude of 7.0 or higher each year.
In 2009, 17 earthquakes reached a magnitude of 7.0 or higher, with a single one topping a magnitude of 8.0. These statistics for large magnitude earthquakes are higher than those of 2008, which experienced only 12 earthquakes over magnitude 7.0 and none over 8.0. Factors such as the size of an earthquake, the location and depth of the earthquake relative to population centers, and fragility of buildings, utilities and roads all influence how earthquakes will affect nearby communities.
Table 2. Notable Earthquakes and Their Estimated Magnitude
|Date|Location|Deaths|Magnitude|
|---|---|---|---|
|January 23, 1556||||
|August 17, 1668||||
|November 1, 1755||||
|December 16, 1857||||
|October 27, 1891||||
|June 15, 1896||||
|April 18, 1906||3,000|7.8|
|August 17, 1906||||
|December 28, 1908||||
|December 16, 1920||||
|September 1, 1923||||
|May 22, 1927||||
|January 13, 1934||||
|December 26, 1939||||
|February 29, 1960||||
|May 22, 1960||||
|March 28, 1964|Prince William Sound, AK|||
|May 31, 1970||||
|July 27, 1976||||
|September 19, 1985||||
|December 7, 1988||||
|August 17, 1999||||
|January 26, 2001||||
|December 26, 2003||||
|December 26, 2004|Off west coast northern Sumatra|||
|October 8, 2005||||
|May 26, 2006||||
|May 12, 2008|Eastern Sichuan, China|||
|January 12, 2010|Near Port-au-Prince, Haiti|||
|March 11, 2011|Pacific Ocean, East of Oshika Peninsula, Japan|||
* Fatalities in the 1976 Tangshan, China earthquake were estimated as high as 655,000.
Source: Preferred Magnitudes of Selected Significant Earthquakes, USGS, 2010 (with additions on the two most recent major earthquakes in Haiti and Japan).
The following links provide some more information about earthquakes.
- American Geophysical Union (AGU)
- Animation of P, S & Surface Waves
- Animations of Seismology Fundamentals
- Association of American State Geologists (AASG)
- Association of Bay Area Governments (ABAG)
- California Geological Survey (CGS)
- California Office of Emergency Services (OES)
- California Seismic Safety Commission
- Center for Earthquake Research & Information (CERI)
- Central United States Earthquake Consortium (CUSEC)
- Consortium of Universities for Research in Earthquake Engineering (CUREE)
- COSMOS Virtual Data Center
- CREW - Cascadia Region Earthquake Workgroup
- Earthquake Engineering Research Institute (EERI)
- Earthquake Information for 2009, USGS
- Earthquake Information for 2010, USGS
- Earthquake Monitoring
- Earthquakes - Online University
- Earthquakes by Bruce A. Bolt Online Companion
- Earthquakes Cause over 1700 Deaths in 2009, USGS
- Earth Science Education Activities
- European-Mediterranean Seismological Centre
- FEMA - Federal Emergency Management Agency
- Finite-source Rupture Model Database
- Global Earthquake Explorer
- GSA - Geological Society of America
- Incorporated Research Institutes for Seismology (IRIS)
- International Association of Seismology and Physics of the Earth's Interior (IASPEI)
- International Seismological Centre (ISC)
- John Lahr's Earthquake website
- McConnell, D., D. Steer, C. Knight, K. Owens, and L. Park. 2010. The Good Earth. 2nd Edition. McGraw-Hill, Dubuque, Iowa.
- Mid-America Earthquake Center
- Multi-Disciplinary Center for Earthquake Engineering Research (MCEER)
- National Geophysical Data Center (NGDC) - NOAA
- National Information Centre of Earthquake Engineering (NICEE)
- National Science Foundation (NSF)
- Natural Hazards Center
- Northern California Earthquake Data Center
- Observatories and Research Facilities for EUropean Seismology (ORFEUS)
- Plummer, C., D. Carlson, and L. Hammersley. 2010. Physical Geology. 13th Edition. McGraw-Hill, Dubuque, Iowa.
- Project IDA
- Quake-Catcher Network
- Saint Louis University Earthquake Center
- Seattle Fault Earthquake Scenario
- Seismographs: Keeping Track of Earthquakes
- Seismological Society of America (SSA)
- Seismo-surfing the Internet for Earthquake Data
- Smithsonian Global Volcanism Program
- SOPAC (Scripps Orbit and Permanent Array Center)
- Southern California Earthquake Center (SCEC)
- Tarbuck, E.J., F.K. Lutgens, and D. Tasa. 2009. Earth Science. 12th Edition. Prentice Hall, Upper Saddle River, New Jersey.
- Tectonics Observatory
- Tracing earthquakes: seismology in the classroom
- UPSeis Seismology Questions Answered
- USGS Earthquake Hazards Program, U.S. Geological Survey
- Western States Seismic Policy Council (WSSPC)
- World Data Center System
- World Organization of Volcano Observatories
- World Seismic Safety Initiative (WSSI) | <urn:uuid:99446ec0-7d83-4817-851c-637593492317> | CC-MAIN-2013-20 | http://www.eoearth.org/articles/view/151858/Mid-ocean_ridges/San_Francisco_Earthquake_of_1906/ | 2013-05-23T18:30:41 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703682988/warc/CC-MAIN-20130516112802-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.920046 | 4,773 | 4.34375 | 4 |
In 2006, high sea temperatures caused severe coral bleaching in the Keppel Islands, in the southern part of Australia's Great Barrier Reef, the largest coral reef system in the world. The damaged reefs were then covered by a single species of seaweed, which threatened to suffocate the coral and cause further loss.
A "lucky combination" of rare circumstances has meant the reef has been able to make a recovery. Abundant corals have reestablished themselves in a single year, say the researchers from the University of Queensland's Centre for Marine Studies and the ARC Centre of Excellence for Coral Reef Studies (CoECRS).
"Three factors were critical," said Dr Guillermo Diaz-Pulido. "The first was exceptionally high regrowth of fragments of surviving coral tissue. The second was an unusual seasonal dieback in the seaweeds, and the third was the presence of a highly competitive coral species, which was able to outgrow the seaweed."
Coral bleaching occurs in higher sea temperatures when the coral lose the symbiotic algae they need to survive. The reefs then lose their colour and become more susceptible to death from starvation or disease.
The findings are important because it is extremely rare to see reports of reefs that bounce back from mass coral bleaching or other human impacts in less than a decade or two, the scientists said. The study is published in the online journal PLoS ONE.
"The exceptional aspect was that corals recovered by rapidly regrowing from surviving tissue," said Dr Sophie Dove, also from CoECRS and The University of Queensland.
"Recovery of corals is usually thought to depend on sexual reproduction and the settlement and growth of new corals arriving from other reefs. This study demonstrates that for fast-growing coral species asexual reproduction is a vital component of reef resilience."
Last year, a major global study found that coral reefs did have the ability to recover after major bleaching events, such as the one caused by the El Niño in 1998.
David Obura, the chairman of the International Union for Conservation of Nature climate change and coral reefs working group involved with the report, said: "Ten years after the world's biggest coral bleaching event, we know that reefs can recover – given the chance. Unfortunately, impacts on the scale of 1998 will reoccur in the near future, and there's no time to lose if we want to give reefs and people a chance to suffer as little as possible."
Coral reefs are crucial to the livelihoods of millions of coastal dwellers around the world and contain a huge range of biodiversity. The UN's Millennium Ecosystem Assessment says reefs are worth about $30bn annually to the global economy through tourism, fisheries and coastal protection.
But the ecosystems are under threat worldwide from overfishing, coastal development and runoff from the land, and in some areas, tourism impacts. Natural disasters such as the earthquake that triggered the Indian Ocean tsunami in 2004 have also caused reef loss.
Climate change poses the biggest threat to reefs however, as emissions of carbon dioxide make seawater increasingly acidic.
Last year a study showed that one-fifth of the world's coral reefs have died or been destroyed and the remainder are increasingly vulnerable to the effects of climate change.
The Global Coral Reef Monitoring Network says many surviving reefs could be lost over the coming decades as CO2 emissions continue to increase. | <urn:uuid:5e2f2baf-ab5a-40e4-ad86-116c02b20572> | CC-MAIN-2013-20 | http://www.guardian.co.uk/environment/2009/apr/22/coral-barrier-reef-australia | 2013-05-23T18:40:02 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703682988/warc/CC-MAIN-20130516112802-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.961021 | 683 | 4.03125 | 4 |
Earth from Space: Easter Island
Easter Island as seen by astronauts aboard the International Space Station on Sept. 25, 2002.
On Easter Sunday in 1722, Dutch explorer Jacob Roggeveen became the first known European to encounter this Polynesian island and gave it the name it has become most widely known by.
Easter Island (also known as Rapa Nui in the native language) is one of the most isolated spots on Earth, lying some 2,000 miles from the nearest areas of human habitation (Tahiti and Chile), making it even more remote than the astronauts orbiting at 210 nautical miles above the Earth. The island, which is only 15 miles long, was annexed by Chile in 1888. (In Spanish, it is called "Isla de Pascua," which means "Easter Island.")
Archaeological evidence suggests that Polynesians from other Pacific Islands discovered and colonized Easter Island around the year 400.
The island and its early inhabitants are best known for the giant stone monoliths, known as Moai, placed along the coastline.
It is thought that the population grew bigger than was sustainable on the small island, resulting in civil war, deforestation and the near collapse of the island ecosystem. Today, a new forest (primarily eucalyptus) has been established in the center of the island (the dark green in the image), according to a NASA statement.
Volcanic landforms dominate the geography of the island, including the large crater Rana Kao at the southwest end of the island and a line of cinder cones that stretch north from the central mountain. Near Rana Kao is the longest runway in Chile, which served as an emergency landing spot for the space shuttle before its retirement in 2011.
During this tutorial you will be asked to perform calculations involving trigonometric functions. You will need a calculator to proceed.
The purpose of this tutorial is to review with you the elementary properties of the trigonometric functions. Facility with this subject is essential to success in all branches of science, and you are strongly urged to review and practice the concepts presented here until they are mastered. Let us consider the right-angle triangle shown in Panel 1. The angle at C is a right angle and the angle A we will call θ. The lengths of the sides of the triangle we will denote as p, q and r. From your elementary geometry, you know several things about this triangle. For example, you know the Pythagorean relation, q² = p² + r². That is, the square of the length of the side opposite the right angle, which we call the hypotenuse, is equal to the sum of the squares of the lengths of the other two sides.
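As a quick check of this relation (a minimal Python sketch added for illustration, not part of the original tutorial), the familiar 3-4-5 right triangle satisfies it exactly:

```python
import math

p, r = 3.0, 4.0              # the two sides adjacent to the right angle
q = math.sqrt(p**2 + r**2)   # hypotenuse from the Pythagorean relation
print(q)                     # 5.0, so q² = p² + r² holds
```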
We know other things. For example, we know that if the lengths of the three sides of any triangle p, q and r are specified, then the whole triangle is determined, angles included. If you think about this for a moment, you will see it is correct. If I gave you three sticks of fixed length and told you to lay them down in a triangle, there is only one triangle which you could make. What we would like to have is a way of relating the angles in the triangle, say θ, to the lengths of the sides.
It turns out that there's no simple analytic way to do this. Even though the triangle is specified by the lengths of the three sides, there is not a simple formula that will allow you to calculate the angle θ. We must specify it in some new way.
To do this, we define three ratios of the sides of the triangle.
One ratio we call the sine of theta, written sin(θ), and it is defined as the ratio of the side opposite θ to the hypotenuse, that is r/q.
The cosine of θ, written cos(θ), is the side adjacent to θ over the hypotenuse, that is, p/q.
This is really enough, but because it simplifies our mathematics later on, we define the tangent of θ, written tan(θ), as the ratio of the opposite to the adjacent sides, that is r/p. This is not an independent definition since you can readily see that the tangent of θ is equal to the sine of θ divided by the cosine of θ. Verify for yourself that this is correct.
All scientific calculators provide this information. The first thing to ensure is that your calculator is set to the angular measure that you want. Angles are usually measured in either degrees or radians (see tutorial on DIMENSIONAL ANALYSIS). The angle 2º is a much different angle than 2 radians, since 180º = π radians = 3.1416... radians. Make sure that your calculator is set to degrees.
Now suppose that we want the sine of 24º. Simply press 24 followed by the [sin] key and the display should show the value 0.4067. Therefore, the sine of 24º is 0.4067. That is, in a triangle like Panel 1 where θ = 24º, the ratio of the sides r to q is 0.4067. Next set your calculator to radians and find the sine of 0.42 radians. To do this enter 0.42 followed by the [sin] key. You should obtain a value of 0.4078. This is nearly the same value as you obtained for the sine of 24º. Using the relation above you should confirm that 24º is close to 0.42 radians.
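If you prefer to verify these calculator exercises in code, the short Python sketch below (an addition for illustration, using the same numbers as above) reproduces them. Note that Python's math functions work in radians, so degrees must be converted first.

```python
import math

print(round(math.sin(math.radians(24)), 4))  # 0.4067, the sine of 24 degrees
print(round(math.sin(0.42), 4))              # 0.4078, the sine of 0.42 radians
print(round(math.radians(24), 2))            # 0.42, so 24 degrees is about 0.42 radians
```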
Obviously, using your calculator to find values of sines is very simple. Now find the sine of 42º 24 minutes. The sine of 42º 24 minutes is 0.6743. Did you get this result? If not, remember that 24 minutes corresponds to 24/60 or 0.4º. The total angle is then 42.4º.
The determination of cosines and tangents on your calculator is similar. It is now possible for us to solve simple problems concerning triangles. For example, in Panel 2, the length of the hypotenuse is 3 cm and the angle θ is 24º. What is the length of the opposite side r? The sine of 24º, as we saw, is 0.4067 and it is also, by definition, r/3. So, sine of 24º = 0.4067 = r/3, and therefore, r = 3 x 0.4067 = 1.22 cm.
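The same worked example in code (an illustrative sketch using the values from Panel 2 quoted above):

```python
import math

q = 3.0                    # hypotenuse, in cm
theta = math.radians(24)   # the angle at A

r = q * math.sin(theta)    # opposite side: r = q * sin(theta)
print(round(r, 2))         # 1.22 cm
```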
Conversely, suppose you knew that the opposite side was 2 cm long and the hypotenuse was 3 cm long, as in Panel 3; what is the angle θ? First determine the sine of θ. You should find that the sine of θ is 2/3, which equals 0.6667. Now we need to determine what angle has 0.6667 as its sine.
If you want your answer to be in degrees, be sure that your calculator is set to degrees. Then enter 0.6667 followed by the [INV] key and then the [sin] key. You should obtain a value of 41.8º. If your calculator doesn't have an [INV] key, it probably has a [2ndF] key and the inverse sine can be found using it.
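In code, the [INV] [sin] key sequence corresponds to the inverse sine (arcsine) function; a minimal sketch with the numbers above:

```python
import math

ratio = 2 / 3                             # opposite / hypotenuse
theta = math.degrees(math.asin(ratio))    # inverse sine, converted to degrees
print(round(theta, 1))                    # 41.8
```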
One use of these trigonometric functions which is very important is the calculation of components of vectors. In Panel 4 is shown a vector OA in an xy reference frame. We would like to find the y component of this vector, that is, the projection OB of the vector on the y axis. Obviously, OB = CA and CA/OA = sin(θ), so CA = OA sin(θ). Similarly, the x-component of OA is OC, and OC/OA = cos(θ), so OC = OA cos(θ).
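A short sketch of the component calculation; the vector length and angle here are hypothetical values chosen for illustration, since Panel 4 does not specify numbers:

```python
import math

OA = 5.0                      # magnitude of the vector (hypothetical)
theta = math.radians(36.87)   # angle theta (hypothetical)

x_component = OA * math.cos(theta)   # OC = OA cos(theta)
y_component = OA * math.sin(theta)   # OB = OA sin(theta)
print(round(x_component, 2), round(y_component, 2))   # 4.0 3.0
```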
There are many relations among the trigonometric functions which are important, but one in particular you will find used quite often. Panel 1 has been repeated as Panel 5 for you. Let us look at the sum cos²(θ) + sin²(θ). From the figure, this is (p/q)² + (r/q)², which equals (p² + r²)/q². The Pythagorean theorem tells us that p² + r² = q², so we have (p² + r²)/q² = q²/q² = 1. Therefore, we have:

cos²(θ) + sin²(θ) = 1
Our discussion so far has been limited to angles between 0 and 90º. One can, using the calculator, find the sine of larger angles (e.g. 140º) or negative angles (e.g. -32º) directly. Sometimes, however, it is useful to find the corresponding angle between 0 and 90º. Panel 6 will help us here.
In this xy reference frame, the angle θ is clearly between 90º and 180º, and the angle a, which is 180º - θ (a is marked with a double arc), can be dealt with directly. In this case, we say that the magnitudes of the sine, cosine, and tangent of θ are those of the supplement a, and we only have to examine whether or not they are positive or negative.
For example, what is the sine, cosine and tangent of 140º? The supplement is 180º - 140º = 40º. Find the sine, the cosine and the tangent of 40º. | <urn:uuid:00f865ac-a066-4877-8d69-479bd1350ad2> | CC-MAIN-2013-20 | http://www.physics.uoguelph.ca/tutorials/trig/trigonom.html | 2013-05-23T19:00:52 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703682988/warc/CC-MAIN-20130516112802-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.91847 | 1,681 | 4.0625 | 4 |
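A numerical check of the supplement rule (an added sketch, not part of the original tutorial): the sine, cosine and tangent of 140º have the same magnitudes as those of its supplement 40º, with the cosine and tangent changing sign.

```python
import math

for angle in (140, 40):
    rad = math.radians(angle)
    print(angle,
          round(math.sin(rad), 4),
          round(math.cos(rad), 4),
          round(math.tan(rad), 4))
# 140 0.6428 -0.766 -0.8391
# 40  0.6428  0.766  0.8391
```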
Is there such a thing as too much money?
by Fred E. Foldvary, Senior Editor

What is inflation? There are two economic meanings of inflation. The first meaning is monetary inflation, having to do with the money supply. To understand that, we need to understand that the impact of money on the economy depends not just on the amount of money but also on its rate of turnover.
We all know that money circulates. How fast it circulates is called its velocity. For example, suppose you get paid $4000 every four weeks. You are circulating $4000 13 times per year. Then suppose you instead get paid $1000 each week. Your total spending is the same, but now you are circulating $1000 52 times per year. The velocity of the money is 52, but the money you hold has been reduced to one fourth its previous amount, although the money held times the velocity is the same. The effect on the economy is the money supply times the velocity.
Monetary inflation is an increase in the money supply, times the velocity, which is greater than the increase in the amount of transactions measured in constant dollars. Simply put, if velocity does not change, monetary inflation is an increase in money that is greater than the increase in goods.
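A minimal sketch (not from the article) of the pay example above, showing that it is money held times velocity, not money held alone, that stays the same:

```python
# The same $52,000 of annual income received on two different schedules.
four_weekly = {"money_held": 4000, "velocity": 13}   # paid every four weeks
weekly      = {"money_held": 1000, "velocity": 52}   # paid every week

for name, m in (("four-weekly pay", four_weekly), ("weekly pay", weekly)):
    print(name, m["money_held"] * m["velocity"])     # both print 52000
```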
Price inflation is an on-going increase in the price level. The level of prices is measured by a price index, such as the consumer price index (CPI). Usually, price inflation is caused by monetary inflation. So let’s take a look at recent monetary inflation.
The broadest measure of money is MZM, which stands for money zero maturity, funds which can be readily spent. The Federal Reserve Bank of St. Louis keeps track of various measurements of money. Its data show that on an annual basis, MZM increased by 13 percent in January 2008, 36 percent in February, and 23 percent in March. These are huge increases, since gross domestic product, the total production of goods, increased at an annual rate of only .6 percent during these months. In 2006, MZM grew at an annual rate of only 4 percent.
High monetary inflation results in high price inflation. Indeed, in May 2008 the consumer price index rose by 4.2 percent from the level of May 2007. For the month, the increase for May was .6 percent, an annual rate of 7.2 percent. The “Consumer Price Index for All Urban Consumers” (CPI-U) increased 0.8 percent in May, before seasonal adjustment, for an annualized increase of 9.6 percent. The “Consumer Price Index for Urban Wage Earners and Clerical Workers” (CPI-W) increased 1.0 percent in May, prior to seasonal adjustment, for a whopping annual increase of 12 percent.
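The annual rates quoted above are simple multiples of the one-month changes (0.6 x 12 = 7.2, 0.8 x 12 = 9.6, 1.0 x 12 = 12). As a side note, offered only as an illustrative sketch, compounding the monthly changes gives slightly higher annualized figures:

```python
monthly_changes = [0.006, 0.008, 0.010]   # the May changes quoted above

for m in monthly_changes:
    simple = m * 12                   # simple annualization used in the text
    compound = (1 + m) ** 12 - 1      # compounded annualization
    print(f"monthly {m:.1%}: simple {simple:.1%}, compounded {compound:.1%}")
# monthly 0.6%: simple 7.2%, compounded 7.4%
# monthly 0.8%: simple 9.6%, compounded 10.0%
# monthly 1.0%: simple 12.0%, compounded 12.7%
```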
The rapid rise in oil prices fueled the increase in the price of gasoline, while the greater demand for grains made food prices rise, but beneath these rises is the monetary inflation that creates a higher demand for goods in general. The government reports that “core inflation,” not counting gasoline and food, is lower, but what counts for people is everything they buy, including food and fuel. If you have to pay much more for food and gasoline, there is less money for other things, so of course these will not rise in price as much.
In making monetary policy, the Federal Reserve targets the federal funds interest rate, which banks pay when they borrow funds from one another. During the financial troubles during the first few months of 2008, the Fed aggressively lowered the federal funds rate to 2 percent and also indicated that it would supply limitless credit to banks that borrowed directly from the Federal Reserve.
The Fed lowers the interest rate by increasing the supply of money that banks have to lend; to unload it, banks charge borrowers less interest. To start, the Fed buys U.S. Treasury bonds from the public. The Fed pays for the bonds not by using old money it has lying around but by increasing the reserves held by the banks in their accounts at their local Federal Reserve Bank, paying, in effect, with newly created money.
This increase in reserves or bank funds is a creation of money out of nothing. Actually, this does not violate the law of conservation, because this creation of money is at the expense of the value of all other money holdings. Every extra dollar created by the Fed decreases the value of the dollars you hold by a tiny amount.
Most monetary reformers stop there, but that is not enough. The current financial instability is also caused by the real estate boom-bust cycle, since even with sound money, an economic expansion would spark a speculative boom in land values. In a competitive market, when produced goods rise in price, producers usually supply more, bringing the price back down or limiting the rise. But land is not produced, so with increased demand, the price has nowhere to go but up. Speculators drive the price of land based on expectations of even higher future prices, but at the peak of the boom, the price becomes too high for those who want to use the land.
Real estate stops rising and then falls, and that brings the financial system down with it, as we have witnessed during the past year. To prevent the inflation in land prices, we need to remove the subsidy, the pumping up of land value from the civic benefits paid by returns on labor and capital goods. We can remove the land subsidy by tapping the land value or land rent for public revenue. Land-value tapping or taxation plus free-market money and banking would provide price and financial stability.
Only the free market can know the right money supply. Some people think the government could just print money and spend it. That is what is happening in Zimbabwe, which has an inflation rate of one hundred thousand percent. Much of the population has fled the country. Once government can create money at will, there is really no way to limit it, and if there is some limiting rule, then the money supply becomes too rigid. Only free market competition and production can combine price stability with money-supply flexibility.
-- Fred Foldvary
Copyright 2008 by Fred E. Foldvary. All rights reserved. No part of this material may be reproduced or transmitted in any form or by any means, electronic or mechanical, which includes but is not limited to facsimile transmission, photocopying, recording, rekeying, or using any information storage or retrieval system, without giving full credit to Fred Foldvary and The Progress Report.
There are many techniques available to help students get started with a piece of writing. Getting started can be hard for all levels of writers. Freewriting is one great technique to build fluency. That was explored in an earlier lesson plan: http://www.thirteen.org/edonline/adulted/lessons/lesson18.html

This unit offers some other techniques. These techniques may be especially helpful with students who prefer a style of learning or teaching that could be described as visual, spatial, or graphic. Sometimes those styles are overlooked in favor of approaches that are very linguistic or linear. The approaches here will attend to a broader range of learning styles as they add variety.
- Writing: Writing Process, Pre-Writing, Autobiography, Exposition, Personal Narrative, Argumentation, Comparison and Contrast, Description.
Students will be able to:
- Write more fluently (writing more with greater ease)
- Generate writing topics
- Select topics that will yield strong pieces of writing
- Connect personal experience, knowledge, and examples to an assigned topic
- Produce better organized pieces of writing
National Reporting System of Adult Education standards are applicable here.
These are the standards required by the 1998 Workforce Investment Act. See
Pencils, colored pencils, pens, markers, crayons, unlined paper, magazines and newspapers with pictures inside, glue or paste, and paper. Big paper or poster board can make the pre-writing exercises more eye-catching, more of a project, and better for display.
Video and TV:
Prep for Teachers
Make sure you try each of the activities yourself before you ask students to do them. That will give you a better understanding of the activities and help you recognize any potential points that may be confusing or difficult. This also gives you a sample to show the students. It's much easier to create a diagram if you are shown an example of one.
Here are some Web sites that give background and even more ideas about pre-writing, diagrams, graphic organizers, and other ways to get started with writing. There is some repetition here. You don't have to read them all. But check them out and see what you think.
Given all the evidence presently available, we believe it entirely reasonable that Mars is inhabited with living organisms and that life independently originated there
The conclusion of a study by the National Academy of Sciences in March 1965, after 88 years of surveying the red planet through blurry telescopes. Four months later, NASA’s Mariner 4 spacecraft would beam back the first satellite images of Mars confirming the opposite.
After Earth and Mars were born four and a half billion years ago, they both contained all the elements necessary for life. After initially having surface water and an atmosphere, Mars is now believed by scientists to have lost its atmosphere four billion years ago, with Earth getting an oxygenated atmosphere around half a billion years later.
According to the chief scientist on NASA’s Curiosity mission, if life ever existed on Mars it was most likely microscopic and lived more than three and a half billion years ago. But even on Earth, fossils that old are vanishingly rare. “You can count them on one hand,” he says. “Five locations. You can waste time looking at hundreds of thousands of rocks and not find anything.”
The impact of a 40kg meteor on the Moon on March 17 was bright enough to see from Earth without a telescope, according to NASA, who captured the impact through a Moon-monitoring telescope.
Now NASA’s Lunar Reconnaissance Orbiter will try and search out the impact crater, which could be up to 20 metres wide. | <urn:uuid:132d7809-ba28-4c89-8ce0-867a2a81c1e6> | CC-MAIN-2013-20 | http://8bitfuture.com/tagged/science | 2013-05-26T02:41:26 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706499548/warc/CC-MAIN-20130516121459-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.948667 | 300 | 4.1875 | 4 |
Chandra "Hears" a Supermassive Black Hole in Perseus
A 53-hour Chandra observation of the central region of the Perseus galaxy cluster (left) has revealed wavelike features (right) that appear to be sound waves. The features were discovered by using a special image-processing technique to bring out subtle changes in brightness.
These sound waves are thought to have been produced by explosive events occurring around a supermassive black hole (bright white spot) in Perseus A, the huge galaxy at the center of the cluster. The pitch of the sound waves translates into the note of B flat, 57 octaves below middle-C. This frequency is over a million billion times deeper than the limits of human hearing, so the sound is much too deep to be heard.
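As a rough back-of-the-envelope check (an added sketch, taking B flat near middle C to be roughly 466 Hz, an assumed reference pitch), each octave down halves the frequency, so a pitch 57 octaves lower corresponds to a wave period of millions of years:

```python
b_flat_hz = 466.16          # approximate B flat near middle C, in Hz (assumed reference)
octaves_below = 57

frequency_hz = b_flat_hz / 2 ** octaves_below
period_seconds = 1 / frequency_hz
period_years = period_seconds / (3600 * 24 * 365.25)

print(f"{frequency_hz:.2e} Hz, period about {period_years / 1e6:.0f} million years")
```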
The image also shows two vast, bubble-shaped cavities, each about 50 thousand light years wide, extending away from the central supermassive black hole. These cavities, which are bright sources of radio waves, are not really empty, but filled with high-energy particles and magnetic fields. They push the hot X-ray emitting gas aside, creating sound waves that sweep across hundreds of thousands of light years.
The detection of intergalactic sound waves may solve the long-standing mystery of why the hot gas in the central regions of the Perseus cluster has not cooled over the past ten billion years to form trillions of stars. As sounds waves move through gas, they are eventually absorbed and their energy is converted to heat. In this way, the sound waves from the supermassive black hole in Perseus A could keep the cluster gas hot.
The explosive activity occurring around the supermassive black hole is probably caused by large amounts of gas falling into it, perhaps from smaller galaxies that are being cannibalized by Perseus A. The dark blobs in the central region of the Chandra image may be fragments of such a doomed galaxy. | <urn:uuid:7c5032f8-872f-474b-bda7-8c70bc31adaa> | CC-MAIN-2013-20 | http://chandra.harvard.edu/photo/2003/perseus/ | 2013-05-26T02:34:37 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706499548/warc/CC-MAIN-20130516121459-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.94364 | 389 | 4.34375 | 4 |
SL Psychology/Intro to Research Methods
The following items should be included in this section: The Hypothetico-deductive (scientific)method, types of psychological research methods, research designs, sampling, reliability, validity, and triangulation.
Research into the mind can be traced back to Ancient Greece; however, empirical psychological research has its roots in investigations into cognitive functions such as introspection and memory. While early psychological researchers attempted to bring the same standards of rigor and control to their investigations as physical scientists enjoy, psychological research poses unique obstacles. Psychological research investigates the mind, whose contents have become directly observable only recently, with the advent of neuro-imaging technologies such as EEGs, PET scans, and fMRIs. Early psychological research was therefore marked by disagreements between different schools and generations of researchers who used varied approaches in their investigations of the invisible mind. For example, cognitive researchers rely on inferences made from activities that employ cognitive functions such as memory, as opposed to examining how or where actual memories are laid down. Conversely, Behaviorist researchers employed a more empirically rigorous method, seeking only to make generalizations about phenomena that were directly observable and replicable in controlled settings.
Contemporary psychological research is derived from these disparate traditions and perspectives. It utilizes the hypothetico-deductive or scientific method:
1. observation and data gathering
2. inference of generalizations
3. construction of explanatory theories
4. deduction of hypothesis to test theories
5. hypothesis testing
6. support or challenges to existing theories and commensurate adjustments.
Theories and Hypothesis
Two key steps, theory construction and hypothesis deduction/testing, pose special problems for researchers. Theories are sets of related generalizations explaining a specific mental phenomenon (e.g. schemas and memory organization), and hypotheses are specific predictions for research investigations. These steps are derived from empirical data, but are heavily influenced by an individual researcher's perspective. Thus, researchers seek to clearly articulate operational definitions in an effort to make their research easily replicable. Additionally, controls are implemented to ensure credibility of results and subsequent conclusions. Finally, published research contributing to knowledge in the discipline is peer reviewed and usually rigorously scrutinized. Psychological research can take many forms, ranging from controlled laboratory true experiments (involving the manipulation of independent variables and controls for confounding variables) to field research (involving deliberate manipulation of independent variables in natural, uncontrolled environments) to the naturalistic/quasi-experimental method (involving observation and analysis of independent variables changed by natural incidence). No matter which research method is employed, controls are carefully implemented to ensure the credibility of research. Key issues surrounding controls are: research design, sampling, reliability and validity.
The underlying structure of an investigation. It involves how psychologists use subjects/participants in their experiments. The three most common designs are:
1.Repeated Measures: using the same subjects in the experimental and control conditions
2.Independent Measures :using different subjects/participants in the experimental and control conditions
3. Matched Pairs :using different subjects/participants in the experimental and control conditions with each sample having similar characteristics.
The process of selecting participants/subjects to examine derived from a target population (a specified subpopulation of all humans). The results of a study are inferred from examination of the sample’s performance on a given measure, thus the sample is key in the line of reasoning from initial design to examination of results. Several methods can be employed when choosing a sample: random, stratified and convenience. Random sampling provides the best chance for the sample group to be representative of the target population. Stratified samples reflect similar proportions of various sub-groups within a sample. Convenience sampling involves choosing participants/subjects that are available at the time of data collection. Convenience samples do not control for possible biases that may within certain subgroups of a population and thus the results and conclusions from a convenience sample must be analyzed with caution and triangulated.
A study is reliable if it is replicable and the same results are achieved repeatedly. There are four types of reliability in regard to psychological study:
- Test-Retest Reliability (also called stability reliability)
- Interrater Reliability
- Parallel Forms Reliability
- Internal Consistency Reliability
To judge reliability in this case, the test is administered at two different times to the same or similar subjects. This judges the consistency of results across time, and checks that the results were not affected by the context of a particular testing occasion. Reliability is higher if the retest is close in chronological proximity to the original test.
Research psychologists tend to replicate older studies to generate theories or to amend findings to account for reliability. In attention research, for example, Treisman consistently retested findings to amend the attention models.
Two or more judges score the test. The scores are then compared to determine how consistently the two raters agree on the rating system.
An example of interrater reliability would be that of teachers grading essays for an AP or IB exam. If a scale from 1 to 5 was used (where 1 is the worst and 5 is the best), and one teacher gave an essay a score of 2 and another gave a score of 5, then the reliability would be inconsistent. Through training, practice, and discussion, the individual raters can reach a consistent level of assessing an experiment, test, or result. Often, the raters are moderated by a higher rater who will assist in reaching consistency.
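As a minimal illustration (hypothetical scores, not from the text), the simplest index of interrater reliability is the proportion of cases on which two raters give the same score; indices such as Cohen's kappa refine this by correcting for chance agreement.

```python
# Hypothetical scores from two raters for ten essays on a 1-5 scale.
rater_a = [2, 5, 3, 4, 4, 1, 5, 3, 2, 4]
rater_b = [2, 5, 3, 3, 4, 1, 5, 3, 2, 5]

agreements = sum(a == b for a, b in zip(rater_a, rater_b))
print(agreements / len(rater_a))   # 0.8, i.e. the raters agree on 80% of the essays
```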
Parallel Forms Reliability
A large set of questions that all relate to the same construct is generated and then divided into two sets. The two different sets are given to the same sample of people at the same time. Therefore, the two tests that cover the same content are judged against each other for consistency.
An example would be a pretest-posttest, where the two groups would either receive form 1 or form 2, and in the posttest situation, the groups would be switched.
Internal Consistency Reliability
In this case, the test itself is used as the tool to determine reliability. This would be a test situation in which the items on the test measure the same content. Often, questions can be strikingly similar, which shows that the test is also a measure of internal consistency reliability. Therefore, the similar questions should be answered in the same way. There are different ways to measure internal consistency reliability (a minimal code sketch of one of them, Cronbach's alpha, follows the list below):
- Average Inter-item Correlation
- Average Item-total Correlation
- Split-Half Reliability
- Cronbach's Alpha (a)
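As flagged above, here is a minimal sketch of one of these measures, Cronbach's alpha, computed from hypothetical questionnaire data. It is a teaching illustration under the stated assumptions, not a validated statistics routine.

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score lists (one inner list per item).

    items[i][j] is the score of respondent j on item i.  Population variance is
    used throughout.
    """
    k = len(items)        # number of items
    n = len(items[0])     # number of respondents

    def variance(xs):
        mean = sum(xs) / len(xs)
        return sum((x - mean) ** 2 for x in xs) / len(xs)

    sum_item_variances = sum(variance(item) for item in items)
    totals = [sum(items[i][j] for i in range(k)) for j in range(n)]
    total_variance = variance(totals)

    return (k / (k - 1)) * (1 - sum_item_variances / total_variance)

# Hypothetical scores: 4 items answered by 5 respondents.
scores = [
    [3, 4, 3, 5, 2],
    [3, 5, 3, 4, 2],
    [2, 4, 3, 5, 3],
    [3, 4, 2, 5, 2],
]
print(round(cronbach_alpha(scores), 2))   # 0.94, indicating high internal consistency
```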
Quantitative versus Qualitative Measures
Coolican, H. (2004). Research methods and statistics in Psychology. Cambridge University Press.
1. In what ways has new technology changed the science of psychology? Provide three examples.
2. How does the importance of validity and reliability change depending on the type of study?
3. In what ways will the different aspects of an experiment (sampling, methods, reliability, and validity) affect the results and conclusions of a psychology study?
Most Americans believe that the Declaration of Independence by the Continental Congress on July 4, 1776 began American independence. While this date announced the formal break between the American colonists and the “mother country,” it did not guarantee independence. Not all Americans favored independence and most historical estimates place the number of Loyalist, or Tory, Americans near one-third of the population. Winning independence required an eight-year war that began in April, 1775 and ended with a peace treaty finalized on September 3, 1783. Unfortunately the infant nation found itself born in a world dominated by a superpower struggle between England and France. The more powerful European nations viewed the vulnerable United States, correctly, as weak and ripe for exploitation. Tragically, few Americans know of this period of crisis in our nation’s history because of the irresponsible neglect of the American education system.
American independence marked the end of one chapter in American history and the beginning of another. As with all historical events this declaration continued the endless cycle of action and reaction, because nothing occurs in a vacuum. Tragically, most Americans’ historical perspective begins with their birth, rendering everything that previously occurred irrelevant. Furthermore, most educators conveniently “compartmentalize” their subjects and do not place them in the proper historical context. Since most Americans only remember the United States as a superpower they do not know of our previous struggles. Unfortunately our agenda driven education system also ignores this period and often portrays America in the most negative light.
Without delving too deeply into the deteriorating relations between the American colonists and their “mother country,” declaring independence came slowly. None of the thirteen colonies trusted the other colonies and rarely acted in concert, even during times of crisis. Regional and cultural differences between New England, mid-Atlantic and the Southern colonies deeply divided the colonists. Even in these early days of America slavery proved a dividing issue, although few believed in racial equality. The “umbilical cord” with England provided the only unifying constant that bound them together culturally and politically.
The colonies further possessed different forms of government as well, although they steadfastly expressed their liberties and “rights as Englishmen.” Some colonies existed as royal colonies, where the English monarch selected the governor. Proprietary colonies formed when merchant companies or individuals, called proprietors, received a royal grant and appointed the governor. Charter colonies received their charters much as proprietary colonies with individuals or merchants receiving royal charters and shareholders selected the governor. Each colony elected its own legislature and local communities made their laws mostly based on English common law. Any form of national, or “continental,” unity remained an illusion largely in the minds of the delegates of the First Continental Congress.
The Second Continental Congress convened on May 10, 1775 because England ignored the grievances submitted by the First Continental Congress. Furthermore, open warfare erupted in Massachusetts between British troops and the colonial militia at Lexington and Concord on April 19, 1775. Known today as Patriot’s Day few Americans outside of Massachusetts celebrate it, or even know of it. Setting forth their reasons for taking up arms against England, they established the Continental Army on June 14, 1775. For attempting a united front, they appointed George Washington, a Virginian, as commander-in-chief. On July 10, 1775, the Congress sent Parliament one last appeal for resolving their differences, which proved futile.
While Congress determined the political future of the colonies fighting continued around Boston, beginning with the bloody battle on Breed’s Hill on June 17, 1775. Known as the Battle of Bunker Hill in our history the British victory cost over 1,000 British and over 400 American casualties. This battle encouraged the Americans because it proved the “colonials” capable of standing against British regulars. British forces withdrew from Boston in March, 1776 and awaited reinforcements from England as fighting erupted in other colonies.
While Washington and the Continental Army watched the British in Boston, Congress authorized an expedition against Canada. They hoped for significant resentment of British rule by the majority of French inhabitants, something they misjudged. In September, 1775 the fledgling Continental Army launched an ambitious, but futile, two-pronged invasion of Canada. Launched late in the season, particularly for Canada, it nevertheless almost succeeded, capturing Montreal and moving on Quebec. It ended in a night attack in a snowstorm on December 31, 1775 when the commander fell dead and the second-in-command fell severely wounded. American forces did breach the city walls, however when the attack broke down these men became prisoners of war.
For disrupting the flow of British supplies into America Congress organized the Continental Navy and Continental Marines on October 13, 1775 and November 10, 1775, respectively. Still, no demands for independence despite the creation of national armed forces, the invasion of a “foreign country” and all the trappings of a national government.
The full title of the Declaration of Independence ends with “thirteen united States of America,” with united in lower case. I found no evidence that the Founding Fathers did this intentionally, or whether it merely reflected the writing style of the time. Despite everything mentioned previously regarding “continental” actions, the thirteen colonies jealously guarded their sovereignty.
Although Congress declared independence England did not acknowledge the legality of this resolution and considered the colonies “in rebellion.” England assembled land and naval forces of over 40,000, including German mercenaries, for subduing the “insurrection.” This timeless lesson proves the uselessness of passing resolutions with no credible threat of force backing them up. Unfortunately our academic-dominated society today believes merely the passage of laws and international resolutions forces compliance.
We hear much in the news today about “intelligence failures” regarding the war against terrorism. England definitely experienced an “intelligence failure” as it launched an expedition for “suppressing” this “insurrection” by a “few hotheads.” First, they under estimated the extent of dissatisfaction among the Americans, spurred into action by such “rabble rousers” as John Adams. They further under estimated the effectiveness of Washington and the Continental Army, particularly after the American victories at Trenton and Princeton.
British officials further under estimated the number of Loyalists with the enthusiasm for taking up arms for the British. While Loyalist units fought well, particularly in the South and the New York frontier, they depended heavily on the support of British regulars. Once British forces withdrew, particularly in the South, the Loyalist forces either followed them or disappeared. A perennial lesson for military planners today, do not worry about your “footprint,” decisively defeat your enemy. This hardens the resolve of your supporters, influences the “neutrals” in your favor and reduces the favorability of your enemies.
Regarding the “national defense” the Continental Congress and “states” did not fully cooperate against the superpower, England. The raising of the Continental Army fell on the individual colonies almost throughout the war with the Congress establishing quotas. Unfortunately, none of the colonies ever met their quota for Continental regiments, with the soldiers negotiating one-year enlistments.
Continental Army recruiters often met with competition from the individual colonies, who preferred fielding their militias. The Congress offered bounties in the almost worthless “Continental Currency” and service far from home in the Continental Army. Colonial governments offered higher bounties in local currencies, or British pounds, and part-time service near home.
Congress only possessed the authority for requesting troops and supplies from the colonial governors, who often did not comply. For most of the war the Continental Army remained under strength, poorly supplied, poorly armed and mostly unpaid. Volumes of history describe the harsh winters endured by the Continentals at Valley Forge and Morristown, New Jersey the following year.
Colonial governments often refused supplies for troops from other colonies, even though those troops fought inside their borders. As inflation continued devaluing “Continental Currency” farmers and merchants preferred trading with British agents, who often paid in gold. This created strong resentment from the soldiers who suffered the hardships of war and the civilians who profited from this trade. In fairness, the staggering cost of financing the war severely taxed the colonial governments and local economies, forcing hard choices.
Congress further declared independence as a cry for help from England’s superpower rival, France, and other nations jealous of England. Smarting from defeat in the Seven Years War (French and Indian War in America), and a significant reduction in its colonial empire, France burned for revenge. France’s ally, Spain, also suffered defeat and loss of territory during this war and sought advantage in the American war. However, France and Spain both needed American victories before they risked their troops and treasures. With vast colonial empires of their own they hesitated at supporting a colonial rebellion in America. As monarchies, France and Spain held no love of “republican ideals” or “liberties,” and mostly pursued independent strategies against England. Fortunately their focus at recouping their former possessions helped diminish the number of British forces facing the Americans.
On the political front the Congress knew that the new nation needed some form of national government for its survival. Unfortunately the Congress fell short on this issue, enacting the weak Articles of Confederation on November 15, 1777. Delegates so feared the “tyranny” of a strong central government, and so distrusted their neighboring states, that they rejected effective national authority. In effect, the congressional delegates created thirteen independent nations instead of one, and our nation suffered from it. Amending this confederation required the approval of all thirteen states, virtually paralyzing any national effort. This form of government lasted until the adoption of the US Constitution on September 17, 1787.
Despite these weaknesses the fledgling “United States” survived and even achieved some success against British forces. Particularly early in the war, the British forces possessed several opportunities for destroying the Continental Army and ending the rebellion. Fortunately for us British commanders proved lethargic and complacent, believing the “colonial rabble” incapable of defeating them. Furthermore, as the Continental Army gained experience and training it grew more professional, standing toe-to-toe against the British. Since the US achieved superpower status it fell into the same trap, continuously underestimating less powerful enemies.
The surrender of British forces at Yorktown, Virginia on October 19, 1781 changed British policy regarding its American colonies. British forces now controlled mainly three enclaves: New York City; Charleston, South Carolina and Savannah, Georgia. Loyalist forces, discouraged by British reverses, either retreated into these enclaves, departed America or surrendered. Waging a global war against France and Spain further reduced the number of troops available for the American theater. This serves another modern lesson for maintaining adequate forces for meeting not only your superpower responsibilities, but executing unforeseen contingencies.
Ironically, the victory at Yorktown almost defeated the Americans as well, since the civil authorities almost stopped military recruitment. Washington struggled at maintaining significant forces for confronting the remaining British forces in their enclaves. An aggressive British commander may still score a strategic advantage by striking at demobilizing American forces. Fortunately, the British government lost heart for retaining America and announced the beginning of peace negotiations in August, 1782.
The Treaty of Paris, signed on September 3, 1783 officially ended the American Revolution; however it did not end America’s struggles. American negotiators proved somewhat naïve in these negotiations against their more experienced European counterparts. Of importance, the British believed American independence a short-lived situation, given the disunity among Americans. Congress began discharging the Continental Army before the formal signing of the treaty, leaving less than one hundred on duty.
Instead of a united “allied” front, America, France and Spain virtually negotiated separate treaties with England, delighting the British. They believed that by creating dissension among the wartime allies they furthered their position with their former colonies. If confronted with a new war with more powerful France and Spain, America might rejoin the British Empire.
When England formally established the western boundary of the US at the Mississippi River it did not consult its Indian allies. These tribes did not see themselves as “defeated nations,” since they often defeated the Americans. Spanish forces captured several British posts in this territory and therefore claimed a significant part of the southeastern US.
France, who practically bankrupted itself in financing the American cause and waging its own war against England, expected an American ally. Unfortunately, the US proved a liability and incapable of repaying France for the money loaned during the war. France soon faced domestic problems that resulted in the French Revolution in 1789.
For several reasons England believed itself the winner of these negotiations, and in a more favorable situation, globally. England controlled Canada, from where it closely monitored the unfolding events in the US, and sowed mischief. It illegally occupied several military forts on American territory and incited the Indian tribes against the American frontier. By default, England controlled all of the American territory north of the Ohio River and west of the Appalachian Mountains.
Economically, England still believed that the US needed them as its primary trading partner, whether independent or not. A strong pro-British faction in America called for closer economic ties with the former “mother country.” As England observed the chaos that gripped the US at this time, they felt that its collapse, and reconquest by England, only a matter of time.
Most Americans today, knowing only the economic, industrial and military power of America cannot fathom the turmoil of this time. The weak central government and all the states accumulated a huge war debt, leaving them financially unstable. While the US possessed rich natural resources it lacked the industrial capabilities for developing them, without foreign investment. With no military forces, the nation lacked the ability of defending its sovereignty and its citizens. From all appearances our infant nation seemed stillborn, or as the vulnerable prey for the more powerful Europeans.
As stated previously the Articles of Confederation actually created thirteen independent nations, with no national executive for enforcing the law. Therefore each state ignored the resolutions from Congress and served its own self-interest. Each state established its own rules for interstate commerce, printed its own money and even established treaties with foreign nations. No system existed for governing the interactions between the states, who often treated each other like hostile powers.
The new nation did possess one thing in abundance, land; the vast wilderness between the Appalachian Mountains and the Mississippi River. Conceded by the British in the Treaty of Paris, the Americans looked at this as their economic solution. The nation owed the veterans of the Revolution a huge debt and paid them in the only currency available, land grants. Unfortunately, someone must inform the Indians living on this land and make treaties regarding land distribution.
For the Americans this seemed simple, the Indians, as British allies, suffered defeat with the British and must pay the price. After all, under the rules of European “civilized” warfare, defeated nations surrendered territory and life went on. Unfortunately no one, neither American nor British, informed the Indians of these rules, because no one felt they deserved explanation. Besides, the British hoped that by inciting Indian troubles they might recoup their former colonies.
With British arms and encouragement the tribes of the “Old Northwest” raided the western frontier with a vengeance. From western New York down through modern Kentucky these Indians kept up their war with the Americans. In Kentucky between 1783 and 1790 the various tribes killed an estimated 1,500 people, stole 20,000 horses and destroyed an unknown amount of property.
Our former ally, Spain, controlled all of the territory west of the Mississippi River before the American Revolution. From here they launched expeditions that captured British posts at modern Vicksburg and Natchez, Mississippi, and the entire Gulf Coast. However, they claimed about two-thirds of the southeastern US based on this “conquest” including land far beyond the occupation of their troops. Like the British, they incited the Indians living in this region for keeping out American settlers.
Spain also controlled the port of New Orleans and access into the Mississippi River. Americans living in Kentucky and other western settlements depended on the Mississippi River for their commerce. The national government seemed unable, or unwilling, to force concessions from Spain, and many westerners considered seceding from the Union. Known as the “Spanish Conspiracy,” this plot included many influential Americans and only disappeared after the American victory at Fallen Timbers.
While revisionist historians ignore the “Spanish Conspiracy” they illuminate land speculation by Americans in Spanish territory. Of course they conveniently ignore the duplicity of Spanish officials in these plots, and their acceptance of American money. In signing the Declaration of Independence the Founding Fathers pledged “their lives, their fortunes and their sacred honor.” Many Continental Army officers bankrupted themselves when Congress and their states proved recalcitrant at reimbursing them for incurred expenses. These officers often personally financed their troops and their expeditions because victory required timely action. Of importance for the western region, George Rogers Clark used his personal credit for financing his campaigns, which secured America’s claim. It takes no “lettered” historian for determining that without Clark’s campaign that America’s western boundary ends with the Appalachian Mountains, instead of the Mississippi River. With the bankrupt Congress and Virginia treasuries not reimbursing him he fell into the South Carolina Yazoo Company. Clark’s brother-in-law, Dr. James O’Fallon, negotiated this deal for 3,000,000 acres of land in modern Mississippi. This negotiation involved the Spanish governor of Louisiana, Don Estavan Miro, a somewhat corrupt official. When the Spanish king negated the treaty, Clark, O’Fallon and the other investors lost their money and grew hateful of Spain.
Another, lesser known, negotiation involved former Continental Army Colonel George Morgan and the Spanish ambassador, Don Diego de Gardoqui. Morgan received title for 15,000,000 acres near modern New Madrid, Missouri for establishing a colony. Ironically, an unscrupulous American, James Wilkinson, discussed later in the document, working in conjunction with Miro, negated this deal.
Both of these land deals involved the establishment of American colonies in Spanish territory, with the Americans declaring themselves Spanish subjects. Few Spaniards lived in the area west of the Mississippi River, and Spain saw the growing number of American settlers as a threat. If these Americans, already disgusted with their own government, became Spanish subjects, however, they turned into assets. If they cleared and farmed the land, they provided revenue that Spanish Louisiana desperately needed. And since many of these men had served in the Revolution, they provided a ready militia to defend their property, including against their former country, the United States, which had little authority west of the Appalachian Mountains.
Internationally, the weak US became a tragic pawn in the continuing superpower struggle between England and France. With no naval forces for protection, American merchant mariners became victims of both nations on the high seas. British and French warships stopped American ships bound for their enemy, confiscating cargo and conscripting sailors into their navies. In the Mediterranean Sea, our ships became the targets of the Barbary Pirates, the ancestors of our enemies today. Helpless, our government paid ransoms for prisoners and tribute for safe passage until the Barbary Wars of the early 19th Century.
Despite all of these problems, most influential Americans still “looked inward,” fearing a strong central government more than foreign domination. When cries of outrage came from the western frontiers over Indian depredations, our leaders feared a “standing army” even more. In the world of the Founding Fathers, the tyranny of King George III’s central government had created their problem. The king had then used his “standing army” to oppress the colonists and infringe on their liberties.
Congress also possessed more recent examples of the problems with a “standing army” from the American Revolution itself. First came the mutiny of the Pennsylvania Line in January 1781, staged to force attention to the soldiers’ grievances. Since the beginning of the war in 1775, the Continental soldiers had endured almost insurmountable hardships, as explained previously. The soldiers rarely received pay, and when they did it came in the almost worthless “Continental Currency,” which inflation devalued further. This forced severe hardships on the soldiers’ families as well, and many lost their homes and farms. The soldiers marched on the then-capital, Philadelphia, to seek redress for these grievances. Forced into action, Congress addressed their problems with pay and the soldiers rejoined the Army.
A second, though less well known, mutiny occurred in the New Jersey Line shortly thereafter, with different results. To nip a growing problem in the bud, Washington ordered courts-martial and the execution of the ringleaders. The last such trouble occurred in the final months of the war in the Continental Army camp at Newburgh, New York. Dissatisfied with congressional inaction on their long-overdue pay, many officers urged a march on Philadelphia. Fortunately, Washington defused this perceived threat against civil authority and quashed the strong possibility of a military dictatorship.
However, Congress realized that it needed some military force to defend the veterans settling on their land grants. The delegates authorized the First United States Regiment, consisting of 700 men drawn from four state militias for a one-year period. I have read countless sources describing the inadequacy of this force, highlighting congressional incompetence and non-compliance by the states. The unit never achieved its authorized strength, the primitive conditions on the frontier hindered its effectiveness, and corrupt officials mismanaged its supplies. Scattered in small garrisons throughout the western territories, it never proved a deterrent to the Indians.
No incentives existed for enlisting in this regiment, and it attracted few of what we would today call “quality people.” Again confirming state dominance over the central government, this “army” came from a militia levy on four states, in effect a draft. Tradition at the time allowed men conscripted in these levies to pay substitutes to serve in their place. Sources suggest that most of these substitutes came from the lowest levels of society, including men escaping the law. From whatever source these men came, at least they served, and mostly did their best under difficult circumstances.
Routinely, once the soldiers assembled they had to learn the skills needed to perform their duties. To defend the western settlements, the small garrisons had to reach their destinations by river travel. Once there, they often had to construct their new installations using the primitive tools and resources available. The primitive transportation system often delayed the arrival of the soldiers’ pay and supplies, forcing hardships on the troops. Few amenities existed at these frontier installations, and the few nearby settlements provided little entertainment for the troops. Unfortunately, by the time the soldiers achieved a level of professionalism, they had reached the end of their enlistments. With few incentives for reenlistment, the process had to begin again, with the recruiting and training of a new force.
Fortunately, many prominent Americans saw that the country needed a different form of government to ensure its survival. Despite the best intentions and established rules, few people followed those rules or respected those intentions. The Constitutional Convention convened in Philadelphia in May 1787, with George Washington unanimously elected as its president. As the delegates began the process of forming a “more perfect Union,” the old, traditional “colonial” rivalries influenced their work.
While most Americans possess at least passing knowledge of the heated debates among the delegates, few know the conditions under which they met. Meeting throughout the hot summer, the delegates kept the windows of their meeting hall closed to prevent the “leaking” of information. We must remember that this occurred before electric-powered ventilation systems or air conditioning. They kept out the “media,” and none of the delegates spoke with “journalists,” again to maintain secrecy. Modern Americans, often obsessed with media access, do not understand why the delegates kept their deliberations secret.
Most of the delegates felt they possessed one chance to create this new government, and achieving the best possible result required their full focus. “Media access” would have jeopardized that focus, and “leaked” information, with the interruptions it invited, would have jeopardized their chance of success. We find this incomprehensible today, with politicians running toward television cameras, “leaking” information and disclosing national secrets. Unfortunately a “journalistic elite” exists today, misusing the First Amendment, with many “media moguls” believing themselves the “kingmakers” of favorite politicians.
The delegates sought the document that would best satisfy the needs of the most people, making “special interest groups” secondary. Creating a united nation proved more important than prioritizing regional and state desires. The delegates debated, and compromised, on various issues, many of which remain important today. They worried over the threat of dominance by large, well-populated states over smaller, less-populated ones. Other issues concerned taxation, the issue that had sparked the American Revolution, and import duties, which pitted manufacturing states against agricultural states. Disposition of the mostly unsettled western land, claimed by many states, proved a substantial problem for the delegates. The issue of slavery almost ended the convention, and the delegates compromised, achieving the best agreement possible at the time. On September 17, 1787 the delegates adopted the US Constitution and submitted it for approval by the individual states.
Again, merely passing laws and adopting resolutions does not immediately solve problems or change people’s attitudes. Ratification of the Constitution required the approval of nine of the thirteen states, which occurred on June 21, 1788. However, two important large states, New York and Virginia, still debated ratification. Several signers of the Declaration of Independence, and delegates to the Constitutional Convention, urged the defeat of the Constitution. The fiery orator Patrick Henry, of “Give me liberty, or give me death” fame, worked hard to defeat it in Virginia. Even the most optimistic supporters gave the Constitution, and the nation, only a marginal chance at survival. | <urn:uuid:fcd8384e-97df-45dc-baf6-0742150406b6> | CC-MAIN-2013-20 | http://frontierbattles.wordpress.com/2008/09/20/battle-of-fallen-timbers-confirms-american-independence-part-i/?like=1&_wpnonce=24a0599870 | 2013-05-26T02:34:30 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706499548/warc/CC-MAIN-20130516121459-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.952682 | 5,420 | 4.125 | 4 |
U.S. Naval Observatory | Earth Orientation Department
In 1956, following several years of work, two astronomers at the U. S. Naval Observatory (USNO) and two astronomers at the National Physical Laboratory (Teddington, England) determined the relationship between the frequency of the Cesium atom (the standard of time) and the rotation of the Earth at a particular epoch. As a result, they defined the second of atomic time as the length of time required for 9 192 631 770 cycles of the Cesium atom at zero magnetic field. The second thus defined was equivalent to the second defined by the fraction 1 / 31 556 925.9747 of the year 1900. The atomic second was set equal, then, to an average second of Earth rotation time near the end of the 19th century.
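The two numbers in this definition can be tied together in a couple of lines. This is only a sanity-check sketch in Python; the constant names are ours, and the values are simply the ones quoted in the paragraph above.

```python
# The atomic second: 9 192 631 770 cycles of the cesium transition.
CESIUM_CYCLES_PER_SECOND = 9_192_631_770
# The 1900-based second: 1 / 31 556 925.9747 of the year 1900.
EPHEMERIS_SECONDS_IN_YEAR_1900 = 31_556_925.9747

# Total cesium cycles that would elapse over the year 1900 if every one
# of its seconds matched the atomic definition exactly.
cycles_in_year_1900 = CESIUM_CYCLES_PER_SECOND * EPHEMERIS_SECONDS_IN_YEAR_1900
print(f"Cesium cycles in the year 1900: {cycles_in_year_1900:.4e}")
```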
The Rapid Service/Prediction Center of the International Earth Rotation Service (IERS), located at the U.S. Naval Observatory, monitors the Earth's rotation. Part of its mission involves the determination of a time scale based on the current rate of the rotation of the Earth. UT1 is the non-uniform time based on the Earth's rotation.
The Earth is constantly undergoing a deceleration caused by the braking action of the ocean tides. Through the use of ancient observations of eclipses, it is possible to determine the deceleration of the Earth to be roughly 2 milliseconds per day per century. This is an effect which causes the Earth's rotational time to slow with respect to the atomic clock time. Since it has been about 1 century since the defining epoch (i.e., the duration since 1900), the difference has accumulated to roughly 2 milliseconds per day. Other factors also affect the Earth's dynamics, some in unpredictable ways, so that it is necessary to monitor the Earth's rotation continuously.
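To see how a roughly 2 millisecond-per-day-per-century slowdown turns into tens of seconds of accumulated difference, here is a small illustrative model in Python. It is a sketch under simplifying assumptions (a perfectly constant deceleration and no irregular fluctuations), and the function names are our own, not USNO code.

```python
def excess_day_length_ms(years_since_1900, slowdown_ms_per_day_per_century=2.0):
    """Roughly how much longer a day is now than the 1900-based standard day."""
    return slowdown_ms_per_day_per_century * (years_since_1900 / 100.0)

def accumulated_difference_s(years_since_1900):
    """Total lag of Earth-rotation time behind atomic time.

    The lag grows quadratically: each day's excess length adds on top of
    all the previous days' excesses.
    """
    total_ms = 0.0
    for year in range(int(years_since_1900)):
        total_ms += excess_day_length_ms(year) * 365.25  # excess accumulated that year
    return total_ms / 1000.0

print(excess_day_length_ms(100))      # ~2 ms per day after one century
print(accumulated_difference_s(100))  # roughly tens of seconds accumulated since 1900
```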
In order to keep the cumulative difference in UT1-UTC less than 0.9 seconds, a leap second is inserted periodically in the atomic UTC time scale to decrease the difference between the two. This leap second can be either positive or negative depending on the Earth's rotation. Since the first leap second in 1972, all leap seconds have been positive. This reflects the general slowing trend of the Earth due to tidal braking.
Confusion sometimes arises over the misconception that the occasional insertion of leap seconds every few years indicates that the Earth should stop rotating within a few millennia. The confusion arises because some mistake leap seconds as a measure of the rate at which the Earth is slowing. The one-second increments are, however, indications of the accumulated difference in time between the two systems. As an example, the situation is similar to what would happen if a person owned a watch that lost two seconds per day. If it were set to a perfect clock today, the watch would be found to be slow by two seconds tomorrow. At the end of a month, the watch will be roughly a minute in error (thirty days of the two second error accumulated each day). The person would then find it convenient to reset the watch by one minute to have the correct time again.
This scenario is analogous to that encountered with the leap second. The difference is that instead of resetting the clock that is running slow, we choose to adjust the clock that is keeping a uniform, precise time. The reason for this is that we can change the time of an atomic clock while it is not possible to alter the Earth's rotational speed to match the atomic clocks. Currently the Earth runs slow at roughly 2 milliseconds per day. After 500 days, the difference between the Earth rotation time and the atomic time would be one second. Instead of allowing this to happen a leap second is inserted to bring the two times closer together.
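The decision logic can be mimicked with a toy scheduler. The sketch below is hypothetical, not IERS procedure: it assumes a constant 2 ms/day drift and inserts a leap second whenever the running difference would exceed the 0.9 second limit, ignoring the June/December scheduling preference described next.

```python
def simulate_leap_seconds(days, drift_ms_per_day=2.0, limit_s=0.9):
    """Toy model: accumulate the daily UT1-UTC drift and insert a leap
    second whenever the difference would otherwise exceed the limit."""
    diff_s = 0.0
    leap_days = []
    for day in range(1, days + 1):
        diff_s += drift_ms_per_day / 1000.0
        if abs(diff_s) > limit_s:
            diff_s -= 1.0  # a positive leap second pulls UTC back toward UT1
            leap_days.append(day)
    return leap_days

# With a steady 2 ms/day drift, a leap second is needed roughly every 450-500 days.
print(simulate_leap_seconds(3 * 365))
```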
The decision of when to introduce a leap second in UTC is the responsibility of the International Earth Rotation Service (IERS). According to international agreements, first preference is given to the opportunities at the end of December and June, and second preference to those at the end of March and September. Since the system was introduced in 1972, only dates in June and December have been used.
The official United States time is determined by the Master Clock at the U. S. Naval Observatory (USNO). The Observatory is charged with the responsibility for precise time determination and management of time dissemination. Modern electronic systems, such as electronic navigation or communication systems, depend increasingly on precise time and time interval (PTTI). Examples are the ground-based LORAN-C navigation system and the satellite-based Global Positioning System (GPS). Navigation systems are the most critical application for precise time. GPS, in particular, is widely used for navigating ships, planes, missiles, trucks, and cars anywhere on Earth. These systems are all based on the travel time of electromagnetic signals: an accuracy of 10 nanoseconds (10 one-billionths of a second) corresponds to a position accuracy of about 3 meters (or 10 feet).
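The quoted equivalence between 10 nanoseconds and about 3 meters is just the speed of light at work; a one-line check:

```python
SPEED_OF_LIGHT_M_PER_S = 299_792_458

def timing_error_to_position_error_m(timing_error_s):
    """Ranging error produced by a clock error, for a one-way electromagnetic signal."""
    return SPEED_OF_LIGHT_M_PER_S * timing_error_s

print(timing_error_to_position_error_m(10e-9))  # ~3.0 meters, as quoted above
```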
Precise time measurements are needed for the synchronization of clocks at two or more sites. Such synchronization is necessary, for example, for high-speed communications systems. Power companies use precise time to control power distribution grids and reduce power loss. Radio and television stations require precise time (the time of day) and precise frequencies in order to broadcast their transmissions. Many programs are transmitted from coast to coast to affiliate stations around the country. Without precise timing the stations would not be able to synchronize the transmission of these programs to local audiences. All of these systems are referenced to the USNO Master Clock.
Very precise time is kept by using atomic clocks. The principle of operation of the atomic clock is based on measuring the microwave resonance frequency (9,192,631,770 cycles per second) of the cesium atom. At the Observatory, the atomic time scale (AT) is determined by averaging 60 to 70 atomic clocks placed in separate, environmentally controlled vaults. Atomic Time is a very uniform measure of time, stable to about one tenth of one billionth of a second per day.
The USNO must maintain and continually improve its clock system so that it can stay one step ahead of the demands made on its accuracy, stability and reliability. The present Master Clock of the USNO is based on a system of some 60 independently operating cesium atomic clocks and 7 to 10 hydrogen maser atomic clocks. These clocks are distributed over 20 environmentally controlled clock vaults, to ensure their stability. By automatic inter-comparison of all clocks every 100 seconds, a time scale is computed which is not only reliable but also extremely stable. Its rate does not change by more than about 100 picoseconds (.0000000001 seconds) per day from day to day.
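One way to picture how an ensemble of independent clocks can outperform any single member is a weighted average of the clock readings, with the historically more stable clocks given more weight. The sketch below is illustrative only; the actual USNO time-scale algorithm is considerably more sophisticated, and the variable names and numbers are assumptions.

```python
def ensemble_time_offset(clock_offsets_ns, clock_stabilities_ns):
    """Weighted mean of clock offsets (each clock minus a common reference),
    weighting each clock by the inverse variance of its recent performance."""
    weights = [1.0 / (s ** 2) for s in clock_stabilities_ns]
    total_weight = sum(weights)
    return sum(w * x for w, x in zip(weights, clock_offsets_ns)) / total_weight

# Example: three cesium clocks compared every 100 seconds against a maser reference.
offsets_ns = [12.4, 11.9, 13.1]   # hypothetical readings, in nanoseconds
stabilities_ns = [0.5, 0.3, 0.8]  # hypothetical recent instabilities
print(ensemble_time_offset(offsets_ns, stabilities_ns))
```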
On the basis of this computed time scale, a clock reference system is steered to produce clock signals which serve as the USNO Master Clock. The clock reference system is driven by a hydrogen maser atomic clock. Hydrogen masers are extremely stable clocks over short time periods (less than one week). They provide the stability and reliability needed to maintain the accuracy of the Master Clock System.
Very Long Baseline Interferometry (VLBI) is used to determine Universal Time (UT1) based on the rotation of the Earth about its axis. VLBI is an advanced astronomical technique of observing extra-galactic sources (typically quasars) with radio telescopes. The information gained using VLBI can be used to generate images of the distant radio sources, to measure the rotation rate of the Earth and its motions in space, or even to measure how the tectonic plates beneath the telescopes are moving on the surface of the Earth. Measuring the Earth's rotational motion is critical for navigation. The most accurate navigation systems rely on measurements using satellite systems which are not tied to the Earth's surface. These systems can provide a position accurate to about a meter (a few feet), but the position of the Earth relative to the satellites must also be known to avoid potentially far larger errors.
The U.S. Naval Observatory has been in the forefront of timekeeping since the early 1800s. In 1845, the Observatory offered its first time service to the public: a time ball was dropped at noon. Beginning in 1865 time signals were sent daily by telegraph to Western Union and others. In 1904, a U.S. Navy station broadcast the first worldwide radio time signals based on a clock provided and controlled by the Observatory.
A time of day announcement can be obtained by calling 202-762-1401 locally in the Washington area. For long distance callers the number is 900-410-TIME. The latter number is a commercial service for which the telephone company charges 50 cents for the first minute and 45 cents for each additional minute. Australia, Hong Kong, and Bermuda can also access this service at international direct dialing rates. You can also get time for your computer by calling 202-762-1594. Use 1200 baud, no parity, 8 bit ASCII.
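For the computer time line (1200 baud, no parity, 8-bit ASCII), a short script on a machine with a Hayes-compatible modem could capture the broadcast. This is a hedged sketch using the pyserial library; the serial port name, the dial string, and the assumption that the service still answers are ours, not USNO documentation.

```python
import serial  # pyserial

# Open the modem's serial port with the settings quoted above:
# 1200 baud, 8 data bits, no parity (one stop bit assumed).
with serial.Serial("/dev/ttyS0", baudrate=1200,
                   bytesize=serial.EIGHTBITS,
                   parity=serial.PARITY_NONE,
                   stopbits=serial.STOPBITS_ONE,
                   timeout=5) as modem:
    modem.write(b"ATDT12027621594\r")  # Hayes dial command (hypothetical setup)
    for _ in range(10):                # read a few lines of the time broadcast
        line = modem.readline()
        if line:
            print(line.decode("ascii", errors="replace").rstrip())
```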
|Last modified: 24 October 2001||Approved by EO Dept. Head, USNO| | <urn:uuid:ad3517d5-9fdc-41be-abb7-3b5ca1eaa42c> | CC-MAIN-2013-20 | http://maia.usno.navy.mil/eo/leapsec.html | 2013-05-26T02:34:29 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706499548/warc/CC-MAIN-20130516121459-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.94046 | 1,861 | 4 | 4 |
The steps by which molecules in the primordial soup came together to form the genetic backbone of life are largely unknown. One approach to finding out is to artificially create basic life functions in the laboratory and consider if such conditions might have been possible in the Earth’s past. Writing in Physical Review Letters, Hubert Krammer and colleagues at the Ludwig Maximilian University of Munich in Germany show they are able to drive the replication of segments of tRNA (transfer ribonucleic acid), the molecule responsible for translating genetic code into the production of specific proteins, using a purely thermal process.
Krammer et al. begin by rapidly quenching a solution of four tRNA halves from a high temperature to a low one, so that the molecules form hairpins—a state where the strand folds back and pairs with itself, except for a short unpaired sequence of bases called a “toe hold.” It is this toe hold, which in principle carries enough information to encode a protein, that the authors try to protect and replicate, using a purely thermal process to coax the hairpins to open and pair with complementary strands. When Krammer et al. thermally cycle the solution between a low and a high temperature, the energy released when a hairpin opens and binds its complementary partner (a more stable pairing than the hairpin’s self-pairing) compensates for the entropy lost when the molecules pair up with their partners.
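The energy bookkeeping described above amounts to a DeltaG = DeltaH - T*DeltaS comparison: at high temperature the duplex is unstable and the strands separate, while at low temperature the enthalpy released by pairing outweighs the entropy lost. The parameters below are invented, order-of-magnitude values for illustration only; they are not taken from the paper.

```python
# Illustrative DeltaG = DeltaH - T*DeltaS comparison (units: kcal/mol and kcal/(mol*K)).
DELTA_H_DUPLEX = -60.0   # enthalpy released when the toe-hold strand pairs with its complement
DELTA_S_DUPLEX = -0.17   # entropy lost when two strands join into one duplex

def delta_g(temperature_k, dh=DELTA_H_DUPLEX, ds=DELTA_S_DUPLEX):
    """Gibbs free energy of duplex formation at the given temperature."""
    return dh - temperature_k * ds

for temp_c in (20, 90):
    temp_k = temp_c + 273.15
    dg = delta_g(temp_k)
    verdict = "pairing favored" if dg < 0 else "strands stay apart"
    print(f"{temp_c:>3} C: DeltaG = {dg:6.1f} kcal/mol ({verdict})")
```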
This thermally driven process occurs on a relatively fast time scale of seconds, an important factor since molecules need to replicate faster than they degrade. According to the authors, convection currents in prebiotic liquids could have provided the necessary quenching and thermal cycling. – Jessica Thomas | <urn:uuid:4667167f-2026-4584-834a-5892652dce7e> | CC-MAIN-2013-20 | http://physics.aps.org/synopsis-for/print/10.1103/PhysRevLett.108.238104 | 2013-05-26T03:09:20 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706499548/warc/CC-MAIN-20130516121459-00000-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.929244 | 339 | 4.15625 | 4 |
📚 FineWeb-Edu-score-4-dedup
This is a filtered version of the deduplicated FineWeb-Edu corpus. It includes only documents that received an educational score of at least 4.
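A filter of this kind can be reproduced with the Hugging Face datasets library. The sketch below is an assumption-laden example: the source dataset identifier and the use of the int_score column are inferred from the viewer columns shown above, not taken from official documentation.

```python
from datasets import load_dataset

# Hypothetical source: the deduplicated FineWeb-Edu corpus (identifier assumed).
ds = load_dataset("HuggingFaceFW/fineweb-edu", split="train", streaming=True)

# Keep only documents whose integer educational score is at least 4,
# mirroring the "score of at least 4" criterion described above.
filtered = ds.filter(lambda example: example["int_score"] >= 4)

for example in filtered.take(3):
    print(example["url"], example["int_score"])
```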