Mexican America - Introduction

"Mexican America" is a sampling of objects from the collections of the National Museum of American History. The stories behind these objects reflect the history of the Mexican presence in the United States. They illustrate a fundamentally American story about the centuries-old encounter between distinct (yet sometimes overlapping) communities that have coexisted but also clashed over land, culture, and livelihood. Who, where, and what is Mexico? Over time, the definitions and boundaries of Mexico have changed. The Aztec Empire and the area where Náhuatl was spoken—today the region surrounding modern Mexico City—was known as Mexico. For 300 years, Spanish colonizers called it New Spain. When Mexico was reborn in 1821 as a sovereign nation, its borders stretched from California to Guatemala. It was a huge and ancient land of ethnically, linguistically, and economically diverse regions that struggled for national unity. Texas (then part of the Mexican state of Coahuila y Tejas) was a frontier region far from the dense cities and fertile valleys of central Mexico, a place where immigrants were recruited from the United States. Those immigrants in turn declared the Mexican territory an independent republic in 1836 (later a U.S. state), making Texas the first cauldron of Mexican American culture. By 1853, the government of Mexico, the weaker neighbor of an expansionist United States, had lost what are today the states of California, Nevada, Utah, Arizona, New Mexico, Texas, and parts of Colorado and Wyoming. In spite of the imposition of a new border, the historical and living presence of Spaniards, Mexicans, indigenous peoples, and their mixed descendants remained a defining force in the creation of the American West.

This print depicts American forces attacking the fortress palace of Chapultepec on September 13, 1847.
General Winfield Scott, in the lower left on a white horse, led the southern division of the U.S. Army that successfully captured Mexico City during the Mexican-American War. The outcome of the American victory was the loss of Mexico's northern territories, from California to New Mexico, under the terms set in the Treaty of Guadalupe Hidalgo. It should be noted that the two countries ratified different versions of the same peace treaty, with the United States ultimately eliminating provisions for honoring the land titles of its newly absorbed Mexican citizens. Despite notable opposition to the war from Americans like Abraham Lincoln, John Quincy Adams, and Henry David Thoreau, the Mexican-American War proved hugely popular. The United States' victory boosted American patriotism and the country's belief in Manifest Destiny. - This large chromolithograph was first distributed in 1848 by Nathaniel Currier of Currier and Ives, who served as the "sole agent." The lithographers, Sarony & Major of New York (1846-1857), copied it from a painting by "Walker." The current location of the original painting is unknown; when the print was made, it was owned by a Captain B. S. Roberts of the Mounted Rifles. The original painting has previously been attributed to William Aiken Walker as well as to Henry A. Walke. William Aiken Walker (ca. 1838-1921) of Charleston did indeed do work for Currier and Ives, though not until the 1880s, and he would have been only 10 years old when this print was copyrighted. Henry Walke (1808/9-1896) was a naval combat artist during the Mexican-American War who also worked with Sarony & Major and is best known for his Naval Portfolio. - Most likely the original painting was done by James Walker (1819-1889), who created the "Battle of Chapultepec" (1857-1862) for the U.S. Capitol. This image differs from the painting commissioned for the U.S. Capitol by depicting the troops in regimented battle lines, with General Scott in a more prominent position in the foreground. James Walker was living in Mexico City at the outbreak of the Mexican War and joined the American forces as an interpreter. He was attached to General Worth's staff and was present at the battles of Contreras, Churubusco, and Chapultepec. Captain Roberts was assigned by General Winfield Scott to assist Walker in recreating the details of the battle of Chapultepec, and when the painting was complete, Roberts purchased it. By 1848, James Walker had returned to New York and had a studio in New York City in the same neighborhood as the print's distributor, Nathaniel Currier, and the lithographers Napoleon Sarony and Henry B. Major. - This popular lithograph was one of several published to visually document the war while engaging the imagination of the public. Created prior to the widespread use of photography, these prints were meant to inform the public, while generally omitting the gorier details. Historians have been able to use at least some prints of the Mexican War for study and to corroborate traditional written forms of documentation. As an eyewitness, Walker could claim accuracy of detail in the narrative of his painting. The battle is presented in the grand, heroic, historic style, with the brutality of war not portrayed. The print is quite large for a chromo of the period. In creating the chromolithographic interpretation of the painting, Sarony & Major used at least four large stones to produce the print "in colours," making the most of their use of color. They also defined each figure with precision by outlining each in black. This print was considered by the expert and collector Harry T. Peters to be one of the finest ever produced by Sarony & Major.
Tornadoes are the most intense storms on the planet, and they're never discussed without at least some mention of the term wind shear. Many of us sitting at home, though, have no idea what wind shear is, or if we do, how it affects tornado production. What Is Wind Shear? Wind shear, although it might sound complex, is a simple concept: it is merely the change in wind with height, in terms of both direction and speed. We all understand that the wind is generally stronger in the atmosphere over our heads than it is here on the ground, and if we think of the atmosphere in the three dimensions that it has, it should not be surprising that the wind above us might also be blowing from a different direction than the wind at the ground. When that happens (the wind speed and direction vary with height), wind shear is occurring. Wind Shear and Supercell Thunderstorms Wind shear is an important part of the development of a supercell thunderstorm, the kind from which the vast majority of strong tornadoes form. All thunderstorms are produced by a powerful updraft, a surge of air that rises from the ground into the upper levels of the atmosphere. When this updraft forms in an area where wind shear is present, it is influenced by the different speed and direction of the wind above, tilting the column of air in the updraft. Rain's Influence on Tornado Production Thunderstorms typically produce very heavy rain, and rain-cooled air is much heavier than the warm air of the updraft, so the rain-cooled air produces a compensating downdraft (what comes up must come down). This downdraft pushes the part of the rotating air that was forced in its direction by the stronger wind aloft downward, and the result is a horizontal column of rotating air. That's Not a Tornado!
I know what you're thinking: you've seen enough TLC or Discovery Channel shows to know that a horizontal column of air is NOT a tornado; you need a vertical column of air. This Can Be a Tornado You're right, but remember that the updraft driving the thunderstorm is still working, and it's able to pull the horizontal, spinning column of air up into the thunderstorm, resulting in a vertical column of spinning air. (NOAA image showing the vertical column of air in a supercell thunderstorm) The result is a rotating thunderstorm capable of producing a tornado, and it would not be possible without wind shear. (NOAA image showing tornado formation in a supercell thunderstorm)
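The definition above, a change in wind speed and direction with height, reduces to simple vector arithmetic. As an illustrative sketch (the two-level "bulk" measure and the wind values are assumptions for this example, not figures from the article), the shear between the surface wind and the wind a few kilometres up can be computed like this:

```python
import math

def bulk_shear(u_sfc, v_sfc, u_top, v_top):
    """Bulk wind shear: the vector difference between the wind at an
    upper level and the wind near the surface, given as m/s components
    (u = west-to-east, v = south-to-north)."""
    du = u_top - u_sfc
    dv = v_top - v_sfc
    return du, dv, math.hypot(du, dv)

# Hypothetical sounding: a 5 m/s southerly wind at the surface
# (blowing toward the north) and a 25 m/s westerly wind aloft
# (blowing toward the east) -- both speed and direction change.
du, dv, mag = bulk_shear(0.0, 5.0, 25.0, 0.0)
print(round(mag, 1))  # 25.5 m/s of bulk shear
```

A large shear magnitude like this, combined with the turning of the wind with height, is the environment the article describes as favorable for supercells.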
On this day in 1951, more than six years after the end of World War II in Europe, President Harry S. Truman signed a proclamation officially ending U.S. hostilities with Germany. The official end to the war came nine years, 10 months and 13 days after Congress had declared war on Nazi Germany. The lawmakers had responded to a declaration of war issued by the Third Reich in the aftermath of the Dec. 7, 1941, Japanese attack on Pearl Harbor and other U.S. bases in the Pacific. The president explained why he had waited so long after the fighting had ended to act: It had always been America’s hope, Truman wrote, to create a treaty of peace with the government of a united and free Germany, but the postwar policies pursued by the Soviet Union “made it impossible.” After the war, the United States, Britain, France and the Soviet Union divided Germany into four zones of occupation. Berlin, while located wholly within the Soviet zone, was jointly occupied by the wartime allies and also subdivided into four sectors because of its symbolic importance as the nation’s historic capital and seat of the former Nazi government. The three western zones were merged to form the Federal Republic of Germany in May 1949, and the Soviets followed suit in October 1949 with the establishment of the German Democratic Republic. The East German regime began to falter in May 1989, when the removal of Hungary’s border fences punched a hole in the Iron Curtain, allowing tens of thousands of East Germans to flee to the West. Despite the grants of general sovereignty to both German states in 1955, neither of the two German governments held unrestricted sovereignty under international law until after they were reunified in October 1990.
Uveitis is inflammation of the uvea, which is made up of the iris, ciliary body and choroid. Together, these form the middle layer of the eye between the retina and the sclera (white of the eye). The eye is shaped like a tennis ball, with three different layers of tissue surrounding the central gel-filled cavity, which is called the vitreous. The innermost layer is the retina, which senses light and helps to send images to your brain. The outermost layer is the sclera, the strong white wall of the eye. The middle layer between the sclera and retina is called the uvea. The uvea contains many blood vessels — the veins, arteries and capillaries — that carry blood to and from the eye. Because the uvea nourishes many important parts of the eye (such as the retina), inflammation of the uvea can damage your sight. There are several types of uveitis, defined by the part of the eye where it occurs. - Iritis affects the front of your eye. Also called anterior uveitis, this is the most common type of uveitis. Iritis usually develops suddenly and may last six to eight weeks. Some types of anterior uveitis can be chronic or recurrent. - If the uvea is inflamed in the middle or intermediate region of the eye, it is called pars planitis (or intermediate uveitis). Episodes of pars planitis can last from a few weeks to years. The disease goes through cycles of getting better, then worse. - Posterior uveitis affects the back parts of your eye. Posterior uveitis can develop slowly and often lasts for many years. - Panuveitis occurs when all layers of the uvea are inflamed.
Marion Levine teaches English, Literature and Film Production at Los Angeles Center for Enriched Studies, Los Angeles, CA Measure for Measure, Act 4 or 5 What's On for Today and Why Students will choose a character from Measure for Measure and create a "back story" for that character. This will encourage students to read the text closely, looking for clues regarding a specific character's history. Students will re-read a portion of the text and then write about what has happened to the character before the play begins. They will then create an artifact, such as a diary or journal entry, written by the character they have selected. This will allow them the opportunity to think like the character and to view the events of the play from a specific point of view. This lesson will take two 40-minute class periods. What You Need Measure for Measure, Folger Edition What To Do 1. Explain the concept of a "back story" as the important events that occur to a character before the play begins. You may need to prompt students with questions such as: What was the character like as a child? In what situation did he/she grow up? Students will need to show how the script supports their choices. 2. Have the students write a one or two page back story in either the first or third person. 3. Divide students into small groups of 4 or 5 and have them re-read Act 4 or Act 5, combing through the text for character details. 4. Have students write a letter, diary or journal entry from their selected character's point of view (first person). This artifact should concern one or more characters in the play. 5. For increased authenticity, appropriate for an "Extra-Extended" book, students could write their letter or diary entry using calligraphy, a handwriting font, or a piece of yellowed paper. 6. Allow students time to read their pieces and share their artifacts with the class. How Did It Go? Were students able to justify their choices with reference to the text?
Did their artifacts accurately portray character traits that can be interpreted from the text? Were students able to convey a sense of the character's perspective through this activity? This lesson could be applied to any fictional text that the students read in class. Through close reading and attention to a specific character, students are able to identify with, and understand the concerns of, a character on a deeper level. Possible choices could include Jay Gatsby, Hester Prynne, and Atticus Finch. If you used this lesson, we would like to hear how it went and about any adaptations you made to suit the needs of YOUR students.
Mercury in the Morning The planet Mercury -- the planet closest to the Sun -- is just peeking into view in the east at dawn the next few days. It looks like a fairly bright star. It's so low in the sky, though, that you need a clear horizon to spot it, and binoculars wouldn't hurt. Mercury is a bit of a puzzle. It has a big core that's made mainly of iron, so it's quite dense. Because Mercury is so small, the core long ago should've cooled enough to form a solid ball. Yet the planet generates a weak magnetic field, hinting that the core is still at least partially molten. The solution to this puzzle may involve an iron "snow" deep within the core. The iron in the core is probably mixed with sulfur, which has a lower melting temperature than iron. Recent models suggest that the sulfur may have kept the outer part of the core from solidifying -- it's still a hot, thick liquid. As this mixture cools, though, the iron "freezes" before the sulfur does. Small bits of solid iron fall toward the center of the planet. This creates convection currents -- like a pot of boiling water. The motion is enough to create a "dynamo" effect. Like a generator, it produces electrical currents, which in turn create a magnetic field around the planet. Observations earlier this year by the Messenger spacecraft seem to support that idea. But Messenger will provide much better readings of what's going on inside Mercury when it enters orbit around the planet in 2011. Script by Damond Benningfield, Copyright 2008 For more skywatching tips, astronomy news, and much more, read StarDate magazine.
Black holes growing faster than expected Black hole find Existing theories on the relationship between the size of a galaxy and its central black hole are wrong, according to a new Australian study. The discovery by Dr Nicholas Scott and Professor Alister Graham, from Melbourne's Swinburne University of Technology, found smaller galaxies have far smaller black holes than previously estimated. Central black holes, millions to billions of times more massive than the Sun, reside in the core of most galaxies, and are thought to be integral to galactic formation and evolution. However, astronomers are still trying to understand this relationship. Scott and Graham combined data from observatories in Chile, Hawaii and the Hubble Space Telescope to develop a database listing the masses of 77 galaxies and their central supermassive black holes. The astronomers determined the mass of each central black hole by measuring how fast stars are orbiting it. Existing theories suggest a direct ratio between the mass of a galaxy and that of its central black hole. "This ratio worked for larger galaxies, but with improved technology we're now able to examine far smaller galaxies and the current theories don't hold up," says Scott. In a paper to be published in the Astrophysical Journal, they found that for each ten-fold decrease in a galaxy's mass, there was a one hundred-fold decrease in its central black hole mass. "That was a surprising result which we hadn't been anticipating," says Scott. The study also found that smaller galaxies have far denser stellar populations near their centres than larger galaxies. According to Scott, this also means the central black holes in smaller galaxies grow much faster than their larger counterparts. Black holes grow by merging with other black holes when their galaxies collide. "When large galaxies merge they double in size and so do their central black holes," says Scott.
"But when small galaxies merge their central black holes quadruple in size because of the greater densities of nearby stars to feed on." Somewhere in between The findings also solve the long standing problem of missing intermediate mass black holes. For decades, scientists have been searching for something in between stellar mass black holes formed when the largest stars die, and supermassive black holes at the centre of galaxies. "If the central black holes in smaller galaxies have lower mass than originally thought, they may represent the intermediate mass black hole population astronomers have been hunting for," says Graham. "Intermediate sized black holes are between ten thousand and a few hundred thousand times the mass of the Sun, and we think we've found several good candidates." "These may be big enough to be seen directly by the new generation of extremely large telescopes now being built," says Graham.
Hoodoos may be seismic gurus Hoodoo prediction Towering chimney-like sedimentary rock spires known as hoodoos may provide an indication of an area's past earthquake activity. The research by scientists including Dr Rasool Anooshehpoor, from the United States Nuclear Regulatory Commission, may provide scientists with a new tool to test the accuracy of current hazard models. Hoodoo formations are often found in desert regions, and are common in North America, the Middle East and northern Africa. They are caused by the uneven weathering of different layers of sedimentary rock, which leaves boulders or thin caps of hard rock perched on softer rock. By knowing the strengths of different types of sedimentary layers, scientists can determine the amount of stress needed to cause those rocks to fracture. The United States Geological Survey (USGS) uses seismic hazard models to predict the type of ground motion likely to occur in an area during a seismic event. But, according to Anooshehpoor, these models lack long-term data. "Existing hazard maps use models based on scant data going back a hundred years or so," says Anooshehpoor. "But earthquakes have return periods lasting hundreds or thousands of years, so there is nothing to test these hazard models against." The researchers examined two unfractured hoodoos within a few kilometres of the Garlock fault, an active strike-slip fault zone in California's Red Rock Canyon. Their findings are reported in the Bulletin of the Seismological Society of America. "Although we can't put a precise age on hoodoos because of their erosion characteristics, we can use them to provide physical limits on the level of ground shaking that could potentially have occurred in the area," says Anooshehpoor. The researchers developed a three-dimensional model of each hoodoo and determined the most likely place where each spire would fail in an earthquake.
They then tested rock samples similar to the hoodoo pillars to measure their tensile strength and compared their results with previously published data. USGS records suggest at least one large magnitude earthquake occurred along the fault in the last 550 years, resulting in seven metres of slip, yet the hoodoos are still standing. This finding is consistent with a median level of ground motion associated with the large quakes in this region, says Anooshehpoor. "If an earthquake occurred with a higher level of ground motion, the hoodoos would have collapsed," he says. "Nobody can predict earthquakes, but this will help predict what ground motions are associated with these earthquakes when they happen." Dr Juan Carlos Afonso from the Department of Earth and Planetary Sciences at Sydney's Macquarie University says it's an exciting development. "In seismic hazard studies, it's not just difficult to cover the entire planet, it's hard to cover even small active regions near populated areas," says Afonso. "You need lots of instruments, so it's great if you can rely on nature and natural objects to help you." He says while the work is still very new and needs to be proven, the physics seems sound.
Science Fair Project Encyclopedia The chloride ion is formed when the element chlorine picks up one electron to form the anion (negatively charged ion) Cl−. The salts of hydrochloric acid HCl contain chloride ions and are also called chlorides. An example is table salt, which is sodium chloride with the chemical formula NaCl. In water, it dissolves into Na+ and Cl− ions. The word chloride can also refer to a chemical compound in which one or more chlorine atoms are covalently bonded in the molecule. This means that chlorides can be either inorganic or organic compounds. The simplest example of an inorganic covalently bonded chloride is hydrogen chloride, HCl. A simple example of an organic covalently bonded chloride is chloromethane (CH3Cl), often called methyl chloride. Other examples of inorganic covalently bonded chlorides which are used as reactants are: - phosphorus trichloride, phosphorus pentachloride, and thionyl chloride - all three are reactive chlorinating reagents that have been used in the laboratory. - Disulfur dichloride (S2Cl2) - used for vulcanization of rubber. Chloride ions have important physiological roles. For instance, in the central nervous system the inhibitory action of glycine and some of the action of GABA relies on the entry of Cl− into specific neurons.
Fun Classroom Activities The 20 enjoyable, interactive classroom activities that are included will help your students understand the text in amusing ways. Fun Classroom Activities include group projects, games, critical thinking activities, brainstorming sessions, writing poems, drawing or sketching, and more that will allow your students to interact with each other, be creative, and ultimately grasp key concepts from the text by "doing" rather than simply studying. 1. A Year from Now Where will Bone be and how will she be feeling a year from now? Write a one page description of Bone's life a year after the end of the book from Bone's perspective. 2. The Monster Within When Bone's anger is described, it seems to grow and even take form. Take one of the descriptions of Bone's anger and rage and draw it. 3. Bone's Poetry Write a poem as if you are Bone. The poem can be...
In the American electoral system, a primary election is an election that determines the nominee for each political party, who then competes for the office in the general election. A presidential primary is a state election that picks the delegates committed to nominate particular candidates for president of the United States. A presidential caucus, as in Iowa, requires voters to meet together for several hours in face-to-face meetings that select county delegates, who eventually pick the delegates to the national convention. No other country uses primaries; they choose their candidates in party conventions. Primaries were introduced in the Progressive Era in the early 20th century to weaken the power of bosses and make the system more democratic. In presidential elections, they became important starting in 1952, when the first-in-the-nation New Hampshire Primary helped give Dwight D. Eisenhower the Republican nomination, and knocked Harry S. Truman out of the Democratic race because of his poor showing. In 1968, Lyndon B. Johnson ended his reelection campaign after doing poorly in New Hampshire. After 1968, both parties changed their rules to emphasize presidential primaries, although some states still use the caucus system. In recent decades, New Hampshire holds the first primary a few days after Iowa holds the first caucus. That gives these two states enormous leverage, as the candidates and the media focus there. New Hampshire and Iowa receive about half of all the media attention given to all primaries. The primary allows voters to choose between different candidates of the same political party, perhaps representing different wings of the party. For example, a Republican primary may choose between a range of candidates from moderate to conservative. Gallup's 2008 polling data indicated a trend in primary elections towards more conservative candidates, despite the more liberal result in the general election.
In recent years the primary season has come earlier and earlier, as states move up to earlier dates in the hope it will give them more leverage. For example, Barry Goldwater won the 1964 nomination because he won the last primary, in California. The logic is faulty: in highly contested races the later primaries have more leverage. Thus in 2008 California gave up its traditional last-in-the-nation role and joined 20 other states on Super Tuesday; neither the candidates nor the voters paid it much attention. Michigan and Florida moved up their primaries in defiance of national Democratic Party rules and were penalized. The result is that the primary season is extended and far more expensive, and no state gains an advantage, except for Iowa and New Hampshire, which now have dates in early January. In late 2009 the two national parties were meeting to find a common solution.
by I. Peterson Unlike an ordinary, incandescent bulb, a laser produces light of a single wavelength. Moreover, the emitted light waves are coherent, meaning that all of the energy peaks and troughs are precisely in step. Now, a team at the Massachusetts Institute of Technology has demonstrated experimentally that a cloud consisting of millions of atoms can also be made coherent. Instead of flying about and colliding randomly, the atoms display coordinated behavior, acting as if the entire assemblage were a single entity. According to quantum mechanics, atoms can behave like waves. Thus, two overlapping clouds made up of atoms in coherent states should produce a zebra-striped interference pattern of dark and light fringes, just like those generated when two beams of ordinary laser light overlap. By detecting such a pattern, the researchers proved that the clouds' atoms are coherent and constitute an "atom laser," says physicist Wolfgang Ketterle, who heads the MIT group. These matter waves, in principle, can be focused just like light. Ketterle and his coworkers describe their observations in the Jan. 31 Science. The demonstration of coherence involving large numbers of atoms is the latest step in a series of studies of a remarkable state of matter called a Bose-Einstein condensate. Chilled to temperatures barely above absolute zero, theory predicted, the atoms would collectively enter the same quantum state and behave like a single unit, or superparticle, with a specific wavelength. First created in the laboratory in 1995 by Eric A. Cornell and his collaborators at the University of Colorado and the National Institute of Standards and Technology, both in Boulder, Bose-Einstein condensates have been the subject of intense investigation ever since (SN: 7/15/95, p. 36; 5/25/96, p. 327). At MIT, Ketterle and his colleagues cool sodium atoms to temperatures below 2 microkelvins. The frigid atoms are then confined in a special magnetic trap inside a vacuum chamber. 
To determine whether the atoms in the resulting condensate are indeed as coherent as photons in a laser beam, the researchers developed a novel method of extracting a clump of atoms from the trap. In effect, they manipulate the magnetic states of the atoms to expel an adjustable fraction of the original cloud; under the influence of gravity, the released clump falls. The method can produce a sequence of descending clumps, with each containing 100,000 to several million coherent atoms. The apparatus acts like a dripping faucet, Ketterle says. He and his colleagues describe the technique in the Jan. 27 Physical Review Letters. To demonstrate interference, the MIT group created a double magnetic trap so that two pulses of coherent atoms could be released at the same time. As the two clumps fell, they started to spread and overlap. The researchers could then observe interference between the atomic waves of the droplets. "The signal was almost too good to be true," Ketterle says. "We saw a high-contrast, very regular pattern." "It's a beautiful result," Cornell remarks. "This work really shows that Bose-Einstein condensation is an atom laser." From the pattern, the MIT researchers deduced that the condensate of sodium atoms has a wavelength of about 30 micrometers, considerably longer than the 0.04-nanometer wavelength typical of individual atoms at room temperature. Ketterle and his colleagues are already planning several improvements to their primitive atom laser, including getting more atoms into the emitted pulses and going from pulses to a continuous beam. Practical use of an atom laser for improving the precision of atomic clocks and for manipulating atoms is still distant, however, Cornell notes.
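The enormous gap between the wavelength of a cold atom cloud and that of a room-temperature atom can be illustrated with the thermal de Broglie wavelength, lambda = h / sqrt(2*pi*m*k*T). This is only an order-of-magnitude sketch under standard constant values; it shows how the wavelength grows as 1/sqrt(T), but it does not reproduce the condensate's measured 30-micrometer wavelength, which also depends on the trap and the cloud's expansion. The function name is mine:

```python
import math

H = 6.626e-34    # Planck's constant, J*s
KB = 1.381e-23   # Boltzmann's constant, J/K
M_NA = 3.82e-26  # mass of a sodium-23 atom, kg

def thermal_de_broglie_wavelength(temp_kelvin):
    """Thermal de Broglie wavelength: lambda = h / sqrt(2*pi*m*k*T)."""
    return H / math.sqrt(2 * math.pi * M_NA * KB * temp_kelvin)

room = thermal_de_broglie_wavelength(300)   # roughly 0.02 nm, same order as the 0.04 nm quoted
cold = thermal_de_broglie_wavelength(2e-6)  # tenths of a micrometer: ~10,000 times longer
print(f"300 K: {room * 1e9:.3f} nm")
print(f"2 uK:  {cold * 1e6:.2f} um")
```

Cooling from room temperature to 2 microkelvins stretches the wavelength by a factor of sqrt(300 / 2e-6), about 12,000, which is why the atomic waves become long enough to overlap and interfere.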
July 31, 1998 Explanation: Do you recognize the constellation Orion? This striking but unfamiliar-looking picture of the familiar Orion region of the sky was produced using survey data from the InfraRed Astronomical Satellite (IRAS). It combines information recorded at three different invisible infrared wavelengths in a red, green, and blue color scheme and covers about 30x24 degrees on the sky. Most of Orion's visually impressive stars don't stand out, but bright Betelgeuse does appear as a small purplish dot just above center. Immediately to the right of Betelgeuse and prominent in the IRAS skyview, expanding debris from a stellar explosion, a supernova remnant, is seen as a large, bright, ring-shaped feature. The famous gas clouds in Orion's sword glow brightly as the yellow regions at the lower right. No longer operational, IRAS used a telescope cooled by liquid helium to detect celestial infrared radiation. Authors & editors; NASA Technical Rep.: Jay Norris. Specific rights apply. A service of LHEA at NASA/GSFC and Michigan Tech. U.
Combined Gas Law
The Combined Gas Law combines Charles's Law, Boyle's Law and Gay-Lussac's Law. The Combined Gas Law states that for a gas, pressure x volume / temperature = constant. Alright. In class you should have learned about the three different gas laws, the first one being Boyle's Law, which talks about the relationship between pressure and volume of a particular gas. The next one should be Charles's Law, which talks about the volume and temperature of a particular gas. And the last one should be Gay-Lussac's Law, which talks about the relationship between pressure and temperature of a particular gas. Okay. But what happens when you have pressure, volume and temperature all changing? Well, we're actually going to combine these gas laws to form one giant gas law called the combined gas law. Okay. If you notice, in these three gas laws the pressure and volume are always in the numerator. So we're going to keep them in the numerator: p1v1. And notice the temperature is in the denominator: over t1. So all these things are just squished into one, and then p2v2 over t2. Okay. So this is what we're going to call the combined gas law. So let's actually get an example and do one together. Alright, so I have a problem up here that says a gas at 110 kilopascals and 30 degrees celsius fills a flexible container with an initial volume of two litres, okay? If the temperature is raised to 80 degrees celsius and the pressure is raised to 440 kilopascals, what is the new volume? Okay. So notice we have three variables. We're talking about pressure, temperature and volume. Okay, so now we're going to employ this combined gas law dealing with all three of these variables. So we're going to look at our first number, 110 kilopascals, and that is a unit of pressure. So we know that's p1. Our p1 is 110 kilopascals, at 30 degrees celsius. I don't like things in celsius, so I'm going to change this to kelvin. So I'm going to add 273 to that, which makes it 303 kelvin.
That's our temperature. And my initial volume is two litres, so I'm going to say v1 = 2 litres. Okay, then I continue reading. If the temperature is raised to 80 degrees celsius, again we want it in kelvin, so we're going to add 273, making it 353. So our t2 is 353 kelvin. And the pressure increased to 440 kilopascals, so our p2 is equal to 440 kilopascals, and I'm very happy the problem kept everything in kilopascals. I've got to make sure these units are the same, because pressure can be measured in several different units. I'm going to make sure all units are the same. And what is the new volume? So our v2 is our variable, what we're trying to find. Okay. So let's basically plug all these variables into our combined gas law to figure out what the new volume would be. Okay. So I'm going to erase this and say our pressure one is 110 kilopascals. Our volume one is two litres. Our temperature one is 303 kelvin. Our pressure two is 440 kilopascals. We don't know our volume, so we're just going to say v2 over 353 kelvin. Okay. When I'm looking for a variable, I'm going to cross multiply these guys. So I'm going to say 353 times 110 times 2, and that should give me 77,660 if you put it in a calculator. So I just cross multiplied these guys. And I cross multiply these guys: 303 times 440 times v2 gives me 133,320 v2. Okay, so then I want to isolate my variable, so I'm going to divide both sides by 133,320. And I find that my new volume is 0.58 litres. And that is how you do the combined gas law.
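The arithmetic in the worked example above can be checked with a short script. This is a minimal sketch; the function name and the Celsius-to-Kelvin handling are mine, following the same steps as the lesson:

```python
def combined_gas_law_v2(p1, v1, t1_c, p2, t2_c):
    """Solve p1*v1/t1 = p2*v2/t2 for v2.

    Pressures in any single consistent unit (kPa here);
    temperatures given in Celsius and converted to Kelvin,
    as in the worked example.
    """
    t1 = t1_c + 273
    t2 = t2_c + 273
    return (p1 * v1 * t2) / (t1 * p2)

# The example: 110 kPa and 2 L at 30 C, raised to 440 kPa and 80 C.
v2 = combined_gas_law_v2(110, 2, 30, 440, 80)
print(round(v2, 2))  # 0.58 (litres)
```

Note the answer keeps the unit of the initial volume: since v1 was entered in litres, v2 comes out in litres.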
Woodrow Wilson, as described in the introductory section of the text, was the leader of the immediate post-war period and was the architect of an internationalist vision for a new world order. Yet, as discussed in the paragraphs below, he was not able to persuade the other Allied leaders at the peace settlement negotiations in Paris to embrace his vision. But it was not just the opposition of Clemenceau and Lloyd George to some of his ideas that moved the conference away from Wilson's vision. Wilson became so blindly caught up in his vision, thinking that everything he advocated was what democracy and justice demanded, that he completely alienated the other negotiators in Paris, and they stopped listening to him. Another historian points to a different problem: that Wilson himself stopped listening to his earlier vision, having become convinced that a harsh peace was justified and desirable. Even if that historical view is accurate, Wilson was probably still more moderate in his conception of a harsh peace than were Clemenceau and Lloyd George. But as the conference dragged on and the departure from Wilsonianism became more and more pronounced, Wilson clung to his proposal for the League of Nations. In fact, he seemed to place all his faith in his pet project, believing it would solve all the evils the negotiators were unable to solve during the conference. Unfortunately, Wilson made it clear that the League was his primary objective, and it came to be his only bargaining chip. He then compromised on numerous issues that had no corollary in his vision in order to maintain support for the creation of the League. Thus, though full of good intentions and a vision for a just and peaceful future, Wilson's arrogance and ineffective negotiating skills largely contributed to the downfall of his vision.
Finally, it must be mentioned that Wilson's inability to negotiate with the Senate in its discussion of the ratification of the Treaty of Versailles caused the Senate to reject the Treaty, leaving the United States noticeably absent from the newly created League of Nations, which greatly undermined the effectiveness and importance of Wilson's principal goal. Nonetheless, Wilson was awarded the 1919 Nobel Peace Prize for his efforts to secure a lasting peace and the success in the creation of the League of Nations. David Lloyd George, the British Prime Minister, entered the negotiations in Paris with the clear support of the British people, as evidenced by his convincing win in the so-called khaki election of December 1918. During the weeks leading up to the election, though, he had publicly committed himself to work for a harsh peace against Germany, including obtaining payments for war damages committed against the British. These campaign promises went against Lloyd George's personal convictions. Knowing that Germany had been Britain's best pre-war trading partner, he thought that Britain's best chance to return to its former prosperity was to restore Germany to a financially stable situation, which would have required a fairly generous peace with respect to the vanquished enemy. Nonetheless, his campaign statements showed Lloyd George's understanding that the public did not hold the same convictions as he did, and that, on the contrary, the public wanted to extract as much as possible out of the Germans to compensate them for their losses during the war. So Lloyd George and Clemenceau were in agreement on many points, each one seeming to support the other in their nationalist objectives, and thereby scratching each other's back as the "game of grab" of Germany's power played itself out. But most historians do not attribute to Lloyd George a significant role in the Treaty negotiations. 
In their defense, Clemenceau and Lloyd George were only following popular sentiment back home when they fought for harsh terms against Germany. It is clear from historical accounts of the time that after seeing so many young men not return from the trenches on the Western front, the French and British wanted to exact revenge against the Germans through the peace settlement, to ensure that their families would never again be destroyed by German aggression. In that respect, representative democracy was clearly functioning as intended. In fact, Lloyd George is the quintessential example of an elected leader serving the interests of his people, putting his personal convictions second to British public opinion. Yet it was that same public opinion (in France and Britain) that Wilson had believed would support his internationalist agenda, placing Germany in the context of a new and more peaceful world order which would prevent future aggression. Wilson's miscalculation was one of the greatest factors leading to the compromise of his principles and the resulting harsh and, in the eyes of many, unjust treatment of Germany within the Treaty of Versailles. [See also the biographies of the Big Three listed on the Links.]

1. James L. Stokesbury, A Short History of World War I, 1981, p. 309.
2. Manfred F. Boemeke, "Woodrow Wilson's Image of Germany, the War-Guilt Question, and the Treaty of Versailles," in The Treaty of Versailles: A Reassessment After 75 Years, Ch. 25, Boemeke, Feldman & Glaser, eds., 1998, pp. 603-614.
3. Robert H. Ferrell, Woodrow Wilson and World War I: 1917-1921, 1985, p. 146.
4. Lawrence E. Gelfand, "The American Mission to Negotiate Peace: An Historian Looks Back," in The Treaty of Versailles: A Reassessment After 75 Years, Ch. 8, Boemeke, Feldman & Glaser, eds., 1998, p. 191.
5. See Ferrell, supra note 3, Ch. 10, "The Senate and the Treaty."
6. Information in this paragraph is taken from Ferrell, supra note 3, at 142, 144, 151.
7. Id. at 151.
8. Stokesbury, supra note 1, at 311-312.
Introduction / History Jews represent the oldest monotheistic religion of modern times. Because of the uniqueness of their history and culture, all Jews have a strong sense of identity. Persecution of and discrimination against the Jews have been the historical reasons for their migrations and settlements around the world. The Jews of Europe arrived on the continent at least 2,000 years ago during the early days of the Roman empire. Since then, they have been a significant influence in the history and culture of Europe. Much of what is considered "Jewish" today finds its roots among the European Jews. One of the unique features among European Jews is the distinction between the Ashkenazic Jews and the Sephardic Jews. The word Ashkenaz is derived from a Biblical word for the larger Germanic region of Europe. Therefore, Ashkenazim Jews are those whose ancestry is linked to that area. This group traditionally speaks the Yiddish language, which is a German dialect that has Hebrew and Slavic elements. The word Sephard was the name used by Jews in medieval times for the Iberian peninsula. Sephardim Jews, then, are the descendants of the Jews who lived in Spain or Portugal prior to expulsion in 1492 by King Ferdinand and Queen Isabella. Sephardim also have a distinctive language called Ladino, or Judeo-Spanish. This is a dialect of Castilian Spanish with Hebrew and Turkish elements. What are their lives like? During the last few centuries, Eastern Europe had the largest Jewish population in the world. National attitudes toward the Jews were ambivalent, depending on the usefulness of the Jewish inhabitants to the nations' rulers. Anti-Semitism was prevalent and frequently led to either persecution or expulsion. The Holocaust of World War II was the climax of Jewish persecution in Europe, leading to the extermination of six million Jews. Many Eastern European countries lost the majority of their Jewish population in this tragedy. 
As a result of the Holocaust, thousands of Jewish survivors and their descendants have emigrated from Eastern Europe to Israel, the United States, or Western Europe. The recent memories of the Holocaust as well as the centuries of discrimination and persecution play a strong part in modern Jewish identity. European Jews are strong supporters of "Zionism," a revival of Jewish culture and support of Israel as a national, secure, Jewish homeland. Since the dissolution of the Soviet empire, former Soviet Jews no longer live under oppressive government rule. Anti-Semitism is still a concern, but Jewish life has been revitalized in recreated countries like the Ukraine. Synagogues are functioning and kosher (traditional, acceptable) food is once again available. The Jewish emigration from Eastern Europe is cause for concern among the remaining aged Jewish population. As the older Jews die, the Jewish community dwindles. Many of the younger Jews are unlearned in their Jewish identity. They are either non-observant or have assimilated into the prevailing culture. However, strong efforts are being made to maintain a Jewish presence and clarify their identity. Jewish schools are being opened and Judaic studies are being promoted in universities. Jewish hospitals and retirement homes are being built. Community centers also promote cultural events such as the Israeli dance, theater, Yiddish and Hebrew lessons, and sports. Western Europe now has the largest concentration of European Jewish residents. The Netherlands received a large influx of Sephardic Jews from Portugal in the late 1500's, and another contingent of Ashkenazic Jews after World War II. They have been very influential in the development of Dutch commerce. England's Jews are concentrated in the Greater London area and have been politically active for over 100 years. They have been avid supporters of Zionism and solidly committed to the settlement of Diaspora Jews in Israel. 
A large percentage of England's Jews are affiliated with the traditional Orthodox synagogues. Italy's Jewish population is primarily Sephardic due to its absorption of Spanish Jews in the 1500s. France's Ashkenazic community received 300,000 Sephardic Jews from North Africa in recent decades. What are their beliefs? For religious Jews, God is the Supreme Being, the Creator of the universe, and the ultimate Judge of human affairs. Beyond this, the religious beliefs of the Jewish communities vary greatly. European Jews are extremely diverse in religious practice. The Ashkenazic Jews are the most prevalent, representing the Orthodox, ultra-Orthodox, Conservative, and Reform movements. The unusual and adamantly traditional Hasidic movement was born in Poland and has gained a strong following in the United States and Israel. The Sephardic denomination is similar to the Orthodox Ashkenazic, but is more permissive on dietary rules and some religious practices. Each Jewish denomination maintains synagogues and celebrates the traditional Jewish holiday calendar. While most European Jews are religiously affiliated, there is a significant minority which is not religious. What are their needs? The Jews have a wonderful understanding of their connection with the Abrahamic covenant. However, they also have a history of rejecting Jesus Christ as Messiah, the one who has fulfilled that covenant. Pray that as the Gospel is shared, it will not be viewed as anti-Semitic, but rather as the fulfillment of what God promised through Abraham centuries ago. Prayer Points
* Ask the Lord of the harvest to send forth loving Christians to work among the Jewish communities.
* Ask the Holy Spirit to grant wisdom and favor to the missions agencies that are focusing on the European Jews.
* Pray that the Jewish people will understand that Jesus is the long-awaited Messiah.
* Ask the Lord to soften the hearts of the Jews towards Christians so that they might hear and receive the message of salvation.
* Pray that the Lord Jesus will reveal Himself to the Jews through dreams and visions.
* Pray that God will grant Jewish believers favor as they share their faith in Christ with their own people.
* Pray that strong local churches will be raised up in each Jewish community.
* Pray for the availability of the Jesus Film in the primary language of this people.
After the British Pyrrhic (costly) victory at Bunker Hill in 1775, British General William Howe decided a lethal blow needed to be delivered to the Patriot cause. Howe proposed to launch an attack on New York City using tens of thousands of troops. He began mobilizing the massive fleet in Halifax, Nova Scotia. Meanwhile, American Commander-in-Chief George Washington had ordered General Charles Lee to prepare for the defense of the city. That June, Howe and 9,000 troops set sail for New York. Howe's army was to be met in the city by additional regiments of German and British troops, and reinforcements led by Howe's brother, Admiral Richard Howe, would follow them. Howe's initial fleet arrived in New York Harbor and began landing troops on Staten Island. On August 27, 1776, British forces engaged the Americans at the Battle of Brooklyn Heights (also called the Battle of Long Island). Howe's army successfully outflanked Washington's, eventually causing the Patriots, after some resistance, to withdraw to Manhattan under the cover of darkness, thereby avoiding a potentially costly siege at the hands of the British. After failed peace negotiations, the British Army next struck at Lower Manhattan, where 12,000 British troops quickly overtook the city. Most of the Continental Army had retreated to defensible positions at Harlem Heights and then to White Plains, well north of the city, but some soldiers remained at Fort Washington in Manhattan. Howe's army chased Washington and the Continental Army into positions north of White Plains before returning to Manhattan. There, Howe set his sights on Fort Washington, the last Patriot stronghold in Manhattan. In a furious, three-pronged attack, British forces easily took the fort, capturing nearly 3,000 American prisoners and at least 34 cannons in the process. Most of the prisoners were taken to squalid British prison ships, where all but 800 or so died of disease or starvation.
General Washington, now at Fort Lee, directly across the Hudson River from Fort Washington, watched the fort fall. British forces then ferried up the Hudson River in barges toward Fort Lee. Washington ordered the evacuation of the fort's 2,000 soldiers across the Hackensack River at New Bridge Landing, and would eventually lead his army across New Jersey and over the Delaware River into Pennsylvania. Following the events in and around New York City, the outlook was bleak for the Continental Army. Morale was extremely low, enlistments were ending, and desertions were commonplace. Even General Washington admitted his army's chances of success were slim. Meanwhile, General Howe ordered his army into winter quarters that December and established several outposts from New York City south to New Brunswick, New Jersey.
The Solar and Heliospheric Observatory (SOHO) spacecraft is expected to discover its 1,000th comet this summer. The SOHO spacecraft is a joint effort between NASA and the European Space Agency. It has accounted for approximately one-half of all comet discoveries with computed orbits in the history of astronomy. "Before SOHO was launched, only 16 sungrazing comets had been discovered by space observatories. Based on that experience, who could have predicted SOHO would discover more than 60 times that number, and in only nine years," said Dr. Chris St. Cyr. He is senior project scientist for NASA's Living With a Star program at the agency's Goddard Space Flight Center, Greenbelt, Md. "This is truly a remarkable achievement!" About 85 percent of the comets SOHO has discovered belong to the Kreutz group of sungrazing comets, so named because their orbits take them very close to the sun. The Kreutz sungrazers pass within 500,000 miles of the sun's visible surface. Mercury, the planet closest to the sun, is about 36 million miles from the solar surface. SOHO has also been used to discover three other well-populated comet groups: the Meyer, with at least 55 members; the Marsden, with at least 21 members; and the Kracht, with 24 members. These groups are named after the astronomers who suggested the comets are related, because they have similar orbits. Many comet discoveries were made by amateurs using SOHO images on the Internet. SOHO comet hunters come from all over the world. The United States, United Kingdom, China, Japan, Taiwan, Russia, Ukraine, France, Germany, and Lithuania are among the many countries whose citizens have used SOHO to chase comets. Almost all of SOHO's comets are discovered using images from its Large Angle and Spectrometric Coronagraph (LASCO) instrument. LASCO is used to observe the faint, multimillion-degree outer atmosphere of the sun, called the corona.
A disk in the instrument creates an artificial eclipse, blocking direct light from the sun so the much fainter corona can be seen. Sungrazing comets are discovered when they enter LASCO's field of view as they pass close by the sun. "Building coronagraphs like LASCO is still more art than science, because the light we are trying to detect is very faint," said Dr. Joe Gurman, U.S. project scientist for SOHO at Goddard. "Any imperfections in the optics or dust in the instrument will scatter the light, making the images too noisy to be useful. Discovering almost 1,000 comets since SOHO's launch on December 2, 1995 is a testament to the skill of the LASCO team." SOHO successfully completed its primary mission in April 1998. It has enough fuel to remain on station and keep hunting comets for decades if LASCO continues to function.
Teaching Strategies: Effective Discussion Leading While lecturing is a fast and direct way to communicate a body of knowledge, discussion encourages students to discover solutions for themselves and to develop their critical thinking abilities. They learn how to generate ideas, consider relevant issues, evaluate solutions, and consider the implications of these solutions. Thus, although discussion is not as efficient as lecture in conveying facts, it helps students learn how to think better and more clearly about the facts that they should learn from their reading and their lectures. Leading a discussion, however, offers its own set of challenges: participants can spend too much time exploring small, sometimes irrelevant issues, forget that they are progressing toward an identifiable goal, and become bored. The leader must guide the conversation carefully without stifling creativity and students' initiative, and without surrendering to some students' desire for answers that they can write down and memorize. Here are four strategies that can help faculty and TAs encourage students to explore issues themselves: We all know that creating a fine lecture requires research and planning; we sometimes forget that leading a good discussion requires the same research and planning, and demands spontaneous responses in the classroom. The beauty of the extra demand is that developing the skills for intervening and directing discussions leads to exciting, productive exchanges that help students learn to think clearly and creatively, while simultaneously inspiring you to teach more thoroughly and carefully. ("Discussions: Leading and Guiding, but Not Controlling," The Teaching Professor VI, 8 [October 1992].)
Presenting - 'Amasia', The Next Supercontinent! Ever since Earth has existed, supercontinents have formed and broken apart. Pangaea, which existed between 150 and 300 million years ago, is the best known, but before it came Nuna (1.8 billion years ago), Rodinia (1 billion years ago) and many more that cannot be verified, because two-billion-year-old rocks containing evidence of magnetic fields are hard to find. And while most scientists agree that Rodinia, Nuna and Pangaea did exist, there is very little consensus on the continents they comprised. Some experts believe they were the same ones, while others think the wandering landmasses reassembled on the opposite side each time, about 180° away from where the previous supercontinent had come together. Now a group of geologists led by Yale University graduate student Ross Mitchell has a new theory: each supercontinent came together about 90° from its predecessor. That is, the geographic center of Rodinia was about 88° away from the center of Nuna, whilst the center of Pangaea, believed to have been located near modern-day Africa, was about 88° away from the center of its supergiant predecessor, Rodinia. These calculations, reported earlier this year, were based not only on the paleolatitude (the latitude of a place at some time in the past, measured relative to the Earth's magnetic poles in the same period) of the ancient supercontinents, but also, for the first time, the paleolongitude, which Mitchell measured by estimating how the locations of the Earth's magnetic poles have changed through time. While the theory is interesting, what is even more so is that the team has also come up with a model of the next supercontinent. If their estimates are accurate, over the next few hundred million years the tectonic plates under the Americas and Asia will both drift northward and merge.
This means that modern-day North and South America will come together into one giant landmass, displacing the Caribbean Sea completely. A similar movement in Eurasia (Australia and South Eastern Asia) will cause the Arctic Ocean to disappear, causing those continents to fuse with Canada. The result? A ginormous continent that they call 'Amasia'. The one thing that is not too clear is whether Antarctica will be part of this or just be left stranded. While many researchers believe that the Yale team's theory is quite feasible, nobody will ever know for sure - because unfortunately, none of us are going to be around a few hundred million years from now - but it's sure fun to envision the new world, isn't it?
<urn:uuid:2d0e9c93-cfc6-4a81-aac7-dc1b77fe6e90>
CC-MAIN-2013-20
http://www.dogonews.com/2012/10/18/presenting-amasia-the-next-supercontinent
2013-05-21T10:12:42
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.965343
567
4.3125
4
This is a measure of the brightness of a celestial object. The lower the value, the brighter the object, so magnitude -4 is brighter than magnitude 0, which is in turn brighter than magnitude +4. The scale is logarithmic, and a difference of 5 magnitudes means a brightness difference of exactly 100 times. A difference of one magnitude corresponds to a brightness difference of around 2.51 (the fifth root of 100). The system was started by the ancient Greeks, who divided the stars into one of six magnitude groups, with stars of the first magnitude being the first ones to be visible after sunset. In modern times, the scale has been extended in both directions and more strictly defined. Examples of magnitude values for well-known objects are:

Sun: -26.7 (about 400 000 times brighter than the full Moon!)
Brightest Iridium flares: -8
Venus (at brightest): -4.4
International Space Station: -2
Sirius (brightest star): -1.44
Limit of human eye: +6 to +7
Limit of 10x50 binoculars: +9
Limit of Hubble Space Telescope: +30
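The logarithmic definition above translates directly into code. A minimal sketch (the full Moon's magnitude of about -12.7 is my own assumption, used only to check the Sun-to-Moon factor quoted in the table):

```python
def brightness_ratio(m1: float, m2: float) -> float:
    """Return how many times brighter an object of magnitude m1 is
    than an object of magnitude m2 (lower magnitude = brighter)."""
    # A difference of 5 magnitudes is defined as exactly 100x,
    # so one magnitude is a factor of 100 ** (1/5), about 2.51.
    return 100 ** ((m2 - m1) / 5)

print(brightness_ratio(0, 5))             # 100.0 by definition
print(round(brightness_ratio(0, 1), 2))   # 2.51, the fifth root of 100
# Sun (-26.7) vs an assumed full Moon of about -12.7:
print(round(brightness_ratio(-26.7, -12.7)))  # roughly 400 000
```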
<urn:uuid:a13e5774-8a15-4ad6-bc01-def7c66a2edb>
CC-MAIN-2013-20
http://www.heavens-above.com/glossary.aspx?term=magnitude&lat=38.895&lng=-77.037&loc=Washington&alt=0&tz=EST
2013-05-21T10:27:14
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.854211
260
4.25
4
Scientists get further evidence that Mars once had oceans Mars, our neighbor, has long filled the dreams of science fiction writers and astronomers; the former write about the life that could have lived on Mars, and still might, while the latter seek to prove that there might actually have been life on that red planet eons ago. Part of proving that idea is being able to show that there was water on the surface of Mars, water that would have been the foundation of life, just as it is here on Earth. To help find the facts behind whether there was, or even still is, water on Mars, the European Space Agency (ESA) Mars Express spacecraft, which houses the Mars Advanced Radar for Subsurface and Ionosphere Sounding (MARSIS), has detected sediment on the planet, the type of sediment that you would find on the floor of an ocean. It is within the boundaries of features tentatively identified in images from various spacecraft as shorelines that MARSIS detected sedimentary deposits reminiscent of an ocean floor. “MARSIS penetrates deep into the ground, revealing the first 60 – 80 meters (197 – 262 ft) of the planet’s subsurface,” says Wlodek Kofman, leader of the radar team at the Institut de Planétologie et d’Astrophysique de Grenoble (IPAG). “Throughout all of this depth, we see the evidence for sedimentary material and ice.” The sediments detected by MARSIS are areas of low radar reflectivity, which typically indicates low-density granular materials that have been eroded away by water and carried to their final resting place. Scientists are interpreting these sedimentary deposits, which may still be ice-rich, as another indication that there was once an ocean in this spot. At this point scientists have proposed that there were two main oceans on the planet: one around 4 billion years ago, and the second around 3 billion years ago.
For the scientists, the MARSIS findings provide some of the best evidence yet that Mars did have large bodies of water on its surface and that the water played a major role in the planet’s geological history.
<urn:uuid:40e4be34-8172-4949-b887-cd566fea95cb>
CC-MAIN-2013-20
http://www.inquisitr.com/192264/scientists-gets-further-evidence-that-mars-once-had-oceans/
2013-05-21T10:06:29
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.96167
460
4.03125
4
The knowledge, skills and understandings relating to students’ writing have been drawn from the Statements of Learning for English (MCEECDYA 2005). Students are taught to write a variety of forms of writing at school. The three main forms of writing (also called genres or text types) that are taught are narrative writing, informative writing and persuasive writing. In the Writing tests, students are provided with a ‘writing stimulus’ (sometimes called a prompt – an idea or topic) and asked to write a response in a particular genre or text type. In 2013, students will be required to complete a persuasive writing task. The Writing task targets the full range of student capabilities expected of students from Years 3 to 9. The same stimulus is used for students in Years 3, 5, 7 and 9. The lines in the response booklet for Year 3 students are more widely spaced than for Years 5, 7 and 9, and more capable students will address the topic at a higher level. The same marking guide is used to assess all students' writing, allowing for a national comparison of student writing capabilities across these year levels. Assessing the Writing task Students’ writing will be marked by assessors who have received intensive training in the application of a set of ten writing criteria summarised below. The full Persuasive Writing Marking Guide (5.7 MB) and the writing stimulus used to prompt the writing samples in the Marking Guide are both available for download.
Descriptions of the Writing criteria The ten marking criteria assess:
- The writer’s capacity to orient, engage and persuade the reader
- The organisation of the structural components of a persuasive text (introduction, body and conclusion) into an appropriate and effective text structure
- The selection, relevance and elaboration of ideas for a persuasive argument
- The use of a range of persuasive devices to enhance the writer’s position and persuade the reader
- The range and precision of contextually appropriate language choices
- The control of multiple threads and relationships across the text, achieved through the use of grammatical elements (referring words, text connectives, conjunctions) and lexical elements (substitutions, repetitions, word associations)
- The segmenting of text into paragraphs that assists the reader to follow the line of argument
- The production of grammatically correct, structurally sound and meaningful sentences
- The use of correct and appropriate punctuation to aid the reading of the text
- The accuracy of spelling and the difficulty of the words used

The Narrative Writing Marking Guide (used in 2008-2010) is also available. Use of formulaic structures Beginning writers can benefit from being taught how to use structured scaffolds. One such scaffold that is commonly used is the five paragraph argument essay. However, when students become more competent, the use of this structure can be limiting. As writers develop their capabilities they should be encouraged to move away from formulaic structures and to use a variety of different persuasive text types, styles and language features, as appropriate to different topics. Students are required to write their opinion and to draw on personal knowledge and experience when responding to test topics. Students are not expected to have detailed knowledge about the topic.
Students should feel free to use any knowledge that they have on the topic, but should not feel the need to manufacture evidence to support their argument. In fact, students who do so may undermine the credibility of their argument by making statements that are implausible. Example topics and different styles: City or country (see example prompt) A beginning writer could write their opinion about living in either the city or country and give reasons for it. A more capable writer might also choose to take one side and argue for it. However, this topic also lends itself to a comparative style response from a more capable writer. It can be argued there are benefits and limitations to living in the city and living in the country. A writer could also choose to introduce other options, for example living in a large country town that might have the benefits of city and rural life. Positions taken on this topic are likely to elicit logical, practical reasons and anecdotes based on writers’ experiences. Books or TV (see example prompt) A beginning writer could write about their opinion of one aspect and give reasons for it. However, this topic lends itself to a comparative style response from a more capable writer. It can be argued there are benefits and limitations to both books and TV. The reasons for either side of the topic are likely to elicit logical, practical reasons and personal anecdotes based on the writer's experiences of both books and TV. It is cruel to keep animals in cages and zoos (see example prompt) A beginning writer could take one side of the topic and give reasons for it. However, this topic lends itself to being further redefined. For example, a more capable writer might develop the difference between open range zoos and small cages and then argue the merits of one and the limitations of the other.
The animal welfare issues raised by this topic are likely to elicit very empathetic and emotive arguments based on the writer's knowledge about zoos and animals. More information on persuasive writing can be found in the FAQ section for the NAPLAN Writing test. National minimum standards The national minimum standards for writing describe some of the skills and understandings students can generally demonstrate at their particular year of schooling. The standards are intended to be a snapshot of typical achievement and do not describe the full range of what students are taught or what they may achieve. For further information on the national minimum standards see Performance Standards.
<urn:uuid:817d308c-adeb-427a-9b89-415a8f96d2ec>
CC-MAIN-2013-20
http://www.nap.edu.au/naplan/about-each-domain/writing/writing.html
2013-05-21T10:13:37
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.92963
1,150
4.5
4
OurDocuments.gov. Featuring 100 milestone documents of American history from the National Archives. Includes images of original primary source documents, lesson plans, teacher and student competitions, and educational resources. In 1866 the Russian government offered to sell the territory of Alaska to the United States. Secretary of State William H. Seward, enthusiastic about the prospects of American expansion, negotiated the deal for the Americans. Edouard de Stoeckl, Russian minister to the United States, negotiated for the Russians. On March 30, 1867, the two parties agreed that the United States would pay Russia $7.2 million for the territory of Alaska. For less than 2 cents an acre, the United States acquired nearly 600,000 square miles. Opponents of the Alaska Purchase persisted in calling it “Seward’s Folly” or “Seward’s Icebox” until 1896, when the great Klondike Gold Strike convinced even the harshest critics that Alaska was a valuable addition to American territory. The check for $7.2 million was made payable to the Russian Minister to the United States Edouard de Stoeckl, who negotiated the deal for the Russians. Also shown here is the Treaty of Cession, signed by Tzar Alexander II, which formally concluded the agreement for the purchase of Alaska from Russia.
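The "less than 2 cents an acre" figure is easy to verify with a quick calculation (a sketch; 640 acres per square mile is the standard conversion):

```python
# $7.2 million for nearly 600,000 square miles of Alaska.
ACRES_PER_SQUARE_MILE = 640

total_acres = 600_000 * ACRES_PER_SQUARE_MILE   # 384 million acres
price_per_acre = 7_200_000 / total_acres

print(f"${price_per_acre:.4f} per acre")  # under 2 cents
```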
<urn:uuid:8182aa95-78e2-42b3-a86d-30bb1a0fa8f8>
CC-MAIN-2013-20
http://www.scoop.it/t/on-this-day/p/3018291670/our-documents-check-for-the-purchase-of-alaska-1868
2013-05-21T10:21:24
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.934167
279
4.03125
4
Filed under: Foundational Hand After studying the proportions of the Foundational Hand letters, the next step is to start writing the letters. Each letter is constructed rather than written. The letters are made up of a combination of pen strokes, which are only made in a top-down or left-to-right direction. The pen is never pushed up. When we studied the proportions of the Foundational Hand we could group the letters according to their widths. Now, we can group them according to the order and direction of the pen strokes. You may find it useful to look at the construction grid whilst studying the order and direction of the letters. The first group consists of the letters c, e, and o. These letters are based on the circle shape. This shape is produced with two pen strokes. Visualise a clock face and start the first stroke at approximately the 11, and finish it in an anti-clockwise direction at 5. The second stroke starts again at the 11 and finishes in a clockwise direction on the 5 to complete the letter o. The first pen-stroke for the letters c and e is the same as the first of the letter o. The second pen-stroke on the c and e is shorter and finishes around the 1 position on the imaginary clock face. Finally, the letter e has a third stroke, starting at the end of the second stroke and finishing when it touches the first stroke. The next group of letters are d, q, b and p. All these letters combine curved and straight pen strokes. When writing these letters it can be useful to think of the underlying circle shape, which your pen will leave or join at certain points depending upon which letter is being written. The first stroke of the b starts at the ascender height of the letter, which can be eyed in at just under half the x-height (the body height of letters with no ascender or descender) above the x-height line. Continue the ascender stroke of the b until it ‘picks up’ the circle shape, and follow round the circle until the pen reaches the 5 on the imaginary clock face.
The second stroke starts on the first stroke, following the circle round until it touches the end of the first stroke. The letter d is similar to the c except it has a third stroke for the ascender, which will touch the ends of the first and second strokes before finishing on the write-line. Letter p starts with a vertical stroke from the x-height down to the imaginary descender line, which is just under half the x-height below the write-line. The second and third strokes are curved, starting on the descender stroke and following round the imaginary circle. The letter q is almost the same as the d, except it has a descender stroke rather than an ascender stroke. Letters a, h, m, n, r: All these letters combine curved and straight pen strokes. Once again, think of the underlying circle shape, which your pen will leave or join at certain points depending upon the letter being written. The letter h consists of two pen strokes. The first is a vertical ascender stroke. The second stroke starts curved, follows the circle round, then leaves it and becomes straight. The letter n is produced exactly the same way as the letter h, except the first stroke is not so tall as it starts on the x-height line. The first two pen strokes of the letter m are the same as the letter n. Then a third stroke is added which is identical to the second stroke. The letter r is also written the same way as the letter n, except the second stroke finishes at the point where the circle would have been left and the straight picked up. The first stroke of letter a is the same as the second stroke of the letters h, m and n. The second stroke follows the circle. Finally, the third stroke starts at the same point as the second stroke, but is a straight line at a 30° angle and touches the first stroke. The next group of letters are l, u and t. These letters are straightforward. The letter l is the same as the first stroke of letter b.
The letter u is also similar to the first stroke of letter b except it starts lower down. The second stroke starts on the x-height line and finishes on the write-line. Letter t has the same first stroke as letter u. It is completed by a second, horizontal stroke. The following letters k, v, w, x, y and z are made of at least one diagonal pen stroke. The letter k starts with a vertical ascender stroke, then a second, diagonal stroke which joins the vertical stroke. The final stroke is also diagonal; it starts where the first and second strokes meet and stops when it touches the write-line. If you look closely you will see it goes further out than the second stroke. This makes the letter look more balanced. If the ends of these two pen-strokes lined up, the letter would look like it is about to fall over. Letter v is simply two diagonal strokes, and these are repeated to produce the letter w. The letter y is the same as the v except the second stroke is extended to create a descender stroke. Letter x is a little different: you need to create it in such a way that the two strokes cross slightly above the half-way mark on the x-height. This means the top part will be slightly smaller than the bottom, which will give the letter a better balance. Finally, in this group is letter z. The easiest way to produce this is with the two horizontal pen strokes, then join these two strokes with a diagonal pen-stroke to complete the letter. Now for the hardest letters: f, g and s. Out of these three letters, f is the simplest. It starts with a vertical ascender stroke, except this is not as tall as the other ascender strokes we have produced so far. This is because we have to allow for the second, curved stroke. The overall height of these two strokes should be the same as other letters that have an ascender. Finally, we need a horizontal stroke to complete the letter. Which will you find the hardest, letter g or s?
These are trickier because, unlike all the other letters we have written, they do not relate so well to the grid. The letter g is made of a circle shape, with an oval/bowl shape under the write-line. You can see the letter g is made of three main pen-strokes. The first stroke is just like the first stroke of the letter o, for example, except it is smaller. The second stroke starts like the second stroke of the letter o, but when it joins the first stroke it continues and changes direction in the gap between the bottom of the shape and the write-line. The third stroke completes the oval shape. Finally, we have a little fourth stroke to complete the letter. The letter s is made up of three strokes. The first stroke is sort of an s shape! The second and third strokes complete the letter s. These are easier to get right than the first stroke because they basically follow the circle shape on our construction grid. The secret to this letter is to make both ‘ends’ of the first stroke not too curved. Because the other two strokes are curved they will compensate and give the overall correct shape. Finally, we are left with the letters i and j, which are made from one pen-stroke. You just need to remember to curve the end of the stroke when writing the letter j.
<urn:uuid:ebc9b632-c27d-4adb-85bd-b11864ab1adf>
CC-MAIN-2013-20
http://www.scribblers.co.uk/blog/tag/starting-calligraphy/
2013-05-21T10:35:15
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.946402
1,563
4.15625
4
In mathematics, hyperbolic functions are analogs of the ordinary trigonometric, or circular, functions. The basic hyperbolic functions are the hyperbolic sine "sinh" (typically pronounced /ˈsɪntʃ/ or /ˈʃaɪn/), and the hyperbolic cosine "cosh" (typically pronounced /ˈkɒʃ/), from which are derived the hyperbolic tangent "tanh" (typically pronounced /ˈtæntʃ/ or /ˈθæn/), etc., in analogy to the derived trigonometric functions. The inverse hyperbolic functions are the area hyperbolic sine "arsinh" (also called "asinh", or sometimes by the misnomer "arcsinh") and so on. Just as the points (cos t, sin t) form a circle with a unit radius, the points (cosh t, sinh t) form the right half of the equilateral hyperbola. Hyperbolic functions occur in the solutions of some important linear differential equations, for example the equation defining a catenary, and Laplace's equation in Cartesian coordinates. The latter is important in many areas of physics, including electromagnetic theory, heat transfer, fluid dynamics, and special relativity. Hyperbolic functions were introduced in the 18th century by the Swiss mathematician Johann Heinrich Lambert. The hyperbolic functions are:

sinh x = (e^x - e^(-x)) / 2
cosh x = (e^x + e^(-x)) / 2
tanh x = sinh x / cosh x

Via complex numbers the hyperbolic functions are related to the circular functions as follows:

sinh x = -i sin(ix)
cosh x = cos(ix)
tanh x = -i tan(ix)

where i is the imaginary unit defined by i^2 = -1. Note that, by convention, sinh^2 x means (sinh x)^2, not sinh(sinh x); similarly for the other hyperbolic functions when used with positive exponents. Another notation for the hyperbolic cotangent function is ctnh x, though coth x is far more common. Hyperbolic sine and cosine satisfy the identity

cosh^2 x - sinh^2 x = 1

which is similar to the Pythagorean trigonometric identity. It can also be shown that the area under the graph of cosh x from A to B is equal to the arc length of the curve cosh x from A to B. The basic antiderivatives are

∫ sinh x dx = cosh x + C
∫ cosh x dx = sinh x + C

For a full list of integrals of hyperbolic functions, see the list of integrals of hyperbolic functions. In the above expressions, C is called the constant of integration.
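The exponential definitions of sinh and cosh, their Pythagorean-like identity, and the complex-number relations to the circular functions are all easy to check numerically. A small sketch using Python's standard math and cmath modules:

```python
import cmath
import math

# Check the exponential definitions, the identity cosh^2 - sinh^2 = 1,
# and the complex-number relations to the circular functions.
for x in (-2.0, -0.5, 0.0, 1.0, 3.7):
    assert abs(math.sinh(x) - (math.exp(x) - math.exp(-x)) / 2) < 1e-9
    assert abs(math.cosh(x) - (math.exp(x) + math.exp(-x)) / 2) < 1e-9
    assert abs(math.cosh(x) ** 2 - math.sinh(x) ** 2 - 1) < 1e-9
    assert abs(cmath.cos(1j * x) - math.cosh(x)) < 1e-9       # cosh x = cos(ix)
    assert abs(cmath.sin(1j * x) - 1j * math.sinh(x)) < 1e-9  # sin(ix) = i sinh x

print("all hyperbolic identities hold")
```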
It is possible to express the above functions as Taylor series:

sinh x = x + x^3/3! + x^5/5! + x^7/7! + ...
cosh x = 1 + x^2/2! + x^4/4! + x^6/6! + ...

A point on the hyperbola xy = 1 with x > 1 determines a hyperbolic triangle in which the side adjacent to the hyperbolic angle is associated with cosh while the side opposite is associated with sinh. However, since the point (1,1) on this hyperbola is a distance √2 from the origin, the normalization constant 1/√2 is necessary to define cosh and sinh by the lengths of the sides of the hyperbolic triangle. That (cosh t, sinh t) parametrizes the right branch of a hyperbola follows from the identity cosh^2 t - sinh^2 t = 1 and the property that cosh t ≥ 1 for all t. The hyperbolic functions are periodic with complex period 2πi (πi for hyperbolic tangent and cotangent). The parameter t is not a circular angle, but rather a hyperbolic angle which represents twice the area between the x-axis, the hyperbola and the straight line which links the origin with the point (cosh t, sinh t) on the hyperbola. The function cosh x is an even function, that is, symmetric with respect to the y-axis. The function sinh x is an odd function, that is, −sinh x = sinh(−x), and sinh 0 = 0. The hyperbolic functions satisfy many identities, all of them similar in form to the trigonometric identities. In fact, Osborn's rule states that one can convert any trigonometric identity into a hyperbolic identity by expanding it completely in terms of integral powers of sines and cosines, changing sine to sinh and cosine to cosh, and switching the sign of every term which contains a product of 2, 6, 10, 14, ... sinhs. This yields, for example, the addition theorems

sinh(x + y) = sinh x cosh y + cosh x sinh y
cosh(x + y) = cosh x cosh y + sinh x sinh y

the "double angle formulas"

sinh 2x = 2 sinh x cosh x
cosh 2x = cosh^2 x + sinh^2 x

and the "half-angle formulas"

sinh^2(x/2) = (cosh x - 1) / 2
cosh^2(x/2) = (cosh x + 1) / 2

The derivative of sinh x is cosh x and the derivative of cosh x is sinh x; this is similar to trigonometric functions, albeit the sign is different (i.e., the derivative of cos x is −sin x). The Gudermannian function gives a direct relationship between the circular functions and the hyperbolic ones that does not involve complex numbers.
The graph of the function a cosh(x/a) is the catenary, the curve formed by a uniform flexible chain hanging freely under gravity. From the definitions of the hyperbolic sine and cosine, we can derive the following identities:

e^x = cosh x + sinh x
e^(-x) = cosh x - sinh x

These expressions are analogous to the expressions for sine and cosine, based on Euler's formula, as sums of complex exponentials. Since the exponential function can be defined for any complex argument, we can extend the definitions of the hyperbolic functions also to complex arguments. The functions sinh z and cosh z are then holomorphic. Relationships to ordinary trigonometric functions are given by Euler's formula for complex numbers:

e^(ix) = cos x + i sin x
e^(-ix) = cos x - i sin x
<urn:uuid:34eefbfb-968b-4240-9caa-0182a3ca0559>
CC-MAIN-2013-20
http://www.thefullwiki.org/Hyperbolic_tangent
2013-05-21T09:59:49
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.893241
1,119
4.0625
4
Surface area is a two-dimensional property of a three-dimensional figure. Cones are similar to pyramids, except they have a circular base instead of a polygonal base. Therefore, the surface area of a cone is equal to the sum of the circular base area and the lateral surface area, calculated by multiplying half of the circumference by the slant height. Related topics include pyramid and cylinder surface area. If you want to calculate the surface area of a cone, you only need to know 2 dimensions. The first is the slant height l and the second is the radius. So what we're going to do is separate this into two pieces: the first is the base, which is a circle with radius r, and the second is this slant height l. So if I took a pair of scissors, cut the cone part and flattened it out, it would look like a sector. Well, what I could do here is rearrange this sector into a parallelogram. So again, if I cut this into really tiny pieces then I'll be able to organize it into a parallelogram where I would be able to calculate its area. And the way that we'll calculate its area is first by saying, well, what are these lines that are going out? Well, those lines are going to be your l, your slant height, and this side right here is going to be half of your circumference, and half of a circumference is pi times r, because the whole circumference is 2 pi r. So this down here is pi times r. So if our height is l and our base is pi times r, then the area of this is equal to pi times r times l. So the surface area of a cone, which I'm going to write over here, is equal to the base, pi r squared, plus this lateral area, which is found using your slant height. So that's going to be pi times r times l. So you only need to know 2 dimensions, the radius and the slant height, and you can calculate the surface area of any cone.
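The result of the derivation, surface area = pi·r² + pi·r·l, can be wrapped in a small function (a sketch; the names are my own):

```python
import math

def cone_surface_area(radius: float, slant_height: float) -> float:
    """Total surface area of a cone: circular base plus lateral surface."""
    base_area = math.pi * radius ** 2
    lateral_area = math.pi * radius * slant_height  # half circumference * slant height
    return base_area + lateral_area

# Example: r = 3, l = 5 gives 9*pi + 15*pi = 24*pi
print(round(cone_surface_area(3, 5), 2))  # 75.4
```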
<urn:uuid:8c57b621-6116-4614-a9fc-c31bd7ee9c11>
CC-MAIN-2013-20
http://www.brightstorm.com/math/geometry/area/surface-area-of-cones/
2013-05-23T18:31:13
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703682988/warc/CC-MAIN-20130516112802-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.955102
416
4.125
4
In this lecture you will learn how to solve quadratic systems. First you will start with linear-quadratic systems and their solutions, before you move into quadratic-quadratic systems and their solutions. Lastly, you will learn how to solve systems of quadratic inequalities.
- For a linear-quadratic system, use substitution to solve.
- For a quadratic-quadratic system, use elimination to solve.
- For inequalities, remember the conventions about graphing boundaries using either solid or dotted lines.
- If possible, check your solutions to systems of equations by graphing.
Solving Quadratic Systems Lecture Slides are screen-captured images of important points in the lecture. Students can download and print out these lecture slide images to do practice problems as well as take notes while watching the lecture.
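The substitution method for a linear-quadratic system can be sketched in code (the example system y = x² − 1, y = x + 1 is my own, not from the lecture):

```python
import math

def solve_linear_quadratic(a, b, c, m, k):
    """Intersections of the parabola y = a*x^2 + b*x + c
    with the line y = m*x + k, found by substitution."""
    # Substituting the line into the parabola gives
    # a*x^2 + (b - m)*x + (c - k) = 0, an ordinary quadratic.
    A, B, C = a, b - m, c - k
    disc = B * B - 4 * A * C
    if disc < 0:
        return []  # the line misses the parabola
    xs = {(-B + math.sqrt(disc)) / (2 * A), (-B - math.sqrt(disc)) / (2 * A)}
    return sorted((x, m * x + k) for x in xs)

# y = x^2 - 1 and y = x + 1 meet where x^2 - x - 2 = 0, i.e. x = -1 or x = 2:
print(solve_linear_quadratic(1, 0, -1, 1, 1))  # [(-1.0, 0.0), (2.0, 3.0)]
```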
<urn:uuid:11f102ce-459d-4c2d-8912-6980537bf6dc>
CC-MAIN-2013-20
http://www.educator.com/mathematics/algebra-2/fraser/solving-quadratic-systems.php
2013-05-23T18:46:10
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703682988/warc/CC-MAIN-20130516112802-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.920276
178
4.0625
4
An earthquake is a sudden vibration or trembling in the Earth. More than 150,000 tremors strong enough to be felt by humans occur each year worldwide (see Chance of an Earthquake). Earthquake motion is caused by the quick release of stored potential energy into the kinetic energy of motion. Most earthquakes are produced along faults, tectonic plate boundary zones, or along the mid-oceanic ridges (Figures 1 and 2). Figure 1: Distribution of earthquake epicenters from 1975 to 1995. Depth of the earthquake focus is indicated by color. Deep earthquakes occur in areas where oceanic crust is being actively subducted. About 90% of all earthquakes occur at a depth between 0 and 100 kilometers. (Source: U.S. Geologic Survey, National Earthquake Information Center) Figure 2: Distribution of earthquakes with a magnitude less than 5.0 relative to the various tectonic plates found on the Earth's surface. Each tectonic plate has been given a unique color. This illustration indicates that the majority of small earthquakes occur along plate boundaries. (Source: PhysicalGeography.net) At these areas, large masses of rock that are moving past each other can become locked due to friction. Friction is overcome when the accumulating stress has enough force to cause a sudden slippage of the rock masses. The magnitude of the shock wave released into the surrounding rocks is controlled by the quantity of stress built up because of friction, the distance the rock moved when the slippage occurred, and the ability of the rock to transmit the energy contained in the seismic waves. The San Francisco earthquake of 1906 involved a six meter horizontal displacement of bedrock. Sometime after the main shock wave, aftershocks can occur because of the continued release of frictional stress. Most aftershocks are smaller than the main earthquake, but they can still cause considerable damage to already weakened natural and human-constructed features.
Earthquakes that occur under or near bodies of water can give rise to tsunamis, which in cases like the December 26, 2004 Sumatra-Andaman Island earthquake result in far greater destruction and loss of life than the initial earthquake. Earthquakes are a form of wave energy that is transferred through bedrock. Motion is transmitted from the point of sudden energy release, the earthquake focus (hypocenter), as spherical seismic waves that travel in all directions outward (Figure 3). The point on the Earth's surface directly above the focus is termed the epicenter. Two different types of seismic waves have been described by geologists: body waves and surface waves. Body waves are seismic waves that travel through the lithosphere. Two kinds of body waves exist: P-waves and S-waves. Both of these waves produce a sharp jolt or shaking. P-waves or primary waves are formed by the alternate expansion and contraction of bedrock and cause the volume of the material they travel through to change. They travel at a speed of about 5 to 7 kilometers per second through the lithosphere and about 8 kilometers per second in the asthenosphere. The speed of sound is about 0.30 kilometers per second. P-waves also have the ability to travel through solid, liquid, and gaseous materials. When some P-waves move from the ground to the lower atmosphere, the sound wave that is produced can sometimes be heard by humans and animals. Figure 3: Movement of body waves away from the focus of the earthquake. The epicenter is the location on the surface directly above the earthquake's focus. (Source: PhysicalGeography.net) S-waves or secondary waves are a second type of body wave. These waves are slower than P-waves and can only move through solid materials. S-waves are produced by shear stresses and move the materials they pass through in a perpendicular (up and down or side to side) direction. Surface waves travel at or near the Earth's surface.
These waves produce a rolling or swaying motion, causing the Earth's surface to behave like waves on the ocean. The velocity of these waves is slower than that of body waves. Despite their slow speed, surface waves are particularly destructive to human construction because they cause considerable ground movement.

Earthquake Magnitude and Energy

Table 1: Relationship between Richter scale magnitude and energy released.

| Magnitude | Energy (joules) | Notes |
| 2.0 | 1.3 x 10^8 | Smallest earthquake detectable by people. |
| 5.0 | 2.8 x 10^12 | Energy released by the Hiroshima atomic bomb. |
| 6.0 - 6.9 | 7.6 x 10^13 to 1.5 x 10^15 | About 120 shallow earthquakes of this magnitude occur each year on the Earth. |
| 6.7 | 7.7 x 10^14 | Northridge, California earthquake, January 17, 1994. |
| 7.0 | 2.1 x 10^15 | Major earthquake threshold. The Haiti earthquake of January 12, 2010 resulted in an estimated 222,570 deaths. |
| 7.4 | 7.9 x 10^15 | Turkey earthquake, August 17, 1999. More than 12,000 people killed. |
| 7.6 | 1.5 x 10^16 | Deadliest earthquake in the last 100 years: Tangshan, China, July 28, 1976. Approximately 255,000 people perished. |
| 8.3 | 1.6 x 10^17 | San Francisco earthquake of April 18, 1906. |
| 9.0 | | Japan earthquake, March 11, 2011. |
| 9.1 | 4.3 x 10^18 | December 26, 2004 Sumatra earthquake, which triggered a tsunami and resulted in 227,898 deaths spread across fourteen countries. |
| 9.5 | 8.3 x 10^18 | Most powerful earthquake recorded in the last 100 years: southern Chile, May 22, 1960. Claimed 3,000 lives. |

The strength of an earthquake can be measured by a device called a seismograph. When an earthquake occurs, this device converts the wave energy into a standard unit of measurement such as the Richter scale, whose units are referred to as magnitudes. The Richter scale is logarithmic: each unit increase in magnitude represents a tenfold increase in the measured amplitude of the seismic waves, which corresponds to roughly 30 times more energy released (as the energies in Table 1 show). Table 1 describes the relationship between Richter scale magnitude and energy released.
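That magnitude-to-energy relationship can be sanity-checked in a few lines of code. This is a minimal sketch using the article's empirical conversion formula (E = 1.74 x 10^(5 + 1.44M)); the function name and the magnitudes chosen are illustrative.

```python
def quake_energy_joules(magnitude: float) -> float:
    """Approximate energy released by an earthquake, in joules,
    using the article's relation: E = 1.74 x 10^(5 + 1.44*M)."""
    return 1.74 * 10 ** (5 + 1.44 * magnitude)

# Cross-check against Table 1: M 5.0 ~ 2.8e12 J, M 8.3 ~ 1.6e17 J
print(f"M 5.0: {quake_energy_joules(5.0):.1e} J")
print(f"M 8.3: {quake_energy_joules(8.3):.1e} J")

# One whole unit of magnitude multiplies the energy by 10^1.44, about 27.5x
print(f"energy ratio per magnitude unit: {10 ** 1.44:.1f}")
```

Note that the formula reproduces most rows of Table 1 closely, though a couple of entries in the table appear to come from independent estimates rather than this approximation.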
The following equation can be used to approximate the amount of energy released from an earthquake, in joules, when the Richter magnitude (M) is known:

Energy in joules = 1.74 x 10^(5 + 1.44M)

Figures 4 and 5 describe the spatial distribution of small and large earthquakes, respectively. These maps indicate that large earthquakes have distributions quite different from small events. Many large earthquakes occur some distance away from a plate boundary. Some geologists believe that these powerful earthquakes may be occurring along ancient faults that are buried deep in the continental crust. Recent seismic studies in the central United States have discovered one such fault located thousands of meters below the lower Mississippi Valley. Some large earthquakes occur at particular locations along the plate boundaries. Scientists believe that these areas represent zones along adjacent plates that have greater frictional resistance and stress.

Figure 4: Distribution of earthquakes with a magnitude less than 5 on the Richter scale. (Image Source: PhysicalGeography.net)

Figure 5: Distribution of earthquakes with a magnitude greater than 7 on the Richter scale. (Image Source: PhysicalGeography.net)

The Richter scale magnitude, while the best known, is one of several measures of the magnitude of an earthquake. The most commonly used are:
- Local magnitude (ML), commonly referred to as "Richter magnitude";
- Surface-wave magnitude (Ms);
- Body-wave magnitude (Mb); and
- Moment magnitude (Mw).
The first three scales have limited range and applicability and do not satisfactorily measure the size of the largest earthquakes. The moment magnitude (Mw) scale, based on the concept of seismic moment, is uniformly applicable to all sizes of earthquakes but is more difficult to compute than the other types. All magnitude scales should yield approximately the same value for any given earthquake. The severity of an earthquake can be expressed in terms of both intensity and magnitude.
However, the two terms are quite different, and they are often confused. Intensity is based on the observed effects of ground shaking on people, buildings, and natural features, and it varies from place to place within the disturbed region depending on the location of the observer with respect to the earthquake epicenter. Magnitude, by contrast, is related to the amount of seismic energy released at the hypocenter of the earthquake. Although numerous intensity scales have been developed over the last several hundred years to evaluate the effects of earthquakes, the one currently used in the United States is the Modified Mercalli (MM) Intensity Scale. The lower numbers of the intensity scale generally deal with the manner in which the earthquake is felt by people. The higher numbers of the scale are based on observed structural damage. Structural engineers usually contribute information for assigning intensity values of VIII or above. The following is an abbreviated description of the 12 levels of Modified Mercalli intensity.

I. Not felt except by a very few under especially favorable conditions.

II. Felt only by a few persons at rest, especially on upper floors of buildings. Delicately suspended objects may swing.

III. Felt quite noticeably by persons indoors, especially on upper floors of buildings. Many people do not recognize it as an earthquake. Standing motor cars may rock slightly. Vibration similar to the passing of a truck. Duration estimated.

IV. Felt indoors by many, outdoors by few during the day. At night, some awakened. Dishes, windows, doors disturbed; walls make cracking sound. Sensation like heavy truck striking building. Standing motor cars rocked noticeably.

V. Felt by nearly everyone; many awakened. Some dishes, windows broken. Unstable objects overturned. Pendulum clocks may stop.

VI. Felt by all, many frightened. Some heavy furniture moved; a few instances of fallen plaster. Damage slight.

VII.
Damage negligible in buildings of good design and construction; slight to moderate in well-built ordinary structures; considerable damage in poorly built or badly designed structures; some chimneys broken.

VIII. Damage slight in specially designed structures; considerable damage in ordinary substantial buildings with partial collapse. Damage great in poorly built structures. Fall of chimneys, factory stacks, columns, monuments, walls. Heavy furniture overturned.

IX. Damage considerable in specially designed structures; well-designed frame structures thrown out of plumb. Damage great in substantial buildings, with partial collapse. Buildings shifted off foundations.

X. Some well-built wooden structures destroyed; most masonry and frame structures destroyed with foundations. Rails bent.

XI. Few, if any (masonry) structures remain standing. Bridges destroyed. Rails bent greatly.

XII. Damage total. Lines of sight and level are distorted. Objects thrown into the air.

Earthquake Damage and Destruction

Earthquakes are a considerable hazard to humans. Earthquakes can cause destruction by structurally damaging buildings and dwellings, and through fires, tsunamis, and mass wasting (see Figures 6 to 10). Earthquakes can also take human lives. The amount of damage and loss of life depends on a number of factors. Some of the more important factors are:
- Time of day. Higher losses of life tend to occur on weekdays between the hours of 9:00 AM and 4:00 PM, when many people are in large buildings because of work or school. Large structures are often less safe than smaller homes in an earthquake.
- Magnitude of the earthquake and duration of the event.
- Distance from the earthquake's focus. The strength of the shock waves diminishes with distance from the focus.
- Geology of the affected area and soil type. Some rock types transmit seismic wave energy more readily. Buildings on solid bedrock tend to receive less damage.
Unconsolidated rock and sediments have a tendency to increase the amplitude and duration of the seismic waves, increasing the potential for damage. Some soil types, when saturated, become liquefied (Figure 6).
- Type of building construction. Some building materials and designs are more susceptible to earthquake damage (Figure 7).
- Population density. More people often means a greater chance of injury and death.

The greatest loss of life from an earthquake in the 20th century occurred in Tangshan, China in 1976, when an estimated 250,000 people died. In 1556, a large earthquake in the Shanxi Province of China was estimated to have caused the death of about 1,000,000 people.

A common problem associated with earthquakes in urban areas is fire (Figure 8). Shaking and ground displacement often sever electrical and gas lines, leading to the development of many localized fires. Response to this problem is usually not effective because shock waves also rupture the pipes carrying water. In the San Francisco earthquake of 1906, almost 90% of the damage to buildings was caused by fire.

In mountainous regions, earthquake-provoked landslides can cause many deaths and severe damage to built structures (Figure 9). The town of Yungay, Peru was buried by a debris flow triggered by an earthquake on May 31, 1970. This disaster engulfed the town in seconds with mud, rock, ice, and water and took the lives of about 20,000 people.

Another consequence of earthquakes is the generation of tsunamis (Figure 10). Tsunamis (often mistakenly called tidal waves) form when an earthquake triggers a sudden movement of the seafloor. This movement creates a wave in the water body which radiates outward in concentric shells. On the open ocean, these waves are usually no more than one to three meters in height and travel at speeds of about 750 kilometers per hour. Tsunamis become dangerous when they approach land.
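At the open-ocean speed just quoted (about 750 kilometers per hour), the travel time to a distant coast is simple arithmetic. This sketch uses an illustrative distance, not a measured one:

```python
TSUNAMI_SPEED_KMH = 750.0  # approximate open-ocean tsunami speed from the text

def tsunami_travel_hours(distance_km: float) -> float:
    """Hours for a tsunami to cover a given distance at open-ocean speed."""
    return distance_km / TSUNAMI_SPEED_KMH

# A coastline 1,500 km from the epicenter (hypothetical distance)
print(f"{tsunami_travel_hours(1500):.1f} hours")  # 2.0 hours
```

Estimates like this are why ocean-basin warning systems can give coasts hours of notice, while shores near the epicenter may have only minutes.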
Frictional interaction of the waves with the ocean floor as they near shore causes the waves to slow down and collide into each other. This amalgamation of waves then produces a super wave that can be as tall as 65 meters in height.

The U.S. Geological Survey estimates that at least 1,783 deaths worldwide resulted from earthquake activity in 2009. In 2010, the number rose to 226,729, largely the result of the 222,570 people killed by the January 12, 2010 earthquake in Haiti. The deadliest earthquake of 2009 was a magnitude 7.5 event that killed approximately 1,117 people in southern Sumatra, Indonesia on Sept. 30, according to the U.S. Geological Survey (USGS) and confirmed by the United Nations Office for the Coordination of Humanitarian Affairs (OCHA). However, the number of earthquake-related fatalities in 2009 was far less than the 2008 count of over 88,000. The high number of fatalities in 2008 was primarily due to the devastating magnitude 7.9 earthquake that occurred in Sichuan, China on May 12.

Although unrelated, the Sept. 30 Indonesian earthquake occurred a day after the year's strongest earthquake, a magnitude 8.1 on Sept. 29 in the Samoa Islands region. Tsunamis generated by that earthquake killed 192 people in American Samoa, Samoa and Tonga. A magnitude 6.3 earthquake hit the medieval city of L'Aquila in central Italy on April 6, killing 295 people. Overall, earthquakes took the lives of people in 15 countries on four continents during 2009, including Afghanistan, Bhutan, China, Costa Rica, Greece, Indonesia, Italy, Kazakhstan, Honduras, Japan, Malawi, Samoa, South Africa and Tonga, as well as the U.S. territory of American Samoa. Earthquakes injured people in 11 additional countries, including the mainland United States, where a magnitude 4.4 earthquake on May 2 injured one person in the Los Angeles area. The biggest 2009 earthquake in the 50 United States was a magnitude 6.5 event in the Fox Islands, in the Aleutian Islands of Alaska, on Oct. 13.
It was felt at the towns of Akutan and Unalaska, but caused no casualties or damage. The greatest earthquake for the year in the contiguous United States was a magnitude 5.2 event on October 2 in the Owens Valley southeast of Lone Pine, California. Because of the sparse population in the epicentral area, this quake caused no damage, although it was felt as far away as Merced and Los Angeles, California, and Las Vegas, Nevada.

The magnitude 9.1 Sumatra-Andaman Island earthquake and subsequent tsunami on December 26, 2004 killed 227,898 people, the fourth largest casualty toll for earthquakes and the largest toll for a tsunami in recorded history. As a consequence of that earthquake, the USGS has significantly improved its earthquake notification and response capabilities. Improvements include the addition of nine real-time seismic stations across the Caribbean basin, a seismically active and tsunami-prone region near the U.S. southern border; implementation of a 24x7 earthquake operations center at the USGS National Earthquake Information Center (NEIC); and development of innovative tools for rapid evaluation of population exposure and damage from potentially damaging earthquakes.

The USGS estimates that several million earthquakes occur throughout the world each year, although most go undetected because they hit remote areas or have very small magnitudes. The USGS NEIC publishes the locations of about 40 earthquakes per day, or about 14,500 annually, using a publication threshold of magnitude 4.5 or greater worldwide, or 2.5 or greater within the United States. On average, only 18 of these earthquakes occur at a magnitude of 7.0 or higher each year. In 2009, 17 earthquakes reached a magnitude of 7.0 or higher, with a single one topping magnitude 8.0. These statistics for large-magnitude earthquakes are higher than those of 2008, which experienced only 12 earthquakes over magnitude 7.0 and none over 8.0.
Factors such as the size of an earthquake, the location and depth of the earthquake relative to population centers, and the fragility of buildings, utilities and roads all influence how earthquakes will affect nearby communities.

Table 2. Notable Earthquakes and Their Estimated Magnitude

| Date | Location | Fatalities | Magnitude |
| January 23, 1556 | | | |
| August 17, 1668 | | | |
| November 1, 1755 | | | |
| December 16, 1857 | | | |
| October 27, 1891 | | | |
| June 15, 1896 | | | |
| April 18, 1906 | | 3,000 | 7.8 |
| August 17, 1906 | | | |
| December 28, 1908 | | | |
| December 16, 1920 | | | |
| September 1, 1923 | | | |
| May 22, 1927 | | | |
| January 13, 1934 | | | |
| December 26, 1939 | | | |
| February 29, 1960 | | | |
| May 22, 1960 | | | |
| March 28, 1964 | Prince William Sound, AK | | |
| May 31, 1970 | | | |
| July 27, 1976 | | | |
| September 19, 1985 | | | |
| December 7, 1988 | | | |
| August 17, 1999 | | | |
| January 26, 2001 | | | |
| December 26, 2003 | | | |
| December 26, 2004 | Off west coast of northern Sumatra | | |
| October 8, 2005 | | | |
| May 26, 2006 | | | |
| May 12, 2008 | Eastern Sichuan, China | | |
| January 12, 2010 | Near Port-au-Prince, Haiti | | |
| March 11, 2011 | Pacific Ocean, east of Oshika Peninsula, Japan | | |

* Fatalities in the 1976 Tangshan, China earthquake were estimated as high as 655,000.

Source: Preferred Magnitudes of Selected Significant Earthquakes, USGS, 2010 (with additions for the two most recent major earthquakes, in Haiti and Japan).

The following links provide some more information about earthquakes.
- American Geophysical Union (AGU) - Animation of P, S & Surface Waves - Animations of Seismology Fundamentals - Association of American State Geologists (AASG) - Association of Bay Area Governments (ABAG) - California Geological Survey (CGS) - California Office of Emergency Services (OES) - California Seismic Safety Commission - Center for Earthquake Research & Information (CERI) - Central United States Earthquake Consortium (CUSEC) - Consortium of Universities for Research in Earthquake Engineering (CUREE) - COSMOS Virtual Data Center - CREW - Cascadia Region Earthquake Workgroup - Earthquake Engineering Research Institute (EERI) - Earthquake Information for 2009, USGS - Earthquake Information for 2010, USGS - Earthquake Monitoring - Earthquakes - Online University - Earthquakes by Bruce A. Bolt Online Companion - Earthquakes Cause over 1700 Deaths in 2009, USGS - Earth Science Education Activities - European-Mediterranean Seismological Centre - FEMA - Federal Emergency Management Agency - Finite-source Rupture Model Database - Global Earthquake Explorer - GSA - Geological Society of America - Incorporated Research Institutes for Seismology (IRIS) - International Association of Seismology and Physics of the Earth's Interior (IASPEI) - International Seismological Centre (ISC) - John Lahr's Earthquake website - McConnell, D., D. Steer, C. Knight, K. Owens, and L. Park. 2010. The Good Earth. 2nd Edition. McGraw-Hill, Dubuque, Iowa. - Mid-America Earthquake Center - Multi-Disciplinary Center for Earthquake Engineering Research (MCEER) - National Geophysical Data Center (NGDC) - NOAA - National Information Centre of Earthquake Engineering (NICEE) - National Science Foundation (NSF) - Natural Hazards Center - Northern California Earthquake Data Center - Observatories and Research Facilities for EUropean Seismology (ORFEUS) - Plummer, C., D. Carlson, and L. Hammersle. 2010. Physical Geology. 13th Edition. McGraw-Hill, Dubuque, Iowa. 
- Project IDA - Quake-Catcher Network - Saint Louis University Earthquake Center - Seattle Fault Earthquake Scenario - Seismographs: Keeping Track of Earthquakes - Seismological Society of America (SSA) - Seismo-surfing the Internet for Earthquake Data - Smithsonian Global Volcanism Program - SOPAC (Scripps Orbit and Permanent Array Center) - Southern California Earthquake Center (SCEC) - Tarbuck, E.J., F.K. Lutgens, and D. Tasa. 2009. Earth Science. 12th Edition. Prentice Hall, Upper Saddle River, New Jersey. - Tectonics Observatory - Tracing earthquakes: seismology in the classroom - UPSeis Seismology Questions Answered - USGS Earthquake Hazards Program, U.S. Geological Survey - Western States Seismic Policy Council (WSSPC) - World Data Center System - World Organization of Volcano Observatories - World Seismic Safety Initiative (WSSI)
In 2006, high sea temperatures caused severe coral bleaching in the Keppel Islands, in the southern part of the reef — the largest coral reef system in the world. The damaged reefs were then covered by a single species of seaweed which threatened to suffocate the coral and cause further loss. A "lucky combination" of rare circumstances has meant the reef has been able to make a recovery. Abundant corals have reestablished themselves in a single year, say the researchers from the University of Queensland's Centre for Marine Studies and the ARC Centre of Excellence for Coral Reef Studies (CoECRS).

"Three factors were critical," said Dr Guillermo Diaz-Pulido. "The first was exceptionally high regrowth of fragments of surviving coral tissue. The second was an unusual seasonal dieback in the seaweeds, and the third was the presence of a highly competitive coral species, which was able to outgrow the seaweed."

Coral bleaching occurs in higher sea temperatures, when corals lose the symbiotic algae they need to survive. The reefs then lose their colour and become more susceptible to death from starvation or disease. The findings are important because it is extremely rare to see reports of reefs that bounce back from mass coral bleaching or other human impacts in less than a decade or two, the scientists said. The study is published in the online journal PLoS ONE.

"The exceptional aspect was that corals recovered by rapidly regrowing from surviving tissue," said Dr Sophie Dove, also from CoECRS and The University of Queensland. "Recovery of corals is usually thought to depend on sexual reproduction and the settlement and growth of new corals arriving from other reefs. This study demonstrates that for fast-growing coral species asexual reproduction is a vital component of reef resilience."

Last year, a major global study found that coral reefs did have the ability to recover after major bleaching events, such as the one caused by the El Niño in 1998.
David Obura, the chairman of the International Union for Conservation of Nature climate change and coral reefs working group involved with the report, said: "Ten years after the world's biggest coral bleaching event, we know that reefs can recover – given the chance. Unfortunately, impacts on the scale of 1998 will reoccur in the near future, and there's no time to lose if we want to give reefs and people a chance to suffer as little as possible."

Coral reefs are crucial to the livelihoods of millions of coastal dwellers around the world and contain a huge range of biodiversity. The UN's Millennium Ecosystem Assessment says reefs are worth about $30bn annually to the global economy through tourism, fisheries and coastal protection. But the ecosystems are under threat worldwide from overfishing, coastal development and runoff from the land, and in some areas, tourism impacts. Natural disasters such as the earthquake that triggered the Indian Ocean tsunami in 2004 have also caused reef loss.

Climate change poses the biggest threat to reefs, however, as emissions of carbon dioxide make seawater increasingly acidic. Last year a study showed that one-fifth of the world's coral reefs have died or been destroyed and the remainder are increasingly vulnerable to the effects of climate change. The Global Coral Reef Monitoring Network says many surviving reefs could be lost over the coming decades as CO2 emissions continue to increase.
Earth from Space: Easter Island

Easter Island as seen by astronauts aboard the International Space Station on Sept. 25, 2002.

On Easter Sunday in 1722, Dutch explorer Jacob Roggeveen became the first known European to encounter this Polynesian island, and he gave it the name it has become most widely known by. Easter Island (also known as Rapa Nui in the native language) is one of the most isolated spots on Earth, lying some 2,000 miles from the nearest areas of human habitation (Tahiti and Chile) — even more remote than the astronauts orbiting at 210 nautical miles above the Earth. The island, which is only 15 miles long, was annexed by Chile in 1888. (In Spanish, it is called "Isla de Pascua," which means "Easter Island.")

Archaeological evidence suggests that Polynesians from other Pacific Islands discovered and colonized Easter Island around the year 400. The island and its early inhabitants are best known for the giant stone monoliths, known as Moai, placed along the coastline. It is thought that the population grew larger than was sustainable on the small island, resulting in civil war, deforestation and the near collapse of the island ecosystem. Today, a new forest (primarily eucalyptus) has been established in the center of the island (the dark green in the image), according to a NASA statement.

Volcanic landforms dominate the geography of the island, including the large crater Rana Kao at the southwest end of the island and a line of cinder cones that stretch north from the central mountain. Near Rana Kao is the longest runway in Chile, which served as an emergency landing spot for the space shuttle before its retirement in 2011.
During this tutorial you will be asked to perform calculations involving trigonometric functions. You will need a calculator to proceed.

The purpose of this tutorial is to review with you the elementary properties of the trigonometric functions. Facility with this subject is essential to success in all branches of science, and you are strongly urged to review and practice the concepts presented here until they are mastered.

Let us consider the right-angle triangle shown in Panel 1. The angle at C is a right angle and the angle at A we will call θ. The lengths of the sides of the triangle we will denote as p, q and r. From your elementary geometry, you know several things about this triangle. For example, you know the Pythagorean relation, q² = p² + r². That is, the square of the length of the side opposite the right angle, which we call the hypotenuse, is equal to the sum of the squares of the lengths of the other two sides.

We know other things. For example, we know that if the lengths of the three sides of any triangle p, q and r are specified, then the whole triangle is determined, angles included. If you think about this for a moment, you will see it is correct. If I gave you three sticks of fixed length and told you to lay them down in a triangle, there's only one triangle you could make. What we would like to have is a way of relating the angles in the triangle, say θ, to the lengths of the sides. It turns out that there's no simple analytic way to do this. Even though the triangle is specified by the lengths of its three sides, there is no simple formula that will allow you to calculate the angle θ. We must specify it in some new way.

To do this, we define three ratios of the sides of the triangle. One ratio we call the sine of theta, written sin(θ), and it is defined as the ratio of the side opposite θ to the hypotenuse, that is, r/q. The cosine of θ, written cos(θ), is the side adjacent to θ over the hypotenuse, that is, p/q.
This is really enough, but because it simplifies our mathematics later on, we define the tangent of θ, written tan(θ), as the ratio of the opposite to the adjacent sides, that is, r/p. This is not an independent definition, since you can readily see that the tangent of θ is equal to the sine of θ divided by the cosine of θ. Verify for yourself that this is correct.

All scientific calculators provide this information. The first thing to ensure is that your calculator is set to the angular measure that you want. Angles are usually measured in either degrees or radians (see the tutorial on DIMENSIONAL ANALYSIS). The angle 2º is a much different angle than 2 radians, since 180º = π radians = 3.1416... radians.

Make sure that your calculator is set to degrees. Now suppose that we want the sine of 24º. Simply press 24 followed by the [sin] key, and the display should show the value 0.4067. Therefore, the sine of 24º is 0.4067. That is, in a triangle like Panel 1 where θ = 24º, the ratio of the sides r to q is 0.4067.

Next set your calculator to radians and find the sine of 0.42 radians. To do this, enter 0.42 followed by the [sin] key. You should obtain a value of 0.4078. This is nearly the same value as you obtained for the sine of 24º. Using the relation above, you should confirm that 24º is close to 0.42 radians.

Obviously, using your calculator to find values of sines is very simple. Now find the sine of 42º 24 minutes. The sine of 42º 24 minutes is 0.6743. Did you get this result? If not, remember that 24 minutes corresponds to 24/60 or 0.4º. The total angle is then 42.4º.

The determination of cosines and tangents on your calculator is similar. It is now possible for us to solve simple problems concerning triangles. For example, in Panel 2, the length of the hypotenuse is 3 cm and the angle θ is 24º. What is the length of the opposite side r? The sine of 24º, as we saw, is 0.4067, and it is also, by definition, r/3.
So, sine of 24º = 0.4067 = r/3, and therefore, r = 3 x 0.4067 = 1.22 cm.

Conversely, suppose you knew that the opposite side was 2 cm long and the hypotenuse was 3 cm long, as in Panel 3; what is the angle θ? First determine the sine of θ. You should find that the sine of θ is 2/3, which equals 0.6667. Now we need to determine what angle has 0.6667 as its sine. If you want your answer to be in degrees, be sure that your calculator is set to degrees. Then enter 0.6667 followed by the [INV] key and then the [sin] key. You should obtain a value of 41.8º. If your calculator doesn't have an [INV] key, it probably has a [2ndF] key, and the inverse sine can be found using it.

One use of these trigonometric functions which is very important is the calculation of components of vectors. In Panel 4 is shown a vector OA in an xy reference frame. We would like to find the y-component of this vector, that is, the projection OB of the vector on the y axis. Obviously, OB = CA and CA/OA = sin(θ), so CA = OA sin(θ). Similarly, the x-component of OA is OC, and OC/OA = cos(θ), so OC = OA cos(θ).

There are many relations among the trigonometric functions which are important, but one in particular you will find used quite often. Panel 1 has been repeated as Panel 5 for you. Let us look at the sum cos²(θ) + sin²(θ). From the figure, this is (p/q)² + (r/q)², which is [(p² + r²) / q²]. The Pythagorean theorem tells us that p² + r² = q², so we have [(p² + r²) / q²] = (q²/q²) = 1. Therefore, we have:

cos²(θ) + sin²(θ) = 1

Our discussion so far has been limited to angles between 0º and 90º. One can, using the calculator, find the sine of larger angles (e.g. 140º) or negative angles (e.g. -32º) directly. Sometimes, however, it is useful to find the corresponding angle between 0º and 90º. Panel 6 will help us here.

In this xy reference frame, the angle θ is clearly between 90º and 180º, and clearly, the angle a, which is 180º - θ (a is marked with a double arc), can be dealt with.
In this case, we say that the magnitudes of the sine, cosine, and tangent of θ are those of the supplement a, and we only have to examine whether they are positive or negative. For example, what are the sine, cosine and tangent of 140º? The supplement is 180º - 140º = 40º. Find the sine, the cosine and the tangent of 40º.
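The calculator exercises in this tutorial can also be checked in code. This sketch uses Python's math module, which works in radians, so degrees are converted first; the printed values match those quoted in the text.

```python
import math

def sin_deg(angle_deg: float) -> float:
    """Sine of an angle given in degrees (math.sin expects radians)."""
    return math.sin(math.radians(angle_deg))

print(f"sin(24 deg)   = {sin_deg(24):.4f}")     # 0.4067, as on the calculator
print(f"sin(0.42 rad) = {math.sin(0.42):.4f}")  # 0.4078, nearly the same angle
print(f"sin(42.4 deg) = {sin_deg(42.4):.4f}")   # 0.6743 (42 deg 24 min)

# Panel 2: r = 3 x sin(24 deg); Panel 3: inverse sine of 2/3
print(f"r = {3 * sin_deg(24):.2f} cm")                      # 1.22 cm
print(f"theta = {math.degrees(math.asin(2 / 3)):.1f} deg")  # 41.8 deg

# Supplement rule: sin(140 deg) equals sin(40 deg); cos flips sign
print(f"sin(140) = {sin_deg(140):.4f}, sin(40) = {sin_deg(40):.4f}")
print(f"cos(140) = {math.cos(math.radians(140)):.4f}")      # negative of cos(40)
```

Note that the magnitudes for 140º and its supplement 40º agree exactly; only the signs of the cosine and tangent change in the second quadrant.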
Is there such a thing as too much money?
by Fred E. Foldvary, Senior Editor

What is inflation? There are two economic meanings of inflation. The first meaning is monetary inflation, having to do with the money supply. To understand that, we need to understand that the impact of money on the economy depends not just on the amount of money but also on its rate of turnover.

We all know that money circulates. How fast it circulates is called its velocity. For example, suppose you get paid $4000 every four weeks. You are circulating $4000 13 times per year. Then suppose you instead get paid $1000 each week. Your total spending is the same, but now you are circulating $1000 52 times per year. The velocity of the money is 52, but the money you hold has been reduced to one fourth its previous amount, although the money held times the velocity is the same. The effect on the economy is the money supply times the velocity.

Monetary inflation is an increase in the money supply, times the velocity, that is greater than the increase in the amount of transactions measured in constant dollars. Simply put, if velocity does not change, monetary inflation is an increase in money that is greater than the increase in goods.

Price inflation is an on-going increase in the price level. The level of prices is measured by a price index, such as the consumer price index (CPI). Usually, price inflation is caused by monetary inflation. So let's take a look at recent monetary inflation.

The broadest measure of money is MZM, which stands for money zero maturity: funds which can be readily spent. The Federal Reserve Bank of St. Louis keeps track of various measurements of money. Its data show that on an annual basis, MZM increased by 13 percent in January 2008, 36 percent in February, and 23 percent in March. These are huge increases, since gross domestic product, the total production of goods, increased at an annual rate of only 0.6 percent during these months.
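The paycheck illustration above can be sketched numerically: what matters for the economy is money held times its velocity. The function name is illustrative; the figures are simply the article's example.

```python
def annual_spending(money_held: float, turnovers_per_year: float) -> float:
    """Total yearly flow of spending: money held times its velocity."""
    return money_held * turnovers_per_year

paid_monthly = annual_spending(4000, 13)  # $4,000 circulated 13 times a year
paid_weekly = annual_spending(1000, 52)   # $1,000 circulated 52 times a year

# Same total flow, hence the same effect on the economy
print(paid_monthly, paid_weekly)  # 52000 52000
```

Holding one fourth the money at four times the velocity leaves the product, and so the economic effect, unchanged.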
In 2006, MZM grew at an annual rate of only 4 percent. High monetary inflation results in high price inflation. Indeed, in May 2008 the consumer price index was 4.2 percent above its level of May 2007. The increase for the month of May was 0.6 percent, an annual rate of 7.2 percent. The Consumer Price Index for All Urban Consumers (CPI-U) increased 0.8 percent in May before seasonal adjustment, an annualized increase of 9.6 percent. The Consumer Price Index for Urban Wage Earners and Clerical Workers (CPI-W) increased 1.0 percent in May before seasonal adjustment, a whopping annual rate of 12 percent.

The rapid rise in oil prices fueled the increase in the price of gasoline, while greater demand for grains made food prices rise, but beneath these rises is the monetary inflation that creates a higher demand for goods in general. The government reports that "core inflation," which excludes gasoline and food, is lower, but what counts for people is everything they buy, including food and fuel. If you have to pay much more for food and gasoline, there is less money for other things, so of course those will not rise in price as much.

In making monetary policy, the Federal Reserve targets the federal funds interest rate, which banks pay when they borrow funds from one another. During the financial troubles of the first few months of 2008, the Fed aggressively lowered the federal funds rate to 2 percent and also indicated that it would supply limitless credit to banks borrowing directly from the Federal Reserve. The Fed lowers the interest rate by increasing the supply of money that banks have to lend; to unload it, banks charge borrowers less interest. To do this, the Fed buys U.S. Treasury bonds from the public. It pays for the bonds not with old money it has lying around but by crediting the reserve accounts that banks hold at their Federal Reserve Bank with newly created money.
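The open-market purchase just described can be sketched as a toy balance sheet. The class, names, and figures here are illustrative, not actual Fed data:

```python
# A toy sketch of the open-market purchase described above: the Fed
# pays for bonds by crediting bank reserve accounts, so reserves
# (and hence the money supply) rise without drawing down any
# pre-existing money. All numbers are illustrative.

class ToyBankingSystem:
    def __init__(self, reserves):
        self.reserves = reserves  # banks' balances at the Fed

    def fed_buys_bonds(self, amount):
        # Payment is a bookkeeping credit to reserve accounts.
        self.reserves += amount
        return amount  # the newly created money

system = ToyBankingSystem(reserves=100)
created = system.fed_buys_bonds(25)
print(system.reserves, created)  # 125 25
```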
This increase in reserves or bank funds is a creation of money out of nothing. It does not violate any law of conservation, because the creation comes at the expense of the value of all other money holdings: every extra dollar created by the Fed decreases the value of the dollars you hold by a tiny amount.

Most monetary reformers stop there, but that is not enough. The current financial instability is also caused by the real estate boom-bust cycle, since even with sound money, an economic expansion would spark a speculative boom in land values. In a competitive market, when produced goods rise in price, producers usually supply more, bringing the price back down or limiting the rise. But land is not produced, so with increased demand the price has nowhere to go but up. Speculators bid up the price of land on expectations of even higher future prices, but at the peak of the boom the price becomes too high for those who want to use the land. Real estate stops rising and then falls, and that brings the financial system down with it, as we have witnessed during the past year.

To prevent the inflation in land prices, we need to remove the subsidy: the pumping up of land value by civic benefits paid for out of the returns on labor and capital goods. We can remove the land subsidy by tapping land value or land rent for public revenue. Land-value tapping or taxation, plus free-market money and banking, would provide price and financial stability. Only the free market can know the right money supply. Some people think the government could just print money and spend it. That is what is happening in Zimbabwe, which has an inflation rate of one hundred thousand percent; much of the population has fled the country. Once government can create money at will, there is really no way to limit it, and if there is some limiting rule, then the money supply becomes too rigid.
Only free-market competition and production can combine price stability with money-supply flexibility.

-- Fred Foldvary

Copyright 2008 by Fred E. Foldvary. All rights reserved. No part of this material may be reproduced or transmitted in any form or by any means, electronic or mechanical, which includes but is not limited to facsimile transmission, photocopying, recording, rekeying, or using any information storage or retrieval system, without giving full credit to Fred Foldvary and The Progress Report.
There are many techniques available to help students get started with a piece of writing. Getting started can be hard for writers at all levels. Freewriting, one great technique for building fluency, was explored in an earlier lesson plan: http://www.thirteen.org/edonline/adulted/lessons/lesson18.html

This unit offers some other techniques. They may be especially helpful with students who prefer a style of learning or teaching that could be described as visual, spatial, or graphic. Those styles are sometimes overlooked in favor of approaches that are very linguistic or linear. The approaches here attend to a broader range of learning styles as they add variety.

- Writing: Writing Process, Pre-Writing, Autobiography, Exposition, Personal Narrative, Argumentation, Comparison and Contrast, Description.

Students will be able to:
- Write more fluently (writing more with greater ease)
- Generate writing topics
- Select topics that will yield strong pieces of writing
- Connect personal experience, knowledge, and examples to an assigned topic
- Produce better organized pieces of writing

National Reporting System of Adult Education standards are applicable here. These are the standards required by the 1998 Workforce Investment Act.

Materials: Pencils, colored pencils, pens, markers, crayons, unlined paper, magazines and newspapers with pictures inside, glue or paste, and paper. Big paper or poster board can make the pre-writing exercises more eye-catching, more of a project, and better for display.

Video and TV: Prep for Teachers

Make sure you try each of the activities yourself before you ask students to do them. That will give you a better understanding of the activities and help you recognize any potential points that may be confusing or difficult. It also gives you a sample to show the students: it's much easier to create a diagram if you are shown an example of one.
Here are some Web sites that give background and even more ideas about pre-writing, diagrams, graphic organizers, and other ways to get started with writing. There is some repetition among them; you don't have to read them all. But check them out and see what you think.
"Given all the evidence presently available, we believe it entirely reasonable that Mars is inhabited with living organisms and that life independently originated there." That was the conclusion of a study by the National Academy of Sciences in March 1965, after 88 years of surveying the red planet through blurry telescopes. Four months later, NASA's Mariner 4 spacecraft beamed back the first satellite images of Mars, confirming the opposite.

After Earth and Mars formed four and a half billion years ago, both contained all the elements necessary for life. Both initially had surface water and an atmosphere, but scientists now believe Mars lost its atmosphere four billion years ago, while Earth gained an oxygenated atmosphere around half a billion years later. According to the chief scientist on NASA's Curiosity mission, if life ever existed on Mars it was most likely microscopic and lived more than three and a half billion years ago. But even on Earth, fossils that old are vanishingly rare. "You can count them on one hand," he says. "Five locations. You can waste time looking at hundreds of thousands of rocks and not find anything."

The impact of a 40 kg meteor on the Moon on March 17 was bright enough to see from Earth without a telescope, according to NASA, which captured the impact through a Moon-monitoring telescope. Now NASA's Lunar Reconnaissance Orbiter will try to find the impact crater, which could be up to 20 metres wide.
Chandra "Hears" a Supermassive Black Hole in Perseus

A 53-hour Chandra observation of the central region of the Perseus galaxy cluster (left) has revealed wavelike features (right) that appear to be sound waves. The features were discovered by using a special image-processing technique to bring out subtle changes in brightness. These sound waves are thought to have been produced by explosive events occurring around a supermassive black hole (bright white spot) in Perseus A, the huge galaxy at the center of the cluster. The pitch of the sound waves translates into the note of B flat, 57 octaves below middle C. This frequency is over a million billion times deeper than the limits of human hearing, so the sound is much too deep to be heard.

The image also shows two vast, bubble-shaped cavities, each about 50 thousand light years wide, extending away from the central supermassive black hole. These cavities, which are bright sources of radio waves, are not really empty, but filled with high-energy particles and magnetic fields. They push the hot X-ray-emitting gas aside, creating sound waves that sweep across hundreds of thousands of light years.

The detection of intergalactic sound waves may solve the long-standing mystery of why the hot gas in the central regions of the Perseus cluster has not cooled over the past ten billion years to form trillions of stars. As sound waves move through gas, they are eventually absorbed and their energy is converted to heat. In this way, the sound waves from the supermassive black hole in Perseus A could keep the cluster gas hot.

The explosive activity occurring around the supermassive black hole is probably caused by large amounts of gas falling into it, perhaps from smaller galaxies that are being cannibalized by Perseus A. The dark blobs in the central region of the Chandra image may be fragments of such a doomed galaxy.
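The "57 octaves below middle C" figure above can be checked with a little arithmetic. The inputs here are my assumptions, not from the article: B-flat just above middle C is roughly 466.16 Hz, each octave down halves the frequency, and a year is about 3.156e7 seconds:

```python
# Checking the pitch claim: B-flat transposed down 57 octaves.
BFLAT_HZ = 466.16                     # assumed: B-flat above middle C
freq_hz = BFLAT_HZ / 2**57            # halve once per octave
period_years = (1 / freq_hz) / 3.156e7

print(f"{freq_hz:.2e} Hz")            # ~3.2e-15 Hz
print(f"{period_years:.1e} years")    # one oscillation every ~10 million years
```

The result, a few times 10^-15 Hz, is indeed about a million billion (10^15) times below the ~20 Hz lower limit of human hearing, consistent with the text.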
SL Psychology/Intro to Research Methods

The following items should be included in this section: the hypothetico-deductive (scientific) method, types of psychological research methods, research designs, sampling, reliability, validity, and triangulation.

Research into the mind can be traced back to Ancient Greece, but empirical psychological research has its roots in early investigations of cognitive functions such as memory, using methods like introspection. While early psychological researchers attempted to bring the same standards of rigor and control to their investigations as physical scientists enjoy, psychological research poses unique obstacles, because it investigates the mind. Only recently have the contents of the mind become observable, with the advent of neuro-imaging technologies such as EEGs, PET scans, and fMRIs; thus early psychological research was marked by disagreements between different schools or generations of researchers that took varied approaches to investigating the invisible mind. For example, cognitive researchers rely on inferences drawn from activities that employ cognitive functions such as memory, as opposed to examining how or where actual memories are laid down. Conversely, behaviorist researchers employed a more empirically rigorous method, seeking only to make generalizations about phenomena that were directly observable and replicable in controlled settings.

Contemporary psychological research is derived from these disparate traditions and perspectives. It utilizes the hypothetico-deductive or scientific method:
1. observation and data gathering
2. inference of generalizations
3. construction of explanatory theories
4. deduction of hypotheses to test theories
5. hypothesis testing
6. support for or challenges to existing theories, and commensurate adjustments

Theories and Hypotheses

Two key steps, theory construction and hypothesis deduction/testing, pose special problems for researchers.
Theories are sets of related generalizations explaining a specific mental phenomenon (e.g., schemas and memory organization), while hypotheses are specific predictions for research investigations. These steps are derived from empirical data, but are heavily influenced by an individual researcher's perspective. Thus researchers seek to clearly articulate operational definitions in an effort to make their research easily replicable. Additionally, controls are implemented to ensure the credibility of results and subsequent conclusions. Finally, published research contributing to knowledge in the discipline is peer reviewed and usually rigorously scrutinized.

Psychological research can take many forms, ranging from controlled laboratory true experiments (involving the manipulation of independent variables and controls for confounding variables), to field research (involving deliberate manipulation of independent variables in natural, uncontrolled environments), to the naturalistic/quasi-experimental method (involving observation and analysis of independent variables changed by natural incidence). No matter which research method is employed, controls are carefully implemented to ensure the credibility of research. Key issues surrounding controls are research design, sampling, reliability, and validity.

Research Design

The underlying structure of an investigation: how psychologists use subjects/participants in their experiments. The three most common designs are:
1. Repeated Measures: using the same subjects in the experimental and control conditions
2. Independent Measures: using different subjects/participants in the experimental and control conditions
3. Matched Pairs: using different subjects/participants in the experimental and control conditions, with each sample having similar characteristics

Sampling

The process of selecting the participants/subjects to examine, drawn from a target population (a specified subpopulation of all humans).
The results of a study are inferred from the sample's performance on a given measure; the sample is thus key to the line of reasoning from initial design to examination of results. Several methods can be employed when choosing a sample: random, stratified, and convenience. Random sampling provides the best chance for the sample group to be representative of the target population. Stratified samples reflect the proportions of various sub-groups within the target population. Convenience sampling involves choosing participants/subjects that are available at the time of data collection. Convenience samples do not control for possible biases that may exist within certain subgroups of a population, and thus the results and conclusions from a convenience sample must be analyzed with caution and triangulated.

Reliability

A study is reliable if it is replicable and the same results are achieved repeatedly. There are four types of reliability in regard to psychological study:
- Test-Retest Reliability (also called stability reliability)
- Interrater Reliability
- Parallel Forms Reliability
- Internal Consistency Reliability

Test-Retest Reliability

To judge reliability in this case, the test is administered at two different times to the same or similar subjects. This judges consistency of results across time, and ensures the results were not an artifact of the context of the moment. Reliability is higher if the retest is close in chronological proximity to the original test. Research psychologists tend to replicate older studies to generate theories or to amend findings; in attention research, for example, Treisman repeatedly retested findings to amend models of attention.

Interrater Reliability

Two or more judges score the test. The scores are then compared to determine how much the raters agree, i.e., the consistency of the rating system. An example of interrater reliability is that of teachers grading essays for an AP or IB exam.
If a scale from 1 to 5 is used (where 1 is the worst and 5 is the best), and one teacher gives an essay a score of 2 while another gives it a 5, then the ratings are inconsistent. Through training, practice, and discussion, individual raters can reach a consistent level of assessment. Often the raters are moderated by a senior rater who helps them reach consistency.

Parallel Forms Reliability

A large set of questions related to the same construct is generated and then divided into two sets, which are given to the same sample of people at the same time. The two tests covering the same content are then judged against each other for consistency. An example would be a pretest-posttest design in which one group receives form 1 and the other form 2, with the forms switched for the posttest.

Internal Consistency Reliability

Here the test itself is the tool used to determine reliability: the items on the test should measure the same content. Often questions are strikingly similar, and such similar questions should be answered in the same way. There are different ways to measure internal consistency reliability:
- Average Inter-item Correlation
- Average Item-total Correlation
- Split-Half Reliability
- Cronbach's Alpha (α)

Quantitative versus Qualitative Measures

Coolican, H. (2004). Research Methods and Statistics in Psychology. Cambridge University Press.

1. In what ways has new technology changed the science of psychology? Provide three examples.
2. How does the importance of validity and reliability change depending on the type of study?
3. In what ways will the different aspects of an experiment (sampling, methods, reliability, and validity) affect the results and conclusions of a psychology study?
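As a concrete illustration of the internal-consistency measures listed above, Cronbach's alpha can be computed from item variances. This is a minimal sketch; the function name and the toy data are mine, not from the text:

```python
def cronbach_alpha(scores):
    """Cronbach's alpha: (k/(k-1)) * (1 - sum of item variances / variance of totals).

    `scores` is a list of rows, one row per respondent, one column per item.
    """
    k = len(scores[0])                  # number of items
    def var(xs):                        # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = [var([row[i] for row in scores]) for i in range(k)]
    totals = [sum(row) for row in scores]
    return (k / (k - 1)) * (1 - sum(item_vars) / var(totals))

# Three respondents answering two perfectly consistent items:
print(cronbach_alpha([[1, 1], [2, 2], [3, 3]]))  # 1.0
```

When every item rises and falls together across respondents, alpha reaches 1; items that disagree pull it toward (or below) zero.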
Most Americans believe that the Declaration of Independence, adopted by the Continental Congress on July 4, 1776, began American independence. While this date announced the formal break between the American colonists and the "mother country," it did not guarantee independence. Not all Americans favored independence, and most historical estimates place the number of Loyalist, or Tory, Americans near one-third of the population. Winning independence required an eight-year war that began in April 1775 and ended with a peace treaty finalized on September 3, 1783. Unfortunately, the infant nation found itself born into a world dominated by a superpower struggle between England and France. The more powerful European nations viewed the vulnerable United States, correctly, as weak and ripe for exploitation. Tragically, few Americans know of this period of crisis in our nation's history because of the irresponsible neglect of the American education system.

American independence marked the end of one chapter in American history and the beginning of another. As with all historical events, this declaration continued the endless cycle of action and reaction, because nothing occurs in a vacuum. Tragically, most Americans' historical perspective begins with their birth, rendering everything that previously occurred irrelevant. Furthermore, most educators conveniently "compartmentalize" their subjects and do not place them in the proper historical context. Since most Americans only remember the United States as a superpower, they do not know of our previous struggles. Unfortunately, our agenda-driven education system also ignores this period and often portrays America in the most negative light.

Without delving too deeply into the deteriorating relations between the American colonists and their "mother country," declaring independence came slowly. None of the thirteen colonies trusted the other colonies, and they rarely acted in concert, even in times of crisis.
Regional and cultural differences among the New England, mid-Atlantic, and Southern colonies deeply divided the colonists. Even in these early days of America, slavery proved a dividing issue, although few believed in racial equality. The "umbilical cord" with England provided the only unifying constant that bound them together culturally and politically.

The colonies also possessed different forms of government, although they steadfastly asserted their liberties and "rights as Englishmen." Some existed as royal colonies, where the English monarch selected the governor. Proprietary colonies formed when merchant companies or individuals, called proprietors, received a royal grant and appointed the governor. Charter colonies received their charters in much the same way, with individuals or merchants receiving royal charters and shareholders selecting the governor. Each colony elected its own legislature, and local communities made their laws mostly based on English common law. Any form of national, or "continental," unity remained an illusion, existing largely in the minds of the delegates of the First Continental Congress.

The Second Continental Congress convened on May 10, 1775 because England had ignored the grievances submitted by the First Continental Congress. Furthermore, open warfare had erupted in Massachusetts between British troops and the colonial militia at Lexington and Concord on April 19, 1775. Known today as Patriots' Day, few Americans outside of Massachusetts celebrate it, or even know of it. Setting forth their reasons for taking up arms against England, the delegates established the Continental Army on June 14, 1775. To present a united front, they appointed George Washington, a Virginian, as commander-in-chief. On July 10, 1775, the Congress sent Parliament one last appeal for resolving their differences, which proved futile.
While Congress determined the political future of the colonies, fighting continued around Boston, beginning with the bloody battle on Breed's Hill on June 17, 1775. Known in our history as the Battle of Bunker Hill, the British victory cost over 1,000 British and over 400 American casualties. The battle encouraged the Americans because it proved the "colonials" capable of standing against British regulars. British forces withdrew from Boston in March 1776 and awaited reinforcements from England as fighting erupted in other colonies.

While Washington and the Continental Army watched the British in Boston, Congress authorized an expedition against Canada. Congress hoped for significant resentment of British rule among the majority French inhabitants, something it misjudged. In September 1775 the fledgling Continental Army launched an ambitious but futile two-pronged invasion of Canada. Launched late in the season, particularly for Canada, it nevertheless almost succeeded, capturing Montreal and moving on Quebec. It ended in a night attack in a snowstorm on December 31, 1775, when the commander fell dead and the second-in-command fell severely wounded. American forces did breach the city walls; however, when the attack broke down, these men became prisoners of war.

To disrupt the flow of British supplies into America, Congress organized the Continental Navy and Continental Marines on October 13, 1775 and November 10, 1775, respectively. Still, there were no demands for independence, despite the creation of national armed forces, the invasion of a "foreign country," and all the trappings of a national government. The full title of the Declaration of Independence ends with "thirteen united States of America," with "united" in lower case. I found no evidence that the Founding Fathers did this intentionally, or whether it merely reflected the writing style of the time.
Despite everything mentioned previously regarding "continental" actions, the thirteen colonies jealously guarded their sovereignty. Although Congress declared independence, England did not acknowledge the legality of this resolution and considered the colonies "in rebellion." England assembled land and naval forces of over 40,000 men, including German mercenaries, for subduing the "insurrection." This timeless lesson proves the uselessness of passing resolutions with no credible threat of force backing them up. Unfortunately, our academic-dominated society today believes the mere passage of laws and international resolutions forces compliance.

We hear much in the news today about "intelligence failures" regarding the war against terrorism. England definitely experienced an "intelligence failure" as it launched an expedition to "suppress" this "insurrection" by a "few hotheads." First, the British underestimated the extent of dissatisfaction among the Americans, spurred into action by such "rabble rousers" as John Adams. They further underestimated the effectiveness of Washington and the Continental Army, particularly after the American victories at Trenton and Princeton. British officials also underestimated the number of Loyalists with enthusiasm for taking up arms for the British. While Loyalist units fought well, particularly in the South and on the New York frontier, they depended heavily on the support of British regulars. Once British forces withdrew, particularly in the South, the Loyalist forces either followed them or disappeared. A perennial lesson for military planners today: do not worry about your "footprint"; decisively defeat your enemy. This hardens the resolve of your supporters, influences the "neutrals" in your favor, and diminishes support for your enemies.

Regarding the "national defense," the Continental Congress and "states" did not fully cooperate against the superpower, England.
The raising of the Continental Army fell on the individual colonies for almost the entire war, with the Congress establishing quotas. Unfortunately, none of the colonies ever met their quota of Continental regiments, and soldiers negotiated one-year enlistments. Continental Army recruiters often met competition from the individual colonies, which preferred fielding their militias. The Congress offered bounties in the almost worthless Continental currency and service far from home in the Continental Army; colonial governments offered higher bounties in local currencies, or British pounds, and part-time service near home. Congress possessed only the authority to request troops and supplies from the colonial governors, who often did not comply. For most of the war, the Continental Army remained understrength, poorly supplied, poorly armed, and mostly unpaid. Volumes of history describe the harsh winters endured by the Continentals at Valley Forge and, the following year, at Morristown, New Jersey. Colonial governments often refused supplies for troops from other colonies, even when those troops fought inside their borders.

As inflation continued devaluing Continental currency, farmers and merchants preferred trading with British agents, who often paid in gold. This created strong resentment between the soldiers who suffered the hardships of war and the civilians who profited from this trade. In fairness, the staggering cost of financing the war severely taxed the colonial governments and local economies, forcing hard choices.

Congress further intended the declaration of independence as a cry for help to England's superpower rival, France, and to other nations jealous of England. Smarting from defeat in the Seven Years War (the French and Indian War in America), and a significant reduction of its colonial empire, France burned for revenge. France's ally, Spain, had also suffered defeat and loss of territory during that war and sought advantage in the American war.
However, France and Spain both needed American victories before they would risk their troops and treasure. With vast colonial empires of their own, they hesitated to support a colonial rebellion in America. As monarchies, France and Spain held no love of "republican ideals" or "liberties," and mostly pursued independent strategies against England. Fortunately, their focus on recouping their former possessions helped diminish the number of British forces facing the Americans.

On the political front, the Congress knew that the new nation needed some form of national government for its survival. Unfortunately, the Congress fell short on this issue, enacting the weak Articles of Confederation on November 15, 1777. Delegates so feared the "tyranny" of a strong central government, as well as their neighbors, that they rejected national authority. In effect, the congressional delegates created thirteen independent nations instead of one, and our nation suffered for it. Amending the confederation required the approval of all thirteen states, virtually paralyzing any national effort. This form of government lasted until the adoption of the U.S. Constitution on September 17, 1787.

Despite these weaknesses, the fledgling "United States" survived and even achieved some success against British forces. Particularly early in the war, the British possessed several opportunities to destroy the Continental Army and end the rebellion. Fortunately for us, British commanders proved lethargic and complacent, believing the "colonial rabble" incapable of defeating them. Furthermore, as the Continental Army gained experience and training, it grew more professional, standing toe-to-toe against the British. Since achieving superpower status, the US has fallen into the same trap, continuously underestimating less powerful enemies.

The surrender of British forces at Yorktown, Virginia on October 19, 1781 changed British policy regarding its American colonies.
British forces now controlled mainly three enclaves: New York City; Charleston, South Carolina; and Savannah, Georgia. Loyalist forces, discouraged by British reverses, either retreated into these enclaves, departed America, or surrendered. Waging a global war against France and Spain further reduced the number of troops available for the American theater. This serves as another modern lesson: maintain forces adequate not only for your superpower responsibilities but also for unforeseen contingencies.

Ironically, the victory at Yorktown almost defeated the Americans as well, since the civil authorities nearly stopped military recruitment. Washington struggled to maintain forces significant enough to confront the remaining British in their enclaves; an aggressive British commander might still have scored a strategic advantage by striking at the demobilizing American forces. Fortunately, the British government lost heart for retaining America and announced the beginning of peace negotiations in August 1782.

The Treaty of Paris, signed on September 3, 1783, officially ended the American Revolution; however, it did not end America's struggles. American negotiators proved somewhat naïve against their more experienced European counterparts. Importantly, the British believed American independence would be short-lived, given the disunity among Americans. Congress began discharging the Continental Army before the formal signing of the treaty, leaving fewer than one hundred men on duty. Instead of presenting a united "allied" front, America, France, and Spain virtually negotiated separate treaties with England, delighting the British, who believed that by creating dissension among the wartime allies they furthered their position with their former colonies. If confronted with a new war against the more powerful France and Spain, America might rejoin the British Empire.
When England formally established the western boundary of the US at the Mississippi River, it did not consult its Indian allies. These tribes did not see themselves as “defeated nations,” since they had often defeated the Americans. Spanish forces had captured several British posts in this territory and therefore claimed a significant part of the southeastern US. France, which had practically bankrupted itself financing the American cause and waging its own war against England, expected a useful American ally. Unfortunately, the US proved a liability, incapable of repaying France the money loaned during the war. France soon faced domestic problems of its own, culminating in the French Revolution of 1789. For several reasons England believed itself the winner of these negotiations, and in a more favorable situation globally. England controlled Canada, from where it closely monitored the unfolding events in the US and sowed mischief. It illegally occupied several military forts on American territory and incited the Indian tribes against the American frontier. By default, England controlled all of the American territory north of the Ohio River and west of the Appalachian Mountains. Economically, England still believed that the US needed it as its primary trading partner, independent or not. A strong pro-British faction in America called for closer economic ties with the former “mother country.” As England observed the chaos that gripped the US at this time, it felt that America’s collapse, and reconquest by England, was only a matter of time. Most Americans today, knowing only the economic, industrial, and military power of modern America, cannot fathom the turmoil of this time. The weak central government and all the states had accumulated a huge war debt, leaving them financially unstable. While the US possessed rich natural resources, it lacked the industrial capability to develop them without foreign investment.
With no military forces, the nation lacked the ability to defend its sovereignty and its citizens. From all appearances our infant nation seemed stillborn, or at best vulnerable prey for the more powerful Europeans. As stated previously, the Articles of Confederation effectively created thirteen independent nations, with no national executive to enforce the law. Therefore each state ignored the resolutions of Congress and served its own self-interest. Each state established its own rules for interstate commerce, printed its own money, and even concluded treaties with foreign nations. No system existed for governing the interactions between the states, which often treated each other like hostile powers. The new nation did possess one thing in abundance: land, the vast wilderness between the Appalachian Mountains and the Mississippi River. Conceded by the British in the Treaty of Paris, this territory looked to the Americans like their economic solution. The nation owed the veterans of the Revolution a huge debt and paid them in the only currency available, land grants. Unfortunately, someone had to inform the Indians living on this land and make treaties regarding its distribution. For the Americans this seemed simple: the Indians, as British allies, had suffered defeat with the British and must pay the price. After all, under the rules of European “civilized” warfare, defeated nations surrendered territory and life went on. Unfortunately no one, neither American nor British, informed the Indians of these rules, because no one felt they deserved an explanation. Besides, the British hoped that by inciting Indian troubles they might recoup their former colonies. With British arms and encouragement, the tribes of the “Old Northwest” raided the western frontier with a vengeance. From western New York down through modern Kentucky these Indians kept up their war with the Americans.
In Kentucky between 1783 and 1790, the various tribes killed an estimated 1,500 people, stole 20,000 horses, and destroyed an unknown amount of property. Our former ally, Spain, had controlled all of the territory west of the Mississippi River before the American Revolution. From there it launched expeditions that captured British posts at modern Vicksburg and Natchez, Mississippi, and along the entire Gulf Coast. However, it claimed about two-thirds of the southeastern US based on this “conquest,” including land far beyond the reach of its troops. Like the British, the Spanish incited the Indians living in this region to keep out American settlers. Spain also controlled the port of New Orleans, and with it access to the Mississippi River. Americans living in Kentucky and other western settlements depended on the Mississippi River for their commerce. The national government seemed unable, or unwilling, to force concessions from Spain, and many westerners considered seceding from the Union. Known as the “Spanish Conspiracy,” this plot included many influential Americans and only faded after the American victory at Fallen Timbers. While revisionist historians ignore the “Spanish Conspiracy,” they do illuminate land speculation by Americans in Spanish territory. Of course they conveniently ignore the duplicity of Spanish officials in these plots, and their acceptance of American money. In signing the Declaration of Independence, the Founding Fathers pledged “their lives, their fortunes and their sacred honor.” Many Continental Army officers bankrupted themselves when Congress and their states proved recalcitrant in reimbursing them for incurred expenses. These officers often personally financed their troops and their expeditions because victory required timely action. Of importance for the western region, George Rogers Clark used his personal credit to finance his campaigns, which secured America’s claim.
It takes no “lettered” historian to see that without Clark’s campaigns, America’s western boundary would have ended at the Appalachian Mountains instead of the Mississippi River. With the bankrupt Congress and Virginia treasuries failing to reimburse him, he joined the South Carolina Yazoo Company. Clark’s brother-in-law, Dr. James O’Fallon, negotiated this deal for 3,000,000 acres of land in modern Mississippi. The negotiation involved the Spanish governor of Louisiana, Don Esteban Miró, a somewhat corrupt official. When the Spanish king negated the treaty, Clark, O’Fallon, and the other investors lost their money and grew hateful of Spain. Another, lesser known, negotiation involved former Continental Army Colonel George Morgan and the Spanish ambassador, Don Diego de Gardoqui. Morgan received title to 15,000,000 acres near modern New Madrid, Missouri for establishing a colony. Ironically, an unscrupulous American, James Wilkinson, discussed later in this document, working in conjunction with Miró, negated this deal. Both of these land deals involved the establishment of American colonies in Spanish territory, with the Americans declaring themselves Spanish subjects. Few Spaniards lived west of the Mississippi River, and Spain saw the growing number of American settlers as a threat. However, if these Americans, already disgusted with their own government, became Spanish subjects, they became assets instead. If they cleared and farmed the land, they provided revenue that Spanish Louisiana desperately needed. And since many of these men had served in the Revolution, they provided a ready militia for defending their property. That included defending it against their former country, the United States, which held little authority west of the Appalachian Mountains. Internationally, the weak US became a tragic pawn in the continuing superpower struggle between England and France.
With no naval forces for protection, American merchant mariners became victims of both nations on the high seas. British and French warships stopped American ships bound for their enemy, confiscating cargo and conscripting sailors into their navies. In the Mediterranean Sea, our ships became the targets of the Barbary Pirates, the ancestors of our enemies today. Helpless, our government paid ransoms for prisoners and tribute for safe passage until the Barbary Wars of the early 19th century. Despite all of these problems, most influential Americans still “looked inward” and feared a strong central government more than foreign domination. When cries of outrage came from the western frontiers regarding Indian depredations, our leaders feared a “standing army” even more. In the world of the Founding Fathers, the tyranny of King George III’s central government had created their problem. The king had further used his “standing army” to oppress the colonists and infringe on their liberties. Congress also possessed more recent examples of the problems with a “standing army” during the American Revolution. First came the mutiny of the Pennsylvania Line in January 1781, staged to force a redress of grievances. Since the beginning of the war, in 1775, the Continental soldiers had endured almost insurmountable hardships, as explained previously. The soldiers rarely received pay, and when they did, it came in the almost worthless “Continental Currency,” which inflation further devalued. This forced severe hardships on the soldiers’ families as well, and many lost their homes and farms. The soldiers marched on the then-capital, Philadelphia, seeking redress of these grievances. Forced into action, Congress addressed their problems with pay and the soldiers rejoined the Army. A second, though less well known, mutiny occurred in the New Jersey Line shortly thereafter, with different results.
To “nip” a growing problem “in the bud,” Washington ordered courts-martial and the execution of the ringleaders. The last such trouble occurred in the final months of the war, in the Continental Army camp at Newburgh, New York. Dissatisfied with congressional inaction on their long-overdue pay, many officers urged a march on Philadelphia. Fortunately, Washington defused this perceived threat against civil authority, and quashed the strong possibility of a military dictatorship. However, Congress realized that it needed some military force for defending the veterans settling on their land grants. The delegates authorized the First United States Regiment, consisting of 700 men drawn from four state militias for a one-year period. I have read countless sources describing the inadequacy of this force, highlighting congressional incompetence and non-compliance by the states. The unit never achieved its authorized strength, the primitive conditions on the frontier hindered its effectiveness, and corrupt officials mismanaged its supplies. Scattered in small garrisons throughout the western territories, it never proved a deterrent against the Indians. No incentives existed for enlisting in this regiment, and it attracted few of what we would today call “quality people.” Again confirming state dominance over the central government, this “army” came from a militia levy on four states, in effect a draft. A tradition of the time provided for the paying of substitutes by the men conscripted during these militia levies. Sources reflect that most of these substitutes came from the lowest levels of society, including those escaping the law. From whatever source these men came, at least they served, and mostly did their best under difficult circumstances. Routinely, once the soldiers assembled, they had to learn the skills needed to perform their duties. To defend the western settlements, the small garrisons had to reach their destinations via river travel.
Once at their destination, they often had to construct their new installations using the primitive tools and resources available. The primitive transportation system often delayed the arrival of the soldiers’ pay and supplies, forcing hardships on the troops. Few amenities existed at these frontier installations, and the few settlements provided little entertainment for the troops. Unfortunately, once the soldiers achieved a level of professionalism, they reached the end of their enlistments. With few incentives for reenlistment, the process had to begin again, with the recruiting and training of a new force. Fortunately, many prominent Americans saw that the country needed a different form of government to ensure its survival. Despite the best intentions and established rules, few people followed those rules or respected those intentions. The Constitutional Convention convened in Philadelphia in May 1787, with George Washington unanimously elected as its president. As the delegates began the process of forming a “more perfect Union,” the old, traditional “colonial” rivalries influenced the process. While most Americans possess at least ancillary knowledge of the heated debates among the delegates, few know the conditions. Meeting throughout the hot summer, the delegates kept the windows of their meeting hall closed, preventing the “leaking” of information. We must remember that this occurred before electric-powered ventilation systems or air conditioning. They kept out the “media,” and none of the delegates spoke with “journalists,” again to maintain secrecy. Modern Americans, often obsessed with media access, may not understand why the delegates kept their deliberations secret. Most of the delegates felt they possessed one chance to create this new government, and achieving the best possible result required their full focus. “Media access” jeopardized this focus, and “leaked” information, with its potential interruptions, jeopardized their chance of success.
We find this incomprehensible today, with politicians running toward television cameras, “leaking” information and disclosing national secrets. Unfortunately a “journalistic elite” exists today, misusing the First Amendment, with many “media moguls” believing themselves the “kingmakers” of favorite politicians. The delegates sought the best document for satisfying the needs of the most people, making “special interest groups” secondary. Creating a united nation proved more important than prioritizing regional and state desires. The delegates debated, and compromised, on various issues, many of which remain important today. They worried over the threat of dominance by large, well-populated states over smaller, less-populated ones. Other issues concerned taxation, the issue that sparked the American Revolution, and import duties, which pitted manufacturing states against agricultural states. Disposition of the mostly unsettled western land, claimed by many states, proved a substantial problem for the delegates. The issue of slavery almost ended the convention, and the delegates compromised, achieving the best agreement possible at the time. On September 17, 1787 the delegates adopted the US Constitution and submitted it for approval by the individual states. Again, merely passing laws and adopting resolutions does not immediately solve problems or change people’s attitudes. Ratification of the Constitution required the approval of nine of the thirteen states, which occurred on June 21, 1788. However, two important large states, New York and Virginia, still debated ratification. Several signers of the Declaration of Independence, and delegates to the Constitutional Convention, urged the defeat of the Constitution. The fiery orator Patrick Henry, of “Give me liberty, or give me death” fame, worked hard to defeat it in Virginia. Even the most optimistic supporters gave the Constitution, and the nation, only a marginal chance at survival.
U.S. Naval Observatory | Earth Orientation Department

In 1956, following several years of work, two astronomers at the U.S. Naval Observatory (USNO) and two astronomers at the National Physical Laboratory (Teddington, England) determined the relationship between the frequency of the cesium atom (the standard of time) and the rotation of the Earth at a particular epoch. As a result, they defined the second of atomic time as the length of time required for 9 192 631 770 cycles of the cesium atom at zero magnetic field. The second thus defined was equivalent to the second defined by the fraction 1 / 31 556 925.9747 of the year 1900. The atomic second was set equal, then, to an average second of Earth rotation time near the end of the 19th century. The Rapid Service/Prediction Center of the International Earth Rotation Service (IERS), located at the U.S. Naval Observatory, monitors the Earth's rotation. Part of its mission involves the determination of a time scale based on the current rate of the rotation of the Earth. UT1 is the non-uniform time based on the Earth's rotation. The Earth is constantly undergoing a deceleration caused by the braking action of the ocean tides. Through the use of ancient observations of eclipses, it is possible to determine the deceleration of the Earth to be roughly 2 milliseconds per day per century. This is an effect which causes the Earth's rotational time to slow with respect to the atomic clock time. Since about one century has elapsed since the defining epoch (i.e., since 1900), the difference has accumulated to roughly 2 milliseconds per day. Other factors also affect the Earth's dynamics, some in unpredictable ways, so that it is necessary to monitor the Earth's rotation continuously. In order to keep the cumulative difference in UT1-UTC less than 0.9 seconds, a leap second is inserted periodically in the atomic UTC time scale to decrease the difference between the two.
This leap second can be either positive or negative depending on the Earth's rotation. Since the first leap second in 1972, all leap seconds have been positive. This reflects the general slowing trend of the Earth due to tidal braking. Confusion sometimes arises over the misconception that the occasional insertion of leap seconds every few years indicates that the Earth should stop rotating within a few millennia. The confusion arises because some mistake leap seconds as a measure of the rate at which the Earth is slowing. The one-second increments are, however, indications of the accumulated difference in time between the two systems. As an example, the situation is similar to what would happen if a person owned a watch that lost two seconds per day. If it were set to a perfect clock today, the watch would be found to be slow by two seconds tomorrow. At the end of a month, the watch will be roughly a minute in error (thirty days of the two second error accumulated each day). The person would then find it convenient to reset the watch by one minute to have the correct time again. This scenario is analogous to that encountered with the leap second. The difference is that instead of resetting the clock that is running slow, we choose to adjust the clock that is keeping a uniform, precise time. The reason for this is that we can change the time of an atomic clock while it is not possible to alter the Earth's rotational speed to match the atomic clocks. Currently the Earth runs slow at roughly 2 milliseconds per day. After 500 days, the difference between the Earth rotation time and the atomic time would be one second. Instead of allowing this to happen a leap second is inserted to bring the two times closer together. The decision of when to introduce a leap second in UTC is the responsibility of the International Earth Rotation Service (IERS).
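The watch analogy above amounts to a one-line calculation. The sketch below, in Python, uses only the figures quoted in the text (roughly 2 ms/day of drift, and the 0.9 s limit on |UT1 − UTC|); the function name and structure are illustrative, not any official IERS algorithm:

```python
# Leap-second bookkeeping sketched from the figures quoted in the text.
# The drift rate and threshold are the approximate values given there.

DRIFT_PER_DAY = 0.002  # Earth currently runs slow by about 2 ms per day
THRESHOLD = 0.9        # IERS keeps |UT1 - UTC| below 0.9 seconds

def days_until_leap_second(drift_per_day=DRIFT_PER_DAY, threshold=THRESHOLD):
    """Days of steady drift before the accumulated difference reaches the limit."""
    return threshold / drift_per_day

print(days_until_leap_second())  # about 450 days before the 0.9 s limit is reached
```

At the text's round figure of one second per 500 days, this is consistent with the observed cadence of roughly one leap second every year or two.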
According to international agreements, first preference is given to the opportunities at the end of December and June, and second preference to those at the end of March and September. Since the system was introduced in 1972, only dates in June and December have been used. The official United States time is determined by the Master Clock at the U. S. Naval Observatory (USNO). The Observatory is charged with the responsibility for precise time determination and management of time dissemination. Modern electronic systems, such as electronic navigation or communication systems, depend increasingly on precise time and time interval (PTTI). Examples are the ground-based LORAN-C navigation system and the satellite-based Global Positioning System (GPS). Navigation systems are the most critical application for precise time. GPS, in particular, is widely used for navigating ships, planes, missiles, trucks, and cars anywhere on Earth. These systems are all based on the travel time of electromagnetic signals: an accuracy of 10 nanoseconds (10 one-billionths of a second) corresponds to a position accuracy of about 3 meters (or 10 feet). Precise time measurements are needed for the synchronization of clocks at two or more sites. Such synchronization is necessary, for example, for high-speed communications systems. Power companies use precise time to control power distribution grids and reduce power loss. Radio and television stations require precise time (the time of day) and precise frequencies in order to broadcast their transmissions. Many programs are transmitted from coast to coast to affiliate stations around the country. Without precise timing the stations would not be able to synchronize the transmission of these programs to local audiences. All of these systems are referenced to the USNO Master Clock. Very precise time is kept by using atomic clocks. 
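The "10 nanoseconds corresponds to about 3 meters" figure above is simply light travel time. A minimal sketch (the function name is ours):

```python
# Position error implied by a timing error, via the speed of light.
C = 299_792_458.0  # speed of light in m/s (exact by definition)

def position_error_m(timing_error_s):
    """Distance light travels during the given timing error, in meters."""
    return C * timing_error_s

print(position_error_m(10e-9))  # just under 3 meters, matching the text
```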
The principle of operation of the atomic clock is based on measuring the microwave resonance frequency (9,192,631,770 cycles per second) of the cesium atom. At the Observatory, the atomic time scale (AT) is determined by averaging 60 to 70 atomic clocks placed in separate, environmentally controlled vaults. Atomic Time is a very uniform measure of time, stable to about one tenth of one billionth of a second per day. The USNO must maintain and continually improve its clock system so that it can stay one step ahead of the demands made on its accuracy, stability and reliability. The present Master Clock of the USNO is based on a system of some 60 independently operating cesium atomic clocks and 7 to 10 hydrogen maser atomic clocks. These clocks are distributed over 20 environmentally controlled clock vaults, to ensure their stability. By automatic inter-comparison of all clocks every 100 seconds, a time scale is computed which is not only reliable but also extremely stable. Its rate does not change by more than about 100 picoseconds (.0000000001 seconds) per day from day to day. On the basis of this computed time scale, a clock reference system is steered to produce clock signals which serve as the USNO Master Clock. The clock reference system is driven by a hydrogen maser atomic clock. Hydrogen masers are extremely stable clocks over short time periods (less than one week). They provide the stability and reliability needed to maintain the accuracy of the Master Clock System. Very Long Baseline Interferometry (VLBI) is used to determine Universal Time (UT1) based on the rotation of the Earth about its axis. VLBI is an advanced astronomical technique of observing extra-galactic sources (typically quasars) with radio telescopes.
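The ensemble averaging described above can be caricatured in a few lines. The offsets and weights below are invented for illustration only; the real USNO algorithm is far more elaborate (detrending, outlier rejection, dynamic weighting):

```python
# Toy clock-ensemble average: a weighted mean of each clock's offset from a
# common reference, recomputed at every inter-comparison (every 100 s at USNO).

def ensemble_offset(offsets_s, weights):
    """Weighted mean offset of the ensemble, in seconds."""
    assert len(offsets_s) == len(weights)
    return sum(w * x for x, w in zip(offsets_s, weights)) / sum(weights)

# Three hypothetical clocks with offsets of 12 ns, 15 ns, and 9 ns, the middle
# clock trusted twice as much as the others:
print(ensemble_offset([12e-9, 15e-9, 9e-9], [1.0, 2.0, 1.0]))  # weighted mean, about 12.75 ns
```

The payoff of averaging many clocks is that no single clock's glitch moves the computed time scale very far, which is why the ensemble is more stable than any of its members.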
The information gained using VLBI can be used to generate images of the distant radio sources, measure the rotation rate of the Earth, the motions of the Earth in space, or even measure how the tectonic plates where the telescopes are located are moving on the surface of the Earth. Measuring the Earth's rotational motion is critical for navigation. The most accurate navigation systems rely on measurements using satellite systems which are not tied to the Earth's surface. These systems can provide a position accurate to about a meter (a few feet), but the position of the Earth relative to the satellites must also be known to avoid potentially far larger errors. The U.S. Naval Observatory has been in the forefront of timekeeping since the early 1800s. In 1845, the Observatory offered its first time service to the public: a time ball was dropped at noon. Beginning in 1865, time signals were sent daily by telegraph to Western Union and others. In 1904, a U.S. Navy station broadcast the first worldwide radio time signals based on a clock provided and controlled by the Observatory. A time of day announcement can be obtained by calling 202-762-1401 locally in the Washington area. For long distance callers the number is 900-410-TIME. The latter number is a commercial service for which the telephone company charges 50 cents for the first minute and 45 cents for each additional minute. Australia, Hong Kong, and Bermuda can also access this service at international direct dialing rates. You can also get time for your computer by calling 202-762-1594. Use 1200 baud, no parity, 8 bit ASCII. Last modified: 24 October 2001. Approved by EO Dept. Head, USNO.
The steps by which molecules in the primordial soup came together to form the genetic backbone of life are largely unknown. One approach to finding out is to artificially create basic life functions in the laboratory and consider whether such conditions might have been possible in the Earth’s past. Writing in Physical Review Letters, Hubert Krammer and colleagues at the Ludwig Maximilian University of Munich in Germany show they are able to drive the replication of segments of tRNA (transfer ribonucleic acid), the molecule responsible for translating genetic code into the production of specific proteins, using a purely thermal process. Krammer et al. begin by rapidly cooling a solution of four halves of tRNA from high temperature so that the molecules form hairpins—a state where the strand forms a closed loop on itself, except for a snippet of a sequence of bases called a “toe hold.” It is this toe hold, which, in principle, carries enough information to encode a protein, that the authors try to protect and replicate by using a thermal process to coax the hairpins to open and pair with a complementary strand. When Krammer et al. thermally cycle the solution between a lower and a higher temperature, the energy stored in the hairpin (which prefers to bind to a complementary strand instead of itself) compensates for the loss of entropy associated with the molecules pairing up with their partners. This thermally driven process occurs on a relatively fast time scale, of the order of seconds, an important factor since molecules need to replicate faster than they degrade. According to the authors, convection currents in prebiotic liquids could have provided the necessary quenching and thermal cycling. – Jessica Thomas
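The enthalpy-versus-entropy trade-off in the synopsis can be made concrete with the standard free-energy relation ΔG = ΔH − TΔS. The numbers below are invented round values, not data from the paper; they only show how strand pairing can be favorable at low temperature yet unfavorable at high temperature, which is the behavior thermal cycling exploits:

```python
# Toy free-energy bookkeeping for the hairpin/pairing argument above.
# dH and dS are INVENTED round numbers, not values from Krammer et al.;
# they illustrate how a reaction that costs entropy can still be favorable
# (Delta_G < 0) when enough enthalpy is released.

def gibbs_free_energy(delta_h, delta_s, temperature_k):
    """Delta_G = Delta_H - T * Delta_S, in consistent units (J/mol here)."""
    return delta_h - temperature_k * delta_s

dH = -100_000.0  # J/mol: hypothetical enthalpy released on pairing
dS = -250.0      # J/(mol K): hypothetical entropy cost of pairing

for T in (300.0, 350.0, 450.0):
    print(T, gibbs_free_energy(dH, dS, T))
# At low T the pairing is favorable (negative Delta_G); at high T the entropy
# term dominates and the strands separate -- the basis of the thermal cycle.
```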
The Savage Islands, or Ilhas Selvagens in Portuguese, are a small archipelago in the eastern North Atlantic Ocean between the archipelago of Madeira to the north and the Canary Islands to the south. Like these other island groups, the Savage Islands are thought to have been produced by volcanism related to a mantle plume or “hot spot.” Typically, volcanoes are fueled by magma being generated where tectonic plates are colliding or being pulled apart. The active volcanoes remain at the plate boundaries, even as the plates shift. Mantle plumes, in contrast, are relatively fixed regions of upwelling magma that can feed volcanoes on an overlying tectonic plate. When a tectonic plate passes over the mantle plume, active volcanoes form, but they become dormant as they are carried away from the hot spot on the moving tectonic plate. Over geologic time, this creates a line of older, extinct volcanoes, seamounts, and islands extending from the active volcanoes that are currently over the plume. These two astronaut photographs illustrate the northern (top) and southern (bottom) Savage Islands. The two views were taken 13 seconds apart from the International Space Station; the geographic center points of the images are separated by about 15 kilometers. Selvagem Grande, with an approximate area of 4 square kilometers, is the largest of the islands. The smaller and more irregularly-shaped Ilhéus do Norte, Ilhéu de Fora, and Selvagem Pequena are visible at the center of the lower image. Spain and Portugal both claim sovereignty over the Savage Islands. All of the islands of the archipelago are ringed by bright white breaking waves along the fringing beaches. Reefs that surround the Savage Islands make it very difficult to land boats there, and there is no permanent settlement on the islands. The islands serve as nesting sites for several species of seabird including petrels and shearwaters, and they are included on the tentative list of additional UNESCO World Heritage Sites.
Why does this galaxy have so many big black holes? No one is sure. What is sure is that NGC 922 is a ring galaxy created by the collision of a large and a small galaxy about 300 million years ago. Like a rock thrown into a pond, the ancient collision sent ripples of high-density gas out from the impact point near the center that partly condensed into stars. Pictured above is NGC 922 with its beautifully complex ring along the left side, as imaged recently by the Hubble Space Telescope. Observations of NGC 922 with the Chandra X-ray Observatory, however, show several glowing X-ray knots that are likely large black holes. The high number of massive black holes was somewhat surprising, as the gas composition in NGC 922 -- rich in heavy elements -- should have discouraged almost anything so massive from forming. Research is sure to continue. NGC 922 spans about 75,000 light years, lies about 150 million light years away, and can be seen with a small telescope toward the constellation of the furnace (Fornax). Acknowledgement: Nick Rose
How Beans Grow by National Gardening Association Editors If you've ever walked by containers of bulk seed in a garden store, you may have been surprised by the many different colors, sizes and shapes of the beans -- even by the variety of designs on the seed coats and their descriptive names: 'Soldier', 'Wren's Egg', 'Yellow Eye', 'Black Eye', and others. Maybe you were impressed, too, with how big some of these seeds are. Underneath the large, hard seed coat is an embryo, a tiny plant ready to spring to life. When you plant a bean seed, the right amount of water, oxygen and a warm temperature (65°F to 75°F) will help it break through its seed coat and push its way up through the soil. The Seed of Life Most of the energy the young plant needs is stored within the seed. In fact, there's enough food to nourish bean plants until the first true leaves appear without using any fertilizer at all. As the tender, young beans come up, they must push pairs of folded seed leaves (or cotyledons) through the soil and spread them above the ground. Beans also quickly send down a tap root, the first of a network of roots that will anchor the plants as they grow. Most of the roots are in the top eight inches of soil, and many are quite close to the surface. What Beans Need Beans need plenty of sunlight to develop properly. If the plants are shaded for an extended part of the day, they'll be tall and weak. They'll be forced to stretch upward for more light, and they won't have the energy to produce as many beans. The bean plant produces nice, showy flowers, and within each one is everything that's necessary for pollination, fertilization and beans. Pollination of bean flowers doesn't require much outside assistance -- a bit of wind, the occasional visit from a bee, and the job is done. After fertilization occurs, the slender bean pods emerge and quickly expand. Once this happens, the harvest isn't far off. Although beans love sun, too much heat reduces production. 
Bean plants, like all other vegetables, have a temperature range that suits them best: they prefer 70°F to 80°F after germinating. When the daytime temperature is consistently over 85°F, most beans tend to lose their blossoms. That's why many types of beans don't thrive in the South or Southwest in the middle of the summer -- it's simply too hot. Beans don't take to cold weather very well, either. Only broad (fava) beans can take any frost at all; other types must be planted when the danger of frost has passed and the soil has warmed up.
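The temperature rules above can be collected into a small decision helper. This is a hypothetical sketch for illustration only (the function name and return strings are invented here, and the thresholds are taken directly from the figures in the text): beans germinate at 65°F to 75°F, tend to drop blossoms above 85°F, and only broad (fava) beans tolerate frost.

```python
# Hypothetical helper illustrating the temperature thresholds described above.
def bean_planting_advice(soil_temp_f, frost_expected, variety="snap"):
    """Return a rough planting recommendation (illustrative only)."""
    if frost_expected and variety not in ("broad", "fava"):
        return "wait: frost danger"
    if soil_temp_f < 65:
        return "wait: soil too cold for germination"
    if soil_temp_f > 85:
        return "caution: heat may cause blossom drop"
    return "plant: conditions suitable"
```

For example, a 70°F soil with no frost expected comes back as suitable, while the same soil with frost forecast does not, unless the variety is a frost-hardy fava.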
Source: http://www.garden.org/foodguide/browse/veggie/beans_getting_started/441
- Hypothermia occurs when the body's core temperature drops below 35°C.
- This typically results from prolonged exposure to cold conditions, especially in damp, wet or snowy weather.
- Early signs: shivering; listlessness; cold, pale, puffy face; impaired speech and impaired judgment.
- Later signs: drowsiness, weakness, slow pulse, shallow breathing, confusion, altered behaviour, stumbling, unsteadiness.
- Move person to warmer area, shield from cold, passive rewarming with space blanket etc., give warm fluids and high-energy foods if possible.

What is hypothermia and what causes it?

Hypothermia occurs when the body's core temperature drops below 35°C. This happens when more heat is lost than the body can produce through shivering and muscle contractions. Hypothermia is the result of prolonged exposure to cold conditions, especially in damp, wet or snowy weather. Inadequate clothing during winter or at night in the wilderness, or falling into cold water, are examples of situations which commonly cause hypothermia. Inactivity rapidly leads to heat loss, and this is worse if the person is injured.

Symptoms and signs of hypothermia

Hypothermia has a gradual onset, and the affected person might lose heat to a critical level before becoming aware of the problem. Early signs include shivering (which stops once body temperature falls below 32°C); listlessness; a cold, pale, puffy face; slurred or incoherent speech; and impaired judgment. This decrease in mental sharpness typically leaves the person unaware of the gravity of the situation. Later signs, indicating severe hypothermia, include overwhelming drowsiness and weakness, slow pulse and shallow breathing, confusion, altered behaviour such as aggressiveness, stumbling when walking and unsteadiness when standing. Infants, the very lean and the elderly are at particular risk.
Elderly people may become hypothermic at temperatures as mild as 10 to 15°C, particularly if they are malnourished, have heart disease or an underactive thyroid, or if they take certain medications or abuse alcohol. Hypothermia can be fatal and therefore needs prompt treatment. Severe hypothermia may be difficult to distinguish from death, because pulses become very difficult or impossible to feel and breathing may be too shallow to notice.

First aid for hypothermia

- Call for an ambulance if the person's level of consciousness is dropping, or you have any doubt about the severity of the condition.
- If possible, move the person to a warmer area, shielded from the cold and wind. Remove wet clothing.
- Passively re-warm the person by wrapping him in a space blanket, blankets, clothing or newspapers, and cover the head. If outdoors, insulate the person from the ground and lie next to him.
- If the person is conscious, give warm fluids and high-energy foods, unless he is vomiting. Don't give any alcohol or caffeinated drinks.
- Keep the person still, as movement draws blood away from the vital organs. Don't massage or rub someone with severe hypothermia, or jostle them during transport. (Cold can interfere with the electric conduction system of the heart, making it prone to irregular rhythms which may lead to cardiac arrest.)
- Do not apply direct heat, such as a hot bath, heating pad or electric blanket. (This is called active re-warming and should not be done unless the person is very far from definitive care, as it carries a risk of burns.)

Prevention of hypothermia

- If you're going to be doing outdoor sports like hiking, research the conditions first and speak to experienced people who know the area. Ask them what they would recommend in terms of gear and available shelter. As a general rule: take along several layers of warm clothing (layers help trap warmed air) and keep the head, hands and feet covered.
- Change out of wet clothes as soon as you can.
Being wet and in the wind rapidly speeds up heat loss from the body.

- Take along sufficient food, especially carbohydrates, and snack regularly. It's also important to stay hydrated, even in cold weather.
- Carry a space blanket; these are available at outdoor and camping shops.

Reviewed by Barry Milner, Instructor, Blue Star Academy of First Aid, BLS National Faculty and First Aid Representative (Resuscitation Council of Southern Africa)
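The staging described above can be summarised as a small lookup by core temperature. This is an illustrative sketch only, not a clinical tool; the function name and return strings are invented, and the thresholds come from the figures in the article (hypothermia below 35°C, shivering stopping below 32°C among the severe signs).

```python
# Illustrative triage sketch based on the core-temperature figures in the text.
# Thresholds are for illustration, not clinical use.
def hypothermia_stage(core_temp_c):
    if core_temp_c >= 35.0:
        return "not hypothermic"
    if core_temp_c >= 32.0:
        return "hypothermia: early signs (shivering) expected"
    return "severe hypothermia: shivering may have stopped"
```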
Source: http://www.health24.com/Medical/First-aid/The-basics/Hypothermia-Client-20120721
Most common smoke detectors (Fig. 13-2) contain a small amount of 241Am, a radioactive isotope. 241Am is produced in and recovered from nuclear reactors. Alpha particles emitted by the decay of 241Am ionize the air (split the air molecules into electrons and positive ions) and generate a small current of electricity that is measured by a current-sensitive circuit. When smoke enters the detector, ions become attached to the smoke particles, which causes a decrease in the detector current. When this happens, an alarm sounds. These detectors provide warning for people to leave burning homes safely, and many lives have been saved by their use. Because the distance alpha particles travel in air is so short, there is no risk of being exposed to radiation by having a smoke detector in the house. Since ionization-type smoke detectors contain radioactive materials, they should be recycled or disposed of as radioactive waste. It is important to follow the instructions that come with the smoke alarm when it needs to be discarded.
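The detection principle can be modelled in a few lines: the alarm circuit watches the ionization current and trips when smoke captures enough ions that the current falls below some fraction of its clean-air baseline. This is a toy sketch of the idea; the 20% threshold and the function name are arbitrary illustrative choices, not a real device specification.

```python
# Toy model of an ionization smoke detector's alarm criterion:
# trigger when the measured current has dropped by more than
# drop_fraction relative to the clean-air baseline current.
def alarm_triggered(baseline_current, measured_current, drop_fraction=0.2):
    """Return True if the current drop exceeds the threshold."""
    return measured_current < baseline_current * (1.0 - drop_fraction)
```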
Source: http://www.lbl.gov/nsd/education/ABC/wallchart/chapters/13/1.html
Lesson Plans and Worksheets: Browse by Subject -- Blood Teacher Resources

Find teacher-approved Blood educational resource ideas and activities.

A series of diagrams and photographs is a vivid tool for delivering a lesson about blood vessels. Each slide has notes for the lecturer to use to explain each slide. Your young biologists will increase their understanding of the structure and function of arteries, veins, and capillaries. The final slide provides a comparison chart for them to copy and complete as a review of the information absorbed.

A thorough commentary on blood type is presented in this handout. Antigens and antibodies are defined, and Punnett squares and a pedigree chart help to clarify. Human biology or genetics learners then apply their knowledge to two situations: two newborn baby girls possibly switched in the hospital, and a crime scene investigation. This engaging resource ends with a lab activity simulating blood typing and identification of the perpetrator.

Although there are vocabulary terms in this PowerPoint that use British spelling, the presentation is attractive and educational. The content flows from the general composition of blood into the different types of blood cells and their functions. The concluding slide has review questions that you can use to assess student retention.

In this blood type worksheet, students create a wheel showing blood type, antigens and the genes involved in coding for each blood type. Students use the wheel to answer 16 questions about blood type, and they complete a chart with the genes, antigens and blood types using what they learned from the wheel.

In this simulation activity, young biologists examine blood types to determine whether the death rate in a hospital was caused by incorrect identification of patient blood types. You will need to obtain and follow the procedures of a blood typing kit in order to carry out this lab activity in your classroom.
Using this scenario makes a blood typing or scientific method lesson more interesting, and the provided lab sheet makes it easier for you to implement.

There are factors that can be controlled and factors that can't be controlled regarding blood pressure. Read through these handouts and learn about the different factors, then answer some questions about the information just learned. There is even an activity to determine resting heart rate and then to calculate one's target heart rate range. Discover why all of this information is important.

In this blood worksheet, students watch a video called "The Epic Story of Blood" and answer 24 questions about the creation of blood, how it is produced, blood donation, blood banks and transfusions. Students then take a short quiz about blood and what they learned in the video.

Here is a sharp presentation on multiple alleles using the classic blood type example. Viewers revisit codominance and dominance and learn that blood type is actually a combination of both. They use Punnett squares to solve blood type problems, and they learn about agglutination and the antibodies that make blood type crossing a topic of study. Follow this PowerPoint with a blood typing lab activity and more Punnett square practice.
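The Punnett-square reasoning these resources rely on can be sketched in code. The helper names and genotype-string convention below are hypothetical, chosen for illustration; the genetics itself is the standard ABO model (A and B codominant, both dominant over O), with each parent contributing one allele at random.

```python
from itertools import product
from fractions import Fraction

# ABO phenotype from a two-letter genotype string such as "AO" or "BB".
def phenotype(genotype):
    alleles = set(genotype)
    if alleles == {"A", "B"}:
        return "AB"          # codominance
    if "A" in alleles:
        return "A"           # A dominates O
    if "B" in alleles:
        return "B"           # B dominates O
    return "O"

# Enumerate the four Punnett-square cells and tally phenotype probabilities.
def offspring_blood_types(parent1, parent2):
    """Return phenotype probabilities for children of two parents."""
    probs = {}
    for a1, a2 in product(parent1, parent2):
        p = phenotype(a1 + a2)
        probs[p] = probs.get(p, Fraction(0)) + Fraction(1, 4)
    return probs
```

For instance, an AO parent and a BO parent yield each of the four blood types with probability 1/4, the classic result students derive by hand.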
Source: http://www.lessonplanet.com/lesson-plans/blood
Standing Bear in his formal attire
National Anthropological Archives, Smithsonian Institution

On their journey westward in 1804, Lewis and Clark learned about the Ponca, a small tribe living on the west bank of the Missouri River and along what are now the lower Niobrara River and Ponca Creek in northeast Nebraska. The two did not meet the tribe, as it was on a hunting trip to the west.

Early Life and Movement to Reservation

Standing Bear was born around 1829 in the traditional Ponca homeland near the confluence of the Niobrara and Missouri rivers. About thirty years later, the tribe sold its homeland to the United States, retaining a 58,000-acre reservation between Ponca Creek and the Niobrara River. On this reservation the Poncas lived a life of hardscrabble farming and fear: the United States did little to protect them from attacks by the Brule Sioux. When the federal government created the Great Sioux Reservation in 1868, the Ponca Reservation was included within its boundaries, depriving the Poncas of title to their remaining lands.

Eviction and Removal

In 1877, the federal government decided to remove the Poncas to Indian Territory. Standing Bear, a tribal leader, protested his tribe's eviction. Federal troops enforced the removal orders, and the Poncas arrived in Indian Territory in the summer of 1878. Discouraged, homesick and forlorn, the Poncas found themselves on the lands of strangers, in the middle of a hot summer, with no crops or prospects for any, as the time for planting was long past. Since the tribe had left Nebraska, one-third had died and nearly all of the survivors were sick or disabled. Talk around the campfire revolved around the "old home" in the north. The death of Chief Standing Bear's sixteen-year-old son in late December 1878 set in motion the events that were to bring a measure of justice and worldwide fame to the chief and his small band of followers.
Honoring a Son's Wish

Wanting to honor his son's last wish to be buried in the land of his birth, and not in a strange country where his spirit would wander forever, Standing Bear gathered a few members of his tribe, mostly women and children, and started for the Ponca homeland in the north. They left in early January 1879 and trekked through the Great Plains winter, reaching the reservation of their relatives, the Omahas, about two months later. Standing Bear carried with him the bones of his son, to be buried in the familiar earth along the Niobrara River.

The Court Case: Standing Bear v. Crook

Because Indians were not allowed to leave their reservation without permission, Standing Bear and his followers were labeled a renegade band. The Army, on the order of the Secretary of the Interior, arrested them and took them to Fort Omaha, the intention being to return them to Indian Territory. General George Crook, however, sympathized with Standing Bear and his followers and asked Thomas Henry Tibbles, an Omaha newspaperman, for help. Tibbles took up the cause and secured two prominent Omaha attorneys to represent Standing Bear. The lawyers filed a federal court application for a writ of habeas corpus to test the legality of the detention, basing their case on the 14th Amendment to the Constitution. The government disputed the right of Standing Bear to obtain a writ of habeas corpus on the grounds that an Indian was not a "person" under the meaning of the law.

The case of Standing Bear v. Crook began on May 1, 1879, before Judge Elmer S. Dundy in U.S. District Court in Omaha and continued into the evening of the following day. On May 12, Judge Dundy ruled in favor of Standing Bear, reasoning that he and his band were indeed "persons" under the law, entitled to sever tribal connections, and free to enjoy the rights of any other person in the land. The government appealed Dundy's decision, but the Supreme Court of the United States refused to hear the case, leaving Standing Bear and his followers free in the eyes of the law.

Death and Commemoration

Standing Bear died in 1908 and was buried alongside his ancestors in the Ponca homeland. At the eastern end of the 39-mile reach of the Missouri National Recreational River is a relatively new bridge linking the communities of Niobrara, Nebraska, and Running Water, South Dakota. The official name of the structure is the Chief Standing Bear Memorial Bridge.
Source: http://www.nps.gov/mnrr/historyculture/standingbear.htm
Teacher's Resources - A Question of Ritual

For the people of the ancient Americas, as for many other peoples worldwide, ceremonies formed a bridge between their world and that of deities and spirits. Through sacred rituals they communicated to their gods and ancestors their hopes for a bounty of food, for protection from the potential disasters wrought by nature or political enemies, and for general good fortune. In the Mesoamerican and Andean belief systems, many different rituals involving dances, ceremonies, festivals and games were performed to maintain a sense of order and balance and to create the conditions necessary for prosperous human cultures to unfold. Much has been made of the evidence for the practice of human sacrifice in both regions; certainly it appears to have been practiced, as it was in certain civilizations all over the world, including Imperial Rome, the early Celtic settlements of Western Europe, and many others. What is more interesting and fruitful to pursue with students is an exploration of the possible belief systems that gave meaning to the CLOTH & CLAY objects, such as this ritual incense burner from Teotihuacan, an ancient Mexican culture.

Classroom Activities and Projects (recommended ages are approximate)

1. Ages 12-16: In groups or as individuals, students can research the symbolism of the mountain in ancient American cultures, investigating such areas as sacred geometry, layout of cities and ritual sites, pyramids, and rituals. The research can result in essays, as well as drawings and 3-dimensional models.

2. Ages 12-16: Within a larger project comparing the beliefs of ancient peoples around the world, students can investigate those of the ancient Americas, including shamanism, use of psychotropic plants, and human sacrifice, tying these practices to the religious beliefs of the cultures practicing them.
Class discussions can touch on how easy it is to sensationalize these practices, and on the importance of putting them in context within the larger belief systems of the civilizations under review.

3. Ages 12-16: Research on the Day of the Dead, a yearly festival in Mexico that takes place in early November, provides insight into ancient and contemporary religious beliefs and rituals. By studying this and other festivals, students who are growing up in the secular societies of the West can develop an understanding of life in a deeply religious culture. Similarly, students who are themselves rooted in a religious culture can learn about other belief systems. In its original form, this festival took place during Miccailhuitontli (late summer) in the Aztec calendar, celebrating the life cycle and honouring the newly dead and the ancestors. Today the day corresponds to All Saints' Day in the Christian calendar. Students working in groups can present the results of their research to the class, with each group covering different aspects of the festival, such as the variety of forms taken by the festivities, the combined Christian/pre-Christian elements, and the significance of ofrendas, calaveras and other related objects.
Source: http://www.textilemuseum.ca/cloth_clay/Resources/ritual.cfm
Until the Aswan High Dam was built, Egypt received a yearly inundation, an annual flood of the Nile. The ancient Egyptians did not realise it, but the flood came from the heavy summer rains in the Ethiopian highlands, which swelled the tributaries and other rivers that joined to become the Nile. This happened yearly, between June and September, in a season the Egyptians called akhet, the inundation. The Egyptians saw it as the yearly coming of the god Hapi, bringing fertility to the land. The first signs of the inundation were seen at Aswan by the end of June, and the river swelled to its fullest at Cairo by September. The flood would then begin to recede about two weeks later, leaving behind a deposit of rich, black silt. The height of the Nile, and thus the amount of silt left behind, determined the amount of crops that the Egyptians could grow: if the inundation was too low, it would be a year of famine.

The Egyptians developed a method of measuring the height of the Nile, known as the Nilometre. Although all Nilometres used by the Egyptians had a single obvious purpose, to mark the highest point of the inundation, they were constructed in one of three different formats -- a slab or pillar, a well, or a series of steps. All three were calibrated using the same unit of measurement, the cubit; the Egyptians broke the cubit into smaller units, which allowed them to keep remarkably accurate records, perhaps more accurate than would have been warranted for the purposes of mere agriculture and taxation. The Nilometre on Elephantine Island near the First Cataract, deep in southern Egypt, always held supreme importance. It was the first outpost where the floods exerted themselves and the first to know when they were over, but the religious significance of the site might have overshadowed its strategic location: it was the home of Khnum, the ram-headed god of the inundation.
During the Eleventh Dynasty a sanctuary was built on the island specifically to celebrate inundations. A new Nilometre replaced a much older one at the edge of Khnum's Temple during the Twenty-Sixth Dynasty; somewhat later, in the Thirtieth Dynasty, a riverside terrace and another Nilometre were added to the nearby Temple of Satet, one of Khnum's celestial consorts. When Egypt fell to Rome, that did not mean an end to Nilometres on Elephantine Island, for Khnum's Nilometre received a new calibrated staircase and a granite roof from the Romans. -- Ralph Vaughan, Nilometers: Measuring the Universe

The Nilometres were usually a series of steps by the Nile, where the water level against the steps would show how high the river had risen, and records of the maximum height of the inundation could be taken. There are Nilometres at the temples at Elephantine, Philae, Edfu, Esna, Kom Ombo and Dendera. These were built from pharaonic times up until Roman times. There was even a Nilometre built during early Islamic times at el-Rhoda in Cairo, possibly the site of an ancient Nilometre, though it used a pillar rather than the usual steps.

The ancient Egyptians viewed Sirius, the 'dog star', as the bringer of new life, because it became newly visible in the sky at the time of the flooding of the Nile, the life-giving inundation which yearly fertilised their crops. The goddess Sopdet (Sothis) was the personification of this star, represented as a woman with a star as her headdress, or as a seated cow with a plant between her horns (just as Seshat's hieroglyph might have been a flower or a star). Her star was the most important of the stars to the ancient Egyptians, and its rising came at the time of the inundation and the start of the Egyptian new year.
She was linked closely with Isis, just as her husband Sah (the star Orion) and son Soped were linked with Osiris and Horus. Isis' sister Nephthys is also somewhat linked to the inundation: in one particular tale, she represents the desert while Osiris represents the inundation itself. When the Nile flood is high enough to reach the desert, flowers bloom in the barren red land. In the story, Osiris and Nephthys have a drunken union, in which Osiris leaves behind his garland of melilot flowers. As the inundation was a sign of fertility, Osiris and Nephthys were thought to have had a child: Anubis, god of mummification.

The ancient Egyptian civil calendar was slightly out of step with the solar year, falling short by about six hours each year. As time went on, the inundation only sometimes arrived during the calendar season of akhet, so the Egyptians relied on the star, rather than the season, as the herald of both the new year and the yearly flood. The other two seasons were peret (growing) and shemu (harvest). During the growing season (after the inundation had receded, if not exactly in the season according to the calendar), the Egyptians planted their crops, around October and November, and tended the fields. The Egyptians watered their crops using an irrigation system of canals, by bringing water to the fields in basins, or by using the shaduf, which is still in use in Egypt today, to raise water from the river to the bank of the Nile. By the time the Nile reached its lowest level, some time around March or April, the crops would be ready for the harvest.

During the inundation, though, there was little for the Egyptian farmer to do. Rather than sit idle for a whole season, Egyptians performed other work in place of paying tax. (Tax was usually taken out of the crops that the farmers grew, and during the inundation the farmland was covered by water!) During the Old Kingdom, this work took the form of building pyramids.
This was not done, as originally and incorrectly thought, by slave labour. In fact, it was done by Egyptian citizens who had little else to do for one season a year. These men were also 'paid' for their work: workmen at the pyramids of the Giza Plateau were given beer three times daily, and records mention five kinds of beer and four kinds of wine. If Egypt had a drought or a year of plenty, it was the will of the Nile god Hapi. The Egyptians gave him offerings and worship in the hope of a good flood that was neither too high nor too low. They celebrated the 'Arrival of Hapi', hoping that their houses wouldn't be washed away and that the Nile would rise enough to provide both water and silt for the farmland. But the Egyptians, despite being able to measure the flood, couldn't change the situation if the Nile's waters weren't at the required level. To them, the inundation was truly in the hands of the gods.
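The quarter-day calendar error mentioned above compounds year after year, and the arithmetic is easy to sketch. This is a simple illustration (the function name is invented here): at six hours of drift per year, the civil calendar slips one full day every four years, and a complete circuit of the 365-day year, the so-called Sothic cycle, takes about 1,460 years.

```python
# The civil calendar ran about six hours (a quarter day) short each year.
DRIFT_DAYS_PER_YEAR = 0.25

def years_to_drift(days):
    """How many years for the calendar to slip by the given number of days."""
    return days / DRIFT_DAYS_PER_YEAR
```

So a drift of a whole season of roughly 120 days takes 480 years, which is why the heliacal rising of Sirius, not the calendar, remained the reliable herald of the flood.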
Source: http://www.touregypt.net/featurestories/nile.htm
Currently, astronomers have two competing models for planetary formation. In one, the planets form in a single, monolithic gravitational collapse. In the second, a core forms first and then slowly accretes gas and dust. In both scenarios, however, the process must be complete before the radiation pressure from the star blows away the gas and dust. While this much is certain, the exact time frames remain a matter of debate: the timescale is expected to be somewhere in the millions of years, but low-end estimates place it at only a few million, whereas upper limits have been around 10 million. A new paper explores IC 348, a 2-3 million year old cluster containing many protostars with dense disks, to determine just how much mass is left to be made into planets.

Dusty disks are frequently not directly observed in the visible portion of the spectrum. Instead, astronomers detect these disks from their infrared signatures. However, the dust is often very opaque at these wavelengths, and astronomers are unable to see through it to get a good understanding of many of the features in which they're interested. As such, astronomers turn to radio observations, to which disks are partially transparent, to build a fuller picture. Unfortunately, the disks glow very little in this regime, forcing astronomers to use large arrays to study their features.

The new study uses data from the Submillimeter Array located atop Mauna Kea in Hawaii. To understand how disks evolve over time, the study compared the amount of gas and dust left in IC 348's disks to younger ones in star-forming regions in Taurus, Ophiuchus, and Orion, which all have ages of roughly 1 million years. For IC 348, the team found 9 protoplanetary disks with masses from 2-6 times the mass of Jupiter.
This is significantly lower than the range of masses in the Taurus and Ophiuchus star-forming regions, which have protoplanetary disks ranging to over 100 Jupiter masses. If planets are forming in IC 348 at the same frequency as in systems astronomers have observed elsewhere, this would seem to suggest that the gravitational collapse model is more likely to be correct, since it doesn't require a large window in which forming planets could accrete. If the core accretion model is correct, then planetary formation must have begun very quickly. While these results don't support any firm pronouncements on which model of planetary formation is dominant, such 2-3 million year old systems could provide an important test bed for exploring the rate at which these reservoirs are depleted.
Source: http://www.universetoday.com/85888/want-to-make-planets-better-hurry/
Evidence from caves in Siberia indicates that a global temperature increase of 1.5° Celsius may cause substantial thawing of a large tract of permanently frozen soil in Siberia. The thawing of this soil, known as permafrost, could have serious consequences for further changes in the climate. Permafrost regions cover 24 percent of the land surface in the northern hemisphere, and they hold twice as much carbon as is currently present in the atmosphere. As the permafrost thaws, it turns from a carbon sink (meaning it accumulates and stores carbon) into a carbon source, releasing substantial amounts of carbon dioxide and methane into the atmosphere. Both of these gases enhance the greenhouse effect. By looking at how permafrost has responded to climate change in the past, we can gain a better understanding of climate change today.

A team of international researchers looked at speleothems, such as stalagmites, stalactites, and flowstones. These are mineral deposits formed when water from snow or rain seeps into caves. When conditions are too cold or too dry, speleothem growth ceases, since no water flows through the caves. As a result, speleothems provide a detailed history of periods when liquid water was available, as well as an assessment of the relationship between global temperature and permafrost extent. Using radiometric dating and growth data from six Siberian caves, the researchers tracked the history of permafrost in Siberia for the past 450,000 years. The caves were located at varying latitudes, ranging from the boundary of continuous permafrost at 60 degrees North to the permafrost-free Gobi Desert. In the northernmost cave, Lenskaya Ledyanaya, no speleothem growth has occurred since a particularly warm period around 400,000 years ago; the growth at that time suggests water was flowing in the area due to a thaw in the permafrost.
The extensive thawing at that time allows for an assessment of the global warming required to cause a similar change in the permafrost boundary. Global temperatures then were only 1.5°C warmer than today, suggesting that we could be approaching a critical point at which the coldest permafrost regions would begin to thaw. Not only will increasing global temperatures cause substantial thawing of permafrost, but they may also create wetter conditions in the Gobi Desert, based on data from the southernmost cave for the same period. This suggests a dramatically changed environment in continental Asia. Aside from changes in temperature and precipitation, thawing permafrost enables coastal erosion and the liquefaction of ground that was previously frozen. This poses a risk to the infrastructure of Siberia, including major oil and gas facilities.
Source: http://arstechnica.com/science/2013/02/small-rise-in-global-temperatures-could-thaw-permafrost/
Storytelling in the Classroom Storytelling is not the same as reading aloud because it requires greater interaction between the teller and the listener. Therefore, storytelling is a great tool for improving children's communication skills, as well as developing language skills, comprehension, and self-awareness. (Reading and Communication Skills) To help students use storytelling to foster creativity and to develop social skills and language skills—speaking, listening, and comprehension. Print out selected Building Blocks Character Cards, Know-Kit Cards, character bios, and ABC Coloring Book pages. If you're going to use the Optional Activity for Older Students, gather a variety of at least 10 to 12 everyday items (pencil, spoon, umbrella, pair of shoes, tape, etc.). - Gather the students in a large group. Choose a Building Blocks character picture and introduce him or her as a new student in the class. Tell the students some important things to know about their new friend based on the character cards and biography. We will use Ali Rabbit for our example. Ali Rabbit is 5 years old. He lives with his mom, dad, grandparents, and great grandpa. He has six brothers and sisters. The oldest is aged 17 and the youngest is 5. His favorite sport is soccer. He likes to play on the computer and make music. His good friend is Thurgood Turtle. - Now, start a story about Ali Rabbit's first day in your classroom. Include specific places and people in the story—the bus driver, the media specialist, etc. Model good storytelling practices for the students to remind them to speak clearly and loudly and to express feelings as they tell the story. Is Ali excited, frightened, or shy about his first day at a new school? - Then, pass the picture of Ali to a student and have him or her add to the story. Continue passing the picture around the class until Ali's first day at school is complete. You may need to ask questions to prompt the children's imaginations.
- Next, divide the class into small groups. Select several Know-Kit Cards and/or ABC Coloring Book pages that show Ali Rabbit. For example: Ali asleep on the soccer field, Ali playing the keyboard, Ali crying when someone took away his keyboard, Ali eating peanut butter and apples, Ali at his fifth birthday party, or Ali with his friends. - Let the group talk about the pictures as they put them into a sequence and begin to make up a story that goes with the pictures. Depending on the age of your students, you may have to help them decide on the sequence of the pictures they will use. - Have each group come to the front of the class with their pictures and share their story. Have others in the class participate by asking questions about the story or the characters. - Finally, mix up all the pictures and distribute them around the class. Start a story based on the picture you hold. Then, call on a child to add to your story, using the character in the picture he or she holds. Go around the room and call on students to add to the class story. - Have the students talk about the different stories and tell what they liked best. Was it more fun to have planned a story with their small group or to mix and match stories as a whole group? Why? Optional: For Older Students Place all the everyday items you’ve gathered into a big box. Be sure not to let the students see what's in the box. Then, tell the students that they're going to tell a story using the props in the box. Pass out one prop to the first student and have him or her start the story. Then, in the middle of the story, pass out another prop to a different student, which is the cue to jump into the story. Continue this until all the props are given out. The stories should make everyone laugh with the mismatched items and complicated storyline. You can start again using the same props, but in a different order. Or, you can have students find their own props to tell an add-on group story. 
What's the Latest Development? Using location data gathered by personal mobile phones, researchers from Carnegie Mellon University have created the first map that tracks the spread of malaria by examining movement patterns among Kenya's population. Between 2008 and 2009, researchers followed the movement of 15 million Kenyans, out of a total population of close to 40 million. Then they combined the data with "maps of population distribution and malaria prevalence over the same period to create, for the first time, a map that correlates large-scale trends in movement to the spread of the disease." What's the Big Idea? Because of how malaria spreads, the disease is particularly sensitive to the movement of affected populations. "Malaria is usually associated with the bite of infected female mosquitoes. But once humans contract the disease, they can act as a vector if they are bitten by uninfected insects, which then spread the parasite to other people." Tom Scott of the Mosquito Research Laboratory at the University of California, Davis, said the research will be essential in finding and targeting the human transmission routes of the parasites that cause malaria.
Discussion about a core curriculum brings to light some fundamental differences in thinking about education and learning. A core curriculum is a set of educational goals that focuses on making sure that all students using it will learn set material tied to age/grade level. The design of such a curriculum is based on things that do not have much relevance to a homeschooler: grade levels, learning divided into discrete subjects (math, science, history, etc.), testing goals, and classroom management needs. Recently the idea of a national core curriculum has been receiving a lot of attention. The Core Knowledge Sequence/Common Core State Standards is a national initiative to create a standard path of education across the states, in part to make sure that students across states have similar academic skills, in part to simplify the production of textbooks, and in part to create a "level" playing field for future job prospects. A recent article (American Educator, Winter 2010–2011) calls the core curriculum an idea whose time has come. The article subtitle reads "How a core curriculum could make our education system run like clockwork." The article goes on to state: A curriculum sets forth that body of knowledge and skill our children need to grow into economically productive and socially responsible citizens. A common curriculum—meaning one that is shared by all schools—is what binds all the different actors together; instead of going off in radically different directions and inadvertently undermining each other, teachers, administrators, parents, textbook writers, assessment developers, professors of education, and policymakers all work in concert. A common core curriculum—meaning one that fills roughly two-thirds of instructional time—leaves teachers ample room to build on students' interests and address local priorities.
In countries with a common core curriculum, the benefits are many: - Teachers need not guess what will be on assessments; if they teach the curriculum, their students will be prepared. - Students who change schools are not lost, so time is not wasted on review and remediation. Their new teachers may have different lesson plans and projects, but the core content and skills to be mastered in each grade are the same. - Textbooks are slim, containing just the material to be learned in a given year (not hundreds of incoherent pages trying to "align" to different states' vague standards and different notions of proficiency). - Teacher preparation programs ensure that candidates have mastered the curriculum, and ways to teach it, before they become teachers. - Teachers across the hall, across town, and (thanks to the Internet) across the country are able to collaborate on developing and refining lesson plans and other instructional materials. In other words, a core curriculum is there to help teachers, schools, test companies, and textbook manufacturers create an infrastructure for traditional, school-based education. The idea of a core curriculum in the presented format doesn't make sense for homeschooling. One of the benefits of homeschooling is that the learning process can be tailored to the learner. A curriculum guide uses a variety of sources to create a philosophical framework for education. The choices for what gets taught and when are based on sociology, politics, educational theory, developmental psychology, and the needs of a large institutional system. What has gotten lost along the way is developing a system of learning that benefits the individual. Instead of tying learning to a chart, you can personalize your child's learning. A curriculum is a terrific place to find ideas and to help organize your thoughts and plans for learning, but don't let it define your family's homeschooling.
Look at two different aspects of education: the learning process & the materials/opportunities. - Let ability and comprehension determine the pace of the subject material - Assist the learner with taking control of the learning process - Spend as much or more time on understanding logic, critical thinking, problem solving, etc., as on fact acquisition - Learn to examine and evaluate the structure creating the information base being used for learning - Teach how to evaluate learning as it is in progress so that the learning process is continually developed - Build learning experiences so that the learner can apply previous knowledge and processes to current learning. - Don't limit learning to the contents of a textbook or syllabus. Let the learner follow the ideas and information. - Expand the resource base: a wider variety of materials, different formats, different interpretations, different sources, different foci. - Build learning communities that challenge ideas and create opportunities to talk about ideas & learning - Make the most of the community learning opportunities. Let me start by saying that I agree there is a set of foundation knowledge that makes learning easier. However, people have taken a very large step from the idea of basic skills, ideas, and facts to a K-12 guide detailing what should get taught when. For example, the folks at the Core Curriculum site have stated: The more you know, the more you are able to learn. In one sense, this is a truism. In another, it's just wrong. This is a misinterpretation of how learning works. Yes, the larger your knowledge base, the more information you have for processing new experiences and ideas. However, a knowledge base is much more than facts. The most critical part of the knowledge base is the ability to process and integrate information in a way that gives you a framework to evaluate new information.
Programs that focus on facts first, thinking later take away one of the most valuable learning skills: the ability to put information in context. Many educational programs focus on a parts-to-whole style of instruction: first we'll teach the basic components, then we'll show how it all goes together and makes sense. Others insist that whole-to-parts is the only way to go: we have to show the ideas as larger concepts, and then we will take it apart and look at what went into it. The assumption in the whole/parts debate is that one method is significantly better than the other. The decision about how to approach a topic and where to start shouldn't be set in stone. How much experience does the learner have with the subject? If the ideas are completely new, it is worth taking some time to help the learner get an idea of the larger picture. Does the learner have previous understanding of a similar situation which will make decoding this one more straightforward? Does the learner see the link between the two situations? Instead of worrying about parts and whole, we can focus on finding ways for the learner to determine which learning approach will be best for this situation. The approach will differ from person to person based on learning styles, experience, and the materials available. One complaint about asking kids to think critically is that we are really asking them to guess how to figure things out. Wouldn't it be much simpler to just give them the steps and the facts and go from there? Yes, it would, if your goal is the ability to recall and return a set of preprocessed information. However, if your goal is for a student to be able to assess unfamiliar ideas and information and to determine a structure to think about it, then no. The advantage to building critical thinking skills early on is that the learner is learning how information is gathered, how it is analyzed, and how to develop ideas and theories.
With these skills, a learner in a totally unknown situation can figure out what's happening & develop an approach to solving a problem or putting together the facts for a larger picture. Rather than fuss over parts-to-whole or whole-to-parts, the way information and learning strategies are introduced should reflect the learner's prior knowledge and experience, both with the subject and with the learning skills needed.
Lamp Shell Terebratulina septentrionalis It would be easy to mistake a lamp shell for a small bivalve mollusk, as both have a hinged shell in two parts and live attached to the sea floor. Lamp shells, however, have a very thin, light shell and the two parts are different sizes, with the smaller one fitting into the larger. The shell valves cover the dorsal and ventral surfaces of the animal whereas in bivalve mollusks they are on the left and right side of the body. Lamp shells attach their pear-shaped shell to hard surfaces by means of a fleshy stalk that emerges from a hole in the ventral shell valve. With the shell valves gaping open, the animal draws in a current of water that brings plankton with it. Taking up most of the space inside the shell is a feeding structure called the lophophore, which consists of two lateral lobes and a central coiled lobe covered in long ciliated tentacles. The beating of the cilia creates the water current. Lamp shells are found worldwide, but they are especially abundant in colder waters. In the northeastern Atlantic, Terebratulina septentrionalis is mostly found in deep water, while along the east coast of North America, it commonly occurs in shallow water. This species is very similar to Terebratulina retusa.
Pronounced: Gas-tro-ee-sof-a-geal re-flux disease Gastroesophageal reflux (GER) is the backflow of acid or food from the stomach to the esophagus. The esophagus is the tube that connects your mouth and stomach. GER is common in infants. It may cause them to spit up. Most infants outgrow GER within 12 months. GER that progresses to esophageal injury and other symptoms is called gastroesophageal reflux disease (GERD). The backed-up acid irritates the lining of the esophagus. It causes heartburn, a pain in the stomach and chest. GERD can occur at any age. GERD is caused by acid or food from the stomach that regularly backs up into the esophagus. It is not always clear why the acid backs up. The reasons may vary from person to person. There may also be a genetic link in some GERD. Acid is kept in the stomach by a valve at the top of the stomach. The valve opens when food comes in. It should close to keep in the food and acid. If this valve does not close properly, the acid can flow out of the stomach. In addition to GERD, the valve may not close because of: The following factors increase the chances of developing GERD: Symptoms of GERD include: Your doctor will ask about your child's symptoms and medical history. A physical exam will be done. Your child may need to see a pediatric gastroenterologist. This type of doctor focuses on diseases of the stomach and intestines. Tests may include: Talk with your doctor about the best treatment plan for your child. Treatment options include the following: Medication options include: Many of these are over-the-counter medications. Surgery or endoscopy may be recommended for more severe cases. It may be considered if lifestyle changes and medications do not work. The most common surgery is called fundoplication. During this procedure, a part of the stomach will be wrapped around the stomach valve. This makes the valve stronger.
It should prevent stomach acid from backing up into the esophagus. This surgery is often done through small incisions in the skin. Last reviewed May 2013 by Michael Woods
In neuroanatomy, a sulcus (Latin: "furrow", pl. sulci) is a depression or fissure in the surface of the brain. It surrounds the gyri, creating the characteristic appearance of the brain in humans and other large mammals. Large furrows (sulci) that divide the brain into lobes are often called fissures. The large furrow that divides the two hemispheres—the interhemispheric fissure—is very rarely called a "sulcus". The sulcal pattern varies between human individuals, and the most elaborate overview of this variation is probably the atlas by Ono, Kubik, and Abernathey: Atlas of the Cerebral Sulci. Some of the larger sulci are, however, seen across individuals - and even species - so it is possible to establish a nomenclature. The variation in the amount of folding of the brain (gyrification) between species is related to the size of the animal and the size of the brain. Mammals that have smooth-surfaced or nonconvoluted brains are called lissencephalic and those that have folded or convoluted brains gyrencephalic. The division between the two groups occurs when cortical surface area is about 10 cm² and the brain has a volume of 3–4 cm³. Large rodents such as beavers and capybaras are gyrencephalic, and smaller rodents such as rats and mice are lissencephalic. In humans, cerebral convolutions appear at about 5 months and take at least into the first year after birth to fully develop. It has been found that the width of cortical sulci increases not only with age, but also with cognitive decline in the elderly.
References
- Hofman MA (1985). Size and shape of the cerebral cortex in mammals. I. The cortical surface. Brain Behav Evol 27(1):28–40. PMID 3836731
- Hofman MA (1989). On the evolution and geometry of the brain in mammals. Prog Neurobiol 32(2):137–158. PMID 2645619
- Sereno MI, Tootell RBH (2005). From monkeys to humans: what do we now know about brain homologies? Current Opinion in Neurobiology 15:135–144.
- Caviness VS Jr (1975). Mechanical model of brain convolutional development. Science 189(4196):18–21. PMID 1135626
- Liu T, Wen W, Zhu W, Trollor J, Reppermund S, Crawford J, Jin JS, Luo S, Brodaty H, Sachdev P (2010). The effects of age and sex on cortical sulci in the elderly. Neuroimage 51(1):19–27. PMID 20156569
- Liu T, Wen W, Zhu W, Kochan NA, Trollor JN, Reppermund S, Jin JS, Luo S, Brodaty H, Sachdev PS (2011). The relationship between cortical sulcal variability and cognitive performance in the elderly. Neuroimage 56(3):865–873. PMID 21397704
- von Bonin G, Bailey P (1947). The Neocortex of Macaca Mulatta. The University of Illinois Press, Urbana, Illinois.
How to Enter Basic Formulas in Excel 2007 As entries go in Excel 2007, formulas are the real workhorses of the worksheet. If you set up a formula properly, it computes the right answer when you first enter it into a cell. From then on, it keeps itself up to date, recalculating the results whenever you change any of the values that the formula uses. You let Excel know that you're about to enter a formula in the current cell by starting the formula with the equal sign (=). Some formulas follow the equal sign with a built-in function, such as SUM or AVERAGE. Many simple formulas use a series of values, or cell references that contain values, separated by one or more of the following mathematical operators: + (plus sign) for addition - (minus sign or hyphen) for subtraction * (asterisk) for multiplication / (slash) for division ^ (caret) for raising a number to an exponential power For example, to create a formula in cell C2 that multiplies a value entered in cell A2 by a value in cell B2, enter the following formula in cell C2: =A2*B2 You can simply select cell C2 and type the entire formula =A2*B2 into the cell. Alternatively, build the formula by pointing, with these steps: Select cell C2. Type = (equal sign). Select cell A2 in the worksheet by using the mouse or the keyboard. This action places the cell reference A2 in the formula in the cell. Type * (Shift+8 on the top row of the keyboard). The asterisk is used for multiplication in Excel. Select cell B2 in the worksheet by using the mouse or the keyboard. This action places the cell reference B2 in the formula. Click the Enter box (the check mark in the Formula bar) to complete the formula entry, while at the same time keeping the cell cursor in cell C2. Excel displays the calculated answer in cell C2 and the formula =A2*B2 in the Formula bar. If you select the cell you want to use in a formula, either by clicking it or moving the cell pointer to it, you have less chance of entering the wrong cell reference. After creating a formula like the preceding one that refers to the values in certain cells (rather than containing those values itself), you can change the values in those cells, and Excel automatically recalculates the formula, using these new values and displaying the updated answer in the worksheet! Using this example, suppose that you change the value in cell B2 from 100 to 50. The moment that you complete this change in cell B2, Excel recalculates the formula and displays the new answer, 1000, in cell C2.
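The recalculation behavior described above can be modeled in miniature. The following Python sketch is a toy model, not Excel's actual engine: the cell names mirror the example, and the tiny one-operator parser (handling only formulas like =A2*B2) is an illustrative invention.

```python
import operator

# The five Excel operators described above, mapped to Python functions.
OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul,
       "/": operator.truediv, "^": operator.pow}

def evaluate(cells, ref):
    """Return the current value of a cell, recomputing a formula on demand.

    Toy model: handles a single binary operation such as "=A2*B2".
    """
    value = cells[ref]
    if isinstance(value, str) and value.startswith("="):
        expr = value[1:]
        for symbol, fn in OPS.items():
            if symbol in expr:
                left, right = expr.split(symbol, 1)
                return fn(evaluate(cells, left), evaluate(cells, right))
    return value

cells = {"A2": 20, "B2": 100, "C2": "=A2*B2"}
print(evaluate(cells, "C2"))  # → 2000
cells["B2"] = 50              # change an input value...
print(evaluate(cells, "C2"))  # → 1000: the formula result updates
```

Because the cell stores the formula rather than its result, every lookup recomputes it from the current inputs, which is the behavior the article describes when B2 changes from 100 to 50.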
Trilobite Olenoides erratus. Source: Mark A. Wilson A fossil is the preserved remains of a life form that lived in prehistoric times. Most commonly, fossils are the mineralized parts of, or the whole of, an organism which is no longer alive. In other cases the fossil may contain actual remains of the dead organism; in particular, DNA may be preserved over long periods of time under the right conditions, since DNA is fundamentally an inanimate molecule. While the most familiar fossils are those derived from animals and plants, including such exotic species as dinosaurs and woolly mammoths, there are also fossils of ancient bacteria. There are several methods of fossil formation, including re-crystallization, permineralization, compression, molding, and entombment. Examination of fossils was the earliest technique of palaeontology, the study of ancient life forms, and it continues, as the companion of molecular biology, to be key in the elucidation of phylogenies (ancestral relationships of species). The earliest animal fossils date from the Cambrian Period, approximately 540 million years before present, although some bacterial fossils exist from at least two billion years before present. Process of formation Most fossils are the result of sedimentary rock formation where coverage of the original organism occurred quickly; preservation and mineralization are encouraged by anoxic (oxygen-deprived) conditions, where decomposition was not able to occur rapidly upon death of the subject. Body parts most readily preserved are teeth and bony animal parts and the chitinous elements of plants, which elements are most resistant to decay and thus have more time to enter their preserved state. Recrystallized Scleractinian fossil coral. Source: Mark A. Wilson One of the simplest processes of fossil formation is re-crystallization.
This is an easily understood process, whereby substances such as teeth and bone, which begin with a high mineral content, have their chemicals replaced by a new crystalline lattice. In a number of instances, mineral replacement of the original body part transpires so gradually that microstructural features are conserved even though a complete transformation of the original organism's material occurs. A shell is termed recrystallized when the original skeletal compounds are still present but in a different crystal form, as in a transition from aragonite to calcite. The process of re-crystallization is often termed replacement. Living organisms ordinarily contain large amounts of volume that is filled with water or gases. When an organism is covered with sediment, those aqueous and gaseous portions of the individual may be replaced with mineral-rich water from an aquifer or surface water body. This process is termed permineralization, and often is best viewed as a replacement phenomenon at the cellular level. Fine-grained or small-scale permineralization produces very detailed fossil specimens. For permineralization to occur, it is essential for the organism to become covered by sediment rapidly upon death, before significant decay has set in. In ideal cases of preservation, individual cell walls can be effectively fossilized, producing an incredible level of microscopic preservation. Compression fossils, most often exemplified by planar plant forms such as leaves or ferns, result from chemical reduction of the complex organic molecules that comprise biotic tissues. The fossil actually preserves original organism material, but in a geochemically altered state. In many cases the preserved fossil is nothing but a thin carbonaceous film. Lepidodendron external mold. Source: Mark A. Wilson Such a chemical change is a manifestation of diagenesis, the transformation of sediment after its original deposition.
In some cases DNA may effectively be extracted from compression fossils. Internal or external mold In some cases the entire organism may vanish, but an exterior mold will be made by the sedimentary rock encasing the original remains of the organism, which has been completely dissolved or destroyed. This casting results in a sculptured cavity within the rock, with the exterior three-dimensional outline of the original organism's exterior; the product of this process is termed an external mold. If this hole is subsequently filled with different minerals, it is a cast and called an internal mold. A common example of this latter type of mold is a bivalve mollusc. Ant entombed in Baltic amber. Source: Anders L. Damgaard In some cases exceptional preservation may occur by the relatively rapid fatal trapping and encasing of a live organism with (usually) an organic substance such as amber or tar. Some of these circumstances create fossils of unusual life-pose forms, and many of these instances provide substantial detail of the subjects. For example, recent research using crystallography has yielded amazing structural detail of the morphology of long-extinct insects that were trapped in amber. Bioimmuration is a form of fossilization whereby a skeletal organism envelopes or subsumes another living creature, preserving the latter, or its mold, within the skeleton. Most typically this process occurs with a sessile skeletal animal, such as a bryozoan or an oyster, which grows along a substrate that covers other sessile encrusting organisms. Frequently the bioimmured organism is soft-bodied and is thus conserved in negative relief in an external mold form. There are also cases where an organism settles on top of a living skeletal organism which grows upwards, preserving the settler within its skeleton. Most of the examples of bioimmuration are from the fossil record of the Ordovician Period.
The geological record and the fossil record are respectively the prehistoric sequences of events for rocks and organisms. In the early days of research in these fields, in the 19th century, the age of rocks was inferred from the best estimates of the age of the fossil organisms found entrained within those rocks. The fossil record and floral and faunal succession form the basis of biostratigraphy, the science of determining the age of rocks based on the fossils they contain. For the earliest years of geological study, biostratigraphy and superposition were the chief methods of assigning the relative age of rocks. The geologic time scale was first developed based on the relative ages of rock strata as determined by pioneering paleontologists. Since the early 1900s, absolute dating methods, including radiometric dating (among them the potassium–argon, argon–argon, and uranium–lead techniques), have been applied. In the case of young fossils, carbon-14 dating has been applied to verify the relative ages obtained from fossils and to provide absolute ages for many fossils. Radiometric dating has shown that the earliest known stromatolites, or bacterial fossils, are more than 3.4 billion years old. Since the close of the 20th century, new DNA-marker techniques in molecular biology have allowed great progress to be made in defining phylogenetic trees, which illustrate the lineage of plant and animal families. These techniques can also use regression methods to establish reasonable estimates of the absolute timelines of common parentage of biotic families. Proterozoic stromatolite cyanobacteria fossil, Bolivia. Source: GNU SNP The most ancient fossils are bacterial colonies constructed from sedimentary layers of rock; these fossils are termed stromatolites. Based on studies of living stromatolites, it has been ascertained that the formation of stromatolitic fossils was biogenetically mediated by microorganism mats via sediment entrapment.
However, abiotic mechanisms for stromatolitic growth are also known, leading to a decades-long scientific debate regarding the biogenesis of certain formations, especially those from the lower to middle Archaean eon. Stromatolites from the late Archaean through the middle Proterozoic eon were chiefly formed by massive colonies of cyanobacteria; the oxygen byproduct of their photosynthetic metabolism first produced Earth's massive banded iron formations and subsequently oxygenated Earth's atmosphere.

History of fossil studies

Georges Cuvier, French zoologist (1769-1832)

The Greek scientist Aristotle observed that fossilized shells had once been living organisms, the first recorded recognition of a temporal relationship between earlier living organisms and their fossilized remains. In the pre-Darwinian era of the 18th century, the English geologist William Smith noticed that rocks of varying ages and origins preserved different fossil assemblages, which succeeded one another in a systematic sequence. He was the first to state that rocks from distant locations could be correlated on the basis of the fossils they contained, and he termed this the principle of faunal succession. Smith expressed no awareness of biological evolution, and he did not speculate on why faunal succession occurred. His contemporary Lord Monboddo, in nearby Scotland, was expressing a clear recognition of evolutionary sequences, but the two are not known to have communicated. Biological evolution would later explain why faunal and floral succession exists: as different organisms evolve, change, and go extinct, they leave behind fossils. Faunal succession was one of the chief pieces of evidence cited by Darwin that natural selection had occurred. Georges Cuvier in 1796 noted that the majority of the animal fossils he examined were the remains of species that had become extinct.
It was not until the era of Charles Darwin and his contemporaries that a clearly stated link was drawn between the hierarchical tree of life and the fossil record.

Image by Ghedoghedo (Wikimedia Commons)

Process of formation
- T. P. Jones and Nick P. Rowe. 1999. Fossil Plants and Spores: Modern Techniques. Geological Society. 396 pages.
- Berndt Herrmann and Suzanne Hummel. 1994. Ancient DNA: Recovery and Analysis of Genetic Material from Paleontological, Archaeological, Museum, Medical, and Forensic Specimens. Springer. 263 pages.
- Paul Selden and John R. Nudds. 2004. Evolution of Fossil Ecosystems. Manson Publishing. 160 pages.
- Douglas H. Erwin and Robert L. Anstey. 1995. New Approaches to Speciation in the Fossil Record. Columbia University Press. 342 pages.
- Eduardo A. M. Koutsoukos. 2007. Applied Stratigraphy. Springer. 488 pages.

History of fossil studies
- William Knight. 1900. Lord Monboddo. John Murray, London. 314 pages.
- Charles Darwin. 1859. On the Origin of Species. Chapter 10: On the Imperfection of the Geological Record.
<urn:uuid:ca3878eb-3957-4e4e-a94f-0f0fcc8f8c98>
CC-MAIN-2013-20
http://www.eoearth.org/articles/view/159006/?topic=49478
2013-05-18T17:27:59
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.935664
2,132
4.0625
4
Bullying is any behavior intended to harm or distress a targeted person or persons, repeated consistently over time. The types of bullying are:
- Physical
- Verbal
- Cyber
- Relational (social aggression)

Signs that your child may be a target of bullying:
- Your child feels sad about attending school
- Refusal to go to school
- Loss of personal items such as toys, clothing, or lunch money
- Appearing withdrawn or anxious
- Changes in sleeping and/or eating habits
- Complaints of headache and/or stomach ache
- Frequent visits to the health office

If you suspect your child is involved in bullying behavior, please consider the following:
- Discuss your concerns with your child, the school, or a social worker
- Determine whether your child has been having any particular problems with other children
- Assess whether your child is experiencing difficulties in other areas
- Help your child understand the serious nature and consequences of bullying behavior
- Share your concerns with your child's teacher or any other significant adult (for example, coaches or scout leaders)

What your child can do when confronted by a bully:
- Ignore the bully or walk away
- Tell the person to stop, then walk away
- Warn that you will get help from an adult, then walk away
- Get help: tell an adult what has happened

Bullies keep bullying when no one acts. Make sure your child communicates their concerns to a trusted adult. Also, if they see bullying happening and feel comfortable doing so, they should firmly state that those actions are inappropriate.
<urn:uuid:be9f62e2-7280-4aef-8de4-3ac6c237a6aa>
CC-MAIN-2013-20
http://www.heightsschools.com/ABFAQ.cfm
2013-05-18T17:26:33
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.936571
287
4.03125
4
On this day in 1863, Union General Ulysses S. Grant breaks the siege of Chattanooga, Tennessee, in stunning fashion by routing the Confederates under General Braxton Bragg at Missionary Ridge. For two months following the Battle of Chickamauga, the Confederates had kept the Union army bottled up inside a tight semicircle around Chattanooga. When Grant arrived in October, however, he immediately reversed the defensive posture of his army. After opening a supply line by driving the Confederates away from the Tennessee River in late October, Grant prepared for a major offensive in late November. It was launched on November 23, when he sent General George Thomas to probe the center of the Confederate line. This simple probe turned into a complete victory, and the Rebels retreated higher up Missionary Ridge. On November 24, the Yankees captured Lookout Mountain on the extreme right of the Union lines, setting the stage for the Battle of Missionary Ridge. The attack took place in three parts. On the Union left, General William T. Sherman attacked troops under Patrick Cleburne at Tunnel Hill, an extension of Missionary Ridge. In difficult fighting, Cleburne managed to hold the hill. On the other end of the Union lines, General Joseph Hooker was advancing slowly from Lookout Mountain, and his force had little impact on the battle. It was at the center that the Union achieved its greatest success. The soldiers on both sides received confusing orders. Some Union troops thought they were only supposed to take the rifle pits at the base of the ridge, while others understood that they were to advance to the top. Some of the Confederates heard that they were to hold the pits, while others thought they were to retreat to the top of Missionary Ridge. Furthermore, poor placement of Confederate trenches on the top of the ridge made it difficult to fire at the advancing Union troops without hitting their own men, who were retreating from the rifle pits.
The result was that the attack on the Confederate center turned into a major Union victory. After the center collapsed, the Confederate troops retreated on November 26, and Bragg pulled his troops away from Chattanooga. He resigned shortly thereafter, having lost the confidence of his army. The Confederates suffered some 6,600 men killed, wounded, and missing, and the Union lost around 5,800. Grant missed an opportunity to destroy the Confederate army when he chose not to pursue the retreating Rebels, but Chattanooga was secured. Sherman resumed the attack in the spring after Grant was promoted to general in chief of all Federal forces.
<urn:uuid:7b1a4a78-5b08-48b8-86b9-bcbde260344d>
CC-MAIN-2013-20
http://www.history.com/this-day-in-history/-battle-of-missionary-ridge?catId=2
2013-05-18T17:58:28
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.975761
513
4.03125
4
In the streets of Prague and in the United Nations headquarters in New York City, Czechs protest against the Soviet invasion of their nation. The protests served to highlight the brutality of the Soviet action and to rally worldwide condemnation of the Soviet Union. On August 21, 1968, more than 200,000 troops of the Warsaw Pact crossed into Czechoslovakia in response to democratic and free market reforms being instituted by Czech Communist Party General Secretary Alexander Dubcek. Negotiations between Dubcek and Soviet bloc leaders failed to convince the Czech leader to back away from his reformist platform. The military intervention on August 21 indicated that the Soviets believed that Dubcek was going too far and needed to be restrained. On August 22, thousands of Czechs gathered in central Prague to protest the Soviet action and demand the withdrawal of foreign troops. Although it was designed to be a peaceful protest, violence often flared and several protesters were killed on August 22 and in the days to come. At the United Nations, the Czech delegation passionately declared that the Soviet invasion was illegal and threatened the sovereignty of their nation. They called on the U.N.'s Security Council to take action. The Council voted 10 to 2 to condemn Russia's invasion; predictably, the Soviet Union vetoed the resolution. The 1968 invasion of Czechoslovakia severely damaged the Soviet government's reputation around the world, and even brought forth condemnation from communist parties in nations such as China and France. Nonetheless, Dubcek was pushed from power in April 1969 and the Czech Communist Party adopted a tough line toward any dissent. The "Prague Spring" of 1968, when hopes for reform bloomed, would serve as a symbol for the so-called "Velvet Revolution" of 1989. In that year, Czech dissidents were able to break the Communist Party's stranglehold on their nation's politics by electing Vaclav Havel, the first noncommunist president in 40 years.
<urn:uuid:cbbbe908-244f-44cc-a4ac-0762f8ebf597>
CC-MAIN-2013-20
http://www.history.com/this-day-in-history/czechs-protest-against-soviet-invasion
2013-05-18T17:19:26
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.973815
388
4.25
4
Mars has a striking red appearance, and in its most favorable position for viewing, when it is opposite the sun, it is twice as bright as Sirius, the brightest star. Mars has a diameter of 4,200 mi (6,800 km), just over half the diameter of the earth, and its mass is only 11% of the earth's mass. The planet has a very thin atmosphere consisting mainly of carbon dioxide (95%) with some nitrogen, argon, oxygen, and other gases. Mars has an extreme day-to-night temperature range, resulting from its thin atmosphere, from about 80°F (27°C) at noon to about -100°F (-73°C) at midnight; however, the high daytime temperatures are confined to less than 3 ft (1 m) above the surface.

Surface Features

A network of linelike markings first studied in detail (1877) by G. V. Schiaparelli was referred to by him as canali, the Italian word meaning "channels" or "grooves." Percival Lowell, then a leading authority on Mars, created a long-lasting controversy by accepting these "canals" to be the work of intelligent beings. Under the best viewing conditions, however, these features are seen to be smaller, unconnected features. The greater part of the surface area of Mars appears to be a vast desert, dull red or orange in color. This color may be due to various oxides in the surface composition, particularly those of iron. About one fourth to one third of the surface is composed of darker areas whose nature is still uncertain. Shortly after its perihelion, Mars has planetwide dust storms that can obscure all its surface details. Photographs sent back by the Mariner 4 space probe show the surface of Mars to be pitted with a number of large craters, much like the surface of Earth's moon. In 1971 the Mariner 9 space probe discovered a huge canyon, Valles Marineris. Completely dwarfing the Grand Canyon in Arizona, this canyon stretches for 2,500 mi (4,000 km) and at some places is 125 mi (200 km) across and 2 mi (3 km) deep.
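The paired Fahrenheit and Celsius figures quoted above follow from the standard conversion C = (F - 32) * 5/9. A quick sketch (the function name is mine) confirms the rounded values:

```python
def fahrenheit_to_celsius(f: float) -> float:
    # Standard conversion: remove the 32-degree offset, then scale by 5/9.
    return (f - 32.0) * 5.0 / 9.0

print(round(fahrenheit_to_celsius(80)))    # 27
print(round(fahrenheit_to_celsius(-100)))  # -73
```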
Mars also has numerous enormous volcanoes—including Olympus Mons (c.370 mi/600 km in diameter and 16 mi/26 km tall), the largest in the solar system—and lava plains. In 1976 the Viking spacecraft landed on Mars and studied sites at Chryse and Utopia. They recorded a desert environment with a reddish surface and a reddish atmosphere. Experiments analyzed soil samples for evidence of microorganisms or other forms of life; none was found, but a reinterpretation (2010) of the results in light of data collected later suggests that organic compounds may have been present. In 1997, Mars Pathfinder landed on Mars and sent a small rover, Sojourner, to take soil samples and pictures. Among the data returned were more than 16,000 images from the lander and 550 images from the rover, as well as more than 15 chemical analyses of rocks and extensive data on winds and other weather factors. Mars Global Surveyor, which also reached Mars in 1997 and remained operational until 2006, returned images produced by its systematic mapping of the surface. The European Space Agency's Mars Express space probe went into orbit around Mars in late 2003 and sent the Beagle 2 lander to the surface, but contact was not established with the lander. In addition to studying Mars itself, the orbiter has also studied Mars's moons. The American rovers Spirit and Opportunity landed successfully in early 2004 and have explored the Martian landscape ( Spirit's last transmission was in 2010). In 2008 NASA's Phoenix lander touched down in the planet's north polar region; it conducted studies for five months. Curiosity, another NASA rover, landed on Mars near its equator in 2012. Analysis of space probes' data indicates that Mars appears to lack active plate tectonics at present; there is no evidence of recent lateral motion of the surface. 
With no plate motion, hot spots under the crust stay in a fixed position relative to the surface; this, along with the lower surface gravity, may be the explanation for the giant volcanoes. However, there is no evidence of current volcanic activity. There is evidence of erosion caused by floods and small river systems as well as evidence of ancient lakebeds. The possible identification of rounded pebbles and cobbles on the ground, and sockets and pebbles in some rocks, suggests conglomerates that formed in running water during a warmer past some 2–4 billion years ago, when liquid water was stable and there was water on the surface, possibly even large lakes or oceans. Rovers have identified minerals believed to have formed in the presence of liquid water. There is also evidence of flooding that occurred less than several million years ago, most likely as the result of the release of water from aquifers deep underground or the melting of ice. However, other evidence suggests that the water would have been extremely salty and acidic. Data received beginning in 2002 from the Mars Odyssey space probe suggests that there is water in sand dunes found in the northern hemisphere, and the Mars Reconnaissance Orbiter, which went into orbit around the planet in 2006, collected radar data that indicates the presence of large subsurface ice deposits in the mid-northern latitudes of Mars. Most of the known water on Mars, however, lies in a frozen layer under the planet's large polar ice caps, which themselves consist of water ice and dry ice (frozen carbon dioxide); the lander Phoenix found and observed frozen water beneath the soil surface in the north polar region in 2008. Because the axis of rotation is tilted about 25° to the plane of revolution, Mars experiences seasons somewhat similar to those of the earth. One of the most apparent seasonal changes is the growing or shrinking of white areas near the poles known as polar caps. 
These polar caps are composed of water ice and dry ice (frozen carbon dioxide). During the Martian summer the polar cap in that hemisphere shrinks and the dark regions grow darker; in winter the polar cap grows again and the dark regions become paler. The seasonal portion of the ice cap is dry ice. When the ice cap is seasonally warmed, geyserlike jets of carbon dioxide gas mixed with dust and sand erupt from the ice.

The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
<urn:uuid:0b209998-4433-4b7d-8629-dc91f68d4b69>
CC-MAIN-2013-20
http://www.infoplease.com/encyclopedia/science/mars-astronomy-physical-characteristics.html
2013-05-18T17:20:33
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.955693
1,348
4.0625
4
Jamestown, it was thought, had simply disappeared. As the first permanent English settlement in North America, Jamestown served as the capital of the Jamestown Colony throughout most of the 17th century. It was there, along Virginia’s James River, that Captain John Smith forged his famous bond with Pocahontas that kept the Jamestown colonists from starving. And it was there that the newly introduced crop of tobacco first flourished in the American colonies. But when the colony’s capital was relocated in 1699 to what is now modern-day Williamsburg, the Jamestown site was largely abandoned and gradually succumbed to erosion. Though many believed that the original fortified town had been washed away, NEH-funded archaeological excavations conducted by the Association for the Preservation of Virginia Antiquities in 1996 uncovered the original seventeenth-century fort, revealing it to be intact on three sides. The digs also uncovered hundreds of early colonial artifacts, including glass and copper works, giving glimpses of daily life in the first American colony. Learn more about the Jamestown 1607 settlement at another NEH-supported project, Virtual Jamestown. Created by researchers at Virginia Polytechnic Institute and State University, this online resource lets users peruse early colonial legal suits, see John Smith’s 1608 map of Virginia, and visit an interactive 3D recreation of an Indian village.
<urn:uuid:47a2e6d7-2ec2-45cb-8393-df99ba81c85b>
CC-MAIN-2013-20
http://www.neh.gov/news/jamestown-rediscovered
2013-05-18T17:57:52
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.959497
288
4.15625
4
Classification of Nouns

Nouns are categorized by the vowels and consonants they include. By their first consonant, nouns fall into two groups: 15 Nouns and 16 Nouns. By their vowels, nouns fall into 5 groups: a, ä; å, ø; o, ö; u, y; i, e.

15 Nouns: the first consonant of these nouns is h, j, m, n, r, s, or s'
16 Nouns: the first consonant of these nouns is c, d, l, p, t, v, or l'

Categorizing by vowels depends on how many syllables a noun has and which syllable's vowel we must use:
- If a noun has 2 syllables, we use the vowel of the second syllable
- If a noun has 3 syllables, we use the vowel of the third syllable
- If a noun has 4 syllables, we use the vowel of the second syllable
- If a noun has 5 or more syllables, we use the vowel of the syllable before the last syllable

The vowel in a syllable can be a single vowel or a diphthong. If the syllable contains a diphthong, we choose the vowel according to the following rules:
- If there is "u" or "y", we use it
- If there is a back vowel and "i", we use the back vowel
- If there is a front vowel and "i", we use "i"
- In other diphthongs, we use the first vowel

Definite Articles

Vi Söllidäävin has 10 definite articles. Every definite article is 2 letters: first a consonant, then a vowel.
- If a noun is a 15 noun, the first letter of the article is "v"
- If a noun is a 16 noun, the first letter of the article is "s"

We add the vowel chosen by the rules above to this consonant to form the article. E.g.: tyycciön has 2 syllables, so we use the second syllable, whose vowels are i and ö. "ö" is a front vowel, which means we use "i". And tyycciön is a 16 noun. With this knowledge, the article is "si": Si Tyycciön. More examples: vo riol, su vuun, va raceslain, ve hiynnen.

We use the root word to decide the article of a noun. In a sentence, if a noun takes a prefix, that noun does not take an article. We use the definite article when the noun takes neither prefixes nor the indefinite article.

Indefinite Article

Vi Söllidäävin has 1 indefinite article, and it is a suffix: we add "-me" to the noun. If a noun takes this suffix, it does not take a definite article. E.g.: noidame → a love
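The article-selection procedure above is mechanical enough to express as code. Here is a sketch in Python that assumes pre-syllabified input (the post does not specify Vi Söllidäävin's syllable-division rules); the omission of the s'/l' consonants and the mapping of front vowels other than i and e onto article vowels are my assumptions, not rules from the post:

```python
CONS_15 = set("hjmnrs")      # 15-noun initial consonants (s' variant omitted)
CONS_16 = set("cdlptv")      # 16-noun initial consonants (l' variant omitted)
BACK_VOWELS = set("aåou")    # back vowels, inferred from the pairs a/ä, å/ø, o/ö, u/y
ALL_VOWELS = set("aäåøoöuyie")
# Assumption: front vowels ä/ö/y surface in the article as their back partners;
# the worked examples only exercise a, o, u, i, e, so this mapping is a guess.
TO_ARTICLE_VOWEL = {"ä": "a", "ö": "o", "y": "u"}

def syllable_vowel(syllable: str) -> str:
    """Pick the deciding vowel of a syllable, applying the diphthong rules."""
    vowels = [c for c in syllable if c in ALL_VOWELS]
    distinct = list(dict.fromkeys(vowels))   # keep order, drop repeats (e.g. "uu")
    if len(distinct) == 1:
        return distinct[0]
    for v in distinct:                       # "u" or "y" wins outright
        if v in "uy":
            return v
    if "i" in distinct:
        other = next(v for v in distinct if v != "i")
        return other if other in BACK_VOWELS else "i"
    return distinct[0]                       # otherwise, the first vowel

def definite_article(syllables: list) -> str:
    """Build the two-letter definite article for a pre-syllabified noun."""
    cons = "v" if syllables[0][0] in CONS_15 else "s"
    n = len(syllables)
    if n <= 2:
        key = syllables[-1]    # 2 syllables: the second
    elif n == 3:
        key = syllables[2]
    elif n == 4:
        key = syllables[1]
    else:
        key = syllables[-2]    # 5 or more: the syllable before the last
    v = syllable_vowel(key)
    return cons + TO_ARTICLE_VOWEL.get(v, v)

print(definite_article(["tyyc", "ciön"]))       # si
print(definite_article(["ra", "ces", "lain"]))  # va
```

Taking syllables rather than raw strings keeps the sketch faithful to the rules actually given, since syllabification itself is left undefined in the post.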
<urn:uuid:4f2b4892-3258-465a-b222-702ca9785491>
CC-MAIN-2013-20
http://www.omniglot.com/forum/viewtopic.php?p=7048
2013-05-18T17:18:12
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.813639
642
4.34375
4
The War of Independence The violent confrontations between Jews and Arabs in the land of Israel started in the early 1920s. For the most part, the Jews defended themselves against attacks by the Arabs. The Hagana was responsible for defense of the Jewish community, and sometimes British armed forces intervened to end the violence. The Hagana was established in 1920 primarily as a regional organization; in each settlement its members were responsible for its own defense. Every Jewish resident of the land of Israel was eligible to join, the main condition being the person's ability to keep the organization's activities secret. At first the Hagana's limited mobility hindered its capability to carry out attacks. After the 1921 uprisings the Hagana expanded by drafting new members, conducting courses for commanders and accelerating weapons' acquisition. Armaments were purchased abroad or manufactured in factories located primarily in kibbutzim. The Hagana was under the authority of the elected governing institutions of the yishuv (Jewish community in the land of Israel.) In 1936 there was an Arab uprising which called for liberation from British rule. They attacked British forces and Jews as well. In the course of the revolt the British recommended a solution: To divide the land into two states — Arab and Jewish (the Peel Commission Report). The Arab leadership rejected the proposal of partition. The yishuv leadership accepted the principle of partition but opposed the borders suggested by the commission. At the end of World War II, in spite of revelations about the scope of the Jewish Holocaust in Europe and the murder of millions of Jews, Britain refused to permit the establishment of a Jewish state. In postwar Europe there were over 100,000 Jewish refugees who could not return to their homes, but the British refused to allow them to immigrate to the land of Israel. The yishuv fought the decision. 
Britain, whose resources had been drained by the war, turned the issue of the land of Israel over to the United Nations; the organization appointed a special committee which once more recommended partition as a solution to the problem. On November 29, 1947, the UN General Assembly, by a large majority, approved the resolution calling for two independent states to be established alongside each other in the land of Israel (Resolution 181). Members of the Jewish community danced in the streets to celebrate, but shortly afterward Palestinian Arabs and volunteers from Arab countries that rejected the partition plan attacked, and the war began.

The Civil War: December 1947-May 1948

The war that began on November 29, 1947 is known as the War of Independence because it resulted in independence for the Jewish community in the land of Israel, despite the efforts of local Arabs, and later armies from Arab countries, to prevent it. Local Arab troops and volunteers attacked isolated Jewish communities, Jews in cities with mixed populations, and the roads. They also employed terror tactics: all Jewish people, settlements, and property were considered legitimate targets. The most serious terror attacks were against the Haifa oil refineries, where 39 Jews were murdered in December 1947. At the time, Hagana tactics were primarily defensive or focused on specific objectives. Because of Arab attacks, various areas of the yishuv were cut off from the center and became isolated. The Hagana tried to supply besieged areas by means of clandestine convoys. These convoys became the foci of armed confrontations between Jews and Arabs, but in spite of everything, no Jewish settlement was abandoned. Dozens of fighters were killed in attempts to relieve isolated communities. The main efforts were dedicated to bringing supplies to the besieged city of Jerusalem, and this resulted in many victims.
In memory of these martyrs, Haim Gouri wrote the poem Bab El-Wad, the Arabic name for Sha'ar Ha-Gai [gate to the valley], a strategic point where convoys began the climb from the coastal plains to the hills of Jerusalem.

The Catastrophe [An-Nakbeh] 1948

On November 29, 1947, the United Nations General Assembly passed Resolution 181, which called for the partition of Palestine into two states, Arab and Jewish. This was the start of the countdown to the establishment of the state of Israel on May 15, 1948 and to the 1948 Catastrophe, which uprooted and dispersed the Palestinian people. The Catastrophe was: 1) the defeat of the Arab armies in the 1948 Palestine War; 2) their acceptance of the truce; 3) the displacement of most of the Palestinian people from their cities and villages; and 4) the emergence of the refugee problem and the Palestinian Diaspora. First and foremost, Britain bears responsibility for the defeat of the Palestinian Arab people in 1948. It occupied Palestine in 1917 and later received the mandate for the territory from the League of Nations, and from the beginning of its occupation until it relinquished the territory on May 15, 1948, Britain did all it could to suppress the Palestinian people and to arrest and deport their leaders. The British did not allow Palestinians to exercise their right to defend themselves and their land against the Zionist movement. They suppressed the popular uprisings (intifadas) which followed one after another beginning in 1920 (including those of 1921, 1929, 1930, 1935 and 1936). The rulers considered all forms of Palestinian resistance to be illegal acts of terrorism, extremism and fanaticism, and issued unjust laws against every Palestinian who carried arms or ammunition. Punishments included: "Six years in prison for possessing a revolver, twelve years for a grenade, five years of hard labor for possessing twelve bullets and eighteen months for giving false information to a group of soldiers asking for directions."
However, Britain did allow Zionist immigration to Palestine, which led to an economic crisis because of the increasing number of Jews in the land. Britain permitted the Zionist movement to form military forces, such as the Haganah and Etzel. Members carried out bombings in Jerusalem, fired on British soldiers and smuggled arms, immigrants, and more. But that wasn't the end of the story. The British allowed the Zionist movement to have its own armed brigade attached to the British Army. It took part in battles of World War II, thereby acquiring training and experience in the techniques of war. In 1939 ten detachments of Zionist settlement police were formed, each led by a British officer, altogether 14,411 men. There were 700 policemen in Tel Aviv and 100 in Haifa, all of whom were members of the Haganah. By 1948 most Jews over the age of 14 had already undergone military training. For these reasons they were militarily superior to the Palestinians during the '48 war. In 1946 one British commander in Palestine told an American journalist: "If we withdraw British forces, the Haganah will control all of Palestine tomorrow." The journalist asked him if the Haganah could maintain its control of Palestine under such circumstances. He replied: "Certainly, they could do so even if they had to confront the entire Arab world." Before the war broke out and just before they withdrew, the British either turned a blind eye to, or actually conspired with, the Zionists who seized British arms and equipment. This strengthened the Zionist movement's superiority over the Palestinians. It is worth mentioning that when Britain relinquished its Palestinian Mandate to the UN, it was a very influential member of the international organization. The partition resolution 181 was a revival of the partition plan proposed by Britain in the aftermath of the 1936 Revolution.
Their war was like a heroic drama, whose hero was a British military officer — Glubb Pasha — who commanded the Transjordanian Arab troops in the war. The Arab armies did not take up their roles in the theater of war until the strength of the Palestinian people was virtually exhausted.
<urn:uuid:0ff7cba3-feb8-451a-acc8-03479a339f67>
CC-MAIN-2013-20
http://www.onbeing.org/program/two-narratives-reflections-israeli-palestinian-present-part-1-two-narratives-reflections-3
2013-05-18T17:37:47
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.971394
1,586
4.09375
4
Far north within the Arctic Circle off the northern coast of Norway lies a small chain of islands known as Svalbard. These craggy islands have been scoured into shape by ice and sea. The effect of glacial activity can be seen in this image of the northern tip of the island of Spitsbergen. Here, glaciers have carved out a fjord, a U-shaped valley that has been flooded with sea water. Called Bockfjorden, the fjord is located at almost 80 degrees north, and it is still being affected by glaciers. The effect is most obvious in this image in the tan layer of silty freshwater that floats atop the denser blue water of the Arctic Ocean. The fresh water melts off land-bound glaciers and flows over the sandstone, collecting fine red-toned silt. In this image, the tan-colored fresh water flows northward up the fjord and is pushed to the east side of the fjord by the rotation of the Earth. Glaciers here and elsewhere on Spitsbergen are cold-bottom glaciers, which means that they are frozen to the ground rather than floating on top of a thin layer of meltwater. The glaciers are also land glaciers, since their terminus (end) lies on land rather than floating on the water (a tidewater glacier). Land glaciers grow and retreat slowly, balancing fresh snow with the melting and draining of old ice. Their rate of growth or retreat can be affected by global warming. In most cases, including the glaciers around Bockfjorden, global warming has caused glaciers to retreat through increased melting. On the eastern side of Svalbard, however, glaciers are growing from enhanced snowfall. The reason for this pattern remains one of many intriguing unanswered questions of Arctic science in the islands. The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) on NASA's Terra satellite captured this false-color image on June 26, 2001.
<urn:uuid:16ea6476-5c8b-4b53-82e8-132f1d1b3ac1>
CC-MAIN-2013-20
http://www.redorbit.com/images/pic/7968/bockfjorden/
2013-05-18T17:19:38
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.948666
411
4.65625
5
Posted Sunday, Feb. 24, 2013, at 8:00 AM

Years ago, in 1999, some odd pictures were returned from the Mars Global Surveyor space probe orbiting the red planet. They showed what looked for all the world(s) like trees, banyan trees, dotting the Martian landscape. They made quite a splash on the internet, and you can see why; here's a section of one of the pictures:

Image credit: NASA/JPL/Malin Space Science Systems

No fooling, they really do look like trees. The usual pseudoscience websites went nuts—well, more nuts—claiming they were life on Mars. More rational heads knew the features were formed by some sort of natural non-biological process, but what? Over time, more and better pictures were taken, and eventually the story became clear. Hints were found when these features were detected at extreme latitudes, and only in the spring. That meant they must be related to the change in seasons, specifically to the weather warming. That, plus some high-resolution images, made it possible to eventually figure out what they are. Mars has a thin atmosphere that's mostly carbon dioxide. In the winter at the poles it gets cold enough that this CO2 freezes out, becoming frost or snow on the Martian surface—what we on Earth call dry ice. It gets this name because when you warm it up, it doesn't melt: It turns directly from a solid into a gas, a process called sublimation.

Image credit: Arizona State University/Ron Miller

In the Martian spring sunlight warms the ground, which warms the layers of dry ice. They sublimate slowly, and—here's the cool part—from the bottom up. Dry ice is very white and reflective, so sunlight doesn't warm it efficiently. The ground is darker, and absorbs the solar warmth. This tends to heat the pile of dry ice from the sides and underneath at the edges. The newly released gaseous carbon dioxide needs somewhere to go. It might just leak away from the side, but some will find its way deeper into the dry ice pack, toward the center.
If the gas finds a weak spot in the ice it’ll burst through, creating a hole. Other trickles of CO2 under the ice will flow that way as well, and eventually find that hole. What you get, then, is dry ice on the surface laden with cracks, converging on a single spot where the gas can leak out into the Martian atmosphere like dry geysers. The plumes of CO2 will carry with them dust from the ground under the dry ice pack, depositing the darker dust on the brighter surface ice, discoloring it. And when you look at them from above, you see what look like trees!

After a while, the carbon dioxide frost sublimates away entirely, and all you’re left with are weird-looking spidery channels in the ground, up to a couple of meters deep, created by erosion as the carbon dioxide gas wended its way under the dry ice pack. These are even called araneiform features, meaning spider-like. They also kinda look like the cell bodies of neurons. Unsettling. But probably a better situation than an infestation of giant alien tree spiders.

How cool is that? While reading about this, I found various other features that have a similar origin, created from carbon dioxide gas flow. One aspect really got to me, a simple but terrifically strange observation: in some of these features on Mars, the tracks get wider as they go uphill. That’s the opposite of what you’d expect from the flow of an actual liquid; channels created by, say, water on Earth get wider as they flow downhill. This means whatever formed those channels must be flowing uphill. So the culprit must be gas, not liquid.

That is so flippin’ weird! It’s bizarre enough that a major component of a planet’s air might freeze out at all, but then to have some of it flow uphill in the spring, and also to create those creepy spidery things? Mars is a damn odd place.
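The claim that CO2 freezes out of Mars's atmosphere in polar winter can be sanity-checked with the Clausius-Clapeyron relation. The sketch below is my own illustration, not from the article; the CO2 triple point, latent heat of sublimation, and mean Mars surface pressure are standard reference values used as rough inputs.

```python
import math

# Estimate the temperature at which CO2 frost forms at Mars's surface
# pressure, using the Clausius-Clapeyron relation anchored at the CO2
# triple point. All constants are illustrative reference values.
R = 8.314           # J/(mol*K), gas constant
L_SUB = 25_200      # J/mol, CO2 latent heat of sublimation (approx.)
T_TRIPLE = 216.6    # K, CO2 triple-point temperature
P_TRIPLE = 518_000  # Pa, CO2 triple-point pressure
P_MARS = 600        # Pa, rough mean Mars surface pressure

# Clausius-Clapeyron: 1/T_frost = 1/T_ref + (R/L) * ln(P_ref / P)
t_frost = 1.0 / (1.0 / T_TRIPLE + (R / L_SUB) * math.log(P_TRIPLE / P_MARS))
print(f"CO2 frost point at Mars surface pressure: {t_frost:.0f} K")
```

The result lands in the high 140s kelvin, which is in the range Martian polar winters actually reach, so dry-ice frost on the ground is exactly what you'd expect.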
ExploraTour - How to Build a Star "But wait a minute," you say. "We've tried this nuclear fusion stuff on Earth to produce energy and so far it hasn't worked very well. How does the sun succeed where we have failed?" You are right. Operational nuclear power plants on Earth use fission reactions to produce power. They work by splitting apart heavy nuclei like Uranium-235 or Plutonium-239. The combined mass of the resulting lighter nuclei is less than the original heavy nucleus. The missing mass is converted to energy, mostly in the form of heat. Uranium-235 and Plutonium-239 are very rare elements that are difficult to extract. The Earth's reserves will be used up in a relatively short time. In addition, the products left over from the fission reaction are radioactive and thus dangerous to humans. This radioactive waste has to be disposed of very carefully. We have high hopes that nuclear power plants using fusion reactions to liberate energy will soon be developed.
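The "missing mass is converted to energy" step follows E = Δm c². A minimal sketch of the arithmetic, using an illustrative mass defect of roughly 0.215 atomic mass units per U-235 fission (a commonly cited ballpark, not a figure from this page):

```python
# Energy released by one U-235 fission, computed from the mass defect
# via E = delta_m * c^2. The mass defect used here is an illustrative
# ballpark value, not a measured figure from the article.
AMU_KG = 1.6605e-27   # kg per atomic mass unit
C = 2.9979e8          # m/s, speed of light
MEV_J = 1.6022e-13    # joules per MeV

delta_m = 0.215 * AMU_KG      # mass lost in one fission event, kg
energy_j = delta_m * C**2     # E = mc^2, in joules
energy_mev = energy_j / MEV_J

print(f"~{energy_mev:.0f} MeV released per fission")
```

A tiny amount of mass yields an enormous energy, which is why fission (and, in the sun, fusion) releases so much more energy per kilogram of fuel than any chemical reaction.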
After participating in this activity, students will be able to:
- Describe the difference between herbivores, carnivores, and producers.
- Answer questions about the interdependence of herbivores, carnivores, and producers as members of a food chain.
- Answer questions about how pollution affects food chains.

All living organisms depend on one another for food. By reviewing the relationships of organisms that feed on one another, students begin to see how all organisms—including humans—are linked. If students understand the relationships in a simple food chain, they will better understand the importance and sensitivity of these connections, and why changes to one part of the food chain almost always impact another.

A food chain is a simplified way to show the relationship of organisms that feed on each other. It’s helpful to classify animals in a simple food chain by what they eat, or where they get their energy. Green plants, called producers, form the basis of the aquatic food chain. They get their energy from the sun and make their own food through photosynthesis. In the Great Lakes, producers can be microscopic phytoplankton (plant plankton), algae, aquatic plants like Elodea, or plants like cattails that emerge from the water’s surface. Herbivores, such as ducks, small fish, and many species of zooplankton (animal plankton), eat plants. Carnivores (meat eaters) eat other animals and can be small (e.g., a frog) or large (e.g., a lake trout). Omnivores are animals (including humans) that eat both plants and animals. Each is an important part of the food chain.

In reality, food chains overlap at many points—because animals often feed on multiple species—forming complex food webs. Food web diagrams depict all feeding interactions among species in real communities. These complex diagrams often appear as intricate spider webs connecting the species.
See: Unit 1, Lesson 2

This lesson demonstrates that changes in one part of a food chain or web may affect other parts, resulting in impacts on carnivores, herbivores, and eventually on producers. An example of this might be the harmful effects of pollution. The point that should be made is that when something disrupts a food web, humans should try to understand and minimize the disturbance. Students should also come to recognize that humans, too, are part of this complex web of life.
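The way a disturbance to one link ripples through the rest of the chain can be sketched as a toy model. The species and feeding links below follow the Great Lakes examples in the text; the propagation logic is my own illustration, not part of the lesson materials.

```python
# Toy Great Lakes food web: each species maps to the things it eats.
# Species and links follow the lesson's examples; the structure and
# function are illustrative.
eats = {
    "phytoplankton": [],                          # producer
    "zooplankton": ["phytoplankton"],             # herbivore
    "small fish": ["phytoplankton", "zooplankton"],
    "frog": ["zooplankton"],                      # small carnivore
    "lake trout": ["small fish", "frog"],         # large carnivore
}

def affected_by(web, lost_species):
    """Return every species whose food supply is disturbed, directly
    or indirectly, when one species disappears from the web."""
    hit = set()
    changed = True
    while changed:
        changed = False
        for eater, diet in web.items():
            if eater in hit:
                continue
            if lost_species in diet or hit.intersection(diet):
                hit.add(eater)
                changed = True
    return hit

# Pollution wiping out the producers reaches the top of the chain:
print(sorted(affected_by(eats, "phytoplankton")))
```

Removing the producer disturbs every other species in this small web, which is the lesson's central point: changes at one link almost always impact another.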
Formatting marks in a Word document. Imagine that you have typed a few paragraphs. The paragraphs seem very far apart, and the second paragraph starts farther to the right than the first paragraph. You can see what's going on in your document by looking at the formatting marks that Word automatically inserts as you type. These marks are always in documents, but they are invisible to you until you display them.

For example, a dot appears every time you press the SPACEBAR, such as between words. One dot is one space; two dots are two spaces, and so on. Normally there should be one space between each word.

Word inserts a paragraph mark (¶) each time you press ENTER to start a new paragraph. In the picture, there are two paragraph marks between the two paragraphs, which means that ENTER was pressed twice. This creates extra space between paragraphs.

One arrow (→) appears each time TAB is pressed. In the picture there is one arrow before the first paragraph and two arrows before the second paragraph, so TAB was pressed twice before the second paragraph.

To see formatting marks, go to the ribbon, at the top of the window. On the Home tab, in the Paragraph group, click the Show/Hide button (¶). Click the button again to hide formatting marks.

Note: These marks are just for show. They won't be on printed pages, even when you see them on the screen.
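The reason Word can show these marks is that each one corresponds to a real character in the underlying text: a space for each dot, a tab character for each arrow, a newline for each paragraph mark. A small sketch outside Word makes the same information visible (the sample string is my own):

```python
# Each formatting mark Word displays corresponds to an actual
# character in the text: ' ' for a dot, '\t' for a tab arrow,
# '\n' for a paragraph mark. Counting them reveals the same
# information that Show/Hide does on screen.
text = "First paragraph.\n\n\tSecond paragraph, indented with a tab."

spaces = text.count(" ")
tabs = text.count("\t")
paragraph_breaks = text.count("\n")

print(f"spaces={spaces} tabs={tabs} paragraph marks={paragraph_breaks}")
# Two '\n' in a row, like two paragraph marks, means ENTER was pressed twice.
```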
From Ohio History Central The Progressive Movement was a widespread reform effort to cure the many social and political ills in America after the advent of the Industrial Revolution. During the late nineteenth and the early twentieth centuries, the United States of America underwent tremendous change. One of the principal changes was the shift from a predominantly agricultural economy to a much more industrialized one. This change also brought stark social changes to the United States. Now millions of Americans relied on other people -- business owners -- for their livelihood. Oftentimes, the employers reinvested profits back into the company, rather than paying workers a fair wage. These business owners also had tremendous power within the federal government. Many Americans believed that the business owners had undue influence over the government and that the employers had no desire to relinquish any power to middle and working-class Americans. By the 1890s, a group of reformers, known as the Progressives, emerged to combat some of the ill effects of these changes. Most Progressives came from middle-class backgrounds. Many of them were college educated. Progressives generally believed that industrialization was good for the United States, but they also contended that human greed had overcome industrialization's more positive effects. They hoped to instill in Americans moral values based upon Protestant religious beliefs. The Progressives wanted employers to treat their workers as the bosses wanted to be treated. They also hoped that, if working conditions improved, Americans would not engage in immoral activities, like drinking and gambling, to forget the difficulties that they faced. Progressives sought better pay, safer working conditions, shorter hours, and increased benefits for workers. 
Believing that only education would allow Americans to lead successful lives, Progressives opposed child labor, wanting children to attend school rather than working in mines and factories. They supported Prohibition and succeeded in enacting a ban on the manufacture, transportation, and sale of alcohol with the Eighteenth Amendment to the United States Constitution in 1919. Progressives also sought to reclaim government from the business owners and corrupt politicians partly by supporting the direct election by the people of United States senators. The Progressives succeeded in attaining this reform with the adoption of the Seventeenth Amendment to the United States Constitution in 1913. Other reforms included the initiative, which allowed voters to pass legislation on their own; the referendum, which allowed voters to repeal laws that they did not support; and the recall, which allowed voters to remove elected officials from office. Many Progressives supported women's suffrage, helping women secure the right to vote through the adoption of the Nineteenth Amendment to the United States Constitution in 1920. Progressives also battled against city bosses, including Cincinnati, Ohio's George Cox, by hiring city managers. While Progressives enacted numerous positive reforms, some of their goals were questionable. They did seek to make the United States government more democratic and to protect American workers, but they also sought to force their social and political beliefs on others. Progressives opposed immigration and enacted several immigration restrictions during the 1920s. Progressives also tried to force immigrants to adopt Progressive moral beliefs. One way they tried to accomplish this was through settlement houses. Settlement houses existed in most major cities during the late nineteenth and the early twentieth centuries. They were places where immigrants could go to receive free food, clothing, job training, and educational classes.
While all of these items greatly helped immigrants, Progressives also used the settlement houses to convince immigrants to adopt "American" or Progressive beliefs, causing the foreigners to forsake their own culture. During the 1920s, many Progressives also joined the Ku Klux Klan, a self-proclaimed religious group that sought to enforce morality -- based on Progressive beliefs -- on other people. Due to the Progressives' participation in Prohibition, the Ku Klux Klan, and immigration restrictions, many Americans stopped supporting the Progressive Movement. While aspects of its beliefs remain today, the Progressive Movement, as a functioning and clearly identifiable group, began to weaken by the late 1920s and the early 1930s.
stress, in physical sciences and engineering, force per unit area within materials that arises from externally applied forces, uneven heating, or permanent deformation and that permits an accurate description and prediction of elastic, plastic, and fluid behaviour. A stress is expressed as a quotient of a force divided by an area. There are many kinds of stress. Normal stress arises from forces that are perpendicular to a cross-sectional area of the material, whereas shear stress arises from forces that are parallel to, and lie in, the plane of the cross-sectional area. If a bar having a cross-sectional area of 4 square inches (26 square cm) is pulled lengthwise by a force of 40,000 pounds (180,000 newtons) at each end, the normal stress within the bar is equal to 40,000 pounds divided by 4 square inches, or 10,000 pounds per square inch (psi; 7,000 newtons per square cm). This specific normal stress that results from tension is called tensile stress. If the two forces are reversed, so as to compress the bar along its length, the normal stress is called compressive stress. If the forces are everywhere perpendicular to all surfaces of a material, as in the case of an object immersed in a fluid that may be compressed itself, the normal stress is called hydrostatic pressure, or simply pressure. The stress beneath the Earth’s surface that compresses rock bodies to great densities is called lithostatic pressure. Shear stress in solids results from actions such as twisting a metal bar about a longitudinal axis as in tightening a screw. Shear stress in fluids results from actions such as the flow of liquids and gases through pipes, the sliding of a metal surface over a liquid lubricant, and the passage of an airplane through air. Shear stresses, however small, applied to true fluids produce continuous deformation or flow as layers of the fluid move over each other at different velocities like individual cards in a deck of cards that is spread.
For shear stress, see also shear modulus. Reaction to stresses within elastic solids causes them to return to their original shape when the applied forces are removed. Yield stress, marking the transition from elastic to plastic behaviour, is the minimum stress at which a solid will undergo permanent deformation or plastic flow without a significant increase in the load or external force. The Earth shows an elastic response to the stresses caused by earthquakes in the way it propagates seismic waves, whereas it undergoes plastic deformation beneath the surface under great lithostatic pressure.
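The bar example above is just force divided by cross-sectional area, and the unit conversion to newtons per square centimetre follows from standard conversion factors. A sketch reproducing the article's numbers:

```python
# Normal (tensile) stress = force / cross-sectional area,
# using the bar from the article's worked example.
force_lb = 40_000   # pulling force at each end, pounds
area_in2 = 4        # cross-sectional area, square inches

stress_psi = force_lb / area_in2
print(f"{stress_psi:.0f} psi")  # 10000 psi, matching the article

# The same stress in SI units (standard conversion factors):
LB_TO_N = 4.4482      # newtons per pound-force
IN2_TO_CM2 = 6.4516   # square centimetres per square inch
stress_n_cm2 = (force_lb * LB_TO_N) / (area_in2 * IN2_TO_CM2)
print(f"~{stress_n_cm2:.0f} N/cm^2")  # close to the article's 7,000 N/cm^2
```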
Receptive and Expressive Language

All communication has two aspects: receptive language and expressive language. Receptive language is what we hear and understand. Expressive language is what we say to others. I believe that empathy is also a form of communication; one that is as essential to each of us as is spoken, written, or signed language in understanding the feelings of other sentient beings and in conveying our reaction to them. To oversimplify, one might think of language as the cognitive component of communication, whereas empathy is the emotional component. Of course, in reality, they overlap and complement each other. Receptive empathy is the ability to perceive the feelings that others are experiencing. Expressive empathy is the ability to convey that understanding to others.

Definition of Empathy

“Empathy” is a complicated word — it means so many different things to different people. And, a discussion of whether autistic people have a capacity for empathy that is different from most other people further complicates the conversation. A web search on the single word “empathy” produced for me these top 5 results, defining the word in 5 different ways:
- Empathy is the capacity to recognize emotions that are being experienced by another sentient or fictional being. (Wikipedia)
- the imaginative projection of a subjective state into an object so that the object appears to be infused with it (Merriam-Webster)
- the intellectual identification with or vicarious experiencing of the feelings, thoughts, or attitudes of another (dictionary.com)
- Empathy is the experience of understanding another person’s condition from their perspective. (Psychology Today)
- Identification with and understanding of another’s situation, feelings, and motives. See Synonyms at pity. (thefreedictionary.com)

So, which is it?
“recognize emotions” or “imaginative projection” or “intellectual identification” or “vicarious experience of understanding perspective” or “identification and understanding” or “pity”? It is probably all of those things and more, including sympathy and compassion. Trying to understand what people intend to convey by using the word is a bit like Justice Potter Stewart’s famous definition of pornography. Empathy is something you know when you experience it, even though it is hard to describe in words.

Are there “Types” of Empathy?

Modifying the word “empathy” with “cognitive” and “affective” represents an ill-advised attempt to deconstruct empathy, in my view. Much has been made of the idea that these two aspects of empathy (to the extent that this dichotomy has any validity at all, which I doubt) arise from different parts of the brain, and that one or the other is deficient in certain personality types. This kind of hair-splitting is a distraction, it seems to me, when it comes to understanding the role and functioning of empathy. I’m sure there is a wide range of empathic capacity, both in terms of experiencing empathy (whatever it is) and in expressing it. Those with alexithymia may have empathic capacity but may not recognize what they are experiencing or be able to express it. And all of my discussion here so far has nothing directly to do with autism. Empathy is a universal human trait. And beyond. Clearly, many other animals have empathic capacity as well. Empathy arises from, or at least is related to, mirror neurons. In the famous incident of the discovery of mirror neurons, a monkey watched an object being picked up, and his brain region for picking things up fired as if he were doing it himself. So, he experienced what it was like to pick up an apple (or whatever it was), but not from the perspective of the other monkey (he’s not inside that brain) but from the perspective of how he would feel if he were doing what he was observing.
And What Does All of This Have To Do With Autism?

Now comes the tricky part with respect to autism. It’s twofold. The descriptions that follow are experiential (my own experiences and those of other autistic people I’ve spoken with), and represent my own speculations. What I report here may or may not be generalizable to other autistic people. See the link in the previous paragraph for a discussion of some of the controversy surrounding the linkage (if any) between autism and mirror neurons.

Autism and Receptive Empathy

It may be (1) that the mirror neuron system in the autistic brain is impaired because of the usual sensory overload that is always going on. It’s not that the mirror neurons are defective, it’s just that their functioning is clouded by the brain having so much else to deal with at the same time. Distractions, if you will. So the autistic person will not have the receptive clarity that matches the neurotypical — what is being called by some “cognitive empathy.” The emotional state of another being is recorded, but not processed with the same clarity because of the other demands on attention. The TMS experiments I participated in at Beth Israel demonstrated this. The experiment involved suppressing activity in a small area in the right hemisphere of my brain. Neuroscientists know that, through a process called neuroplasticity, when one area of the brain is compromised, another area will attempt to take over the lost functionality. That often involves the equivalent region in the opposite hemisphere of the brain. Broca’s area is heavily involved in language and (therefore) social cognition, and much more. It is a complex and important region of the brain that is somewhat imprecisely located in the part of the brain known as Brodmann’s areas 44 and 45.
I say “it” although, in my understanding (I have no formal training in neuroscience), there are two equivalent areas, one in each hemisphere, and the lion’s share of language processing occurs in the dominant hemisphere (the left one for right-handed people like me). Broca’s area, besides its central role in language comprehension and creation, also seems to serve as a bridge between the prefrontal cortex (cognition), and regions that control motor and somatosensory (tactile and other sensory) systems of the body. It is also thought to be rich in mirror neurons. For all of these reasons, the scientists in the TMS Lab hypothesized that by temporarily and artificially suppressing the right side of my brain in the area just described, the left hemisphere would be more strongly activated than usual, thereby improving language and social (empathic) cognition. How right (so to speak) they were! I experienced (both subjectively and in their computerized measurements) sharpened ability to interpret emotional content more accurately. The difference in clarity was astounding to me and to others I spoke with who were subjects in the experiment. Caitlin, for example, was shocked to find that she could see emotional content in written sentences and in video clips which, with the benefit of hindsight, she had not been able to see before. My clarity was more intellectual. I was able to solve (computerized) tasks faster than the computer could feed them to me, whereas before I had struggled and was unsure of the answers. Subjectively, it was like night and day, although I’m sure that the difference in my performance was measured in milliseconds. The difference in what Caitlin and I experienced (and John had a musical revelation, among many other experiences) was probably a function of where we started. I was relatively better (compared with her) at emotional reception. 
She, for example, had once been floored to find out that her brother knew more (much more) about the personal life of her receptionist than she did, although he lived in a distant city. It was just that when he called to speak with Caitlin, he would chat with the person who answered the phone about vacation plans and the like. It never occurred to Caitlin to make that kind of emotional connection.

The Irrelevancy of “Cognitive” Versus “Affective” Empathy

Which brings me around to the other bit (2) about autism and empathy. Take the Psychology Today definition: “Empathy is the experience of understanding another person’s condition from their perspective.” Please. Think of the monkey. Picking up a banana is probably a pretty universal monkey experience, so it’s easy to imagine that the mirror neurons of monkeys allow them to experience watching another monkey and essentially experience (vicariously) a nearly identical experience. Now, take an autistic brain. Not mine, please. I need it. If I watch a neurotypical pick up a banana, I am likely to be less clear about how that feels to them because they experience the world in a way that is very different from mine. I’m being metaphorical here, in case you didn’t pick up on that (so to speak). A physical action is one thing, but a more complex emotional reaction is quite a different level of experience. How can I empathize with what you are going through if your way of experiencing the world is vastly different from mine? This works both ways, of course. How can a neurotypical person empathize with me if they have no clue what my brain is experiencing? So, it’s not a lack of empathy, or a lack of empathic capacity, it’s a knowledge or experiential gap. I can tell when my horse is happy to see me, or when he is in a playful mood, or frightened; these are fairly universal emotions. But my empathy doesn’t go too deep because I don’t really know what it’s like to be a horse. Or, maybe at some fundamental level, I do.
I don’t always grok why he’s upset, but I know when he is. Now, all of that is about receptive empathy; taking in and appreciating the emotional state of another being. This may be what is meant by “cognitive” empathy. But I also think receptive empathy includes components (or maybe all) of what has been termed “affective empathy” or “pity” or “compassion” — not just understanding, but sharing the emotional state of another. I believe this must naturally flow via the mirror neuron system that enables us to take in the feelings of another. If one is truly understanding what another is experiencing, it naturally follows that one is experiencing their emotions, too. From an evolutionary point of view, the value of being able to understand how someone else is feeling is being able to predict their behavior. If someone picks up a banana and smiles, that’s pretty non-threatening, but if someone picks up a rock and scowls, it might be better to take protective action. To truly take in another’s emotions, in the process I’m calling receptive empathy, one must also experience an approximation of those emotions. Although I’m aware these emotions are yours, and not mine, I experience my version of your anger, your pain, and your joy. It can’t be any other way. And yes, there are people who have difficulty comprehending what they are experiencing emotionally, and conveying it, too. But, as alluded to earlier, that is a condition called alexithymia, not autism. Although studies about this are scant, I’m not aware of any definitive study that shows that alexithymia is more prevalent in the neuroexceptional population than it is in the neurotypical one. In my work with neuroexceptional couples (in a support group setting), I observe a fairly high proportion of alexithymia among the partners who are not neurotypical, but my sample is a highly self-selected subgroup of all neuroexceptional people, and I don’t have a control group to compare with. 
For me, when I experience high receptive empathy (which includes sympathy, compassion or pity), such an experience is likely to lead to an emotional state of shock that requires me to tone down my feelings, because the nerves are too raw and exposed. So, I withdraw, I put up barriers to keep the world out, to keep things from getting worse. I can only take so much. Most autistic people with whom I have talked about this agree. We have too much empathic capacity. It is paralyzing. Why is that? I’m not sure. Excess myelination? I’ll get back to you on that.

Autism and Expressive Empathy: The Challenge for Autistic People

The biggest complaint in my couples support groups is that the (typically) Asperger’s partner does not express empathy. I explain the bit that I’ve just gone through, that those of us who are autistic experience a high degree of understanding (what I have called here receptive empathy), and that our flat affect or silence does not mean we lack comprehension or sympathy. It’s just that dealing with these raw emotions is too frightening. Thus, we exhibit a lack of what I have come to call expressive empathy. To partners who are feeling emotionally isolated, and are in need of validation, it’s not comforting to hear this. To them, there is no empathy if it isn’t expressed. And they have a good point. We autistics often stop one step short of what empathy is all about; connecting with another human being, to validate and comfort them. Without that piece, it does not serve its purpose. The result is, from the outside observer’s point of view, a “lack” of empathy. No reaction. Or an “inappropriate” (oh, how I hate that word) reaction. In fact, the reaction is an internal volcano that is about to erupt. Sometimes it does, and that is one form of meltdown. Sometimes it is contained, and the world is shut out. I remember a time in my second marriage when things were not going well, and I was talking with my shrink about it.
At one point, he said to me (in frustration), “Can’t you just tell her you love her? That’s what she wants to hear!” And, I realized (for complex reasons) that, no, I was not capable of that at that time. It seemed like lying to me. Yet, it would have been a harmless lie that could have made all the difference to her. I was empathizing with her distress, but I was not able to communicate that to her in a way that would have been helpful. The terms I have used here, receptive and expressive, are often used to describe forms of language communication, which is where I started this post. And that’s really what empathy is, in its fullest expression; communicating emotional states. Autistics are really good at receptive empathy, but some of us fall short when it comes to using expressive empathy. This is a failure of execution, not of cognition. Our brains work just fine, thank you. We just need to learn how to let other people know that. The good news is called neuroplasticity, and there is a way to use that, in neurally-inspired therapies and techniques that can change our patterns of behavior. Stay tuned. Much more to come on those subjects. Meanwhile, I will be practicing my expressive empathy.
Most southern women did not publicly express a desire for equal rights with men until well after the Civil War (1861-65), and suffrage, or the right to vote, came later to women in Georgia than to women in most other states. The American Equal Rights Association (AERA), dedicated to human rights, black suffrage, and woman suffrage, was formed in 1866, the same year Georgia passed legislation giving married women property rights. In 1869, when a woman suffrage amendment was introduced in the U.S. Congress, the AERA split into two factions. The American Woman Suffrage Association (AWSA) was a moderate group led by Lucy Stone and Julia Ward Howe, and the National Woman Suffrage Association (NWSA) was a more radical faction formed by Elizabeth Cady Stanton and Susan B. Anthony. While the former campaigned to accomplish a state-by-state right to vote, the latter sought a constitutional amendment for the vote and worked for a variety of reforms. The passage in 1870 of the Fifteenth Amendment, stating that citizens could not be denied the vote because of race, color, or former status as a slave, granted black males the right to vote and to hold office. The belief that women also deserved this right not only increased membership in the pro-suffrage organizations but also led to tension over how best to achieve this in the South.

Organizing for Suffrage

The AWSA and the NWSA reunited in 1890 into a group known as the National American Woman Suffrage Association (NAWSA). In Columbus, Georgia, Helen Augusta Howard formed a branch of the organization she called the Georgia Woman Suffrage Association (GWSA). In 1892 the NAWSA established the Committee on Southern Work, and by 1893 the Georgia chapter had members in five counties. In 1894 the Equal Suffrage League formed in Atlanta as a chapter of the GWSA. Further impetus for the suffrage movement in Georgia came in 1895, when the NAWSA held their annual meeting in Atlanta, the first held outside of Washington, D.C.
The organization's headquarters was at the Aragon Hotel, and meetings were held at DeGive's Opera House. Susan B. Anthony and ninety-three delegates from twenty-eight states, together with visitors and reporters, attended. African American women were excluded from these meetings, but Anthony did speak on the campus of Atlanta University, an all-black school. In the audience was alumna Adela Hunt Logan, a Georgian who taught at Alabama's Tuskegee Institute. Logan published several suffrage articles and became the NAWSA's first lifetime member. For African American women, support of NAWSA efforts was seen as a further step toward reenfranchisement for black men, as well as enfranchisement for themselves. In 1896 several African American women's organizations formed the National Association of Colored Women (NACW) in Washington, D.C. In the next several years, many members, like Lugenia Burns Hope, wife of John Hope, the first president of Atlanta's Morehouse College, became suffrage advocates through their work in the NACW. The GWSA held its first convention in November 1899 in Atlanta. Speakers from Georgia as well as from other southern states attended. Under president Mary Latimer McLendon, the association passed several resolutions, including a statement that Georgia women should not pay taxes if they did not have the vote and a request that the University of Georgia be opened to women. At the November 1901 GWSA convention, Atlantan Katherine Koch was chosen president. In 1902 Atlanta women petitioned to vote in municipal elections but were rejected.

National and State Events

The 1906 Atlanta race riot further intensified the question of woman suffrage and how to achieve it in the South, where attitudes on gender and race became a defining issue. The year 1908 was a presidential election year, and suffragists asked both parties to include the issue in their platforms, but neither did.
The Prohibition Party of Georgia, however, did adopt woman suffrage as part of its platform. The Georgia Federation of Labor had endorsed woman suffrage in 1900, calling for local unions to support it, and events outside the South encouraged them to do so. In 1907 Harriet Blatch, daughter of Elizabeth Cady Stanton, formed the Equality League of Self-Supporting Women to reach out to working-class women. In 1909 the Women's Trade Union League in New York City coordinated a suffrage-linked strike of 20,000 women garment workers, supported by a boycott among the wealthy women who purchased their clothing. After California gave women the vote in 1911, there were six suffrage states. In 1913 the Georgia Woman Equal Suffrage League was formed, with many teachers and businesswomen as members. The league's president was Frances Smith Whiteside, an Atlanta teacher, principal of Ivy Street School, and sister of U.S. senator Hoke Smith. The Georgia Men's League for Woman Suffrage, formed by Atlanta attorney Leonard J. Grossman, was a chapter of a national organization. With few members outside Atlanta, its formation was largely symbolic for the movement. The Equal Franchise League of Muscogee County as well as a Macon suffrage association were formed by the end of 1913. In 1914 women who wanted the GWSA to work more aggressively for suffrage formed the Equal Suffrage Party of Georgia, which by 1915 had member branches in thirteen Georgia counties. In the first five years of the party's existence its presidents were from Atlanta, Augusta, and Savannah. Several Georgia cities and counties had branches of both suffrage organizations working simultaneously for the same goal but with a different focus.

Anti-suffrage Movement and Pro-suffrage Groups

In the spring of 1914 a Georgia chapter of the National Association Opposed to Woman Suffrage, founded in 1895, was formed in Macon.
Three months later, it claimed to have 10 state branches and 2,000 members, far more than the pro-suffrage organizations. The leadership included Mildred Lewis Rutherford, head of the Lucy Cobb Institute in Athens and president of the United Daughters of the Confederacy. When the Georgia legislature first conducted hearings on the subject in 1914, sisters Mary Latimer McLendon of Atlanta and Rebecca Latimer Felton of Cartersville, Leonard J. Grossman, James L. Anderson, and Mrs. Elliott Cheatham of Atlanta all addressed the house committee for suffrage. Speaking for the opposition were Rutherford and Dolly Blount Lamar of Macon. The vote was five to two against suffrage, and the resolution did not pass. Hearings were conducted again the following year before committees of the senate and house, and both voted against it. In March 1914 pro-suffrage women held their first rally in Atlanta, with urban reform leader Jane Addams as speaker. In 1915 a May Day celebration was cause for Atlanta suffragists to gather on the steps of the state capitol. The following November, a significant event in the movement occurred when pro-suffrage groups marched after Atlanta's Harvest Festival celebration. Patterned after parades held in New York and Washington, D.C., the march included more than 200 students in caps and gowns, marchers wearing "votes for women" sashes and carrying banners, and decorated automobiles, all led by a brass band. Mary McLendon led the vehicles in Eastern Victory, an automobile sold by Anna Howard Shaw to pay "unjust" taxes. A pony cart filled with yellow chrysanthemums carried a large sign reading "Georgia Catching Up." On horseback, representing the herald leading women "forward into light," was Eleanore Raoul, organizer of the Fulton and DeKalb Equal Suffrage Party. All of Georgia's suffrage groups, including the Georgia Young People's Suffrage Association, were represented. 
The following year the Atlanta woman suffrage organizations were the first nonlabor group to be included in Atlanta's Labor Day Parade. In 1917 Alice Paul formed the National Woman's Party. Because of its protests against Democratic president Woodrow Wilson's lack of support for a federal suffrage amendment, the party's attempts to organize in the South as early as 1915 had failed. In 1917, feeling that the "antis" were gaining too much of the South, the National Woman's Party increased efforts to recruit in the region, and a Georgia branch was formed. Considered radical by other southern suffrage groups, the National Woman's Party was a relatively militant organization. Although its members did nothing unusual in Georgia, the party was the first group ever to picket the White House for a political cause. Two 1917 events were significant to the suffrage movement: the United States entered World War I (1917-18), and New York women won the right to vote. Although the NAWSA endorsed the war effort, not all suffrage organizations were in agreement. The same year, Georgia suffrage supporters again presented their resolution to the senate committee. Endorsement was finally achieved by a vote of eight to four in favor, but the senate did not act on that support. Elsewhere in Georgia, the city of Waycross allowed women, many of them property owners, to vote in municipal primary elections. In May 1919, women were allowed to vote in Atlanta municipal primary elections, by a vote of twenty-four to one.

Passage of the Amendment

On June 4, 1919, with the support of only one southern senator, Georgia's William J. Harris, the U.S. Congress passed the Woman Suffrage Amendment, and it was submitted to the states for ratification. In response, Alabamians formed the Southern Women's League for the Rejection of the Susan B. Anthony Amendment (Southern Rejection League), and Rutherford became one of its few out-of-state members.
On July 24 Georgia became the first state to reject the ratification of the amendment, and both houses adopted resolutions to that effect. By August 1920 thirty-five states had ratified the Nineteenth Amendment. One more state was needed for full ratification, and the state of Tennessee ratified it on August 18. Although many in Tennessee and the South continued to challenge it, the amendment became effective on August 26. Women finally had won the vote, but Georgia's women still could not vote in that year's November elections. Georgia, along with Mississippi, cited a requirement that one must be registered six months before the election in order to vote. Because the legislature refused to pass an "enabling act" to make voting immediately possible, Georgia women did not vote until 1922. Assured that women had won the vote, the League of Women Voters organized in February 1920 to carry on the work of the NAWSA. In Georgia all branches of the various suffrage societies and leagues merged into the League of Women Voters of Georgia. Through this organization and others, women sought to address the many issues important to them that had been raised, including employment, education, and health care. The Nineteenth Amendment remains a milestone from which women could begin to do this through political means.
E. Lee Eltzroth, Georgia State University
A project of the Georgia Humanities Council, in partnership with the University of Georgia Press, the University System of Georgia/GALILEO, and the Office of the Governor.
The British Agricultural Revolution describes a period of development in Britain between the 18th century and the end of the 19th century, which saw a massive increase in agricultural productivity and net output. This in turn supported unprecedented population growth, freeing up a significant percentage of the workforce, and thereby helped drive the Industrial Revolution. How this came about is not entirely clear. In recent decades, enclosure, mechanization, four-field crop rotation, and selective breeding have been highlighted as primary causes, with credit given to relatively few individuals. Prior to the 18th century, agriculture had been much the same across Europe since the Middle Ages. The open field system was essentially post-feudal, with each farmer subsistence-cropping strips of land in one of three or four large fields held in common and splitting up the products likewise. Beginning as early as the 12th century, some of the common fields in Britain were enclosed into individually owned fields, and the process rapidly accelerated in the 15th and 16th centuries. This led to farmers losing their land and their grazing rights, and left many unemployed. In the 16th and 17th centuries, the practice of enclosure was denounced by the Church, and legislation was drawn up against it; but the developments in agricultural mechanization during the 18th century required large, enclosed fields in order to be workable. This led to a series of government acts, culminating finally in the General Inclosure Act of 1801. While farmers received compensation for their strips, it was minimal, and the loss of rights for the rural population led to an increased dependency on the Poor Laws. Surveying and legal costs weighed heavily on poor farmers, who sometimes even had to sell their share of the land to pay for its being split up. Only a few found work in the (increasingly mechanised) enclosed farms.
Most were forced to relocate to the cities to try to find work in the emerging factories of the Industrial Revolution. By the end of the 18th century the process of enclosure was complete. Joseph Foljambe's Rotherham plough of 1730, while not the first iron plough, was the first iron plough to have any commercial success, combining an earlier Dutch design with a number of technological innovations. Its fittings and coulter were made of iron, and the mouldboard and share were covered with an iron plate, making it lighter to pull and more controllable than previous ploughs. It remained in use in Britain until the development of the tractor. It was followed in 1763 by John Small of Doncaster and Berwickshire, whose 'Scots Plough' used an improved cast-iron shape to turn the soil more effectively with less draft, wear, or strain on the ploughing team. Andrew Meikle's threshing machine of 1786 was the final straw for many farm labourers, and led to the 1830 agricultural rebellion of Captain Swing (a probably mythical character comparable to the Luddites' Ned Ludd). In the 1850s and '60s John Fowler, an agricultural engineer and inventor, produced a steam-driven engine that could plough farmland more quickly and more economically than horse-drawn ploughs. His ploughing engine could also be used to dig drainage channels, thereby bringing into cultivation previously unused swampy land. Although steam ploughing was faster than horse-drawn ploughing, the capital cost of a pair of engines was often too much for a single farmer to purchase for his own exclusive use, which led to the development of an independent contracting industry for ploughing. During the Middle Ages, the open field system had employed a three-year crop rotation, with a different crop in each of the three fields, e.g., wheat and barley in two, with the third fallow. 'Fallow' means that the field is left unplanted, with nothing growing there.
Over the following two centuries, the regular planting of nitrogen-rich legumes in the fields which were previously fallow slowly increased the fertility of croplands. The planting of legumes (leguminosae, plants of the pea/bean family) helped to increase plant growth in the previously empty field because they used a different set of nutrients to grow than the grains. The legumes put back nutrients the grains had used, nitrates produced from nitrogen in the atmosphere, and the grains put back the minerals the legumes used. In a way, the two fed each other. Other crops that were occasionally grown were flax and members of the mustard family. Medieval record keepers did not distinguish between rapeseed and other mustards grown for animal feed and mustard grown for seed for condiments. When the pastures were brought back into crop production after their long fallow, their fertility was much greater than it had been in medieval times. The farmers in Flanders (present-day Belgium), however, discovered a still more effective four-field rotation system, introducing turnips and clover to replace the fallow year. Clover was an ideal fodder crop, and it also improved grain yields in the following year (clover is part of the pea family, leguminosae). The improved grain production simultaneously increased livestock production: farmers could keep more livestock because there was more food, and the manure was an excellent fertilizer, so crops became even more productive. Charles Townshend learned the four-field system from Flanders and introduced it to Great Britain in 1730. The increase in population led to more demand from the people for goods such as clothing. A new class of landless labourers, products of enclosure, provided the basis for cottage industry, a stepping stone to the Industrial Revolution. To supply continually growing demand, shrewd businessmen began to pioneer new technology to meet demand from the people. This led to the first industrial factories.
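The key property of the four-field rotation described above is that the cycle is staggered across fields, so no field is ever fallow and every crop is grown somewhere each year. A toy sketch makes the staggering concrete; the crop order follows the classic Norfolk four-course rotation, but the function name and the one-step-per-field offset are illustrative assumptions, not details from the article.

```python
# Norfolk four-course rotation: each field cycles through
# wheat -> turnips -> barley -> clover, one crop per year.
ROTATION = ["wheat", "turnips", "barley", "clover"]

def field_plan(year, n_fields=4):
    """Crop grown in each field in a given year (counted from 0).

    Field i starts one step further along the cycle than field i-1,
    so in any year the four fields carry all four crops at once.
    """
    return [ROTATION[(year + i) % len(ROTATION)] for i in range(n_fields)]

for year in range(4):
    print(year, field_plan(year))
```

Running the loop shows each field advancing through the cycle while the farm as a whole always has a fodder crop (turnips, clover) to feed livestock and a legume rebuilding nitrogen, which is why the system eliminated the fallow year.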
People who once were farmers moved to large cities to get jobs in the factories. The British Agricultural Revolution not only made the population increase possible but also increased the yield per agricultural worker, meaning that a larger percentage of the population could work in these new, post-Agricultural Revolution jobs. The British Agricultural Revolution was the cause of drastic changes in the lives of British women. Before the Agricultural Revolution, women worked alongside their husbands in the fields and were an active part of farming. The increased efficiency of the new machinery, along with the fact that this new machinery was often heavier and more difficult for a woman to wield, made this unnecessary and impractical, and women were relegated to other roles in society. To supplement the family's income, many went into cottage industries. Others became domestic servants or were forced into professions such as prostitution. The new, limited roles of women, dubbed by one historian "this defamation of women workers" (Valenze), fueled prejudices of women being fit only to work in the home, and also effectively separated them from the new, mechanized areas of work, leading to a divide in the pay between men and women. Towards the end of the 19th century, the substantial gains in British agricultural productivity were rapidly offset by competition from cheaper imports, made possible by advances in transportation, refrigeration, and many other technologies. From that point, farming in Britain entered a period of economic struggle which continues to the present day. Prior to the Agricultural Revolution, perhaps half of the land was kept in an open-field system, which included the village commons, where such activities as wheat threshing and animal grazing might take place. Parliamentary enclosures saw much of this being taken into private plots of land.
With the elimination of the manor court, private property laws prevailed over what had once been land for common usage.
Hands-On Is Minds-On

Want to Engage Every Student? Break out the Old-Fashioned Scissors and Glue

Second-grade teacher Becky Hicks has learned that there is no substitute for activities that require kids to use their hands as well as their minds. During literacy hour in Hicks's class at Blanchard Elementary School in Cape Girardeau, Missouri, students pair up and head to one of 16 "corners," or centers, to tackle hands-on vocabulary, reading, and math activities. In the ABC corner, students thumb through clues to find mystery words. In the math corner, students stack buttons, plastic fruit, and toy bugs to create graphs. And in the spelling corner, they manipulate alphabet puzzle shapes to piece together vocabulary words. In corners, Hicks's students practice what they know by playing teacher. "Look closely at the clock's hands," one student says to her partner in her best teacher voice. "Which one shows the hour, and which one shows the minutes?" Some explain their work to other students by showing them how to move, group, or assemble objects. Concepts are explained through tactile procedures, and skills are bolstered as children practice new ideas and test out theories. Over the years, Hicks has noticed that her students are more engaged and focused when they're working on hands-on projects, even those who fidget during large-group lessons. In her classroom, Hicks has figured out what research has revealed: The best way to engage kids' brains is by having them move their hands.

Busy Hands, Busy Brains

As students put projects together, create crafts, or use familiar materials in new ways, they're constructing meaning. "Kids learn through all their senses," says Ben Mardell, PhD, a researcher with Project Zero at Harvard University, "and they like to touch and manipulate things." But more than simply moving materials around, hands-on activities activate kids' brains.
According to Cindy Middendorf, educational consultant and author of The Scholastic Differentiated Instruction Plan Book (Scholastic, 2009), between the ages of four and seven, the right side of the brain is developing and learning comes easily through visual and spatial activities. The left hemisphere of the brain, the side that's involved in more analytical and language skills, develops later, around ages 10 and 11. When you combine activities that require movement, talking, and listening, it activates multiple areas of the brain. "The more parts of your brain you use, the more likely you are to retain information," says Judy Dodge, author of 25 Quick Formative Assessments for a Differentiated Classroom (Scholastic, 2009). "If you're only listening, you're only activating one part of the brain," she says, "but if you're drawing and explaining to a peer, then you're making connections in the brain." Multitasking in the classroom is not a negative when it comes to hands-on activities such as coloring, scribbling, or cutting with scissors. Indeed, even adults benefit from the "busy hands, busy brain" phenomenon: Recent research has shown that people who doodle during business meetings have better memory recall. A report in the journal Applied Cognitive Psychology demonstrated that volunteers who doodled during a dull verbal message were 29 percent better at recalling details from the message. Researchers suggest that engaging in a simple hands-on task, such as cutting out a shape with scissors, can help prevent daydreaming and restlessness during a learning experience. If adults in business settings can benefit from mnemonic tricks such as doodling, then students should certainly be encouraged to try these strategies.

The Hands-On Classroom

Terri LaChance, a kindergarten teacher at Darcey School in Cheshire, Connecticut, uses hands-on activities all day, every day, to let all her students shine.
Currently, LaChance is teaching a student who is a gifted artist but has poor language skills. He fidgets during large-group activities but can spend hours drawing or building. LaChance nurtures his interest and talent by allowing him to make projects; she recalls one day when he carefully constructed bird beaks out of recycled materials, then gave them to other kids to wear in class. Through art projects and play, LaChance has seen the student's language skills improve as he answers questions about his creations and illustrations. We know our students learn in many different ways: visual, auditory, tactile, kinesthetic, and social. Still, says Dodge, most of us teach the way we're most comfortable, and that's not necessarily the way our students learn. "It's a missed opportunity if we don't use the way that a child learns best to hook them and get them excited about learning," says Dodge. Hands-on projects obviously engage kids who are tactile or kinesthetic learners, who need movement to learn best. They also engage students who are auditory learners, who talk about what they're doing, and visual learners, who have the opportunity to see what everyone else is creating. For social learners, the time spent in small-group conversation will strengthen their knowledge. Just as Hicks has found in her classroom, hands-on activities let students become teachers. "When students explain and demonstrate skills to each other," says Sheldon Horowitz, EdD, director of professional services for the National Center for Learning Disabilities, "they are validating their understanding of the material being learned and, often more successfully than adults, helping their peers to build and master new skills." Hands-on activities also lend themselves to authentic assessment and observation, says Lanise Jacoby, a second-grade teacher at Pierce School in Arlington, Massachusetts, who observes how well her students follow directions and use fine motor skills during center time.
Next time your students are working on a craft project or in centers, ask each student to quickly explain what they're doing and why, as well as what they're learning along the way. Using tools such as markers, scissors, and glue in hands-on projects also builds the fine motor skills that children will need to use for functional activities throughout their lives. Simple tasks such as buttoning, tying shoes, and using a key to open a lock all require manual precision. The best way to build that precision is, of course, through practice. Yet practice need not be dull and repetitive. Activities such as constructing a miniature city out of recycled materials, or crafting a butterfly's life cycle using fabric scraps, not only help kids strengthen their hands and minds but are also fun and engaging. The more arts and crafts that teachers can bring into the classroom, the more opportunity they have to reach every child in the room, from kids with sensory difficulties to those who need an extra challenge in order to stay focused. Hands-on, creative, and artistic activities help students to focus and retain knowledge, and at the same time emphasize the importance of beauty and design in our world.

TIPS FOR USING TACTILE LEARNING

Here are more ways to increase the amount of time your students spend with their hands and minds in motion:
- Provide self-check materials: Hands-on activities naturally lend themselves to differentiation, but Cindy Middendorf suggests adding in tools, such as number charts, for kids to use at each center to help them work independently.
- Include assessment: In addition to observing and asking students to talk about what they've learned, teacher Becky Hicks has students record their center work and what they learned on individual accountability sheets. Judy Dodge suggests creating flip books with a page for each center so children can record what they learn at each station.
- Keep kids moving: Dodge suggests using rotation stations that change every few minutes. Some examples: an observation station where students peer at objects under a microscope; an exploration station where students explore materials that you've just introduced; a visualization station where students draw what they've learned; a collaboration station where students talk about what they've learned; and a "ketchup and mustard" (catch-up and must-do) station where students can make up work they didn't get to.
- Move the materials: If you can't handle all the movement of center rotations, Dodge suggests putting each activity and the necessary supplies in a basket. Then pass the baskets from table to table instead of moving the students.
- Group students by interest: Grouping students according to what they're interested in can increase their engagement. "When you're in a small group, you have more air time," says Ben Mardell, PhD, with Project Zero at Harvard University. "Kids can talk more and if you put a group together based on interest, then you have kids who share a passion and they're more involved in being there." Small groups also build accountability, as each child has to attend to the activity for the product to come together.
- Incorporate language: As students move into third grade and beyond, the amount of language used in class will increase. Prepare them by incorporating speaking skills into your assessment of tactile activities: Ask students to explain what they're doing and end some units with oral presentations.
- Adjust expectations: Kindergarten teacher Terri LaChance admits that during hands-on activities, her classroom is louder. To manage the volume level, LaChance limits the number of students in each activity to two.

Get inspired for more hands-on learning with these sites from teachers and professionals: You'll find activities and tips from retired kindergarten teacher Linda Critchell here: www.kinderteacher.com.
Former kindergarten teacher Mrs. Perdue has a variety of literacy centers and photos of how to set them up. First-grade teacher Ms. Ross's class Web site has ideas for literacy centers. Second-grade teacher Becky Hicks's class Web site has more ideas for hands-on activities. You'll find more information about tactile learning here. Take an online inventory to figure out your personal learning style. Then, find out more about learning styles so you can incorporate activities that will grab all your students. You can also find an inventory on Judy Dodge's Web site.
The creation of dead zones begins when nitrogen and phosphorus from fertilizers used on land, and from raw or poorly treated sewage, wash into streams, into rivers, and to the sea. Thus fertilized, single-celled drifting algae in the sea reproduce ("bloom") until they reach abnormal densities. The algae subsequently die and fall to the ocean floor, where they spark an explosion of bacteria that decompose them. The bacteria deplete the oxygen in the sea water to levels so low that little else can survive. Thus, a dead zone is created. Fish can sometimes swim away from these dead zones, but other sea life like clams and crabs cannot.

Three things you can do to help shrink dead zones:
1. Consider using organic compost or other natural fertilizers instead of commercial products on your lawn and garden.
2. Buy locally grown food to support small-scale, regional farmers.
3. Get involved in local efforts to reduce commercial fertilizer use.

Other great ways you can make a difference.

LINKS & VIDEOS
Dr. Nancy Rabalais
The rise of global dead zones.
Dead zones, from fertilizer, cause fish die-offs.
<urn:uuid:750e5233-52be-419b-a471-8e420e040d32>
CC-MAIN-2013-20
http://blueocean.org/issues/changing-ocean/pollution/dead-zones/?imgpage=1&showimg=521
2013-05-24T02:08:39
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.850455
246
4.09375
4
Nuclear meltdown is an informal term for a severe nuclear reactor accident that results in core damage from overheating. The term is not officially defined by the International Atomic Energy Agency or by the U.S. Nuclear Regulatory Commission. However, it has been defined to mean the accidental melting of the core of a nuclear reactor, and is in common usage a reference to the core's either complete or partial collapse. "Core melt accident" and "partial core melt" are the analogous technical terms for a meltdown. A core melt accident occurs when the heat generated by a nuclear reactor exceeds the heat removed by the cooling systems to the point where at least one nuclear fuel element exceeds its melting point. This differs from a fuel element failure, which is not caused by high temperatures. A meltdown may be caused by a loss of coolant, loss of coolant pressure, or low coolant flow rate or be the result of a criticality excursion in which the reactor is operated at a power level that exceeds its design limits. Alternately, in a reactor plant such as the RBMK-1000, an external fire may endanger the core, leading to a meltdown. Once the fuel elements of a reactor begin to melt, the fuel cladding has been breached, and the nuclear fuel (such as uranium, plutonium, or thorium) and fission products (such as cesium-137, krypton-88, or iodine-131) within the fuel elements can leach out into the coolant. Subsequent failures can permit these radioisotopes to breach further layers of containment. Superheated steam and hot metal inside the core can lead to fuel-coolant interactions, hydrogen explosions, or water hammer, any of which could destroy parts of the containment.
A meltdown is considered very serious because of the potential, however remote, that radioactive materials could breach all containment and escape (or be released) into the environment, resulting in radioactive contamination and fallout, and potentially leading to radiation poisoning of people and animals nearby. Nuclear power plants generate electricity by heating fluid via a nuclear reaction to run a generator. If the heat from that reaction is not removed adequately, the fuel assemblies in a reactor core can melt. A core damage incident can occur even after a reactor is shut down because the fuel continues to produce decay heat. A core damage accident is caused by the loss of sufficient cooling for the nuclear fuel within the reactor core. The reason may be one of several factors, including a loss-of-pressure-control accident, a loss-of-coolant accident (LOCA), an uncontrolled power excursion or, in reactors without a pressure vessel, a fire within the reactor core. Failures in control systems may cause a series of events resulting in loss of cooling. Contemporary safety principles of defense in depth ensure that multiple layers of safety systems are always present to make such accidents unlikely. The containment building is the last of several safeguards that prevent the release of radioactivity to the environment. Many commercial reactors are contained within a 1.2-to-2.4-metre (3.9 to 7.9 ft) thick pre-stressed, steel-reinforced, air-tight concrete structure that can withstand hurricane-force winds and severe earthquakes. - In a loss-of-coolant accident, either the physical loss of coolant (which is typically deionized water, an inert gas, NaK, or liquid sodium) or the loss of a method to ensure a sufficient flow rate of the coolant occurs. A loss-of-coolant accident and a loss-of-pressure-control accident are closely related in some reactors. 
In a pressurized water reactor, a LOCA can also cause a "steam bubble" to form in the core due to excessive heating of stalled coolant or by the subsequent loss-of-pressure-control accident caused by a rapid loss of coolant. In a loss-of-forced-circulation accident, a gas cooled reactor's circulators (generally motor or steam driven turbines) fail to circulate the gas coolant within the core, and heat transfer is impeded by this loss of forced circulation, though natural circulation through convection will keep the fuel cool as long as the reactor is not depressurized. - In a loss-of-pressure-control accident, the pressure of the confined coolant falls below specification without the means to restore it. In some cases this may reduce the heat transfer efficiency (when using an inert gas as a coolant) and in others may form an insulating "bubble" of steam surrounding the fuel assemblies (for pressurized water reactors). In the latter case, due to localized heating of the "steam bubble" due to decay heat, the pressure required to collapse the "steam bubble" may exceed reactor design specifications until the reactor has had time to cool down. (This event is less likely to occur in boiling water reactors, where the core may be deliberately depressurized so that the Emergency Core Cooling System may be turned on). In a depressurization fault, a gas-cooled reactor loses gas pressure within the core, reducing heat transfer efficiency and posing a challenge to the cooling of fuel; however, as long as at least one gas circulator is available, the fuel will be kept cool. - In an uncontrolled power excursion accident, a sudden power spike in the reactor exceeds reactor design specifications due to a sudden increase in reactor reactivity. 
An uncontrolled power excursion occurs due to significantly altering a parameter that affects the neutron multiplication rate of a chain reaction (examples include ejecting a control rod or significantly altering the nuclear characteristics of the moderator, such as by rapid cooling). In extreme cases the reactor may proceed to a condition known as prompt critical. This is especially a problem in reactors that have a positive void coefficient of reactivity, a positive temperature coefficient, are overmoderated, or can trap excess quantities of deleterious fission products within their fuel or moderators. Many of these characteristics are present in the RBMK design, and the Chernobyl disaster was caused by such deficiencies as well as by severe operator negligence. Western light water reactors are not subject to very large uncontrolled power excursions because loss of coolant decreases, rather than increases, core reactivity (a negative void coefficient of reactivity); "transients," as the minor power fluctuations within Western light water reactors are called, are limited to momentary increases in reactivity that will rapidly decrease with time (approximately 200% - 250% of maximum neutronic power for a few seconds in the event of a complete rapid shutdown failure combined with a transient). - Core-based fires endanger the core and can cause the fuel assemblies to melt. A fire may be caused by air entering a graphite moderated reactor, or a liquid-sodium cooled reactor. Graphite is also subject to accumulation of Wigner energy, which can overheat the graphite (as happened at the Windscale fire). Light water reactors do not have flammable cores or moderators and are not subject to core fires. Gas-cooled civilian reactors, such as the Magnox, UNGG, and AGCR type reactors, keep their cores blanketed with non reactive carbon dioxide gas, which cannot support a fire. 
Modern gas-cooled civilian reactors use helium, which cannot burn, and have fuel that can withstand high temperatures without melting (such as the High Temperature Gas Cooled Reactor and the Pebble Bed Modular Reactor). - Byzantine faults and cascading failures within instrumentation and control systems may cause severe problems in reactor operation, potentially leading to core damage if not mitigated. For example, the Browns Ferry fire damaged control cables and required the plant operators to manually activate cooling systems. The Three Mile Island accident was caused by a stuck-open pilot-operated pressure relief valve combined with a deceptive water level gauge that misled reactor operators, which resulted in core damage. Light water reactors (LWRs) Before the core of a light water nuclear reactor can be damaged, two precursor events must have already occurred: - A limiting fault (or a set of compounded emergency conditions) that leads to the failure of heat removal within the core (the loss of cooling). Low water level uncovers the core, allowing it to heat up. - Failure of the Emergency Core Cooling System (ECCS). The ECCS is designed to rapidly cool the core and make it safe in the event of the maximum fault (the design basis accident) that nuclear regulators and plant engineers could imagine. There are at least two copies of the ECCS built for every reactor. Each division (copy) of the ECCS is capable, by itself, of responding to the design basis accident. The latest reactors have as many as four divisions of the ECCS. This is the principle of redundancy, or duplication. As long as at least one ECCS division functions, no core damage can occur. Each of the several divisions of the ECCS has several internal "trains" of components. Thus the ECCS divisions themselves have internal redundancy – and can withstand failures of components within them. The Three Mile Island accident was a compounded group of emergencies that led to core damage. 
What led to this was an erroneous decision by operators to shut down the ECCS during an emergency condition due to gauge readings that were either incorrect or misinterpreted; this caused another emergency condition that, several hours after the fact, led to core exposure and a core damage incident. If the ECCS had been allowed to function, it would have prevented both exposure and core damage. During the Fukushima incident the emergency cooling system had also been manually shut down several minutes after it started. If such a limiting fault were to occur together with a complete failure of all ECCS divisions, both Kuan, et al and Haskin, et al describe six stages between the start of the limiting fault (the loss of cooling) and the potential escape of molten corium into the containment (a so-called "full meltdown"): - Uncovering of the Core – In the event of a transient, upset, emergency, or limiting fault, LWRs are designed to automatically SCRAM (a SCRAM being the immediate and full insertion of all control rods) and spin up the ECCS. This greatly reduces reactor thermal power (but does not remove it completely); this delays the core becoming uncovered, which is defined as the point when the fuel rods are no longer covered by coolant and can begin to heat up. As Kuan states: "In a small-break LOCA with no emergency core coolant injection, core uncovery [sic] generally begins approximately an hour after the initiation of the break. If the reactor coolant pumps are not running, the upper part of the core will be exposed to a steam environment and heatup of the core will begin. However, if the coolant pumps are running, the core will be cooled by a two-phase mixture of steam and water, and heatup of the fuel rods will be delayed until almost all of the water in the two-phase mixture is vaporized.
The TMI-2 accident showed that operation of reactor coolant pumps may be sustained for up to approximately two hours to deliver a two phase mixture that can prevent core heatup." - Pre-damage heat up – "In the absence of a two-phase mixture going through the core or of water addition to the core to compensate water boiloff, the fuel rods in a steam environment will heat up at a rate between 0.3 °C/s (0.5 °F/s) and 1 °C/s (1.8 °F/s) (3)." - Fuel ballooning and bursting – "In less than half an hour, the peak core temperature would reach 1,100 K (1,520 °F). At this temperature the zircaloy cladding of the fuel rods may balloon and burst. This is the first stage of core damage. Cladding ballooning may block a substantial portion of the flow area of the core and restrict the flow of coolant. However complete blockage of the core is unlikely because not all fuel rods balloon at the same axial location. In this case, sufficient water addition can cool the core and stop core damage progression." - Rapid oxidation – "The next stage of core damage, beginning at approximately 1,500 K (2,240 °F), is the rapid oxidation of the Zircaloy by steam. In the oxidation process, hydrogen is produced and a large amount of heat is released. Above 1,500 K (2,240 °F), the power from oxidation exceeds that from decay heat (4,5) unless the oxidation rate is limited by the supply of either zircaloy or steam." - Debris bed formation – "When the temperature in the core reaches about 1,700 K (2,600 °F), molten control materials [1,6] will flow to and solidify in the space between the lower parts of the fuel rods where the temperature is comparatively low. Above 1,700 K (2,600 °F), the core temperature may escalate in a few minutes to the melting point of zircaloy [2,150 K (3,410 °F)] due to increased oxidation rate. When the oxidized cladding breaks, the molten zircaloy, along with dissolved UO2 [1,7] would flow downward and freeze in the cooler, lower region of the core. 
Together with solidified control materials from earlier down-flows, the relocated zircaloy and UO2 would form the lower crust of a developing cohesive debris bed." - (Corium) Relocation to the lower plenum – "In scenarios of small-break LOCAs, there is generally a pool of water in the lower plenum of the vessel at the time of core relocation. Release of molten core materials into water always generates large amounts of steam. If the molten stream of core materials breaks up rapidly in water, there is also a possibility of a steam explosion. During relocation, any unoxidized zirconium in the molten material may also be oxidized by steam, and in the process hydrogen is produced. Recriticality also may be a concern if the control materials are left behind in the core and the relocated material breaks up in unborated water in the lower plenum." At the point at which the corium relocates to the lower plenum, Haskin, et al relate that the possibility exists for an incident called a fuel-coolant interaction (FCI) to substantially stress or breach the primary pressure boundary when the corium relocates to the lower plenum of the reactor pressure vessel ("RPV"). This is because the lower plenum of the RPV may have a substantial quantity of water - the reactor coolant - in it, and, assuming the primary system has not been depressurized, the water will likely be in the liquid phase, and consequently dense, and at a vastly lower temperature than the corium. Since corium is a liquid metal-ceramic eutectic at temperatures of 2,200 to 3,200 K (3,500 to 5,300 °F), its fall into liquid water at 550 to 600 K (530 to 620 °F) may cause an extremely rapid evolution of steam that could cause a sudden extreme overpressure and consequent gross structural failure of the primary system or RPV. 
Though most modern studies hold that it is physically infeasible, or at least extraordinarily unlikely, Haskin, et al state that there exists a remote possibility of an extremely violent FCI leading to something referred to as an alpha-mode failure, or the gross failure of the RPV itself, and subsequent ejection of the upper plenum of the RPV as a missile against the inside of the containment, which would likely lead to the failure of the containment and release of the fission products of the core to the outside environment without any substantial decay having taken place. Breach of the Primary Pressure Boundary There are several possibilities as to how the primary pressure boundary could be breached by corium. - Steam Explosion As previously described, an FCI could lead to an overpressure event causing RPV failure, and thus primary pressure boundary failure. Haskin, et al. report that in the event of a steam explosion, failure of the lower plenum is far more likely than ejection of the upper plenum in the alpha-mode. In the event of lower plenum failure, debris at varied temperatures can be expected to be projected into the cavity below the core. The containment may be subject to overpressure, though this is not likely to fail the containment. The alpha-mode failure will lead to the consequences previously discussed. - Pressurized Melt Ejection (PME) It is quite possible, especially in pressurized water reactors, that the primary loop will remain pressurized following corium relocation to the lower plenum. As such, pressure stresses on the RPV will be present in addition to the weight stress that the molten corium places on the lower plenum of the RPV; when the metal of the RPV weakens sufficiently due to the heat of the molten corium, it is likely that the liquid corium will be discharged under pressure out of the bottom of the RPV in a pressurized stream, together with entrained gases. This mode of corium ejection may lead to direct containment heating (DCH).
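The heat-up rates and damage thresholds quoted from Kuan above imply a rough in-vessel timeline. A minimal sketch, assuming an uncovered core starting near a nominal 550 K coolant temperature and ignoring the extra oxidation heat release above 1,500 K (both assumptions, not figures from this article, so the later times are underestimates):

```python
# Rough timeline for an uncovered core, using the heat-up rates
# (0.3-1.0 K/s) and damage thresholds quoted above. T_START is an
# assumed typical PWR coolant temperature; linear extrapolation
# ignores oxidation heating above 1,500 K.
T_START = 550.0                    # K, assumed
RATE_FAST, RATE_SLOW = 1.0, 0.3    # K/s, quoted heat-up rates
THRESHOLDS = [
    ("cladding ballooning and burst", 1100.0),  # K
    ("rapid zircaloy oxidation", 1500.0),       # K
    ("zircaloy melting", 2150.0),               # K
]

for label, temp in THRESHOLDS:
    t_fast = (temp - T_START) / RATE_FAST / 60  # minutes at 1.0 K/s
    t_slow = (temp - T_START) / RATE_SLOW / 60  # minutes at 0.3 K/s
    print(f"{label}: reached after roughly {t_fast:.0f}-{t_slow:.0f} minutes")
```

At the slower rate this reproduces the "less than half an hour" to cladding ballooning that Kuan quotes; the melting estimate is only a lower bound on severity, since oxidation heating accelerates the final stages.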
Severe Accident Ex-Vessel Interactions and Challenges to Containment Haskin, et al identify six modes by which the containment could be credibly challenged; some of these modes are not applicable to core melt accidents. - Dynamic pressure (shockwaves) - Internal missiles - External missiles (not applicable to core melt accidents) Standard failure modes If the melted core penetrates the pressure vessel, there are theories and speculations as to what may then occur. In modern Russian plants, there is a "core catching device" in the bottom of the containment building; the melted core is supposed to hit a thick layer of "sacrificial metal", which would melt, diluting the core and increasing its heat conductivity, so that the diluted core can finally be cooled down by water circulating in the floor. However, there has never been any full-scale testing of this device. In Western plants there is an airtight containment building. Though radiation would be at a high level within the containment, doses outside of it would be lower. Containment buildings are designed for the orderly release of pressure without releasing radionuclides, through a pressure release valve and filters. Hydrogen/oxygen recombiners are also installed within the containment to prevent gas explosions. In a melting event, one spot or area on the RPV will become hotter than other areas, and will eventually melt. When it melts, corium will pour into the cavity under the reactor. Though the cavity is designed to remain dry, several NUREG-class documents advise operators to flood the cavity in the event of a fuel melt incident. This water will become steam and pressurize the containment. Automatic water sprays will pump large quantities of water into the steamy environment to keep the pressure down. Catalytic recombiners will rapidly convert the hydrogen and oxygen back into water. One positive effect of the corium falling into water is that it is cooled and returns to a solid state.
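The steam load on the containment from a quenching core can be put in order-of-magnitude terms. A back-of-envelope sketch, with every input an illustrative assumption (the core mass, temperatures, and corium heat capacity are not figures from this article):

```python
# Order-of-magnitude sketch of how much water a quenching core could
# boil off in the flooded reactor cavity. All numbers are assumed.
CORIUM_MASS = 1.0e5      # kg (~100 t of core material, assumed)
CP_CORIUM = 500.0        # J/(kg*K), rough average for a UO2/Zr melt
T_CORIUM = 2800.0        # K, assumed initial corium temperature
T_QUENCHED = 400.0       # K, assumed final solidified temperature
CP_WATER = 4186.0        # J/(kg*K)
H_VAP = 2.26e6           # J/kg, latent heat of vaporization at 1 atm
T_WATER = 300.0          # K, assumed cavity water temperature

energy = CORIUM_MASS * CP_CORIUM * (T_CORIUM - T_QUENCHED)  # J released
per_kg_steam = CP_WATER * (373.0 - T_WATER) + H_VAP         # J per kg boiled
steam_mass = energy / per_kg_steam
print(f"~{energy:.1e} J released, boiling roughly {steam_mass / 1000:.0f} t of water")
```

Tens of tonnes of water flashing to steam is the containment pressurization that the automatic sprays described above are there to knock down.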
Extensive water spray systems within the containment, along with the ECCS when it is reactivated, will allow operators to spray water within the containment to cool the core on the floor and reduce it to a low temperature. These procedures are intended to prevent release of radiation. In the Three Mile Island event in 1979, a theoretical person standing at the plant property line during the entire event would have received a dose of approximately 2 millisieverts (200 millirem), between a chest X-ray's and a CT scan's worth of radiation. This was due to outgassing by an uncontrolled system that, today, would have been backfitted with activated carbon and HEPA filters to prevent radionuclide release. However, in the case of the Fukushima incident this design also at least partially failed: large amounts of highly radioactive water were produced, and the nuclear fuel possibly melted through the base of the pressure vessels. Cooling will take quite a while, until the natural decay heat of the corium reduces to the point where natural convection and conduction of heat to the containment walls and re-radiation of heat from the containment allow the water spray systems to be shut down and the reactor put into safe storage. The containment can be sealed with release of extremely limited offsite radioactivity and release of pressure within the containment. After a number of years for fission products to decay - probably around a decade - the containment can be reopened for decontamination and demolition. Unexpected failure modes Another scenario sees a buildup of hydrogen, which may lead to a detonation event, as happened in three reactors during the Fukushima incident. Catalytic hydrogen recombiners located within containment are designed to prevent this from occurring; however, prior to the installation of these recombiners in the 1980s, the Three Mile Island containment suffered a massive hydrogen explosion event in the 1979 accident there.
The containment withstood the pressure and no radioactivity was released. However, at Fukushima the recombiners did not work due to the absence of power, and hydrogen detonations breached the containment. Speculative failure modes One scenario consists of the reactor pressure vessel failing all at once, with the entire mass of corium dropping into a pool of water (for example, coolant or moderator) and causing extremely rapid generation of steam. The pressure rise within the containment could threaten integrity if rupture disks could not relieve the stress. Exposed flammable substances could burn, but there are few, if any, flammable substances within the containment. Another theory, called an "alpha mode" failure by the 1975 Rasmussen (WASH-1400) study, asserted that steam could produce enough pressure to blow the head off the reactor pressure vessel (RPV). The containment could be threatened if the RPV head collided with it. (The WASH-1400 report was replaced by newer, better-based studies, and now the Nuclear Regulatory Commission has disavowed them all and is preparing the overarching State-of-the-Art Reactor Consequence Analyses [SOARCA] study - see the Disclaimer in NUREG-1150.) It has not been determined to what extent a molten mass can melt through a structure (although that was tested in the Loss-of-Fluid-Test Reactor described in Test Area North's fact sheet). The Three Mile Island accident provided some real-life experience, with an actual molten core within an actual structure; the molten corium failed to melt through the Reactor Pressure Vessel after over six hours of exposure, due to dilution of the melt by the control rods and other reactor internals, validating the emphasis on defense in depth against core damage incidents. Some believe a molten reactor core could actually penetrate the reactor pressure vessel and containment structure and burn downwards into the earth beneath, to the level of the groundwater.
By 1970, there were doubts about the ability of the emergency cooling systems of a nuclear reactor to prevent a loss of coolant accident and the consequent meltdown of the fuel core; the subject proved popular in the technical and the popular presses. In 1971, in the article Thoughts on Nuclear Plumbing, former Manhattan Project (1942–1946) nuclear physicist Ralph Lapp used the term "China syndrome" to describe a possible burn-through, after a loss of coolant accident, of the nuclear fuel rods and core components melting the containment structures, and the subsequent escape of radioactive material(s) into the atmosphere and environment; the hypothesis derived from a 1967 report by a group of nuclear physicists, headed by W. K. Ergen. The geographic, planet-piercing concept of the China syndrome derives from the misperception that China is the antipode of the United States; to many Americans, it is “the other side of the world”. Moreover, the hypothetical transit of a meltdown product to the other side of the Earth (i.e. China) ignores the fact that the Earth's gravity pulls all masses towards its center: even if a meltdown product could persist in a mobile molten form long enough to reach the center of the Earth, gravity would prevent it from continuing to the other side. Other reactor types Other types of reactors have different capabilities and safety profiles than the LWR does. Advanced varieties of several of these reactors have the potential to be inherently safe. CANDU reactors CANDU reactors, a Canadian-invented deuterium-uranium design, are designed with at least one, and generally two, large low-temperature and low-pressure water reservoirs around their fuel/coolant channels. The first is the bulk heavy-water moderator (a separate system from the coolant), and the second is the light-water-filled shield tank.
These backup heat sinks are sufficient to prevent either the fuel meltdown in the first place (using the moderator heat sink), or the breaching of the core vessel should the moderator eventually boil off (using the shield tank heat sink). Other failure modes aside from fuel melt will probably occur in a CANDU rather than a meltdown, such as deformation of the calandria into a non-critical configuration. All CANDU reactors are located within standard Western containments as well. Gas-cooled reactors One type of Western reactor, known as the advanced gas-cooled reactor (or AGCR), built by the United Kingdom, is not very vulnerable to loss-of-cooling accidents or to core damage except in the most extreme of circumstances. By virtue of the relatively inert coolant (carbon dioxide), the large volume and high pressure of the coolant, and the relatively high heat transfer efficiency of the reactor, the time frame for core damage in the event of a limiting fault is measured in days. Restoration of some means of coolant flow will prevent core damage from occurring. Other types of highly advanced gas cooled reactors, generally known as high-temperature gas-cooled reactors (HTGRs) such as the Japanese High Temperature Test Reactor and the United States' Very High Temperature Reactor, are inherently safe, meaning that meltdown or other forms of core damage are physically impossible, due to the structure of the core, which consists of hexagonal prismatic blocks of silicon carbide reinforced graphite infused with TRISO or QUADRISO pellets of uranium, thorium, or mixed oxide buried underground in a helium-filled steel pressure vessel within a concrete containment. Though this type of reactor is not susceptible to meltdown, additional capabilities of heat removal are provided by using regular atmospheric airflow as a means of backup heat removal, by having it pass through a heat exchanger and rising into the atmosphere due to convection, achieving full residual heat removal. 
The VHTR is scheduled to be prototyped and tested at Idaho National Laboratory within the next decade (as of 2009) as the design selected for the Next Generation Nuclear Plant by the US Department of Energy. This reactor will use a gas as a coolant, which can then be used for process heat (such as in hydrogen production) or for the driving of gas turbines and the generation of electricity. A similar highly advanced gas cooled reactor originally designed by West Germany (the AVR reactor) and now developed by South Africa is known as the Pebble Bed Modular Reactor. It is an inherently safe design, meaning that core damage is physically impossible, due to the design of the fuel (spherical graphite "pebbles" arranged in a bed within a metal RPV and filled with TRISO (or QUADRISO) pellets of uranium, thorium, or mixed oxide within). A prototype of a very similar type of reactor has been built by the Chinese, HTR-10, and has worked beyond researchers' expectations, leading the Chinese to announce plans to build a pair of follow-on, full-scale 250 MWe, inherently safe, power production reactors based on the same concept. (See Nuclear power in the People's Republic of China for more information.) Experimental or conceptual designs Some design concepts for nuclear reactors emphasize resistance to meltdown and operating safety. The PIUS (process inherent ultimate safety) designs, originally engineered by the Swedes in the late 1970s and early 1980s, are LWRs that by virtue of their design are resistant to core damage. No units have ever been built. Power reactors, including the Deployable Electrical Energy Reactor, a larger-scale mobile version of the TRIGA for power generation in disaster areas and on military missions, and the TRIGA Power System, a small power plant and heat source for small and remote community use, have been put forward by interested engineers, and share the safety characteristics of the TRIGA due to the uranium zirconium hydride fuel used. 
The Hydrogen Moderated Self-regulating Nuclear Power Module, a reactor that uses uranium hydride as a moderator and fuel, similar in chemistry and safety to the TRIGA, also possesses these extreme safety and stability characteristics, and has attracted a good deal of interest in recent times. The liquid fluoride thermal reactor is designed to naturally have its core in a molten state, as a eutectic mix of thorium and fluorine salts. As such, a molten core is reflective of the normal and safe state of operation of this reactor type. In the event the core overheats, a metal plug will melt, and the molten salt core will drain into tanks where it will cool in a non-critical configuration. Since the core is liquid, and already melted, it cannot be damaged. Advanced liquid metal reactors, such as the U.S. Integral Fast Reactor and the Russian BN-350, BN-600, and BN-800, all have a coolant with very high heat capacity, sodium metal. As such, they can withstand a loss of cooling without SCRAM and a loss of heat sink without SCRAM, qualifying them as inherently safe. Soviet Union-designed reactors Soviet-designed RBMKs, found only in Russia and the CIS and now shut down everywhere except Russia, do not have containment buildings, are naturally unstable (tending to dangerous power fluctuations), and have ECCS systems that are considered grossly inadequate by Western safety standards. The reactor in the Chernobyl disaster was an RBMK reactor. RBMK ECCS systems have only one division, with less than sufficient redundancy within that division. Though the large core of the RBMK makes it less energy-dense than the Western LWR core, its size makes it harder to cool. The RBMK is moderated by graphite. In the presence of both steam and oxygen, at high temperatures, graphite forms synthesis gas, and with the water gas shift reaction the resultant hydrogen burns explosively. If oxygen contacts hot graphite, it will burn. The RBMK tends towards dangerous power fluctuations.
Control rods used to be tipped with graphite, a material that slows neutrons and thus speeds up the chain reaction. Water is used as a coolant, but not a moderator. If the water boils away, cooling is lost, but moderation continues. This is termed a positive void coefficient of reactivity. Control rods can become stuck if the reactor suddenly heats up while they are moving. Xenon-135, a neutron-absorbent fission product, has a tendency to build up in the core and burn off unpredictably in the event of low power operation. This can lead to inaccurate neutronic and thermal power ratings. The RBMK does not have any containment above the core. The only substantial solid barrier above the fuel is the upper part of the core, called the upper biological shield, which is a piece of concrete interpenetrated with control rods and with access holes for refueling while online. Other parts of the RBMK were shielded better than the core itself. Rapid shutdown (SCRAM) takes 10 to 15 seconds; Western reactors take 1 to 2.5 seconds. Western aid has been given to provide certain real-time safety monitoring capacities to the human staff. Whether this extends to automatic initiation of emergency cooling is not known. Training has been provided in safety assessment from Western sources, and Russian reactors have evolved in response to the weaknesses that were in the RBMK. However, numerous RBMKs still operate. It is safe to say that it might be possible to stop a loss-of-coolant event prior to core damage occurring, but that any core damage incident will probably assure massive release of radioactive materials. Further, dangerous power fluctuations are natural to the design. Upon joining the EU, Lithuania was required to shut down its two RBMKs at Ignalina NPP, as such reactors are totally incompatible with the nuclear safety standards of Europe. It will be replacing them with some safer form of reactor.
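The xenon-135 behavior mentioned above follows from a simple two-nuclide decay chain: xenon-135 is fed by the decay of iodine-135 even after power drops. A toy Bateman-equation sketch (initial inventories are assumed, normalized values; the half-lives are standard nuclear data) shows why the absorber builds up for hours before decaying away:

```python
import math

# Toy Bateman-equation sketch of the iodine-135 -> xenon-135 chain
# after shutdown (neutron flux assumed zero, so no xenon burn-off).
LAM_I = math.log(2) / 6.57    # 1/h, iodine-135 decay constant
LAM_XE = math.log(2) / 9.14   # 1/h, xenon-135 decay constant
N_I0, N_XE0 = 1.0, 0.3        # assumed normalized initial inventories

def xenon(t_hours):
    """Xe-135 inventory t hours after shutdown (Bateman solution)."""
    t = t_hours
    from_iodine = N_I0 * LAM_I / (LAM_XE - LAM_I) * (
        math.exp(-LAM_I * t) - math.exp(-LAM_XE * t))
    return from_iodine + N_XE0 * math.exp(-LAM_XE * t)

t_peak, n_peak = max(((t, xenon(t)) for t in range(73)), key=lambda p: p[1])
print(f"Xe-135 peaks ~{t_peak} h after shutdown, at {n_peak / N_XE0:.1f}x its shutdown level")
```

In an operating reactor the effect is stronger than this toy suggests, because shutting down also stops the neutron flux that was burning the xenon off; the delayed absorber peak is what complicates low-power RBMK operation.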
The MKER is a modern Russian-engineered channel-type reactor that is a distant descendant of the RBMK. It approaches the concept from a different and superior direction, optimizing the benefits and fixing the flaws of the original RBMK design. There are several unique features of the MKER's design that make it a credible and interesting option. In the event of a challenge to cooling within the core, such as a pipe break of a channel, the channel can be isolated from the plenums supplying water, decreasing the potential for common-mode failures. The lower power density of the core greatly enhances thermal regulation. Graphite moderation enhances neutronic characteristics beyond light water ranges. The passive emergency cooling system provides a high level of protection by using natural phenomena to cool the core rather than depending on motor-driven pumps. The containment structure is modern and designed to withstand a very high level of punishment. Refueling is accomplished while online, ensuring that outages are for maintenance only and are very few and far between; 97-99% uptime is a definite possibility. Lower enrichment fuels can be used, and high burnup can be achieved due to the moderator design. Neutronics characteristics have been revamped to optimize for purely civilian fuel fertilization and recycling. Due to the enhanced quality control of parts, advanced computer controls, comprehensive passive emergency core cooling system, and very strong containment structure, along with a negative void coefficient and a fast-acting rapid shutdown system, the MKER's safety can generally be regarded as being in the range of the Western Generation III reactors, and the unique benefits of the design may enhance its competitiveness in countries considering full fuel-cycle options for nuclear development. The VVER is a pressurized light water reactor that is far more stable and safe than the RBMK.
This is because it uses light water as a moderator (rather than graphite), has well-understood operating characteristics, and has a negative void coefficient of reactivity. In addition, some have been built with more than marginal containments, some have quality ECCS systems, and some have been upgraded to international standards of control and instrumentation. Present generations of VVERs (the VVER-1000) are built to Western-equivalent levels of instrumentation, control, and containment systems. Even with these positive developments, however, certain older VVER models raise a high level of concern, especially the VVER-440 V230:
- It has no containment building, only a structure capable of confining steam surrounding the RPV: a volume of thin steel, perhaps an inch or two thick, grossly insufficient by Western standards.
- It has no ECCS, and can survive at most one 4-inch pipe break (there are many pipes greater than 4 inches within the design).
- It has six steam generator loops, adding unnecessary complexity. However, the steam generator loops can apparently be isolated if a break occurs in one of them, and the plant can remain operating with one loop isolated, a feature found in few Western reactors.
The interior of the pressure vessel is plain alloy steel exposed to water, which can lead to rust. One point on which the VVER surpasses the West is its reactor water cleanup facility, built, no doubt, to deal with the enormous volume of rust within the primary coolant loop, the product of the slow corrosion of the RPV. This model is viewed as having inadequate process control systems. Bulgaria had a number of VVER-440 V230 units but opted to shut them down upon joining the EU rather than backfit them, and is instead building new VVER-1000 units. Many non-EU states, including Russia and the CIS, still operate V230 units.
Many of these states, rather than abandoning the reactors entirely, have opted to install an ECCS, develop standard procedures, and install proper instrumentation and control systems. Though confinements cannot be transformed into containments, the risk of a limiting fault resulting in core damage can be greatly reduced. The VVER-440 V213 model was built to the first set of Soviet nuclear safety standards. It possesses a modest containment building, and its ECCS systems, though not completely up to Western standards, are reasonably comprehensive. Many VVER-440 V213 units in former Soviet bloc countries have been upgraded to fully automated Western-style instrumentation and control systems, improving safety to Western levels for accident prevention, though not for accident containment, which remains modest compared to Western plants. These reactors are regarded as "safe enough" by Western standards to continue operation without major modifications, though most owners have performed major modifications to bring them up to generally equivalent levels of nuclear safety. During the 1970s, Finland built two VVER-440 V213 units to Western standards, with a large-volume full containment, world-class instrumentation and control, and an ECCS with multiply redundant and diversified components. Passive safety features such as 900-tonne ice condensers have also been installed, making these two units, safety-wise, the most advanced VVER-440s in the world. The VVER-1000 type has a definitely adequate Western-style containment, its ECCS is sufficient by Western standards, and its instrumentation and control have been markedly improved to Western 1970s-era levels.

Chernobyl disaster

In the Chernobyl disaster the fuel became non-critical when it melted and flowed away from the graphite moderator; however, it took considerable time to cool.
The molten core of Chernobyl (the part that did not vaporize in the fire) flowed in a channel created by the structure of its reactor building and froze in place before a core-concrete interaction could happen. In the basement of the reactor at Chernobyl, a large "elephant's foot" of congealed core material was found. The time delay, and the prevention of direct emission to the atmosphere, would have reduced the radiological release. If the basement of the reactor building had been penetrated, the groundwater would have been severely contaminated, and its flow could have carried the contamination far afield. The Chernobyl reactor was an RBMK type. The disaster was caused by a power excursion that led to a meltdown and extensive offsite consequences. Operator error and a faulty shutdown system led to a sudden, massive spike in the neutron multiplication rate, a sudden decrease in the neutron period, and a consequent increase in neutron population; core heat flux thus increased very rapidly to unsafe levels. This caused the water coolant to flash to steam, producing a sudden overpressure within the reactor pressure vessel (RPV) that granulated the upper portion of the core and ejected the upper plenum of the pressure vessel, along with core debris, from the reactor building in a widely dispersed pattern. The lower portion of the reactor remained somewhat intact. The graphite neutron moderator was exposed to oxygen-containing air, and heat from the power excursion, together with residual heat flux from the remaining fuel rods left without coolant, induced oxidation in the moderator; this in turn evolved more heat and contributed to the melting of the fuel rods and the outgassing of the fission products contained therein.
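The link between a shrinking reactor period and runaway power can be illustrated with the standard exponential point-kinetics approximation, P(t) = P0·exp(t/T), where T is the reactor period. The period values chosen here are illustrative assumptions, not figures from the accident record:

```python
import math

def power(p0: float, t: float, period: float) -> float:
    """Reactor power after t seconds given a constant reactor period (s)."""
    return p0 * math.exp(t / period)

# With a long period (60 s), power creeps up gently over one second...
slow = power(1.0, 1.0, 60.0)   # about a 1.7% rise

# ...while a very short period (0.01 s), as in a prompt-critical
# excursion, multiplies power astronomically in the same second.
fast = power(1.0, 1.0, 0.01)

assert slow < 1.1
assert fast > 1e40
```

This is why a "sudden decrease in the neutron period" translates directly into a heat flux rising faster than any control or cooling system can respond.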
The liquefied remains of the fuel rods flowed through a drainage pipe into the basement of the reactor building and solidified in a mass later dubbed corium, though the primary threat to public safety was the dispersed core ejecta and the gases evolved from the oxidation of the moderator. Although the Chernobyl accident had dire off-site effects, much of the radioactivity remained within the building. If the building were to fail and dust were released into the environment, the release of a given mass of fission products that had aged for twenty years would have a smaller effect than the release of the same mass of fission products (in the same chemical and physical form) that had undergone only a short cooling time (such as one hour) after the nuclear reaction was terminated. However, if a nuclear reaction were to occur again within the Chernobyl plant (for instance, if rainwater were to collect and act as a moderator), the new fission products would have a higher specific activity and thus pose a greater threat if released. To prevent a post-accident nuclear reaction, steps have been taken, such as adding neutron poisons to key parts of the basement. The effects of a nuclear meltdown depend on the safety features designed into a reactor. A modern reactor is designed both to make a meltdown unlikely and to contain one should it occur. In a modern reactor, a nuclear meltdown, whether partial or total, should be contained inside the reactor's containment structure. Thus (assuming no other major disasters occur), while a meltdown will severely damage the reactor itself, possibly contaminating the whole structure with highly radioactive material, a meltdown alone should not lead to significant radiation release or danger to the public.
In practice, however, a nuclear meltdown is often part of a larger chain of disasters (although there have been so few meltdowns in the history of nuclear power that there is not a large pool of statistical information from which to draw a credible conclusion about what "often" happens in such circumstances). For example, in the Chernobyl accident, by the time the core melted there had already been a large steam explosion, a graphite fire, and a major release of radioactive contamination (as with almost all Soviet reactors, there was no containment structure at Chernobyl). Also, before a possible meltdown, pressure may already be rising in the reactor; to prevent a meltdown by restoring core cooling, operators are permitted to reduce the pressure by releasing (radioactive) steam into the environment, which enables them to inject additional cooling water into the reactor.

Reactor design

Although pressurized water reactors are more susceptible to nuclear meltdown in the absence of active safety measures, this is not a universal feature of civilian nuclear reactors. Much of the research on civilian nuclear reactors concerns designs with passive nuclear safety features, which may be less susceptible to meltdown even if all emergency systems fail. For example, pebble bed reactors are designed so that complete loss of coolant for an indefinite period does not result in the reactor overheating. The General Electric ESBWR and Westinghouse AP1000 have passively activated safety systems. The CANDU reactor has two low-temperature, low-pressure water systems surrounding the fuel (the moderator and the shield tank) that act as back-up heat sinks and preclude meltdown and core-breaching scenarios. Fast breeder reactors are more susceptible to meltdown than other reactor types, due to the larger quantity of fissile material and the higher neutron flux inside the reactor core, which make the reaction more difficult to control.
Accidental fires are widely acknowledged to be risk factors that can contribute to a nuclear meltdown.

United States

There have been at least eight meltdowns in the history of the United States, all widely called "partial meltdowns":
- BORAX-I was a test reactor designed to explore criticality excursions and observe whether a reactor would self-limit. In the final test, it was deliberately destroyed, revealing that the reactor reached much higher temperatures than were predicted at the time.
- The reactor at EBR-I suffered a partial meltdown during a coolant flow test on November 29, 1955.
- The Sodium Reactor Experiment at the Santa Susana Field Laboratory was an experimental nuclear reactor that operated from 1957 to 1964 and, in July 1959, became the first commercial power plant in the world to experience a core meltdown.
- Stationary Low-Power Reactor Number One (SL-1) was a United States Army experimental nuclear power reactor that underwent a criticality excursion, a steam explosion, and a meltdown on January 3, 1961, killing three operators.
- The SNAP8ER reactor at the Santa Susana Field Laboratory experienced damage to 80% of its fuel in an accident in 1964.
- The partial meltdown at the Fermi 1 experimental fast breeder reactor in 1966 required the reactor to be repaired, though it never achieved full operation afterward.
- The SNAP8DR reactor at the Santa Susana Field Laboratory experienced damage to approximately a third of its fuel in an accident in 1969.
- The Three Mile Island accident in 1979, referred to in the press as a "partial core melt," led to the permanent shutdown of that reactor.

Soviet Union

In the most serious example, the Chernobyl disaster, design flaws and operator negligence led to a power excursion that subsequently caused a meltdown.
According to a report released by the Chernobyl Forum (consisting of numerous United Nations agencies, including the International Atomic Energy Agency and the World Health Organization; the World Bank; and the governments of Ukraine, Belarus, and Russia), the disaster killed twenty-eight people through acute radiation syndrome, could possibly result in up to four thousand fatal cancers at an unknown time in the future, and required the permanent evacuation of an exclusion zone around the reactor. During the Fukushima I nuclear accidents, three of the power plant's six reactors reportedly suffered meltdowns. Most of the fuel in reactor No. 1 melted, and TEPCO believes the No. 2 and No. 3 reactors were similarly affected. On May 24, 2011, TEPCO reported that all three reactors had melted down.

Meltdown incidents

There was also a fatal core meltdown at SL-1, an experimental U.S. military reactor in Idaho. Large-scale nuclear meltdowns at civilian nuclear power plants include:
- the Lucens reactor, Switzerland, in 1969.
- the Three Mile Island accident in Pennsylvania, U.S.A., in 1979.
- the Chernobyl disaster at Chernobyl Nuclear Power Plant, Ukraine, USSR, in 1986.
- the Fukushima I nuclear accidents following the earthquake and tsunami in Japan, March 2011.
Other core meltdowns have occurred at:
- NRX (military), Ontario, Canada, in 1952
- BORAX-I (experimental), Idaho, U.S.A., in 1954
- EBR-I (military), Idaho, U.S.A., in 1955
- Windscale (military), Sellafield, England, in 1957 (see Windscale fire)
- Sodium Reactor Experiment (civilian), California, U.S.A., in 1959
- Fermi 1 (civilian), Michigan, U.S.A., in 1966
- Chapelcross nuclear power station (civilian), Scotland, in 1967
- Saint-Laurent Nuclear Power Plant (civilian), France, in 1969
- A1 plant (civilian), Jaslovské Bohunice, Czechoslovakia, in 1977
- Saint-Laurent Nuclear Power Plant (civilian), France, in 1980

China Syndrome

The China syndrome is a fictional loss-of-coolant accident characterized by the severe meltdown of the core components of the reactor, which then burn through the containment vessel and the housing building, and then notionally through the crust and body of the Earth until reaching the other side, which in the United States is jokingly taken to be China. The system design of the nuclear power plants built in the late 1960s raised questions of operational safety, and in particular the concern that a severe reactor accident could release large quantities of radioactive materials into the atmosphere and environment. By 1970, there were doubts about the ability of a nuclear reactor's emergency cooling systems to prevent a loss-of-coolant accident and the consequent meltdown of the fuel core; the subject proved popular in the technical and popular presses.
In 1971, in the article "Thoughts on Nuclear Plumbing," former Manhattan Project (1942–1946) nuclear physicist Ralph Lapp used the term "China syndrome" to describe a possible burn-through, after a loss-of-coolant accident, in which the molten nuclear fuel rods and core components melt through the containment structures and radioactive material escapes into the atmosphere and environment; the hypothesis derived from a 1967 report by a group of nuclear physicists headed by W. K. Ergen. Lapp's hypothetical nuclear accident was cinematically adapted as The China Syndrome (1979). The geographic, planet-piercing concept of the China syndrome derives from the misperception that China is the antipode of the United States; to many Americans, it is "the other side of the world." Moreover, the hypothetical transit of a meltdown product to the other side of the Earth ignores the fact that the Earth's gravity pulls all masses toward its center: even if a meltdown product could persist in a mobile molten form long enough to reach the center of the Earth, momentum loss due to friction (fluid viscosity) would prevent it from continuing to the other side.

See also
- Behavior of nuclear fuel during a reactor accident
- Chernobyl compared to other radioactivity releases
- Chernobyl disaster effects
- High-level radioactive waste management
- International Nuclear Event Scale
- List of civilian nuclear accidents
- Lists of nuclear disasters and radioactive incidents
- Nuclear fuel response to reactor accidents
- Nuclear safety
- Nuclear power
- Nuclear power debate
The Song Dynasty (960–1279) of China was a period of Chinese history marked by commercial expansion, economic prosperity, and revolutionary new technologies. Private trade grew, and a market economy began to link the coastal provinces with the interior. The enormous population growth from increased agricultural cultivation in the 10th to 11th centuries doubled China's overall population, which rose above 100 million people (compared with some 50 million in the earlier Tang Dynasty). Beyond domestic profits made in China, merchants engaged in overseas trade by investing money in trading vessels that docked at foreign ports as far away as East Africa. The world's first banknote, or printed paper money (see Jiaozi), was developed on a massive scale. Combined with a unified tax system and efficient trade routes by road and canal, this meant the development of a true nationwide market system in China. Although much of the central government's revenue was consumed by the needs of military defense, the taxes imposed on China's rising commercial base refilled the monetary coffers of the Song government. For certain production items and marketed goods, the government imposed monopolies in order to boost revenues and secure resources vital to the empire's security, such as steel, iron, and the chemical components of gunpowder.

Massive Expansion of Ploughland

The Song government encouraged people to reclaim barren lands and put them under cultivation. Anyone who opened up new land and paid taxes was granted permanent possession of it. Under this policy, cultivated land in the Song Dynasty is estimated to have reached a peak of 720 million mu, a figure not surpassed by the later Ming and Qing Dynasties. The prominent statesman and economist Wang Anshi issued the Law and Decree on Irrigation in 1069, which encouraged the expansion of the irrigation system in China.
By 1076, about 10,800 irrigation projects had been completed, irrigating more than 36 million mu of public and private land. Major irrigation projects included dredging the Yellow River in northern China and creating artificial silt land in the Lake Tai region. As a result of this policy, agricultural output in China rose substantially.

Improvements in Farm Tools, Seeds and Fertilizer

The Song Dynasty inherited the curved iron plough invented in the Tang Dynasty (618–907) and described in detail in Lu Guimeng's The Classic of the Plough. The Song improved on this design and invented a special steel plough specifically for reclaiming wasteland. The wasteland plough was made not of iron but of stronger steel; its blade was shorter but thicker, and it was particularly effective at cutting through reeds and roots in the wetlands of the valley. A tool designed to ease the transplanting of seedlings, called the "seedling horse," was invented in the Song Dynasty; it was made of jujube and paulownia wood. Song farms used bamboo water wheels to harness the flow of rivers to raise water for irrigating farmland. A water wheel was about 30 chi in diameter, with ten bamboo watering tubes fastened around its perimeter. Some farmers even used three-stage watering wheels to lift water to a height of over 30 chi. High-yield Champa paddy seeds, Korean yellow paddy, Indian green peas, and crops from the Middle East were introduced into China during this period, greatly enhancing the variety of farm produce. Song farmers emphasized the importance of night soil, understanding that it could transform barren wasteland into fertile farmland. Chen Pu wrote in his Book of Agriculture of 1149: "The common saying that farmland becomes exhausted after seeding three to five years is not right; if it is frequently topped up with new soil and treated with night soil, then the land becomes more fertile."
Cotton was introduced from Hainan Island into Song China. Cotton flowers were collected, their pits removed, the fiber beaten loose with bamboo bows and drawn into yarns, and the yarns woven into a cloth called "jibei." The cotton jibei made in Hainan had great variety; the cloth was notably wide and often dyed in brilliant colors, and stitching two pieces together made a bedspread, while stitching four made a curtain. Hemp was also widely planted and made into linen. Independent mulberry farms flourished in the Mount Dongting region in Suzhou; mulberry farmers did not make a living from farmland, but instead grew mulberry trees and bred silkworms to harvest silk. Sugarcane had first appeared in China during the Warring States Period, and during the Song Dynasty the Lake Tai valley was famous for its cultivated sugarcane. The Song writer Wang Zhuo described in great detail the method of cultivating sugarcane and making cane sugar from it in his 1154 monograph "Classic of Sugar," the first book about sugar technology in China. Tea plantation acreage in the Song Dynasty was three times what it had been during the Tang Dynasty. According to a survey in 1162, tea plantations were spread across 66 prefectures in 244 counties. The Beiyuan Plantation (North Park Plantation) was an imperial tea plantation in Fujian. It produced more than forty varieties of tribute tea for the imperial court. Only the very tips of the tender tea leaves were picked, processed, and pressed into tea cakes embossed with a dragon pattern, known as "dragon tea cakes." With the growth of cities, high-value vegetable farms sprang up in the suburbs. In southern China, one mu of farmland supported on average one man, while in the north about three mu were needed for one man; one mu of vegetable farm, however, supported three men. Flower nurseries also flourished. The peony was the favourite of the rich and powerful, and up to ninety varieties were cultivated.
Jasmine and crabapple from Persia were also introduced.

Organization, investment, and trade

During the Song Dynasty, the merchant class became more sophisticated, well respected, and organized than in earlier periods of China. Their accumulated wealth often rivaled that of the scholar-officials who administered the affairs of government. As for their organizational skills, Ebrey, Walthall, and Palais state that Song Dynasty merchants "...set up partnerships and joint stock companies, with a separation of owners (shareholders) and managers." In the large cities, merchants were organized into guilds according to the type of product sold; they periodically set prices and arranged sales from wholesalers to retailers. When the government requisitioned goods or assessed taxes, it dealt with the guild heads. Although large government-run industries and large privately owned enterprises dominated the market system of urban China during the Song period, a plethora of small private businesses and entrepreneurs throughout the large suburbs and rural areas thrived on the economic boom of the period. There was even a large black market in China during the Song period, which was actually enhanced once the Jurchens conquered northern China and established the Jin Dynasty; for example, around 1160 AD there was an annual black-market smuggling of some 70 to 80 thousand cattle. There were multitudes of successful small kiln shops owned by local families, along with oil presses, wine-making shops, small local paper-making businesses, and the like. There was also room for modest economic success for the "inn keeper, the petty diviner, the drug seller, the cloth trader," and many others. Rural families that sold a large agricultural surplus to the market could not only afford to buy more charcoal, tea, oil, and wine, but could also amass enough funds to establish secondary means of production for generating more wealth.
Besides necessary agricultural foodstuffs, farming families could often produce wine, charcoal, paper, textiles, and other goods they sold through brokers. Farmers in Suzhou often specialized in raising bombyx mori to produce silk wares, while in Fujian, Sichuan, and Guangdong farmers often grew sugarcane. To ensure the prosperity of rural areas, technical applications for public works projects and improved agricultural techniques were essential. The irrigation system of China had to be furnished with multitudes of wheelwrights and square-pallet chain pumps that could lift water from lower planes to higher irrigation planes. For clothing, silken robes were worn by the wealthy and elite, while hemp was worn by the poor; by the late Song, cotton clothes were also in use. Shipment of all these materials and goods was aided by the 10th-century Chinese innovation of the canal pound lock; the Song scientist and statesman Shen Kuo (1031–1095) wrote that the building of pound lock gates at Zhenzhou (presumably Kuozhou along the Yangtze) during the 1020s and 1030s freed up the labor of five hundred workers at the canal each year, amounting to a saving of up to 1,250,000 strings of cash annually. He wrote that the old method of hauling boats over slipways limited cargo to 300 tan of rice per vessel (roughly 21 tons/21,337 kg), but after the pound locks were introduced, boats carrying 400 tan (roughly 28 tons/28,449 kg) could be used. Shen wrote that by his time (c. 1080) government boats could carry cargoes of up to 700 tan (49½ tons/50,294 kg), while private boats could hold as much as 800 bags, each weighing 2 tan (a total of 113 tons/114,813 kg). Sea trade abroad to Southeast Asia, the Hindu world, and the East African world brought merchants great fortune.
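Shen Kuo's cargo figures above are internally consistent. Using the conversion implied by the text's own "300 tan ≈ 21,337 kg" (about 71 kg per tan, an inference from the quoted numbers rather than an attested historical standard), each quoted mass can be checked:

```python
# kg per tan implied by the text's own "300 tan = 21,337 kg"
TAN_KG = 21337 / 300   # ≈ 71.1 kg

quoted = [
    (300, 21337),    # pre-lock limit per vessel
    (400, 28449),    # after pound locks were introduced
    (700, 50294),    # government boats, c. 1080
    (1600, 114813),  # private boats: 800 bags of 2 tan each
]

for tan, kg in quoted:
    estimate = tan * TAN_KG
    # every quoted figure agrees with the implied conversion to within ~1%
    assert abs(estimate - kg) / kg < 0.02
```

The small residual differences (about 1% on the larger figures) are consistent with rounding in the original conversions.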
Although the massive amount of indigenous trade along the Grand Canal, the Yangtze River and its tributaries and lakes, and other canal systems trumped the commercial gains of overseas trade, there were still many large seaports during the Song period that bolstered the economy, such as Quanzhou, Fuzhou, Guangzhou, and Xiamen. These seaports, heavily connected to the hinterland via canal, lake, and river traffic, acted as a long string of large market centers for the sale of cash crops produced in the interior. The high demand in China for foreign luxury goods and spices coming from the East facilitated the growth of Chinese maritime trade abroad during the Song period. Along with the mining industry, the shipbuilding industry of Fujian province increased its production exponentially as maritime trade gained importance and as the province's population grew dramatically. The Song capital at Hangzhou had a large canal that connected its waterways directly to the seaport at Mingzhou (modern Ningbo), the center from which many foreign imported goods were shipped out to the rest of the country. Despite the installation of fire stations and a large firefighting force, fires continued to threaten the city of Hangzhou and the businesses within it. To safeguard stored supplies and provide rented space where merchants and shopkeepers could keep surplus goods safe from city fires, the rich families of Hangzhou, palace eunuchs, and empresses had large warehouses built near the northeast walls; these warehouses were surrounded by channels of water on all sides and heavily guarded by hired night watchmen. Shipbuilders provided employment for many skilled craftsmen, and sailors found many opportunities for employment as more families had enough capital to purchase boats and invest in commercial trading abroad. Foreigners and merchants from abroad also had an impact on the economy from within China.
For example, many Muslims went to Song China not only to trade but came to dominate the import and export industry, in some cases becoming officials responsible for economic regulation. For Chinese maritime merchants, however, there was risk involved in such long overseas ventures to distant foreign trade posts and seaports. To reduce the risk of losing money on maritime trade missions abroad, [Song era] investors usually divided their investment among many ships, and each ship had many investors behind it. One observer thought the eagerness to invest in overseas trade was leading to an outflow of copper cash. He wrote, "People along the coast are on intimate terms with the merchants who engage in overseas trade, either because they are fellow-countrymen or personal acquaintances...[They give the merchants] money to take with them on their ships for purchase and return conveyance of foreign goods. They invest from ten to a hundred strings of cash, and regularly make profits of several hundred percent." Wealthy landholders were still typically those able to educate their sons to the highest degree, so small groups of prominent families in any given county gained national attention for having sons travel far off to be educated and appointed as ministers of state. Yet downward social mobility was always an issue in the matter of divided inheritance. Suggesting ways to increase a family's property, Yuan Cai (1140–1190) wrote in the late 12th century that those who obtained office with decent salaries should not convert their money to gold and silver, but instead could watch its value grow through investment: For instance, if he had 100,000 strings worth of gold and silver and used this money to buy productive property, in a year he would gain 10,000 strings; after ten years or so, he would have regained the 100,000 strings, and what would be divided among the family would be interest.
If it were invested in a pawnbroking business, in three years the interest would equal the original capital. He would still have the 100,000 strings, and the rest, being interest, could be divided. Moreover, it could be doubled again in another three years, ad infinitum. Shen Kuo (1031–1095), a minister of finance, was of the same opinion; in his understanding of the velocity of circulation, he stated in 1077: The utility of money derives from circulation and loan-making. A village of ten households may have 100,000 coins. If the cash is stored in the household of one individual, even after a century, the sum remains 100,000. If the coins are circulated through business transactions so that every individual of the ten households can enjoy the utility of the 100,000 coins, then the utility will amount to that of 1,000,000 cash. If circulation continues without stop, the utility of the cash will be beyond [reckoning]. The author Zhu Yu wrote in his Pingzhou Ketan (萍洲可談; Pingzhou Table Talks) of 1119 AD about the organization, maritime practices, and government standards of seagoing vessels, their merchants, and sailing crews. His book stated: According to government regulations concerning seagoing ships, the larger ones can carry several hundred men, and the smaller ones may have more than a hundred men on board. One of the most important merchants is chosen to be Leader (Gang Shou), another is Deputy Leader (Fu Gang Shou), and a third is Business Manager (Za Shi). The Superintendent of Merchant Shipping gives them an unofficially sealed red certificate permitting them to use the light bamboo for punishing their company when [necessary]. Should anyone die at sea, his property becomes forfeit to the government...The ship's pilots are acquainted with the configuration of the coasts; at night they steer by the stars, and in the day-time by the sun. In dark weather they look at the south-pointing needle (i.e. the magnetic compass).
They also use a line a hundred feet long with a hook at the end, which they let down to take samples of mud from the sea-bottom; by its (appearance and) smell they can determine their [whereabouts]. Foreign travelers to China often remarked on the economic strength of the country. The later Muslim Moroccan Berber traveler Ibn Battuta (1304–1377) wrote about many of his travel experiences in places across the Eurasian world, including China at its farthest eastern extremity. Describing lavish Chinese ships holding palatial cabins and saloons, along with the life of Chinese ship crews and captains, he wrote: Among the inhabitants of China there are those who own numerous ships, on which they send their agents to foreign [parts]. For nowhere in the world are there to be found people richer than the Chinese.
Steel and iron industries
Accompanying the widespread printing of paper money were the beginnings of what one might term an early Chinese industrial revolution. For example, the historian Robert Hartwell estimated that per capita iron output rose sixfold between 806 and 1078, such that by 1078 China was producing 127,000,000 kg (125,000 t) of iron per year. However, the historian Donald Wagner questions the method Hartwell used to estimate these figures (i.e. using Song tax and quota receipts). In the smelting process, which used huge bellows driven by waterwheels, massive amounts of charcoal were consumed, leading to widespread deforestation in northern China. However, by the end of the 11th century the Chinese discovered that bituminous coke could replace the role of charcoal, and hence many acres of forested land in northern China were spared from the steel and iron industry with this switch of resources. Iron and steel of this period were used to mass-produce ploughs, needles, pins, nails for ships, musical cymbals, chains for suspension bridges, Buddhist statues, and other routine items for an indigenous mass market.
Iron was also a necessary manufacturing component for the production processes of salt and copper. Many newly constructed canals linked the major iron and steel production centers to the capital city's main market. This trade also extended to the outside world, expanding greatly with the high level of Chinese maritime activity abroad during the Southern Song period. Through many written petitions to the central government by regional administrators of the Song Empire, historians can piece evidence together to approximate the size and scope of the Chinese iron industry during the Song era. The famed magistrate Bao Qingtian (999–1062) wrote of the iron industry at Hancheng, Tongzhou Prefecture, along the Yellow River in what is today eastern Shaanxi province, with iron-smelting households that were overseen by government regulators. He wrote that 700 such households were acting as iron smelters, with 200 having the most adequate amount of government support, such as charcoal supplies and skilled craftsmen (the iron households hired local unskilled labor themselves). Bao's complaint to the throne was that government laws against private smelting in Shaanxi hindered the profits of the industry, so the government finally heeded his plea and lifted the ban on private smelting for Shaanxi in 1055. The result was an increase in profit (with lower prices for iron) as well as production; 100,000 jin of iron was produced annually in Shaanxi in the 1040s AD, increasing to 600,000 jin produced annually by the 1110s, aided by the revival of the industry in 1112. Although the iron smelters of Shaanxi were managed and supplied by the government, there were many independent smelters operated and owned by rich families.
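As illustrative arithmetic (mine, not the source's), the sixfold rise in Shaanxi's annual iron output (100,000 jin in the 1040s to 600,000 jin by the 1110s) implies only a modest compound growth rate when spread over the intervening decades; the roughly seventy-year span is an assumption:

```python
# Illustrative arithmetic: compound annual growth rate (CAGR) implied by
# the Shaanxi iron figures quoted in the text. The ~70-year span between
# the 1040s and the 1110s is an assumption.
def cagr(start: float, end: float, years: float) -> float:
    """Return the compound annual growth rate over the given span."""
    return (end / start) ** (1.0 / years) - 1.0

growth = cagr(100_000, 600_000, 70)  # jin per year, over ~70 years
print(f"Implied growth: {growth:.2%} per year")  # roughly 2.6% per year
```

Even a sixfold expansion, in other words, is consistent with slow, sustained growth rather than a sudden leap.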
While acting as governor of Xuzhou in 1078, the famous Song poet and statesman Su Shi (1037–1101) wrote that in the Liguo Industrial Prefecture under his administered region, there were 36 iron smelters run by different local families, each employing a work force of several hundred people to mine ore, produce their own charcoal, and smelt iron. During the Song period, there was a great deal of organized labor and bureaucracy involved in the extraction of resources from the various provinces of China. The production of sulfur, which the Chinese called 'vitriol liquid', was extracted from pyrite and used for medicinal purposes as well as for the creation of gunpowder. This was done by roasting iron pyrites, converting the sulphide, as the ore was piled up with coal briquettes in an earthenware furnace with a type of still-head to send the sulphur over as vapour, after which it would solidify and crystallize. The historical text of the Song Shi (History of the Song, compiled in 1345 AD) stated that the major producer of sulfur in the Tang and Song dynasties was the Jin Zhou sub-provincial administrative region (modern Linfen in southern Shanxi). Bureaucrats appointed to the region managed the industrial processing and sale of the sulfur, and the amount created and distributed in the years 996 to 997 alone was 405,000 jin (roughly 200 tons). It was recorded that in 1076 AD the Song Dynasty government held a strict commercial monopoly on sulfur production, and if dye houses and government workshops sold their products to private dealers on the black market, they were subject to penalties meted out by government authorities. Even before this point, in 1067 AD, the Song government had issued an edict forbidding the people living in Shanxi and Hebei from selling foreigners any products containing saltpetre and sulfur. This act by the Song government displayed its fears of the grave potential of gunpowder weapons being developed by Song China's territorial enemies as well (i.e.
the Tanguts and Khitans). Since Jin Zhou was in close proximity to the Song capital at Kaifeng, the latter became the largest producer of gunpowder during the Northern Song period. By deriving sulfur from pyrite instead of natural sulfur (along with enhanced refining techniques), the Chinese were able to shift the use of gunpowder from an incendiary use into an explosive one for early bombs. There were large manufacturing plants in the Song Dynasty for the purpose of making 'fire-weapons' employing the use of gunpowder, such as fire lances and fire arrows. While engaged in a war with the Mongols, in the year 1259 the official Li Zengbo wrote in his Ko Zhai Za Gao, Xu Gao Hou that the city of Qingzhou was manufacturing one to two thousand strong iron-cased bomb shells a month, dispatching to Xiangyang and Yingzhou about ten to twenty thousand such bombs at a time. One of the main armories for the storage of gunpowder and weapons was located at Weiyang, which accidentally caught fire and produced a massive explosion in 1280 AD. This arrangement of allowing competitive industry to flourish in some regions while imposing strict government-regulated and monopolized production and trade in others was not exclusive to iron manufacturing. In the beginning of the Song Dynasty, the government supported competitive silk mills and brocade workshops in the eastern provinces and in the capital city of Kaifeng. However, at the same time the government established strict legal prohibitions on the merchant trade of privately produced silk in Sichuan. This prohibition dealt an economic blow to Sichuan that caused a small rebellion (which was subdued), yet Song-era Sichuan was well known for its independent industries and cultivated oranges. The reforms of the Chancellor Wang Anshi (1021–1086) sparked heated debate among ministers of court when he nationalized the industries manufacturing, processing, and distributing tea, salt, and wine.
The state monopoly on Sichuan tea was the prime source of revenue for the state's purchase of horses in Qinghai for the Song army's cavalry forces. The restrictions on the private manufacture and trade of salt were even criticized in a famous poem by Su Shi. As the opposing politically charged factions at court alternately gained advantage and lost favor, Wang Anshi's reforms were continually abandoned and reinstated. Despite this political quarrel, the Song Empire's main source of revenue continued to come from state-managed monopolies and indirect taxes. As for private entrepreneurship, great profits could still be pursued by merchants in the luxury item trades and specialized regional production. For example, the silk producers of Raoyang County, Shenzhou Prefecture, in southern Hebei province were especially known for producing silken headwear for the Song emperor and high court officials in the capital.
Copper resources and receipts of deposit
The root of the development of the banknote goes back to the earlier Tang Dynasty (618–907), when the government outlawed the use of bolts of silk as currency, which increased the use of coinage as money. By the year 1085 the output of copper currency had reached 6 billion coins a year, up from 5.86 billion in 1080 (compared to just 327 million coins minted annually in the Tang Dynasty's prosperous Tianbao period of 742–755, and only 220 million coins minted annually from 118 BC to 5 AD during the Han Dynasty). This expansion of the economy was unprecedented in China; the output of coinage in the earlier year of 997 AD had been only 800 million coins a year. In the year 1120 alone, the Song government collected 18,000,000 ounces of silver in taxes. Many 9th-century Tang era merchants, avoiding the weight and bulk of so many copper coins in each transaction, turned to trading receipts from deposit shops where goods or money had been left previously.
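To put the minting figures above side by side (a back-of-the-envelope comparison of my own, using only the numbers quoted in the text), normalizing each era's annual coin output against the Han-era baseline makes the scale of the Song expansion vivid:

```python
# Illustrative comparison of the annual coin-output figures quoted in the
# text, normalized against the Han-era baseline of 220 million coins/year.
mint_output = {
    "Han (118 BC - 5 AD)":    220e6,
    "Tang Tianbao (742-755)": 327e6,
    "Song, 997 AD":           800e6,
    "Song, 1080 AD":          5.86e9,
    "Song, 1085 AD":          6.0e9,
}

baseline = mint_output["Han (118 BC - 5 AD)"]
for era, coins in mint_output.items():
    print(f"{era:>24}: {coins / 1e6:>8.0f} million coins/year "
          f"({coins / baseline:.1f}x Han output)")
```

By this reckoning the 1085 output was more than twenty-seven times the Han-era rate, and seven and a half times the Song's own output of 997 AD.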
Merchants would deposit copper currency into the stores of wealthy families and prominent wholesalers, whereupon they would receive receipts that could be cashed in a number of nearby towns by accredited persons. Beginning in the 10th century, the early Song government issued its own receipts of deposit, yet this was restricted mainly to its monopolized salt industry and trade. The first official regional paper-printed money can be traced back to the year 1024, in Sichuan. Robert Temple says that the Sichuan bills can be traced back to 1023; before that year, sixteen private businesses issued notes of exchange, but in that year the Song government took control of this enterprise. Although the output of copper currency had expanded immensely by 1085, some fifty copper mines were shut down between the years 1078 and 1085 (Ch'en, 615). Although there were on average more copper mines operating in Northern Song China than in the previous Tang Dynasty, this situation was reversed during the Southern Song, with a sharp decline and depletion of mined copper deposits by 1165 (Ch'en, 615–616). Even though copper cash was abundant in the late 11th century, Chancellor Wang Anshi's tax substitution for corvée labor and government takeover of agricultural finance loans meant that people now had to find additional cash, driving up the price of copper money, which became scarce (Ch'en, 619). To make matters worse, large amounts of government-issued copper currency exited the country via international trade, while the Liao and Western Xia pursued the exchange of their iron-minted coins for Song copper coins (Ch'en, 621). As evidenced by an 1103 decree, the Song government became cautious about its outflow of iron currency into the Liao Empire when it ordered that the iron be alloyed with tin in the smelting process, thus depriving the Liao of a chance to melt down the currency to make iron weapons.
The government attempted to prohibit the use of copper currency in border regions and in seaports, but the Song-issued copper coin became common in the Liao, Western Xia, Japanese, and Southeast Asian economies. The Song government would turn to other types of material for its currency in order to ease the demand on the government mint, including the issuing of iron coinage and paper banknotes (Ch'en, 620). In the year 976, the percentage of issued currency using copper coinage was 65%; after 1135, this had dropped significantly to 54%, reflecting a government attempt to debase the copper coinage.
The world's first paper money
The central government soon observed the economic advantages of printing paper money, granting several of the deposit shops a monopoly right over the issuance of these certificates of deposit. By the early 12th century, the amount of banknotes issued in a single year amounted to an annual rate of 26 million strings of cash coins. By the 1120s the central government officially stepped in and produced its own state-issued paper money (using woodblock printing). Even before this point, the Song government was amassing large amounts of paper tribute. It was recorded that each year before 1101 AD, the prefecture of Xinan (modern Xi-xian, Anhui) alone would send 1,500,000 sheets of paper in seven different varieties to the capital at Kaifeng. In that year of 1101, the Emperor Huizong of Song decided to lessen the amount of paper taken in the tribute quota, because it was causing detrimental effects and creating heavy burdens on the people of the region. However, the government still needed masses of paper product for the exchange certificates and the state's new issues of paper money. For the printing of paper money alone, the Song court established several government-run factories in the cities of Huizhou, Chengdu, Hangzhou, and Anqi.
The size of the workforce employed in these paper money factories was quite large, as it was recorded in 1175 AD that the factory at Hangzhou alone employed more than a thousand workers a day. However, the government issues of paper money were not yet a nationwide standard of currency at that point; issues of banknotes were limited to regional zones of the empire and were valid for use only within a designated and temporary limit of three years' time. This geographic limitation changed between the years 1265 and 1274, when the late Southern Song government finally produced a nationwide standard currency of paper money, once its widespread circulation was backed by gold or silver. The range of varying values for these banknotes was perhaps from one string of cash to one hundred at the most. Ever since 1107, the government had printed money in no fewer than six ink colors and printed notes with intricate designs, and sometimes even with a mixture of unique fiber in the paper, to avoid counterfeiting. The subsequent Yuan, Ming, and Qing dynasties would issue their own paper money as well. Even the Southern Song's contemporary, the Jin dynasty to the north, caught on to this trend and issued its own paper money. An archeological find includes a printing plate dated to the year 1214, which produced notes that measured 10 cm by 19 cm in size and were worth a hundred strings of 80 cash coins. The Jin-issued paper money bore the number of the series and a warning label that counterfeiters would be decapitated, and the denouncer rewarded with three hundred strings of cash.
Urban employment and businesses
Within the cities there were a multitude of professions and places of work to choose from, if one were not strictly inheriting a profession of his paternal line. Sinologist historians are fortunate enough to have a wide variety of written sources describing minute details about each location and the businesses within the cities of Song China.
For example, in the alleys and avenues around the East Gate of the Xiangguo Temple in Kaifeng, the historian Stephen H. West quotes one source: Along the Temple Eastgate Avenue...are to be found shops specializing in cloth caps with pointed tails, belts and waiststraps, books, caps and flowers, as well as the vegetarian tea meal of the Ding family...South of the temple are the brothels of Manager's Alley...The nuns and the brocade workers live in Embroidery Alley...On the north is Small Sweetwater Alley...There are a particularly large number of Southern restaurants inside the alley, as well as a plethora of brothels. Similarly, for the "Pleasure District" along the Horse Guild Avenue, near a Zoroastrian temple, West quotes the same source, the Dongjing meng Hua lu: In addition to the household gates and shops that line the two sides of New Fengqiu Gate Street...military encampments of the various brigades and columns [of the Imperial Guard] are situated in facing pairs along approximately ten li of the approach to the gate. Other wards, alleys, and confined open spaces crisscross the area, numbering in the tens of thousands—none knows their real number. In every single place, the gates are squeezed up against each other, each with its own tea wards, wineshops, stages, and food and drink. Normally the small business households of the marketplace simply purchase [prepared] food and drink at food stores; they do not cook at home. For northern food there are the Shi Feng style dried meat cubes...made of various stewed items...for southern food, the House of Jin at Temple Bridge...and the House of Zhou at Ninebends...are acknowledged to be the finest. The night markets close after the third watch only to reopen at [the fifth watch]. West points out that Kaifeng shopkeepers rarely had time to eat at home, so they chose to go out and eat at a variety of places such as restaurants, temples, and food stalls.
Restaurants capitalized on this new clientele, while restaurants that catered to regional cooking targeted customers such as merchants and officials who came from regions of China where cuisine styles and flavors were drastically different from those commonly served in the capital. The pleasure district mentioned above—where stunts, games, theatrical stage performances, taverns, and singing-girl houses were located—was teeming with food stalls where business could be had virtually all night. West makes a direct connection between the success of the theatre industry and the food industry in the cities. Of the fifty-some theatres within the pleasure districts of Kaifeng, four could entertain audiences of several thousand each, drawing huge crowds which would then give nearby businesses an enormous potential customer base. Besides food, traders in eagles and hawks and precious paintings, as well as shops selling bolts of silk and cloth, jewelry of pearls, horn, gold and silver, hair ornaments, combs, caps, scarves, and aromatic incense, thrived in the city. The Song Dynasty actively promoted overseas trade. About fifty countries carried out overseas trade with the Song Dynasty, among them Ceylon, Langkasuka, Mait, Samboja, Borneo, Kelantan, Champa, Chenla, Bengtrao, Java, India, Calicut, Lambri, Bengal, Kurum, Gujara, Mecca, Misr, Baghdad, Iraq, Aman, the Almoravid dynasty, Sicily, Morocco, Tanzania, Somalia, Ryukyu, Korea, and others. Pearls, ivory, rhinoceros horns, frankincense, agalloch eaglewood, coral, agate, hawksbill turtle shell, gardenia, and rose were imported from the Arabs and Samboja; herbal medicine from Java; costus root from Foloan (Kuala Sungai Berang); cotton cloth and cotton yarn from Mait; and ginseng, silver, copper, and quicksilver from Korea.
To promote overseas trade and maximize government profits from the control of imported goods, in 971 the government established a Maritime Trade Supervisorate at Guangzhou; in 999 it established a second one at Hangzhou, and a third at Mingzhou (now Ningbo city), followed by Quanzhou (Zaitung) in 1079, Huating County (now part of Shanghai) in 1113, and Jiangyin in 1145. Initially the Maritime Trade Supervisorate was subordinate to the Department of Transportation or a prefecture official; later it was made into a separate agency with its own supervisor. The roles of the Maritime Trade Supervisorate included:
- Taxation of imported goods. The tax rate varied over the Song Dynasty, from 10% to as high as 40%; however, during the reign of Emperor Shenzong (1048–1085), the tax rate for imports was lowered to 6.67%. The tax was collected in goods in kind, not money.
- Government purchase and sale of imported goods. In 976, all imported goods from overseas merchants had to be sold only to the government; private sales were prohibited, the penalty for violation depended on the quantity of goods involved, and the highest penalty was tattooing of the face and forced labor. Later the 100% rule was relaxed somewhat. The Maritime Trade Supervisorate purchased a portion of the finest quality goods, for example 60% for pearls and 40% for rhinoceros horn; the lower-quality leftover goods were allowed to be traded in the market. The purchase rate applied to goods after taxation and was paid in money, not according to market price but according to a government-assessed "fair value". In the Southern Song Dynasty, the Maritime Trade Supervisorates were short of funds and did not pay on time, causing huge losses in profits for overseas merchants; the volume of incoming ships also dropped.
- Issuing foreign trade permits for local merchants.
- Ebrey, 156. - Ebrey, 167.
- Qi Xia, Economy of the Song Dynasty, Part I, Chapter 1, page 65 ISBN 7-80127-462-8/F - Qi Xia, Economy of the Song Dynasty, p86 - Qi Xia, Economy of the Song Dynasty, p84-96 - Robert Temple, The Genius of China, p19 - Qi Xia, p135 - Qi Xia, 156 - Zhou Qufei, p228 - Ji Xianlin, - Qi Xia 856 - Xiong Fan (Song Dynasty) Xuanhe Beiyuan Dragon Tea - Qi Xia,180 - Ebrey et al., 157. - Embree, 339-340. - Ebrey, Cambridge Illustrated History of China, - Needham, Volume 4, Part 2, 347. - Needham, Volume 4, Part 3, 352. - Rossabi, 77–78. - Fairbank, 89. - Rossabi, 79. - Fairbank, 92. - Walton, 89. - Gernet, 34-37. - Gernet, 37. - Ebrey, Cambridge Illustrated History of China, - BBC article about Islam in China - Needham, Volume 4, Part 3, 465. - Shen, 158. - Ebrey et al., 159. - Ebrey et al., 162. - Yang, 47. - Needham, Volume 4, Part 1, 279. - Needham, Volume 4, Part 3, 470. - Ebrey et al., 158. - Wagner (2001), 175–197. - Ebrey, Cambridge Illustrated History of China, - Embree, 339. - Wagner, 181. - Wagner, 182. - Wagner 182-183. - Wagner, 178-179. - Yunming, 487-489. - Yunming, 489. - Needham, Volume 5, Part 7, 126. - Yunming, 489-490. - Needham, Volume 5, Part 7, 173-174. - Needham, Volume 5, Part 7, 209-210. - Needham, Volume 4, Part 2, 23. - Ebrey, 164. - Smith, 77. - Gernet, 18. - Friedman et al., 3. - Sadao, 588. - Bowman, 105. - Ebrey, Cambridge Illustrated History of China, - Gernet, 80. - Morton, 97. - Benn, 55. - Temple, 117. - Bol (2001), p. 111. - Ebrey et al., 156. - Needham, Volume 5, Part 1, 47. - Needham, Volume 5, Part 1, 48. - Temple, 117–118. - Gernet, 80-81. - West, 71. - West, 72. - West, 72–73. - West, 74. - Gernet, 133. - West, 70. - Gernet, 184. - West, 76. - West, 75–76. - Zhao Rukua (赵汝适 Song Dynasty), Zhufanzhi (诸番志) - Zhao Yanwei (赵彦卫Song dynasty) Yun Lu Man Chao (云麓漫钞) p88 Zhong Hua Book Co ISBN 7101012256 - Qi Xia, p1175-1178 - Guan Luqian, 140-142 - Guan, p143 - Benn, Charles (2002). China's Golden Age: Everyday Life in the Tang Dynasty. 
Oxford: Oxford University Press. ISBN - Bol, Peter K. "Whither the Emperor? Emperor Huizong, the New Policies, and the Tang-Song Transition," Journal of Song and Yuan Studies, Vol. 31 (2001), pp. 103-34. - Bowman, John S. (2000). Columbia Chronologies of Asian History and Culture. New York: Columbia University Press. - Ch'en, Jerome. "Sung Bronzes—An Economic Analysis," Bulletin of the School of Oriental and African Studies (Volume 28, Number 3, 1965): 613–626. - Ebrey, Walthall, Palais (2006). East Asia: A Cultural, Social, and Political History. Boston: Houghton Mifflin. - Ebrey, Patricia Buckley (1999). The Cambridge Illustrated History of China. Cambridge: Cambridge University Press. ISBN 0-521-43519-6 (hardback); ISBN 0-521-66991-X (paperback). - Embree, Ainslie Thomas (1997). Asia in Western and World History: A Guide for Teaching. Armonk: ME Sharpe, Inc. - Fairbank, John King and Merle Goldman (1992). China: A New History; Second Enlarged Edition (2006). Cambridge, MA; London: The Belknap Press of Harvard University Press. ISBN - Friedman, Edward, Paul G. Pickowicz, Mark Selden. (1991). Chinese Village, Socialist State. New Haven: Yale University Press. ISBN - Gernet, Jacques (1962). Daily Life in China on the Eve of the Mongol Invasion, 1250-1276. Stanford: Stanford University Press. ISBN 0-8047-0720-0 - Hartwell, Robert (1966). "Markets, Technology and the Structure of Enterprise in the Development of the Eleventh Century Chinese Iron and Steel Industry," Journal of Economic History. - Ji Xianlin (1997). History of Cane Sugar in China. ISBN 7-80127-284-6/K - Morton, Scott and Charlton Lewis (2005). China: Its History and Culture: Fourth Edition. New York: McGraw-Hill, Inc. - Needham, Joseph (1986). Science and Civilisation in China: Volume 4, Physics and Physical Technology, Part 1, Physics. Cambridge University Press. - Needham, Joseph (1986). Science and Civilisation in China: Volume 4, Physics and Physical Technology, Part 2, Mechanical Engineering.
Cambridge University Press. - Needham, Joseph (1986). Science and Civilisation in China: Volume 4, Physics and Physical Technology, Part 3, Civil Engineering and Nautics. Cambridge University Press. - Needham, Joseph (1986). Science and Civilisation in China: Volume 5, Part 1. Cambridge University Press. - Needham, Joseph (1986). Science and Civilisation in China: Volume 5, Chemistry and Chemical Technology, Part 7, Military Technology; the Gunpowder Epic. Cambridge University Press. - Qi Xia (1999), 漆侠, 中国经济通史. 宋代经济卷 /Zhongguo jing ji tong shi. Song dai jing ji juan [Economy of the Song Dynasty] vol I, II ISBN - Rossabi, Morris (1988). Khubilai Khan: His Life and Times. Berkeley: University of California Press. ISBN - Sadao, Nishijima. (1986). "The Economic and Social History of Former Han," in Cambridge History of China: Volume I: the Ch'in and Han Empires, 221 B.C. – A.D. 220, 545-607. Edited by Denis Twitchett and Michael Loewe. Cambridge: Cambridge University Press. ISBN 0521243270. - Shen, Fuwei (1996). Cultural flow between China and the outside world. Beijing: Foreign Languages Press. ISBN - Smith, Paul J. (1993). "State Power and Economic Activism during the New Policies, 1068–1085: The Tea and Horse Trade and the 'Green Sprouts' Loan Policy," in Ordering the World: Approaches to State and Society in Sung Dynasty China, ed. Robert P. Hymes, 76–128. Berkeley: University of California Press. ISBN - Temple, Robert. (1986). The Genius of China: 3,000 Years of Science, Discovery, and Invention. With a foreword by Joseph Needham. New York: Simon and Schuster, Inc. ISBN 0671620282. - Wagner, Donald B. "The Administration of the Iron Industry in Eleventh-Century China," Journal of the Economic and Social History of the Orient (Volume 44, 2001): 175-197. - Walton, Linda (1999). Academies and Society in Southern Sung China. Honolulu: University of Hawaii Press. - West, Stephen H.
"Playing With Food: Performance, Food, and The Aesthetics of Artificiality in The Sung and Yuan," Harvard Journal of Asiatic Studies (Volume 57, Number 1, 1997): - Yang, Lien-sheng. "Economic Justification for Spending-An Uncommon Idea in Traditional China," Harvard Journal of Asiatic Studies (Volume 20, Number 1/2, 1957): 36–52. - Yunming, Zhang (1986). "Ancient Chinese Sulfur Manufacturing Processes," Isis: The History of Science Society. Chicago: University of Chicago Press. - Zhou Qufei (1178). Ling Wai Dai Da (Report from Lingnan), Zhong Hua Book Co. ISBN 7-101-01665-0/K
<urn:uuid:e2986220-9e9a-4d11-a5d2-70e791695a52>
CC-MAIN-2013-20
http://maps.thefullwiki.org/Economy_of_the_Song_Dynasty
2013-05-24T01:50:36
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.920623
10,300
4.03125
4
In the muted light of an open doorway and a rosette window, two Jewish men are shown walking through the entry porch of the Regensburg synagogue. Altdorfer made two etchings of the temple just before it was destroyed on February 22, 1519: this view and one of the interior nave. Emperor Maximilian had long been a protector of the Jews in the imperial cities, extracting from them substantial taxes in exchange. Within weeks of his death, however, the city of Regensburg, which blamed its economic troubles on its prosperous Jewish community, expelled the Jews. Altdorfer, a member of the Outer Council, was one of those chosen to inform the Jews that they had two hours to empty out the synagogue and five days to leave the city. The date of the demolition inscribed at the top of the print suggests that Altdorfer made the preparatory sketches, as well as the etchings themselves, with the knowledge that the building was to be destroyed. The prints appear to have been quickly produced, quite possibly during the five days prior to the temple's destruction: the plate was not evenly etched, particularly in the areas of dense hatching, where the individual lines lose clarity. In addition, the slightly tipsy vaults appear to have been traced freehand rather than with a compass. Despite the seemingly sensitive portrayal, the print was not intended as a sympathetic rendering of an aspect of Jewish culture, but rather as a much more dispassionate recording of the site. It is thus the first portrait of an actual architectural monument in European printmaking.
<urn:uuid:0c7c4a72-4a4d-4d1b-a40c-c6ea93bcc2f1>
CC-MAIN-2013-20
http://metmuseum.org/Collections/search-the-collections/90003246?high=on&rpp=15&pg=1&rndkey=20120723&ft=*&who=Albrecht+Altdorfer&pos=1
2013-05-24T01:51:20
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.979777
323
4.03125
4
Analog Input Channels Temperature is a measure of the average kinetic energy of the particles in a sample of matter expressed in units of degrees on a standard scale. You can measure temperature in many different ways that vary in equipment cost and accuracy. The most common types of sensors are thermocouples, RTDs, and thermistors. Figure 1. Thermocouples are inexpensive and can operate over a wide range of temperatures. Thermocouples are the most commonly used temperature sensors because they are relatively inexpensive yet accurate sensors that can operate over a wide range of temperatures. A thermocouple is created when two dissimilar metals touch and the contact point produces a small open-circuit voltage as a function of temperature. You can use this thermoelectric voltage, known as Seebeck voltage, to calculate temperature. For small changes in temperature, the voltage is approximately linear. You can choose from different types of thermocouples designated by capital letters that indicate their compositions according to American National Standards Institute (ANSI) conventions. The most common types of thermocouples include B, E, K, N, R, S, and T. For more information on thermocouples, read The Engineer's Toolbox for Thermocouples. Figure 2. RTDs are made of metal coils and can measure temperatures up to 850 °C. A platinum RTD is a device made of coils or films of metal (usually platinum). When heated, the resistance of the metal increases; when cooled, the resistance decreases. Passing current through an RTD generates a voltage across the RTD. By measuring this voltage, you can determine its resistance and, thus, its temperature. The relationship between resistance and temperature is relatively linear. Typically, RTDs have a resistance of 100 Ω at 0 °C and can measure temperatures up to 850 °C. For more information on RTDs, read The Engineer's Toolbox for RTDs. Figure 3. Passing current through a thermistor generates a voltage proportional to temperature. 
A thermistor is a piece of semiconductor made from metal oxides, pressed into a small bead, disk, wafer, or other shape, sintered at high temperatures, and finally coated with epoxy or glass. As with RTDs, you can pass a current through a thermistor and read the voltage across it to determine its temperature. However, unlike RTDs, thermistors have a higher resistance (2,000 to 10,000 Ω) and a much higher sensitivity (~200 Ω/°C), allowing them to achieve higher accuracy within a limited temperature range (up to 300 °C). For more information on thermistors, read The Engineer's Toolbox for Thermistors.
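The resistance-to-temperature conversions described above can be sketched in a few lines of code. This is an illustrative sketch, not part of the original tutorial: the function names are my own, the RTD conversion uses the common ~0.385 Ω/°C near-linear sensitivity of a 100 Ω platinum RTD, and the thermistor conversion uses the Steinhart-Hart model with commonly quoted coefficients for a generic 10 kΩ NTC part (real coefficients come from the manufacturer's datasheet).

```python
import math

def pt100_temperature_c(resistance_ohms):
    """Approximate temperature of a 100-ohm platinum RTD, using the
    near-linear sensitivity of ~0.385 ohm per degree C. Accurate to
    within a few degrees over moderate temperature ranges."""
    return (resistance_ohms - 100.0) / 0.385

def thermistor_temperature_c(resistance_ohms,
                             a=1.009249522e-3,
                             b=2.378405444e-4,
                             c=2.019202697e-7):
    """Steinhart-Hart model for an NTC thermistor. Default coefficients
    are commonly quoted values for a generic 10 kOhm thermistor and are
    assumptions here; use the datasheet values for a real sensor."""
    ln_r = math.log(resistance_ohms)
    inv_t_kelvin = a + b * ln_r + c * ln_r ** 3  # 1/T in kelvin
    return 1.0 / inv_t_kelvin - 273.15

print(round(pt100_temperature_c(138.5), 1))        # ~100 °C
print(round(thermistor_temperature_c(10_000), 1))  # ~25 °C
```

In practice you would first derive the resistance from the measured voltage and excitation current (R = V/I) before applying either conversion.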
Mystery Strategy for Elementary Students

Using the premise of a mystery to solve, elementary students act as history detectives as they explore a historical question and analyze carefully chosen clues to formulate and test hypotheses. This strategy draws on our need to solve mysteries. Students are given an opportunity to be active learners as they solve a historical mystery. The strategy reflects what historians do and the process of historical inquiry: students must work with evidence, form hypotheses, test those hypotheses, and report their findings. The goals of the mystery strategy are to learn to:
1. gather, organize, and process information;
2. formulate and test hypotheses;
3. think creatively and analytically to solve problems; and
4. develop, defend, and present solutions to problems.

1. Choose a topic that contains a mystery, such as “Why did the American beaver almost become extinct in the 1840s?” Other examples of appropriate historical mysteries include: “How did flooding in Mississippi in 1927 hinder the Civil Rights Movement?”; “Who really invented the cotton gin?”; and “Was the Boston Massacre really a massacre?”

2. Gather primary and secondary sources that will serve as clues for students, such as letters, diary entries, maps, statistical tables, political cartoons, images, artifacts for students to touch (in this case beaver fur or felt), and web articles. These sources should pique students’ interest and provide them with clues that will help them generate theories. For example, if students are given a clue regarding the habitat and species characteristics of the beaver and are also told that John Jacob Astor was the wealthiest man in America in 1848, it is hoped they will conclude that Astor’s wealth had something to do with the beaver. Maps indicating trade routes should confirm this conclusion. Though they may be encountering names in the clues for the first time, making educated guesses is an essential ingredient of the mystery strategy.
Students should not be afraid of making guesses or presenting ideas to the larger group. The learning goal is about what it takes to arrive at a hypothesis rather than ending up with a right answer. 3. Decide student grouping. If using small groups, keep individual needs in mind such as reading levels, ability to work with others, and Individual Education Plans (IEPs). 4. Decide how to present the clues to students (strips of paper within envelopes at stations, single sheets of paper for them to cut apart, etc.). See examples of clues for additional clues. Teachers should read through materials to pull clues that fit students’ needs and abilities. 1. Students read through clues and sort them according to common elements. Once the clues are sorted, students begin to work on their hypothesis. 2. As students analyze the clues and arrive at a hypothesis, use guiding questions such as, “Tell me how the two things relate” and “What’s your reason for thinking that?” to keep students focused on solving the mystery. Avoid guiding them in a direction. The goal is for students to work with the clues and arrive at their own hypothesis. Students can use the Mystery Writing Guide Worksheet to record ideas. 3. In a whole group, have small groups share their hypotheses and evaluate them. Are they logical based on the clues? Do they make sense? Write group responses on the board so students can track their findings as they move through the evidence. The goal is to test each group hypothesis and arrive at the best conclusion. For example, if one group understands there is a connection between the mountain men and the beaver yet they also think the railroads had a role in the problem, do the clues support or refute these ideas? Remind students they are like historians looking at information to form a hypothesis, test it, and arrive at a conclusion. 4. Assign each student a written reflection piece on the content learned and the process used to uncover the mystery. 
This is the most important part of the mystery strategy and should go beyond merely reporting content. Prompt students with questions such as: What happened in the activity? What things did you do well? Most importantly, ask: Which hypothesis best answers the mystery question? Why?

- Data should tease the student without revealing too much.
- Data should hone inference skills.
- Clues should provide information, not an explanation (see Mystery Strategy Clues Worksheet).

Students are presented with the following problem: Why did the American beaver almost become extinct in the 1840s? Write the question on the board so it is visible throughout the activity.

Anticipatory Set: Begin by drawing on students’ knowledge of science and ecosystems learned earlier. Give a short presentation about the American beaver, including the fact that beavers maintain dams that create ponds. The water level in these ponds is constant, encouraging the growth of vegetation that supports many other types of animals. The dams also keep summer rains and the resulting erosion in check. The presentation could end with figures on the estimated number of beavers in North America from European settlement to today (see links below). Students will see a significant decline in the population during exploration and settlement. This decline leads students to the essential question, and they can begin working with the clues to make hypotheses.

Clues: Clues can be obtained from:
- facts on the American beaver species, including habitat and life cycle;
- maps indicating beaver habitats and population centers in the 1840s.
Scroll down through the page for a fur-trading route map;
- images from fashion catalogs from the mid-1800s;
- a real beaver pelt and/or beaver trap, scraps of commercial felt, or images of beaver fur and hats;
- short biographical sketches of mountain men such as Kit Carson, John “Liver-Eating” Johnston, and William Sublette;
- advertisements for beaver products such as top hats, and ads from trading companies seeking hunters (scroll down through each page for the aforementioned images);
- newspaper accounts of skirmishes and battles involving the Iroquois Confederation and other tribes in the Great Lakes region during the Beaver Wars;
- quotes from all parties involved in the fur trade (Native American chiefs, trading company owners such as Manuel Lisa, mountain men, etc.);
- pictures of people wearing beaver hats;
- John Jacob Astor.

Be sure to use some visuals!

Reflection: Students reflect on the original question by presenting their hypotheses in written form. Along with their response about the disappearance of the beaver, students are asked to think about the process of historical inquiry and how it relates to the steps they followed to arrive at a hypothesis.

Osborne Russell and Aubrey L. Haines, Journal of a Trapper and Maps of His Travels in the Rocky Mountains.
Fred R. Gowans, Rocky Mountain Rendezvous: A History of the Fur Trade, 1825-1840.
Silver, Harvey F., et al. Teaching Styles & Strategies. Trenton, NJ: The Thoughtful Education Press, 1996.
If computers are to run faster, they may need to run both forward and backward.

A Turing machine is fine for reasoning about computers, but it's not an ideal model for building them. Some more practical components of reversible logic were introduced in the 1980s by Edward F. Fredkin and Tommaso Toffoli, who were then working together at MIT. (Fredkin is now at Carnegie Mellon University, Toffoli at Boston University.) The components are logic gates, somewhat like AND and OR gates but designed for reversibility. In any reversible gate the number of inputs must equal the number of outputs. Moreover, each possible set of inputs must yield a distinct set of outputs. If this were not the case—that is, if two or more input patterns had the same output—then the reverse action of the gate would be ambiguous. The devices now known as the Fredkin gate and the Toffoli gate (see illustration on page 109) both have three inputs and three outputs; and, as required for reversibility, each input pattern maps to a unique output. In the Fredkin gate, one signal controls whether the other two data lines pass straight through the gate or else have their positions swapped. In the Toffoli gate, two of the signals control the third; if the first two bits are both 1, then the third bit is inverted. Like the NOT gate, both the Fredkin and the Toffoli gates are their own inverses: No matter what the values of the three input signals, running them through two successive copies of the same gate will return the signals to their original values. Both gates are also computationally universal, meaning that a computer assembled from multiple Fredkin or Toffoli gates (and no other components) could simulate a Turing machine or any other device of equivalent computational power. Thus the gates might be considered candidates for a real reversible computer.
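The defining properties of these gates — each input pattern maps to a distinct output, and each gate is its own inverse — can be checked exhaustively with a short simulation. This sketch is my own illustration, not code from the article; the function names are invented for clarity.

```python
from itertools import product

def fredkin(c, a, b):
    """Fredkin (controlled-swap) gate: if control bit c is 1,
    swap data bits a and b; otherwise pass them through unchanged."""
    return (c, b, a) if c else (c, a, b)

def toffoli(a, b, c):
    """Toffoli (controlled-controlled-NOT) gate: invert the third
    bit if and only if the first two bits are both 1."""
    return (a, b, c ^ (a & b))

# Each gate is its own inverse: applying it twice restores the inputs.
for bits in product((0, 1), repeat=3):
    assert fredkin(*fredkin(*bits)) == bits
    assert toffoli(*toffoli(*bits)) == bits

# Reversibility: all 8 input patterns map to 8 distinct output patterns,
# so the gate's action can be run backward unambiguously.
assert len({fredkin(*bits) for bits in product((0, 1), repeat=3)}) == 8
assert len({toffoli(*bits) for bits in product((0, 1), repeat=3)}) == 8
```

Because every output pattern is distinct, reversing either gate is just a matter of applying it again — exactly the property the text describes.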
Of course logic gates are still just abstract devices; they have to be given some physical implementation with transistors or other kinds of hardware. Starting in the early 1990s, several groups have been designing and building prototypes of reversible (or nearly reversible) digital circuits. For example, at MIT a group including Michael Frank and Thomas F. Knight, Jr., fabricated a series of small but complete processor chips based on a reversible technology; Frank continues this work at Florida State University. At the University of Gent in Belgium, Alexis De Vos and his colleagues have built several reversible adders and other circuits. It's important to note that building a computer according to a reversible logic diagram does not guarantee low-power operation. Reversibility removes the thermodynamic floor at kT ln 2, but the circuit must still be designed to attain that level of energy savings. The current state of the art is far above the theoretical floor; even the most efficient chips, reversible or not, dissipate somewhere between 10,000 and 10 million times kT ln 2 for each logical operation. Thus it will be some years before reversible technology can be put to the ultimate test of challenging the three-zeptojoule barrier. In the meantime, however, it turns out that some concepts derived from reversible logic are already useful in low-power circuits. One of these is charge recovery, which attempts to recycle packets of electric charge rather than let them drain to ground. Another is adiabatic switching, which avoids wasteful current surges by closing switches only after voltages have had a chance to equalize.
1536 Act of Union

The 1536 Act of Union combined England and Wales into a single state. It was passed during the reign of King Henry VIII of England. His father, Henry VII, was Welsh-born and very conscious of it, and Henry VIII declared himself proud of his Welsh blood. His motive in merging the two countries was not mere domination; Wales benefited in many ways from the legislation. Although the native Welsh rulers had long since been subdued, the English king remained concerned about the power of the Marcher lords and his government's general lack of control over the principality. He therefore instructed his chief administrator, Thomas Cromwell, to seek a solution. The effect of the act was to make Wales an integral part of England:

The country of Wales justly and righteously is … incorporated, annexed, united and subject to and under the imperial Crown of the Realm, as a very member and joint of the same.

The act was not unpopular with the Welsh, who recognised that it would help give them equality with their neighbours in law. Under the act, the Marcher lordships were abolished and replaced by counties. For the first time, Wales was entitled to send members to the parliament at Westminster. Justices of the Peace were created to administer law and justice at a local level, in line with English practice. Another effect of the act was to outlaw the Welsh language from official use, replacing it with English. This did not trouble the landed gentry, who were already largely anglicised, but it made life difficult for the common people, who were no longer able, for example, to understand court proceedings.
New Granada (grənäˈdə), former Spanish colony, N South America. It included at its greatest extent present Colombia, Ecuador, Panama, and Venezuela. Between 1499 and 1510 a host of conquerors explored the Caribbean coast of Panama and South America. After 1514, Pedro Arias de Ávila was successful in assuring permanent colonization of the isthmus of Panama. At Santa Marta (1525) and Cartagena (1533), Spanish control of the Colombian coast was firmly established, and in the next few years the northern hinterland was explored. German adventurers, notably Nikolaus Federmann, penetrated the Venezuelan and Colombian llanos between 1530 and 1546. By far the greatest of the conquerors was Gonzalo Jiménez de Quesada, who in 1536 ascended the Magdalena River, climbed the mighty Andean cordillera, where he subdued the powerful Chibcha (an advanced native civilization), and by 1538 had founded Santa Fé de Bogotá, later known simply as Bogotá. He named the region El Nuevo Reino de Granada [the new kingdom of Granada]. During the next 10 years the conquest was virtually completed. No civil government was established in New Granada until 1549, when an audiencia court, a body with both executive and judicial authority, was set up in Bogotá. To further stabilize colonial government, New Granada was made a presidency (an administrative and political division headed by a governor) in 1564, and the audiencia was relegated to its proper judicial functions. Loosely attached to the viceroyalty of Peru, the presidency came to include Panama, Venezuela, and most of Colombia. Disputes with—and the great distance from—Lima led to the creation (1717) of the viceroyalty of New Granada, comprising Colombia, Ecuador, Panama, and Venezuela. Later the captaincy general of Venezuela and the presidency of Quito were detached, creating a political division that was to survive the revolution against Spain and the efforts of Simón Bolívar to establish a republic of Greater Colombia.
The struggle for independence began in 1810, and by 1830 Venezuela and Ecuador had seceded; the remnant (Colombia and Panama) was renamed the Republic of New Granada. This became the Republic of Colombia in 1886, from which the present Panama seceded in 1903. See A. J. Kuethe, Military Reform and Society in New Granada (1978). The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
Unlike drugs, which are used to treat sick people, vaccines are used in healthy people to prevent certain illnesses. Before a vaccine can be used in the United States, it must be shown to be safe and effective. To make these determinations, the U.S. Food and Drug Administration (FDA) conducts a rigorous review of data regarding the vaccine’s safety and efficacy. Because healthy children are typically the recipients of vaccines, safety requirements are especially stringent. Many federal agencies and private organizations are involved in ensuring the safety of vaccines and for promoting the health of the population: The FDA regulates vaccines that are used in the United States, ensuring that they are shown to be safe and effective before they are approved for use. The vaccine first undergoes laboratory studies, then studies with animals, and then with humans. The results of the studies at every step in the process must show that the vaccine does what it is supposed to do, and that it does not harm people who receive it. The FDA also inspects the manufacturing plant and makes sure the vaccine is made in a safe and consistent manner. Vaccine licensure is a lengthy process that may last up to 10 years. Vaccines must go through three phases of clinical trials in human beings before they are licensed for public use. To establish basic safety, Phase One trials are small, involving only 20-100 volunteers and lasting only a few months. To continue to gather information on efficacy and safety of each vaccine, Phase Two trials are larger (with several hundred volunteers), and last anywhere from a few months to a few years. Phase Three trials have several hundred to several thousand participants and typically last many years. If the FDA approves the vaccine for use in humans, the manufacturer can market the vaccine. Each batch of vaccine made by the manufacturer must be tested for safety, potency, and purity before being put on the market. 
A sample from each lot must be sent to the FDA. In addition, the FDA requires that doctors report reactions that occur after vaccination. More on this program may be found in the “Monitoring Vaccine Safety” section below. After a new vaccine is approved by the FDA, committees of experts decide whether it should be recommended for use in the general population. These committees evaluate the safety and effectiveness of vaccines. They also determine how the vaccine should be used, estimate how new recommendations would affect other health care issues, and consider cost-effectiveness. In addition to making recommendations on new vaccines, the committees of experts also review and update recommendations on existing vaccines. The policies for vaccines change along with changes in the threat of disease. The committees of experts include the Advisory Committee on Immunization Practices (ACIP) and the American Academy of Pediatrics Committee on Infectious Diseases (COID). ACIP is a scientific advisory committee with 10 to 15 members. Although ACIP is federally chartered, the experts are chosen from outside of government. For each vaccine, the ACIP reviews a broad range of materials and considers how the use of the new vaccine might fit into existing child and adult immunization programs. The committee also considers how the vaccine is stored and administered, cost-effectiveness, how the vaccine might affect other health care delivery systems, and other factors. Members of the COID are selected on the basis of their knowledge of infectious disease and their expertise in vaccines. In addition to the 12 core committee members, liaisons represent the FDA, CDC, and other organizations. COID works closely with these federal agencies and private organizations in an effort to avoid conflicting recommendations. Committees of experts make recommendations on the use of vaccines in the United States, but it is the responsibility of the individual states to determine which vaccines are required by law. It is up to states to pass and enforce compulsory immunization statutes.
School immunization laws are established to prevent epidemics of certain contagious diseases, such as measles. Currently, all 50 states have school immunization laws. All 50 states allow children to be exempted from mandated immunizations for medical reasons. Nearly all (48) states allow religious exemptions and 20 allow philosophical exemptions. If an outbreak of a vaccine-preventable disease occurs, those children who are not vaccinated may be prohibited from going to school until the outbreak is resolved. For further information, read Indications, Recommendations and Immunization Mandates.
Waste is defined as something that is unwanted or unusable. According to the Office for National Statistics, a staggering 342m tonnes of waste is produced in the UK each year. Instead of going to landfill sites to be buried or burnt, a vast proportion of this waste could be cut using the following steps:

Reduce – change manufacturing processes so that fewer materials are used, or change consumer habits so that less wasted material is bought.
Reuse – choose goods and products that can be used again, and reuse rubbish for other purposes.
Recycle – make sure that waste is processed and made into another product wherever possible. Composting is also recycling: the nutrients in organic waste are processed and returned to the soil to help more plants grow.

Reducing your waste

Any building or environment where people live or work will produce a certain amount of waste, and children’s centres are no exception. Reducing waste may involve taking an in-depth look at the types of resources your centre buys and considering ways of cutting down. There are numerous ways a children’s centre can reduce the amount of waste produced and handle waste in a more environmentally friendly way. Does the centre recycle and, if so, what types of waste do you recycle? Could you recycle more, or reuse materials such as paper and yoghurt pots? Instead of paying companies to remove your waste, recycling it will save money in the long term, as you won’t need to pay contractors to take it away. You may also be able to sell some of your metal and glass to companies who can recycle it. Ensure that paper recycling bins are placed in every classroom and discuss with the children what they are and why they are needed. Encourage children to use both sides of paper and, when they’re finished, to use the recycling bins. Composting is a great way of disposing of food waste in an environmentally friendly way.
Place compost bins around the centre grounds and ask the children to help empty the food into the bins. Children could then draw pictures of the different foods which are composted. Ask children to look into their lunch box and identify what waste can be composted and what can’t. Explain how certain items such as yoghurt pots can be re-used to make paint pots etc. Visit our Resources & Links to download a Rupert Bear themed waste dot to dot activity. This section also provides details of organisations that will be able to help your centre tackle the waste topic.
Using climatic trends for sustainable agriculture

This activity provides opportunities for students to:
- consider the value of weather and climate information to inform management practices
- propose management strategies for resources based on these predictions
- use the Internet as a source of up-to-date information
- predict trends in climatic conditions as a result of El Niño or La Niña events.

Earth and Beyond 6.3: Students argue a position regarding stewardship of the earth and beyond, and consider the implications of using renewable and non-renewable resources.

- information on the Long Paddock website
- ‘Australia's Variable Rainfall’ poster (rainfall relative to historical records, 1890-2004). The poster shows a series of colourful maps providing a record of El Niño cycles in Australia, 1890-2004.

Time: 60 minutes
- Accessing resources
- Interpreting data
- Discussing thinking

Before beginning, check that students understand the meanings of these terms:
- Southern Oscillation Index (SOI)
- El Niño
- La Niña
(These terms are explained at the top of the poster; or go to ‘Help’ on the Long Paddock website.)

Students access the information available on the Long Paddock website to complete this activity (alternatively, the poster can be used to access most of the information required). Students use this information for the following activities (described in detail in the free activity sheet, Understanding Australia’s Climate):
- Describe rainfall conditions in the area where they live, at various times during the twentieth century.
- Describe trends in the Southern Oscillation Index during the 1990s and determine climate variability and when El Niño or La Niña conditions may have occurred.
- Relate El Niño or La Niña events to local events such as drought, flood, bushfires and fluctuations in animal and plant populations.
- Relate climate variability to adaptations of local plant and animal species.
- Describe farming practices that can cater for climate variability. At the conclusion of the activity students brainstorm the implications of El Niño and La Niña effects for pasture and crop management. Some questions that may prompt discussion include: - What conditions do we associate with El Niño? - What conditions do we associate with La Niña? - How does drought affect pasture growth and rejuvenation? - How might periods of heavy rain affect the soil? - What implications does this have for cropping practices (e.g. what is grown, when it is grown, when it is harvested, etc.)? - What precautions should be taken to ensure that pastures suffer limited effects from degradation as a result of their management during drought conditions? Students may focus on the implications of El Niño/La Niña climatic variations for one particular aspect of land management. For example: - Numbers of livestock on grazing properties - Control of pest plants or animals - Reducing risk from bushfires - Control of soil erosion Prepare a brief report or essay about how management practices might differ during El Niño/La Niña phases, or what might be done to prepare for climate extremes. Gathering information about student learning Sources of information could include: - students’ completed activity sheets - anecdotal records of students’ contributions to the brainstorming session - students’ presentations. Last updated 31 August 2010
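For teachers or older students exploring the Southern Oscillation Index referenced in this activity, the index can be sketched in code. This is an illustrative sketch only, not part of the activity: the function names are my own, and the formula and the ±7 sustained-value rule of thumb follow the Australian Bureau of Meteorology's Troup SOI convention.

```python
def troup_soi(p_tahiti_hpa, p_darwin_hpa, mean_diff_hpa, sd_diff_hpa):
    """Troup SOI: the Tahiti-minus-Darwin mean sea-level pressure
    difference, standardized against its long-term monthly mean and
    standard deviation, then scaled by 10."""
    diff = p_tahiti_hpa - p_darwin_hpa
    return 10.0 * (diff - mean_diff_hpa) / sd_diff_hpa

def classify_phase(monthly_soi, threshold=7.0):
    """Rough phase label for a run of monthly SOI values: sustained
    strongly negative values suggest El Nino conditions; sustained
    strongly positive values suggest La Nina."""
    avg = sum(monthly_soi) / len(monthly_soi)
    if avg <= -threshold:
        return "El Nino"
    if avg >= threshold:
        return "La Nina"
    return "neutral"

print(classify_phase([-12.0, -15.0, -9.0]))  # El Nino
print(classify_phase([10.0, 8.0, 12.0]))     # La Nina
```

Students could apply `classify_phase` to the 1990s SOI values read from the poster or the Long Paddock website to check their own judgments about when El Niño or La Niña conditions occurred.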
Constructive Learning | Adaptation Level | Language Arts

Students will work in groups to create videos on the book Tom Sawyer using elements of visualization and characterization.
- Students will read Tom Sawyer.
- Students will write a script including dialogue and action cues based on their understanding of the characters in the book Tom Sawyer.
- Students will use their visualizations from reading to design costumes and choose locations.
- Students will discuss how their interpretations of the same scene may have differed from other people's interpretations.

- Tom Sawyer
- Computers with video editing software
- Video cameras
- Props and costumes

Grade Level: 6-8
The cactus moth, Cactoblastis cactorum, is native to northern Argentina and parts of Peru and Paraguay. It was introduced into the Caribbean islands in the 1960s to control several (native) prickly-pear cactus (Opuntia) species (Simonson 2005). In 1989 the cactus moth was discovered to have spread to southern Florida. The moth probably entered the United States more than once (Simonsen et al. 2008) - either on winds from nearby Caribbean islands or on imported cactus plants. In the 20 years since, the cactus moth has spread up the peninsula as far north as coastal South Carolina and west along the shore of the Gulf of Mexico. (There are both native prickly pear cacti and ornamental cacti throughout the Southeast.) In 2009, it was detected on spoil islands in the swamps of southern Louisiana. The cactus moth can kill most prickly pear cacti (genus Opuntia), in particular those species that have flat pads. In Florida, the cactus moth has already caused considerable harm to the six species of vulnerable prickly pears (Garrett 2004), three of which are state listed. In some places, 75% of the prickly pear cacti have been attacked by the moth, with small individuals at greatest risk of death from these attacks (Johnson and Stiling 1998, Baker and Stiling 2009). Most of the peninsula from Gainesville south is now infested (USDA APHIS 2009a). The greatest threat is to the deserts of the American Southwest - from Texas to California - and Mexico. These deserts are home to 114 native species of Opuntia (APHIS 2009b), about 80 of which are flat-padded species vulnerable to the cactus moth (Simonson et al. 2005). In the difficult desert environment, prickly pears are a nutritious and reliable food supply for many wildlife species, including deer, javelina (peccaries), Texas and desert tortoises, spiny iguana, and pollen-feeding insects.
Prickly pears provide shelter for packrats - which in turn are eaten by raptors, coyotes, and snakes; and for nesting birds including the cactus wren and curve-billed thrasher. The cacti also are nurse plants, under which other desert plants' seedlings may start life. Finally, the prickly pears' root systems hold the highly erodible soils (Simonson 2005). The economic consequences of loss of prickly pear cacti will fall most heavily on Mexico, where prickly pears provide food to both people and livestock. Prickly pears are cultivated on some 250,000 ha in Mexico for both the fruits (tunas) and pads (nopales). Fruits and pads are collected from the wild across another 3 million ha (Simonson 2005). An estimated 28,000 people are employed by the prickly pear trade in Mexico, generating an estimated US$50 million in revenue annually during the 1990s (Simonson 2005) - and certainly more now. Opuntia are the third most important subsistence food source for Mexico's rural poor, so failure of this crop would be devastating (Soberon et al. 2001). Another associated commercial crop is the natural deep-red dye extracted from the cochineal beetle (Dactylopius coccus) – which feeds on prickly-pear cacti. In Mexico, cochineal dye production constitutes a significant agricultural crop (Simonson 2005). The dye, as a natural product, is considered by some to be preferable for use in foods and cosmetics (http://www.botgard.ucla.edu/html/botanytextbooks/economicbotany/Cochineal/index.html). The U.S. Department of Agriculture has tried since 2005 to slow the spread of the cactus moth. The USDA Agricultural Research Service (ARS) and APHIS have relied on a program that combines release of sterile moths to disrupt mating and removal of the cactus hosts on the leading edge of the invasion. In cooperation with state departments of agriculture, APHIS has funded surveys in western states to assure that those most vulnerable areas are still free of the cactus moth. 
A volunteer network managed by Mississippi State University monitors federal, state, and private lands along the Gulf Coast for the presence of the cactus moth, to ensure that any new populations of the species are quickly eradicated. This integrated program has had some success in slowing the rate of the cactus moth's spread; nevertheless, the moth has continued to move westward, reaching Petit Bois and Horn Islands, Mississippi, in 2008, and the swamps and bayous southwest of New Orleans. The Louisiana outbreaks were detected in May 2009, probably four years after the moth arrived there; these swamps are more than 50 miles farther west than the Mississippi islands (USDA APHIS 2009a). Potential economic losses in the United States would be smaller. Depending on which species, if any, expand into areas where prickly pear cacti were formerly abundant, losses could include reduced revenue from licensed hunting opportunities; in South Texas, higher rents are received for ranchland leased for hunting than for cattle production (Garrett 2004). Ecotourism in the Southwest would probably also be harmed by widespread death of prickly-pear cacti. Just one form of such recreation, off-highway vehicle recreation, resulted in expenditures of $3 billion in Arizona alone in 2002, with a statewide economic impact of $4.25 billion (Simonson et al. 2005). While the Louisiana detection was a discouraging setback, researchers taking part in the program review in December 2009 still believed that it is possible to create a barrier to halt further westward spread of the moth by aggressively applying control tactics at the leading edge and managing hot-spot infestations to the east of that line. Since there are few cacti in St. Mary Parish or the Atchafalaya swamp, this area might be a good barrier to westward movement; intensive surveys will be needed to verify this approach.
Increased funding might enable the program to push the leading edge back to the east, where a better location for the barrier might be the Apalachicola River in Florida (USDA APHIS 2009a). The moth is spreading much faster along the coast (approximately 75 miles per year) than inland (even in Florida). It will probably move inland faster in Texas, where cactus density is high throughout the state (USDA APHIS 2009a). The primary tools for managing the moth at this time are removal of infested host material, limited herbicide treatment to kill cacti, and release of sterile moths (the sterile insect technique, or SIT). Monitoring and survey efforts depend on pheromone-baited sticky traps and visual inspections of host plants (USDA APHIS 2009a). The attractant currently used in the cactus moth lure works, but not over long distances. Another weakness is that it attracts a significant number of non-target moths (USDA APHIS 2009a). Improving the attractant is key both to detecting the moth's presence and to any attempt to use mating disruption to suppress moth numbers (USDA APHIS 2009a). An improved lure is expected to be field-tested by the end of 2010 (Javier Trujillo Arriaga, Servicio Nacional de Sanidad, Mexico, pers. comm. October 2010). The Program Review team (USDA APHIS 2009a) set out trapping site priorities for Louisiana and called for an intensive delimitation survey of the Louisiana and Texas coastline (the latter is 250 miles long) using both traps and visual inspections. Host plant removal (meaning elimination of all Opuntia from a given area, whether the plants are infested or not) and sanitation (meaning removal of all cactus moth life stages and infested plants) are key components of the containment program; the strategy chosen depends on circumstances (USDA APHIS 2009a). Both strategies depend on finding the cacti, which is difficult since cactus populations are not mapped.
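The spread rates quoted above lend themselves to a quick back-of-envelope projection. Below is a minimal Python sketch assuming only the ~75 miles/year coastal rate from the program review; the 225-mile distance is a hypothetical illustration, not a figure from the source.

```python
# Back-of-envelope projection of the cactus moth's coastal spread, using the
# ~75 miles/year rate reported in the program review (USDA APHIS 2009a).
# The distance used in the example is an illustrative placeholder, not a
# figure from the source.

COASTAL_SPREAD_MPY = 75  # miles per year along the coast (USDA APHIS 2009a)

def years_to_reach(distance_miles, rate_mpy=COASTAL_SPREAD_MPY):
    """Years for the leading edge to cover a given coastal distance."""
    return distance_miles / rate_mpy

# Hypothetical example: a leading edge 225 coastal miles from the Texas line
# would arrive in about 3 years if left unchecked.
print(years_to_reach(225))  # -> 3.0
```

Such a projection ignores barriers, control efforts, and habitat gaps, but it illustrates why the review team treated the westward coastal front as the urgent one.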
The Review Team suggested testing aerial surveys of bayou areas, which could be assisted by the fact that, so far, the cacti are usually found on spoil banks with trees (USDA APHIS 2009a). Release of sterile moths has proved effective against small populations, but scientists cannot produce enough sterile insects to flood a large population such as that present in Louisiana. To suppress that large moth population, APHIS planned to focus in 2010 on cactus removal and sanitation, with sterile insect releases scheduled for 2011. Once the sterile insect components of the program are instituted, they will face the challenge of delivering sterile insects twice per month to infested areas scattered across large wetlands (USDA APHIS 2009a). The oil spill in the Gulf made access to the area more difficult because few boats were available for lease. Mexico sent a team of experienced cactus eradicators to assist (Robyn Rose, Entomological Society of America, December 2010). Cactus moths have been successfully eradicated in limited areas using either complete removal of all cactus hosts (Isla Mujeres, Mexico, and Ft. Morgan, AL) or sanitation to reduce the moth population accompanied by sterile insect releases (Isla Contoy, Mexico). Host plant removal was possible where plants could be accessed by vehicles and machinery, and homeowners and officials were comfortable with removing the plants. Sanitation and the SIT were used in protected areas where only limited plant removal was allowed and the plants were difficult to access. In order to eradicate the cactus moth in Louisiana, an integrated approach will be needed, focusing on sanitation, host plant removal, and SIT, with other control methods utilized on a case-by-case basis (USDA APHIS 2009a). Once the cactus moth reaches Texas and the Southwest, biocontrol would be the only practicable strategy. Most predators currently known are generalists, so they are unsuitable for release.
One candidate appears to be a specialist, the braconid larval-pupal parasitoid Apanteles alexanderi; host specificity testing was begun in 2010 in Argentina and Puerto Rico (Strickman 2010). U.S. mainland cactus-feeding moths are poorly known, so they will be hard to capture and test for vulnerability to the parasitoid. Trichogramma pretiosum, a hymenopteran egg parasitoid, has been found parasitizing the cactus moth in the U.S. (Paraiso et al. 2009). While these wasps are available commercially, they offer little promise because they are not host specific and the level of parasitism is normally very low. The USDA program has been hampered from its beginning by insufficient funding from unstable sources. There has never been an appropriation by Congress for this work. The USDA Agricultural Research Service and Animal and Plant Health Inspection Service have absorbed more than $9 million in costs since 2001. Mexico has provided $1.4 million. The Florida Department of Agriculture and Consumer Services, Mississippi State University, and the U.S. Geological Survey (Department of the Interior) have also participated in cactus moth control work. In 2009 APHIS amended its regulations to ensure that Opuntia cactus nursery stock moved from infested states in the southeastern United States would not spread the cactus moth (Federal Register Vol. 74 No. 108 [June 8, 2009], pp. 27071-27076). The cactus moth has also been introduced to Mexico. In August 2006, it was discovered on Isla Mujeres, offshore from Quintana Roo, in southeastern Mexico. A second outbreak, on Isla Contoy, was detected in May 2007. These populations have been eradicated with the help of USDA APHIS (USDA APHIS 2009).
References
Baker, A. J. and P. Stiling. 2009. Comparing the effects of the exotic cactus-feeding moth, Cactoblastis cactorum (Berg) (Lepidoptera: Pyralidae), and the native cactus-feeding moth, Melitara prodenialis (Walker) (Lepidoptera: Pyralidae), on two species of Florida Opuntia. Biol. Invasions 11: 619-624.
Garrett, L. 2004. USDA APHIS PPQ CPHST. White Paper: Economic impact from spread of Cactoblastis cactorum in the United States.
Johnson, D. M. and P. D. Stiling. 1998. Distribution and dispersal of Cactoblastis cactorum (Lepidoptera: Pyralidae), an exotic Opuntia-feeding moth in Florida. Florida Entomol. 81: 12-21.
Paraiso, O., M. Kairo, S. Bloem, and S. D. Hight. 2009. Survey for egg parasitoids attacking Cactoblastis cactorum in north Florida. Meeting abstract.
Simonsen, T. J., R. L. Brown, and F. A. H. Sperling. 2008. Tracing an invasion: phylogeography of Cactoblastis cactorum (Lepidoptera: Pyralidae) in the United States based on mitochondrial DNA. Ann. Entomol. Soc. Am. 101(5): 899-905.
Simonson, S. E., T. J. Stohlgren, L. Tyler, W. Gregg, R. Muir, and L. Garrett. 2005. Preliminary assessment of the potential impacts and risks of the invasive cactus moth, Cactoblastis cactorum Berg, in the U.S. and Mexico. Final report to the International Atomic Energy Agency, April 25, 2005.
Soberon, J., J. Golubov, and J. Sarukhan. 2001. The importance of Opuntia in Mexico and routes of invasion and impact of Cactoblastis cactorum (Lepidoptera: Pyralidae). Fla. Entomol. 84: 486-492.
Strickman, D. 2010. Research project: Collection and evaluation of biological control agents against cactus moth in Argentina. Project number 0211-22000-006-11.
USDA APHIS. 2009a. C. cactorum Program, Technical Working Group Report, New Orleans, LA, December 1-3, 2009.
USDA APHIS. 2009. Eradication of Cactoblastis cactorum from 11 parishes in southeast Louisiana. September 2009.
What does this program measure? Sulfur dioxide is measured at 4, 10, 23, and 40 meters above the ground, in units of parts per trillion by volume. How does this program work? Why is this research important? This is part of an effort to monitor long-term emissions from the Kilauea and Mauna Loa volcanoes. It may provide a precursor to the next Mauna Loa eruption. These measurements also detect large sulfur dioxide pollution events from Asia. Are there any trends in the data? No trends. Kilauea has been in continuous eruption since 1983. Mauna Loa last erupted in 1984. Since 1994, SO2 levels from Mauna Loa have been low (<500 ppt, or parts per trillion). How does this program fit into the big picture? What is its role in global climate change? The SO2 data can be used to detect periods of volcanic pollution at the observatory. This can provide a non-baseline filter for other measurements. Comments and References NOAA Sulfur Dioxide Monitoring (SO2)
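The "non-baseline filter" idea described above can be sketched as a simple threshold flag. The 500 ppt cutoff below echoes the "<500 ppt" level quoted for Mauna Loa; the sample readings are invented for illustration.

```python
# Minimal sketch of a "non-baseline filter": flag SO2 readings (in parts per
# trillion) that exceed a threshold, so other measurements taken during
# volcanic-pollution periods can be excluded from baseline analyses.
# The 500 ppt threshold follows the "<500 ppt" level quoted in the text;
# the sample readings are hypothetical.

THRESHOLD_PPT = 500

def flag_pollution(readings_ppt, threshold=THRESHOLD_PPT):
    """Return indices of readings that exceed the volcanic-pollution threshold."""
    return [i for i, v in enumerate(readings_ppt) if v > threshold]

readings = [120, 340, 2600, 480, 5100, 90]  # hypothetical ppt values
print(flag_pollution(readings))             # -> [2, 4]
```

In practice the flagged indices would be used to mask out the corresponding time periods in the other observatory records.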
What causes Tsunamis? Tsunami are waves caused by sudden movement of the ocean due to earthquakes, landslides on the sea floor, land slumping into the ocean, large volcanic eruptions, or meteorite impacts in the ocean. Most tsunami are caused by large earthquakes on the seafloor, when slabs of rock move past each other suddenly, causing the overlying water to move. The resulting waves move away from the source of the earthquake event. Underwater landslides can cause tsunami, as can terrestrial land which slumps into the ocean. View our landslide generation animation, which demonstrates how a landslide induces a tsunami. Less common are tsunami initiated by volcanic eruptions. These occur in several ways:
- destructive collapse of coastal, island and underwater volcanoes, resulting in massive landslides
- pyroclastic flows, which are dense mixtures of hot blocks, pumice, ash and gas, plunging down volcanic slopes into the ocean and pushing water outwards
- a caldera volcano collapsing after an eruption, causing the overlying water to drop suddenly.
Topic contact: [email protected] Last updated: December 5, 2012
The science fair concept has been established to:
- Focus attention on students' academic achievements
- Strengthen student motivation and interest in science
- Promote teacher and public recognition of outstanding science talent
- Recognize outstanding individual achievements, efforts, potential, and creativity
- Provide a medium for students to apply learned knowledge and skills to solve real problems and answer real-life questions.
Entering the Fair
- The competition is open to 4th – 12th grade students.
- Each elementary school may bring 10 projects to the district fair.
- Each middle school may bring 15 projects to the district fair.
- Each high school may bring 25 projects to the district fair.
- Schools will decide how the projects will be selected. Most often this will be through a school science fair.
- Students may work individually or in groups (no groups of more than 3).
- Students may not participate in more than one project.
- All participating schools must be represented by at least one teacher or administrator at the competition.
- Registration must be submitted for each project entered.
- Project displays should NOT show the students' names.
- Only participants, judges, teachers, and fair officials are allowed in the judging area during judging.
- Exhibits must be brought to, cared for, and removed from the fair by the exhibitor.
- The Science Fair Committee and cooperating groups will assume NO responsibility for loss or damage to any exhibit.
- Valuables, such as computers, meters, cameras, microscopes, etc., should NOT be left unattended. The only time they are required to be part of the exhibit is during the hours of judging.
How are the projects judged? The role of judging is not to distinguish winners and losers, but to recognize students who achieve standards of excellence. By encouraging students to strive for their best effort, all participants are winners and grow from the experience.
- A team of judges is assigned to each category. During the initial judging, the projects are grouped so that each project is screened by at least two judges.
- All students must remain with their projects during the judging to make presentations and explain their study.
- All others (sponsors, teachers, parents, and other students) are not permitted in the project area while judging is in progress.
- All decisions of the judges are final.
- Final judging information will not be available to participants, parents, or teachers.
How are awards given?
- Every student entering the fair will receive a certificate of participation.
- Trophies will be given for the top 3 scores in each category. First and second place winners who are students in grades 5 through 12 will be invited to advance to the Salt Lake Valley Science and Engineering Fair.
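As a rough illustration of the award rules above (top 3 scores per category; 1st- and 2nd-place winners in grades 5 through 12 advance), here is a hedged Python sketch; the project names, grades, and scores are invented.

```python
# Illustrative sketch of the award rules: trophies for the top 3 scores in
# each category; 1st- and 2nd-place winners in grades 5-12 advance to the
# Salt Lake Valley Science and Engineering Fair. All data are hypothetical.

def award_trophies(projects):
    """projects: list of (name, category, grade, score) tuples.
    Returns {category: top-3 projects ordered by descending score}."""
    by_cat = {}
    for p in projects:
        by_cat.setdefault(p[1], []).append(p)
    return {cat: sorted(ps, key=lambda p: p[3], reverse=True)[:3]
            for cat, ps in by_cat.items()}

def advancers(winners):
    """1st- and 2nd-place winners in grades 5-12 advance to the valley fair."""
    return [p[0] for ps in winners.values() for p in ps[:2] if 5 <= p[2] <= 12]

demo = [("A", "Botany", 6, 91), ("B", "Botany", 4, 95), ("C", "Botany", 8, 88)]
w = award_trophies(demo)
print(advancers(w))  # -> ['A']  (B places first but is below 5th grade)
```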
(Range map: Chinchilla lanigera and Chinchilla chinchilla.)
Chinchillas are crepuscular rodents, slightly larger and more robust than ground squirrels, native to the Andes mountains in South America. They live in colonies at high altitudes (up to 15,000 ft/4,270 m). Historically, they lived in the Andes of Bolivia, Chile, and Peru, but today colonies in the wild remain only in Peru and Chile. Along with their relatives, viscachas, they make up the family Chinchillidae. The animal (whose name literally means "little chincha") is named after the Chincha people of the Andes, who once wore its dense, velvet-like fur. By the end of the 19th century, chinchillas had become quite rare due to hunting for their ultra-soft fur. Most chinchillas currently used by the fur industry for clothing and other accessories are farm-raised. The two living species of chinchilla are Chinchilla chinchilla (formerly known as Chinchilla brevicaudata) and Chinchilla lanigera. There is little noticeable difference between the species, except that C. chinchilla has a shorter tail, a thicker neck and shoulders, and shorter ears than C. lanigera. The former species is currently facing extinction; the latter, though rare, can be found in the wild. Domesticated chinchillas are thought to have come from the C. lanigera species. In their native habitats, chinchillas live in burrows or crevices in rocks. They are agile jumpers and can jump up to 6 ft (1.8 m). Predators in the wild include birds of prey, skunks, felines, snakes, and canines. Chinchillas have a variety of defensive tactics, including spraying urine and releasing fur if bitten. In the wild, chinchillas have been observed eating plant leaves, fruits, seeds, and small insects. In nature, chinchillas live in social groups that resemble colonies, but are properly called herds. They can breed any time of the year. Their gestation period is 111 days, longer than that of most rodents.
Due to this long pregnancy, chinchillas are born fully furred and with eyes open. Litters are usually small in number, predominantly two. Roles with humans The international trade in chinchilla fur goes back to the 16th century. Their fur is popular in the fur trade due to its extremely soft feel, which is caused by the sprouting of 60 hairs from each hair follicle, on average. The color is usually very even, which makes it ideal for small garments or the lining of large garments, though some large garments can be made entirely from the fur. A single, full-length coat made from chinchilla fur may require as many as 150 pelts, as chinchillas are relatively small. Their use for fur led to the extinction of one species, and put serious pressure on the other two. Though it is illegal to hunt wild chinchillas, the wild animals are now on the verge of becoming extinct because of continued illegal hunting. Domesticated chinchillas are still bred for this use. Chinchillas as pets Chinchillas require extensive exercise. Their teeth need to be worn down, as they grow continuously and can prevent them from eating if they become overgrown. Wooden sticks, pumice stone, and chew toys are good options, but conifer and citrus woods (such as cedar or orange) should be avoided because of the high content of resins, oils, and phenols that are toxic for chinchillas. Birch, willow, apple, manzanita, or kiln-dried pine woods are all safe for chinchillas to chew. Chinchillas lack the ability to sweat; therefore, if temperatures get above 25°C (77°F), they could become overheated and may suffer from heat stroke. Chinchillas dissipate heat by routing blood to their large ears, so red ears signal overheating. Chinchillas can be found in a variety of colors. The only color found in nature is standard gray. The most common other colors are white, black velvet, beige, ebony, violet, and sapphire, and blends of these.
The animals instinctively clean their fur by taking dust baths, in which they roll around in special dust made of fine pumice. In the wild, the dust is formed from fine, ground volcanic rocks. The dust gets into their fur and absorbs oil and dirt. These baths are needed a few times a week. Chinchillas do not bathe in water because the dense fur prevents air-drying, retaining moisture close to the skin, which can cause fungus growth or fur rot. A wet chinchilla must be dried immediately with towels and a no-heat hair dryer. The thick fur resists parasites, such as fleas, and reduces loose dander, making chinchillas hypoallergenic. Chinchillas eat and drink in very small amounts. In the wild, they eat and digest desert grasses, so cannot efficiently process fatty or high-protein foods, or too many green plants. A high quality, hay-based pellet and a constant supply of loose timothy hay will meet all of their dietary needs. Chinchillas' very sensitive gastrointestinal tracts can be easily disrupted, so a healthy diet is important. In a mixed ration, chinchillas may avoid the healthy, high-fiber pellets in favor of items such as raisins and seeds. Fresh vegetables and fruit (with high moisture content) should be avoided, as these can cause bloat, which can be fatal. Sweets and dried fruit treats should be limited to one per day at the very most; excess can lead to diarrhea or, in the long term, diabetes. Nuts should be avoided due to their high fat content. High-protein foods and alfalfa hay can cause liver problems and should be limited. In scientific research The chinchilla is often used as an animal model in researching the auditory system, because the chinchilla's range of hearing (20 Hz to 30 kHz) and cochlear size is close to that of a human, and the chinchilla cochlea is fairly easy to access.
Other research fields in which chinchillas are used as an animal model include the study of Chagas disease, gastrointestinal diseases, pneumonia, and listeriosis, as well as of Yersinia and Pseudomonas infections. The first scientific study on chinchilla sounds in their social environment was conducted by Dr. Bartl DVM in Germany. See also: Viscacha, a rodent similar to a chinchilla.
The troposphere is the lowest portion of Earth's atmosphere. It contains approximately 80% of the atmosphere's mass and 99% of its water vapor and aerosols. The average depth of the troposphere is approximately 17 km (11 mi) in the middle latitudes. It is deeper in the tropics, up to 20 km (12 mi), and shallower near the polar regions, at 7 km (4.3 mi) in summer, and indistinct in winter. The lowest part of the troposphere, where friction with the Earth's surface influences air flow, is the planetary boundary layer. This layer is typically a few hundred meters to 2 km (1.2 mi) deep, depending on the landform and time of day. The border between the troposphere and stratosphere, called the tropopause, is a temperature inversion. The word troposphere derives from the Greek tropos, meaning "change", reflecting the fact that turbulent mixing plays an important role in the troposphere's structure and behavior. Most of the phenomena we associate with day-to-day weather occur in the troposphere. Pressure and temperature structure The chemical composition of the troposphere is essentially uniform, with the notable exception of water vapor. The source of water vapor is at the surface, through the processes of evaporation and transpiration. Furthermore, the temperature of the troposphere decreases with height, and saturation vapor pressure decreases strongly as temperature drops, so the amount of water vapor that can exist in the atmosphere decreases strongly with height. Thus the proportion of water vapor is normally greatest near the surface and decreases with height. The pressure of the atmosphere is at a maximum at sea level and decreases with altitude. This is because the atmosphere is very nearly in hydrostatic equilibrium, so that the pressure is equal to the weight of air above a given point.
The change in pressure with height can therefore be equated to the density through the hydrostatic equation: dp/dz = -ρg, where ρ is the air density, g is the acceleration due to gravity, and z is height. Since temperature in principle also depends on altitude, one needs a second equation to determine the pressure as a function of height, as discussed in the next section. The temperature of the troposphere generally decreases as altitude increases. The rate at which the temperature decreases, Γ = -dT/dz, is called the environmental lapse rate (ELR). The ELR is nothing more than the difference in temperature between the surface and the tropopause divided by the height. The reason for this temperature difference is that the absorption of the sun's energy occurs at the ground, which heats the lower levels of the atmosphere, while the radiation of heat occurs at the top of the atmosphere, cooling the earth; this process maintains the overall heat balance of the earth. As parcels of air in the atmosphere rise and fall, they also undergo changes in temperature, for reasons described below. The rate of change of the temperature in the parcel may be less than or more than the ELR. When a parcel of air rises, it expands, because the pressure is lower at higher altitudes. As the air parcel expands, it pushes on the air around it, doing work; but generally it does not gain heat in exchange from its environment, because its thermal conductivity is low (such a process is called adiabatic). Since the parcel does work and gains no heat, it loses energy, and so its temperature decreases. (The reverse, of course, will be true for a sinking parcel of air.) Since the heat exchanged is related to the entropy change by dQ = T dS, the equation governing the temperature as a function of height for a thoroughly mixed (adiabatic, dS = 0) atmosphere is dT/dz = -g/c_p, the dry adiabatic lapse rate, where c_p is the specific heat of air at constant pressure. If the air contains water vapor, then cooling of the air can cause the water to condense, and the behavior is no longer that of an ideal gas.
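The hydrostatic equation and the dry adiabatic lapse rate can be combined in a short numerical sketch. This is an illustrative Euler integration under ideal-gas, dry-air assumptions, not part of the source text.

```python
# Numerical sketch of the two relations above: hydrostatic balance
# dp/dz = -rho*g (with rho = p/(R*T) for an ideal gas) and the dry adiabatic
# lapse rate dT/dz = -g/c_p. A simple Euler integration upward from sea
# level, using standard dry-air constants.

g, R, cp = 9.81, 287.0, 1004.0  # m/s^2, J/(kg K), J/(kg K)

def profile(T0=288.15, p0=101325.0, z_top=10000.0, dz=10.0):
    """Integrate T(z) and p(z) upward assuming a dry adiabatic atmosphere."""
    T, p, z = T0, p0, 0.0
    while z < z_top:
        rho = p / (R * T)    # ideal-gas density
        p -= rho * g * dz    # hydrostatic balance
        T -= (g / cp) * dz   # dry adiabatic cooling, ~9.8 K per km
        z += dz
    return T, p

# Temperature drop over the first kilometer: g/c_p * 1000 m, about 9.8 K,
# noticeably steeper than the ~6.5 K/km average environmental lapse rate.
T1km, p1km = profile(z_top=1000.0)
print(round(288.15 - T1km, 1))  # -> 9.8
```

Comparing the computed 9.8 K/km dry adiabatic rate with the 6.5 °C/km environmental average quoted in the text is exactly the stability comparison the following paragraphs describe.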
If the air is at the saturated vapor pressure, then the rate at which temperature drops with height is called the saturated adiabatic lapse rate. More generally, the actual rate at which the temperature drops with altitude is called the environmental lapse rate. In the troposphere, the average environmental lapse rate is a drop of about 6.5 °C for every 1 km (1,000 meters) of increased height. The environmental lapse rate (the actual rate at which temperature drops with height, Γ = -dT/dz) is not usually equal to the adiabatic lapse rate (Γ_a = g/c_p). If the upper air is warmer than predicted by the adiabatic lapse rate (Γ < Γ_a), then when a parcel of air rises and expands, it will arrive at the new height at a lower temperature than its surroundings. In this case, the air parcel is denser than its surroundings, so it sinks back to its original height, and the air is stable against being lifted. If, on the contrary, the upper air is cooler than predicted by the adiabatic lapse rate (Γ > Γ_a), then when the air parcel rises to its new height it will have a higher temperature and a lower density than its surroundings, and will continue to accelerate upward. Temperatures decrease at middle latitudes from an average of 15°C at sea level to about -55°C at the top of the tropopause. At the poles, the troposphere is thinner and the temperature only decreases to -45°C, while at the equator the temperature at the top of the troposphere can reach -75°C. The tropopause is the boundary region between the troposphere and the stratosphere. Measuring the temperature change with height through the troposphere and the stratosphere identifies the location of the tropopause. In the troposphere, temperature decreases with altitude. In the stratosphere, however, the temperature remains constant for a while and then increases with altitude. The region of the atmosphere where the lapse rate changes from positive (in the troposphere) to negative (in the stratosphere) is defined as the tropopause.
Thus, the tropopause is an inversion layer, and there is little mixing between the two layers of the atmosphere. Atmospheric flow The flow of the atmosphere generally moves in a west to east direction. This, however, can often become interrupted, creating a more north to south or south to north flow. These scenarios are often described in meteorology as zonal or meridional. These terms, however, tend to be used in reference to localised areas of atmosphere (at a synoptic scale). A fuller explanation of the flow of the atmosphere around the Earth as a whole can be found in the three-cell model. Zonal flow A zonal flow regime is the meteorological term meaning that the general flow pattern is west to east along the Earth's latitude lines, with weak shortwaves embedded in the flow. The use of the word "zone" refers to the flow being along the Earth's latitudinal "zones". This pattern can buckle and thus become a meridional flow. Meridional flow When the zonal flow buckles, the atmosphere can flow in a more longitudinal (or meridional) direction, and thus the term "meridional flow" arises. Meridional flow patterns feature strong, amplified troughs and ridges, with more north-south flow in the general pattern than west-to-east flow. Three-cell model The three-cell model attempts to describe the actual flow of the Earth's atmosphere as a whole. It divides the Earth into the tropical (Hadley cell), mid-latitude (Ferrel cell), and polar (polar cell) regions, dealing with energy flow and global circulation. Its fundamental principle is that of balance: the energy that the Earth absorbs from the sun each year is equal to that which it loses back into space. This balance, however, is not precisely maintained at each latitude, due to the varying strength of the sun in each "cell" resulting from the tilt of the Earth's axis in relation to its orbit.
It demonstrates that a pattern emerges to mirror that of the ocean: the tropics do not continue to get warmer, because the atmosphere transports warm air poleward and cold air equatorward, the effect of which appears to be that of heat and moisture distribution around the planet. Synoptic scale observations and concepts Forcing is a term used by meteorologists to describe the situation where a change or an event in one part of the atmosphere causes a strengthening change in another part of the atmosphere. It is usually used to describe connections between upper, middle or lower levels (such as upper-level divergence causing lower-level convergence in cyclone formation), but can sometimes also be used to describe such connections over distance rather than height alone. In some respects, tele-connections could be considered a type of forcing. Divergence and convergence An area of convergence is one in which the total mass of air is increasing with time, resulting in an increase in pressure at locations below the convergence level (recall that atmospheric pressure is just the total weight of air above a given point). Divergence is the opposite of convergence: an area where the total mass of air is decreasing with time, resulting in falling pressure in regions below the area of divergence. Where divergence is occurring in the upper atmosphere, there will be air coming in to try to balance the net loss of mass (this is called the principle of mass conservation), and there is a resulting upward motion (positive vertical velocity). Another way to state this is to say that regions of upper-air divergence are conducive to lower-level convergence, cyclone formation, and positive vertical velocity. Therefore, identifying regions of upper-air divergence is an important step in forecasting the formation of a surface low pressure area.
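The divergence concept can be made concrete with a finite-difference sketch: for a horizontal wind field (u eastward, v northward), horizontal divergence is du/dx + dv/dy, with negative values indicating convergence. The grid values below are invented for illustration.

```python
# Finite-difference sketch of horizontal divergence for a wind field on a
# regular grid: div = du/dx + dv/dy, computed with centered differences.
# Negative values indicate convergence (mass piling up). Grid data invented.

def divergence(u, v, dx, dy):
    """Centered-difference du/dx + dv/dy at interior grid points."""
    ny, nx = len(u), len(u[0])
    div = [[0.0] * nx for _ in range(ny)]
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            dudx = (u[j][i + 1] - u[j][i - 1]) / (2 * dx)
            dvdy = (v[j + 1][i] - v[j - 1][i]) / (2 * dy)
            div[j][i] = dudx + dvdy
    return div

# Wind increasing eastward (u = x, v = 0) on a 3x3 grid with 1 m spacing
# gives a uniform divergence of 1 per second at the interior point.
u = [[0, 1, 2], [0, 1, 2], [0, 1, 2]]
v = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
print(divergence(u, v, 1.0, 1.0)[1][1])  # -> 1.0
```

In an operational setting this quantity would be evaluated at upper levels to locate regions favorable for surface cyclone development, as the text describes.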
"It contains about four-fifths of the mass of the whole atmosphere." - Danielson, Levin, and Abrams, Meteorology, McGraw Hill, 2003 - Landau and Lifshitz, Fluid Mechanics, Pergamon, 1979 - Landau and Lifshitz, Statistical Physics Part 1, Pergamon, 1980 - Kittel and Kroemer, Thermal Physics, Freeman, 1980; chapter 6, problem 11 - "American Meteorological Society Glossary - Zonal Flow". Allen Press Inc. June 2000. Retrieved 2006-10-03. - "American Meteorological Society Glossary - Meridional Flow". Allen Press Inc. June 2000. Retrieved 2006-10-03. - "Meteorology - MSN Encarta, "Energy Flow and Global Circulation"". Encarta.Msn.com. Archived from the original on 2009-10-31. Retrieved 2006-10-13. |Look up troposphere in Wiktionary, the free dictionary.| - Composition of the Atmosphere, from the University of Tennessee Physics dept. - Chemical Reactions in the Atmosphere - http://encarta.msn.com/encyclopedia_761571037_3/Meteorology.html#s12 (Archived 2009-10-31)
Early foragers and farmers made wine from wild grapes or other fruits. According to archaeological evidence, by 6000 BC grape wine was being made in the Caucasus, and by 3200 BC domesticated grapes had become abundant in the entire Near East. In Mesopotamia, wine was imported from the cooler northern regions, and so came to be known as 'liquor of the mountains'. In Egypt as in Mesopotamia, wine was for nobles and priests, and mostly reserved for religious or medicinal purposes. The Egyptians fermented grape juice in amphorae which they covered with cloth or leather lids and then sealed up with mud from the Nile.

By biblical times, wine had acquired some less dignified purposes. According to the Old Testament, Noah planted a vineyard, and 'drank of the wine, and was drunken; and he was uncovered within his tent' (Genesis 9:21). Skip to the New Testament and here is Jesus employed as a wine consultant: 'And no man putteth new wine into old wineskins: else the wine bursts the skins, and the wine is lost as well as the skins: but new wine must be put into new skins' (Mark 2:22).

Many of the grape varieties that are cultivated in modern Greece are similar or identical to those cultivated there in ancient times. Wine played a central role in Ancient Greek culture, and the vine—which, as in the Near East, had been domesticated by the Early Bronze Age—was widely cultivated. The Minoans, who flourished on the island of Crete from c.2700 to c.1450 BC, imported and exported different wines, which they used not only for recreational but also for religious and ritual purposes. Wine played a similarly important role for the later Myceneans, who flourished on mainland Greece from c.1600 to 1100 BC. In fact, wine was so important to the Greeks as to be personified by a major deity, Dionysus or Bacchus, and honoured with a number of annual festivals. One such festival was the Anthesteria, which, held in February each year, celebrated the opening of wine jars to test the new wine.
Active in the 9th century BC, the poet Homer often sang of wine, famously alluding to the Aegean as the 'wine dark sea'. In the Odyssey, he says that 'wine can of their wits the wise beguile/ Make the sage frolic, and the serious smile'. In the Works and Days, the poet Hesiod, who lived in the 7th or 8th century BC, speaks of pruning and even of drying the grapes prior to fermentation. The Greeks plainly understood that no two wines are the same, and held the wines of Thassos, Lesbos, Chios, and Mende in especially high regard; Theophrastus, a contemporary and close friend of Aristotle, even demonstrated some pretty clear notions of terroir.

In Ancient Greece, vines were left to their own devices, supported on forked props, or trained up trees. In his Natural History, Pliny the Elder describes the Ancient Greek practice of using partly dehydrated gypsum prior to fermentation, and some type of lime after fermentation, to remove acidity—but this was no doubt a relatively recent or infrequent practice. The wine was neither racked nor fined, and it was not uncommon for the drinker to want to pass it through a sieve or strainer. Additives such as aromatic herbs, spices, honey, or a small amount of seawater were often added both to improve and preserve the wine—which could also be concentrated by boiling. Finished wine was stored in amphorae lined with resin or pitch, both substances that imparted some additional and characteristic flavour.

Generally speaking, wine was sweeter then than it is today, reflecting not only prevalent tastes, but also the ripeness of the grapes, the use of natural yeasts in fermentation, and the lack of temperature control during fermentation. At the same time, wine did come in a wide variety of styles, some of which were markedly austere. To drink undiluted wine was considered a bad and barbarian practice—almost as bad as drinking beer like the Babylonian or Egyptian peasant classes.
Wine was diluted with two or three parts of water to produce a beverage with an alcoholic strength of around 3-5%. The comedian Hermippus, who flourished in the golden age of Athens, described the best vintage wines as having a nose of violets, roses, and hyacinths; however, most wine would have turned sour within a year, and specific vintages are never mentioned. Together with the sea-faring Phoenicians, the Ancient Greeks disseminated the vine throughout the Mediterranean, and even named southern Italy Oenotria or 'Land of Vines'.

If wine was important to the Greeks, it was even more so to the Romans, who thought of it as a daily necessity of life and democratized its drinking. They established a great number of Western Europe's major wine producing regions, not only to provide steady supplies for their soldiers and colonists, but also to trade with native tribes and convert them to the Roman cause. In particular, the trade of Hispanic wines surpassed even that of Italian wines, with Hispanic amphorae having been unearthed as far afield as Britain and the limes Germanicus or German frontier. In his Geographica (7 BC), Strabo states that the vineyards of Hispania Baetica (which roughly corresponds to modern Andalucia) were famous for their great beauty. The area of Pompeii produced a great deal of wine, much of it destined for the city of Rome, and the eruption of Mount Vesuvius in 79 AD led to a dramatic shortage. The people of Rome panicked, uprooting food crops to plant vineyards. This led to a food shortage and wine glut, which in 92 AD compelled the emperor Domitian to issue an edict banning the planting of vineyards in Rome.

The Romans left behind a number of agricultural treatises that provide a wealth of information on Roman viticulture and winemaking. In particular, Cato the Elder's De Agri Cultura (c. 160 BC) served as the Roman textbook of winemaking for several centuries.
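The dilution figures quoted earlier (two or three parts of water yielding roughly 3-5% alcohol) can be sanity-checked with a little arithmetic. The snippet below is an illustration added here, not part of the original piece; the 13% and 15% undiluted strengths are assumptions about typical fully fermented wine, not historical data:

```python
# Mixing one part wine with n parts water divides the alcohol
# concentration by (1 + n), since the alcohol is spread over the
# total volume.

def diluted_abv(wine_abv, parts_water):
    """Alcohol by volume (%) after mixing one part wine with `parts_water` parts water."""
    return wine_abv / (1 + parts_water)

# Three parts water on an assumed 13% wine:
print(round(diluted_abv(13.0, 3), 2))  # 3.25
# Two parts water on an assumed 15% wine:
print(round(diluted_abv(15.0, 2), 2))  # 5.0
```

Both results fall inside the 3-5% range the text gives, which is consistent with undiluted ancient wines having been in the low-to-mid teens of percent alcohol.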
In De Re Rustica, Columella surveyed the main grape varieties, which he divided into three main groups: noble varieties for great Italian wines, high yielding varieties that can nonetheless produce age-worthy wines, and prolific varieties for ordinary table wine. Pliny the Elder, who also surveyed the main grape varieties, claimed that ‘classic wines can only be produced from vines grown on trees’, and it is true that the greatest wines of Campania, such as Caecuban and Falernian, nearly all came from vines trained on trees—often elms or poplars. Both Caecuban and Falernian were white sweet wines, although there also existed a dry style of Falernian. Undiluted Falernian contained a high degree of alcohol; so high that a candle flame could set it alight. It was deemed best to drink Falernian at about 15-20 years old, and another classed growth called Surrentine at 25 years old or more. The Opimian vintage of 121 BC, named after the consul in that year Lucius Opimius, acquired legendary fame, with some examples still being drunk more than 100 years later. The best wines were made from the initial and highly prized free-run juice obtained from the treading of the grapes. At the other end of the spectrum were posca, a mixture of water and sour wine that had not yet turned to vinegar, and lora, a thin drink or piquette produced from a third pressing of grape skins. Following the Greek invention of the screw, screw presses became common on Roman villas. Grape juice was fermented in large clay vessels called dolia, which were often partially sunk into the ground. The wine was then racked into amphorae for storing and shipping. Barrels invented by the Gauls and, later still, glass bottles invented by the Syrians vied as alternatives to amphorae. As in Ancient Greece, additives were common: chalk or marble to neutralize excess acid; and boiled must, herbs, spice, honey, resin, or seawater to improve and preserve thin offerings. 
Maderization was common and sought after; at the same time, rooms destined for wine storage were sometimes built so as to face north and away from the sun. Following the decline and fall of the Western Roman Empire, the Church perpetuated the knowledge of viticulture and winemaking, first and foremost to provide the blood of Christ for the celebration of Mass.
Contrary to popular belief, astronauts still have weight while they are orbiting the earth. In fact, Shuttle astronauts weigh almost as much in space as they do on the earth's surface. But these astronauts are in free fall, together with their ship, and their downward acceleration prevents them from measuring their weight directly. Instead, astronauts make a different type of measurement, one that accurately determines how much of them there is: they measure their masses. Your weight is the force that the earth's gravity exerts on you; your mass is the measure of your inertia, how hard it is to make you accelerate. For deep and interesting reasons, weight and mass are proportional to one another at a given location, so measuring one quantity allows you to determine the other. Astronauts make these mass measurements with the help of a shaking device: they strap themselves onto a machine that gently jiggles them back and forth to see how much inertia they have. By measuring how much force it takes to cause a particular acceleration, the machine can compute the mass of its occupant.
Answered by Lou A. Bloomfield of the University of Virginia
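The inertial principle behind such a shaking device can be sketched with the textbook spring-oscillator relation m = k(T/2π)²: attach the occupant to a spring of known stiffness k, measure the oscillation period T, and solve for the mass. This is an illustrative calculation only; the spring stiffness and period below are invented values, not specifications of the actual flight hardware:

```python
import math

def mass_from_period(k_newtons_per_m, period_s):
    """Mass (kg) of a body oscillating on a spring of stiffness k with period T,
    from the simple harmonic motion relation T = 2*pi*sqrt(m/k)."""
    return k_newtons_per_m * (period_s / (2 * math.pi)) ** 2

# A hypothetical 70 kg astronaut on an assumed 605.5 N/m spring would
# oscillate with period T = 2*pi*sqrt(m/k); inverting the measured
# period recovers the mass.
k = 605.5
T = 2 * math.pi * math.sqrt(70.0 / k)
print(round(mass_from_period(k, T), 1))  # 70.0
```

Note that gravity appears nowhere in the formula, which is exactly why the technique works in free fall: it measures inertia directly rather than weight.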
5th Grade Oral Language Resources

Students will:
• Learn about the concept of whales.
• Access prior knowledge and build background about whales.
• Explore and apply the concept of whales.

Students will:
• Demonstrate an understanding of the concept of whales.
• Orally use words that describe different types of whales and where they live.
• Extend oral vocabulary by speaking about terms that describe whales and whale body parts.
• Use key concept words [inlet, humpback, ocean, fins, underwater; submerge, ascend, baleen, mammal].

Explain
• Use the slideshow to review the key concept words.
• Explain that students are going to learn about:
• Where whales live.
• Parts of a whale's body.

Model
• After the host introduces the slideshow, point to the photo on screen. Ask students: What kind of animal do you see in this picture? (whale). What do you know about these animals? (answers will vary).
• Ask students: What are the dangers facing whales? (too much hunting, polluted environment).
• Say: In this activity, we're going to learn about whales. How can we protect whales? (not pollute the environment, join groups that are concerned with their safety).

Guided Practice
• Guide students through the next two slides, showing them examples of whales and the way whales live. Always have the students describe how people are different from whales.

Apply
• Play the games that follow. Have students discuss with their partner the different topics that appear during the Talk About It feature.
• After the first game, ask students to talk about what they think a whale's living environment is like. After the second game, have them discuss what they would like and dislike about having the body of a whale.

Close
• Ask students: How do you move in the water?
• Summarize for students that since whales are mammals, they have to come above water to breathe. Encourage them to think about how whales manage to breathe even though they live underwater.
A visual arts lesson combined with language arts, in which the students will create a visual poem using crayons. Students are asked to make a connection to an important aspect or event in their lives.

Have you ever wondered why children are so afraid to express themselves through poetry? Possibly it is because they think a poem has to rhyme, have a certain pattern, or look a certain way. This lesson will allow children to use their imagination to create a visual poem. They will be encouraged to think independently. Through this exposure to writing poems and making visual representations of their poems, the children will learn how to respond emotionally and verbally to different visual poems. In addition, the students will develop an appreciation for poetry and other forms of expression.

Creative Expression: Each student will create a visual poem, using colored crayons, which will illustrate an important aspect of his or her life.

Aesthetic Valuing: The students will share their visual poems with the class, which will help them to appreciate the variations in poetry and recognize the different styles of visual poetry.

1. Direct instruction - the teacher will explain different types of visual poems and give examples.
2. Guided discovery - students will create their own unique visual poem.
3. Group process - students will share their poems with a partner.

Introduction - First, the teacher will read a poem to the students and ask them if they liked it. What things did you like or dislike? Then, the teacher will share 3-4 different examples of visual poems (done by 3rd graders) with the kids, showing them a couple of different styles. These examples will be shown on transparencies.

1. As a beginning activity to expose children to poetry, the teacher will provide the students with a worksheet. The worksheet will have 3 sentences on it, each one starting with "I wish...". The students will be asked to respond with 3 things they wish for.
Then, the students will be told that they just created a poem.
2. The teacher will engage the students in a brainstorming activity (using a large sheet of white butcher paper) in which they discuss some of their favorite things, from favorite colors to hobbies to important people.
3. The teacher will instruct the students to create a visual poem which illustrates something that is very important to them.
4. A piece of plain white paper will be passed out, along with a box of crayons. Watercolors will be available at the art table if children elect to use them.
5. Soft instrumental music will be played in the background as the students create their visual poems.
6. Students will be paired up with a partner to share their visual poem. They will be instructed to tell the partner at least 1 thing they liked about the visual poem.

Have the students do a poetry reading (on a volunteer basis), which gives them the opportunity to share their creations in front of an audience. Collect the visual poems and put them together into a book. Here is an example title for such a book: Ms. Hiltel's 3rd grade classes' wonderful creations!

The teacher collects the visual poems and checks for visual evidence of completion of the assignment, including use of color and a connection to an important event or aspect of the student's life. After poems are shared in partners, each student critiques a peer's visual poem by stating one aspect that is particularly liked. The teacher also listens to students' comments during sharing.

1. overhead projector
2. copies of 3-4 poems on transparencies
3. 1 poem to read aloud
4. 1 box of crayons (per student)
5. watercolors, paintbrushes, plastic cups, paper towels (for optional use)
6. white paper (1 sheet per student)
7. large piece of white butcher paper
8. soft instrumental music

Assigned students will collect crayons and return them to the proper place.
Students who used the watercolors at the art table will be responsible for cleaning up that area and putting away the watercolor boxes.