Another Way to Make Clean Use of the Abundant Coal Resource

Conventional coal-fired electric generating facilities capture just a third of the energy available in the fuel they burn. Fuel cells can convert significantly more of the energy, approximately 50 percent. If gas turbines and fuel cells could be combined into hybrid systems, researchers believe they could capture as much as 80 percent of the energy, reducing the amount of coal needed to produce a given amount of energy and potentially cutting carbon emissions. But that would only be possible if the fuel cells could run for long periods of time on coal gas, which now deactivates the anodes after as little as 30 minutes of operation.

The carbon removal system developed by the Georgia Tech-led team uses a vapor deposition process to apply barium oxide nanoparticles to the nickel-YSZ electrode. The particles, which range in size from 10 to 100 nanometers, form "islands" on the nickel that do not block the flow of electrons across the electrode surface. When water vapor introduced into the coal gas stream contacts the barium oxide, it is adsorbed and dissociates into protons and hydroxide (OH) ions. The hydroxide ions move to the nickel surface, where they combine with the carbon atoms being deposited there, forming the intermediate COH. The COH then dissociates into carbon monoxide and hydrogen, which are oxidized to power the fuel cell, ultimately producing carbon dioxide and water. About half of the carbon dioxide is then recirculated back to gasify the coal to coal gas to continue the process.

"We can continuously operate the fuel cell without the problem of carbon deposition," said Liu, who is also co-director of Georgia Tech's Center for Innovative Fuel Cell and Battery Technologies.

The researchers also evaluated the use of propane to power solid oxide fuel cells using the new anode system. Because oxidation of the hydrogen in the propane produces water, no additional water vapor had to be added, and the system operated successfully for a period of time similar to the coal gas system.

Solid oxide fuel cells operate most efficiently at temperatures above 850 degrees Celsius, and much less carbon is deposited at higher temperatures. However, those operating temperatures require fabrication from special materials that are expensive – and prevent solid oxide fuel cells from being cost-effective for many applications. Reducing the operating temperature is a research goal, because dropping it to 700 or 750 degrees Celsius would allow the use of much less expensive materials for interconnects and other important components. However, until development of the self-cleaning process, reducing the operating temperature meant worsening the coking problem.

"Reducing the operating temperature significantly by eliminating the problem of carbon deposition could make these solid oxide fuel cells economically competitive," Liu said.

Fuel cells powered by coal gas still produce carbon dioxide, but in a much purer form than the stack gases leaving traditional coal-fired power plants. That would make capturing the carbon dioxide for sequestration less expensive by eliminating large-scale separation and purification steps, Liu noted. The researchers have so far tested their process for a hundred hours and saw no evidence of carbon build-up.

The problem with making the removal of CO2 a priority is that it destroys whatever profitability exists within the coal energy sector.
But destroying coal energy production has always been one of President Obama's long-term goals, as he confessed to supporters in San Francisco before being elected in 2008. When so much of the government of the world's only superpower is dedicated to the destruction of reliable forms of energy such as coal, nuclear, oil sands, unconventional gas, offshore oil, and so on, it becomes difficult for industry and commerce to survive. Since the prosperity and power of the world's only superpower is based upon its industrial and commercial might, it appears that the Obama administration is committing democide on a grand scale via its broad policies of energy starvation.
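Returning to the efficiency figures at the top of the article, here is a minimal arithmetic sketch of what they imply for coal use per unit of electricity. The 33, 50, and 80 percent efficiencies come from the article; the assumed coal energy content of 24 MJ/kg is a typical bituminous value, not a figure from the source.

```python
# Coal needed per kWh of electricity at the efficiencies quoted in the
# article. ENERGY_DENSITY is an assumed typical value for bituminous coal.
ENERGY_DENSITY = 24.0          # MJ per kg of coal (assumption)
KWH_IN_MJ = 3.6                # 1 kWh = 3.6 MJ

for label, eff in [("conventional plant", 0.33),
                   ("fuel cell", 0.50),
                   ("turbine/fuel-cell hybrid", 0.80)]:
    kg_per_kwh = KWH_IN_MJ / (eff * ENERGY_DENSITY)
    print(f"{label:25s} {kg_per_kwh:.2f} kg coal per kWh")

# conventional plant         0.45 kg coal per kWh
# fuel cell                  0.30 kg coal per kWh
# turbine/fuel-cell hybrid   0.19 kg coal per kWh
```

Going from 33 to 80 percent efficiency cuts coal use, and hence CO2 per kilowatt-hour, by roughly 60 percent, which is the article's point about hybrid systems.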
Ultraviolet (UV) light is electromagnetic radiation in the approximate wavelength range 10 to 400 nm. It has wavelengths shorter than visible light but longer than X-rays and is invisible to the human eye. The ultraviolet wavelength range is broadly divided, in order of decreasing wavelength, into the near ultraviolet (NUV; closest to the wavelengths of visible light), the far ultraviolet (FUV) and the extreme ultraviolet (EUV; closest to the wavelengths of X-rays).

The Sun is a source of ultraviolet radiation, which is harmful to human skin. The Earth's ozone layer blocks the majority of the Sun's UV radiation, which is beneficial for us but hampers ground-based ultraviolet astronomy. Instead, ultraviolet-wavelength telescopes must be put into space on satellites. Astrophysical sources of ultraviolet light are hot objects (T ~ 10^6 to 10^8 K) including young, massive OB stars, evolved white dwarfs, supernova remnants, the Sun's corona and gas in galaxy clusters.

Ultraviolet light was discovered by Johann Wilhelm Ritter in 1801 when he noticed that invisible light beyond the optical region of the electromagnetic spectrum darkened silver chloride. He split sunlight using a prism and then measured the relative darkening of the chemical as a function of wavelength. The region just beyond the optical violet region produced the most darkening, and hence the new radiation was eventually christened 'ultra'violet.
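As a quick numeric illustration of where those band edges sit in energy terms, the snippet below applies the standard photon-energy relation E = hc/wavelength to the 10 nm and 400 nm limits quoted above. The constants are standard physical values; nothing here comes from the article beyond the two wavelengths.

```python
# Photon energies at the UV band edges quoted in the text (10-400 nm).
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electronvolt

for nm in (400, 10):  # visible-light edge and X-ray edge of the UV band
    energy_ev = H * C / (nm * 1e-9) / EV
    print(f"{nm} nm -> {energy_ev:.1f} eV")

# 400 nm -> ~3.1 eV  (NUV, bordering visible light)
# 10 nm  -> ~124 eV  (EUV, bordering X-rays)
```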
Solar Neutrino Experiments

Neutrinos are ghostlike particles that were postulated by Wolfgang Pauli in 1930 on purely theoretical grounds and, until recently, were believed to have zero mass. They are thought to be produced in the nuclear reactions that provide the sun's energy. They rain down on each square inch of the earth at the rate of about 400 billion per second.

Raymond Davis Jr. started investigating neutrinos produced in Brookhaven's Graphite Research Reactor and at a reactor at the Savannah River Plant in South Carolina in the 1950s. But these experiments were really the prelude to Davis's major triumph, which came in the early 1970s, when he successfully detected solar neutrinos in a new experiment based in Lead, South Dakota.

A solar neutrino was expected to produce radioactive argon when it interacted with a nucleus of chlorine. Davis developed an experiment based on this idea by placing a 100,000-gallon tank of perchloroethylene, a commonly used dry-cleaning chemical and a good source of chlorine, 4,800 feet underground in the Homestake Gold Mine in South Dakota, and by developing techniques for quantitatively extracting a few atoms of argon from the tank. The chlorine target was located deep underground to protect it from cosmic rays. The target also had to be big because the probability of chlorine capturing a neutrino was ten quadrillion times smaller than that of its capturing a neutron in a nuclear reactor.

Despite these odds, Davis's experiment confirmed that the sun produces neutrinos, but only about one-third of the number of neutrinos predicted by theory could be detected. This so-called "solar neutrino puzzle" gave birth to different experiments by scientists around the world, all working to confirm the solar neutrino deficit. First came Kamiokande in Japan, then SAGE in the former Soviet Union, GALLEX in Italy, and then Super-Kamiokande. Finally, in 2001-2002, scientists working at SNO, the Sudbury Neutrino Observatory in Ontario, Canada, found strong evidence that the neutrino has the ability to oscillate, or change form, among its three known types: the electron, muon and tau neutrinos.

Further reading:
- 1967 Brookhaven National Lab press release, "Solar Energy Generation Theory Being Tested in Brookhaven Neutrino Experiment" (PDF)
- 1967 Brookhaven Bulletin story, "Solar Neutrinos Are Counted at Brookhaven" (PDF)
- Brief video featuring comments by Ray Davis on his neutrino research (streaming video; RealPlayer required)
- A "question and answer" interview with Dr. Davis, about 9 min. (RealPlayer required)
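A quick unit check on the flux figure quoted above: converting 400 billion neutrinos per square inch per second into the per-square-centimeter units more common in the literature lands close to the commonly cited solar neutrino flux of roughly 6 x 10^10 per cm^2 per second. The only number taken from the article is the 400 billion figure.

```python
# Convert the article's solar-neutrino flux to per-cm^2 units.
PER_SQUARE_INCH = 400e9        # neutrinos per square inch per second (from text)
CM2_PER_IN2 = 2.54 ** 2        # one inch is exactly 2.54 cm, so ~6.45 cm^2/in^2

flux_per_cm2 = PER_SQUARE_INCH / CM2_PER_IN2
print(f"{flux_per_cm2:.2e} neutrinos per cm^2 per second")   # ~6.20e+10
```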
An analysis of mineral grains from the bottom of the western Grand Canyon indicates it was largely carved out by about 70 million years ago -- a time when dinosaurs may have even peeked over the rim, says a study led by the University of Colorado Boulder. The new research pushes back the conventionally accepted date for the formation of the Grand Canyon in Arizona by more than 60 million years, said CU-Boulder Assistant Professor Rebecca Flowers.

The team used a dating method that exploits the radioactive decay of uranium and thorium atoms to helium atoms in a phosphate mineral known as apatite, said Flowers, a faculty member in CU-Boulder’s geological sciences department. The helium atoms were locked in the mineral grains as they cooled and moved closer to the surface during the carving of the Grand Canyon, she said. Temperature variations at shallow levels beneath the Earth’s surface are influenced by topography, and the thermal history recorded by the apatite grains allowed the team to infer how much time had passed since there was significant natural excavation of the Grand Canyon, Flowers said.

“Our research implies that the Grand Canyon was directly carved to within a few hundred meters of its modern depth by about 70 million years ago,” said Flowers. A paper on the subject by Flowers and Professor Kenneth Farley of the California Institute of Technology was published online Nov. 29 in Science magazine.

Flowers said there is significant controversy among scientists over the age and evolution of the Grand Canyon. A variety of data suggest that the Grand Canyon had a complicated history, and the entire modern canyon may not have been carved all at the same time. Different canyon segments may have evolved separately before coalescing into what visitors see today. In a 2008 study, Flowers and colleagues showed that parts of the eastern section of the Grand Canyon likely developed some 55 million years ago, although the bottom of that ancient canyon was above the height of the current canyon rim at that time before it subsequently eroded to its current depth.

Over a mile deep in places, Arizona’s steeply sided Grand Canyon is about 280 miles long and up to 18 miles wide in places. Visited by more than 5 million people annually, the iconic canyon was likely carved in large part by an ancestral waterway of the Colorado River that was flowing in the opposite direction millions of years ago, said Flowers.

“An ancient Grand Canyon has important implications for understanding the evolution of landscapes, topography, hydrology and tectonics in the western U.S. and in mountain belts more generally,” said Flowers. The study was funded in part by the National Science Foundation.

Whether helium is retained or lost from the individual apatite crystals is a function of temperatures in the rocks of Earth’s crust, she said. “The main thing this technique allows us to do is detect variations in the thermal structure at shallow levels of the Earth’s crust,” she said. “Since these variations are in part induced by the topography of the region, we obtained dates that allowed us to constrain the timeframe when the Grand Canyon was incised.”

Flowers and Farley took their uranium/thorium/helium dating technique to a more sophisticated level by analyzing the spatial distribution of helium atoms near the margin of individual apatite crystals.
“Knowing not just how much helium is present in the grains but also how it is distributed gives us additional information about whether the rocks had a rapid cooling or slow cooling history,” said Flowers.

There have been a number of studies in recent years reporting various ages for the Grand Canyon, said Flowers. The most popular theory places the age of the Grand Canyon at 5 million to 6 million years, based on the age of gravel washed downstream by the ancestral Colorado River. In contrast, a 2008 study published in Science estimated the age of the Grand Canyon to be some 17 million years after researchers dated mineral deposits inside of caves carved in the canyon walls.

Paleontologists believe dinosaurs were wiped out when a giant asteroid collided with Earth 65 million years ago, resulting in huge clouds of dust that blocked the sun’s rays from reaching Earth’s surface, cooling the planet and killing most plants and animals.

Because of the wide range of theories, dates and debates regarding the age of the Grand Canyon, geologists have redoubled their efforts, said Flowers. “There has been a resurgence of work on this problem over the past few years because we now have some new techniques that allow us to date rocks that we couldn’t date before,” she said.

For the full news release visit http://www.colorado.edu/news/releases/2012/11/29/grand-canyon-old-dinosaurs-suggests-new-study-led-cu-boulder.
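For readers curious about the mechanics of the uranium/thorium/helium dating mentioned above, here is a minimal sketch in Python of the raw age calculation. It assumes full helium retention and uses made-up grain abundances, and it deliberately ignores the helium-diffusion and cooling-history modeling the actual study relies on; the decay constants and alpha-particle multiplicities are standard published values.

```python
# (U-Th)/He age: each alpha decay in the U and Th chains adds one 4He atom.
# Measuring He, U and Th in an apatite grain lets you solve the ingrowth
# equation for time t:
#   He = 8*U238*(e^(l238*t)-1) + 7*U235*(e^(l235*t)-1) + 6*Th232*(e^(l232*t)-1)
import math

L238, L235, L232 = 1.551e-10, 9.849e-10, 4.948e-11   # decay constants, 1/yr

def helium_ingrowth(t, u238, u235, th232):
    return (8 * u238 * (math.exp(L238 * t) - 1)
            + 7 * u235 * (math.exp(L235 * t) - 1)
            + 6 * th232 * (math.exp(L232 * t) - 1))

def he_age(he, u238, u235, th232, lo=0.0, hi=4.5e9):
    """Bisection solve for t; abundances in any consistent molar units."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if helium_ingrowth(mid, u238, u235, th232) < he:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical grain: abundances are made up purely to illustrate the solve.
u238, u235, th232 = 100.0, 100.0 / 137.8, 40.0   # 137.8 = natural 238U/235U
he = helium_ingrowth(70e6, u238, u235, th232)    # forward-model a 70 Myr grain
print(f"recovered age: {he_age(he, u238, u235, th232) / 1e6:.1f} Myr")  # ~70.0
```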
Dengue fever is a disease spread by the Aedes aegypti mosquito and caused by one of four closely related dengue viruses. The viruses that cause dengue fever are related to those that cause yellow fever and West Nile virus infection. Every year, it is estimated that at least 100 million cases of dengue fever occur across the globe. Tropical regions remain heavily affected. Areas that have the greatest risk of infection include:
- Sub-Saharan Africa
- Central America
- The Caribbean (except the Cayman Islands and Cuba)
- Pacific Islands
- South America (except Argentina, Chile, and Paraguay)
- Southeast Asia
- Southern China
- Northern parts of Australia

Very few cases occur in the United States. Most of the cases that are diagnosed occur in individuals who contracted the disease while traveling abroad. However, the risk of infection is increasing for residents of Texas who live in areas that share a border with Mexico. Additionally, cases have been on the rise in the southern United States. As recently as 2009, an outbreak of dengue fever was identified in Key West, Florida.

Dengue fever is transmitted via the bite of a mosquito harboring the dengue virus. Person-to-person transmission does not occur.

If you contract dengue fever, symptoms usually begin about four to seven days after the initial infection. In many cases, symptoms will be mild. They may be mistaken for symptoms of the flu or another infection. Young children and people who have never experienced infection may have a milder illness than older children and adults. Symptoms generally last for about 10 days and can include:
- sudden, high fever
- severe headache
- swollen lymph glands
- severe joint pain and muscle pain
- skin rash (appearing between two and five days after the initial fever)
- mild to severe nausea
- mild to severe vomiting
- mild bleeding from the nose or gums
- mild bruising on the skin
- febrile convulsions

A small percentage of individuals who have dengue fever can develop a more serious form of the disease, dengue hemorrhagic fever.

Dengue Hemorrhagic Fever

The risk factors for developing dengue hemorrhagic fever include:
- having antibodies to dengue virus from a previous infection
- being under the age of 12
- being female
- Caucasian race
- weakened immune system

This rare form of the disease is characterized by:
- high fever
- damage to the lymphatic system
- damage to blood vessels
- bleeding from the nose
- bleeding from the gums
- liver enlargement
- circulatory system failure

The symptoms of dengue hemorrhagic fever can trigger dengue shock syndrome. Dengue shock syndrome is severe and can lead to massive bleeding and even death.

Doctors use blood tests to check for viral antibodies or the presence of infection. If you experience dengue symptoms after traveling outside the country, you should see a healthcare provider to check whether you are infected.

There is no medication or treatment specifically for dengue infection. If you believe you may be infected with dengue, you should use over-the-counter pain relievers to reduce your fever, headache, and joint pain. However, aspirin and ibuprofen can cause more bleeding and should be avoided. Your doctor should perform a medical exam, and you should rest and drink plenty of fluids. If you feel worse after the first 24 hours of illness — once your fever has gone down — you should be taken to the hospital as soon as possible to check for complications.

There is no vaccine to prevent dengue fever.
The best method of protection is to avoid mosquito bites and to reduce the mosquito population. When in a high-risk area, you should:
- avoid heavily populated residential areas.
- use mosquito repellent indoors and outdoors.
- wear long-sleeved shirts and pants tucked into socks.
- use air conditioning instead of opening windows.
- ensure that window and door screens are secure and any holes are repaired.
- use mosquito nets if sleeping areas are not screened.

Reducing the mosquito population involves getting rid of mosquito breeding areas. These areas include any place where still water can collect, such as birdbaths, pet dishes, empty planters, flower pots, cans, or any other empty vessel. These areas should be checked, emptied, or changed regularly.

If a family member is already ill, it is important to protect yourself and other family members from mosquito bites. To help prevent the disease from spreading, consult a physician anytime you experience symptoms of dengue fever.
The effect of wind

Rev. 23 — page content was last changed October 12, 2009, consequent to editing by RA-Aus member Dave Gardiner.

An aircraft in flight is airborne and subject to the movement of the air mass in relation to the surface; i.e. the wind. The relatively low cruising speed of light aircraft makes them particularly affected by wind velocity. Consequently, the calculation of the wind effect on aircraft movement relative to the ground is a major part of light aircraft flight planning and navigation.

In the basic forces module of the flight theory section we said it is common practice to estimate resultant forces non-mathematically by drawing scaled, arrowed lines to represent each vector quantity; this produces the resultant of two vector quantities in a vector triangle or parallelogram. The lengths of the lines represent the magnitude of each force, and the placements indicate the application points and directions. We also know that an aircraft in flight is airborne, and consequently both the path it projects over the ground and its speed relative to the ground are the resultant of the aircraft velocity and the wind velocity.

For example, waypoint Beta is 150 nautical miles north-east (045° true) of waypoint Alpha, and an aircraft departs overhead Alpha for Beta, maintaining a heading of 045° true while cruising at 75 knots TAS. At the time of departure, the wind velocity at the cruise altitude is 135°/20 knots; i.e. the 20-knot wind is coming from the south-east. Where will the aircraft be after two hours of flight? Certainly not over Beta, as it will have moved 150 nm north-east within the air mass while the air mass has moved 40 miles north-west. So we might surmise that after two hours of flight its position will be about 40 nm north-west of Beta, and this is shown in Figure 1. The aircraft has drifted from its intended path or track over the ground, and the 'track made good' is about 15° to the left of the 'required track'. We should note that, relative to the aircraft's course, the wind velocity normally has both a crosswind component and a headwind or tailwind component, and that headwind or tailwind component will also affect the aircraft's speed relative to the surface — the ground speed.

The wind triangle

So, if we want to track over the direct route from Alpha to Beta we will have to ascertain both the wind velocity at the time of flight and a heading to fly that will provide the necessary crosswind correction angle. Remember that velocity vectors have both speed and direction. In the wind triangle we have only one completely known vector — the forecast wind velocity. We know part of the heading vector — the true airspeed — but not the direction. We also know part of the resultant vector — the direction (required track) from Alpha to Beta — but not the ground speed. We can determine the two unknowns — the heading and the ground speed — by plotting scaled vectors on paper. You will need some drawing instruments, a protractor and ruler, but a pair of compasses or dividers can be useful.

• First draw a vertical line labelled 'true north' and mark a position on the line as waypoint Alpha.

• Using a protractor centred on Alpha and aligned with true north, mark the bearing to waypoint Beta; e.g. from the above, 045° true.
Rule a line of appropriate length from Alpha through the bearing, marking it with two arrows to indicate it as the track direction, and annotate that bearing.

• Wind velocity is given as the direction the wind is coming from, and we need to plot the direction it is moving to — the reciprocal bearing. The reciprocal is the stated direction ±180°. Using a protractor centred on Alpha and aligned with true north, mark the reciprocal wind bearing: 135° ±180° = 315° true.

• Rule a line of appropriate length from Alpha through the wind bearing mark. Decide the scale to be used and mark off a distance along that line that equals the air movement during one hour; i.e. 20 nm (20 knots wind speed). Label that distance mark as the wind vector — v1. The convention is to add three arrows to the vector indicating direction, and annotate the wind velocity — 135/20 knots (Figure 2).

• Using the scale, open up the dividers or compasses to the distance equalling the air distance the aircraft travels in one hour; i.e. 75 nm at the cruise true airspeed of 75 knots. With one divider/compass point on v1, mark the track line with the other divider/compass point and label that v2 (Figure 3). Or just use the ruler to accomplish the same task.

• Draw a line connecting v1 and v2, marking it with one arrow to represent the heading vector. Its orientation with true north is the heading (060°T) and its length is the TAS. Thus we have the first unknown — the direction in which to point the aircraft. Annotate the heading (060°T) and TAS (75 kn). Also note that the wind correction angle [WCA] — the difference between the track (045°T) and the heading (060°T) — is 15°, and the drift will be to the left — also known as port drift.

The wind correction angle is the angular difference between the required track and the heading, intended to ensure that the track made good will equate with the required track. Note that the terms 'crab angle' and 'drift angle' are very often used instead of 'wind correction angle', but the latter term is more precise; crab angle and drift angle have slightly different meanings or associations. Drift angle is measured in flight, and is the angle between the heading and the track made good. Crab angle is the preferred term when associated with crosswind landing.

• Now measure the distance between Alpha and v2, which is the distance (72 nm) moved over the ground during one hour. This is the second unknown — the ground speed. Annotate the ground speed (72 kn) adjacent to the bearing (Figure 3).

• We can now calculate the sector flight time from overhead Alpha to overhead Beta; this time is called the estimated time interval [ETI]. ETI (minutes) = distance (nm) / ground speed (kn) × 60 = 150/72 × 60 = 125 minutes. It is interesting to note that even though the wind is a full crosswind, the ground speed is less than the TAS and thus the ETI is a bit greater than you may have expected. This is because the heading of 060° now includes a small headwind component.

Direct headwind/tailwind

If the wind is aligned directly with the required track then of course it is not possible to construct the triangle, as there is no wind correction angle and the ground speed is the TAS ± wind speed. However, just as an illustration that the wind triangle still provides the correct answers, I have repeated the previous Alpha to Beta plot with winds that are only 10° off the required track; i.e. nearly full headwind and tailwind components.
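For readers who prefer arithmetic to drawing instruments, here is a minimal Python sketch of the same wind-triangle solution done with trigonometry. The function and its sign conventions are my own illustration, not part of the tutorial; the example numbers are the Alpha to Beta leg worked above.

```python
import math

def wind_triangle(track, tas, wind_from, wind_speed):
    """Solve the wind triangle. Directions in degrees true, speeds in knots."""
    rel = math.radians(wind_from - track)     # wind angle relative to track
    crosswind = wind_speed * math.sin(rel)    # positive = wind from the right
    headwind = wind_speed * math.cos(rel)     # positive = slows you down
    wca = math.asin(crosswind / tas)          # wind correction angle
    heading = (track + math.degrees(wca)) % 360
    groundspeed = tas * math.cos(wca) - headwind
    return heading, groundspeed

hdg, gs = wind_triangle(track=45, tas=75, wind_from=135, wind_speed=20)
print(f"heading {hdg:.0f} degrees T, ground speed {gs:.0f} kn")  # ~060, ~72 kn
```

Run on the example leg, this reproduces the plotted answers: a heading of about 060°T and a ground speed of about 72 knots, including the small headwind component picked up by crabbing into the crosswind.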
It may be thought that if an out-and-return trip is flown where the wind is directly aligned with the required track, the headwind encountered in one direction will be offset by the tailwind in the reverse direction; thus the total flight time will be equivalent to that in nil-wind conditions. Not so — the greater the wind speed, the greater the flight time on an out-and-return flight, no matter what the wind direction. Imagine a flight Alpha–Beta–Alpha in nil-wind conditions. The ground speed on both the outward and return legs would equal the TAS (75 kn) and each leg would take 120 minutes, for a total flight time of 240 minutes. Now let's factor in a 25-knot north-east wind. The ground speed on the outward leg would be 50 kn and the ETI would be 180 minutes, whereas the ground speed on the return leg would be 100 kn and the ETI 90 minutes, for a total flight time of 270 minutes.

Plotting the wind vector triangle is the most accurate method for ascertaining heading and ground speed, but there are two other methods that are quite accurate enough for light aircraft cross-country navigation. Actual winds are rarely exactly as forecast (see the boundary layer turbulence paragraphs in the microscale meteorology module), so there is no reason to try for absolute accuracy in the initial calculation of heading, ground speed and ETI. So, rather than plotting the wind triangle, we can introduce a few shortcuts to the process by using some simple mental arithmetic to estimate the crosswind and headwind/tailwind components of the wind velocity relative to the required track. Even so, it is wise to become familiar with plotting the wind triangle; the experience makes it much easier to mentally envisage the relationship between the vectors, thus avoiding flying entirely in the wrong direction — which is remarkably easy to do.

The trigonometrical relationships of the two wind components — crosswind and headwind/tailwind — are shown in a wind triangle (Figure 5). In this example the wind angle is 30° relative to the required track and the wind speed is 20 knots. The sine of an angle = opposite side/hypotenuse, while the cosine of an angle = adjacent side/hypotenuse. In this wind triangle the hypotenuse represents the wind velocity vector, the side opposite the angle represents the crosswind component of the wind velocity vector, and the adjacent side represents the headwind component of the wind velocity vector. An abridged trigonometric table is contained in the flight theory manoeuvring forces module. Reading from that table, sine 30° is 0.5 and cosine 30° is 0.866 — near enough to 0.9.

Using 1-in-60 to estimate WCA

The two/three-step technique described below approximates the sine/cosine relationships and produces results near enough to the trig calculations (a short code sketch of this shortcut appears at the end of the section).

• 1. First find the crosswind component of the forecast wind velocity by estimating the (acute) angle at which the wind meets the required track, divide that by 60 and multiply the result by the wind speed. However, if the relative angle exceeds 60°, just use 60.
(a) track = 045°, w/v = 075/20 kn: relative angle = 30; 30/60 × 20 = 10 kn crosswind.
(b) track = 045°, w/v = 135/20 kn: relative angle = 90 [use 60]; 60/60 × 20 = 20 kn crosswind.
(c) track = 045°, w/v = 195/20 kn: relative angle = 30; 30/60 × 20 = 10 kn crosswind.

• 2. Then use the 1-in-60 rule to estimate the wind correction angle by dividing the crosswind component by the TAS and multiplying the result by 60.
(a) and (c) crosswind = 10 kn; TAS = 75 kn: 10/75 × 60 = 8° WCA.
or (b) crosswind = 20 kn; TAS = 75 kn: 20/75 × 60 = 16° WCA.

Combining steps 1 and 2 simplifies the calculation: WCA = relative angle [60 max] × wind speed / TAS.
Example: (a) track = 045°, TAS = 75 kn, w/v = 075/20 kn: relative angle = 30; WCA = 30 × 20/75 = 8°.
And remember that the wind correction is applied in the direction the wind is coming from, so that the aircraft crabs along the required track.

• 3. Then, to estimate the ground speed, deduct the (acute) angle at which the wind meets the track from 115 (for angles up to 60°; use 105 for greater angles) and apply that as a percentage of the wind speed.
(a) track = 045°, w/v = 075/20 kn: angle = 30; 115 – 30 = 85% of 20 = 17 knots headwind.
or (b) track = 045°, w/v = 135/20 kn: angle = 90; 105 – 90 = 15% of 20 = 3 knots headwind.
or (c) track = 045°, w/v = 195/20 kn: angle = 30; 115 – 30 = 85% of 20 = 17 knots tailwind.
Subtract the result from the TAS if the wind is coming from ahead to abeam; otherwise add it.

If you would like to try a quick mental calculation with the two plots in Figure 4, you will find the arithmetic produces much the same results as the plots. You may think it wrong that if the wind is at 90° to the track the ground speed calculation will still come up with a headwind component. This is because the track and the wind velocity are relative to the ground, not to the aircraft's heading. With a wind at 90° to the required track the aircraft must take up a heading having some into-wind component, so that it crabs along the required track; try it by plotting a full wind vector triangle incorporating a wind at 90° to the required track. All the short-cut techniques described are not ultra-precise, but they are quite okay for most cross-country navigation. You should also read the meteorology module dealing with southern hemisphere winds, particularly section 6.3.

Using tables for ground speed and WCA

The third and simplest method for estimating WCA, heading and ground speed is to use tables such as those following. Table 1 is for wind speeds up to 30 knots in 5-knot intervals, and for wind angles relative to either side of the required track between 0° and 180°. In the table you will see that headwinds have a negative adjustment and tailwinds a positive adjustment for ground speed. However, if the calculated WCA exceeds about 10°, the inbuilt crab problem becomes apparent and a small additional calculation has to be made to derive a more accurate ground speed: reduce the ground speed by an additional value that is a percentage of the TAS, as shown in Table 2. You will note that the adjustment to ground speed really only becomes significant at WCAs above 20°, and in such conditions it is probably unwise for light aircraft to be engaged in cross-country flight.

Example 1. The track required is 090°, the wind velocity is 060°/15 knots and the TAS is 70 knots. The wind angle relative to track is then 30° left and, reading from Table 1, the headwind component is –13 and the crosswind component is 7. Thus the ground speed will be 70 – 13 = 57 knots, the wind correction angle will be 7/70 × 60 = 6° (to the left), and the heading = 084°.

Example 2. The track required is 300°, the wind velocity is 075°/15 knots and the TAS is 70 knots. The wind angle relative to track is then 135° right and, reading from Table 1, the tailwind component is +10 and the crosswind component is 10.
Thus the ground speed will be 70 + 10 = 80 knots, the wind correction angle will be 10/70 × 60 = 8° (to the right), and the heading = 308°.

Example 3. The track required is 360°, the wind velocity is 075°/20 knots and the TAS is 70 knots. The wind angle relative to track is then 75° right and, reading from Table 1, the headwind component is –5 and the crosswind component is 20. Thus the ground speed will be 70 – 5 = 65 knots, the wind correction angle will be 20/70 × 60 = 16° (to the right), and the heading = 016°. However, because the WCA exceeds 10°, Table 2 is consulted. This shows that for a WCA of 16° the ground speed should be further reduced by 3% of the TAS – about 2 knots, so the adjusted ground speed is 63 knots.

The Jeppesen CR2 'whiz wheel' flight computer, available from the Airservices Australia online store (navigation and planning accessories) for about A$50, is okay and will fit into your pocket — together with a small folding rule — and can be operated with one hand for time and distance calculations. There are hand-held E-6B calculators or 'computers', costing around A$150, which do much the same job as the whiz wheels. I think that all the in-flight variables, to which light aircraft flying at comparatively low levels are subject, negate any potential cost/benefit advantage of such expensive single-purpose devices. However, E-6B software utilities for Palm OS and Pocket PC handheld computers are readily available for about US$20 — or possibly as freeware. To find sources, google 'E6b software'. It is my opinion that the whiz wheel gives a navigator a better grasp of the essentials of the wind triangle and thus makes it easier to mentally envisage in-flight corrections/adjustments without the need to fiddle with the whiz wheel or an electronic E-6B device.

Money available for flight planning and navigation aids is well spent if you install a top-quality magnetic compass and spend some time measuring, adjusting and recording the compass deviation in situ — and recheck the deviation once or twice a year.
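Here, as promised, is a minimal Python sketch of the 1-in-60 mental-arithmetic shortcut (steps 1 to 3 above). The function is my own illustration for checking the worked examples, not part of the original tutorial; angles are in degrees and speeds in knots.

```python
def one_in_sixty(track, tas, wind_from, wind_speed):
    """Estimate WCA and ground speed using the 1-in-60 shortcut from the text."""
    rel = abs(wind_from - track) % 360
    if rel > 180:
        rel = 360 - rel                      # fold to 0-180 (left/right symmetric)
    acute = rel if rel <= 90 else 180 - rel  # acute angle wind makes with track

    # Steps 1+2 combined: WCA = relative angle [capped at 60] x wind speed / TAS
    wca = min(acute, 60) * wind_speed / tas

    # Step 3: head/tailwind as a percentage of wind speed (the published rule
    # slightly overstates the component at very small wind angles)
    factor = (115 - acute) if acute <= 60 else (105 - acute)
    component = factor / 100 * wind_speed
    gs = tas - component if rel <= 90 else tas + component
    return wca, gs

# Example (a) from the text: track 045, TAS 75 kn, wind 075/20 kn
print(one_in_sixty(45, 75, 75, 20))    # WCA = 8 degrees, GS = 58 kn (75 - 17)
# Example (c): wind 195/20 is from behind; GS = 92 kn (75 + 17 kn tailwind)
print(one_in_sixty(45, 75, 195, 20))
```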
This interactive activity from NOVA scienceNOW reviews several potential means of storing carbon dioxide (CO2) captured from industrial sources. Among the featured ideas are technologies that deliver compressed CO2 to underground cavities, saline aquifers, and the deep seabed. The benefit of storing, or sequestering, captured CO2 could be significant in the fight to slow or limit global warming. However, the list of drawbacks associated with carbon sequestration includes high cost, storage capacity limitations, a still-incomplete understanding of the relevant Earth systems, and uncertainty as to whether the CO2 can be safely and permanently contained.

Through the carbon cycle, Earth captures about half of an estimated eight billion metric tons of carbon dioxide (CO2) produced annually through the combustion of fossil fuels. Land plants absorb CO2 for photosynthesis, and in the oceans, CO2 readily dissolves in seawater. Because CO2 is a greenhouse gas and contributes to global warming, the overwhelming consensus among scientists is that something must be done to remove most of what otherwise accumulates in the atmosphere, or to reduce our combustion of fossil fuels in the first place.

Many technological solutions are being explored to capture CO2 either in the air or directly at an emissions source. Once collected, the gas must be safely and permanently stored to prevent its release back into the atmosphere. Before that can happen, the CO2 must be compressed. By nature, a gas is expansive and more difficult to contain than a solid or liquid. Using compression, CO2 gas can be converted into a "supercritical" fluid — somewhere between a gas and a liquid state. While this is both an energy-intensive and expensive process, once complete, the reformatted CO2 can be transported to a storage facility.

Various storage solutions have been proposed, tested, and even put into limited use. They involve sites aboveground, belowground, and in the ocean. Aboveground solutions mostly rely on agricultural means to "fix" carbon in soil, while belowground solutions generally involve filling existing cavities, including depleted coal beds, oil and gas fields, or aquifers, with the fluid CO2. Ocean storage can also take many forms, including injecting CO2 deep into the seabed or stimulating the growth of plankton populations at the surface, which use CO2 in photosynthesis.

While each of these options has merits, each has its drawbacks as well. Although it may be appealing to plant trees and allow vegetation to absorb CO2 for photosynthesis, when plants die, they release much of their stored carbon back into the atmosphere. Another approach, using alkaline minerals to react with the acidic CO2 to form stable carbonates, appears effective, but the mining required to obtain these minerals would make it prohibitively expensive. And as large a potential storage capacity as the oceans offer, the effects of increased levels of CO2 on organisms, especially benthic (bottom-dwelling) species, are largely unknown. Existing research suggests that higher ocean acidity threatens calcium carbonate, the key structural constituent of coral skeletons and mollusk shells. Among other concerns about these technological solutions cited by both scientists and potential investors are the potential for leakage that could spoil freshwater supplies, and the inadequate storage capacity that most terrestrial solutions offer.
And then there's the price: using present sequestration technologies, cost estimates range from $100 to $300 per ton of carbon emissions kept out of the atmosphere. All of this suggests that geological and ocean sequestration may realistically represent only one part of the solution to the problem, a solution that likely must also include reducing our consumption of fossil fuels.
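A rough scale check combining the figures quoted above: roughly eight billion metric tons of CO2 emitted per year, about half reabsorbed naturally, and $100 to $300 per ton sequestered (treating the article's per-ton figure loosely as per ton of CO2; the text is ambiguous between tons of carbon and tons of CO2). Only the arithmetic below is new.

```python
# Order-of-magnitude cost of sequestering the CO2 that is not reabsorbed.
EMITTED = 8e9              # metric tons CO2 per year (from the text)
UNCAPTURED = EMITTED / 2   # the half not taken up by the natural carbon cycle

for cost_per_ton in (100, 300):
    total = UNCAPTURED * cost_per_ton
    print(f"${cost_per_ton}/ton -> ${total / 1e12:.1f} trillion per year")

# $100/ton -> $0.4 trillion per year
# $300/ton -> $1.2 trillion per year
```

Numbers on that scale are why the activity concludes that sequestration can only be one part of the solution.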
The Walker circulation is made up of trade winds blowing from east to west across the tropical Pacific Ocean, bringing moist surface air to the west. In the western tropical Pacific, the moist air rises, forming clouds. The rising air becomes drier because most of its moisture falls to the surface as rain. Winds aloft blow from west to east, moving the now-drier air toward South America. The dry air returns to the surface in the eastern tropical Pacific, completing the loop. (Figure: image courtesy of NOAA Geophysical Fluid Dynamics Laboratory.)

The Walker circulation is an ocean-based system of air circulation that influences weather on the Earth. Normally, the warm, wet western Pacific Ocean is under a low pressure system, and the cool, dry eastern Pacific Ocean is under a high pressure system. This causes surface air to move from east to west, from high pressure in the eastern Pacific to low pressure in the western Pacific. Higher up in the atmosphere, winds flow from west to east, completing the loop.

The Walker circulation is part of the normal weather conditions in the tropical Pacific Ocean: normally, the western Pacific has warm, wet weather and the eastern Pacific has cool, dry weather. The Walker circulation changes every few years, and this changes the weather. This is part of the El Niño-Southern Oscillation (ENSO). When the Walker circulation weakens, it is called El Niño. When the Walker circulation is very strong, it is called La Niña. El Niño and La Niña impact the weather in North and South America, Australia, and Southeast Africa, and can cause flooding, droughts, and increases or decreases in the number of hurricanes that form.
You must be familiar with slope-intercept form (y = mx + b), and understand which numbers in the equation are m and b, and how to graph them: mark b on the y-axis, then graph the slope (m) from that point. Inequalities are very similar, with only a few differences:
- Instead of a line of solutions as in a linear equation, there is a boundary line (solid for ≤ or ≥, dashed for < or >) that shows on which side all the solutions are.
- Shade above or below the boundary line to show on which side all the solutions are.
- Change the direction of the inequality (>, <) if you multiply or divide both sides by a negative number.

►Check your work by using (0,0) as a test point (pick a different point if the boundary line passes through the origin). This will help you know if your answer is correct, and if you forgot to change the direction of the inequality.

These videos cover the same topic, but go about solving in slightly different ways. I watched all of them, and gleaned a little more from each one.
(1) from YourTeacher.com - graphing using a table
(2) boundary line
(3) graphing using slope-intercept form, y = mx + b
(4) graphing using slope-intercept form. He is fast, so pause and read the text on the board.
(5) graphing using slope-intercept form
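For anyone who wants to see these rules in action, here is a minimal Python sketch (assuming matplotlib and numpy are installed; the inequality y < 2x + 1 is just a made-up example, not one from the lesson):

```python
import numpy as np
import matplotlib.pyplot as plt

m, b = 2, 1                         # example inequality: y < 2x + 1
x = np.linspace(-5, 5, 200)
boundary = m * x + b

# Strict inequality (<) means a dashed boundary line; <= or >= would be solid
plt.plot(x, boundary, "--", label="boundary: y = 2x + 1")

# "<" means the solutions lie below the line, so shade below it
plt.fill_between(x, boundary, boundary.min() - 5, alpha=0.3,
                 label="solutions: y < 2x + 1")

# Test point (0, 0): 0 < 2*0 + 1 is True, so (0, 0) must fall in the shading
print("(0,0) is a solution:", 0 < m * 0 + b)

plt.legend()
plt.show()
```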
Short stature is a height that is smaller than the average height for a person's age, sex, and racial group; specifically, a height at or below the third percentile. Short stature is generally broken down into three subgroups:
- Familial short stature—the parents are short
- Constitutional delay of growth and development—the child is small for age but growing at a normal rate, and will reach an adult height similar to the parents'
- Caused by chronic disease—conditions such as malnutrition, genetic disorders, heart problems, and growth hormone deficiency are just a few that can affect growth; treatment will be needed to reach full height potential

Contact a doctor if you notice a significant decrease in your child's growth rate or if your child has stopped growing.

Familial short stature and constitutional delay are due to the child's genetic make-up. If both parents are shorter than average, the child will most likely have short stature. The child may also have delayed puberty. This may cause temporary short stature, but normal height will eventually be reached.

Medical conditions that may contribute to short stature include:
- Malnourishment—the most common cause of growth failure, generally associated with poverty
- Genetic disorders such as skeletal dysplasias, Turner syndrome, Down syndrome, and Silver-Russell syndrome
- Endocrine disorders such as hypothyroidism or growth hormone deficiency
- Congenital heart disease
- Kidney diseases
- Liver failure
- Sickle cell anemia—a blood disorder
- Disorders of the stomach or intestines, such as inflammatory bowel disease
- Lung conditions such as cystic fibrosis, severe asthma, and chronic obstructive pulmonary disease
- Use of SSRI medications (which may be used to treat attention deficit disorder or obsessive-compulsive disorder)

Factors that may increase the risk of short stature include:
- Having family members with short stature
- Poor diet
- Certain diseases and drugs taken by a pregnant woman, which increase the risk to the newborn child

Symptoms vary depending on the type of condition. Children with familial short stature do not have any disease-related symptoms. They will often reach a height similar to that of their parents. Children who have delayed puberty, late bloomers, will often have a close relative with the same delay. These children will also eventually catch up to their peers in height.

Symptoms that may indicate a medical condition include:
- Stopped or dramatically slowed growth (below the third percentile as determined by your doctor)
- Weight loss or gain (more than five pounds in a month)
- Poor nutrition
- Loss of appetite
- Chronic abdominal pain and diarrhea
- Persistent fever
- Chronic headaches and/or vomiting
- Delayed puberty (no menstruation by age 15 for a girl, or no enlargement of the testes by age 14-15 for a boy)
- Obstructive sleep apnea

Your child's doctor will ask about symptoms and medical history. A physical exam will be done. Your child's height, weight, and body proportions will be measured. The skull and facial features will also be examined. Some tests may be done to diagnose or exclude contributing conditions.
These tests may include:
- Bone age: an x-ray to determine the skeletal age of your child's bones, for comparison with chronological age
- Tests for hypothyroidism—low levels of thyroid hormone
- Tests of growth hormone levels—an important factor in growth
- Tests for signs of conditions that may cause short stature, such as respiratory problems, malnutrition, and liver disease
- A complete blood count to check for blood diseases
- A genetic exam to detect chromosomal abnormalities and to exclude Turner syndrome (a common cause of short stature in girls)
- Urinalysis—examination of the urine to look for conditions such as kidney disease

Children with familial short stature do not require treatment. For others, treatment will focus on the cause of the short stature. Treatments can vary greatly but may include medication or nutritional changes.

Medications that may be used to treat associated conditions include:
- Thyroid hormone replacement therapy—may be used in children with hypothyroidism
- Growth hormone replacement—may be used in children with growth hormone deficiency, Prader-Willi syndrome, or Turner syndrome

If a medication is associated with short stature, your doctor may stop the medication. Make sure to talk to your doctor before stopping any medication.

Malnutrition can contribute to short stature. It may be due to a lack of proper food or to other conditions, such as gastrointestinal problems. In either case, a change in diet may help. Talk to your doctor or dietitian to help make effective changes to your child's diet.

Short stature cannot be prevented in children who have familial short stature or a chronic disease. In some cases, you can minimize your child's risk of developing short stature by making sure the child eats a nutritious diet. Parents can minimize the risk of short stature in their children by eating a nutritious diet during pregnancy.

Reviewer: Michael Woods. Review date: 09/2012.
Bioethics 101 provides a systematic, five-lesson introductory course to support educators in incorporating bioethics into the classroom through the use of sequential, day-to-day lesson plans. This curriculum is designed to help science teachers guide their students to analyze issues using scientific facts, ethical principles, and reasoned judgment. These lessons represent a "best of" compilation from our popular Ethics Primer. Through the use of case studies, ethical principles, decision-making frameworks and stakeholder role-play, students are fully supported in learning how to justify an answer to an ethical question. If you've been looking for a structured way to introduce bioethics into your classroom, this resource is for you!

In order for us to measure how our curriculum resources are being used, please take a moment to contact us and let us know the class or classes in which you're using our lessons. We also welcome feedback about our Bioethics 101 curriculum. We will not share your contact information with anyone.

Complete Lesson Plans
- Lesson 1--Introduction to Bioethics: NWABR_Bioethics_101_Lesson1.pdf
- Lesson 2--Principles of Bioethics: NWABR_Bioethics_101_Lesson2_0.pdf
- Lesson 3--Finding the Stakeholders: NWABR_Bioethics_101_Lesson3.pdf
- Lesson 4--Making a Strong Justification: NWABR_Bioethics_101_Lesson4.pdf
- Lesson 5--Putting it All Together: NWABR_Bioethics_101_Lesson5.pdf
Early astronomy concentrated on finding accurate positions of the stars and planets. This was due in part to the influence of astrology, but later, accurate positions came to be important for determining the physical characteristics of the stars and planets. Accurate positions for the stars were also crucial for commercial and military navigation (navigation by the stars has only recently been replaced by the use of satellite systems such as the Global Positioning System). But probably of more importance to you is where to point your telescope or binoculars to find that cool object talked about in the newspaper or astronomy magazine.

There are a couple of popular ways of specifying the location of a celestial object. The first is what you would probably use to point out a star to your friend: the altitude-azimuth system. The altitude of a star is how many degrees above the horizon it is (anywhere from 0 to 90 degrees). The azimuth of a star is how many degrees along the horizon it is and corresponds to the compass direction. Azimuth starts from exactly North = 0 degrees azimuth and increases clockwise: exactly East = 90 degrees, exactly South = 180 degrees, exactly West = 270 degrees, and exactly North = 360 degrees = 0 degrees. For example, a star in the southwest could have an azimuth between 180 degrees and 270 degrees. Since stars change their position with respect to your horizon throughout the night, their altitude-azimuth position changes. Also, observers at different locations looking at the same star at the same time will see it at a different altitude-azimuth position. A concise summary of this coordinate system and the numbers involved is given at the end of this section.

The second way of specifying star positions is the equatorial coordinate system. This system is very similar to the longitude-latitude system used to specify positions on the Earth's surface. This system is fixed with respect to the stars so, unlike the altitude-azimuth system, a star's position does not depend on the observer's location or time. Because of this, astronomers prefer using this system. You will find this system used in astronomy magazines and in most sky simulation computer software.

The lines on a map of the Earth that run north-south are lines of longitude and, when projected onto the sky, they become lines of right ascension. Because the stars were used to measure time, right ascension (RA) is measured in terms of hours, minutes, and seconds instead of degrees, and it increases in an easterly direction. For two stars one hour of RA apart, you will see one star cross your meridian one hour of time before the other. If the stars are not circumpolar, you will see one star rise one hour before the other. If they were 30 minutes of RA apart, you would see one rise half an hour before the other and cross your meridian half an hour before the other. Zero RA is where the Sun crosses the celestial equator at the vernal equinox. The full 360 degrees of the Earth's rotation is broken up into 24 hours, so one hour of RA = 15 degrees of rotation. The lines of RA all converge at the celestial poles, so two stars one hour of RA apart will not necessarily be 15 degrees apart in angular separation on the sky (only if they are on the celestial equator will they be 15 degrees apart).

The lines on a map of the Earth that run east-west parallel to the equator are lines of latitude and, when projected onto the sky, they become lines of declination.
Like the latitude lines on Earth, declination (dec) is measured in degrees away from the celestial equator: positive degrees for objects north of the celestial equator and negative degrees for objects south of the celestial equator. Objects on the celestial equator are at 0 degrees dec, objects half-way to the NCP are at +45 degrees, objects at the NCP are at +90 degrees, and objects at the SCP are at -90 degrees. Polaris's position is at RA 2hr 31min and dec 89 degrees 15 arc minutes. A concise summary of this coordinate system and the numbers involved is given at the end of this section.

The Basic Coordinates module of the University of Nebraska-Lincoln's Astronomy Education program provides a great way to make the connection between terrestrial coordinates (longitude and latitude) and the equatorial coordinate system (link will appear in a new window). The first part of the module has you drag a cursor around on a flat world map or globe and read off its terrestrial coordinate position. The second part of the module has you do the same sort of thing using a flat map of the sky or a globe of the celestial sphere and read off the right ascension and declination. Both parts also illustrate the distortion that happens when you project a curved spherical surface onto a flat two-dimensional map. The UNL Astronomy Education's Rotating Sky module has you explore the connection between the two coordinate systems. You can change your location on the Earth and adjust the position of multiple stars, and see where the stars would appear and how they would move on the celestial sphere and around your position on the Earth as the Earth rotates beneath the stars.

An effect called precession causes the Sun's vernal equinox point to slowly shift westward over time, so a star's RA and dec will slowly change by about 1.4 degrees every century (a fact ignored by astrologers), or about a 1-minute increase in a star's RA every twenty years. This is caused by the gravitational pulls of the Sun and Moon on the Earth's equatorial bulge (from the Earth's rapid rotation) in an effort to reduce the tilt of the Earth's axis with respect to the ecliptic and the plane of the Moon's orbit around the Earth (which is itself slightly tipped with respect to the ecliptic). Like the slow wobble of a rapidly spinning top, the Earth responds to the gravitational tugs of the Sun and Moon by slowly wobbling its rotation axis with a period of 26,000 years.

This motion was first recorded by Hipparchus in 100 B.C.E., who noticed differences between ancient Babylonian observations and his own. When the Babylonians were the world power in 2000 B.C.E., the vernal equinox was in the constellation Aries and the star Thuban (in Draco) was the closest bright star to the NCP. At the time of Jesus Christ, the vernal equinox had shifted to the constellation Pisces and the star Kochab (in the bowl of the Little Dipper) was the closest bright star to the NCP. Now the star Polaris is close to the NCP, and the vernal equinox is close to the border between Pisces and Aquarius (in 2600 C.E. it will officially be in Aquarius), which is what a popular song of years ago refers to with the line "this is the dawning of the Age of Aquarius". In the year 10,000 C.E., the bright star in the tail of Cygnus, Deneb, will be the pole star, and Vega (in Lyra) will get its turn by the year 14,000 C.E. Horoscopes today are still based on the 4,000-year-old Babylonian system, so even though the Sun is in Aries on my birthday, the zodiac sign used for my horoscope is Taurus.
I guess it's hard to keep up with all of the changes in the modern world!
<urn:uuid:696f44fe-c03c-46d3-814b-19aea0a1b468>
CC-MAIN-2013-20
http://www.astronomynotes.com/nakedeye/s6.htm
2013-05-24T22:28:46
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705195219/warc/CC-MAIN-20130516115315-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.927318
1,571
4.25
4
Make Clay Idioms a "Piece of Cake"! "That test was a piece of cake!" "It's clean as a whistle!" Idioms, well-known words or phrases that have figurative meanings different from their literal ones, can be found everywhere from the books we read to our everyday conversations. That’s why it’s important that your child understands the figure of speech he’s using. Luckily, an idiom is more than just an expression. It is also the inspiration for this fun, hands-on art activity! In this activity, your child will use modeling clay to represent the literal meaning of an idiom which can then be compared to how we use the phrase when we talk or write. To complete this activity, your child will brainstorm all the idioms he knows and think creatively about how to represent his favorite one using clay. Above all, he’ll discover that practicing English grammar can be fun! What You Need: - Sheet of paper - Modeling clay (can be found at any craft store) - Internet access (only if you need help brainstorming)
<urn:uuid:a1f25fa0-a4d5-430f-9f49-846903aef8de>
CC-MAIN-2013-20
http://www.education.com/activity/article/create-clay-idioms/
2013-05-24T23:02:45
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705195219/warc/CC-MAIN-20130516115315-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.923252
234
4.5
4
Is Clean Coal Finally a Reality? Combustion is the main mechanism used to harness energy from coal. All existing coal-burning processes consume oxygen to produce heat. The downside, however, is that combustion also produces a large amount of pollutants, such as nitrogen and sulfur oxides, which are difficult to contain and are harmful to the environment. OSU researchers found a way to harness the energy from coal through what they call Coal-Direct Chemical Looping (CDCL). CDCL uses tiny iron oxide beads to carry oxygen that drives the chemical reaction with coal, which is ground into a fine powder. This mixture is then heated to high temperatures, where the materials react with each other. Carbon from the coal binds with the oxygen from the iron oxide to produce heat and almost pure carbon dioxide, which rises to the top of the chamber, where it is captured. The excess heat harvested in this process produces water vapor to power steam turbines that generate electricity. Researchers reported that each unit can produce about 25 thermal kilowatts. The pure carbon dioxide is separated and recycled; the iron beads are exposed to air inside the reactor and re-oxidized, allowing them to be regenerated almost indefinitely; and the coal ash is removed and disposed of safely. Coal-Direct Chemical Looping exceeds all the goals that the Department of Energy (DOE) has set in place for the development of clean energy from coal. Based on current tests, the team at Ohio State University is confident that it will continue to exceed the requirements set by the DOE. OSU is preparing its larger-scale pilot plant, which is under construction at the U.S. Department of Energy's National Carbon Capture Center in Wilsonville, AL. Set to begin operations in late 2013, the plant will produce up to 250 kilowatts using CDCL. The Department of Energy funded this research in collaboration with private-sector companies.
<urn:uuid:ca206a2c-6be6-4934-ac05-76288872dacb>
CC-MAIN-2013-20
http://www.enn.com/top_stories/article/45612
2013-05-24T22:36:52
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705195219/warc/CC-MAIN-20130516115315-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.946245
391
4.3125
4
The first half of the nineteenth century in England was much like contemporary America: It was a country strangled by bureaucratic regulations. Many people were always hungry, not because of poverty-level wages, but because the price of grain for bread was kept artificially high by laws which simultaneously prevented the importation of foreign grain and subsidized domestic producers. Food riots, domestic unrest, and a stagnating economy were not sufficiently frightening to make the government eliminate these barriers. In the midst of all this lived a successful young Manchester textile manufacturer named Richard Cobden (1804-1865). He saw the social injustice, and it made him furious. He was determined to change it, and he did. As a result, the world owes the existence of the free market to him. Cobden demonstrated methods that we can use to break down our own protectionist fair trade laws and massive food subsidies. Richard Cobden began his public life by leaving his calico printing company to his brother. He received a portion of the profits, which allowed him to devote full time to the cause of free trade. It seemed an impossible task. Yet, seven years later, England had undergone a revolutionary economic, political, and social change. Taxes on grain had been decimated. Unequaled prosperity flooded England. For the next 85 years Britain maintained world economic leadership, and the rallying cry of free trade became much more than an economic slogan. Free trade denoted the philosophy of limited government, social justice, and freedom. Cobden understood the moral truths behind unregulated commerce. Breaking down barriers to trading freedom broke down class barriers and obstacles to civil rights. It reduced military expansion, since a powerful navy was a legacy from the old mercantile idea that warships protected trade between colonies and other controlled markets. The Corn Laws Protectionist tariffs were called Corn Laws. They restricted the free flow of corn, wheat, barley, and oats between Great Britain and foreign countries to shield the British farmer from competition. Systematic government interference in grain production began in the 1660s. The amended Corn Law of 1774, which controlled legislation for the next half century, is a typical example: when the domestic price of corn, as paid to the farmer by the baker or dealer, fell below £2.4 a quarter (28 pounds), the farmer was encouraged to sell his products abroad, to prevent the market price from falling still further. He was given a bounty of five shillings for each quarter exported. When corn sold for £2.8, export was forbidden. At prices between these levels, there was a duty of six pence a quarter. Over time, this system became progressively more bureaucratized, with elaborate regulations specifying how and in what town the price was to be measured, with specific procedures for reporting and allowances for regional differences.1 The Corn Laws displayed another characteristic of government controls: Regulations and subsidies in one area led to the manipulation of tangential areas. In this case, when bad harvests triggered soaring grain and bread prices, the Corn Law mechanism exacerbated the problem, causing still higher prices. This provoked civil disturbances to the point where the government feared insurrection. To defuse the threat, workers' wages were subsidized, relative to the price of bread. This subsidy came from the Poor Rates, the British nineteenth-century welfare system.
This greatly expanded state entitlement programs, leading to massive fraud, inequities, and even greater civil unrest. The Corn Laws are not merely things of the past. Their spirit exists in most countries of the world. In the U.S. today, agricultural products are subsidized and stored, to the tune of tens of billions of dollars annually, to keep the price of food artificially high. This enhances the farmer's income, but it also prevents the poor from eating as they should. This has led, as in nineteenth-century England, to protectionism, international tensions, and the threat of trade wars. Richard Cobden: Businessman to Pamphleteer Cobden was born in Dunford, West Sussex, in 1804. Because of a succession of family business failures, his father could not support young Richard. He went to live with an uncle who trained him to be a clerk in his London warehouse. At twenty-one Cobden became a traveling salesman. He was so successful that in 1831 he went out on his own and took over the calico printing company in Manchester. Manchester was the world's first great industrial city. It was viewed as the metropolis of the future. Alexis de Tocqueville best explained the paradox of Manchester: "From this foul drain the greatest stream of human industry flows out to fertilize the whole world. From this filthy sewer pure gold flows. Here humanity attains its most complete development and its most brutish; here civilization works its miracles, and civilized man is turned back almost into a savage."2 In Manchester Cobden had his first lesson as to what free trade meant. As he assumed ownership of the company, the protective tariff on calicos was repealed, making it possible to export them competitively. This opened up vast new markets that could not exist before, allowing Cobden to develop a new kind of international selling strategy. Cobden introduced a new mode of business. The custom of the calico trade at that period was to print a few designs, and watch cautiously and carefully those which were most acceptable to the public, when larger quantities of those which seemed to be preferred would be printed off and offered to the retail dealer. "Cobden and his partners did not follow the cautious and slow policy of their predecessors, but fixing themselves upon the best designs, they had those printed off at once and pushed the sale energetically throughout the country. Those pieces which failed to take in the home market were at once shipped to other countries and the consequence was that the associated firms became very prosperous."3 Yet, at the height of his achievements, Cobden's interest in calico waned. He was eager to pursue other courses. By 1835 he wrote his first political pamphlets. One, called Russia (describing the threat of Russia against the decaying Turkish Empire), contained the core of his mature thought: "It is labor improvements and discoveries that confer the greatest strength upon a people. By these alone and not by the sword of the conqueror, can nations in modern and all future times hope to rise to power and grandeur."4 Cobden wrote that England's rulers inhibited discovery and improvements by wasting millions on the military. His favorite target was Britain's obsession with the doctrine of the balance of power. He saw it as a source of conflict, not stability. "Empires have arisen unbidden by us; others have departed despite our utmost efforts to preserve them."5 Cobden's ideas were not idealistic dreams.
The United States' industrial strength had revolutionized the world economy and political equilibrium. Cobden: "The new world is destined to become the arbiter of the commercial policy of the old."6 Already the need to trade with America had compelled Britain to abandon many regulations governing colonial commerce. Since free trade and military non-intervention were the same to Cobden, he pleaded for Britain to abandon the past and repeal protectionism. This would make Britain "turn moralist, in the end, in self-defense."7 Manchester Incorporation: Prelude to Repeal Cobden's pamphlets attracted the attention of the editor of the Manchester Times, Archibald Prentice, who asked him to speak on free trade issues. This led to Cobden's being elected to the Manchester Chamber of Commerce. Here he met two men who would influence his thinking and direction: John Benjamin "Corn Law" Smith and John Bright. Smith's nickname was due to his years of single-handedly fighting for Corn Law repeal, long before it became a major topic. It was Smith who converted Cobden to total repeal, not just incremental reductions. John Bright became Cobden's chief lieutenant in the long war for repeal. Bright's speaking tours around the country were a great factor in victory. Cobden used the Chamber of Commerce as a vehicle for focusing public issues. The first political problem he tackled was the incorporation of Manchester. Like many of England's new industrial cities, Manchester had no borough (an urban political administrative area) charter. Its government was manorial, with the power of a small town, instead of one of England's largest urban centers. In 1837 Cobden led the battle for a charter. One factor in winning was that he fought for it as if it were a national issue. His pamphlet, Incorporate Your Borough, portrayed the struggle as one of democracy versus privilege, the rights of the productive classes against the rapacious aristocracy. He showed that the nobility's gerrymandering of counties forced the middle and working classes to be their vassals. Incorporation required a petition of taxpayers. There was powerful opposition from the upper-class Tories. To counter this, Cobden focused on the "shopocracy," the smaller merchants and manufacturers, for petition signatures. Then, using electoral registers, the Incorporationists sent a circular to all parliamentary electors who supported reform causes, to aid them by filling seats at public meetings. They did, and incorporation passed despite the fact that the Tories had three times as many signatures. Cobden made a name-by-name check of the opposition petition and found that 70 percent were invalid. With incorporation, Cobden was elected to his first public offices: borough councilor and alderman.8 The Manchester League: Fighting for Free Trade Cobden now set his sights on an ambitious national goal that had previously proved impossible to attain: repeal of the Corn Laws. In 1838 the Manchester Anti-Corn Law Association (later, the Manchester League) was created. Cobden saw repeal as the greatest single battle of his time. It would unite workers, farmers, and commercial interests against privilege to radically alter the political power structure of the country. The League's initial goal was to educate the public. Lecturers went all around England, giving free trade conferences. At this stage, political pressure did not seem necessary. But the League did have an ally in Parliament: Charles Villiers.
For years he had unsuccessfully tried to initiate a Corn Law repeal debate in the House of Commons, which was dominated by big landlords. However, Cobden knew that Villiers' efforts helped identify supporters at the national level. This would influence the League's strategy in the provinces. Within the first year Cobden realized that he had underestimated the Protectionists' strength. In rural areas, League meetings were disrupted by physical violence. The farmers erroneously believed that free trade would bring unemployment and depression. The Chartists, representing the urban workers, were hostile for the same reason. Cobden hoped that the League's message would convince both groups that repeal would open up new markets which would raise all wages. It required years of education for these truths finally to be perceived. This generated a strategic change: the lectures were now combined with petition drives for Parliament. Thus began overt political activism. By 1840 the Manchester League transformed itself, creating in every borough an anti-Corn Law party, or at least an effort to prevent the return of any candidate at the next election, whatever his political party may be, who supports "... the landowners' bread tax."9 This meant a more aggressive League, less compromising, less fearful of making enemies. In 1841, a major economic depression occurred. Suddenly Prime Minister Robert Peel resorted to the free trade idea of lower tariffs to stimulate the economy. This made the Corn Laws nationally significant and gave greater credibility to the League. By now the League had several members in Parliament, including Cobden. But he was a reluctant member. He did not want to be a party man, loyal and compromising. He needed to be free to harass the government. Cobden's speeches in Parliament were not influential, and this dampened League members' enthusiasm. Support dropped sharply. In all mass movements, zeal is critical. There is a constant need to exceed earlier achievements or risk dissolution. So Cobden created make-work projects like conferences and fund-raisers to keep the fervor at high pitch. By 1843, paradoxically, economic recovery made the League acceptable to the one group most antagonistic to repeal: the aristocratic landowners. When times had been bad, high prices and high subsidies compensated for the poor yields. But now, prices kept falling with increased abundance, and the Tories saw that the Corn Laws did not shore up their incomes. Cobden's speeches became more moderate. Instead of attacking the Corn Laws, he attacked the greater evils behind them: the economic woes to workingmen and farmers. The new accent was on distress, not repeal. Now he no longer seemed menacing to the Tories. Gone were the threats of the collapse of society because of high food prices. No longer did he say that the Corn Laws benefited only the rich. He appealed to the landlords themselves, showing them that protective tariffs deterred them from investing to improve their crops, thus hindering their prosperity. This wider view drew many leading Tories to the repeal side and was responsible for Robert Peel receiving a League delegation after repeatedly turning them down. This was followed by a new League political plan. All the boroughs were classified as either safe, doubtful, or hopeless. Voter registration focused on the hopeless districts. Teams of lecturers and voter canvassers fanned out and recruited thousands of new members.
Cobden's overall objective was staggering: to reach every voter with League material through the canvassers. The sheer scale of it produced more enthusiasm, more fund-raisers, more activities, but it failed and did not destroy the Protectionists. Cobden had the courage to admit he was wrong and turned around completely in mid-campaign, refocusing on the winnable boroughs. Cobden targeted 160 boroughs as winnable. The 1845 national election showed substantial gains in 112. This still wasn't sufficient to win a Parliamentary vote. League members were now thoroughly demoralized. Their tremendous work seemed futile. Then Cobden discovered a loophole in the election law, enabling the League to attack from an entirely different direction. This proved to be the key to victory. Previously Cobden had conceded the counties (the rural political districts). To win them he would have to create a vast new electorate. This seemed impossible because of the large property qualification required. Or so he thought. But a little-known law made it possible to vote in a county election if one owned a forty-shilling freehold, a small piece of property that almost anyone could afford. By promoting forty-shilling freeholds as a great real estate investment, the League greatly expanded the number of free-trade voters. Immediately the Tories retreated. They acknowledged that protectionism hindered agricultural modernization and conceded that subsidies did not stabilize corn prices. Seeing that his opponents were caving in, Cobden once again switched the mode of attack: de-emphasizing public education to put more pressure on Parliament. This forced Prime Minister Peel over to the League side, provoking a governmental crisis. He was forced to resign and his government collapsed. Repeal now seemed within reach. But the chaos compelled a Parliamentary re-organization, reflecting the revolutionary change in the balance of power that repeal represented, shifting away from the aristocrats toward the urban middle class. It appeared that the Protectionists had formed a last-ditch coalition to block repeal just when it seemed assured. League members held their breath. Repeal passed Parliament and became law.10 The Consequences of Repeal Following repeal, Richard Cobden was physically, mentally, and financially drained. He considered retiring permanently from politics. For the five years prior to repeal he saw very little of his wife and children. "My only boy is five years old ... he did not positively know me as his father, so incessantly was I upon the tramp."11 Yet Cobden felt the necessity to go on. He saw repeal as a beginning, not an end. More than prosperity, it would bring world peace. He spent the next fourteen months on a missionary tour of Europe, promoting the social benefits of trade without barriers. He wrote: "Warriors and despots are generally bad economists and they instinctively carry their ideas of force and violence into the civil politics of their governments. Free trade is a principle which recognizes the paramount importance of individual action."12 Several years later his evangelism led to the second great triumph of his political career, the Anglo-French Commercial Treaty of 1860. France was still a protectionist country, but Cobden's tour had converted important Frenchmen into free-traders. They had influenced Napoleon III. One such person was Michel Chevalier, a political economist. For centuries England and France had been military antagonists, but in the Crimean War of 1854-55 they were allies.
Through free trade there was a unique opportunity to strengthen the bonds for permanent peace. Initially there were several secret meetings in London among Chevalier, Cobden, and Gladstone, the Chancellor of the Exchequer. Then Cobden, with no official status, quietly left for Paris. He believed then, as always, that free trade would undo the national animosities kept alive by the professional diplomats and the military. "I would not step across the street just now to increase our trade, for the mere sake of commercial gain .... But to improve moral and political relations of France and England, by bringing them into greater intercourse and increased dependence, I would walk barefoot from Calais to Paris."13 Napoleon realized that he had to convince his own government about the benefits of free trade. He asked Cobden how to go about it. Cobden replied, "I told him, I would act precisely as I did in England, by dealing first with one article which was the keystone of the whole system. In England, that article was corn; in France, it was iron; that I should totally abolish and at once the duty on pig iron, and leave only a small revenue duty, if any, on bars ... this would render it much easier to deal with all the other industries, whose general complaint is that they can't compete with England owing to the high price of iron and coal."14 When the negotiations reached their critical phase, Cobden thought he would be replaced by professional diplomats. Instead he was given plenipotentiary powers and continued on his own. The agreement was signed in January 1860. Cobden died in April 1865. He was sixty years old. His legacy is enormous and remains so to this day. For eighty-five years free trade reigned as England's national policy, influencing the commercial principles of every major country in the world. Richard Cobden's idealism and passionate dream can be summed up by his statement: "I see in the free trade principle that which will act on the moral world as the principle of gravitation in the universe, drawing men together, thrusting aside the antagonisms of race, and creeds and language, and uniting us in the bonds of eternal peace.... I believe the effect will be to change the face of the world, so as to introduce a system of government entirely distinct from that which now prevails. I believe the desire and the motive for large and mighty empires and gigantic armies and great navies will die away ... when man becomes one family, and freely exchanges the fruits of his labor with his brother Man."15 1. Norman Longmate, The Breadstealers: The Fight Against the Corn Laws, 1838-1846 (New York: St. Martin's Press, 1984), pp. 3-4. 2. Alexis de Tocqueville, Journeys to England and Ireland, edited by J. P. Mayer (New Haven: Yale University Press, 1958), pp. 107-108. 3. John McGilchrist, Richard Cobden, the Apostle of Free Trade (New York: Harper & Brothers, 1865), p. 20. 4. Richard Cobden, Russia, from The Political Writings of Richard Cobden, 4th edition (London: W. Ridgway, 1901), p. 26. 5. Cobden, America, from Political Writings, p. 5. 6. Ibid., p. 21. 7. Ibid., p. 256. 8. Nicholas Edsall, Richard Cobden, Independent Radical (Cambridge: Harvard University Press, 1986), pp. 51-59. 9. Ibid., p. 85. 10. Ibid., pp. 53-153. 11. Ibid., p. 174. 12. Ibid., p. 186. 13. Ibid., p. 333. 14. Ibid., p. 334. 15. Richard Cobden, Speeches on Public Policy, by Richard Cobden, M.P., edited by John Bright and J. E. Thorold Rogers (London: Macmillan & Co., 1870), pp. 225-226.
John Chodes is a writer in New York City. Reprinted with permission from Ideas on Liberty (March 1993). © Copyright 1993, Foundation for Economic Education.
<urn:uuid:96406c2a-ce17-4236-b438-e498ee93ca9d>
CC-MAIN-2013-20
http://www.independent.org/publications/article.asp?id=1232
2013-05-24T22:39:07
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705195219/warc/CC-MAIN-20130516115315-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.97172
4,330
4.4375
4
All digital oscilloscopes measure by sampling the analog input signals and digitizing the values. When an oscilloscope samples an input signal, samples are taken at fixed intervals. At these intervals, the size of the input signal is converted to a number. The accuracy of this number depends on the resolution of the oscilloscope. The higher the resolution, the smaller the voltage steps in which the input range of the instrument is divided. The acquired numbers can be used for various purposes, e.g. to create a graph. The sine wave in the above picture is sampled at the dot positions. By connecting the adjacent samples, the original signal can be reconstructed from the samples. You can see the result in the next illustration. The rate at which samples are taken by the oscilloscope is called the sample frequency, the number of samples per second. A higher sample frequency corresponds to a shorter interval between the samples. As is visible in the picture below, with a higher sample frequency, the original signal can be reconstructed much better from the measured samples. The sample frequency must be higher than 2 times the highest frequency in the input signal; this minimum rate is known as the Nyquist rate. Theoretically it is possible to reconstruct the input signal with more than 2 samples per period. In practice, at least 10 to 20 samples per period are recommended to be able to examine the signal thoroughly in an oscilloscope. When the sample frequency is not high enough, aliasing will occur. Changing the sample frequency of an instrument in the Multi Channel software can be done in several ways. When sampling an analog signal with a certain sampling frequency, signals appear in the output with frequencies equal to the sum and difference of the signal frequency and multiples of the sampling frequency. For example, when the sampling frequency is 1000 Hz and the signal frequency is 1250 Hz, the following signal frequencies will be present in the output data:

Multiple of sampling frequency | 1250 Hz signal       | -1250 Hz signal
-1000                          | -1000 + 1250 = 250   | -1000 - 1250 = -2250
0                              | 0 + 1250 = 1250      | 0 - 1250 = -1250
1000                           | 1000 + 1250 = 2250   | 1000 - 1250 = -250
2000                           | 2000 + 1250 = 3250   | 2000 - 1250 = 750

As stated before, when sampling a signal, only frequencies lower than half the sampling frequency can be reconstructed. In this case the sampling frequency is 1000 Hz, so we can only observe signals with a frequency ranging from 0 to 500 Hz. This means that, of the resulting frequencies in the table, we can only see the 250 Hz signal in the sampled data. This signal is called an alias of the original signal. If the sampling frequency is lower than 2 times the frequency of the input signal, aliasing will occur. The following illustration shows what happens. In this picture, the green input signal (top) is a triangular signal with a frequency of 1.25 kHz. The signal is sampled with a frequency of 1 kHz. The corresponding sampling interval is 1/(1000 Hz) = 1 ms. The positions at which the signal is sampled are depicted with the blue dots. The red dotted signal (bottom) is the result of the reconstruction. The period time of this triangular signal appears to be 4 ms, which corresponds to an apparent frequency (alias) of 250 Hz (1.25 kHz - 1 kHz). In practice, to avoid aliasing, always start measuring at the highest sampling frequency and lower the sampling frequency if required. Use function keys <F3> (lower) and <F4> (higher) to change the sampling frequency in a quick and easy way.
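To make the folding arithmetic above concrete, here is a minimal Python sketch (not part of the Multi Channel software; the function name alias_frequency is invented for illustration) that computes the apparent frequency of a sampled signal:

```python
def alias_frequency(f_signal: float, f_sample: float) -> float:
    """Apparent (alias) frequency of f_signal when sampled at f_sample."""
    f = f_signal % f_sample      # fold out whole multiples of the sample frequency
    if f > f_sample / 2:         # reflect into the observable 0 .. f_sample/2 band
        f = f_sample - f
    return f

print(alias_frequency(1250, 1000))    # -> 250.0 Hz, matching the table above
print(alias_frequency(257e3, 50e3))   # -> 7000.0 Hz for the example that follows
```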
The next illustration gives an example of what aliasing can look like. In this picture, a sine wave signal with a frequency of 257 kHz is sampled at a frequency of 50 kHz. The minimum sampling frequency for correct reconstruction is 514 kHz. For proper analysis, the sampling frequency should have been approximately 5 MHz. With a given sampling frequency, the number of samples that is taken determines the duration of the measurement. This number of samples is called the record length. Increasing the record length will increase the total measuring time. The result is that more of the measured signal is visible. In the images below, three measurements are displayed: one with a record length of 12 samples, one with 24 samples and one with 36 samples. The total duration of a measurement can easily be calculated, using the sampling frequency and the record length: measurement duration in seconds = record length in samples / sampling frequency in Hz. Changing the record length of an instrument in the Multi Channel software can be done in several ways. The combination of sampling frequency and record length forms the time base of an oscilloscope. To set up the time base properly, the total measurement duration and the required time resolution have to be taken into account. There are several ways to find the required time base setting. With the required measurement duration and sampling frequency, the required number of samples can be determined: record length in samples = measurement duration in seconds * sampling frequency in Hz. With a known record length in samples and the required measurement duration, the necessary sampling frequency can be calculated: sampling frequency in Hz = record length in samples / measurement duration in seconds. In the Multi Channel software, both record length and sampling frequency can be set independently, to give the best flexibility. They can be selected from menus or with toolbar buttons, and keyboard shortcuts are also available. The Multi Channel software also provides controls to change record length and sample frequency simultaneously to specific combinations to obtain certain time/div values. When digitizing the samples, the voltage at each sample time is converted to a number. This is done by comparing the voltage with a number of levels. The resulting number is the number of the highest level that's still lower than the voltage. The number of levels is determined by the resolution. The higher the resolution, the more levels are available and the more accurately the input signal can be reconstructed. In the image below, the same signal is digitized using three different numbers of levels: 16, 32 and 64. The number of available levels is determined by the resolution: number of levels = 2^(resolution in bits). The resolutions used in the previous image are, respectively, 4 bits, 5 bits and 6 bits. The smallest detectable voltage difference depends on the resolution and the input range. This voltage can be calculated as: minimum voltage = full scale range / number of levels. In the 200 mV range, the full scale runs from -200 mV to +200 mV, so the full range is 400 mV. When a 12-bit resolution is used, there are 2^12 = 4096 levels. This results in a smallest detectable voltage step of 0.400 V / 4096 = 97.7 µV. In 16-bit resolution this step is 0.400 V / 65536 = 6.1 µV. Changing the resolution of an instrument in the Multi Channel software can be done in several ways.
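The three formulas in this section -- measurement duration, number of levels, and smallest voltage step -- are captured in the short Python sketch below (not part of the Multi Channel software; the function names are invented for illustration):

```python
def duration_seconds(record_length: int, sample_frequency_hz: float) -> float:
    """Measurement duration = record length / sampling frequency."""
    return record_length / sample_frequency_hz

def number_of_levels(resolution_bits: int) -> int:
    """Number of quantization levels = 2 ** resolution."""
    return 2 ** resolution_bits

def smallest_step_volts(full_range_v: float, resolution_bits: int) -> float:
    """Minimum detectable voltage = full scale range / number of levels."""
    return full_range_v / number_of_levels(resolution_bits)

print(duration_seconds(1000, 1e6))            # 1000 samples at 1 MHz -> 0.001 s
print(smallest_step_volts(0.400, 12) * 1e6)   # 200 mV range, 12 bits -> ~97.7 uV
print(smallest_step_volts(0.400, 16) * 1e6)   # 200 mV range, 16 bits -> ~6.1 uV
```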
<urn:uuid:16263eb3-11fd-4f64-a5f7-b879d643ecb3>
CC-MAIN-2013-20
http://www.tiepie.com/en/classroom/Measurement_basics/Digital_Data_Acquisition
2013-05-24T23:07:03
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705195219/warc/CC-MAIN-20130516115315-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.907727
1,478
4.46875
4
The burning of fossil fuels like coal, gas and oil releases particles into the atmosphere. When fossil fuels are not burned completely, they produce black carbon -- otherwise known as soot. Soot looks like a black or brown powder and though it's made up of tiny particles, it can have a big impact on climate. Black carbon stays in the atmosphere for several days to weeks and then settles out onto the ground. It can come from natural causes, as when lightning starts a forest fire, but most black carbon results from human activities: slash-and-burn land clearing, diesel engines, industrial processes that burn coal, gas and oil, and coal burning in homes. Black carbon is produced around the world, and the type of soot produced varies by region. Black carbon adds to global warming in two ways. First, when soot enters the atmosphere, it absorbs sunlight and generates heat, warming the air. Second, when soot settles on snow and ice, it makes the surface darker, so the surface absorbs more sunlight and generates heat. This warming causes more snow and ice to melt, in what can be a vicious cycle. Black carbon lowers the albedo of a surface. Scientists use the term "albedo" as an indicator of the amount of energy reflected by a surface. Albedo is measured on a scale from zero to one (or sometimes as a percent).
- Very dark colors have an albedo close to zero (or close to 0%).
- Very light colors have an albedo close to one (or close to 100%).
Soot is dark in color, so it has a low albedo and reflects only a small fraction of the Sun's energy. Forests have low albedo, near 0.15. Snow and ice, on the other hand, are very light in color. They have very high albedo, as high as 0.8 or 0.9, so they reflect most of the solar energy that gets to them, absorbing very little. The more dark surfaces there are on Earth, the less solar energy is reflected; the extra absorbed radiation means more warming. Soot makes surfaces (or the atmosphere) darker and so adds to global warming. Scientists say that black carbon emissions are the second largest factor in global warming, after carbon dioxide. Reducing black carbon is one of the fastest ways to slow global warming. Luckily, many policies have been put in place to lessen black carbon around the world, and the technology needed to do so already exists. The importance of black carbon's role in global warming has come to the forefront of the minds of many concerned citizens, and exciting steps are already being taken to address issues like making cleaner-burning cookstoves available in developing nations and improving industrial practices that produce black carbon. Reducing black carbon around the world will not only lessen global warming, but will cut down on air pollution and improve human health.
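As a worked example of the albedo figures above, here is a small Python sketch (not from the original article; the sooty-snow albedo of 0.40 is an assumed value for illustration) showing how darkening a surface changes the fraction of sunlight it absorbs:

```python
def absorbed_fraction(albedo: float) -> float:
    """Fraction of incoming solar energy absorbed: 1 - albedo."""
    return 1.0 - albedo

surfaces = {
    "fresh snow (albedo 0.85)": 0.85,
    "forest (albedo 0.15)": 0.15,
    "sooty snow (albedo 0.40, assumed)": 0.40,
}

for name, albedo in surfaces.items():
    print(f"{name}: absorbs {absorbed_fraction(albedo):.0%} of sunlight")

# Darkening snow from albedo 0.85 to 0.40 raises absorption from 15% to 60% --
# a fourfold increase, which drives the melting feedback described above.
```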
<urn:uuid:4bec3b16-8808-45c8-a741-13815cc65911>
CC-MAIN-2013-20
http://www.windows2universe.org/earth/climate/black_carbon.html
2013-05-24T22:51:07
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705195219/warc/CC-MAIN-20130516115315-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.930707
597
4.34375
4
(Click to enlarge) This image is of the solar eclipse earlier this week. Solar eclipses occur when the moon comes between the Earth and Sun. However, there's more to it than just that, otherwise we'd have a solar eclipse every ~28 days (one full lunar cycle). When viewed edge on, the plane in which the moon orbits is slightly tilted in relation to the plane the Earth and Sun lie on (hence the reason the shadow moves along a different line in the sky than the sun, intersecting only at the one point). Because of this, most of the time, when the moon is on the line between the Earth and the Sun, it is simply too high or too low to cause an eclipse. Sometimes it's between the point where it's too high or low and the point where it will completely come in front of the Sun. In this case, the moon will only cover part of the Sun and the result will be a partial eclipse, such as this one I photographed in Spring 2005. Additionally, the moon's orbit around the Earth is not perfectly circular. It is slightly elliptical. This means that at some points in its orbit, it is further away than at other points. As common experience should tell you, the further away an object is, the smaller it will look (which is why the sun appears the same size in the sky as the moon despite being millions of times bigger). Therefore, when the moon is further away, it will look smaller and may not cover the sun entirely. This is known as an annular eclipse, in which the moon will be silhouetted on the sun leaving a ring (such as in this picture). Thus this image is an extremely rare "total solar eclipse" in which the moon completely covers the full disk of the sun. But what's all that fuzzy stuff around it in the center one? That's called the corona and is essentially the sun's extended atmosphere, which is shaped by the Sun's immense magnetic field. It's actually always there, but it's extremely faint in comparison to the sun, so we can't see it unless the sun is somehow blocked out, as in the case of a total solar eclipse. It is primarily composed of the nuclei of ionized hydrogen atoms. You may also be wondering why you didn't happen to catch this eclipse given that it only happened a few days ago. The reason is that this one only happened to be visible from regions of northern Africa and the Middle East. You should now be asking yourself, "why only such a small location given that half the Earth can see the sun at any time?" The reason for this is something called parallax. In the scenario of a total eclipse, only the locations directly below the center of the moon will see the eclipse. Locations slightly further away will be viewing the event from a slightly different angle. While this wouldn't seem like it would make much of a difference, try a quick experiment. Imagine your left eye is someone standing in southern Africa and that your right is someone standing in England. Close one eye and hold your fist out in front of you and cause it to eclipse something on the other side of the room (or outside if possible, the further away the better). Make sure the object you choose is just barely covered by your fist. Now without moving your arm, change eyes. You'll notice that your fist is no longer covering the object at all.
This effect that you have just observed is precisely what happens in the case of an eclipse for different observers and is what astronomers call parallax (parallax also has many other applications in astronomy, such as directly measuring the distance to a great number of stars to extremely high precision thanks to the HIPPARCOS satellite). This quick experiment is also reasonably close to actual scale in terms of angular sizes and the relation between sizes for the earth and moon. The distances between objects and true sizes aren't even close, but those don't matter in this case. So you're probably wondering why there's the strange disjointed path. After all, we never see that. There's only one sun in the sky. This image is actually a compilation of 18 images taken ~3 minutes apart (and presumably one more to use as the beautiful background). I can say that these were taken ~3 minutes apart because of the spacing of the suns. In 24 hours, the sun makes a full 360º path around the sky. Thus, converting hours to minutes and dividing, we find that the sun moves 1º every 4 minutes. Although it doesn't seem that there's any scale marked on this image to permit me to figure out how many degrees there are between each image from which to figure out the time between images, there actually is a very easy one: the sun itself. Both the sun and the moon have an angular size of 1/2º. That means that if the little suns were butted right up against one another, it would have traveled 1/2º between images, which in turn implies that it would have been 2 minutes (4/2) between each image. Since there's a little more space, roughly 1/2 of a sun width (i.e., 1/4º), I can estimate there was approximately another minute between pictures. Thus 2 + 1 = 3. So ultimately 18 images of the sun were taken and then reassembled to produce this dramatic image. While in and of itself it is quite stunning, a closer look reveals more information than meets the eye. This concept is one I feel is important to keep in mind in the sciences. Things are not always what they seem to be at first glance. If this wasn't the driving concept behind science, we would still hold with many ridiculous ideas such as the Earth being flat, or alchemy, or perhaps more relevant today, intelligent design. Image copyright: Stefan Seip Found via: NASA Astronomy Picture of the Day Update: The original version of this post contained erroneous math, which was noted by reader Benjamin Franz in the comments. I have corrected my math here, but wanted to make sure he was given due credit.
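Both back-of-the-envelope results in this post -- the narrow eclipse track and the ~3-minute spacing of the exposures -- can be checked with a few lines of Python (a sketch, not from the original post; the Earth-Moon distance and the 10,000 km observer baseline are assumed round values):

```python
import math

# Parallax: how far the Moon appears to shift between two distant observers.
MOON_DISTANCE_KM = 384_400          # average Earth-Moon distance (assumed)
BASELINE_KM = 10_000                # e.g. southern Africa to England (assumed)
parallax_deg = math.degrees(BASELINE_KM / MOON_DISTANCE_KM)
print(f"parallax shift: {parallax_deg:.1f} degrees")   # ~1.5 deg, ~3 Moon widths

# Timing: the sky turns 360 degrees in 24 hours, i.e. 1 degree per 4 minutes.
SKY_RATE_DEG_PER_MIN = 360 / (24 * 60)
gap_deg = 0.5 + 0.25    # one Sun width plus about half a width of empty space
print(f"interval between shots: {gap_deg / SKY_RATE_DEG_PER_MIN:.0f} minutes")  # 3
```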
<urn:uuid:c439f314-1664-4d1c-830e-b989fad44188>
CC-MAIN-2013-20
http://angryastronomer.blogspot.jp/2006/04/so-i-figure-thats-enough-ranting.html?showComment=1144175280000
2013-06-19T12:40:37
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708766848/warc/CC-MAIN-20130516125246-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.969243
1,262
4.09375
4
8.6 Exceptions to the Octet Rule
- To assign a Lewis dot symbol to elements not having an octet of electrons in their compounds.
Lewis dot structures provide a simple model for rationalizing the bonding in most known compounds. However, there are three general exceptions to the octet rule: (1) molecules, such as NO, with an odd number of electrons; (2) molecules in which one or more atoms possess more than eight electrons, such as SF₆; and (3) molecules such as BCl₃, in which one or more atoms possess less than eight electrons. Odd Number of Electrons Because most molecules or ions that consist of s- and p-block elements contain even numbers of electrons, their bonding can be described using a model that assigns every electron to either a bonding pair or a lone pair. Molecules or ions containing d-block elements frequently contain an odd number of electrons, and their bonding cannot adequately be described using the simple approach we have developed so far. Bonding in these compounds will be discussed in Chapter 23. There are, however, a few molecules containing only p-block elements that have an odd number of electrons. Some important examples are nitric oxide (NO), whose biochemical importance was described in earlier chapters; nitrogen dioxide (NO₂), an oxidizing agent in rocket propulsion; and chlorine dioxide (ClO₂), which is used in water purification plants. Consider NO, for example. With 5 + 6 = 11 valence electrons, there is no way to draw a Lewis structure that gives each atom an octet of electrons. Molecules such as NO, NO₂, and ClO₂ require a more sophisticated treatment of bonding, which will be developed in Chapter 9 "Molecular Geometry and Covalent Bonding Models". More Than an Octet of Electrons The most common exception to the octet rule is a molecule or an ion with at least one atom that possesses more than an octet of electrons. Such compounds are found for elements of period 3 and beyond. Examples from the p-block elements include SF₆, a substance used by the electric power industry to insulate high-voltage lines, and the SO₄²⁻ and PO₄³⁻ ions. Let's look at sulfur hexafluoride (SF₆), whose Lewis structure must accommodate a total of 48 valence electrons [6 + (6 × 7) = 48]. If we arrange the atoms and electrons symmetrically, we obtain a structure with six bonds to sulfur; that is, it is six-coordinate.
Each fluorine atom has an octet, but the sulfur atom has 12 electrons surrounding it rather than 8. (The third step in our procedure for writing Lewis electron structures, in which we place an electron pair between each pair of bonded atoms, requires that an atom have more than 8 electrons whenever it is bonded to more than 4 other atoms.) The octet rule is based on the fact that each valence orbital (typically, one ns and three np orbitals) can accommodate only two electrons. To accommodate more than eight electrons, sulfur must be using not only the ns and np valence orbitals but additional orbitals as well. Sulfur has an [Ne]3s²3p⁴3d⁰ electron configuration, so in principle it could accommodate more than eight valence electrons by using one or more d orbitals. Thus species such as SF₆ are often called expanded-valence molecules, compounds with more than an octet of electrons around an atom. Whether or not such compounds really do use d orbitals in bonding is controversial, but this model explains why compounds exist with more than an octet of electrons around an atom. There is no correlation between the stability of a molecule or an ion and whether or not it has an expanded valence shell. Some species with expanded valences, such as PF₅, are highly reactive, whereas others, such as SF₆, are very unreactive. In fact, SF₆ is so inert that it has many commercial applications. In addition to its use as an electrical insulator, it is used as the coolant in some nuclear power plants, and it is the pressurizing gas in "unpressurized" tennis balls. An expanded valence shell is often written for oxoanions of the heavier p-block elements, such as sulfate (SO₄²⁻) and phosphate (PO₄³⁻). Sulfate, for example, has a total of 32 valence electrons [6 + (4 × 6) + 2]. If we use a single pair of electrons to connect the sulfur and each oxygen, we obtain the four-coordinate Lewis structure (a). We know that sulfur can accommodate more than eight electrons by using its empty valence d orbitals, just as in SF₆. An alternative structure (b) can be written with S=O double bonds, making the sulfur again six-coordinate. We can draw five other resonance structures equivalent to (b) that vary only in the arrangement of the single and double bonds. In fact, experimental data show that the S-to-O bonds in the SO₄²⁻ ion are intermediate in length between single and double bonds, as expected for a system whose resonance structures all contain two S–O single bonds and two S=O double bonds. When calculating the formal charges on structures (a) and (b), we see that the S atom in (a) has a formal charge of +2, whereas the S atom in (b) has a formal charge of 0. Thus by using an expanded octet, a +2 formal charge on S can be eliminated. Note the Pattern: In oxoanions of the heavier p-block elements, the central atom often has an expanded valence shell. Less Than an Octet of Electrons Molecules with atoms that possess less than an octet of electrons generally contain the lighter s- and p-block elements, especially beryllium, typically with just four electrons around the central atom, and boron, typically with six. One example, boron trichloride (BCl₃), is used to produce fibers for reinforcing high-tech tennis rackets and golf clubs. The compound has 24 valence electrons and the following Lewis structure: The boron atom has only six valence electrons, while each chlorine atom has eight.
A reasonable solution might be to use a lone pair from one of the chlorine atoms to form a B-to-Cl double bond. This resonance structure, however, results in a formal charge of +1 on the doubly bonded Cl atom and −1 on the B atom. The high electronegativity of Cl makes this separation of charge unlikely and suggests that this is not the most important resonance structure for BCl₃. This conclusion is shown to be valid based on the three equivalent B–Cl bond lengths of 173 pm that have no double bond character. Electron-deficient compounds such as BCl₃ have a strong tendency to gain an additional pair of electrons by reacting with species with a lone pair of electrons. Note the Pattern: Molecules with atoms that have fewer than an octet of electrons generally contain the lighter s- and p-block elements. Note the Pattern: Electron-deficient compounds have a strong tendency to gain electrons in their reactions. Example: Draw Lewis dot structures for each compound.
- BeCl₂ gas, a compound used to produce beryllium, which in turn is used to produce structural materials for missiles and communication satellites
- SF₄, a compound that reacts violently with water
Include resonance structures where appropriate. Given: two compounds. Asked for: Lewis electron structures. A Use the procedure given earlier to write a Lewis electron structure for each compound. If necessary, place any remaining valence electrons on the element most likely to be able to accommodate more than an octet. B After all the valence electrons have been placed, decide whether you have drawn an acceptable Lewis structure. A Because it is the least electronegative element, Be is the central atom. The molecule has 16 valence electrons (2 from Be and 7 from each Cl). Drawing two Be–Cl bonds and placing three lone pairs on each Cl gives the following structure: B Although this arrangement gives beryllium only 4 electrons, it is an acceptable Lewis structure for BeCl₂. Beryllium is known to form compounds in which it is surrounded by less than an octet of electrons. A Sulfur is the central atom because it is less electronegative than fluorine. The molecule has 34 valence electrons (6 from S and 7 from each F). The S–F bonds use 8 electrons, and another 24 are placed around the F atoms. The only place to put the remaining 2 electrons is on the sulfur, giving sulfur 10 valence electrons. B Sulfur can accommodate more than an octet, so this is an acceptable Lewis structure. Exercise: Draw Lewis dot structures for XeF₄. Summary: Molecules with an odd number of electrons are relatively rare in the s and p blocks but rather common among the d- and f-block elements. Compounds with more than an octet of electrons around an atom are called expanded-valence molecules. One model to explain their existence uses one or more d orbitals in bonding in addition to the valence ns and np orbitals. Such species are known only for atoms in period 3 or below, which contain nd subshells in their valence shell.
- General exceptions to the octet rule include molecules that have an odd number of electrons and molecules in which one or more atoms possess more or fewer than eight electrons.
What regions of the periodic table contain elements that frequently form molecules with an odd number of electrons? Explain your answer. How can atoms expand their valence shell? What is the relationship between an expanded valence shell and the stability of an ion or a molecule? What elements are known to form compounds with less than an octet of electrons? Why do electron-deficient compounds form?
List three elements that form compounds that do not obey the octet rule. Describe the factors that are responsible for the stability of these compounds. What is the major weakness of the Lewis system in predicting the electron structures of PCl₆⁻ and other species containing atoms from period 3 and beyond? The compound aluminum trichloride consists of Al₂Cl₆ molecules with the following structure (lone pairs of electrons removed for clarity): Does this structure satisfy the octet rule? What is the formal charge on each atom? Given the chemical similarity between aluminum and boron, what is a plausible explanation for the fact that aluminum trichloride forms a dimeric structure rather than the monomeric trigonal planar structure of BCl₃? Draw Lewis electron structures for ClO₄⁻, IF₅, SeCl₄, and SbF₅. Draw Lewis electron structures for ICl₃, Cl₃PO, Cl₂SO, and AsF₆⁻. Draw plausible Lewis structures for the phosphate ion, including resonance structures. What is the formal charge on each atom in your structures? Draw an acceptable Lewis structure for PCl₅, a compound used in manufacturing a form of cellulose. What is the formal charge of the central atom? What is the oxidation number of the central atom? Using Lewis structures, draw all of the resonance structures for the BrO₃⁻ ion. Draw an acceptable Lewis structure for xenon trioxide (XeO₃), including all resonance structures. Answers: ClO₄⁻ (one of four equivalent resonance structures). The formal charge on phosphorus is 0, while three oxygen atoms have a formal charge of −1 and one has a formal charge of zero.
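The electron bookkeeping used throughout this section reduces to two formulas: total valence electrons = (sum of the atoms' valence electrons) − (charge of the ion), and formal charge = valence electrons − nonbonding electrons − ½ × bonding electrons. A minimal Python sketch (not from the textbook; the small valence table covers only the elements used here) makes the counting explicit:

```python
VALENCE = {"B": 3, "C": 4, "N": 5, "O": 6, "F": 7, "P": 5,
           "S": 6, "Cl": 7, "Be": 2, "Xe": 8}

def valence_electrons(atoms: dict, charge: int = 0) -> int:
    """Total valence electrons for a molecule or ion."""
    return sum(VALENCE[el] * count for el, count in atoms.items()) - charge

print(valence_electrons({"S": 1, "F": 6}))             # SF6     -> 48
print(valence_electrons({"S": 1, "O": 4}, charge=-2))  # SO4(2-) -> 32

def formal_charge(valence: int, nonbonding: int, bonding: int) -> int:
    """Formal charge = valence - nonbonding - bonding / 2."""
    return valence - nonbonding - bonding // 2

# Sulfur in sulfate: structure (a) has 4 single bonds (8 bonding electrons);
# structure (b) has 2 single + 2 double bonds (12 bonding electrons).
print(formal_charge(6, 0, 8))    # structure (a) -> +2
print(formal_charge(6, 0, 12))   # structure (b) -> 0
```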
<urn:uuid:d4807bf0-7776-4f95-9ce8-0987ba67ee2e>
CC-MAIN-2013-20
http://catalog.flatworldknowledge.com/bookhub/reader/4309?e=averill_1.0-ch08_s06
2013-06-19T12:47:25
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708766848/warc/CC-MAIN-20130516125246-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.915726
2,692
4.3125
4
Researchers in Spain and Norway reported in the journal Nature that they had found tree-like growth rings on the bones of mammals, a characteristic that until now was thought to be limited to cold-blooded creatures and dinosaurs. They also found evidence that dinosaurs probably had a high metabolic rate that allowed fast growth, another indicator of warm-bloodedness. "Our results strongly suggest that dinosaurs were warm-blooded," lead author Meike Koehler of Spain's Institut Catala de Paleontologia told AFP. If so, the findings should prompt a rethink about reptiles, she said. Modern-day reptiles are cold-blooded, meaning they cannot control their body temperature through their own metabolism, relying instead on outside means such as basking in the sun. While the dinosaurs may have been warm-blooded, their other characteristics place them squarely in the reptile camp, said Koehler. Paleontologists have long noted the ring-like markings on the bones of cold-blooded creatures and dinosaurs, and taken them to indicate pauses in growth, perhaps due to cold periods or lack of food. The bones of warm-blooded animals such as birds and mammals had never been properly assessed to see if they, too, display the lines. Koehler and her team found the rings in all 41 warm-blooded animal species they studied, including antelopes, deer and giraffes. The finding "eliminates the strongest argument that exists for cold-bloodedness" in dinosaurs, she said. The team's analysis of bone tissue also showed that the fast growth rate of mammals is linked to a high metabolism, which in turn is characteristic of warm-bloodedness. "If you compare this tissue with dinosaur tissue you will see that they are identical," said Koehler. "So this means that dinosaurs not only grew very fast but this growth was sustained by a very high metabolic rate, indicative of warm-bloodedness." A comment by University of California palaeontologist Kevin Padian that was published with the paper said the study was the latest to chip away at the long-held theory that dinosaurs were cold-blooded. "It seems that these were anything but typical reptiles, and Koehler and colleagues' findings remove another false association from this picture."
Microprocessor Design/Wire Wrap

Historically, most of the early CPUs were built by attaching integrated circuits (ICs) to circuit boards and wiring them up. Nowadays, it's much faster to design and implement a new CPU in an FPGA -- the result will probably run faster and use less power than anything spread out over multiple ICs. However, some people still design and build CPUs the old-fashioned way. Such a CPU is sometimes called a "home brew CPU" or a "home built CPU". Some people feel that physically constructing a CPU in this way, because it allows students to probe its inner workings, helps them "touch the magic" and learn and understand the underlying electronics and hardware.

A homebrew CPU is a central processing unit constructed using a number of simple integrated circuits, usually from the 7400 series. When planning such a CPU, the designer must consider not only the hardware of the device but also the instructions the CPU will have, how they will operate, the bit patterns for each one, and their mnemonics. Before the existence of computer-based circuit simulation, many commercial processors from manufacturers such as Motorola were first constructed and tested using discrete logic (see Motorola 6809). Although no limit exists on data bus sizes when constructing such a CPU, the number of components required to complete a design grows rapidly as the bus gets wider. Common physical data bus sizes are 1 bit, 4 bits, 8 bits, and 16 bits, although incomplete design documents exist for a 40-bit CPU.

A microcoded CPU may be able to present a significantly different instruction set to the application programmer than the one directly supported by the hardware used to implement it. For example, the 68000 presented a 32-bit instruction set to the application programmer -- a 32-bit "add" was a single instruction -- even though internally it was implemented with 16-bit ALUs. Similarly, serial computers, even though they do calculations one bit per clock cycle, present an instruction set that deals with much wider words -- often 12 bits (PDP-14) or 24 bits (D-17B), or even wider, such as 39 bits (Elliott 803).

Notable Homebrew CPUs

The Magic-1 is a CPU with an 8-bit data bus and a 16-bit address bus running at roughly 3.75 to 4.09 MHz. The Mark I FORTH also has an 8-bit data bus and a 16-bit address bus, but runs at 1 MHz. The V1648CPU is a CPU with a 16-bit data bus and a 48-bit address bus that is currently being designed. APOLLO181 is a homemade didactic 4-bit processor made of TTL logic and bipolar memories, based upon the Bugbook I and II chips, in particular the 74181 (by Gianluca G., Italy, May 2012).

Practically all CPU designs include several 3-state buses -- an "address bus", a "data bus", and various internal buses. A 3-state bus is functionally the same as a multiplexer. However, there is no physical part you can point to and say "that is the multiplexer" in a 3-state bus; it's a pattern of activity shared among many parts. The only reason to use a 3-state bus is that it can require fewer chips, or fewer and shorter wires, than an equivalent multiplexer arrangement. When you want to select between very few pieces of data that are close together, and most of that data is stored on chips that only have 2-state outputs, it may require fewer chips and less wiring to use actual multiplexer chips.
When you want to select between many pieces of data (one of many registers, or one of many memory chips, etc.), or when many of the chips holding that data already have 3-state outputs, it usually requires fewer chips to use a 3-state bus (even counting the "extra" 3-state buffer between the bus and each thing that doesn't already have 3-state outputs). A typical register file connected to a 3-state 16-bit bus on a TTL CPU includes:
- octal 2-state output registers (such as 74x273), 2 chips per 16-bit register
- octal 3-state non-inverting buffers (such as 74x241), 2 chips per 16-bit register per bus
- a demultiplexer with N inputs (driven by microcode) and 2^N output wires that select the 3-state buffers of one of up to 2^N possible things that can drive the bus, 1 chip per bus.

Later we discuss other shortcuts that may require fewer chips.

Like many historically important commercial computers, many home-brew CPUs use some version of the 74181, the first complete ALU on a single chip. (Versions of the 74181 include the 74F181, the 40181, the 74AS181, the 74LS181, the 74HCT181, etc.) The 74181 is a 4-bit-wide ALU that can perform all the traditional add / subtract / decrement operations with or without carry, as well as AND / NAND, OR / NOR, XOR, and shift. A typical home-brew CPU uses 4 of these 74181 chips to build an ALU that can handle 16 bits at once.

The simplest home-brew CPUs have only one ALU, which at different times is used to increment the program counter, do arithmetic on data, do logic operations on data, and calculate addresses from base+offset. Some people who build TTL CPUs attempt to "save chips" by building that one ALU narrower than the largest word size (which is often 16 bits in TTL computers). For example, the earliest Data General Nova computers used a single 74181 and processed all data 4 bits at a time. Unfortunately, this adds complexity elsewhere, and may actually increase the total number of chips needed.

The simplest 16-bit TTL ALU wires the carry-out of each 74181 chip to the carry-in of the next, creating a ripple-carry adder (modeled in the code sketch below). Historically, some version of the look-ahead carry generator 74182 was used to speed up "add" and "subtract" to be about the same speed as the other ALU operations. Historically, some people who built TTL CPUs put two or more independent ALU blocks in a single CPU -- a general-purpose ALU for data calculations, a PC incrementer, an index register incrementer/decrementer, a base+offset address adder, etc. We discuss ripple-carry adders, look-ahead carry generators, and their effects on other parts of a CPU at Microprocessor Design/Add and Subtract Blocks.

alternatives to 74181

Some people find that '181 chips are becoming hard to find. Quite a few people building "TTL CPUs" use GAL chips (which can be erased and reprogrammed). A single GAL20V8 chip can replace a 74181 chip. Often another GAL chip can replace 2 or 3 other TTL chips. Other people building "TTL CPUs" find it more magical to build a programmable machine entirely out of discrete non-programmable chips. Are there any reasonable alternatives to the '181 for building an ALU out of discrete chips? The Magic-1 uses 74F381 and 74F382 ALUs; is there any variant of the '381 and '382 chips that is any easier to find than a '181? ... The 74HC283, 74HCT283, and MC14008 chips only add; they don't do AND, NAND, etc. ... One could build the entire CPU -- including the ALU -- out of sufficient quantities of the 74153 multiplexer.
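Whatever chips implement each 4-bit slice, the ripple-carry chaining described above is the same topology. Here is a small software model of it (a C# sketch for illustration only; it models add mode, not a 74181's full function set):

    using System;

    class RippleCarryAlu
    {
        // One 4-bit slice: adds two nibbles plus a carry-in, and returns
        // the 4-bit sum and a carry-out (like one cascaded adder slice).
        static (int sum, int carryOut) AddSlice(int a, int b, int carryIn)
        {
            int raw = (a & 0xF) + (b & 0xF) + carryIn;
            return (raw & 0xF, raw >> 4);
        }

        // A 16-bit add built from four chained 4-bit slices.
        static int Add16(int a, int b)
        {
            int result = 0, carry = 0;
            for (int slice = 0; slice < 4; slice++)
            {
                int shift = slice * 4;
                (int sum, int carryOut) =
                    AddSlice((a >> shift) & 0xF, (b >> shift) & 0xF, carry);
                result |= sum << shift;
                carry = carryOut; // the carry "ripples" to the next slice
            }
            return result & 0xFFFF;
        }

        static void Main()
        {
            // 0x1234 + 0x0FFF forces the carry to ripple through every slice.
            Console.WriteLine(Add16(0x1234, 0x0FFF).ToString("X4")); // 2233
        }
    }

The sequential dependency in the loop is exactly why ripple carry is slow in hardware: the top slice cannot settle until the bottom slice's carry has propagated, which is the delay a 74182-style look-ahead carry generator short-circuits.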
One designer "built from scratch" a 4-bit ALU that does add, subtract, increment, decrement, "and", "or", "xor", etc. -- roughly equivalent to the 4-bit 74181 -- out of about 14 simple TTL chips: 2-input XOR, AND, and OR gates. Another designer has posted an 8-bit ALU design that has more functionality than two 74181 chips -- the 74181 can't shift right -- built from 14 complex TTL chips: two 74283 4-bit adders, some 4:1 muxes, and some 2:1 muxes.

The designers of the LM3000 CPU posted an ALU design that has less functionality than the 74181. The 8-bit "ALU" in the LM3000 can't actually do any logical operations, only "add" and "subtract"; it is built from two 74LS283 4-bit adders and a few other chips. Apparently those "logical" operations aren't really necessary.

The MC14500B Industrial Control Unit has even less functionality than the LM3000 CPU. It is arguable that the MC14500B has close to the minimum functionality to even be considered a "CPU". The MC14500B is perhaps the most famous "1-bit" CPU. All of the earliest computers and most of the early massive parallel processing machines used a serial ALU, making them "1-bit CPUs".

other parts

solderless breadboard approach

Solderless breadboards are perhaps the fastest way to build experimental prototypes that involve lots of changes. For about a decade, every student taking the 6.004 class at MIT was part of a team -- each team had one semester to design and build a simple 8-bit CPU out of 7400-series integrated circuits. These CPUs were built out of TTL chips plugged into several solderless breadboards connected with lots of 22 AWG (0.33 mm²) solid copper wire.

Traditionally, minicomputers built from TTL chips were constructed with lots of wire-wrap sockets (with long square pins) plugged into perfboard and lots of wire-wrap wire, assembled with a "wire-wrap pencil" or "wire-wrap gun". More recently, some "retrocomputer" builders have been using standard sockets plugged into stripboard and lots of wire-wrap wire, assembled with solder and a soldering iron.

Design Tips

There are many ways to categorize CPUs. Each "way to categorize" represents a design question, and the various categories of that way represent possible answers to that question, which must be decided before the CPU implementation can be completed. One way to categorize CPUs that has a large impact on implementation is: "How many memory cycles will I hold one instruction before fetching the next instruction?"
- 0: load-instruction on every memory cycle (Harvard architecture)
- 1: at most 1 memory cycle between each load-instruction memory cycle (load-store architecture)
- more: some instructions have 2 or more memory cycles between load-instruction memory cycles (memory-memory architecture)

Another way to categorize CPUs is: "Will my control lines be controlled by flexible microprogramming, a fixed control store, or a hard-wired control decoder that directly decodes the instruction?"

The load-store and memory-memory architectures require an "instruction register" (IR). At the end of every instruction (and after coming out of reset), the next instruction is fetched from memory[PC] and stored into the instruction register, and from then on the information in the instruction register (directly or indirectly) controls everything that goes on in the CPU until the next instruction is stored in the instruction register. That fetch-and-dispatch rhythm is sketched in code below.
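A few lines of simulation capture the idea. This C# sketch uses an invented two-operation encoding (opcode in the high nibble, immediate operand in the low nibble); it illustrates the control flow only, not the instruction set of any machine mentioned here:

    using System;

    class TinyCpu
    {
        static void Main()
        {
            // Invented encoding for illustration: 0 = HALT, 1 = LOAD, 2 = ADD.
            byte[] program = { 0x15, 0x23, 0x00 }; // LOAD 5; ADD 3; HALT

            int pc = 0;   // program counter
            int acc = 0;  // accumulator

            while (true)
            {
                byte ir = program[pc++]; // fetch memory[PC] into the IR, bump PC

                // From here until the next fetch, the IR alone decides
                // what the rest of the machine does.
                int opcode = ir >> 4;
                int operand = ir & 0xF;

                if (opcode == 0) break;          // HALT
                if (opcode == 1) acc = operand;  // LOAD immediate
                if (opcode == 2) acc += operand; // ADD immediate
            }

            Console.WriteLine(acc); // prints 8
        }
    }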
For homebrew CPUs, the 2 most popular architectures are:
- direct-decode Harvard architecture
- flexible microprogramming that supports the possibility of memory-memory architecture

Another way to categorize CPUs is: "How many sub-states are in a complete clock cycle?" Many textbooks imply that a CPU has only one clock signal -- a bunch of D flip-flops each hold 1 bit of the current state of the CPU, and those flip-flops drive that state out their "Q" outputs. Those flip-flops always hold their internal state constant, except at the instant of the rising edge of the one and only clock, when each flip-flop briefly "glances" at its "D" input and latches the new bit, and shortly afterwards (when the new bit is different from the old bit) changes its "Q" output to the new bit.

Single clock signals are nice in theory. Alas, in practice we can never get the clock signal to every flip-flop precisely simultaneously -- there is always some clock skew (differences in propagation delay). One way to avoid these timing issues is with a series of different clock signals. Another way is to use enough power and carefully design a clock distribution network (perhaps in the form of an H tree) with timing analysis to reduce the clock skew to negligible amounts. Relay computers are forced to use at least 2 different clock signals because of the "contact bounce" problem. Many chips have a single "clock input" pin, giving the illusion that they use a single clock signal -- but internally a "clock generator" circuit converts that single external clock into the multiple clock signals used by the chip. Many historically and commercially important CPUs have many sub-states in a complete clock cycle, with two or more "non-overlapping clock signals"; most MOS ICs used dual clock signals (a two-phase clock) in the 1970s.

Building a CPU from individual chips and wires takes a person a long time. So many people take various shortcuts to reduce the amount of stuff that needs to be connected and the amount of wiring they need to do.
- A 3-state bus rather than a 2-state bus often requires fewer and shorter connections.
- Rather than general-purpose registers that can be used (at different times) to drive the data bus (during STORE) or the address bus (during indexed LOAD), it sometimes requires less hardware to have separate address registers, data registers, and other special-purpose registers.
- If the software guy insists on general-purpose registers that can drive either bus, it may require less hardware to emulate them: have all programmer-visible registers drive only one internal microarchitectural bus, and (at different times) load the microarchitectural registers MAR and MDR from that internal bus, and later drive the external address bus from MAR and the external data bus from MDR. This sacrifices a little speed and requires more microcode to make the CPU easier to build.
- Rather than 32-bit or 64-bit address and data registers, it usually requires less hardware to have 8-bit data registers (occasionally combining 2 of them to get a 16-bit address register).
- If the software guy insists on 16-bit or 32-bit or 64-bit data registers and ALU operations, it may require less hardware to emulate them: use multiple narrow micro-architectural registers to store each programmer-visible register, and feed 1 or 4 or 8 or 16 bits at a time through a narrow bus to the ALU to get the partial result each cycle, or to sub-sections of the wide MAR or MDR. This sacrifices a little speed (and adds complexity elsewhere) to make the bus easier to build. (See the 68000, as mentioned above.)
- Rather than many registers, it usually requires less hardware to have fewer registers.
- If the software guy insists on many registers, it may require less hardware to emulate some of them (like some proposed MMIX implementations) or perhaps all of them (like some PDP computers): use reserved locations in RAM to store most or all programmer-visible registers, and load them as needed. This sacrifices speed to make the CPU easier to build. Alas, it seems impossible to eliminate all registers -- even if you put all programmer-visible registers in RAM, you still need a few micro-architectural registers: IR (instruction register), MAR (memory address register), MDR (memory data register), and ... what else?
- Harvard architecture usually requires less hardware than Princeton architecture. This is one of the few ways to make the CPU simpler to build *and* go faster.

Harvard architecture

The simplest kinds of CPU control logic use the Harvard architecture, rather than the Princeton architecture. However, Harvard architecture requires 2 separate storage units -- the program memory and the data memory. Some Harvard-architecture machines, such as "Mark's TTL microprocessor", don't even have an instruction register -- in those machines, the address in the program counter is always applied to the program memory, and the data coming out of the program memory directly controls everything that goes on in the CPU until the program counter changes. Alas, Harvard architecture makes storing new programs into the program memory a bit tricky.

microcode architecture

Assembly Tips

"I don't recommend that anybody but total crazies wirewrap their own machines out of loose chips anymore, although it was a common enough thing to do in the mid- to late Seventies." -- Jeff Duntemann

Programming Tips

Further Reading

- "Touch the magic. By this I meant to gain a deeper understanding of how computers work" -- Bill Buzbee
- "To evaluate the 6800 architecture while the chip was being designed, Jeff's team built an equivalent circuit using 451 small scale TTL ICs on five 10 by 10 inch (25 by 25 cm) circuit boards. Later they reduced this to 114 ICs on one board by using ROMs and MSI logic devices." -- Motorola 6800 development team
- "The 74181 is a bit slice arithmetic logic unit (ALU)... The first complete ALU on a single chip ... Many computer CPUs and subsystems were based on the '181, including ... the ... PDP-11 -- most popular minicomputer of all time" -- Wikipedia: 74181
- Wikipedia: Data General Nova#Processor design
- "My Home-Built TTL Computer Processor (CPU)" by Donn Stewart
- "The basic algorithm executed by the instruction execution unit is most easily expressed if a memory address fits exactly in a word." -- "The Ultimate RISC" by Douglas W. Jones
- "it just really sucks if the largest datum you can manipulate is smaller than your address size. This means that the accumulator needs to be the same size as the PC -- 16-bits." -- "Computer Architecture"
- Andrew Holme. "Mark 2 FORTH Computer"
"Mark 2 FORTH Computer" - GALU - A Gate Array Logic based ALU IC. - Bill Buzbee. "Magic-1 Microarchitecture". - Dieter Mueller. "Multiplexers: the tactical Nuke of Logic Design" 2004. - Rodney Moffitt. Micro Programmed Arithmetic Processor. 55 TTL chips. The core 4-bit adder/subtracter has about 7 SSI chips. The ALU has about 7 additional SSI chips of logic around that core to support "and", "or", "xor", "increment", "decrement". An instruction register and a micro-programmed sequencer around the ALU handle (4-bit) "multiply" and "divide". - Dieter Mueller. ALU with Adder. 2004. - LM3000 CPU - Decode Systems. "Motorola 14500B" - "1 (Yes, ONE) bit computer? MC14500B" - TinyMicros wiki: MC14500B - Dennis Feucht. "Forgotten Circuits (that should be brought back): MC14500B Industrial Control Unit". EDN 2012. - "MC14500B - a 1 bit industrial processor" - "icu-assembler: Assembler for the Motorola MC14500B ICU written in C" - Eric Smith. "Motorola MC14500B" - Wikipedia: serial computer - the VHS, a 32 bit CPU built by Kevin McCormick, Colin Bulthaup, Scott Grant and Eric Prebys for their MIT 6.004 class. - 6.004 Contest Photos - "Libby8" neo-retro computer by Julian Skidmore - Bill Buzbee. Magic-1 Homebrew CPU: Clocks - "Intel's Atom Architecture: The Journey Begins" by Anand Lal Shimpi, 2008. In a large microprocessor, the power used to drive the clock signal can be over 30% of the total power used by the entire chip. - Svarychevski Michail Aleksandrovich. "Homemade CPU – from scratch". Briefly compares a few notable hobbyist-built CPUs. - other homemade CPUs - yet more homemade CPUs relay computers - Harry Porter's Relay Computer (415 Relays, all identical 4PDT) - "Relay Computer Two" by Jon Stanley (281 relays, of 2 types: 177 SPDT, and 104 4PDT) - Zusie - My Relay Computer by Fredrik Andersson (uses around 330 relays, of 2 types: 4-pole and 6-pole double-throw relays, plus ~30 integrated circuits for RAM and microcode) - relay computers by Kilian Leonhardt (in German): a "large computer" with around 1500 relays and a program EEPROM, and a "small computer" with 171 relays. - DUO 14 PREMIUM by Jack Eisenmann (around 50 relays, including 4 addressable "crumbs" of RAM where each crumb is 2 bits, plus 48 bits of program ROM in 6x8-switch DIP switches. The only semiconductor components: 555 timer, decade counter, and transistors in the clock generator. Each command has 6 bits, and the 8 commands in the program ROM are selected by a 3-bit program counter). - Wikipedia: Z3 (computer), designed by Konrad Zuse, the world's first working programmable, fully automatic computing machine. built with 2,000 relays. - Z3 Nachbau, Horst Zuse's (Konrad Zuse's son) and Raul Rojas' 2001 reconstruction of the classic Z3. The 32-word, 22-bit-wide memory is also constructed entirely from relays, about 700 relays. (in German) - Horst Zuse's new Z3 reconstruction: Created 2010 for the 100 year anniversary of Konrad Zuse's birth. About 2500 modern relays. (in German) - Rory Mangles. Tim 7: A 4-bit relay CPU with the program stored on punch tape - Rory Mangles. Tim 8: "one of the smallest Turing complete relay computers in the world by relay count" an 8-bit relay CPU with the program stored on punch tape, data stored in discrete capacitors (!) (no RAM chips) with one relay pole per byte; uses 152 relays, most of them single-pole. discrete transistor computers - MT15 by Dieter Mueller is built almost entirely out of (around 3000) individual SMT transistors ... also has some essays on microprogramming and ALU design. 
- The Q1 Computer by Joe Wingbermuehle. Built almost entirely out of (3105) individual through-hole PN2222A transistors. "Clock phases are used so that transparent latches can be used for registers to reduce transistor count at the price of speed." 8-bit data bus, 16-bit address bus.
- Svarichevsky Mikhail is apparently building a processor entirely out of discrete transistors. Using very careful analog tuning (12 resistors of various values), he has developed a 4-transistor full adder: "BARSFA - 4-TRANSISTOR FULL ADDER". (Are the 4 Schottky diodes -- across the base and collector of each transistor -- really necessary, or just there to improve performance?) (He also shows a canonical implementation of a CMOS full adder, requiring 28 transistors.)
- Simon Inns. "4-Bit Computer" shows a 4-bit adder built entirely from AND, OR, and NOT gates, in turn built entirely from discrete NPN transistors and resistors (with toggle switches for inputs and LEDs to output the sum). (A 22-transistor full adder.)
- Rory Mangles. Tiny Tim: diode-transistor logic (DTL); 400 2N3904 NPN transistors plus diodes, resistors, capacitors, etc., giving "2700 components". Has 4 registers: a Working Register (8-bit), an Instruction Register (8-bit), an Address Register (12-bit), and a Program Counter (12-bit), plus a sequencer. (Also uses some Zero Page "registers" stored in the SRAM chip.)

pneumatic computers

- "8 bit processor using logic gates made of pneumatic valves" by Minsoung Rhee and Mark Burns

K'nex computers

??? do these really count as "processors" ???

TTL computers

- A Minimal TTL Processor for Architecture Exploration by Bradford J. Rodriguez (aka PISC, the Pathetic Instruction Set Computer)
- Wikipedia: Apollo Guidance Computer
- V1648: 16-bit data bus, 48-bit address bus(?)
- "the Ultimate RISC" and "the Minimal CISC"
- alt.comp.hardware.homebuilt FAQ
- Mark's TTL microprocessor (uses only 8 chips ... "Without using the two PALs I used, it would be 16 chips.") (is there a better URL for this?)
- DUO Compact by Jack Eisenmann: the DUO Compact CPU was built out of 22 integrated circuit chips, including 2 EEPROMs for microcode and 1 EEPROM for boot ROM. It has some nice features -- a unified address space (16-bit address bus, 8-bit data bus); programs can run out of the boot ROM or the data RAM; memory-mapped I/O; etc. Also some odd features -- the instruction pointer is reloaded to a literal "next" value in every instruction -- it's not really a "program counter", because the CPU lacks the hardware to "count" or "increment" a value directly.
- "Prehistoric CPUs & Octal Amps" (18-bit data bus? 24-bit data bus?)
- "Viktor's Amazing 4-bit Processor" ... can be re-programmed in-circuit using manual switches. About 90 chips.
- Galactic 4-bit CPU by Jon Qualey. Two 2716 EPROMs are used to store the micro-instruction code, and two 2114 static RAMs are used for program memory. 25 ICs in all, 74LS TTL.
- LM3000 CPU, designed and built by five students at Bennington College, Vermont, using fifty-three integrated circuits.
- The D16/M by John Doran is a 16-bit digital computer implemented with SSI and MSI HCMOS integrated logic and constructed using wire-wrap techniques. Its timing and control unit is microprogrammed (fully horizontal, with a 72-bit control word).
- (FIXME: who?) has built an MC14500 clone out of (TTL) discrete logic. (FIXME: who else?) has built an MC14500 clone on an FPGA.
- TANACOM-1 by Rituo Tanaka is a 16-bit TTL minicomputer built with a total of 146 ICs, including 4 SN74181s and a 74182 in the ALU.
- BMOW 1 (Big Mess o' Wires) by Steve Chamberlin is an 8-bit CPU built from discrete 7400-series logic and a few 22V10 and 20V8 GALs. All the digital electronics sit on a single large Augat wire-wrap board that interconnects the 50 or so chips; BMOW 1 contains roughly 1250 wires connecting the components. All data buses are 8-bit; the address bus is 24-bit. 3 parallel microcode ROMs generate the 24-bit microcode word. VGA video output is 512x480 with two colors, or 128x240 with 256 colors. The microcode emulates a 6502 (more or less). Uses two 4-bit 74LS181s to form the core 8-bit ALU.
- "Asynchronous 40-bit TTL CPU" by Hans Summers, 1992
- "a proprietary 8-bit engine built out of 3 PROMs and a few dozen TTL chips", as described by Jeff Laughton
- "One-bit Computing at 60 Hertz": a tiny computer made from an EPROM and a few logic chips, designed by Jeff Laughton
- "Bride of Son of Cheap Video - the KimKlone": TTL chips and an EPROM add extra programmer-visible registers and instructions to a microcontroller (a 65C02)
- The MyCPU Project: "everybody is invited to participate and contribute to the project." The CPU is built from 65 integrated circuits on 5 boards, with 1 MByte of bank-switched RAM. Originally developed by Dennis Kuschel. Apparently several MyCPU systems have been built: one MyCPU system runs an HTTP web server; another runs a (text-only) web browser.
- HJS22 - a homebrew TTL computer. Nice front panels with lots of lights and switches.
- The Electronics Australia EDUC-8 microcomputer: "one of the first build-it-yourself microcomputers". "The internal implementation is bit-serial which gives good economy of components as most data paths are only 1 bit wide."
- "Learning to Build a Processor" shows some nice photos of early stages in a TTL CPU built on solderless breadboards.
- "Homebrew CPUs/Low Level Design" recommends a few books with low-level TTL CPU design information.
- Randy Thelen. Mippy (millions of instructions per year) is a 1 MHz, 16-bit Forth machine built from scratch using 74HCT00-series TTL chips. The data bus and address bus are separate, each 16 bits wide.
Cells that lack a membrane-bound nucleus are called prokaryotes (from the Greek meaning "before nuclei"). These cells have few internal structures that are distinguishable under a microscope. Cells in the monera kingdom, such as bacteria and cyanobacteria (also known as blue-green algae), are prokaryotes.

Prokaryotic cells differ significantly from eukaryotic cells. They don't have a membrane-bound nucleus; instead of eukaryote-style chromosomes, their genetic information is carried on a circular loop of DNA (many bacteria also carry smaller, separate DNA loops called plasmids). Bacterial cells are very small, roughly the size of an animal mitochondrion (about 1-2 µm in diameter and up to 10 µm long). Prokaryotic cells come in three major shapes: rod-shaped, spherical, and spiral. Instead of going through the elaborate replication processes of eukaryotes, bacterial cells divide by binary fission.

Diagram of a prokaryotic cell. Notice the internal organelles are not easily distinguishable.

Bacteria perform many important functions on earth. They serve as decomposers and agents of fermentation, and they play an important role in our own digestive system. Also, bacteria are involved in many nutrient cycles, such as the nitrogen cycle, which restores nitrate to the soil for plants. Unlike eukaryotic cells, which depend on oxygen for their metabolism, prokaryotic cells enjoy a diverse array of metabolic functions. For example, some bacteria use sulfur instead of oxygen in their metabolism.
Delegates (C# Programming Guide)

A delegate is a type that defines a method signature. When you instantiate a delegate, you can associate its instance with any method that has a compatible signature. You can then invoke (or call) the method through the delegate instance. Delegates are used to pass methods as arguments to other methods. Event handlers are nothing more than methods that are invoked through delegates. You create a custom method, and a class such as a Windows control can call your method when a certain event occurs. A delegate declaration, along with the assignment and invocation that go with it, is shown in the example at the end of this article.

Any method from any accessible class or struct that matches the delegate's signature, which consists of the return type and parameters, can be assigned to the delegate. The method can be either a static method or an instance method. This makes it possible to programmatically change method calls, and also to plug new code into existing classes. As long as you know the signature of the delegate, you can assign your own method.

In the context of method overloading, the signature of a method does not include the return value. But in the context of delegates, the signature does include the return value. In other words, a method must have the same return type as the delegate.

This ability to refer to a method as a parameter makes delegates ideal for defining callback methods. For example, a reference to a method that compares two objects could be passed as an argument to a sort algorithm. Because the comparison code is in a separate procedure, the sort algorithm can be written in a more general way.

Delegates have the following properties:
- Delegates are like C++ function pointers but are type safe.
- Delegates allow methods to be passed as parameters.
- Delegates can be used to define callback methods.
- Delegates can be chained together; for example, multiple methods can be called on a single event.
- Methods do not have to match the delegate signature exactly. For more information, see Using Variance in Delegates (C# and Visual Basic).

C# version 2.0 introduced the concept of anonymous methods, which allow code blocks to be passed as parameters in place of a separately defined method. C# 3.0 introduced lambda expressions as a more concise way of writing inline code blocks. Both anonymous methods and lambda expressions (in certain contexts) are compiled to delegate types. Together, these features are now known as anonymous functions. For more information about lambda expressions, see Anonymous Functions (C# Programming Guide).

For more information, see the C# Language Specification. The language specification is the definitive source for C# syntax and usage.
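A minimal sketch of a delegate declaration and its use. The delegate type Del with a single string parameter mirrors the kind of declaration this article describes; the surrounding program, the method name Notify, and the messages are illustrative inventions:

    using System;

    class Program
    {
        // A delegate type: any method taking a string and returning void matches.
        public delegate void Del(string message);

        static void Notify(string message)
        {
            Console.WriteLine("Notification: " + message);
        }

        static void Main()
        {
            // Associate the delegate instance with a compatible method ...
            Del handler = Notify;

            // ... and invoke the method through the delegate instance.
            handler("Hello World");

            // Delegates can be chained (multicast): after this, a single
            // call runs both Notify and the lambda, in order.
            handler += m => Console.WriteLine("Logged: " + m);
            handler("Second message");
        }
    }

The lambda added with += is an example of the anonymous functions mentioned below: the compiler converts it to the delegate type Del.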
Disunion follows the Civil War as it unfolded.

But in Virginia, Confederates were having a summer of unprecedented successes. Stonewall Jackson humiliated five different federal commanders in the Shenandoah Valley and at the Battle of Cedar Mountain. Robert E. Lee had stymied George B. McClellan's Peninsula Campaign aimed at Richmond, and in August joined Jackson to humiliate John Pope at Second Manassas. Confederate leaders saw this as the moment to capitalize on these successes with a bold military incursion into Kentucky in August. The Union's breadbasket, the western border states lying astride the Ohio River, was about to become the next front.

Beginning in the mid-18th century, the Ohio River was one of the great highways of North America. Tens of thousands of people used it to float westward down from the Appalachian Mountains into the interior of the continent. The region filled up quickly: by 1790, 73,677 people lived in Kentucky, then still part of Virginia, and 35,691 more in Tennessee. By 1810, 15 percent of the American population lived west of the Appalachians, by then including the newest state, Ohio. Within a decade, six more Western states would be added to the national Union. Three decades later, in 1840, more than a third of all Americans lived in this so-called First West.

The early settlements in the western region quickly thrived because of the river trade. The Ohio and its tributaries, which stretched north nearly to the Great Lakes, south to the Nashville Basin, and east to the Cumberland Plateau, sustained the growing population of the valley with crops and goods. Farmers loaded flatboats with the products of their summer labors: wheat milled into flour, corn distilled into whiskey, and hogs slaughtered into bacon and soap. These and innumerable other goods floated down the Big Sandy, Scioto, Licking, Kentucky, Wabash, Cumberland and Tennessee, then down the Ohio to the Mississippi and on to even hungrier markets in Memphis, Natchez, New Orleans and beyond.

The appearance of the steamboat in the first decade of the 1800s revolutionized river traffic, making it possible to return upriver without walking, riding, pushing or pulling against the river currents. Before the arrival of the steamboat, items had to be carried over the Appalachians to western Pennsylvania and floated downriver. By 1820, 73 steamboats were working the Ohio and Mississippi rivers, bringing as much as 33,000 tons of goods back up to Louisville, Cincinnati and Pittsburgh. A canal-building craze soon followed, cheaply and efficiently connecting the inland areas, especially north of the Ohio, with the rivers. In 1852, at the peak of the steamboat trade, 8,000 landings were recorded at Cincinnati.

Owing in large part to the steamboat, in a single generation after the Revolution, bustling cities grew from these once isolated river towns: Pittsburgh, St. Louis, Louisville, Lexington, Cincinnati, Evansville. These were business towns: with regularity of design standard in all of these river cities, travelers talked of their attractive business climates rather than their physical beauty. Merchants dominated local society and politics, accumulating wealth from the southern river trade that both drove and exemplified their cities' growing class stratification.
But civic leaders also planted and cultivated the seeds of culture that sprouted first in these cities: newspapers, churches, opera houses and theaters, bookstores, museums, lyceums and debating societies, libraries and schools and colleges. Although St. Louis led all cities in the West in sophistication, by the early 19th century Lexington, Ky., was known as the "Athens of the West" because of its educational facilities, most notably Transylvania University. Other cities were not far behind. By the 1830s Cincinnati, the West's Queen City with nearly 50,000 people, had replaced Lexington as the region's cultural and commercial epicenter.

Fast on the heels of the steamboat boom came rail. By the eve of the Civil War, Ohio boasted some 3,000 miles of track, 76 times what it had in 1840 and the most of any state in the entire nation. Illinois was second in the region and nation with 2,799 miles, and Indiana followed (fifth in the country) with 2,163; only New York and Pennsylvania boasted more. Missouri and Kentucky, too, had engaged heavily in railroad construction, but their 817 and 534 miles of track, respectively, left them lagging far behind even most of the cotton states, much less their immediate neighbors. Even in Missouri, virtually all of the main railroad lines ran along or north of the Missouri River.

Industrial expansion in the West followed these states' respective railroad booms, contributing to population explosions in all of them. The 1850s saw the floodtide. In 1860, Ohio's 2.3 million residents represented a more than 50 percent increase since 1840, making it the third largest state in the country. Illinois' population doubled each decade, reaching 1.8 million, while Indiana's population had nearly doubled to 1.4 million. Kentucky's population had, like Ohio's, increased by half, to some 919,000 residents, and had spread noticeably westward. Some of the new settlers came from Eastern cities, but many came from a new wave of immigration from Europe, which favored the railroaded portions of these states, creating new population centers away from the traditional riverine sections.

As late as the 1840s, many of the unorganized areas or fledgling counties of the northern portions of the Northwestern states had been sparsely settled. But soon boggy forests were drained by industrious laborers and settlers, with railroads following, allowing these counties to account for much of their states' growth in the final antebellum decade. All of the counties of northern Indiana and Illinois saw their populations double or more; even in Ohio, they increased by half in the decade. Propelled by the lake trade, Cleveland became a city and Chicago became the region's colossus within decades of its founding. The northward shift of these states' populations helped the West's population exceed that of every other national region, including the fast-growing cotton frontier.

The towns and villages of the southern portions of these states, their traditional locus of population, declined proportionally. By 1860, between a sixth and a quarter of the population of the Ohio River states lay in the valley itself. At the same time, though, the Ohio Valley cities thrived. Where in 1840 just less than four percent of the region's overall population was urban (and only three cities boasted as many as 8,000 residents), by 1860 some 14 percent lived in villages, towns and cities, and the region boasted 14 cities with 10,000 or more residents.
This new urban population was unlike any the region had seen before. Between 1820 and 1849, nearly 2.5 million foreigners came to the United States, largely from northern Europe, representing a seven-fold increase in the incidence of immigration. By 1850, 47.2 percent of Cincinnatians were immigrants, and more than 70 percent of those came from either Ireland or Germany, in part responses to the terrible potato famine of the later 1840s in Ireland and the failed democratic revolutions of 1848 for the Germans. Nearly a contiguous square mile of the cityscape was a virtual German "stadt," with bustling streets bearing names like Berlin, Schiller and Goethe, and with street signs and business placards posted in German script. The vibrant 10th Ward was known simply as "Über der Rhein," or for Anglos, "Over-the-Rhine," a descriptive term that originated from the area's proximity to the Miami Canal, which separated it from the rest of the city. Many, like those in Cincinnati, were Catholics, which for many native-born Protestants caused more consternation than the newcomers' ethnicities.

By 1850, Cincinnati was only the third most densely immigrant city in the nation. But the two with more were also Western cities: Chicago and St. Louis. (Surprisingly, all three led New York City, in which foreigners constituted 45.7 percent of the population.) But immigrants flocked to all the Western cities. They made up some 17 percent of Louisville's population. They likewise settled heavily in Covington, Ky., and Evansville, Ind., creating a unique culture. The diversity that became much of America was as much the western border region's as the nation's.

Many of these Germans were strongly antislavery. Yet as the second year of the war began, positions on slavery did not easily divide north and south of the Ohio. Many of the region's Irish Catholics supported slavery's protection, while in large sections of the free states fighting-age "Butternut" men, called "Copperheads" by their Republican opponents, laid out of the fight altogether or threatened to leave it, should they be conscripted or slavery be abolished by the "friction of war," as Abraham Lincoln put it. Others sympathized outright with the Confederacy, and fights commonly broke out between them and their pro-war Republican neighbors.

For many of the border region's dissenting white residents, the course and events of the Civil War pointed out clearly that a new alliance had emerged, one in which Republicans in the Northern and Northwestern states appeared to be uniting in a conspiracy against liberty, which for many included the right of slaveholding. Angry and disillusioned, many of these Western dissenters sympathized with the region that now embodied their sense of betrayal and victimization: the beleaguered South.

In the rural areas of Missouri and of western Kentucky and Tennessee, guerrillas were waging a desperate fight against occupying troops and local unionists, a fight that grew out of their recognition that the cities in their states were virtual fortresses: recruiting and staging centers for the federal armies, impossibly defended by hordes of blue-clad troops who guarded the supply and munitions depots and manufacturing centers for the federal government's war machine.
As the newly appointed general in chief Henry Halleck later realized, Jefferson Davis and his advisers had "boldly determined to reoccupy Arkansas, Missouri, Tennessee, and Kentucky and, if possible, invade the states of Ohio, Indiana and Illinois while our attention was distracted by the invasion of Maryland." Coordinated invasions of the border states on both sides of the Appalachians, as well as west of the Mississippi, would threaten the West's major river cities and even the nation's capital, perhaps turning the tide militarily.

But the decision was more than strategic; it was political. The federal government's midterm elections loomed. Success on the new war front would embolden dissenters and moderates in the border states, especially in the Ohio Valley, to vote against Lincoln's party, turn public support in the free states against the war, and possibly gain for the Confederacy its most elusive prize: foreign recognition. The cumulative effect would be to force the Lincoln administration to sue for peace. The Confederacy's Tet Offensive was set to begin.

Christopher Phillips is a professor of history at the University of Cincinnati. He is the author of six books on the Civil War era, including "Damned Yankee: The Life of Nathaniel Lyon" and the forthcoming "The Rivers Ran Backward: The Civil War on the Middle Border and the Making of American Regionalism."
Earlier that day, the President of Germany, Paul von Hindenburg, had appointed Hitler Chancellor (similar to Prime Minister). Having won more than 37 percent of the vote in the previous year's legislative elections, Hitler's Nazi party had enough power to effectively paralyze Germany's democratic government, which had been in place since 1919. Hindenburg hoped that by appointing Hitler, he could satisfy Nazi legislators and break the deadlock, while maintaining control of the government behind the scenes. His miscalculation led to disaster for Germany, for Europe, and for the world.

How was Hitler, probably the most ruthless dictator of the 20th century, able to come to power in a democratic Germany 75 years ago? And could something like it happen again? To think about these questions, it helps to understand the circumstances in Germany at the time that helped Hitler and his Nazi party gain power.

Impact Of Versailles

By the early 1930s, Germany was in desperate shape. Its defeat in World War I and the harsh conditions imposed by the United States, Britain, and France in the 1919 Treaty of Versailles, including debilitating reparation payments to the victors, had left Germany humiliated and impoverished, with ruinous inflation eating away at its economy. The worldwide Depression that followed the 1929 U.S. stock market crash exacerbated the situation as banks failed, factories closed, and millions of people lost their jobs. It all made for fertile ground for Hitler's radical nationalist ideology.

The Nazis (short for National Socialists) promised to stop reparation payments, to give all Germans jobs and food, and to make them proud to be German again. And they blamed Jews for most of Germany's problems. By 1930, when the Nazis won 18 percent of the vote, it was effectively impossible to govern Germany without Nazi support, according to Ian Kershaw, a history professor at Sheffield University in England. And that led to President Hindenburg's gamble to appoint Hitler Chancellor in January 1933.

Less than a month later, Hitler used the fire that destroyed the Reichstag, the parliament building in Berlin, as an excuse to declare a state of emergency and suspend democratic protections such as freedom of speech. (At the time, Hitler blamed the Communists, but many historians believe the Nazis set the fire themselves.) It marked, in effect, the death of German democracy and the beginning of Hitler's reign of terror.

Within months, the first concentration camp was opened in the Bavarian town of Dachau. The first prisoners were political opponents of the regime. But it wasn't long before other groups that the Nazis deemed undesirable were rounded up and sent away: in particular, Jews, homosexuals, and gypsies. The SS, Hitler's elite paramilitary force, had long been terrorizing Germany's Jews, beating them up and vandalizing their businesses. The Nazis believed that Germans, part of what they called the Aryan race, were racially superior to Jews. In 1935, their racist beliefs became official German policy with the passage of the Nuremberg laws, which stripped German Jews of citizenship and laid the groundwork for the horrors to follow.

On Nov. 9, 1938, the Nazis orchestrated a nationwide wave of attacks on Jewish businesses, homes, and synagogues. Almost 100 Jews were killed, and thousands were arrested and sent to concentration camps. The night became known as Kristallnacht, the night of broken glass. At the same time, Hitler was moving Germany steadily toward war.
In 1935, he began rebuilding Germany's military, in violation of the Versailles treaty. In 1938, he annexed Austria and the Sudetenland, a region of western Czechoslovakia where many ethnic Germans lived, making both part of Germany. Then, on Sept. 1, 1939, Germany launched a surprise attack on Poland and conquered it so quickly that the term blitzkrieg, or "lightning war," was coined. On September 3, after Germany ignored their demands to withdraw, Britain and France declared war. World War II had begun.

By 1942, a year after Germany began implementing the Final Solution, its detailed plans for the systematic extermination of all of Europe's Jews, it had conquered much of Europe, from France to the outskirts of Stalingrad in the Soviet Union. As more Jews came under their control, the Germans herded them into crowded ghettos in preparation for mass deportations to concentration camps across Europe, where they died of disease, starvation, and overwork, or were systematically murdered in the gas chambers. Six million Jews, the vast majority of Europe's Jewish community, ultimately perished in the Holocaust.

By the time the war in Europe (and in the Pacific, the war against Japan) ended in 1945, 48 million people worldwide had died, and much of Europe was in ruins. These distant events still echo today. Indeed, with the world now facing great tensions and instability, the question of whether such a monstrous dictator could again come to power and threaten the world seems more relevant than ever, says Kershaw, the historian.

Lessons For Today

Around the globe, skilled politicians have been able to manipulate populist, nationalist, or racist feelings to advance authoritarian rule, according to Kershaw. In the 1990s, for example, the President of Serbia, Slobodan Milosevic, used nationalist rhetoric reminiscent of the Nazis to launch a campaign of ethnic cleansing and war in the Balkan region of Europe. In recent years, President Vladimir Putin has gradually moved Russia in an authoritarian direction, and President Hugo Chávez has done the same in Venezuela, though his attempt to be named President for life was defeated in a referendum last year. In Zimbabwe, a once prosperous African nation now in ruins, President Robert Mugabe has used brutal force to stifle opposition and stay in power for 28 years.

But, as Kershaw points out, there are international organizations today that didn't exist in 1933, such as the United Nations and the European Union, that would put up some roadblocks to the rise of a dictator bent on world conquest. Nevertheless, it's clear the world needs to stay on guard. "We always have to be watchful of a politician who announces that his country's destiny is determined by expansion, whether it's a land grab or a political and economic domination," says historian Peter Black of the Holocaust Museum in Washington. "Clearly, Hitler's statements as a politician were plenty concerning if people had taken them seriously."

Today, a key question for democracies is how to balance the fight against threats like Islamic terrorism with democratic freedoms. And that, Black says, is the second lesson to take from Hitler's rise to power. "A politician who's prepared to sacrifice basic rights for security, that's something for a citizen of any democratic society to be concerned about," he says.
"Whether you're looking at the Soviet Union or Germany, the move toward authoritarian dictatorship doesn't necessarily make the country more secure, and the cost to the population is very, very high."
The island of Sumatra suffered from both the rumblings of the submarine earthquake and the tsunamis it generated on December 26, 2004. Within minutes of the quake, the sea surged ashore, bringing destruction to the coasts of northern Sumatra. This pair of images from the Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA's Terra satellite shows the Aceh province of northern Sumatra, Indonesia, on December 17, 2004, before the quake, and on December 29, 2004, three days after the catastrophe.

Though MODIS was not specifically designed to make the very detailed observations usually necessary for mapping coastline changes, the sensor nevertheless observed obvious differences in the Sumatran coastline. On December 17, the green vegetation along the west coast appears to reach all the way to the sea, with only an occasional thin stretch of white that is likely sand. After the earthquake and tsunamis, the entire western coast is lined with a noticeable purplish-brown border. The brownish border could be deposited sand, or perhaps soil that was stripped bare of vegetation when the large waves rushed ashore and then raced away.

On a moderate-resolution image such as this, the affected area may seem small, but each pixel in the full-resolution image covers 250 by 250 meters. In places the brown strip reaches inland roughly 13 pixels, equal to a distance of 3.25 kilometers, or about 2 miles. On the northern tip of the island, the incursion is even larger.
Black Reconstruction in America is a book by W. E. B. Du Bois, first published in 1935. It is a revisionist approach to the Reconstruction of the South after its defeat in the American Civil War. On the whole, the book takes a Marxist approach to Reconstruction. The essential argument of the text is that the Black and white laborers, who are the proletariat, were divided after the Civil War along the lines of race, and as such were unable to stand together against the white propertied class, the bourgeoisie. This, to Du Bois, was the failure of Reconstruction and the reason for the rise of the Jim Crow laws and other such injustices.

In addition to creating a landmark work in early U.S. Marxist sociology, Du Bois's historical scholarship and his use of the techniques of primary-source research on the postwar political economy of the former Confederate states were equally groundbreaking at the time. He performed the first systematic and rigorous analysis of the political economy of the Reconstruction period in the southern states, based upon actual data collected during the period.

In chapter five, Du Bois argues that the decision by slaves on the southern plantations to stop working was an example of a general strike. This type of Marxist rhetoric is in concert with his arguments throughout the book that the Civil War was largely a war fought over labor issues.
Little was known about this hydrogen-breathing organism before its genome sequence was determined. By utilizing computational analyses and comparison with the genomes of other organisms, the researchers have discovered several remarkable features. For example, the genome encodes a full suite of genes for making spores, a previously unknown talent of the microbe. Organisms that make spores have attracted great interest recently because this is a process found in the bacterium that causes anthrax. Sporulation allows anthrax to be used as a bioweapon because the spores are resistant to heat, radiation, and other treatments. By comparing this genome to those of other spore-making species, including the anthrax pathogen, Eisen and colleagues identified what may be the minimal biochemical machinery necessary for any microbe to sporulate. Thus studies of this poison-eating microbe may help us better understand the biology of the bacterium that causes anthrax.

Building off this work, TIGR scientists are leveraging the information from the genome of this organism to study the ecology of microbes living in diverse hot springs, such as those in Yellowstone National Park. They want to know what types of microbes are found in different hot springs--and why. To find out, the researchers are dipping into the hot springs of Yellowstone, Russia, and other far-flung locales to isolate and decipher the genomes of microbes found there. "What we want to have is a field guide for these microbes, like those available for birds and mammals," Eisen says. "Right now, we can't even answer simple questions."

Source: The Institute for Genomic Research
The Global Hunger Index (GHI) is designed to comprehensively measure and track hunger globally and by country and region. Calculated each year by the International Food Policy Research Institute (IFPRI), the GHI highlights successes and failures in hunger reduction and provides insights into the drivers of hunger. By raising awareness and understanding of regional and country differences in hunger, the GHI aims to trigger actions to reduce hunger. To reflect the multidimensional nature of hunger, the GHI combines three equally weighted indicators in one index number:
- Undernourishment: the proportion of undernourished as a percentage of the population (reflecting the share of the population with insufficient calorie intake);
- Child underweight: the proportion of children younger than the age of five who are underweight (low weight for age, reflecting wasting, stunted growth, or both), which is one indicator of child undernutrition; and
- Child mortality: the mortality rate of children younger than the age of five (partially reflecting the fatal synergy of inadequate dietary intake and unhealthy environments).

The GHI ranks countries on a 100-point scale. Zero is the best score (no hunger), and 100 is the worst, although neither of these extremes is reached in practice.
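Since the three indicators are equally weighted, the combination step can be sketched as a simple average. This is only an illustration of "equally weighted" under the assumption that each component is already expressed on a 0-100 scale; the values below are invented, and IFPRI's published methodology is the authority on the exact calculation:

    using System;

    class GlobalHungerIndex
    {
        // Equal-weight combination of the three component indicators,
        // assuming each is already on a 0-100 scale (percentages/rates),
        // as the three bullet points above describe.
        static double Ghi(double undernourishment, double childUnderweight,
                          double childMortality)
            => (undernourishment + childUnderweight + childMortality) / 3.0;

        static void Main()
        {
            // Hypothetical country: 20% undernourished, 15% of under-fives
            // underweight, 8% under-five mortality.
            double score = Ghi(20.0, 15.0, 8.0);
            Console.WriteLine(score.ToString("F1")); // 14.3
        }
    }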
<urn:uuid:f37413e9-3eca-4eda-b6aa-a7349cc9213b>
CC-MAIN-2013-20
http://www.ifpri.org/book-8018/ourwork/researcharea/global-hunger-index?page=3
2013-06-19T12:20:23
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708766848/warc/CC-MAIN-20130516125246-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.913113
271
4.03125
4
To understand electricity, it is easiest to use a water current as an analogy for an electrical current, since most people are familiar with the characteristics of water. The analogies and definitions used in this section are simplified for the sake of explanation and are not 100% accurate, but they are accurate enough for building a pragmatically useful understanding of electricity.

An electrical current is the flow of electrons through a wire. Much like water flow, an electrical current has similar measurable characteristics, such as pressure, flow rate, and power. It can also perform useful work like a water current.

The electrical pressure differential between the positive and negative terminals of a battery or other electrical device is the voltage. This is similar to water pressure in a tank or pipe. Water pressure is measured in pounds per square inch (PSI), and electrical pressure (voltage) is measured in volts.

When electrons flow through a wire, the rate at which the electrons flow can be measured, and this rate is measured in amperes. A milliampere (abbreviated milliamp or ma) is 1/1000th of an ampere, so 1000 ma = 1 amp. The equivalent term for water would be gallons per minute of water flow.

When an electrical current flows through a wire, there is some friction on the current which reduces the amount of electricity flowing through the wire. This friction is called resistance, and is measured in ohms. In hydrodynamic terms, this measurement is similar to the diameter of a pipe. A small straw has a narrow diameter, and requires a lot of suction to pull a certain amount of liquid flow through it. A larger straw has a larger diameter, and requires less suction to pull the same amount of liquid flow through it.

Ohm's law specifies the relationship between volts, amps and resistance. Ohm's law is:

E = I * R

where E = voltage (volts), I = current (amps), and R = resistance (ohms).

What this means is actually fairly simple. If you have an electrical current flowing through a wire, and you double the electrical pressure, then you will double the current flow through the wire, assuming you keep the same wire. The equivalent hydrodynamic analogy is: if you double the water pressure at one end of the pipe, it will double the water flow through the pipe, if you keep the same pipe.

The equation can be rearranged to derive other interesting relationships, such as:

E / R = I

So, if you double the resistance of a wire, and you keep the electrical pressure the same, then only half the current will flow through the wire. The equivalent hydrodynamic analogy is: if you use a smaller pipe, then less water flows through the pipe, if you keep the same water pressure.

Electrical power is the amount of useful work which can be done by an electrical flow. This is measured in watts, which is the volts multiplied by the amps. Volts and amps independently by themselves do not measure power. For example, consider a water current of 1 gallon per minute. This water current can either be dribbling out of a six foot drainage pipe at very low pressure, or it can be squirting out of a very small hole at high pressure. In the first case, it doesn't have much power, and in the second case, it has a lot of power. Similarly, consider water pressure of 1 pound per square inch. In a small garden hose, this can water your yard and perform some useful work.
However, consider a large river like the Amazon, where you may have about 1 pound per square inch of water pressure but an enormous amount of water flow - this can deliver far more power than 1 pound per square inch in a garden hose.

The current capacity of a battery is measured in ampere-hours, often abbreviated aH. 1/1000th of an ampere-hour is a milliampere-hour, or maH. One ampere-hour is the ability to supply one ampere for one hour. Two ampere-hours is the ability to supply one ampere for two hours, or two amperes for one hour. The efficiency of a battery usually decreases at higher current draws. For example, a 4000 maH battery may be able to supply 400 ma for ten hours, but may only supply 4000 ma for half an hour. Therefore, the capacity of a battery is usually specified at a specific discharge current chosen by the manufacturer.

The total energy capacity of a battery is the voltage multiplied by the ampere-hour capacity of the battery. Since volts times amperes equals watts, volts times ampere-hours equals watt-hours.

Manufacturers of lithium-ion batteries usually specify the maximum discharge rate for a particular cell. This maximum discharge rate is specified as a value which is a multiple of the battery capacity. For example, a 20C battery is rated for a maximum discharge rate which is equivalent to twenty times the value of its total current capacity. If the battery is rated for 1300 maH, then the 20C discharge rate would be 26000 ma, or 26 amps. Note that this C rate value may be the maximum continuous or burst (usually 10 second) discharge rate depending on the battery manufacturer, so it is wise to read the battery specifications carefully.
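Since the section builds everything from three one-line formulas (E = I * R, watts = volts times amps, and C rating times capacity), a short Python sketch can serve as a calculator for them. The 12-volt / 4-ohm pairing below is a made-up illustration; the 1300 maH / 20C example reproduces the numbers from the text.

```python
def current_from_ohms_law(volts, ohms):
    """I = E / R: current in amps, given voltage and resistance."""
    return volts / ohms

def power_watts(volts, amps):
    """P = E * I: power in watts is volts multiplied by amps."""
    return volts * amps

def max_discharge_ma(capacity_mah, c_rating):
    """Maximum discharge rate in milliamps for a battery with a given C rating."""
    return capacity_mah * c_rating

amps = current_from_ohms_law(12.0, 4.0)   # 3.0 amps through a 4-ohm load
print(power_watts(12.0, amps))            # 36.0 watts
print(max_discharge_ma(1300, 20))         # 26000 ma, i.e. 26 amps, matching the text
```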
<urn:uuid:0807482f-e025-4234-8413-4806bb1c287f>
CC-MAIN-2013-20
http://www.swashplate.co.uk/ehbg-v18/ch28s10.html
2013-06-19T12:40:55
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708766848/warc/CC-MAIN-20130516125246-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.935623
1,116
4.125
4
How a nuclear reactor makes electricity

A nuclear reactor produces and controls the release of energy from splitting the atoms of uranium. Uranium-fuelled nuclear power is a clean and efficient way of boiling water to make steam which drives turbine generators. Except for the reactor itself, a nuclear power station works like most coal or gas-fired power stations.

The Reactor Core

Several hundred fuel assemblies containing thousands of small pellets of ceramic uranium oxide fuel make up the core of a reactor. For a reactor with an output of 1000 megawatts (MWe), the core would contain about 75 tonnes of enriched uranium. In the reactor core the U-235 isotope fissions, or splits, producing a lot of heat in a continuous process called a chain reaction. The process depends on the presence of a moderator such as water or graphite, and is fully controlled. The moderator slows down the neutrons produced by fission of the uranium nuclei so that they go on to produce more fissions.

Some of the U-238 in the reactor core is turned into plutonium and about half of this is also fissioned similarly, providing about one third of the reactor's energy output. The fission products remain in the ceramic fuel and undergo radioactive decay, releasing a bit more heat. They are the main wastes from the process.

The reactor core sits inside a steel pressure vessel, so that water around it remains liquid even at the operating temperature of over 320°C. Steam is formed either above the reactor core or in separate pressure vessels, and this drives the turbine to produce electricity. The steam is then condensed and the water recycled.

PWRs and BWRs

The main design is the pressurised water reactor (PWR), which has water in its primary cooling/heat transfer circuit and generates steam in a secondary circuit. The less popular boiling water reactor (BWR) makes steam in the primary circuit above the reactor core, though it is still under considerable pressure. Both types use water as both coolant and moderator, to slow down the neutrons.

To maintain efficient reactor performance, about one-third to one-half of the used fuel is removed every year or two, to be replaced with fresh fuel. The pressure vessel and any steam generators are housed in a massive containment structure with reinforced concrete about 1.2 metres thick. This is to protect neighbours if there is a major problem inside the reactor, and to protect the reactor from external hazards. Because some heat is generated from radioactive decay even after the reactor is shut down, cooling systems are provided to remove this heat as well as the main operational heat output.

Natural Prehistoric Reactors

The world's first nuclear reactors operated naturally in a uranium deposit about two billion years ago in what is now Gabon. The energy was not harnessed, since these were in rich uranium orebodies in the Earth's crust and moderated by percolating water.

Nuclear energy's contribution to global electricity supply

Nuclear energy supplies some 14% of the world's electricity. Today 31 countries use nuclear energy to generate up to three quarters of their electricity, and a substantial number of these depend on it for one quarter to one half of their supply. Almost 15,000 reactor-years of operational experience have been accumulated since the 1950s by the world's 440 nuclear power reactors (and nuclear reactors powering naval vessels have clocked up a similar amount).
<urn:uuid:c259cc03-df21-435b-b4d6-fb7b27710b2b>
CC-MAIN-2013-20
http://www.world-nuclear.org/Nuclear-Basics/How-does-a-nuclear-reactor-make-electricity-/
2013-06-19T12:26:15
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708766848/warc/CC-MAIN-20130516125246-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.927325
738
4.125
4
Students will use basic addition and subtraction facts every day for the rest of their lives, so it is extremely important that they have a good foundation of knowledge to use and build upon. Included below are books, online games, and other websites and resources available to enrich the learning experience of this crucial topic. The Virginia Standards of Learning covered include: 2.5 The student will recall addition facts with sums to 20 or less and the corresponding subtraction facts; 2.8 The student will create and solve one- and two-step addition and subtraction problems, using data from simple tables, picture graphs, and bar graphs; and 2.9 The student will recognize and describe the related facts that represent and describe the inverse relationship between addition and subtraction.
- Red Riding Hood's Math Adventure - Written by Lalie Harcourt and Ricki Wortzman - Illustrated by Capucine Mazille
In this interactive math tale, the reader plays a role in choosing how many cookies Little Red Riding Hood gives to the fairy tale characters she meets on her way to Grandma's house. On each page there is a wheel that the reader can turn to change the dialogue and number of cookies to be shared. Readers are encouraged to use copies of the dozen cookies Little Red Riding Hood starts out with to help keep track of the subtracted cookies so some will remain for Grandma!
- 12 Ways to Get to 11 - Written by Eve Merriam - Illustrated by Bernie Karlin
This story starts out by counting to twelve, with the number eleven missing from the list. Throughout the rest of the book, twelve different ways to add to eleven are showcased. Examples of the objects used in the number sentences include the pinecones and acorns on the forest floor, items found on a sailboat, babies, and a mother hen and her hatching chicks. Readers are exposed to a variety of number combinations that all add up to the missing number eleven.
- Panda Math: Learning about Subtraction from Hua Mei and Mei Sheng - Written by Ann Whitehead Nagda in collaboration with the San Diego Zoo
Real photographs of the panda cubs Hua Mei and Mei Sheng grace the pages of this informative non-fiction book. Readers have the option to read only the story of the baby panda cubs or they can learn more about pandas, and subtraction, as they explore the real life math issues on the left-side pages of the book. Some of the interesting math problems include how much less time pandas in the zoo spend eating bamboo compared with those in the wild or how much weight Hua Mei gained in three months. The adorable pictures and engaging facts will surely keep readers interested in both the life of the baby pandas and the math that goes along with it!
- Lights Out! - Written by Recht Penner - Illustrated by Jerry Smath
The narrator of this story is a little girl who not only has to go to bed before everyone in her family, but, as she notices by the lights on in all of their windows, before everyone in the apartment building across the street. One night she convinces her parents to let her stay up until all of the thirty-two lights across the street have gone out. Throughout the night the narrator describes both some of the fun things she sees, a pillow fight and a parrot for example, as well as the steps she takes in subtracting the lights that go off, until one stubborn light remains.
- Math Fables Too - Written by Greg Tang - Illustrated by Taia Morley This beautifully and colorfully illustrated book provides readers with fun science facts as they read about different animals. The animals, ranging from one sea horse to ten seagulls, are described through playful rhymes that portray the animals' behaviors done in smaller groups. By breaking down the larger number of each animal, readers are exposed to a variety of different addition facts that add up to the sums one through ten, as they also learn fun facts about a variety of creatures! - A Day at the Beach Subtraction – In this activity, an ocean scene is the background for the demonstration of subtraction using colored balls. A group of the balls are crossed out and separated from the original group and students must choose which of the two number sentences provided matches the balls. After selecting it students are then prompted to answer the fact before moving on to the next sentence. After about five of these, one beach-themed word problem is given, and at the end students can color in a fun beach scene. - Alien Addition – This game can be modified for ability levels by entering in the highest sum the facts provided will go to. In the game, students are instructed to use the cursor to move the laser beam that has the desired sum written on it below the UFO with the corresponding number sentence. They have one minute to get as many of the correct UFOs as possible before moving on to the next stage where the game continues to get harder. - Addition Chart Surprise – Students are directed to drag the given number to a spot on the chart where the row and column add up to the sum. When they drop the number in the correct spot, the entire diagonal of facts that add up to that sum is uncovered and pieces of a larger picture are shown, which can help students visualize addition patterns. - Number Jump – For this activity students use the calculator buttons, either to add or subtract, the number of spaces the green ball should jump to be able to smash the flies that are resting on a number. Students need to switch back and forth between the operations in order to get from one level of numbers to the next as they try to smash all of the flies in the least number of moves possible. - Ten Frame – Available from the National Council of Teachers of Mathematics, this online ten frame allows students to choose whether they want to use the manipulative to answer how many?, build, fill, or add and a variety of fun counters are available for the students to choose from. The ten frame lets students work in terms of fives and tens, two very important numbers in our number system, which can help them develop stronger addition and subtraction understanding and skills. Additional Teacher Resources - Grapher – This online grapher can be used within the classroom to create bar graphs which students can then analyze. It is a great way to get students involved and connected with the subtraction facts they are working on! - It's a Fact! – This website provides teachers with a variety of different types of activities to teach students addition patterns including counting on, doubles, doubles plus one, fact families, and combining ten. It has lists of the materials needed for each activity, including the PDF files for any forms or necessary worksheets, and step-by-step directions for each activity. There is also a list of books that go along with the topics being covered. 
- Numbers Away – Very similar to the addition site above, this website provides many ideas on how to teach subtraction throughout the year. It gives activity ideas for lessons that teach subtraction using a number line, subtracting from 10, subtracting doubles, and counting up. Included are downloadable forms, step-by-step instructions, background information for teachers, a related book list, and assessment ideas. - Manipulative Templates – This site provides teachers with templates for a wide variety of manipulatives. There are printable base-ten block sets, Cuisenaire rods, and colored tiles which would be very useful in the teaching of addition and subtraction.
<urn:uuid:531c63d2-4d8a-465f-8826-507bd90506bd>
CC-MAIN-2013-20
http://blog.richmond.edu/openwidelookinside/archives/author/lc2hb
2013-05-20T02:05:49
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698207393/warc/CC-MAIN-20130516095647-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.938661
1,549
4.3125
4
Armistice Day, as November 11 became known, officially became a holiday in the United States in 1926, and a national holiday 12 years later. On June 1, 1954, the name was changed to Veterans Day to honor all U.S. veterans. In 1968, new legislation changed the national commemoration of Veterans Day to the fourth Monday in October. It soon became apparent, however, that November 11 was a date of historic significance to many Americans. Therefore, in 1978 Congress returned the observance to its traditional date. Official, national ceremonies for Veterans Day center around the Tomb of the Unknowns. To honor these men, symbolic of all Americans who gave their lives in all wars, an Army honor guard, the 3d U.S. Infantry (The Old Guard), keeps day and night vigil. At 11 a.m. on November 11, a combined color guard representing all military services executes “Present Arms” at the tomb. The nation’s tribute to its war dead is symbolized by the laying of a presidential wreath and the playing of “Taps.” Congress voted Armistice Day a federal holiday in 1938, 20 years after the war ended. But Americans realized that the previous war would not be the last one. World War II began the following year and nations great and small again participated in a bloody struggle. After the Second World War, Armistice Day continued to be observed on November 11. In 1953 townspeople in Emporia, Kansas called the holiday Veterans’ Day in gratitude to the veterans in their town. Soon after, Congress passed a bill introduced by a Kansas congressman renaming the federal holiday to Veterans’ Day. In 1971, President Nixon declared it a federal holiday observed on the fourth Monday in October.
<urn:uuid:21472076-ded1-4dd5-8111-81a8a31669f7>
CC-MAIN-2013-20
http://elev8.com/510495/what-is-veterans-day/
2013-05-20T02:14:23
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698207393/warc/CC-MAIN-20130516095647-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.956895
365
4.03125
4
This lesson focuses on the drafting of the United States Constitution during the Federal Convention of 1787 in Philadelphia. Students will analyze an unidentified historical document and draw conclusions about what this document was for, who created it, and why. After the document is identified as George Washington’s annotated copy of the Committee of Style’s draft constitution, students will compare its text to that of an earlier draft by the Committee of Detail to understand the evolution of the final document.
Upon completion of this lesson, students will be able to:
- Examine documents as primary sources;
- Analyze and compare drafts;
- Describe the significance of changes to the document’s text.
- One to two classes
Recommended Grade Level
- Government, Law & Politics
- The New Nation, 1783-1815
<urn:uuid:bea5935b-85c1-4813-b57a-26d148033019>
CC-MAIN-2013-20
http://loc.gov/teachers/classroommaterials/lessons/more-perfect-union/
2013-05-20T02:41:21
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698207393/warc/CC-MAIN-20130516095647-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.900833
168
4.03125
4
Fermi questions emphasize estimation, numerical reasoning, communicating in mathematics, and questioning skills. Students often believe that "word problems" have one exact answer and that the answer is derived in a unique manner. Fermi questions encourage multiple approaches, emphasize process rather than "the answer," and promote non-traditional problem solving strategies. The Fermi Questions Library features classic Fermi questions with annotated solutions, a list of Fermi questions for use with students, Fermi questions with a Louisiana twist, and Fermi activities for the K-12 classroom. A Louisiana Lessons Web Activity.
Levels: Elementary, Middle School (6-8), High School (9-12)
Resource Types: Lesson Plans and Activities, Problems/Puzzles, Word Problems
Math Topics: Estimation, Problem-Solving, Communicating Math
Math Ed Topics: Grouping/Cooperative Learning, Non-traditional, Manipulatives
© 1994-2013 Drexel University. All rights reserved. The Math Forum is a research and educational enterprise of the Drexel University School of Education.
<urn:uuid:0982ac60-6c85-4317-9c57-d2a8b00e4570>
CC-MAIN-2013-20
http://mathforum.org/library/view/5508.html
2013-05-20T02:48:49
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698207393/warc/CC-MAIN-20130516095647-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.822112
257
4
4
Metamorphic Rocks and Minerals
This activity was selected for the On the Cutting Edge Reviewed Teaching Collection
This activity has received positive reviews in a peer review process involving five review categories. The five categories included in the process are
- Scientific Accuracy
- Alignment of Learning Goals, Activities, and Assessments
- Pedagogic Effectiveness
- Robustness (usability and dependability of all components)
- Completeness of the ActivitySheet web page
For more information about the peer review process itself, please see http://serc.carleton.edu/NAGTWorkshops/review.html.
This page first made public: Aug 7, 2006
This exercise is an introduction to the most important metamorphic rocks and minerals. This exercise is designed for a mid/upper-level undergraduate geology course on the principles of mineralogy.
Skills and concepts that students must have mastered
Students should have knowledge of basic chemistry and of minerals equivalent to what they would learn in an introductory geology class.
How the activity is situated in the course
This activity is the 19th of 36 mineralogy exercises and is used around the middle of the course.
Content/concepts goals for this activity
- Learn to identify key metamorphic minerals and rocks in hand specimen and thin section.
Higher order thinking skills goals for this activity
- Identify key properties useful for mineral identification.
Other skills goals for this activity
Description of the activity/assignment
In this three-part exercise, students study hand samples and thin sections of important metamorphic rocks and minerals.
- Part one - Box of Rocks: Students examine trays of metamorphic rocks and minerals and record their physical properties, composition, and habit. They note chemical and physical similarities and differences and identify the rock samples and minerals they contain.
- Part two - Definitions: Define a list of terms relevant to the lab.
- Part three - Minerals in Thin Section: Observe minerals in thin section and answer questions about them.
<urn:uuid:a81e921d-54b5-47fb-8595-ca6b2b457aee>
CC-MAIN-2013-20
http://serc.carleton.edu/NAGTWorkshops/mineralogy/activities/MinEx19MetaRxMins.html
2013-05-20T02:32:29
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698207393/warc/CC-MAIN-20130516095647-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.891721
425
4.09375
4
Knowing the exact cause of a child's hearing loss can assist clinicians and parents in making decisions regarding treatment and educational options. It may surprise some parents to know that more than half of all children who are born deaf or who become deaf very early in life have a genetic cause for their hearing loss. In fact, recent studies have revealed that approximately 50-60% of moderate to profound, congenital, or early-onset hearing loss is genetic. The remaining 40-50% of hearing loss is due to non-genetic effects, such as maternal infection (CMV or rubella), prematurity, postnatal infection (meningitis, otitis media), ototoxic drugs, or acoustic/cranial trauma. Genetic forms of hearing loss result from changes in the genetic material. The genetic material, called DNA (deoxyribonucleic acid), is contained in almost every cell in the human body. The long chains of DNA can be divided into sections, called genes. Each person inherits two copies of each gene, one from each parent. Genes control the production and function of proteins, which form the structural and regulatory elements of the body. Genes, composed of a specific sequence of the chemical units adenine (A), guanine (G), cytosine (C), and thymine (T), are fairly consistent from person to person. Estimates suggest that humans have approximately 30,000 genes, of which at least 10% are involved in determining the structure and function of the ear. Recent progress in identification of these genes has provided insight into how the ear functions and how mutations in a single gene can cause hearing loss. More than 400 different forms of hereditary hearing loss are known. Many of these forms can be distinguished from one another by audiologic characteristics (type, degree, or progression), vestibular characteristics, mode of inheritance, or the presence of other medical or physical characteristics. In the majority of cases (60-70%), hearing loss occurs as an isolated finding and is referred to as non-syndromic. The remaining 30-40% of hereditary hearing loss is syndromic, resulting from a mutation in a gene that affects the development of multiple organs. Some common syndromes associated with hearing loss are described in Table 1; however, the complete list of genetic syndromes associated with hearing loss is long and complex. Although it is not essential that professionals who work with deaf children be familiar with all of the features of syndromic forms of hearing loss, an appreciation of the complexity of these disorders and the effect they can have on the health of these individuals as well as family members (siblings and offspring) emphasizes the importance of providing referrals for genetic evaluation and encouraging families to follow through with the referrals. Genetic forms of hearing loss can also be classified by inheritance pattern. When only one copy of a mutation in a gene is necessary to cause hearing loss, the trait is inherited in a dominant pattern. Approximately 10-20% of non-syndromic deafness is inherited in a dominant manner. When two copies of a mutation in the same gene (one from each parent) are necessary to cause hearing loss, the trait is inherited in an autosomal recessive pattern. Roughly 70-80% of hearing loss is inherited in an autosomal recessive pattern. When recessive genes for deafness are located on the X chromosome, the trait is inherited as X-linked recessive. A small percentage, around 1-2%, of hearing loss can be attributed to X-linked recessive inheritance.
Mitochondrial inheritance refers to the inheritance of a genetic mutation in the genes contained within the mitochondria. The mitochondria contain a small amount of DNA (37 genes) which is passed on to the next generation only through the egg cell. Thus, mitochondrial mutations are only inherited from the mother and are passed on to all of her children. These mutations may account for 0-20% of inherited hearing loss, depending on ethnic background. The most common form of hereditary deafness is caused by mutations in the GJB2 gene. This gene, which encodes the gap junction beta protein connexin 26, is most commonly inherited in an autosomal recessive fashion. Hearing loss which results from mutations in the GJB2 gene varies in degree and progression, but most individuals have congenital, profound, stable sensorineural hearing loss. Between 10-37% of individuals with an "unknown" cause for their deafness have mutations in GJB2. A genetic evaluation can often identify the exact cause of hearing loss or at the very least, exclude many causes of the hearing loss. At some point after the identification of their hearing loss, most children can benefit from a genetic evaluation. In 2002, the American College of Medical Genetics (ACMG) published a statement entitled "Genetic Evaluation Guidelines for the Etiologic Diagnosis of Congenital Hearing Loss." In this statement, the ACMG emphasized that the appropriate management of all persons identified with congenital hearing loss requires a comprehensive genetic evaluation. The genetic evaluation should include a detailed family history, a complete physical examination, a thorough patient history, and examinations by other medical specialists, when necessary. During genetic evaluation and counseling the geneticist and/or genetic counselor will assist patients and families with the diagnosis of a genetic condition, identify associated medical issues and provide referrals for medical management. The geneticist/genetic counselor will also calculate and communicate the recurrence risk and provide psychosocial support for the family. Audiologists can play an essential role in the process of genetic diagnosis. Whether providing the initial referral to genetic services, helping to reinforce or correct misinformation, or by identifying those in need of additional support, the audiologist is an important member of the health care team. Audiologists, speech-language pathologists, and other interested professionals may obtain additional information about genetics and/or the Gallaudet University Genetics Program by viewing the Web site. [Editor's note: Arnos, Pandya, and Burton gave the audiology keynote focused on genetics Nov. 18, 2005 at the ASHA Convention in San Diego.]
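As a classroom-style illustration of the autosomal recessive pattern described above, here is a minimal Python sketch that enumerates a Punnett square. The allele labels and the carrier-by-carrier cross are hypothetical teaching values (textbook Mendelian arithmetic); real recurrence-risk counseling, as the article notes, involves much more than this calculation.

```python
from itertools import product

def offspring_genotype_probabilities(parent1, parent2):
    """Probability of each child genotype given two parental genotypes.

    Genotypes are two-letter strings: 'A' = typical allele, 'a' = a
    recessive deafness-associated allele (for example, a GJB2 mutation).
    Each parent passes one of its two alleles at random.
    """
    counts = {}
    outcomes = list(product(parent1, parent2))
    for alleles in outcomes:
        genotype = "".join(sorted(alleles))  # 'Aa' and 'aA' are the same genotype
        counts[genotype] = counts.get(genotype, 0) + 1
    return {g: n / len(outcomes) for g, n in counts.items()}

# Two hearing carrier parents ('Aa' x 'Aa'): 1/4 affected ('aa'),
# 1/2 carriers ('Aa'), 1/4 non-carriers ('AA').
print(offspring_genotype_probabilities("Aa", "Aa"))
```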
<urn:uuid:ce942ac2-8ef9-4b24-91f8-dd98b9be4394>
CC-MAIN-2013-20
http://www.asha.org/Publications/leader/2006/060117/060117a.htm
2013-05-20T02:31:49
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698207393/warc/CC-MAIN-20130516095647-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.939656
1,258
4.1875
4
The Industrial Revolution and a wave of liberal nationalist revolutions transformed Europe during the nineteenth century. A weakened old order gave way, and a number of unified European states emerged. Canada gained its independence, and the northern and southern United States reunited after a bloody civil war.
Section 1 The Industrial Revolution
The Industrial Revolution began in the late eighteenth century and turned Great Britain into the first and the richest industrialized nation. A series of technological advances caused Great Britain to become a leader in the production of cotton, coal, and iron. After the introduction of the first steam-powered locomotives, railroad tracks were laid across Great Britain, reducing the cost of shipping goods. The Industrial Revolution spread to Europe and North America. In the United States, the railroad made it possible to sell manufactured goods from the Northeast across the country. The Industrial Revolution had a tremendous social impact in Europe. Cities grew quickly, and an industrial middle class emerged. The industrial working class, meanwhile, dealt with wretched working conditions. These conditions gave rise to socialism, a movement aimed at improving working conditions through government control of the means of production.
Section 2 Reaction and Revolution
After the defeat of Napoleon, European leaders met at the Congress of Vienna to restore the old order and establish stable borders. Great Britain, Russia, Prussia, and Austria met regularly to maintain the conservative order. Meanwhile, liberalism and nationalism, two philosophies that opposed the old order, were on the rise. Many liberals were middle-class men who wanted a constitution and a share in the voting rights enjoyed by landowners. Liberals tended to be nationalists as well. In 1830, France's upper middle class overthrew the king and installed a constitutional monarchy. Belgium broke free of Dutch control. Revolts in Poland and Italy failed. Economic crisis in 1848 brought a revolt of the French working classes. This time, a Second Republic was formed, under the leadership of Napoleon's nephew, Louis-Napoleon. Revolts followed in Germany and the Austrian Empire. In each case the old order was restored.
Section 3 National Unification and the National State
Unification occurred at different times and in different forms throughout Europe and in North America. The Crimean War destroyed the Concert of Europe. A defeated Russia retreated from European affairs, and Austria was isolated. Italian and German nationalists exploited Austria's isolation. Both gained important territory in the Austro-Prussian War and the Franco-Prussian War, and a unified Germany and Italy emerged. Growing prosperity and expanded voting rights helped Great Britain avoid revolution in 1848. In 1852, the French voted to restore their empire. Louis-Napoleon became the authoritarian Napoleon III and ruled until France's defeat in the Franco-Prussian War. Austria granted Hungarians the right to govern their own domestic affairs. In Russia, Czar Alexander II freed the serfs and instituted other reforms. When a radical assassinated him, his son, Alexander III, reverted to repressive rule. The United States endured a costly civil war to settle the conflict over slavery between the Northern and Southern states. After two short rebellions, Canada won its independence from Great Britain.
Section 4 Culture: Romanticism and Realism
At the end of the eighteenth century, a new intellectual movement known as romanticism emerged as a reaction to the ideas of the Enlightenment. Romantics emphasized feelings, emotion, and imagination as sources of knowing. Many were passionately interested in the past. They developed a neo-Gothic style in architecture, and created literature, art, and music that worshiped nature and was critical of science and industry. Meanwhile, the Scientific Revolution revived interest in science. The new age of science produced important ideas, such as Louis Pasteur's germ theory of disease and Charles Darwin's theory of natural selection. The influence of the scientific outlook was readily apparent in the work of the realist novelists and artists, who depicted everyday life, including the lives of the poor, in realistic, and unromantic, detail.
<urn:uuid:153070c1-02de-49f1-816c-143b051b3dd6>
CC-MAIN-2013-20
http://www.glencoe.com/sec/socialstudies/worldhistory/gwh2003/tx/content.php4/749/1
2013-05-20T02:41:00
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698207393/warc/CC-MAIN-20130516095647-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.952741
800
4.15625
4
absolute value, magnitude of a number or other mathematical expression disregarding its sign; thus, the absolute value is positive, whether the original expression is positive or negative. In symbols, if |a| denotes the absolute value of a number a, then |a| = a for a ≥ 0 and |a| = -a for a < 0. For example, |7| = 7 since 7 > 0, and |-7| = -(-7) = 7 since -7 < 0. The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
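The piecewise definition maps directly onto a two-branch function; here is a minimal Python rendering of it (Python's built-in abs computes the same thing).

```python
def absolute_value(a):
    """|a| = a for a >= 0, and |a| = -a for a < 0."""
    return a if a >= 0 else -a

print(absolute_value(7))   # 7, since 7 > 0
print(absolute_value(-7))  # -(-7) = 7, since -7 < 0
```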
<urn:uuid:204faf78-7a3c-464d-bfc3-589035535875>
CC-MAIN-2013-20
http://www.infoplease.com/encyclopedia/science/absolute-value.html
2013-05-20T02:07:19
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698207393/warc/CC-MAIN-20130516095647-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.708007
148
4.09375
4
Life Science: Session 3 Sex Cell Production What are sex cells? Sex cells, or gametes, are unique to organisms that reproduce sexually. In animals and plants (fungi are somewhat different in this regard) there are two types of sex cells: male and female. The male sex cells are sperm, while the female sex cells are eggs. Sex cells are formed from special body cells that are typically located in sex organs. In most animals, sperm are formed in the testes of males, and eggs are formed in the ovaries of females. Sex cells contain only half of the hereditary material present in the body cells that form them. This is important because male and female sex cells ultimately join to become a fertilized egg, which gives rise to a new organism, or offspring. In order for the offspring to resemble its parents, its first cell must receive the entire genome from its two parents. For humans, we know there are 46 chromosomes in body cells existing as 23 pairs. A fertilized egg must therefore contain this same number and arrangement. In an elegant process called meiosis, each sex cell receives one member of each chromosome pair—23 total. When sperm fertilizes egg, these singles unite to reform pairs, with half the genome coming from each parent. With a few exceptions, this pattern holds true for all sexually reproducing organisms. How are sex cells produced? Sex cells are produced from special body cells that contain the entire genome. The process by which the genome is halved is very precise — it’s not just a matter of randomly dividing the chromosomes into two sets. The process involves two cell divisions. Before the first occurs, all of the chromosomes are duplicated just as they are in body cell reproduction, but what happens next is different: the two duplicated strands remain attached to each other as the members of each chromosome pair move alongside each other. During the cell division that follows, only one member of each pair is transferred to each daughter cell—this is where the number of chromosomes is halved. The two strands of each chromosome are then separated during the second cell division, still maintaining half the number that existed in the parent cell. This results in four daughter cells — sperm or egg — that contain one member of each chromosome pair. This process is called meiosis. What is the role of sex cell production in an animal life cycle? Sex cell production ensures that the genome is maintained between parent and offspring generations. Occasionally, this process goes awry with chromosome pairs not lining up or not separating. The consequences are almost always harmful, and frequently lethal to potential offspring. A successful animal life cycle therefore depends on successful sex cell production. There is another consequence to sex cell production that has a profound impact on the populations involved. Unlike body cell production, where the daughter cells are identical to parent cells, fertilized eggs result from genetic material from two different parents. Furthermore, each of these parents is only able to pass on half of its genome. The mixing and matching of half sets of chromosomes results in the astounding diversity we see in the living world. For example, we can see “parts” of both our parents when we look in the mirror. Similarly, a litter of puppies will reflect the size and coloration of both parents. The significance of this is explored in Session Five: Variation, Adaptation, and Natural Selection. 
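The scale of that "mixing and matching" is easy to quantify: choosing one member from each of the 23 chromosome pairs independently gives 2^23 possible gametes per parent, even before crossing-over is considered. A quick Python check:

```python
chromosome_pairs = 23
distinct_gametes = 2 ** chromosome_pairs
print(distinct_gametes)  # 8388608 possible chromosome combinations per parent

# Combining one gamete from each parent multiplies the two counts:
print(distinct_gametes ** 2)  # about 7.0e13 possible fertilized-egg combinations
```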
Compare body cell reproduction with sex cell production:

| | Body cell reproduction | Sex cell production |
| --- | --- | --- |
| Role in life cycle | Growth and maintenance | Reproduction |
| Where process occurs | Cells in all parts of body | Sex organs or tissues |
| Number of cell divisions | One | Two |
| What happens to chromosomes | All chromosomes line up singly, each chromosome duplicates, the two copies separate, and one copy of each chromosome is distributed to each daughter cell. | First division: chromosomes duplicate and copies remain attached, chromosome pairs line up alongside each other, the members of each pair separate, one member of each pair goes to each daughter cell. Second division: all chromosomes line up singly, the two copies separate, one copy of each chromosome is distributed to each daughter cell. |
| Number of cells that result | Two | Four |
| Number of chromosomes in resulting cells | Same number as in parent cell | Half the number as in parent cell |
| Significance | Genome is maintained; all information is passed along | Genome is halved; will be restored at fertilization |
<urn:uuid:f9e1e7fe-748a-47bc-8b98-c404e8206a3d>
CC-MAIN-2013-20
http://www.learner.org/courses/essential/life/session3/closer2.html
2013-05-20T02:30:42
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698207393/warc/CC-MAIN-20130516095647-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.928559
934
4.21875
4
Atopic dermatitis is an inflammatory, non-contagious, chronic skin disorder that involves scaly and itchy rashes. It is also called eczema. The word 'dermatitis' means inflammation of the skin, and 'atopic' refers to diseases that are hereditary and often occur together. People with atopic dermatitis often have a family history of asthma, hay fever or eczema. Atopic dermatitis is very common in all parts of the world. The disease can occur at any age but most often affects infants and small children. It may start as early as age 2-6 months, but many people outgrow it by early adulthood. It is also known as infantile eczema when it occurs in infants. People living in urban areas and in climates with low humidity are at an increased risk for developing atopic dermatitis. The cause of atopic dermatitis is not well understood; a hypersensitivity reaction in the skin may be responsible. The condition is characterized by inflammation, itching and scaling of the skin. Atopic dermatitis is often referred to as the 'itch that rashes' because the itching starts first, and the rash follows as a result of the scratching. Atopic dermatitis responds well to home care. Proper skin care reduces the need for medicines. Topical creams and oral antihistamines can be used to suppress the symptoms.
<urn:uuid:6e44736d-a77c-4e8c-93bb-41edbf2a7fd9>
CC-MAIN-2013-20
http://www.medindia.net/patients/patientinfo/general-info-about-atopic-dermatitis.htm
2013-05-20T02:22:58
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698207393/warc/CC-MAIN-20130516095647-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.936567
314
4.125
4
How did the universe get its structure? It was very smooth when it was born, with matter distributed incredibly evenly through space. Now, thanks to the action of gravity over billions of years, it is very lumpy, with dense clusters of galaxies separated by enormous voids. You can watch the process unfold in a new computer simulation by a group of scientists led by Tiziana Di Matteo of Carnegie Mellon University in Pittsburgh, Pennsylvania, US. Unlike previous simulations, Di Matteo's team included black holes in their simulation. The black holes are not highlighted in the animation, but they do influence their surroundings. Increasingly, scientists are realising that supermassive black holes weighing millions or billions of times the mass of the Sun may affect their environment more than previously thought. Their enormous gravity can capture and swallow vast quantities of matter from their immediate vicinity, but they can also produce jets and radiation that can influence matter much farther away. Another cool animation released recently shows the effects of the solar wind on Earth's magnetosphere. The solar wind constantly buffets the magnetosphere, stretching and bending magnetic field lines until they suddenly snap in what are called magnetic reconnection events. ESA's four Cluster spacecraft have been investigating this phenomenon and ESA recently put out an animation illustrating this magnetic field snapping. I'm fascinated by how a combination of science and computer graphics can show you things that you could never witness firsthand, like cosmic changes that unfold over billions of years in the case of Di Matteo's simulation, and the normally invisible dance of magnetic field lines in the case of the Cluster animation.
David Shiga, Online reporter (Image: Tiziana Di Matteo/CMU)
Labels: black holes, cluster, large-scale structure, magnetic reconnection
<urn:uuid:7db20a43-8164-47dd-bb60-e6830bc0fce4>
CC-MAIN-2013-20
http://www.newscientist.com/blog/space/labels/black%20holes.html
2013-05-20T02:30:54
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698207393/warc/CC-MAIN-20130516095647-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.935809
365
4.03125
4
Many students participate in science fairs or are assigned to do science projects or experiments for class. This guide will help you find the information you will need to complete a science project. What is a science project? Where do I start? What steps do I need to follow? What is the scientific method? If you are just beginning your research and need basic, general information about science projects here are some Web sites that can help you. Science Buddies Science Fair Project Guide Step by step information on how to do a science project from beginning to end. Dragonfly TV Science Fair Success Twelve steps to a winning science fair project from the show on PBS Kids. IPL Kidspace: Science Fair Project Resource Guide The Internet Public Library’s Science Fair Project Resource Guide will help you through the whole project by guiding you to a variety of excellent web resources. What Makes a Good Science Fair Project? A guide written by the judges of the California State Science Fair. The Science Fair Handbook, Anthony D. Fredericks. Ed.D. Publisher Houghton Mifflin provides this handbook as part of their textbook support materials for students and teachers. Do you need to find the perfect project? Would you like to find instructions for a project or subject you have in mind? These resources include lists of projects and instructions for performing them. Databases (accessible from any computer with your Pratt Library card) Curriculum Resource Center Go to the "Science Experiments" section on the main page. Click on "Experiments" under "Browse Science Resources." Science Reference Center In the Advanced Search, limit Document Type to "Science Experiment." Then search for the kind of experiment that interests you. Science Buddies Science Fair Project Ideas, Answers and Tools This site provides many science projects and allows you to narrow by general area of science or take a quiz to determine what experiments will interest you. Newton’s Apple: Science Fair Links to “try-its” and lesson activities related to topics presented on the show produced by Twin Cities Public Television. Dragonfly TV Science Fair Use the “Super Science Spinner” to determine your topic or browse a list of projects divided by area of science. IPL Kidspace Science Fair Project Resource Guide – Choosing a Topic If you need to decide what type of project you want to do, the Internet Public Library’s Science Fair Project Resource Guide provides resources that will help you. You may need more information to use as background for your project, to create a hypothesis, or to write your report. These resources provide science information that may be useful in researching your project. National Science Digital Library NSDL is the Nation’s online library for education and research in science, technology, engineering and mathematics. US NSF - Classroom Resources A collection of lessons and web resources for teachers, students and families compiled by the National Science Foundation. IPL Kidspace: Science Fair Project Resource Guide – Tools and Research Resources specifically for students who need more information to beef up their science projects. Links to “Ask an Expert” are also provided. IPL Teenspace – Science Resources to help teens with science homework compiled by the Internet Public Library. Remember, the Enoch Pratt Free Library has plenty of books about science projects too! Visit us and we’ll show you what is available. Please feel free to contact us if you need additional help.
You may call us at (410) 396-5484, email us, or write to us at: Enoch Pratt Free Library Central Library/State Library Resource Center 400 Cathedral Street Baltimore, MD 21201
<urn:uuid:8430c430-e78c-4519-b80f-43eed4f570dd>
CC-MAIN-2013-20
http://www.prattlibrary.org/research/tools/index.aspx?cat=19977&id=22026
2013-05-20T02:41:48
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698207393/warc/CC-MAIN-20130516095647-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.864779
773
4.28125
4
Let's Learn About Weather!
- Grades: 1–2
- Unit Plan:
- Observe weather.
- Listen to weather stories.
- Recognize different kinds of weather.
- Write about weather.
- Large paper cloud for brainstorming words
- Observation notebook for each student
- Cloudy with a Chance of Meatballs by Judi Barrett, available in the Teacher Store
- What Will the Weather Be Like Today? by Paul Rogers, available in the Teacher Store
- Weather Words and What They Mean by Gail Gibbons, available in the Teacher Store
- Who Cares About the Weather? by Melvin Berger, Natalie Lunis
Set Up and Prepare
- Cut out and post a large paper cloud.
- Make observation notebooks for each student. I use several sheets of paper that have a few lines for writing on the bottom and space for drawing a picture on the top. Add a cover sheet with a cloud. The students can write "Weather" in the cloud.
Step 1: Read Cloudy with a Chance of Meatballs by Judi Barrett.
Step 2: Brainstorm words students may want to use when writing about weather. These can be written on the large paper cloud.
Step 3: Have the students write a new page in the style of Cloudy with a Chance of Meatballs. Bind the pages into a book.
Step 1: Read What Will the Weather Be Like Today? by Paul Rogers and Weather Words and What They Mean by Gail Gibbons.
Step 2: Discuss types of weather. Add weather words to the word cloud.
Step 3: Have the students draw the weather they see outside the classroom in their observation notebook and write words or sentences that describe it.
Step 1: Read Who Cares About the Weather? by Melvin Berger and Natalie Lunis.
Step 2: Have the students write and illustrate a page for the class' own book, Who Cares About the Weather? Bind the pages into a book.
Learn more about weather and play weather games with these fun Web sites.
- Dan's Wild Weather page http://www.wildwildweather.com/
Tell someone at home three facts about weather. Tomorrow, be ready to share what you said with your table partner. (The next day, the teacher will walk around the classroom listening to what students are sharing and make comments. Then the students will be asked to share with a different partner and listen to what that person said to someone at home.)
- Was there enough time?
- Were the students successful or frustrated observing the weather?
- Were the students able to combine weather elements such as cold "and" windy?
- Were the students able to write about weather without a lot of help?
- Could the students tell me about weather?
- How many students were able to identify a type of weather?
- How many students were also able to describe the weather in their observation notebooks?
Copies of the students' observation notebooks will be saved for their assessment portfolios.
<urn:uuid:0ba1b2f3-4bd5-477a-acb8-257ae0135f0c>
CC-MAIN-2013-20
http://www.scholastic.com/teachers/lesson-plan/lets-learn-about-weather
2013-05-20T02:21:54
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698207393/warc/CC-MAIN-20130516095647-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.925407
626
4.25
4
A leaf is a plant's principal organ of photosynthesis, the process by which sunlight is used to form foods from carbon dioxide and water. Leaves also help in the process of transpiration, or the loss of water vapor from a plant. A typical leaf is an outgrowth of a stem and has two main parts: the blade (flattened portion) and the petiole (pronounced PET-ee-ole; the stalk connecting the blade to the stem). Some leaves also have stipules, small paired outgrowths at the base of the petiole. Scientists are not quite sure of the function of stipules. Leaf size and shape differ widely among different species of plants. Duckweeds are tiny aquatic plants with leaves that are less than 0.04 inch (1 millimeter) in diameter, the smallest of any plant species. Certain species of palm trees have the largest known leaves, more than 230 feet (70 meters) in length. Words to Know Abscission layer: Barrier of special cells created at the base of petioles in autumn. Blade: Flattened part of a leaf. Chloroplasts: Small structures that contain chlorophyll and in which the process of photosynthesis takes place. Margin: Outer edge of a blade. Midrib: Single main vein running down the center of a blade. Petiole: Stalk connecting the blade of a leaf to the stem. Phloem: Plant tissue consisting of elongated cells that transport carbohydrates and other nutrients. Photosynthesis: Process by which a plant uses sunlight to form foods from carbon dioxide and water. Stomata: Pores in the epidermis of leaves. Transpiration: Evaporation of water in the form of water vapor from the stomata. Xylem: Plant tissue consisting of elongated cells that transport water and mineral nutrients. A leaf can be classified as simple or compound according to its arrangement. A simple leaf has a single blade. A compound leaf consists of two or more separate blades, each of which is termed a leaflet. Each leaflet can be borne at one point or at intervals on each side of a stalk. Compound leaves with leaflets originating from the same point on the petiole (like fingers of an outstretched hand) are called palmately compound. Compound leaves with leaflets originating from different points along a central stalk are called pinnately compound. All leaves, no matter their shape, are attached to the stem in one of three ways: opposite, alternate, or whorled. Opposite leaves are those growing in pairs opposite or across from each other on the stem. Alternate leaves are attached on alternate sides of the stem. Whorled leaves are three or more leaves growing around the stem at the same spot. Most plant species have alternate leaves. The outer edge of a blade is called the margin. An entire margin is one that is smooth and has no indentations. A toothed margin has small or wavy indentations. A lobed margin has large indentations (called sinuses) and large projections (called lobes). Venation is the pattern of veins in the blade of a leaf. A single main vein running down the center of a blade is called a midrib. Several main veins are referred to as principle veins. A network of smaller veins branch off from a midrib or a principle vein. All veins transport nutrients and water in and out of the leaves. The two primary tissues in leaf veins are xylem (pronounced ZY-lem) and phloem (pronounced FLOW-em). Xylem cells mainly transport water and mineral nutrients from the roots to the leaves. Phloem cells mainly transport carbohydrates (made by photosynthesis) from the leaves to the rest of the plant. 
Typically, xylem cells are on the upper side of the leaf vein and phloem cells are on the lower side. Internal anatomy of leaves Although the leaves of different plants vary in their overall shape, most leaves are rather similar in their internal anatomy. Leaves generally consist of epidermal tissue on the upper and lower surfaces and mesophyll tissue throughout the body. Epidermal cells have two features that prevent the plant from losing water: they are packed densely together and they are covered by a cuticle (a waxy layer secreted by the cells). The epidermis usually consists of a single layer of cells, although the specialized leaves of some desert plants have epidermal layers that are several cells thick. The epidermis contains small pores called stomata, which are mostly found on the lower leaf surface. Each individual stoma (pore) is surrounded by a pair of specialized guard cells. In most species, the guard cells close their stomata during the night (and during times of drought) to prevent water loss. During the day, the guard cells open their stomata so they can take in carbon dioxide for photosynthesis and give off oxygen as a waste product. The mesophyll layer is divided into two parts: palisade cells and spongy cells. Palisade cells are densely packed, elongated cells lying directly beneath the upper epidermis. These cells house chloroplasts, small structures that contain chlorophyll and in which the process of photosynthesis takes place. Spongy cells are large, often odd-shaped cells lying underneath palisade cells. They are loosely packed to allow gases (carbon dioxide, oxygen, and water vapor) to move freely between them. Leaves in autumn Leaves are green in summer because they contain the pigment chlorophyll, which absorbs all the wavelengths of sunlight except for green (sunlight or white light comprises all the colors of the visible spectrum: red, orange, yellow, green, blue, indigo, and violet). In addition to chlorophyll, leaves contain carotenoid (pronounced kuh-ROT-in-oid) pigments, which appear orange-yellow. In autumn, plants create a barrier of special cells, called the abscission (pronounced ab-SI-zhen) layer, at the base of the petiole. Moisture and nutrients from the plant are cut off and the leaf begins to die. Chlorophyll is very unstable and begins to break down quickly. The carotenoid pigments, which are more stable, remain in the leaf after the chlorophyll has faded, giving the plant a vibrant yellow or gold appearance. The red autumn color of certain plants comes from a purple-red pigment known as anthocyanin (pronounced an-tho-SIGH-a-nin). Unlike carotenoids, anthocyanins are not present in a leaf during the summer. They are produced only after a leaf starts to die. During the autumn cycle of warm days and cool nights, sugars remaining in the leaf undergo a chemical reaction, producing anthocyanins. [ See also Photosynthesis ]
<urn:uuid:544743de-e584-4ec5-ad1a-bf2f9b92b7ea>
CC-MAIN-2013-20
http://www.scienceclarified.com/Io-Ma/Leaf.html
2013-05-20T02:30:20
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698207393/warc/CC-MAIN-20130516095647-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.949533
1,450
4.09375
4
THE ULTIMATE CLUE, MY DEAR WATSON: DNA FINGERPRINTING

Students learn practical applications of DNA profiling in today's forensic science and the future's many possibilities. While viewing the video, students delve into the problems of extracting ancient DNA from fossils. The molecular biologists in the video use DNA profiling to sequence pieces of the dinosaurs' genome. After viewing the video, students will simulate DNA profiling with electrophoresis gel to solve a possible baby mix-up at the hospital.

Video: NOVA: The Real Jurassic Park

Students will be able to:
- Explain the steps of DNA profiling
- Describe the possible usefulness of DNA profiling to our society
- Contrast extracting ancient DNA from fossils with extracting modern DNA from blood and other cells of organisms

Materials, per group of two:
- Copies of the lab "Will the Real Baby Smith Please Speak Up!"
- Glue or tape

To prepare students for the video, explain the steps of extracting and profiling DNA. Explain how DNA is extracted and isolated from cells. Cell membranes are lysed with detergent, which dissolves the lipid component of the cell membrane and exposes the proteins and nucleic acids. DNA must be extracted at a temperature range of 50°-60° Celsius; temperatures exceeding 60° C may denature the DNA. The DNA is then precipitated with ethanol: because DNA is soluble in aqueous solutions but not in ethanol, adding ethanol pulls it out of solution.

For an interesting demonstration of extracting DNA, purchase one DNA spooling kit. This laboratory activity allows students to extract DNA from salmon sperm. The students add ethanol to the sperm solution and precipitate DNA by spooling it onto a stirring rod. The students are delighted. Request Sigma product D-8666 (1 kit, $18.90) from: Sigma Chemical Company, PO Box 14508, St. Louis, MO 63178-9916, (800) 325-3010.

Explain how DNA is cut with restriction enzymes, run through an electrophoresis gel, and probed with a radioactive substance, which appears on the film. Many different restriction enzymes are available, and scientists choose the one that will cut the DNA at the appropriate place in the sequence. Each person has a slightly different sequence, and when probed with radioactive substances it will produce a unique set of bands.

Ask the students for possible uses of DNA profiling. Have students bring in news articles of recent court cases that use DNA fingerprints as evidence. Another use of DNA profiling is to locate genes in the human genome. Once these genes are found, they can be isolated and possibly inserted into people whose genes are not functioning properly. Discuss the possibilities of cutting out dysfunctional genes and inserting functional ones to cure a disease. If there is time, distribute the "Designer Genes" worksheet and hold a mock trial or debate about the moral implications of such genetic engineering.

Discuss probability, a branch of math that predicts the occurrence of chance events. Give students statistics for DNA testing in various court cases and have them predict the chances of two people having similar bands.

To give students a specific responsibility while viewing, have them write down each hurdle scientists must overcome to extract and sequence a dinosaur's DNA (4 billion base pairs!). Ask if they believe Jurassic Park could become a reality some day.

BEGIN the NOVA video "The Real Jurassic Park". PAUSE when Jeff Goldblum says, "Jurassic Park has left me wondering if we will ever see dinosaurs in the zoo someday." Ask the students the same question again: "How many of you believe that someday Jurassic Park could become reality?"
RESUME the video. PAUSE after Michael Crichton speaks about his book. Discuss what is meant by a genetically engineered dinosaur.

RESUME the video. PAUSE after Step 1, Find Dinosaur DNA. Contrast the uses and accessibility of obtaining DNA from modern living organisms with that of dinosaur fossils. As students watch the rest of this segment, have them write down all the problems that must be overcome just to extract and assemble DNA from dinosaur fossils. (Small organisms the size of a pinpoint; a range of species of dinosaur DNA in an insect's stomach; trying to figure out which species the DNA came from; and, if the DNA is found in bones, the bones must be well preserved.)

RESUME the video. PAUSE when Poinar Jr. holds up the gel sheet and points to the DNA bands. Explain that this is the DNA profile film that was discussed in previewing.

RESUME the video. PAUSE after the color of ancient DNA and modern DNA is shown, when the female scientist takes them out of the freezer. Discuss the viability of DNA that is only 13,000 years old compared to 100-million-year-old dinosaur DNA. Ask, "What do you think happens to DNA over one million years?"

RESUME the video. STOP the tape after John Horner shows the jaw of a raptor and discusses manipulating eggs and sperm. Discuss on and off switches for genes in our bodies. Tie this into prenatal care and how important it is to allow off genes to stay off and on genes to come on. Drugs and alcohol can affect this delicate balance and cause diseases or deformities. On a positive note, genetic engineers may learn to turn on genes that code for important proteins such as insulin. This could rid a person of diabetes.

To prepare students for the lab activity "Will the Real Baby Smith Please Speak Up!" review the processes of DNA extraction and DNA profiling. Discuss how DNA is separated in gel solutions. Gel electrophoresis is a technique which separates charged particles such as nucleic acids by running them through an electrical field. The DNA segments (which were cut by restriction enzymes) migrate toward the opposite charge at the other end of the gel. The smallest fragments travel, or migrate, through the gel the fastest (a toy numerical sketch of this migration appears at the end of this lesson). A radioactive probe is then placed on the bands, and comparisons or conclusions can be drawn as to whose DNA fingerprint most closely matches.

Tell the students they will be simulating the process of DNA profiling in the activity "Will the Real Baby Smith Please Speak Up!" Explain that a simulation allows students to understand each step of DNA profiling. Students will simulate cutting the DNA with enzymes, running it through the gel, attaching radioactive probes, and developing the film to see bands.

Have students predict the impact of biotechnology on their future and place their ideas in a time capsule to be opened at their 10-year reunion. Invite a genetic engineer into your classroom to share recent research being performed. It is possible that he/she could bring in equipment to show how the process of gel electrophoresis is used to analyze DNA. Visit your local forensic lab to learn how DNA fingerprinting is useful in solving crimes. Have students write to local judges asking if DNA profiling has ever been used to help solve crimes in their immediate area.

LANGUAGE ARTS/SOCIAL STUDIES: Finish viewing the NOVA tape "The Real Jurassic Park". Invite the students to express their opinions of bioengineering using the debate "Should dinosaurs be brought back to life and placed in modern society?" Have students write arguments to support their position.
Share ideas the next day in class to spark a debate. Have students prepare a combined argument to present to the Environmental Protection Agency. If some students are unsure, have them represent the Environmental Protection Agency to make the final decision based on the arguments presented by classmates. ("Exploring Jurassic Park," The Science Teacher, November 1993, Simmons and Wylie)

ART/ENGINEERING: Have students draw or create a three-dimensional model of a dinosaur zoo for tourists to visit. Make sure the students research appropriate habitat, food sources, space for the size of the dinosaur, etc. Students could use facts from Michael Crichton's book Jurassic Park for guidelines.

SCIENCE/SOCIETY: Take students to the library and look up recent articles on breakthroughs in the Human Genome Project. Diseases such as Alzheimer's, heart disease, and many other genetically inherited disorders are being mapped by scientists internationally. After reading the articles, students could predict the outcomes of such technology for the year 2100. Relate this to vaccines and medicines that were not around 100 years ago.

Purchase the SCIENCE SLEUTHS videodisc from VIDEODISCOVERY and have students solve the mystery of the Forgotten Triplet. This is an interactive videodisc that allows students to witness interviews, look at documents, and see results of scientific tests such as DNA profiles to determine which person could be the long-lost triplet to share the inheritance.

Master Teachers: Suzanne Asaturian and Cindy Vernon
Lesson Plan Database, Thirteen Ed Online
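The gel-separation idea in this lesson (smaller restriction fragments migrate farther through the gel) can also be demonstrated numerically. The following is a toy sketch only: the fragment sizes and the logarithmic distance rule with arbitrary scaling constants are illustrative assumptions, not measured gel behavior.

```cpp
#include <algorithm>
#include <cmath>
#include <iostream>
#include <vector>

int main() {
    // Hypothetical restriction fragments, in base pairs.
    std::vector<int> fragments = {3000, 1500, 800, 400, 150};

    // Rule of thumb for agarose gels: migration distance falls off roughly
    // with the logarithm of fragment length. The constants a and b are
    // arbitrary, chosen only to give readable output.
    const double a = 10.0, b = 2.5;

    std::sort(fragments.begin(), fragments.end());
    std::cout << "fragment (bp)  distance (cm)\n";
    for (int bp : fragments) {
        double d = a - b * std::log10(static_cast<double>(bp));
        // Smallest fragments travel farthest down the gel.
        std::cout << bp << "            " << d << "\n";
    }
}
```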
<urn:uuid:08367d39-fc58-49eb-b081-a0a738af834e>
CC-MAIN-2013-20
http://www.thirteen.org/edonline/nttidb/lessons/cb/dnacb.html
2013-05-20T02:40:24
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698207393/warc/CC-MAIN-20130516095647-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.898938
1,880
4.03125
4
Definition: Decius may have been born around 201 to a senatorial family. He was governor in Moesia in the mid 230s. Decius married Herennia Cupressenia Etruscilla. Philip the Arab sent Decius to restore order along the Danube. When he arrived, local troops killed Marinus who had established himself as emperor in the area. The troops named Decius emperor and asked him to lead them in revolt against Philip the Arab. After Philip died, either in the fighting or by assassination, Decius was made emperor. Decius took on the name "Trajan" partly for its association with the area in which he had served and partly because of the respect in which Trajan was held. He implemented a building plan and, according to Christian sources, a persecution. Decius died while on campaign near Nicopolis.
<urn:uuid:36c03b3b-142c-4a57-ade2-75dbf12e2106>
CC-MAIN-2013-20
http://ancienthistory.about.com/od/emperor1/g/Decius.htm
2013-05-22T21:59:25
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702448584/warc/CC-MAIN-20130516110728-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.992176
174
4.03125
4
February 06, 2012

Examining the insect's seven-centimeter-long fossil wings under a microscope, researchers were able to see how the prehistoric male katydid employed stridulation, i.e. rubbing body parts together, to produce a song to attract a female. Given the insect's morphology, biomechanical experts were then able to decipher that the insect sang at a low pitch of 6.4 kHz, with each note lasting about 16 milliseconds, producing a single, decisive note.

"This discovery indicates that pure tone communication was already exploited by animals in the middle Jurassic, some 165 million years ago. For Archaboilus, as for living [katydid] species, singing constitutes a key component of mate attraction," explains Daniel Robert, biomechanical expert at Bristol's School of Biological Sciences, in a press release. "Singing loud and clear advertises the presence, location and quality of the singer, a message that females choose to respond to—or not."

This is the first insect fossil to betray the secrets of its mating calls, and researchers say it's likely the most ancient music ever heard. The katydid's song was adapted to travel long distances in a sparse environment of conifers and giant ferns. However, by singing, the katydid would have also attracted unwanted attention: hungry predators.

"Today, all species of katydids that use musical calls are nocturnal, so musical calls in the Jurassic were also most likely an adaptation to nocturnal life. Being nocturnal, Archaboilus musicus probably escaped from diurnal predators like Archaeopteryx, but it cannot be ruled out that Jurassic insectivorous mammals like Morganucodon and Dryolestes also listened to the calls of Archaboilus and preyed on them," explains Fernando Montealegre-Zapata, also a biomechanical expert.

How species adapt is often informed by such difficult choices: Archaboilus musicus's loud musical tone allowed mates to hear it far and wide, perhaps giving it an edge in breeding; however, the trade-off may have meant a higher chance of being eaten. The katydid family that Archaboilus musicus belongs to first arose in the late Triassic before vanishing entirely prior to the early Cretaceous; however, its propensity for song would survive to modern katydids.

CITATION: Gu, J. J., Montealegre-Z, F., Robert, D., Engel, M. S., Qiao, G. X. and Ren, D. Wing stridulation in a Jurassic katydid (Insecta, Orthoptera) produced low-pitched musical calls to attract females. Proc. Natl. Acad. Sci. USA. DOI:10.1073/pnas.111837210

Scientists discover giant species of crocodile; luckily it is extinct (09/15/2011): Researchers excavating a coal mine in Colombia have discovered a previously unknown species of prehistoric crocodile. The beast is described in the September 15 issue of the journal Palaeontology.

King of dinosaurs was a hunter, not a scavenger (01/26/2011): Ecologists say they have used a computer model to put to rest a nearly century-old debate. Did Tyrannosaurus rex, one of the world's most well-known dinosaurs, hunt down its prey like a lion on the plains, or, instead, did it scavenge meals from other hunters like a vulture? According to scientists with the Zoological Society of London (ZSL), the Tyrannosaurus had only one choice in order to survive: hunt.

Picture: scientists identify first known single-fingered dinosaur (01/25/2011): Paleontologists working in China have discovered a first for dinosaurs: a species with only one finger.
Named Linhenykus monodactylus, the extinct species stood only about two feet high and weighed about as much as a large parrot. Although small, the new dinosaur was a member of the carnivorous theropod dinosaurs, which include the infamous Tyrannosaurus rex. The find was announced in the Proceedings of the National Academy of Sciences.
<urn:uuid:dc0a8e26-9ce4-4826-a51c-0cccceaf82e2>
CC-MAIN-2013-20
http://news.mongabay.com/2012/0206-hance_jurassickatydid.html
2013-05-22T21:32:33
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702448584/warc/CC-MAIN-20130516110728-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.938326
872
4.375
4
Subjunctive vs. indicative – present (noun, adjective and adverbial clauses)

This course presents Spanish grammar and vocabulary. If this is too basic, then please try Intermediate Spanish or Advanced Spanish. If your goal is to Speak Spanish Today, then you should go to Conversational Spanish.

Prepositions - Por vs. Para: The Spanish prepositions por and para tend to be difficult for students, because they can, but don't always, mean "for". Comparación entre Por y Para

Irregular Spanish Verbs: Irregular verbs tend to be difficult for students, because they do not follow the rules. Verbos Irregulares

The Spanish Verb Tener: The Spanish verb tener is used to indicate possession and for a multitude of other uses, such as expressing hunger, thirst, fear, luck, and much, much more! Verbo Tener y Usos

Spanish Reflexive Verbs: Reflexive pronouns and verbs are used much more in Spanish than in English. This section will teach you how they work.

Idiomatic Expressions Using Vez: In Spanish, there are a number of idiomatic expressions that employ "vez". Don't leave home without knowing these!

Spanish Negative Words: Also referred to as negation. These are negative words, expressions and constructions used in Spanish.

Learning interrogative pronouns is easy. The difficulty lies in determining when to use which one.

The Verb Gustar: The verb gustar is used to express likes and dislikes. It is formed in a unique manner.

The Preterite (Simple Past) is a past action tense. We use the Preterite to answer the question "What happened?".

Imperfect - Regular Verbs: Do you ever have problems figuring out whether to use preterite or imperfect? This section will help you know which to use.

Imperfect - Irregular Verbs: Yep, the Spanish imperfect also has irregular verbs!

We use the future simple to talk about future actions.

Expressing Future Plans with the Verb "ir": It is often useful to discuss the future by using the present form of ir (to go), the preposition a, and the infinitive form of the desired verb. Usually, this translates in English as "going to."

Spanish adverbs provide additional information about manner, quantity, frequency, time, or place. Adverbs explain when, how, where, how often, or to what degree something is done.

Spanish Relative Pronouns: The words that, which, and who are not just used in questions. When they are used in statements they are called relative pronouns.

Comparative and Superlative Adjectives: The correct use of the comparative and superlative forms is key when learning how to express your opinion or make comparative judgments.

Spanish Gerunds and the Progressive Tenses: The gerund (gerundio) is a special, invariable form of the verb which always ends in -ndo. It is mistakenly referred to as the "present participle". The progressive tenses express an action viewed as being in progress. Do not use the progressive for other purposes, such as for expressing a future action. Do not overuse the progressive tenses, since they are used far less frequently in Spanish than in English. Do not use them unless you are portraying an action as truly being in progress.

Spanish Past Participle: Spanish past participles typically end in -ado or -ido. The word "worked" is a past-tense verb in the sentence "I worked" but a past participle in "I have worked."

Spanish Present Perfect Tense: The present perfect may be used to indicate an action or state as having occurred, and having been completed, prior to the present time.
Spanish Past Perfect Tense: The past perfect tense expresses an action, state or event that was already completed before the start of another past action, state or event.

Spanish Passive Voice: The passive voice in Spanish is most frequently used in the preterit, although it can occur in any tense, both in the indicative and in the subjunctive.

Spanish Future Perfect Tense: This tense views an action or state as having occurred, and been completed, at some time in the future. It is used in Spanish in the same way it is used in English.

Spanish Direct Object: A direct object is the noun or pronoun that the verb acts directly on.

Spanish Indirect Object: An indirect object is the person affected by the action but not acted directly upon.

Direct Object Pronouns: Direct object pronouns receive the action of the verb.

Indirect Object Pronouns: Indirect object pronouns present a way in Spanish to answer the question: to or for whom or what?

Double Object Pronouns: A Spanish sentence can have both a direct and an indirect object pronoun. These "double object pronouns" cannot be separated, and the indirect pronoun always precedes the direct pronoun.

A conjunction is a word that creates a relation among words, phrases, clauses or sentences. Conjunctions have no meaning by themselves.

Spanish Present Subjunctive: The Present Subjunctive refers to things which may, or may not, happen.

Irregular Verbs - pensar: The Spanish verb pensar is irregular in the Present Subjunctive.

Irregular Verbs - entender: The Spanish verb entender is irregular in the Present Subjunctive.

Irregular Verbs - sentir: The Spanish verb sentir is irregular in the Present Subjunctive.

Irregular Verbs - acordar: The Spanish verb acordar is irregular in the Present Subjunctive.

Irregular Verbs - mover: The Spanish verb mover is irregular in the Present Subjunctive.

Irregular Verbs - dormir: The Spanish verb dormir is irregular in the Present Subjunctive.

Irregular Verbs - pedir: The Spanish verb pedir is irregular in the Present Subjunctive.

The imperfect tense of the subjunctive mood is used to express the same subjective attitudes as the present subjunctive, but in the past.

Spanish Conditional Tense: The Conditional tense works hand in hand with the imperfect subjunctive to create situations that can be characterized by "if this, then that."

Spanish Conditional Perfect: The Conditional Perfect is formed with the verb haber in various tenses + the past participle.

Present Perfect of the Subjunctive Mood: The present perfect subjunctive, also known as the past or perfect subjunctive, is used when a verb or expression requiring the subjunctive in the main clause is in the present, future, or present perfect.

Pluperfect of the Subjunctive Mood: The past perfect tense expresses an action, state or event that was already completed before the start of another past action.

Spanish Imperative Mood: The imperative mood is used to give orders or commands. The imperative mood only has one tense, the present tense.

Spanish Language Exercises: Links to all of the exercises listed above.
<urn:uuid:39f928d8-93e7-45c8-ae37-4dbaec889f5f>
CC-MAIN-2013-20
http://www.123teachme.com/learn_spanish/node/7034
2013-05-22T21:44:38
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702448584/warc/CC-MAIN-20130516110728-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.928242
1,495
4.28125
4
Analyse and evaluate diverse sources (examples: primary sources, secondary sources, maps, images, archaeological evidence) in order to locate and interpret the ruins of the pre-Deportation Acadian community at Grand-Pré. Develop a hypothesis about the location of the Grand-Pré parish church, Saint-Charles-des-Mines.

Read the backgrounder that gives the pre-Deportation historical context of the community of Grand-Pré by clicking on The Story icon.

Click on the Virtual Excavation icon and then:
- Click on The Site to read about Grand-Pré National Historic Site of Canada.
- Click on the Research Question and read it.
- Click on the Historical Evidence. Read and evaluate the primary and secondary sources by answering the questions that your teacher has provided. (The questions are also located at the bottom of the Historical Evidence page. You can print them out.)
- Click on the Archaeological Evidence and read the What is archaeology? backgrounder.
- Conduct archaeological excavations within Grand-Pré National Historic Site by clicking on the Archaeological Site Map. There are nine possible excavation sites on this map. Examine the archaeological drawings and artifacts at each site and record your discoveries in the field notebook provided by your teacher. (This is also located at the bottom of the Archaeological Evidence page. You can print it out.)

Formulate a hypothesis as to the identities of the archaeological sites you have examined and the possible location of the church of Saint-Charles-des-Mines, and present it to your classmates.

See also: The Story ; Virtual Excavation ; Site Report
<urn:uuid:34ab5dd2-4521-4009-9737-94a53ec15cbf>
CC-MAIN-2013-20
http://www.grand-pre.com/en/for-teachers.html
2013-05-22T21:25:17
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702448584/warc/CC-MAIN-20130516110728-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.903829
343
4.03125
4
What are gamma rays?

A gamma ray is a packet of electromagnetic energy--a photon. Gamma photons are the most energetic photons in the electromagnetic spectrum. Gamma rays (gamma photons) are emitted from the nucleus of some unstable (radioactive) atoms.

What are the properties of gamma radiation?

Gamma radiation is very high-energy radiation. Gamma photons have about 10,000 times as much energy as the photons in the visible range of the electromagnetic spectrum. Gamma photons have no mass and no electrical charge--they are pure energy. Because of their high energy, gamma photons travel at the speed of light and can cover hundreds to thousands of meters in air before spending their energy. They can pass through many kinds of materials, including human tissue. Very dense materials, such as lead, are commonly used as shielding to slow or stop gamma photons. Their wavelengths are so short that they must be measured in nanometers, billionths of a meter. They range from 3/100ths to 3/1,000ths of a nanometer.

What is the difference between gamma rays and x-rays?

Gamma rays and x-rays, like visible, infrared, and ultraviolet light, are part of the electromagnetic spectrum. While gamma rays and x-rays pose the same hazard, they differ in their origin. Gamma rays originate in the nucleus. X-rays originate in the electron fields surrounding the nucleus.

What conditions lead to gamma ray emission?

Gamma emission occurs when the nucleus of a radioactive atom has too much energy. It often follows the emission of an alpha or beta particle.

What happens during gamma decay?

Cesium-137 provides an example of radioactive decay by gamma radiation. Scientists think that a neutron transforms to a proton and a beta particle. The additional proton changes the atom to barium-137. The nucleus ejects the beta particle. However, the nucleus still has too much energy and ejects a gamma photon (gamma radiation) to become more stable.

How does gamma radiation change in the environment?

Gamma rays exist only as long as they have energy. Once their energy is spent, whether in air or in solid materials, they cease to exist. The same is true for x-rays.

How are people exposed to gamma radiation?

Most people's primary source of gamma exposure is naturally occurring radionuclides, particularly potassium-40, which is found in soil and water, as well as meats and high-potassium foods such as bananas. Radium is also a source of gamma exposure. However, the increasing use of nuclear medicine (e.g., bone, thyroid, and lung scans) contributes an increasing proportion of the total for many people. Also, some man-made radionuclides that have been released to the environment emit gamma rays.

Most exposure to gamma and x-rays is direct external exposure. Most gamma and x-rays can easily travel several meters through air and penetrate several centimeters in tissue. Some have enough energy to pass through the body, exposing all organs. X-ray exposure of the public is almost always in the controlled environment of dental and medical facilities. Although they are generally classified as an external hazard, gamma emitting radionuclides do not have to enter the body to be a hazard. Gamma emitters can also be inhaled, or ingested with water or food, and cause exposures to organs inside the body. Depending on the radionuclide, they may be retained in tissue, or cleared via the urine or feces.

Does the way a person is exposed to gamma or x-rays matter?

Both direct (external) and internal exposure to gamma rays or x-rays are of concern.
Gamma rays can travel much farther than alpha or beta particles and have enough energy to pass entirely through the body, potentially exposing all organs. A large portion of gamma radiation passes through the body without interacting with tissue--the body is mostly empty space at the atomic level and gamma rays are vanishingly small in size. By contrast, alpha and beta particles inside the body lose all their energy by colliding with tissue and causing damage. X-rays behave in a similar way, but have slightly lower energy.

Gamma rays do not directly ionize atoms in tissue. Instead, they transfer energy to atomic particles such as electrons (which are essentially the same as beta particles). These energized particles then interact with tissue to form ions, in the same way radionuclide-emitted alpha and beta particles would. However, because gamma rays have more penetrating energy than alpha and beta particles, the indirect ionizations they cause generally occur farther into tissue (that is, farther from the source of radiation).
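The claim above that gamma photons carry roughly 10,000 times the energy of visible-light photons follows directly from the photon energy relation E = hc/λ. Here is a minimal sketch of that check; the specific wavelengths are illustrative values picked from the ranges given above, not fixed constants of the problem.

```cpp
#include <iostream>

int main() {
    const double h  = 6.626e-34;  // Planck's constant, J*s
    const double c  = 2.998e8;    // speed of light, m/s
    const double eV = 1.602e-19;  // joules per electron-volt

    double lambdaVisible = 550e-9;   // ~middle of the visible range, 550 nm
    double lambdaGamma   = 0.03e-9;  // long-wavelength end of the gamma range above

    double eVisible = h * c / lambdaVisible;  // photon energy in joules
    double eGamma   = h * c / lambdaGamma;

    std::cout << "visible photon: " << eVisible / eV << " eV\n";        // ~2.3 eV
    std::cout << "gamma photon:   " << eGamma / eV / 1000.0 << " keV\n"; // ~41 keV
    std::cout << "ratio:          " << eGamma / eVisible << "\n";        // ~1.8e4
}
```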
<urn:uuid:b21545f5-462d-4679-b8fb-a45f9d675683>
CC-MAIN-2013-20
http://www.gsseser.com/Quarterlies/Gamma.htm
2013-05-22T21:32:33
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702448584/warc/CC-MAIN-20130516110728-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.903329
1,048
4.0625
4
Lesson Plans and Worksheets: Browse by Subject

Argentina Teacher Resources

Find teacher-approved Argentina educational resource ideas and activities.

In this geography worksheet, students study the names of 60 countries. Students indicate their home country by coloring its name in the word grid puzzle. Students then answer 8 questions about their country concerning what language is spoken, what it is famous for, and popular foods and celebrities.

Are your middle and high schoolers having trouble with tests? Do they need skills to improve reading comprehension? Take the time to teach some useful strategies for both. Working together as a class or in small groups, discuss study strategies, review the RRAP reading method, practice making a study plan, and then put it all to use! Although this resource is missing links to necessary handouts, it is still an excellent source providing teachers with a great lesson idea.

Introduce your class to a different way of life. They will meet a little Argentinian girl who visits her Abuela and Abuelo in their candy factory. Not only will they see how treats are different in different places, they can see how a small factory functions. Ask your class to find out what their grandparents do for a living.

How do facts and opinions impact the news? After reading "How to Cover a War" from the New York Times, middle schoolers evaluate the claims in the article. They also consider the media's responsibilities in reporting during wartime. Additionally, they write letters to the editor to express their own opinion.

Sixth graders determine the mean, range, median and mode of a set of numbers and display them. In this data lesson, students form a set of data and use a computer spreadsheet to display the information. They extend the process by looking at data samples from real life and the conclusions that can be drawn from analyzing the data collected.
<urn:uuid:91282d31-aaeb-42c1-a4b5-c35259eccae7>
CC-MAIN-2013-20
http://www.lessonplanet.com/lesson-plans/argentina/6
2013-05-22T21:59:07
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702448584/warc/CC-MAIN-20130516110728-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.933147
376
4.15625
4
Two Cars in 2-Dimensional Collision

Collisions between objects are governed by laws of momentum and energy. When a collision occurs in an isolated system, the total momentum of the system of objects is conserved. Provided that there are no net external forces acting upon the objects, the momentum of all objects before the collision equals the momentum of all objects after the collision. If there are only two objects involved in the collision, then the momentum changes of the individual objects are equal in magnitude and opposite in direction.

Certain collisions are referred to as elastic collisions. Elastic collisions are collisions in which both momentum and kinetic energy are conserved. The total system kinetic energy before the collision equals the total system kinetic energy after the collision. If total kinetic energy is not conserved, then the collision is referred to as an inelastic collision.

The animation below portrays the inelastic collision between two 1000-kg cars. The before- and after-collision velocities and momentum are shown in the data tables. In the collision between the two cars, total system momentum is conserved. Yet this might not be apparent without an understanding of the vector nature of momentum. Momentum, like all vector quantities, has both a magnitude (size) and a direction. When considering the total momentum of the system before the collision, the individual momenta of the two cars must be added as vectors. That is, 20 000 kg*m/s, East must be added to 10 000 kg*m/s, North. The sum of these two vectors is not 30 000 kg*m/s; this would only be the case if the two momentum vectors had the same direction. Instead, the sum of 20 000 kg*m/s, East and 10 000 kg*m/s, North is 22 361 kg*m/s at an angle of 26.6 degrees North of East. Since the two momentum vectors are at right angles, the magnitude of their sum can be found using the Pythagorean theorem; the direction can be found using SOH CAH TOA (specifically, the tangent function).

The value 22 361 kg*m/s is the total momentum of the system before the collision; and since momentum is conserved, it is also the total momentum of the system after the collision. Since the cars have equal mass, the total system momentum is shared equally by each individual car. In order to determine the momentum of either individual car, this total system momentum must be divided by two (approx. 11 200 kg*m/s). Once the momenta of the individual cars are known, the after-collision velocity is determined by simply dividing momentum by mass (v = p/m).

An analysis of the kinetic energy of the two objects reveals that the total system kinetic energy before the collision is 250 000 Joules (200 000 J for the eastbound car plus 50 000 J for the northbound car). After the collision, the total system kinetic energy is 125 000 Joules (62 500 J for each car). The total kinetic energy before the collision is not equal to the total kinetic energy after the collision. A large portion of the kinetic energy is converted to other forms of energy such as sound energy and thermal energy. A collision in which total system kinetic energy is not conserved is known as an inelastic collision.

For more information on physical descriptions of motion, visit The Physics Classroom Tutorial, where detailed information is available on related topics.
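The vector bookkeeping above is easy to mis-apply, so here is a minimal sketch that reproduces the numbers in the data tables, following the text's own steps. The masses and speeds come from the text; the variable names and output format are illustrative.

```cpp
#include <cmath>
#include <iostream>

int main() {
    const double PI = std::acos(-1.0);
    const double m = 1000.0;       // each car's mass, kg
    const double vEast = 20.0;     // eastbound car's speed, m/s
    const double vNorth = 10.0;    // northbound car's speed, m/s

    // Before the collision: momentum components (east = x, north = y).
    double px = m * vEast;         // 20 000 kg*m/s
    double py = m * vNorth;        // 10 000 kg*m/s

    // Total momentum: Pythagorean theorem for magnitude, tangent for direction.
    double pTotal = std::hypot(px, py);                // ~22 361 kg*m/s
    double angle = std::atan2(py, px) * 180.0 / PI;    // ~26.6 degrees N of E

    // Equal masses share the momentum equally afterward, so v = p / m.
    double pEach = pTotal / 2.0;   // ~11 180 kg*m/s per car
    double vAfter = pEach / m;     // ~11.2 m/s

    // Kinetic energy before and after (0.5 * m * v^2 per car).
    double keBefore = 0.5 * m * vEast * vEast + 0.5 * m * vNorth * vNorth; // 250 000 J
    double keAfter = 2.0 * (0.5 * m * vAfter * vAfter);                    // ~125 000 J

    std::cout << "total momentum: " << pTotal << " kg*m/s at "
              << angle << " deg N of E\n";
    std::cout << "speed after: " << vAfter << " m/s\n";
    std::cout << "KE before: " << keBefore << " J, KE after: " << keAfter << " J\n";
}
```

The halving of kinetic energy (250 000 J to 125 000 J) drops straight out of the arithmetic, which is exactly the signature of the inelastic collision described above.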
<urn:uuid:c8a8aa99-0acb-4aa2-bee7-a3bc9ac7397b>
CC-MAIN-2013-20
http://www.physicsclassroom.com/mmedia/momentum/2di.cfm
2013-05-22T21:24:21
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702448584/warc/CC-MAIN-20130516110728-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.918861
704
4.28125
4
When we drive somewhere new, we navigate by referring to a two-dimensional map that accounts for distances only on a horizontal plane. According to research published online in August in Nature Neuroscience, the mammalian brain seems to do the same, collapsing the world into a flat plane even as the animal skitters up trees and slips deep into burrows. “Our subjective sense that our map is three-dimensional is illusory,” says Kathryn Jeffery, a behavioral neuroscientist at University College London who led the research. Jeffery studies a collection of neurons in and around the rat hippocampus that build an internal representation of space. As the animal travels, these neurons, called grid cells and place cells, respond uniquely to distance, turning on and off in a way that measures how far the animal has moved in a particular direction. Past research has focused on how these cartographic cells encode two-dimensional space. Jeffery and her colleagues decided to look at how they respond to changes in altitude. To do this, they enticed rats to climb up a spiral staircase while the scientists collected electrical recordings from single cells. The firing pattern encoded very little information about height. The finding adds evidence for the hypothesis that the brain keeps track of our location on a flat plane, which is defined by the way the body is oriented. If a squirrel, say, is running along the ground, then scampers straight up a tree, its internal two-dimensional map simply shifts from the horizontal plane to the vertical. Astronauts are some of the few humans to describe this experience: when they move in space to “stand” on a ceiling, they report a moment of disorientation before their mental map flips so they feel right side up again. Researchers do not know yet whether other areas of the brain encode altitude or whether mammals simply do not need that information to survive. “Maybe an animal has a mosaic of maps, each fragment of which is flat but which can be oriented in the way that’s appropriate,” Jeffery speculates. Or maybe in our head, the world is simply flat.
<urn:uuid:87a08e2d-8f6c-4dcb-b8cb-c5fc52364d4f>
CC-MAIN-2013-20
http://www.scientificamerican.com/article.cfm?id=living-in-two-dimensions
2013-05-22T21:58:58
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702448584/warc/CC-MAIN-20130516110728-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.934162
430
4
4
break and continue Statements

The break statement is used to alter the flow of control. When a break statement is executed in a while loop, for loop, do-while loop or switch statement, it causes immediate exit from that statement. Program execution continues with the next statement. Common uses of the break statement are to escape early from a loop or to skip the remainder of a switch statement. The first example below demonstrates the break statement in a for loop: when the if statement detects that x has become 5, the break statement is executed. This terminates the for loop, and the program continues from the cout statement after the loop.

The continue statement is also used to alter the flow of control. When it is executed in a while loop, for loop or do-while loop, it skips the remaining statements in the body of the loop and performs the next iteration of the loop. The second example below demonstrates the continue statement.

Some programmers feel that break and continue statements violate the norms of structured programming, since their effects can be achieved by structured programming techniques. The break and continue statements, when used properly, perform faster than the corresponding structured techniques.
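A minimal sketch of the break example described above, assuming a loop counting x from 1 to 10 (the bounds and messages are illustrative):

```cpp
#include <iostream>
using namespace std;

int main() {
    int x;
    for (x = 1; x <= 10; x++) {
        if (x == 5)
            break;          // exit the loop as soon as x becomes 5
        cout << x << " ";   // prints 1 2 3 4
    }
    cout << "\nBroke out of loop at x = " << x << endl;
    return 0;
}
```

And a matching sketch for continue, which skips the rest of the body for one iteration instead of leaving the loop entirely:

```cpp
#include <iostream>
using namespace std;

int main() {
    for (int x = 1; x <= 10; x++) {
        if (x == 5)
            continue;       // skip the print when x is 5, but keep looping
        cout << x << " ";   // prints 1 2 3 4 6 7 8 9 10
    }
    cout << "\nUsed continue to skip printing 5" << endl;
    return 0;
}
```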
<urn:uuid:c4089a04-e956-4919-b52d-316c81c240df>
CC-MAIN-2013-20
http://www.tech-faq.com/break-and-continue-statements.html
2013-05-22T21:52:54
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702448584/warc/CC-MAIN-20130516110728-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.887331
240
4.3125
4
interactive pages and projects to support teaching and learning

Learning Games and Activities

Comments from students suggest that part of what makes a good teacher is a willingness to be open to students' interests and needs. This may involve understanding how to support a constructive learning environment and to adopt teaching styles for active learning. One way to encourage students and to effectively manage the active classroom is through the use of learning games. This is an easy step that can provide an active change of pace in classroom life. You can also find more games and active lessons at such online locations as the Educator's Reference Desk site. Below are some games from this site that you might consider:
<urn:uuid:bcc4b371-7c55-4755-a617-9f8cd75d0661>
CC-MAIN-2013-20
http://www.unicef.org/teachers/action/games.htm
2013-05-22T21:26:46
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702448584/warc/CC-MAIN-20130516110728-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.942754
153
4.03125
4
Parents and Schools

What About School?

A child with mental retardation can do well in school but is likely to need individualized help. Fortunately, states are responsible for meeting the educational needs of children with disabilities.

For children up to age three, services are provided through an early intervention system. Staff work with the child's family to develop what is known as an Individualized Family Services Plan, or IFSP. The IFSP will describe the child's unique needs. It also describes the services the child will receive to address those needs. The IFSP will emphasize the unique needs of the family, so that parents and other family members will know how to help their young child with mental retardation. Early intervention services may be provided on a sliding-fee basis, meaning that the costs to the family will depend upon their income. In some states, early intervention services may be at no cost to parents.

For eligible school-aged children (including preschoolers), special education and related services are made available through the school system. School staff will work with the child's parents to develop an Individualized Education Program, or IEP. The IEP is similar to an IFSP. It describes the child's unique needs and the services that have been designed to meet those needs. Special education and related services are provided at no cost to parents.

Many children with mental retardation need help with adaptive skills, which are skills needed to live, work, and play in the community. Teachers and parents can help a child work on these skills at both school and home. Some of these skills include:
- communicating with others;
- taking care of personal needs (dressing, bathing, going to the bathroom);
- health and safety;
- home living (helping to set the table, cleaning the house, or cooking dinner);
- social skills (manners, knowing the rules of conversation, getting along in a group, playing a game);
- reading, writing, and basic math; and
- as they get older, skills that will help them in the workplace.

Supports or changes in the classroom (called adaptations) help most students with mental retardation. Some common changes that help students with mental retardation are listed below under "Tips for Teachers." The resources below also include ways to help children with mental retardation.

Tips for Parents
- Learn about mental retardation. The more you know, the more you can help yourself and your child. See the list of resources and organizations at the end of this publication.
- Encourage independence in your child. For example, help your child learn daily care skills, such as dressing, feeding him or herself, using the bathroom, and grooming.
- Give your child chores. Keep her age, attention span, and abilities in mind. Break down jobs into smaller steps. For example, if your child's job is to set the table, first ask her to get the right number of napkins. Then have her put one at each family member's place at the table. Do the same with the utensils, going one at a time. Tell her what to do, step by step, until the job is done. Demonstrate how to do the job. Help her when she needs assistance. Give your child frequent feedback. Praise your child when he or she does well. Build your child's abilities.
- Find out what skills your child is learning at school. Find ways for your child to apply those skills at home. For example, if the teacher is going over a lesson about money, take your child to the supermarket with you. Help him count out the money to pay for your groceries. Help him count the change.
- Find opportunities in your community for social activities, such as scouts, recreation center activities, sports, and so on. These will help your child build social skills as well as to have fun.
- Talk to other parents whose children have mental retardation. Parents can share practical advice and emotional support. Call NICHCY (1.800.695.0285) and ask how to find a parent group near you.
- Meet with the school and develop an educational plan to address your child's needs. Keep in touch with your child's teachers. Offer support. Find out how you can support your child's school learning at home.

Tips for Teachers
- Learn as much as you can about mental retardation. The organizations listed at the end of this publication will help you identify specific techniques and strategies to support the student educationally. We've also listed some strategies below.
- Recognize that you can make an enormous difference in this student's life! Find out what the student's strengths and interests are, and emphasize them. Create opportunities for success.
- Work together with the student's parents and other school personnel to create and implement an educational plan tailored to meet the student's needs. Regularly share information about how the student is doing at school and at home.
- If you are not part of the student's Individualized Education Program (IEP) team, ask for a copy of his or her IEP. The student's educational goals will be listed there, as well as the services and classroom accommodations he or she is to receive. Talk to specialists in your school (e.g., special educators), as necessary. They can help you identify effective methods of teaching this student, ways to adapt the curriculum, and how to address the student's IEP goals in your classroom.
- Be as concrete as possible. Demonstrate what you mean rather than just giving verbal directions. Rather than just relating new information verbally, show a picture. And rather than just showing a picture, provide the student with hands-on materials and experiences and the opportunity to try things out.
- Break new tasks into small steps. Demonstrate the steps. Have the student do the steps, one at a time. Provide assistance, as necessary.
- Give the student immediate feedback.
- Teach the student life skills such as daily living, social skills, and occupational awareness and exploration, as appropriate. Involve the student in group activities or clubs.

Sourced from: Mental Retardation Fact Sheet (FS8), National Dissemination Center for Children with Disabilities. Revision: January 2004
<urn:uuid:1a3a6c05-8e7c-472c-8b5a-85f4a014e6f4>
CC-MAIN-2013-20
http://www.wamhc.org/poc/view_doc.php?type=doc&id=4682&cn=208
2013-05-22T21:37:40
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702448584/warc/CC-MAIN-20130516110728-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.963509
1,286
4.125
4
The Technical Details: Determining Delta Values

When we talk about the isotopic ratio in a sample, we talk about the delta value. Let's look at how a delta value is actually calculated:

- The first step in figuring out the δ13C for a sample is to find the ratio of 13C to 12C within the sample. Next, compare (by dividing) this ratio to the ratio of 13C to 12C in a standard.
- There is a specific standard, with a known, unchanging ratio of 13C to 12C, that all laboratories use in their comparison. For the stable carbon isotopes, this standard is a limestone (called Pee Dee Belemnite, or PDB) from South Carolina. Although PDB is no longer run as the standard, other carbonates (with a known, unchanging 13C to 12C ratio) are used and compared on the PDB scale. The carbonate standard is reacted with an acid to create gaseous CO2, so that the sample and standard are both in the same phase.
- Often the sample and standard may have very similar ratios of the two stable isotopes, which will give you a value very close to (but not exactly) 1.

[Figure: Two sample bellows are used so that the sample is compared to a standard.]

Many samples that actually have different ratios of 13C to 12C will give what seem like similar values (say, for example, 0.99 and 0.98). These samples do have different isotopic ratios, but this is hard to see when they only differ after the decimal point. To make this difference easier to see, 1 is subtracted from this value, and then this new calculation is multiplied by 1,000 to give the actual δ13C of the sample.

- This makes it much easier to see the difference between two samples. For the ratios of 0.99 and 0.98, the delta values are -10‰ and -20‰ respectively. The equation for this is:

δ13C = [ (13C/12C)sample / (13C/12C)standard − 1 ] × 1000‰

Making the Values More "Friendly"

Even when comparing samples with ratios of 13C to 12C of 0.99 and 0.98, the delta notation is much easier. Well, when we look at ratios that atmospheric scientists actually study, it becomes infinitely easier to compare using delta notation – in fact it would be too difficult without!

| Carbon Pool | δ13C | Actual ratio of 13C to 12C |
|---|---|---|
| Ocean & Atmosphere | -8‰ | 0.011142 |

Why Go Through This Much Work?

This seems like an awful lot of calculations when you can just look at differences among samples in their 13C to 12C ratios and ignore all of the calculation steps. The reason that it is conventional to compare to a standard (and then continue on to the next steps in order to get a more 'friendly' value) is so that it is easier to compare results both among isotope laboratories and within a single laboratory over a long time period. It is impossible to have an isotope ratio mass spectrometer that perfectly finds the ratio of 13C to 12C in a sample. Isotope ratio mass spectrometers measure relative isotopic ratios much better than actual ratios. By comparing to a standard, the precision of the data values is much, much better, since all values are relative to a given standard. For example, if the ratios for both the sample and standard are overestimated (or underestimated) by the same relative amount, then dividing the two values will account for this, making it possible to compare δ13C among laboratories all across the world.

The formula for determining the Δ14C of a sample is similar to δ13C:

Δ14C = ( FN[x] − 1 ) × 1000‰

The difference is in the term FN[x], which is still a comparison of the sample to a standard. However, after this comparison, several other calculations occur to find FN[x].
- The ratio is corrected for "background" 14C counts, where atoms or molecules that were accidentally and incorrectly identified as 14C are no longer included.
- The ratio is additionally corrected for the small amount of radioactive decay between the time the sample was collected and the time it was measured, so that the Δ14C at the time of collection rather than the time of analysis is reported.
- The final difference is that Δ14C is normalized, where the effect of fractionation is removed. That is, we know from the 13C measurements that, for example, when carbon dioxide is photosynthesized by plants, it fractionates, resulting in proportionately less 13C in the plant. The same thing happens to 14C, so plants have proportionately less 14C than the atmosphere does. If we know how much 13C fractionation occurs, we can calculate precisely how much 14C fractionation there is. We then calculate how much 14C would have been in the sample if it had not fractionated. This is the Δ14C.

Why go to all this trouble? The main reason is that for radiocarbon dating, scientists want to study how much 14C has decayed, not how much has fractionated, and this normalization allows them to do just that. The second reason is that it makes it easier to understand the 14C in the atmosphere – now when plants photosynthesize CO2, the Δ14C value in the atmosphere does not change. Of course, we can always reverse the calculations to discover the amount of 14C without applying this normalization, and this is written as δ14C.

For even more gory details, see: http://www.radiocarbon.org/Pubs/Stuiver/index.html
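To make the δ13C arithmetic concrete, here is a minimal sketch of the calculation described above. The first two calls reproduce the worked ratio-of-ratios examples from the text (0.99 and 0.98); in the third call, the PDB-scale standard ratio of 0.011232 is an assumed value for illustration only, chosen so the atmospheric table entry comes out near -8‰.

```cpp
#include <iostream>

// delta13C in per mil (‰): compare a sample's 13C/12C ratio to a standard's.
double delta13C(double rSample, double rStandard) {
    return (rSample / rStandard - 1.0) * 1000.0;
}

int main() {
    // Worked example from the text: ratio-of-ratios of 0.99 and 0.98
    // correspond to -10‰ and -20‰ respectively (standard ratio = 1).
    std::cout << delta13C(0.99, 1.0) << " per mil\n";  // -10
    std::cout << delta13C(0.98, 1.0) << " per mil\n";  // -20

    // Atmospheric example from the table: an absolute 13C/12C ratio of
    // 0.011142 against an assumed standard ratio of 0.011232 gives ~ -8‰.
    std::cout << delta13C(0.011142, 0.011232) << " per mil\n";  // ~ -8
}
```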
<urn:uuid:70641808-af24-4ed4-9507-38f917936cfa>
CC-MAIN-2013-20
http://esrl.noaa.gov/gmd/outreach/isotopes/deltavalues.html
2013-05-25T05:32:08
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705559639/warc/CC-MAIN-20130516115919-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.935226
1,181
4.28125
4
The presentation for nouns is simple but very effective.

You will need: a picture of a black triangle (or a black pyramid if you have one), paper, and a black pen.

Say to the child: 'Can you bring a...? Quickly, go into the hallway and get a... Hurry!' The child will look confused, as you haven't specified anything to bring. Wait until they ask you what you want them to bring, or say 'I don't know what you want.' Reply, 'You don't know what to bring me, as I haven't given you a name.'

Ask the child their name. 'You all have names; if we look around, there are millions of things with names.'

'I am going to write a name' - write 'pen' and ask the child to get a pen. Continue 2 or 3 times, naming various objects.

'How did you know to bring me a pen/book/teddy etc.?' Their response will be along the lines of 'because you told us.' 'Yes, because all these objects have a special name. We call these names nouns.'

Noun comes from the Latin nomen, which means name. The name of something is a noun (person, place or thing).

Introduce the black pyramid/triangle: this black pyramid represents names, because just like names the pyramids are ancient. Nouns are probably the oldest part of speech.

We followed up with various worksheet games to reinforce the concept (e.g. circle the noun in these sentences).
<urn:uuid:37ad161e-0b92-4a09-900f-183374c1b392>
CC-MAIN-2013-20
http://homeschoolescapade.blogspot.com/2011/02/nouns.html
2013-05-25T05:38:22
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705559639/warc/CC-MAIN-20130516115919-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.949346
311
4.15625
4
October 4, 2005: Intricate wisps of glowing gas float amid a myriad of stars in this image of the supernova remnant N132D. The ejected material shows that roughly 3,000 years have passed since the supernova blast. Since this titanic explosion took place in the Large Magellanic Cloud, a nearby neighbor galaxy some 160,000 light-years away, the explosion itself is dated at 163,000 years ago by clocks on Earth. This composite image of N132D comprises visible-light data taken in January 2004 with Hubble's Advanced Camera for Surveys, and X-ray images obtained in July 2000 by Chandra's Advanced CCD Imaging Spectrometer. The complex structure of N132D is due to the expanding supersonic shock wave from the explosion impacting the interstellar gas of the LMC. A supernova remnant like N132D provides information on stellar evolution and on the creation of chemical elements, such as oxygen, through nuclear reactions in stellar cores.

When viewing objects in space, one must realize that the speed of light is a finite quantity, and that many of the objects we observe with high-powered telescopes, like Hubble, are extremely far away. Because the speed of light is an unchanging value, and nothing can go faster than this speed, we can use the terms "light-second," "light-minute," "light-hour," and so on up to "light-year" as finite quantities of distance, each equal to the distance that light travels in that amount of time. Based on the speed of light and the distance from Earth to the Sun, we can say that the Sun is 8 light-minutes away from the Earth and vice versa. If the Sun showed a flare, it would be visible on Earth 8 minutes later.

If an object is seen in the Large Magellanic Cloud (LMC), it takes 160,000 years for the light from the LMC to reach us. If some event occurs in the LMC, like a supernova, astronomers on Earth viewing the supernova going off today know that the supernova actually exploded 160,000 years ago. If our telescopes show that 3,000 years have passed since the time of the supernova, based on the presence of ejection material in the remnant, the actual clock-time of when that event occurred based on our Earth calendars was 3,000 + 160,000 years ago, or 163,000 years ago. Since similar objects are at various distances from Earth, astronomers usually remove the light-travel time to the object when talking about the age or when an event occurred.
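The dating arithmetic above is simply the remnant's apparent age plus the light-travel time. A trivial sketch, using the values from the text (the variable names are illustrative):

```cpp
#include <iostream>

int main() {
    const long lightTravelYears = 160000;  // Earth to the LMC, in years
    const long remnantAgeYears  = 3000;    // age inferred from the ejecta

    // The event's date on Earth calendars is the remnant's apparent age
    // plus the time the light spent in transit.
    std::cout << "event occurred "
              << (lightTravelYears + remnantAgeYears)
              << " years ago\n";           // 163000
}
```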
<urn:uuid:eb362c51-222d-4ae3-8b43-9b171d658de2>
CC-MAIN-2013-20
http://hubblesite.org/newscenter/archive/releases/2005/30/
2013-05-25T05:43:47
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705559639/warc/CC-MAIN-20130516115919-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.94286
542
4.25
4
The Cherokee Indians were one of the largest of five Native American tribes who settled in the American Southeast portion of the country. The tribe was of Iroquoian descent. They had originally been from the Great Lakes region of the country, but eventually settled closer to the east coast. Despite popular folklore, the Cherokee actually lived in cabins made of logs instead of the stereotypical teepee. They were a strong tribe with several smaller sections, all led by chiefs. The tribe was highly religious and spiritual.

When the American Revolution took place, the Cherokee Indians supported the British soldiers, and even assisted them in battle by taking part in several attacks. The Creek and Choctaw tribes also assisted in the battles on the British side.

Eventually, around the 1800s, the Cherokee Indians began to adopt the culture that the white man brought to them. They began to dress in a more European fashion, and even adopted many European farming and building methods. In 1828, gold was discovered on the Cherokee's land. This prompted the overtaking of their homes, and they were forced out. They had been settled in Georgia for many years, but were now being made to leave and find a new place to settle. This was the origin of the historically famous Trail of Tears, in which men, women, and children had to pack up their belongings and find new homes, marching a span of thousands of miles. When all was said and done, about 4,000 Cherokee lost their lives on the journey.

Today, the Cherokee Indians have a strong sense of pride in their heritage. The Cherokee rose is now the state flower of Georgia. The largest population of Cherokee Indians lives in the state of Oklahoma, where there are three federally recognized Cherokee communities with thousands of residents.
<urn:uuid:51ae2fe7-8d62-4351-b9a3-31cadc32da53>
CC-MAIN-2013-20
http://indians.org/articles/cherokee-indians.html
2013-05-25T05:57:36
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705559639/warc/CC-MAIN-20130516115919-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.982966
387
4.125
4
Bronchiolitis is a common illness of the respiratory tract caused by an infection that affects the tiny airways, called the bronchioles, that lead to the lungs. As these airways become inflamed, they swell and fill with mucus, making breathing difficult. Bronchiolitis:
- most often affects infants and young children, because their small airways can become blocked more easily than those of older kids or adults
- typically occurs during the first 2 years of life, with peak occurrence at about 3 to 6 months of age
- is more common in males, children who have not been breastfed, and those who live in crowded conditions

Day-care attendance and exposure to cigarette smoke also can increase the likelihood that an infant will develop bronchiolitis. Although it's often a mild illness, some infants are at risk for a more severe disease that requires hospitalization. Conditions that increase the risk of severe bronchiolitis include prematurity, prior chronic heart or lung disease, and a weakened immune system due to illness or medications.

Kids who have had bronchiolitis may be more likely to develop asthma later in life, but it's unclear whether the illness causes or triggers asthma, or whether children who eventually develop asthma were simply more prone to developing bronchiolitis as infants. Studies are being done to clarify the relationship between bronchiolitis and the later development of asthma.

Bronchiolitis is usually caused by a viral infection, most commonly respiratory syncytial virus (RSV). RSV infections are responsible for more than half of all cases of bronchiolitis and are most widespread in the winter and early spring. Other viruses associated with bronchiolitis include rhinovirus, influenza (flu), and human metapneumovirus.

The first symptoms of bronchiolitis are usually the same as those of a common cold. These symptoms last a day or two and are followed by worsening of the cough and wheezing (high-pitched whistling noises when exhaling). Sometimes more severe respiratory difficulties gradually develop, marked by:
- rapid, shallow breathing
- a rapid heartbeat
- drawing in of the neck and chest with each breath, known as retractions
- flaring of the nostrils
- irritability, with difficulty sleeping and signs of fatigue or lethargy

The child may also have a poor appetite and not feed well or become dehydrated. Vomiting after coughing may occur as well. Less commonly, babies, especially those born prematurely, may have episodes where they briefly stop breathing (this is called apnea) before developing other symptoms.

In severe cases, symptoms may worsen quickly. A child with severe bronchiolitis may get fatigued from the work of breathing and have poor air movement in and out of the lungs due to the clogging of the small airways. The skin can turn blue (called cyanosis), which is especially noticeable in the lips and fingernails. The child also can become dehydrated from working harder to breathe, vomiting, and taking in less during feedings.

The infections that cause bronchiolitis are contagious. The germs can spread in tiny drops of fluid from an infected person's nose and mouth, which may become airborne via sneezes, coughs, or laughs, and also can end up on things the person has touched, such as used tissues or toys. Infants in child-care centers have a higher risk of contracting an infection that may lead to bronchiolitis because they're in close contact with lots of other young children. The best way to prevent the spread of viruses that can cause bronchiolitis is frequent hand washing.
It may help to keep infants away from others who have colds or coughs. Babies who are exposed to cigarette smoke are more likely to develop more severe bronchiolitis compared with those from smoke-free homes. Therefore, it's important to avoid exposing children to cigarette smoke.

Although a vaccine for bronchiolitis has not yet been developed, a medication can be given to lessen the severity of the disease. It contains antibodies to RSV and is injected monthly during peak RSV season. The medication is recommended only for infants at high risk of severe disease, such as those born very prematurely or those with chronic lung or heart disease.

The incubation period (the time between infection and the onset of symptoms) ranges from several days to a week, depending on the infection causing the bronchiolitis. Cases of bronchiolitis typically last about 12 days, but kids with severe cases can cough for weeks. The illness generally peaks on about the second to third day after the child starts coughing and having difficulty breathing, and then gradually resolves.

Fortunately, most cases of bronchiolitis are mild and require no specific professional treatment. Antibiotics aren't useful because bronchiolitis is caused by a viral infection, and antibiotics are only effective against bacterial infections. Medication may sometimes be given to help open a child's airways.

Infants who have trouble breathing, are dehydrated, or appear fatigued should always be evaluated by a doctor. Those who are moderately or severely ill may need to be hospitalized, watched closely, and given fluids and humidified oxygen. Rarely, in very severe cases, some babies are placed on respirators to help them breathe until they start to get better.

The best treatment for most kids is time to recover and plenty of fluids. Making sure a child drinks enough fluids can be a tricky task, however, because infants with bronchiolitis may not feel like drinking. They should be offered fluids in small amounts at more frequent intervals than usual.

Indoor air, especially during winter, can dry out airways and make the mucus stickier. Some parents use a cool-mist vaporizer or humidifier in the child's room to help loosen mucus in the airway and relieve cough and congestion. If you use one, clean it daily with household bleach to prevent mold from building up. Avoid hot-water and steam humidifiers, which can be hazardous and can cause scalding.

To clear nasal congestion, try a bulb syringe and saline (saltwater) nose drops. This can be especially helpful just before feeding and sleeping. Sometimes, keeping the child in a slightly upright position may help improve labored breathing.

Acetaminophen can be given to reduce fever and make the child more comfortable. Be sure to follow appropriate dosing and interval of medication based on your child's weight.

When to Call the Doctor

Call your doctor if your child:
- is breathing quickly, especially if this is accompanied by retractions or wheezing
- might be dehydrated due to poor appetite or vomiting
- is sleepier than usual
- has a high fever
- has a worsening cough
- appears fatigued or lethargic

Seek immediate help if you feel your child is having difficulty breathing and the cough, retractions, or wheezing are getting worse, or if his or her lips or fingernails appear blue.
<urn:uuid:b059abcd-7e5c-4582-a15b-fc6496c31966>
CC-MAIN-2013-20
http://kidshealth.org/PageManager.jsp?dn=CookChildrens&lic=403&cat_id=20028&article_set=22950&ps=104
2013-05-25T05:52:46
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705559639/warc/CC-MAIN-20130516115919-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.957351
1,445
4.21875
4
A Reference Resource

Life Before the Presidency

John Quincy Adams was born on July 11, 1767, in the village of Braintree (now Quincy), Massachusetts, a few miles south of Boston. His early years were spent living alternately in Braintree and Boston, and his doting father and affectionate mother taught him mathematics, languages, and the classics. His father, John Adams, had been politically active for all of John Quincy's life, but the calling of the First Continental Congress in 1774 marked a new stage in John Adams' activism. The older Adams would go on to help lead the Continental Congress, draft the Declaration of Independence, and oversee the execution of the Revolutionary War. He was also absent from his children's lives more often than he was present, leaving much of their raising and education to their mother, Abigail.

In the first year of the war, young John Quincy Adams feared for the life of his father and worried that the British might take his family hostage. Indeed, when John Adams signed his name to the Declaration of Independence, he committed an act of treason against England, an offense punishable by death. For John Quincy, these years were actually the beginning of his manhood, and he recalled later in life feeling responsible—as the eldest son—for protecting his mother while his father attended to the business of revolution. John Quincy witnessed the Battle of Bunker Hill with his mother from the top of one of the Braintree hills and regularly saw soldiers passing through his hometown. The Revolutionary War was not some distant, theoretical event but an immediate and frightening reality.

Grooming for the World Stage

From ages ten to seventeen, Adams experienced an incredible European adventure that prepared him for his later career in the foreign service of his country. In late 1777, John Adams was posted to Europe as a special envoy, and in 1778, John Quincy accompanied him to Paris. Over the next seven years, John Quincy would spend time in Paris, the Netherlands, and St. Petersburg, with shorter visits to England, Sweden, and Prussia. The young Adams experienced his first formal schooling at the Passy Academy outside of Paris, where—together with the grandsons of Benjamin Franklin—he studied fencing, dance, music, and art. The Adamses remained in France for a little over a year and then returned home for some three months. When John Adams was again posted to Europe in November 1779, tasked with negotiating the peace with Britain, he returned with his sons John Quincy and Charles, reaching Paris in February 1780 after a harrowing journey, first in a leaky ship and then overland on mules from Spain. John Adams, recognizing that there was little likelihood of peace negotiations, decided in the summer of 1780 to relocate to Amsterdam along with his sons, both of whom briefly attended the University of Leiden. Charles proved unhappy in Europe and was sent home after a year and a half. Around the same time, in 1781, John Quincy's education was interrupted when Francis Dana, the newly appointed U.S. emissary to St. Petersburg, asked that John Quincy, then fourteen years old, accompany him as translator and personal secretary. A year later, John Quincy traveled alone for five months from St. Petersburg to The Hague, the Dutch seat of government, to rejoin his father. When he returned to America in 1785, Adams enrolled in Harvard College as an advanced student, completing his studies in two years.
After college, Adams studied law and passed the Massachusetts bar exam in the summer of 1790. While preparing for the law exam, he mastered shorthand and read everything in sight, from ancient history to popular literature. He especially enjoyed the humorous novel Tom Jones by Henry Fielding, which he deemed "one of the best novels in the language." Always in awe of Thomas Jefferson, a close friend of his father and the principal author of the Declaration of Independence, Adams considered Jefferson's Notes on Virginia a brilliant piece of writing.

As a young man, Adams stood apart from his age group. He took no part in the usual college pranks, nor did he think much of his teachers—many of whom were less well read and had less worldly experience than he had. But Adams did have an appreciative eye for young women. His first love, at age fourteen, was a French actress whom he never met personally but dreamed about after seeing her stage performance. During his legal apprenticeship, John Quincy fell deeply in love with a young woman he met in Newburyport, Massachusetts, where he was studying law. The romance lasted for several months before his mother, Abigail Adams, persuaded him to put off marriage until he could afford to support a wife. John Quincy agreed, and the two drifted apart. It was a parting that he always regretted, but it demonstrated a character trait in Adams that accompanied him throughout his life: his respect for the opinions of his parents.

From 1790 to 1794, Adams practiced law with little success in Boston. As a new, young lawyer competing for clients with far more established and senior men, he had difficulty attracting paying clients. Not even the fact that his father was now vice president of the United States seemed to help. When not practicing law, Adams wrote articles in support of the Washington administration and debated the political issues of the day with his fellow lawyers. Finally, in 1794, just as John Quincy's law career was beginning to make headway, President George Washington, appreciative of the young Adams's support for his administration and aware of his fluency in French and Dutch, appointed him minister to the Netherlands. It was a good time for the young diplomat. He carefully managed the repayment of Dutch loans made to America during the American Revolution and sent well-regarded official reports to Washington on the aftermath of the French Revolution.

A Moody Suitor

While traveling in France as a young boy, John met Louisa Catherine, the four-year-old daughter of Joshua Johnson, an American merchant who had married an Englishwoman and was then living in Nantes, France. Years later, in 1797, when Louisa had grown into a pretty 22-year-old woman, she and Adams met again. Now he was a 30-year-old diplomat and the son of the President of the United States. She was living in London, where her father served as the American consul, and Adams had been sent to London from The Hague to exchange the ratifications of the Jay Treaty. The Johnson family provided the social center for Americans in London, and Adams regularly visited. In time, he began to court Louisa, dining nightly with the family but always leaving when the girls began to sing after the evening meal—Adams disliked the sound of the female voice in song. Louisa found herself intrigued by her moody suitor. The two were married on July 26, 1797, over the initial objections of Adams's parents, who did not think it wise for a future President to have a foreign-born wife.
Right around the time of their marriage, John Quincy was appointed U.S. minister to Prussia, where he remained until his father lost his bid for a second term as President in 1800. The Adamses returned to the United States in 1801 with their son George Washington Adams, and John Quincy threw himself into local politics, winning election to the state senate. The Massachusetts legislature then appointed him to the U.S. Senate in 1803.

Career in Diplomacy

As the U.S. senator from Massachusetts, he shifted from his nominally Federalist position to support the Democratic-Republican administration of President Thomas Jefferson. He supported the Louisiana Purchase, one of only two Federalists to do so, and the imposition of the Embargo Act of 1807 against foreign trade. In 1808, the Federalist-controlled Massachusetts state legislature was infuriated by Adams's pro-Jeffersonian conduct and expressed its displeasure by appointing Adams's successor nearly a full year before Adams's term was complete. Adams promptly resigned and subsequently changed his party affiliation from Federalist to Democratic-Republican.

Shortly after the loss of his Senate seat, President James Madison appointed Adams the first U.S. minister to Russia. Although Adams had previously expressed negative feelings about Russia as a nation of "slaves and princes," he soon developed a strong personal attachment to Czar Alexander, whom he admired for his willingness to stand up to Napoleon. While in Russia, Adams persuaded the czar to allow American ships to trade in Russian ports, and when Napoleon invaded Russia in 1812, Adams's dispatches home provided Madison with detailed and perceptive accounts of the war.

In 1814, President Madison appointed Adams to head a five-person delegation to negotiate a peace agreement ending the War of 1812 with Britain. It was a distinguished group of Americans who met in Ghent, in what is now Belgium: Special Envoy John Quincy Adams, Secretary of the Treasury Albert Gallatin, Senator James A. Bayard of Delaware, Speaker of the House Henry Clay, and U.S. Minister to Sweden Jonathan Russell. The treaty negotiations took five months, resulting in an agreement to end the fighting and restore all territory to the status quo at the beginning of the war. No mention was made of the issues that had started the war, such as the impressment of American seamen or the rights of neutral commerce. Still, the treaty was a significant victory for the United States: the young nation had engaged the greatest military power in the world without conceding anything in return for peace. The treaty was signed on December 24, 1814, two weeks prior to the great victory of U.S. forces over the British at the Battle of New Orleans. Word of the treaty did not reach America until mid-February, and the Senate ratified it unanimously on February 17, 1815. Madison subsequently posted Adams to England for two years.

With the election of James Monroe as President, Adams accepted appointment as secretary of state, serving from 1817 to 1825. During his long tenure as head of the State Department, he compiled an impressive record of diplomatic accomplishments. At the top of the list stands his role in formulating the Monroe Doctrine, which warned European nations not to meddle in the affairs of the Western Hemisphere.
Although Thomas Jefferson and James Madison had advised President Monroe to issue the proclamation in a joint statement with Britain, Adams—understanding the diplomatic symbolism involved—persuaded Monroe to make a unilateral and independent statement as a mark of U.S. sovereignty in the hemisphere. Secretary of State Adams also successfully negotiated U.S. fishing rights off the Canadian coast, established the present U.S.-Canadian border from Minnesota to the Rockies, formulated a pragmatic policy for the recognition of newly independent Latin American nations, and achieved the transfer of Spanish Florida to the United States in the Adams-Onís Treaty of 1819. This treaty also fixed the southwestern boundary of the United States at the Sabine River (in present-day Texas) and removed Spanish claims to Oregon. Adams also halted Russian claims to Oregon. Within the State Department, he appointed staff on the basis of merit rather than patronage, and upon his election as President in 1824, he left behind a highly efficient diplomatic service with clear accountability procedures and a system of regularized correspondence in place.
<urn:uuid:8a679c9a-b740-4f93-a377-2197f6dd44f4>
CC-MAIN-2013-20
http://millercenter.org/president/jqadams/essays/biography/2
2013-05-25T05:29:58
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705559639/warc/CC-MAIN-20130516115919-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.980512
2,299
4.0625
4
School: Down to Earth School
Area of Science: Engineering

Interim: We are concerned about children in our area getting burned by slides at local parks and in yards at home. Children all over the U.S., from 6 months to 6 years old, have been burned by playground equipment. Most of the burns occur because young children cannot pull away from heated materials quickly enough.

Density is defined as mass per unit volume (weight per unit volume is more properly called "specific weight"). Less dense materials heat faster because they contain fewer atoms, so it takes less time for all of them to heat. Light from the sun excites electrons in the atoms that make up the material. This is known as a "radiationless transition": the atoms of the material vibrate, and that vibrational energy is roughly equal to the electronic energy (photons) absorbed from the sun. We hope to find a safer alternative material for slides by learning how the density of a material affects how it holds heat and changes temperature, and to use this information to design a safer slide.

The modeling program we are using is NetLogo. We will be modeling a slide being heated by rays from the sun. We plan to have the sun move across the sky, with agent-based heat rays that travel to heat the slide. The slide will be made of color-changeable patches. When the patches are hit by the rays, they will change color. The starting patch color will depend on the density of the material in use and the temperature surrounding the area. The slide will begin flashing warm colors when the temperature reaches skin-burning levels.

We have completed the main part of the research needed to start our model, and we have started on all the other projects involved in the challenge. We have begun our model with the sun and heat rays. We are working on laying out the sun's path so the sun will follow it, and on attaching the heat rays to the sun with their heading set for the slide. We would like the model to show the sun's angle and how it affects the temperature of the material.

The results we expect include finding out which material heats up the most based on its density. We will use this information to create a "Safe Slide."

References:
Kristen Hampton. WIS TV / WBTV, 6-21-12. http://www.wistv.com/story/1884371/hot-playground-equipment-can-burn-childrens-skin.html
Lidia Chen. http://www.cruchingplants.net/playground-equipment-may-cause-burn-injuries-to-children. 2005-2011. Accessed 11-3-12.
Maryland Materials. http://www.mdmaterials.com/slides_commercial_straight&wave.html. Accessed 11-3-12.

Sponsoring Teacher: Maia Chaney
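Before building the NetLogo model, the heating claim can be sanity-checked with a quick lumped-capacitance calculation. The Python sketch below is an illustrative addition to this report, not part of the team's model; the solar, convection, and material numbers are rough assumed values, not measurements.

# Lumped-capacitance sketch: how fast a thin slide surface warms in the sun.
# All numbers below are rough assumptions, not measured project data.
SOLAR_FLUX = 800.0   # W/m^2 absorbed by the surface (sunlight x absorptivity)
H_CONV = 10.0        # W/(m^2*K), convective loss to the surrounding air
T_AIR = 30.0         # degrees C, ambient air temperature
LAYER = 0.003        # m, thickness of the surface layer a child touches

MATERIALS = {
    # name: (density in kg/m^3, specific heat in J/(kg*K))
    "steel": (7850.0, 490.0),
    "HDPE plastic": (950.0, 1900.0),
}

def surface_temp(density, c_p, minutes, dt=1.0):
    """Integrate dT/dt = (flux - h*(T - T_air)) / (rho * thickness * c_p)."""
    temp = T_AIR
    heat_capacity = density * LAYER * c_p  # J/(m^2*K) stored in the thin layer
    for _ in range(int(minutes * 60 / dt)):
        temp += (SOLAR_FLUX - H_CONV * (temp - T_AIR)) / heat_capacity * dt
    return temp

for name, (rho, cp) in MATERIALS.items():
    print(f"{name}: about {surface_temp(rho, cp, 15):.0f} C after 15 minutes")

In this toy model the warm-up rate is set by the product of density and specific heat (the volumetric heat capacity), which is one way to make the report's density hypothesis precise: the material with the smaller product approaches the scorching equilibrium temperature faster.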
<urn:uuid:a203e691-c92a-43ed-95d8-3ee59da1ffc4>
CC-MAIN-2013-20
http://mode.lanl.k12.nm.us/get_interim1213.php?team_id=33
2013-05-25T05:44:08
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705559639/warc/CC-MAIN-20130516115919-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.909962
622
4
4
Once you have chosen between a model I and model II anova, the next step is to test the homogeneity of means. The null hypothesis is that all the groups have the same mean, and the alternate hypothesis is that at least one of the means is different from the others.

To test the null hypothesis, the variance of the population is estimated in two different ways. I'll explain this in a way that is strictly correct only for a "balanced" one-way anova, one in which the sample size for each group is the same, but the basic concept is the same for unbalanced anovas.

If the null hypothesis is true, all the groups are samples from populations with the same mean. One of the assumptions of the anova is that the populations have the same variance, too. One way to estimate this variance starts by calculating the variance within each sample—take the difference between each observation and its group's mean, square it, then sum these squared deviates and divide by the number of observations in the group minus one. Once you've estimated the variance within each group, you can take the average of these variances. This is called the "within-group mean square," or MSwithin.

For another way to estimate the variance within groups, remember that if you take repeated samples of a population, you expect the means you get from the multiple samples to have a standard deviation that equals the standard deviation within groups divided by the square root of n; this is the definition of the standard error of the mean:

SEmean = swithin / √n

Remember that the standard deviation is just the square root of the variance, so squaring both sides of this gives:

Varmeans = Varwithin / n

so the second way of estimating the variance within groups is n×Varmeans, the sample size within a group times the variance of the group means. This quantity is known as the among-group mean square, abbreviated MSamong or MSgroup.

If the null hypothesis is true and the groups are all samples from populations with the same mean, the two estimates of within-group variance, MSwithin and MSamong, should be about the same; they're just different ways of estimating the same quantity. Dividing MSamong by MSwithin should therefore be around 1. This quantity, MSamong/MSwithin, is known as Fs, and it is the test statistic for the anova. If the null hypothesis is not true, and the groups are samples of populations with different means, then MSamong will be bigger than MSwithin, and Fs will be greater than 1.

To illustrate this, here are two sets of five samples (n=20) taken from normally distributed populations. The first set of five samples are from populations with a mean of 5; the null hypothesis, that the populations all have the same mean, is true.

[Figure: Five samples (n=20) from populations with parametric means of 5. Horizontal lines indicate sample means.]

The variance among the five group means is quite small; multiplying it by the sample size (20) yields 0.72, about the same as the average variance within groups (1.08). These are both about the same as the parametric variance for these populations, which I set to 1.0.

[Figure: Four samples (n=20) from populations with parametric means of 5; the last sample is from a population with a parametric mean of 3.5. Horizontal lines indicate sample means.]
The second graph is the same as the first, except that I have subtracted 1.5 from each value in the last sample. The average variance within groups (MSwithin) is exactly the same, because each value was reduced by the same amount; the size of the variation among values within a group doesn't change. The variance among groups does get bigger, because the mean for the last group is now quite a bit different from the other means. MSamong is therefore quite a bit bigger than MSwithin, so the ratio of the two (Fs) is much larger than 1.

The theoretical distribution of Fs under the null hypothesis is given by the F-distribution. It depends on the degrees of freedom for both the numerator (among-groups) and denominator (within-groups). The probability associated with an F-statistic is given by the spreadsheet function FDIST(x, df1, df2), where x is the observed value of the F-statistic, df1 is the degrees of freedom in the numerator (the number of groups minus one, for a one-way anova) and df2 is the degrees of freedom in the denominator (total n minus the number of groups, for a one-way anova).

Here are some data on a shell measurement (the length of the anterior adductor muscle scar, standardized by dividing by length) in the mussel Mytilus trossulus from five locations: Tillamook, Oregon; Newport, Oregon; Petersburg, Alaska; Magadan, Russia; and Tvarminne, Finland, taken from a much larger data set used in McDonald et al. (1991).

Tillamook   Newport   Petersburg   Magadan   Tvarminne
0.0571      0.0873    0.0974       0.1033    0.0703
0.0813      0.0662    0.1352       0.0915    0.1026
0.0831      0.0672    0.0817       0.0781    0.0956
0.0976      0.0819    0.1016       0.0685    0.0973
0.0817      0.0749    0.0968       0.0677    0.1039
0.0859      0.0649    0.1064       0.0697    0.1045
0.0735      0.0835    0.1050       0.0764
0.0659      0.0725                 0.0689
0.0923
0.0836

The conventional way of reporting the complete results of an anova is with a table (the "sum of squares" column is often omitted). Here are the results of a one-way anova on the mussel data:

                  sum of squares   d.f.   mean square      Fs          P
among groups      0.00452           4     0.00113         7.12    2.8×10^-4
within groups     0.00539          34     0.000159
total             0.00991          38

If you're not going to use the mean squares for anything, you could just report this as "The means were significantly heterogeneous (one-way anova, F4,34 = 7.12, P = 2.8×10^-4)." The degrees of freedom are given as a subscript to F. Note that statisticians often call the within-group mean square the "error" mean square. I think this can be confusing to non-statisticians, as it implies that the variation is due to experimental error or measurement error. In biology, the within-group variation is often largely the result of real, biological variation among individuals, not the kind of mistakes implied by the word "error."

Graphing the results

[Figure: Length of the anterior adductor muscle scar divided by total length in Mytilus trossulus. Means ± one standard error are shown for five locations.]

The usual way to graph the results of a one-way anova is with a bar graph. The heights of the bars indicate the means, and there's usually some kind of error bar: 95% confidence intervals, standard errors, or comparison intervals. Be sure to say in the figure caption what the error bars represent.

How to do the test

I have put together a spreadsheet to do one-way anova on up to 50 groups and 1000 observations per group. It calculates the P-value, does unplanned comparisons of means (appropriate for a model I anova) using Gabriel comparison intervals and the Tukey–Kramer test, and partitions the variance (appropriate for a model II anova) into among- and within-groups components.
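If you prefer a scripting language to a spreadsheet, the same test takes a few lines of Python. This example is an addition to the options listed here (it is not the handbook's spreadsheet); it uses SciPy's f_oneway and should reproduce Fs = 7.12 and P = 2.8×10^-4 on the mussel data.

# One-way anova on the Mytilus trossulus AAM data using scipy.stats.
from scipy import stats

tillamook  = [0.0571, 0.0813, 0.0831, 0.0976, 0.0817,
              0.0859, 0.0735, 0.0659, 0.0923, 0.0836]
newport    = [0.0873, 0.0662, 0.0672, 0.0819, 0.0749,
              0.0649, 0.0835, 0.0725]
petersburg = [0.0974, 0.1352, 0.0817, 0.1016, 0.0968, 0.1064, 0.1050]
magadan    = [0.1033, 0.0915, 0.0781, 0.0685, 0.0677, 0.0697, 0.0764, 0.0689]
tvarminne  = [0.0703, 0.1026, 0.0956, 0.0973, 0.1039, 0.1045]

f_stat, p_value = stats.f_oneway(tillamook, newport, petersburg,
                                 magadan, tvarminne)
print(f"F(4, 34) = {f_stat:.2f}, P = {p_value:.1e}")  # F = 7.12, P = 2.8e-04

Like the Excel and web-page options below, this does not do unplanned comparisons of means or partition the variance; it only computes the F-statistic and its P-value.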
Some versions of Excel include an "Analysis Toolpak," which includes an "Anova: Single Factor" function that will do a one-way anova. You can use it if you want, but I can't help you with it. It does not include any techniques for unplanned comparisons of means, and it does not partition the variance.

Several people have put together web pages that will perform a one-way anova; one good one is here. It is easy to use, and will handle three to 26 groups and 3 to 1024 observations per group. It does not calculate statistics used for unplanned comparisons, and it does not partition the variance. Another good web page for anova is Rweb.

There are several SAS procedures that will perform a one-way anova. The two most commonly used are PROC ANOVA and PROC GLM. Either would be fine for a one-way anova, but PROC GLM (which stands for "General Linear Models") can be used for a much greater variety of more complicated analyses, so you might as well use it for everything. Here is a SAS program to do a one-way anova on the mussel data from above.

data musselshells;
  input location $ aam;
  cards;
Tillamook 0.0571
====See the web page for the full data set====
Tvarminne 0.1045
;
proc glm data=musselshells;
  class location;
  model aam = location;
run;

The output includes the traditional anova table; the P-value is given under "Pr > F".

                           Sum of
Source             DF     Squares      Mean Square   F Value   Pr > F
Model               4     0.00451967   0.00112992       7.12   0.0003
Error              34     0.00539491   0.00015867
Corrected Total    38     0.00991458

If the data show a lot of heteroscedasticity (different groups have different variances), the one-way anova can yield an inaccurate P-value; the probability of a false positive may be much higher than 5 percent. In that case, the most common alternative is Welch's anova. This can be done in SAS by adding a MEANS statement, the name of the nominal variable, and the word WELCH following a slash. Here is the example SAS program from above, modified to do Welch's anova:

proc glm data=musselshells;
  class location;
  model aam = location;
  means location / welch;
run;

Here is the output:

Welch's ANOVA for aam
Source          DF    F Value   Pr > F
location    4.0000       5.66   0.0051
Error      15.6955

References:
Sokal and Rohlf, pp. 207-217.
Zar, p. 183.
McDonald, J.H., R. Seed and R.K. Koehn. 1991. Allozymes and morphometric characters of three species of Mytilus in the Northern and Southern Hemispheres. Mar. Biol. 111:323-333.

This page was last revised August 31, 2009. Its address is http://udel.edu/~mcdonald/statanovasig.html. It may be cited as pp. 130-136 in: McDonald, J.H. 2009. Handbook of Biological Statistics (2nd ed.). Sparky House Publishing, Baltimore, Maryland. ©2009 by John H. McDonald. You can probably do what you want with this content; see the permissions page for details.
<urn:uuid:b4918fa8-bacf-4bec-a162-7e5718ff8e87>
CC-MAIN-2013-20
http://udel.edu/~mcdonald/statanovasig.html
2013-05-25T05:58:45
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705559639/warc/CC-MAIN-20130516115919-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.909819
2,430
4.09375
4
Guest Author - Connie Krochmal

Pollination is critical for most cacti and succulents. A good crop of seeds is needed to ensure the survival of the species. Under most circumstances, cross-pollination is preferred by most plants in order to sustain a strong, healthy plant population, so self-pollination is usually an exception. It does occur, however, in a few cacti and succulents.

The frailea is one cactus that takes the route of self-pollination. Once considered to be an echinocactus, it is native to South America. These are rather small cacti with small blooms located on the top of the plant. They are unusual in that they can be pollinated without the flowers ever opening. Depending on the species, frailea flowers are often yellow, and they open when the weather is sunny. Rather than wait for cross-pollination, though, the blooms often don't open at all. The reason is that inside the blooms are the seeds, which were already produced through self-fertilization.

As an aid to pollination, some species of cacti and succulents have developed close relationships with their pollinators. These often rely on a few partner species to carry out pollination, and the plants can co-evolve with their animal helpers. This is true for yuccas: if the nocturnal yucca moths aren't present, yuccas are unable to produce seeds. In the case of the Joshua tree, scientists say that the plants in different areas of the Mojave Desert develop flowers for a specific species of yucca moth. There are two species of the moths within the areas, and the plant is co-evolving by adjusting the length of the flower canal so it can receive the pollen from the appropriate species of yucca moth.

This is a mutual relationship, for the female yucca moth needs the pollen for her larvae. The yucca moth collects the pollen and shapes it into a ball. Then she carries this pollen to a blossom of another yucca plant, where she lays an egg. She places the pollen in with the egg so the developing larvae will have a supply of food.

Some cacti rely on wild bees that are called cactus bees. As solitary insects, these don't live in colonies like honeybees. The cactus bee is able to transfer just enough pollen to achieve pollination, which is less likely to occur with other pollinators in the area. The cactus bee is responsible for pollinating the barrel cactus, which seems to be disliked by honeybees.
<urn:uuid:f488098d-77c4-4f09-89fc-a7916144ebc2>
CC-MAIN-2013-20
http://www.bellaonline.com/articles/art60574.asp
2013-05-25T05:51:02
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705559639/warc/CC-MAIN-20130516115919-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.956905
549
4.03125
4
How to Tackle Word Problems on the ACT

A word problem (also called a story problem or a problem in a setting) gives you information in words rather than in just equations and numbers. To answer an ACT word problem, you have to translate the provided information into one or more equations and then solve. You can solve some word problems fairly easily. Jotting down the numbers in the problem can be useful to help get you focused and moving in the right direction. The following example word problems show you how.

Seminar X brought in $700 in revenue and had 20 participants, each of whom paid the same amount. Seminar Y brought in $750 and had 15 participants, each of whom paid the same amount. How much more did each person pay for Seminar Y than Seminar X?

(E) The two seminars cost the same amount.

If you're not immediately sure how to proceed, jot down the numbers in an orderly fashion:

Seminar X: $700, 20 participants
Seminar Y: $750, 15 participants

This step only takes a moment and gets your brain moving. When you organize the information in this way, you may see that the next step involves division:

$700 ÷ 20 = $35 per person for Seminar X
$750 ÷ 15 = $50 per person for Seminar Y

Now you can easily see that Seminar Y cost $15 more than Seminar X, so the correct answer is Choice (C).

Jessica is in charge of stocking shelves at a supermarket. Today, she has already stocked 8 boxes that each contained 40 cans of soup, 12 boxes that each contained 24 cans of corned beef hash, and 4 boxes that each contained 60 cans of tuna. How many cans has Jessica stocked today?

Record the numbers in this question as follows:

8 boxes × 40 cans (soup)
12 boxes × 24 cans (corned beef hash)
4 boxes × 60 cans (tuna)

Now multiply these numbers across (with or without your calculator, as needed) to get the number of cans in each set of boxes:

8 × 40 = 320
12 × 24 = 288
4 × 60 = 240

Finish by adding the results: 320 + 288 + 240 = 848. Therefore, the correct answer is Choice (J).

Some word problems are much easier to solve when you draw a sketch to organize your thoughts. This technique is especially helpful if you're a visual learner. So if you like to draw, paint, or play video games, lead with your strength and try to find a visual way to express math problems whenever possible. The following example shows how to use a sketch to your advantage.

The 12:00 p.m. eastbound train left the station at a constant speed of 40 miles per hour. At 12:45 p.m., the next eastbound train left the station at a constant speed of 60 miles per hour. Assuming neither train stops along the way, how far apart will the two trains be at 2:00 p.m.?

(A) 5 miles
(B) 10 miles
(C) 12 miles
(D) 15 miles
(E) 18 miles

This problem is difficult to visualize, so sketching out the information can help you arrive at the correct answer:

[Figure: a sketch showing both trains' distances from the station at 2:00 p.m.]

This figure helps illustrate a way into the problem: By 2:00 p.m., the 12:00 p.m. train has traveled for 2 hours at 40 miles per hour, so it's 80 miles from the station. And the 12:45 p.m. train has traveled for 1 hour and 15 minutes at 60 miles per hour. The 1 hour accounts for 60 miles. The 15 minutes is a quarter of an hour, so this accounts for 15 miles. Thus, the 12:45 p.m. train is 75 miles from the station. As a result, the trains are 5 miles apart, making the correct answer Choice (A).
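If you want to double-check this kind of arithmetic, a few lines of code will do it. The snippet below is an added illustration, not part of the original article; it simply verifies the three worked answers.

# Check the three worked ACT answers.

# Seminars: price per person = revenue / participants.
per_x = 700 / 20    # Seminar X
per_y = 750 / 15    # Seminar Y
print(per_y - per_x)                 # -> 15.0, answer (C)

# Shelf stocking: boxes times cans per box, summed over products.
print(8 * 40 + 12 * 24 + 4 * 60)     # -> 848, answer (J)

# Trains: distance from the station at 2:00 p.m. for each train.
first = 40 * 2.0      # 40 mph for 2 hours
second = 60 * 1.25    # 60 mph for 1 hour 15 minutes
print(first - second)                # -> 5.0 miles apart, answer (A)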
<urn:uuid:c7f22177-d4f4-4a22-84dd-66924679e6e9>
CC-MAIN-2013-20
http://www.dummies.com/how-to/content/how-to-tackle-word-problems-on-the-act.navId-813771.html
2013-05-25T05:46:17
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705559639/warc/CC-MAIN-20130516115919-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.925726
725
4.25
4
A warm spring or hot spring is a place where warm or hot groundwater issues from the ground on a regular basis for at least a predictable part of the year, and is significantly above the ambient ground temperature (which is usually around 55~57°F or 13~14°C in the eastern United States). The water is heated by geothermal heat, or heat generated from the interior of the Earth. This occurs in various "hot spots", where magma or other mantle material is close to the surface. If the water becomes so heated that it builds steam pressure and erupts in a jet above the surface of the Earth, it is called a geyser; if the water only reaches the surface in the form of steam, it is called a fumarole; and if the water is mixed with mud and clay, it is called a mud pot. Warm springs are sometimes the result of hot and cold springs mixing but may also occur outside of geothermal areas, such as Warm Springs, Georgia (frequented for its therapeutic effects by polio-stricken U.S. President Franklin D. Roosevelt, who built the Little White House there). Because heated water can hold more dissolved solids and gases, warm and especially hot springs also often have a very high mineral content, containing everything from simple calcium to lithium, and even radium. Because of both the folklore and the proven medical value some of these springs have, they are often popular tourist destinations, and locations for rehabilitation clinics for those with disabilities. At least three United States national parks feature hot springs:
<urn:uuid:220488cd-8a13-413c-bc09-6797612b83d6>
CC-MAIN-2013-20
http://www.fact-index.com/h/ho/hot_spring.html
2013-05-25T05:30:06
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705559639/warc/CC-MAIN-20130516115919-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.954886
320
4
4
Women have played a unique role in the history of clothing manufacture in the United States. During the American Revolution, white women—and, in the case of slaveowning families, black female slaves—wove homespun clothing in order to sustain colonial boycotts on manufactured English goods. Female slaves were an integral part of the cotton cultivation and harvesting that produced the raw materials for textile production during the antebellum period. After emancipation, black women continued their central role in cotton production as sharecroppers, either working the fields alongside their husbands or providing the childcare and cooking that sustained their husbands and children in the fields. Black women also worked as laundresses throughout the South and in other areas, since it was one of the few paid occupations considered "low" enough for black women.

In 1840, lower- and middle-class women made up almost half of manufacturing workers in the nation, and two-thirds of those in New England.172 These young women moved to factory towns like Lynn, Massachusetts, and lived in same-sex dormitories with strict schedules and curfews; they usually sent most or all of their meager earnings home to support their families.

In the late nineteenth century, poor women—primarily immigrants who came to America from Southern and Eastern Europe—toiled in the garment industry, centered in New York City. They worked endless hours from home doing outwork (or piecework), in which they were paid by the number of items they sewed in a given time period. Advocates claimed that such work remunerated employees in accordance with their ability, while opponents argued that piecework encouraged quantity at the expense of quality and forced workers to the limits of endurance in order to make a living wage. Women doing outwork had to pay for their own supplies, a heavy investment, especially when it came to the purchase of an expensive sewing machine. Thus some of the earliest workers' protests centered on the heavy burden of having to buy those machines.

Other textile and apparel workers labored outside the home in unsanitary, dangerous sweatshops for low wages. Shoe and clothing manufacturers no longer apprenticed young artisans, teaching them how to make an entire shoe or shirt; instead workers were trained in how to sew or stitch a single piece (like a shirt collar, a shirtwaist, or a part of a shoe), and they were only paid for that piece, not for the sale of the final finished product. They had no marketable skills, and were therefore easily replaced and seldom promoted.

One sweatshop typical of the late nineteenth-century period was located at 23-29 Washington Place, at the northern corner of Washington Square East in Manhattan. It was called the Triangle Waist Company, a shirtwaist manufacturer. The building owners subcontracted their work out to men who then paid their workers any wage they chose, since there was, until 1938, no federally mandated minimum wage. This system enabled the owners of the factory to maintain a comfortable measure of ignorance as to the real workings of the sweatshops that met their rigid quotas, while workers were exploited in terrible working conditions with long hours (as many as 70 hours a week) and no overtime pay. Most of these workers had no choice; they were immigrants or otherwise unskilled or impoverished, and desperately needed the work. They were also afraid to unionize, or they simply did not have the requisite English language skills to do so.
At Triangle, shirtwaist makers toiled ten to twelve hours a day. Supervisors watched the workers constantly; if they talked, whistled, or sang on the job, were a few minutes late to work, or missed a Sunday shift, their pay was docked. As historian Alice Kessler-Harris has explained, "women's industries like clothing and textiles became centers of strike activity," in part because of the deplorable working conditions but also because of the advertising of the fashion industry itself. The garment producers had created their own paradox: they sought ever-increasing production output at the cheapest possible wages by exploiting their labor force as much as possible, yet in order to sell their products, they marketed a glamorous lifestyle filled with extravagance and ease. Female garment workers compared their unclean, unsafe, and exhausting working conditions with the luxurious world showcased all around them in radio advertisements and popular magazines. Kessler-Harris explains that "much of urban America had already begun to absorb the values of consumerism," and female workers "increasingly began to demand their share of material benefits."173

In 1909, 400 workers—mostly Jewish women—reached their limit and spontaneously walked out of the Triangle Shirtwaist factory. Organized by the International Ladies' Garment Workers Union (ILGWU) and with help from the progressive middle-class members of the Women's Trade Union League (WTUL), this spontaneous eruption of discontent grew into a general strike of shirtwaist makers, and the largest women's strike in American history. The strike lasted for thirteen weeks; 20,000 people ultimately walked out on their jobs, shutting down the industry. News of the strike was printed in English, Italian, and Yiddish, reflecting the diversity of the city and its workers. Thousands of activists marched arm-in-arm on City Hall, and dozens of women were beaten by hostile police, arrested, and even carted off to Blackwell's Island as workhouse prisoners. The workers won widespread public support, and as time dragged on, many small and medium shops settled with the union's wage and hour demands. The agreements varied, but generally they involved an employer's recognition of the workers' union, an arbitration process to determine piece rates, and an end to the policy of charging workers for needles, thread, and electricity. On 15 February 1910, the ILGWU called off the strike and declared victory, having won 320 separate signed contracts with various employers. Yet the union's success was hollow; there was no industry-wide agreement to enforce these individual shop contracts, and the 70 large manufacturers who dominated the garment industry—including Triangle Shirtwaist—had not made any agreements by strike's end.174

Just a year later, on 25 March 1911, a fire broke out in the supposedly "fireproof" Asch building, where the Triangle Waist Company occupied the eighth, ninth, and tenth floors. The shirtwaists that hung on lines above the workers' heads and the shirtwaist cuttings that littered the floors quickly ignited, allowing the blaze to spread rapidly through the building. As one reporter described, the sewing machines were "placed so closely together that there was hardly aisle room for the girls between them."175 The workers had been locked inside the factory, a standard practice that the owners employed—supposedly to prevent theft. This policy turned out to be deadly when the fire broke out, as it trapped the workers inside the burning building.
A few made it down the stairs, but flames soon blocked that exit route. There was one small fire escape in the corner of the building, but not everyone could make it down. Many women, desperate and suffocating from the fire's dense black smoke, stepped out on the ledge and plunged 100 feet to their deaths. Fire ladders extended up towards some of the women, but were not high enough to reach them. Bodies soon littered Washington Place and Greene Street. Others stayed in the factory and burned alive. Of 500 shirtwaist makers who reported to work that awful day, 146 died in the blaze, all within half an hour. As the New York Times reported the next day, most of the victims were "girls from 16 to 23 years of age... Most of them could barely speak English. Many of them came from Brooklyn. Almost all were the main support of their hard-working families."176

New Yorkers and Americans across the country were shocked by the tragedy. Only a handful of the Triangle workers were ILGWU members, and Triangle Shirtwaist was a non-union shop. Unions capitalized on this fact, using the disaster to illustrate their contention that an organized workforce could demand safer working conditions. Several unions, including the ILGWU, the WTUL, and the United Hebrew Trades, formed the Joint Relief Committee, which raised relief money—some $30,000—for fire survivors and their families. The ILGWU organized a rally to protest the unsafe working conditions that created the disaster, and the Women's Trade Union League collected testimonies and campaigned for an investigation of the working conditions at Triangle. Within a month, New York's governor appointed the Factory Investigating Commission, which held a series of hearings over five years and helped to pass groundbreaking factory safety legislation. Eight months after the fire, a jury acquitted building owners Max Blanck and Isaac Harris of any wrongdoing.

Despite subsequent gains in workplace safety legislation, sweatshops continue to plague the garment and textile manufacturing industries, both at home and abroad. The U.S. Department of Labor has conducted several studies in the twenty-first century, finding that 67% of Los Angeles garment factories and 63% of New York garment factories violate minimum wage and overtime laws. In Los Angeles, 98% of garment factories have workplace health and safety problems serious enough to lead to severe injuries or death. Abroad, the situation is even worse; workers in northern Mexico have seen manufacturers migrate even farther south, where wages are lower and labor protections often go unenforced. One 2002 Los Angeles Times article profiled a mother of five in a Mexican border town two hours southwest of San Antonio, a woman who earned about $55 a week sewing cloth bags at a local factory. Just two years earlier, the same woman had been able to earn twice that amount sewing jeans in a Levi's factory, but that plant had shut down and moved its jobs to Central America and Asia. The same article profiled Lisa Rahman, a 19-year-old garment factory worker in Dhaka, Bangladesh, who made fifteen cents an hour in 2002. She could only afford to eat chicken along with her usual meal of rice about once every two months; had never gone to school, ridden a bicycle, or seen a movie; and lived with her parents and two young relatives in one room amid the slums. She often worked from 8 a.m. until 10 p.m., seven days a week, and had done so since she was ten years old.
One of Lisa's most recent jobs had been making a Winnie the Pooh shirt that the Walt Disney Company sold in the United States for $17.99. In the wake of the negative publicity generated by the article, Disney's licensee subsequently suspended its work at that factory; licensees in today's globalized garment industry work much the same way as subcontractors did at the Triangle Shirtwaist Company. Disney remains at least partially removed from the manufacturing process—and such scandals—if the responsibility for working conditions and wages is passed on to its licensees.177

Anti-sweatshop activists invoke human rights as the central principle of their cause. Sweatshops, they argue, not only take the place of domestic jobs in the U.S., but they employ desperately poor people in working conditions that are often unsafe, unclean, and exploitative. "In the cold war," explained Michael Posner, head of the Lawyers Committee for Human Rights, "the main issue was how do you hold governments accountable when they violate laws and norms. Today the emerging issue is how do you hold private companies accountable for the treatment of their workers at a time when government control is ebbing all over the world, or governments themselves are going into business and can't be expected to play the watchdog or protection role."178 Activists also argue that Americans have the power to determine "what comes into our country." The Union of Needletrades, Industrial and Textile Employees (or UNITE, which merged with the Hotel Employees and Restaurant Employees International Union—or HERE—in 2004) has represented most American apparel workers in recent lawsuits. Jay Mazur, UNITE's retired president, compares sweatshop-produced goods to cocaine, and argues that if we can legislate against the latter, we can also legislate against the former.179

In April 2003, the 600-member American Apparel and Footwear Association called on the U.S. government to ban imports of apparel, textiles and footwear from the Asian country of Myanmar (a.k.a. Burma), where a military regime had seized power and was committing numerous human rights abuses, including the use of forced labor and child labor. Association President Kevin Burke explained why his group of retailers—who would benefit from the lower labor costs under such oppressive regimes—took the unprecedented step of calling for federal action: Myanmar's military junta had expressed a "total disdain" for basic human rights, and by allowing the country's rulers "to produce products and send them here, we're putting money in their pocket while they're taking money out of other people's pockets and abusing them."180 Thus the amount of net gain from cheap labor does not always prove worthwhile for retailers, if the bad press from countries like Myanmar provokes consumer boycotts of stores that carry the sweatshop-produced goods. Yet the lure of inexpensive goods and maximized profit margins remains very strong. Despite U.S. efforts to isolate the Myanmar government and ban new investment in the country, in 2002, the U.S. imported $350 million in goods from Myanmar, mostly apparel and textiles.181 Despite the manifest problems with sweatshop labor, observers like New York economist Michael M.
Weinstein argue that it would be "unconscionable to clamp down on sweatshops" that make foreign workers' lives "better than they would otherwise be."182 Weinstein also points out that "if we bar low-cost goods from abroad, it would be the poorest among us who depend on these products who would be punished most harshly."183 Constant pressure from investors to maximize profits, and from consumers to find good bargains, is likely to ensure that sweatshops will not go away any time soon.
<urn:uuid:6a268a94-6ce7-4b43-910b-8149a6cb46b2>
CC-MAIN-2013-20
http://www.shmoop.com/history-american-fashion/labor.html
2013-05-25T05:30:03
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705559639/warc/CC-MAIN-20130516115919-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.975443
2,848
4.40625
4
The use of Newton's second law for rotation involves the assumption that the axis about which the rotation is taking place is a principal axis. Since most common rotational problems involve the rotation of an object about a symmetry axis, the use of this equation is usually straightforward, because axes of symmetry are examples of principal axes. A principal axis may be simply defined as one about which no net torque is needed to maintain rotation at a constant angular velocity. The issue is raised here because there are some commonly occurring physical situations where the axis of rotation is not a principal axis. For example, if your automobile has a tire which is out of balance, the axle about which it is rotating is not a principal axis. Consequently, the tire will tend to wobble, and a periodic torque must be exerted by the axle of the car to keep it rolling straight. At certain speeds, this periodic torque may excite a resonant wobbling frequency, and the tire may begin to wobble much more violently, vibrating the entire automobile.
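One way to make this concrete: for any rigid body, the principal axes are the eigenvectors of its inertia tensor. The NumPy sketch below is an illustration added here (not part of the original page), using hypothetical point masses arranged like a wheel with a misplaced balance weight.

# Principal axes are the eigenvectors of the inertia tensor.
import numpy as np

# Hypothetical point masses (kg) and positions (m): a wheel whose heavy
# spot sits slightly out of the wheel's plane, like an unbalanced tire.
masses = np.array([1.0, 1.0, 1.0, 1.2])
points = np.array([[ 0.3,  0.0, 0.0 ],
                   [-0.3,  0.0, 0.0 ],
                   [ 0.0,  0.3, 0.0 ],
                   [ 0.0, -0.3, 0.05]])

# Inertia tensor about the origin: I = sum of m*((r.r)*Id - outer(r, r)).
inertia = np.zeros((3, 3))
for m, r in zip(masses, points):
    inertia += m * (np.dot(r, r) * np.eye(3) - np.outer(r, r))

moments, axes = np.linalg.eigh(inertia)  # principal moments and axes
print("principal moments:", moments)
print("principal axes (columns):\n", axes)

Because of the off-plane mass, the z-axis (the axle) is not among the eigenvectors, so spinning the wheel about the axle at constant angular velocity requires a periodic torque; this is exactly the out-of-balance tire described above.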
<urn:uuid:c09810ae-0f5d-4a9c-9435-dbc05b338343>
CC-MAIN-2013-20
http://hyperphysics.phy-astr.gsu.edu/HBASE/tdisc.html
2013-06-19T18:52:18
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368709037764/warc/CC-MAIN-20130516125717-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.94078
210
4.15625
4
In the Early Childhood Program children listen to music, learn songs, begin rhythm activities and creative movement. The elementary music curriculum provides time for listening to artists and discussion of the various styles of music, from modern day to centuries past. Understanding relationships among the arts, and relating music to history and culture is an important part of their music development. Elementary students are taught notation. Students develop critical listening skills, vocal flexibility and control, and knowledge of music theory and notation. Various exercises are done to help students develop an ear for music. Students are encouraged to try their hand at composing music as well. They learn to sing the notes of songs in tonic sol-fa which, in turn helps with playing the Irish Tin Whistle and other instruments. Students begin in grade one to learn to play the Irish whistle. Students learn musical expression through sacred dance. Band Lessons are an option for students in grades 4-6. Children participate in musical performances throughout the school year.
Animal fossils are usually the remains of hard structures – bones and shells that have been petrified through enormous pressures acting over millions of years. But not all of them had such hard beginnings. Some Chinese fossils were once the embryos of animals that lived in the early Cambrian period, some 550 million years ago. Despite having the consistency and strength of jelly, the embryos have been exceptionally well preserved, and the structure of their individual cells, and even the compartments within them, has been conserved in all its beautiful, minute detail. They are a boon to biologists. Ever since the work of Ernst Haeckel in the 19th century, comparing the development of animal embryos has been an important part of evolutionary biology. Usually, scientists have to piece together the development of ancient animals by comparing their living descendants. But the preserved embryos give the field of embryology its very own fossil record, allowing scientists to peer back in time at the earliest days of some of the earliest living things. But how did these delicate structures survive the pressures of the ages? Elizabeth Raff from Indiana University has a plausible answer. Her experiments suggest that fossil embryos are the work of colonies of ancient bacteria, which grew over the dead clumps of cells and eventually replaced their organic matter with minerals. They are mere casts of the original embryos. Under a range of different conditions, Raff watched decaying sea urchin embryos (which are roughly similar to the fossil ones in both size and shape). Under normal conditions, she saw that dead embryonic cells destroy themselves within a matter of hours, through the actions of their own enzymes. To produce fossil embryos, this self-destruction is the first hurdle to clear, and Raff found that it can be cleared quite simply by placing the embryos in oxygen-less environments. When the fossilised embryos died, some 550 million years ago, they must have sunk into oxygen-deprived mud, which staved off their destruction long enough for bacteria living on their surface to take hold. Raff found that dead sea urchin embryos are rapidly colonised by bacteria that form three-dimensional communities called biofilms. They construct these communities using the embryo's own structures as scaffolding, and the bacteria replicate the structures of the cells they consume, right down to the smallest feature. Indeed, the fossil embryos still bear traces of these ancient bacteria. They have long, thread-like imprints that strongly resemble the shapes made by bacterial groups growing over sea urchin embryos in oxygen-less water. These biofilms stimulated the growth of minerals, and Raff saw that needle-shaped crystals start forming around the bacteria within a week of the embryo's death. The crystals are mainly made of aragonite, a type of calcium carbonate typically found in the shells of molluscs and hard enough to withstand the pressures of time. By decaying the embryos, the bacteria lower the pH of the surrounding water, which creates the right conditions for the growth of these crystals. Raff's experiments with living embryos don't by any means give certain answers about the origins of the fossil embryos, but they do at least provide a plausible origin story.
Indeed, the degree of degradation in her sea urchin embryos mirrors that seen in the fossils – in some, little but the outer layer was preserved, while in others even the inner workings were sealed in glorious detail (even though the multiple steps put a question mark over the accuracy of the final structures). The study suggests that the beautiful fossil embryos are not in fact preserved versions of the original cells, but uncanny facsimiles created by bacteria. They may have been created through a two-step process, where each layer acted as a base for sculpting the next one – animal to bacterial, and bacterial to mineral. Reference: E. C. Raff, K. L. Schollaert, D. E. Nelson, P. C. J. Donoghue, C.-W. Thomas, F. R. Turner, B. D. Stein, X. Dong, S. Bengtson, T. Huldtgren, M. Stampanoni, Y. Chongyu, R. A. Raff (2008). Embryo fossilization is a biological process mediated by microbial biofilms. Proceedings of the National Academy of Sciences, 105 (49), 19360-19365. DOI: 10.1073/pnas.0810106105
The Common Core State Standards (CCSS) are changing curriculum planning and classroom instruction in many ways. One significant change involves the difficulty levels of text. In the past, standards documents referred to proficiency with grade-level texts; however, grade level was not defined. The CCSS represent a departure from this practice. Standard 10 of the CCSS specifically calls for increasing levels of text complexity across the grades to ensure students' proficiency with the texts of college and career. This standard affects all students, but it represents a special challenge to English Learners. Many educators ask what increases in text complexity mean for English Learners, many of whom struggle with their current texts. What, then, is text complexity, and how can English Learners achieve success with this standard? First, an understanding of what makes texts complex is in order. Archaic language, lengthy sentences, new topics, unusual writing styles, unique text structures—these features and many others affect the complexity, and hence the comprehensibility, of text. However, the foremost challenge for English Learners is a text's vocabulary (Pasquarella, Gottardo, & Grant, 2012). The syntax of a new language does present obstacles to comprehension; however, vocabulary is the most significant hurdle when learning a second language. Two issues of Text Matters are devoted to the topic of complex text and English Learners. This first issue describes support for English Learners in developing strategies and knowledge about the vocabulary of complex texts. The second issue of Text Matters presents guidelines for selecting appropriate texts that move English Learners up the staircase of text complexity. Both topics depend on teachers' understanding of how English words work. A small group of words—4,000 simple word families (e.g., help, helps, helping, helped, helper)—accounts for about 90% of the words in most texts. This vocabulary forms the core of any text, even complex ones. In the exemplars of complex texts listed in Appendix B of the CCSS, the core vocabulary accounts for 93% of the Grades 2–3 exemplars, 92% of those in Grades 4–5, 90% of those in Grades 6–8 and in Grades 9–10, and 88% of those in Grade 11–College and Career Ready. This Text Matters focuses on the extended vocabulary—the 300,000 or more words that account for approximately 10% of the words in texts. [Readers who are interested in learning more about the core vocabulary can read Hiebert (2012, 2013).] Unlike the words in the core vocabulary, many words in the extended vocabulary appear fewer than once per million words of text. Consequently, they are often described as rare. When rare words do appear in text, they are often essential to the content and quality of texts. In narrative texts, words in the extended vocabulary often describe the traits of characters and the nuances of plots. In informational texts, they convey specialized terms in chemistry, entomology, and many other topics. The percentage of rare words in a text can vary considerably, even among complex texts (see Table 1). An increase of only one or two percent in rare vocabulary can make texts considerably more complex. When viewed from the vantage point of a thousand-word text, a rate of 8% means the text has about 80 rare words, while a rate of 10% means that a text has about 100 rare words; the short sketch below illustrates this arithmetic.
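A minimal sketch of that arithmetic, assuming a toy core-word list and a crude tokenizer; this is a stand-in for, not a reimplementation of, the word-family lists and profiling tools discussed later in this article.

```python
# Toy estimate of the share of a text that falls outside a core vocabulary.
# CORE_WORDS is a tiny stand-in for the 4,000 simple word families; a real
# analysis would use a full word-family list and better tokenization.
import re

CORE_WORDS = {"the", "a", "and", "he", "she", "it", "was", "to", "of",
              "in", "her", "him", "laughed", "assured"}

def rare_word_rate(text, core=CORE_WORDS):
    tokens = re.findall(r"[a-z']+", text.lower())
    rare = [t for t in tokens if t not in core]
    return len(rare) / len(tokens), rare

rate, rare = rare_word_rate("He laughed and assured her it was a formality")
print(f"{rate:.0%} rare: {rare}")     # 11% rare: ['formality']

# At scale: a thousand-word text at an 8% rare rate has about 80 rare
# words; at 10%, about 100.
print(0.08 * 1000, 0.10 * 1000)       # 80.0 100.0
```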
An additional two rare words in every 100 words can increase the challenge of a text. The next Text Matters gives guidelines on appropriate rates of rare words for English Learners at different developmental levels. In this issue, the focus is on the extended vocabulary and ways teachers can support English Learners in understanding this vocabulary. To help English Learners build strong vocabularies, teachers need to focus on general principles and strategies of word learning. They also need to conduct short lessons and discussions about the vocabularies of specific texts. This Text Matters focuses on the rare vocabulary of literary texts, not that of content-area texts. Content-area standards are explicit about the topics and the concepts underlying those topics (Marzano, 2004). Because concepts are represented by vocabulary, the critical words in a physics unit are clear within standards and curricula (e.g., magnetic attraction, repel, polarity). This vocabulary becomes part and parcel of activities and discussion. Words such as magnetic attraction, for example, are used repeatedly as students engage in inquiry with magnets. Such clarity is not evident in English/Language Arts standards, where literary texts are the focus (Marzano, 2004). Literary texts often present rare words that are unique to a particular story. Each text has its own rare words. Thus, students cannot become proficient in the meaning of these words through repetition. These rare words, however, represent specific elements of stories. An author of a literary text chooses a word intentionally from the extended vocabulary to communicate an action, a social relationship, the feature of a place or event, or the feelings and attitudes of characters. In Geeks, for example, Katz (2000) could have used numerous words to describe the condition of the furniture within the apartment of his protagonists. However, by describing the beanbag chair as moldering, Katz gives readers a clear idea of the condition of the apartment. Just as lessons in mathematics build understanding for future problem solving, considerable time needs to be spent in developing the linguistic foundation of English vocabulary for the future reading of complex literary texts. This pattern of single-appearing rare vocabulary does not appear only in narratives, though. It also appears in magazine articles on topics of science, history, and civics to describe traits, features, interactions, and contexts. This style also extends to full-length texts with a literary stance about technology and science (e.g., Geeks) and history and political science (e.g., A Night to Remember). English Learners need to become adept with such vocabulary for a variety of reasons, including the heavy presence of literary texts on assessments. Analyses of two Grade 6–8 texts from the CCSS's Appendix B exemplar list—Geeks and A Night to Remember (NTR)—demonstrate the two types of vocabulary instruction needed for proficient reading of literary texts: (a) general principles/strategies and (b) lessons with specific texts. Both texts typify the literary texts offered as CCSS exemplars. Both have higher levels of extended vocabulary than other Grade 6–8 exemplars (see Table 1), which is why these texts were selected for illustration in this Text Matters. Analyses of these two texts show that, even in these vocabulary-dense texts, most words fall into particular groups—groups that share underlying features.
Table 1

| Content Area | Text | Core Vocabulary (%) | Extended Vocabulary (%) |
| --- | --- | --- | --- |
| Social Studies | A Night to Remember | 92.5 | 7.5 |
| Social Studies | Narrative of the Life of Frederick Douglass | 93 | 7 |
| Literature | Adventures of Tom Sawyer | 90 | 10 |
| Literature | Dark is Rising | 95 | 5 |

Table 2 shows the types of words in these two texts. Although there are numerous monosyllabic words, students typically recognize these less-complex words more readily than they do multisyllabic words. Without instruction on multisyllabic words, however, students can develop dysfunctional word-recognition strategies. This is why, beginning in the late primary grades, multisyllabic words should receive the lion's share of vocabulary instruction.

Table 2

| | A Night to Remember | Geeks |
| --- | --- | --- |
| Total Rare Words | 180 (11 words per 100) | 157 (17 words per 100) |
| Multisyllabic Words: Proper Names | 23% | 8% |
| Multisyllabic: Compound Words | 16% | 16% |
| Multisyllabic: Remaining | 8% of rare words (1 word per 100 of entire text) | 12% of rare words (2 words per 100 of entire text) |

The multisyllabic words in texts such as Geeks or NTR are grist for lessons on four types of words in literary texts. These types of vocabulary contribute to the meaning of the text, but they are not necessarily complex in content. Students who are not prepared to deal with this vocabulary will find literary texts difficult. Following are the four types of words in literary texts and strategies for helping students understand them.

Proper names. Stories and magazine articles are typically replete with proper names, many of which are difficult to pronounce (e.g., Boise in Geeks). Students need to learn that capitalized words within sentences often are proper names and that accurate pronunciation of these words is not a priority.

Picturable words. Research has shown that concrete words that can be represented in pictures are learned more easily than abstract words (Strain, Patterson, & Seidenberg, 2002). Using pictures to create a context for a new concrete word (e.g., smelting) or to support English Learners in relating a known concept with the English label (e.g., necklace) are especially effective ways to support the vocabulary development of English Learners. Pictures that illustrate these words will support students' recognition much more effectively than extended discussions. Sources for images (with certain copyright restrictions but free and downloadable) include Flickr and Wikimedia Commons.

Compound words. Compounding of two root words is a primary way in which many new words are added to English. Some compound words in Geeks illustrate how new words are generated, especially with inventions in fields such as digital technology: software, motherboard, network, upgrade, and playlist. Compound words typically have a connection to the root words within them, but they often have idiomatic meanings. As a consequence, compound words with the same headword (e.g., up in upgrade, uproar, uptown, uptight, upkeep) cannot be taught in the same way as words from the same morphological family (e.g., suspicion, suspiciously, unsuspicious). The upside of compound words is that most headwords (and often the second word as well) belong to the core vocabulary. Once students learn to use the headword to predict meanings of compound words, their word-recognition vocabularies expand considerably.

Morphological families. Becoming facile with inflected endings and affixes is also a critical part of preparing students for reading complex text.
Among the 300,000 words of the extended vocabulary, most belong to morphological word families with an average of approximately four members. Lessons on the relationships among members of a morphological family are essential to developing the expectation that words are connected to one another structurally (e.g., formality, formal, formalize, informal, informally). Such lessons provide students with opportunities to use words in meaningful ways, not simply to memorize the meanings of suffixes and prefixes. Even when students have been taught strategies to recognize a high percentage of the words in complex texts, a group of words remains to be learned (see Table 2). These words need individual study. Lessons on the vocabulary of specific texts have two dimensions: (a) an overview of the task and (b) instruction on specific words.

An overview of the task. The CCSS refers to the scaffolding of complex texts for challenged readers, but the forms of this scaffolding are not described. Frequently, scaffolding has been interpreted as reading a text for students or leading students through a guided reading of the text. However, students need to take responsibility for reading, including initial reads of texts, if they are to improve their comprehension. But teachers also need to give students a realistic view of the challenges of texts by identifying core (in black) and extended (in gray) vocabulary in samples of text, such as the following:

He wasn't just a kid at a computer, but something more, something new, an impresario and an Information Age CEO, transfixed and concentrated, almost part of the machinery, conducting the digital ensemble that controlled his life. (Katz, 2000, p. 19)

Four days before, she had playfully teased him for putting a life belt in her stateroom, if the ship was meant to be so unsinkable. At the time he had laughed and assured her it was a formality … she would never have to wear it. (Lord, 1955, p. 22)

One of several text analysis schemes can be used to distinguish between core and extended vocabulary (e.g., Laurence Anthony's AntWordProfiler software). Next, teachers should address the words for which previously taught strategies should be applicable: proper names, picturable words, compound words, and morphological families. Teachers can't review all of the words in these categories, but they can give students examples of words of different types within the text.

Instruction of specific words in extended clusters. The remaining words become the grist for instruction. In NTR, these remaining words account for approximately one rare multisyllabic word per 100 words of text, including words such as adamant, formality, solicitous, and suspiciously. In Geeks, the number of rare multisyllabic words per 100 is two, including words such as alumnus, ensemble, impresario, transfixed, and contemplated. A handful of the most critical words—those that are fundamental to the meaning of the text—can be introduced before students read the text. For example, the unsinkable reputation of the Titanic led to particular stances on the part of passengers (e.g., bewildered, protested, suspicious) as well as on the part of the crew (e.g., solicitous, reassuring, adamant). Short lessons on critical vocabulary should also follow the initial reading of a text. In addition, discussions of critical vocabulary are an essential part of the close reading of text.
For example, in the segment from Geeks above, the author’s use of the phrase “conducting the digital ensemble” merits discussion, as do words such as dispensable. Would the use of disposable, superfluous, unnecessary, or useless have served the same function as dispensable? A final post-reading vocabulary activity asks students to record critical words and their semantic connections (e.g., the above-mentioned synonyms of dispensable) and morphological derivatives. For English Learners, such records are important as references for writing and as records of what they have learned. A rich vocabulary and strategies that permit students to read texts with new words are essential to comprehending complex text. For English Learners, a rich vocabulary and strong strategies result from intentional instruction on the part of their teachers. This intentional instruction is not a one-shot occurrence but rather a sustained effort that focuses on categories of words (e.g., compound words, picturable words) and also on words within specific texts, especially words which are part of extended networks of words. Common Core State Standards Initiative (2010). Common Core State Standards for English language arts & literacy in history/social studies, science, and technical subjects. Washington, DC: CCSSO & National Governors Association. Hiebert, E.H. (2013). Core vocabulary and the challenge of complex text. In S. Neuman & L. Gambrell (Eds.), Reading Research in the Age of the Common Core State Standards. Newark, DE: IRA. [Pre-publication version of the chapter is available at http://textproject.org/library/articles/core-vocabulary-and-the-challenge-of-complex-text/] Hiebert, E.H. (2012). Core vocabulary: The foundation for successful reading of complex text, Text Matters 1.2. Santa Cruz, CA: TextProject. Retrieved from http://textproject.org/professional-development/text-matters/core-vocabulary/ Katz, J. (2000). Geeks: How Two Lost Boys Rode the Internet out of Idaho. New York, NY: Broadway Books. Lord, W. (1955). A night to remember. New York, NY: Bantam Books. Marzano, R.J. (2004). Building background knowledge for academic achievement: Research on what works in schools. Alexandria, VA: ASCD. Strain, E., Patterson, K., & Seidenberg, M.S. (2002). Theories of word naming interact with spelling-sound consistency. Journal of Experimental Psychology: Learning, Memory, & Cognition, 28, 207–215.
An important concept that comes from sequences is that of series and summation. Series and summation describes the addition of the terms of a sequence. There are different types of series, including arithmetic and geometric series. Series and summation follows its own set of notation that is important to memorize in order to understand homework problems. So a series is just the summation of a sequence. A sequence is just a bunch of numbers in a row; a series is what happens when we add all those numbers together. Okay? So before me I have a general term for a sequence: a sub n is equal to n squared minus 1. First we're asked to find the first four terms. In order to find the first term, we find a sub 1, which happens when we plug in 1: 1 squared minus 1, that's just 0. So our first term is going to be 0. To find the second term we plug in 2: a sub 2 is equal to 2 squared, 4, minus 1, which is going to give us 3. For the third term we repeat: a sub 3 is 3 squared, 9, minus 1, is 8. And the fourth term, a sub 4: plug in 4, 4 squared, 16, minus 1, is 15. So this right here is a sequence. It's 4 numbers written in order with commas in between. It's just a collection of numbers. Find the sum of those first 4 terms. So basically we already found the 4 terms; all we have to do is add them together. 0 plus 3 is 3, plus 8 is 11, plus 15 is 26. So 26 is then the series, okay? The way I remember it is: series is a shorter word, therefore your answer should be shorter, one number. A sequence is a longer word; it's going to be a collection of data, a collection of numbers. So basically all the series is is a summation of the sequence.
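For reference, the transcript's worked example in standard summation notation:

```latex
a_n = n^2 - 1, \qquad a_1 = 0,\ a_2 = 3,\ a_3 = 8,\ a_4 = 15,
\qquad \sum_{n=1}^{4} a_n = 0 + 3 + 8 + 15 = 26.
```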
The Role of an Advocate
By Georgina Rayner

The role of an advocate may be vital at some point in our lives for obtaining and maintaining the necessary changes and opportunities for our children and ourselves. By definition, advocacy involves speaking on behalf of a person or persons, or yourself, to ensure that their rights and needs are recognized. The word "advocacy" comes from Latin and means 'to add a voice'. The purpose of advocacy is to assist in securing the rights of one's self or another. We all need to develop advocacy skills in order to ensure that our needs are met and our rights are respected.

Tips to be an Effective Advocate
- Believe in yourself - one person can do a lot.
- Be organized!
- Identify unmet need(s) or right(s). What is the problem? Listen carefully to what the individual's or family's concerns are and help them to focus on the issue(s).
- Research the law for understanding and how it impacts the case.
- Be systematic in your approach: know your resources and your allies. Assess the nature of the barriers, the resistance you might meet, and/or the opposition. Knowing what you are up against will sharpen your strategic thinking about what kind of pressure is possible and from where.
- Know and build your case.
- Identify all the key players.
- Narrow down the problem.
- Develop a plan or map of where everyone is on the issue.
- Do your homework.
- Document the facts.
- Keep careful notes and logs of contacts and calls.
- Listen carefully.
- What are the desired outcomes? What is acceptable? What is unacceptable?
- Identify what conditions need to be developed or altered in order for change to take place.

Be assertive and communicate well. Note: an assertive person clearly states a point of view but takes into account other points of view as well, then works for the right outcome cooperatively.

Analyze possible consequences. Develop a back-up plan.
- What is the possible fallout?
- What historically has happened in other advocacy situations related to this issue?
- What is the worst-case scenario? Can the family/child live with it?

Remember the process is about the needs of the child. Parental egos and/or your personal preferences should not influence the process or outcome.
- Look at alternate strategies to achieve the same goal.
- Be careful what you ask for, as you might get it.
- Make sure you have plan B in case it is needed.

Do not accept that nothing will happen or change. Provide a process so that the individual/family can undertake their own advocacy the next time. Provide feedback to the key players. Analyze your own process and look to see what you could have done better.

Be respectful of your clients and their ideals. They may have different cultures, beliefs, and ideas than you. If you think that you cannot act independently because of your cultural, ethical, moral, or political beliefs, respectfully decline the case and send them to someone who can assist them.
Acoustic Nerve - the eighth cranial nerve, the nerve concerned with hearing and balance.
Amplitude - the height of a sound wave, associated with the loudness of a sound.
Ampulla - the swelling at the base of each semicircular canal, containing sensory cells which detect movement of the fluid within the canals.
Anvil - one of three bones of the middle ear that help transmit sound waves from the outer ear to the cochlea. Also called the incus.
Assistive Device - any device other than a hearing aid which helps the hearing impaired.
Audiogram - a graph depicting the ability to hear sounds at different frequencies.
Audiologist - a person trained in the science of hearing and hearing impairments, who can administer tests and help in the rehabilitation of the hearing impaired.
Audiometry - the measurement of hearing acuity.
Auditory Nerve - the nerve carrying electrical signals from the inner ear to the base of the brain.
Auricle - the outer flap of the ear. Also called the pinna.
Basilar Membrane - a thin sheet of material which vibrates in response to movements in the liquid that fills the cochlea.
Bone Conduction - the conduction of sound waves through reverberations of the mastoid bone to the inner ear.
Bony Labyrinth - the cavity in the skull which contains the inner-ear structures.
Brainstem Testing - measures hearing sensitivity without requiring responses from very young patients or persons who are unable to communicate.
CC (Closed Captioned) - a broadcast television program that includes a signal which produces descriptive subtitles on the screen. Requires a decoder.
Cerumen - ear wax.
Cochlea - shaped like a snail's shell, this organ of the inner ear contains the organ of Corti, from which eighth-nerve fibers send hearing signals to the brain.
Cochlear Implant - a device that replaces part or all of the function of the inner ear.
Conductive Hearing Loss - hearing loss caused by a problem of the outer or middle ear, resulting in the inability of sound to be conducted to the inner ear.
Congenital Hearing Loss - hearing loss that is present from birth, which may or may not be hereditary.
Cortex - the surface of the brain where sensory information is processed.
Crista - sensory cells within the semicircular canals which detect movement of the fluid within the canals.
Cupola - the jelly-like covering of the sensory hairs in the ampullae of the semicircular canals, which responds to movement in the surrounding fluid and assists in maintaining balance.
Cycles (per second) - measurement of frequency, or a sound's pitch.
Decibel - measurement of the volume or loudness of a sound.
Ear Canal - the short tube which conducts sound from the outer ear to the eardrum.
Eardrum - the membrane separating the outer ear from the middle ear; the tympanum.
Eustachian Tube - the tube running from the nasal cavity to the middle ear. Helps maintain sinus and middle-ear pressure, protecting the ear from pressure changes.
Frequency - the number of vibrations per second of a sound.
Hammer - one of three bones of the middle ear that help transmit sound waves from the outer ear to the cochlea. Also called the malleus.
Impedance Audiometry - a test of middle-ear function that measures how sound energy is transmitted through the eardrum and ossicles.
Incus - one of three bones of the middle ear that help transmit sound waves from the outer ear to the cochlea. Also called the anvil.
Inner Ear - the portion of the ear, beginning at the oval window, which transmits sound signals to the brain and helps maintain balance. Consists of the cochlea and the vestibular apparatus.
Labyrinthitis - a viral infection in the vestibular canal which may cause vertigo and temporary hearing loss.
Macula - within the organs of balance, an area containing sensory cells which measure head position.
Malleus - one of three bones of the middle ear that help transmit sound waves from the outer ear to the cochlea. Also called the hammer.
Mastoid - the bone in which the entire ear mechanism is housed. Part of the larger temporal bone.
Meniere's Disease - a condition resulting from fluid buildup in the inner ear, leading to episodes of hearing loss, tinnitus and vertigo.
Middle Ear - the portion of the ear between the eardrum and the oval window which transmits sound to the inner ear. Consists of the hammer, anvil and stirrup.
Nerve Loss Deafness - a term used to differentiate inner-ear problems from those of the middle ear.
Organ of Corti - the organ, located in the cochlea, which contains the hair cells that transmit sound waves from the ear through the auditory nerve to the brain.
Ossicles - the collective name for the three bones of the middle ear: hammer, anvil and stirrup.
Otitis Media - infection of the middle ear.
Otoliths - stone-like particles in the macula which aid in our awareness of gravity and movement.
Otology - the branch of medicine concentrating on diseases of the ear.
Otosclerosis - a conductive hearing loss caused when the middle ear no longer transmits sound properly from the eardrum to the inner ear.
Outer Ear - the external portion of the ear which collects sound waves and directs them into the ear. Consists of the pinna (auricle) and the ear canal; separated from the middle ear by the eardrum.
Oval Window - the membrane that vibrates, transmitting sound into the cochlea. Separates the middle ear from the inner ear.
Perilymph - the watery liquid that fills the outer tubes running through the cochlea.
Pinna - the outer, visible part of the ear; also called the auricle.
Presbycusis - a hereditary sensory-neural hearing loss that comes with aging.
Saccule - an inner-ear area which contains some of the organs that measure position and gravity.
Semicircular Canals - curved tubes containing fluid, movement of which makes us aware of turning sensations as the head moves.
Sensorineural Hearing Loss - hearing loss resulting from an inner-ear problem.
Sound Wave - alternating low- and high-pressure areas moving through the air, which are interpreted as sound when collected in the ear.
Stapes - one of three bones of the middle ear that help transmit sound waves from the outer ear to the cochlea. Also called the stirrup.
Stirrup - one of three bones of the middle ear that help transmit sound waves from the outer ear to the cochlea. Also called the stapes.
Tectorial Membrane - in the organ of Corti, a thin strip of membrane in contact with the sensory hairs; sound vibrations move it, producing nerve impulses.
Tinnitus - ringing or buzzing in the ears.
TTY (phone device) - allows dialog at any distance: words typed into a TTY are converted to phone signals and appear, or are printed, as words on a receiving TTY machine.
Tympanum - the membrane separating the outer ear from the middle ear; the eardrum.
Vertigo - the sensation of moving or spinning while actually sitting or lying still.
Vestibular Apparatus - the part of the inner ear concerned with maintaining balance.
Wave Length - the distance between the peaks of successive sound waves.
White Noise - a sound, such as running water, which masks all speech frequencies.
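As a quick illustration of the Frequency, Wave Length, and Decibel entries above, here is a small sketch; the speed of sound, the 20-micropascal reference pressure, and the example values are assumptions, not part of the glossary.

```python
# Tie together the wave terms defined in the glossary.
import math

SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees Celsius

def wavelength(frequency_hz):
    """Wave length: distance between peaks of successive waves = speed / frequency."""
    return SPEED_OF_SOUND / frequency_hz

def sound_pressure_level(pressure_pa, reference_pa=20e-6):
    """Decibels relative to the approximate threshold of human hearing."""
    return 20 * math.log10(pressure_pa / reference_pa)

print(wavelength(440.0))           # ~0.78 m for concert A (440 Hz)
print(sound_pressure_level(0.02))  # 60.0 dB, roughly conversational speech
```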
In September 1609, when Henry Hudson guided his ship, De Halve Maen, through the narrows dividing present-day Staten and Long Islands, he was not the first European navigator to sail into what we know today as New York Bay. The Italian explorer Giovanni da Verrazzano came in 1524; the Frenchmen Jean Alfonse de Saintonge and Jean Cossin made separate voyages over the next half century. But it was Hudson's arrival that established a Dutch claim to the region and changed its history for all time. Hudson, an English mariner in Dutch employ, had left Amsterdam in April intending to explore the Arctic seas north of Norway for a possible eastern route to the rich trade of the Indies. When ice floes barred the way, his eighty-five-foot vessel and its crew of sixteen mariners turned to the west and journeyed five thousand miles to North America. For weeks they navigated southwards within sight of the shore, looking for an estuary or bay that might indicate the beginnings of a western route to Asia. By August they had reached Long Island and, after a few days exploring the coast around Sandy Hook, Hudson set off up the broad, deep, and promising river that now bears his name. Although the intrepid captain failed to locate a route to Asia—his navigation of the Hudson ended at the site of modern-day Albany—he had discovered a territory rich in timber and furs that would please his Dutch financiers back in Amsterdam. Hudson's voyage took place at a critical moment in Atlantic history, and, in particular, for the challenge of northern European states to the power of Spain. Weakened by the loss of the Armada to England in 1588 and by relentless attacks on its New World gold fleets, Spain was plagued by financial crises that pushed it to the edge of collapse. The Spanish had also been unable to put down a revolt by their northern Dutch provinces, eight of which had declared their independence and established a new Dutch Republic. In April 1609, after decades of intermittent and inconclusive hostilities, the two sides agreed to a truce, allowing Dutch merchants to back voyages such as Hudson's without fear of Spanish attack and financial ruin. Once news of Hudson's discovery reached Holland, new expeditions arrived to trade beads, knives, and hatchets for furs with the Munsee and Lenape Indians. These private traders established a fortified trading post, Fort Nassau, at the site of present-day Albany and charted the coastline and river inlets between Cape Cod and the Delaware Bay. In 1614, one of them, Adrian Block, produced the first map of the territory that he named New Netherland. The following year, Block and others formed the New Netherland Company and secured a three-year monopoly of the region's trade from the States General, the governing body of the Dutch Republic. New Netherland, like other early American colonies, was a state-sponsored venture, the aim of which was to realize a profit and serve the emerging Dutch state by eliminating competition from other trading ports and capturing more of the Indies from Portugal and Spain. In 1621, the States General drew up a charter for a new West India Company, granting it a monopoly of all the Dutch Atlantic trade with West Africa, Brazil, the Caribbean, and North America. The Company was a joint-stock venture, financed by government investment and private capital to the tune of more than seven million guilders. Like its East Indian counterpart, it was managed by the shareholders, who met in five regional chambers.
The company enjoyed some success in its early years, establishing trading posts on both sides of the Atlantic, dealing in slaves on the coast of Africa, as well as gold, ivory, and sugar in the Caribbean, Suriname, and the northeast coast of Brazil. New Netherland was only part of the Company’s concern, and a relatively minor one at that. In the summer of 1624, the Company established a small settlement under the command of Cornelis Jacobsz May, the first provincial director, transporting some thirty families to what is now Governor’s Island. More colonists arrived the following year, and the settlement was relocated a short distance across the bay to the equally secure and more commodious lower tip of Manhattan, establishing New Amsterdam, later New York City. To secure the settlement, Peter Minuit, then the provincial director, offered sixty guilders worth of blankets, kettles, and knives to neighboring Indians, who accepted the trade goods as gifts, sealing a defensive alliance with the newcomers and not, as was once supposed, as payment for the island of Manhattan. Fifteen years after Hudson’s arrival, New Netherland, the newest commercial outpost of the Dutch empire, consisted of a small group of traders living at the edge of a vast and rich wilderness. The settlers’ peace with the numerous local Native American tribes was tenuous at best. The large linguistic and cultural native groupings of Algonquian and Iroquoian Indians who inhabited the region were subdivided into smaller communities that were frequently at war or in some form of alliance with each other. The arrival of the Dutch had piqued the interest of local Indians, who regarded the newcomers as potential allies and sources of new and interesting gifts that could in turn be traded with other tribes. Thus, the Dutch found themselves drawn into a web of Indian diplomacy that they only partially understood. As early as 1626, the settlers at Fort Orange (formerly Fort Nassau) suffered a bloody defeat at the hands of Mohawks, the enemies of the Mahicans, the tribe with which the Dutch had been trading. Beginning in 1629, European-Amerindian commercial and diplomatic relations became even more complicated following the migration of thousands of English Puritans from New England, the territory north of New Netherland. These New Englanders provided Native Americans with yet another source of gifts and friendship, and their rapidly growing and spreading settlement soon threatened to overwhelm the thinly populated New Netherland. The arrival of the English prompted a reassessment of the colony’s future. In June 1629, in an attempt to bolster New Netherland’s population, the Company announced its intention to offer large tracts of land to patroons (a Dutch word for landowners, from the Spanish “patrón”) who agreed to “buy” the land from the Indians, settle fifty families within four years, and thereafter administer their settlements’ civil and criminal courts. Unfortunately, the relatively prosperous conditions prevailing in the United Provinces and the limited benefits for settlers—who were expected to endure a dangerous sea voyage to live in the North American wilderness—hardly recommended the patroonships as desirable destinations. All the prospective communities except for Rensselaerswijck, established by Kiliaen van Rensselaer on both banks of the Hudson River near Fort Orange, failed to attract large numbers of investors and settlers. 
Those who did make the trans-Atlantic journey often deserted their designated employment, hoping to get rich quickly by defying the Company's regulations and joining the lucrative fur trade. Meanwhile, English colonists continued to settle in the Dutch territory. The failure of the patroonship scheme established important precedents for the future. The easing of the patroon policy in 1640, along with the arrival of independent fur traders, signaled the beginning of the end for the Company's trading monopoly and also drew its shareholders and officers into civil rather than commercial administration. By the mid-seventeenth century, New Netherland's future as a colony of traders and farmers was increasingly apparent; land, not furs, would prove to be its greatest resource. In the second half of the 1630s, groups of Puritans spread southwards into the Connecticut River Valley—territory previously claimed by the West India Company. The shareholders took steps to secure their territorial position, purchasing from the Canarsee Indians all land west of Oyster Bay on Long Island and offering revised terms and conditions in an attempt to attract new settlers. Under the new "Freedoms and Exemptions" policy, adopted in 1640, the Company gave up its trading monopoly and offered two hundred acres of land to Dutch or English immigrants who undertook to settle five colonists. The change of policy succeeded in bringing new settlers to the colony. Individual traders traveled independently to the colony to trade for furs, and some remained on a semi-permanent basis to represent the interests of major trading houses in Amsterdam. Men and women were drawn across the Atlantic by networks of family and friends. However, the policy also encouraged the Puritans to spill across Long Island Sound, where they established the towns of Gravesend, Hempstead, Flushing, and Middleburgh (later Newtown) on Long Island—a sign of the English settlers' ever-encroaching presence in the region. By 1645, when the French Jesuit priest Father Isaac Jogues visited lower Manhattan, the island was populated by some four or five hundred men of different sects and nationalities speaking eighteen different languages. The population of the entire province remained no more than a couple of thousand, but as the number of free traders increased, so did the competition for Indian furs, prompting subtle changes in European-Amerindian relations. As the caution of early years diminished, familiarity bred exploitation and, in time, mutual contempt. In 1639 the provincial director, Willem Kieft, made the fateful decision to try to exact a tribute from the neighboring Raritan Indians. In Kieft's view, since the Indians, as defensive allies, benefited from the presence of the Company and the colonists, it was only reasonable that they bear some of its costs. The Indians, for their part, could see little benefit in having allies who stuck to the coast and concentrated on trade, and they rejected Kieft's authority to levy a tribute. The two sides clashed inconclusively until 1643, when the slaughter of some eighty Wecquaesgeek Indians across the river from New Amsterdam at Pavonia (Jersey City) succeeded in uniting almost the entire Indian population of the Lower Hudson Valley against New Netherland. When Kieft's War ended two years later, dozens of colonists and some 1,600 Indians had been killed, and New Netherland was almost wiped out.
Appealing for intervention to the States General in Holland, the settlers declared that “almost every place is abandoned . . . we, wretched people, must skulk, with wives and little ones that still survive in poverty together . . . whilst the Indians daily threaten to overwhelm us.” In 1647 the Company shareholders dispatched Peter Stuyvesant to restore the colony. A stern and sober man, Stuyvesant was also a fiercely loyal employee who had lost a leg in the Company’s service while fighting the Portuguese on the Caribbean island of Saint Martin. No sooner had he arrived than Stuyvesant and his hand-picked council issued a flurry of orders on matters ranging from compulsory church attendance to fire prevention and the keeping of hogs and goats. This set the tone for his seventeen-year administration, during which time he negotiated boundary agreements with the English to the north, led a force of seven hundred men to expel the Swedes from the Delaware River to the south, and, through a combination of diplomacy and armed force, rebuilt Dutch influence and strength in the region. Stuyvesant managed to navigate a middle course between the competing demands of settler lobbies seeking greater autonomy and distant Company shareholders trying to preserve their authority and chartered prerogatives. Although he acquired a reputation as a domineering and autocratic administrator, most historians agree that under Stuyvesant’s care, New Netherland’s population of independent traders and farmers collaborated, establishing orderly villages and small towns. New Amsterdam quickly became known as the major port and capital of this increasingly prosperous provincial society. The origins of the city’s government can be traced to a campaign for municipal reform begun by local merchants in the 1640s and culminating with the first meeting of the municipal government on February 2, 1653. The city’s first burgomasters and schepens (roughly equivalent to the English mayors and aldermen) were given charge of the school, the docks, and a newly established public weigh-house, but they added to their administrative powers in subsequent years. In the course of the decade, the lives of ordinary settlers in New Amsterdam came to resemble those of the urban Dutch brede middenstand, roughly equivalent to the English middling sort, who balanced their private pursuits with public obligations and adherence to a regulatory order, and served as a powerful integrating force upon an otherwise diverse settler group. During this period of growth, neither the burgomasters nor the ordinary colonists realized that their success was about to become the source of their undoing. In the late 1650s the colony’s new-found prosperity attracted the attention of powerful English interests who were jealous of the Dutch imperial success. Within months of Charles II’s restoration in 1660, Parliament adopted another Navigation Act, designed to drive the Dutch from the English-controlled American trade. The keenest advocates of England’s commercial empire gathered around the king’s younger brother, James, Duke of York. By March 1664 James and his counsellors had succeeded in persuading the king to grant his brother part of present-day Maine and a handful of islands near its shores. In an act of superlative aggrandizement, the most substantial part of James’s grant awarded him control of all the territory lying between the Delaware and Connecticut rivers—the territory comprising New Netherland. 
In May of 1664 James, Duke of York, dispatched Colonel Richard Nicolls with four ships and three hundred soldiers to secure the "entyre submission and obedience" of England's newest colonial American subjects. In mid-August the invaders disembarked from vessels anchored off Long Island in Gravesend Bay and moved west to Brooklyn. Nicolls enlisted residential militias from the English towns on Long Island and distributed handbills ahead of the advancing troops offering fair treatment for those who surrendered. The English commander repeated his terms in a letter written to Stuyvesant, promising that in return for capitulation the settlers would "peaceably enjoy whatsoever God's blessing and their own honest industry have furnished them with and all other privileges with his majesty's English subjects." Stuyvesant wanted to make a fight of it. But when he tried to convince New Amsterdam's leaders to keep news of the lenient surrender terms—and reports of the fort's limited supply of good gun powder—from the inhabitants, the burgomasters left the meeting "greatly disgusted and dissatisfied." Furious at their defiance, Stuyvesant tore up Nicolls's letter offering terms. Within hours work on the city's fortifications ceased, and a delegation of the "inhabitants of the place assisted by their wives and children crying and praying" confronted the director and demanded that he re-assemble the letter and negotiate surrender. The following day ninety-three prominent burghers—including Stuyvesant's own seventeen-year-old son—presented a remonstrance denouncing resistance as a folly that would not save "the smallest portion of our entire city, our property and (what is dearer to us), our wives and children, from total ruin." Stuyvesant relented, and merchant leaders met with Nicolls and his officers to draft the Articles of Capitulation under which New Netherland and New Amsterdam became New York, New York. The conquest of New Netherland expelled the Dutch from the continent and consolidated the English colonization of North America. Thereafter the English turned their attention to the French as their major European competitor in the North Atlantic, culminating with the French and Indian War (1756–1763), which ushered in the era of the American Revolution. But Dutch New York lived on in the marriage choices, inheritance practices, and naming patterns of a population that, in New York City, remained "Dutch" until at least the end of the seventeenth century and up the Hudson River Valley for a decade or more into the eighteenth. For those who care to look, Dutch New York lives on still in the names of streets and noteworthy families, and in the "cookies" and "coleslaw" that the rest of the world has come to consider so quintessentially American. E. B. O'Callaghan and Berthold Fernow, eds. Documents Relative to the Colonial History of the State of New York. (Albany: Weed, Parsons, 1856–1887), 1:139. Simon Middleton is Senior Lecturer in History at the University of Sheffield in England and the author of From Privileges to Rights: Work and Labor in Colonial New York (2006).
A British seismologist explains earthquakes. The rumbling and shaking of earthquakes puzzled people for centuries, writes Musson, chief spokesman at the British Geological Survey. Aristotle blamed the noise on roaring winds forced through subterranean caverns. The people of Lisbon, Portugal, racked by a massive quake in 1755, felt certain God was punishing the wicked. Shortly thereafter, working with limited data, scientists began to develop an understanding: British geologist John Michell posited that earthquakes transmitted on elastic waves; his colleague Charles Lyell found evidence of moving faults. Based on observations of the archetypal San Francisco quake of 1906, Johns Hopkins geologist Harry Fielding Reid accurately defined an earthquake as a violent movement of rocks that releases energy in the form of waves that spread outward at high velocity. Musson describes the evolving science of seismology, including the development of today's global seismological networks. Analyzing the most significant earthquakes of all time—Lisbon, San Francisco and Sumatra (2004)—he explains what we know about these "strange and uncanny things" and scientists' "persistent failure" at predicting them. Given the growing population of urban areas, especially in developing nations where buildings are not designed to withstand violent shaking, scientists predict that a massive future quake will eventually result in one million deaths. In villages in seismically active areas, builders generally use available materials and follow traditional practices, which can lead to high death tolls. In earthquake-savvy cities, builders prevent collapses through reinforcement and other techniques. Musson urges national governments to mandate earthquake safety programs. In the meantime, he writes, the safest place to be during a quake is under a solid piece of furniture. An authoritative and accessible investigation of one of nature's most destructive forces.
“Choreography” is the art of composing dances to express a theme, story, or emotion. Choreographers must begin with the most basic movements and build on them to create a complex performance. This process involves a series of steps. First, dancers begin with simple movements, then combine them into phrases. Various phrases are then united until they culminate in a dance. If a choreographer is talented and the dancers are skilled, the audience is unaware of the many compositional elements that made up the finished performance. The word “choreographer” comes from Greek words for “dance writer.” Although notation to write down the basic elements of dance compositions remains imperfect and is not standardized — there are currently many competing systems of dance notation — “dance writers” still have to teach in person.
<urn:uuid:f1a3b537-7bdb-4b69-a5c2-96d0d6996f05>
CC-MAIN-2013-20
http://www.mhhe.com/HumanitiesStudio/3/5/1.html
2013-06-19T19:25:59
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368709037764/warc/CC-MAIN-20130516125717-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.94934
186
4.03125
4
Fruits are classified according to the arrangement from which they derive. There are four types—simple, aggregate, multiple, and accessory fruits. Simple fruits develop from a single ovary of a single flower and may be fleshy or dry. Principal fleshy fruit types are the berry, in which the entire pericarp is soft and pulpy (e.g., the grape, tomato, banana, pepo, hesperidium, and blueberry) and the drupe, in which the outer layers may be pulpy, fibrous, or leathery and the endocarp hardens into a pit or stone enclosing one or more seeds (e.g., the peach, cherry, olive, coconut, and walnut). The name fruit is often applied loosely to all edible plant products and specifically to the fleshy fruits, some of which (e.g., eggplant, tomatoes, and squash) are commonly called vegetables. Dry fruits are divided into those whose hard or papery shells split open to release the mature seed (dehiscent fruits) and those that do not split (indehiscent fruits). Among the dehiscent fruits are the legume (e.g., the pod of the pea and bean), which splits at both edges, and the follicle, which splits on only one side (e.g., milkweed and larkspur); others include the dry fruits of the poppy, snapdragon, lily, and mustard. Indehiscent fruits include the single-seeded achene of the buttercup and the composite flowers; the caryopsis (grain); the nut (e.g., acorn and hazelnut); and the fruits of the carrot and parsnip (not to be confused with their edible fleshy roots). An aggregate fruit (e.g., blackberry and raspberry) consists of a mass of small drupes (drupelets), each of which developed from a separate ovary of a single flower. A multiple fruit (e.g., pineapple and mulberry) develops from the ovaries of many flowers growing in a cluster. Accessory fruits contain tissue derived from plant parts other than the ovary; the strawberry is actually a number of tiny achenes (miscalled seeds) outside a central pulpy pith that is the enlarged receptacle or base of the flower. The core of the pineapple is also receptacle (stem) tissue. The best-known accessory fruit is the pome (e.g., apple and pear), in which the fleshy edible portion is swollen stem tissue and the true fruit is the central core. The skin of the banana is also stem tissue, as is the rind of the pepo (berrylike fruit) of the squash, cucumber, and melon. The structure of a fruit often facilitates the dispersal of its seeds. The "wings" of the maple, elm, and ailanthus fruits and the "parachutes" of the dandelion and the thistle are blown by the wind; burdock, cocklebur, and carrot fruits have barbs or hooks that cling to fur and clothing; and the buoyant coconut may float thousands of miles from its parent tree. Some fruits (e.g., witch hazel and violet) explode at maturity, scattering their seeds. A common method of dispersion is through the feces of animals that eat fleshy fruits containing seeds covered by indigestible coats. Drupe: a fruit in which the outer layer is a thin skin, the middle layer is thick and usually fleshy (though sometimes tough, as in the almond, or fibrous, as in the coconut), and the inner layer (the pit) is hard and stony. Within the pit is usually one seed. In aggregate fruits such as the raspberry and blackberry (which are not true berries), many small drupes are clumped together. Other representative drupes are the cherry, peach, mango, olive, and walnut.
Kiwi fruit: edible fruit of the vine Actinidia chinensis (family Actinidiaceae), native to mainland China and the island of Taiwan and grown commercially in New Zealand and California. It became popular in the nouvelle cuisine of the 1970s. It has a slightly acid taste and is high in vitamin C. Kiwi can be eaten raw or cooked, and the juice is sometimes used as a meat tenderizer. Fructose: organic compound, one of the simple sugars (monosaccharides), chemical formula C6H12O6. It occurs in fruits, honey, syrups (especially corn syrup), and certain vegetables, usually along with its isomer glucose. Fructose and glucose are the components of the disaccharide sucrose (table sugar); hydrolysis of sucrose yields invert sugar, a 50:50 mixture of fructose and glucose. The sweetest of the common sugars, fructose is used in foods and medicines. Fruit bat: any of numerous tropical Old World bats in the family Pteropodidae as well as several species of herbivorous New World bats. Old World fruit bats are widely distributed from Africa to South Asia and Australasia. Most species rely on vision rather than on echolocation to avoid obstacles. Some species are solitary, some gregarious; most roost in the open in trees, though some inhabit caves, rocks, or buildings. Some are red or yellow, and some are striped or spotted. They eat fruit or flowers (including pollen and nectar). The smallest species in the family, the long-tongued fruit bats, reach a head and body length of about 2.5 in. (6–7 cm) and a wingspan of about 10 in. (25 cm). The same family contains the largest of all bats, the flying foxes, which attain lengths up to 16 in. (40 cm) and a wingspan of 5 ft (1.5 m). New World fruit bats are generally smaller and make use of echolocation. They are found in the tropics, with many species belonging to the genera Artibeus and Sturnira. Fruit: in its strict botanical sense, the fleshy or dry ripened ovary (enlarged portion of the pistil) of a flowering plant, enclosing the seed or seeds. Apricots, bananas, and grapes, as well as bean pods, corn grains, tomatoes, cucumbers, and (in their shells) acorns and almonds, are all technically fruits. Popularly, the term is restricted to the ripened ovaries that are sweet and either succulent or pulpy. The principal botanical purpose of the fruit is to protect and spread the seed. There are two broad categories of fruit: fleshy and dry. Fleshy fruits include berries, such as tomatoes, oranges, and cherries, which consist entirely of succulent tissue; aggregate fruits, including blackberries and strawberries, which form from a single flower with many pistils, each of which develops into fruitlets; and multiple fruits, such as pineapples and mulberries, which develop from the mature ovaries of an entire inflorescence. Dry fruits include the legumes, cereal grains, capsules, and nuts. Fruits are important sources of dietary fiber and vitamins (especially vitamin C). They can be eaten fresh; processed into juices, jams, and jellies; or preserved by dehydration, canning, fermentation, and pickling. Mediterranean fruit fly: a fruit fly (Ceratitis capitata) that has proven particularly destructive to citrus crops, at great economic cost.
The Med fly lays up to 500 eggs in citrus fruits (except lemons and sour limes), and the larvae tunnel into the fruit, making it unfit for human consumption. Because of this pest, quarantine laws regulating fruit importation have been enacted worldwide. The term fruit has different meanings depending on context; it is not synonymous in food preparation and biology. In botany, the scientific study of plants, fruits are the ripened ovaries of flowering plants. In many plant species, the fruit includes the ripened ovary and surrounding tissues. Fruits are the means by which flowering plants disseminate seeds, and the presence of seeds indicates that a structure is most likely a fruit, though not all seeds come from fruits. No single terminology really fits the enormous variety that is found among plant fruits. The term 'false fruit' (pseudocarp, accessory fruit) is sometimes applied to a fruit like the fig (a multiple-accessory fruit; see below) or to a plant structure that resembles a fruit but is not derived from a flower or flowers. Some gymnosperms, such as yew, have fleshy arils that resemble fruits, and some junipers have berry-like, fleshy cones. The term "fruit" has also been inaccurately applied to the seed-containing female cones of many conifers. A fruit is a ripened ovary. Inside the ovary is one or more ovules (eggs). The ovules are fertilized in a process that starts with pollination, the movement of pollen from the stamens to the stigma of flowers. After pollination, a tube grows from the pollen grain through the stigma into the ovary and delivers sperm to the ovule; when one sperm fuses with the nucleus of the ovule and another with the endosperm mother cell, fertilization is complete. As the developing seeds mature, the ovary begins to ripen. The ovules develop into seeds, and the ovary wall, the pericarp, may become fleshy (as in berries or drupes) or form a hard outer covering (as in nuts). In some cases, the sepals, petals and/or stamens and style of the flower fall off. Fruit development continues until the seeds have matured. In some multiseeded fruits, the extent to which the flesh develops is proportional to the number of fertilized ovules. The wall of the fruit, developed from the ovary wall of the flower, is called the pericarp. The pericarp is often differentiated into two or three distinct layers called the exocarp (outer layer, also called epicarp), mesocarp (middle layer), and endocarp (inner layer). In some fruits, especially simple fruits derived from an inferior ovary, other parts of the flower (such as the floral tube, including the petals, sepals, and stamens) fuse with the ovary and ripen with it. The plant hormone ethylene causes ripening. When such other floral parts are a significant part of the fruit, it is called an accessory fruit. Since other parts of the flower may contribute to the structure of the fruit, it is important to study flower structure to understand how a particular fruit forms. Fruits are so diverse that it is difficult to devise a classification scheme that includes all known fruits. Many common terms for seeds and fruit are incorrectly applied, a fact that complicates understanding of the terminology. Seeds are ripened ovules; fruits are the ripened ovaries or carpels that contain the seeds.
To these two basic definitions can be added the clarification that, in botanical terminology, a nut is a type of fruit and not another term for seed, contrary to common usage. There are three basic types of fruits: simple, aggregate, and multiple. Simple fruits can be either dry or fleshy, and result from the ripening of a simple or compound ovary with only one pistil. Dry fruits may be either dehiscent (opening to discharge seeds) or indehiscent (not opening to discharge seeds); types of dry simple fruits include the legume, follicle, achene, caryopsis (grain), capsule, and nut, as described above. Fruits in which part or all of the pericarp (fruit wall) is fleshy at maturity are simple fleshy fruits; they include the berry (with its variants the pepo and hesperidium) and the drupe. An aggregate fruit, or etaerio, develops from a flower with numerous simple pistils. An example is the raspberry, whose simple fruits are termed drupelets because each is like a small drupe attached to the receptacle. In some bramble fruits (such as blackberry) the receptacle is elongated and part of the ripe fruit, making the blackberry an aggregate-accessory fruit. The strawberry is also an aggregate-accessory fruit, only one in which the seeds are contained in achenes. In all these examples, the fruit develops from a single flower with numerous pistils. Some kinds of aggregate fruits are called berries, yet in the botanical sense they are not. Stages of flowering and fruit development in the noni or Indian mulberry (Morinda citrifolia) can be observed on a single branch. First an inflorescence of white flowers called a head is produced. After fertilization, each flower develops into a drupe, and as the drupes expand, they become connate (merge) into a multiple fleshy fruit called a syncarp. There are also many dry multiple fruits. Common fruit types, with examples of each:
- True berry: blackcurrant, redcurrant, gooseberry, tomato, eggplant, guava, lucuma, chili pepper, pomegranate, avocado, kiwifruit, grape
- Pepo: pumpkin, gourd, cucumber, melon
- Hesperidium: orange, lemon, lime, grapefruit
- False berry (epigynous): banana, cranberry, blueberry
- Aggregate fruit: blackberry, raspberry, boysenberry, hedge apple
- Multiple fruit: pineapple, fig, mulberry
- Other accessory fruit: apple, apricot, peach, cherry, green bean, sunflower seed, strawberry
Seedlessness is an important feature of some fruits of commerce. Commercial cultivars of bananas and pineapples are examples of seedless fruits. Some cultivars of citrus fruits (especially navel oranges and mandarin oranges), table grapes, grapefruit, and watermelons are valued for their seedlessness. In some species, seedlessness is the result of parthenocarpy, where fruits set without fertilization. Parthenocarpic fruit set may or may not require pollination. Most seedless citrus fruits require a pollination stimulus; bananas and pineapples do not. Seedlessness in table grapes results from the abortion of the embryonic plant that is produced by fertilization, a phenomenon known as stenospermocarpy, which requires normal pollination and fertilization. Some fruits have coats covered with spikes or hooked burrs, either to prevent themselves from being eaten by animals or to stick to the hairs, feathers or legs of animals, using them as dispersal agents. Examples include cocklebur and unicorn plant. The sweet flesh of many fruits is "deliberately" appealing to animals, so that the seeds held within are eaten and "unwittingly" carried away and deposited at a distance from the parent.
Likewise, the nutritious, oily kernels of nuts are appealing to rodents (such as squirrels), which hoard them in the soil in order to avoid starving during the winter, thus giving those seeds that remain uneaten the chance to germinate and grow into a new plant away from their parent. Other fruits are elongated and flattened out naturally and so become thin, like wings or helicopter blades, e.g. maple, tuliptree and elm. This is an evolutionary mechanism to increase dispersal distance away from the parent via wind. Other wind-dispersed fruit have tiny parachutes, e.g. dandelion and salsify. Many hundreds of fruits, including fleshy fruits like apple, peach, pear, kiwifruit, watermelon and mango, are commercially valuable as human food, eaten both fresh and as jams, marmalade and other preserves. Fruits are also used in manufactured foods like cookies, muffins, yoghurt, ice cream, cakes, and many more. Many fruits are used to make beverages, such as fruit juices (orange juice, apple juice, grape juice, etc.) or alcoholic beverages, such as wine or brandy. Apples are often used to make vinegar. Many vegetables are botanical fruits, including tomato, bell pepper, eggplant, okra, squash, pumpkin, green bean, cucumber and zucchini. Olive fruit is pressed for olive oil. Spices like vanilla, paprika, allspice and black pepper are derived from berries. Fruits of the opium poppy are the source of the drugs opium and morphine. Osage orange fruits are used to repel cockroaches. Bayberry fruits provide a wax often used to make candles. Many fruits provide natural dyes, e.g. walnut, sumac, cherry and mulberry. Dried gourds are used as decorations, water jugs, bird houses, musical instruments, cups and dishes. Pumpkins are carved into Jack-o'-lanterns for Halloween. The spiny fruit of burdock or cocklebur was the inspiration for the invention of Velcro. Coir is a fibre from the fruit of the coconut that is used for doormats, brushes, mattresses, floor tiles, sacking, insulation and as a growing medium for container plants. The shell of the coconut fruit is used to make souvenir heads, cups, bowls, musical instruments and bird houses.
[Table: leading fruit-producing countries, giving production in Int $1000 (calculated from 1999–2001 international prices) and in MT, with footnote codes: no symbol = official figure, F = FAO estimate, * = unofficial figure, C = calculated figure. Source: Food and Agriculture Organization of the United Nations, Economic and Social Department, Statistics Division.]
<urn:uuid:eecf1991-1367-4898-a115-60ee66068355>
CC-MAIN-2013-20
http://www.reference.com/browse/Fruit
2013-06-19T19:26:05
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368709037764/warc/CC-MAIN-20130516125717-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.925193
3,958
4.15625
4
LIGHT WORK: Eight lasers constructed from inexpensive silicon reside in this chip made by Intel. The coherent light beams could lead to ultrafast computer circuitry that transmits data optically. Image: PAUL SAKUMA AP Photo
Scientists have long sought to build lasers from silicon. Such an advance would enable engineers to incorporate both electronic and optical devices onto cheap silicon chips rather than being compelled to employ costly-to-make lasers based on "exotic" semiconductor materials such as gallium arsenide or indium phosphide. Silicon lasers could lead to affordable light-based systems that harness photons instead of electrons to shuttle huge amounts of data swiftly – at multigigabit-per-second rates. Two research groups, one at the University of California at Los Angeles and the other at Intel Corporation, have recently reported success in making silicon emit continuous laser light. This much-anticipated feat came despite silicon's dogged resistance to serving as a lasing medium. In a good lasing material, electrons that are pumped up with energy release that energy in the form of coherent photons of light. In silicon, however, excited electrons are more likely to vibrate, thus generating heat instead. "There have been many attempts, but no one had been able to get silicon to lase before now," notes Bahram Jalali, the physicist who led the U.C.L.A. team. Jalali and his group solved the problem last fall by making clever use of some of the very vibrations that undermined silicon's suitability for lasers in the first place. In particular, they focused on the Raman effect, a process in which the wavelength of light lengthens after it scatters off atomic vibrations. The U.C.L.A. researchers matched the scattered light with the pump energy from another laser in a way that created constructive feedback, resulting in a net amplification of light. Intel reported its own success in creating a silicon Raman laser several months afterward. The chipmaker's scientists fed light from a separate laser into a waveguide (or light pipe) – basically an S-shaped ridge the engineers sculpted onto a 15-millimeter-square silicon chip – and Raman laser light emerged. Naturally, the task was not that easy. The power of a silicon Raman laser typically hits a limit as photons sporadically collide with silicon atoms and release free electrons. "Unfortunately, the free electron cloud absorbs and scatters light, so you get diminishing returns as you pump the device harder," explains Mario Paniccia, director of Intel's photonics technology laboratory. The team therefore positioned two electrodes on either side of the waveguide, forming a kind of diode. "Placing a voltage across the diode sucks the free electrons away like a vacuum cleaner," he says, and thus keeps the light flowing through the chip. "This and related research should lead to many useful applications," says Philippe M. Fauchet, an electrical and computer engineer at the University of Rochester. A laser beam generated continuously through silicon could overcome cost and size limitations in lasers that could be used in surgical procedures, for example. The technology could also detect tiny amounts of chemicals in the environment, jam the sensors of heat-seeking missiles or enable high-bandwidth (high-capacity) optical communications.
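To make the Raman process described above concrete: the scattered light in silicon is shifted down in frequency by the crystal's optical-phonon frequency, roughly 15.6 THz, so the laser's output wavelength follows directly from the pump wavelength. A minimal sketch of the arithmetic, assuming a 1,550-nm telecom-band pump (the pump value is an illustrative assumption, not a figure from the article):

```cpp
#include <cstdio>

int main() {
    const double c = 2.998e8;            // speed of light, m/s
    const double pump_nm = 1550.0;       // assumed telecom-band pump wavelength
    const double raman_shift_thz = 15.6; // optical-phonon Raman shift of silicon

    double pump_thz   = c / (pump_nm * 1e-9) / 1e12;  // ~193.4 THz
    double stokes_thz = pump_thz - raman_shift_thz;   // Stokes-shifted frequency
    double stokes_nm  = c / (stokes_thz * 1e12) * 1e9;

    printf("Pump:   %.0f nm (%.1f THz)\n", pump_nm, pump_thz);
    printf("Output: %.0f nm (%.1f THz)\n", stokes_nm, stokes_thz);
    return 0;
}
```

For a 1,550-nm pump this lands the output near 1,686 nm, still in the fiber-friendly infrared – one reason Raman silicon lasers are attractive for optical communications.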
Looking a bit farther afield, Paniccia believes that the new laser technology could serve as a building block for high-bandwidth photonic devices constructed almost entirely of inexpensive silicon in existing semiconductor foundry and micromachining facilities. "We've already developed the other necessary components of such a system," including fast modulators (optical encoders), light guides and photodetectors, he notes. Of course, many in the industry hope that this technology will eventually lead to fully optical computers--superspeedy digital systems in which photons rather than electrons serve as 0s and 1s. Paniccia is certainly optimistic about the recent progress: "This work constitutes not only a scientific breakthrough but also a psychological one, because nobody thought it could be done."
<urn:uuid:8734440d-3600-4864-ba4c-1c64e1d848ef>
CC-MAIN-2013-20
http://www.scientificamerican.com/article.cfm?id=making-light-of-silicon
2013-06-19T19:00:28
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368709037764/warc/CC-MAIN-20130516125717-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.944273
827
4.15625
4
Our knowledge concerning the surface of Venus comes from a limited amount of information obtained by the series of Russian Venera landers, and primarily from extensive radar imaging of the planet. The radar imaging of the planet has been performed both from Earth-based facilities and from space probes. The most extensive radar imaging was obtained from the Magellan orbiter in a 4-year period in the early 1990s. As a consequence, we now have a detailed radar picture of the surface of Venus. The adjacent animation shows the topography of the surface as determined using the Magellan synthetic aperture radar (black areas are regions not examined by Magellan). An MPEG movie (303 kB) of this animation is also available. Much of the surface of Venus appears to be rather young. The global data set from radar imaging reveals a number of craters consistent with an average Venus surface age of 300 million to 500 million years. There are two "continents", which are large regions several kilometers above the average elevation. These are called Ishtar Terra and Aphrodite Terra. They can be seen in the preceding animation as the large green, yellow, and red regions indicating higher elevation near the equator (Aphrodite Terra) and near the top (Ishtar Terra). |Hemispheres of Venus (Ref)| The center image (a) is centered at the North Pole. The other four images are centered around the equator of Venus at (b) 0 degrees longitude, (c) 90 degrees east longitude, (d) 180 degrees east longitude, and (e) 270 degrees east longitude. The simulated hues are based on color images recorded by the Soviet Venera 13 and 14 spacecraft. |A Volcano (Ref)||Apparent Lava Flows (Ref)| In all of these radar images you should bear in mind that bright spots correspond to regions that reflect more radar waves than other regions. Thus, if you could actually see these regions with your eyes, the patterns of brightness and darkness would probably not be the same as in these images. However, the basic features would still be the same. There are rift valleys as large as the East African Rift (the largest on Earth). The image shown below illustrates a rift valley in the West Eistla Region, near Gula Mons and Sif Mons. |Rift valley on Venus| The perspective in cases like this is synthesized from radar data taken from different positions in orbit. The East African Rift on Earth is a consequence of tectonic motion between the African and Eurasian plates (the Dead Sea in Israel is also a consequence of this same plate motion). Large rift valleys on Venus appear to be a consequence of more local tectonic activity, since the surface of Venus still appears to be a single plate. |A Field of Craters||The Largest Crater (Ref)| |The surface of Venus from Venera 14 (Ref)|
<urn:uuid:98f84e8a-c73e-4cf2-9c53-4b613f94b23a>
CC-MAIN-2013-20
http://csep10.phys.utk.edu/astr161/lect/venus/surface.html
2013-05-20T11:46:57
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698924319/warc/CC-MAIN-20130516100844-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.92402
622
4.34375
4
Inheritance describes a relationship between two (or more) types, or classes, of objects in which one is said to be a "subtype" or "child" of the other. As a result, the "child" object is said to inherit features of the parent, allowing for shared functionality; this lets programmers re-use or reduce code and simplifies the development and maintenance of software. Inheritance is also commonly held to include subtyping, whereby one type of object is defined to be a more specialized version of another type (see Liskov substitution principle), though non-subtyping inheritance is also possible. Inheritance is typically expressed by describing classes of objects arranged in an inheritance hierarchy (also referred to as an inheritance chain), a tree-like structure created by their inheritance relationships. For example, one might create a class "Mammal" with features such as eating, reproducing, etc.; then define a subtype "Cat" that inherits those features without having to explicitly program them, while adding new features like "chasing mice". This allows commonalities among different kinds of objects to be expressed once and reused multiple times. In C++ we can then have classes that are related to other classes (a class can be defined by means of an older, pre-existing class). This leads to a situation in which a new class has all the functionality of the older class, and additionally introduces its own specific functionality. Instead of composition, where a given class contains another class, we mean here derivation, where a given class is another class. This OOP property will be explained further when we talk about Classes (and Structures) inheritance in the Classes Inheritance Section of the book. If one wants to use more than one totally orthogonal hierarchy simultaneously, such as allowing "Cat" to inherit from "Cartoon character" and "Pet" as well as "Mammal", we are using multiple inheritance, as sketched below.
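A minimal sketch of the Mammal/Cat example from the paragraph above (the class and member names are illustrative, not taken from the book's later chapters):

```cpp
#include <iostream>

// Base class: functionality common to all mammals.
class Mammal {
public:
    void eat() { std::cout << "eating\n"; }
    void reproduce() { std::cout << "reproducing\n"; }
};

// Cat derives publicly from Mammal: it inherits eat() and
// reproduce() without re-implementing them, and adds its own behaviour.
class Cat : public Mammal {
public:
    void chaseMice() { std::cout << "chasing mice\n"; }
};

// Multiple inheritance: a class may derive from several
// orthogonal hierarchies at once.
class Pet {
public:
    void answerToName() { std::cout << "answering to name\n"; }
};

class HouseCat : public Cat, public Pet {};

int main() {
    HouseCat felix;
    felix.eat();          // inherited from Mammal via Cat
    felix.chaseMice();    // inherited from Cat
    felix.answerToName(); // inherited from Pet
    return 0;
}
```

Note that public derivation (`: public Mammal`) is what models the "is-a" relationship the text contrasts with composition; private or protected derivation would reuse the implementation without establishing the subtype relationship.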
<urn:uuid:9a20ccee-39d2-4e2e-b39d-d7f4cb702252>
CC-MAIN-2013-20
http://en.m.wikibooks.org/wiki/C%2B%2B_Programming/Programming_Languages/Paradigms/Inheritance
2013-05-20T11:46:32
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698924319/warc/CC-MAIN-20130516100844-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.939604
414
4.25
4
Claessens has discovered that the kauri trees in New Zealand prevent landslides. Once these enormous conifers reach a certain age, they stabilise areas prone to landslides, a benefit maximised by the fact that they live far longer than other tree species. At present the slopes are drained and large concrete structures are placed to prevent the landslides and the associated mud flows. According to Claessens, planting kauri trees is a natural and, in the longer term, possibly better solution for this problem. During his doctoral research, the Belgian researcher developed a dynamic landscape model to simulate the distribution of soil due to landslides. For this he studied the landscape, soil and vegetation dynamics in the Waitakere Ranges Regional Park in New Zealand. The model can be used to predict the locations where landslides will occur, and researchers can also use it to calculate how rainfall affects the soil. Waitakere Ranges Regional Park is situated on the North Island of New Zealand. About 1000 years ago this entire island was covered with kauri trees, which can reach a height of 50 metres and grow in the most inhospitable places. The largest kauri tree in New Zealand is the Tane Mahuta ('king of the forest'). This tree has reached the honourable age of 1500 years, is more than 51 metres high and has a girth of 13.7 metres. Some of the remaining kauri forests of the island are still inhabited by the original islanders, the Maori, who use the trees to build canoes and houses. From the mid-19th century onwards, many kauri trees were chopped down by Europeans for the timber trade. This led to the disappearance of most of these colossal conifers.
Contact: Dr Lieven Claessens
Netherlands Organization for Scientific Research
<urn:uuid:64d141aa-b3bb-4700-98f9-eb42b659c1b1>
CC-MAIN-2013-20
http://news.bio-medicine.org/biology-news-3/New-Zealand-forest-giant-prevents-landslides-12238-1/
2013-05-20T11:53:38
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698924319/warc/CC-MAIN-20130516100844-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.949121
368
4.09375
4
These caterpillars have 16 parts. What different shapes do they make if each part lies in the small squares of a 4 by 4 square? Use the interactivities to fill in these Carroll diagrams. How do you know where to place the numbers? Use the interactivities to complete these Venn diagrams. In this investigation, you are challenged to make mobile phone numbers which are easy to remember. What happens if you make a sequence adding 2 each time? How have the numbers been placed in this Carroll diagram? Which labels would you put on each row and column? Investigate the different ways these aliens count in this challenge. You could start by thinking about how each of them would write our number 7. In this article for teachers, Bernard gives an example of taking an initial activity and getting questions going that lead to other activities. This article for pupils explores what makes numbers special or lucky, and looks at the numbers that are all around us every day.
<urn:uuid:531132a7-7595-490a-85bf-43b618581833>
CC-MAIN-2013-20
http://nrich.maths.org/thismonth/1and2/2007/09
2013-05-20T11:30:57
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698924319/warc/CC-MAIN-20130516100844-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.923114
239
4.0625
4
CORVALLIS, Ore. – The ebb and flow of the ocean tides, generally thought to be one of the most predictable forces on Earth, are actually quite variable over long time periods, in ways that have not been adequately accounted for in most evaluations of prehistoric sea level changes. Due to phenomena such as ice ages, plate tectonics, land uplift, erosion and sedimentation, tides have changed dramatically over thousands of years and may change again in the future, a new study concludes. Some tides on the East Coast of the United States, for instance, may at times in the past have been enormously higher than they are today – a difference between low and high tide of 10-20 feet, instead of the current 3-6 foot range. And tides in the Bay of Fundy, which today are among the most extreme in the world and have a range up to 55 feet, didn’t amount to much at all about 5,000 years ago. But around that same time, tides on the southern U.S. Atlantic coast, from North Carolina to Florida, were about 75 percent higher. The findings were just published in the Journal of Geophysical Research. The work was done with computer simulations at a high resolution, and supported by the National Science Foundation and other agencies. “Scientists study past sea levels for a range of things, to learn about climate changes, geology, marine biology,” said David Hill, an associate professor in the School of Civil and Construction Engineering at Oregon State University. “In most of this research it was assumed that prehistoric tidal patterns were about the same as they are today. But they weren’t, and we need to do a better job of accounting for this.” One of the most interesting findings of the study, Hill said, was that around 9,000 years ago, as the Earth was emerging from its most recent ice age, there was a huge amplification in tides of the western Atlantic Ocean. The tidal ranges were up to three times more extreme than those that exist today, and water would have surged up and down on the East Coast. One of the major variables in ancient tides, of course, was sea level changes that were caused by previous ice ages. When massive amounts of ice piled miles thick in the Northern Hemisphere 15,000 to 20,000 years ago, for instance, sea levels were more than 300 feet lower. But it’s not that simple, Hill said. “Part of what we found was that there are certain places on Earth where tidal energy gets dissipated at a disproportionately high rate, real hot spots of tidal action,” Hill said. “One of these today is Hudson Bay, and it’s helping to reduce tidal energies all over the rest of the Atlantic Ocean. But during the last ice age Hudson Bay was closed down and buried in ice, and that caused more extreme tides elsewhere.” Many other factors can also affect tides, the researchers said, and understanding these factors and their tidal impacts is essential to gaining a better understanding of past sea levels and ocean dynamics. Some of this variability was suspected from previous analyses, Hill said, but the current work is far more resolved than previous studies. The research was done by scientists from OSU, the University of Leeds, University of Pennsylvania, University of Toronto, and Tulane University. “Understanding the past will help us better predict tidal changes in the future,” he said. “And there will be changes, even with modest sea level changes like one meter. In shallow waters like the Chesapeake Bay, that could cause significant shifts in tides, currents, salinity and even temperature.”
<urn:uuid:92cfa7a8-c4ac-4f98-b4e6-fd5945c40921>
CC-MAIN-2013-20
http://oregonstate.edu/ua/ncs/archives/2011/jul/ancient-tides-different-today-%E2%80%93-some-dramatically-higher
2013-05-20T11:48:29
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698924319/warc/CC-MAIN-20130516100844-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.968576
760
4.03125
4
Copernicus is said to be the founder of modern astronomy. He was born in Poland,[1] and eventually was sent off to Cracow University, there to study mathematics and optics; at Bologna, canon law. Returning from his studies in Italy, Copernicus, through the influence of his uncle, was appointed as a canon in the cathedral of Frauenburg, where he spent a sheltered and academic life for the rest of his days. Because of his clerical position, Copernicus moved in the highest circles of power; but a student he remained. For relaxation Copernicus painted and translated Greek poetry into Latin. His interest in astronomy gradually grew until it became his primary pursuit. His investigations were carried on quietly and alone, without help or consultation. He made his celestial observations from a turret situated on the protective wall around the cathedral; observations were made "bare eyeball," so to speak, as a hundred more years were to pass before the invention of the telescope. In 1530, Copernicus completed and gave to the world his great work De Revolutionibus, which asserted that the earth rotated on its axis once daily and traveled around the sun once yearly: a fantastic concept for the times. Up to the time of Copernicus the thinkers of the western world believed in the Ptolemaic theory that the universe was a closed space bounded by a spherical envelope beyond which there was nothing. Claudius Ptolemy, an Egyptian living in Alexandria at about 150 A.D., gathered and organized the thoughts of the earlier thinkers. (It is to be noted that one of the ancient Greek astronomers, Aristarchus, did have ideas similar to those more fully developed by Copernicus, but they were rejected in favour of the geocentric or earth-centered scheme as was espoused by Aristotle.) Ptolemy's findings were that the earth was a fixed, inert, immovable mass, located at the center of the universe, and all celestial bodies, including the sun and the fixed stars, revolved around it. It was a theory that appealed to human nature: first, it fit with the casual observations that a person might want to make in the field; and second, it fed man's ego. Copernicus was in no hurry to publish his theory, though parts of his work were circulated among a few of the astronomers that were giving the matter some thought; indeed, Copernicus' work might not have ever reached the printing press if it had not been for a young man who sought out the master in 1539. George Rheticus was a 25-year-old German mathematics professor who was attracted to the 66-year-old cleric, having read one of his papers. Intending to spend a few weeks with Copernicus, Rheticus ended up staying as a house guest for two years, so fascinated was he with Copernicus and his theories. Now, up to this time, Copernicus was reluctant to publish – not so much because he was concerned with what the church might say about his novel theory (De Revolutionibus was placed on the Index in 1616 and only removed in 1835), but rather because he was a perfectionist who never thought, even after working on it for thirty years, that his complete work was ready – there were, as far as Copernicus was concerned, observations to be checked and rechecked. (Interestingly, Copernicus' original manuscript, lost to the world for 300 years, was located in Prague in the middle of the 19th century; it shows Copernicus' pen was, it would appear, continually in motion with revision after revision; all in Latin, as was the vogue for scholarly writings in those days.)
Copernicus died in 1543 and was never to know what a stir his work had caused. It went against the philosophical and religious beliefs that had been held during the medieval times. Man, it was believed (and is still believed by some), was made by God in His image; man was the next thing to God, and, as such, superior, especially in his best part, his soul, to all creatures; indeed, this part was not even part of the natural world (a philosophy which has proved disastrous to the earth's environment, as any casual observer of the 20th century might confirm by simply looking about). Copernicus' theories might well lead men to think that they are simply part of nature and not superior to it, and that ran counter to the theories of the politically powerful churchmen of the time. Two Italian scientists of the time, Galileo and Bruno, embraced the Copernican theory unreservedly and as a result suffered much personal injury at the hands of the powerful church inquisitors. Giordano Bruno had the audacity to go even beyond Copernicus and dared to suggest that space was boundless and that the sun and its planets were but one of any number of similar systems: why, there might even be other inhabited worlds with rational beings equal or possibly superior to ourselves. For such blasphemy, Bruno was tried before the Inquisition, condemned and burned at the stake in 1600. Galileo was brought forward in 1633, and, there, in front of his "betters," he was, under the threat of torture and death, forced to his knees to renounce all belief in Copernican theories, and was thereafter sentenced to imprisonment for the remainder of his days. The most important aspect of Copernicus' work is that it forever changed the place of man in the cosmos; no longer could man legitimately think his significance greater than that of his fellow creatures; with Copernicus' work, man could now take his place among that which exists all about him, and not of necessity take that premier position which had been assigned immodestly to him by the theologians. "Of all discoveries and opinions, none may have exerted a greater effect on the human spirit than the doctrine of Copernicus. The world had scarcely become known as round and complete in itself when it was asked to waive the tremendous privilege of being the center of the universe. Never, perhaps, was a greater demand made on mankind – for by this admission so many things vanished in mist and smoke! What became of our Eden, our world of innocence, piety and poetry; the testimony of the senses; the conviction of a poetic-religious faith? No wonder his contemporaries did not wish to let all this go and offered every possible resistance to a doctrine which in its converts authorized and demanded a freedom of view and greatness of thought so far unknown, indeed not even dreamed of." [Goethe.]
[1] I quote from Chambers Biographical Dictionary: "Copernicus ... was born at Torun, Poland. His father was a Germanized Slav, his mother a German; and Poland and Germany both claim the honour of producing him."
<urn:uuid:3c2b335c-ea95-45e2-83de-8bdccbde66ad>
CC-MAIN-2013-20
http://www.blupete.com/Literature/Biographies/Science/Copernicus.htm
2013-05-20T12:01:51
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698924319/warc/CC-MAIN-20130516100844-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.989478
1,401
4.03125
4
Unit F: Things That Move
The books listed below may be available through publishers, distributors such as bookstores or online retailers, or library systems.
Leveled Independent Science Books
- Sounds All Around – This simple text illustrates that sounds surround us.
- Heat and Eat! – People have different ways of heating their food.
- Light shining on different objects casts different shadows.
- Guess Whose Shadow? – Photos of shadows of all shapes and sizes accompany simple text describing how shadows are made.
- All About Sounds – Simple words and photos clearly present the concept of sound.
- Light: Shadows, Mirrors, and Rainbows – Facts about light include information on moonlight, shadows, and rainbows.
- Forces and Motion – The concept of how things move is explained through examples based on everyday situations.
<urn:uuid:0f34f7a2-fa56-4baa-b4f3-41c599e13061>
CC-MAIN-2013-20
http://www.eduplace.com/science/hmsc/k/f/bibliography/bibcontent_kf.shtml
2013-05-20T11:46:28
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698924319/warc/CC-MAIN-20130516100844-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.84395
181
4.15625
4
"Jury nullification of law," as it is sometimes called, is a traditional right that was rigorously defended by America's Founding Fathers. Those great men, Patriots all, intended the jury to serve as a final safeguard – a test that laws must pass before gaining sufficient popular authority for enforcement. Thus the Constitution provides five separate tribunals with veto power – representatives, senate, executive, judges – and finally juries. Each enactment of law must pass all these hurdles before it gains the authority to punish those who may choose to violate it. Thomas Jefferson said, "I consider trial by jury as the only anchor yet imagined by man, by which a government can be held to the principles of its constitution." Such was the case in the 1670 political trial of William Penn, who was charged with preaching Quakerism to an unlawful assembly. Four of the twelve jurors voted to acquit – and continued to acquit even after being imprisoned and starved for four days. Under such duress, most jurors paid the fines. However, one juror, Edward Bushell, refused to pay and brought his case before the Court of Common Pleas. As a result, Chief Justice Vaughan issued an historically-important ruling: that jurors could not be punished for their verdicts. Bushell's Case (1670) was one of the most important developments in the common-law history of the jury. Earlier in America, jury nullification decided the celebrated seditious libel trial of John Peter Zenger. (Zenger's Case, 1735) His newspaper had openly criticized the royal governor of New York. The current law made it a crime to publish any statement (true or false) criticizing public officials, laws or the government in general. The jury was only to decide if the material in question had been published; the judge was to decide if the material was in violation of the statute. Zenger's defense asked the jury to make use of their own consciences and, even though the judge ruled that the truth was no defense, they acquitted him. The jury's nullification in this case is praised in history textbooks as a hallmark of freedom of the press in the United States. At the time of the American Revolution, the jury was known to have the power to be the judge of both law and fact. In a case involving the civil forfeiture of private property by the state of Georgia, first Supreme Court Justice John Jay, instructed jurors that the jury has "a right to determine the law as well as the fact in controversy." (Georgia vs. Brailsford, 1794:4) And this stuff happened when we only had a few laws, compared to the millions upon millions we have today... and they continue to grow, each one taking away a piece of our freedoms. That's what laws do, take away freedoms..... but I suppose those of you here who make your living off of 'the law' wouldn't be bothered or have any conflicts in such matters.
<urn:uuid:1ccf67ef-ccb6-429c-9024-ae8d1c1eb7a9>
CC-MAIN-2013-20
http://www.expertlaw.com/forums/showthread.php?t=102140
2013-05-20T11:54:31
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698924319/warc/CC-MAIN-20130516100844-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.978813
605
4.0625
4
How it Works? The immune system has two parts – 'innate' and 'adaptive'. The 'innate' (meaning: "present from birth") part of the immune system is so-called because it has a number of set strategies for recognising and dealing with certain kinds of infection, without needing to be "trained" to identify them. This generally involves certain immune cells "sniffing out" germs, via signs in the bloodstream, following the trail to the site of infection, and neutralising the invaders with special chemicals before swallowing them (a process known as 'phagocytosis'). Such cells are generally called white blood cells, though the specific types involved are known as 'macrophages' and 'neutrophils'. This approach is very effective for many infections, but certain germs have developed ways of avoiding detection. For instance, viruses can be particularly difficult to detect and target because they are much smaller even than bacteria, and can actually hide and multiply within body cells. During infections, signs such as the swelling and inflammation of the skin are often indications of immune activity, as they help the immune system by allowing blood carrying immune elements to flow more easily to the site of infection. However, if uncontrolled, inflammation can itself cause damage, so it has to be carefully controlled. The other part of the immune response is called the 'adaptive' immune system. Unlike the innate immune system, it isn't able to respond instantly to infections, as it needs time to adapt (or learn) to recognise them. Once it has learned, however, it is extremely effective and is also able to 'remember' particular germs that have previously infected the body, so that when (or if) they try to infect the body again, the next response is rapid, accurate, and effective. Doctors can trick the body into producing a memory to a particular infection by using vaccines (harmless versions of germs) to create immune 'memory'. This gives you protection without having to experience the dangers of a real infection. An advantage of the adaptive immune response, once it has developed, is that it utilises further specialised types of white cell, called lymphocytes, that coordinate and focus the immune system's response, and also produce specialised molecules to target the infection. These include an incredibly elegant molecule, called the 'antibody', that is produced in huge numbers during an adaptive response, and moves through the bloodstream during an infection, targeting germs with incredible accuracy. It is thought that the human body can create enough different antibodies to recognise a total of 1 billion different targets (that's 1,000,000,000 or 'one thousand million'). As we have mentioned, the only drawback with the adaptive response is that it takes time to develop initially, and it can take several days for the primary response to be detectable, and longer still for it to become effective. The innate response is therefore still extremely important for controlling infection whilst the adaptive response builds up. On patrol for signs of trouble… A further aspect of the adaptive immune system worth mentioning is its role in monitoring body cells to check that they aren't infected by viruses or bacteria, for instance, or in order to make sure that they haven't become cancerous. Cancer occurs when certain body cells 'go wrong' and start dividing in an uncontrolled way (body cells usually divide in an extremely regulated way), often spreading to other parts of the body.
It is an extremely dangerous disease, so it is important to catch it before it develops. Certain lymphocytes patrol the body, checking cells for signs that something is wrong, and so the immune system plays an important role in preventing tumours from developing. Immunity in the gut: an important balancing act As we mentioned earlier, certain areas of the body, such as the lung and the gut, can be more difficult to 'police' because they have to be more open to certain elements in the environment. The gut, in particular, because of its role in absorbing food, has an enormous surface area. The small intestine alone (a part of the gut) has a surface area some 200 times that of the skin. For the immune system, this represents a big challenge to police just in terms of area. In addition, it must also be remembered that the food that we eat could be a potential target for the immune system, because it is foreign to the body. That's not to mention the other considerations we deal with below. For instance, the gut also has nutrient-rich fluid, derived from the things we eat, continuously flowing through it, as part of the food absorption process. Due to the food-rich environment, this makes the gut a particularly attractive environment for bacteria – it is estimated that over 500 microbial species live in the human gut, contributing some two pounds (about one kilogram) to the body's overall weight. It is estimated that over 90% of exposure to microorganisms occurs within the gut. Many of these bacteria (known as 'commensals') are a perfectly normal part of the gut population and do not cause disease – in fact they often perform some very useful roles, such as aiding in the digestion of food. If the immune system were simply to treat all of the many gut microorganisms as 'targets', especially in such a delicate environment, the immune response itself could cause more harm than good by producing excessive inflammation and damaging the gut surface. Instead the immune system does an extremely clever job of regulating itself so that it doesn't react to harmless food, or overreact to commensals – whilst still performing the vitally important role of targeting really harmful germs when they infect. This is a remarkable feat about which there is still much to learn, and there is much research into how it achieves this remarkable balancing act. We do know that perhaps around 75% of the immune system's lymphocyte cells are found in association with the body's 'mucosal' tissues, of which the gut forms a large part, so gut immunity is obviously an important area of immune function. We also know that the process is further helped by the fact that a healthy population of commensals in the gut can help to prevent colonisation by harmful bacteria – by crowding them out and not allowing them to take hold. Certain commensals have even developed particular substances, called colicins, that neutralise other bacteria. Due to certain differences in the way commensals 'behave', compared to disease-causing species, it seems that the immune system is able to tell the difference between the two. Evidence for the importance of commensal bacteria is found when oral antibiotics are taken by people to counter harmful bacterial infections. These can also drastically reduce the population of commensal bacteria in the gut.
Although the population grows back again, it has been noted that the gut is temporarily more vulnerable to infection with harmful bacteria, due to the breaking of the 'commensal barrier'. It seems that in the gut, as in other aspects of life, it pays to cultivate a healthy group of friends to protect you from your enemies… The immune system is a network of cells, tissues and organs, found throughout the body that combats infectious disease and cancers. It is divided into 'innate' and 'adaptive' immune responses. 'Innate' immunity is quick to respond to certain general signs of infection, and includes certain specialised cells (phagocytes) able to track and 'eat' infective germs. 'Adaptive' immunity is used to develop a more specific response to particular germs that are more difficult to target by innate immunity. This takes time to develop – but the adaptive immune system 'remembers' germs that it has previously encountered and responds immediately the next time they try to infect. The 'antibody' is a key molecule in the adaptive immune response and is incredibly specific in targeting particular germs – millions of different antibodies can be made, each with unique targets. Vaccines use adaptive immunity to 'trick' the body into creating an adaptive response, without the danger of a real infection. Millions of lives have been saved as a result. The 'Father of Vaccination' is Edward Jenner, and his development of a smallpox vaccine led to an effective treatment for this terrible disease and, eventually, its eradication (in 1979). The lungs and the gut are key areas for the body to protect, as they are vulnerable to infection. In these areas, the immune response has to be effective, but controlled (to prevent damage) – this is an important balancing act for the immune system. People born without immune systems are extremely vulnerable to infection, and people infected with HIV/AIDS can experience similar symptoms because the virus targets the immune system. This illustrates the importance of a functioning immune system. Many immunologists are involved in research into important diseases such as asthma, type 1 diabetes, rheumatoid arthritis, HIV/AIDS and tuberculosis – effective therapies and cures are their goals.
<urn:uuid:5bca7618-4ece-4586-b466-642794c006fa>
CC-MAIN-2013-20
http://www.immunologyexplained.co.uk/HowItWorks.aspx
2013-05-20T11:29:40
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698924319/warc/CC-MAIN-20130516100844-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.96046
1,863
4.03125
4
PRE-ALGEBRA – Lesson 53 of 171 Students learn to compare decimals by first lining up the decimals, then comparing numbers place-by-place, starting on the left. For example, to compare 17.456 and 17.501, the 1's in the tens place are the same, and the 7's in the units place are the same. However, in the tenths place, 5 is greater than 4, so 17.501 is greater than 17.456.
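The same place-by-place procedure can be written out in code. Below is a minimal sketch (the function name and the string-based approach are illustrative, not from the lesson): it lines the two numerals up on the decimal point by padding with zeros, then compares digit by digit from the left.

```cpp
#include <iostream>
#include <string>

// Compare two non-negative decimal numerals written as strings,
// mirroring the by-hand method: line up the decimal points, then
// compare place by place from the left.
// Returns -1 if a < b, 0 if equal, 1 if a > b.
int compareDecimals(const std::string& a, const std::string& b) {
    auto split = [](const std::string& s, std::string& ip, std::string& fp) {
        auto dot = s.find('.');
        ip = s.substr(0, dot == std::string::npos ? s.size() : dot);
        fp = dot == std::string::npos ? "" : s.substr(dot + 1);
    };
    std::string ai, af, bi, bf;
    split(a, ai, af);
    split(b, bi, bf);

    // Pad integer parts on the left and fractional parts on the
    // right with zeros so both numerals have the same shape.
    while (ai.size() < bi.size()) ai.insert(ai.begin(), '0');
    while (bi.size() < ai.size()) bi.insert(bi.begin(), '0');
    while (af.size() < bf.size()) af.push_back('0');
    while (bf.size() < af.size()) bf.push_back('0');

    std::string x = ai + af, y = bi + bf;
    return x < y ? -1 : (x > y ? 1 : 0); // left-to-right digit comparison
}

int main() {
    std::cout << compareDecimals("17.456", "17.501") << "\n"; // prints -1
    return 0;
}
```

Because the padded strings have equal length and contain only digits, ordinary left-to-right string comparison gives the numeric ordering, exactly as in the worked example.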
<urn:uuid:153982b4-ebc8-4a13-892b-588bb924a6e6>
CC-MAIN-2013-20
http://www.mathhelp.com/how_to/decimal_concept/comparing_decimals.php
2013-05-20T12:21:29
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698924319/warc/CC-MAIN-20130516100844-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.824686
114
4.4375
4
Grades: K – 12 | Strategy Guide Series: Teaching Writing
Young and/or poor writers need to observe experienced writers at work in ways that will actually help them to write more effectively themselves. Write-aloud lessons, known as modeled writing, will help you to provide authentic explanations for your students, demonstrating how writers actually go about constructing various kinds of texts. Readers use metacognitive processes to comprehend text: adjusting reading to purpose, self-monitoring and questioning, and reflecting on an author's purpose. Think-aloud, in which a teacher verbalizes his thinking for students while reading a text, improves students' understanding of these processes. Writing is also a complex cognitive activity. Research has demonstrated that students improve their writing ability when cognitive strategies are demonstrated for them in clear and explicit ways. Students learn the forms and functions of writing as they observe and participate in writing events directed by knowledgeable writers, particularly when these events are followed by opportunities for independent writing. Instruction that makes writing processes visible to students is key to improving their writing skills. Several excellent instructional frameworks for writing – modeled, shared, interactive, guided, and independent writing – can provide strong support for students' successful writing, based on the level and type of teacher support provided. During write-aloud, like think-aloud, teachers verbalize the internal dialog they use as they write a particular type of text, explicitly demonstrating metacognitive processes. Strategy in Practice
As you write (using chart paper or a document viewer), make verbal statements that describe your own decision-making processes: “Now I need to summarize my main points. I think I should look back at my outline of the points I made in the rest of the essay.” “Hmm, what can I have this character say now to show how upset she is?” “How do I spell this word? It will help if I say the word slowly to myself first.”

After you have completed the write-aloud for a short text, ask students to comment on what they noticed about your thinking during the activity. You may want to ask them what seemed most important to accomplish as you were writing, or what you were thinking about as you wrote a challenging part of the text. It can also be useful to have students talk about their own thinking and decision-making while writing the same kind of text, or to have them work with a partner to write their own example.

Related lesson plans:
- Grades 3–5, Standard Lesson: It’s not easy surviving fourth grade (or third or fifth)! Students brainstorm survival tips for future fourth graders and incorporate those tips into an essay.
- Grades 3–5, Standard Lesson: Students walk a mile in the shoes of Solomon Singer as they learn how to use flashbacks, flash-aheads, and internal dialogue to develop realistic characters.
- Grades 3–5, Minilesson: Students explore the use of dialogue tags such as “he said” or “she answered” in picture books and novels, discussing their purpose, form, and style.
- Grades 2–5, Unit: Imagination and inference serve as a “time machine” to bring Benjamin Franklin into the classroom. History and science come to life in a dialogue with Franklin the inventor, developed through activities that incorporate research, imagination, writing, visual arts, and drama.
<urn:uuid:23a86b6f-d817-4eb7-944a-b2e13f730f9e>
CC-MAIN-2013-20
http://www.readwritethink.org/professional-development/strategy-guides/write-alouds-30687.html
2013-05-20T12:22:20
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698924319/warc/CC-MAIN-20130516100844-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.942973
1,138
4.15625
4
The people in south Asia had no warning of the disaster rushing toward them on the morning of December 26, 2004. One of the strongest earthquakes of the past 100 years had just destroyed villages on the island of Sumatra in the Indian Ocean, leaving many people injured. But the worst was yet to come, and very soon. The earthquake had occurred beneath the ocean, thrusting the ocean floor upward nearly 60 feet. The sudden release of energy into the ocean created a tsunami (pronounced su-NAM-ee): a series of huge waves. The waves rushed outward from the center of the earthquake, traveling around 400 miles per hour. Anything in the path of these giant surges of water, such as islands or coastlines, would soon be under water.

Energy from earthquakes travels through the Earth very quickly, so scientists thousands of miles away knew there had been a severe earthquake in the Indian Ocean. Why didn't they know it would create a tsunami? Why didn't they warn people close to the coastlines to get to higher ground as quickly as possible? In Sumatra, near the center of the earthquake, people would not have had time to get out of the way even if they had been warned. But the tsunami took over two hours to reach the island of Sri Lanka, 1,000 miles away, and still it killed 30,000 people!

It is important, then, to understand just how a tsunami will behave when it nears a coastline. As the ocean floor rises near a landmass, it pushes the wave higher. But much depends on how sharply the ocean bottom changes and from which direction the wave approaches. Scientists would like to know more about how actual waves react.

NASA's Multi-angle Imaging SpectroRadiometer (MISR), an instrument on the Terra satellite, has nine cameras, each pointed at a different angle, so the exact same spot is photographed from nine different angles as the satellite passes overhead. The image at the top of this page was taken with the camera that points forward at 46 degrees. It caught sunlight reflecting off a pattern of ripples as the waves bent around the southern tip of the island. These ripples are not visible in satellite images looking straight down at the surface. Scientists do not yet understand what causes this pattern. They will use computers to help them find out how the depth of the ocean floor affects the wave patterns on the surface of the ocean.

Images such as these from MISR will help scientists understand how tsunamis interact with islands and coastlines. This information will feed into the computer programs, called models, that predict where, when, and how severely a tsunami will hit. That way, scientists and government officials can warn people in time to save many lives.
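The 400-miles-per-hour figure above follows from the standard shallow-water approximation, in which a tsunami's speed in the open ocean depends only on water depth: v = sqrt(g * d). Here is a minimal sketch of the arithmetic; the 4,000-meter depth is an assumed, typical open-ocean value, not a figure from the article:

```python
import math

G = 9.81          # gravitational acceleration, m/s^2
DEPTH_M = 4000.0  # assumed average open-ocean depth, m

# Shallow-water approximation: wave speed depends only on depth.
speed_ms = math.sqrt(G * DEPTH_M)   # ~198 m/s
speed_mph = speed_ms * 2.23694      # ~443 mph

# Time for the wave to cover the ~1,000 miles to Sri Lanka.
hours = 1000.0 / speed_mph          # ~2.3 hours
print(f"~{speed_mph:.0f} mph, about {hours:.1f} hours to Sri Lanka")
```

The result, roughly 440 mph and a bit over two hours of travel time, is consistent with the speed and the Sri Lanka arrival time the article reports.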
<urn:uuid:db2613b9-457b-405c-a9e8-cf6b3053cdc7>
CC-MAIN-2013-20
http://www.spaceplace.nasa.gov/tsunami/en/facebook.com
2013-05-20T12:02:15
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698924319/warc/CC-MAIN-20130516100844-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.972419
607
4.6875
5
Virginia held five Revolutionary Conventions between August 1774 and July 1776. The conventions selected and instructed the Virginia delegates to Congress, organized military preparations, arranged economic embargoes of British goods, and formed the Virginia Committee of Safety, which governed Virginia between August 1775 and July 1776 in the absence of the royal governor.

The last of the Revolutionary Conventions met in the Capitol in Williamsburg from May 6 through July 5, 1776. On the morning of May 6, a few members of the House of Burgesses met there for the last time and let that body die. The members of the fifth convention then began their meetings in the Capitol. Many of the delegates brought instructions from their localities to declare Virginia independent of Great Britain. As their first order of business, they elected Edmund Pendleton president of the convention.

On May 14, the debate on independence began. There was no question that the ties between Virginia and Great Britain would be dissolved (Robert Carter Nicholas voiced the only opposition), but there were varying opinions on how best to preserve liberty and win the clash with British forces. Some delegates preferred to wait until foreign alliances could be negotiated, but on May 15 the delegates voted unanimously to instruct the colony's representatives in Congress to introduce a motion for independence. On June 7, 1776, the senior Virginia member of Congress, Richard Henry Lee, introduced a resolution stating, "That these United Colonies are absolved from all allegiance to the British Crown, and that all political connection between them and the State of Great Britain is, and ought to be, totally dissolved." Congress adopted his motion on July 2, 1776, and the Declaration of Independence on July 4, 1776.

When the Virginia Convention instructed the delegates in Congress on May 15 to propose a resolution of independence, it also created a committee to prepare a Declaration of Rights and a form, or constitution, of government for Virginia. On June 12, 1776, the convention unanimously adopted the Virginia Declaration of Rights, and on June 29, 1776, it unanimously adopted the first Constitution of Virginia. On the latter day it also elected Patrick Henry governor; he took office as the first governor of the independent Commonwealth of Virginia on July 6, 1776.

1. What did the convention members state were their reasons for wanting independence?
2. What did the convention resolve to do in addition to instructing the congressional delegates to enter a motion for independence?

1. Compare the list of grievances the Virginia convention detailed in its resolution with the indictment of George III in the Declaration of Independence. How are they similar, and how are they different?
2. How did Virginia declare its independence even before the Declaration of Independence was created?

The convention journal was recorded during the session in Williamsburg from May 6 through July 5, 1776. A governmental record, it stayed in the Commonwealth's records when the capital was moved to Richmond. In April 1865, shortly after the end of the Civil War, a Union soldier removed the journal from the state archives in the Capitol in Richmond and took it home with him. His descendants sold the manuscript journal in 1942 to a Philadelphia dealer in rare books and manuscripts.
When the dealer, in turn, attempted to sell the volume to the Colonial Williamsburg Foundation, the state librarian and the attorney general of Virginia intervened to ensure the document's safe return to the archives, by then part of the Virginia State Library. Virginia reimbursed the dealer in the amount of his original purchase price. The transaction was one of several made during the same period that established the precedents by which the Commonwealth of Virginia has been able to recover a large number of lost public documents.

Virginia Independence Bicentennial Commission. Revolutionary Virginia: The Road to Independence, a Documentary Record, Vol. 7: Independence and the Fifth Convention, 1776. Compiled and edited by Robert L. Scribner and Brent Tarter. Charlottesville: University Press of Virginia, 1983.

Smith, Hampden, III. "The Virginia Resolutions for Independence." Virginia Cavalcade 25 (Spring 1976): 148–157.
<urn:uuid:81a4671f-1913-426d-a1e1-5655c1d6f96b>
CC-MAIN-2013-20
http://www.virginiamemory.com/online_classroom/shaping_the_constitution/doc/convention_independence
2013-05-20T11:47:11
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698924319/warc/CC-MAIN-20130516100844-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.962486
826
4.0625
4
Memory is the retention of information. It is closely associated with learning, which experts define as the ability to change behavior through the acquisition of new knowledge. Memory allows us to retain what we've learned. There are different types of memory.

* Short-term memory is a temporary retention of information, while long-term memory can be permanent. Memory experts say new information can be converted into long-term memory through attention and repetition, a process called consolidation. The retention of facts and events is called declarative memory, and the retention of abilities and skills is termed procedural memory.

* Mnemonics are methods for improving memory by linking information in a context that allows easier recall. Mnemonic devices include groupings, rhymes, acronyms, and visual associations.

* Many people believe their ability to remember declines as they grow older, but some experts say this is a fallacy. They maintain that most people simply neglect their memory skills after they leave school. Memory is like a muscle: the more it is used, the better it gets.

* Memory can be affected by what you eat. Folic acid and vitamin B-12 are essential for good memory, as are drinking plenty of water and getting enough sleep. Drinking alcohol and smoking can have a negative effect on memory, as can many medications, such as tranquilizers and anti-anxiety drugs.

There are many books available on improving your memory, including Kevin Trudeau's "Mega Memory: How to Release Your Superpower Memory in 30 Minutes or Less a Day" ($14) and Tony Buzan's "Use Your Perfect Memory" ($12.95).

* If all else fails, the Internet offers myriad services that promise to notify you by postcard, seven to 10 days in advance, of your most important dates for the rest of your life, for a one-time fee of $39.
<urn:uuid:849a3757-35e0-4b9a-90bc-3747304473da>
CC-MAIN-2013-20
http://articles.latimes.com/2000/jan/21/local/me-56279
2013-05-23T04:48:53
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702810651/warc/CC-MAIN-20130516111330-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.951533
383
4
4
NASA began observing a dust storm on the planet Mars on November 10, 2012. Martian dust storms are the largest such storms in our solar system. Over the century that astronomers have monitored them through telescopes, and now via spacecraft, these periodic storms have been known to rage for months and grow to cover the entire planet. This one, however, appeared to be dissipating by early December 2012.

Dust storms on Mars sometimes start in the months before Mars is closest to the sun, as it soon will be: Mars will reach perihelion, its closest point to the sun, in January 2013. Each Martian year lasts about two Earth years. Regional dust storms expanded and affected vast areas of Mars in 2001 and 2007, but not between those years and not since 2007.

The image above is a mosaic taken on November 18, 2012, by the Mars Reconnaissance Orbiter, a spacecraft in orbit around Mars. Small white arrows outline the area in Mars' southern hemisphere where the 2012 dust storm was building. The storm was not far from the two Mars rovers, Opportunity and Curiosity. At that time, Rich Zurek, chief Mars scientist at NASA's Jet Propulsion Laboratory in Pasadena, California, said:

This is now a regional dust storm. It has covered a fairly extensive region with its dust haze, and it is in a part of the planet where some regional storms in the past have grown into global dust hazes. For the first time since the Viking missions of the 1970s, we are studying a regional dust storm both from orbit and with a weather station on the surface.

That weather station comes from the rover Curiosity, which landed on Mars on August 5, 2012. NASA says Curiosity's weather station detected atmospheric changes related to the storm: its sensors measured decreased air pressure and a slight rise in overnight low temperature. In fact, dust storms on Mars are known to raise the air temperature of the planet, sometimes globally.

The Opportunity rover, the stalwart vehicle that has been tooling around the Red Planet since 2004 and is now near Endeavour crater, does not have a weather station. Opportunity was within 837 miles (1,347 kilometers) of the storm on November 21, NASA said, and did observe a slight drop in atmospheric clarity from its location. If the storm had taken over the entire planet and clouded the sky, it would have hit Opportunity hardest, because that rover relies on the sun for energy; its energy supply would be disrupted if dust from the air settled on its solar panels. The car-sized Curiosity rover, by contrast, would fare better, since it is powered by plutonium instead of solar cells. Curiosity and the Mars Reconnaissance Orbiter are working together to provide a weekly Mars weather report from the orbiter's Mars Color Imager.

Bottom line: As Mars nears its perihelion, or closest point to the sun, in January 2013, a major dust storm broke out in the planet's southern hemisphere, where summer is coming. NASA is tracking the storm with both the Curiosity and Opportunity rovers on the Martian surface and from above with the Mars Reconnaissance Orbiter. These dust storms sometimes rage for months and cover the entire planet; this one seems to have died down suddenly.
<urn:uuid:79cd84c3-9aea-4123-a05f-43931a24850e>
CC-MAIN-2013-20
http://earthsky.org/space/nasa-tracking-a-brewing-dust-storm-on-desert-world-mars?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+fullsite+%28EarthSky%29
2013-05-23T04:55:22
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702810651/warc/CC-MAIN-20130516111330-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.955927
697
4.15625
4
Rotavirus causes severe vomiting and diarrhea and can be a serious condition in young children. Fortunately, there is a rotavirus vaccine that has been proven effective at preventing this infection.

One of the first rotavirus vaccines was associated with an increased risk of intussusception in the first few weeks after vaccination. Intussusception is a condition in which the intestine folds in on itself, causing a blockage or damage to the intestines. It is the most common abdominal emergency in children under 2 years of age. The original rotavirus vaccine was removed from use, and newer versions appear to be much safer. Researchers from the United States wanted to assess any remaining risk of intussusception with the newer generation of rotavirus vaccines. The study, published in the Journal of the American Medical Association, did not find an increased risk of intussusception in infants receiving the rotavirus vaccine.

About the Study

The retrospective cohort study included 786,725 doses of the pentavalent rotavirus vaccine (RV5), of which 309,844 were first doses. Infants included in the study were aged 4-34 weeks and received the vaccine between 2006 and 2010. The researchers then gathered historical data on the rate of intussusception in children who did not receive the vaccine during the same period. This expected case rate was compared with the observed rate of intussusception in vaccinated children. Intussusception developed in:
- 21 vaccinated infants vs. 20.9 expected cases (not significant) during the 1-30 days after vaccination
- 4 vaccinated infants vs. 4.3 expected cases (not significant) during the 1-7 day window after vaccination

How Does This Affect You?

The rotavirus vaccine is an effective method of reducing the incidence and serious complications of rotavirus infection in infants. Its use has been associated with significant reductions in the number of infants needing medical care for these infections. This was an observational study, which limits its reliability, but the trial included a large number of infants, and vaccine side effects are carefully monitored. The lack of a difference between observed and expected rates provides reasonable assurance of the vaccine's safety.

The cause of intussusception is not clear. While past rotavirus vaccines were associated with a small increase in risk, viral infections themselves may also be associated with intussusception, so an unvaccinated child's increased risk of rotavirus infection may also increase the risk of intussusception. Vaccines are an important part of your infant's health, and they are widely used tools whose benefits and risks are carefully monitored. Talk to your child's pediatrician about the benefits and risks of vaccines for your child.

- Reviewer: Brian P. Randall, MD
- Review Date: 04/2012
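For intuition about the observed-versus-expected comparison above, a count like 21 observed cases against 20.9 expected can be checked with an exact Poisson test. The sketch below is illustrative only; it is not the study's actual analysis, and the authors' statistical method may differ:

```python
from scipy import stats

observed = 21    # intussusception cases within 1-30 days of vaccination
expected = 20.9  # cases expected from historical (unvaccinated) rates

ratio = observed / expected  # standardized incidence ratio, ~1.0

# Two-sided exact Poisson test: double the smaller tail probability.
lower_tail = stats.poisson.cdf(observed, expected)
upper_tail = stats.poisson.sf(observed - 1, expected)
p_value = min(1.0, 2 * min(lower_tail, upper_tail))
print(f"SIR = {ratio:.2f}, p = {p_value:.2f}")  # nowhere near significant
```

With the observed count almost exactly matching the expected count, the incidence ratio is essentially 1 and the p-value is large, which is what "not significant" means in the bullet list above.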
<urn:uuid:41fbf876-e283-4b9f-9787-78b52f5c46cc>
CC-MAIN-2013-20
http://kendallmed.com/your-health/?/2011526253/Rotavirus-Vaccine-Not-Associated-with-Increased-Risk-of-Intussusception-in-Infants
2013-05-23T04:47:56
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702810651/warc/CC-MAIN-20130516111330-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.961278
584
4.1875
4
Drought was nothing new to the farmers of western Kansas. Since their fathers and grandfathers had settled there in the 1870s, dry periods had been interspersed with times of sufficient rainfall. But the drought that descended on the Central Plains in 1931 was more severe than most could remember.

Many factors led to the Dust Bowl. Increased demand for wheat during World War I, the development of new mechanized farm machinery, and falling wheat prices in the 1920s led to millions of acres of native grassland being replaced by heavily disked fields of straight-row crops. Four years of drought shriveled the crops and left the loose topsoil to the mercy of the ever-present winds.

On Sunday, April 14, 1935, known as Black Sunday, a massive front moved across the Great Plains from the northwest. Packing winds of 60 miles per hour, it scooped up the loose topsoil and mounded it into billowing clouds of dust hundreds of feet high. People hurried home, for to be caught outside could mean suffocation and death. The dust and darkness halted all forms of transportation, and the fine silt, sifting through any crack or joint, forced the closure of hospitals, flour mills, schools, and businesses.

Some met this incredible hardship and gave up. Others stayed, living on hope, humor, and stubbornness. Farmers listened to the advice of the U.S. Soil Conservation Service and began strip farming and contour farming, restoring pastureland and planting hundreds of miles of windbreaks. With concerted effort and favorable weather conditions, the land was made to bloom again as the breadbasket of the nation.

Entry: Dust Bowl
Author: Kansas Historical Society
Author information: The Kansas Historical Society is a state agency charged with actively safeguarding and sharing the state's history.
Date Created: June 2003
Date Modified: February 2013
The author of this article is solely responsible for its content.
<urn:uuid:2c072b5b-3666-4fe6-932f-0e324167fc25>
CC-MAIN-2013-20
http://kshs.org/kansapedia/dust-bowl/12040
2013-05-23T05:00:51
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702810651/warc/CC-MAIN-20130516111330-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.961801
391
4.15625
4
Swiss scientists say Europe's recent rapid temperature increase is likely due to an unexpected greenhouse gas: water vapor. Researchers at the World Radiation Center in Davos, Switzerland, say elevated surface temperatures caused by other greenhouse gases have enhanced water evaporation, contributing to a cycle that stimulates further surface temperature increases. The scientists say their findings might help answer a long-debated Earth science question: whether the water cycle can strongly enhance greenhouse warming.

The researchers examined surface radiation measurements from 1995 to 2002 over the Alps in Central Europe and found strongly increasing total surface absorbed radiation, concurrent with rapidly increasing temperatures. The authors, led by Rolf Philipona of the World Radiation Center, show experimentally that 70 percent of the rapid temperature increase is very likely caused by water vapor feedback; they attribute the remaining 30 percent to increasing manmade greenhouse gases. They suggest their observations indicate Europe is experiencing an increasing greenhouse effect, and that the dominant part of the rising heat emitted by the Earth's atmosphere (longwave radiation) is due to increased water vapor.

The report appears in the journal Geophysical Research Letters.

Copyright 2005 by United Press International
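As a back-of-the-envelope illustration of what a 70/30 split implies, consider a simple linear-feedback picture. This toy model is our illustration, not the authors' analysis: if the feedback supplies 70 percent of the total warming, the directly forced warming is amplified by a factor of roughly 1/0.3.

```python
# Toy linear-feedback model: total warming = forced warming / (1 - f).
# The 0.70 share is the paper's headline figure; treating it as a
# constant linear feedback fraction is our simplifying assumption.
feedback_share = 0.70                # fraction of warming from water vapor
gain = 1.0 / (1.0 - feedback_share)  # amplification of the forced warming
print(f"amplification factor ~ {gain:.1f}x")  # ~3.3x
```

In this reading, the manmade greenhouse forcing alone would account for only about a third of the observed temperature rise, with the water vapor cycle multiplying it by roughly 3.3.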
<urn:uuid:e91cbd20-af99-4f5a-977b-543c709f8f58>
CC-MAIN-2013-20
http://phys.org/news8015.html
2013-05-23T04:35:26
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702810651/warc/CC-MAIN-20130516111330-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.901396
251
4
4
All modern cars have a computer in charge of monitoring the various systems. This central computer receives information from a collection of sensors that monitor things like oxygen, air pressure, air temperature, and engine temperature, to name a few. Using this information, the computer can control the car's parts to get the best performance from the engine while keeping emissions low.

How do these sensors work? It depends on whether they are pressure sensitive or light sensitive. A pressure-sensitive device senses changes in pressure and emits a corresponding voltage, which the computer uses to correct problems. Sensors of this sort are used in braking systems and collision-avoidance systems, for example.

Light-sensitive, or optical, sensors work much like the optical mouse on a desktop computer. A small diode bounces light off a surface onto a sensor to form images. The sensor sends the data to a digital signal processor for analysis. The processor detects patterns in the images and figures out how they have changed since the previous image it received from the sensor. Based on the changes in patterns over time, the processor can determine how far the mouse, or car, has moved. It can then send electrical signals to the central computer to trigger the appropriate response. For instance, sensors can scan the precise position of the driver's eye level and adjust the seat accordingly.

Newer prototype cars include infrared light enhancers to improve night vision, as well as rearview mirrors and rear-bumper sensors that alert the driver when other vehicles approach the car's blind spot. Adaptive headlamps contain sensors that monitor a car's speed and steering wheel movements and adjust the lighting accordingly; at high speeds, for example, the light beams are given a longer reach. Remain-in-lane systems use forward-facing cameras to monitor the car's position relative to the road's centerline and side marker lines for 20 meters ahead of the car. If the car begins to veer out of the lane, the sensors detect this and set off a warning sound.

The automatic door openers found in most grocery stores use a very simple form of radar. The box above the door sends out bursts of radio waves and waits for the reflected energy to bounce back. When a person moves into the field of wave energy, it changes either the amount of reflected energy or the time it takes for the reflection to arrive, and the box opens the door. Infrared security systems work in much the same way, replacing the radio waves with infrared light waves.
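The pattern-comparison step described above, figuring out how far the image has shifted between two frames, can be made concrete with a small sketch. Phase correlation is one standard way to estimate that shift; the article does not say which algorithm these processors actually use, so treat this as an illustrative stand-in:

```python
import numpy as np

def estimate_shift(prev_frame: np.ndarray, next_frame: np.ndarray):
    """Estimate the (dy, dx) translation between two grayscale frames
    via phase correlation, the kind of pattern tracking an optical
    sensor's signal processor performs between successive images."""
    f1 = np.fft.fft2(prev_frame)
    f2 = np.fft.fft2(next_frame)
    # Normalized cross-power spectrum; its inverse FFT peaks at the shift.
    cross_power = np.conj(f1) * f2
    cross_power /= np.abs(cross_power) + 1e-12  # avoid divide-by-zero
    corr = np.fft.ifft2(cross_power).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # FFT indices wrap around, so map large indices to negative shifts.
    h, w = prev_frame.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx

# Quick check: shift a random texture by (3, -5) and recover the offset.
rng = np.random.default_rng(0)
frame = rng.random((64, 64))
shifted = np.roll(frame, shift=(3, -5), axis=(0, 1))
print(estimate_shift(frame, shifted))  # expected: (3, -5)
```

Summing these per-frame shifts over time gives the cumulative displacement, which is exactly the quantity the processor reports to the central computer.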
<urn:uuid:5ab0f282-79e5-4cf1-95e5-e46222ec0892>
CC-MAIN-2013-20
http://www.aip.org/dbis/stories/2004/14182.html
2013-05-23T04:27:40
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702810651/warc/CC-MAIN-20130516111330-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.916838
501
4
4