10 | Writing Equations: Precipitation Reactions
Equations representing precipitation reactions can be written in one of three ways:
- In a precipitation reaction a product of the reaction is only slightly soluble, or insoluble. This product is formed as a solid, also known as a precipitate.
- Solubility Rules can be used to determine if a product is insoluble (forms a precipitate)
- Ions in solution that are not used to form the precipitate are called spectator ions
- It is important to include the states of matter in the chemical equation:
(s) for solid, the precipitate
(g) for gas
(l) for liquid
(aq) for substances in aqueous solution
- Molecular Equations
All reactants and products are written as if they are molecules
- Ionic Equations
All reactants and products that are soluble are written as ions, only the precipitate is written as if it were a molecule
- Net Ionic Equations
Only the reactants and product taking part in the reaction are written in the equation, the reactants as ions, the product as a molecule.
Spectator ions are not included in the equation
Consider the reaction between solutions of sodium chloride, NaCl(aq), and silver nitrate, AgNO3(aq).
The possible products of the reaction are sodium nitrate, NaNO3, and silver chloride, AgCl.
From the solubility rules we find that sodium nitrate, NaNO3, is soluble since all Group I ions form soluble salts and also all nitrates are soluble. Silver chloride, AgCl, is insoluble since all chlorides are soluble EXCEPT those of silver, lead (II), mercury (I), copper (I) and thallium.
Writing the precipitation reaction equations
- Molecular Equation
All species in the reaction are written as if they are molecules, species in solution must include the (aq), the precipitate must include the (s)
That is: NaCl(aq), AgNO3(aq), NaNO3(aq), AgCl(s)
NaCl(aq) + AgNO3(aq) -----> NaNO3(aq) + AgCl(s)
- Ionic Equation
All species in solution are written as ions, the precipitate is written as if a molecule.
That is: REACTANTS: Na+(aq), Cl-(aq), Ag+(aq), NO3-(aq)
PRODUCTS: Na+(aq), NO3-(aq), AgCl(s)
Na+(aq) + Cl-(aq) + Ag+(aq) + NO3-(aq) ------> Na+(aq) + NO3-(aq) + AgCl(s)
- Net Ionic Equation
Written as for Ionic Equation except that spectator ions are not included in the equation:
That is: Na+(aq), NO3-(aq) are not included.
Only the species involved in producing the precipitate are included in the equation
That is: Ag+(aq), Cl-(aq), AgCl(s) are included in the equation
Ag+(aq) + Cl-(aq) ------> AgCl(s) | http://www.ausetute.com.au/ppteeqtn.html | 13 |
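The same bookkeeping can be sketched in a few lines of code. The snippet below only illustrates the procedure worked through above; the miniature solubility table and the function name are my own simplification and are not part of the original page.

```python
# Toy illustration of building a net ionic equation (assumptions, not from the source):
INSOLUBLE = {("Ag+", "Cl-")}          # mini solubility table: AgCl precipitates;
                                      # anything not listed stays dissolved

def net_ionic(salt1, salt2):
    """salt1 and salt2 are (cation, anion) pairs of aqueous reactants."""
    (c1, a1), (c2, a2) = salt1, salt2
    products = [(c1, a2), (c2, a1)]                   # double-displacement swap
    solids = [p for p in products if p in INSOLUBLE]
    if not solids:
        return "No precipitate forms; all ions are spectators."
    cation, anion = solids[0]
    # Ions not in the solid are spectator ions and are dropped from the equation.
    return f"{cation}(aq) + {anion}(aq) -----> {cation.rstrip('+')}{anion.rstrip('-')}(s)"

print(net_ionic(("Na+", "Cl-"), ("Ag+", "NO3-")))     # Ag+(aq) + Cl-(aq) -----> AgCl(s)
```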
14 | The climate of Uranus is heavily influenced by both its lack of internal heat, which limits atmospheric activity, and by its extreme axial tilt, which induces intense seasonal variation. Uranus' atmosphere is remarkably bland in comparison to the other gas giants which it otherwise closely resembles. When Voyager 2 flew by Uranus in 1986, it observed a total of ten cloud features across the entire planet. Later observations from the ground or by the Hubble Space Telescope made in the 1990s and the 2000s revealed bright clouds in the northern (winter) hemisphere of the planet. In 2006 a dark spot similar to the Great Dark Spot on Neptune was detected.
In 1986 Voyager 2 discovered that the visible southern hemisphere of Uranus can be subdivided into two regions: a bright polar cap and dark equatorial bands (see figure on the right). Their boundary is located at about −45 degrees of latitude. A narrow band straddling the latitudinal range from −45 to −50 degrees is the brightest large feature on the visible surface of the planet. It is called a southern "collar". The cap and collar are thought to be a dense region of methane clouds located within the pressure range of 1.3 to 2 bar. Unfortunately Voyager 2 arrived during the height of the planet's southern summer and could not observe the northern hemisphere. However, at the end of the 1990s and the beginning of the twenty-first century, when the northern polar region came into view, the Hubble Space Telescope (HST) and the Keck telescope initially observed neither a collar nor a polar cap in the northern hemisphere. So Uranus appeared to be asymmetric: bright near the south pole and uniformly dark in the region north of the southern collar. In 2007, however, when Uranus passed its equinox, the southern collar almost disappeared, while a faint northern collar emerged near 45 degrees of latitude. The visible latitudinal structure of Uranus is different from that of Jupiter and Saturn, which demonstrate multiple narrow and colorful bands.
In addition to large-scale banded structure, Voyager 2 observed ten small bright clouds, most lying several degrees to the north of the collar. In all other respects Uranus looked like a dynamically dead planet in 1986. However, in the 1990s the number of observed bright cloud features grew considerably. The majority of them were found in the northern hemisphere as it started to become visible. The common though incorrect explanation of this fact was that bright clouds are easier to identify in the dark part of the planet, whereas in the southern hemisphere the bright collar masks them. Nevertheless there are differences between the clouds of each hemisphere. The northern clouds are smaller, sharper and brighter. They appear to lie at a higher altitude, which is connected to the fact that until 2004 (see below) no southern polar cloud had been observed at the wavelength 2.2 micrometres, which is sensitive to methane absorption, while northern clouds have been regularly observed in this wavelength band. The lifetime of clouds spans several orders of magnitude. Some small clouds live for hours, while at least one southern cloud has persisted since the Voyager flyby. Recent observations have also shown that cloud features on Uranus have a lot in common with those on Neptune, although the weather on Uranus is much calmer.
The dark spots common on Neptune had never been observed on Uranus before 2006, when the first such feature was imaged. In that year observations from both the Hubble Space Telescope and the Keck Telescope revealed a small dark spot in the northern (winter) hemisphere of Uranus. It was located at a latitude of about 28 ± 1° and measured approximately 2° (1300 km) in latitude and 5° (2700 km) in longitude. The feature, called the Uranus Dark Spot (UDS), moved in the prograde direction relative to the planet with an average speed of 43.1 ± 0.1 m/s, which is almost 20 m/s faster than the speed of clouds at the same latitude. The latitude of UDS was approximately constant. The feature was variable in size and appearance and was often accompanied by bright white clouds called the Bright Companion (BC), which moved with nearly the same speed as UDS itself.
The behavior and appearance of UDS and its bright companion were similar to Neptunian Great Dark Spots (GDS) and their bright companions, respectively, though UDS was significantly smaller. This similarity suggests that they have the same origin. GDS were hypothesized to be anticyclonic vortices in the atmosphere of Neptune, whereas their bright companions were thought to be methane clouds formed in places where the air is rising (orographic clouds). UDS is supposed to have a similar nature, although it looked different from GDS at some wavelengths. While GDS had the highest contrast at 0.47 μm, UDS was not visible at this wavelength. On the other hand, UDS demonstrated the highest contrast at 1.6 μm, where GDS were not detected. This implies that dark spots on the two ice giants are located at somewhat different pressure levels—the Uranian feature probably lies near 4 bar. The dark color of UDS (as well as GDS) may be caused by thinning of the underlying hydrogen sulfide or ammonium hydrosulfide clouds.
The tracking of numerous cloud features allowed determination of zonal winds blowing in the upper troposphere of Uranus. At the equator winds are retrograde, which means that they blow in the reverse direction to the planetary rotation. Their speeds are from −100 to −50 m/s. Wind speeds increase with the distance from the equator, reaching zero values near ±20° latitude, where the troposphere's temperature minimum is located. Closer to the poles, the winds shift to a prograde direction, flowing with the planet's rotation. Wind speeds continue to increase reaching maxima at ±60° latitude before falling to zero at the poles. Wind speeds at −40° latitude range from 150 to 200 m/s. Since the collar obscures all clouds below that parallel, speeds between it and the southern pole are impossible to measure. In contrast, in the northern hemisphere maximum speeds as high as 240 m/s are observed near +50 degrees of latitude. These speeds sometimes lead to incorrect assertions that winds are faster in the northern hemisphere. In fact, latitude per latitude, winds are slightly slower in the northern part of Uranus, especially at the midlatitudes from ±20 to ±40 degrees. There is currently no agreement about whether any changes in wind speed have occurred since 1986, and nothing is known about much slower meridional winds.
Determining the nature of this seasonal variation is difficult because good data on Uranus' atmosphere has existed for less than 84 Earth years, or one full Uranian year. A number of discoveries have, however, been made. Photometry over the course of half a Uranian year (beginning in the 1950s) has shown regular variation in the brightness in two spectral bands, with maxima occurring at the solstices and minima occurring at the equinoxes. A similar periodic variation, with maxima at the solstices, has been noted in microwave measurements of the deep troposphere begun in the 1960s. Stratospheric temperature measurements beginning in the 1970s also showed maximum values near the 1986 solstice.
The majority of this variability is believed to occur due to changes in the viewing geometry. Uranus is an oblate spheroid, which causes its visible area to become larger when viewed from the poles. This explains in part its brighter appearance at solstices. Uranus is also known to exhibit strong meridional variations in albedo (see above). For instance, the south polar region of Uranus is much brighter than the equatorial bands. In addition, both poles demonstrate elevated brightness in the microwave part of the spectrum, while the polar stratosphere is known to be cooler than the equatorial one. So seasonal change seems to happen as follows: the poles, which are bright in both visible and microwave spectral bands, come into view at the solstices, resulting in a brighter planet, while the dark equator is visible mainly near the equinoxes, resulting in a darker planet. In addition, occultations at the solstices probe the hotter equatorial stratosphere.
However, there are some reasons to believe that seasonal changes are happening in Uranus. While the planet is known to have a bright south polar region, the north pole is fairly dim, which is incompatible with the model of the seasonal change outlined above. During its previous northern solstice in 1944, Uranus displayed elevated levels of brightness, which suggests that the north pole was not always so dim. This information implies that the visible pole brightens some time before the solstice and darkens after the equinox. Detailed analysis of the visible and microwave data revealed that the periodic changes of brightness are not completely symmetrical around the solstices, which also indicates a change in the albedo patterns. In addition, the microwave data showed increases in pole–equator contrast after the 1986 solstice. Finally, in the 1990s, as Uranus moved away from its solstice, Hubble and ground-based telescopes revealed that the south polar cap darkened noticeably (except the southern collar, which remained bright), while the northern hemisphere demonstrated increasing activity, such as cloud formations and stronger winds, bolstering expectations that it would brighten soon. In particular, an analog of the bright polar collar present in the southern hemisphere at −45° was expected to appear in the northern part of the planet. This indeed happened in 2007 when the planet passed an equinox: a faint northern polar collar arose, while the southern collar became nearly invisible, although the zonal wind profile remained asymmetric, with northern winds being slightly slower than southern.
The mechanism of physical changes is still not clear. Near the summer and winter solstices, Uranus' hemispheres lie alternately either in full glare of the Sun's rays or facing deep space. The brightening of the sunlit hemisphere is thought to result from the local thickening of the methane clouds and haze layers located in the troposphere. The bright collar at −45° latitude is also connected with methane clouds. Other changes in the southern polar region can be explained by changes in the lower cloud layers. The variation of the microwave emission from the planet is probably caused by changes in the deep tropospheric circulation, because thick polar clouds and haze may inhibit convection.
For a short period in Autumn 2004, a number of large clouds appeared in the Uranian atmosphere, giving it a Neptune-like appearance. Observations included record-breaking wind speeds of 824 km/h and a persistent thunderstorm referred to as "Fourth of July fireworks". Why this sudden upsurge in activity should be occurring is not fully known, but it appears that Uranus' extreme axial tilt results in extreme seasonal variations in its weather.
Several solutions have been proposed to explain the calm weather on Uranus. One proposed explanation for this dearth of cloud features is that Uranus' internal heat appears markedly lower than that of the other giant planets; in astronomical terms, it has a low internal thermal flux. Why Uranus' heat flux is so low is still not understood. Neptune, which is Uranus' near twin in size and composition, radiates 2.61 times as much energy into space as it receives from the Sun. Uranus, by contrast, radiates hardly any excess heat at all. The total power radiated by Uranus in the far infrared (i.e. heat) part of the spectrum is 1.06 ± 0.08 times the solar energy absorbed in its atmosphere. In fact, Uranus' heat flux is only 0.042 ± 0.047 W/m², which is lower than the internal heat flux of Earth of about 0.075 W/m². The lowest temperature recorded in Uranus' tropopause is 49 K (−224 °C), making Uranus the coldest planet in the Solar System, colder than Neptune.
Another hypothesis states that when Uranus was "knocked over" by the supermassive impactor which caused its extreme axial tilt, the event also caused it to expel most of its primordial heat, leaving it with a depleted core temperature. Another hypothesis is that some form of barrier exists in Uranus' upper layers which prevents the core's heat from reaching the surface. For example, convection may take place in a set of compositionally different layers, which may inhibit the upward heat transport.
| http://www.mashpedia.com/Climate_of_Uranus | 13 |
10 | The Kalsun Math Tool for Addition and Subtraction is a simple educational device specially designed with patented features, for early use by children from pre-KG to U.K.G. It is useful for students in special education.
The purposes of the tool are to help children:
- Learn and develop a strong number sense
- Understand properties of addition
- Recognise patterns and relationships among numbers
- Enhance memory and mental skills through building concepts
- Develop algebraic thinking
- The colourful and student-friendly device presents the number sequence on slidable and rotatable blocks that serve as the reckoning slide on the top row, and provides a set of movable blocks on the bottom slide that are aligned neatly with the blocks on the top row in matching colours to help with counting and solving problems.
This design helps build math foundation in children and improve classroom performance by :
- Combining all modes of learning - kinesthetic, tactile, and visual
- Minimising errors associated with loose pieces and use of fingers
- Helping to visualise patterns and relationships with ease
- Children can explore one or more numbers on a daily basis, master the addition facts of each number, and learn the various combinations of arriving at the number in a systematic way
- Once the children are taught the use of the tool for the first few numbers, they can use it themselves for learning other numbers and develop number sense on their own.
- The repeated use of the tool will give them a clear understanding of the numbers, thereby enabling them to memorise addition and subtraction facts with ease, in turn encouraging them to love math.
- The use of the tool helps with addition and subtraction problems, and develops mental arithmetic | http://jayavidya.com/kalfun.php | 13 |
30 | Curriculum quality is a key element of IDRA’s Quality Schools Action Framework (Robledo Montecel, 2005). IDRA believes that this key element has to be in place to ensure a quality education for all students, in all content areas, in all schools and at all grade levels.
When you think of quality mathematics curricula, what do you envision? Massachusetts Institute of Technology professor and world-renowned mathematician and educator, Seymour Papert asks us to think of curricula in a new way, replacing a system where students learn something on a scheduled day, with one where they learn something when they need it in an environment that shows meaning and gives context as to why it is being learned. It is student-centered where students use what they are learning (Curtis, 2001).
Think for a moment what you would expect… teachers doing and saying; students doing and saying; and parents doing and saying. Reflect on the outcomes and possibilities that would unfold for students, families, teachers and the community if all schools had a quality mathematics curriculum in place.
Standard of Quality Math Curriculum
The National Council of Teachers of Mathematics includes in its Principles and Standards for School Mathematics the curriculum principle: “A curriculum is more than a collection of activities: it must be coherent, focused on important mathematics and well articulated across the grades” (2000). This principle provides a framework in which to make instructional decisions and policies that impact student success and achievement in mathematics.
A quality mathematics curriculum must be vertically aligned, connecting and building upon concepts within and across grade levels, engaging students in meaningful mathematics where they see the value of learning the concepts, and facilitating the development of a student’s productive disposition toward mathematics (NCTM, 2000; Kilpatrick, et al., 2001).
Throughout Texas, school districts have invested many resources in creating a variety of curricula in attempts to meet national and state standards. A shift has occurred over the past decade from the optional use of course curricula to a more pervasive and monitored use. Although use of district mathematics curricula is more often the case than not, the quality of such curricula spans many levels:
- Mediocre test-driven curriculum where the only expectation for students is to pass a punitive, high-stakes standardized test;
- Scripted lessons and timelines detailing verbatim what teachers will say and dictating what materials will be used, leaving no room for teacher creativity or student investigation; or
- Highly challenging and engaging curriculum that is standards-driven and that values teacher’s professional expertise and values students as mathematics learners.
Sample Process Used in Math Smart!
IDRA models the development of highly challenging and engaging curricula through its Math Smart! program. Math Smart! integrates the Five Dimensions of Mathematical Proficiency with strategies for engaging students, dynamic technology tools for building and deepening student mathematical thinking, strategies for supporting English language learners, and strategies for engaging and valuing parents through a variety of methods. The process is outlined below.
Planning with Teachers
Planning sessions are an opportunity for mathematics teachers to reflect on math concepts and their teaching practice. In a planning session IDRA held with Math Smart! Algebra I teachers at one school, teachers reviewed the timeline and discussed how they were exploring the concepts of quadratic functions, finding roots, maximum and minimum values, and evaluating the functions with their students.
Teachers wanted to put into practice elements of the Math Smart! program in the curriculum and lead into polynomials and polynomial properties. What resulted was a deep discussion on how to bring to life quadratic functions, roots and maximums through kicking a soccer ball or football and using physics.
A plan for integrating non-traditional, brain-researched teaching strategies where students discover and present their own methods for simplifying polynomials, finding roots and real-life applications was also developed from the discussion among teachers.
Planning that reflects the teaching practice where teachers also explore the actual concepts is an integral part of building a quality mathematics curriculum. Curriculum development becomes a collaborative effort and parallels what we want to happen in the classroom, where communication and discovery is two-way: students and teachers participating in conversations about mathematical ideas.
Thus, quality curriculum development integrates the teacher and the reflection on the teaching practice and mathematics, where district content specialists and teachers participate in collaborative, curriculum development.
Curriculum that Engages Students
Taking what they had planned, teachers developed an activity that engaged students from the moment the bell rang. The following is a sample from one classroom.
Lesson Introduction: Engaging Students – The teacher began the class by telling students that if she knew how long a football they kicked was in flight, she could figure out exactly how high that ball went without having to chase the ball with a meter stick and ladder. None of her students believed her, and they asked her to “prove it.”
She proceeded to show them a video that she had downloaded from the United Streaming Video resource (that her school has a subscription to) of classic football games and soccer kicks. Students worked in groups of three, beginning with a warm-up activity (see box below) that included a timed brainstorm about quadratic functions in their everyday lives. She asked students to sketch a graph of the football in motion from the video.
As a closing to the introduction part of the lesson and to describe the next part of the lesson, she showed a humorous video of how “not” to kick the football. Humor, not sarcasm, is a highly effective strategy for engaging students. Students were eager to take on the task of finding their own quadratic functions to their kicks.
Experiencing Quadratic Functions – Using soccer balls and stop watches and working in groups of three outside, students kicked the ball and recorded the times the ball was in motion (see activity below). Many questions about how their graph would change surfaced as they were experiencing mathematics in motion. Students wondered about how the graph would differ if they kicked the ball straight up versus across the field and what if they kicked the ball off the ground versus as it is on the ground.
Every student was engaged in the activity. Part of the success of this is attributed to the physicality of the activity. Students were outside of their sterile classroom, and the soccer field became their lab. The act of doing something helps students remember properties of quadratics, what the roots mean, what the maximum/minimum mean and what happens when we change any of the parameters. They have something to tie it to.
When a student is taking a state-mandated test and comes across a problem asking about the change in a parameter, what will the student call upon – an exact equation that she worked on or the experience that explored what happens if the ball was kicked 0.5 meters off of the ground, how it would affect the graph equation, and the maximum value?
This was an activity that students found valuable. Many of the students were involved in sports and were able to relate their life experiences to quadratic functions.
Bringing it Back to the Classroom – After collecting the data and taking a much-needed water break, students went back to the classroom and began using a well-known quadratic function for finding vertical distance to find their own quadratic functions. Using cognitively-guided instruction techniques and building academic language from students' natural language, the teacher was impressed and energized by how students were able to connect to the meaning of the coefficients and constants for initial velocity (v0), initial height (h0), and the dependent variable, vertical distance (d).
Students discussed in groups and shared with the whole group the meaning of the roots and the maximum in their own graphs, connecting them to their real-life application. Students said such things as: “The first root is where time and distance are both 0, or the origin, because I had not kicked the ball yet, and the second root is when the ball landed, and also the vertical distance is 0. This connects to when our teacher explained that the roots are where the parabola crosses the x-axis.”
Another student explained initial velocity as to how fast it is going at kick-off, but then the ball slows down because it is going up but gains speed as it is coming back down and will reach that velocity again right at the moment it lands.
These are highly complex mathematical ideas that students so readily explained as the meaning of the function d = -5t² + V₀t + H₀ was being explored in conjunction with the graphs they had sketched.
It also enabled the teacher to bring in the idea of instantaneous rate of change, a concept formally presented in Calculus I, to her Algebra I students. This teacher has the expectation for all of her students to go on to Calculus I. It shows in statements she makes, such as, “When you get to calculus, you will hear the term ‘instantaneous rate of change’ to describe how fast the ball is going along the path.”
Finding the Functions and Making Conjectures – Students readily volunteered to present to and get guidance from each other in trying to figure out how they would first calculate the initial velocity as it was easy to find the initial height (which was 0 because the ball was on the ground when it was kicked). One student volunteered that even though he “didn’t know what to do,” he would “get help from the class.” The class eagerly helped him, justifying and bringing in ways that they knew how to “do the math” (i.e., solve equations to find the initial velocity given the time and the vertical distance after t number of seconds). Once students found the initial velocity, they were able to write their very own quadratic function describing their own kicks.
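The arithmetic the students carried out can be summarized in a short sketch. The code below is illustrative only (the function name and the 3-second example kick are my own assumptions); it uses the same simplified model discussed in class, d = -5t² + v₀t + h₀, with distances in meters and time in seconds.

```python
def kick_model(flight_time, h0=0.0):
    """Given a kick's time in the air (seconds) and starting height h0 (meters),
    return (v0, max_height) under the class model d = -5t^2 + v0*t + h0."""
    T = flight_time
    v0 = (5 * T**2 - h0) / T          # from d = 0 at landing time T
    max_height = h0 + v0**2 / 20      # height at the vertex, t = v0 / 10
    return v0, max_height

v0, peak = kick_model(3.0)            # a kick that stays up for 3 seconds
print(f"d = -5t^2 + {v0:.1f}t + 0   (maximum height about {peak:.1f} m)")
```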
Students Challenging Students – The beauty of mathematics is in the “what if’s” – variables changing, parameters and coefficients changing, and analyzing what it all means and how it applies. Using an engaging activity paves the way for students to begin thinking of “what if” questions. It gives them the experience of mathematics.
As indicated above, students began asking the “what if” questions when they were out in the field collecting data. It was natural for them to do this, without being prompted by the teacher. Students were able to answer their questions using their graphing calculators, quadratic functions and natural mathematical reasoning abilities.
In closing the activity, students had to present one of the quadratic functions from the group, indicating the roots, the maximum height of the ball, why they chose that kick, and a what if question to their fellow classmates. Some of the questions included: what if we were on another planet where the gravity is not so strong, what do you think the graph would look like? And, what if I kicked the ball at a faster initial velocity and it was three feet off the ground, how would my equation change?
As a result of planning and of teachers’ experiencing with students a highly challenging and engaging activity that had them involved in mathematical conversations, these Math Smart! teachers wanted to continue to contribute to the district curricula and include collaboration and teaching practice reflections as an ongoing way of ensuring a quality mathematics curriculum for their students.
IDRA and the teachers were able to explore a model for creating a quality mathematics curriculum: reflecting on current curricula, sharing ideas on how to get students involved and appeal to their interests so they find mathematics valuable, using available resources, breaking out of traditional one-way conversations into two-way conversations with students about the mathematics, and realizing that as time and technologies change, so too will the curriculum.
Quality curriculum is dynamic; involves teacher practitioners in ongoing reflection, development and refinement; values students’ experiences and the knowledge they bring; and is rigorous and vertically aligned so that students are not only prepared to enter higher-level mathematics courses, but also experience higher-level mathematics within their current courses.
Curtis, D. Start With the Pyramid (San Rafael, Calif.: The George Lucas Educational Foundation, 2001), http://www.edutopia.org/php/article.php?id=Art_884&key=037.
Kilpatrick, J., and J. Swafford, B. Findell (Eds). Adding it Up: Helping Children Learn Mathematics (Washington, D.C.: National Research Council Mathematics Learning Study Committee, November 2001).
National Council of Teachers of Mathematics. Principles and Standards for School Mathematics (Reston, Va.: National Council of Teachers of Mathematics, 2000).
Robledo Montecel, M. “A Quality Schools Action Framework – Framing Systems Change for Student Success,” IDRA Newsletter (San Antonio, Texas: Intercultural Development Research Association, November-December 2005).
Kathryn Brown is the technology coordinator in the IDRA Division of Professional Development. Comments and questions may be directed to her via e-mail at | http://www.idra.org/IDRA_Newsletter/April_2006_Curriculum_Quality/_Re-Invigorating_Math_Curricula/ | 13 |
13 | Scientists think that dark energy, the weird force blamed for propelling the universe to
expand at an accelerated speed, probably turned on between 5 and 7 billion years ago. Now astronomers have mapped thousands of galaxies from this era, and have determined the most precise distances to them yet, in an effort to get to the bottom of the dark energy mystery.

Dark energy is thought to represent about 74 percent of the universe’s total mass and energy, dwarfing ordinary matter. While its existence has never been directly confirmed, the strange force remains the leading explanation for why galaxies are speeding up as they spread farther and farther apart from each other. As Ariel Sanchez, a research scientist at the Max Planck Institute for Extraterrestrial Physics in Garching, Germany, explained, ordinary matter is only a few percent of the universe. The largest component of the universe is dark energy, an irreducible energy associated with space itself that is causing the expansion of the universe to accelerate.

But the expansion of the universe hasn’t always been accelerating. Theorists think that before roughly 5 to 7 billion years ago, the expansion of the universe was slowing, due to the inward pull of gravity. Then, around that time, the expansion stopped slowing and started speeding up from the force of dark energy.

To study these changes in cosmic expansion, scientists must measure the distances between galaxies now, as well as during different epochs of the distant past. They can do this by looking at very distant galaxies whose light is only reaching us now after traveling billions of years, which can paint a picture of what the universe looked like billions of years ago.

Now, astronomers have created the most accurate map yet of galaxies in the distant universe, offering a window into the past and, possibly, into dark energy. The map comes from data collected by the Baryon Oscillation Spectroscopic Survey (BOSS), which is part of the third Sloan Digital Sky Survey (SDSS-III). | http://forcetoknow.com/space/map-distant-galaxies-reveal-dark-energy-history.html | 13 |
12 | Prairie Populations is a South Dakota wildlife population census activity for students in grades 5-10.
This activity is designed to provide the students with an open-ended problem solving situation that integrates mathematics and language arts with biology. The activity is designed as a unit that takes the students through an entire learning cycle in which they investigate and experiment, learn from experts through a practical, real life situation, and finally apply the knowledge and skills they have learned. Students will be able to: 1) explain why scientists census wildlife populations; 2) use a variety of techniques to estimate a population; 3) apply their mathematical skills of estimation, multiplication, averaging, geometry, fractions and percents; 4) learn about waterfowl populations at a South Dakota wildlife refuge; and 5) demonstrate their knowledge and skills by conducting a census of their choosing.
Students will be presented with a population to census. They must assess the problem, design two procedures to estimate the population, determine which procedure yields the most accurate information, and make a report on the best census technique as determined by their investigations. Students will then learn about real-life population census activities at a South Dakota wildlife refuge. Finally, students will conduct and report on a census of their own choosing.
There are many instances when biologists are interested in the size of wildlife populations. A population is defined as the individuals of a single species found at a particular location at a particular time. Most often biologists census populations as a basis for wildlife management decisions. For example, an accurate estimation of the males and females in a population must be known before managers can decide how many hunting or fishing licenses can be issued. Population censuses are conducted to determine if a species is threatened or endangered. Species with very low populations are included on state and/or federal lists that insure special consideration will be taken to protect the species from extinction. Some populations are monitored because too many of the species could cause serious habitat deterioration problems for people, livestock or agriculture crops. When populations exceed acceptable levels, control measures are initiated. These could include establishment of hunting seasons to harvest excess numbers or, in cases of insect problems, application of pesticides.
In many cases, population censuses include information about the sex and age group ratios. These data are important for predicting future trends within the population. A population with very few juveniles, for example, could indicate that the species is experiencing a disruption of the reproductive cycle that will soon result in a population crash. Biologists use this information to call for studies of the ecology of the species to determine the exact nature of the problem.
Biologists also are interested in assessing wildlife die-offs. Possible causes for bird kills are:
When the death of large numbers of plants or animals occurs, biologists are called to assess the extent of the problem. Even in these situations, when the organisms are dead and therefore stationary, it is difficult to obtain accurate counts. Censusing by counting each individual in an entire area is usually too time consuming and costly, and sometimes impossible. Good estimating strategies are essential. Scientists have developed several techniques to help determine approximate population size:
Items that will be needed are 200 (or more) toothpicks (more elaborate models could be used), a grassy field, stakes, string, paper, graph paper, pencils, calculators, tape measures, and several hula-hoops (for older students who can calculate areas of circles).
1. Review the concept of population. Ask the students to brainstorm about why biologists might want to know how many individuals there are in a particular wildlife population. Discuss their ideas and either suggest additional ones or have students contact wildlife biologists for more information.
2. Choose a large open area - grassy lawn, field, park or school yard - that should be staked out by the teacher and eventually measured by the students. An area 100' by 50' would do. Randomly scatter throughout the study area a predetermined number of models representing dead birds. The size of the area and number of birds should be chosen with consideration of the difficulty of the mathematics that will be required to complete the activity. For younger students use increments of 100 models so the calculation of fractions (or percent) will be easier. Students should not be told the size of the field or the number of models.
3. Present the students with the following problem: During the late summer of the year a motorist was passing by a tall radio tower near an open field that had some water in it. The motorist noticed many dead birds scattered about. He was so concerned by this unusual sight that he contacted a wildlife biologist in the nearest South Dakota Game, Fish and Parks office. The biologist examined the field and took a few of the birds on which to run tests to determine the cause of the tragedy. The biologist had to determine the number of birds that were lost for a report that she was required to file with the South Dakota state government. Because counting each individual bird was too time consuming and costly, the biologist wanted to devise a strategy to estimate the number of birds killed.
4. Have the students work in small groups. The students have two tasks. First, to brainstorm and/or research the possible causes that would result in a large die-off of birds as described in the scenario. Some possibilities are explained for the teacher's reference in the background section above.
Second, the students should design a population estimation strategy for the biologist to use that will be the most accurate. To help in this endeavor, tell the students you have prepared a model of the situation for them to experiment with in which one toothpick represents one dead bird. Provide each team a piece of graph paper on which they can construct a scale drawing. First, have the students calculate the area of the field containing the dead birds. (Students who cannot yet calculate areas can do the activity by establishing grids of whatever size they would like and counting the number of grid boxes in the scale drawing).
5. Each group should decide on two techniques that could be used to estimate the number of dead birds in the field. Use one trial of each of the two techniques to estimate the population. When sampling the field, students can count the models but they should not move or remove them. The hula-hoops or grids made of string can be used to delineate sample sections of the field.
6. Once a group has estimated the population using two different strategies, tell them the actual number of dead birds in the field. Students should then calculate the accuracy of their procedures. Ask students what could be done to increase the accuracy of their procedure. If they suggest using larger samples or increased numbers of trials, have them make these improvements, recalculate the total, and see if the accuracy of their estimate is improved. Finally, have the students join hands and walk the entire length of the field picking up each toothpick they see. What percent accuracy was obtained using this strategy? Younger students who are not yet familiar with the idea of a percent can do the entire exercise using fractions. (A simulation sketch of this sampling-and-accuracy comparison appears after step 8.)
7. Have the student groups share their results with the other teams. The students should discuss the relative merits of each technique. How did the techniques compare in difficulty, time required, and accuracy?
8. Each student should write a report recommending a census technique to the biologist. The report should describe the census technique, contain a labeled to-scale drawing of their sampling, and explain why the student recommends that particular strategy. Remind the students that an excellent solution is one that provides high accuracy, is easy to do, and requires the least amount of time and effort.
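For teachers who want a quick preview of how the sampling in steps 5 and 6 behaves, the following is a rough simulation sketch. It is my own illustration, not part of the original activity; the plot size, bird count, and sampling scheme are arbitrary assumptions.

```python
import random

FIELD_W, FIELD_H = 100, 50                      # feet, matching the 100' x 50' field
TRUE_COUNT = 200                                # assumed number of "dead birds"
birds = [(random.uniform(0, FIELD_W), random.uniform(0, FIELD_H))
         for _ in range(TRUE_COUNT)]

def quadrat_estimate(n_plots, plot_size=10):
    """Count birds in n random square plots and scale up by the field's area."""
    in_plots = 0
    for _ in range(n_plots):
        x0 = random.uniform(0, FIELD_W - plot_size)
        y0 = random.uniform(0, FIELD_H - plot_size)
        in_plots += sum(x0 <= x < x0 + plot_size and y0 <= y < y0 + plot_size
                        for x, y in birds)
    sampled_area = n_plots * plot_size ** 2
    return in_plots * (FIELD_W * FIELD_H) / sampled_area

# As in step 6: compare estimates with the true count, then see how the error
# behaves when more sample plots are used.
for n_plots in (1, 2, 5, 10, 20):
    estimate = quadrat_estimate(n_plots)
    pct_error = 100 * abs(estimate - TRUE_COUNT) / TRUE_COUNT
    print(f"{n_plots:2d} plots: estimate {estimate:5.0f} birds, error {pct_error:4.0f}%")
```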
South Dakota Experience
Ask each student to guess how many geese stop at Sand Lake National Wildlife Refuge during the spring migration. Geese migrate through South Dakota early each spring on their way to their Canadian breeding grounds and again in the fall on their return trip south. They stop over at Sand Lake to rest and eat during March and April, and again in October and November. Biologists count the number of geese to determine how many individuals use the Sand Lake resource during migration. Spring migration populations of geese at Sand Lake NWR average 600,000 birds with peaks reaching as high as 1.3 million birds in some years.
After doing the Prairie Population activity, students should be taken to a wildlife refuge where they can visit with refuge personnel to learn about the value of the refuge to wildlife populations, find out about causes of bird deaths at the refuge, and discuss population census activities conducted at the refuge. The addresses and phone numbers of the refuges in South Dakota are listed in the Natural Source Chapter 1: South Dakota Directory.
Now that the students have an understanding of why population censusing is important, how counts can be made, and have learned about a population that is counted yearly in South Dakota, they should be prepared to use the knowledge they have acquired. Ask the students to conduct a census of any population (such as number of dandelions in the school yard or the number of left handed students in the school) that is of interest to them, and write a brief description of the census strategy they used and a summary of their findings.
Products produced by the students can be used for evaluation.
1. Have students select a sampling technique and estimate the total from one sampling of the population of bird models. Repeat the calculation based on the average of two samples, then three samples and so on. Have students graph the accuracy achieved using each number of trials. At what point does an increase in the number of trials no longer significantly improve the accuracy of the estimation? Students can determine the optimum number of trials that should be used in order to achieve the most accurate census.
2. Have students test their ability to estimate populations by using Wildlife Counts, a computer wildlife counting simulation that is used to train wildlife biologists and help them practice their skills.
The idea for this activity developed from my having heard a research presentation by Dr. Philibert and her colleagues from the University of Saskatchewan. I am grateful to Dr. Philibert for granting me permission to use the study as a model for the activity.
Philibert, Helene, G. Wobeser and R. Clark, 1990. Estimation of Mortality in Wild Birds: Examination of Methods, U. of Saskatchewan, Saskatoon, Saskatchewan, Canada, S7N OWO.
Welty, J.C. and Luis Baptista, 1988. The Life of Birds, 4th Ed. Saunders College Publishing, N.Y.
Wildlife Counts Computer Simulation, IBM or Apple II, 2215 Meadow Lane, Juneau, AK 99801. Phone: (907) 789-0326
Dr. Erika Tallman, Education Department, Northern State University, Aberdeen, SD. 1992.
Ted Benzon, Art Carter, Maggie Hachmeister and John Wrede all of South Dakota Dept. Game, Fish and Parks.
John Koerner, Manager of Sand Lake National Wildlife Refuge, Columbia, SD. 57433.
Special thanks are owed to Mrs. Karen Taylor's 5th grade class and Mrs. Jean Rahja's 6th grade class in Aberdeen, South Dakota for field testing the activity.
Publication of the Prairie Population activity was funded by the Prairie Pothole Joint Venture of the North American Waterfowl Management Plan. | http://www3.northern.edu/natsource/DAKOTA1/Prairi1.htm | 13 |
38 | In science and history, consilience (also convergence of evidence or concordance of evidence) refers to the principle that evidence from independent, unrelated sources can "converge" to strong conclusions. That is, when multiple sources of evidence are in agreement, the conclusion can be very strong even when none of the individual sources of evidence are very strong on their own. Most established scientific knowledge is supported by a convergence of evidence: if not, the evidence is comparatively weak, and there will not likely be a strong scientific consensus.
The principle is based on the unity of knowledge; measuring the same result by several different methods should lead to the same answer. For example, it should not matter whether one measures the distance between the Great Pyramids of Giza by laser rangefinding, by satellite imaging, or with a meter stick - in all three cases, the answer should be approximately the same. For the same reason, different dating methods in geochronology should concur, a result in chemistry should not contradict a result in geology, etc.
Consilience requires the use of independent methods of measurement, meaning that the methods have few shared characteristics. That is, the mechanism by which the measurement is made is different; each method is dependent on an unrelated natural phenomenon. For example, the accuracy of laser rangefinding measurements is based on the scientific understanding of lasers, while satellite pictures and meter sticks rely on different phenomena. Because the methods are independent, when one of several methods is in error, it is very unlikely to be in error in the same way as any of the other methods, and a difference between the measurements will be observed. If the scientific understanding of the properties of lasers were inaccurate, then the laser measurement would be inaccurate but the others would not.
As a result, when several different methods agree, this is strong evidence that none of the methods are in error and the conclusion is correct. This is because of a greatly reduced likelihood of errors: for a consensus estimate from multiple measurements to be wrong, the errors would have to be similar for all samples and all methods of measurement, which is extremely unlikely. Random errors will tend to cancel out as more measurements are made, due to regression to the mean; systematic errors will be detected by differences between the measurements (and will also tend to cancel out since the direction of the error will still be random). This is how scientific theories reach high confidence – over time, they build up a large degree of evidence which converges on the same conclusion.
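The arithmetic behind this compounding of independent evidence can be illustrated with a short sketch. It is my own illustration, echoing the reliability calculation in the article's endnotes; the error rates used are illustrative assumptions.

```python
def combined_reliability(error_rates):
    """Chance the shared conclusion is right, assuming the only way it can be
    wrong is for every independent method to err, and the errors are independent."""
    p_all_wrong = 1.0
    for p_err in error_rates:
        p_all_wrong *= p_err
    return 1.0 - p_all_wrong

print(combined_reliability([0.10] * 3))          # three 90%-reliable methods -> 0.999
print(combined_reliability([0.10] * 5))          # five methods -> 0.99999
print(combined_reliability([0.30, 0.25, 0.20]))  # even weaker methods converge strongly
```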
When results from different strong methods do appear to conflict, this is treated as a serious problem to be reconciled. For example, in the 19th century, the Sun appeared to be no more than 20 million years old, but the Earth appeared to be no less than 300 million years (resolved by the discovery of nuclear fusion and radioactivity, and the theory of quantum mechanics); or current attempts to resolve theoretical differences between quantum mechanics and general relativity.
Because of consilience, the strength of evidence for any particular conclusion is related to how many independent methods are supporting the conclusion, as well as how different these methods are. Those techniques with the fewest (or no) shared characteristics provide the strongest consilience and result in the strongest conclusions. This also means that confidence is usually strongest when considering evidence from different fields, because the techniques are usually very different.
For example, the theory of evolution is supported by a convergence of evidence from genetics, molecular biology, paleontology, geology, biogeography, comparative anatomy, comparative physiology, and many other fields. In fact, the evidence within each of these fields is itself a convergence providing evidence for the theory. (As a result, to disprove evolution, most or all of these independent lines of evidence would have to be found to be in error.) The strength of the evidence, considered together as a whole, results in the strong scientific consensus that the theory is correct. In a similar way, evidence about the history of the universe is drawn from astronomy, astrophysics, planetary geology, and physics.
Finding similar conclusions from multiple independent methods is also evidence for the reliability of the methods themselves, because consilience eliminates the possibility of all potential errors that do not affect all the methods equally. This is also used for the validation of new techniques through comparison with the consilient ones. If only partial consilience is observed, this allows for the detection of errors in methodology; any weaknesses in one technique can be compensated for by the strengths of the others. Alternatively, if using more than one or two techniques for every experiment is infeasible, some of the benefits of consilience may still be obtained if it is well-established that these techniques usually give the same result.
Consilience is important across all of science, including the social sciences, and is often used as an argument for scientific realism by philosophers of science. Each branch of science studies a subset of reality that depends on factors studied in other branches. Atomic physics underlies the workings of chemistry, which studies emergent properties that in turn are the basis of biology. Psychology is not separate from the study of properties emergent from the interaction of neurons and synapses. Sociology, economics, and anthropology are each, in turn, studies of properties emergent from the interaction of countless individual humans. The concept that all the different areas of research are studying one real, existing universe is an apparent explanation of why scientific knowledge determined in one field of inquiry has often helped in understanding other fields.
Deviations from consilience
Consilience does not forbid deviations: in fact, since not all experiments are perfect, some deviations from established knowledge are expected. However, when the convergence is strong enough, then new evidence inconsistent with the previous conclusion is not usually enough to outweigh that convergence. Without an equally strong convergence on the new result, the weight of evidence will still favor the established result. This means that the new evidence is most likely to be wrong.
Science denialism (for example, AIDS denialism) is often based on a misunderstanding of this property of consilience. A denier may promote small gaps not yet accounted for by the consilient evidence, or small amounts of evidence contradicting a conclusion without accounting for the pre-existing strength resulting from consilience. More generally, to insist that all evidence converge precisely with no deviations would be naïve falsificationism, equivalent to considering a single contrary result to falsify a theory when another explanation, such as equipment malfunction or misinterpretation of results, is much more likely.
In history
Historical evidence also converges in an analogous way. For example: if five ancient historians, none of whom knew each other, all claim that Julius Caesar seized power in Rome in 49 BCE, this is strong evidence in favor of that event occurring even if each individual historian is only partially reliable. By contrast, if the same historian had made the same claim five times in five different places (and no other types of evidence were available), the claim is much weaker because it originates from a single source. The evidence from the ancient historians could also converge with evidence from other fields, such as archeology: for example, evidence that many senators fled Rome at the time, that the battles of Caesar’s civil war occurred, and so forth.
Consilience has also been discussed in reference to Holocaust denial.
"We [have now discussed] eighteen proofs all converging on one conclusion...the deniers shift the burden of proof to historians by demanding that each piece of evidence, independently and without corroboration between them, prove the Holocaust. Yet no historian has ever claimed that one piece of evidence proves the Holocaust. We must examine the collective whole."
That is, individually the evidence may underdetermine the conclusion, but together they overdetermine it. A similar way to state this is that to ask for one particular piece of evidence in favor of a conclusion is a flawed question.
Outside the sciences
In addition to the sciences, consilience can be important to the arts, ethics, and religion. Both artists and scientists have identified the importance of biology in the process of artistic innovation.
History of the concept
Consilience has its roots in the ancient Greek concept of an intrinsic orderliness that governs our cosmos, inherently comprehensible by logical process, a vision at odds with mystical views in many cultures that surrounded the Hellenes. The rational view was recovered during the high Middle Ages, separated from theology during the Renaissance and found its apogee in the Age of Enlightenment.
Whewell’s definition was that:
The Consilience of Inductions takes place when an Induction, obtained from one class of facts, coincides with an Induction obtained from another different class. Thus Consilience is a test of the truth of the Theory in which it occurs.
More recent descriptions include:
"Where there is convergence of evidence, where the same explanation is implied, there is increased confidence in the explanation. Where there is divergence, then either the explanation is at fault or one or more of the sources of information is in error or requires reinterpretation."
"Proof is derived through a convergence of evidence from numerous lines of inquiry--multiple, independent inductions, all of which point to an unmistakable conclusion."
Edward O. Wilson
Although the concept of consilience in Whewell's sense was widely discussed by philosophers of science, the term was unfamiliar to the broader public until the end of the 20th century, when it was revived in Consilience: The Unity of Knowledge, a 1998 book by the humanist biologist Edward Osborne Wilson, as an attempt to bridge the culture gap between the sciences and the humanities that was the subject of C. P. Snow's The Two Cultures and the Scientific Revolution (1959).
Wilson held that with the rise of the modern sciences, the sense of unity gradually was lost in the increasing fragmentation and specialization of knowledge in the last two centuries. He asserted that the sciences, humanities, and arts have a common goal: to give a purpose to understanding the details, to lend to all inquirers "a conviction, far deeper than a mere working proposition, that the world is orderly and can be explained by a small number of natural laws." Wilson's concept is a much broader notion of consilience than that of Whewell, who was merely pointing out that generalizations invented to account for one set of phenomena often account for others as well.
A parallel view lies in the term universology, which literally means "the science of the universe." Universology was first advocated for the study of the interconnecting principles and truths of all domains of knowledge by Stephen Pearl Andrews, a 19th century utopian futurist and anarchist.
See also
- Scientific method
- Tree of Knowledge System
- Unified Science
- Coherentism in the philosophy of science
- Wilson, Edward O (1998). Consilience: the unity of knowledge. New York: Knopf. ISBN 978-0-679-45077-1. OCLC 36528112.
- Shermer, Michael (2000). Denying History: Who says the Holocaust never happened and why do they say it?. University of California Press.
- Note that this is not the same as performing the same measurement several times. While repetition does provide evidence because it shows that the measurement is being performed consistently, it would not be consilience and would be more vulnerable to error.
- Statistically, if three different tests are each 90% reliable when they give a positive result, a positive result from all three tests would be 99.9% reliable; five such tests would be 99.999% reliable, and so forth. This requires the tests to be statistically independent, analogous to the requirement for independence in the methods of measurement.
- John N. Bahcall, nobelprize.org
- Weinberg, S (1993). Dreams of a Final Theory: The Scientist's Search for the Ultimate Laws of Nature. Vintage Books, New York.
- Scientific American, March 2005. "The Fossil Fallacy."
- For example, in linguistics: see Converging Evidence: Methodological and theoretical issues for linguistic research, edited by Doris Schonefeld.
- For example, see Imre Lakatos., in Criticism and the Growth of Knowledge (1970).
- More generally, anything which results in a false positive or false negative.
- Shermer, Michael (2002). In Darwin’s Shadow: The Life and Science of Alfred Russel Wallace. Oxford University Press. p. 319.
- Whewell, William (1840). The Philosophy of the Inductive Sciences, Founded Upon Their History. 2 vols. London: John W. Parker.
- A Companion to the Philosophy of History and Historiography, section 28. Aviezer Tucker (editor).
| http://en.wikipedia.org/wiki/Consilience | 13 |
12 | WHAT ARE DATA PROTOCOLS?
This section introduces the reference model of the International Organization for Standardization (ISO), OSI, and its seven layers. Emphasis will be placed on the first three layers because they are more directly involved in communication.
Protocols should not be confused with formats. Formats typically show a standard organization of bits and octets and describe the function of each to achieve a certain objective. DS1 is a format, as are SDH and SONET.
In this section we will familiarize the reader with basic protocol functions. This is followed by a discussion of the Open System Interconnection (OSI), which has facilitated a large family of protocols. A brief discussion of HDLC (high-level data-link control) is provided. This particular protocol was selected because it spawned so many other link layer protocols. Some specific higher layer protocols are described in Chapter 11.
Basic Protocol Functions
There are a number of basic protocol functions. Typical among these are:
Segmentation and reassembly (SAR)
Encapsulation
Connection control
Ordered delivery
Flow control
A short description of each follows.
Segmentation and reassembly. Segmentation refers to breaking up the data message or file into blocks, packets, or frames with some bounded size. Which term we use depends on the semantics of the system. There is a new data segment called a cell, used in asynchronous transfer mode (ATM) and other digital systems. Reassembly is the reverse of segmentation, because it involves putting the blocks, frames, or packets back into their original order. The device that carries out segmentation and reassembly in a packet network is called a PAD (packet assembler/disassembler).
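As a concrete, purely illustrative sketch of segmentation, the C fragment below breaks a message into fixed-size, numbered segments. The structure names and the 128-octet payload bound are invented for the example and are not taken from any particular protocol.

#include <stdlib.h>
#include <string.h>

#define SEG_PAYLOAD 128          /* arbitrary bound chosen for illustration */

struct segment {
    unsigned seq;                /* sequence number, used later for reassembly */
    size_t   len;                /* number of payload bytes actually used      */
    unsigned char data[SEG_PAYLOAD];
};

/* Break a message of 'len' bytes into numbered segments; returns the count. */
size_t segment_message(const unsigned char *msg, size_t len, struct segment **out)
{
    size_t count = (len + SEG_PAYLOAD - 1) / SEG_PAYLOAD;
    struct segment *segs = malloc(count * sizeof *segs);
    if (segs == NULL)
        return 0;                /* allocation failure: nothing segmented */
    for (size_t i = 0; i < count; i++) {
        segs[i].seq = (unsigned)i;
        segs[i].len = (i + 1 < count) ? SEG_PAYLOAD : len - i * SEG_PAYLOAD;
        memcpy(segs[i].data, msg + i * SEG_PAYLOAD, segs[i].len);
    }
    *out = segs;
    return count;
}

The sequence numbers recorded here are what make reassembly and the ordered delivery discussed below possible.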
Encapsulation. Encapsulation is the adding of header and control information in front of the text or info field and parity information, which is generally carried behind the text or info fields.
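A minimal sketch of encapsulation in C follows, again with invented field names and sizes: a header is copied in front of the info field, and a one-byte check value is carried behind it as a stand-in for the parity or frame-check sequence a real link protocol would use.

#include <stdint.h>
#include <string.h>

/* Illustrative frame layout: header in front, info field, check byte behind.
   Field sizes are invented for the example, not taken from any standard.   */
struct frame_header {
    uint8_t  address;     /* destination address        */
    uint8_t  control;     /* frame type / sequence bits */
    uint16_t length;      /* number of info bytes       */
};

/* Encapsulate: copy header, then payload, then a simple one-byte checksum.
   The caller provides an output buffer large enough for the whole frame.  */
size_t encapsulate(uint8_t *out, const struct frame_header *hdr,
                   const uint8_t *info, uint16_t info_len)
{
    size_t pos = 0;
    memcpy(out + pos, hdr, sizeof *hdr);   pos += sizeof *hdr;
    memcpy(out + pos, info, info_len);     pos += info_len;

    uint8_t check = 0;                     /* stand-in for a real FCS/parity field */
    for (size_t i = 0; i < pos; i++)
        check ^= out[i];
    out[pos++] = check;
    return pos;                            /* total frame length */
}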
Connection control. There are three stages of connection control:
1. Connection establishment
2. Data transfer
3. Connection termination
Some of the more sophisticated protocols also provide connection interrupt and recovery capabilities to cope with errors and other sorts of interruptions.
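The three stages above can be summarized as a small state machine. The C sketch below is only a schematic illustration; the state and event names are invented, and real protocols add the interrupt and recovery handling just mentioned.

/* Minimal connection-control state machine, for illustration only. */
enum conn_state { IDLE, ESTABLISHING, TRANSFERRING, TERMINATING };

enum conn_event { CONNECT_REQ, CONNECT_ACK, DATA, DISCONNECT_REQ, DISCONNECT_ACK };

enum conn_state conn_step(enum conn_state s, enum conn_event e)
{
    switch (s) {
    case IDLE:         return (e == CONNECT_REQ)    ? ESTABLISHING : IDLE;
    case ESTABLISHING: return (e == CONNECT_ACK)    ? TRANSFERRING : ESTABLISHING;
    case TRANSFERRING: return (e == DISCONNECT_REQ) ? TERMINATING  : TRANSFERRING;
    case TERMINATING:  return (e == DISCONNECT_ACK) ? IDLE         : TERMINATING;
    }
    return IDLE;   /* unreachable with the states above */
}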
Ordered delivery. Packets, frames, or blocks are often assigned sequence numbers to ensure ordered delivery of the data at the destination. In a large network with many nodes and possible routes to a destination, especially when operated in a packet mode, the packets can arrive at the destination out of order. With a unique segment (packet) numbering plan using a simple numbering sequence, it is a rather simple task for a long data file to be reassembled at the destination in its original order.
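Reusing the segment structure from the segmentation sketch above, ordered delivery then amounts to sorting the received segments on their sequence numbers before reassembly. This is a simplified illustration; a real receiver typically reorders incrementally as segments arrive.

#include <stdlib.h>

/* Reassembly sketch: once every numbered segment has arrived (possibly out
   of order), sorting on the sequence number restores the original order.  */
static int by_seq(const void *a, const void *b)
{
    const struct segment *x = a, *y = b;
    return (x->seq > y->seq) - (x->seq < y->seq);
}

void reorder_segments(struct segment *segs, size_t count)
{
    qsort(segs, count, sizeof *segs, by_seq);
}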
Flow control. Flow control refers to the management of the data flow from source to destination such that buffer memories do not overflow, but maintain full capacity of all | http://search-pdf-files.com/pdf/4726868-control-data-connection-protocols-destination | 13 |
10 | Quasi-stellar radio sources (quasars) are notoriously difficult to study, due to the fact that they are extremely bright, more so than the galaxies which they inhabit. Now, astronomers propose a new method for studying these objects, which can also help them calculate the host galaxy's mass.
Quasars are very active, supermassive black holes that release vast amounts of radiation from their poles. This radiation is produced by a wide array of phenomena happening around the event horizon. Such objects are usually located at least several billion light-years from Earth.
Due to their extreme brightness, they easily overwhelm the glow of stars around them, making it very hard for experts to measure the mass of their host galaxies. But doing so may be possible if the correct galactic alignments are found.
Scientists determined recently that correctly-aligned galaxies give rise to an optical phenomenon called gravitational lensing. Studies can only be conducted on galaxies that are placed in a straight line as seen from Earth, with the nearest one standing directly in front of the background one.
What this does is enable the first galaxy in the “string” to act like a massive cosmic magnifying glass. The effect is made possible by the fact that massive gravitational pulls distort the path of photons.
Light is therefore literally bent around the foreground galaxy. The reason why gravitational lensing is an appropriate method to use for studying the mass of galaxies hosting quasars is that the phenomenon enables astronomers to measure light distortions produced by the background galaxy.
The new investigation was conducted by an international team of astronomers, which also included NASA Jet Propulsion Laboratory (JPL) expert Daniel Stern. The group says that only alignments where a quasar is located in the foreground galaxy can be used for this specific type of study.
“The amount of the background galaxy's distortion can be used to accurately measure the lensing galaxy's mass,” experts at the JPL explain in a press release. Thus far, experts only managed to find a handful of appropriate galactic alignments.
They are optimistic that additional surveys, conducted with the NASA/ESA Hubble Space Telescope and other space- and ground-based assets, will reveal more such scenarios.
In time, astronomers want to build a catalog of such aligned galaxies, in hopes that this will provide additional insight into galactic evolution, black hole feeding and growth, and stellar formation. | http://news.softpedia.com/news/Gravitational-Lensing-Enables-Quasar-Measurements-258947.shtml | 13 |
14 | Steven S. Skiena
One way to convert from names to integers is to use the letters to form a base ``alphabet-size'' number system:
To convert ``STEVE'' to a number, observe that e is the 5th letter of the alphabet, s is the 19th letter, t is the 20th letter, and v is the 22nd letter. Thus ``STEVE'' corresponds to 19*26^4 + 20*26^3 + 5*26^2 + 22*26 + 5.
Thus one way we could represent a table of names would be to set aside an array big enough to contain one element for each possible string of letters, then store data in the elements corresponding to real people. By computing this function, it tells us where the person's phone number is immediately!!
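A hypothetical C version of this mapping (assuming lowercase names and treating 'a' as 1 through 'z' as 26) makes the difficulty obvious: even a short name yields an enormous index.

#include <ctype.h>

/* Map a name to an integer in the base-26 scheme described above.
   "steve" -> 19*26^4 + 20*26^3 + 5*26^2 + 22*26 + 5 = 9,038,021,
   and longer names quickly overflow even a 64-bit integer -- which is
   exactly the problem with indexing an array this way.               */
unsigned long long name_to_int(const char *name)
{
    unsigned long long v = 0;
    for (; *name; name++)
        v = v * 26 + (unsigned long long)(tolower((unsigned char)*name) - 'a' + 1);
    return v;
}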
What's the Problem?
Because we must leave room for every possible string, this method will use an incredible amount of memory. We need a data structure to represent a sparse table, one where almost all entries will be empty.
We can reduce the number of boxes we need if we are willing to put more than one thing in the same box!
Example: suppose we use the base alphabet number system, then take the remainder modulo the size of a much smaller table.
Now the table is much smaller, but we need a way to deal with the fact that more than one (but hopefully very few) keys can get mapped to the same array element.
The Basics of Hashing
The basic idea of hashing is to apply a function to the search key so we can determine where the item is without looking at the other items. To make the table a reasonable size, we must allow for collisions, two distinct keys mapped to the same location.
There are several clever techniques we will see to develop good hash functions and deal with the problems of duplicates.
The verb ``hash'' means ``to mix up'', and so we seek a function to mix up keys as well as possible.
The best possible hash function would hash m keys into n ``buckets'' with no more than ⌈m/n⌉ keys per bucket. Such a function is called a perfect hash function.
How can we build a hash function?
Let us consider hashing character strings to integers. The ORD function returns the character code associated with a given character. By using the ``base character size'' number system, we can map each string to an integer.
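A sketch of such a hash function in C follows; the name hash_string and the choice to reduce modulo the table size at every step are mine, not from the notes. Each character code acts as a digit in a base-256 number, and the running value is kept small by taking the remainder as we go.

#include <limits.h>
#include <stddef.h>

/* Hash a string into the range 0 .. m-1 by treating its characters as
   digits of a base-(UCHAR_MAX+1) number; reducing mod m at each step
   keeps the intermediate value from overflowing for reasonable m.     */
size_t hash_string(const char *s, size_t m)
{
    size_t h = 0;
    for (; *s; s++)
        h = (h * (UCHAR_MAX + 1UL) + (unsigned char)*s) % m;
    return h;
}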
[Figure: distribution of hash values using the first three digits of the Social Security Number]
[Figure: distribution of hash values using the last three digits of the Social Security Number]
What is the big picture?
Ideas for Hash Functions
Prime Numbers are Good Things
Suppose we wanted to hash check totals by the dollar value in pennies mod 1000. What happens?
Prices tend to be clumped by similar last digits (many totals end in .00, .95, or .99), so we get clustering in a few table positions.
If we instead use a prime-numbered modulus like 1007, these clusters will get broken up.
In general, it is a good idea to use prime modulus for hash table size, since it is less likely the data will be multiples of large primes as opposed to small primes - all multiples of 4 get mapped to even numbers in an even sized hash table!
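A quick demonstration of the point, using whole-dollar check totals (in pennies) as the clumped data; the program and its table sizes are just an illustration.

#include <stdio.h>

/* Whole-dollar totals all collide at slot 0 with modulus 1000,
   but spread out when the modulus is the nearby prime 997.     */
int main(void)
{
    for (int dollars = 10; dollars <= 50; dollars += 10) {
        int pennies = dollars * 100;
        printf("$%d.00 -> slot %d (mod 1000), slot %d (mod 997)\n",
               dollars, pennies % 1000, pennies % 997);
    }
    return 0;
}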
The Birthday Paradox
No matter how good our hash function is, we had better be prepared for collisions, because of the birthday paradox.
Assuming 365 days a year, what is the probability that two people share a birthday? Once the first person has fixed their birthday, the second person has 364 possible days to be born to avoid a collision, or a 364/365 chance.
With three people, the probability that no two share a birthday is (364/365) × (363/365). In general, the probability of there being no collisions after n insertions into an m-element table is the product ((m-1)/m) × ((m-2)/m) × ... × ((m-n+1)/m).
When m = 366, this probability sinks below 1/2 when N = 23 and is almost 0 by the time N reaches 100.
The moral is that collisions are common, even with good hash functions.
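The probability is easy to evaluate numerically; the short program below is a sketch using the product formula given above, and shows how quickly it collapses.

#include <stdio.h>

/* Probability of no collision after n insertions into an m-element table. */
double p_no_collision(int n, int m)
{
    double p = 1.0;
    for (int k = 1; k < n; k++)
        p *= (double)(m - k) / m;
    return p;
}

int main(void)
{
    printf("m = 366, n = 23: %.3f\n", p_no_collision(23, 366)); /* just under 1/2    */
    printf("m = 366, n = 60: %.4f\n", p_no_collision(60, 366)); /* under one percent */
    return 0;
}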
What about Collisions?
No matter how good our hash functions are, we must deal with collisions. What do we do when the spot in the table we need is occupied?
Collision Resolution by Chaining
The easiest approach is to let each element in the hash table be a pointer to a list of keys.
Insertion, deletion, and query reduce to the problem in linked lists. If the n keys are distributed uniformly in a table of size m, each operation takes expected O(n/m) time.
Chaining is easy, but devotes a considerable amount of memory to pointers, which could be used to make the table larger. Still, it is my preferred method.
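A bare-bones chaining table in C might look like the sketch below. It reuses the hash_string function sketched earlier; the names are invented, and error checking is omitted for brevity.

#include <stdlib.h>
#include <string.h>

size_t hash_string(const char *s, size_t m);   /* from the earlier sketch */

struct node { char *key; struct node *next; };

struct chain_table {
    size_t m;               /* number of slots               */
    struct node **slots;    /* each slot heads a linked list */
};

struct chain_table *chain_create(size_t m)
{
    struct chain_table *t = malloc(sizeof *t);
    t->m = m;
    t->slots = calloc(m, sizeof *t->slots);
    return t;
}

void chain_insert(struct chain_table *t, const char *key)
{
    size_t i = hash_string(key, t->m);
    struct node *n = malloc(sizeof *n);
    n->key = malloc(strlen(key) + 1);
    strcpy(n->key, key);
    n->next = t->slots[i];                     /* prepend to the chain */
    t->slots[i] = n;
}

int chain_search(const struct chain_table *t, const char *key)
{
    for (struct node *n = t->slots[hash_string(key, t->m)]; n; n = n->next)
        if (strcmp(n->key, key) == 0)
            return 1;
    return 0;
}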
We can dispense with all these pointers by using an implicit reference derived from a simple function of the key, such as h(key) = key mod m.
If the space we want to use is filled, we can examine the remaining locations in a fixed probe sequence: (h(key)+1) mod m, (h(key)+2) mod m, and so on.
The reason for using a more complicated scheme is to avoid long runs from similarly hashed keys.
Deletion in an open addressing scheme is ugly, since removing one element can break a chain of insertions, making some elements inaccessible.
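One simple open-addressing scheme is linear probing, sketched below, again reusing hash_string; the fixed global table and the decision to store string pointers rather than copies are simplifications for the example. The point about "more complicated schemes" is that fancier probe sequences, such as double hashing, avoid the long runs that linear probing can build up.

#include <stddef.h>
#include <string.h>

size_t hash_string(const char *s, size_t m);   /* from the earlier sketch */

#define TABLE_SIZE 997

static const char *table[TABLE_SIZE];          /* NULL marks an empty slot */

/* Insert with linear probing: start at the home slot and walk forward,
   wrapping around, until an empty slot (or the key itself) is found.   */
int probe_insert(const char *key)
{
    size_t home = hash_string(key, TABLE_SIZE);
    for (size_t tries = 0; tries < TABLE_SIZE; tries++) {
        size_t slot = (home + tries) % TABLE_SIZE;
        if (table[slot] == NULL) { table[slot] = key; return 1; }
        if (strcmp(table[slot], key) == 0) return 1;   /* already present */
    }
    return 0;                                          /* table is full   */
}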
Performance on Set Operations
With either chaining or open addressing, search, insertion, and deletion take constant expected time, but O(n) time in the worst case.
Pragmatically, a hash table is often the best data structure to maintain a dictionary. However, the worst-case running time is unpredictable.
The best worst-case bounds on a dictionary come from balanced binary trees, such as red-black trees. | http://www.cs.sunysb.edu/~skiena/214/lectures/lect21/lect21.html | 13 |
292 | When scientists first began using rockets for research, their eyes were focused upward, on the mysteries that lay beyond our atmosphere and our planet. But it wasn't long before they realized that this new technology could also give them a unique vantage point from which to look back at Earth.
Scientists working with V-2 and early sounding rockets for the Naval Research Laboratory (NRL) made the first steps in this direction almost ten years before Goddard was formed. The scientists put aircraft gun cameras on several rockets in an attempt to determine which way the rockets were pointing. When the film from one of these rockets was developed, it had recorded images of a huge tropical storm over Brownsville, Texas. Because the rocket was spinning, the image wasn't a neat, complete picture, but Otto Berg, the scientist who had modified the camera to take the photo, took the separate images home and pasted them together on a flat board. He then took the collage to Life magazine, which published what was arguably one of the earliest weather photos ever taken from space.1
Space also offered unique possibilities for communication that were recognized by industry and the military several years before NASA was organized. Project RAND2 had published several reports in the early 1950s outlining the potential benefits of satellite-based communication relays, and both AT&T and Hughes had conducted internal company studies on the commercial viability of communication satellites by 1959.3
These rudimentary seeds, already sown by the time Goddard opened its doors, grew into an amazing variety of communication, weather, and other remote-sensing satellite projects at the Center that have revolutionized many aspects of our lives. They have also taught us significant and surprising things about the planet we inhabit. Our awareness of large-scale crop and forest conditions, ozone depletion, greenhouse warming, and El Nino weather patterns has increased dramatically because of our ability to look back on Earth from space. Satellites have allowed us to measure the shape of the Earth more accurately, track the movement of tectonic plates, and analyze portions of the atmosphere and areas of the world that are hard to reach from the ground.
In addition, the "big picture" perspective satellites offer has allowed scientists to begin investigating the dynamics between different individual processes and the development and behavior of global patterns and systems. Ironically, it seems we have had to develop the ability to leave our planet before we could begin to fully understand it.
From the very earliest days of the space program, scientists realized that satellites could offer an important side-benefit to researchers interested in mapping the gravity field and shape of the Earth, and Goddard played an important role in this effort. The field of geodesy, or the study of the gravitational field of the Earth and its relationship to the solid structure of the planet, dates back to the third century B.C., when the Greek astronomer Eratosthenes combined astronomical observation with land measurement to try to prove that the Earth was, in fact, round. Later astronomers and scientists had used other methods of triangulation to try to estimate the exact size of the Earth. Astronomers also had used the Moon, or stars with established locations, to try to map the shape of the Earth and exact distances between points more precisely. But satellites offered a new twist to this methodology.
For one thing, the Earth's shape and gravity field affected the orbit of satellites. So at the beginning of the space age, Goddard's tracking and characterizing the orbit of the first satellites was in and of itself a scientific endeavor. From that orbital data, scientists could infer information about the Earth's gravity field, which is affected by the distribution of its mass. The Earth, as it turns out, is not perfectly round, and its mass is not perfectly distributed. There are places where land or ocean topography results in denser or less dense mass accumulation. The centrifugal force of the Earth's rotation combines with gravity and these mass concentrations to create bulges and depressions in the planet. In fact, although we think of the Earth as round, Goddard's research showed us that it is really slightly pear-shaped.
Successive Goddard satellites enabled scientists to gather much more precise information about the Earth's shape as well as exact positions of points on the planet. In fact, within 10 years, scientists had learned as much again about global positioning, the size and shape of the Earth, and its gravity field as their predecessors had learned in the previous 200 years.
Laser reflectors on Goddard satellites launched in 1965, 1968, and 1976, for example, allowed scientists to make much more precise measurements between points, which enabled them to determine the exact location or movement of objects. The laser reflectors developed for Goddard's LAGEOS satellite, launched in 1976, could determine movement or position within a few centimeters, which allowed scientists to track and analyze tectonic plate movement and continental drift. Among other things, the satellite data told scientists that the continents seem to be inherently rigid bodies, even if they contain divisive bodies of water, such as the Mississippi River, and that continental plate movement appears to occur at a constant rate over time. Plate movement information provided by satellites has also helped geologists track the dynamics that lead up to Earthquakes, which is an important step in predicting these potentially catastrophic events.
The satellite positioning technique used for this plate tectonic research was the precursor to the Global Positioning System (GPS) technology that now uses a constellation of satellites to provide precise three-dimensional navigation for aircraft and other vehicles. Yet although a viable commercial market is developing for GPS technology today, the greatest commercial application of space has remained the field of communication satellites.4
For all the talk about the commercial possibilities of space, the only area that has proven substantially profitable since 1959 is communication satellites, and Goddard played an important role in developing the early versions of these spacecraft. The industry managers who were conducting research studies and contemplating investment in this field in 1959 could not have predicted the staggering explosion of demand for communications that has accompanied the so-called "Information Age." But they saw how dramatically demand for telephone service had increased since World War II, and they saw potential in other communications technology markets, such as better or broader transmission for television and radio signals. As a result, several companies were even willing to invest their own money, if necessary, to develop communication satellites.
The Department of Defense (DoD) actually had been working on communication satellite technology for a number of years, and it wanted to keep control of what it considered a critical technology. So when NASA was organized, responsibility for communication satellite technology development was split between the new space agency and the DoD. The DoD would continue responsibility for "active" communication satellites, which added power to incoming signals and actively transmitted the signals back to ground stations. NASA's role was initially limited to "passive" communication satellites, which relied on simply reflecting signals off the satellite to send them back to Earth.5
NASA's first communication satellite, consequently, was a passive spacecraft called "Echo." It was based on a balloon design by an engineer at NASA's Langley Research Center and developed by Langley, Goddard, JPL and AT&T. Echo was, in essence, a giant mylar balloon, 100 feet in diameter, that could "bounce" a radio signal back down to another ground station a long distance away from the first one.
Echo I, the world's first communication satellite, was successfully put into orbit on 12 August 1960. Soon after launch, it reflected a pre-taped message from President Dwight Eisenhower across the country and other radio messages to Europe, demonstrating the potential of global radio communications via satellite. It also generated a lot of public interest, because the sphere was so large that it could be seen from the ground with the naked eye as it passed by overhead.
Echo I had some problems, however. The sphere seemed to buckle somewhat, hampering its signal-reflecting ability. So in 1964, a larger and stronger passive satellite, Echo II, was put into orbit. Echo II was made of a material 20 times more resistant to buckling than Echo I and was almost 40 feet wider in diameter.
Echo II also experienced some difficulties with buckling. But the main reason the Echo satellites were not pursued any further was not that the concept wouldn't work. It was simply that it was eclipsed by much better technology - active communication satellites.6
Syncom, Telstar, and Relay
By 1960, Hughes, RCA, and AT&T were all advocating the development of active communication satellites. They differed in the kind of satellite they recommended, however. Hughes felt strongly that the best system would be based on geosynchronous satellites. Geosynchronous satellites are in very high orbits - 22,300 miles above the ground. This high orbit allows their orbital speed to match the rotation speed of the Earth, which means they can remain essentially stable over one spot, providing a broad range of coverage 24 hours a day. Three of these satellites, for example, can provide coverage of the entire world, with the exception of the poles.
The disadvantage of using geosynchronous satellites for communications is that sending a signal up 22,300 miles and back causes a time-delay of approximately a quarter second in the signal. Arguing that this delay would be too annoying for telephone subscribers, both RCA and AT&T supported a bigger constellation of satellites in medium Earth orbit, only a few hundred miles above the Earth.7
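As a rough back-of-the-envelope check of that quarter-second figure (the numbers here are simple approximations, not anything from the program records): the signal must cover at least twice the 22,300-mile altitude at the speed of light, so

\[ t \approx \frac{2 \times 22{,}300\ \mathrm{mi} \times 1.609\ \mathrm{km/mi}}{299{,}792\ \mathrm{km/s}} \approx 0.24\ \mathrm{s} \]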
The Department of Defense had been working on its own geosynchronous communication satellite, but the project was running into significant development problems and delays. NASA had been given permission by 1960 to pursue active communication satellite technology as well as passive systems, so the DoD approached NASA about giving Hughes a sole-source contract to develop an experimental geosynchronous satellite. The result was Syncom, a geosynchronous satellite design built by Hughes under contract to Goddard.
Hughes already had begun investing its own money and effort in the technology, so Syncom I was ready for Goddard to launch in February 1963 - only 17 months after the contract was awarded. Syncom I stopped sending signals a few seconds before it was inserted into its final orbit, but Syncom II was launched successfully five months later, demonstrating the viability of the system. The third Syncom satellite, launched in August 1964, transmitted live television coverage of the Olympic Games in Tokyo, Japan to stations in North America and Europe.
Although the military favored the geosynchronous concept, it was not the only technology being developed. In 1961, Goddard began working with RCA on the "Relay" satellite, which was launched 13 December 1962. Relay was designed to demonstrate the feasibility of medium-orbit, wide-band communications satellite technology and to help develop the ground station operations necessary for such a system. It was a very successful project, transmitting even color television signals across wide distances.
AT&T, meanwhile, had run into political problems with NASA and government officials who were concerned that the big telecommunications conglomerate would end up monopolizing what was recognized as potentially powerful technology. But when NASA chose to fund RCA's Relay satellite instead of AT&T's design, AT&T decided to simply use its own money to develop a medium orbit communications satellite, which it called Telstar. NASA would launch the satellite, but AT&T would reimburse NASA for the costs involved. Telstar 1 was launched on 10 July 1962, and a second Telstar satellite followed less than a year later. Both satellites were very successful, and Telstar 2 demonstrated that it could even transmit both color and black and white television signals between the United States and Europe.
In some senses, Relay and Telstar were competitors. But RCA and AT&T, who were both working with managers at Goddard, reportedly cooperated very well with each other. Each of the efforts was seen as helping to advance the technology necessary for this new satellite industry to become viable, and both companies saw the potential profit of that in the long run.
By 1962, it was clear that satellite communications technology worked, and there was going to be money made in its use. Fearful of the powerful monopoly satellites could offer a single company, Congress passed the Satellite Communications Act, setting up a consortium of existing communications carriers to run the satellite communications industry. Individual companies could bid to sell satellites to the consortium, but no single company would own the system. NASA would launch the satellites for Comsat, as the consortium was called, but Comsat would run the operations.
In 1964, the Comsat consortium was expanded further with the formation of the International Telecommunications Satellite Organization, commonly known as "Intelsat," to establish a framework for international use of communication satellites. These organizations had the responsibility for choosing the type of satellite technology the system would use. The work of RCA, AT&T and Hughes had proven that either medium-altitude or geosynchronous satellites could work. But in 1965, the consortiums finally decided to base the international system on geosynchronous satellites similar to the Syncom design.8
Applications Technology Satellites
Having helped to develop the prototype satellites, Goddard stepped back from operational communication satellites and focused its efforts on developing advanced technology for future systems. Between 1966 and 1974, Goddard launched a total of six Applications Technology Satellites (ATS) to research advanced technology for communications and meteorological spacecraft. The ATS spacecraft were all put into geosynchronous orbits and investigated microwave and millimeter wavelengths for communication transmissions, methods for aircraft and marine navigation and communications, and various control technologies to improve geosynchronous satellites.
Four of the spacecraft were highly successful and provided valuable data for improving future communication satellites. The sixth ATS spacecraft, launched 30 May 1974, even experimented with transmitting health and education television to small, low-cost ground stations in remote areas. It also tested a geosynchronous satellite's ability to provide tracking and data transmission services for other satellites. Goddard's research in this area, and the expertise the Center developed in the process, made it possible for NASA to develop the Tracking and Data Relay Satellite System (TDRSS) the agency still uses today.9
After ATS-6, NASA transferred responsibility for future communication satellite research to the Lewis Research Center. Goddard, however, maintained responsibility for developing and operating the TDRSS tracking and data satellite system.10
Statistically, the United States has the world's most violent weather. In a typical year, the U.S. will endure some 10,000 violent thunderstorms, 5,000 floods, 1,000 tornadoes, and several hurricanes.11 Improving weather prediction, therefore, has been a high priority of meteorologists here for a very long time.
The early sounding rocket flights began to indicate some of the possibilities space flight might offer in terms of understanding and forecasting the weather, and they prompted the military to pursue development of a meteorological satellite. The Advanced Research Projects Agency (ARPA)12 had a group of scientists and engineers working on this project at the U.S. Army Signal Engineering Laboratories in Ft. Monmouth, New Jersey when NASA was first organized. Recognizing the country's history of providing weather services to the public through a civilian agency, the military agreed to transfer the research group to NASA. These scientists and engineers became one of the founding units of Goddard in 1958.
Television and Infrared Observation Satellites
These Goddard researchers were working on a project called the Television and Infrared Observation Satellite (TIROS). When it was launched on 1 April 1960, it became the world's first meteorological satellite, returning thousands of images of cloud cover and spiralling storm systems. Goddard's Explorer VI satellite had recorded some crude cloud cover images before TIROS I was launched, but the TIROS satellite was the first spacecraft dedicated to meteorological data gathering and transmitted the first really good cloud cover photographs. 13
Clearly, there was a lot of potential in this new technology, and other meteorological satellites soon followed the first TIROS spacecraft. Despite its name, the first TIROS carried only television cameras. The second TIROS satellite, launched in November 1960, also included an infrared instrument, which gave it the ability to detect cloud cover even at night.
The TIROS capabilities were limited, but the satellites still provided a tremendous service in terms of weather forecasting. One of the biggest obstacles meteorologists faced was the local, "spotty" nature of the data they could obtain. Weather balloons and ocean buoys could only collect data in their immediate area. Huge sections of the globe, especially over the oceans, were dark areas where little meteorological information was available. This made forecasting a difficult task, especially for coastal areas.
Sounding rockets offered the ability to take measurements at all altitudes of the atmosphere, which helped provide temperature, density and water vapor information. But sounding rockets, too, were limited in the scope of their coverage. Satellites offered the first chance to get a "big picture" perspective on weather patterns and storm systems as they travelled around the globe.
Because weather forecasting was an operational task that usually fell under the management of the Weather Bureau, there was some disagreement about who should have responsibility for designing and operating this new class of satellite. Some people at Goddard felt that NASA should take the lead, because the new technology was satellite-based. The Weather Bureau, on the other hand, was going to be paying for the satellites and wanted control over the type of spacecraft and instruments they were funding. When the dust settled, it was decided that NASA would conduct research on advanced meteorological satellite technology and would manage the building, launching and testing of operational weather satellites. The Weather Bureau would have final say over operational satellite design, however, and would take over management of spacecraft operations after the initial test phase was completed.14
The TIROS satellites continued to improve throughout the early 1960s.
Although the spacecraft were officially research satellites, they also provided the Weather Bureau with a semi-operational weather satellite system from 1961 to 1965. TIROS III, launched in July 1961, detected numerous hurricanes, tropical storms, and weather fronts around the world that conventional ground networks missed or would not have seen for several more days.15 TIROS IX, launched in January 1965, was the first of the series launched into a polar orbit, rotating around the Earth in a north-south direction. This orientation allowed the satellite to cross the equator at the same time each day and provided coverage of the entire globe, including the higher latitudes and polar regions, as its orbit precessed around the Earth.
The later TIROS satellites also improved their coverage by changing the location of the spacecraft's camera. The TIROS satellites were designed like a wheel of cheese. The wheel spun around but, like a toy top or gyroscope, the axis of the wheel kept pointing in the same direction as the satellite orbited the Earth. The cameras were placed on the satellite's axis, which allowed them to take continuous pictures of the Earth when that surface was actually facing the planet. Like dancers doing a do-si-do, however, the surface with the cameras would be pointing parallel to or away from the Earth for more than half of the satellite's orbit. TIROS IX (and the operational TIROS satellites), put the camera on the rotating section of the wheel, which was kept facing perpendicular to the Earth throughout its orbit. This made the satellite operate more like a dancer twirling around while circling her partner. While the camera could only take pictures every few seconds, when the section of the wheel holding the camera rotated past the Earth, it could continue taking photographs throughout the satellite's entire orbit.
In 1964, Goddard took another step in developing more advanced weather satellites when it launched the first NIMBUS spacecraft. NASA had originally envisioned the larger and more sophisticated NIMBUS as the design for the Weather Bureau's operational satellites. The Weather Bureau decided that the NIMBUS spacecraft were too large and expensive, however, and opted to stay with the simpler TIROS design for the operational system. So the NIMBUS satellites were used as research vehicles to develop advanced instruments and technology for future weather satellites. Between 1964 and 1978, Goddard developed and launched a total of seven Nimbus research satellites.
In 1965, the Weather Bureau was absorbed into a new agency called the Environmental Science Services Administration (ESSA). The next year, NASA launched the first satellite in ESSA's operational weather system. The satellite was designed like the TIROS IX spacecraft and was designated "ESSA 1." As per NASA's agreement, Goddard continued to manage the building, launching and testing of ESSA's operational spacecraft, even as the Center's scientists and engineers worked to develop more advanced technology with separate research satellites.
The ESSA satellites were divided into two types. One took visual images of the Earth with an Automatic Picture Transmission (APT) camera system and transmitted them in real time to stations around the globe. The other recorded images that were stored and then transmitted to a central ground station for global analysis. These first ESSA satellites were deployed in pairs in "Sun-synchronous" polar orbits around the Earth, crossing the same point at approximately the same time each day.
In 1970, Goddard launched an improved operational spacecraft for ESSA using "second generation" weather satellite technology. The Improved TIROS Operational System (ITOS), as the design was initially called, combined the functions of the previous pairs of ESSA satellites into a single spacecraft and added a day and night scanning radiometer. This improvement meant that meteorologists could get global cloud cover information every 12 hours instead of every 24 hours.
Soon after ITOS 1 was launched, ESSA evolved into the National Oceanic and Atmospheric Administration (NOAA), and successive ITOS satellites were redesignated as NOAA 1, 2, 3, etc. This designation system for NOAA's polar-orbiting satellites continues to this day.
In 1978, NASA launched the first of what was called the "third generation" of polar orbiting satellites. The TIROS-N design was a much bigger, three-axis-stabilized spacecraft that incorporated much more advanced equipment. The TIROS-N series of instruments, used aboard operational NOAA satellites today, provided much more accurate sea-surface temperature information, which is necessary to predict a phenomenon like an El Nino weather pattern. They also could identify snow and sea ice and could provide much better temperature profiles for different altitudes in the atmosphere.
But while the lower-altitude polar satellites can observe some phenomena in more detail because they are relatively close to the Earth, they can't provide the continuous "big picture" information a geosynchronous satellite can offer. So for the past 25 years, NOAA has operated two weather satellite systems - the TIROS series of polar orbiting satellites at lower altitudes, and two geosynchronous satellites more than 22,300 miles above the Earth.16
While polar-orbiting satellites were an improvement over the more equatorial-orbiting TIROS satellites, scientists realized that they could get a much better perspective on weather systems from a geosynchronous spacecraft. Goddard's research teams started investigating this technology with the launch of the first Applications Technology Satellite (ATS-1) in 1966. Because the ATS had a geosynchronous orbit that kept it "parked" above one spot, meteorologists could get progressive photographs of the same area over a period of time as often as every 30 minutes. The "satellite photos" showing changes in cloud cover that we now almost take for granted during nightly newscasts are made possible by geosynchronous weather satellites. Those cloud movement images also allowed meteorologists to infer wind currents and speeds. This information is particularly useful in determining weather patterns over areas of the world such as oceans or the tropics, where conventional aircraft and balloon methods can't easily gather data.
Goddard's ATS III satellite, launched in 1967, included a multi-color scanner that could provide images in color, as well. Shortly after its launch, ATS III took the first color image of the entire Earth, a photo made possible by the satellite's 22,300 mile high orbit.17
In 1974, Goddard followed its ATS work with a dedicated geosynchronous weather satellite called the Synchronous Meteorological Satellite (SMS). Both SMS -1 and SMS-2 were research prototypes, but they still provided meteorologists with practical information as they tested out new technology. In addition to providing continuous coverage of a broad area, the SMS satellites collected and relayed weather data from 10,000 automatic ground stations in six hours, giving forecasters more timely and detailed data than they had ever had before.
Goddard launched NOAA's first operational geostationary18 satellite, designated the Geostationary Operational Environmental Satellite (GOES) in October 1975. That satellite has led to a whole family of GOES spacecraft. As with previous operational satellites, Goddard managed the building, launching and testing of the GOES spacecraft.
The first seven GOES spacecraft, while geostationary, were still "spinning" designs like NOAA's earlier operational ESSA satellites. In the early 1980s, however, NOAA decided that it wanted the new series of geostationary GOES spacecraft to be three-axis stabilized, as well, and to incorporate significantly more advanced instruments. In addition, NOAA decided to award a single contract directly with an industry manufacturer for the spacecraft and instruments, instead of working separate instrument and spacecraft contracts through Goddard.
Goddard typically developed new instruments and technology on research satellites before putting them onto an operational spacecraft for NOAA. The plan for GOES 8,19 however, called for incorporating new technology instruments directly into a spacecraft that was itself a new design and also had an operational mission. Meteorologists across the country were going to rely on the new instruments for accurate weather forecasting information, which put a tremendous amount of added pressure on the designers. But the contractor selected to build the instruments underestimated the cost and complexity of developing the GOES 8 instruments. In addition, Goddard's traditional "Phase B" design study, which would have generated more concrete estimates of the time and cost involved in the instrument development, was eliminated on the GOES 8 project. The study was skipped in an attempt to save time, because NOAA was facing a potential crisis with its geostationary satellite system.
NOAA wanted to have two geostationary satellites up at any given point in order to adequately cover both coasts of the country. But the GOES 5 satellite failed in 1984, leaving only one geostationary satellite, GOES 6, in operation. The early demise of GOES 4 and GOES 5 left NOAA uneasy about how long GOES 6 would last, prompting the "streamlining" efforts on the GOES 8 spacecraft design. The problem became even more serious in 1986 when the launch vehicle for the GOES G spacecraft, which would have become GOES 7, failed after launch. Another GOES satellite was successfully launched in 1987, but the GOES 6 spacecraft failed in January 1989, leaving the United States once again with only one operational geostationary weather satellite.
By 1991, when the GOES 8 project could not predict a realistic launch date, because working instruments for the spacecraft still hadn't been developed, Congress began to investigate the issue. The GOES 7 spacecraft was aging, and managers and elected officials realized that it was entirely possible that the country might soon find itself without any geostationary satellite coverage at all.
To buy the time necessary to fix the GOES 8 project and alleviate concerns about coverage, NASA arranged with the Europeans to "borrow" one of their Eumetsat geostationary satellites. The satellite was allowed to "drift" further west so it sat closer to the North American coast, allowing NOAA to move the GOES 7 satellite further west.
Meanwhile, Goddard began to take a more active role in the GOES 8 project. A bigger GOES 8 project office was established at the Center and Goddard brought in some of its best instrument experts to work on the project, both at Goddard and at the contractor's facilities. Goddard, after all, had some of the best meteorological instrument-building expertise in the country. But because Goddard was not directly in charge of the instrument sub-contract, the Center had been handicapped in making that knowledge and experience available to the beleaguered contractor.
The project was a sobering reminder of the difficulties that could ensue when, in an effort to save time and money, designers attempted to streamline a development project or combine research and operational functions into a single spacecraft. But in 1994, the GOES 8 spacecraft was finally successfully launched, and the results have been impressive. Its advanced instruments performed as advertised, improving the spacecraft's focusing and atmospheric sounding abilities and significantly reducing the amount of time the satellite needed to scan any particular area. 20
Earth Resources Satellites
As meteorological satellite technology developed and improved, Goddard scientists realized that the same instruments used for obtaining weather information could be used for other purposes, as well. Meteorologists could look at radiation that travelled back up from the Earth's surface to determine things like water vapor content and temperature profiles at different altitudes in the atmosphere. But those same emissions could reveal potentially valuable information about the Earth's surface, as well.
Objects at a temperature above absolute zero emit radiation, many of them at precise and unique wavelengths in the electromagnetic spectrum. So by analyzing the emissions of any object, from a star or comet to a particular section of forest or farmland, scientists can learn important things about its chemical composition. Instruments on the Nimbus spacecraft had the ability to look at reflected solar radiation from the Earth in several different wavelengths. As early as 1964, scientists began discussing the possibilities of experimenting with this technology to see what it might be able to show us about not only the atmosphere, but also resources on the Earth.
The result was the Earth Resources Technology Satellite (ERTS), launched in 1972 and later given the more popular name "Landsat 1." The spacecraft was based on a Nimbus satellite, with a multi-channel radiometer to look at different wavelength bands where the reflected energy from surfaces such as forests, water, or different crops would fall. The satellite instruments also had much better resolution than the Nimbus instruments. Each swath of the Earth covered by the Nimbus scanner was 1500 miles wide, with each pixel in the picture representing five miles. The polar-orbiting ERTS satellite instrument could focus in on a swath only 115 miles wide, with each pixel representing 80 meters. This resolution allowed scientists to view a small enough section of land, in enough detail, to conduct a worthwhile analysis of what it contained.
Images from the ERTS/Landsat satellite, for example, showed scientists a 25-mile wide geological feature near Reno, Nevada that appeared to be a previously undiscovered meteor crater. Other images collected by the satellite were useful in discovering water-bearing rocks in Nebraska, Illinois and New York and determining that water pollution drifted off the Atlantic coast as a cohesive unit, instead of dissipating in the ocean currents.
The success of the ERTS satellite prompted scientists to want to explore this use of satellite technology further. They began working on instruments that could get pixel resolutions as high as five meters, but were told to discontinue that research because of national security concerns. If a civilian satellite provided data that detailed, it might allow foreign countries to find out critical information about military installations or other important targets in the U.S. This example illustrates one of the ongoing difficulties with Earth resource satellite research. The fact that the same information can be used for both scientific and practical purposes often creates complications with not only who should be responsible for the work, but how and where the information will be used.
In any event, the follow-on satellite, "Landsat-2," was limited to the same levels of resolution. More recent Landsat spacecraft, however, have been able to improve instrument resolution further.21
Landsat 2 was launched in January 1975 and looked at land areas for an even greater number of variables than its ERTS predecessor, integrating information from ground stations with data obtained by the satellite's instruments. Because wet land and green crops reflect solar energy at different wavelengths than dry soil or brown plants, Landsat imagery enabled researchers to look at soil moisture levels and crop health over wide areas, as well as soil temperature, stream flows, and snow depth. Its data was used by the U.S. Department of Agriculture, the U.S. Forest Service, the Department of Commerce, the Army Corps of Engineers, the Environmental Protection Agency and the Department of Interior, as well as agencies from foreign countries.22
The Landsat program clearly was a success, particularly from a scientific perspective. It proved that satellite technology could determine valuable information about precious natural resources, agricultural activity, and environmental hazards. The question was who should operate the satellites. Once the instruments were developed, the Landsat spacecraft were going to be collecting the same data, over and over, instead of exploring new areas and technology. One could argue that by examining the evolution of land resources over time, scientists were still exploring new processes and gathering new scientific information about the Earth. But that same information was being used predominantly for practical purposes of natural resource management, agricultural and urban planning, and monitoring environmental hazards. NASA had never seen its role as providing ongoing, practical information, but there was no other agency with the expertise or charter to operate land resource satellites.
As a result, NASA continued to manage the building, launch, and space operation of the Landsat satellites until 1984. Processing and distribution of the satellite's data was managed by the Department of Interior, through an Earth Resources Observation System (EROS) Data Center that was built by the U.S. Geological Survey in Sioux Falls, South Dakota in 1972.
In 1979, the Carter Administration developed a new policy in which the Landsat program would be managed by NOAA and eventually turned over to the private sector. In 1984, the first Reagan Administration put that policy into effect, soliciting commercial bids for operating the system, which at that point consisted of two operational satellites. Landsat 4 had been launched in 1982 and Landsat 5 was launched in 1984. Ownership and operation of the system was officially turned over to the EOSAT Company in 1985, which sold the images to anyone who wanted them, including the government. At the same time, responsibility for overseeing the program was transferred from NASA to NOAA. Under the new program guidelines, the next spacecraft in the Landsat program, Landsat 6, would also be constructed independently by industry.
There were two big drawbacks with this move, however, as everyone soon found out. The first was that although there was something of a market for Landsat images, it was nothing like that surrounding the communication satellite industry. The EOSAT company found itself struggling to stay afloat. Prices for images jumped from the couple of hundred dollars per image that EROS had charged to $4,000 per shot, and EOSAT still found itself bordering on insolvency.
Being a private company, EOSAT also was concerned with making a profit, not archiving data for the good of science or the government. Government budgets wouldn't allow for purchasing thousands of archival images at $4,000 apiece, so the EROS Data Center only bought a few selected images each year. As a result, many of the scientific or archival benefits the system could have created were lost.
In 1992, the Land Remote Sensing Policy Act reversed the 1984 decision to commercialize the Landsat system, noting the scientific, national security, economic, and social utility of the Landsat images. Landsat 6 was launched the following year, but the spacecraft failed to reach orbit and ended up in the Indian Ocean.
This launch failure was discouraging, but planning for the next Landsat satellite was already underway. Goddard had agreed to manage design of a new data ground station for the satellite, and NASA and the Department of Defense initially agreed to divide responsibility for managing the satellite development. But the Air Force subsequently pulled out of the project and, in May 1994, management of the Landsat system was turned over to NASA, the U.S. Geological Survey (USGS), and NOAA. At the same time, Goddard assumed sole management responsibility for developing Landsat 7.
The only U.S. land resource satellites in operation at the moment are still Landsat 4 and 5, which are both degrading in capability. Landsat 5, in fact, is the only satellite still able to transmit images. The redesigned Landsat 7 satellite is scheduled for launch by mid-1999, and its data will once again be made available though the upgraded EROS facilities in Sioux Falls, South Dakota. Until then, scientists, farmers and other users of land resource information have to rely on Landsat 5 images through EOSAT, or they have to turn to foreign companies for the information.
The French and the Indians have both created commercial companies to sell land resource information from their satellites, but both companies are being heavily subsidized by their governments while a market for the images is developed. There is probably a viable commercial market that could be developed in the United States, as well. But it may be that the demand either needs to grow substantially on its own or would need government subsidy before a commercialization effort could succeed. The issue of scientific versus practical access to the information would also still have to be resolved.
No matter how the organization of the system is eventually structured, Landsat imagery has proven itself an extremely valuable tool for not only natural resource management but urban planning and agricultural assistance, as well. Former NASA Administrator James Fletcher even commented in 1975 that if he had one space-age development to save the world, it would be Landsat and its successor satellites.23 Without question, the Landsat technology has enabled us to learn much more about the Earth and its land-based resources. And as the population and industrial production on the planet increase, learning about the Earth and potential dangers to it has become an increasingly important priority for scientists and policy-makers alike.24
Atmospheric Research Satellites
One of the main elements scientists are trying to learn about the Earth is the composition and behavior of its atmosphere. In fact, Goddard's scientists have been investigating the dynamics of the Earth's atmosphere for scientific, as well as meteorological, purposes since the inception of the Center. Explorers 17, 19, and 32, for example, all researched various aspects of the density, composition, pressure and temperature of the Earth's atmosphere. Explorers 51 and 54, also known as "Atmosphere Explorers," investigated the chemical processes and energy transfer mechanisms that control the atmosphere.
Another goal of Goddard's atmospheric scientists was to understand and measure what was called the "Earth Radiation Budget." Scientists knew that radiation from the Sun enters the Earth's atmosphere. Some of that energy is reflected back into space, but most of it penetrates the atmosphere to warm the surface of the Earth. The Earth, in turn, radiates energy back into space. Scientists knew that the overall radiation received and released was about equal, but they wanted to know more about the dynamics of the process and seasonal or other fluctuations that might exist. Understanding this process is important because the excesses and deficits in this "budget," as well as variations in it over time or at different locations, create the energy to drive our planet's heating and weather patterns.
The first satellite to investigate the dynamics of the Earth Radiation Budget was Explorer VII, launched in 1959. Nimbus 2 provided the first global picture of the radiation budget, showing that the amount of energy reflected by the Earth's atmosphere was lower than scientists had thought. Additional instruments on Nimbus 3, 5, and 6, as well as operational TIROS and ESSA satellites, explored the dynamics of this complex process further. In the early 1980s, researchers developed an Earth Radiation Budget Experiment (ERBE) instrument that could better analyze the short-wavelength energy received from the Sun and the longer-wavelength energy radiated into space from the Earth. This instrument was put on a special Earth Radiation Budget Satellite (ERBS) launched in 1984, as well as the NOAA-9 and NOAA 10 weather satellites.
This instrument has provided scientists with information on how different kinds of clouds affect the amount of energy trapped in the Earth's atmosphere. Lower, thicker clouds, for example, reflect a portion of the Sun's energy back into space, creating a cooling effect on the surface and atmosphere of the Earth. High, thin cirrus clouds, on the other hand, let the Sun's energy in but trap some of the Earth's outgoing infrared radiation, reflecting it back to the ground. As a result, they can have a warming effect on the Earth's atmosphere. This warming effect can, in turn, create more evaporation, leading to more moisture in the air. This moisture can trap even more radiation in the atmosphere, creating a warming cycle that could influence the long-term climate of the Earth.
Because clouds and atmospheric water vapor seem to play a significant role in the radiation budget of the Earth as well as the amount of global warming and climate change that may occur over the next century, scientists are attempting to find out more about the convection cycle that transports water vapor into the atmosphere. In 1997, Goddard launched the Tropical Rainfall Measuring Mission (TRMM) satellite into a near-equatorial orbit to look more closely at the convection cycle in the tropics that powers much of the rest of the world's cloud and weather patterns. The TRMM satellite's Clouds and the Earth's Radiant Energy System (CERES) instrument, built by NASA's Langley Research Center, is an improved version of the earlier ERBE experiment. While the satellite's focus is on convection and rainfall in the lower atmosphere, some of that moisture does get transported into the upper atmosphere, where it can play a role in changing the Earth's radiation budget and overall climate.25
An even greater amount of atmospheric research, however, has been focused on a once little-known chemical compound of three oxygen atoms called ozone. Ozone, as most Americans now know, is a chemical in the upper atmosphere that blocks incoming ultraviolet rays from the Sun, protecting us from skin cancer and other harmful effects caused by ultraviolet radiation.
The ozone layer was first brought into the spotlight in the 1960s, when designers began working on the proposed Supersonic Transport (SST). Some scientists and environmentalists were concerned that the jet's high-altitude emissions might damage the ozone layer, and the federal government funded several research studies to evaluate the risk. The cancellation of the SST in 1971 shelved the issue, at least temporarily, but two years later a much greater potential threat emerged.
In 1973, two researchers at the University of California, Irvine came up with the astounding theory that certain man-made chemicals, called chlorofluorocarbons (CFCs), could damage the atmosphere's ozone layer. These chemicals were widely used in everything from hair spray to air conditioning systems, which meant that the world might have a dangerously serious problem on its hands.
In 1975, Congress directed NASA to develop a "comprehensive program of research, technology and monitoring of phenomena of the upper atmosphere" to evaluate the potential risk of ozone damage further. NASA was already conducting atmospheric research, but the Congressional mandate supported even wider efforts. NASA was not the only organization looking into the problem, either. Researchers around the world began focusing on learning more about the chemistry of the upper atmosphere and the behavior of ozone layer.
Goddard's Nimbus IV research satellite, launched in 1970, already had an instrument on it to analyze ultraviolet rays that were "backscattered," or reflected, from different altitudes in the Earth's atmosphere. Different wavelengths of UV radiation should be absorbed by the ozone at different levels in the atmosphere. So by analyzing how much UV radiation was still present in different wavelengths, researchers could develop a profile of how thick or thin the ozone layer was at different altitudes and locations.
In 1978, Goddard launched the last and most capable of its Nimbus-series satellites. Nimbus 7 carried an improved version of this experiment, called the Solar Backscatter Ultraviolet (SBUV) instrument. It also carried a new sensor called the Total Ozone Mapping Spectrometer (TOMS). As opposed to the SBUV, which provided a vertical profile of ozone in the atmosphere, the TOMS instrument generated a high-density map of the total amount of ozone in the atmosphere.
A similar instrument, called the SBUV-2, has been put on weather satellites since the early 1980s. For a number of years, the Space Shuttle periodically flew a Goddard instrument called the Shuttle Solar Backscatter Ultraviolet (SSBUV) experiment that was used to calibrate the SBUV-2 satellite instruments to insure the readings continued to be accurate. In the last couple of years, however, scientists have developed data-processing methods of calibrating the instruments, eliminating the need for the Shuttle experiments.
Yet it was actually not a NASA satellite that discovered the "hole" that finally developed in the ozone layer. In May 1985, a British researcher in Antarctica published a paper announcing that he had detected an astounding 40% loss in the ozone layer over a Antarctica the previous winter. When Goddard researchers went back and looked at their TOMS data from that time period, they discovered that the data indicated the exact same phenomenon. Indeed, the satellite indicated an area of ozone layer thinning, or "hole,"26 the size of the Continental U.S.
How had researchers missed a development that drastic? Ironically enough, it was because the anomaly was so drastic. The TOMS data analysis software had been programmed to flag grossly anomalous data points, which were assumed to be errors. Nobody had expected the ozone loss to be as great as it was, so the data points over the area where the loss had occurred looked like problems with the instrument or its calibration. .
Once the Nimbus 7 data was verified, Goddard's researchers generated a visual map of the area over Antarctica where the ozone loss had occurred. In fact, the ability to generate visual images of the ozone layer and its "holes" have been among the significant contributions NASA's ozone-related satellites have made to the public debate over the issue. Data points are hard for most people to fully understand. But for non-scientists, a visual image showing a gap in a protective layer over Antarctica or North America makes the problem not only clear, but somehow very real.
The problem then became determining what was causing the loss of ozone. The problem was a particularly sticky one, because it was going to relate directly to legislation and restrictions that would be extremely costly for industry. By 1978, the Environmental Protection Agency (EPA) had already moved to ban....
....the use of CFCs in aerosols. By 1985, the United Nations Environmental Program (UNEP) was calling on nations to take measures to protect the ozone and, in 1987, forty-three nations signed the "Montreal Protocol, agreeing to cut CFC production 50% by the year 2000.
The CFC theory was based on a prediction that chlorofluorocarbons, when they reached the upper atmosphere, released chlorine and flourine. The chlorine, it was suspected, was reacting with the ozone to form chlorine monoxide - a chemical that is able to destroy a large amount of ozone in a very short period of time. Because the issue was the subject of so much debate, NASA launched numerous research efforts to try to validate or disprove the theory. In addition to satellite observations, NASA sent teams of researchers and aircraft to Antarctica to take in situ readings of the ozone layer and the ozone "hole" itself. These findings were then supplemented with the bigger picture perspective the TOMS and SBUV instruments could provide.
The TOMS instrument on Nimbus 7 was not supposed to last more than a couple of years. But the information it was providing was considered so critical to the debate that Goddard researchers undertook an enormous effort to keep the instrument working, even as it aged and began to degrade. The TOMS instrument also hadn't been designed to show long-term trends, so the data processing techniques had to be significantly improved to give researchers that kind of information. In the end, Goddard was able to keep the Nimbus 7 TOMS instrument operating for almost 15 years, which provided ozone monitoring until Goddard was able to launch a replacement TOMS instrument on a Russian satellite in 1991.27
A more comprehensive project to study the upper atmosphere and and the ozone layer was launched in 1991, as well. The satellite, called the Upper Atmosphere Research Satellite (UARS), was one of the results of Congress's 1975 mandate for NASA to pursue additional ozone research. Although its goal is to try to understand the chemistry and dynamics of the upper atmosphere, the focus of UARS is clearly on ozone research. Original plans called for the spacecraft to be launched from the Shuttle in the mid-1980s, but the Challenger explosion back-up delayed its launch until 1991.
Once in orbit, however, the more advanced instruments on board the UARS satellite were able to map chlorine monoxide levels in the stratosphere. Within months, the satellite was able to confirm what the Antarctic....
....aircraft expeditions and Nimbus-7 satellite had already reported - that there was a clear and causal link between levels of chlorine, formation of chlorine monoxide, and levels of ozone loss in the upper atmosphere.
Since the launch of UARS, the TOMS instrument has been put on several additional satellites to insure that we have a continuing ability to monitor changes in the ozone layer. A Russian satellite called Meteor 3 took measurements with a TOMS instrument from 1991 until the satellite ceased operating in 1994. The TOMS instrument was also incorporated into a Japanese satellite called the Advanced Earth Observing System (ADEOS) that was launched in 1996. ADEOS, which researchers hoped could provide TOMS coverage until the next scheduled TOMS instrument launch in 1999, failed after less than a year in orbit. But fortunately, Goddard had another TOMS instrument ready for launch on a small NASA satellite called an Earth Probe, which was put into orbit with the Pegasus launch vehicle in 1996. Researchers hope that this instrument will continue to provide coverage and data until the next scheduled TOMS instrument launch.
All of these satellites have given us a much clearer picture of what the ozone layer is, how it interacts with various other chemicals, and what causes it to deteriorate. These pieces of information are essential elements for us to have if we want to figure out how best to protect what is arguably one of our most precious natural resources.
Using the UARS satellite, scientists have been able to track the progress of CFCs up into the stratosphere and have detected the build-up of chlorine monoxide over North America and the Arctic as well as Antarctica. Scientists also have discovered that ozone loss is much greater when the temperature of the stratosphere is cold. In 1997, for example, particularly cold stratospheric temperatures created the first Antarctic-type of ozone hole over North America.
Another factor in ozone loss is the level of aerosols, or particulate matter, in the upper atmosphere. The vast majority of aerosols come from soot, other pollution, or volcanic activity, and Goddard's scientists have been studying the effects of these particles in the atmosphere ever since the launch of the Nimbus I spacecraft in 1964. Goddard's 1984 Earth Radiation Budget Satellite (ERBS), which is still operational, carries a Stratospheric Aerosol and Gas Experiment (SAGE II) that tracks aerosol levels in the lower and upper atmosphere. The Halogen Occultation Experiment (HALOE) instrument on UARS also measures aerosol intensity and distribution.
In 1991, both UARS and SAGE II were used to track the movement and dispersal of the massive aerosol cloud created by the Mt. Pinatubo volcano eruption in the Philippines. The eruption caused stratospheric aerosol levels to increase to as much as 100 times their pre-eruption levels, creating spectacular Sunsets around the world but causing some other effects, as well. These volcanic clouds appear to help cool the Earth, which could affect global warming trends, but the aerosols in these clouds seem to increase the amount of ozone loss in the stratosphere, as well.
The good news is, the atmosphere seems to be beginning to heal itself. In 1979 there was no ozone hole. Throughout the 1980s, while legislative and policy debates raged over the issue, the hole developed and grew steadily larger. In 1989, most U.S. companies finally ceased production of CFC chemicals and, in 1990, the U.N. strengthened its Montreal Protocol to call for the complete phaseout of CFCs by the year 2000. Nature is slow to react to changes in our behavior but, by 1997, scientists finally began to see a levelling out and even a slight decrease in chlorine monoxide levels and ozone loss in the upper atmosphere.28
Continued public interest in this topic has made ozone research a little more complicated for the scientists involved. Priorities and pressures in the program have changed along with Presidential administrations and Congressional agendas and, as much as scientists can argue that data is simply data, they cannot hope to please everyone in such a politically charged arena. Some environmentalists argue that the problem is much worse than NASA is making it out to be, while more conservative politicians have argued that NASA's scientists are blowing the issue out of proportion.29
But at this point a few things are clearer. The production of CFC chemicals was, in fact, harming a critical component of our planet's atmosphere. It took a variety of ground and space instruments to detect and map the nature and extent of the problem. But the perspective offered by Goddard's satellites allowed scientists and the general public to get a clear overview of the problem and map the progression of events that caused it. This information has had a direct impact on changing the world's industrial practices which, in turn, have begun to slow the damage and allow the planet to heal itself. The practical implications of Earth-oriented satellite data may make life a little more complicated for the scientists involved, but no one can argue the significance or impact of the work. By developing the technology to view and analyze the Earth from space, we have given ourselves an invaluable tool for helping us understand and protect the planet on which we live.
One of the biggest advantages to remote sensing of the Earth from satellites stems from the fact that the majority of the Earth's surface area is extremely difficult to study from the ground. The world's oceans cover 71% of the Earth's surface and comprise 99% of its living area. Atmospheric convective activity over the tropical ocean area is believed to drive a significant amount of the world's weather. Yet until recently, the only way to map or analyze this powerful planetary element was with buoys, ships or aircraft. But these methods could only obtain data from various individual points, and the process was extremely difficult , expensive, and time-consuming.
Satellites, therefore, offered oceanographers a tremendous advantage. A two-minute ocean color satellite image, for example, contains more measurements than a ship travelling 10 knots could make in a decade. This ability has allowed scientists to learn a lot more about the vast open stretches of ocean that influence our weather, our global climate, and our everyday lives.30
Although Goddard's early meteorological satellites were not geared specifically toward analyzing ocean characteristics, some of the instruments could provide information about the ocean as well as the atmosphere. The passive microwave sensors that allowed scientists to "see" through clouds better, for example, also let them map the distribution of sea ice around the world. Changes in sea ice distribution can indicate climate changes and affect sea levels around the world, which makes this an important parameter to monitor. At the same time, this information also has allowed scientists to locate open passageways for ships trying to get through the moving ice floes of the Arctic region.
By 1970, NOAA weather satellites also had instruments that could measure the temperature of the ocean surface in areas where there was no cloud cover, and the Landsat satellites could provide some information on snow and ice distributions. But since the late 1970s, much more sophisticated ocean-sensing satellite technology has emerged.31
The Nimbus 7 satellite, for example, carried an improved microwave instrument that could generate a much more detailed picture of sea ice distribution than either...
...the earlier Nimbus or Landsat satellites. Nimbus 7 also carried the first Coastal Zone Color Scanner (CZCS), which allowed scientists to map pollutants and sediment near coastlines. The CZCS also showed the location of ocean phytoplankton around the world. Phytoplankton are tiny, carbon dioxide-absorbing plants that constitute the lowest rung on the ocean food chain. So phytoplankton generally mark spots where larger fish may be found. But because they bloom where nutrient-rich water from the deep ocean comes up near the surface, their presence also gives scientists clues about the ocean's currents and circulation.
Nimbus 7 continued to send back ocean color information until 1984. Scientists at Goddard continued working on ocean color sensor development...
....throughout the 1980s, and a more advanced coastal zone ocean color instrument was launched on the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) satellite in 1997. In contrast to most scientific satellites, SeaWiFS was funded and launched by a private company instead of by NASA. Most of the ocean color data the satellite provides is purchased by NASA and other research institutions, but the company is selling some data to the fishing industry, as well.32
Since the launch of the Nimbus 7 and Tiros-N satellites in 1978, scientists have also been able to get much better information on global ocean surface temperatures. Sea surface temperatures tell scientists about ocean circulation, because they can use the temperature information to track the movement of warmer and cooler bodies of water. Changes in sea surface temperatures can also indicate the development of phenomena such as El Nino climate patterns. In fact, one of the most marked indications of a developing El Nino condition, which can cause heavy rains in some parts of the world and devastating drought in others, is an unusually warm tongue of water moving eastward from the western equatorial Pacific Ocean.
NOAA weather satellites have carried instruments to measure sea surface temperature since 1981, and NASA's EOS AM-1 satellite, scheduled for launch in 1999, incorporates an instrument that can measure those temperatures with even more precision. The launch of Nimbus 7 also gave researchers the ability to look at surface winds, which help drive ocean circulation. With Nimbus 7, however, scientists had to infer surface winds by looking at slight differentiations in microwave emissions coming from the ocean surface. A scatterometer designed specifically to measure surface winds was not launched until the Europeans launched ERS-1 in 1991. Another scatterometer was launched on the Japanese ADEOS spacecraft in 1996. Because ADEOS failed less than a year after launch, Goddard researchers have begun an intensive effort to launch another scatterometer, called QuickSCAT, on a NASA spacecraft. JPL project managers are being aided in this effort by the Goddard-developed Rapid Spacecraft Procurement Initiative, which will allow them to incorporate the instrument into an existing small spacecraft design.Using this streamlined process, scientists hope to have QuickSCAT in orbit by the end of 1998.33
In the 1970s, researchers at the Wallops Flight Facility also began experimenting with radar altimetry to determine sea surface height, although they were pleased if they could get accuracy within a meter. In 1992, however, a joint satellite project between NASA and the French Centre National d'Etudes Spatiales (CNES) called TOPEX/Poseidon put a much more accurate radar altimeter into orbit. Goddard managed the development of the TOPEX radar altimeter, which can measure sea surface height within a few centimeters. In addition to offering useful information for maritime weather reports, this sea level data tells scientists some important things about ocean movement.
For one thing, sea surface height indicates the build-up of water in one area of the world or another. One of the very first precursors to an El Nino condition, for example, is a rise in ocean levels in the western equatorial Pacific, caused by stronger-than-normal easterly trade winds. Sea level also tells scientists important information about the amount of heat the ocean is storing. If the sea level in a particular area is low, it means that the area of warm, upper-level water is shallow. This means that colder, deeper water can reach the surface there, driving ocean circulation and bringing nutrients up from below, leading to the production of phytoplankton. The upwelling of cold water will also cool down the sea surface temperature, reducing the amount of water that evaporates into the atmosphere.
All of these improvements in satellite capabilities gave oceanographers and scientists an opportunity to integrate on-site surface measurements from buoys or ships with the more global perspective available from space. As a result, we are finally beginning to piece together a more complete picture of our oceans and the role they play in the Earth's biosystems and climate. In fact, one of the most significant results of ocean-oriented satellite research was the realization that ocean and atmospheric processes were intimately linked to each other. To really understand the dynamics of the ocean or the atmosphere, we needed to look at the combined global system they comprised.34
El Nino and Global Change
The main catalyst that prompted scientists to start looking at the oceans and atmosphere as an integrated system was the El Nino event of 1982-83. The rains and drought associated with the unusual weather pattern caused eight billion dollars of damage, leading to several international research programs to try to understand and predict the phenomenon better. The research efforts included measurements by ships, aircraft, ocean buoys, and satellites, and the work is continuing today. But by 1996, scientists had begun to understand the warning signals and patterns of a strong El Nino event. They also had the technology to track atmospheric wind currents and cloud formation, ocean color, sea surface temperatures, sea surface levels and sea surface winds, which let them accurately predict the heavy rains and severe droughts that occurred at points around the world throughout the 1997-98 winter.
The reason the 1982-83 El Nino prompted a change to a more integrated ocean-atmospheric approach is that the El Nino phenomenon does not exist in the ocean or the atmosphere by itself. It's the coupled interactions between the two elements that cause this periodic weather pattern to occur. The term El Nino, which means "The Child," was coined by fishermen on the Pacific coast of Central America who noticed a warming of their coastal ocean waters, along with a decline in fish population, near the Christ Child's birthday in December. But as scientists have discovered, the sequence of events that causes that warming begins many months earlier, in winds headed the opposite direction.
In a normal year, strong easterly trade winds blowing near the equator drag warmer, upper-level ocean water to the western edge of the Pacific ocean. That build-up of warm water causes convection....
....up into the tropical atmosphere, leading to rainfall along the Indonesian and Australian coastlines. It also leads to upwelling of colder, nutrient-rich water along the eastern equatorial Pacific coastlines, along Central and South America. In an El Nino year, however, a period of stronger-than-normal trade winds that significantly raises sea levels in the western Pacific is followed by a sharp drop in those winds. The unusually weak trade winds allow the large build-up of warm water in the western tropical Pacific to flow eastward along the equator. That change moves the convection and rainfall off the Indonesian and Australian coasts, causing severe drought in those areas and, as the warm water reaches the eastern edge of the Pacific ocean, much heavier than normal rainfall occurs along the western coastlines of North, Central, and South America. The movement of warm water toward the eastern Pacific also keeps the colder ocean water from coming up to the surface, keeping phytoplankton from growing and reducing the presence of fish further up on the food chain.
In other words, an El Nino is the result of a change in atmospheric winds, which causes a change in ocean currents and sea level distribution, which causes a change in sea surface temperature, which causes a change in water vapor entering the atmosphere, which causes further changes in the wind currents, and so on, creating a cyclical pattern. Scientists still don't know exactly what causes the initial change in atmospheric winds, but they now realize that they need to look at a global system of water, land and air interactions in order to find the answer. And satellites play a critical role in being able to do that.
An El Nino weather pattern is the biggest short-term "coupled" atmospheric and oceanographic climate signal on the planet after the change in seasons, which is why it prompted researchers to take a more interdisciplinary approach to studying it. But scientists are beginning to realize that many of the Earth's climatic changes or phenomena are really coupled events that require a broader approach in order to understand. In fact, the 1990s have seen the emergence of a new type of scientist who is neither oceanographer or atmospheric specialist, but is an amphibious kind of researcher focusing on the broader issue of climate change.35
One of the other important topics these researchers are currently trying to assess is the issue of global warming. Back in 1896, a Swedish chemist named Svante Arrhenius predicted that the increasing carbon dioxide emissions from the industrial revolution would eventually cause the Earth to become several degrees warmer. The reason for this warming was due to what has become known as the "greenhouse effect." In essence, carbon dioxide and other "greenhouse gases," such as water vapor, allow the short-wavelength radiation from the Sun to pass through the atmosphere, warming the Earth. But the gases absorb the longer-wavelength energy travelling back from the Earth into space, radiating part of that energy back down to the Earth again. Just as the glass in a greenhouse allows the Sun through but traps the heat inside, these gases end up trapping a certain amount of heat in the Earth's atmosphere, causing the Earth to become warmer.
The effect of this warming could be small or great, depending on how much the temperature actually changes. If it is only a degree or two, the effect would be relatively small. But a larger change in climate could melt polar ice, causing the sea level to rise several feet and wiping out numerous coastal communities and resources. If the warming happened rapidly, vegetation might not have time to adjust to the climate change, which could affect the world's food supply as well as timber and other natural resources.
The critical question, then, is how great a danger global warming is. And the answer to that is dependent on numerous factors. One, obviously, is the amount of carbon dioxide and other emissions we put into the air - a concern that has driven efforts to reduce our carbon dioxide-producing fossil fuel consumption. But the amount of carbon dioxide in the air is also dependent on how much can be absorbed again by plant life on Earth - a figure that scientists depend on satellites in order to compute. Landsat images can tell scientists how much deforestation is occurring around the world, and how much healthy plant life remains to absorb CO2. Until recently, however, the amount of CO2 absorbed by the world's oceans was unknown. The ocean color images of SeaWiFS are helping to fill that gap, because the phytoplankton it tracks are a major source of carbon dioxide absorption in the oceans.
Another part of the global warming equation is how much water vapor is in the atmosphere - a factor that is driven by ocean processes, especially in the heat furnace of the tropics. As a result, scientists are trying to learn more about the transfer of heat and water vapor between the ocean and different levels of the atmosphere, using tools such as Goddard's TRMM and UARS satellites.
All of these numbers and factors are fed into atmospheric and global computer models, many of which have been developed at the Goddard Institute for Space Studies (GISS) in New York City. These models then try to predict how our global climate may change based on current emissions, population trends, and known facts about ocean and atmospheric processes.
While these models have been successful in predicting short-term effects, such as the global temperature drop after the Mt. Pinatubo volcano eruption, the problem with trying to predict global change is that it's a very long-term process, with many factors that may change over time. We have only been studying the Earth in bits and pieces, and for only a short number of years. In order to really understand which climate changes are short-term variations and which ones are longer trends of more permanent change, scientists needed to observe and measure the global, integrated climate systems of Planet Earth over a long period of time. This realization was the impetus for NASA's Mission to Planet Earth, or the Earth Science Enterprise.36
Earth Science Enterprise
In some senses, the origins of what became NASA's "Mission to Planet Earth" (MTPE) began in the late 1970s, when we began studying the overall climate and planetary processes of other planets in our solar system. Scientists began to realize that we had never taken that kind of "big picture" look at our own planet, and that such an effort might yield some important and fascinating results. But an even larger spur to the effort was simply the development of knowledge and technology that gave scientists both the capability and an understanding of the importance of looking at the Earth from a more global, systems perspective.
Discussions along these lines were already underway when the El Nino event of 1982-83 and the discovery of the ozone "hole" in 1985 elevated the level of interest and support for global climate change research to an almost crisis level. Although the "Mission to Planet Earth" was not announced as a formal new NASA program until 1990, work on the satellites to perform the mission was underway before that. In 1991, Goddard's UARS satellite became the first official MTPE spacecraft to be launched.
Although the program has now changed its name to the Earth Science Enterprise, suffered several budget cuts, and refocused its efforts from overall global change to a narrower focus of global climate change (leaving out changes in solid land masses), the basic goal of the program echoes what was initiated in 1990. In essence, the Earth Science Enterprise aims to integrate satellite, aircraft and ground-based instruments to monitor 24 interrelated processes and parameters in the planet's oceans and atmosphere over a 15-year period.
Phase I of the program consisted of integrating information from satellites such as UARS, the TOMS Earth Probe, TRMM, TOPEX/Poseidon, ADEOS and SeaWiFS with Space Shuttle research payloads, research aircraft and ground station observations. Phase II is scheduled to begin in 1999 with the launch of Landsat 7 and the first in a series of Earth Observing System (EOS) satellites. The EOS spacecraft are extremely large research platforms with many different instruments to look at various atmospheric and ocean processes that affect natural resources and the overall global climate. They will be polar-orbiting satellites, with orbital paths that will allow the different satellites to take measurements at different times of the day. EOS AM-1 is scheduled for launch in late 1998. EOS PM-1 is scheduled for launch around the year 2000. The first in an EOS altimetry series of satellites, which will study the role of oceans, ocean winds and ocean-atmosphere interactions in climate systems, will launch in 2000. An EOS CHEM-1 satellite, which will look at the behavior of ozone and greenhouse gases, measure pollution and the effect of aerosols on global climate, is scheduled for launch in 2002. Follow-on missions will continue the work of these initial observation satellites over a 15-year period.
There is still much we don't know about our own planet. Indeed, the first priority of the Earth Science Enterprise satellites is simply to try to fill in the gaps in what we know about the behavior and dynamics of our oceans and our atmosphere. Then scientists can begin to look at how those elements interact, and what impact they have and will have on global climate and climate change. Only then will we really know how great a danger global warming is, or how much our planet can absorb the man-made elements we are creating in greater and greater amounts.37
It's an ambitious task. But until the advent of satellite technology, the job would have been impossible to even imagine undertaking. Satellites have given us the ability to map and study large sections of the planet that would be difficult to cover from the planet's surface. Surface and aircraft measurements also play a critical role in these studies. But satellites were the breakthrough that gave us the unique ability to stand back far enough from the trees to see the complete and complex forest in which we live.
For centuries, humankind has stared at the stars and dreamed of travelling among them. We imagined ourselves zipping through asteroid fields, transfixed by spectacular sights of meteors, stars, and distant galaxies. Yet when the astronauts first left the planet, they were surprised to find themselves transfixed not by distant stars, but by the awe-inspiring view their spaceship gave them of the place they had just left - a dazzling, mysterious planet they affectionally nicknamed the "Big Blue Marble." As our horizons expanded into the universe, so did our perspective and understanding of the place we call home. As an astronaut on an international Space Shuttle crew put it, "The first day or so we all pointed to our countries. The third or fourth day we were pointing to our continents. By the fifth day we were aware of only one Earth."38
Satellites have given this perspective to all of us, expanding our horizons and deepening our understanding of the planet we inhabit. If the world is suddenly a smaller place, with cellular phones, paging systems, and Internet service connecting friends from distant lands, it's because satellites have advanced our communication abilities far beyond anything Alexander Graham Bell ever imagined. If we have more than a few hours' notice of hurricanes or storm fronts, it's because weather satellites have enabled meteorologists to better understand the dynamics of weather systems and track those systems as they develop around the world. If we can detect and correct damage to our ozone layer or give advance warning of a strong El Nino winter, it's because satellites have helped scientists better understand the changing dynamics of our atmosphere and our oceans.
We now understand that our individual "homes" are affected by events on the far side of the globe. From both a climatic and environmental perspective, we have realized that our home is indeed "one Earth," and we need to look at its entirety in order to understand and protect it. The practical implications of this information sometimes make the scientific pursuit of this understanding more complicated than our explorations into the deeper universe. But no one would argue the inherent worth of the information or the advantages satellites offer.
The satellites developed by Goddard and its many partners have expanded both our capabilities and our understanding of the complex processes within our Earth's atmosphere. Those efforts may be slightly less mind-bending than our search for space-time anomalies or unexplainable black holes, but they are perhaps even more important. After all, there may be millions of galaxies in the universe. But until we find a way to reach them, this planet is the only one we have. And the better we understand it, the better our chances are of preserving it - not only for ourselves, but for the generations to come. | http://history.nasa.gov/SP-4312/ch5.htm | 13 |
22 | PHP - Operators
In all programming languages, operators are used to manipulate or perform
operations on variables and values. You have already
seen the string concatenation operator "." in the Echo
Lesson and the assignment operator "=" in pretty much every PHP example so far.
There are many operators used in PHP, so we have separated them into
the following categories to make it easier to learn them all.
- Assignment Operators
- Arithmetic Operators
- Comparison Operators
- String Operators
- Combination Arithmetic & Assignment Operators
Assignment operators are used to set a variable equal to a value or set
a variable to another variable's value. Such an assignment of value is done with
the "=", or equal character. Example:
- $my_var = 4;
- $another_var = $my_var;
Now both $my_var and $another_var contain the value 4. Assignments can also be used in conjunction with arithmetic operators.
|+ ||Addition ||2 + 4|
|- ||Subtraction ||6 - 2|
|* ||Multiplication ||5 * 3|
|/ ||Division ||15 / 3|
|% ||Modulus ||43 % 10|
$addition = 2 + 4;
$subtraction = 6 - 2;
$multiplication = 5 * 3;
$division = 15 / 3;
$modulus = 5 % 2;
echo "Perform addition: 2 + 4 = ".$addition."<br />";
echo "Perform subtraction: 6 - 2 = ".$subtraction."<br />";
echo "Perform multiplication: 5 * 3 = ".$multiplication."<br />";
echo "Perform division: 15 / 3 = ".$division."<br />";
echo "Perform modulus: 5 % 2 = " . $modulus
. ". Modulus is the remainder after the division operation has been performed.
In this case it was 5 / 2, which has a remainder of 1.";
Perform addition: 2 + 4 = 6
Perform subtraction: 6 - 2 = 4
Perform multiplication: 5 * 3 = 15
Perform division: 15 / 3 = 5
Perform modulus: 5 % 2 = 1. Modulus is the remainder after the division operation has been performed.
In this case it was 5 / 2, which has a remainder of 1.
Comparisons are used to check the relationship between variables and/or
values. If you would like to see a simple example of a comparison operator in action, check out our
If Statement Lesson. Comparison
operators are used inside conditional statements and evaluate to either true
or false. Here are the most important comparison operators of PHP.
Assume: $x = 4 and $y = 5;
|Operator||English ||Example ||Result|
| == ||Equal To ||$x == $y ||false|
| != ||Not Equal To ||$x != $y ||true|
| < ||Less Than ||$x < $y ||true|
| > ||Greater Than ||$x > $y ||false|
| <= ||Less Than or Equal To ||$x <= $y ||true|
| >= ||Greater Than or Equal To ||$x >= $y ||false|
As we have already seen in the Echo
Lesson, the period "." is used to add two strings together, or more technically,
the period is the concatenation operator for strings.
$a_string = "Hello";
$another_string = " Billy";
$new_string = $a_string . $another_string;
echo $new_string . "!";
Combination Arithmetic & Assignment Operators
In programming it is a very common task to have to increment a variable by some fixed amount. The
most common example of this is a counter. Say you want to increment a counter by 1, you would
However, there is a shorthand for doing this.
This combination assignment/arithmetic operator would accomplish the same task. The downside to this
combination operator is that it reduces code readability to those programmers who are not used to such
an operator. Here are some examples of other common
shorthand operators. In general, "+=" and "-=" are the most widely used combination operators.
|Operator||English ||Example ||Equivalent Operation|
|+=||Plus Equals ||$x += 2; ||$x = $x + 2;|
|-=||Minus Equals ||$x -= 4; ||$x = $x - 4;|
|*=||Multiply Equals ||$x *= 3; ||$x = $x * 3;|
|/=||Divide Equals ||$x /= 2; ||$x = $x / 2;|
|%=||Modulo Equals ||$x %= 5; ||$x = $x % 5;|
|.=||Concatenate Equals ||$my_str.="hello"; ||$my_str = $my_str . "hello"; |
Pre/Post-Increment & Pre/Post-Decrement
This may seem a bit absurd, but there is even a shorter shorthand for the common
task of adding 1 or subtracting 1 from a variable. To add one to a variable or "increment"
use the "++" operator:
- $x++; Which is equivalent to $x += 1; or $x = $x + 1;
To subtract 1 from a variable, or "decrement" use the "--" operator:
- $x--; Which is equivalent to $x -= 1; or $x = $x - 1;
In addition to this "shorterhand" technique, you can specify whether you
want to increment before the line of code is being executed or after the
line has executed. Our PHP code below will display the difference.
$x = 4;
echo "The value of x with post-plusplus = " . $x++;
echo "<br /> The value of x after the post-plusplus is " . $x;
$x = 4;
echo "<br />The value of x with with pre-plusplus = " . ++$x;
echo "<br /> The value of x after the pre-plusplus is " . $x;
The value of x with post-plusplus = 4
The value of x after the post-plusplus is = 5
The value of x with with pre-plusplus = 5
The value of x after the pre-plusplus is = 5
As you can see the value of $x++ is not reflected in the echoed text because
the variable is not incremented until after the line of code is executed. However,
with the pre-increment "++$x" the variable does reflect the addition immediately.
Download Tizag.com's PHP Book
If you would rather download the PDF of this tutorial, check out our
PHP eBook from the Tizag.com store.
Print it out, write all over it, post your favorite lessons all over your wall!
Found Something Wrong in this Lesson?
Report a Bug or Comment on This Lesson - Your input is what keeps Tizag improving with time! | http://www.tizag.com/phpT/operators.php | 13 |
12 | The asteroid belt surrounds the inner solar system like a rocky, ring-shaped moat, extending out from the orbit of Mars to that of Jupiter. But there are voids in that moat, most notably where the orbital influence of Jupiter is especially potent; any asteroid unlucky enough to venture into one of those so-called Kirkwood gaps (named for mathematician Daniel Kirkwood) will be perturbed and ejected from the cozy confines of the belt, often winding up on a collision course with one of the inner, rocky planets (such as Earth) or the moon.
But Jupiter's pull cannot account for the extent of the belt's depletion today or for the spotty distribution of asteroids across the belt—unless there was a migration of planets early in the history of the solar system, according to new research.
Study co-authors David Minton and Prof. Renu Malhotra, planetary scientists at the University of Arizona's Lunar and Planetary Laboratory in Tucson, report in Nature today that an orbital migration of Jupiter and Saturn four billion years ago may explain the observed distribution of asteroids.
The researchers designed a computer model of the asteroid belt under the influence of the outer "gas giant" planets, allowing them to test the distribution that would result from changes in the planets' orbits over time. A simulation wherein the orbits remained static, Minton says, did not agree with observational evidence. "There were places," he says, "where there should have been a lot more asteroids than we saw."
On the other hand, a simulation with an early migration of Jupiter inward and Saturn outward, the result of interactions with lingering planetesimals (small bodies) from the creation of the solar system, fit the observed layout of the belt much better. The uneven spacing of asteroids "is readily explained by this planet-migration process that other people have worked on," says Minton, a graduate student. In particular, "if Jupiter had started somewhat farther from the sun and then migrated inward toward its current location," the gaps it carved into the belt would also have inched inward, leaving the belt looking much like it does now.
Joseph Hahn, a specialist in planetary dynamics at the Space Science Institute in Boulder, Colo., says that the new research bolsters the case for early planetary migration. "The good agreement between the simulated and observed asteroid distributions," he says, "is actually quite remarkable." Jack Wisdom, a planetary scientist at the Massachusetts Institute of Technology, says that most in the field buy into the planetary-migration theory in general. "The really interesting question, not addressed in this paper, is the pattern of migration," he says—whether the asteroid belt can be used to rule out one of the competing theories of migratory patterns.
One issue raised by the new study, Hahn says, is the speed at which the planets' orbits changed. Minton and Malhotra's simulation presumes a rather rapid migration of a million or two million years, but "other models of Neptune's early orbital evolution tend to show that migration proceeds much more slowly," over tens of millions of years, Hahn says. "I suspect that follow-up studies of the solar system's early history will also have to reconcile these two very different timescales, which will hopefully lead to greater understanding of the solar system's early evolution." | http://www.scientificamerican.com/article.cfm?id=asteroid-belt-planet-migration | 13 |
17 | The basics.Amplifiers have an input impedance. That is the impedance by which it loads the signal source. The nominal impedance is something else. That is the source impedance that the amplifier is designed for. The input impedance may be much higher than the nominal inpedance.
Amplifiers also have an output impedance. It may be much lower than the nominal impedance. (Consider an audio amplifier for example.)
This page and sub-pages linked to from it discusses RF amplifiers that are designed for use in a 50 ohm system. In other words, the nominal impedance is 50 ohms for input as well as for output. For a power matched amplifier with input impedance as well as output impedance equal to 50 ohms the concept of gain is trivial. A power matched amplifier that provides 20 dB gain will deliver 100 times more power to the load than it absorbs from the source and the voltage on the output will be 10 times higher than the voltage on the input.
There is no need for 50 ohm amplifiers to be matched however. Amplifiers vith very high input impedance may have attractive properties. Such an amplifier has a near infinite standing wave ratio on the input. Return loss close to zero. The voltage across the input is twice as large as for a matched amplifier since no energy is absorbed. Even negative impedances are possible. They provide return gain. (Negative return loss.) The gain when using unmatched amplifiers is defined as the power ratio in a 50 ohm load when driven from a 50 ohm source with respectively without the amplifier.
Noise.A signal source can be the 50 ohm connector of an antenna, a signal generator or something else. All signal sources have a noise floor associated with the temperature of the source. Signal generators are typically at room temperature and have a noise floor of -174 dBm per Hz of bandwidth. An ideal amplifier just amplifies the signals presented by the source. Real world amplifiers also add noise. It is desireable to have amplifiers that add negligible noise compared to the noise present in the signal source itself.
A room temperature source with the temperature 290 K will be degraded by as many dB as given by the noise figure of the system used to receive it. (This is the definition of NF.) It is often assumed that 1 dB is insignificant and for that reason a noise figure of 1 dB is usually considered adequate in terrestrial communication. An amplifier with NF=1dB has a noise temperature of 75 K. The ratio (290+75)/290 = 1.258 which is the power ratio when the actual amplifier that provides a noise power of (290+75) K is compared with an ideal amplifier that does not add any noise at all. 10 * log(1.258) = 1.00 That is the NF.
A good microwave antenna for with a noise temperature of 10 K would need an amplifier with a noise temperature of 2.58 K to only cause a 1 dB degradation ( = loss of S/N.) That corresponds to a noise figure of about 0.0038 dB. Very low noise temperatures can be observed with good antennas that are used in space communication, radio astronomy and among radio amateurs for EME, signals reflected off the moon.
Read this article AMPLIFIER NOISE TEMPERATURES by Chuck MacCluer W8MQW for a more stringent presentation of the noise problem in amplifiers. Chuck has also made this table available: Conversion for Noise Temperature Te (K) to Noise Figure de KB2AH
Conventional measurements of amplifier noise figures.Automatic noise figure meters have been available since at least Jan 1961. That is the oldest description of the automatic Noise Figure Meter type 113B in my possession. It was manufactured in Sweden by Magnetic AB. At that time vacuum diodes or gas discharge tubes were used as noise sources. Today the automatic instruments use semiconductor diodes.
This document by Chuck MacCluer W8MQW MEASURING NOISE FIGURES presents the basic theory for noise figure measurements. Automatic or manual.
The measurement of a preamp's noise figure, or noise temperature, was discussed by Rainer Bertelsmeier, DJ9BV Myths and Facts about Preamp Tuning in Dubus long ago. Agilent has made this article available: http://cp.literature.agilent.com/litweb/pdf/5952-3706E.pdf
In all, the use of a standard instrument such as an Agilent HP8970A NF measuring set in conjunction with a HP346A 6dB ENR Noise Source can give serious errors and tuning an amplifier for the optimum reading may not lead to the optimum noise figure. The most important error is caused by the variation of the source impedance between hot and cold states. This problem should be eliminated by use of isolators/circulators. The absolute accuracy could be lower, but different amplifiers would have the same error provided that they all have a bandwidth well above the bandwidth of the NF meter.
Measurement of noise figures using Linrad.The most important measurement is the one we do while tweaking the amplifier for optimum performance. By measuring S/N for a signal that is sent through a room temperature attenuator one can totally eliminate the problem of impedance changes between hot and cold. There is only one temperature and the signal is present all the time.
A similar method has been available since 50 years or more, but not much used. Tweak amplifiers for minimum noise in FM mode on a stable, but weak carrier. This is an old clever method. Much better than using NF meters. It guarantees the optimum result - but it will not give any information about the absolute NF value. It is useful for comparing amplifiers however although one has to take into account that real life FM detectors are not ideal so one has to make sure that the signal level is the same all the time.
When using Linrad (version 03-41 or later) one can select to display S/N in the S-meter graph. S is computed with a narrow bandwidth using the baseband filter while N is computed from the full bandwidth of the hardware while excluding narrowband signals (spurs that may be present.) The signal can be evaluated in a narrow enough bandwidth to avoid any noise contribution. Since the noise is measured at the full bandwidth all of the time this method is a factor of four faster than conventional NF meters with a noise head because with them 50% of the time is spent on each of the noise head states and the result is a difference between two power levels. With a stable sine-wave, the power level can be evaluated with a very small uncertainty since very little noise would be present within the very small evaluation bandwidth for the signal. There is a pitfall however. One has to make sure that the frequency response is flat over the selected measurement bandwidth. When tuning selective preamplifiers one would tune for a narrower bandwidth rather than for a lower NF when comparing a signal at the passband center with the noise power over a wide bandwidth. For this reason one may need to use fairly small measurement bandwidths.
When evaluating the noise floor by use of a weak signal it is of course essential that the amplitude of the test signal does not vary with time. With normal commercial signal generators this is not a problem. One can do absolute measurenments by sending the signal through an attenuator and by varying the temperature of the attenuator.
The method and a first measurement of the NF of an amplifier using the PSA4-5043 MMIC from Minicircuits was presented here july 16 2012. using Linrad-03.41pre.
This link A study of several low noise amplifiers with Linrad-03.41 discusses the method in more detail.
It is repeated with better accuracy here: A study of several low noise amplifiers with Linrad-03.42
Tweaking low noise amplifiers.When tweaking amplifiers in a way that changes the frequency response one would get incorrect results if the noise bandwidth of the measurement system is larger than the flat bandwidth of the amplifier being optimized.
It will not be a good idea to use the 1.8 MHz bandwidth of a rtl-sdr for optimizing the frequency determining components in LNAs. Tuning currents and voltages is fine - and fast however.
By selecting a fairly narrow bandwidth for the noise measurement one can get NF directly on the S/N graph to use for tweaking amplifiers for optimum performance. For details, look here: Linrad as a NF meter. The differences in S/N directly give the system NF with a vey high accuracy but only if interferences can not leak into the test object or some other point in the signal path. The measurements with Linrad as a NF meter are not consistent with the results from using Linrad for hot/cold measurements. There are errors up to 0.1 dB which is far above the expected accuracy. Using water to keep the temperature constant requires some skill in water-proofing. These mesurements give the NF for a SWR of about 1.5 due to water inside the attenuator. Better screening in some cases and replacement of old cables having connectors with silvered center pins are other problems that affected the results. The link shows that accurate results are possible, and gives an idea on the problems one might encounter when trying to measure low noise figures very accurately.
This link S/N differences at 50 ohms. is a repetition of the experiment in the previous link. Here the attenuator is a precision attenuator carefully matched to 50 ohms. S/N is also measured at SWR=1.5. | http://www.sm5bsz.com/lir/nf/nf.htm | 13 |
Basic Electric Circuits: Thevenin's and Norton's Theorems (Lesson 10)

THEVENIN'S THEOREM

Consider the following arrangement: Network 1 is coupled to Network 2 through a pair of terminals A-B (Figure 10.1, Coupled networks). For purposes of discussion at this point, we consider that both networks are composed of resistors and independent voltage and current sources.

Suppose Network 2 is detached from Network 1 and we focus temporarily only on Network 1.
Figure 10.2 shows Network 1 open-circuited at terminals A-B. Network 1 can be as complicated in structure as one can imagine: maybe 45 meshes, 387 resistors, 91 voltage sources and 39 current sources.

Now place a voltmeter across terminals A-B and read the voltage. We call this the open-circuit voltage. No matter how complicated Network 1 is, we read one voltage. It is either positive at A (with respect to B) or negative at A. We call this voltage VOS, and we also call it VTHEVENIN, or VTH.
We now deactivate all sources of Network 1. To deactivate a voltage source, we remove the source and replace it with a short circuit. To deactivate a current source, we remove the source and replace it with an open circuit.
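As a small numerical illustration of these two rules, consider a hypothetical two-resistor voltage divider; this circuit and its component values (VS, R1, R2) are assumptions made for the sketch, not one of the lesson's figures. A source VS in series with R1 feeds terminal A, and R2 is connected from A to B. The open-circuit voltage at A-B is the divider voltage, and with VS replaced by a short circuit the resistance seen looking back into A-B is R1 in parallel with R2.

```python
# Hypothetical divider (assumed values, not a figure from this lesson):
# VS in series with R1 up to terminal A, R2 from A down to terminal B.
VS = 12.0   # assumed source voltage, volts
R1 = 20.0   # assumed series resistance, ohms
R2 = 20.0   # assumed shunt resistance, ohms

# Open-circuit (Thevenin) voltage at A-B: no load current flows, so it is
# simply the voltage-divider value.
VTH = VS * R2 / (R1 + R2)

# Deactivate the voltage source (replace it with a short circuit) and look
# into A-B: R1 and R2 then appear in parallel.
RTH = R1 * R2 / (R1 + R2)

print(f"VTH = {VTH:.2f} V")      # 6.00 V for the assumed values
print(f"RTH = {RTH:.2f} ohms")   # 10.00 ohms for the assumed values
```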
Consider the following circuit (Figure 10.3, A typical circuit with independent sources). How do we deactivate the sources of this circuit?

When the sources are deactivated, the circuit appears as in Figure 10.4 (Circuit of Figure 10.3 with sources deactivated). Now place an ohmmeter across A-B and read the resistance. If R1 = R2 = R4 = 20 Ω and R3 = 10 Ω, then the meter reads 10 Ω.

We call the ohmmeter reading under these conditions RTHEVENIN, and shorten this to RTH. Therefore the important result is that we can replace Network 1 with the following network (Figure 10.5, The Thevenin equivalent structure): the Thevenin voltage VTH in series with the Thevenin resistance RTH.

We can now tie (reconnect) Network 2 back to terminals A-B (Figure 10.6, System of Figure 10.1 with Network 1 replaced by the Thevenin equivalent circuit). We can now make any calculations we desire within Network 2, and they will give the same results as if we still had Network 1 connected.

It follows that we could also replace Network 2 with a Thevenin voltage and Thevenin resistance. The results would be as shown in Figure 10.7 (The network system of Figure 10.1 replaced by Thevenin voltages and resistances).

Example 10.1. Find VX by first finding VTH and RTH to the left of A-B (Figure 10.8, Circuit for Example 10.1). First remove everything to the right of A-B.

Figure 10.9 shows the circuit for finding VTH for Example 10.1. Notice that there is no current flowing in the 4 Ω resistor, since A-B is open. Thus there can be no voltage across that resistor.

We now deactivate the sources to the left of A-B and find the resistance seen looking into these terminals, RTH (Figure 10.10, Circuit for finding RTH for Example 10.1). We see that RTH = (12 Ω in parallel with 6 Ω) + 4 Ω = 4 Ω + 4 Ω = 8 Ω.

After having found the Thevenin circuit, we connect it to the load in order to find VX (Figure 10.11, Circuit of Example 10.1 after connecting the Thevenin circuit).

In some cases it may become tedious to find RTH by reducing the resistive network with the sources deactivated. Consider Figure 10.12, a Thevenin circuit with the output shorted. We see (Eq. 10.1) that RTH = VOS/ISS, the ratio of the open-circuit voltage to the short-circuit current.

Example 10.2. For the circuit in Figure 10.13 (Given circuit with load shorted), find RTH by using Eq. 10.1. The task now is to find ISS. One way to do this is to replace the circuit to the left of C-D with a Thevenin voltage and Thevenin resistance.

Applying Thevenin's theorem to the left of terminals C-D and reconnecting to the load gives the circuit of Figure 10.14 (Thevenin reduction for Example 10.2).

Example 10.3. For the circuit of Figure 10.15 (Circuit for Example 10.3), find VAB by first finding the Thevenin circuit to the left of terminals A-B. We first find VTH with the 17 Ω resistor removed. Next we find RTH by looking into terminals A-B with the sources deactivated.

Figure 10.16 shows the circuit for finding VOC for Example 10.3, and Figure 10.17 shows the circuit for finding RTH for Example 10.3.
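As a quick arithmetic check of the series-parallel reduction quoted in Example 10.1: the garbled expression on the original slide is read here as 12 Ω in parallel with 6 Ω, in series with 4 Ω, which is an assumption since Figure 10.10 itself is not reproduced in this transcript. The helper name `parallel` is introduced only for this sketch; the same helper applies to reductions like the one called for in Example 10.3 once the resistor values are read from Figure 10.15.

```python
def parallel(*resistors):
    """Equivalent resistance, in ohms, of resistors connected in parallel."""
    return 1.0 / sum(1.0 / r for r in resistors)

# Example 10.1, as read here: with the sources deactivated, the 12-ohm and
# 6-ohm resistors appear in parallel, and that combination is in series
# with the 4-ohm resistor seen at terminals A-B.
RTH = parallel(12, 6) + 4
print(RTH)  # 8.0 ohms, matching the value quoted on the slide
```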
19 21 THEVENIN NORTON THEVENINS THEOREM Example 10.3 continued Figure 10.18 Thevenin reduced circuit for Example 10.3. We can easily find that 20 22 THEVENIN NORTON THEVENINS THEOREM Example 10.4 Working with a mix of independent and dependent sources. Find the voltage across the 100 load resistor by first finding the Thevenin circuit to the left of terminals A-B. Figure 10.19 Circuit for Example 10.4 21 23 THEVENIN NORTON THEVENINS THEOREM Example 10.4 continued First remove the 100 load resistor and find VAB VTH to the left of terminals A-B. Figure 10.20 Circuit for find VTH Example 10.4. 22 24 THEVENIN NORTON THEVENINS THEOREM Example 10.4 continued To find RTH we deactivate all independent sources but retain all dependent sources as shown in Figu re 10.21. Figure 10.21 Example 10.4 independent sources deactivated. We cannot find RTH of the above circuit as it stands. We must apply either a voltage or curre nt source at the load and calculate the ratio of this voltage to current to find RTH. 23 25 THEVENIN NORTON THEVENINS THEOREM Example 10.4 continued Figure 10.22 Circuit for find RTH Example 10.4. Around the loop at the left we write the following equation From which 24 26 THEVENIN NORTON THEVENINS THEOREM Example 10.4 continued Figure 10.23 Circuit for find RTH Example 10.4. Using the outer loop going in the cw direction using drops or 25 27 THEVENIN NORTON THEVENINS THEOREM Example 10.4 continued The Thevenin equivalent circuit tied to the 100 load resistor is shown below. Figure 10.24 Thevenin circuit tied to load Example 10.4. 26 28 THEVENIN NORTON THEVENINS THEOREM Example 10.5 Finding the Thevenin circuit when only resistors and dependent sources are present. Consider the circ uit below. Find Vxy by first finding the Theveni n circuit to the left of x-y. Figure 10.25 Circuit for Example 10.5. For this circuit it would probably be easier to use mesh or nodal analysis to find Vxy. However the purpose is to illustrate Thevenins theorem. 27 29 THEVENIN NORTON THEVENINS THEOREM Example 10.5 continued We first reconcile that the Thevenin voltage for this circuit must be zero. There is no juice in the circuit so there cannot be any open circuit voltage except zero. This is always true when the circuit is made up of only dependent sources and resistors. To find RTH we apply a 1 A source and determine V for the circuit below. Figure 10.26 Circuit for find RTH Example 10.5. 30 THEVENIN NORTON THEVENINS THEOREM Example 10.5 continued Figure 10.27 Circuit for find RTH Example 10.5. Write KVL around the loop at the left starting at m going cw using drops 29 31 THEVENIN NORTON THEVENINS THEOREM Example 10.5 continued Figure 10.28 Determining RTH for Example 10.5. We write KVL for the loop to the right starting at n using drops and find or 32 THEVENIN NORTON THEVENINS THEOREM Example 10.5 continued We know that where V 50 and I 1. Thus RTH 50 . The Thevenin circuit tied to the load is given below. Figure 10.29 Thevenin circuit tied to the load Example 10.5. Obviously VXY 50 V 31 33 THEVENIN NORTON NORTONS THEOREM Assume that the network enclosed below is composed of independent sources and resistors. Network Nortons Theorem states that this network can be replaced by a current source shunted by a resistance R. 33 34 THEVENIN NORTON NORTONS THEOREM In the Norton circuit the current source is the short circuit current of the network that is th e current obtained by shorting the output of the network. The resistance is the resistance seen looking into the network with all sources deactivated. 
This is the same as RTH. 35 THEVENIN NORTON NORTONS THEOREM We recall the following from source transformations. In view of the above if we have the Thevenin equivalent circuit of a network we can obtain th e Norton equivalent by using source transformatio n. However this is not how we normally go about finding the Norton equivalent circuit. 34 36 THEVENIN NORTON NORTONS THEOREM Example 10.6. Find the Norton equivalent circuit to the left of terminals A-B for the network shown below. Conne ct the Norton equivalent circuit to the load and find the current in the 50 resistor. Figure 10.30 Circuit for Example 10.6. 35 37 THEVENIN NORTON NORTONS THEOREM Example 10.6. continued Figure 10.31 Circuit for find INORTON. It can be shown by standard circuit analysis that 36 38 THEVENIN NORTON NORTONS THEOREM Example 10.6. continued It can also be shown that by deactivating the sources We find the resistance looking into term inals A-B is RN and RTH will always be the same value for a given circuit. The Norton equivalent circuit tied to the load is shown below. Figure 10.32 Final circuit for Example 10.6. 37 39 THEVENIN NORTON NORTONS THEOREM Example 10.7. This example illustrates how one might use Nortons Theorem in electronics. the following circuit comes close to representing the model of a transistor. For the circuit shown below find the Norton equivalent circuit to the left of terminals A-B. Figure 10.33 Circuit for Example 10.7. 38 40 THEVENIN NORTON NORTONS THEOREM Example 10.7. continued We first find We first find VOS 39 41 THEVENIN NORTON NORTONS THEOREM Example 10.7. continued Figure 10.34 Circuit for find ISS Example 10.7. We note that ISS - 25IS. Thus 40 42 THEVENIN NORTON NORTONS THEOREM Example 10.7. continued Figure 10.35 Circuit for find VOS Example 10.7. From the mesh on the left we have From which 41 43 THEVENIN NORTON NORTONS THEOREM Example 10.7. continued We saw earlier that Therefore The Norton equivalent circuit is shown below. Norton Circuit for Example 10.7 42 44 THEVENIN NORTON Extension of Example 10.7 Using source transformations we know that the Thevenin equivalent circuit is as follows Figure 10.36 Thevenin equivalent for Example 10.7. 43 45 circuits End of Lesson 10 Thevenin and Norton
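The following short Python sketch is not part of the original slides; it simply illustrates the two routes to RTH discussed above. The 12/6/4-ohm reduction mirrors the numbers quoted for Example 10.1, while the open-circuit voltage and short-circuit current are made-up placeholders.

# Two ways of getting the Thevenin resistance, as described in the lesson.

def parallel(*resistors):
    """Equivalent resistance of resistors connected in parallel."""
    return 1.0 / sum(1.0 / r for r in resistors)

# Method 1: deactivate the sources and reduce the resistive network.
r_th = parallel(12.0, 6.0) + 4.0          # (12 || 6) + 4 = 8 ohms

# Method 2 (Eq. 10.1): R_TH = V_OS / I_SS, open-circuit voltage over
# short-circuit current.  Example numbers only (assumed values).
v_os = 40.0                                # volts (assumed)
i_ss = 5.0                                 # amps  (assumed)
r_th_eq101 = v_os / i_ss                   # 8 ohms

# Loading the Thevenin circuit: voltage across a load R_L by voltage division.
r_load = 8.0
v_x = v_os * r_load / (r_th + r_load)
print(r_th, r_th_eq101, v_x)               # 8.0 8.0 20.0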
| http://www.powershow.com/view/25f58-YTk5Y/Basic_Electric_Circuits_powerpoint_ppt_presentation | 13
12 | René Descartes (1596–1650)
Discourse on the Method
Discourse on the Method is Descartes’ attempt to explain his method of reasoning through even the most difficult of problems. He illustrates the development of this method through brief autobiographical sketches interspersed with philosophical arguments.
Part 1 contains “various considerations concerning the sciences.” First, all people possess “good sense,” the ability to distinguish truth from fiction. Therefore, it is not a lack of ability that obstructs people but their failure to follow the correct path of thought. The use of a method can elevate an average mind above the rest, and Descartes considered himself a typical thinker improved by the use of his method. Descartes benefited from a superior education, but he believed that book learning also clouded his mind. After leaving school, he set off traveling to learn from “the great book of the world” with an unclouded mind. He comes to the conclusion that all people have a “natural light” that can be obscured by education and that it is as important to study oneself as it is to study the world.
In part 2, Descartes describes his revelation in the “stove-heated room.” Contemplating various subjects, he hits on the idea that the works of individuals are superior to those conceived by committee because an individual’s work follows one plan, with all elements working toward the same end. He considers that the science he learned as a boy is likely flawed because it consists of the ideas of many different men from various eras. Keeping in mind what he has learned of logic, geometry, and algebra, he sets down the following rules: (1) to never believe anything unless he can prove it himself; (2) to reduce every problem to its simplest parts; (3) to always be orderly in his thoughts and proceed from the simplest part to the most difficult; and (4) to always, when solving a problem, create a long chain of reasoning and leave nothing out. He immediately finds this method effective in solving problems that he had found too difficult before. Still fearing that his own misconceptions might be getting in the way of pure reason, he decides to systematically eliminate all his wrong opinions and use his new method exclusively.
In part 3, Descartes puts forth a provisional moral code to live by while rethinking his views: (1) to obey the rules and customs of his country and his religion and never take an extreme opinion; (2) to be decisive and stick with his decisions, even if some doubts linger; (3) to try to change himself, not the world; and (4) to examine all the professions in the world and try to figure out what the best one is. Not surprising, Descartes determines that reasoning and searching for the truth is, if not the highest calling, at least extremely useful. For many years after his revelation, Descartes traveled widely and gained a reputation for wisdom, then retired to examine his thoughts in solitude.
In part 4, Descartes offers proofs of the existence of the soul and of God. Contemplating the nature of dreams and the unreliability of the senses, he becomes aware of his own process of thinking and realizes it is proof of his existence: I think, therefore I exist (Cogito ergo sum). He also concludes that the soul is separate from the body based on the unreliability of the senses as compared with pure reason. His own doubts lead him to believe that he is imperfect, yet his ability to conceive of perfection indicates that something perfect must exist outside of him—namely, God. He reasons that all good things in the world must stem from God, as must all clear and distinct thoughts.
Part 5 moves from discussion of a theory of light to theories about human anatomy. Descartes considers the fact that animals have many of the same organs as humans yet lack powers of speech or reason. He takes this difference to be evidence of humankind’s “rational soul.” He considers the mysterious connection of the soul to the body and concludes that the soul must have a life outside the body. Therefore it must not die when the body dies. Because he cannot conceive of a way that the soul could perish or be killed, he is forced to conclude that the soul is immortal.
In part 6, Descartes cautiously touches on possible conflicts with the church over his ideas about physical science. Finally, he implores his readers to read carefully, apologizes for writing in French rather than Latin, and vows to shun fame and fortune in the name of pursuing truth and knowledge.
Discourse on the Method (1637) was Descartes’ first published work. He wrote the book in French rather than Latin, the accepted language of scholarship at the time, because he intended to explain complex scientific matters to people who had never studied them before.
Descartes’ education was based on the Aristotelian model of reasoning, which held that scientific knowledge is deduced from fixed premises. This model is based on the syllogism, in which one starts with a major premise (“Virtues are good”) and a minor premise (“kindness is a virtue”), then draws a conclusion from the two (“therefore, kindness must be good”). Descartes wondered whether he could be certain of the premises he had been taught. He was reasonably convinced of the certainty of mathematics (at which he excelled), but the other sciences seemed shaky to him because they were based on philosophical models rather than rational tests, which seemed to Descartes the only sound method of discovery. His revolutionary step was to attempt to solve problems in the sciences and philosophy by applying the rules of mathematics. His work, however, is remembered for his development of a method rather than his work in the physical sciences, which is now considered flawed and obsolete.
Descartes initiated a major shift away from Aristotle with the notion that individuals should examine problems for themselves rather than relying on tradition. The four rules for individual inquiry he outlines in Part Two are a summary of the thirty-six rules he intended to publish as Rules for the Direction of the Mind (published posthumously). In essence, the first rule is about avoiding the prejudices that come with age and education. The second rule is a call for breaking every problem into its most basic parts, a practice that signals the shift from the traditional approach to science into an approach more in line with mathematics. The third rule is about working from simple elements to the more complicated elements—what math teachers call “order of operations.” The fourth rule prescribes attention to detail.
Descartes’ imposition of this method on scientific inquiry signals the break between Aristotelian thought and continental rationalism, a philosophical movement that spread across parts of Europe in the seventeenth and eighteenth centuries, of which Descartes is the first exemplar. Aristotelian science, like rationalism, proceeds from first principles that are assumed to be absolutely true. Aristotelians, like Descartes, proceed from those first principles to deduce other truths. However, the principle truths accepted by Aristotelians are less certain than the ones Descartes hopes to establish. By undertaking to doubt everything that cannot be deduced with pure reason, Descartes undermines the Aristotelian method. For centuries, scholars had based their philosophy on sense perception in combination with reason. Descartes’ new philosophy instead proceeds from doubt and the denial of sensory experience.
Continental rationalism held that human reason was the basis of all knowledge. Rationalists claimed that if one began with intuitively understood basic principles, like Descartes’ axioms of geometry, one could deduce the truth about anything. Descartes’ method is now used most often in algebraic proofs, geometry, and physics. The gist of the method is that, when attempting to solve a problem, we have to formulate some sort of equation.
Descartes’ moral rules demonstrate both his distrust of the material world and his confidence in his mind’s ability to overcome it. He has near-absolute faith in his ability to control his own mind and believes that he only needs to change it to change reality. If he wants something he can’t have, he won’t struggle to get it or be miserable about not having it. Instead, he’ll just decide not to want it. Descartes’ resolution to become a spectator rather than an actor in the events of the world around him amounts almost to a renunciation of his physical existence. Long after Descartes, scientific study was governed by the ideal of detached observation advanced by Descartes.
Part Four of Discourse is a precursor to his later work, Meditations on First Philosophy, and the major ideas he provides here—that the self exists because it thinks and that God exists because the self is imperfect and there must be a source for the idea of perfection outside the self—are mere sketches of the detailed explanation he provides in Meditations.
| http://www.sparknotes.com/philosophy/descartes/section1.rhtml | 13
10 | Let us study fractions with denominator 10. Look at the following figure.
In the above figure, the blocks are 1 black, 2 green, 3 blue, 4 red.
So, the portion of each colour compared to the whole is black = 1⁄10; green = 2⁄10; blue = 3⁄10; red = 4⁄10;
black + red = 1⁄10 + 4⁄10 = 5⁄10; green + red = 2⁄10 + 4⁄10 = 6⁄10; blue + red = 3⁄10 + 4⁄10 = 7⁄10; black + blue + red = 1⁄10 + 3⁄10 + 4⁄10 = 8⁄10; green + blue + red = 2⁄10 + 3⁄10 + 4⁄10 = 9⁄10;
These fractions with a denominator of 10 have a special name.
The fractions with 10 as denominator are called tenths.
We know 10 millimetres (mm) = 1 centimetre (cm) or 1 mm = 1⁄10 cm or one-tenth cm; or 3 mm = 3⁄10 cm or three-tenths cm
We also write the fractional number 1⁄10 (one-tenth) as .1 read as decimal one or point one.
Similarly, we write the fractional number 2⁄10 (two-tenth) as .2 read as point two.
Likewise, 3⁄10 (three-tenth) = .3 read as point three. 4⁄10 (four-tenth) = .4 read as point four. 5⁄10 (five-tenth) = .5 read as point five. 6⁄10 (six-tenth) = .6 read as point six.
7⁄10 (seven-tenth) = .7 read as point seven. 8⁄10 (eight-tenth) = .8 read as point eight. 9⁄10 (nine-tenth) = .9 read as point nine.
The fractions with 100 as denominator are called hundredths.
We know 100 centimetres (cm) = 1 metre (m) or 1 cm = 1⁄100 m or one-hundredth m; or 73 cm = 73⁄100 m or seventy three-hundredth m
100 cents = 1 dollar or 1 cent = 1⁄100 dollar ; or 37 cents = 37⁄100 dollar
We denote hundredths by two digits after the point
We write the fractional number 1⁄100 (one-hundredth) as .01 read as decimal zero one or point zero one.
Similarly, we write the fractional number 2⁄100 (two-hundredth) as .02 read as point zero two.
Likewise, 23⁄100 (twenty three-hundredth) = .23 read as point two three. 54⁄100 (fifty four-hundredth) = .54 read as point five four. 35⁄100 (thirty five-hundredth) = .35 read as point three five. 96⁄100 (ninety six-hundredth) = .96 read as point nine six. 47⁄100 (forty seven-hundredth) = .47 read as point four seven. 88⁄100 (eighty eight-hundredth) = .88 read as point eight eight. 19⁄100 (nineteen-hundredth) = .19 read as point one nine.
The fractions with 1000 as denominator are called thousandths.
We know 1000 metres (m) = 1 kilometre (km) or 1 m = 1⁄1000 km or one-thousandth km; or 573 m = 573⁄1000 km or five hundred seventy three-thousandth km
1000 grams (g) = 1 kilogram (kg) or 1 g = 1⁄1000 kg; or 337 g = 337⁄1000 kg
We denote thousandths by three digits after the point
We write the fractional number 1⁄1000 (one-thousandth) as .001 read as decimal zero zero one or point zero zero one.
Similarly, we write the fractional number 2⁄1000 (two-thousandth) as .002 read as point zero zero two.
Likewise, 923⁄1000 (923 thousandth) = .923 read as point nine two three. 854⁄1000 (854 thousandth) = .854 read as point eight five four. 35⁄1000 (35 thousandth) = .035 read as point zero three five. 696⁄1000 (696 thousandth) = .696 read as point six nine six. 47⁄1000 (47 thousandth) = .047 read as point zero four seven. 488⁄1000 (488 thousandth) = .488 read as point four eight eight. 19⁄1000 (19 thousandth) = .019 read as point zero one nine.
The idea of tenths, hundredths, thousandths can be extended to ten thousandths, hundred thousandths (lakhths), millionths (ten lakhths) etc.
Tenths, Hundredths, Thousandths with Whole Number
So far what we have seen are less than one.
Consider the mixed fractions 3 1⁄10, 8 81⁄100, 98 763⁄1000.
With the knowledge of mixed fractions and what we have learnt so far, we can write:
3 1⁄10 = 3 + 1⁄10 = 3 + .1 = 3.1
8 81⁄100 = 8 + 81⁄100 = 8 + .81 = 8.81
98 763⁄1000 = 98 + 763⁄1000 = 98 + .763 = 98.763
The idea of tenths, hundredths, thousandths with whole numbers can be extended to ten thousandths, hundred thousandths (lakhths), millionths (ten lakhths) etc.with whole numbers.
This form (point form) of writing the tenths, hundredths, thousandths can be extended to ten thousandths, hundred thousandths (lakhths), millionths (ten lakhths) etc. and is called decimal form and the numbers written in this form are called Decimal numbers or simply Decimals.
A Decimal has two parts : Whole Number part and Decimal part. The two parts are seperated by a dot (.), called decimal point. The Whole Number part is to the left of the point and the Decimal part is to its right.
For example, in 98.763, we have : Whole Number part = 98 and Decimal part = .763
The absence of any of these parts indicates that the part is 0.
For example, .79 can be written as 0.79 and 32 can be written as 32.0
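As a quick aside (not part of the original lesson), the naming rule above, one digit after the point for tenths, two for hundredths and three for thousandths, can be checked with a few lines of Python:

# Write fractions with denominators 10, 100 and 1000 in decimal form,
# matching the examples in the lesson.
from fractions import Fraction

def to_decimal(numerator, denominator):
    """Return the decimal form of a tenth, hundredth or thousandth."""
    places = {10: 1, 100: 2, 1000: 3}[denominator]
    whole, part = divmod(numerator, denominator)
    return f"{whole}.{part:0{places}d}"

print(to_decimal(1, 10))                   # 0.1   (one-tenth)
print(to_decimal(73, 100))                 # 0.73  (73 hundredths)
print(to_decimal(19, 1000))                # 0.019 (19 thousandths)
print(to_decimal(98 * 1000 + 763, 1000))   # 98.763, i.e. 98 763/1000
print(Fraction(763, 1000))                 # the same value kept as a fraction: 763/1000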
| http://www.math-help-ace.com/Decimals.html | 13
12 | Before we get into the lessons, let's take a look at some important tips for taking the math section of the GED test.
As with all sections of the GED, remember to:
- Pace yourself
- Answer every question
- Eliminate answer choices whenever you can
And, above all,
If you ever feel like you are struggling, relax. Be realistic. Be patient enough to get it right, and focused enough that you work out as many problems as you can to the best of your ability.
BEFORE THE TEST
1. Read and understand the directions:
The Mathematics test consists of multiple-choice questions intended to measure general mathematics skills and problem-solving ability. The questions are based on short readings that often include a graph, chart, or figure.
Work carefully, but do not spend too much time on any one question. Be sure to answer every question. Only some of the questions will require you to use a formula. Not all the formulas given will be needed.
Some questions contain more information than you will need to solve the problem; other questions do not give enough information. If the question does not give enough information to solve the problem, the correct answer choice is “Not enough information is given.”
[Interpret this piece of information as meaning that you need to focus on the key elements necessary for calculating the problem. Rarely is there information that you don’t need. Even rarer is a problem that does not contain enough information for you to solve it.]
Part I: Calculators are allowed.
Part II: Calculators are not allowed.
Do not use the test booklet as scratch paper or as an answer sheet. The test administrator will give you blank paper for your calculations. Record your answers on the separate answer sheet provided. Be sure all information is properly recorded on the answer sheet.
To record your answers, fill in the numbered circle on the answer sheet that corresponds with the answer you selected for each question in the test booklet.
If a grocery bill totaling $15.75 is paid with a $20.00 bill, how much change should be returned?
The correct answer is “$4.25”. Therefore, Answer 3 would be filled in on the answer sheet.
Do not rest the point of your pencil on the answer sheet while you are considering your answer. Make no stray or unnecessary marks. If you change an answer, erase your first mark completely.
Mark only one answer for each question; multiple answers will be scored as incorrect. Do not fold or crease your answer sheet. All test materials must be returned to the test administrator.
2. Know the formulas
Even though you will have a sheet of formulas to refer to, you should know these formulas well beforehand. Your goal should be to memorize as many formulas as possible to cut down on time spent looking for or figuring out a formula during the test.
-Problems often require some insight and adaptation beyond just using the formula in front of you.
-Knowing formulas by heart will save time
Start with the Geometry formulas (Area, Circumference, Volume, Pythagorean Theorem.) The best way to memorize the formulas is by applying them to practice problems.
Additional important formulas will also be highlighted in the lessons.
DURING THE TEST
1. Skim the directions at the beginning
From this course and your many practice sessions, you will already know the directions. You should still read the directions within the test, but the longer directions at the beginning are basically the same as what you just read above. Know what to do, and you won't have to waste time!
2. Complete the picture
If a diagram is not fully labeled with the numbers that are contained in the question, label them yourself in the appropriate places.
If a problem describes a shape or form but does not provide a picture, draw it and label the dimensions.
The perimeter of a square flower bed is 12 feet. What is the area of the flower bed in square feet?
E) There is not enough information to solve the problem.
First, draw your square.
You see that it has four sides of the same length. (Your drawing may not have all sides exactly the same length, but by simply sketching it, you have captured the idea.)
Since it has four equal sides, the perimeter must be divided equally among those sides. 12/4 = 3. Now label those sides.
The area can be represented by imagining grid lines.
You don’t have to draw the lines. Simply having a square in front of you provides a much clearer framework by which to consider the problem.
Supplying a picture for the problem also lets you know if enough information has been provided.
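If it helps, the arithmetic in this example can be checked with a couple of lines of Python (not part of the original lesson):

# The flower-bed example: a square with a 12-foot perimeter has 3-foot sides
# and therefore an area of 9 square feet.
perimeter = 12
side = perimeter / 4          # the four equal sides share the perimeter
area = side * side            # area of a square = side * side
print(side, area)             # 3.0 9.0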
3. Narrow down your answer choices
For multiple-choice format questions, when finding a distinct answer escapes you or is too time-consuming, you can use the strategy of eliminating answer choices.
a. Eliminate impossible choices
Eliminate answer choices that obviously don’t fit.
A 5-foot ladder is leaning against a 20-foot wall. The bottom end of the ladder is 3 feet from the wall. How many feet above the ground does the ladder touch the wall?
A 5-foot ladder leaning against a wall does not touch the wall at a height of greater than five feet. So you can immediately eliminate D) and E)
b. Eliminate answer elements
An answer choice can be eliminated on the basis of only part of it being wrong.
Which of the following pairs of points both lie on the line whose equation is 3x-y= 2?
A) (3,-2) and (1,5)
B) (2,4) and (1,5)
C) (2,-2) and (1,5)
D) (3,7) and (3,-2)
E) (2,4) and (3,7)
You start by plugging in (3,-2) from Answer A. It doesn't work, so you can eliminate it. (Don't bother trying the second coordinate pair. Even if it works, you can't choose A.)
Next, try (2,4) from B. It works. Circle it. Also notice that it is part of E, so circle it there, too.
Next, try (2,-2) from C. It does not work. Cross C out.
You are left with B and E, so you must try (1,5) or (3,7). You only need to try one of them. If it works, that's your answer. If not, the other choice is the right answer because there are no other choices left.
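For readers who like to see the strategy spelled out, here is a small Python sketch (not part of the original lesson) that applies the same test to every answer choice:

# Keep only the answer choices whose points all satisfy 3x - y = 2.
choices = {
    "A": [(3, -2), (1, 5)],
    "B": [(2, 4), (1, 5)],
    "C": [(2, -2), (1, 5)],
    "D": [(3, 7), (3, -2)],
    "E": [(2, 4), (3, 7)],
}

def on_line(x, y):
    return 3 * x - y == 2

for letter, points in choices.items():
    if all(on_line(x, y) for x, y in points):
        print("Correct answer:", letter)   # prints E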
4. Estimate and round off
Again, this applies only to multiple-choice format questions. You can approximate and get close enough to identify the right answer without spending lots of time working out an exact figure.
A daredevil is shot out of a cannon a distance of 55 meters. His assistant’s stopwatch times him as being airborne for 12.5 seconds. At what speed did he travel?
You can safely approximate, for example, that 12 goes into 50 at least 4 times and less than 5 times, so the answer is most likely C.
| http://www.gedforfree.com/free-ged-course/math/math-approach.html | 13
10 | Particles in the air (smoke, dust, soot, haze) are more dangerous to humans as the size of the particles decreases. Particles are produced by the combustion of fossil fuels (especially coal, but also oil and gasoline), and by burning garbage or hazardous waste. In July, 1987, the U.S. Environmental Protection Agency (EPA) officially recognized that small particles are more dangerous than larger particles when the agency established air quality standards for particles smaller than 10 micrometers in diameter. A micrometer is a millionth of a meter and a meter is about 39 inches.
Now, just two years later, an extensive medical study has shown that human illness can result from particles in the air at levels that fall within EPA guidelines. In other words, an area may meet the federal requirements and yet still make residents sick.
The EPA standard is called PM-10, meaning it is an air quality standard for "Particulate Matter" 10 micrometers or less in diameter. (See FEDERAL REGISTER July 1, 1987, pgs. 24634-24669.) The older standard was for total suspended particulate [TSP] and it did not take into account the size of particles. The new standard specifically recognizes that particles smaller than 10 micrometers in diameter are not filtered out by the nose and throat and can pass into the large airways below the trachea. The smallest particles, which are less than 2.5 micrometers in diameter, are known as fine particles and they are the most dangerous because they pass all the way to the bottom of the lungs where they can move directly into the blood stream. (See RHWN #131 [revised] and #132.] The federal air quality standard does not distinguish fine particles from others, though the existence of the PM-10 standard is recognition that small particles are more dangerous than large ones.
The federal standard says that, averaged over a year's time, an area's air should not contain more than 50 micrograms of PM-10 particles in each cubic meter of air; the 24-hour average is not supposed to exceed 150 micrograms per cubic meter of air. A microgram is a millionth of a gram and there are 28 grams in an ounce.
For the past decade, researchers at Harvard University have been studying the relationship of human health to particles in air; their work has been supported by the federal Department of Energy (DOE) which plans to burn coal on a massive scale (since nuclear power is, deservedly, on the ropes). The Harvard researchers have issued periodic reports on their work; the most recent one appeared in March, 1989. This study examined 8131 grade school students in six U.S. cities during the period 1974-79, and examined the same students again in 1981-82. To avoid complexities of age and race, only 5422 white students aged 10 to 12 were the final subjects of study. The cities were Steubenville, Ohio; St. Louis, Missouri; Kingston, Tennessee; Portage, Wisconsin; Topeka, Kansas; and Watertown, Massachusetts.
The students were asked about bronchitis, persistent cough, chest illness, wheeze and asthma. Bronchitis required a doctor's diagnosis within the last year; chronic cough was defined as being present for three months during the past year; chest illness required restriction of activity for 3 days or more; wheeze was defined as wheeze apart from colds or for most days and nights during the past year; asthma required the reporting of a doctor's diagnosis. The Harvard researchers also asked about three symptoms they didn't expect to be related to air pollution: earache, hay fever, and nonrespiratory illness or trauma that restricted activities for 3 days or more.
The Harvard researchers did not collect data specifically on particules below 10 micrometers in diameter; they collected data (starting in 1978) on PM-15 (particulate matter less than 15 micrometers in diameter). The annual average PM-15 readings were as follows:
The least polluted city was Portage, WI (10 micrograms, or ug); then came Watertown, MA (26 ug), Topeka, KS (33 ug), St. Louis, MO (38 ug), Kingston, TN (42 ug), and finally Steubenville, OH (59 ug), the most polluted. Boys and girls in the more polluted cities were twice as likely to have bronchitis, compared with youngsters in the less polluted cities. Similar results were apparent for chronic cough and chest illnesses.
These results are important because in the most polluted city (Steubenville) the annual average particle count was 59 micrograms per cubic meter and this was a PM-15 measurement; if only particles 10 micrometers or smaller had been counted, the readings would have been substantially lower. In every other city in the study, the measured [PM-15] pollution levels were below the allowable federal PM-10 standard, yet children in those cities reported excessive disease rates. "We found health effects occurring at levels below the current annual average PM-10 standard," of 50 micrograms per cubic meter, says Douglas Dockery, leader of the Harvard study. This study provides unmistakable evidence that the federal standard for particles is inadequate to protect public health and safety.
The Harvard researchers say their results are important for another reason: there is some evidence that chest ailments during childhood predispose a person to permanent, serious breathing problems, like emphysema, in later life.
The study revealed that the 571 students (10.5% of the total) with asthma or persistent wheeze were particularly susceptible to bronchitis. Bronchitis was reported among 25.5% of the children with asthma or wheeze versus 4.0% of those without; for chronic cough the rates were 29.5% versus 3.2% and for chest illness 36.5% versus 7.6%.
When compared separately, those children without asthma or wheeze in the most polluted city were 2.2 times as likely to have bronchitis as non-asthmatics in the least polluted city; those children with asthma or wheeze in the most polluted city were 3.8 times as likely to have bronchitis, compared to asthmatics in the least polluted city.
An important point of this study is that it confirms that the relationship between particles in the air and childhood disease is "linear," which means that the more particles in the air, the more disease there is. This means that ANY increase in particles in the air is likely to cause disease in someone, somewhere. Thus, an incinerator proposing to spew particles into the air is very likely doing so at the expense of some innocent bystander somewhere. The defense, "I'm meeting all applicable state and federal standards" isn't sufficient to prevent illness. Even when a polluter meets those standards, someone will most likely get sick. Who gave polluters the right to make us sick? We, the people, didn't. It must have been someone else. Let's find out who and go after them.
Get: Douglas W. Dockery and others, "Effects of Inhalable
Particles on Respiratory Health of Children." AMERICAN REVIEW OF
RESPIRATORY DISEASE, Vol. 139 (March, 1989), pgs. 587-594. For a
free reprint, write: Dr. D.W. Dockery, Department of
Environmental Science and Physiology, Harvard School of Public
Health, 665 Huntington Ave., Boston, MA 02115.
--Peter Montague, Ph.D.
Descriptor terms: epa; pm-10; children; regulations; health effects; air pollution; air quality standards; asthma; lung diseases; particulates; | http://www.ejnet.org/rachel/rhwn134.htm | 13 |
52 | - How does Radio Echo Sounding Work?
- Frequencies and Wavelengths
- Radio Wave Propagation in Ice
- Field Work
- Data Processing
- Photos and Links
How Does Radio-Echo Sounding Work?
A radio-echo sounding system consists of two main
components: 1) the transmitter, and 2) the receiver. The
transmitter sends out a brief burst of radio waves of a
specific frequency. The receiver detects the radio waves from
the transmitter and any waves that have bounced, or reflected
off nearby surfaces. The receiver records the amount of time
between the arrival of the transmitted wave and any reflected
waves as well as the strength of the waves (measured as an AC voltage).
The radio waves travel at different speeds
through different materials. For example, radio waves travel
very close to 300,000,000 meters/second (3 x 10^8 m/s) through air, a little less than double the speed in ice
at 1.69 x 10^8 m/s.
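As a rough illustration (not from the original page), these velocities are what turn a recorded echo delay into an ice thickness. The 4-microsecond delay below is invented, and the transmitter-receiver offset is ignored for simplicity:

# Convert a two-way travel time into a depth using the ice velocity above.
V_ICE = 1.69e8   # m/s, radio-wave speed in ice

def ice_thickness(two_way_time_s):
    """Depth to a reflector for a wave that travels down and back up."""
    return V_ICE * two_way_time_s / 2.0

echo_delay = 4.0e-6                      # seconds after the transmitted pulse (assumed)
print(ice_thickness(echo_delay))         # 338.0 metres of ice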
See the next three sections for a more in-depth explanation.
Frequencies & Wavelengths of Waves
Electro-Magnetic (EM) energy is made up of both particles
and waves. A single wavelength is 2π or 360° of
the wave's angular distance. When a wave travels through a
material, the wavelength is the distance travelled through
the material by 2π of a wave.
The number of times a wave oscillates over a certain
amount of time is known as the frequency of the wave.
The units of frequencies are Hertz (Hz) which is the number
of complete wavelengths that pass a point in a single second.
Therefore, 1 Hz = 1 cycle/second or 1/s.
The wavelength of a signal passing through a material
depends on the frequency (f ) of the wave and the
signal velocity (u ) through the material (a
property of the material itself). As shown above, the units
of frequency are 1/s, and the units of velocity are m/s.
Since wavelength (λ) is
measured in m, the equation to obtain wavelength is:
λ = u / f
or wavelength = velocity / frequency
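A quick check of this relation in Python (not part of the original page), using the 5 MHz frequency and the ice and air velocities quoted earlier:

V_ICE = 1.69e8    # m/s, signal velocity in ice
V_AIR = 3.00e8    # m/s, signal velocity in air

def wavelength(velocity_m_s, frequency_hz):
    """wavelength = velocity / frequency"""
    return velocity_m_s / frequency_hz

print(wavelength(V_ICE, 5e6))   # ~33.8 m, the ~34 m value quoted for 5 MHz in ice
print(wavelength(V_AIR, 5e6))   # 60.0 m in air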
A higher amplitude wave of a
given frequency carries more energy than a low amplitude
wave. A signal can be detected only if its amplitude is
greater than that of any background noise. For example, if
you are listening to a radio in New York City, you can pick
up a station from Seattle only if its signal is stronger than
the EM noise caused by the sun, electric motors, local radio stations, and other sources.
There are numerous radio-echo sounding devices used by
various researchers throughout the world. The components
described here are those used by researchers at the
University of Wyoming, which is based on that designed by
Barry Narod and Garry Clarke at the University of British
Columbia (Narod & Clarke, J. of Glaciology, 1995). It
has been designed for use on temperate glaciers.
The transmitter emits a 10 ns (nanosecond) long pulse at a
frequency of 100 MHz. The details of the pulse-generation
circuitry can be found in Narod & Clarke, 1995. The
frequency of the pulse is modulated for use on temperate
glaciers by attaching two 10 m antennas. The resulting 5 MHz
frequency is ideal for temperate glacier radio-echo sounding.
The transmitter is powered by a 12 V battery.
The transmitter and battery are housed in a small tackle
box which is attached to a pair of old skis. The antennas
extend out the front and back of the tackle box. The forward
antenna is carried by the person pulling the transmitter
sled's tow rope, while the rear antenna drags behind. There
is no focusing of the transmitted signal, so it propagates in
all directions into the ice and air. In order to reduce
"ringing" of the signal along the antenna,
resistors are embedded every meter along the antenna. The
total resistance of each 10 m antenna is 11 ohms.
The receiver begins with an antenna identical
to that of the transmitter. As each pulse is sent out of the
transmitter, some of the transmitted energy travels through
the air and some through the ice. The velocity of radio waves
in air is almost twice that in ice, so the receiver first
detects the "Direct Wave" transmitted through the
air between the transmitter and receiver. This triggers the
oscilloscope to begin recording the signal. For the next 10
µs, the oscilloscope records the voltage of the signals that
have reflected off nearby surfaces. The scope averages 64 of
the transmitter pulses and reflected waves to generate a
single trace. By averaging, the scope reduces noise due to
signal scatter and instrument noise in order to obtain a
better trace to be recorded on the laptop computer. The
entire receiver is placed in a small sled which is pulled by
a tow rope. A third researcher monitors the signals on the
oscilloscope and records the information onto the laptop.
Both the scope and the laptop are powered by a 12 V battery
which can be charged by a solar panel for extended surveys.
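The effect of averaging 64 pulses can be shown with a toy NumPy sketch (an illustration only, not the actual acquisition software): stacking leaves the repeatable reflection unchanged while random noise drops by roughly the square root of the number of pulses.

import numpy as np

rng = np.random.default_rng(0)
n_samples, n_pulses = 500, 64
signal = np.sin(np.linspace(0, 20, n_samples))                  # stand-in reflection waveform
pulses = signal + rng.normal(0.0, 1.0, (n_pulses, n_samples))   # 64 noisy recordings

stacked = pulses.mean(axis=0)                                   # what the scope stores as one trace

noise_single = np.std(pulses[0] - signal)
noise_stacked = np.std(stacked - signal)
print(noise_single / noise_stacked)                             # roughly sqrt(64) = 8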
Radio Wave Propagation in Temperate Ice
As most people know, both water and ice are transparent to
the visible light portion of the Electro-Magnetic (EM)
spectrum. At the much lower frequencies (and longer
wavelengths) of radio waves, liquid water is opaque while ice
is still relatively transparent. This is why radio-echo
sounding is used in the sub-freezing regions of the Arctic
and Antarctic glaciers and ice sheets. There is little water
present within these cold ice masses to scatter or block the
radio signals. The lack of water has allowed researchers to
use frequencies ranging from a few MHz for subglacial
mapping, up to 200-500 MHz for crevasse detection near the
ice surface. Frequencies in the GHz range are used for
studies of snow structure and stratigraphy.
By definition, temperate ice exists at the
pressure-melting point. This means that both ice and water
phases coexist. The presence of liquid water presents a
problem when trying to use radio waves in temperate glaciers
because the water scatters the radio signals making it
difficult to receive coherent reflections that can later be interpreted.
In the late 1960s through the mid-1970s, a number of
researchers experimented with various frequencies and
transmitter designs. Their findings concluded that
frequencies between ~2 and ~10 MHz are best for temperate
glaciers. 5 MHz pulse-transmitters are the most commonly used.
The basic reason that a 5 MHz signal works in most
temperate ice is that the resulting 34 m wavelength is far
larger than the size of the majority of the englacial water
bodies that scatter the signal. Unfortunately, the long
wavelength of the signal seriously limits the resolution of
the radio-echo sounding survey.
EM Wave Propagation Through a Dielectric Material
Radio waves travel through ice due to its dielectric
properties. The dielectric constant of a given material is a
complex number describing the comparison of the electrical
permittivity of a material and that of a vacuum. As a complex
number, the dielectric constant contains both real and
imaginary portions. The imaginary part of the number
represents the polarization of atoms in the material as the
EM energy passes through it (Feynman, 1964). The EM wave
propagation velocity is determined by the entire complex dielectric constant.
The propagation velocity of a radio wave in ice is
determined by the dielectric properties of ice. Liquid water
and various types of bedrock have unique dielectric
constants. Since the dielectric properties of a material are
related to conductivity, concentrations of dissolved ions in
liquid water will affect the dielectric constant (more free
ions increase the conductivity of water). The dielectric
constants of some materials are listed below:
Ice (at 0ºC): 3.2 ± 0.03
Reflections of Waves
The Basic Concept
When a wave encounters an interface between materials of
different properties, the wave may be refracted, reflected,
or both. Snell's Law describes the reaction of light to a
boundary between materials of different dielectric contrasts
(or refractive index), based on the angle at which a ray
perpendicular to the wave front hits the interface. The angle
of the incoming ray (Angle of Incidence: ai)
is equal to the angle of reflection (ar).
The Angle of Refraction (aR)
is determined by the ratio of the sines of the Angle of
Incidence to the Angle of Refraction and the ratio of the
dielectric constants for the upper and lower layers (e1 and e2).
There is a point where the Angle of Incidence
is large enough (close to horizontal) that there is no
refraction. This is called the Angle of Critical Refraction
where all the incoming waves are either reflected or
refracted along the interface. Any angle larger than the
Angle of Critical Refraction results in only reflection.
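A small sketch (not from the original page) of Snell's law at a dielectric interface, assuming the usual form n1*sin(i) = n2*sin(r) with refractive index n = sqrt(relative dielectric constant). The permittivity of liquid water below is an assumed round value, since the table above lists only ice:

import math

EPS_ICE = 3.2       # relative dielectric constant of ice (from the table above)
EPS_WATER = 80.0    # assumed value for liquid water

def refraction_angle(incidence_deg, eps_upper, eps_lower):
    """Angle of refraction in the lower layer, or None past critical refraction."""
    n1, n2 = math.sqrt(eps_upper), math.sqrt(eps_lower)
    s = n1 * math.sin(math.radians(incidence_deg)) / n2
    if abs(s) > 1.0:
        return None               # total reflection: no transmitted ray
    return math.degrees(math.asin(s))

print(refraction_angle(30.0, EPS_ICE, EPS_WATER))   # ray bends toward the normal (~5.7 degrees)
print(refraction_angle(30.0, EPS_WATER, EPS_ICE))   # past the critical angle, so None is printed
# Critical angle for a wave leaving water for ice:
print(math.degrees(math.asin(math.sqrt(EPS_ICE / EPS_WATER))))   # ~11.5 degrees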
Radio-Echo Sounding in the Field
The appropriate field methods for gathering Radio-Echo
Sounding (RES) data depend upon the objective of the survey.
If a researcher simply wants a rough estimate of the glacier
thickness, only a couple readings might suffice. If a
high-resolution map of the glacier bed is desired, a dense
grid of measurement points is necessary. Below is a
description of the field techniques used to develop a
high-resolution map of the glacier bed. It is important to
remember that even after the field work is over there are
many hours of data processing to be done. The techniques
described here were developed to minimize the processing time
and to maximize the resolution of the resulting map.
Mapping the RES Grid
When processing and interpreting the RES data after the
field season, the researcher needs to know the topography of
the glacier surface to correct for changes in the recorded
wave travel times. The glacier surface topography is mapped
using the Global Positioning System (GPS) or by traditional
optical surveying. While GPS is faster, it does not have the
vertical or horizontal resolution of optical surveying. The
horizontal positions are necessary to locate the map with
respect to other maps of the area, while the vertical
coordinates are critical for the data processing and need to
be accurate to within 0.5 m.
In order to reduce the possibility of spatial aliasing and
to maximize the resolution of the RES survey, the traces
should be recorded less than one-quarter wavelength apart.
For example, a 5 MHz RES system produces a 34 m wavelength.
Therefore the grid of RES traces should be spaced less than 8.5 m apart.
A rectangular grid with the traces aligned at 90° to one
another greatly simplifies the data processing.
Unfortunately, field conditions do not always oblige such an
orderly system and the grid is modified by the presence of
crevasses, melt-water ponds, steep slopes, avalanche debris,
etc. In such cases, detailed notes help to recreate the grid
during the data processing.
Recording the Profiles
The transmitter and receiver occupy separate sleds. These
may be pulled in-line or side-by-side depending on the design
specifications of the instruments. The Univ. of Wyoming
system is pulled side-by-side so that the transmitter and
receiver are pulled parallel to one another. A single
researcher pulls the transmitter on its homemade sled while
another pulls the receiver sled. A third researcher walks
beside the receiver sled to monitor the incoming signals on
the oscilloscope and then record them to the laptop computer.
Some systems can continuously record traces to a computer
and do minor amounts of pre-processing such as trace stacking
(or averaging) and digital filtering to remove noise. The
Univ. of Wyoming system is much simpler requiring the
researchers to stop at each position in the RES grid and
manually tell the computer to retrieve data from the
oscilloscope. Although more time consuming, this method
allows the researchers to monitor the condition of the
incoming data and results in a smaller data set. Each trace
recorded onto the computer is an average of at least 64
received pulses from the transmitter so that the
signal-to-noise ratio is improved.
RES Field Work on the Worthington
The Worthington Glacier is a
small temperate valley glacier in the Chugach Mts. of
South-Central Alaska. Radio-echo sounding surveys have been
recorded there in support of ice-dynamics research by the Univ. of Wyoming and the Institute
of Arctic & Alpine Research at the University of Colorado.
Processing Radio Echo-Sounding Data
Processing the Radio Echo-Sounding (RES) data transforms
the data from incoherent numbers to a data set that can be
interpreted. Our processing methods are drawn from reflection
seismology techniques. These are outlined in Welch, 1996; Welch et
al., 1998; and Yilmaz, 1987. We use a number of IDL (from
Research Systems, Inc.) scripts to organize our data and
usually create screen plots of each profile through each step
of the processing to help identify problems or mistakes. We
also use Seismic
Unix (SU), a collection of freeware seismic processing
scripts from the Colorado School of Mines. SU handles the
filtering, gain controls, RMS, and migration of the data. IDL
is used for file manipulation and plotting and provides a
general programming background for the processing.
The processing steps below are listed in the order that
they are applied. The steps should be followed in this order.
Note that the quality of the processing results is strongly
dependent on the quality of the field data.
Data Cleaning and Sorting
The first step of data processing is to organize and clean
the field data so that all the profiles are oriented in the
same direction (South to North, for example), any duplicated
traces are deleted, profiles that were recorded in multiple
files are joined together, and surface coordinates are
assigned to each trace based on survey data. These steps are
some of the most tedious, but are critical for later
migration and interpretation.
Static and Elevation Corrections
The data is plotted as though the transmitter and receiver
were a single point and the glacier surface is a horizontal
plane. Since neither is the case, the data must be adjusted
to reflect actual conditions. The transmitter-receiver
separation results in a trigger-delay equivalent to the
travel-time of the signal across the distance separating the
two. This travel-time is added to the tops of all the traces
as a Static Correction.
The data is adjusted with respect to the highest trace
elevation in the profile array. Trace elevations are taken
from the survey data and the elevation difference between any
trace and the highest trace is converted into a travel-time
through ice by dividing the elevation distance by the
radio-wave velocity in ice (1.69 x 10^8 m/s). The
travel-time is added to the top of the trace, adjusting the
recorded data downward.
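A minimal sketch of this elevation correction (not the authors' code; the elevations are invented):

V_ICE = 1.69e8                 # m/s, radio-wave velocity in ice

def elevation_shift_s(trace_elev_m, highest_elev_m):
    """Time shift for one trace: elevation difference divided by ice velocity."""
    return (highest_elev_m - trace_elev_m) / V_ICE

elevations = [1205.0, 1198.5, 1191.0]          # metres, surveyed trace elevations (made up)
top = max(elevations)
for z in elevations:
    print(round(elevation_shift_s(z, top) * 1e9, 1), "ns")   # 0.0, 38.5, 82.8 ns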
Filtering and Gain Controls
We use a bandpass filter in SU to eliminate low and high
frequency noise that result from the radar instrumentation,
nearby generators, etc. Generally we accept only frequencies
within a window of 4-7 MHz as our center transmitter
frequency is 5 MHz. Depending on the data, we will adjust the
gain on the data, but generally avoid any gain as it also
increases noise amplitude. We try to properly adjust gain
controls in the field so that later adjustment is unnecessary.
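A simplified stand-in for the 4-7 MHz bandpass (not the authors' Seismic Unix workflow), using a plain FFT mask on a single synthetic trace:

import numpy as np

def bandpass(trace, dt_s, f_lo=4e6, f_hi=7e6):
    """Zero out spectral components outside [f_lo, f_hi] and return the trace."""
    spectrum = np.fft.rfft(trace)
    freqs = np.fft.rfftfreq(trace.size, d=dt_s)
    spectrum[(freqs < f_lo) | (freqs > f_hi)] = 0.0
    return np.fft.irfft(spectrum, n=trace.size)

dt = 10e-9                                   # 10 ns sample interval (assumed)
t = np.arange(1000) * dt
# 5 MHz signal plus low- and high-frequency contamination:
trace = (np.sin(2 * np.pi * 5e6 * t)
         + 0.5 * np.sin(2 * np.pi * 30e6 * t)
         + 0.3 * np.sin(2 * np.pi * 0.5e6 * t))
clean = bandpass(trace, dt)                  # keeps the 5 MHz part, drops the rest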
Cross-Glacier Migration (2-D)
We 2-D migrate the data in the cross-glacier (or across
the dominant topography of the dataset) in order to remove
geometric errors introduced by the plotting method. Yilmaz
(1987) provides a good explanation for the need for migration
as well as descriptions of various migration algorithms.
Why is migration necessary?
The radar transmitter emits an omni-directional signal
that we can assume is roughly spherical in shape. As the wave
propagates outward from the transmitter, the size of the
spherical wavefront gets bigger so when it finally reflects
off a surface, that surface may be far from directly beneath
the transmitter. Since by convention, we plot the data as
though all reflections come from directly below the
transmitter, we have to adjust the data to show the
reflectors in their true positions.
We generally use a TK migration routine that is best for
single-velocity media where steep slopes are expected. As you
can see from the plot below, the shape of the bed reflector
has changed from the unmigrated plots shown in the previous section.
Down-Glacier Migration (2-D)
In order to account for the 3-dimensional topography of
the glacier bed, we now migrate the profiles again, this time
in the down-glacier direction. We use the same migration
routine and the cross-glacier migrated profiles as the input.
Although not as accurate as a true 3-dimensional migration,
this two-pass method accounts for much of the regional
topography by migrating in two orthogonal directions.
Radar Profile After Down-Glacier Migration
Interpreting and Plotting the Bed Surface
Once the profiles have been migrated in both the
cross-glacier and down-glacier directions, we use IDL to plot
the profiles as an animation sequence. The animation shows
slices of the processed dataset in both the down-glacier and
cross-glacier direction. By animating the profiles, it is
easier to identify coherent reflection surfaces within the
dataset. Another IDL script allows the user to digitize,
grid, and plot reflection surfaces.
The resolution of an interpreted surface is a function of
the instrumentation, field techniques and processing methods.
Through modeling of synthetic radar profiles, we have shown
that under ideal circumstances, we can expect to resolve
features with a horizontal radius greater than or equal to
half the transmitter's wavelength in ice. So for a 5 MHz
system, we can expect to resolve features that are larger
than about 34 m across. Since the horizontal resolution is
far coarser than the vertical resolution of 1/4 wavelength,
we use the horizontal resolution as a smoothing window size
for the interpreted reflector surfaces. We use a
distance-weighted window to smooth the surfaces.
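A rough sketch of such a distance-weighted window (not the authors' IDL code; the picks below are invented):

import numpy as np

def smooth_point(x0, y0, xs, ys, zs, radius=17.0):
    """Distance-weighted mean of picks (xs, ys, zs) within `radius` of (x0, y0)."""
    d = np.hypot(xs - x0, ys - y0)
    inside = d <= radius
    if not np.any(inside):
        return np.nan
    w = 1.0 - d[inside] / radius          # simple linear taper; other weightings work too
    return np.sum(w * zs[inside]) / np.sum(w)

# Made-up picks: x, y positions (m) and picked bed elevations z (m).
xs = np.array([0.0, 5.0, 10.0, 30.0])
ys = np.array([0.0, 0.0,  5.0,  0.0])
zs = np.array([900.0, 905.0, 910.0, 990.0])
print(smooth_point(0.0, 0.0, xs, ys, zs))   # the 990 m pick 30 m away is outside the window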
The ice and bedrock surfaces of a portion of the Worthington
Glacier obtained in the 1996 radio echo sounding survey. The 1994
boreholes are also plotted. (Plot by Joel Harper, U. of Wyo.)
The ice surface and bedrock surface beneath the Worthington
Glacier, Alaska. Resolution of both surfaces is 20 x 20 m. Yellow
lines indicate the positions of boreholes used to measure ice deformation.
Pictures of the Worthington Glacier Area
Notes on Radar Profiles
Three arrays of Radio-Echo Sounding profiles have been
recorded on the Worthington Glacier. The 1994 survey was recorded
using different field methods than those used in 1996
& 1998. The same equipment was used in all three surveys as
well as the same data processing techniques.
The first profiles were recorded in 1994 and oriented
parallel to the ice flow direction. The locations of these
profiles were not measured accurately, and the profiles were
recorded a few at a time over a period of about a month. The
resulting glacier bed map was not very accurate, with a
resolution of about 40 x 40 meters.
The 1996 radar profiles were recorded in the cross-glacier
direction. The location of every fourth trace of each profile
was measured with optical surveying equipment using a local coordinate system seen in the map
below. The profiles were spaced 20 m apart and a trace
recorded every 5 m along each profile. The resulting glacier
bed map had a resolution of 20 x 20 meters.
In 1998 we used the radio-echo sounding equipment to look
for englacial conduits that transport surface meltwater
through the glacier to its bed. This study required the
maximum resolution that we could obtain from the equipment,
so the profiles and traces were spaced every 5 m. Every
fourth trace on each profile was surveyed to locate it to
within 0.25 m and the entire RES survey was recorded in two
days. The survey was repeated a month later to look for
changes in the geometry of any englacial conduits found. The
first RES survey was processed to produce a map of the
glacier bed surface with a resolution of 17.5 x 17.5 m. The
maximum resolution obtainable by an RES survey is half of the
signal wavelength. Our 5 MHz system, therefore, can obtain 17
x 17 m resolution under the best of circumstances. | http://stolaf.edu/other/cegsic/background/index.htm | 13 |
14 | Listening Skills (Grade 9-12) The purpose of this activity is to increase the students' ability to listen and to understand what is being read and/or told to them.
Pictures: Following Oral Directions
Make Me a Copy Please (Grade 5-6) Oftentimes students are not able to communicate clearly what they would like to say. It is the purpose of this lesson to help students understand the need to be articulate and precise when explaining steps to another student. In addition, the student listening will learn to be a more effective listener.
Pictures (Following Oral Directions) Many children have difficulty accurately giving or following verbal instructions. To encourage students to focus
on the importance of clear, oral communication.
In this lesson, students will hone their group communication skills by role-playing the parts of tour guides at Yellowstone National Park.
Convince Me! In this lesson students go to the Internet to learn the art of persuasive speaking in order to present a speech in a convincing manner.
Grade: 1 - 3
Aboard Grade: 2
Autobiography Grade: 7 - 12
Egypt Grade: 6 - 8
Adjectives Grade: 4 - 8
for Your Thoughts Grade: 4 - 6
in Poetry: Teaching the Imagists Grade: 9 - 12
Poem Grade: 3 - 12
Grammar Review Using Jabberwocky Grade: 7 - 12
a Logophile Grade: 4 - 8
of Virtues Grade: 5
Letter Grade: 7 - 8
as a Bee Grade: 3 - 6
Using Antonyms To Write Short Stories Grade: 2 - 3
Directory Grade: 7 - 12
an Autobiography Using a Web Grade: 1 - 4
Meaning Through Drawing Pictures Grade: 5 - 7
a Newspaper Grade: 3 - 5
a Story Grade: 6 - 8
your own grammar exercise Grade: 10 - 12
Writing - Collaborative Stories Grade: 1 - 12
Writing - Rainbow Fish Grade: 1
Writing - What Would Happen If? Grade: 6 - 8
Writing Using Comics Grade: 4 - 8
Writing with Newspaper Photos Grade: 5 - 12
Grade: 8 - 12
Tale Journey Grade: 3 - 4
(happy, sad, silly, angry, scared) Grade: 1 - 2
Poetry Grade: 6 - 12
on Summarizing Information Grade: 2 - 8
Grade: 3 - 5
Outlining Grade: 10 - 12
Grade: 1 - 3
Review - My Favorite Author Grade: 12, Adult/Continuing
Kiss Discovery Grade: 3 - 8
Books Grade: 5
Homophones Grade: 6 - 8
to Write a Biopoem Grade: 3 - 4
Trees Could Speak, What Would They Say? Grade: 3 - 6
Postman--Improved by Your Students! Grade: 1 - 4
Sandwiches Grade: 2
Ourselves and Others through Poetry Grade: 6 - 12
Tennessee - Poem Model Grade: 6 - 12
Guide Lesson Plan Grade: 6
Synonyms and Antonyms in Pairs Grade: 6
Writing/ Introduction to Autobiography or Journal Writing
Grade: 8 - 12
the Back of My Hand Grade: 9 - 12
Letter/Syllables and Punctuation Grade: kindergarten - 2
Pyramid: Preparing for a Journey Grade: 4 - 8
Year With ______(Specific author's name is written in the blank)
Grade: 4 - 12
Activities Grade: kindergarten - 1
Being a Successful Learner: Setting Goals Grade: 9 - 12
Upon A Time . . . Grade: 4 - 6
Describes Both Grade: 2 - 6
Pals Grade: 5 - 8
Touch: A Lesson in Expository Writing Grade: 4 - 12
Essay Grade: 2 - 4
Lesson Grade: 6 - 12, Adult/Continuing education
Endings Grade: 1 - 3
a trip by creating an itinerary or brochure Grade:
kindergarten - 12
Gifts Grade: 9 - 12
Me for the Horror: The Feminist Way Grade: 12
Grade: kindergarten - 6
Scrapbook Grade: 12
Lesson Plan (The Sequencing Monster) Grade: 3 - 5
Creation Magic: Character, Setting, and Plot Grade:
kindergarten - 6
Pops Grade: kindergarten - 12
Biographers Grade: 5 - 8
Paragraphs Grade: 4 - 6
Tales: A Study of Perspective Grade: 7 - 12
Mouse Country Mouse: Recognizing Story Grammar Grade: 1 -
of Sentences Grade: 3 - 5
the Sea Grade: kindergarten - 1
Some Adjectives! Grade: 1
Star Trek to Enhance Critical Thinking Skills Grade: 10 -
Poetry Using Poems by Langston Hughes Grade: 9
W Poems Grade: 4 - 5
Your Spelling Words
Grade: 1 - 3
Learning spelling words does not have to be all drill. In this activity the students will be rhyming and playing with
their own names and then doing the same things for their spelling words.
- A Spelling Game
Grade: 1 - 5
Student will play a game that challenges their spelling skills.
Go Fish Grade:
This lesson allows students to learn their spelling words for the week and enjoy it. The students play the game for enjoyment and end up learning their spelling
words without even noticing.
As American as apple pie, the weekly spelling list is a "cornerstone" of education! Admit it or not, we all use lists
(by choice or by administrative mandate!), and drilling and practicing for Friday's test can be pretty boring! The following are
three ideas I use with my third graders, though I feel each could be used with any grade.
Grade: 3 - 6
Children are encouraged to see words/learning as something fun and challenging; the good spellers are an important
part of the team rather than being looked down on as "bookworms". Natural leaders surface helping the group form the words.
Group cooperation becomes important and a reachable, seeable, profitable entity rather than some teacher's unimportant
(Grade 4) There are no manipulatives to reinforce the language arts curriculum. Creative dramatics is a method of providing practice in the
for Your Thoughts (Middle grade) Students in writing classes are given apples and are asked to examine them closely for unique characteristics that will serve as
the basis for a descriptive paragraph.
Introduction to Similes (Grade 1) A direct lesson about similes and their use to facilitate comprehension of text that uses similes.
Adjectives (Grades 4-8) Have students redesign a restaurant menu. The students will use adjectives to make the menus more appetizing.
Poem (Grade 3-12) In this lesson, the writer analyzes self to provide an introduction to the rest of the class.
Grammar Review (Grade 7-12) The purpose of this activity, used at the beginning of the year is to help students identify where they are weak in their grammar
skills (in a fun fashion). From there, the teacher can choose to emphasize the various areas of grammar that need to be
Logophile (Grade 4-8) The purpose is to provide a variety of pre-writing activities which will encourage students to manipulate, explore, discover and
fall in love with words.
Busy as a
Bee (Grade 3-6) The purpose of this activity is to expose students to similes and how they can be used in writing. This activity will allow students
to "write" their own similes without the pressure that is often found when we ask students to write for us.
Using Antonyms to Write Short Stories (Grade 2-3) Children will write short stories about themselves using antonyms and comparisons of themselves to animals. By the end of the
lesson children will understand the meaning of antonym and will have enhanced their writing abilities.
Directory (Grade 7-12) A class directory is a booklet of stories written by the students in a given class about other students in the class. By doing this
project, students become better acquainted and bond as a class. When done at the beginning of the year it not only
"breaks the ice", it serves as a diagnostic tool for the teacher. I can quickly assess where each student is in social skills, language, reading,
writing, spelling, etc. Writing skills, such as asking for complete information, following up on questions, organizing information
on a variety of topics, and making generalizations based on specific bits of information, are also developed.
An Autobiography Using A Web (Grade 1-4) Composing an
autobiography for the first time can be difficult. Through using a web layout the students will be able to pick out
the interesting and important facts about themselves.
Meaning by Drawing Pictures (N/A) 1. The learner will sketch a picture to represent their understanding of the key concepts.
2. The learner will interact with peers to construct meaning.
Newspaper (Grade 3-5) In this lesson, children will create a newspaper on the web. They can choose their own links to news sources, comics, local
events, etc. They will be able to modify the paper whenever they like. The students may add their own links and can use their
paper as a personalized homepage.
Writing (Grade 1-12) This is a creative writing time that takes a minimum of 25 minutes. During this time students are beginning their
own story, reading another's beginning and creating the middle section, reading yet another story and finally developing a
conclusion for that story.
Writing - Rainbow Fish (Grade 1) A creative writing exercise on the story The Rainbow Fish. An activity to deal with feelings.
Writing Using Comics To use comics to foster creative writing and vocabulary skills
A Class Newspaper To develop students' writing skills through production of a class newspaper
Tale Journey (grade 3-4) In this creative writing process, the student will assume the role of the main character in the fairy tale. The student will use
fantasy to change the ending of a familiar fairy tale.
Books (Grade 5) To write, illustrate, and publish a book for a specific audience.
Homonyms (Grade 6-8) When writing, students at the junior high level often confuse and misuse words that sound alike but have different meanings.
Word pairs such as your-you're, whose-who's, there-their, and past-passed are examples of these "horrid
homonyms" where mistakes are not evident in speech but are only too evident in writing! This activity is designed to remind students of the specific
meanings and correct usage of some of these often confused words.
Sandwiches (Grade 5-8) This lesson is useful as a prewriting activity. Sandwiches have likely been a dietary mainstay of your students. Likewise, most of
them have some experience with eating in a restaurant. This lesson will ask students to design sandwiches for all meals, courses,
Ourselves and Others Through Poetry (Grade 6-12) Getting to know students and getting them to know themselves through writing.
Guide Lesson Plan (Grade 6) The purpose of this learning guide is to reinforce the writing process and to teach good proofreading skills. The writing process
is information that the students have seen before. The dreaded errors are ten words that many people misuse when they are writing.
Synonyms and Antonyms in Pairs Grades 6 In this lesson students will work cooperatively to learn about synonyms and antonyms and how to use them. They will do this
by matching word cards that have the same meaning and word cards with different meanings and using the words in sentences.
Writing (Grade 8-12) This lesson plan serves as an introduction to a study of autobiography (such as Frederick Douglass') and/or journal
writing. In addition, students will learn to distinguish between "facts" they know, sensory detail, and their imagination, and
practice applying all three to their writing.
Back of my Hand (Grade 9-12) The purpose of this exercise is to introduce students to writing for fun.
Pyramid - Preparing for a Journey (Grade 6-9)
The students will write a three paragraph paper describing the treasures they would stock in their
pyramid and explaining why their Ka would want and/or need these items on its journey.
A Time . . . (Grade Intermediate) This writing activity uses the fairy tale structure to demonstrate all of the elements of a short story.
Personal Touch: A Lesson In Expository Writing (Grade 7-12) In addition to providing an opportunity to practice clarity and thoroughness in writing, students are made
aware of some of the subtle non-verbal messages in common social situations involving hand touching.
Essay (Grade 2-4) 1. Understand the purpose of a photo essay. 2. Sequence a series of events. 3. Understand the format in creating a photo essay, which includes a caption for each picture. 4. Complete a photo essay as a creative activity by using photos, magazine pictures or drawings to illustrate a story. 5. Read and enjoy a photo essay.
Endings (Grade 1-3) This lesson plan allows students to become familar with fairy tale genre through personal writing practices and
computer software programs.
a trip by creating an itinerary or brochure (Grade - any level) The students will be planning an itinerary or brochure for a trip to a place of their choice. Through previous lessons and
research, they will choose which area they would like to visit . In their itinerary/brochure, the students will
state where they are going (including a map of the location), sites they will see there and information that would be helpful for a
of View Point of View-writing from 5 different characters point of view. Using language to show emotion and description.
The importance of good hand-writing.
(Grade k-6) This project covers many language arts concepts and skills at each learner's level
of competency. It inspires joy in reading books to a captive audience and pride in work well done. Older students discover the need to write
purposefully, descriptively and
clearly for a younger audience.
Lesson Plan (Grade 3-5) This lesson provides a visual experience in which students develop a better understanding of sequencing, while further
developing their writing skills.
Biographers To Teach students how to be a biographer. This will include what types of questions to have the
biography answer, and the kinds of problems which biographers run into when on assignment.
Paragraphs (Grade 4-6) This activity guides students through the writing process for a successful five-sentence
paragraph with varied sentence beginnings. Repeating this process frequently with many, varied topics teaches students to use
variety to create interesting paragraphs.
Mouse Country Mouse: Recognizing Story Grammar (Grade: any primary)
1. Using a Venn diagram students will verbally compare and contrast the experiences of the country mice and the town mice.
2. Students will demonstrate their knowledge of the basic parts of a story by successfully completing a story map of Town
Mouse Country Mouse by Jan Brett.
for Audiences (Grade 7-12) To write 4 letters for 4 different audiences with the appropriate language and style for each. Using correct letter conventions.
Prompt for Audience, Persuasion, and Point of View (Grade 9-12) To develop an awareness of audience, methods of persuasion, and the proper tone or mood to
achieve writer's goal as well as point of view. To practice letter writing.
Poems (Grades 4-5) This lesson is designed to give students a new or different way to write a poem. It is more structured than just telling students to
write a poem, so some students may find they like this type of poem writing.
The Art of Reading Poetry
In this lesson students go on the Internet to collect poems written by other young
people, then practice expressing the feelings in the poems by reading them aloud. Students discover that the end of a line in poetry doesn't always call for a pause. The poet's thought is the important thing and punctuation is the clue.
Fun with The Alphabet
To review and reinforce the sounds and symbols of the alphabet
Walk in Their Shoes
In this lesson students have a chance to live out this fantasy. First they investigate the lives of some intriguing personalities and make notes about biographical information. Then students write first-person memoirs for
the personalities and read them aloud to the class.
Puppets 'n' Plays
In this lesson students reinforce communication skills, create the puppets, write or improvise dialog for them, and put on a play. This procedure allows a student to put words into the mouth of a character he or she created, which in turn makes the student feel even more secure about being in the puppet play!
The Way They Are
This lesson requires students to use critical thinking and problem-solving skills as they read about the
animals, hypothesize how they might have changed, isolate animal
characteristics, and write stories about new and unusual animals.
It's News to Me!
In this lesson, students learn what the standard sections of a newspaper are. Then students go to the Internet to learn how to create their own online newspaper in the same way more than 120,000 other people have already done! Finally, students prepare a mock-up of a class newspaper, complete with original art and important sections like "What's for Lunch?"
Language - ARTS Elementary (K-5)
Successful Paragraphs (4-6)
Creative writing; multi-author story writing (1-12)
'School News' using writing, speaking and/or questioning skills (3-12)
Whole language experience using "Casey at the Bat" (3-5)
Vocabulary & language concept development (PreK-2)
Descriptive/Persuasive writing 'My Pyramid - Preparing for a Journey"
Increasing vocabulary for primary students (1-3)
Sounding-out CVC words, 'The Blending Slide' (K-1)
Using popcorn to create a reading book (K-3)
Creative Writing; turn on inventiveness with 'Potato Possibilities' (4-6)
Color Code Writing; forming letters and numbers with colors (K-3)
Integrated vocabulary, listening and creative writing exercise (K-12)
Writing - a photo essay (2-4)
Whole language story for developmental activities (K)
Writing 'Auto-Bio' poems (4-12)
Spelling; three great techniques for weekly spelling lists (3-6)
Literature; activity to understand character's personality (K-12)
Learn a topic through research & drawing - Alphabet Book (K-12)
Vocabulary & language comprehension using "Land Before Time"
Reinforcing alphabet names/sounds (K-1)
Whole Language; Oklahoma Indian History: Spiro Mounds (3-6)
Working with syllables using music patterns (2-4)
Inferring Character Traits (all grades)
Reading, Whole Language; Story Pyramid
Adverbily, practicing the use of adverbs (4)
'Busy As A Bee', working with similes (3-6)
Creative thinking, writing, reading & character analysis using
"Frog and Toad are Friends" (2)
Bibliotherapy, studying the perils of prejudice (3-6)
Following oral directions using 'Mystery Pictures' (1-6)
'Read In'; Unique story writing involving two grade levels (K-6)
'Poetry Cubes', develop an appreciation for different styles of poetry
'An Irritating Creature' - poetry lesson (3-4)
Writing activity using fairy tale structure to identify elements of a
short story (4-6)
Enigmas - 'Mysteries in...'; activity to encourage research & creative
thinking skills (4-5)
'Zoo Animal Poetry',activity involving field trip and video (K-3)
'Let Me Tell You About My State', activity involving Amateur Radio
'American Experiences Abroad -- An Interview', activity involving Amateur
Radio services (4-6)
'Just Sandwiches', creative language arts activity (4-9)
Use of literature in SDMPS (3-5)
'Parts of Speech Review', hands-on activity (3-6)
Stories That Grow on Trees", creative writing activity (4-8)
Appropriate Use of Helping Verbs, (3-12)
'Invent A Holiday' (4-6)
'Apples Are A....Peeling', activity filled lesson involving all subject
The Middle Ages and Children's Literature", (Gifted, 2-5)
Language - ARTS Intermediate
Creative writing activity using shopping mall personalities (7-9)
Basic Grammar; review with fun using "Jabberwocky" (7-12)
Writing poems with photographs (6-12)
Vocabulary - unfolding meaning (6-7)
Creative Writing; 'Becoming a Logophile' (4-8)
Activities for descriptive character analysis (K-12)
Reading; learning propaganda techniques through advertisements (5-12)
Activity to stimulate thought and verbal participation of students (4-12)
Learning nursery rhymes through many activities (4-7)
Learning vocabulary words with core curriculum (5-7)
Writing, Poetry: Knowing Ourselves and Others Through Poetry (6-12)
Vocabulary, The Dictionary Game, "Balderdash" (4-12)
Expository Writing, "The Personal Touch" (6-12)
Story Starters, introduction to story telling (all grades)
'What? You want me to read AND enjoy it?' activity to encourage reading
'Horrid Homonyms' - confusing word pairs/homonyms (6-8)
Mass Media - Magazine ads and You, the Teenager (6-12)
'What You See Isn't Always What You Get!', reading comprehension activity
'Cooperation Blocks', practice in effective communications and cooperation
'Review Basketball', learning to use reading material to find information
Password" vocabulary review activity (4-12)
'Make A Statement!', using environmental bumper stickers (6-8)
"The 'Real' Fairy Tales", fun creative writing activity (5-8)
'Paragraph Unity', writing activity (7-9)
'Book Review', pre-writing activity (8-9)
'Reading..Try It, You Might Like It!', activity to enjoy reading (6-7)
'Decimal Search', working with the Dewey Decimal System (4-8)
'Novel Partners', independent reading activity toward a structured whole
class reading activity (5-8)
'Vocabulary Stumpers', activity to increase vocabulary (6-12)
'Adjective? What's An Adjective?' (5-8)
Indexing and writing skills activity (5-12)
Create "Who Did It" mysteries with the computer (5-12)
Reference Book of the Year", fun research/library skills activity,
Language - ARTS High School
Creative writing - writing for fun (9-12)
Increase listening skill activity (9-12)
Literature Review; using knowledge, interpretation & judgement
Writing, Creating a 'Class Directory' (9-12)
'MacBeth' made easy (6-12)
'Junk Mail Explosion' - activity to increase student awareness of
persuasion tactics (7-10)
'Symbols of Language', understanding written communication (6-11)
Introduction to American Literature, creative freewriting activity (11)
Using prominent personalities with identifiable social causes to stimulate
'Map of Ship Trap Island', reading for detail (9)
'Inventions', understanding the relationships between things and
'Write? No Way!', "re-newed" writing activity (7-12)
'Olympic Shadow Boxes', learning to use reference materials (9-12)
Timelines", using research activites to discover history of city,
state and self, (9-12)
Life After the Fact", creative culmination activity after reading
"Lord of the Flies", (9-12)
Family Feud" fun format to review "Romeo and Juliet",
Teaching Shakespeare", a different approach, (9-12)
Spotting Details", creative writing activity, (9-12) | http://www.theteachersguide.com/langarts.html | 13 |
12 | Seneferu was succeeded by his son Khufu, known to the Greeks as Cheops (pronounced Kee-ops), and he built the biggest pyramid of them all. It is 751 feet (229 m) at the base and originally stood 479 feet (146 m) high. Stone robbers have taken stones from the top, leaving it only 446 feet (136 m) high today. So many tourists fell to their deaths or were badly injured attempting to climb the pyramid that today climbing is forbidden.
The work of building this pyramid must have started by leveling the stone base from corner to corner. It appears that there was a natural rise or hump in the middle which was not removed and leveled. Perhaps this was left so that there would be fewer blocks to fit into place. Each of the lower blocks measures about a cube of 3.28 feet (1 m) and weighs approximately 2.5 tons each. Had there not been the hump, the lowest layer would have required over 50,000 heavy squared blocks which came from the limestone quarry less than 0.6 miles (1 km) to the south.
How the building of the pyramid was accomplished and how many workmen were involved is still a matter of conjecture and admiration. Herodotus stated, “The work went on in three-monthly shifts, a hundred thousand men in a shift. It took ten years of this oppressive slave labour to build the track along which the blocks were hauled—a work in my opinion of hardly less magnitude than the pyramid itself, for it is five furlongs in length. . . . To build the pyramid itself took 20 years.”1
Herodotus cannot be regarded as an authority on the matter. He arrived at the scene many centuries after it was all over and was dependent on what the local priests told him, and there is no guarantee that they had it right.
More recent evidence comes from the discovery of a bakery by Mark Lehner south of the pyramid, which he estimated would have been capable of producing enough bread to feed 20,000 men each day. Even that is a lot of people. The problem would have not only been finding such a large work force, but organizing them so that they were not all walking on each other’s toes.
As far as we know, the wheel was not used in Egypt at that time, so Herodotus would have been correct in saying that the blocks were hauled from the quarry to the site. A large number of masons could have worked in the quarry chopping out the stones and roughly squaring them. Examination of the stones visible in the pyramid today reveals that the stones in each layer were carefully trimmed to the same height, but the length and breadth of each stone was rather irregular. It was up to the on-site foremen to fit them into matching places. Lime plaster was poured between many of the blocks to steady them. This debunks the nonsense about all the blocks being poured from liquid lime.
Fitting the lower courses into position would have been relatively simple and fast. They could have been dragged into position from all four sides, but once the edifice rose to a higher level the problems began. Herodotus wrote, “The method employed was to build it in steps, or as some call them, tiers or terraces. When the base was complete, the blocks for the first tier above it were lifted from ground level by contrivances made of short timbers. On this first tier there was another which raised the blocks a stage higher, then yet another which raised them higher still. Each tier or storey had its set of levers.”2
All very well, but we do not know what sort of levers could raise the larger 15-ton blocks into place. A few years ago, some Japanese engineers claimed that they had made some successful levers that could raise blocks of stone weighing two tons, but that did not solve the problem of the 15-ton blocks.
The popular theory is that a ramp was built, up which the stones were dragged. Some suggest that a ramp could have wound in an ascending spiral around the pyramid. At the Temple of Karnak there is a pylon or gateway which has some huge blocks of stone. It is apparent that these were dragged up a ramp made of sun-dried mud bricks because not all the bricks were removed after the job was completed. They are still there to verify the method used, but the length of a ramp to reach the height of the great pyramid of Khufu has been calculated to be in the order of 0.6 miles (1 km) or more. The amount of material needed for such a ramp is staggering, and the question of where all this material went is hard to answer.
The construction of the pyramid was extraordinarily precise. It is precisely level and exactly square with no more than 8 inches (20 cm) difference in length between the sides of the pyramid. The sides are aligned true north, south, east, and west, indicating an advanced knowledge of astronomy and surveying.
The dimensions and geometry of the pyramid are such that if a vertical circle is imagined whose center is the top of the pyramid and radius is the height of the pyramid, the circumference of that circle is exactly the circumference of the base of the pyramid; that is, the sum of the length of the four sides at the base. This feature suggests knowledge of the value of pi, centuries ahead of the Greeks.
The pyramid contains an estimated 2.3 million blocks of stone averaging 2.5 tons in weight each, with the biggest stone weighing a massive 15 tons. We do not know for sure how long it took to build the pyramids. If we accept Herodotus’ report that Cheops’ pyramid took 20 years to build, we can calculate the rate at which the construction stones were put in place. If we assume that the Egyptian builders worked 12 hours per day continuously for 20 years, the 2.3 million blocks would require 26.3 stones to be put in place each hour, or just over 2 minutes to place each block, averaging 2.5 tons accurately in place, many feet above the ground. This feat is truly amazing even by today’s construction standards and suggests a very highly developed knowledge of engineering. If we accept a shorter time period of just two years, in line with the dates given in the Bent Pyramid, we require that one of these huge stones was precisely placed every 13.5 seconds.
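The placement-rate figures quoted above are easy to check; the short Python sketch below simply re-runs the stated assumptions (2.3 million blocks, 12-hour working days, and either a 20-year or a 2-year construction window) and is offered only as an arithmetic check.

```python
# Re-run the block-placement arithmetic under the assumptions stated in the text.
blocks = 2_300_000
hours_per_day = 12

for years in (20, 2):
    working_hours = years * 365 * hours_per_day
    blocks_per_hour = blocks / working_hours
    seconds_per_block = 3600 / blocks_per_hour
    print(f"{years:2d} years: {blocks_per_hour:.1f} blocks/hour, "
          f"one block every {seconds_per_block:.0f} seconds")

# 20 years -> ~26.3 blocks/hour, one block every ~137 s (just over 2 minutes)
#  2 years -> ~262.6 blocks/hour, one block every ~14 s
```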
All this has led to wild speculation about how the pyramids were built, such as the involvement of UFOs etc., but there is no inscriptional or archaeological evidence to support these speculations, which leaves us with the conclusion that we do not know for sure just how this gigantic feat was accomplished. With all our modern inventions and machines, it would still be a challenge to any civil engineer to build such a pyramid today. Instead, we are left to marvel at the ingenuity, craftsmanship, and organizing skill of this wonderful people who lived so long ago. They were certainly not primitive cave men, but rather were highly intelligent and cultured people.
The man who supervised this giant project was Khufu’s nephew, Hemiunu. His statue was found in a chamber of his tomb. It is a magnificent life-sized statue, and depicts him as a solidly built fellow with a copious bosom befitting his rank. Tomb robbers had broken into the tomb at an early date and severed the head and smashed it to retrieve the inlaid eyes. However, archaeologists carefully gathered the pieces, enabling the statue to be restored.
The entrance to this pyramid is on the north side above ground level and it is 26 feet (8 m) off center. This was obviously not due to a miscalculation by the builders. Rather, it was undoubtedly a subtle attempt to thwart the inevitable tomb robbers. They would naturally start their illicit digging from the center, and that is what they did.
The entrance used by tourists today is a devious tunnel which was cut through the stones and finally connected with the ascending passage. The man responsible for this entrance, which was constructed about 1,100 years ago, was a Turkish governor called Mamun, who was apparently hoping to find treasures in the tomb chamber. However, we do not know if he was successful or not.
As the original pyramid builders anticipated, Mamun’s men started digging through the center of the pyramid and might have gone clean through it and out the other side without finding anything, except for a piece of luck. It appears that as the workmen hammered away with their picks they dislodged the stone which sealed the entrance to the ascending passage. Its crash to the floor of the access tunnel alerted them to the presence of this passage, and they changed direction to link up with this ascending passage and thence into the body of the pyramid.
The entire structure of the pyramid was finally clad with huge blocks of shining white Tura limestone brought from the Maqqatam Quarry, 7.5 miles (12 km) across the other side of the Nile. These blocks had to be dragged to the river, floated across, and hauled to the building site. Most of these stones have been stripped off by local builders in the not-too-distant past, leaving the inner stones exposed.
From the true entrance, a passage descends into bedrock to a tomb chamber which was never completed. It was unlikely to have been intended as the final resting place of the king, because it was not even within the pyramid they took so much trouble to build. It was more likely a blind to fool tomb robbers into thinking that there was nothing of value to be stolen.
Deviating from the roof of this descending passage was an ascending passage. It was plugged with huge blocks of stone which had been slid down from above to prevent anyone entering. At the same time, it would not have been easily visible to anyone going down the descending passage. Halfway to the tomb chamber, this passage opens out into an ascending gallery which has corbeled walls. Each layer of stone was placed a little farther inward to reduce the span of stones on the ceiling of this gallery, an ingenious device.
Where the ascending passage meets the gallery, a horizontal passage branches off to the center of the pyramid to what has become known as the “Queen’s Tomb Chamber.” There is no evidence to support the idea that the queen was to have been buried here. It was more likely to have been for the installation of a statue of a god, or of the king himself. This tomb chamber also was left unfinished.
From the side walls of this chamber, two small passages penetrate the pyramid but do not reach the outside of the pyramid, and their purpose is not known. In 1993, Dr. Rudolph Gantenbrink, an expert in robots, was given permission to send a small robot up the 7.8-inch (200 mm) square left-hand passage to investigate it. The robot was fitted with a miniature camera which transmitted pictures back to the scientists. Gantenbrink claimed that this camera revealed that there was a portcullis stone door (one that slides up and down rather than swinging open) at the top of the passage, and in this door were two copper handles. In 2002, pyramid researchers were given permission to drill through this door and insert a miniature camera only to find another stone door or plug a few hundred millimeters behind it. At the time of writing, these tunnels still have not been explored or their purpose in the structure of the pyramid understood.
Also at the junction of the ascending passage and the ascending gallery there is a rough shaft that goes down to join the top of the descending gallery. Apparently, after the king had been buried in his tomb chamber, workmen slid some huge blocks of stone down the ascending passage to block any future entrance from the descending passage, but that would have left them entombed in the pyramid. This rough passage would have enabled them to make their escape.
At the top of the ascending gallery, a low passage enters the king’s tomb chamber. The huge granite blocks lining this chamber weigh up to 30 tons each and are so perfectly squared and fitted together that it has been estimated that there is only an average gap of half a millimeter between them. We can only marvel at the skill of the masons who achieved this perfection with the copper and stone tools available to them.
Above this chamber are five ceilings of granite blocks, one above the other, with cavities in between. A workman had scribbled Khufu’s name in one of these cavities. The top one has a gable roof to divert the enormous weight of the stones above it. All of the slabs of granite forming the immediate ceiling of the tomb chamber are cracked, but there seems to be no danger of collapse.
At the end of this tomb chamber is a sarcophagus which is empty. It has been broken on one corner, possibly when thieves prized off the lid, which is missing. This sarcophagus must have been installed there as the pyramid was being built because it is slightly higher than the opening from the ascending gallery into the tomb chamber.
Two small passages were also made in the sides of this tomb chamber, and they go right to the outside of the pyramid. They are too small for anyone to climb through and too insignificant to allow fresh air to enter the chamber. They most likely had ritualistic significance for allowing the king’s ba to leave the tomb chamber each morning and return at sunset.
Whatever the original idea, one of these so-called vents now serves a very useful purpose. The thousands of tourists milling through the pyramid each day used to make the air insufferable. However, now an electric exhaust fan has been installed in the south vent, pumping out the bad air and sucking fresh air into the passages and tomb chamber.
Besides these three tomb chambers already described, there seem to be other cavities. In 1986, French scientists used stone scanning equipment on the pyramid and discovered three gaps beyond the west wall of the passage leading to the “Queen’s Tomb Chamber.” They drilled three holes through the wall of the passage and broke into a cavity filled with sand. Beyond that was more stone and then the cavity their scanning equipment had found. It was about 10 feet (3 m) long, 6.5 feet (2 m) wide, and 6.5 feet (2 m) high. A TV lens was inserted and the breathless scientists waited for an image to show up. Who knew what fabulous treasure might be hidden within. However, the monitor picture finally showed that the cavity was completely empty. The mystery of the empty chambers is still puzzling scientists.
The solution may lie in the construction method for the pyramid. The builders may have saved themselves some stone by leaving gaps bridged by larger stones, or cavities filled with sand, which would be simpler to provide than stone. Who knows how many other such laborsaving devices may be scattered through this huge monument.
On the east side of Khufu’s pyramid was a mortuary temple with a causeway down to the valley. The causeway has now gone and so has most of the temple. Only the black basalt floor remains.
The only statue of Khufu that has ever been found was a small ivory statue that came to light at Abydos. Sir Flinders Petrie was excavating there when his men found the body of this statue. Never one to give up easily, Petrie set his men to work sieving for the small head he felt sure must be there somewhere. It took three weeks of arduous work until the coveted head was found. The reassembled statue is now in the Cairo Museum.
On the east side of the great pyramid are three smaller pyramids. There are no inscriptions in them to identify their owners, but it is usually assumed that the two southern ones belong to Khufu’s queens, Meritites and Henutsen. Some scholars feel that the third one may have been for his mother Hetepheres because her burial shaft is just to the north of this pyramid, but it would be rather strange for her to have a pyramid and a burial shaft at a distance from the pyramid.
There is more than one mystery connected with the burial of Hetepheres. It would be reasonable to suppose that she would have been buried with her husband, Seneferu, at Dahshur, but in 1925 George Reisner’s photographer was setting up his camera on the east side of the pyramid when he uncovered a patch of plaster under the sand. When the plaster was removed, they found steps leading down into a burial shaft. The shaft was filled with blocks of stone set in plaster, indicating that the tomb beneath must have been undisturbed. Eighty-two feet (25m) down they found stone blocks plastered together.
Under this course of masonry they found a tomb chamber filled with fabulous treasures, one of which bore the name of Hetepheres. It took many months to remove, preserve, and catalogue all these valuables, but at last, on March 3, 1927, the dramatic moment came when they opened the sarcophagus. As the lid rose, those present eagerly leaned forward for their first glimpse of the golden coffin they expected to find beneath. There was a gasp of surprise when they realized that the sarcophagus was empty.
Why had all these funeral treasures been carefully buried when there was no body? That question has never been satisfactorily answered. Reisner speculated that Hetepheres had originally been buried at Dahshur, but when grave robbers started their depredations, Khufu had given orders for his mother to be reburied near his great pyramid. Perhaps the body had already been stolen, and the officials, fearing to inform the king of the tragedy, had gone ahead with the burial anyway. Rather unlikely, but what is the alternative explanation? Mark Lehner suggested that it had been reburied in the nearby pyramid when it was built, but perhaps we will never know the answer for sure.
The Egyptian belief in the afterlife required a funeral boat to be buried with the deceased. It is not certain what function this boat was supposed to perform. Perhaps it was a solar boat to take the ba to the heavenly abode. Perhaps it was to ferry the ba in joy rides up the Nile, or perhaps to take it to the sacred city of Abydos. Most Pharaohs were content to have miniature boats, but Khufu, who always did things on a grand scale, had six huge boats associated with his pyramid.
There is a boat pit about 144 feet (44 m) in length on the southeast side of his pyramid. It is in the shape of a boat and undoubtedly there was an assembled boat buried there. It has long since disappeared, probably taken for firewood by local peasants thousands of years ago.
There are two smaller boat pits of similar shape next to the so-called queens’ pyramids. These pits also are empty, their funeral boats having suffered the same fate as Khufu’s large boat.
In 1954, a spectacular discovery was made. South of the pyramid were huge heaps of rubble 65 feet (20 m) high that had been left there by archaeologists who had been excavating the surrounding area. They thought that the flat area beside the pyramid would be a suitable place to dump the rubble. It was decided to clear the area, and so the work was begun under Kamal el-Malakh. When the workmen got down to the level of the pavement made of stone blocks 1.5 feet (0.5 m) thick, they uncovered the foundations of a wall which had originally been 6.5 feet (2m) high encircling the pyramid. But Malakh noticed that the wall on this side of the pyramid was closer to the pyramid than it was on the other three sides, and he suspected that it may have been deliberately placed there to hide something.
With a sharp stick he started probing the pavement. Sure enough, he exposed some pink lime mortar that seemed to outline the shape of a pit, and he ordered the paving blocks to be removed. This was no easy task. The blocks were securely fixed in place with mortar, and had to be chiseled apart. Knowing there might be some priceless treasure beneath, great care had to be exercised lest a heavy block collapse into the pit, destroying the contents.
On May 26, the work was begun, and when it became possible to peer into the pit, Kamal was excited to find that it contained the components of a complete funeral boat. Even the wood and the ropes were in remarkably good condition after being buried for thousands of years.
Then followed the even more exacting task of removing the ancient items. There were 651 separate pieces, and the amazing thing was that, although there were no missing members, the boat was not assembled, but stacked and tied in neat bundles. The beams of the ship were of cedars of Lebanon and were up to 75 feet (23 m) in length, and the ship when reassembled would be 148 feet (45 m) long. It was the oldest, largest, and best-preserved ancient boat ever discovered. The last item was removed from the pit in late June 1957.
The task of reassembling such an ancient ship of unknown shape and design was obviously not going to be easy. The job was assigned to Ahmed Moustafa, the Cairo Museum’s official restorer.
Moustafa took pride in his work, and was meticulous in his approach. He first studied all the known tomb paintings and reliefs for clues as to the nature of early boats, and then made scale models 1:10 of every item taken out of the pit. He then experimented with assembling the model ship until he was satisfied that he was following the original plan. Only then did he try assembling the actual boat. At last, in 1974, the boat stood proudly in its original glory.
It was a remarkable piece of workmanship by any standard. Apart from a few copper staples, the whole craft consisted of wood lashed together by rope, but so expertly that when immersed in water, the beams would swell to make the craft watertight. There were five pairs of oars up to 26 feet (8m) long, and when it is considered that all this work was done before the invention of pulleys, block and tackle, or even wheels, we are obliged to acknowledge the skill and intelligence of these ancient artisans. Actually, Herodotus had described in great detail how the Egyptians had made their boats. It was found that his account, written 2,500 years ago, corresponded very exactly with what was found in the pit.
The intriguing question that has engaged archaeologists is the original purpose of this craft. It is speculated that the Egyptians had a concept of the ba of the king being ferried across the water to the future life, or up and down the Nile; of a ship required by the sun god to traverse sky and land, but all these theories seem inadequate to explain why the ship was not assembled. Even if it only had ceremonial significance, one would think that an assembled ship would be needed to fulfill even a ceremonial concept.
Perhaps the answer is to be found in the observation by Moustafa that some of the beams display marks of ropes, suggesting that the ship had been assembled, and perhaps used just once, and then dismantled and buried. Possibly this was the craft used to ferry the king’s mummy from the palace at Memphis, 19 miles (30 km) to the south, to the site of the burial, and then the ship was buried in the area in much the same way as we may place flowers on a grave. It was known at the time of the original discovery that there was another pit next to the first pit. This other pit was opened in October 1990 and is in the process of being exhumed, but why two boats side by side?
Five boats had been accounted for, but in 1984 another came to light. Authorities were concerned at the erosion of the monuments in Egypt, and atmospheric pollution was a likely cause, so it was decided to reduce traffic near the pyramid by demolishing the road that ran between Khufu’s pyramid and the queens’ pyramids. When that was done, another large boat pit was exposed, making six altogether.
To the southeast of the big pyramid is a massive stone wall. The gateway through this wall has some huge stone slabs, 26 feet (8 m) in length, spanning overhead. Passing through this gateway is a path that leads to some recently discovered tombs. They turned out to be the graves of some of the officers who supervised the building of the pyramids at Giza. The Egyptian Archaeological Mission found some 20 tombs belonging to the men who worked on building the great pyramids of Giza. The tombs were made of sun-dried mud bricks. Inside the tombs they found a number of pottery objects and six skeletons dating back to the 4th Dynasty, in which the great pyramids were built.
Dr. Zahi Hawass, director of the Giza Antiquities, said that the tombs were of a special architectural style. The skeletons had been analyzed, and some of them had been surgically operated on. Apparently, the operations on the feet had been successful, as the bones had recovered from the operation. One tomb was surmounted by a miniature pyramid. This was significant, as it was previously thought that pyramids were the sole prerogative of the pharaohs. However, this pyramid seems to have been sanctioned by the king.
There are differences of opinion about how long Khufu reigned. Some say 21 years, others 41 years. According to Herodotus, “Cheops (to continue the account which the priests gave me) brought the country into all sorts of misery. He closed the temples, then, not content with excluding his subjects from the practice of their religion, compelled them without exception to labour as slaves for his own advantage.”3
This report need not be taken too seriously. It was only what the priests told him centuries after Khufu lived, and who can say whether they were telling the truth as they believed it or whether they were deliberately trying to mislead this intruder into their country? All we can say is that Herodotus was a good journalist. He simply reported what was told to him. Whether he believed it or not is not the point. We can certainly doubt the veracity of his next statement.
He continues, “No crime was too great for Cheops. When he was short of money, he sent his daughter to a bawdyhouse with instructions to charge a certain sum—they did not tell me how much. This she actually did, adding to it a further transaction of her own; for with the intention of leaving something to be remembered by after her death, she asked each of her customers to give her a block of stone, and of these stones (the story goes) was built the middle pyramid of the three which stand in front of the great pyramid.”4
Nobody in their right mind could conceive of a king of Egypt selling off his daughter like that, no matter how unscrupulous he was. So how accurate is this story that the priests told Herodotus?
These stories surrounding Great Pyramid of Khufu epitomize the mysteries and difficulties facing archaeologists and historians who try to piece together the history of the pyramids and their ancient builders.
| http://www.answersingenesis.org/articles/utp/khufu-built-the-big-one | 13
11 | at the Johns Hopkins University Applied Physics Laboratory
(APL), Laurel, Md.
designed and built a spacecraft called Near Earth Asteroid
Rendezvous (NEAR) Shoemaker. The spacecraft was sent into
orbit around an asteroid called 433 Eros.
spacecraft was launched Feb. 17, 1996, from Cape Canaveral,
Fla. It went into orbit around Eros on Feb. 14, 2000. At the
end of the mission, it landed on Eros on Feb. 12, 2001.
mission was to study what asteroid Eros is made of and to
learn more about the many asteroids, comets and meteors that
come close to Earth. Scientists also hope to learn more about
how the planets were formed.
NEAR Shoemaker is the first spacecraft
ever to orbit an asteroid and the first to land on one. NEAR
was the first mission in NASA's Discovery Program to study
the planets and other objects in the solar system.
Asteroids are small bodies
without atmospheres that orbit the sun but are too small to
be called planets.
Asteroid 433 Eros is the
shape of a potato and measures 8 by 8 by 21 miles. Its gravity
is so weak that a 100-pound person would weigh only 1 ounce.
If you threw a baseball faster than 22 miles per hour from
its surface, the ball would escape into space and never come back.
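For readers who want to see where numbers like these come from, the sketch below reproduces them from published estimates of Eros' mass (about 6.7 × 10^15 kg) and mean radius (roughly 8.4 km). Neither value appears in this article, so treat the result as an order-of-magnitude check only.

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
mass = 6.7e15   # kg, approximate mass of 433 Eros (assumed value)
radius = 8.4e3  # m, rough mean radius (assumed; Eros is far from spherical)

g_eros = G * mass / radius**2                 # surface gravity, m/s^2
v_escape = math.sqrt(2 * G * mass / radius)   # escape speed, m/s

weight_ratio = g_eros / 9.81                  # compare with Earth's surface gravity
print(f"surface gravity ~ {g_eros:.4f} m/s^2")
print(f"100 lb on Earth ~ {100 * weight_ratio * 16:.1f} oz on Eros")
print(f"escape speed    ~ {v_escape * 2.237:.0f} mph")   # ~23 mph, close to the 22 mph quoted
```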
During its 5-year mission,
the NEAR Shoemaker spacecraft traveled 2 billion miles and
took 160,000 pictures of Eros.
NEAR Shoemaker spacecraft orbits
asteroid 433 Eros. | http://www.jhuapl.edu/education/elementary/newspapercourse/storyscenarios/mission.htm | 13 |
26 | Measurement
- Use the
- Know the SI base units.
- State rough equivalents for the SI base units in the English system.
- Read and write the symbols for SI units.
- Recognize unit prefixes and their abbreviations.
- Build derived units from the basic units for mass, length, temperature, and time.
- Convert measurements from SI units to English, and from one prefixed unit to another.
- Use derived units like density and speed as conversion factors.
- Use percentages, parts per thousand, and parts per million as conversion factors.
- Use and report measurements carefully.
- Consider the reliability of a measurement in decisions based on measurements.
- Clearly distinguish between
- Count the number of significant figures in a recorded measurement.
Record measurements to the correct number of digits.
- Estimate the number of significant digits in a calculated result.
- Estimate the precision of a measurement by computing a standard deviation.
Measurement is the collection of quantitative data. The proper handling and reporting
of measurements are essential in chemistry - and in any scientific endeavour.
To use measurements correctly, you must recognize that measurements are not
numbers. They always contain a unit and some inherent error.
The second lecture focuses on an international system of units (the SI system)
and introduces unit conversion. In the third lecture, we'll discuss ways to recognize, estimate and report the errors that are always present in measurements.
- quantitative observations
- include 3 pieces of information: a number, a unit, and an estimate of reliability
- measurements are not numbers
- numbers are obtained by counting or by definition; measurements are obtained by comparing an object with a standard "unit"
- numbers are exact; measurements are inexact
- mathematics is based on numbers; science is based on measurement
The National Institute of Standards and Technology (NIST) has published several online guides for users of the SI system.
The SI System
- Le Systéme Internationale (SI) is a set of units and notations that are standard in science.
Four important SI base units
(there are others)
|length ||meter (m) ||1 m = 39.36 in
|mass ||kilogram (kg) ||1 kg = 2.2 lbs
|time ||second (s) ||(the second is common to both systems)
|temperature ||kelvin (K) ||°F = 1.8(°C) + 32; K = °C + 273.15
- derived units are built from base units
Some SI derived units
|area ||length × length
|force ||mass × acceleration
|work, energy, heat ||force × distance
Prefixes are used to adjust the size of base units
Commonly used SI prefixes (there are others).
- several non-SI units are encountered in chemistry
|Non-SI unit ||Definition ||Notes
|liter (L) ||1 L = 1000 cm^3 ||1 quart = 0.946 L
|angstrom (Å) ||1 Å = 10^-10 m ||typical radius of an atom
|atomic mass unit (u) ||1 u = 1.66054×10^-27 kg ||about the mass of a proton or neutron; also known as a 'dalton' or 'amu'
Arithmetic with units
- addition and subtraction: units don't change
2 kg + 3 kg = 5 kg
412 m - 12 m = 400 m
- consequence: units must be the same before adding or subtracting!
3.001 kg + 112 g = 3.001 kg + 0.112 kg = 3.113 kg
4.314 Gm - 2 Mm = 4.314 Gm - 0.002 Gm = 4.312 Gm
- multiplication and division: units multiply & divide too
3 m × 3 m = 9 m^2
10 kg × 9.8 m/s^2 = 98 kg m/s^2
- consequence: units may cancel
5 g / 10 g = 0.5 (no units!)
10.00 m/s × 39.37 in/m = 393.7 in/s
- 5 step plan for converting units
- identify the unknown, including units
- choose a starting point
- list the connecting conversion factors
- multiply starting measurement by conversion factors
- check the result: does the answer make sense?
- Common variations
- series of conversions
Americium (Am) is extremely toxic; 0.02 micrograms is the allowable body burden in bone. How many ounces of Am is this? (A worked sketch follows this list.)
- converting powers of units
- converting compound units
- starting point must be constructed
- using derived units as conversion factors
- mass fractions (percent, ppt, ppm) convert mass of sample into mass of component
- density converts mass of a substance to volume
- velocity converts distance traveled to time required
- concentration converts volume of solution to mass of solute
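As a worked illustration of the five-step plan, the sketch below carries out the americium conversion from the example above. The conversion factors (1 µg = 10^-6 g; 1 oz = 28.3495 g) are standard values supplied here, not taken from the page.

```python
# Convert 0.02 micrograms of americium to ounces by chaining conversion factors.
burden_ug = 0.02          # step 2: starting point, allowable body burden in micrograms

ug_to_g = 1e-6            # step 3: connecting factors -- 1 microgram = 1e-6 g
g_to_oz = 1 / 28.3495     #                               1 avoirdupois ounce = 28.3495 g

burden_oz = burden_ug * ug_to_g * g_to_oz   # step 4: units cancel: ug x (g/ug) x (oz/g) = oz
print(f"{burden_oz:.1e} oz")                # step 5: ~7.1e-10 oz -- a sensibly tiny number
```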
Uncertainty in Measurements
- making a measurement usually involves comparison with a unit or a scale of units
- always read between the lines!
- the digit read between the lines is always uncertain
- convention: read to 1/10 of the distance between the smallest scale divisions
- significant digits
- definition: all digits up to and including the first uncertain digit.
- the more significant digits, the more reproducible the measurement is.
counts and defined numbers are exact- they have no uncertain digits!
Tutorial: Uncertainty in Measurement
- counting significant digits in a series of measurements
- compute the average
- identify the first uncertain digit
- round the average so the last digit is the first uncertain digit
counting significant digits in a single measurement
- convert to exponential notation
- disappearing zeros just hold the decimal point- they aren't significant.
- exception: zeros at the end of a whole number
might be significant
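The counting rules above can be turned into a few lines of code. The sketch below is only an illustration: the function name is made up, and ambiguous trailing zeros in a whole number are treated as not significant, matching the 'exception' noted above.

```python
def count_sig_figs(measurement: str) -> int:
    """Count significant digits in a measurement written as a string."""
    s = measurement.strip().lstrip('+-').lower()
    if 'e' in s:                                  # exponential notation: the mantissa carries the digits
        s = s.split('e')[0]
    if '.' in s:
        digits = s.replace('.', '').lstrip('0')   # leading zeros only hold the decimal point
    else:
        digits = s.lstrip('0').rstrip('0')        # ambiguous trailing zeros treated as placeholders
    return len(digits) if digits else 0

print(count_sig_figs("0.00530"))   # 3 -- leading zeros don't count; the trailing zero does
print(count_sig_figs("1200"))      # 2 -- trailing zeros of a whole number treated as not significant
print(count_sig_figs("1.200e3"))   # 4 -- exponential notation removes the ambiguity
```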
- Precision of Calculated Results
- calculated results are never more reliable than the measurements they are built from
- multistep calculations: never round intermediate results!
- sums and differences: round result to the same number of
fraction digits as the poorest measurement
- products and quotients: round result to the same number of
significant digits as the poorest measurement.
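A short numerical example of the two rounding rules, using made-up measurements; note that the full, unrounded sum is carried into the division, and rounding happens only at the end.

```python
# Sum rule: keep the same number of decimal places as the poorest measurement.
mass_g = 2.3056 + 0.12        # poorest term has 2 decimal places -> report 2.43 g
volume_mL = 1.76              # 3 significant digits

# Quotient rule: keep the same number of significant digits as the poorest measurement.
density = mass_g / volume_mL  # intermediate value is NOT rounded before dividing

print(f"mass    = {mass_g:.2f} g")       # 2.43 g
print(f"density = {density:.3g} g/mL")   # 1.38 g/mL (3 significant digits)
```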
Using Significant Figures
- Precision vs. Accuracy
(Figure: four archery-target diagrams illustrating good precision & good accuracy; poor accuracy but good precision; good accuracy but poor precision; poor precision & poor accuracy)
|Precision ||Accuracy
|check by repeating measurements ||check by using a different method
|poor precision results from poor technique ||poor accuracy results from procedural or equipment flaws
|poor precision is associated with 'random errors' - error has random sign and varying magnitude. Small errors more likely than large errors. ||poor accuracy is associated with 'systematic errors' - error has a reproducible sign and magnitude.
- Estimating Precision
- Consider these two methods for computing scores in archery competitions. Which is fairer?
Score by distance from bullseye
Score by area of target
- The standard deviation, s, is a precision estimate based on the area score:
s = sqrt[ Σ(xi - x̄)^2 / (N - 1) ]
where xi is the i-th measurement, x̄ is the average measurement, and N is the number of measurements.
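A minimal Python sketch of this estimate, using five made-up replicate readings; it follows the usual sample standard deviation with an N − 1 denominator.

```python
import math

measurements = [3.15, 3.12, 3.18, 3.14, 3.16]   # hypothetical replicate readings

n = len(measurements)
mean = sum(measurements) / n
s = math.sqrt(sum((x - mean) ** 2 for x in measurements) / (n - 1))

print(f"mean = {mean:.3f}")   # 3.150
print(f"s    = {s:.3f}")      # ~0.022 -> the hundredths digit is the first uncertain digit
```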
| http://antoine.frostburg.edu/chem/senese/101/measurement/ | 13
53 | History of geodesy
Geodesy (/dʒiːˈɒdɨsi/), also named geodetics, is the scientific discipline that deals with the measurement and representation of the Earth.
Humanity has always been interested in the Earth. During very early times this interest was limited, naturally, to the immediate vicinity of home and residency, and the fact that we live on a near spherical globe may or may not have been apparent. As humanity developed, so did its interest in understanding and mapping the size, shape, and composition of the Earth.
Early ideas about the figure of the Earth held the Earth to be flat (see flat earth), and the heavens a physical dome spanning over it. Two early arguments for a spherical earth were that lunar eclipses were seen as circular shadows which could only be caused by a spherical Earth, and that Polaris is seen lower in the sky as one travels South.
The early Greeks, in their speculation and theorizing, ranged from the flat disc advocated by Homer to the spherical body postulated by Pythagoras — an idea supported later by Aristotle. Pythagoras was a mathematician and to him the most perfect figure was a sphere. He reasoned that the gods would create a perfect figure and therefore the earth was created to be spherical in shape. Anaximenes, an early Greek scientist, believed strongly that the earth was rectangular in shape.
Since the spherical shape was the most widely supported during the Greek Era, efforts to determine its size followed. Plato determined the circumference of the earth to be 400,000 stadia (between 62,800 km/39,250 mi and 74,000 km/46,250 mi ) while Archimedes estimated 300,000 stadia ( 55,500 kilometres/34,687 miles ), using the Hellenic stadion which scholars generally take to be 185 meters or 1/10 of a geographical mile. Plato's figure was a guess and Archimedes' a more conservative approximation.
In Egypt, a Greek scholar and philosopher, Eratosthenes (276 BC– 195 BC), is said to have made more explicit measurements. He had heard that on the longest day of the summer solstice, the midday sun shone to the bottom of a well in the town of Syene (Aswan). At the same time, he observed the sun was not directly overhead at Alexandria; instead, it cast a shadow with the vertical equal to 1/50th of a circle (7° 12'). To these observations, Eratosthenes applied certain "known" facts (1) that on the day of the summer solstice, the midday sun was directly over the Tropic of Cancer; (2) Syene was on this tropic; (3) Alexandria and Syene lay on a direct north-south line; (4) The sun was a relatively long way away (Astronomical unit). Legend has it that he had someone walk from Alexandria to Syene to measure the distance: that came out to be equal to 5000 stadia or (at the usual Hellenic 185 meters per stadion) about 925 kilometres.
From these observations, measurements, and/or "known" facts, Eratosthenes concluded that, since the angular deviation of the sun from the vertical direction at Alexandria was also the angle of the subtended arc (see illustration), the linear distance between Alexandria and Syene was 1/50 of the circumference of the Earth which thus must be 50×5000 = 250,000 stadia or probably 25,000 geographical miles. The circumference of the Earth is 24,902 miles (40,075.16 km). Over the poles it is more precisely 40,008 km or 24,860 statute miles. The actual unit of measure used by Eratosthenes was the stadion. No one knows for sure what his stadion equals in modern units, but some say that it was the Hellenic 185-meter stadion.
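Eratosthenes' arithmetic is easy to replay; a quick Python sketch, assuming the 185-metre Hellenic stadion mentioned above:
angle_deg = 7.2        # the shadow angle at Alexandria, 1/50 of a circle
arc_stadia = 5000      # reported Alexandria-Syene distance
stadion_m = 185        # the commonly assumed Hellenic stadion

circumference_stadia = 360 / angle_deg * arc_stadia          # 250,000 stadia
circumference_km = circumference_stadia * stadion_m / 1000   # 46,250 km
print(circumference_stadia, circumference_km)
# compare the modern polar circumference of about 40,008 km: roughly one sixth too high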
Had the experiment been carried out as described, it would not be remarkable if it agreed with actuality. What is remarkable is that the result was probably about one sixth too high. His measurements were subject to several inaccuracies: (1) though at the summer solstice the noon sun is overhead at the Tropic of Cancer, Syene was not exactly on the tropic (which was at 23° 43' latitude in that day) but about 22 geographical miles to the north; (2) the difference of latitude between Alexandria (31.2 degrees north latitude) and Syene (24.1 degrees) is really 7.1 degrees rather than the perhaps rounded (1/50 of a circle) value of 7° 12' that Eratosthenes used; (4) the actual solstice zenith distance of the noon sun at Alexandria was 31° 12' − 23° 43' = 7° 29' or about 1/48 of a circle not 1/50 = 7° 12', an error closely consistent with use of a vertical gnomon which fixes not the sun's center but the solar upper limb 16' higher; (5) the most importantly flawed element, whether he measured or adopted it, was the latitudinal distance from Alexandria to Syene (or the true Tropic somewhat further south) which he appears to have overestimated by a factor that relates to most of the error in his resulting circumference of the earth.
A parallel later ancient measurement of the size of the earth was made by another Greek scholar, Posidonius. He is said to have noted that the star Canopus was hidden from view in most parts of Greece but that it just grazed the horizon at Rhodes. Posidonius is supposed to have measured the elevation of Canopus at Alexandria and determined that the angle was 1/48th of circle. He assumed the distance from Alexandria to Rhodes to be 5000 stadia, and so he computed the Earth's circumference in stadia as 48 times 5000 = 240,000. Some scholars see these results as luckily semi-accurate due to cancellation of errors. But since the Canopus observations are both mistaken by over a degree, the "experiment" may be not much more than a recycling of Eratosthenes's numbers, while altering 1/50 to the correct 1/48 of a circle. Later either he or a follower appears to have altered the base distance to agree with Eratosthenes's Alexandria-to-Rhodes figure of 3750 stadia since Posidonius's final circumference was 180,000 stadia, which equals 48×3750 stadia. The 180,000 stadia circumference of Posidonius is suspiciously close to that which results from another method of measuring the earth, by timing ocean sun-sets from different heights, a method which produces a size of the earth too low by a factor of 5/6, due to horizontal refraction.
The abovementioned larger and smaller sizes of the earth were those used by Claudius Ptolemy at different times, 252,000 stadia in the Almagest and 180,000 stadia in the later Geographical Directory. His midcareer conversion resulted in the latter work's systematic exaggeration of degree longitudes in the Mediterranean by a factor close to the ratio of the two seriously differing sizes discussed here, which indicates that the conventional size of the earth was what changed, not the stadion.
The Indian mathematician Aryabhata (AD 476 - 550) was a pioneer of mathematical astronomy. He describes the earth as being spherical and that it rotates on its axis, among other things in his work Āryabhaṭīya. Aryabhatiya is divided into four sections. Gitika, Ganitha (mathematics), Kalakriya (reckoning of time) and Gola (celestial sphere). The discovery that the earth rotates on its own axis from west to east is described in Aryabhatiya ( Gitika 3,6; Kalakriya 5; Gola 9,10;). For example he explained the apparent motion of heavenly bodies is only an illusion (Gola 9), with the following simile;
- Just as a passenger in a boat moving downstream sees the stationary (trees on the river banks) as traversing upstream, so does an observer on earth see the fixed stars as moving towards the west at exactly the same speed (at which the earth moves from west to east.)
Aryabhatiya also estimates the circumference of Earth, with an accuracy of 1%, which is remarkable. Aryabhata gives the radii of the orbits of the planets in terms of the Earth-Sun distance as essentially their periods of rotation around the Sun. He also gave the correct explanation of lunar and solar eclipses and that the Moon shines by reflecting sunlight.
The Muslim scholars, who held to the spherical Earth theory, used it to calculate the distance and direction from any given point on the earth to Mecca. This determined the Qibla, or Muslim direction of prayer. Muslim mathematicians developed spherical trigonometry which was used in these calculations.
Around AD 830 Caliph al-Ma'mun commissioned a group of astronomers to measure the distance from Tadmur (Palmyra) to al-Raqqah, in modern Syria. They found the cities to be separated by one degree of latitude and the distance between them to be 66⅔ miles and thus calculated the Earth's circumference to be 24,000 miles. Another estimate given was 56⅔ Arabic miles per degree, which corresponds to 111.8 km per degree and a circumference of 40,248 km, very close to the currently modern values of 111.3 km per degree and 40,068 km circumference, respectively.
Muslim astronomers and geographers were aware of magnetic declination by the 15th century, when the Egyptian Muslim astronomer 'Abd al-'Aziz al-Wafa'i (d. 1469/1471) measured it as 7 degrees from Cairo.
Of the medieval Persian Abu Rayhan Biruni (973-1048) it is said:
"Important contributions to geodesy and geography were also made by Biruni. He introduced techniques to measure the earth and distances on it using triangulation. He found the radius of the earth to be 6339.6 km, a value not obtained in the West until the 16th century. His Masudic canon contains a table giving the coordinates of six hundred places, almost all of which he had direct knowledge."
At the age of 17, Biruni calculated the latitude of Kath, Khwarazm, using the maximum altitude of the Sun. Biruni also solved a complex geodesic equation in order to accurately compute the Earth's circumference, which were close to modern values of the Earth's circumference. His estimate of 6,339.9 km for the Earth radius was only 16.8 km less than the modern value of 6,356.7 km. In contrast to his predecessors who measured the Earth's circumference by sighting the Sun simultaneously from two different locations, Biruni developed a new method of using trigonometric calculations based on the angle between a plain and mountain top which yielded more accurate measurements of the Earth's circumference and made it possible for it to be measured by a single person from a single location. Abu Rayhan Biruni's method was intended to avoid "walking across hot, dusty deserts" and the idea came to him when he was on top of a tall mountain in India (present day Pind Dadan Khan, Pakistan). From the top of the mountain, he sighted the dip angle which, along with the mountain's height (which he calculated beforehand), he applied to the law of sines formula. This was the earliest known use of dip angle and the earliest practical use of the law of sines. He also made use of algebra to formulate trigonometric equations and used the astrolabe to measure angles. His method can be summarized as follows:
He first calculated the height of the mountain by going to two points at sea level with a known distance apart and then measuring the angle between the plain and the top of the mountain for both points. He made both measurements using an astrolabe. He then used the following trigonometric formula relating the distance (d) between both points with the tangents of their angles (θ1 and θ2) to determine the height (h) of the mountain; in modern notation:
h = d · tan θ1 · tan θ2 / (tan θ2 − tan θ1)
He then stood at the highest point of the mountain, where he measured the dip angle using an astrolabe. He applied the values he obtained for the dip angle and the mountain's height to the following trigonometric formula in order to calculate the Earth's radius:
R = h · cos θ / (1 − cos θ)
- R = Earth radius
- h = height of mountain
- θ = dip angle
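A small Python sketch of Biruni's two-step method using the formulas above; the field numbers are illustrative (chosen only to be self-consistent), since his actual measurements are not given in the text:
from math import tan, cos, radians

def earth_radius_biruni(d, theta1_deg, theta2_deg, dip_deg):
    # Step 1: mountain height from two elevation angles measured a distance d apart on flat ground.
    t1, t2 = tan(radians(theta1_deg)), tan(radians(theta2_deg))
    h = d * t1 * t2 / (t2 - t1)
    # Step 2: Earth's radius from the dip of the horizon seen from the summit.
    c = cos(radians(dip_deg))
    return h * c / (1 - c)

# Two sightings 1000 m apart give a roughly 1428 m hill; its horizon dips by about 1.213 degrees.
print(earth_radius_biruni(d=1000.0, theta1_deg=14.1, theta2_deg=16.95, dip_deg=1.213))
# ~6.37e6 m, close to the modern mean radius of about 6.371e6 m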
Biruni had also, by the age of 22, written a study of map projections, Cartography, which included a method for projecting a hemisphere on a plane. Around 1025, Biruni was the first to describe a polar equi-azimuthal equidistant projection of the celestial sphere. He was also regarded as the most skilled when it came to mapping cities and measuring the distances between them, which he did for many cities in the Middle East and western Indian subcontinent. He often combined astronomical readings and mathematical equations, in order to develop methods of pin-pointing locations by recording degrees of latitude and longitude. He also developed similar techniques when it came to measuring the heights of mountains, depths of valleys, and expanse of the horizon, in The Chronology of the Ancient Nations. He also discussed human geography and the planetary habitability of the Earth. He hypothesized that roughly a quarter of the Earth's surface is habitable by humans, and also argued that the shores of Asia and Europe were "separated by a vast sea, too dark and dense to navigate and too risky to try".
Revising the figures attributed to Posidonius, another Greek philosopher determined 18,000 miles as the Earth's circumference. This last figure was promulgated by Ptolemy through his world maps. The maps of Ptolemy strongly influenced the cartographers of the Middle Ages. It is probable that Christopher Columbus, using such maps, was led to believe that Asia was only 3 or 4 thousand miles west of Europe.
Ptolemy's view was not universal, however, and chapter 20 of Mandeville's Travels (c. 1357) supports Eratosthenes' calculation.
It was not until the 16th century that his concept of the Earth's size was revised. During that period the Flemish cartographer, Mercator, made successive reductions in the size of the Mediterranean Sea and all of Europe which had the effect of increasing the size of the earth.
Early modern period
Jean Picard performed the first modern meridian arc measurement in 1669–70. He measured a base line by the aid of wooden rods, used a telescope in his angle measurements, and computed with logarithms. Jacques Cassini later continued Picard's arc northward to Dunkirk and southward to the Spanish boundary. Cassini divided the measured arc into two parts, one northward from Paris, another southward. When he computed the length of a degree from both chains, he found that the length of one degree in the northern part of the chain was shorter than that in the southern part.
This result, if correct, meant that the earth was not a sphere, but an oblong (egg-shaped) ellipsoid—which contradicted the computations by Isaac Newton and Christiaan Huygens. Newton's theory of gravitation predicted the Earth to be an oblate spheroid with a flattening of 1:230.
The issue could be settled by measuring, for a number of points on earth, the relationship between their distance (in north-south direction) and the angles between their astronomical verticals (the projection of the vertical direction on the sky). On an oblate Earth the meridional distance corresponding to one degree would grow toward the poles.
The French Academy of Sciences dispatched two expeditions – see French Geodesic Mission. One expedition under Pierre Louis Maupertuis (1736–37) was sent to Torne Valley (as far North as possible). The second mission under Pierre Bouguer was sent to what is modern-day Ecuador, near the equator (1735–44).
The measurements conclusively showed that the earth was oblate, with a flattening of 1:210. Thus the next approximation to the true figure of the Earth after the sphere became the oblong ellipsoid of revolution.
Asia and Americas
In South America Bouguer noticed, as did George Everest in the 19th century Great Trigonometric Survey of India, that the astronomical vertical tended to be pulled in the direction of large mountain ranges, due to the gravitational attraction of these huge piles of rock. As this vertical is everywhere perpendicular to the idealized surface of mean sea level, or the geoid, this means that the figure of the Earth is even more irregular than an ellipsoid of revolution. Thus the study of the "undulation of the geoid" became the next great undertaking in the science of studying the figure of the Earth.
In the late 19th century the Zentralbüro für die Internationale Erdmessung (that is, Central Bureau for International Geodesy) was established by Austria-Hungary and Germany. One of its most important goals was the derivation of an international ellipsoid and a gravity formula which should be optimal not only for Europe but also for the whole world. The Zentralbüro was an early predecessor of the International Association of Geodesy (IAG) and the International Union of Geodesy and Geophysics (IUGG) which was founded in 1919.
Most of the relevant theories were derived by the German geodesist Friedrich Robert Helmert in his famous books Die mathematischen und physikalischen Theorieen der höheren Geodäsie, Einleitung und 1. Teil (1880) and 2. Teil (1884); English translation: Mathematical and Physical Theories of Higher Geodesy, Vol. 1 and Vol. 2. Helmert also derived the first global ellipsoid in 1906 with an accuracy of 100 meters (0.002 percent of the Earth's radii). The US geodesist Hayford derived a global ellipsoid in ~1910, based on intercontinental isostasy and an accuracy of 200 m. It was adopted by the IUGG as "international ellipsoid 1924".
- Cleomedes 1.10
- Strabo 2.2.2, 2.5.24; D.Rawlins, Contributions
- D.Rawlins (2007). "Investigations of the Geographical Directory 1979–2007 "; DIO, volume 6, number 1, page 11, note 47, 1996.
- David A. King, Astronomy in the Service of Islam, (Aldershot (U.K.): Variorum), 1993.
- Gharā'ib al-funūn wa-mulah al-`uyūn (The Book of Curiosities of the Sciences and Marvels for the Eyes), 2.1 "On the mensuration of the Earth and its division into seven climes, as related by Ptolemy and others," (ff. 22b-23a)
- Edward S. Kennedy, Mathematical Geography, pp. 187–8, in (Rashed & Morelon 1996, pp. 185–201)
- Barmore, Frank E. (April 1985), "Turkish Mosque Orientation and the Secular Variation of the Magnetic Declination", Journal of Near Eastern Studies (University of Chicago Press) 44 (2): 81–98 , doi:10.1086/373112
- John J. O'Connor, Edmund F. Robertson (1999). Abu Arrayhan Muhammad ibn Ahmad al-Biruni, MacTutor History of Mathematics archive.
- "Khwarizm". Foundation for Science Technology and Civilisation. Retrieved 2008-01-22.
- James S. Aber (2003). Alberuni calculated the Earth's circumference at a small town of Pind Dadan Khan, District Jhelum, Punjab, Pakistan.Abu Rayhan al-Biruni, Emporia State University.
- Lenn Evan Goodman (1992), Avicenna, p. 31, Routledge, ISBN 0-415-01929-X.
- Behnaz Savizi (2007), "Applicable Problems in History of Mathematics: Practical Examples for the Classroom", Teaching Mathematics and Its Applications (Oxford University Press) 26 (1): 45–50, doi:10.1093/teamat/hrl009 (cf. Behnaz Savizi. "Applicable Problems in History of Mathematics; Practical Examples for the Classroom". University of Exeter. Retrieved 2010-02-21.)
- Beatrice Lumpkin (1997), Geometry Activities from Many Cultures, Walch Publishing, pp. 60 & 112–3, ISBN 0-8251-3285-1
- Jim Al-Khalili, The Empire of Reason 2/6 (Science and Islam - Episode 2 of 3) on YouTube, BBC
- Jim Al-Khalili, The Empire of Reason 3/6 (Science and Islam - Episode 2 of 3) on YouTube, BBC
- David A. King (1996), "Astronomy and Islamic society: Qibla, gnomics and timekeeping", in Roshdi Rashed, ed., Encyclopedia of the History of Arabic Science, Vol. 1, p. 128-184 . Routledge, London and New York.
- An early version of this article was taken from the public domain source at http://www.ngs.noaa.gov/PUBS_LIB/Geodesy4Layman/TR80003A.HTM#ZZ4.
- J.L. Greenberg: The problem of the Earth's shape from Newton to Clairaut: the rise of mathematical science in eighteenth-century Paris and the fall of "normal" science. Cambridge : Cambridge University Press, 1995 ISBN 0-521-38541-5
- M.R. Hoare: Quest for the true figure of the Earth: ideas and expeditions in four centuries of geodesy. Burlington, VT: Ashgate, 2004 ISBN 0-7546-5020-0
- D.Rawlins: "Ancient Geodesy: Achievement and Corruption" 1984 (Greenwich Meridian Centenary, published in Vistas in Astronomy, v.28, 255-268, 1985)
- D.Rawlins: "Methods for Measuring the Earth's Size by Determining the Curvature of the Sea" and "Racking the Stade for Eratosthenes", appendices to "The Eratosthenes-Strabo Nile Map. Is It the Earliest Surviving Instance of Spherical Cartography? Did It Supply the 5000 Stades Arc for Eratosthenes' Experiment?", Archive for History of Exact Sciences, v.26, 211-219, 1982
- C.Taisbak: "Posidonius vindicated at all costs? Modern scholarship versus the stoic earth measurer". Centaurus v.18, 253-269, 1974 | http://en.wikipedia.org/wiki/History_of_geodesy | 13 |
|Eq. 1: Φ = B·A (for a field perpendicular to the loop; in general Φ = B·A·cos θ). The variable on the left is the Greek letter Phi. It represents magnetic flux, measured in Webers (Wb), or Tesla·meter². B represents the magnetic field's magnitude. A is the area of the coil; for a circular loop of radius r, the area is π·r².|
|Eq. 2: emf = −N·ΔΦ/Δt. This is known as Faraday's law of induction. Here N denotes the number of loops. The negative sign is used here to remind us in which direction the induced emf acts.|
|Eq. 3: emf = B·l·v·sin θ. This is a useful equation for determining the voltage induced by a wire moving relative to a magnetic field.|
|Eq. 4: emf = N·A·B·ω·sin(ωt). This is the equation used when a coil of wire is rotated at a constant angular velocity. N is the number of loops, A is the area, B is the magnetic field strength. Lowercase omega, ω, is the angular velocity and t is time. Notice that ω·t = θ (radians). Refer back to the rotational dynamics section for more info.|
|Eq. 5: E = v·B. This is a rather cool one: it shows the relationship between electric field, velocity, and magnetic field strength.|
Induced emf is produced by a CHANGING magnetic field.
Lenz's law states that an induced emf always gives rise to a current whose magnetic field opposes the original change in flux. This is important when considering the direction of the induced current. If you have a coil and begin to slide a magnet in, the current induced will cause a magnetic field which opposes the motion of the bar magnet(which may also be said relative to the coil.) This is because if you insert the N pole of a bar magnet into the coil, an N pole is formed by the induced current on the side of the coil that the bar magnet is entering (use right hand rule). Upon having the magnet inserted in the coil, removing the magnet from the coil will cause a field that opposes the motion of the magnet leaving the coil.
You may encounter problems such as: the current I in a vertical wire (in the plane of the paper) points upward but is decreasing in magnitude. A coil of wire is placed to the left of the current-carrying wire. In what direction is the induced current in the coil? This can be solved with the right hand rule and some intuition. Since the current (and with it the field strength) is decreasing, the induced current will produce a field that tries to maintain the flux through the coil. First use the right hand rule on the current-carrying wire: its field circles the wire counterclockwise when viewed from above, so it points out of the page on the left side of the wire, where the coil sits. To maintain that out-of-page flux, the coil's own field must also point out of the page inside the coil, and applying the right hand rule again shows that the induced current is counterclockwise (in the plane of the paper).
When the current I is increasing in a scenario like the one above, the opposite occurs: the induced current is clockwise, so that its magnetic field opposes the increase in flux through the coil.
A .08 m radius circular loop of wire is in a 1.10-T magnetic field. It is removed from the field in 0.15 s. what is the average induced emf?
Using Eq. 1 solve for the change in magnetic flux first.
Magnetic flux = change in magnetic field * area.
M. flux = (0 - 1.10)*pi*.08^2 = -0.0221126 Wb
Now, solve for emf using Eq 2.
emf = -N * M.flux/ change in time.
emf = -1 * -0.0221126/.15 = 0.147V
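A quick numerical check of this example in Python (a sketch; the function name is ours):
from math import pi

def average_emf(n_turns, b_initial, b_final, area, dt):
    # Faraday's law for an average emf: emf = -N * (change in flux) / (change in time)
    d_flux = (b_final - b_initial) * area
    return -n_turns * d_flux / dt

area = pi * 0.08 ** 2                          # circular loop of radius 0.08 m
print(average_emf(1, 1.10, 0.0, area, 0.15))   # ~0.147 V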
The magnetic field perpendicular to a circular loop of wire 0.2 m in diameter is changed from +0.52T to -0.45T in 180ms, where + means the field points away from an observer and - toward the observer. a) Calculate the induced emf. b) In what direction does the induced current flow?
a) Use equation 1 to solve for the magnetic flux.
M. flux = (-.45 - .52)*pi*(0.2/2)^2 = -0.0304677
Now solve for the emf using equation 2.
Multiply by N, -1, and divide by the change in time, 180ms, to arrive at 0.169V
b) The field is changing from pointing away from the observer to pointing toward the observer, so the induced current acts to maintain the flux away from the observer; the right hand rule then shows that the induced current flows clockwise as seen by the observer.
A rod moves with a speed of 1.9m/s, is .3 m long, and has a resistance of 2.5 ohms. The magnetic field is 0.75 T, and the resistance of the U-shaped conductor is 25.0 ohms at a given instant. Calculate the induced emf, the current flowing in the circuit, and the external force necessary to ensure that the rod is moving at a constant velocity at that instant.
Use eq. 3. emf = Blv sin theta. emf = .75 * 1.9 * .3 = 0.43V
I = V/R. I = 0.43/(2.5+25.0) = .016 A
F = IlB sin theta. = .016 * .3 * .75 = 0.0035N.
A 0.31 m diameter coil consists of 20 turns of circular copper wire .0026 m in diameter. A uniform magnetic field, perpendicular to the plane of the coil, changes at a rate of 8.65x10^-3 T/s. Determine the current in the loop and the rate at which thermal energy is produced.
Rate of change of flux = (rate of change of B) * A = 8.65x10^-3 * pi * (0.31/2)^2. Place this into
the equation emf = -N * (flux change)/time: emf = -20 * the value just obtained, about 0.013 V.
Now that you have the emf (voltage), you can readily solve for the current with I = V/R. Recall that p of copper is 1.68x10^-8 and R = pL/A, thus R = 1.68x10^-8 * (pi*0.31*20)/(pi*(.0026/2)^2) = 0.0616 ohms. Plug in and solve for the current, which is 0.21 A.
P = I^2R, Using the data from above, you should come up with 0.0027W
The magnetic field perpindicular to a single .132 m diameter circular loop of copper wire decreases uniformly from .75T to 0. If the copper wire is .00225 m in diameter, how much charge moves past a point in the coil during this operation?
Eq. 1: m.flux = BA, (0-.75)pi(.132/2)^2 = -.010261647.
Eq. 2: emf = -N * m.flux/time. Substitute in to find that emf is .010261647/t
emf = IR(Ohm's law). find the resistance.
R = pL/A, recall that the p of copper is 1.68x10^-8. So, 1.68x10^-8 * (pi*.132)/(pi*(.00225/2)^2) = 1.752x10^-3
I = emf/R = (0.010261647/t) / (1.752x10^-3) = Q/t, so Q = 0.010261647 / 1.752x10^-3.
5.857C = Q.
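A nice feature of this result is that the charge Q = ΔΦ/R does not depend on how quickly the field is removed; a short Python sketch of the same arithmetic (the helper name is ours):
from math import pi

RHO_COPPER = 1.68e-8   # resistivity of copper, ohm*m

def charge_moved(loop_diameter, wire_diameter, delta_b):
    # Q = emf*t / R = (change in flux) / R, independent of how fast B changes.
    d_flux = abs(delta_b) * pi * (loop_diameter / 2) ** 2
    wire_length = pi * loop_diameter                    # one turn of wire
    wire_area = pi * (wire_diameter / 2) ** 2
    resistance = RHO_COPPER * wire_length / wire_area
    return d_flux / resistance

print(charge_moved(0.132, 0.00225, 0.75))   # ~5.86 C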
Design a DC transmission line that can transmit 300 MW of electricity 200 km with only a 2 percent loss. The wires are to be made of aluminum and the voltage is 600kV.
p of aluminum is 2.65x10^-8
I = P/V.
I = 300x10^6W/600x10^3V = 500A.
P_Loss = I^2R.
0.02 * 1.02 * 300x10^6 = 500^2 * R (note: the loss is 2% of the power fed into the line, which is about 1.02 times the 300 MW delivered)
24.48 = R
R = pL/A: 24.48 = 2.65x10^-8 * (2*200x10^3) / (pi*r^2)
*NOTE* in a dc line there is a to and fro, so two times the distance.
2r = d.
d = 2.348cm
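The same design calculation written out as a Python sketch; it keeps the problem's convention that the loss is 2% of the input power, which is roughly 1.02 times the 300 MW delivered:
from math import pi, sqrt

RHO_AL = 2.65e-8   # resistivity of aluminum, ohm*m

def wire_diameter(power_w, voltage_v, loss_fraction, line_length_m):
    current = power_w / voltage_v
    p_loss = loss_fraction * power_w * (1 + loss_fraction)   # 2% of the slightly larger input power
    resistance = p_loss / current ** 2
    area = RHO_AL * (2 * line_length_m) / resistance         # factor 2: the current goes to and fro
    return 2 * sqrt(area / pi)

print(wire_diameter(300e6, 600e3, 0.02, 200e3))   # ~0.0235 m, i.e. about 2.35 cm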
The magnetic field perpendicular to a circular loop of wire 0.20m in diameter is changed from +0.52T to -0.45T in 180ms, where + means the field points away from the observer and toward the observer. a) Calculate the induced emf. b) In what direction does the induced current flow?
a) Use Faraday's law. emf = -N*dphi/dt. dphi = dB*A. A is found by 0.10^2*pi. dB is -0.45 - 0.52 T => -0.97 T. dphi is then -0.0305. dt is 0.18s. emf = 0.169V.
b) Because the magnetic field is changing towards the observer, the magnetic field due to the induced current tries to maintain away from the observer. The induced current flows clockwise then.
A 0.31m diameter coil consists of 20 turns of circular copper wire 2.6mm in diameter. A uniform magnetic field, perpendicular to the plane of the coil, changes at a rate of 8.65*10^(-3)T/s. Determine a) the current in the loop, and b) the rate at which thermal energy is produced.
a) The resistance in this wire can be found by the equation R = pL/A. The constant p for copper is 1.68*10^(-8). L is the length of wire, 0.31*3.141*20 = 19.47420m. A is found by pi*r^2, 0.0000053m^2. R is then 0.0616ohms. The induced emf in the coil is found by Faraday's law, emf = -Ndphi/dt, 0.013V. Using Ohm's law, V = IR, I = 0.21A.
b) To find rate of thermal energy produced, we use P = I^2R. P = 2.8*10^(-3)W.
A square loop 0.24m on each side has a resistance of 6.50ohms. It is initially in a 0.755-T magnetic field with its plane perpendicular to B, but is removed from the field in 40.0*10^(-3)s. Calculate the electric energy dissipated in this process.
The electric power dissipated in this process is given by P = V^2/R. The energy dissipated is then U = V^2t/R. Solving for the induced emf by use of Faraday's law, emf = -NdPhi/dt, emf = 1.087V. Substituting this into the expression for power, P = 0.182W. Multiplying by the time, 40.0*10^(-3)s results in 7.3*10^(-3)J. | http://physics.hivepc.com/eminduct.html | 13 |
32 | While it is difficult for students to observe asteroids directly, students of all ages can compare them to planets and to comets. Young students can compare scales of asteroids to that of the planets, and older students can compare composition, orbits and more!
There are also other activities that can be tied to this topic.
For activities related to impacts, visit the Collisions and Craters in the Solar System: Impacts! topic's Classroom section. For activities related to the formation of planets and asteroids, please visit the Birth of Worlds topic's Classroom section.
Be sure to submit photographs, artwork, music, or words of students enjoying these activities to Share Your Stories.
The National Science Standards and Benchmarks present asteroids in grades older than K-4, but young students can make models of asteroids and compare their sizes to planets, or compare meteorites (pieces of asteroids) to rocks on Earth. If you discuss asteroid impacts in grades K-4, be alert to anxieties that younger children may have about potential asteroid impacts on Earth. (Science Education Standards)
| Modeling Asteroid Vesta in 3-D || Students create a 3D model of Vesta using images, clay and other materials. |
| Vesta Flipbook || Animators build cartoons by flipping through a series of images over time. Make a flipbook using Vesta images to help you picture the asteroid spinning on its axis in orbit through space! |
| Meteorite Investigators || Children examine several rock samples to determine which are meteorites and which are not. |
| The Aster's Hoity Toity Belt || "The Aster's Hoity-Toity Belt," a compelling tale set in the Great Carousel of the Skies, tells of two friends, the gentle giant Ceres and feisty Vesta, as they find their place in the skyberhood. In addition to a supplemental activity, the Aster's story is available as a booklet handout, a story with space for imaginative illustrations and a version with learning notations. |
There are a variety of activities about asteroids and meteorites for this age group, which support different skills ranging from literacy to scientific inquiry. (Science Education Standards)
| The History and Discovery of Asteroids || Learners will explore scientific discoveries and the technologies as a sequence of events that led eventually to the Dawn mission. This is a series of modules which incorporate strong literacy and mathematics components. |
| Exploring Meteorite Mysteries || Meteorites are pieces of asteroids that have fallen to Earth; they hold clues to the formation of our solar system. This set of activities investigates meteorite features, characteristics, their connection to asteroids, and the keys they hold to the formation of the planets. These activities are primarily hands-on modeling activities. |
| Space Math: So..How big is it? -- Asteroid Eros surface || Students calculate the scale of an image of the surface of the asteroid Eros from the NEAR mission, and determine how big rocks and boulders are on its surface. |
| The Hunt for Micrometeorites || Students collect and examine particles from the air using a microscope,
and attempt to identify micrometeorites. |
These students can begin to analyze the data from Earth satellites to study Earth systems, and from planetary missions to deduce water's presence or absence on various bodies. They can explore water's role, from its action as a solvent to its necessity for life. (Science Education Standards)
| Vegetable Light Curves || In the activity, "Vegetable Light Curves," students will observe the surface of rotating potatoes to help them understand how astronomers can sometimes determine the shape of asteroids from variations in reflective brightness. |
| Virtual Microscope || The Virtual Microscope is a free software download, providing access to a variety of advanced microscopes and specimens (including meteorites) requested by teachers. Virtual Lab completely emulates a scanning electron microscope and allows any user to zoom and focus into a variety of built-in microscopic samples. |
| Space Math: Close Encounters of the Asteroid Kind! || On September 8, 2010 two small asteroids came within 80,000 km of Earth. Their small size of only 15 m made them very hard to see without telescopes pointed in exactly the right direction at the right time. In this problem, based on a NASA press release, students use a simple formula to calculate the brightness of these asteroids from their distance and size. |
| Space Math: Meteorite Compositions: A matter of density || Astronomers collect meteorites to study the formation of the solar system 4.5 billion years ago. In this problem, students study the composition of a meteorite in terms of its density and mass, and the percentage of iron and olivine to determine the volumes occupied by each ingredient. |
| Summer Science Program || If you have high school students interested in research experiences, you can share the Summer Science Program (SSP) with them. SSP is a residential enrichment program in which gifted high school students complete a challenging, hands-on research project in celestial mechanics. By day, students learn college-level astronomy, physics, calculus, and programming. By night, working in teams of three, they take a series of telescopic observations of a near-earth asteroid, and write software to convert those observations into a prediction of the asteroid's orbit around the sun. Stimulating guest speakers and field trips round out the curriculum. |
| DPS Slide Set: Asteroid Detected Before Impact || This four-slide powerpoint by the Division of Planetary Science includes basic information for college-level introductory courses. | | http://solarsystem.nasa.gov/yss/display.cfm?Year=2011&Month=7&Tab=Classrooms | 13 |
11 | River deep, mountain high...
In this lecture period, we learn:
The Shape and Size of Earth
A good way to look at a planet is by taking a globe in your hands. Because 3-dimensional objects are not convenient to carry around, early on in our traveling history the art of map making was invented. Maps of the earth offer a 2-dimensional representation of a 3-dimensional object. Because Earth is a sphere, different projections were developed to emphasize different aspects. Perhaps you recall the experience that the shortest distance between points on a map is connected by a curved trajectory. For example, the connections in airline magazines illustrate this property nicely. Another important aspect is the area distortion of many maps. Whereas Alaska is a large state, it appears even larger because the E-W distances are commonly the same on maps, but not on a sphere. Such E-W lines are called latitudes, whereas N-S lines are called longitudes. Note that longitudes are all of equal length (circumference of the Earth), but that latitudinal lines are of different length. The longest latitude is the equator, which equals the circumference.
Already in the 3rd Century BC, a Greek librarian named Eratosthenes accurately determined Earth's circumference. The method is very creative. When the sun stands vertical at one point, measured by shining down to the bottom of a well, it casts a shadow elsewhere. At a distance of 800 km, Eratosthenes measured an angle of 7.2 degrees from vertical between the top of a wall and the tip of its shadow. Thus an angle of 7.2 degrees describes an arc of 800 km on the Earth's surface. For a full circle of 360 degrees, this gives a circumference of 360/7.2 x 800 = 40,000 km. Eratosthenes' calculation is within tens of kilometers of today's determination.
Rather than looking at coastlines only, we examine the elevation (or topography) of the Earth.
We create a graph showing the total surface area at a certain elevation, which is called a hypsometric curve. The figure shows both a hypsometric curve (or cumulative frequency curve) and the more familiar histogram.
Raise sea level by 200 m (through melting of continental ice sheets) and see how our continents' coastlines would differ from today's. You can further experiment with sea levels and topography, and look at details for your favorite area, by going to the LDEO site.
Two rocks: Granite and Gabbro
We can generalize the composition of the continents and the ocean floor by two igneous rock types: granite and gabbro (or their extrusive equivalents, rhyolite and basalt). Granite is a light-colored rock consisting mainly of the minerals quartz and feldspar, with various minor phases (such as mica and hornblende). Chemically, granite is high in Si (~70%) and Na, K; it has relatively low Ca and Mg content. Gabbro is a dark-colored rock consisting mainly of the minerals feldspar, olivine and pyroxene. Chemically, it has low Si, Na and K content, and relatively high Ca and Mg content. These compositions are responsible for a difference in density between these two rock types: granite has a density of ~2800 kg.m-3, whereas gabbro is slightly more dense (~2900 kg.m-3).
The density difference between granite and gabbro has an important consequence that we can illustrate by a simple experiment. If we float a piece of hardwood (like oak) and a piece of softwood (say, pine) of equal dimensions in a bucket of water, we see that the hardwood rides lower than the pine. The reason is that hardwood has a slightly higher density than softwood, and thus is heavier. Secondly, we float a piece of wood that is twice as thick as the original piece. The thicker piece sinks deeper and rides higher. Since the weight of a body equals the weight of the liquid it displaces (Archimedes' Principle), thicker or denser blocks will displace more water. We can apply this experiment to the Earth, with the granite and gabbro as our wood blocks and the deeper mantle as the water.
Thickness is important too, as illustrated by icebergs. Ice floats in sea water because it has a lower density (rho_ice = 920 kg.m-3, rho_seawater = 1025 kg.m-3). Using Newton's Second Law, F = m * g (with m = volume x density), and Archimedes' principle, we get: vol_ice * rho_ice * g = vol_displaced water * rho_water * g. So, rho_ice/rho_water = vol_displaced water/vol_ice.
Thus the ratio of displaced water/ice volume equals 0.9 meaning that 90% of an iceberg is below sealevel, whereas only 10% is above sea level.
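The same bookkeeping in a few lines of Python; the ice and sea-water densities are those quoted above, while the mantle density of ~3300 kg.m-3 used in the second line is a typical value that is not given in the text:
def submerged_fraction(density_body, density_fluid):
    # Archimedes: a floating body displaces its own weight of fluid,
    # so the submerged volume fraction equals the density ratio.
    return density_body / density_fluid

print(submerged_fraction(920, 1025))    # ice in sea water: ~0.90, so ~90% sits below the surface
print(submerged_fraction(2800, 3300))   # granitic crust "floating" on mantle rock: ~0.85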
Thus, density and thickness contrast between granite and gabbro (continent vs. ocean floor) both promote relatively high continents and relatively low ocean floor. Density, therefore, is a first order property that explains Earth's characteristic bimodal elevation distribution.
How do we know that there is radial variation in Earth? There are several ways this can be surmised, but one good indicator is average density of Earth. Rocks at the Earth's surface have a density around 3000 kg.m-3, whereas the average density of Earth exceeds 5000 kg.m-3. Let's figure out how we know this and along the way determine a few other properties of our planet.
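As a preview, here is the back-of-the-envelope version of that argument in Python, using standard values for Earth's mass and radius (not supplied in the text):
from math import pi

M_EARTH = 5.97e24    # kg
R_EARTH = 6.371e6    # m, mean radius

volume = 4 / 3 * pi * R_EARTH ** 3
print(M_EARTH / volume)   # ~5500 kg/m^3, well above the ~3000 kg/m^3 of surface rocks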
Copyright and Use Statement: Regents
of the University of Michigan | http://www.globalchange.umich.edu/globalchange1/current/lectures/topography/topography.html | 13 |
15 | Synchrotron light is used today to carry out fundamental research in areas as diverse as condensed matter physics, pharmaceutical research and cultural heritage.
What is synchrotron light?
Synchrotron light (also known as synchrotron radiation) is electromagnetic radiation that is emitted when charged particles moving at close to the speed of light are forced to change direction by a magnetic field. Synchrotron light can be produced naturally by astronomical objects, such as the Crab Nebula – a supernova remnant in the Taurus constellation. Since the late 1940s synchrotron light has been artificially generated using synchrotrons – particle accelerators that gave the phenomenon its name.
Synchrotron radiation spans a wide frequency range, from infrared up to the highest-energy X-rays. It is characterised by high brightness – many orders of magnitude brighter than conventional sources – and the light is highly polarised, tunable, collimated (consisting of almost parallel rays) and concentrated over a small area. When synchrotrons were first developed, their primary purpose was to accelerate particles for the study of the nucleus, not to generate light. Today on the other hand, while a few are still used as colliders for high-energy physics experiments such as the Large Hadron Collider at CERN, there are more than 50 synchrotron light sources around the world dedicated to generating synchrotron light and exploiting its special qualities. These machines support a huge range of applications, from condensed matter physics to structural biology, environmental science and cultural heritage.
Earlier accelerators, called cyclotrons, had fixed magnetic fields. Because the bending of a charged particle is inversely proportional to its momentum, cyclotrons were limited to fairly low energies otherwise they became unaffordably large. By collecting the particles into bunches and synchronising a rise in magnetic-field strength with the increasing energy of the charged particles, the particles could be accelerated to higher energies while being constrained to a fixed circular path. Thus the synchrotron was born, with the first observation of artificial synchrotron light occurring at General Electric in the US in 1947.
British theoretical physicist James Maxwell’s classical theory of electromagnetism (first fully documented in 1873) explains that charged particles moving through a magnetic field generate electromagnetic radiation. By requiring Maxwell’s equations to be true in all inertial (non-accelerating) frames of reference, Einstein’s theory of special relativity fully explained the complete characteristics of synchrotron light generated by electrons travelling along a circular path at relativistic speeds. The radiation produced in synchrotrons is focused in a narrow cone, perpendicular to its acceleration direction and parallel to the directional motion of the electrons.
Synchrotron light is the brightest artificial source of X-rays, allowing the detailed study of molecular structures, which has led to the award of Nobel prizes in a number of fields (see timeline).
In 1956, two American scientists, Diran Tomboulian and Paul Hartman, were granted use of the 320 MeV synchrotron at Cornell University. In addition to confirming the spectral and angular distribution of synchrotron light, they carried out the first X-ray spectroscopy study using synchrotron light. Five years later, the National Bureau of Standards in the US modified its 180 MeV machine to allow synchrotron light to be harvested for experiments. These became known as the first-generation synchrotrons – machines built for smashing nuclei apart using electrons, which were later used for synchrotron-light experiments.
Over the coming decades, as demand grew for the use of first generation synchrotron machines, pioneering advances led to a number of developments, such as the storage ring, which allowed particles to circulate for long periods of time providing more stable beam conditions, benefiting both particle physicists and synchrotron users. One of the most significant developments took place in the late 1970s when plans were approved to build the world’s first dedicated synchrotron light source producing X-rays (Synchrotron Radiation Source – SRS) at Daresbury in the UK, which started user experiments in 1981. The US, Japan and others also built second-generation machines, while other first-generation machines received upgrades to allow for more experiments.
For scientists carrying out spectroscopy experiments, the brightness of the beam reaching the sample determined the resolving power of the results. For crystallographers, especially those looking at small crystals with large unit cells, high brightness was important to resolve closely spaced diffraction spots. As second-generation machines were optimised to produce brighter beams, a fundamental limit was approaching. To meet the increasing demands of a growing synchrotron user community, a new approach was required: insertion devices.
Insertion devices are arrays of magnets placed into the straight sections of the storage ring, which could be retro-fitted to second-generation machines and were quickly incorporated into existing synchrotrons. Insertion devices help to create a beam that is very bright and with intensity peaks with a wavelength that can be varied by adjusting the field strength (often the gap between two magnet arrays).
The increased brightness made data collection faster, and tunable wavelengths benefited crystallographers and spectroscopists alike. By the early 1990s machines were being designed with insertion devices in place from the start, and the first such third-generation source, the European Synchrotron Radiation Facility (ESRF) in Grenoble, France, started operating in 1994. There are now more than 50 dedicated light sources in the world, combining both second- and third-generation machines, which cover a wide spectral range from infrared to hard X-rays.
The experimental configurations of different synchrotron facilities are quite similar. The storage rings where the light is generated have many ports, which each open onto a beamline, where scientists set up their experiments and collect data. The beamlines, however, can vary a lot in the details depending on the experimental methods they are used for. The Diamond Light Source in Oxfordshire became Britain’s newest synchrotron light facility in 2007 and in August 2008 it was Britain’s only synchrotron source following the closure of the SRS. It represents the largest single UK science investment for 40 years and will have the capacity to host 40 beamlines. The Diamond Light Source supports a huge range of scientific disciplines, including condensed matter physics, chemistry, nanophysics, structural biology, engineering, environmental science and cultural heritage.
- Life science
Pharmaceutical companies and medical researchers are making increasing use of macromolecular crystallography. Improvements in the speed of data collection and solving structures mean that it is now possible to obtain structural information on a timescale that allows chemists and structural biologists to work together in the development of promising compounds into drug candidates. Both the anti-flu drug Tamiflu and Herceptin – used to treat advanced breast cancer – benefited from synchrotron experiments. Using synchrotron light in the infrared range, pioneering research is underway into developing new cancer therapies that can be tailored to the individual patient. In 2009, the Medical Research Council used the Diamond Light Source to compare the structure of hemagglutinin from the flu-virus strain that caused the 1957 “Asian” pandemic with the 1918 and 1968 outbreaks, to discover why some avian flu viruses are more able than others to jump the species gap.
Synchrotron X-ray beams allow detailed analysis and modelling of strain, cracks and corrosion as well as in situ study of materials during production processing. This research is vital to the development of high-performance materials and their use in innovative products and structures. The Diamond Light Source has been used to study the processes behind pitting corrosion, which attacks the so-called corrosion-resistant metals used in containers for nuclear waste, and to understand how applied stresses can cause cracks to propagate through materials.
- Environmental science
Synchrotron-based techniques have made a major impact in the field of environmental science in the last 10 years. High brightness allows high-resolution study of ultra-dilute substances, the identification of species and the ability to track pollutants as they move through the environment. Synchrotrons have been used to develop more efficient techniques for hydrogen storage and to study the way in which depleted uranium disperses into the local environment. Tiny heavy-metal samples excreted from earthworms have been compared with contaminated soil samples, revealing how earthworms survive in these environments and introducing the idea that earthworms could help to decontaminate land.
- Physics and materials science
Determining the properties and morphology of buried layers and interfaces is an important area in solid-state science with synchrotrons being the meeting ground of state-of-the-art theory and high-precision experimental results. Many of the technological products of materials science are based on thin-film devices, which consist of a series of such layers. Structural studies of in situ processing of semiconducting polymer films are also likely to be an important area of growth in the coming decade. Diffraction of high-intensity X-ray beams is an ideal technique to study spin, charge and orbital ordering in single-crystal samples to understand high-temperature superconductivity. The SRS was used to help study giant magneto-resistance (GMR), which is now used in billions of electronic devices worldwide.
- Cultural heritage
Cultural heritage is a rapidly expanding area of research using synchrotrons. Scientists are using non-destructive synchrotron techniques to find answers to big questions in palaeontology, archaeology, art history and forensics. Scientists in the UK have used the SRS and the Diamond Light Source to study samples from the Tudor warship the Mary Rose to enhance their conservation techniques, and the ESRF has been used to study insects more than 100 million years old, preserved in amber.
Seven Nobel Prizes in Physics have been awarded for X-ray related work. For example, British physicists Sir William Henry Bragg and William Lawrence Bragg shared the 1915 prize for using X-ray diffraction as a technique to determine crystal structure.
Synchrotron facilities have had a positive and significant impact on many areas. In technology, research into GMR is now benefiting data storage in billions of electronic devices like iPods – a market generating £1 bn per quarter. At the time the SRS closed in 2008, 11 of the top 25 companies in the UK R&D Scoreboard had used the facility.
Sir John Walker was awarded the Nobel Prize in Chemistry in 1997 for his work on the structure of Bovine F1 ATP Synthase – the first synchrotron-based Nobel prize. In 2006, the Nobel Prize in Chemistry was awarded to Prof. Roger Kornberg for his synchrotron-based research into how genes copy themselves, a process involved in many human diseases and stem-cell treatment. The structure of the foot-and-mouth-disease virus was determined first at the SRS, leading to potential new vaccines that could save the UK £80 m if another outbreak were to occur. Synchrotron light is considered essential in modern pharmaceutical research, illustrated by the 14% investment in the Diamond Light Source by the Wellcome Trust, the UK’s largest non-governmental funding body for biomedical research.
Throughout the lifetime of synchrotron facilities, 300 local businesses benefited from the SRS, with £300 m being awarded in contracts – the financial impact on the local economy throughout its lifetime is estimated to be almost £1 bn. Similarly, more than 1000 companies have benefited from construction or technology contracts for the Diamond Light Source and a quarter of the science carried out at the ESRF links directly to industry.
The demand for synchrotron light has meant that third-generation machines are being built around the world, and existing machines continue to be developed to provide brighter X-rays, increased user hours and more flexible experimental stations. The modular nature of modern synchrotrons means that new technologies can be incorporated into existing machines as they arrive. By using powerful linear accelerator technology, fourth-generation sources – known as free-electron lasers (FELs) – can generate shorter, femtosecond pulses but with the same intensity in each peak as synchrotron sources emit in one second, producing X-rays that are millions of times brighter in each pulse than the most powerful synchrotrons. FELs won’t replace third-generation machines, but will provide facilities that enable studies at higher peak brightness.
Thanks go to Gerhard Materlik and Sara Fletcher, the Diamond Light Source; Claire Dougan, STFC; and Emma Woodfield for their help with this case study. | http://www.iop.org/publications/iop/2011/page_47511.html | 13 |
31 | Math problems have a charm of their own. Besides, they help to develop a programmer's skill. Here, we describe a student's exam task: "Develop an application that models the behaviour of a Hypocycloid".
A cycloid is the curve defined by the path of a point on the edge of a circular wheel as the wheel rolls along a straight line. It was named by Galileo in 1599 (http://en.wikipedia.org/wiki/Cycloid).
A hypocycloid is a curve generated by the trace of a fixed point on a small circle that rolls within a larger circle. It is comparable to the cycloid, but instead of the circle rolling along a line, it rolls within a circle.
Use Google to find a wonderful book by Eli Maor, Trigonometric Delights (Princeton, New Jersey). The following passage is adapted from this book.
I believe that a program developer must love deriving formulas. Hence, let us find the parametric equations of the hypocycloid.
A point on a circle of radius r rolls on the inside of a fixed circle of radius R. Let C be the center of the rolling circle, and P a point on the moving circle. When the rolling circle turns through an angle b in a clockwise direction, C traces an arc of angular width t in a counterclockwise direction. Assuming that the motion starts when P is in contact with the fixed circle (figure on the left), we choose a coordinate system in which the origin is at O and the x-axis points to P. The coordinates of P relative to C are:
(r cos b, -r sin b)
The minus sign in the second coordinate is there because b is measured clockwise. The coordinates of C relative to O are:
((R - r) cos t, (R - r) sin t)
The angle b may be expressed as:
b = t + β, so that β = b - t
Thus, the coordinates of P relative to O are:
((R - r) cos t + r cos β, (R - r) sin t - r sin β) (1)
But the angles t and b are not independent: as the motion progresses, the arcs of the fixed and moving circles that come in contact must be of equal length L:
L = R t = r b
Using this relation to express b in terms of t, we get:
b = R t / r
Equations (1) become:
x = (R - r) cos t + r cos ((R / r - 1) t) (2)
y = (R - r) sin t - r sin ((R / r - 1) t)
Equations (2) are the parametric equations of the hypocycloid, the angle t being the parameter (if the rolling circle rotates with constant angular velocity, t will be proportional to the elapsed time since the motion began). The general shape of the curve depends on the ratio R/r. If this ratio is a fraction m/n in lowest terms, the curve will have m cusps (corners), and it will be completely traced after moving the wheel n times around the inner rim. If R/r is irrational, the curve will never close, although going around the rim many times will nearly close it.
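Before turning to the C# control, equations (2) can be sanity-checked with a few lines of throwaway Python (this is not part of the article's demo project):
from math import cos, sin, pi, gcd

def hypocycloid_points(R, r, steps=2000):
    # Sample equations (2); for integer radii with R/r = m/n in lowest terms,
    # the curve has m cusps and closes after n trips around the rim.
    n = r // gcd(R, r)
    t_max = 2 * pi * n
    for i in range(steps + 1):
        t = t_max * i / steps
        x = (R - r) * cos(t) + r * cos((R / r - 1) * t)
        y = (R - r) * sin(t) - r * sin((R / r - 1) * t)
        yield x, y

pts = list(hypocycloid_points(R=5, r=3))   # 5 cusps, closes after 3 trips around the rim
print(pts[0], pts[-1])                     # first and last points coincide: the curve closes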
Using the code
The demo application provided with this article uses a Hypocycloid control derived from UserControl to model the behaviour of the hypocycloid described above.
The functionality of the hypocycloid is implemented in the Hypocycloid class. It has a GraphicsPath path data field that helps to render the hypocycloid path over time. A floating point variable, angle, corresponds to the angle t described earlier, and two further fields hold ratio = R / r and delta = R - r. All the math is done within the timer Tick event handler.
void timer_Tick(object sender, EventArgs e)
{
    angle += step;
    double
        cosa = Math.Cos(angle),
        sina = Math.Sin(angle),
        ct = ratio * angle;
    // centre of the rolling circle
    movingCenter.X = (float)(centerX + delta * cosa);
    movingCenter.Y = (float)(centerY + delta * sina);
    // traced point P on the rim of the rolling circle
    PointF old = point;
    point = new PointF(
        movingCenter.X + r * (float)Math.Cos(ct),
        movingCenter.Y - r * (float)Math.Sin(ct));
    // count completed revolutions of the centre around O
    int n = (int)(angle / pi2);
    if (n > round)
    {
        round = n;
        ParentNotify(msg + ";" + round);
    }
    if (round < nRounds)
    {
        path.AddLine(old, point);  // reconstructed: extend the traced path by one segment
    }
    else if (!stopPath)
    {
        ParentNotify(msg + ";" + round + ";" + path.PointCount);
        stopPath = true;
    }
}
ParentNotify is the event of the generic delegate type
public event Action<string> ParentNotify;
We use it to notify the parent control of the current angle and round count.
Besides a constructor, the class has public methods such as SaveToFile. Remember also that the Y axis in a Windows window points down.
Deforestation is the logging or burning of trees in forested areas. There are several reasons for doing so: trees and the charcoal derived from them can be sold as commodities, while cleared land is used as pasture, for plantations of commodities, and for human settlement. The removal of trees without sufficient reforestation has resulted in habitat damage, biodiversity loss and aridity. Deforested regions also often degrade into wasteland.
Disregard or unawareness of intrinsic value, lack of ascribed value, lax forest management and deficient environmental law allow deforestation to occur on such a large scale. In many countries, deforestation is an ongoing issue which is causing extinction, changes to climatic conditions, desertification and displacement of indigenous people.
In simple terms, deforestation occurs because keeping land forested is often not economically viable compared with clearing it, for example to increase the amount of farmland. At the same time, forests are used by native populations of over 200 million people worldwide.
The presumed value of forests as genetic resources has never been confirmed by any economic studies. As a result, owners of forested land lose money by not clearing the forest, and this affects the welfare of the whole society. From the perspective of the developing world, the benefits of forest as carbon sinks or biodiversity reserves go primarily to richer developed nations and there is insufficient compensation for these services. As a result some countries simply have too much forest. Developing countries feel that some countries in the developed world, such as the United States of America, cut down their forests centuries ago and benefited greatly from this deforestation, and that it is hypocritical to deny developing countries the same opportunities: that the poor shouldn't have to bear the cost of preservation when the rich created the problem.
Aside from a general agreement that deforestation occurs to increase the economic value of the land, there is no agreement on what causes deforestation. Logging may be a direct source of deforestation in some areas and have no effect or be at worst an indirect source in others, due to logging roads enabling easier access for farmers wanting to clear the forest: experts do not agree on whether logging is an important contributor to global deforestation, and some believe that logging makes a considerable contribution to reducing deforestation because in developing countries logging reserves are far larger than nature reserves. Similarly there is no consensus on whether poverty is important in deforestation. Some argue that poor people are more likely to clear forest because they have no alternatives, others that the poor lack the ability to pay for the materials and labour needed to clear forest. The claim that population growth drives deforestation is weak and based on flawed data, with population increase due to high fertility rates being a primary driver of tropical deforestation in only 8% of cases. The FAO states that the global deforestation rate is unrelated to the human population growth rate; rather, it is the result of lack of technological advancement and inefficient governance. There are many causes at the root of deforestation, such as corruption and inequitable distribution of wealth and power, population growth and overpopulation, and urbanization. Globalization is often viewed as a driver of deforestation.
According to British environmentalist Norman Myers, 5% of deforestation is due to cattle ranching, 19% to over-heavy logging, 22% due to the growing sector of palm oil plantations, and 54% due to slash-and-burn farming.
It's very difficult, if not impossible, to obtain figures for the rate of deforestation . The FAO data are based largely on reporting from forestry departments of individual countries. The World Bank estimates that 80% of logging operations are illegal in Bolivia and 42% in Colombia, while in Peru, illegal logging equals 80% of all activities. For tropical countries, deforestation estimates are very uncertain: based on satellite imagery, the rate of deforestation in the tropics is 23% lower than the most commonly quoted rates and for the tropics as a whole deforestation rates could be in error by as much as +/- 50% . Conversely a new analysis of satellite images reveal that the deforestation in the Amazon basin is twice as fast as scientists previously estimated.
The UNFAO has the best long-term datasets on deforestation available. Based on these datasets, global forest cover has remained approximately stable since the middle of the twentieth century, and based on the longest dataset available, global forest cover has increased since 1954. The rate of deforestation is also declining, with less and less forest cleared each decade. Globally the rate of deforestation declined during the 1980s, with even more rapid declines in the 1990s and still more rapid declines from 2000 to 2005. Based on these trends, global anti-deforestation efforts are expected to outstrip deforestation within the next half-century, with global forest cover increasing by 10 percent (an area the size of India) by 2050. Rates of deforestation are highest in developing tropical nations, although globally the rate of tropical forest loss is also declining, with tropical deforestation rates of about 8.6 million hectares annually in the 1990s, compared to a loss of around 9.2 million hectares during the previous decade.
The utility of the FAO figures have been disputed by some environmental groups. These questions are raised primarily because the figures do not distinguish between forest types. The fear is that highly diverse habitats, such as tropical rainforest, may be experiencing an increase in deforestation which is being masked by large decreases in less biodiverse dry, open forest types. Because of this omission it is possible that many of the negative impacts of deforestation, such as habitat loss, are increasing despite a decline in deforestation. Some environmentalists have predicted that unless significant measures such as seeking out and protecting old growth forests that haven't been disturbed , are taken on a worldwide basis to preserve them, by 2030 there will only be ten percent remaining with another ten percent in a degraded condition. 80 percent will have been lost and with them the irreversible loss of hundreds of thousands of species.
Despite the ongoing reduction in deforestation over the past 30 years, the process of deforestation remains a serious global ecological problem and a major social and economic problem in many regions. 13 million hectares of forest are lost each year, 6 million hectares of which are forest that had been largely undisturbed by man. This results in a loss of habitat for wildlife as well as reducing or removing the ecosystem services provided by these forests.
The decline in the rate of deforestation also does not address the damage already caused by deforestation. Global deforestation increased sharply in the mid-1800s, and about half of the mature tropical forests, between 7.5 million and 8 million square kilometres (2.9 million to 3 million sq mi) of the original 15 million to 16 million square kilometres (5.8 million to 6.2 million sq mi) that covered the planet until 1947, have been cleared.
The rate of deforestation also varies widely by region and despite a global decline in some regions, particularly in developing tropical nations, the rate of deforestation is increasing. For example, Nigeria lost 81% of its old-growth forests in just 15 years (1990- 2005). All of Africa is suffering deforestation at twice the world rate. The effects of deforestation are most pronounced in tropical rainforests . Brazil has lost 90-95% of its Mata Atlântica forest. In Central America, two-thirds of lowland tropical forests have been turned into pasture since 1950. Half of the Brazilian state of Rondonia's 243,000 km² have been affected by deforestation in recent years and tropical countries, including Mexico, India, Philippines, Indonesia, Thailand, Myanmar, Malaysia, Bangladesh, China, Sri Lanka, Laos, Nigeria, Congo, Liberia, Guinea, Ghana and the Côte d'Ivoire have lost large areas of their rainforest. Because the rates vary so much across regions the global decline in deforestation rates does not necessarily indicate that the negative effects of deforestation are also declining.
Deforestation trends could follow the Kuznets curve; however, even if true, this is problematic in so-called hot-spots because of the risk of irreversible loss of non-economic forest values, for example valuable habitat or species loss.
Deforestation is a contributor to global warming, and is often cited as one of the major causes of the enhanced greenhouse effect. Tropical deforestation is responsible for approximately 20% of world greenhouse gas emissions. According to the Intergovernmental Panel on Climate Change deforestation, mainly in tropical areas, account for up to one-third of total anthropogenic carbon dioxide emissions. Trees and other plants remove carbon (in the form of carbon dioxide) from the atmosphere during the process of photosynthesis and release it back into the atmosphere during normal respiration. Only when actively growing can a tree or forest remove carbon over an annual or longer timeframe. Both the decay and burning of wood releases much of this stored carbon back to the atmosphere. In order for forests to take up carbon, the wood must be harvested and turned into long-lived products and trees must be re-planted. Deforestation may cause carbon stores held in soil to be released. Forests are stores of carbon and can be either sinks or sources depending upon environmental circumstances. Mature forests alternate between being net sinks and net sources of carbon dioxide (see carbon dioxide sink and carbon cycle).
Reducing emissions from tropical deforestation and forest degradation (REDD) in developing countries has emerged as a new potential complement to ongoing climate policies. The idea consists in providing financial compensation for "the reduction of greenhouse gas (GHG) emissions from deforestation and forest degradation".
The world's rainforests are widely believed by laypeople to contribute a significant amount of the world's oxygen, although it is now accepted by scientists that rainforests contribute little net oxygen to the atmosphere and deforestation will have no effect whatsoever on atmospheric oxygen levels. However, the incineration and burning of forest plants in order to clear land releases tonnes of CO2, which contributes to global warming.
The water cycle is also affected by deforestation. Trees extract groundwater through their roots and release it into the atmosphere. When part of a forest is removed, the trees no longer evaporate away this water, resulting in a much drier climate. Deforestation reduces the content of water in the soil and groundwater as well as atmospheric moisture. Deforestation reduces soil cohesion, so that erosion, flooding and landslides ensue. Forests enhance the recharge of aquifers in some locales; however, forests are a major source of aquifer depletion in most locales.
Shrinking forest cover lessens the landscape's capacity to intercept, retain and transpire precipitation. Instead of trapping precipitation, which then percolates to groundwater systems, deforested areas become sources of surface water runoff, which moves much faster than subsurface flows. That quicker transport of surface water can translate into flash flooding and more localized floods than would occur with the forest cover. Deforestation also contributes to decreased evapotranspiration, which lessens atmospheric moisture which in some cases affects precipitation levels down wind from the deforested area, as water is not recycled to downwind forests, but is lost in runoff and returns directly to the oceans. According to one preliminary study, in deforested north and northwest China, the average annual precipitation decreased by one third between the 1950s and the 1980s.
Trees, and plants in general, affect the water cycle significantly:
As a result, the presence or absence of trees can change the quantity of water on the surface, in the soil or groundwater, or in the atmosphere. This in turn changes erosion rates and the availability of water for either ecosystem functions or human services.
The forest may have little impact on flooding in the case of large rainfall events, which overwhelm the storage capacity of forest soil if the soils are at or close to saturation.
Tropical rainforests produce about 30% of our planet's fresh water.
Undisturbed forest has very low rates of soil loss, approximately 2 metric tons per square kilometre (6 short tons per square mile). Deforestation generally increases rates of soil erosion, by increasing the amount of runoff and reducing the protection of the soil from tree litter. This can be an advantage in excessively leached tropical rain forest soils. Forestry operations themselves also increase erosion through the development of roads and the use of mechanized equipment.
China's Loess Plateau was cleared of forest millennia ago. Since then it has been eroding, creating dramatic incised valleys, and providing the sediment that gives the Yellow River its yellow color and that causes the flooding of the river in the lower reaches (hence the river's nickname 'China's sorrow').
Removal of trees does not always increase erosion rates. In certain regions of southwest US, shrubs and trees have been encroaching on grassland. The trees themselves enhance the loss of grass between tree canopies. The bare intercanopy areas become highly erodible. The US Forest Service, in Bandelier National Monument for example, is studying how to restore the former ecosystem, and reduce erosion, by removing the trees.
Tree roots bind soil together, and if the soil is sufficiently shallow they act to keep the soil in place by also binding with underlying bedrock. Tree removal on steep slopes with shallow soil thus increases the risk of landslides, which can threaten people living nearby. However, most deforestation affects only the trunks of trees, leaving the roots rooted in place, which lessens the increase in landslide risk.
Deforestation results in declines in biodiversity. The removal or destruction of areas of forest cover has resulted in a degraded environment with reduced biodiversity. Forests support biodiversity, providing habitat for wildlife; moreover, forests foster medicinal conservation. With forest biotopes being irreplaceable source of new drugs (such as taxol), deforestation can destroy genetic variations (such as crop resistance) irretrievably.
Since tropical rainforests are the most diverse ecosystems on Earth, and about 80% of the world's known biodiversity can be found in tropical rainforests, the removal or destruction of significant areas of forest cover has resulted in a degraded environment with reduced biodiversity.
Scientific understanding of the process of extinction is insufficient to make accurate predictions about the impact of deforestation on biodiversity. Most predictions of forestry-related biodiversity loss are based on species-area models, with an underlying assumption that as forest area declines, species diversity will decline similarly. However, many such models have been proven wrong, and loss of habitat does not necessarily lead to large-scale loss of species. Species-area models are known to overpredict the number of species known to be threatened in areas where actual deforestation is ongoing, and greatly overpredict the number of threatened species that are widespread.
It has been estimated that we are losing 137 plant, animal and insect species every single day due to rainforest deforestation, which equates to 50,000 species a year. Others state that tropical rainforest deforestation is contributing to the ongoing Holocene mass extinction. The known extinction rates from deforestation rates are very low, approximately 1 species per year from mammals and birds which extrapolates to approximately 23000 species per year for all species. Predictions have been made that more than 40% of the animal and plant species in Southeast Asia could be wiped out in the 21st century, with such predictions called into questions by 1995 data that show that within regions of Southeast Asia much of the original forest has been converted to monospecific plantations but potentially endangered species are very low in number and tree flora remains widespread and stable.
Damage to forests and other aspects of nature could halve living standards for the world's poor and reduce global GDP by about 7% by 2050, a major report concluded at the Convention on Biological Diversity (CBD) meeting in Bonn. Historically utilization of forest products, including timber and fuel wood, have played a key role in human societies, comparable to the roles of water and cultivable land. Today, developed countries continue to utilize timber for building houses, and wood pulp for paper. In developing countries almost three billion people rely on wood for heating and cooking. The forest products industry is a large part of the economy in both developed and developing countries. Short-term economic gains made by conversion of forest to agriculture, or over-exploitation of wood products, typically leads to loss of long-term income and long term biological productivity (hence reduction in nature's services). West Africa, Madagascar, Southeast Asia and many other regions have experienced lower revenue because of declining timber harvests. Illegal logging causes billions of dollars of losses to national economies annually.
Newer procedures for obtaining wood are causing more harm to the economy and outweigh the amount of money earned by people employed in logging. According to a study, "in most areas studied, the various ventures that prompted deforestation rarely generated more than US$5 for every ton of carbon they released and frequently returned far less than US$1." The price on the European market for an offset tied to a one-ton reduction in carbon is 23 euro (about $35).
See also: Timeline of environmental events.
Deforestation has been practiced by humans for tens of thousands of years before the beginnings of civilization. Fire was the first tool that allowed humans to modify the landscape. The first evidence of deforestation appears in the Mesolithic period. It was probably used to convert closed forests into more open ecosystems favourable to game animals. With the advent of agriculture, fire became the prime tool to clear land for crops. In Europe there is little solid evidence before 7000 BC. Mesolithic foragers used fire to create openings for red deer and wild boar. In Great Britain shade tolerant species such as oak and ash are replaced in the pollen record by hazels, brambles, grasses and nettles. Removal of the forests led to decreased transpiration resulting in the formation of upland peat bogs. Widespread decrease in elm pollen across Europe between 8400-8300 BC and 7200-7000 BC, starting in southern Europe and gradually moving north to Great Britain, may represent land clearing by fire at the onset of Neolithic agriculture.The Neolithic period saw extensive deforestation for farming land. Stone axes were being made from about 3000 BC not just from flint, but from a wide variety of hard rocks from across Britain and North America as well. They include the noted Langdale axe industry in the English Lake District, quarries developed at Penmaenmawr in North Wales and numerous other locations. Rough-outs were made locally near the quarries, and some were polished locally to give a fine finish. This step not only increased the mechanical strength of the axe, but also made penetration of wood easier. Flint was still used from sources such as Grimes Graves but from many other mines across Europe.
Throughout most of history, humans were hunter gatherers who hunted within forests. In most areas, such as the Amazon, the tropics, Central America, and the Caribbean, only after shortages of wood and other forest products occur are policies implemented to ensure forest resources are used in a sustainable manner.
In ancient Greece, Tjeered van Andel and co-writers summarized three regional studies of historic erosion and alluviation and found that, wherever adequate evidence exists, a major phase of erosion follows, by about 500-1,000 years the introduction of farming in the various regions of Greece, ranging from the later Neolithic to the Early Bronze Age. The thousand years following the mid-first millennium BCE saw serious, intermittent pulses of soil erosion in numerous places. The historic silting of ports along the southern coasts of Asia Minor (e.g. Clarus, and the examples of Ephesus, Priene and Miletus, where harbors had to be abandoned because of the silt deposited by the Meander) and in coastal Syria during the last centuries BC.
Easter Island has suffered from heavy soil erosion in recent centuries, aggravated by agriculture and deforestation. Jared Diamond gives an extensive look into the collapse of the ancient Easter Islanders in his book Collapse. The disappearance of the island's trees seems to coincide with a decline of its civilization around the 17th and 18th century.
The famous silting up of the harbor for Bruges, which moved port commerce to Antwerp, also follow a period of increased settlement growth (and apparently of deforestation) in the upper river basins. In early medieval Riez in upper Provence, alluvial silt from two small rivers raised the riverbeds and widened the floodplain, which slowly buried the Roman settlement in alluvium and gradually moved new construction to higher ground; concurrently the headwater valleys above Riez were being opened to pasturage.
A typical progress trap is that cities were often built in a forested area providing wood for some industry (e.g. construction, shipbuilding, pottery). When deforestation occurs without proper replanting, local wood supplies become difficult to obtain near enough to remain competitive, leading to the city's abandonment, as happened repeatedly in Ancient Asia Minor. The combination of mining and metallurgy often went along this self-destructive path.
With most of the population remaining active in (or indirectly dependent on) the agricultural sector, the main pressure in most areas remained land clearing for crop and cattle farming; fortunately, enough wild greenery was usually left standing (and partially used, e.g. to collect firewood, timber and fruits, or to graze pigs) for wildlife to remain viable, and the hunting privileges of the elite (nobility and higher clergy) often protected significant woodlands.
Major parts in the spread (and thus more durable growth) of the population were played by monastic 'pioneering' (especially by the Benedictine and Cistercian orders) and by some feudal lords actively attracting farmers to settle (and become tax payers) by offering relatively good legal and fiscal conditions - even when they did so to launch or encourage cities, there always was an agricultural belt around, and even quite some farming within, the walls. When on the other hand demography took a real blow from such causes as the Black Death or devastating warfare (e.g. Genghis Khan's Mongol hordes in eastern and central Europe, the Thirty Years' War in Germany), this could lead to settlements being abandoned, leaving land to be reclaimed by nature, even though the secondary forests usually lacked the original biodiversity.
From 1100 to 1500 AD significant deforestation took place in Western Europe as a result of the expanding human population. The large-scale building of wooden sailing ships by European (coastal) naval powers from the 15th century onward, for exploration, colonisation, the slave trade and other trade on the high seas, for (often related) naval warfare (the failed invasion of England by the Spanish Armada in 1588 and the battle of Lepanto in 1571 are early cases of huge waste of prime timber; each of Nelson's Royal Navy war ships at Trafalgar had required 6,000 mature oaks), and for piracy meant that whole woody regions were over-harvested, as in Spain, where this contributed to the paradoxical weakening of the domestic economy since Columbus' discovery of America made colonial activities (plundering, mining, cattle, plantations, trade ...) predominant.
In Changes in the Land (1983), William Cronon collected 17th century New England Englishmen's reports of increased seasonal flooding during the time that the forests were initially cleared, and it was widely believed that it was linked with widespread forest clearing upstream.
The massive use of charcoal on an industrial scale in Early Modern Europe was a new acceleration of the onslaught on western forests; even in Stuart England, the relatively primitive production of charcoal has already reached an impressive level. For ship timbers, Stuart England was so widely deforested that it depended on the Baltic trade and looked to the untapped forests of New England to supply the need. In France, Colbert planted oak forests to supply the French navy in the future; as it turned out, as the oak plantations matured in the mid-nineteenth century, the masts were no longer required.
Specific parallels are seen in twentieth century deforestation occurring in many developing nations.
The difficulties of estimating deforestation rates are nowhere more apparent than in the widely varying estimates of rates of rainforest deforestation. At one extreme Alan Grainger, of Leeds University, argues that there is no credible evidence of any long-term decline in rainforest area, while at the other some environmental groups argue that one fifth of the world's tropical rainforest was destroyed between 1960 and 1990, that rainforests 50 years ago covered 14% of the world's land surface and have been reduced to 6%, and that all tropical forests will be gone by the year 2090. While the FAO states that the annual rate of tropical closed forest loss is declining (FAO data are based largely on reporting from forestry departments of individual countries), from 8 million hectares in the 1980s to 7 million in the 1990s, some environmentalists state that rainforests are being destroyed at an ever-quickening pace. The London-based Rainforest Foundation notes that "the UN figure is based on a definition of forest as being an area with as little as 10% actual tree cover, which would therefore include areas that are actually savannah-like ecosystems and badly damaged forests."
These divergent viewpoints are the result of the uncertainties in the extent of tropical deforestation. For tropical countries, deforestation estimates are very uncertain and could be in error by as much as +/- 50% while based on satellite imagery, the rate of deforestation in the tropics is 23% lower than the most commonly quoted rates . Conversely a new analysis of satellite images reveal that deforestation of the Amazon rainforest is twice as fast as scientists previously estimated. The extent of deforestation that has occurred in West Africa during the twentieth century is currently being hugely exaggerated .
Despite these uncertainties, there is agreement that deforestation of rainforests remains a significant environmental problem. Up to 90% of West Africa's coastal rainforests have disappeared since 1900. In South Asia, about 88% of the rainforests have been lost. Much of what remains of the world's rainforests is in the Amazon basin, where the Amazon Rainforest covers approximately 4 million square kilometres. The regions with the highest tropical deforestation rate between 2000 and 2005 were Central America, which lost 1.3% of its forests each year, and tropical Asia. In Central America, 40% of all the rainforests have been lost in the last 40 years. Madagascar has lost 90% of its eastern rainforests. As of 2007, less than 1% of Haiti's forests remain. Several countries, notably Brazil, have declared their deforestation a national emergency.
From about the mid-1800s, around 1852, the planet has experienced an unprecedented rate of destruction of forests worldwide. More than half of the mature tropical forests that covered the planet some thousands of years ago have been cleared.
A January 30, 2009 New York Times article stated, "By one estimate, for every acre of rain forest cut down each year, more than 50 acres of new forest are growing in the tropics..." The new forest includes secondary forest on former farmland and so-called degraded forest.
Africa is suffering deforestation at twice the world rate, according to the U.N. Environment Programme (UNEP). Some sources claim that deforestation has already wiped out roughly 90% of West Africa's original forests. Deforestation is accelerating in Central Africa. According to the FAO, Africa lost the highest percentage of tropical forests of any continent. According to figures from the FAO (1997), only 22.8% of West Africa's moist forests remain, much of this degraded. Massive deforestation threatens food security in some African countries. Africa experiences one of the highest rates of deforestation because 90% of its population depends on wood as the main source of fuel for heating and cooking.
Research carried out by WWF International in 2002 shows that in Africa, rates of illegal logging vary from 50% for Cameroon and Equatorial Guinea to 70% in Gabon and 80% in Liberia – where revenues from the timber industry also fuelled the civil war.
See main article: Deforestation in Ethiopia. The main cause of deforestation in Ethiopia, located in East Africa, is a growing population and subsequent higher demand for agriculture, livestock production and fuel wood. Other reasons include low education and inactivity from the government, although the current government has taken some steps to tackle deforestation. Organizations such as Farm Africa are working with the federal and local governments to create a system of forest management. Ethiopia, the third largest country in Africa by population, has been hit by famine many times because of shortages of rain and a depletion of natural resources. Deforestation has lowered the chance of getting rain, which is already low, and thus causes erosion. Bercele Bayisa, an Ethiopian farmer, offers one example why deforestation occurs. He said that his district was forested and full of wildlife, but overpopulation caused people to come to that land and clear it to plant crops, cutting all trees to sell as fire wood.
Ethiopia has lost 98% of its forested regions in the last 50 years. At the beginning of the 20th century, around 420,000 km² or 35% of Ethiopia's land was covered with forests. Recent reports indicate that forests cover less than 14.2% or even only 11.9% now. Between 1990 and 2005, the country lost 14% of its forests or 21,000 km².
Deforestation with resulting desertification, water resource degradation and soil loss has affected approximately 94% of Madagascar's previously biologically productive lands. Since the arrival of humans 2000 years ago, Madagascar has lost more than 90% of its original forest. Most of this loss has occurred since independence from the French, and is the result of local people using slash-and-burn agricultural practises as they try to subsist. Largely due to deforestation, the country is currently unable to provide adequate food, fresh water and sanitation for its fast growing population.
See main article: Deforestation in Nigeria. According to the FAO, Nigeria has the world's highest deforestation rate of primary forests. It has lost more than half of its primary forest in the last five years. Causes cited are logging, subsistence agriculture, and the collection of fuel wood. Almost 90% of West Africa's rainforest has been destroyed.
Iceland has undergone extensive deforestation since Vikings settled in the ninth century. As a result, vast areas of vegetation and land has degraded, and soil erosion and desertification has occurred. As much as half of the original vegetative cover has been destroyed, caused in part by overexploitation, logging and overgrazing under harsh natural conditions. About 95% of the forests and woodlands once covering at least 25% of the area of Iceland may have been lost. Afforestation and revegetation has restored small areas of land.
Victoria and NSW's remnant red gum forests including the Murray River's Barmah-Millewa, are increasingly being clear-felled using mechanical harvesters, destroying already rare habitat. Macnally estimates that approximately 82% of fallen timber has been removed from the southern Murray Darling basin, and the Mid-Murray Forest Management Area (including the Barmah and Gunbower forests) provides about 90% of Victoria's red gum timber.
One of the factors causing the loss of forest is expanding urban areas. Littoral Rainforest growing along coastal areas of eastern Australia is now rare due to ribbon development to accommodate the demand for seachange lifestyles.
See main article: Deforestation in Brazil. There is no agreement on what drives deforestation in Brazil, though a broad consensus exists that expansion of croplands and pastures is important. Increases in commodity prices may increase the rate of deforestation. Recent development of a new variety of soybean has led to the displacement of beef ranches and farms of other crops, which, in turn, move farther into the forest. Certain areas such as the Atlantic Rainforest have been diminished to just 7% of their original size. Although much conservation work has been done, few national parks or reserves are efficiently enforced. Some 80% of logging in the Amazon is illegal.
In 2008, Brazil's government announced a record rate of deforestation in the Amazon. Deforestation jumped by 69% in 2008 compared to the previous twelve months, according to official government data. Deforestation could wipe out or severely damage nearly 60% of the Amazon rainforest by 2030, says a new report from WWF.
One case of deforestation in Canada is happening in Ontario's boreal forests, near Thunder Bay, where 28.9% of a 19,000 km² of forest area had been lost in the last 5 years and is threatening woodland caribou. This is happening mostly to supply pulp for the facial tissue industry .
In Canada, less than 8% of the boreal forest is protected from development and more than 50% has been allocated to logging companies for cutting.
The forest loss is acute in Southeast Asia, the second of the world's great biodiversity hot spots. According to a 2005 report by the FAO, Vietnam has the second highest rate of deforestation of primary forests in the world, second only to Nigeria. More than 90% of the old-growth rainforests of the Philippine archipelago have been cut.
Russia has the largest area of forests of any nation on Earth. There is little recent research into the rates of deforestation but in 1992 2 million hectares of forest was lost and in 1994 around 3 million hectares were lost. . The present scale of deforestation in Russia is most easily seen using Google Earth, areas nearer to China are most affected as it is the main market for the timber. . Deforestation in Russia is particularly damaging as the forests have a short growing season due to extremely cold winters and therefore will take longer to recover.
At present rates, tropical rainforests in Indonesia would be logged out in 10 years, Papua New Guinea in 13 to 16 years. There are significantly large areas of forest in Indonesia that are being lost as native forest is cleared by large multi-national pulp companies and being replaced by plantations. In Sumatra tens of thousands of square kilometres of forest have been cleared often under the command of the central government in Jakarta who comply with multi national companies to remove the forest because of the need to pay off international debt obligations and to develop economically. In Kalimantan, between 1991 and 1999 large areas of the forest were burned because of uncontrollable fire causing atmospheric pollution across South-East Asia. Every year, forest are burned by farmers (slash-and-burn techniques are used by between 200 and 500 million people worldwide) and plantation owners. A major source of deforestation is the logging industry, driven spectacularly by China and Japan. . Agricultural development programs in Indonesia (transmigration program) moved large populations into the rainforest zone, further increasing deforestation rates.
A joint UK-Indonesian study of the timber industry in Indonesia in 1998 suggested that about 40% of throughput was illegal, with a value in excess of $365 million. More recent estimates, comparing legal harvesting against known domestic consumption plus exports, suggest that 88% of logging in the country is illegal in some way. Malaysia is the key transit country for illegal wood products from Indonesia.
Prior to the arrival of European-Americans about one half of the United States land area was forest, about 4 million square kilometers (1 billion acres) in 1600. For the next 300 years land was cleared, mostly for agriculture at a rate that matched the rate of population growth. For every person added to the population, one to two hectares of land was cultivated. This trend continued until the 1920s when the amount of crop land stabilized in spite of continued population growth. As abandoned farm land reverted to forest the amount of forest land increased from 1952 reaching a peak in 1963 of 3,080,000 km² (762 million acres). Since 1963 there has been a steady decrease of forest area with the exception of some gains from 1997. Gains in forest land have resulted from conversions from crop land and pastures at a higher rate than loss of forest to development. Because urban development is expected to continue, an estimated 93,000 km² (23 million acres) of forest land is projected be lost by 2050 , a 3% reduction from 1997. Other qualitative issues have been identified such as the continued loss of old-growth forest, the increased fragmentation of forest lands, and the increased urbanization of forest land.
According to a report by Stuart L. Pimm the extent of forest cover in the Eastern United States reached its lowest point in roughly 1872 with about 48 percent compared to the amount of forest cover in 1620. Of the 28 forest bird species with habitat exclusively in that forest, Pimm claims 4 become extinct either wholly or mostly because of habitat loss, the passenger pigeon, Carolina parakeet, ivory-billed woodpecker, and Bachman's Warbler.
A key factor in controlling deforestation could come from the Kyoto Protocol. Avoided deforestation also known as Reduced Emissions from Deforestation and Degradation (REDD) could be implemented in a future Kyoto Protocol and allow the protection of a great amount of forests. At the moment, REDD is not yet implemented into any of the flexible mechanisms as CDM, JI or ET.
New methods are being developed to farm more intensively, such as high-yield hybrid crops, greenhouse, autonomous building gardens, and hydroponics. These methods are often dependent on chemical inputs to maintain necessary yields. In cyclic agriculture, cattle are grazed on farm land that is resting and rejuvenating. Cyclic agriculture actually increases the fertility of the soil. Intensive farming can also decrease soil nutrients by consuming at an accelerated rate the trace minerals needed for crop growth.
Deforestation presents multiple societal and environmental problems. The immediate and long-term consequences of global deforestation are almost certain to jeopardize life on Earth as we know it. Some of these consequences include loss of biodiversity, the destruction of forest-based societies, and climatic disruption. For example, extensive loss of the Amazon Rainforest could cause enormous amounts of carbon dioxide to be released back into the atmosphere.
Efforts to stop or slow deforestation have been attempted for many centuries because it has long been known that deforestation can cause environmental damage sufficient in some cases to cause societies to collapse. In Tonga, paramount rulers developed policies designed to prevent conflicts between short-term gains from converting forest to farmland and long-term problems forest loss would cause, while during the seventeenth and eighteenth centuries in Tokugawa Japan the shoguns developed a highly sophisticated system of long-term planning to stop and even reverse deforestation of the preceding centuries through substituting timber by other products and more efficient use of land that had been farmed for many centuries. In sixteenth century Germany landowners also developed silviculture to deal with the problem of deforestation. However, these policies tend to be limited to environments with good rainfall, no dry season and very young soils (through volcanism or glaciation). This is because on older and less fertile soils trees grow too slowly for silviculture to be economic, whilst in areas with a strong dry season there is always a risk of forest fires destroying a tree crop before it matures.
In the areas where "slash-and-burn" is practiced, switching to "slash-and-char" would prevent the rapid deforestation and subsequent degradation of soils. The biochar thus created, given back to the soil, is not only a durable carbon sequestration method, but it also is an extremely beneficial amendment to the soil. Mixed with biomass it brings the creation of terra preta, one of the richest soils on the planet and the only one known to regenerate itself.
In many parts of the world, especially in East Asian countries, reforestation and afforestation are increasing the area of forested lands . The amount of woodland has increased in 22 of the world's 50 most forested nations. Asia as a whole gained 1 million hectares of forest between 2000 and 2005. Tropical forest in El Salvador expanded more than 20 percent between 1992 and 2001. Based on these trends global forest cover is expected to increase by 10 percent—an area the size of India—by 2050 .
In the People's Republic of China, where large scale destruction of forests has occurred, the government has in the past required that every able-bodied citizen between the ages of 11 and 60 plant three to five trees per year or do the equivalent amount of work in other forest services. The government claims that at least 1 billion trees have been planted in China every year since 1982. This is no longer required today, but March 12 of every year in China is the Planting Holiday. Also, it has introduced the Green Wall of China-project which aims to halt the expansion of the Gobi-desert through the planting of trees. However, due to the large percentage of trees dying off after planting (up to 75%), the project is not very successful and regular carbon ofsetting through the Flexible Mechanisms might have been a better option. There has been a 47-million-hectare increase in forest area in China since the 1970s . The total number of trees amounted to be about 35 billion and 4.55% of China's land mass increased in forest coverage. The forest coverage was 12% two decades ago and now is 16.55%. .
In western countries, increasing consumer demand for wood products that have been produced and harvested in a sustainable manner are causing forest landowners and forest industries to become increasingly accountable for their forest management and timber harvesting practices.
The Arbor Day Foundation's Rain Forest Rescue program is a charity that helps to prevent deforestation. The charity uses donated money to buy up and preserve rainforest land before the lumber companies can buy it. The Arbor Day Foundation then protects the land from deforestation. This also locks in the way of life of the primitive tribes living on the forest land. Organizations such as Community Forestry International, The Nature Conservancy, World Wide Fund for Nature, Conservation International, African Conservation Foundation and Greenpeace also focus on preserving forest habitats. Greenpeace in particular has also mapped out the forests that are still intact and published this information unto the internet. . HowStuffWorks in turn, made a more simple thematic map showing the amount of forests present just before the age of man (8000 years ago) and the current (reduced) levels of forest. This Greenpeace map thus created, as well as this thematic map from howstuffworks marks the amount of afforestation thus again required to repair the damage caused by man.
To meet the world's demand for wood, it has been suggested by forestry writers Botkin and Sedjo that high-yielding forest plantations are suitable. It has been calculated that plantations yielding 10 cubic meters per hectare annually could supply all the timber required for international trade on 5 percent of the world's existing forestland. By contrast, natural forests produce about 1-2 cubic meters per hectare; therefore, 5 to 10 times more forest land would be required to meet demand. Forester Chad Oliver has suggested a forest mosaic with high-yield forest lands interspersed with conservation land.
According to an international team of scientists, led by Pekka Kauppi, professor of environmental science and policy at Helsinki University, the deforestation already done could still be reverted by tree plantings (eg CDM & JI afforestation/reforestation projects) in 30 years. The conclusion was made, through analysis of data acquired from FAO.
Reforestation through tree planting (through e.g. the noted CDM & JI A/R projects) might take advantage of the changing precipitation due to climate change. This may be done by studying where precipitation is projected to increase (see the globalis thematic map of 2050 precipitation) and setting up reforestation projects in these locations. Areas such as Niger, Sierra Leone and Liberia are especially important candidates, in large part because they also suffer from an expanding desert (the Sahara) and decreasing biodiversity (while being an important biodiversity hotspot).
While the preponderance of deforestation is due to demands for agricultural and urban use for the human population, there are some examples of military causes. One example of deliberate deforestation is that which took place in the U.S. zone of occupation in Germany after World War II. Before the onset of the Cold War defeated Germany was still considered a potential future threat rather than potential future ally. To address this threat, attempts were made to lower German industrial potential, of which forests were deemed an element. Sources in the U.S. government admitted that the purpose of this was the "ultimate destruction of the war potential of German forests." As a consequence of the practice of clear-felling, deforestation resulted which could "be replaced only by long forestry development over perhaps a century."
War can also be a cause of deforestation, either deliberately such as through the use of Agent Orange during the Vietnam War where, together with bombs and bulldozers, it contributed to the destruction of 44 percent of the forest cover, or inadvertently such as in the 1945 Battle of Okinawa where bombardment and other combat operations reduced the lush tropical landscape into "a vast field of mud, lead, decay and maggots". | http://everything.explained.at/Deforestation/ | 13 |
10 | Cruising far beyond the outermost planets, two American spacecraft have discovered the first strong physical evidence of the long-sought boundary marking the edge of the solar system, where the solar wind ebbs and the cold of interstellar space begins.
Voyager 1, now 4.9 billion miles out from Earth, began detecting intense low-frequency radio emissions last August. Signals were received at the same time by Voyager 2, 3.7 billion miles away from Earth.
Now scientists, after long and careful analysis, have concluded that the radio waves were produced by electrically charged gases, or plasma, from the sun interacting with cold gases from interstellar space at the edge of the solar system, a boundary known as the heliopause.
In a report of the discovery yesterday at a meeting of the American Geophysical Union in Baltimore, Dr. Don Gurnett, a physicist at the University of Iowa who is a member of the Voyager science team, said, "Our assumption that this is the heliopause is based on the fact that there is no other known structure out there that could be causing these signals."
Other scientists agreed that the Voyager findings amounted to the first clear answer to what had been one of the great unanswered questions in space physics: the exact location of the outer boundary of the solar system. They said it appeared to confirm recent theories about how far the heliopause should lie from the sun.
Based on the radio data and readings from other Voyager instruments, Dr. Ralph McNutt of the Applied Physics Laboratory of Johns Hopkins University in Laurel, Md., estimated that the heliopause is somewhere from 82 to 130 times farther away from the sun than is the Earth.
The mean distance from Earth to the sun is 93 million miles, which is a standard measure known as an astronomical unit. Pluto, usually the most distant planet, is about 39 astronomical units from the sun.
The two Voyagers were launched in 1977 and long ago completed their primary missions of photographing the outer planets. Voyager 1 has traveled out 52 astronomical units, while Voyager 2 is 40 units distant from the sun.
So, if the heliopause is about 100 astronomical units, say, it would take Voyager 1 another 15 years to get there. Officials at the Jet Propulsion Laboratory, which directs the mission for NASA, have said the Voyagers could still be functioning and transmitting data well beyond that time.
"This discovery is an exciting indication that still more discoveries and surprises lie ahead for the Voyagers," said Dr. Edward C. Stone, the director of the laboratory in Pasadena, Calif., who is the chief scientist for the Voyager project.
For now, the radio signals from the boundary have had to travel far to reach the Voyagers. At first, scientists were mystified by the recordings, until they examined solar behavior in the weeks before the radio signals began to be heard, and found evidence mirrored in the data. | http://articles.mcall.com/1993-05-27/news/2915696_1_heliopause-interstellar-space-voyager-project | 13 |
12 | An asymptote is a line whose distance from a curve approaches zero as they approach infinity, and this line (the asymptote) never touches the curve. The line will always be close to the curve but will not intersect it.
Mainly there are three types of asymptotes:
1) Horizontal asymptotes,
2) Vertical asymptotes and
3) Oblique asymptotes.
For a graph which is represented by a function x = f(y), horizontal asymptotes are horizontal lines. These asymptotes are obtained when the distance between the function and the line approaches zero as 'y' tends to +∞ or −∞.
Vertical asymptotes are vertical lines; near these asymptotes, the function grows without any bound.
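For example, for the function f(y) = 1/y, both coordinate axes are asymptotes: the curve x = 1/y approaches the y-axis (the line x = 0) as 'y' tends to +∞ or −∞, and it approaches the x-axis (the line y = 0) as 'y' tends to 0, where the function grows without bound.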
Let’s try to understand oblique asymptote and Graphing Oblique Asymptotes.
An oblique asymptote is a linear asymptote. When a linear asymptote is not parallel to the x- or y-axis, it is called an oblique asymptote. An oblique asymptote is also called a slant asymptote.
Let's consider the function f(y) = y + 1/y and plot its graph. In the graph above, the line x = y and the x-axis are both asymptotes.
A function f(y) is asymptotic to the straight line x = my + c (with m ≠ 0) if
Lim (y→ +∞) [f(y) - (my + c)] = 0, or
Lim (y→ -∞) [f(y) - (my + c)] = 0.
In the first case, the line x = my + c is an oblique asymptote of f(y) when 'y' tends to +∞; in the second case, the line x = my + c is an oblique asymptote of f(y) when 'y' tends to −∞.
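For example, for the function f(y) = y + 1/y discussed above, take m = 1 and c = 0: Lim (y→ +∞) [f(y) - y] = Lim (y→ +∞) 1/y = 0, so the line x = y is an oblique asymptote, which is exactly what the graph shows.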
Oblique asymptotes can also be defined for rational functions: a rational function has an oblique asymptote when the degree of its numerator is exactly one more than the degree of its denominator, and the asymptote can then be found by polynomial division. | http://www.tutorcircle.com/graphing-oblique-asymptotes-fQVlq.html | 13
16 | Measurement: GED Test Prep (page 4)
The GED Mathematics Exam emphasizes real-life applications of math concepts, and this is especially true of questions about measurement. This article will review the basics of measurement systems used in the United States and other countries, performing mathematical operations with units of measurement, and the process of converting between different units.
The use of measurement enables you to form a connection between mathematics and the real world. To measure any object, assign a number and a unit of measure. For instance, when a fish is caught, it is often weighed in ounces and its length measured in inches. The following lesson will familiarize you with the types, conversions, and units of measurement.
Types of Measurements
Following are the types of measurements used most frequently in the United States.
- Units of Length
- 12 inches (in.) = 1 foot (ft.)
- 3 feet = 36 inches = 1 yard (yd.)
- 5,280 feet = 1,760 yards = 1 mile (mi.)
- Units of Volume
- 8 ounces* (oz.) = 1 cup (c.)
- 2 cups = 16 ounces = 1 pint (pt.)
- 2 pints = 4 cups = 32 ounces = 1 quart (qt.)
- 4 quarts = 8 pints = 16 cups = 128 ounces = 1 gallon (gal.)
- Units of Weight
- 16 ounces* (oz.) = 1 pound (lb.)
- 2,000 pounds = 1 ton (T)
- Units of Time
- 60 seconds (sec.) = 1 minute (min.)
- 60 minutes = 1 hour (hr.)
- 24 hours = 1 day
- 7 days = 1 week
- 52 weeks = 1 year (yr.)
- 12 months = 1 year
- 365 days = 1 year
*Notice that ounces are used to measure both volume and weight.
When you perform mathematical operations, it is necessary to convert units of measure to simplify a problem. Units of measure are converted by using either multiplication or division:
- To change a larger unit to a smaller unit, simply multiply the specific number of larger units by the number of smaller units in only one of the larger units.
For example, to find the number of inches in 5 feet, simply multiply 5, the number of larger units, by 12, the number of inches in one foot:
- 5 feet = how many inches?
- 5 feet × 12 inches (the number of inches in a single foot) = 60 inches
Therefore, there are 60 inches in 5 feet.
Here is another example:
- Change 3.5 tons to pounds.
- 3.5 tons = how many pounds?
- 3.5 tons × 2,000 pounds (the number of pounds in a single ton) = 7,000 pounds
Therefore, there are 7,000 pounds in 3.5 tons.
- To change a smaller unit to a larger unit, simply divide the specific number of smaller units by the number of smaller units in only one of the larger units.
For example, to find the number of pints in 64 ounces, simply divide 64, the smaller unit, by 16, the number of ounces in one pint.
- 64 ÷ 16 = 4 pints
Therefore, 64 ounces are equal to 4 pints.
Here is one more:
- Change 32 ounces to pounds.
- 32 ÷ 16 = 2 pounds
Therefore, 32 ounces are equal to 2 pounds.
Basic Operations with Measurement
It will be necessary for you to review how to add, subtract, multiply, and divide with measurement. The mathematical rules needed for each of these operations with measurement follow.
Addition with Measurements
To add measurements, follow these two steps:
- Add like units.
- Simplify the answer.
Example: Suppose adding like units gives 4 pounds 25 ounces. Since 25 ounces is more than a pound, simplify:
4 pounds 25 ounces =
4 pounds + 1 pound 9 ounces =
5 pounds 9 ounces
Subtraction with Measurements
To subtract measurements, follow these three steps:
- Subtract like units.
- Regroup units when necessary.
- Write the answer in simplest form.
Sometimes, it is necessary to regroup units when subtracting.
- Example: Subtract 3 yards 2 feet from 5 yards 1 foot.
From 5 yards, regroup 1 yard to 3 feet. Add 3 feet to 1 foot. Then, subtract feet from feet and yards from yards: 5 yards 1 foot − 3 yards 2 feet = 4 yards 4 feet − 3 yards 2 feet = 1 yard 2 feet.
Multiplication with Measurements
To multiply measurements, follow these two steps:
- Multiply like units if units are involved.
- Simplify the answer.
Example: Multiply 9 feet by 4 yards. First, change yards to feet by multiplying the number of feet in a yard (3) by the number of yards in this problem (4).
3 feet in a yard × 4 yards = 12 feet
Then, multiply 9 feet by 12 feet = 108 square feet.
(Note: feet × feet = square feet)
Division with Measurements
For division with measurements, follow these steps (a worked example follows the list):
- Divide into the larger units first.
- Convert the remainder to the smaller unit.
- Add the converted remainder to the existing smaller unit if any.
- Divide into smaller units.
- Write the answer in simplest form.
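For example (a worked illustration of the steps above): divide 5 pounds 4 ounces by 4. Divide into the larger unit first: 5 pounds ÷ 4 = 1 pound with 1 pound left over. Convert the remainder: 1 pound = 16 ounces. Add it to the existing smaller unit: 16 ounces + 4 ounces = 20 ounces. Divide into the smaller units: 20 ounces ÷ 4 = 5 ounces. The answer in simplest form is 1 pound 5 ounces.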
The metric system is an international system of measurement also called the decimal system. Converting units in the metric system is much easier than converting units in the English system of measurement. However, making conversions between the two systems is much more difficult. Luckily, the GED will provide you with the appropriate conversion factor when needed. The basic units of the metric system are the meter (for length), the gram (for mass or weight), and the liter (for volume); roughly, a meter is slightly longer than a yard, a gram is about the weight of a paper clip, and a liter is slightly more than a quart.
Prefixes are attached to these basic metric units to indicate the amount of each unit.
For example, the prefix deci- means one-tenth (); therefore, one decigram is one-tenth of a gram, and one decimeter is one-tenth of a meter. The following six prefixes can be used with every metric unit:
- 1 hectometer = 1 hm = 100 meters
- 1 millimeter = 1 mm = meter = .001 meter
- 1 dekagram = 1 dkg = 10 grams
- 1 centiliter = 1 cL* = liter = .01 liter
- 1 kilogram = 1 kg = 1,000 grams
- 1 deciliter = 1 dL* = 1/10 liter = .1 liter
*Notice that liter is abbreviated with a capital letter—L.
Some common relationships used in the metric system include: 1 kilometer = 1,000 meters; 1 meter = 100 centimeters = 1,000 millimeters; 1 kilogram = 1,000 grams; and 1 liter = 1,000 milliliters.
| http://www.education.com/reference/article/measurement4/?page=4 | 13 |
11 | - Read the instructions, texts and questions very carefully.
- Work through the parts of the paper in the order that suits you best.
- Read the sources, titles and subtitles of the texts where given; they are there to help you.
- Read each text carefully before you answer the questions to get an overall impression and understanding of it.
- Check the words around the gap carefully in Part 1. Remember, the missing word(s) may form part of an idiom, fixed phrase or collocation.
- Read the complete sentence which contains the gap in Part 2. Remember that the missing word(s) are more likely to have a grammatical focus than a lexical one.
- Check that the completed sentence makes sense in the passage as a whole. Remember, the missing word(s) must fit the context of the passage. (Parts 1 and 2)
- Think about all the changes a word may require in Part 3: suffix, prefix, internal, more than one, singular, plural or negative, change of word class.
- Read the questions carefully and check each option against the text before rejecting it. (Parts 1, 5, 6 and 7)
- Keep an overall idea of the development of the text in Part 6. You will need to check that the extracts chosen to fit the gaps in the base text fit the progression of the argument or narrative as a whole.
- Decide on one answer and avoid writing alternative answers to a question.
- Check your spelling in Parts 2, 3 and 4 as correct spelling is essential.
- Transfer your answers accurately from the question paper to the answer sheet before the end of the test. You will not have time after the test to do this.
- Don't try to answer any questions without referring carefully to the text.
- Don't spend too much time on any one part of the paper.
- Don't forget to record your answers on the separate answer sheet.
- Don’t leave any question unanswered – you don't lose marks for incorrect answers.
- Don't assume that if the same word appears in the text as well as in an option, this means you have located the answer. (Parts 1, 5 and 7)
- Don't alter the word given. (Part 4)
- Don't write more than eight words, including the given word. (Part 4)
- Don't write out the full sentence. (Part 4)
- Don’t leave out any information from the prompt sentence. (Part 4)
FAQs (Frequently Asked Questions)
What kind of tasks are there in the Reading and Use of English paper?
The paper includes the following task types: multiple-choice cloze, open cloze, word-formation, key word transformation, multiple choice, gapped paragraph, and matching.
What kind of texts appear in the Reading and Use of English paper?
The texts come from a range of different sources and are written for different purposes. They are mainly contemporary and include non-specialist material from fiction and non-fiction books and journalism (a wide range of newspapers, magazines and journals).
What aspects of reading are being tested in the Reading and Use of English paper?
The paper tests comprehension at word, phrase, sentence, paragraph and whole text level. Each part tests different aspects of reading, including the use of vocabulary in context, such as idioms and collocations, understanding detail, opinion and attitude, text organisation and structure, global meaning and main idea, and cohesion and coherence.
How can I best prepare myself for the Reading and Use of English paper?
It is essential for you to engage with a substantial and varied range of written English and to read extensively (preferably for pleasure, not simply for the purposes of studying) as well as intensively. This enables you to become familiar with a wide range of language and text types, and is also helpful when you are working on the longer texts in Parts 5 and 6. You should also be familiar with the technique of indicating your answers on the separate answer sheet so that you can do this quickly and accurately.
How many marks is the Reading and Use of English paper worth?
The paper is worth 80 marks (after weighting) out of a total of 200 marks for the four Cambridge English: Proficiency papers. However, your overall grade is based on the total score gained in all four papers. It is not necessary to achieve a satisfactory level in all four papers in order to pass the examination.
What if I make a mistake on the answer sheet?
If more than one lozenge has been completed for one question, the computer rejects the answer sheet, which is then dealt with on an individual basis. Checks are in place to identify incomplete answer sheets, which are also then checked.
Cases where all the answers have been entered incorrectly, e.g. by putting Answer 1 to Question 2, Answer 2 to Question 3, etc., cannot be identified. You should be careful when filling in your answer sheet.
How long is each part of the Cambridge English: Proficiency Reading and Use of English paper?
There is no fixed answer to this question. The overall time allowed for the Reading and Use of English paper is 90 minutes. Candidates in a class preparing for the exam will almost certainly find that, as each part is a different task and tests different skills, they do not all spend the same amount of time on each part. This is normal and you should practise extensively before the exam to see how you need to distribute your time. The paper has a standard structure and format so that you will know what to expect in each part of the paper. You should be aware that answers must be marked on the answer sheet within the time allowed. Some students prefer to transfer their answers at the end of each task rather than wait until they have completed the whole paper.
Are marks deducted for incorrect answers?
No, they are not. All marking is positive in the sense that you will get marks for your correct answers and nothing if the answer is incorrect.
If I write two possible answers to a question, how are they marked?
You must write one answer for each question. If you write more than one answer, you will not be given any marks.
How important is spelling in the Reading and Use of English paper?
In Parts 2, 3 and 4, all spelling must be correct.
Do contractions count as one word or two?
Contracted words count as the number of words they would be if they were not contracted. For example, isn’t, didn’t, I’m, I’ll are counted as two words (replacing is not, did not, I am, I will). Where the contraction replaces one word (e.g. can’t for cannot), it is counted as one word.
What happens if I miss a negative in the transformations, thereby giving the opposite meaning to the original?
The instructions state that the second sentence must have a similar meaning to the first. However, in the mark scheme the answer is divided into two parts (see below). The two parts of the sentence (either side of the dividing line) are always treated separately, so you will receive one mark for correctly completing one part of the sentence, even if a negative has been omitted from the other part.
E.g. I've never thought of asking the hotel staff for advice about restaurants.
It has ............. the hotel staff for advice about restaurants.
Answer: never occurred to me (1) | to ask (1)
- Read each question very carefully.
- Remember that Question 1 is compulsory.
- Choose Part 2 questions on the basis of what interests you the most but also bear in mind the task type.
- Decide exactly what information you are being asked to give.
- Identify the target reader, your role as writer and your purpose in writing.
- Check which task type you are being asked to write.
- Organise your ideas and make a plan before you write.
- Use a pen, not a pencil.
- Write your answers in the booklet provided.
- Write in an appropriate style.
- Identify the key points in each text in Part 1.
- Deal with all parts of the question in Part 2.
- Calculate how many words on average you write on a line and multiply this average by the number of lines to estimate how much you have written – don't waste time counting words individually.
- Follow your plan and keep in mind your purpose for writing.
- Use as wide a range of structure and vocabulary as you can but think carefully about when to use idioms.
- Use paragraphs and indent when you start a new paragraph.
- Check for spelling errors and the use of punctuation such as capital letters, apostrophes, commas, etc.
- Cross out errors with a single line through the word(s).
- Check structures: subject-verb agreement, tenses, word order, singular and plural nouns.
- Make sure that your handwriting can be read by the examiner.
- Don't attempt a set text question if you have not read the book.
- Don't attempt a question if you feel unsure about the format.
- Don't include irrelevant material.
- Don't write out a rough version and then try to write a good copy – you will not have time.
FAQs (Frequently Asked Questions)
There are some similarities between the writing tasks in Cambridge English: Advanced, also known as Certificate in Advanced English (CAE), and Cambridge English: Proficiency. What is different?
Cambridge English: Proficiency questions are designed to generate language that requires you to use more abstract functions such as hypothesising, interpreting and evaluating and to move away from just factually based responses. This raises the expected language level not only in terms of structure but also range of vocabulary and appropriacy of style and register.
Are there any differences in the way the Part 1 and Part 2 questions are assessed?
Part 1 and Part 2 questions carry equal marks, and Writing Examiners apply the same assessment scales to them (Content, Communicative Achievement, Organisation and Language). Content focuses on how well the candidate has fulfilled the task; Communicative Achievement focuses on how appropriate the candidate's writing is for the task; Organisation focuses on the way the candidate puts together the piece of writing; and Language focuses on the range and accuracy of the candidate's vocabulary and grammar.
How are extended responses in the Writing paper assessed?
Examiners mark tasks using assessment scales developed with explicit reference to the Common European Framework of Reference for Languages (CEFR). The scales, which are used across the Cambridge English General and Business English Writing tests, are made up from four subscales: Content, Communicative Achievement, Organisation and Language:
- Content focuses on how well the candidate has fulfilled the task – if they have done what they were asked to do.
- Communicative Achievement focuses on how appropriate the writing is for the task and whether the candidate has used the appropriate register.
- Organisation focuses on the way the candidate puts together the piece of writing, in other words, if it is logical and ordered.
- Language focuses on vocabulary and grammar. This includes the range of language as well as how accurate it is.
Each response is marked from 0 to 5 on each of the four subscales and these scores are combined to give a final mark for the Writing test.
If I write in a text type, such as a letter, a report, or an essay, that is different from the one asked for in the question, how will the writing be assessed?
The text type is a very important aspect of the Cambridge English: Proficiency Writing paper as it is a major factor in the choice of style and register for the piece of writing. For example, if you write an essay when the question has asked for an article, the register will not be totally appropriate for an article. This will have a negative effect on the target reader and will be penalised.
Will I be penalised for writing an answer that is over the word limit stated in the question?
You will not be penalised just because the text is over the word limit. However, over-length writing may lead to irrelevance, repetition and poor organisation. These factors have a negative effect on the target reader and will be penalised.
How is the writing assessed if the candidate has obviously run out of time and the answer is incomplete?
Examiners will only assess what is on the page and will not make assumptions about what you might have written. For example, if the conclusion is missing, this will affect the organisation and coherence and will be penalised.
How severely are poor spelling and punctuation penalised?
Spelling is one factor considered under the assessment scale for Language, and punctuation is one factor considered under Organisation. You do not lose a mark every time you make a spelling or punctuation mistake, so it is still possible to get a high band score with occasional native-speaker type lapses. However, spelling and punctuation are an important aspect of accuracy, and frequent errors may have a negative effect on the target reader, which is one factor considered under Communicative Achievement.
Do I have to study all the set texts?
The set text questions are optional. If you decide to answer on a set text, it is only necessary to study one of the texts as there is always a question on each of them. Information on what the set texts are for this year can be found above.
Can any edition of the set texts be used for study?
Any full-length edition can be used for study. At Cambridge English: Proficiency level, you should not be reading simplified editions.
Will there always be a narrative question?
There will sometimes be the opportunity to write a narrative, but it will be embedded in a letter or article, as in the sample papers. Such a question will not necessarily be on every paper.
Are addresses to be omitted ONLY when stated in the task?
As a matter of policy, where the genre is given as a letter, 'You do not need to include postal addresses' is added to the instructions. Where other genres are given (e.g. a report, an article), you could choose to use a letter format to answer the question, if appropriate to the task. In no case will the address, if you include it, be subject to assessment, either negative or positive.
Is a report format obligatory for such questions in the Writing paper?
Reports should be clearly organised and may contain headings. The report format is not obligatory, but will make a good impression on the target reader if used appropriately. The mark awarded for the report will, however, depend on how the writing meets the requirements.
- Listen to and read the instructions. Make sure you know what kind of text you will hear, what it is about and what you have to do in each part.
- Think about the topic, the development of ideas and the context as you read the questions. This will help you when you listen.
- Answer all the questions. Even if you are not sure, you have probably understood enough to make a good attempt!
- Be careful of 'word-spotting' (when answers in options appear in the recording but in a different context).
- Pay attention to the role of stress and intonation in supporting meaning.
- Write the actual word you hear. (Part 2)
- Check your spelling. (Part 2)
- Look carefully at what is printed before and after the gap and think about the words which could fit, both logically and grammatically. (Part 2)
- Don't spend too much time on a difficult question. Move on to the next question and come back to the difficult one again later.
- Don't complicate an answer by changing or adding extra information. (Part 2)
FAQs (Frequently Asked Questions)
What aspects of listening are tested in the Cambridge English: Proficiency Listening paper?
The range of texts and task types reflects the variety of listening situations which you need to be able to cope with at this level.
Variety of accents:
Recordings will contain a variety of accents corresponding to standard variants of native-speaker accent.
Texts vary in terms of length and interaction. Text types used include: interviews, discussions, conversations, talks, speeches, lectures, documentaries, instructions.
A variety of task types is used. These reflect the different reasons for, and focuses of, listening: understanding opinion, attitude, gist, detail, main idea, speaker's purpose; inferring meaning, agreement and opinion. Three- and four-option multiple-choice exercises, sentence completion and multiple matching are used.
Will I have enough time to complete the paper?
All Cambridge English Listening tests are trialled on students to see that they have enough time to answer and complete the answer sheet. The test is designed to be as user-friendly as possible but it is useful to remind yourself of the following points:
The instructions for each task are heard in the recording and are followed by a pause for you to study the task for that section. You can and should use this time to study the questions printed on the page for this task to help you predict both what you will hear and what kind of information you will be required to identify and understand in order to be able to answer.
The questions in the Listening paper follow the order of the information in the recording, and you should not waste time on a question you are having difficulty with as you might miss the answer to the following question. Each recording is heard twice.
Five minutes are provided at the end of the recording for you to transfer your answers onto the answer sheet.
How do I record my answers?
You must write all your answers on a separate answer sheet. You may write on the question paper as you listen, but you must transfer your answers to the answer sheet. Five minutes are allocated at the end of the test for you to do this.
Is spelling important?
Part 2 is the only part of the Cambridge English: Proficiency Listening paper where you have to write words for your answers (in the other parts, you indicate your choice of answer by writing a letter). Answers for Part 2 (which are generally short) must be spelled correctly and must fit into the grammatical structure of the sentence. Both British English and American English spellings are accepted. Spelling must be correct for a mark to be given.
How many marks are given in the Cambridge English: Proficiency Listening paper?
There are 30 questions in this paper. The total score is adjusted once the paper has been marked to give a mark out of 40.
Am I supposed to write the words I hear in the recording in answers to Part 2, or do I get more marks if I use my own words?
You should try to use the actual words you hear in the recording. You do not get more marks for using your own words.
Can I wear headphones for the Listening paper?
Ask your centre whether you can use headphones or not – it depends how they choose to run the exam.
- Make sure you know what you have to do in each part of the test and the timing involved.
- Raise the level of the conversation and discussion above the everyday and purely descriptive.
- Listen to the instructions carefully and focus on the task set.
- Listen actively to your partner, develop their ideas and opinions and work with them.
- Show interest in and respect for your partner's ideas and views.
- Make use of the prompts in your long turn if you want to.
- Respond as fully as possible and extend your ideas and opinions, giving reasons where possible.
- Remember your partner's name and use it when referring to them.
- Don't let your partner always 'take the lead' – you must also initiate.
- Don't waffle – be direct, get to the point and say what you mean.
- Don't speak during your partner's long turn.
- Don't waste your opportunities to show the examiners what you can do.
- Don't ask the examiners how you have done.
- Don't monopolise the discussion. You must be sensitive to turn-taking. (Part 2)
FAQs (Frequently Asked Questions)
Why can't I do the test alone?
Research studies have shown that in order to test a wide range of language and interactive ability with different people (here the examiner and the candidate's partner), and where the test targets a particular level of ability (e.g. Cambridge English: Proficiency as opposed to IELTS), it is better to have pairs. Thus, the standard format is two candidates and two examiners. If there is an uneven number of candidates at the end of the session, the candidates will be asked to take the test in a group of three, never alone.
Can I choose who will examine me?
No. The centre decides which candidates will be assessed by which examiners. Examiners are specially recruited and trained to assess impartially and to the same standard, so it doesn't matter which examiner you have. Also, examiners are never allowed to assess their own students or anybody they know socially. And do remember there are always two examiners, both of whom make an assessment.
Do I have to prepare a talk on a topic in advance?
No. Just follow the instructions from the examiner. During your long turn, the examiner will give you a card with a question on it for you to talk about.
Can I choose which topics to talk about?
No. You will have to discuss several topics during the Speaking test and these will be ones which you should have covered when preparing for the exam. None of the topics require specialised knowledge – they will all be accessible.
What should I do if I don't understand the examiner?
You can always ask the examiner to repeat the question or the instructions. However, you should listen carefully and try to understand the first time. | http://www.cambridgeenglish.org/exams-and-qualifications/proficiency/how-to-prepare/ | 13 |
20 | The MESH_MERGE function merges two polygonal meshes.
Result = MESH_MERGE (Verts, Conn, Verts1, Conn1 [, /COMBINE_VERTICES] [, TOLERANCE=value] )
The function return value is the number of triangles in the modified polygonal mesh connectivity array.
Input/Output array of polygonal vertices [3, n]. These are potentially modified and returned to the user.
Input/Output polygonal mesh connectivity array. This array is modified and returned to the user.
Additional input polygonal vertex array [3, n].
Additional input polygonal mesh connectivity array.
If this keyword is set, the routine will attempt to collapse vertices which are at the same location in space into single vertices. If the distance between points (i) and (i+1) is within the TOLERANCE value, the two points can be collapsed into a single vertex. The result is returned as a modification of the Verts argument.
This keyword is used to specify the tolerance value used with the COMBINE_VERTICES keyword. The default value is 0.0.
This example merges two simple meshes: a single square and a single right triangle. The right side of the square is in the same location as the left side of the triangle. Each mesh is originally its own polygon object. These objects are then added to a model object. The model is displayed in the XOBJVIEW utility. The XOBJVEW utility allows you to click-and-drag the polygon object to rotate and translate it. See XOBJVIEW for more information on this utility.
When you quit out of the first XOBJVIEW display, the second XOBJVIEW display will appear. The meshes are merged into a single polygon object. After you quit out of the second display, the final display shows the results of decimating the merged mesh to obtain the least number connections for these vertices. Decimation can often be used to refine the results of merging.
PRO MergingMeshes
   ; Create a mesh of a single square (4 vertices
   ; connected counter-clockwise from the lower left
   ; corner of the mesh).
   vertices = [[-2., -1., 0.], [0., -1., 0.], $
      [0., 1., 0.], [-2., 1., 0.]]
   connectivity = [4, 0, 1, 2, 3]
   ; Create a separate mesh of a single triangle (3
   ; vertices connected counter-clockwise from the lower
   ; left corner of the mesh).
   triangleVertices = [[0., -1., 0.], [2., -1., 0.], $
      [0., 1., 0.]]
   triangleConnectivity = [3, 0, 1, 2]
   ; Initialize model for display.
   oModel = OBJ_NEW('IDLgrModel')
   ; Initialize polygon for the square mesh.
   oPolygon = OBJ_NEW('IDLgrPolygon', vertices, $
      POLYGONS = connectivity, COLOR = [0, 128, 0], $
      STYLE = 1)
   ; Initialize polygon for the triangle mesh.
   oTrianglePolygon = OBJ_NEW('IDLgrPolygon', $
      triangleVertices, POLYGONS = triangleConnectivity, $
      COLOR = [0, 0, 255], STYLE = 1)
   ; Add both polygons to the model.
   oModel->Add, oPolygon
   oModel->Add, oTrianglePolygon
   ; Display the model in the XOBJVIEW utility.
   XOBJVIEW, oModel, /BLOCK, $
      TITLE = 'Two Separate Meshes'
   ; Merge the square and triangle into a single mesh.
   numberTriangles = MESH_MERGE(vertices, $
      connectivity, triangleVertices, $
      triangleConnectivity, /COMBINE_VERTICES)
   ; Output number of resulting vertices and triangles.
   numberVertices = SIZE(vertices, /DIMENSIONS)
   PRINT, 'numberVertices = ', numberVertices
   PRINT, 'numberTriangles = ', numberTriangles
   ; Cleanup triangle polygon object, which is no longer
   ; needed.
   OBJ_DESTROY, [oTrianglePolygon]
   ; Update remaining polygon object with the results from
   ; merging the two meshes together.
   oPolygon->SetProperty, DATA = vertices, $
      POLYGONS = connectivity, COLOR = [0, 128, 128]
   ; Display results.
   XOBJVIEW, oModel, /BLOCK, $
      TITLE = 'Result of Merging the Meshes into One'
   ; Decimate polygon to 75 percent of the original
   ; number of vertices.
   numberTriangles = MESH_DECIMATE(vertices, connectivity, $
      decimatedConnectivity, PERCENT_POLYGONS = 75)
   ; Output number of resulting triangles.
   PRINT, 'After Decimation: numberTriangles = ', numberTriangles
   ; Update polygon with results from decimating.
   oPolygon->SetProperty, DATA = vertices, $
      POLYGONS = decimatedConnectivity, COLOR = [0, 0, 0]
   ; Display decimation results.
   XOBJVIEW, oModel, /BLOCK, $
      TITLE = 'Decimation of Mesh'
   ; Cleanup object references.
   OBJ_DESTROY, [oModel]
END
The results for this example are shown in the following figure: original, separate meshes (left), merged mesh (center) and decimated mesh (right).
MESH_CLIP, MESH_DECIMATE, MESH_ISSOLID, MESH_NUMTRIANGLES, MESH_OBJ, MESH_SMOOTH, MESH_SURFACEAREA, MESH_VALIDATE, MESH_VOLUME | http://www.physics.nyu.edu/grierlab/idl_html_help/M27.html | 13 |
24 | Plotting Complex Numbers
Complex numbers may easily be plotted in the complex plane. Pure imaginaries are plotted along the vertical axis, the axis of imaginaries, and real numbers are plotted along the horizontal axis, the axis of reals. It follows that other points in the complex plane must represent numbers that are part real and part imaginary; in other words, complex numbers. If we wish to plot the point 3 + 2i, we note that the number is made up of the real number 3 and the imaginary number 2i. Thus, as in figure 15-8, we measure along the real axis in a
Figure 15-7. -The complex number system.
Figure 15-8.-Plotting complex numbers.
positive direction. At point (3, 0) on the real axis we turn through one right angle and measure 2 units up and parallel to the imaginary axis. Likewise, the number -3 + 2i is 3 units to the left and up 2 units; the number 3 - 2i is 3 units to the right and down 2 units; and the number -3 -2i is 3 units to the left and down 2 units.
Complex Numbers as Vectors
A vector is a directed line segment. A complex number represents a vector expressed in the RECTANGULAR FORM. For example, the complex number 6 + 8i in figure 15-9 may be considered as representing either the point P or the line OP. The real parts of the complex number (6 and 8) are the rectangular components of the vector. The real parts are the legs of the right triangle (sides adjacent to the right angle), and the vector OP is its hypotenuse (side opposite the right angle). If we merely wish to indicate the vector OP, we may do so by writing the complex number that represents it along the segment as in figure 15-9. This method not only fixes the position of point P, but also shows what part of the vector is imaginary (PA) and what part is real (OA).
If we wish to indicate a number that shows the actual length of the vector OP, it is necessary to solve the right triangle OAP for its hypotenuse. This may be accomplished by taking the square root of the sum of the squares of
Figure 15-9. -A complex number shown as a vector.
the legs of the triangle, which in this case are the real numbers 6 and 8. Thus, the length of OP is the square root of (6² + 8²) = the square root of (36 + 64) = the square root of 100 = 10.
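The same arithmetic can be checked numerically; the short Python sketch below is our own illustration, not part of the original text, and it also previews the direction angle discussed next.
import math
z = complex(6, 8)                                # the vector 6 + 8i
print(abs(z))                                    # 10.0, the square root of 6**2 + 8**2
print(math.degrees(math.atan2(z.imag, z.real)))  # about 53.13, the direction angle of OP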
However, since a vector has direction as well as magnitude, we must also show the direction of the segment; otherwise the segment OP could radiate in any direction on the complex plane from point O. The expression 10 ∠53.1° indicates that the vector OP has a magnitude of 10 and has been rotated counterclockwise from the initial position through an angle of 53.1°. (The initial position is a line extending from the origin to the right along OX.) This method of expressing the vector quantity is called the POLAR FORM. The number represents the magnitude of the quantity, and the angle represents the position of the vector with respect to the horizontal reference, OX. Positive angles represent counterclockwise rotation of the vector, and negative angles represent clockwise rotation. The polar form is generally simpler for multiplication and division, but its use requires a knowledge of trigonometry. | http://www.tpub.com/math1/16c.htm | 13 |
13 | Manipulating files is an essential aspect of scripting in Python, and luckily for us, the process isn’t complicated. The built-in
open function is the preferred method for reading files of any type, and probably all you’ll ever need to use. Let’s first demonstrate how to use this method on a simple text file.
For clarity, let’s first write our text file string in a standard text editor (MS Notepad in this example). When opened in the editor it will look like this (note the empty trailing line):
To open our file with Python, we first have to know the path to the file. In this example the file path will be relative to your current working directory. So we won’t need to type the full path into the interpreter.
>>> tf = 'textfile.txt'
Open a File in Python
Using this variable as the first argument of the
open method, we’ll have our file saved as an object.
>>> f = open(tf)
Read a File with Python
When we reference our file-object
f, Python tells us the status (open or closed), the name, and the mode, as well as some info we don’t need (about the memory it’s using on our machine).
We already knew the name, and we haven’t closed it so we know it’s open, but the mode deserves special attention. Our file
f is in mode r for read. Specifically, this means we can only read data from the file, not edit or write new data to the file (it’s also in
t mode for
text, though it doesn’t say this explicitly —it’s the default mode, as is r). Let’s read our text from the file with the
>>> f.read()
'First line of our text.\nSecond line of our text.\n3rd line, one line is trailing.\n'
This doesn’t exactly look like what we typed into the notepad, but it’s how Python reads the raw text data. To get the text as we typed it (without the
\n newline characters), we can print it:
>>> print(_)
First line of our text.
Second line of our text.
3rd line, one line is trailing.
Note how we used the
_ character in the Python IDLE to reference the most recent output instead of using the
read method again. Here's what happens if we try to use read again:
>>> f.read()
''
This happens because read returned the full contents of the file, and the invisible position marker (how Python keeps track of your position in the file) is at the end of the file; there’s nothing left to read.
Partial Reading of Files in Python
Note: You can use an integer argument with
read if you don’t want the full contents of the file; Python will then read however many bytes you specify as an integer argument for
To get back to the start of the file (or anywhere else in the file), use the
seek(int) method on
f. By going back to the start you can read the contents from the beginning again with read:
>>> f.seek(0)
# We only read a small chunk of the file, 10 bytes
>>> print(f.read(10))
First line
Also, to tell where the current position of the file is, use the
tell method on
f like so:
>>> f.tell()
10L
If you don’t know the size of your file or how much of it you want, you might not find that useful.
Reading Files Line by Line in Python
What is useful, however, is reading the contents of the file line-by-line. One way we can do this is with the
readline and readlines methods—the first reads one line at a time, the second returns a list of every line in the file; both have an optional integer argument to indicate how much of the file (how many bytes) to read:
# Make sure we're at the start of the file
>>> f.seek(0)
>>> f.readlines()
['First line of our text.\n', 'Second line of our text.\n', '3rd line, one line is trailing.\n']
>>> f.readline()
'First line of our text.\n'
>>> f.readline(20)
'Second line of our t'
# Note if int is too large it just reads to the end of the line
>>> f.readline(20)
'ext.\n'
Another option for reading a file line-by-line is treating it as a sequence and looping through it, like so:
>>> f.seek(0)
>>> for line in f:
...     print(line)
First line of our text.
Second line of our text.
3rd line, one line is trailing.
Python File Writing Modes
That covers the basic reading methods for files. Before looking at writing methods, we'll briefly examine the other modes of file-objects returned with open.
We already know mode r, but there are also the w and a modes (which stand for write and append, respectively). In addition to these there are the options + and b. The + option added to a mode makes the file open for updating, in other words to read from it or write to it.
With this option it might seem like there's no difference between an r+ mode and w+ mode, but there's a very important difference between these two: in w mode, the file is automatically truncated, meaning its entire contents are erased — so even in w+ mode the file will be completely overwritten as soon as it's opened, so be careful. Alternatively, you can truncate the open file yourself with the truncate method.
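A tiny illustration of that difference (a sketch only, assuming a hypothetical existing file named notes.txt):
>>> g = open('notes.txt', 'r+')   # contents are kept; we can read and write
>>> g.close()
>>> g = open('notes.txt', 'w+')   # contents are erased the moment the file is opened
>>> g.read()
''
>>> g.close()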
If you want to write to the end of the file, just use append mode (with + if you also want to read from it).
The b option indicates to open the file as a binary file (instead of the text mode default). Use this whenever you have data in the file that is not regular text (e.g. when opening an image file).
Now let’s look at writing to our file. We’ll use a+ mode so we don’t erase what we have. First let’s close our file
f and open a new one
# It's important to close the file to free memory
>>> f.close()
>>> f2 = open(tf, 'a+')
We can see that our
f file is now closed, meaning it isn’t taking up much memory, and we can’t perform any methods on it.
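For instance, trying to read from the closed file now raises an error (standard CPython output shown):
>>> f.read()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: I/O operation on closed file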
Note: If you don’t want to have to call
close explicitly on the file, you can use a
with statement to open the file. The
with statement will close the file automatically:
# f remains open only within the 'with'
>>> with open(tf) as f:
...     print(f.read())
First line of our text.
Second line of our text.
3rd line, one line is trailing.
# This constant tells us if the file is closed
>>> f.closed
True
With f2, let’s write to the end of the file. We’re already in append mode so we can just call
f2.write('Our 4th line, with write()\n')
Writing Multiple Lines to a File in Python
With this we’ve written to our file, and we can also write multiple lines with the
writelines method, which will write a sequence (e.g., a list) of strings to the file as lines:
f2.writelines(['And a fifth', 'And also a sixth.'])
f2.close()
Note: The name
writelines is a misnomer, as it does not write newline characters to the end of each string in the sequence automatically, as we’ll see.
Ok, now we’ve written our text and we’ve closed
f2 so the changes we’ve made should be seen in the file when we open it in our text editor:
We can see the
writelines method didn’t separate our fifth and sixth lines for us, so keep that in mind.
Now that you have a good starting point, get scripting and discover what you can do when reading and writing files in Python — and don’t forget to utilize all of the extensive formatting methods Python has for strings! | http://pythoncentral.org/reading-and-writing-to-files-in-python/ | 13 |
95 | Over the years, NASA has collected a great deal of Earth science data from dozens of orbiting satellites. With time, these data collections have scattered among many archives that vary significantly in sophistication and access. NASA risked losing valuable, irreplaceable data as people retired, storage media decayed, formats changed and collections dispersed. Scientists began to spend more time searching for data than performing research.
Today, NASA's Office of Mission to Planet Earth, which leads the agency's Earth science research, continues to collect data. This office operates 11 active satellites and instruments, which together produce 450 gigabytes (Gb) of data each day. Landsat alone, one of NASA's most popular sources of remote sensing data, produces 200 Gb of raw data per day. In 1997, NASA will launch the first of many Earth Observation Systems (EOS) satellites and instruments that will double the daily production of raw data. EOS will produce 15 years of global, comprehensive environmental remote sensing data.
To handle the size and variety of data now available and to promote cross-discipline research, NASA created EOSDIS, which drastically reduces the time spent searching for relevant data, allowing scientists to focus their research efforts on changes in the Earth's environment. EOSDIS allows scientists to search many data centers and disciplines quickly and easily, quickening the pace of research. The faster the research, the more quickly scientists can identify causes of detrimental environmental effects, opening the way for policy- and lawmakers to act at international, national and local levels.
The well-known hole in the ozone layer above the Antarctic illustrates the process from research to policy to law. Researchers first discovered the ozone hole when lofting a weather balloon from an Antarctic research station. But NASA's NIMBUS 7 satellite had the necessary instruments, so why hadn't it detected the hole? Scientists quickly discovered that the calibration algorithm routinely dropped low ozone values as "noise." When they retrieved 12 years of original NIMBUS 7 data, scientists verified the existence of the hole and indicated that it had grown over the last decade. Data from additional instruments revealed that chlorofluorocarbons (CFCs), such as Freon, destroyed the ozone layer and created the hole. Armed with this knowledge, the United States signed several international treaties restricting the production of CFCs. Congress passed regulations on the production, distribution and recovery of CFCs in the United States. As a direct result, worldwide production of CFCs has plummeted. Today, consumers cannot openly buy Freon. Given time, the CFCs already in the atmosphere will disperse and the ozone layer will heal itself.
Another example of the benefits of multiple-discipline Earth science research lies in the work of the EOSDIS Pathfinder projects, which recycle old data from past and current satellites into new products for scientific research. One project used old Landsat data to assess deforestation in the Amazon basin, indicating that the true rate of deforestation closely matches that cited by the Brazilian government, thus ending a long standing, international debate. Now that scientists have settled the extent of deforestation, policy- and lawmakers can act to fix it.
In yet another result of the EOSDIS philosophy, ocean dynamists recently discovered a huge, low-amplitude wave that propagates back and forth across the Pacific Ocean. Only a few inches high, but a thousand miles long, the wave bounces back and forth between South America and Asia. The same scientists also found that sea level has risen slightly over the last few years, while other researchers detected a slight decline in total ice coverage. Are these three phenomena related? If so, why? Only collaborative research between atmospheric physics, ocean dynamics, meteorology and climatology can answer these questions.
The same principles apply to regional and local, as well as national and international, policy and law. Through EOSDIS, state and local governments can obtain accurate data and information about water tables, flood plains, ground cover and air quality. For example, the state of Ohio has begun using NASA remote sensing data to monitor reclamation of strip mining sites, a task for which the state does not have enough personnel to perform on-site inspections.
EOSDIS does a lot more than just store and distribute Earth science data. It also provides the operational ground infrastructure for all satellites and instruments within the Mission to Planet Earth office at NASA. It contains Earth science data from EOS satellites, other MTPE satellites, joint programs with international partners and other agencies, field studies and past satellites. It receives and processes the raw data from the satellites. After initial processing, EOSDIS delivers the data to the Distributed Active Archive Centers (DAACs) for further processing, storage and distribution. EOSDIS also includes mission operations and satellite control.
The Science Data Processing Segment handles all data production, archive and distribution through the Information Management Service, the Planning and Data Processing System, and the Data Archival and Distribution Services. The Information Management Service performs data search, access and retrieval for the EOSDIS. The Planning and Data Processing System processes the raw data into the standard products offered by the EOSDIS. The Data Archival and Distribution Service permanently stores all data received or produced by EOSDIS.
The Flight Operations Segment, consisting of the EOS Operations Center, the Instrument Support Terminals and the Spacecraft Simulator, supports the EOS satellites and instruments. The Operations Center commands and controls the operation of EOS satellites. The Instrument Support Terminals consist of a few generic workstations dedicated to the command and control of specific instruments. Generally, each instrument will have its own Instrument Support Terminal. The Spacecraft Simulator analyzes general satellite information stripped off the main data stream, searching for trends and problems.
The Communications and Systems Management Segment, consisting of the Systems Management Center and the NASA Internal Network, manages schedules and operations among the DAACs and other elements of the EOSDIS. The Systems Management Center manages network loading, data transfer and overall processing to optimize EOSDIS performance. The Internal Network connects all of the permanent archives, transferring data among all of the DAACs and Science Computing Facilities via a dedicated fiber network utilizing the asynchronous transfer mode. The NASA Science Internet (or Internet for short) links the general user to the EOSDIS. The Internet also links EOSDIS to data centers outside NASA.
The EOSDIS Data and Operations System (EDOS), consisting of the Data Interface Facility, the Data Production Facility and the Sustaining Engineering Facility, handles all telemetry to and from the satellite and performs the initial data processing. The Data Interface Facility is the primary communication and data link between the ground and the satellites. The Data Interface Facility separates the main data stream into the scientific and system information. The scientific information goes to the Data Production Facility, while the system information goes to the EOS Operations Control Center and the Spacecraft Simulator. The Data Production Facility separates the scientific data by instruments, calibrates it and attaches any ancillary data (orbit information, for example). All data then gets transferred to the DAACs for permanent storage. The Sustaining Engineering Facility maintains equipment, identifies hardware trends and plans for future upgrades.
The DAACs process the data from each instrument on each satellite into approximately 250 products. Among the many satellite projects from which products are developed are the Tropical Rain Measurement Mission, the Ocean Topography Experiment and Total Ozone Mapping Spectrometer. Through EOSDIS, data products can come from field campaigns, such as the Boreal Ecosystem Atmosphere Study; from satellites operated by other agencies, such as NOAA's Geostationary Orbit Environmental Satellite; and from past NASA missions and programs. Users can locate data products by discipline, DAAC, Earth location, instrument, satellite or time. EOSDIS allows any data format, but uses the Hierarchical Data Format, developed by the National Center for Supercomputing Applications, as the standard.
NASA released Version 0 of the EOSDIS to the general public in 1994. Version 0 connects all the DAACs with some elements of the Science Data Processing Segments, primarily the Information Management Service. Version 0 consolidates 12 distinct data systems and allows users to locate and order data products at eight DAACs (SEDAC will come on line later this year). Through Version 0, users can also link to NOAA's Satellite Active Archive. Version 1, due for release in February 1996, will include all functional elements of the EOSDIS, but not at full capacity. Version 2, due for release in November 1997, will bring the EOSDIS up to full capacity. Minor upgrades between versions will fix small problems, improve specific services and add new products.
Anyone can access the EOSDIS via the Internet with telnet or via modem. One can access Version 0 from a computer that runs UNIX, X-Windows or VT100. Users can search through the EOSDIS archives in a variety of ways: by scientific discipline, satellite or product name. One can limit the search to specific regions on the Earth or specific dates. To help in selection, EOSDIS allows users to preview low-resolution browse images before ordering the data product. Data set descriptions also help users choose applicable products. A help desk at each DAAC takes data orders and troubleshoots problems. Kevin Schaefer is with NASA Headquarters in Washington, DC. | http://asis.org/Bulletin/Apr-95/schaefer.html | 13 |
84 | (An interactive version of this document is available in the "Related Links" section of this release.)
When Galileo used his homemade telescope 400 years ago to view mountains on the Moon, satellites circling Jupiter, and myriad stars in our Milky Way Galaxy, he launched a revolution that changed our view of an Earth-centered universe.
The launch of NASA's Hubble Space Telescope aboard the space shuttle Discovery 15 years ago initiated another revolution in astronomy. For the first time, a large telescope that sees in visible light began orbiting above Earth's distorting atmosphere, which blurs starlight and makes images appear fuzzy. Astronomers anticipated great discoveries from Hubble. The telescope has delivered as promised and continues serving up new discoveries.
Astronomers and astrophysicists using Hubble data have published more than 4,000 scientific papers, on topics from the solar system to the very distant universe. The following list highlights some of Hubble's greatest achievements.
Shining a Light on Dark Matter
Three-Dimensional Distribution of Dark Matter in the Universe (STScI-2007-01).
Astronomers used Hubble to make the first three-dimensional map of dark matter, which is considered the construction scaffolding of the universe.
Dark matter's invisible gravity allows normal matter in the form of gas and dust to collect and build up into stars and galaxies. The Hubble telescope played a starring role in helping to shed light on dark matter, which is much more abundant than normal matter.
Although astronomers cannot see dark matter, they can detect it in galaxy clusters by observing how its gravity bends the light of more distant background galaxies, a phenomenon called gravitational lensing. Astronomers constructed the map by using Hubble to measure the shapes of half a million faraway galaxies.
The new map provides the best evidence to date that normal matter, largely in the form of galaxies, accumulates along the densest concentrations of dark matter. The map, which stretches halfway back to the beginning of the universe, reveals a loose network of filaments that grew over time and intersect in massive structures at the locations of galaxy clusters.
Astronomers also used gravitational lensing in a previous study to make the first direct detection for the existence of dark matter. Hubble teamed up with the Chandra X-ray Observatory, the European Southern Observatory's Very Large Telescope, and the Magellan optical telescopes to make the discovery. Astronomers found that dark matter and normal matter were pulled apart by the tremendous collision of two large clusters of galaxies, called the Bullet Cluster.
A Speedy Universe
History of the Universe: A Cosmic Tug of War (STScI-2006-52).
By witnessing bursts of light from faraway exploding stars, Hubble helped astronomers discover dark energy. This mysterious, invisible energy exerts a repulsive force that pervades our universe.
Several years later, Hubble provided evidence that dark energy has been engaged in a tug of war with gravity for billions of years. Dark energy, which works in opposition to gravity, shoves galaxies away from each other at ever-increasing speeds, making the universe expand at an ever-faster pace.
But dark energy wasn't always in the driver's seat. By studying distant supernovae, Hubble traced dark energy all the way back to 9 billion years ago, when the universe was less than half its present size. During that epoch, dark energy was struggling with gravity for control of the cosmos, obstructing the gravitational pull of the universe's matter even before it began to win the cosmic tug of war. Dark energy finally won the struggle with gravity about 5 billion years ago.
By knowing more about how dark energy behaves over time, astronomers hope to gain a better understanding of what it is. Astronomers still understand almost nothing about dark energy, even though it appears to comprise about 70 percent of the universe's energy.
Galaxies from the Ground Up
The telescope snapped images of galaxies in the faraway universe in a series of unique observations: the Hubble Deep Fields, the Great Observatories Origins Deep Survey, the Hubble Ultra Deep Field, and as part of an armada of observatories in the All-wavelength Extended Groth Strip International Survey. Some of the galaxies existed when the cosmos was only 700 million years old. The observations provided the deepest views of the cosmos in visible, ultraviolet, and near-infrared light.
In the most recent foray into the universe's farthest regions, Hubble uncovered a rich tapestry of at least 50,000 galaxies. The galaxies unveiled by Hubble are smaller than today's giant galaxies, reinforcing the idea that large galaxies built up over time as smaller galaxies collided and merged. Many of the galaxies are ablaze with star birth.
By studying galaxies at different epochs, astronomers can see how galaxies change over time. The process is analogous to a very large scrapbook of pictures documenting the lives of children from infancy to adulthood.
The deep views also revealed that the early universe was a fertile breeding ground for stars. Observations showed that the universe made a significant portion of its stars in a torrential firestorm of star birth that abruptly lit up the pitch-dark heavens just a few hundred million years after the Big Bang. Though stars continue to be born today in galaxies, the star-birth rate is about half the rate of the opulent early years.
Planets, Planets Everywhere
Artist's Impression of a Transiting Exoplanet (STScI-2006-34).
Peering into the crowded bulge of our Milky Way Galaxy, Hubble looked farther than ever before to nab a group of planet candidates outside our solar system.
Astronomers used Hubble to conduct a census of Jupiter-sized extrasolar planets residing in the bulge of our Milky Way Galaxy. Looking at a narrow slice of sky, the telescope nabbed 16 potential alien worlds orbiting a variety of stars. Astronomers have estimated that about 5 percent of stars in the galaxy may have Jupiter-sized, star-hugging planets. So this discovery means there are probably billions of such planets in our Milky Way.
Five of the newly found planet candidates represent a new extreme type of planet. Dubbed Ultra-Short-Period Planets, these worlds whirl around their stars in less than an Earth day. Astronomers made the discoveries by measuring the slight dimming of a star as a planet passed in front of it, an event called a transit.
The telescope also made the first direct measurements of the chemical composition of an extrasolar planet's atmosphere, detecting sodium, oxygen, and carbon in the atmosphere of the Jupiter-sized planet HD209458b. Hubble also found that the planet's outer hydrogen-rich atmosphere is heated so much by its star that it is evaporating into space. The planet circles its star in a tight 3.5-day orbit.
These unique observations demonstrate that Hubble and other telescopes can sample the chemical makeup of the atmospheres of alien worlds. Astronomers could use the same technique someday to determine whether life exists on extrasolar planets.
Besides testing the atmosphere of an extrasolar planet, Hubble also made precise measurements of the masses of two distant worlds.
Monster Black Holes Are Everywhere
A Gallery of "Tadpole Galaxies" (STScI-2006-04).
Hubble probed the dense, central regions of galaxies and provided decisive evidence that supermassive black holes reside in many of them. Giant black holes are compact "monsters" weighing millions to billions of times the mass of our Sun. They have so much gravity that they gobble up any material that ventures near them.
These elusive "eating machines" cannot be observed directly, because nothing, not even light, escapes their grasp. But the telescope provided indirect, yet compelling, evidence of their existence. Hubble helped astronomers determine the masses of several black holes by measuring the velocities of material whirling around them.
The telescope's census of many galaxies showed an intimate relationship between galaxies and their resident black holes. The survey revealed that a black hole's mass is dependent on the weight of its host galaxy's bulge, a spherical region consisting of stars in a galaxy's central region. Large galaxies, for example, have massive black holes; less massive galaxies have smaller black holes. This close relationship may be evidence that black holes co-evolved with their galaxies, feasting on a measured diet of gas and stars residing in the hearts of those galaxies.
The Biggest "Bangs" Since the Big Bang
Four Gamma-Ray Burst Host Galaxies (STScI-2006-20).
Imagine a powerful burst of light and other radiation that can burn away the ozone in Earth's atmosphere. Luckily, flashes of such strong radiation occur so far away they will not scorch our planet. These brilliant flashbulbs are called gamma-ray bursts. They may represent the most powerful explosions in the universe since the Big Bang.
Hubble images showed that these brief flashes of radiation arise from far-flung galaxies, which are forming stars at enormously high rates. Hubble's observations confirmed that the bursts of light originated from the collapse of massive stars.
Astronomers using Hubble also found that a certain type of extremely energetic gamma-ray bursts are more likely to occur in galaxies with fewer heavy elements, such as carbon and oxygen. The Milky Way Galaxy, which is rich in heavy elements released by many generations of stars, is therefore an unlikely place for them to pop off.
Planet Construction Zones
Artist's Concept of Nearest Exoplanet to our Solar System (STScI-2006-32).
Astronomers used Hubble to confirm that planets form in dust disks around stars. The telescope showed that a previously detected planet around the nearby star Epsilon Eridani is orbiting at a 30-degree angle to our line of sight, the same inclination as the star's dust disk. Although astronomers had long inferred that planets form in such disks, this is the first time the two objects have been observed around the same star.
Some stars have more than one dust disk. Hubble images of the nearby star Beta Pictoris revealed two such disks. The observation confirmed a decade of speculation that a warp in the young star's dust disk may actually be a second disk inclined to the star. The best explanation for the second disk is that an unseen planet, up to 20 times Jupiter's mass, is orbiting it and using gravity to sweep up material from the primary disk.
The telescope also witnessed the early stages of planet formation when it observed a blizzard of particles around a star. The fluffy particles are evidence of planet formation because they were probably shed by much larger, unseen, snowball-sized objects that had collided with each other.
Going Out in a Blaze of Glory
A String of “Cosmic Pearls” Surrounds an Exploding Star (STScI-2007-10).
A Sun-like star ends its life in a blaze of glory, much as trees display colorful foliage in autumn before the barrenness of winter. Sun-like stars die gracefully by ejecting their outer gaseous layers into space. Eventually, the outer layers begin to glow in vibrant colors of red, blue, and green. The colorful glowing shroud is called a planetary nebula.
Hubble revealed unprecedented details of the death of Sun-like stars. Ground-based images suggested that many of these objects had simple spherical shapes. Hubble showed, however, that their shapes are more complex. Some look like pinwheels, others like butterflies, and still others like hourglasses.
Turning its vision to the tattered remains of a massive star's explosive death, Hubble helped astronomers rewrite the textbooks on exploding stars. The telescope's observations of Supernova 1987A showed that the real world is more complicated and interesting than anyone could imagine. Hubble began observing the supernova shortly after the telescope was launched in 1990.
Among Hubble's findings were three mysterious rings of material encircling the doomed star. The telescope also spied brightened spots on the middle ring's inner region, caused by an expanding wave of material from the explosion slamming into it.
How Old is the Universe?
Closeup of Ancient, White-Dwarf Stars in the Milky Way Galaxy (STScI-2002-10).
Hubble observations allowed astronomers to calculate a precise age for the universe using two independent methods. The findings reduced the uncertainty in the universe's age to 10 percent. The first method relied on determining the expansion rate of the universe, a value called the Hubble constant. In May 1999 a team of astronomers obtained a value for the Hubble constant by measuring the distances to nearly two dozen galaxies, some as far as 65 million light-years from Earth. With the Hubble constant in hand, the team then determined that the universe is about 13 billion years old.
In the second method astronomers calculated a lower limit for the universe's age by measuring the light from old, dim, burned-out stars, called white dwarfs. The ancient white dwarf stars, as seen by Hubble, are at least 12 to 13 billion years old.
Quasars, the Light Fantastic
Looking “Underneath” Quasar HE0450-2958 (STScI-2005-13).
Quasars have been so elusive and mysterious that the hunt to define them would have taxed even the superior analytical skills of detective Sherlock Holmes. Since their discovery in 1963, astronomers have been trying to crack the mystery of how these compact dynamos of light and other radiation, which lie at the outer reaches of the universe, produce so much energy. Quasars are no larger than our solar system but outshine galaxies of hundreds of billions of stars.
These light beacons have left trails of evidence and plenty of clues, but scientists have only just begun to understand their behavior. Astronomers using Hubble tracked down the "homes" of quasars to the centers of faraway galaxies. Hubble's observations bolstered the idea that quasars are powered by a gush of radiation unleashed by black holes in the cores of these galaxies.
A Shattered Comet Rocks Jupiter
Photo Illustration of Comet P/Shoemaker-Levy 9 and Planet Jupiter (STScI-1994-26).
Imagine setting off every atomic bomb on Earth all at once. Now imagine repeating such an apocalyptic explosion two dozen times in a week! Unleashing such energy would destroy Earth's surface, but the giant planet Jupiter hardly flinched when it underwent such a catastrophe in 1994. Hubble provided a ringside seat to a once-in-a-millennium event when two dozen chunks of a comet smashed into Jupiter.
The telescope snapped dramatic images of massive explosions that sent towering mushroom-shaped fireballs of hot gas into the Jovian sky. The doomed comet, called Shoemaker-Levy 9, had been pulled apart two years earlier by Jupiter's gravity. Each impact left temporary black, sooty scars in Jupiter's planetary clouds.
Pluto and Beyond
The telescope spied two new moons orbiting Pluto. Named Nix and Hydra, the moons have the same color as Charon, Pluto's only other known moon. The moons' common color further reinforces the idea that all three moons were born from a single titanic collision between Pluto and another similarly sized Kuiper Belt object billions of years ago.
Hubble also searched the solar system's last frontier, a region called the Kuiper Belt, to view the frozen bodies residing there. The Kuiper Belt contains the relics from the early solar system, and may offer clues to the origin and evolution of our Sun and planets.
With Hubble's help, astronomers discovered that an object named Eris is only slightly larger than Pluto. The diameter of Eris is 1,490 miles. By comparison, Pluto's diameter, as measured by Hubble, is 1,422 miles.
Studying the solar system's farthest known object, unofficially named Sedna, Hubble provided surprising evidence that the frozen body does not appear to have a companion moon of any substantial size.
Turning its gaze closer to Earth, Hubble found that Ceres, the largest known asteroid, may be a "mini planet," sharing many characteristics of rocky, terrestrial planets like Earth. Ceres' mantle, which wraps around the asteroid's core, may even be composed of water ice. Ceres resides in the asteroid belt, a region between Mars and Jupiter.
In its 17 years of exploring the heavens, NASA's Hubble Space Telescope has made nearly 800,000 observations and snapped nearly 500,000 images of more than 25,000 celestial objects. Hubble does not travel to stars, planets and galaxies. It takes pictures of them as it whirls around Earth at 17,500 miles an hour. In its 17-year lifetime, the telescope has made nearly 100,000 trips around our planet. Those trips have racked up plenty of frequent-flier-miles, about 2.4 billion, which is the equivalent of a round trip to Saturn.
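For a rough sense of how those mileage figures hang together, here is a back-of-envelope check. The orbital altitude and the Earth-Saturn distance used below are round numbers assumed for illustration, not values taken from NASA.

```python
import math

# Back-of-envelope check of the mileage claims above (rough assumed numbers,
# not NASA's): total distance at 17,500 mph over 17 years, and the length of
# one low-Earth orbit at an assumed altitude of roughly 350 miles.
speed_mph = 17_500
hours_in_17_years = 17 * 365.25 * 24
total_miles = speed_mph * hours_in_17_years
print(f"Distance in 17 years: {total_miles / 1e9:.1f} billion miles")   # ~2.6 billion

earth_radius_miles = 3_959
one_orbit_miles = 2 * math.pi * (earth_radius_miles + 350)
print(f"One orbit: about {one_orbit_miles:,.0f} miles")                 # ~27,000 miles
print(f"Orbits completed: about {total_miles / one_orbit_miles:,.0f}")  # close to 100,000

# Saturn sits roughly 0.8 to 1 billion miles from Earth, so a round trip of
# 1.6 to 2 billion miles is indeed on the same scale as the totals above.
```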
The 17 years' worth of observations has produced more than 30 terabytes of data, equal to about 25 percent of the information stored in the Library of Congress.
Each day the orbiting observatory generates about 10 gigabytes of data, enough information to fill the hard drive of a typical home computer in two weeks.
The Hubble archive sends about 66 gigabytes of data each day to astronomers throughout the world.
Astronomers using Hubble data have published nearly 7,000 scientific papers, making it one of the most productive scientific instruments ever built.
Eukaryotic Organelles: The Cell Nucleus, Mitochondria, and Peroxisomes
We will now begin our discussion of intracellular organelles. As we have mentioned, only eukaryotic cells have intracellular sub-divisions, so our discussion will exclude prokaryotic cells. We will also focus on animal cells, since plant cells have a number of further specialized structures. In this section we will discuss the importance of the cell nucleus, mitochondria, peroxisomes, endoplasmic reticulum, Golgi apparatus, and lysosomes.
The Cell Nucleus
The cell nucleus is one of the largest organelles found in cells and also plays an important biological role. It composes about 10% of the total volume of the cell and is found near the center of eukaryotic cells. Its importance lies in its function as a storage site for DNA, our genetic material. The cell nucleus is composed of two membranes that form a porous nuclear envelope, which allows only select molecules in and out of the nucleus.
The DNA that is found in the cell nucleus is packaged into structures called chromosomes. Chromosomes contain DNA and proteins and carry all the genetic information of an organism. The nucleus gains support from intermediate filaments that both form the surrounding nuclear lamina and make direct contact with the endoplasmic reticulum. The nucleus is also the site of DNA and RNA synthesis.
Mitochondria
Mitochondria, with their specialized double-membrane structure, generate adenosine triphosphate (ATP), a molecule that provides organisms with energy.
Peroxisomes
Peroxisomes are small, single-membrane-bound structures found in all eukaryotic cells that use molecular oxygen to oxidize organic molecules. They are one of the cell's two major oxygen-utilizing organelles, the other being the mitochondria. Peroxisomes contain oxidative enzymes and other enzymes that help produce and degrade hydrogen peroxide.
Because of their varying enzymatic compositions, peroxisomes are diverse structures. Their main function is to help break down fatty acids. They perform specific functions in plant cells, which we will discuss later.
The Endoplasmic Reticulum
The endoplasmic reticulum, or ER, is a very important cellular structure because of its function in protein synthesis and lipid synthesis. For example, the ER is the site of production of all transmembrane proteins. Since nearly all proteins that are secreted from a cell pass through it, the ER is also important in cellular trafficking. In addition to these major roles, the ER plays a role in a number of other biological processes. There are two different types of ER: smooth ER and rough ER (RER).
The rough ER gets its name because it is coated with ribosomes, the structures most directly responsible for carrying out protein synthesis. Smooth ER lacks these ribosomes and is more abundant in cells that specialize in lipid synthesis and metabolism.
In addition to protein and lipid synthesis, the ER also conducts post-synthesis modifications. One such modification involves the addition of carbohydrate chains to the proteins, though the function of this addition is unknown. Another major modification is called protein folding, whose name is rather self-explanatory. Another role of the ER is to capture calcium for the cell from the cytosol. Finally, the ER can secrete proteins into the cell that are usually destined for the Golgi apparatus.
The Golgi Apparatus
The Golgi apparatus is usually located near the cell nucleus. It is composed of a series of layers called Golgi stacks. Proteins from the ER always enter and exit the Golgi apparatus at fixed locations. The cis face of the Golgi is where proteins enter. A protein will make its way through the Golgi stacks to the other end, called the trans face, where it is secreted to other parts of the cell.
In the Golgi apparatus, more carbohydrate chains are added to the protein while other chains are removed. The Golgi stacks also sort proteins for secretion. After sorting, the membrane of the Golgi buds off, forming secretory vesicles that transport proteins to their specific destinations in the cell. A protein's destination is often signaled by a specific amino acid sequence at its end. A secreted protein most often travels back to the ER, to the plasma membrane where it can become a transmembrane protein, or to the next structure we will discuss, the lysosomes.
Lysosomes
Lysosomes are sites of molecular degradation found in all eukaryotic cells. They are small, single-membrane packages of acidic enzymes that digest molecules and are found throughout the cell. As such, lysosomes are a sort of cellular “garbage can,” getting rid of cellular debris. Proteins that are not correctly folded or that carry significant mutations can be sent to the lysosomes and degraded instead of taking up space in the cell. Detritus proteins and other molecules can find their way to the lysosome in a variety of ways.
Molecules from outside a cell can be taken in through a process called endocytosis. In this process, the cell membrane invaginates, forming a vesicle containing the transported molecule that will eventually reach a lysosome. The reverse of endocytosis is exocytosis, in which molecules within a cell are packaged into membrane vesicles and secreted from the cell. Molecules inside the cell can also be delivered to an endosome, a membrane-bound structure that carries them to the lysosome. Proteins secreted by the Golgi apparatus into the plasma membrane can also be taken back to the lysosome by endosomes.
March 29, 2013
On the morning of July 16, 2010, a hunk of ice four times the size of Manhattan cracked away from the tongue of Greenland’s Petermann Glacier and drifted to sea as the largest iceberg since 1962. Just two years later, another massive section of ice calved from the same glacier. Icebergs like these don’t stay put in the Arctic–they get picked up by currents and ushered to warmer climates, melting along the way.
According to a new study published in the journal Geophysical Research Letters, Greenland’s melting glaciers and ice caps sent 50 gigatons of water gushing into the oceans from 2003 to 2008. This comprises about 10 percent of the water flowing from all ice caps and glaciers on Earth. The research comes on the heels of a study last year that showed the ice sheets of Greenland and Antarctica are disappearing three times faster than in the 1990s, and that Greenland’s is melting at an especially accelerated rate. In the new study, scientists were able to put an even finer point to the ice-melt situation by separating out the glaciers and ice caps from the ice sheet, which blankets 80 percent of the island. What they discovered is that Greenland’s glaciers are actually melting more quickly than the ice sheet.
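For a sense of scale, that figure can be converted into an equivalent rise in global sea level. The one-cubic-kilometer-per-gigaton equivalence and the ocean surface area below are standard approximations assumed for this rough calculation; they are not numbers reported in the study.

```python
# Rough conversion (not from the study): express 50 gigatons of meltwater as
# an equivalent global sea-level contribution.
mass_loss_gt = 50.0              # Gt of water lost from Greenland's glaciers and ice caps, 2003-2008
ocean_area_km2 = 3.618e8         # approximate surface area of the global ocean (assumed)
volume_km3 = mass_loss_gt * 1.0  # one gigaton of water occupies about one cubic kilometer

rise_mm = volume_km3 / ocean_area_km2 * 1e6   # convert km to mm (1 km = 1,000,000 mm)
print(f"Equivalent sea-level rise: about {rise_mm:.2f} mm")   # roughly 0.14 mm
```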
Studies such as these demonstrate the impacts of a warming climate on Greenland’s glaciers. But, as they say, a picture is worth a thousand words. Visual evidence of this liquefaction is captured by NASA satellites, which are able to take snapshots of calving glaciers and document longer-term ice melt. NASA displays photos of the glaciers in its State of Flux photo gallery, along with a rotating collection of satellite images that illustrate other changes to the environment, including wildfires, deforestation and urban development.
The photos, with their “now-you-see-it, now-you-don’t” quality, illustrate how glaciers are fast becoming ephemeral. Here are a few stark examples:
The set of images above shows the edge of Greenland’s Helheim Glacier, located on the fringe of the Greenland Ice Sheet, as captured by a satellite in 2001, 2003 and 2005. The calving front is marked by the curved line through the valley, while bare ground appears brown or tan and vegetation is red.
According to NASA, when warmer temperatures initially cause a glacier to melt, it can spark a chain reaction that accelerates the thinning of the ice. As the edge of the glacier begins to liquefy, it crumbles, creates icebergs and eventually disintegrates. The loss of mass throws the glacier off balance, and further thinning and calving occur, a process that stretches the glacier through its valley. Total ice volume decreases and the glacier shrinks as calving carries ice away. Helheim’s calving front stayed put from the 1970s until 2001, at which point the glacier began rapid cycles of thinning, advance and dramatic retreat, ultimately moving 4.7 miles toward land by 2005.
The massive calving event at Petermann Glacier in 2010 is pictured in these two images. The glacier is the white ribbon on the right side of each photo, and its tongue extends into the Nares Strait, which appears as a bluish-black stripe across the center of the right image and is heavily flecked with white chunks in the photo on the left. In the first image, the tongue of the glacier is intact; in the second, a huge chunk of ice has broken off and can be seen floating away through the fjord. This iceberg was 97 square miles in size–four times bigger than the island of Manhattan.
In the summer of 2012, a second massive iceberg crumbled away from the Petermann Glacier. In these images, the glacier is the white ribbon snaking up from the bottom right. If you follow the tongue up, you’ll see that it appears intact in the photos at left and center (though the center image has an ominous crack spanning its width), which were taken the day before the calving occurred. The photo on the right shows that it crumbled as the glacier calved.
Given that Greenland experienced an exceptionally warm summer in 2012 and temperatures were higher than average this winter, 2013 is primed for more melting and massive icebergs. Last year’s ice-melt season lasted two months longer than the average since 1979, and this year’s is already off to an inauspicious start. It kicked off on March 13 with the sixth-smallest sea-ice area on record for Greenland, according to the National Snow and Ice Data Center. What will the new summer calving season bring?
March 8, 2013
If you had to guess what part of the U.S. has the very worst air pollution–where winds and topography conspire with fumes from gasoline-chugging vehicles to create an aerial cesspool–places like Los Angeles, Atlanta and, as of late, Salt Lake City would probably pop to mind. The reality may come as a bit of a surprise. According to the Environmental Protection Agency, California’s bucolic San Joaquin Valley is “home of the worst air quality in the country.”
Not coincidentally, the San Joaquin Valley is also the most productive agricultural region in the world and the top dairy-producing region in the country. Heavy-duty diesel trucks constantly buzz through the valley, emitting 14 tons of the greenhouse gas ozone daily, and animal feed spews a whopping 25 tons of ozone per day as it ferments, according to a 2010 study. In addition, hot summertime temperatures encourage ground-level ozone to form, according to the San Joaquin Valley Air Pollution Control District. Pollution also streams down from the Bay Area, and the Sierra Nevada Mountains to the east help to trap all of these pollutants near the valley floor. Particulate matter that creates the thick greyish-brown smog hanging over the valley is of paramount concern–it’s been linked to heart disease, childhood asthma and other respiratory conditions.
So when NASA devised a new, five-year air quality study to help fine-tune efforts to accurately measure pollution and greenhouse gases from space, it targeted the San Joaquin Valley. “When you’re trying to understand a problem, you go where the problem is most obvious,” the study’s principal investigator, Jim Crawford, said in an interview. To Crawford, the dirty air over the valley may be important to evaluating how human activities contribute to climate change. “Climate change and air quality are really traced back to the same root in the sense that air quality is the short term effect of human impact and climate change the long term effect,” Crawford said.
In January and February, NASA sent two research planes into the skies above San Joaquin Valley to collect data on air pollution. One plane flew at high altitude over the valley during the daytime, armed with remote sensors, while the second plane cruised up and down the valley, periodically spiraling down toward the ground to compare the pollution at higher and lower altitudes. Weather balloons were used for ground-level measurements as well.
The data NASA collected in the experiment was similar to what satellites can see from space: the presence of ozone, fine particulates, nitrogen dioxide and formaldehyde (precursors to pollution and ozone) and carbon monoxide (which has a median lifetime of a month and can be used to watch the transport of pollution). But satellites are limited in their air-quality-sensing abilities. “The real problem with satellites is that they’re currently not quantitative enough,” Crawford told Surprising Science. “They can show in a coarse sense where things are coming from, but they can’t tell you how much there is.”
Nor can satellites distinguish between pollution at the ground level and what exists higher in the atmosphere. They also typically pass over a given area just once a day, and if that pass isn’t in the early morning, when commuters are busily burning fossil fuels, or in the late afternoon, when emissions have festered and air quality is at its worst, scientists don’t have a clear picture of just how bad pollution can get. Monitoring stations on the ground are likewise limited. They provide scientists with a narrow picture that doesn’t include the air farther above the monitoring station or an understanding of how the air mixes and moves. The research from the NASA study, specifically that collected by the spiraling airplane, fills in these gaps.
Data from the flights will also be used in conjunction with future satellites. “What we’re trying to move toward is a geostationary satellite that will stare at America throughout the day,” Crawford told Surprising Science. Geostationary satellites–which will be able to measure overall levels of pollution–can hover over one position, but like current satellites, researchers need ancillary data from aircraft detailing how pollution travels above the Earth’s surface, like that retrieved from the San Joaquin Valley, to help validate and interpret what satellites see. “The satellite is never going to operate in isolation and the ground station isn’t going to do enough,” Crawford said.
But first, the research will be plugged into air-quality computer models, which will help locate the sources of emissions. Knowing how sources work together to contribute to poor air quality, where pollution is and exactly what levels it’s hitting is a priority for the EPA, which sets air-quality regulations, and the state agencies that enforce them, according to Crawford. The data will inform their strategies on reducing emissions and cleaning the air with minimal impact to the economy and other quality-of-life issues. “Air quality forecasts are great,” Crawford says. “But at some point people will ask, ‘Why aren’t we doing something about it?’ The answer is that we are.” The researchers have conducted similar flights over the Washington, D.C. area and are planning flyovers of Houston and possibly Denver in the years to come.
One thing’s for sure: Data to inform action is sorely needed. In 2011, Sequoia and Kings Canyon National Park, on the eastern edge of the valley, violated the EPA’s national ambient air quality standard a total of 87 days of the year and Fresno exceeded the standard 52 days. Pinpointing exactly where pollution originates and who’s responsible–a goal of the study–will go a long way to clearing the air, so to speak.
February 15, 2013
Climate change, believed to have contributed to the decline of the Ottoman Empire when drought forced villagers into a nomadic life in the late 16th century, is once again having an adverse effect on the Middle East. Precipitation has dropped off and temperatures have climbed for the past 40 years, with conditions growing especially severe in the last decade. A 2012 Yale study showed that a drought from 2007 to 2010 so seriously stunted agriculture in the Tigris and Euphrates river basins that hundreds of thousands of people fled Iran, eastern Syria and northern Iraq.
A new study published today in the journal Water Resources Research puts an even finer point on the climate change fallout in the Middle East: The Tigris and Euphrates river basins lost 117 million acre-feet of their stored freshwater from 2003 to 2010, an amount almost equivalent to the entire volume of water in the Dead Sea. The research, conducted by scientists at UC Irvine, NASA’s Goddard Space Flight Center and the National Center for Atmospheric Research, is one of the first large-scale hydrological analyses of the region, encompassing parts of Turkey, Syria, Iraq and Iran.
Drought typically sends water-users underground in search of aquifers, and in the midst of the 2007 water crisis, the Iraqi government, for one, did just that, drilling 1,000 wells. Such pumping has been the primary cause of recent groundwater depletion, according to the new study. Sixty percent of the lost water was removed from underground reservoirs, while dried-up soil, dwindling snowpack and losses in surface water from reservoirs and lakes exacerbated the situation. “The [groundwater storage loss] rate was especially striking after the 2007 drought,” hydrologist Jay Famiglietti, principal investigator of the study and a professor at UC Irvine, noted in a statement. Overall, the area has experienced “an alarming rate of decrease in total water storage,” he added.
Since gathering information on the ground in a region marked by such political instability isn’t very practical–or in some cases, even possible at all–the scientists instead utilized data from NASA’s Gravity Recovery and Climate Experiment (GRACE) satellites. These satellites measure a region’s gravitational pull; over time, small changes observed in the strength of this pull are influenced by factors such as rising or falling water reserves. From this, the scientists uncovered variations in water storage over much of the last decade.
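To put the reported loss in more familiar units, the acre-feet can be converted to cubic kilometers. The conversion factor below is a standard constant and the per-year average is simple division; neither figure comes from the study itself.

```python
# Rough unit conversion (my own arithmetic, not the researchers'): express the
# reported 117 million acre-feet of lost freshwater in cubic kilometers and as
# an average annual loss over the 2003-2010 study window.
ACRE_FOOT_M3 = 1233.48        # cubic meters in one acre-foot (standard constant)
loss_acre_feet = 117e6        # total loss reported for the Tigris and Euphrates basins
years = 7                     # approximate length of the 2003-2010 study period

loss_km3 = loss_acre_feet * ACRE_FOOT_M3 / 1e9    # 1 km^3 = 1e9 m^3
print(f"Total loss: about {loss_km3:.0f} cubic kilometers")                    # ~144 km^3
print(f"Average loss: about {loss_km3 / years:.0f} cubic kilometers per year")
```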
The video below is a visualization of groundwater fluctuations in the Tigris and Euphrates basins using GRACE satellite imagery; blues represent wet conditions and reds are indicative of dry conditions. The drought that began in 2007 is clearly reflected.
“The Middle East just does not have that much water to begin with, and it’s a part of the world that will be experiencing less rainfall with climate change,” said Famiglietti. “Those dry areas are getting dryer.” In fact, the region is experiencing the second-fastest rate of groundwater storage loss on the planet, surpassed only by India.
Yet, demand for freshwater continues to rise worldwide, including in the U.S., where aquifer depletion is also a growing problem. Groundwater supplies in the Southwest and western Great Plains have been stressed for many years, according to the United States Geological Survey (USGS). The area surrounding Tucson and Phoenix in south-central Arizona has seen the highest drop in groundwater levels–300 to 500 feet–but other regions have also suffered. Long Island and other parts of the Atlantic coast, west-central Florida and the Gulf Coast region–notably Baton Rouge–are out of balance. And perhaps most surprisingly, the Pacific Northwest is experiencing groundwater depletion as a result of irrigation, industrial water use and public consumption.
According to study co-author Matt Rodell of NASA, such depletion is unsustainable. “Groundwater is like your savings account,” Rodell said. “It’s okay to draw it down when you need it, but if it’s not replenished, eventually it will be gone.”
What’s to be done? More research, according to the authors of the new Middle East study. “The opportunity to construct the most accurate and holistic picture of freshwater availability, for a particular region or across the globe, is now on us,” they wrote. “Such science-informed studies are essential for more effective, sustainable, and in transboundary regions, collaborative water management.” Building on that last point, they called for international water-use treaties and more consistent international water laws.
They will also spread word of their findings by traveling to the Middle East. Famiglietti and three of his UC Irvine colleagues, including the study’s lead author, Katalyn Voss, are heading to Israel, Palestine and Jordan tomorrow to share their data with water authorities, scientists, water managers and NGOs; verify the GRACE measurements with locally obtained data; and begin collaborating with local groups on hydrology and groundwater-availability research.
They hope to educate themselves on the region’s best practices for water efficiency, with the goal of introducing those techniques to other water-strapped areas, including California. “Ideally, this trip will set the foundation for future research collaborations in the region, with universities and government agencies, as well as provide an opportunity for cross-regional learning between California and the Middle East,” Voss told Surprising Science.
February 1, 2013
It’s become a destructive cycle in the western U.S.: Warmer temperatures and drought conditions prolong the life cycle of mountain pine beetles, allowing them to prey on the pine, spruce and fir trees that blanket the mountains. The trees turn reddish-brown before dying off–a phenomenon the National Park Service deemed “an epidemic stretching from Canada to Mexico.” There’s widespread concern that such tree mortality creates an excellent fuel source for wildfires.
Until recently, scientists were left to survey the damage from the ground, with little ability to understand the causes and processes. But now new technology is enabling them to use satellite imagery to identify the sources of small, ecosystem-altering events–some of which, for example beetle outbreaks, are related to climate change drivers. A computer program called LandTrendr, developed by Boston University Earth and Environment professor Robert Kennedy, allows scientists to combine data they collect on the ground with satellite imagery from the U.S. Geological Survey (USGS) and NASA to get a better understanding of environmental disturbances.
Since 1972, NASA and the USGS have deployed satellites that snap specialized digital photographs of Earth’s landscapes. They’re able to capture details that exist in wavelengths invisible to the human eye, including those slightly longer than visible light called the near infrared. Healthy plants reflect energy in the near infrared, and by scanning the imagery, scientists can detect disruptions in Earth’s landscapes.
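One common way to turn that near-infrared signal into a single number is a vegetation index such as NDVI, which compares near-infrared and red reflectance. The article does not name a specific index, so the sketch below is only an illustration of the general principle, and the reflectance values are made up.

```python
# Illustrative sketch: the Normalized Difference Vegetation Index (NDVI) is a
# standard way to quantify plant health from red and near-infrared reflectance.
# Healthy vegetation reflects strongly in the near infrared and absorbs red
# light, so the index drops when a forest is stressed or dying.
def ndvi(nir: float, red: float) -> float:
    """Return NDVI given near-infrared and red reflectance values (0 to 1)."""
    return (nir - red) / (nir + red)

print(ndvi(nir=0.45, red=0.05))   # a healthy forest pixel: NDVI near 0.8
print(ndvi(nir=0.25, red=0.15))   # a stressed or dying stand: noticeably lower
```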
In the past, these images were prohibitively expensive, limiting scientists’ access. “We’d look at an image from 2000 and one from 2005 and ask, ‘What’s changed?’” Kennedy explained. “If you’re only looking at two images, it’s very difficult to track slowly evolving changes. You can tell something’s changed, but you don’t know how long it’s taken.”
When the USGS began providing these images for free in 2008, it was a turning point for Earth scientists. They now had access to thousands of shots of any given geographic region–images that Kennedy’s LandTrendr tool utilizes. “By looking at all the images, you can watch [changes] unfold. You have more confidence that you’re actually seeing trends,” he said. This is particularly useful for understanding climate change and land use change, which are “all about process,” according to Kennedy.
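To make the idea of watching changes unfold concrete, here is a toy sketch of tracking one pixel's index values across a stack of yearly images and flagging a gradual decline. This is not the actual LandTrendr algorithm, which fits segmented trajectories to Landsat time series; the synthetic values and the slope threshold are assumptions for illustration only.

```python
import numpy as np

# Toy example: follow a single pixel through a yearly stack of observations
# instead of comparing just two snapshots. The synthetic vegetation-index
# values are stable at first, then decline gradually after a hypothetical
# insect outbreak beginning in 2005.
years = np.arange(2000, 2011)
index = np.array([0.80, 0.81, 0.79, 0.80, 0.80,
                  0.74, 0.68, 0.61, 0.55, 0.50, 0.46])

# Fit straight lines to the early and late halves of the record; a strongly
# negative slope later on reveals a slow disturbance that a simple
# before-and-after comparison could neither date nor characterize.
early_slope = np.polyfit(years[:5], index[:5], 1)[0]
late_slope = np.polyfit(years[5:], index[5:], 1)[0]
print(f"Slope 2000-2004: {early_slope:+.3f} per year")   # roughly flat
print(f"Slope 2005-2010: {late_slope:+.3f} per year")    # steady decline
if late_slope < -0.02:
    print("Gradual disturbance detected (consistent with insect-driven dieback)")
```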
Kennedy is currently using LandTrendr technology to look at the net carbon exchange of forests; among other things, his work analyzes the amount of carbon lost in forests due to fire, clear cuts, partial cuts and urbanization. Studies of climate change in the Arctic and in transition zones between ecosystems are also utilizing LandTrendr. But in the Pacific Northwest, Garrett Meigs, a forestry PhD candidate at Oregon State University, is using LandTrendr to study the intersection of wildfire and insects.
Specifically, Meigs is examining the large wildfires that have ravaged Washington and Oregon since 1985, and how outbreaks of the mountain pine beetle and western spruce budworm affect subsequent fire activity. “When there’s drought, stress, a higher susceptibility to infestation, we can see the dieback of forest,” he said.
The LandTrendr algorithm incorporates satellite images of the regions affected by fire and bugs with Meigs’ own fieldwork and historical aerial data from the U.S. Forest Service, which has long used airplanes to survey insect infestations. “There were things we couldn’t detect or see before, but now we’re able to,” Meigs said.
Below is a video showing a LandTrendr visualization of the Pacific Northwest. Kennedy explains how it works: Stable evergreen forests are represented by the blue areas; when a mountain pine beetle infestation erupts, in this case in the Three Sisters area of Oregon, the imagery glows red. And when a slow-moving western spruce budworm moves into an area–there, the southern foothills of Mount Hood–it morphs yellow.
Could LandTrendr help predict climate change? Possibly. “We can’t see the future, we can only document with the satellites what has happened. But the whole game with science is to develop understandings that allow for prediction,” Kennedy says. “My hope is that by creating these maps and capturing these processes in ways we haven’t been able to see them before, we can test [climate change] hypotheses” by documenting where, when and if predicted effects occur, he said.
While Meigs’ study of insects and wildfire is largely retrospective, it has the potential to aid in future forecasting efforts. “We have a baseline to measure future change,” he says. “By seeing the conditions leading up to big insect outbreaks or wildfires, we may be able to recognize them as they emerge in the future.”
January 16, 2013
NASA first dipped its toe into climate-change research in the 1980s by using satellite and aircraft imaging. Its efforts grew more serious with the launch of a large network of satellites in 1991. And by 2004, the agency was spending $1.3 billion annually on climate science. It now has more than a dozen spacecraft studying everything from the oceans to the atmosphere to the cryosphere (the Earth’s frozen bits). On Friday, it will add the stratosphere to that list when it launches an unmanned Global Hawk aircraft from California’s Edwards Air Force Base.
The project, called Airborne Tropical TRopopause EXperiment (ATTREX), will study humidity in the tropical tropopause layer, the area of the atmosphere eight to 11 miles above the Earth’s surface that controls the composition of the stratosphere. According to ATTREX scientists, small changes in stratospheric humidity can significantly affect climate. “Cloud formation in the tropical tropopause layer sets the humidity of air entering the stratosphere,” principal investigator Eric Jensen says, adding that the pathways through the tropical tropopause influence the chemical composition of the stratosphere.
Although the group won’t focus on the impact of standard greenhouse gases such as carbon dioxide and methane, water vapor is a powerful greenhouse gas, and understanding its variability within the stratosphere is the group’s priority. Filling in this gap, they believe, will allow scientists to forecast how changes in the stratosphere affect global climate change, which will in turn improve the accuracy of mathematical models used in climate change predictions.
The tropopause and stratosphere have proven elusive to climatologists until now. “We’ve been wanting to sample this part of the atmosphere for a long time,” Jensen says. The problem has been access — a specialized high altitude aircraft is necessary to conduct this type of research.
Enter the Global Hawk, which can travel up to 65,000 feet into the atmosphere for up to 31 hours at a time and is fitted with instruments that can measure surrounding temperatures, clouds, trace gases, water vapor, radiation fields and meteorological conditions. All of this will let the ATTREX team sample a range of conditions over a large geographic span. Test flights conducted in 2011 showed that the Global Hawk and its instruments can withstand the frigid (as low as minus 115 degrees Fahrenheit) temperatures above the tropics.
They’ll send the craft above the Pacific Ocean near the equator and off the coast of Central America six times over the course of the next two months, monitoring it from the ground while it’s in flight. “We get high-speed real-time data back from the aircraft via satellite communications,” Jensen says. “The instrument investigators monitor and adjust their instruments, and we use the real-time data to adjust the flight plan throughout the flight.”
ATTREX is one of the first projects launched by NASA’s new Earth Ventures program, which provides five years’ funding to low- to moderate-cost missions. This is far more time than previous airborne-science studies, and the ATTREX crew will use the added time to re-launch the Global Hawk in winter and summer 2014, allowing them to look at seasonal variation.
The longer timeframe is also conducive to international collaborations. In 2014, the ATTREX team will venture to Guam and northeastern Australia. In Guam, they’ll connect with British researchers, who will be using a low-altitude aircraft to study climate change, and a National Science Foundation crew doing similar research with a G5. “We’ll have measurements from the surface all the way to the stratosphere,” Jensen says. “And we’ll be able to connect emissions at ground level up to measurements of the composition in the stratosphere.”
In the middle of the eighteenth century, Jews living in German territories were just beginning to feel the effects of the political, social and intellectual changes that would soon be recognized as the hallmarks of the modern world. Until this period, Jewish communities had been constituted as distinct and autonomous social, religious and legal entities within an essentially feudal social organization. The Jews were subjected to the will of the rulers of individual German states, who imposed onerous regulations, taxes and restrictions on their ability to marry and settle where they chose. Distinguished from the rest of the population by religious traditions and family structure, Jews lived under the authority of the Jewish community, wholly separate from the non-Jewish population. In the early 1780s, however, Enlightenment thinkers began to call for an end to the discrimination against Jews in Prussia and Austria. Most important among these voices was the high-ranking Prussian government official Christian Wilhelm Dohm (1751–1820), who argued that Jews be granted the same civil rights as those accorded to non-Jewish citizens. In his essay “On the Civil Betterment of the Jews” (1781) Dohm explained Jews’ moral “depravity” as the result of centuries of oppression. Only with the elimination of the oppressive conditions that produced their allegedly defective character, Dohm argued, would Jews be able to gradually overcome their “disabilities” and prove themselves to be useful citizens.
These changes in the intellectual realm coincided with broader social and political realignments of the period. In its attempt to centralize its power and eliminate intermediate corporate bodies such as guilds and estates, the emerging absolutist state also sought the dissolution of the autonomous Jewish community and the integration of its members into the larger social body. Thus the development of more modern social and political structures for German Jews arose as much from larger external factors as from a desire to address the particular situation of the Jews. Indeed, the practical reforms that were introduced by Emperor Joseph II in his “Tolerance Decree” of 1781 probably exerted a more immediate impact on the situation of Jews in parts of the German-speaking territories than did Enlightenment thought itself. Joseph II’s decree enacted the first legal measures to reduce legal restrictions on the Jewish population in parts of the Habsburg Empire. Despite the beginnings of a more consolidated state authority, Jewish legal status still varied within the territories of the German Empire until the unification of Germany in 1871.
Simultaneous with the political developments that gradually began to erode the legal barriers separating Jew from non-Jew were important changes that also took place within Jewish society itself. Influenced by the spirit of the Enlightenment, Jewish intellectuals began a new critical engagement with Jewish tradition and, in so doing, created a new cultural, social and intellectual framework and helped bring forth an invigorated public sphere that challenged the authority of the official Jewish community. A new intellectual elite emerged, distinct from the rabbinate and its influence. The combined impact of the centralizing absolutist state and the emergence of the European and Jewish Enlightenments marked the beginning of a change in the legal status of the Jews that would extend over more than a hundred-year period. Yet the progress of Jewish Emancipation in different territories was anything but linear. During periods of political liberalization, progress toward equal rights proceeded apace, but the process was set back during periods of conservative counterreaction.
Because of the protracted nature of the struggle for Jewish Emancipation, and the twin efforts to win both legal and social acceptance, the uneven development of Emancipation was a central defining experience for German Jews. Yet historians have generally treated the Emancipation of the Jews as an event of universal significance for German Jewry without paying significant attention to the gendered aspects of its unfolding. After a long and often painful process, Jewish men did finally achieve full political and civil rights with the unification of Germany in 1871. In principle, if not entirely in practice, this removed most remaining legal disabilities that had prevented the full integration of male Jews into German society. But at the time of Emancipation, Jewish women—like women in general—received no such rights and remained unable to vote until 1918. In fact, as men looked toward an era of increasing liberalization, women were politically disenfranchised as the result of a law that was in effect until 1908, banning women from joining political organizations. Although German women were citizens, their status was ultimately determined by the citizenship of their father or husband. The status of East European Jewish women was even more precarious, since many immigrant women were not permitted to become citizens at all. Within the Jewish community, Jewish women had to wait even longer to gain a voice and a vote. In Germany, Jewish women suffered the double indignity of sexism and antisemitism, with second-class status imposed both inside and outside the Jewish community.
If Emancipation affected Jewish men and women in distinct ways, the pace and extent to which Jews adapted themselves to the demands of German society also differed according to gender-specific patterns. Because social acceptance was made contingent upon the acquisition of the basic customs, behaviors and values of German society, Jewish men and women took different paths toward, and found different means for, becoming at once fully German and distinctly Jewish. Jewish men tended to adapt to the demands of middle-class society by abandoning public religious behaviors, including the observance of Jewish dietary laws and the prohibition of work on the Sabbath. They also concerned themselves less with Jewish learning and worship than with secular education, which they pursued with unparalleled enthusiasm. For women, the road to acculturation led to the formation of new roles inside and outside the home. Changes in family structure and employment patterns led to adjustments in the gender division of labor between the domestic and public spheres, and these changes in the family, religion and labor, in turn, affected the construction of gender roles within the Jewish community, and gender relations as a whole. Even the category “Jewish woman” was infused with new meanings that accorded with middle-class norms and ideals for the bourgeois German woman. In fulfilling their newly defined “woman’s nature,” Jewish women created a proliferation of voluntary associations, involved themselves in non-Jewish associations, and pioneered the field of social work. The path to becoming “German” and “German Jewish” thus proved to be profoundly gendered.
One of the earliest examples of a specifically female experience of the Enlightenment can be found in the Berlin salons of the late eighteenth century. Although the German Jewish Enlightenment is usually associated with the intellectual circle around Moses Mendelssohn and its literary output, most historical literature has treated the Enlightenment as an intellectual and socio-cultural phenomenon that has almost exclusively involved men. Yet during the last two decades of the eighteenth century, even as Mendelssohn was evolving his philosophical reformulation of Judaism, a small group of young women from Berlin’s small but influential Jewish upper class crafted a place for themselves at the very center of the city’s social and intellectual life. Jewish salonnières, most notably Rachel Levin Varnhagen and Dorothea Schlegel Mendelssohn, hosted social and intellectual gatherings in their homes that brought together Jews and non-Jews, noblemen and commoners to socialize and exchange ideas. Creating a cultural space unprecedented in its openness to Jews and women, these salons appear to have existed only for a brief historical moment as if outside the normal social constraints that enforced the hierarchical organization of society around the axes of gender, class and religion.
Because a substantial number of these women converted to Christianity and entered into (often second) marriages with non-Jews, Jewish historians have sometimes been quick to condemn them for the betrayal of their people and faith. So traitorous were they, concluded Heinrich Graetz, that “these talented but sinful Jewish women did Judaism a service by becoming Christians” (Lowenstein, 109). Indeed, for their contemporaries, as well as for many historians, these women represented the embodiment of a larger set of social problems afflicting Berlin Jewish society. Whatever the exact mix of their motives for conversion and intermarriage—an ascent in social status, the promise of companionate marriage or liberation from their patriarchal families—the fact that contemporary observers and later historians held these boundary-crossing women responsible for the most visible symptoms of modern social change suggests the extent to which the transition from traditional Jewish society to modern Judaism was represented through the language of gender.
Among the earliest and most important trajectories for the progress of German Jewish acculturation was the modernization of Judaism. Although religious modernization by no means did away with gender hierarchy, it nevertheless altered gender expectations and gender roles, as well as popular forms of religious practice. Within traditional Judaism, those aspects of religious practice that had historically been invested with the greatest value were organized hierarchically along clearly gendered lines: the study of Torah and public prayer formed the religious centerpiece of Jewish life, and these acts were accessible only to men. Women’s religious activity tended to be less structured and more personal and took place primarily in the family, focusing on religious aspects of home life, the observance of Sabbath and holidays, and the maintenance of dietary and family purity laws. Though accorded importance within Judaism, these “domestic” practices of Judaism were lower in prestige than the more public practice of Judaism dominated by men.
The liberal religious reform movement, which has garnered so much attention in the historical literature, did not, however, fundamentally transform the role of women in Judaism. Reformers seeking to modernize Judaism in accordance with Enlightenment ideals and middle-class behavioral and aesthetic visions endeavored to make prayer services more attractive to women as well as men by changing the language of prayer from Hebrew to German and replacing Hebrew excurses on the law with uplifting preaching in German that was modeled on Protestant worship services. Equally important, greater attention was paid to women’s religious education, primarily through the inclusion of women in the newly introduced ritual of confirmation. There was even some discussion at the 1846 Rabbinic Assembly in Breslau of far-reaching changes that would have granted women greater religious equality. Yet when it came to practice, the nineteenth-century Reform movement failed to eliminate many of the traditional religious restrictions that kept women in a subordinate status. In the synagogue, women could still neither be counted in a prayer quorum nor called to the Torah, and they often remained seated apart and comfortably out of view in the women’s gallery. Despite pronouncements against the segregation of women, religious reformers ultimately made few substantial improvements in women’s status.
In addition to religious reform, religious modernization also includes those religious and cultural changes that resulted from the increased participation of women in religious associations outside the home and the formal sphere of the Jewish community. Indeed, it may well have been this phenomenon, more than religious reform itself, which affected gender relations more broadly and contributed to a more substantive reconfiguration of the traditional Jewish gender order. Beginning in the late eighteenth century, middle-class women began to create charitable associations, such as sick care and self-help societies, that mirrored both the form and content of male associations. According to the historian Benjamin Maria Baader, these new female voluntary organizations were made possible with the declining emphasis on traditional male learning that had once marked women as marginal. Expressing both Jewish and bourgeois values, women’s activity in this realm would, by the early twentieth century, lead into new professional opportunities as well as to the production of new forms of Jewish religious and ethnic expression.
Women’s activities in voluntary organizations, in turn, were linked to broader cultural shifts in the German bourgeoisie. Thus, in addition to being inflected by gender, it is important to note that the process of becoming a German Jewish woman was also affected by class status. Whereas on the eve of the modern era the majority of German Jews were poor, as they entered German society over the course of the nineteenth century, Jews aspired to join the class that was most suited to their skills in trade and commerce: the middle class. Jews quickly embraced the ideal of the educated middle class that made culture, rather than birth, the defining character of class. While middle-class status for men was to be achieved through self-improvement and education (Bildung), the most important determinant of middle-class respectability for women and their husbands was the wife’s status as a full-time “priestess of the home.”
Paradoxically, an important measure of specifically Jewish and middle class acculturation was a new form of family-centered Judaism that arose out of the strong emphasis placed on the family in bourgeois culture on the one hand, and the decline in traditional Jewish religious practice on the other. The nineteenth-century bourgeois ideal for the family was a prescriptive model based on a rigid gender-based division of labor that delimited women’s activities to the domestic sphere and men’s activity to the “public” arena. As an ideal, it quickly eclipsed the typical structure of premodern Jewish families where the boundaries between public and private remained more fluid. Of course, not all Jewish families could afford to imitate this model, since lower middle-class and working class families often had to rely on the work and wages of children and wives for the family economy. But the power of this construct as a universal model for Jewish family life is perhaps most evidenced by the fact that, since the mid-nineteenth century, the bourgeois family type has been viewed as the “traditional Jewish family.”
By the time the German states were joined in a federal system within the new German Empire in 1871, most of Germany’s Jews could proudly display their middle class status by pointing to their family life. Indeed, the research of Marion Kaplan has demonstrated how Jewish women managed the double task of transmitting the values and behaviors of the German bourgeoisie while helping to shape the Jewish identity of their children. Jewish women made sure their children learned the German classics and, at the same time, organized the observance of holidays, family gatherings and the religious and moral education of the children. Illustrating the family’s crucial role in the acculturation of German Jews, Kaplan’s research also suggests the extent to which the home was gradually being recast as the primary site for the transmission of Judaism. With the declining appeal of formal religious practice and institutions, including the synagogue, the Jewish mother, according to historian Jacob Toury, was expected to become the “protector of a new system of Jewish domestic culture” (Maurer, 147).
Although some historians suggest that Jewish men abandoned religious ritual and practice more quickly than women, by mid-century Jewish community leaders nevertheless began holding women increasingly accountable for assimilation, conversion and intermarriage—in short, for the decline of Judaism. This was the case despite the fact that the intermarriage and conversion rates of Jewish women remained lower than those of men through almost the entire nineteenth century. Even in the early twentieth century, twenty-two percent of Jewish men but only thirteen percent of Jewish women entered marriages with non-Jews. Whereas Jewish men who entered mixed marriages usually had middle-class incomes, Jewish women, by contrast, tended to marry non-Jews out of economic need or because of a lack of available male Jewish partners. And even though women’s intermarriage rates were lower than men’s, women in mixed marriages stood to lose their status in the official Jewish community, while men suffered no equivalent punishment. Male and female conversion rates similarly reflected the disproportionately high male intermarriage rates. Relatively few women converted before 1880, and when the rate increased, as it did during the years 1873–1906, women still accounted for only one quarter of all converts. In comparison with male converts, nearly double the number of women came from the lowest income categories. Rising female conversion rates appear to have coincided with the growth of secularization on the one hand, and women’s increasing participation in the workforce and ensuing encounter with antisemitism on the other. By 1912, women accounted for forty percent of all conversions.
Throughout the nineteenth and early twentieth centuries, Jewish girls received an education that was consonant with social expectations for women of their class. Until the 1890s, the only form of secular education available to girls was the elementary school and non-college-preparatory secondary school. Jewish girls of all classes attended either private or public elementary schools where they learned reading, writing, arithmetic and such “feminine” subjects as art, music and literature. From mid-century on, a disproportionately high percentage of Jewish girls attended girls’ secondary schools (Höhere Töchterschule) which tended to be associated with upward mobility and higher class status. Indeed, around the turn of the century, while 3.7 percent of non-Jewish girls in Prussia attended the Höhere Töchterschule, approximately forty-two percent of Jewish girls did. Upon completing school at the age of fifteen or sixteen, middle-class girls passed their time socializing, embroidering or doing volunteer work as they waited for their families to find them a suitable husband.
Even through the Imperial period, most middle-class Jewish marriages continued to be arranged either by marriage brokers or, more often, with the aid of parents and relatives. As a social institution, arranged marriage served as a means of locating Jewish marriage partners while simultaneously providing for the financial security of middle-class daughters and cementing economic alliances between families. By the end of the nineteenth century, the heavy emphasis placed on financial considerations in the search for marriage partners generated substantial criticism from within the Jewish community and particularly among young modern-minded women who wanted to choose their own life partners on the basis of romantic love. Beginning with the salon women in the eighteenth century, the decision to marry a non-Jewish man appears to have sometimes been driven at least in part by the ideal of companionate marriage. In other words, for some women, intermarriage represented not simply an act of betrayal, as it was sometimes perceived by observers, but in fact an act of independence, a rejection of a patriarchal social system that treated marriage as a financial and social transaction that was divorced from the individuals themselves.
Since the nineteenth-century ideology of separate spheres consigned women to the home, those women who desired access to higher education and professional training had a particularly difficult path to navigate. For both men and women, higher education offered a means of self-improvement that facilitated German Jewish acculturation together with the possibility for personal emancipation. Yet whereas young Jewish men had been permitted to attend college preparatory high schools (gymnasia) and universities since the early nineteenth century, Jewish women had been excluded from both institutions until the end of the century. It was not until the first decade of the twentieth that German universities began admitting women. In the three years following the opening of Prussian universities to women in 1908, Jewish women already accounted for eleven percent of the female student population. By the time of the Nazi accession to power in 1933, a high proportion of Jewish women received doctorates from German universities. One of the fields of study most in demand among Jewish women, and east European Jewish women in particular, was medicine. Philosophy was also the first choice of many Jewish women since it provided the required academic preparation for a teaching certificate. As one of the few careers considered socially acceptable for middle-class women, education continued to draw Jewish women despite the antisemitic discrimination they often faced. With somewhat less frequency, Jewish women also studied the social and natural sciences and law. Despite the relative prevalence of Jewish women at universities, however, their social acceptance did not proceed apace. Like men, Jewish women encountered widespread antisemitism at the university, but their sex proved to be an added obstacle in their path toward integration.
Because of the predominantly middle-class status of German Jews, fewer Jewish women were wage earners than non-Jewish women. But both single and married Jewish women did work outside the home, and they did so in growing numbers. The 1882 employment statistics for Prussia list only eleven percent of all Jewish women as part of the labor force, compared with twenty-one percent of non-Jewish women, but this figure masks the work of many more women who helped run family businesses or otherwise contributed to the household economy. In 1907, when the Prussian census included more of these invisible female workers, the employment rate was eighteen percent of Jewish women, compared with thirty percent of non-Jewish women. By the time of the Weimar Republic, with increased east European immigration, a worsening economy, and an increasing number of women working to support themselves, the gap between the Jewish and non-Jewish employment rate narrowed further, with twenty-seven percent of Jewish women now working, compared with thirty-four percent in the general population. Like Jewish men, middle-class Jewish women worked disproportionately within the commercial sector of the economy. But in contrast with native-born German women, east European immigrant working women were clustered in industrial labor, primarily in the tobacco and garment industries. In specifically low-status female occupations such as domestic service, east European immigrant women were significantly overrepresented.
One of the promising new employment opportunities for Jewish and non-Jewish women at the turn of the century was social work. Formulated by women themselves as an extension of the domestic sphere, social work involved, in the words of Alice Salomon, one of the Jewish founders of modern social work in Germany, “an assumption of duties for a wider circle than are usually performed by the mother in the home” (Taylor Allen, 213–214). Jewish women seemed to flock to the profession, evident in their overrepresentation within social work training colleges. Particularly during the Weimar Republic, social work stood out as a field generally free from the mounting antisemitism increasingly being felt in other professions. Among those Jewish women who trained as social workers, some elected to work with the working class, lower middle class and east European Jewish population sectors within the Jewish community that required, in the view of their middle-class patrons, the provision of health services, job training and “moral reform.” From their roles as organizers of mutual assistance and charitable work in the eighteenth century, middle-class Jewish women became, by the Weimar period, the agents of a rationalized and “scientific” social work, one that was viewed by its practitioners as the modern-day realization of the traditional Jewish ethic of charity. As a gendered sphere of Jewish communal activity, the social arena became not only a site where those in need received assistance, but also a form of Jewish social engagement that strengthened the bonds of solidarity and cohesion among those engaged in social work.
In Germany, this idea of “social motherhood” not only provided the intellectual foundation and political justification for the emergence of modern social work, but it also animated the German feminist movement from its early years until its collapse and cooptation under Hitler in 1933. Feminists’ conceptions of citizenship, rooted in distinctly organic notions of German citizenship, emphasized duties over rights and tended to define individual self-fulfillment in the context of community. Social motherhood also formed a central pillar of the German Jewish feminist movement that was founded by Bertha Pappenheim in 1904. The membership of the Jüdischer Frauenbund, which consisted primarily of middle-class married women, engaged in social work, provided career training for Jewish women, sought to combat White Slavery and fought for the equal participation of women in the Jewish community. Claiming the membership of more than twenty percent of German Jewish women, the Frauenbund became an increasingly important organization on the German Jewish scene until its dissolution by the Nazis in 1938.
Middle-class Jewish women who were less interested in joining their Jewish and feminist commitments could become active in the moderate wing of the German Women’s movement, whereas working-class and east European women tended to join unions or the socialist women’s movement. Within the bourgeois women’s movement, Jewish women assumed significant leadership roles: Fanny Lewald and Jenny Hirsch gave voice to the aspirations of the movement through their writings on the “Woman Question,” while Jeanette Schwerin (1852–1899), Lina Morgenstern, Alice Salomon and Henriette Fürth became important women’s rights leaders and social workers. It has been estimated that approximately one third of the leading German women’s rights activists were of Jewish ancestry.
The new democratic republic that was born amidst the catastrophe of German defeat in World War I promised Germans their first real possibility for liberal democratic governance. The constitution guaranteed equal rights to all its citizens, including full and complete equality for Jews and women. But the spirit of openness and tolerance enshrined in the constitution was quickly compromised by an eruption of virulent antisemitism that resulted in a growing economic and social exclusion of Jews, even as opportunities in some fields, such as politics and the professions, continued to expand. Weimar’s contradictory bequest to Jews—greater inclusion but also growing exclusion and intensified antisemitic rhetoric—was fueled by the ongoing economic and political instability of the period.
In addition to the political instability that dogged the Republic from its inception, social and economic changes ushered in by the war also led to shifting gender roles. Many more women entered the workforce out of economic necessity and young women also sought out new professional opportunities. These and other changes in turn gave rise to the widespread perception that Germany—and German Jewry—faced an unprecedented social crisis. Rising rates of juvenile delinquency and out-of-wedlock births, the decline in the number of marriages and numbers of children born, suggested to many middle class observers that the Jewish family could neither socially nor biologically reproduce itself. Nothing embodied the social threat posed by young women to the Jewish middle-class gender norms better than the image of the sexually liberated and financially independent “New Woman,” who reputedly rejected motherhood in favor of a hedonistic urban lifestyle. What is particularly significant in the 1920s is how the identification of social crisis, as in Berlin over one hundred years before, was conceptualized largely through the lens of gender.
Offering a counterpoint to the emancipated Jewish New Woman, male and female Jewish leaders placed new emphasis on the reproductive Jewish woman. Feminist leaders joined rabbis and eugenicists in calling for an increased Jewish birthrate and Jewish women’s organizations dedicated themselves to reversing Jewish women’s “self-imposed infertility” (von Ankum, 29). By reproducing Jews, women would be helping to fortify a declining Jewish community and fighting the rising tide of assimilation. In an age of assimilation, Jewish mothers had a vital role to play in the maintenance of Jewish difference itself.
In the construction of a redemptive Jewish femininity that would address the challenges of assimilation, Jewish women also sought to redefine the meaning of Jewish motherhood at a time when national identity among non-Jewish Germans was growing increasingly exclusionary. According to both male and female leaders at the time, a crucial part of a Jewish mother’s task in the 1920s was to educate her children in ways that would help reduce antisemitism, while simultaneously making her family a refuge from antisemitic hostility. Shaping a new form of Jewishness that could both resist the appeal of Gentile acceptance and minimize Gentile hatred became an important aspect of Jewish “women’s work” in the 1920s. Women were thus cast both as the problem and the solution, embodying both the threat of a barren future and the promise of collective renewal.
With the slide of the Weimar Republic into authoritarianism and ultimately dictatorship in the early 1930s, National Socialism signaled the end of democracy, women’s equality and Jewish emancipation in Germany. Although National Socialism targeted Jewish men and women equally, the impact of restrictive regulations, increased antisemitism and social exclusion affected Jewish men and women in ways that were often distinct. Marion Kaplan’s research on the 1930s shows how social exclusion experienced by men in the workplace appears to have had somewhat of a lesser impact than the increasing isolation from the informal social networks maintained by women. In addition, women often proved to be more attuned to the humiliations and suffering of their children. Perhaps less invested in their professional identities than their husbands, women were more willing to risk uncertainty abroad. Overall, women displayed greater adaptability than men in reorienting their expectations and their means of livelihood to accommodate new realities both at home and abroad. Ironically, it may have been women’s very subordinate status that made them more amenable to finding work that under other circumstances would have been considered beneath them.
Gender roles in Jewish families also shifted as families faced new and extreme economic and social realities. Women increasingly represented or defended their husbands and other male relatives with the authorities. In addition, many more women worked outside the home than before the Nazi period and became involved in Jewish self-help organizations that had been established after Hitler’s rise to power. Some had never worked before, while others retrained for work in Germany or abroad. Although women often wanted to leave Germany before their husbands came to share their view, they actually emigrated less frequently than men. Parents sent sons away to foreign countries more frequently than daughters, and it was women, more than men, who remained behind as the sole caretakers for elderly parents. Indeed, a large proportion of the elderly population that remained in Germany was made up of women. In 1939, there were 6,674 widowed men and 28,347 widowed women in the expanded Reich.
Although men and women were equally targeted for persecution and death, they were subjected to different humiliations, regulations and work requirements. Within certain types of mixed marriages, Jewish men faced greater dangers than women. In the case of childless intermarriages consisting of a Jewish woman and an “Aryan” man, the female Jewish partner was not subjected to the same anti-Jewish laws as the rest of the Jewish population. But a Jewish man with a female “Aryan” wife in such a marriage received no special privileges. With the onset of the war, German Jewish women began to suffer the kind of physical brutality that many of their husbands, fathers and brothers had endured during the 1930s. Overall, however, Jewish men were probably more vulnerable to physical attack than women. Although Jewish women who went into hiding could move about more freely and were in less danger of being discovered than men, it is speculated that fewer women than men actually went into hiding. Despite their equal status as subhuman in the eyes of the Nazis, Jewish men and women frequently labored to survive under different constraints. As was the case in other countries outside of Germany, Jewish women appear to have suffered the ultimate fate of death in disproportionately greater numbers.
Even for an historical event as defining as the Holocaust, gender analysis proves a valuable means for elucidating different reactions to persecution by men and women, as well as highlighting gender-distinctive experiences of emigration, hiding and surviving in the camps. To view German Jewish history from the Enlightenment through the Holocaust from a gender perspective deepens our understanding of history in general and provides us with a richer, more complex and more inclusive picture of the Jewish past.
Allen, Ann Taylor. Feminism and Motherhood in Germany 1890–1914. New Brunswick, New Jersey: 1991, 213–214; Ankum, Katharina von. “Between Maternity and Modernity: Jewish Femininity and the German-Jewish ‘Symbiosis.’” Shofar 17/4 (Summer 1999): 20–33; Baader, Maria Benjamin. “When Judaism Turned Bourgeois: Gender in Jewish Associational Life and in the Synagogue, 1750–1850.” Leo Baeck Institute Yearbook 46 (2001): 113–123; Fassmann, Irmgard Maya. Jüdinnen in der deutschen Frauenbewegung 1865–1919. New York: 1996; Freidenreich, Harriet. Female, Jewish and Educated: The Lives of Central European University Women. Bloomington: 2002; Hertz, Deborah. High Society in Old Regime Berlin. New Haven: 1988; Hyman, Paula. Gender and Assimilation in Modern Jewish History: The Role and Representation of Women. Seattle: 1992; Kaplan, Marion. Between Dignity and Despair: Jewish Life in Nazi Germany. New York: 1998; Idem. The Jewish Feminist Movement in Germany: The Campaigns of the Jüdischer Frauenbund, 1904–1938. Westport, CT: 1979; Idem. The Making of the Jewish Middle Class: Women, Family, and Identity in Imperial Germany. New York: 1991; Kaplan, Marion, ed. Geschichte des jüdischen Alltags in Deutschland. Vom 17. Jahrhundert bis 1945. Munich: 2003; Lowenstein, Steve. Berlin Jewish Community: Enlightenment, Family, Crisis 1770–1830. New York: 1994; Maurer, Trude. Die Entwicklung der jüdischen Minderheit in Deutschland (1780–1933). Tübingen: 1992; Meyer, Michael, and Michael Brenner. German-Jewish History in Modern Times. New York: 1997, Vols 1–4; Quack, Sybille. Zuflucht Amerika. Zur Sozialgeschichte der Emigration deutsch-jüdischer Frauen in die USA 1933–1945. Bonn: 1995; Rahden, Till van. “Intermarriages, the ‘New Woman’ and the Situational Ethnicity of Breslau Jews from the 1870s to the 1920s.” Leo Baeck Institute Yearbook 46 (2001): 125–150; Richarz, Monika. “Jewish Social Mobility in Germany during the Time of Emancipation (1790–1871).” Leo Baeck Institute Yearbook 20 (1975): 69–77; Springorum, Stefanie Schüler. “Deutsch-Jüdische Geschichte als Geschlechtergeschichte.” Transversal: Zeitschrift des David-Herzog-Centrums für jüdische Studien 1 (2003): 3–15; Usborne, Cornelie. “The New Woman and Generational Conflict: Perceptions of Young Women’s Sexual Mores in the Weimar Republic.” In Generations in Conflict: Youth Revolt and Generation Formation in Germany, 1779–1968, edited by Mark Roseman, 137–163. New York: 1995; Volkov, Shulamit. Die Juden in Deutschland 1780–1918. Munich: 1994; Idem. “Jüdische Assimilation und Eigenart im Kaiserreich.” In Jüdisches Leben und Antisemitismus im 19. und 20. Jahrhundert, edited by Shulamit Volkov. Munich: 1990, 131–145; Wertheimer, Jack. Unwelcome Strangers. New York: 1987; Zimmermann, Moshe. Die deutschen Juden, 1918–1945. Munich: 1997. | http://jwa.org/encyclopedia/article/germany-1750-1945 | 13
16 | Special Relativity: Kinematics
Time Dilation and Length Contraction
The most important and famous results in Special Relativity are those of time dilation and length contraction. Here we will proceed by deriving time dilation and then deducing length contraction from it; it is important to note that we could do it the other way, beginning with length contraction. Consider an observer O_A at rest on a train that moves with speed v relative to the ground. A laser on the floor of the carriage fires a pulse straight up at a mirror fixed to the ceiling a height h above, and the pulse is reflected straight back down. In O_A's frame the light simply travels up and down a total distance 2h at speed c, so the time for one complete cycle is:
$$t_A = \frac{2h}{c}$$
In the frame of an observer on the ground, call him O_B, the train is moving with speed v. The light then follows a diagonal path, but still travels with speed c. Let us calculate the length of the upward leg: we can construct a right-triangle of velocity vectors, since we know the horizontal speed is v and the diagonal speed is c. Using the Pythagorean Theorem, the vertical component of the velocity is $\sqrt{c^2 - v^2}$. Thus the ratio of the diagonal (hypotenuse) to the vertical is $c/\sqrt{c^2 - v^2}$. But we know that the vertical side of the corresponding right-triangle of lengths is h, so the hypotenuse must have length $hc/\sqrt{c^2 - v^2}$. This is the length of the upward path. Thus the overall length of the path taken by the light in O_B's frame is $2hc/\sqrt{c^2 - v^2}$. It traverses this path at speed c, so the time taken is:
$$t_B = \frac{2h}{\sqrt{c^2 - v^2}} = \frac{2h}{c\sqrt{1 - v^2/c^2}}$$
Clearly the times measured are different for the two observers. The ratio of the two times is defined as $\gamma \equiv t_B/t_A = 1/\sqrt{1 - v^2/c^2}$, a quantity that will become ubiquitous in Special Relativity; thus $t_B = \gamma t_A$.
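To get a quick feel for how γ grows with speed, here is a minimal numeric sketch in Python (not part of the original text; the constants and variable names are our own, and h is an arbitrary illustrative mirror height):

```python
import math

C = 299_792_458.0  # speed of light in m/s

def lorentz_factor(v: float) -> float:
    """Return gamma = 1 / sqrt(1 - v^2 / c^2) for a speed v in m/s."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# Proper period of one light-clock cycle in the train frame: t_A = 2h / c
h = 1.0            # height of the mirror above the laser, in meters (illustrative)
t_A = 2 * h / C

for fraction in (0.1, 0.5, 0.9, 0.99):
    v = fraction * C
    gamma = lorentz_factor(v)
    t_B = gamma * t_A   # the same cycle as timed by the ground observer
    print(f"v = {fraction:.2f}c   gamma = {gamma:6.3f}   t_B = {t_B:.3e} s")
```

At v = 0.1c the dilation is only about half a percent, while at v = 0.99c the ground observer measures each cycle to take roughly seven times longer than t_A.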
All this might seem innocuous enough. So, you might say, take the laser away and what is the problem? But time dilation runs deeper than this. Imagine O_A waves to O_B every time the laser completes a cycle (up and down). Thus, according to O_A's clock, she waves every t_A seconds. But this is not what O_B sees. He too must see O_A waving just as the laser completes a cycle; however, he has measured a longer time for the cycle, so he sees O_A waving at him every t_B seconds. The only possible explanation is that time runs slowly for O_A; all her actions will appear to O_B to be in slow motion. Even if we take the laser away, this does not affect the physics of the situation, and the result must still hold. O_A's time appears dilated to O_B. This will only be true if O_A is stationary next to the laser (that is, at rest with respect to the train); if she is not, we run into problems with simultaneity, and it would not be true that O_B would see the waves coincide with the completion of a cycle.
Unfortunately, the most confusing part is yet to come. What happens if we analyze the situation from O_A's point of view? She sees O_B flying past at speed v in the backwards direction (say O_B has a laser on the ground reflecting from a mirror suspended above the ground at height h). The relativity principle tells us that the same reasoning must apply, and thus that O_A observes O_B's clock running slowly (note that γ does not depend on the sign of v). How could this possibly be right? How can O_A's clock be running slower than O_B's, but O_B's be running slower than O_A's? This at least makes sense from the point of view of the relativity principle: we would expect from the equivalence of all frames that they should see each other in identical ways. The solution to this mini-paradox lies in the caveat we put on the above description; namely, that for t_B = γt_A to hold, O_A must be at rest in her frame. Thus the opposite relation, t_A = γt_B, holds only when O_B is at rest in his frame. This means that t_B = γt_A holds when the events occur in the same place in O_A's frame, and t_A = γt_B holds when the events occur in the same place in O_B's frame. When v ≠ 0 (and hence γ > 1) this can never be true in both frames at once, hence only one of the relations holds true. In the last example described (O_B flying backward in O_A's frame), the events (laser fired, laser returns) do not occur at the same place in O_A's frame, so the first relation we derived (t_B = γt_A) fails; t_A = γt_B is true, however.
We will now proceed to derive length contraction given what we know about time dilation. Once again observer O_A is on a train that is moving with velocity v to the right (with respect to the ground). O_A has measured her carriage to have length l_A in her reference frame. There is a laser light on the back wall of the carriage and a mirror on the front wall. The light is fired at the mirror and reflected straight back, traversing the length of the carriage twice at speed c, so in O_A's frame the time for the round trip is:
$$t_A = \frac{2l_A}{c}$$
We want to compare the length as observed by O_A to the length measured by an observer at rest on the ground (O_B). Let us call the length O_B measures for the carriage l_B (as far as we know so far, l_B could equal l_A, but we will soon see that it does not). In O_B's frame, while the light is moving towards the mirror the relative speed of the light and the train is c − v; after the light has been reflected and is moving back towards O_A, the relative speed is c + v. Thus we can calculate the total time taken for the light to go there and back as:
$$t_B = \frac{l_B}{c - v} + \frac{l_B}{c + v} = \frac{2 l_B c}{c^2 - v^2} = \frac{2 l_B}{c}\,\gamma^2$$
But from our analysis of time dilation above, we saw that when O_A is moving past O_B in this manner, O_A's time is dilated; that is, t_B = γt_A. Thus we can write:
$$\gamma t_A = \gamma\,\frac{2 l_A}{c} = t_B = \frac{2 l_B}{c}\,\gamma^2 \quad\Longrightarrow\quad l_B = \frac{l_A}{\gamma}$$
Note that γ is always greater than one; thus O_B measures the train to be shorter than O_A does. We say that the train is length contracted for an observer on the ground.
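As a quick worked check with purely illustrative numbers (not from the original text): for v = 0.6c,

$$\gamma = \frac{1}{\sqrt{1 - 0.6^2}} = \frac{1}{\sqrt{0.64}} = 1.25, \qquad l_B = \frac{l_A}{\gamma} = \frac{20\ \text{m}}{1.25} = 16\ \text{m},$$

so a carriage that O_A measures to be 20 m long is measured as only 16 m by the observer on the ground.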
Once again the problem seems to be that if we turn the analysis around and view it from O_A's point of view, she sees O_B flying past to the left with speed v. We can put O_B in an identical (but motionless) train and apply the same reasoning (just as we did with time dilation) and conclude that O_A measures O_B's identical carriage to be short by a factor γ. Thus each observer measures their own train to be longer than the other's. Who is right? To resolve this mini-paradox we need to be very specific about what we call 'length.' There is only one meaningful definition of length: we take the object we want to measure, write down the coordinates of its ends simultaneously, and take the difference. What length contraction really means, then, is that if O_A compares the simultaneous coordinates of her own train to the simultaneous coordinates of O_B's train, the difference between the former is greater than the difference between the latter. Similarly, if O_B writes down the simultaneous coordinates of his own train and O_A's, he will find the difference between his own to be greater. Recall from Section 1 that observers in different frames have different notions of simultaneity. Now the 'paradox' doesn't seem so surprising at all; the times at which O_A and O_B are writing down their coordinates are completely different. A simultaneous measurement for O_A is not a simultaneous measurement for O_B, and so we would expect a disagreement as to the observers' concepts of length. When the ends are measured simultaneously in O_B's frame, it is O_A's carriage that comes out shorter (l_B = l_A/γ); when the ends are measured simultaneously in O_A's frame, it is O_B's carriage that comes out shorter by the same factor. No contradiction can arise because the criterion of simultaneity cannot be met in both frames at once.
Be careful to note that length contraction only occurs in the direction of motion. For example, if the velocity of an object is given by $\vec{v} = (v_x, 0, 0)$, length contraction will occur in the x direction only. The other dimensions of the object remain the same to any inertial observer.
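For instance (our own illustrative setup), a box with rest dimensions $L_x \times L_y \times L_z$ moving along the x-axis at speed v is measured by a ground observer to have dimensions

$$\frac{L_x}{\gamma} \times L_y \times L_z,$$

with only the dimension parallel to the motion contracted.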
| http://www.sparknotes.com/physics/specialrelativity/kinematics/section2.rhtml | 13
15 | This chapter has been published in the book INDIA & Southeast Asia to 1800.
In the Andhra land Satavahana king Simuka overthrew the last Kanva king in 30 BC and according to the Puranas reigned for 23 years. The Andhras were called Dasyus in the Aitareya Brahmana, and they were criticized for being degraded Brahmins or outcastes by the orthodox. For three centuries the kingdom of the Satavahanas flourished except for a brief invasion by the Shaka clan of Kshaharata led by Bhumaka and Nahapana in the early 2nd century CE. The latter was overthrown as the Satavahana kingdom with its caste system was restored by Gautamiputra Satakarni about 125 CE; his mother claimed he rooted out Shakas (Scythians), Yavanas (Greeks and Romans), and Pahlavas (Parthians), and records praised Gautamiputra for being virtuous, concerned about his subjects, taxing them justly, and stopping the mixing of castes. His successor Pulumavi ruled for 29 years and extended Satavahana power to the mouth of the Krishna River.
Trade with the Romans was active from the first century CE when Pliny complained that 550 million sesterces went to India annually, mostly for luxuries like spices, jewels, textiles, and exotic animals. The Satavahana kingdom was ruled in small provinces by governors, who became independent when the Satavahana kingdom collapsed. An inscription dated 150 CE credits Shaka ruler Rudradaman with supporting the cultural arts and Sanskrit literature and repairing the dam built by the Mauryans. Rudradaman took back most of the territory the Satavahana king Gautamiputra captured from Nahapana, and he also conquered the Yaudheya tribes in Rajasthan. However, in the next century the warlike Yaudheyas became more powerful. The indigenous Nagas also were aggressive toward Shaka satraps in the 3rd century. In the Deccan after the Satavahanas, Takataka kings ruled from the 3rd century to the 6th.
Probably in the second half of the first century BC Kharavela conquered much territory for Kalinga in southeastern India and patronized Jainism. He was said to have spent much money for the welfare of his subjects and had the canal enlarged that had been built three centuries before by the Nandas. In addition to a large palace, a monastery was built at Pabhara, and caves were excavated for the Jains.
Late in the 1st century BC a line of Iranian kings known as the Pahlavas ruled northwest India. The Shaka (Scythian) Maues, who ruled for about 40 years until 22 CE, broke relations with the Iranians and claimed to be the great king of kings himself. Maues was succeeded by three Shaka kings whose reigns overlapped. The Parthian Gondophernes seems to have driven the last Greek king Hermaeus out of the Kabul valley and taken over Gandhara from the Shakas, and it was said that he received at his court Jesus' disciple Thomas. Evidence indicates that Thomas also traveled to Malabar about 52 CE and established Syrian churches on the west coast before crossing to preach on the east coast around Madras, where he was opposed and killed in 68.
However, the Pahlavas were soon driven out by Scythians Chinese historians called the Yue-zhi. Their Kushana tribal chief Kujula Kadphises, his son Vima Kadphises, and Kanishka (r. 78-101) gained control of the western half of northern India by 79 CE. According to Chinese history one of these kings demanded to marry a Han princess, but the Kushanas were defeated by the Chinese led by Ban Chao at the end of the 1st century. Kanishka, considered the founder of the Shaka era, supported Buddhism, which held its 4th council in Kashmir during his reign. A new form of Mahayana Buddhism with the compassionate saints (bodhisattvas) helping to save others was spreading in the north, while the traditional Theravada of saints (arhats) working for their own enlightenment held strong in southern regions. Several great Buddhist philosophers were favored at Kanishka's court, including Parshva, Vasumitra, and Ashvaghosha; Buddhist missions were sent to central Asia and China, and Kanishka was said to have died fighting in central Asia. Kushana power decreased after the reign of Vasudeva (145-176), and they became vassals in the 3rd century after being defeated by Shapur I of the Persian Sasanian dynasty.
In the great vehicle or way of Mahayana Buddhism the saint (bodhisattva) is concerned with the virtues of benevolence, character, patience, perseverance, and meditation, determined to help all souls attain nirvana. This doctrine is found in the Sanskrit Surangama Sutra of the first century CE. In a dialog between the Buddha and Ananda before a large gathering of monks, the Buddha declares that keeping the precepts depends on concentration, which enhances meditation and develops intelligence and wisdom. He emphasizes that the most important allurement to overcome is sexual thought, desire, and indulgence. The next allurement is pride of ego, which makes one prone to be unkind, unjust, and cruel. Unless one can control the mind so that even the thought of killing or brutality is abhorrent, one will never escape the bondage of the world. Killing and eating flesh must be stopped. No teaching that is unkind can be the teaching of the Buddha. Another precept is to refrain from coveting and stealing, and the fourth is not to deceive or tell lies. In addition to the three poisons of lust, hatred, and infatuation, one must curtail falsehood, slander, obscene words, and flattery.
Ashvaghosha was the son of a Brahmin and at first traveled around arguing against Buddhism until he was converted, probably by Parshva. Ashvaghosha wrote the earliest Sanskrit drama still partially extant; in the Shariputra-prakarana the Buddha converts Maudgalyayana and Sariputra by philosophical discussion. His poem Buddhacharita describes the life and teachings of the Buddha very beautifully.
The Awakening of Faith in the Mahayana is ascribed to Ashvaghosha. That treatise distinguishes two aspects of the soul as suchness (bhutatathata) and the cycle of birth and death (samsara). The soul as suchness is one with all things, but this cannot be described with any attributes. This is negative in its emptiness (sunyata) but positive as eternally transcendent of all intellectual categories. Samsara comes forth from this ultimate reality. Multiple things are produced when the mind is disturbed, but they disappear when the mind is quiet. The separate ego-consciousness is nourished by emotional and mental prejudices (ashrava). Since all beings have suchness, they can receive instructions from all Buddhas and Bodhisattvas and receive benefits from them. By the purity of enlightenment they can destroy hindrances and experience insight into the oneness of the universe. All Buddhas feel compassion for all beings, treating others as themselves, and they practice virtue and good deeds for the universal salvation of humanity in the future, recognizing equality among people and not clinging to individual existence. Thus the prejudices and inequities of the caste system were strongly criticized.
Mahayana texts were usually written in Sanskrit instead of Pali, and the Prajnaparamita was translated into Chinese as early as 179 CE by Lokakshema. This dialog of 8,000 lines in which the Buddha spoke for himself and through Subhuti with his disciples was also summarized in verse. The topic is perfect wisdom. Bodhisattvas are described as having an even and friendly mind, being amenable, straight, soft-spoken, free of perceiving multiplicity, and free of self-interest. Detached, they do not want gain or fame, and their hearts are not overcome by anger nor do they seek a livelihood in the wrong way. Like an unstained lotus in the water they return from concentration to the sense world to mature beings and purify the field with compassion for all living things. Having renounced a heavenly reward they serve the entire world, like a mother taking care of her child. Thought produced is dedicated to enlightenment. They do not wish to release themselves in a private nirvana but become the world's resting place by learning not to embrace anything. With a mind full of friendliness and compassion, seeing countless beings with heavenly vision as like creatures on the way to slaughter, a Bodhisattva impartially endeavors to release them from their suffering by working for the welfare of all beings.
Nagarjuna was also born into a Brahmin family and in the 2nd century CE founded the Madhyamika (Middle Path) school of Mahayana Buddhism, although he was concerned about Hinayanists too. He was a stern disciplinarian and expelled many monks from the community at Nalanda for not observing the rules. A division among his followers led to the development of the Yogachara school of philosophy. Nagarjuna taught that all things are empty, but he answered critics that this does not deny reality but explains how the world happens. Only from the absolute point of view is there no birth or annihilation. The Buddha and all beings are like the sky and are of one nature. All things are nothing but mind established as phantoms; thus blissful or evil existence matures according to good or evil actions.
Nagarjuna discussed ethics in his Suhrllekha. He considered ethics faultless and sublime as the ground of all, like the earth. Aware that riches are unstable and void, one should give; for there is no better friend than giving. He recommended the transcendental virtues of charity, patience, energy, meditation, and wisdom, while warning against avarice, deceit, illusion, lust, indolence, pride, greed, and hatred. Attaining patience by renouncing anger he felt was the most difficult. One should look on another's wife like one's mother, daughter or sister. It is more heroic to conquer the objects of the six senses than a mass of enemies in battle. Those who know the world are equal to the eight conditions of gain and loss, happiness and suffering, fame and dishonor, and blame and praise. A woman (or man), who is gentle as a sister, winning as a friend, caring as a mother, and obedient as a servant, one should honor as a guardian goddess (god). He suggested meditating on kindness, pity, joy, and equanimity, abandoning desire, reflection, happiness, and pain. The aggregates of form, perception, feeling, will, and consciousness arise from ignorance. One is fettered by attachment to religious ceremonies, wrong views, and doubt. One should annihilate desire as one would extinguish a fire in one's clothes or head. Wisdom and concentration go together, and for the one who has them the sea of existence is like a grove.
During the frequent wars that preceded the Gupta empire in the 4th century the Text of the Excellent Golden Light (Suvarnaprabhasottama Sutra) indicated the Buddhist attitude toward this fighting. Everyone should be protected from invasion in peace and prosperity. While turning back their enemies, one should create in the earthly kings a desire to avoid fighting, attacking, and quarreling with neighbors. When the kings are contented with their own territories, they will not attack others. They will gain their thrones by their past merit and not show their mettle by wasting provinces; thinking of mutual welfare, they will be prosperous, well fed, pleasant, and populous. However, when a king disregards evil done in his own kingdom and does not punish criminals, injustice, fraud, and strife will increase in the land. Such a land afflicted with terrible crimes falls into the power of the enemy, destroying property, families, and wealth, as men ruin each other with deceit. Such a king, who angers the gods, will find his kingdom perishing; but the king, who distinguishes good actions from evil, shows the results of karma and is ordained by the gods to preserve justice by putting down rogues and criminals in his domain even to giving up his life rather than the jewel of justice.
After 20 BC many kings ruled Sri Lanka (Ceylon) during a series of succession fights until Vasabha (r. 67-111 CE) of the Lambakanna sect established a new dynasty that would rule more than three centuries. Vasabha promoted the construction of eleven reservoirs and an extensive irrigation system. The island was divided briefly by his son and his two brothers, as the Chola king Karikala invaded; but Gajabahu (r. 114-36) united the country and invaded the Chola territory.
A treaty established friendly relations, and Hindu temples were built on Sri Lanka, including some for the chaste goddess immortalized in the Silappadikaram. Sri Lanka experienced peace and prosperity for 72 years, and King Voharika Tissa (r. 209-31) even abolished punishment by mutilation. However, when the Buddhist schism divided people, the king suppressed the new Mahayana doctrine and banished its followers. Caught in an intrigue with the queen, his brother Abhayanaga (r. 231-40) fled to India, and then with Tamils invaded Sri Lanka, defeated and killed his brother, took the throne, and married the queen. Gothabhaya (r. 249-62) persecuted the new Vetulya doctrine supported by monks at Abhayagirivihara by having sixty monks branded and banished. Their accounts of this cruelty led Sanghamitta to tutor the princes in such a way that when Mahasena (r. 274-301) became king, he confiscated property from the traditional Mahavihara monastery and gave it to Abhayagirivihara.
The Tamil epic poem called The Ankle Bracelet (Silappadikaram) was written about 200 CE by Prince Ilango Adigal, brother of King Shenguttuvan, who ruled the western coast of south India. Kovalan, the son of a wealthy merchant in Puhar, marries Kannaki, the beautiful daughter of a wealthy ship-owner. The enchanting Madhavi dances so well for the king that he gives her a wreath that she sells to Kovalan for a thousand gold kalanjus, making her his mistress. They sing songs to each other of love and lust until he notices hints of her other loves; so he withdraws his hands from her body and departs. Kovalan returns to his wife in shame for losing his wealth; but she gives him her valuable ankle bracelet, and they decide to travel to Madurai. Kannaki courageously accompanies him although it causes her feet to bleed. They are joined by the saintly woman Kavundi, and like good Jains they try not to step on living creatures as they walk. They meet a saintly man who tells them that no one can escape reaping the harvest grown from the seeds of one's actions.
In the woods a charming nymph tries to tempt Kovalan with a message from Madhavi, but his prayer causes her to confess and run away. A soothsayer calls Kannaki the queen of the southern Tamil land, but she only smiles at such ignorance. A priest brings a message from Madhavi asking for forgiveness and noting his leaving his parents. Kovalan has the letter sent to his parents to relieve their anguish. Leaving his wife with the saint Kavundi, Kovalan goes to visit the merchants, while Kavundi warns him that the merits of his previous lives have been exhausted; they must prepare for misfortune. Reaping what is sown, many fall into predicaments from pursuing women, wealth, and pleasure; thus sages renounce all desire for worldly things. A Brahmin tells Kovalan that Madhavi has given birth to his baby girl; he has done good deeds in the past, but he warns him he must pay for some errors committed in a past existence. Kovalan feels bad for wasting his youth and neglecting his parents. He goes to town to sell the ankle bracelet; a goldsmith tells him only the queen can purchase it, but the goldsmith tells King Korkai that he has found the man who stole his royal anklet. The king orders the thief put to death, and Kovalan is killed with a sword.
Kannaki weeping observes the spirit of her husband rise into the air, telling her to stay in life. She goes to King Korkai and proves her husband did not steal the anklet by showing him their anklet has gems not pearls. Filled with remorse for violating justice at the word of a goldsmith, the king dies, followed quickly in this by his queen. Kannaki goes out and curses the town as she walks around the city three times. Then she tears her left breast from her body and throws it in the dirt. A god of fire appears to burn the city, but she asks him to spare Brahmins, good men, cows, truthful women, cripples, the old, and children, while destroying evildoers. As the four genii who protect the four castes of Madurai depart, a conflagration breaks out. The goddess of Madurai explains to Kannaki that in a past life as Bharata her husband had renounced nonviolence and caused Sangaman to be beheaded, believing he was a spy. His wife cursed the killer, and now that action bore fruit. Kannaki wanders desolate for two weeks, confessing her crime. Then the king of heaven proclaims her a saint, and she ascends with Kovalan in a divine chariot.
King Shenguttuvan, who had conquered Kadambu, leaves Vanji and hears stories about a woman with a breast torn off suffering agony and how Madurai was destroyed. The king decides to march north to bring back a great stone on the crowned heads of two kings, Kanaka and Vijaya, who had criticized him; the stone is to be carved into the image of the beloved goddess. His army crosses the Ganges and defeats the northern kings. The saintly Kavundi fasts to death. The fathers of Kovalan and Kannaki both give up their wealth and join religious orders, and Madhavi goes into a Buddhist nunnery, followed later in this by her daughter. Madalan advises King Shenguttuvan to give up anger and criticizes him for contributing to war, causing the king to release prisoners and refund taxes. The Chola king notes how the faithful wife has proved the Tamil proverb that the virtue of women is of no use if the king fails to establish justice. Finally the author himself appears in the court of his brother Shenguttuvan and gives a list of moral precepts that begins:
Seek God and serve those who are near Him.
Do not tell lies.
Avoid eating the flesh of animals.
Do not cause pain to any living thing.
Be charitable, and observe fast days.
Never forget the good others have done to you.1
In a preamble added by a later commentator three lessons are drawn from this story: First, death results when a king strays from the path of justice; second, everyone must bow before a chaste and faithful wife; and third, fate is mysterious, and all actions are rewarded. Many sanctuaries were built in southern India and Sri Lanka to the faithful wife who became the goddess of chastity.
The Jain philosopher Kunda Kunda of the Digambara sect lived and taught sometime between the first and fourth centuries. He laid out his metaphysics in The Five Cosmic Constituents (Panchastikayasara). He noted that karmic matter brings about its own changes, as the soul by impure thoughts conditioned by karma does too. Freedom from sorrow comes from giving up desire and aversion, which cause karmic matter to cling to the soul, leading to states of existence in bodies with senses. Sense objects by perception then lead one to pursue them with desires or aversion, repeating the whole cycle. High ideals based on love, devotion, and justice, such as offering relief to the thirsty, hungry, and miserable, may purify the karmic matter; but anger, pride, deceit, coveting, and sensual pleasures interfere with calm thought, perception, and will, causing anguish to others, slander, and other evils. Meditating on the self with pure thought and controlled senses will wash off the karmic dust. Desire and aversion to pleasant and unpleasant states get the self bound by various kinds of karmic matter. The knowing soul associating with essential qualities is self-determined, but the soul led by desire for outer things gets bewildered and is other-determined.
Kunda Kunda discussed ethics in The Soul Essence (Samayasara). As long as one does not discern the difference between the soul and its thought activity, the ignorant will indulge in anger and other emotions that accumulate karma. The soul discerning the difference turns back from these. One with wrong knowledge takes the non-self for self, identifies with anger, and becomes the doer of karma. As the king has his warriors wage war, the soul produces, causes, binds, and assimilates karmic matter. Being affected by anger, pride, deceit, and greed, the soul becomes them. From the practical standpoint karma is attached in the soul, but from the real or pure perspective karma is neither bound nor attached to the soul; attachment to the karma destroys independence. The soul, knowing the karma is harmful, does not indulge them and in self-contemplation attains liberation. The soul is bound by wrong beliefs, lack of vows, passions, and vibratory activity. Kunda Kunda suggested that one does not cause misery or happiness to living beings by one's body, speech, mind, or by weapons, but living beings are happy or miserable by their own karma (actions). As long as one identifies with feelings of joy and sorrow and until soul realization shines out in the heart, one produces good and bad karma. Just as an artisan does not have to identify with performing a job, working with organs, holding tools, the soul can enjoy the fruit of karma without identifying.
In The Perfect Law (Niyamsara), Kunda Kunda described right belief, right knowledge, and right conduct that lead to liberation. The five vows are non-injury, truth, non-stealing, chastity, and non-possession. Renouncing passion, attachment, aversion, and other impure thoughts involves controlling the mind and speech with freedom from falsehood and restraining the body by not causing injury. The right conduct of repentance and equanimity is achieved by self-analysis, by avoiding transgressions and thoughts of pain and ill-will, and by self-contemplation with pure thoughts. Renunciation is practiced by equanimity toward all living beings with no ill feelings, giving up desires, controlling the senses, and distinguishing between the soul and material karma. A saint of independent actions is called an internal soul, but one devoid of independent action is called an external soul. The soul free from obstructions, independent of the senses, and liberated from good and bad karma is free from rebirth and eternal in the nirvana of perfect knowledge, bliss, and power.
After the disintegration in northern India in the third century CE, the Kushanas still ruled over the western Punjab and the declining Shakas over Gujarat and part of Malwa. Sri Lanka king Meghavarna (r. 301-28) sent gifts and asked permission to build a large monastery north of the Bodhi tree for Buddhist pilgrims that eventually housed more than a thousand priests. Sasanian king Shapur II fought and made a treaty with the Kushanas in 350, but he was defeated by them twice in 367-68. After two previous kings of the Gupta dynasty, Chandra-gupta I by marrying Kumaradevi, a Lichchhavi princess, inaugurated the Gupta empire in 320, launching campaigns of territorial conquest. This expansion was greatly increased by their son Samudra-gupta, who ruled for about forty years until 380, conquering nine republics in Rajasthan and twelve states in the Deccan of central India. Many other kingdoms on the frontiers paid taxes and obeyed orders. The Guptas replaced tribal customs with the caste system. Rulers in the south were defeated, captured, and released to rule as vassals. Local ruling councils under the Guptas tended to be dominated by commercial interests. In addition to his military abilities Samudra-gupta was a poet and musician, and inscriptions praised his charity.
His son Chandra-gupta II (r. 380-414) finally ended the foreign Shaka rule in the west so that his empire stretched from the Bay of Bengal to the Arabian Sea. He allied his family with the Nagas by marrying princess Kubernaga; after marrying Vakataka king Rudrasena II, his daughter ruled as regent there for 13 years. In the south the Pallavas ruled in harmony with the Guptas. The Chinese pilgrim Fa-hien described a happy and prosperous people not bothered by magistrates and rules; only those working state land had to pay a portion, and the king governed without using decapitation or corporal punishments. Kumara-gupta (r. 414-55) was apparently able to rule this vast empire without engaging in military campaigns. Only after forty years of peace did the threat of invading Hunas (White Huns) cause crown prince Skanda-gupta (r. 455-67) to fight for and restore Gupta fortunes by defeating the Huns about 460. After a struggle for the Gupta throne, Budha-gupta ruled for at least twenty years until about 500. Trade with the Roman empire had been declining since the 3rd century and was being replaced by commerce with southeast Asia. The empire was beginning to break up into independent states, such as Kathiawar and Bundelkhand, while Vakataka king Narendra-sena took over some Gupta territory.
Gupta decline continued as Huna chief Toramana invaded the Punjab and western India. His son Mihirakula succeeded as ruler about 515; according to Xuan Zang he ruled over India, and a Kashmir chronicle credited Mihirakula with conquering southern India and Sri Lanka. The Chinese ambassador Song-yun in 520 described the Hun king of Gandara as cruel, vindictive, and barbarous, not believing in the law of Buddha, having 700 war-elephants, and living with his troops on the frontier. About ten years later the Greek Cosmas from Alexandria wrote that the White Hun king had 2,000 elephants and a large cavalry, but his kingdom was west of the Indus River. However, Mihirakula was defeated by the Malwa chief Yashodharman. The Gupta king Narasimha-gupta Baladitya was also overwhelmed by Yashodharman and was forced to pay tribute to Mihirakula, according to Xuan Zang; but Baladitya later defeated Mihirakula, saving the Gupta empire from the Huns. Baladitya was also credited with building a great monastery at Nalanda. In the middle of the 6th century the Gupta empire declined during the reigns of its last two emperors, Kumara-gupta III and Vishnu-gupta. Gupta sovereignty was recognized in Kalinga as late as 569.
In the 4th century Vasubandhu studied and taught Sarvastivadin Buddhism in Kashmir, analyzing the categories of experience in the 600 verses of his Abhidharma-kosha, including the causes and ways to eliminate moral problems. Vasubandhu was converted to the Yogachara school of Mahayana Buddhism by his brother Asanga. Vasubandhu had a long and influential career as the abbot at Nalanda.
As an idealist Vasubandhu, summing up his ideas in twenty and thirty verses, found all experience to be in consciousness. Seeds are brought to fruition in the store of consciousness. Individuals are deluded by the four evil desires of their views of self as real, ignorance of self, self-pride, and self-love. He found good mental functions in belief, sense of shame, modesty, absence of coveting, energy, mental peace, vigilance, equanimity, and non-injury. Evil mental functions he listed as covetousness, hatred, attachment, arrogance, doubt, and false view; minor ones included anger, enmity, concealment, affliction, envy, parsimony, deception, fraud, injury, pride, high-mindedness, low-mindedness, unbelief, indolence, idleness, forgetfulness, distraction, and non-discernment. For Vasubandhu life is like a dream in which we create our reality in our consciousness; even the tortures of hell have no outward reality but are merely projections of consciousness. Enlightenment is when mental obstructions and projections are transcended without grasping; the habit-energies of karma, the six senses and their objects, and relative knowledge are all abandoned for perfect wisdom, purity, freedom, peace, and joy. Vasubandhu wrote that we can know other minds and influence each other for better and worse, because karma is intersubjective.
In 554 Maukhari king Ishana-varman claimed he won victories over the Andhras, Sulikas, and Gaudas. A Gurjara kingdom was founded in the mid-6th century in Rajputana by Harichandra, as apparently the fall of empires in northern India caused this Brahmin to exchange scriptures for arms. Xuan Zang praised Valabhi king Shiladitya I, who ruled about 580, for having great administrative ability and compassion. Valabhi hosted the second Jain council that established the Jain canon in the 6th century. Valabhi king Shiladitya III (r. 662-84) assumed an imperial title and conquered Gurjara. However, internal conflicts as well as Arab invasion destroyed the Valabhi kingdom by about 735. The Gurjara kingdom was also overrun by Arabs, but Pratihara king Nagabhata is credited with turning back the Muslim invaders in the northwest; he was helped in this effort by Gurjara king Jayabhata IV and Chalukya king Avanijanashraya-Pulakeshiraja in the south.
After Thaneswar king Prabhakara-vardhana (r. 580-606) died, his son Rajya-vardhana marched against the hostile Malava king with 10,000 cavalry and won; but according to Banabhatta, the king of Malava, after gaining his confidence with false civilities, had him murdered. His brother Harsha-vardhana (r. 606-47) swore he would clear the earth of Gaudas; starting with 5,000 elephants, 2,000 cavalry, and 50,000 infantry, his army grew as military conquests enabled him to become the most powerful ruler of northern India at Kanauj. Somehow Harsha's conflicts with Valabhi and Gurjara led to his war with Chalukya king Pulakeshin II; but his southern campaign was apparently a failure, and Sindh remained an independent kingdom.
However, in the east according to Xuan Zang by 643 Harsha had subjugated Kongoda and Orissa. That year the Chinese pilgrim observed two great assemblies, one at Kanauj and the other a religious gathering at Prayaga, where the distribution of accumulated resources drew twenty kings and about 500,000 people. Xuan Zang credited Harsha with building rest-houses for travelers, but he noted that the penalty for breaching the social morality or filial duties could be mutilation or exile. After Gauda king Shashanka's death Harsha had conquered Magadha, and he eventually took over western Bengal. Harsha also was said to have written plays, and three of them survive. Xuan Zang reported that he divided India's revenues into four parts for government expenses, public service, intellectual rewards, and religious gifts. During his reign the university in Nalanda became the most renowned center of Buddhist learning. However, no successor of Harsha-vardana is known, and apparently his empire ended with his life.
Wang-Xuan-zi gained help from Nepal against the violent usurper of Harsha's throne, who was sent to China as a prisoner; Nepal also sent a mission to China in 651. The dynasty called the Later Guptas for their similar names took over Magadha and ruled there for almost a century. Then Yashovarman brought Magadha under his sovereignty as he also invaded Bengal and defeated the ruler of Gauda. In 713 Kashmir king Durlabhaka sent an envoy to the Chinese emperor asking for aid against invading Arabs. His successor Chandrapida was able to defend Kashmir against Arab aggression. He was described as humane and just, but in his ninth year as king he was killed by his brother Tarapida, whose cruel and bloody reign lasted only four years. Lalitaditya became king of Kashmir in 724 and in alliance with Yashovarman defeated the Tibetans; but Lalitaditya and Yashovarman could not agree on a treaty; Lalitaditya was victorious, taking over Kanauj and a vast empire. The Arabs were defeated in the west, and Bengal was conquered in the east, though Lalitaditya's record was tarnished when he had the Gauda king of Bengal murdered after promising him safe conduct. Lalitaditya died about 760. For a century Bengal had suffered anarchy in which the strong devoured the weak.
Arabs had been repelled at Sindh in 660, but they invaded Kabul and Zabulistan during the Caliphate of Muawiyah (661-80). In 683 Kabul revolted and defeated the Muslim army, but two years later Zabul's army was routed by the Arabs. After Al-Hajjaj became governor of Iraq in 695 the combined armies of Zabul and Kabul defeated the Arabs; but a huge Muslim army returned to ravage Zabulistan four years later. Zabul paid tribute until Hajjaj died in 714. Two years before that, Hajjaj had equipped Muslim general Muhammad-ibn-Qasim for a major invasion of Sindh which resulted in the chiefs accepting Islam under sovereignty of the new Caliph 'Umar II (717-20).
Pulakeshin I ruled the Chalukyas for about thirty years in the middle of the 6th century. He was succeeded by Kirtivarman I (r. 566-97), who claimed he destroyed the Nalas, Mauryas, and Kadambas. Mangalesha (r. 597-610) conquered the Kalachuris and Revatidvipa, but he lost his life in a civil war over the succession with his nephew Pulakeshin II (r. 610-42). Beginning his reign surrounded by enemies, this king made Govinda an ally and regained the Chalukya empire by reducing the Kadamba capital Vanavasi, the Gangas, and the Mauryas, marrying a Ganga princess. In the north Pulakeshin II subdued the Latas, Malavas, and Gurjaras; he even defeated the mighty Harsha of Kanauj and won the three kingdoms of Maharashtra, Konkana, and Karnata. After Pulakeshin II conquered the Kosalas and Kalingas, an Eastern Chalukya dynasty was inaugurated by his brother Kubja Vishnuvardhana; it absorbed the Andhra country when Vishnukundin king Vikramendra-varman III was defeated. Moving south, Pulakeshin II allied himself with the Cholas, Keralas, and Pandyas in order to invade the powerful Pallavas. By 631 the Chalukya empire extended from sea to sea. Xuan Zang described the Chalukya people as stern and vindictive toward enemies, though they would not kill those who submitted. They and their elephants fought while inebriated, and Chalukya laws did not punish soldiers who killed. However, Pulakeshin II was defeated and probably killed in 642 when the Pallavas, in retaliation for an attack on their capital, captured the Chalukya capital at Badami.
For thirteen years the Pallavas held some territory while Chalukya successors fought for the throne. Eventually Vikramaditya I (r. 655-81) became king and recovered the southern part of the empire from the Pallavas, fighting three Pallava kings in succession. He was followed by his son Vinayaditya (r. 681-96), whose son Vijayaditya (r. 696-733) also fought with the Pallavas. Vijayaditya had a magnificent temple built to Shiva and donated villages to Jain teachers. His son Vikramaditya II (r. 733-47) also attacked the Pallavas and took Kanchi, but instead of destroying it he donated gold to its temples. His son Kirtivarman II (r. 744-57) was the last ruler of the Chalukya empire, as he was overthrown by Rashtrakuta king Krishna I. However, the dynasty of the Eastern Chalukyas still remained to challenge the Rashtrakutas. In the early 8th century the Chalukyas gave refuge to Zoroastrians called Parsis, who had been driven out of Persia by Muslims. A Christian community still lived in Malabar, and in the 10th century the king of the Cheras granted land to Joseph Rabban for a Jewish community in India.
Pallava king Mahendra-varman I, who ruled for thirty years at the beginning of the 7th century lost northern territory to the Chalukyas. As a Jain he had persecuted other religions, but after he tested and was converted by the Shaivite mystic Appar, he destroyed the Jain monastery at Pataliputra. His son Narasimha-varman I defeated Pulakeshin II in three battles, capturing the Chalukya capital at Vatapi in 642 with the aid of the Sri Lanka king. He ruled for 38 years, and his capital at Kanchi contained more than a hundred Buddhist monasteries housing over 10,000 monks, and there were many Jain temples too. During the reign (c. 670-95) of Pallava king Parameshvara-varman I the Chalukyas probably captured Kanchi, as they did again about 740.
On the island of Sri Lanka the 58th and last king listed in the Mahavamsa was Mahasena (r. 274-301). He oversaw the building of sixteen tanks and irrigation canals. The first of 125 kings listed up to 1815 in the Culavamsa, Srimeghavanna, repaired the monasteries destroyed by Mahasena. Mahanama (r. 406-28) married the queen after she murdered his brother Upatissa. Mahanama was the last king of the Lambakanna dynasty that had lasted nearly four centuries. His death was followed by an invasion from southern India that limited Sinhalese rule to the Rohana region.
Buddhaghosha was converted to Buddhism and went to Sri Lanka during the reign of Mahanama. There he translated and wrote commentaries on numerous Buddhist texts. His Visuddhimagga explains ways to attain purity by presenting the teachings of the Buddha in three parts on conduct, concentration, and wisdom. Buddhaghosha also collected parables and stories illustrating Buddhist ethics by showing how karma brings the consequences of actions back to one, sometimes in another life. One story showed how a grudge can cause alternating injuries between two individuals from life to life. Yet if no grudge is held, the enmity subsides. In addition to the usual vices of killing, stealing, adultery, and a judge taking bribes, occupations that could lead to hell include making weapons, selling poison, being a general, collecting taxes, living off tolls, hunting, fishing, and even gathering honey. The Buddhist path is encouraged with tales of miracles and by showing the benefits of good conduct and meditation.
The Moriya clan chief Dhatusena (r. 455-73) improved irrigation by having a bridge constructed across the Mahavali River. He led the struggle to expel the foreigners from the island and restored Sinhalese authority at Anuradhapura. His eldest son Kassapa (r. 473-91) took him prisoner and usurped the throne but lost it with his life to his brother Moggallana (r. 491-508), who used an army of mercenaries from south India. He had the coast guarded to prevent foreign attacks and gave his umbrella to the Buddhist community as a token of submission. His son Kumara-Dhatusena (r. 508-16) was succeeded by his son Kittisena, who was quickly deposed by the usurping uncle Siva. He was soon killed by Upatissa II (r. 517-18), who revived the Lambakanna dynasty and was succeeded by his son Silakala (r. 518-31). Moggallana II (r. 531-51) had to fight for the throne; but he was a poet and was considered a pious ruler loved by the people. Two rulers were killed as the Moriyas regained power. The second, Mahanaga (r. 569-71), had been a rebel at Rohana and then its governor before becoming king at Anuradhapura. Aggabodhi I (r. 571-604) and Aggabodhi II (r. 604-14) built monasteries and dug water tanks for irrigation. A revolt by the general Moggallana III (r. 614-19) overthrew the last Moriya king and led to a series of civil wars and succession battles suffered by the Sri Lanka people until Manavamma (r. 684-718) re-established the Lambakanna dynasty.
Included in a didactic Tamil collection of "Eighteen Minor Poems" are the Naladiyar and the famous Kural. The Naladiyar consists of 400 quatrains of moral aphorisms. In the 67th quatrain the wise say it is not cowardice to refuse a challenge when men rise in enmity and wish to fight; even when enemies do the worst, it is right not to do evil in return. Like milk the path of virtue is one, though many sects teach it. (118) The treasure of learning needs no safeguard, for fire cannot destroy it nor can kings take it. Other things are not true wealth, but learning is the best legacy to leave one's children. (134) Humility is greatness, and self-control is what the gainer actually gains. Only the rich who relieve the need of their neighbors are truly wealthy. (170) The good remember another's kindness, but the base only recall fancied slights. (356)
The Tamil classic, The Kural by Tiru Valluvar, was probably written about 600 CE, plus or minus two centuries. This book contains 133 chapters of ten pithy couplets each and is divided into three parts on the traditional Hindu goals of dharma (virtue or justice), artha (success or wealth), and kama (love or pleasure). The first two parts contain moral proverbs; the third is mostly expressions of love, though there is the statement that one-sided love is bitter while balanced love is sweet. Valluvar transcends the caste system by suggesting that we call Brahmins those who are virtuous and kind to all that live.
Here are a few of Valluvar's astute observations on dharma. Bliss hereafter is the fruit of a loving life here. (75) Sweet words with a smiling face are more pleasing than a gracious gift. (92) He asked, "How can one pleased with sweet words oneself use harsh words to others?"2 Self-control takes one to the gods, but its lack to utter darkness. (121) Always forgive transgressions, but better still forget them. (152) The height of wisdom is not to return ill for ill. (203) "The only gift is giving to the poor; all else is exchange." (221) If people refrain from eating meat, there will be no one to sell it. (256) "To bear your pain and not pain others is penance summed up." (261) In all the gospels he found nothing higher than the truth. (300) I think the whole chapter on not hurting others is worth quoting.
The pure in heart will never hurt others even for wealth or renown.
The code of the pure in heart is not to return hurt for angry hurt.
Vengeance even against a wanton insult does endless damage.
Punish an evil-doer by shaming him with a good deed, and forget.
What good is that sense which does not feel and prevent
all creatures' woes as its own?
Do not do to others what you know has hurt yourself.
It is best to refrain from willfully hurting anyone, anytime, anyway.
Why does one hurt others knowing what it is to be hurt?
The hurt you cause in the forenoon self-propelled
will overtake you in the afternoon.
Hurt comes to the hurtful; hence it is
that those don't hurt who do not want to be hurt.3
Valluvar went even farther when he wrote, "Even at the cost of one's own life one should avoid killing." (327) For death is but a sleep, and birth an awakening. (339)
In the part on artha (wealth) Valluvar defined the unfailing marks of a king as courage, liberality, wisdom and energy. (382) The just protector he deemed the Lord's deputy, and the best kings have grace, bounty, justice, and concern. "The wealth which never declines is not riches but learning." (400) "The wealth of the ignorant does more harm than the want of the learned." (408) The truly noble are free of arrogance, wrath, and pettiness. (431) "A tyrant indulging in terrorism will perish quickly." (563) "Friendship curbs wrong, guides right, and shares distress." (787) "The soul of friendship is freedom, which the wise should welcome." (802) "The world is secure under one whose nature can make friends of foes." (874) Valluvar believed it was base to be discourteous even to enemies (998), and his chapter on character is also worth quoting.
All virtues are said to be natural to those who acquire character as a duty.
To the wise the only worth is character, naught else.
The pillars of excellence are five: love, modesty,
altruism, compassion, truthfulness.
The core of penance is not killing, of goodness not speaking slander.
The secret of success is humility;
it is also wisdom's weapon against foes.
The touchstone of goodness is to own one's defeat even to inferiors.
What good is that good which does not return good for evil?
Poverty is no disgrace to one with strength of character.
Seas may whelm, but men of character will stand like the shore.
If the great fail in nobility, the earth will bear us no more.4
Kamandaka's Nitisara in the first half of the 8th century was primarily based on Kautilya's Arthashastra and was influenced by the violence in the Mahabharata, as he justified both open fighting when the king is powerful and treacherous fighting when he is at a disadvantage. Katyayana, like Kamandaka, accepted the tradition of the king's divinity, although he argued that this should make ruling justly a duty. Katyayana followed Narada's four modes of judicial decision: the dharma of moral law when the defendant confesses, judicial proof when the judge decides, popular custom when tradition rules, and royal edict when the king decides. Crimes of violence were distinguished from the deception of theft. Laws prevented the accumulated interest on debts from exceeding the principal. Brahmins were still exempt from capital punishment and confiscation of property, and most laws differed according to one's caste. The Yoga-vasishtha philosophy taught that as a bird flies with two wings, the highest reality is attained through knowledge and work.
The famous Vedanta philosopher Shankara was born into a Brahmin family; his traditional dates are 788-820, though some scholars believe he lived about 700-50. It was said that when he was eight, he became an ascetic and studied with Govinda, a disciple of the monist Gaudapada; at 16 he was teaching many in the Varanasi area. Shankara wrote a long commentary on the primary Vedanta text, the Brahma Sutra, on the Bhagavad-Gita, and on ten of the Upanishads, always emphasizing the non-dual reality of Brahman (God), that the world is false, and that the atman (self or soul) is not different from Brahman.
Shankara traveled around India and to Kashmir, defeating opponents in debate; he criticized human sacrifice to the god Bhairava and branding the body. He performed a funeral for his mother even though it was considered improper for a sannyasin (renunciate). Shankara challenged the Mimamsa philosopher Mandana Mishra, who emphasized the duty of Vedic rituals, by arguing that knowledge of God is the only means to final release, and after seven days he was declared the winner by Mandana's wife. He tended to avoid the cities and taught sannyasins and intellectuals in the villages. Shankara founded monasteries in the south at Shringeri of Mysore, in the east at Puri, in the west at Dvaraka, and in the northern Himalayas at Badarinath. He wrote hymns glorifying Shiva as God, and Hindus would later believe he was an incarnation of Shiva. He criticized the corrupt left-hand (sexual) practices used in Tantra. His philosophy spread, and he became perhaps the most influential of all Hindu philosophers.
In the Crest-Jewel of Wisdom Shankara taught that although action is for removing bonds of conditioned existence and purifying the heart, reality can only be attained by right knowledge. Realizing that an object perceived is a rope removes the fear and sorrow from the illusion that it is a snake. Knowledge comes from perception, investigation, or instruction, not from bathing, giving alms, or breath control. Shankara taught enduring all pain and sorrow without thought of retaliation, dejection, or lamentation. He noted that the scriptures gave the causes of liberation as faith, devotion, concentration, and union (yoga); but he taught, "Liberation cannot be achieved except by direct perception of the identity of the individual with the universal self."5 Desires lead to death, but one who is free of desires is fit for liberation. Shankara distinguished the atman as the real self or soul from the ahamkara (ego), which is the cause of change, experiences karma (action), and destroys the repose of the real self. From neglecting the real self spring delusion, ego, bondage, and pain. The soul is everlasting and full of wisdom. Ultimately both bondage and liberation are illusions that do not exist in the soul.
Indian drama was analyzed by Bharata in the Natya Shastra, probably from the third century CE or before. Bharata ascribed a divine origin to drama and considered it a fifth Veda; its origin seems to be from religious dancing. In the classical plays Sanskrit is spoken by the Brahmins and noble characters, while Prakrit vernaculars are used by others and most women. According to Bharata poetry (kavya), dance (nritta), and mime (nritya) in life's play (lila) produce emotion (bhava), but only drama (natya) produces "flavor" (rasa). The drama uses the eight basic emotions of love, joy (humor), anger, sadness, pride, fear, aversion, and wonder, attempting to resolve them in the ninth holistic feeling of peace. These are modified by 33 less stable sentiments he listed as discouragement, weakness, apprehension, weariness, contentment, stupor, elation, depression, cruelty, anxiety, fright, envy, arrogance, indignation, recollection, death, intoxication, dreaming, sleeping, awakening, shame, demonic possession, distraction, assurance, indolence, agitation, deliberation, dissimulation, sickness, insanity, despair, impatience, and inconstancy. The emotions are manifested by causes, effects, and moods. The spectators should be of good character, intelligent, and empathetic.
Although some scholars date him earlier, the plays of Bhasa can probably be placed after Ashvaghosha in the second or third century CE. In 1912 thirteen Trivandrum plays were discovered that scholars have attributed to Bhasa. Five one-act plays were adapted from situations in the epic Mahabharata. Dutavakya has Krishna as a peace envoy from the Pandavas giving advice to Duryodhana. In Karnabhara the warrior Karna sacrifices his armor by giving it to Indra, who is in the guise of a Brahmin. Dutaghatotkacha shows the envoy Ghatotkacha carrying Krishna's message to the Kauravas. Urubhanga depicts Duryodhana as a hero treacherously attacked below the waist by Bhima at the signal of Krishna. In Madhyama-vyayoga the middle son is going to be sacrificed, but it turns out to be a device used by Bhima's wife Hidimba to get him to visit her. Each of these plays seems to portray didactically heroic virtues for an aristocratic audience. The Mahabharata also furnishes the episode for the Kauravas' cattle raid of Virata in the Pancharatra, which seems to have been staged to glorify some sacrifice. Bhasa's Abhisheka follows the Ramayana closely in the coronation of Rama, and Pratima also reworks the Rama story prior to the war. Balacharita portrays heroic episodes in the childhood of Krishna.
In Bhasa's Avimaraka the title character heroically saves princess Kurangi from a rampaging elephant, but he says he is an outcast. Dressed as a thief, Avimaraka sneaks into the palace to meet the princess, saying,
Once we have done what we can even failure is no disgrace.
Has anyone ever succeeded by saying, "I can't do it"?
A person becomes great by attempting great things.6
He spends a year there with Kurangi before he is discovered and must leave. Avimaraka is about to jump off a mountain when a fairy (Vidyadhara) gives him a ring by which he can become invisible. Using invisibility, he and his jester go back into the palace just in time to catch Kurangi before she hangs herself. The true parentage of the royal couple is revealed by the sage Narada, and Vairantya king Kuntibhoja gives his new son-in-law the following advice:
With tolerance be king over Brahmins.
With compassion win the hearts of your subjects.
With courage conquer earth's rulers.
With knowledge of the truth conquer yourself.7
Bhasa uses the story of legendary King Udayana in two plays. In Pratijna Yaugandharayana the Vatsa king at Kaushambi, Udayana, is captured by Avanti king Pradyota so that Udayana can be introduced to the princess Vasavadatta by tutoring her in music, a device which works as they fall in love. The title comes from the vow of chief minister Yaugandharayana to free his sovereign Udayana; he succeeds in rescuing him and his new queen Vasavadatta. In Bhasa's greatest play, The Dream of Vasavadatta, the same minister, knowing his king's reluctance to enter a needed political marriage, pretends that he and queen Vasavadatta are killed in a fire so that King Udayana will marry Magadha princess Padmavati. Saying Vasavadatta is his sister, Yaugandharayana entrusts her into the care of Padmavati, because of the prophecy she will become Udayana's queen. The play is very tender, and both princesses are noble and considerate of each other; it also includes an early example of a court jester. Udayana is still in love with Vasavadatta, and while resting half asleep, Vasavadatta, thinking she is comforting Padmavati's headache, gently touches him. The loving and grieving couple are reunited; Padmavati is also accepted as another wife; and the kingdom of Kaushambi is defended by the marriage alliance.
Bhasa's Charudatta is about the courtesan Vasantasena, who initiates a love affair with an impoverished merchant, but the manuscript is cut off abruptly after four acts. However, this story was adapted and completed in The Little Clay Cart, attributed to a King Sudraka, whose name means a little servant. In ten acts this play is a rare example of what Bharata called a maha-nataka or "great play." The play is revolutionary not only because the romantic hero and heroine are a married merchant and a courtesan, but because the king's brother-in-law, Sansthanaka, is portrayed as a vicious fool, and because by the end of the play the king is overthrown and replaced by a man he had falsely imprisoned. Vasantasena rejects the attentions of the insulting Sansthanaka, saying that true love is won by virtue not violence; she is in love with Charudatta, who is poor because he is honest and generous, as money and virtue seldom keep company these days. Vasantasena kindly pays the gambling debts of his shampooer, who then becomes a Buddhist monk. Charudatta, not wearing jewels any more, gives his cloak to a man who saved the monk from a rampaging elephant.
Vasantasena entrusts a golden casket of jewelry to Charudatta, but Sharvilaka, breaking into his house to steal, is given it so that he can gain the courtesan girl Madanika. So that he won't get a bad reputation, Charudatta's wife gives a valuable pearl necklace to her husband, and he realizes he is not poor because he has a wife whose love outlasts his wealthy days. Madanika is concerned that Sharvilaka did something bad for her sake and tells him to restore the jewels, and he returns them to Vasantasena on the merchant's behalf, while she generously frees her servant Madanika for him.
Charudatta gives Vasantasena the more valuable pearl necklace, saying he gambled away her jewels. As the romantic rainy season approaches, the two lovers are naturally drawn together. Charudatta's child complains that he has to play with a little clay cart as a toy, and Vasantasena promises him a golden one. She gets into the wrong bullock cart and is taken to the garden of Sansthanaka, where he strangles her for rejecting his proposition. Then he accuses Charudatta of the crime, and because of his royal influence in the trial, Charudatta is condemned to be executed after his friend shows up with Vasantasena's jewels. However, the monk has revived Vasantasena, and just before Charudatta's head is to be cut off, she appears to save him. Sharvilaka has killed the bad king and anointed a good one. Charudatta lets the repentant Sansthanaka go free, and the king declares Vasantasena a wedded wife and thus no longer a courtesan.
Although he is considered India's greatest poet, it is not known when Kalidasa lived. Probably the best educated guess has him flourishing about 400 CE during the reign of Chandragupta II. The prolog of his play Malavika and Agnimitra asks the audience to consider a new poet and not just the celebrated Bhasa and two others. In this romance King Agnimitra, who already has two queens, in springtime falls in love with the dancing servant Malavika, who turns out to be a princess when his foreign conflicts are solved. The king is accompanied throughout by a court jester, who with a contrivance frees Malavika from confinement by the jealous queen. The only female who speaks Sanskrit in Kalidasa's plays is the Buddhist nun, who judges the dance contest and explains that Malavika had to be a servant for a year in order to fulfill a prophecy that she would marry a king after doing so. In celebration of the victory and his latest marriage, the king orders all prisoners released.
In Kalidasa's Urvashi Won by Valor, King Pururavas falls in love with the heavenly nymph Urvashi. The king's jester Manavaka reveals this secret to the queen's maid Nipunika. Urvashi comes down to earth with her friend and writes a love poem on a birch-leaf. The queen sees this also but forgives her husband's guilt. Urvashi returns to paradise to appear in a play; but accidentally revealing her love for Pururavas, she is expelled to earth and must stay until she sees the king's heir. The queen generously offers to accept a new queen who truly loves the king, and Urvashi makes herself visible to Pururavas. In the fourth act a moment of jealousy causes Urvashi to be changed into a vine, and the king in searching for her dances and sings, amorously befriending animals and plants until a ruby of reunion helps him find the vine; as he embraces the vine, it turns into Urvashi. After many years have passed, their son Ayus gains back the ruby that was stolen by a vulture. When Urvashi sees the grown-up child she had sent away so that she could stay with the king, she must return to paradise; but the king gives up his kingdom to their son so that he can go with her, although a heavenly messenger indicates that he can remain as king with Urvashi until his death.
The most widely acclaimed Indian drama is Kalidasa's Shakuntala and the Love Token. While King Dushyanta is hunting, the local ascetics ask him not to kill a deer, saying, "Your weapon is meant to help the weak, not smite the innocent."8 The king and Shakuntala, who is the daughter of a nymph and is being raised by ascetics, fall in love with each other. The king is accompanied by a foolish Brahmin who offers comic relief. Although he has other wives, the king declares that he needs only the earth and Shakuntala to sustain his line. They are married in the forest, and Shakuntala becomes pregnant. Kanva, who raised her, advises the bride to obey her elders, treat her fellow wives as friends, and not cross her husband in anger even if he mistreats her. The king returns to his capital and gives his ring to Shakuntala so that he will recognize her when she arrives later. However, because of a curse on her from Durvasas, he loses his memory of her, and she loses the ring. Later the king refuses to accept this pregnant woman he cannot recall, and in shame she disappears. A fisherman finds the ring in a fish; when the king gets it back, his memory of Shakuntala returns. The king searches for her and finds their son on Golden Peak with the birthmarks of a universal emperor; now he must ask to be recognized by her. They are happily reunited, and their child Bharata is to become the founding emperor of India.
An outstanding political play was written by Vishakhadatta, who may also have lived at the court of Chandragupta II or as late as the 9th century. Rakshasa's Ring is set when Chandragupta, who defeated Alexander's successor Seleucus in 305 BC, is becoming Maurya emperor by overcoming the Nandas. According to tradition he was politically assisted by his minister Chanakya, also known as Kautilya, supposed author of the famous treatise on politics, Artha Shastra. Rakshasa, whose name means demon, had sent a woman to poison Chandragupta, but Chanakya had her poison King Parvataka instead. Rakshasa supports Parvataka's son Malayaketu; Chanakya cleverly assuages public opinion by letting Parvataka's brother have half the kingdom but arranges for his death too. Chanakya even pretends to break with Chandragupta to further his plot.
Chanakya is able to use a Jain monk and a secretary by pretending to punish them and having Siddarthaka rescue the secretary. With a letter that Chanakya composed in the secretary's hand, and with Rakshasa's ring taken from the home of a jeweler who had given Rakshasa and his family refuge, these agents pretend to serve Malayaketu but make him suspect Rakshasa's loyalty and execute the allied princes that Rakshasa had won for him. Ironically Rakshasa's greatest quality is loyalty, and after he realizes he has been trapped, he decides to sacrifice himself to save the jeweler from being executed. By then Malayaketu's attack on Chandragupta's capital has collapsed from lack of support, and he is captured. Chanakya's manipulations have defeated Chandragupta's rivals without a fight, and he appoints Rakshasa chief minister in his own place; Rakshasa then spares the life of Malayaketu. Chanakya (Kautilya) announces that the emperor (Chandragupta) grants Malayaketu his ancestral territories and releases all prisoners except draft animals.
Ratnavali was attributed to Harsha, who ruled at Kanauj in the first half of the 7th century. This comedy reworks the story of King Udayana, who, though happily married to Vasavadatta, is seduced into marrying her Simhalese cousin Ratnavali for political motives contrived by his minister Yaugandharayana. Ratnavali, using the name Sagarika as the queen's maid, falls in love with the king and has painted his portrait. Her friend then paints her portrait with the king's, which enamors him after he hears the story of the painting from a mynah bird that repeats the maidens' conversation. Queen Vasavadatta becomes suspicious, and the jester plans to bring Sagarika to the king dressed like the queen; but the queen, learning of it, appears veiled herself to expose the affair. Sagarika tries to hang herself but is saved by the king. The jealous queen puts Sagarika in chains and the noose around the jester's neck. Yet in the last act a magician contrives a fire, and the king saves Sagarika once again. A necklace reveals that she is a princess, and the minister Yaugandharayana explains how he brought the lovers together.
Two other plays are also attributed to Harsha: Priyadarshika is another harem comedy, while Joy of the Serpents (Nagananda) shows how prince Jimutavahana gives up his own body to stop a sacrifice of serpents to the divine Garuda. A royal contemporary of Harsha, the Pallava king Mahendravikramavarman, wrote a one-act farce called "The Sport of Drunkards" (Mattavilasa), in which an inebriated Shaivite ascetic accuses a Buddhist monk of stealing his begging bowl made from a skull; but after much satire it is found to have been taken by a dog.
Bhavabhuti lived in the early 8th century and was said to have been the court poet in Kanauj of Yashovarman, a king also supposed to have written a play about Rama. Bhavabhuti depicted the early career of Rama in Mahavira-charita and then produced The Later Story of Rama. In this latter play Rama's brother Lakshmana shows Rama and Sita murals of their past, and Rama asks Sita for forgiveness for having put her through a trial by fire to show the people her purity after she had been captured by the evil Ravana. Rama has made a vow to serve the people's good above all and so orders Sita into exile because of their continuing suspicions. Instead of killing the demon Sambuka, his penance moves Rama to free him. Sita has given birth to two sons, Lava and Kusha, and twelve years pass. When he heard about his daughter Sita's exile, Janaka gave up meat and became a vegetarian; when Janaka meets Rama's mother Kaushalya, she faints at the memory. Rama's divine weapons have been passed on to his sons, and Lava is able to pacify Chandraketu's soldiers by meditating. Rama has Lava remove the spell, and Kusha recites the Ramayana taught him by Valmiki, who raised the sons. Finally Sita is joyfully reunited with Rama and their sons.
Malati and Madhava by Bhavabhuti takes place in the city of Padmavati. Although the king has arranged for Nandana to marry his minister's daughter Malati, the Buddhist nun Kamandaki manages eventually to bring together the suffering lovers Madhava and Malati. Malati has been watching Madhava and draws his portrait; when he sees it, he draws her too. Through the rest of the play they pine in love for each other. Malati calls her father greedy for going along with the king's plan to marry her to Nandana, since a father deferring to a king in this is not sanctioned by morality nor by custom. Madhava notes that success comes from education with innate understanding, boldness combined with practiced eloquence, and tact with quick wit. Malati's friend Madayantika is attacked by a tiger, and Madhava's friend Makaranda is wounded saving her life. In their amorous desperation Madhava sells his flesh to the gods, and he saves the suicidal Malati from being sacrificed by killing Aghoraghanta, whose pupil Kapalakundala then causes him much suffering. Finally Madhava and Malati are able to marry, as Makaranda marries Madayantika. These plays make clear that courtly love and romance were thriving in India for centuries before they were rediscovered in Europe.
The Rashtrakuta Dantidurga married a Chalukya princess and became a vassal king about 733; he and Gujarat's Pulakeshin helped Chalukya emperor Vikramaditya II repulse an Arab invasion, and Dantidurga's army joined the emperor in a victorious expedition against Kanchi and the Pallavas. After Vikramaditya II died in 747, Dantidurga conquered Gurjara, Malwa, and Madhya Pradesh. This Rashtrakuta king then confronted and defeated Chalukya emperor Kirtivarman II so that by the end of 753 he controlled all of Maharashtra. The next Rashtrakuta ruler Krishna I completed the demise of the Chalukya empire and was succeeded about 773 by his eldest son Govinda II. Absorbed in personal pleasures, he left the administration to his brother Dhruva, who eventually revolted and usurped the throne, defeating the Ganga, Pallava, and Vengi kings who had opposed him.
The Pratihara ruler of Gurjara, Vatsaraja, took over Kanauj and installed Indrayudha as governor there. The Palas rose to power by unifying Bengal under the elected king Gopala about 750. He patronized Buddhism, and his successor Dharmapala had fifty monasteries built, founding the Vikramashila monastery with 108 monks in charge of various programs. During the reign of Dharmapala the Jain scholar Haribhadra recommended respecting various views because of Jainism's principles of nonviolence and many-sidedness. Haribhadra found that the following eight qualities can be applied to the faithful of any tradition: nonviolence, truth, honesty, chastity, detachment, reverence for a teacher, fasting, and knowledge. Dharmapala marched into the Doab to challenge the Pratiharas but was defeated by Vatsaraja. When these two adversaries were about to meet for a second battle in the Doab, the Rashtrakuta ruler Dhruva from the Deccan defeated Vatsaraja first and then Dharmapala but did not occupy Kanauj.
Dhruva returned to the south with booty and was succeeded by his third son Govinda III in 793. Govinda had to defeat his brother Stambha and a rebellion of twelve kings, but the two brothers reconciled and turned on Ganga prince Shivamira, whom they returned to prison. Supreme over the Deccan, Govinda III left his brother Indra as viceroy of Gujarat and Malava and marched his army north toward Kanauj, which Vatsaraja's successor Nagabhata II had occupied while Dharmapala's nominee Chakrayudha was on that throne. Govinda's army defeated Nagabhata's; Chakrayudha surrendered, and Dharmapala submitted. Govinda III marched all the way to the Himalayas, uprooting and reinstating local kings.
Rashtrakuta supremacy was challenged by Vijayaditya II, who had become king of Vengi in 799; but Govinda defeated him and installed his brother Bhima-Salukki on the Vengi throne about 802. Then Govinda's forces scattered a confederacy of Pallava, Pandya, Kerala, and Ganga rulers and occupied Kanchi, threatening the king of Sri Lanka, who sent him two statues. After Govinda III died in 814, Chalukya Vijayaditya II overthrew Bhima-Salukki to regain his Vengi throne; then his army invaded Rashtrakuta territory, plundering and devastating the city of Stambha. Vijayaditya ruled for nearly half a century and was said to have fought 108 battles in a 12-year war with the Rashtrakutas and the Gangas. His grandson Vijayaditya III ruled Vengi for 44 years (848-92); he also invaded the Rashtrakuta empire in the north, burning Achalapura, and it was reported he took gold by force from the Ganga king of Kalinga. His successor Chalukya-Bhima I was king of Vengi for 30 years and was said to have turned his attention to helping ascetics and those in distress. Struggles with his neighbors continued though, and Chalukya-Bhima was even captured for a time.
Dharmapala's son Devapala also supported Buddhism and extended the Pala empire in the first half of the 9th century by defeating the Utkalas, Assam, Huns, Dravidas, and Gurjaras, while maintaining his domain against three generations of Pratihara rulers. His successor Vigrahapala retired to an ascetic life after ruling only three years, and his son Narayanapala was also of a peaceful and religious disposition, allowing the Pala empire to languish. After the Pala empire was defeated by the Rashtrakutas and Pratiharas, subordinate chiefs became independent; Assam king Harjara even claimed an imperial title. Just before his long reign ended in 908 Narayanapala did reclaim some territories after the Rashtrakuta invasion of the Pratihara dominions; but in the 10th century during the reign of the next three kings the Pala kingdom declined as principalities asserted their independence in conflicts with each other.
Chandella king Yashovarman invaded the Palas and the Kambojas, and he claimed to have conquered Gauda and Mithila. His successor Dhanga ruled through the second half of the 10th century and was the first independent Chandella king, calling himself the lord of Kalanjara. In the late 8th century Arab military expeditions had attempted to make Kabul pay tribute to the Muslim caliph. In 870 Kabul and Zabul were conquered by Ya'qub ibn Layth; the king of Zubalistan was killed, and the people accepted Islam. Ghazni sultan Sabutkin (r. 977-97) invaded India with a Muslim army and defeated Dhanga and a confederacy of Hindu chiefs about 989.
South of the Chandellas the Kalachuris led by Kokkalla in the second half of the 9th century battled the Pratiharas under Bhoja, Turushkas (Muslims), Vanga in east Bengal, Rashtrakuta king Krishna II, and Konkan. His successor Shankaragana fought Kosala, but he and Krishna II had to retreat from the Eastern Chalukyas. In the next century Kalachuri king Yuvaraja I celebrated his victory over Vallabha with a performance of Rajshekhara's drama Viddhashalabhanjika. Yuvaraja's son Lakshmanaraja raided east Bengal, defeated Kosala, and invaded the west. Like his father, he patronized Shaivite teachers and monasteries. Near the end of the 10th century Kalachuri king Yuvaraja II suffered attacks from Chalukya ruler Taila II and Paramara king Munja. After many conquests, the aggressive Munja, disregarding the advice of his counselor Rudraditya, was defeated and captured by Taila and executed after an attempted rescue.
In 814 Govinda III was succeeded as Rashtrakuta ruler by his son Amoghavarsha, only about 13 years old; Gujarat viceroy Karkka acted as regent. Three years later a revolt led by Vijayaditya II, who had regained the Vengi throne, temporarily overthrew Rashtrakuta power until Karkka reinstated Amoghavarsha I by 821. A decade later the Rashtrakuta army defeated Vijayaditya II and occupied Vengi for about a dozen years. Karkka was made viceroy in Gujarat, but his son Dhruva I rebelled and was killed about 845. The Rashtrakutas also fought the Gangas for about twenty years until Amoghavarsha's daughter married a Ganga prince about 860. In addition to his military activities Amoghavarsha sponsored several famous Hindu and Jain writers and wrote a book himself on Jain ethics. Jain kings and soldiers made an exception to the prohibition against killing for the duties of hanging murderers and slaying enemies in battle. He died in 878 and was succeeded by his son Krishna II, who married the daughter of Chedi ruler Kokkalla I to gain an ally for his many wars with the Pratiharas, Eastern Chalukyas, Vengi, and the Cholas.
Krishna II died in 914 and was succeeded by his grandson Indra III, who marched his army north and captured northern India's imperial city Kanauj. However, Chandella king Harsha helped the Pratihara Mahipala regain his throne at Kanauj. Indra III died in 922; but his religious son Amoghavarsha II had to get help from his Chedi relations to defeat his brother Govinda IV, who had usurped the throne for fourteen years. Three years later in 939 Krishna III succeeded as Rashtrakuta emperor and organized an invasion of Chola and twenty years later another expedition to the north. The Rashtrakutas reigned over a vast empire when he died in 967; but with no living issue the struggle for the throne, despite the efforts of Ganga king Marasimha III, resulted in the triumph of Chalukya king Taila II in 974. That year Marasimha starved himself to death in the Jain manner and was succeeded by Rajamalla IV, whose minister Chamunda Raya staved off usurpation. His Chamunda Raya Purana includes an account of the 24 Jain prophets.
In the north in the middle of the 9th century the Pratiharas were attacked by Pala emperor Devapala; but Pratihara king Bhoja and his allies defeated Pala king Narayanapala. Bhoja won and lost battles against Rashtrakuta king Krishna II. The Pratiharas were described in 851 by an Arab as having the finest cavalry and as the greatest foe of the Muslims, though no country in India was safer from robbers. Bhoja ruled nearly a half century, and his successor Mahendrapala I expanded the Pratihara empire to the east. When Mahipala was ruling in 915 Al Mas'udi from Baghdad observed that the Pratiharas were at war with the Muslims in the west and the Rashtrakutas in the south, and he claimed they had four armies of about 800,000 men each. When Indra III sacked Kanauj, Mahipala fled but returned after the Rashtrakutas left. In the mid-10th century the Pratiharas had several kings, as the empire disintegrated and was reduced to territory around Kanauj.
A history of Kashmir's kings called the Rajatarangini was written by Kalhana in the 12th century. Vajraditya became king of Kashmir about 762 and was accused of selling men to the Mlechchhas (probably Arabs). Jayapida ruled Kashmir during the last thirty years of the 8th century, fighting wars of conquest even though his army once deserted his camp and people complained of high taxes. Family intrigue and factional violence led to a series of puppet kings until Avanti-varman began the Utpala dynasty of Kashmir in 855. His minister Suyya's engineering projects greatly increased the grain yield and lowered its prices. Avanti-varman's death in 883 was followed by a civil war won by Shankara-varman, who then invaded Darvabhisara, Gurjara, and Udabhanda; but he was killed by people in Urasha, who resented his army being quartered there. More family intrigues, bribery, and struggles for power between the Tantrin infantry, Ekanga military police, and the Damara feudal landowners caused a series of short reigns until the minister Kamalavardhana took control and asked the assembly to appoint a king; they chose the Brahmin Yashakara in 939.
Yashakara was persuaded to resign by his minister Parvagupta, who killed the new Kashmir king but died two years later in 950. Parvagupta's son Kshemagupta became king and married the Lohara princess Didda. Eight years later she became regent for their son Abhimanyu and won over the rebel Yashodhara by appointing him commander of her army. When King Abhimanyu died in 972, his three sons ruled in succession until each in turn was murdered by their grandmother, Queen Didda; she ruled Kashmir herself with the help of an unpopular prime minister from 980 until she died in 1003.
In the south the Pandyas had risen to power in the late 8th century under King Nedunjadaiyan. He ruled for fifty years, and his son Srimara Srivallabha reigned nearly as long, winning victories over the Gangas, Pallavas, Cholas, Kalingas, Magadhas, and others until he was defeated by Pallava Nandi-varman III at Tellaru. The Pandya empire was ruined when his successor Varaguna II was badly beaten about 880 by a combined force of Pallavas, western Gangas, and Cholas. The Chola dynasty of Tanjore was founded by Vijayalaya in the middle of the 9th century. As a vassal of the Pallavas, he and his son Aditya I helped their sovereign defeat the Pandyas. Aditya ruled 36 years and was succeeded as Chola king by his son Parantaka I (r. 907-953). His military campaigns established the Chola empire with the help of his allies, the Gangas, Kerala, and the Kodumbalur chiefs. The Pandyas and the Sinhalese king of Sri Lanka were defeated by the Cholas about 915. Parantaka demolished remaining Pallava power, but in 949 the Cholas were decisively beaten by Rashtrakuta king Krishna III at Takkolam, resulting in the loss of Tondamandalam and the Pandya country. Chola power was firmly established during the reign (985-1014) of Rajaraja I, who attacked the Kerala, Sri Lanka, and the Pandyas to break up their control of the western trade.
When the Pandyas invaded the island, Sri Lanka king Sena I (r. 833-53) fled as the royal treasury was plundered. His successor Sena II (r. 853-87) sent a Sinhalese army in retaliation, besieging Madura, defeating the Pandyas, and killing their king. The Pandya capital was plundered, and the golden images were taken back to the island. In 915 a Sinhalese army from Sri Lanka supported Pandyan ruler Rajasimha II against the Cholas; but the Chola army invaded Sri Lanka and apparently stayed until the Rashtrakutas invaded their country in 949. Sri Lanka king Mahinda IV (r. 956-72) had some of the monasteries burnt by the Cholas restored. Sena V (r. 972-82) became king at the age of twelve but died of alcoholism. During his reign a rebellion supported by Damila forces ravaged the island. By the time of Mahinda V (r. 982-1029) the monasteries owned extensive land, and barons kept the taxes from their lands. As unpaid mercenaries revolted and pillaged, Mahinda fled to Rohana. Chola king Rajaraja sent a force that sacked Anuradhapura, ending its period as the capital in 993 as the northern plains became a Chola province. In 1017 the Cholas conquered the south as well and took Mahinda to India as a prisoner for the rest of his life.
In India during this period Hindu colleges (ghatikas) were associated with the temples, and gradually the social power of the Brahmins superseded Buddhists and Jains, though the latter survived in the west. Jain gurus, owning nothing and wanting nothing, were often able to persuade the wealthy to contribute the four gifts of education, food, medicine, and shelter. In the devotional worship of Vishnu and Shiva and their avatars (incarnations), the Buddha became just another avatar for Hindus. Amid the increasing wars and militarism the ethical value of ahimsa (non-injury) so important to the Jains and Buddhists receded. The examples of the destroyer Shiva or Vishnu's incarnations as Rama and Krishna hardly promoted nonviolence. Village assemblies tended to have more autonomy in south India. The ur was open to all adult males in the village, but the sabha was chosen by lot from those qualified by land ownership, aged 35-70, knowing mantras and Brahmanas, and free of any major crime or sin. Land was worked by tenant peasants, who usually had to pay from one-sixth to one-third of their produce. Vegetarian diet was customary, and meat was expensive.
Women did not have political rights and usually worked in the home or in the fields, though upper caste women and courtesans could defy social conventions. Women attendants in the temples could become dancers, but some were exploited as prostitutes by temple authorities. Temple sculptures as well as literature were often quite erotic, as the loves of Krishna and the prowess of the Shiva lingam were celebrated, and the puritanical ethics of Buddhism and Jainism became less influential.
Feminine creative energy was worshiped as shakti, and Tantra in Hinduism and Tibetan Buddhism celebrated the union of the sexual act as a symbol of divine union; their rituals might culminate in partaking of the five Ms - madya (wine), matsya (fish), mamsa (flesh), mudra (grain), and maithuna (coitus). Although in the early stages of spiritual development Tantra taught the usual moral avoidance of cruelty, alcohol, and sexual intercourse, in the fifth stage after training by the guru secret rites at night might defy such social taboos. Ultimately the aspirant is not afraid to practice openly what others disapprove in pursuing what he thinks is true, transcending the likes and dislikes of earthly life like God, to whom all things are equal. However, some argued that the highest stage, symbolized as the external worship of flowers, negates ignorance, ego, attachment, vanity, delusion, pride, calumniation, perturbation, jealousy, and greed, culminating in the five virtues of nonviolence (ahimsa), control of the senses, charity, forgiveness, and knowledge.
The worker caste of Sudras was divided into the clean and the untouchables, who were barred from the temples. There were a few domestic slaves and those sold to the temples. Brahmins were often given tax-free grants of land, and they were forbidden by caste laws to work in cultivation; thus the peasant Sudras provided the labor. The increasing power of the Brahmin landowners led to a decline of merchants and the Buddhists they often had supported.
Commentaries on the Laws of Manu by Medhatithi focused on such issues as the duty of the king to protect the people, their rights, and property. Although following the tradition that the king should take up cases in order of caste, Medhatithi believed that a lower caste suit should be taken up first if it is more urgent. Not only should a Brahmin be exempt from the death penalty and corporal punishment, he thought that for a first offense not even a fine should be imposed on a Brahmin. Medhatithi also held that in education the rod should only be used mildly and as a last resort; his attitude about a husband beating his wife was similar. Medhatithi believed that a woman's mind was not under her control, and that they should all be guarded by their male relations. He upheld the property rights of widows who had been faithful but believed the unfaithful should be cast out to a separate life. Widow suicide called sati was approved by some and criticized by others. During this period marriages were often arranged for girls before they reached the age of puberty, though self-choice still was practiced.
The Jain monk Somadeva in his Nitivakyamrita also wrote that the king must chastise the wicked and that kings being divine should be obeyed as a spiritual duty. However, if the king does not speak the truth, he is worthless; for when the king is deceitful and unjust, who will not be? If he does not recognize merit, the cultured will not come to his court. Bribery is the door by which many sins enter, and the king should never speak what is hurtful, untrustworthy, untrue, or unnecessary. The force of arms cannot accomplish what peace does. If you can gain your goal with sugar, why use poison? In 959 Somadeva wrote the romance Yashastilaka in Sanskrit prose and verse, emphasizing devotion to the god Jina, goodwill to all creatures, hospitality to everyone, and altruism while defending the unpopular practices of the Digambara ascetics such as nudity, abstaining from bathing, and eating standing up.
The indigenous Bon religion of Tibet was animistic and included the doctrine of reincarnation. Tradition called Namri Songtsen the 32nd king of Tibet. His 13-year-old son Songtsen Gampo became king in 630. He sent seventeen scholars to India to learn the Sanskrit language. The Tibetans conquered Burma and in 640 occupied Nepal. Songtsen Gampo married a princess from Nepal and also wanted to marry a Chinese princess, but so did Eastern Tartar (Tuyuhun) ruler Thokiki. According to ancient records, the Tibetans recruited an army of 200,000, defeated the Tartars, and captured the city of Songzhou, persuading the Chinese emperor to send his daughter to Lhasa in 641. Songtsen Gampo's marriage to Buddhist princesses led to his conversion, the building of temples and 900 monasteries, and the translation of Buddhist texts. His people were instructed how to write the Tibetan dialect with adapted Sanskrit letters. Songtsen Gampo died in 649, but the Chinese princess lived on until 680. He was succeeded by his young grandson Mangsong Mangtsen, and Gar Tongtsen governed as regent and conducted military campaigns in Asha for eight years. Gar Tongtsen returned to Lhasa in 666 and died the next year of a fever. A large military fortress was built at Dremakhol in 668, and the Eastern Tartars swore loyalty.
During a royal power struggle involving the powerful Gar ministers, Tibet's peace with China was broken in 670, and for two centuries their frontier was in a state of war. The Tibetans invaded the Tarim basin and seized four garrisons in Chinese Turkestan. They raided the Shanzhou province in 676, the year Mangsong Mangtsen died. His death was kept a secret from the Chinese for three years, and a revolt in Shangshong was suppressed by the Tibetan military in 677. Dusong Mangje was born a few days after his royal father died. The Gar brothers led their armies against the Chinese. During a power struggle Gar Zindoye was captured in battle in 694; his brother Tsenyen Sungton was executed for treason the next year; and Triding Tsendro was disgraced and committed suicide in 699, when Dusong defeated the Gar army. Nepal and northern India revolted in 702, and two years later the Tibetan king was killed in battle. Tibetan sources reported he died in Nanzhao, but according to the Chinese he was killed while suppressing the revolt in Nepal.
Since Mes-Agtshom (also known as Tride Tsugtsen or Khri-Ide-btsug-brtan) was only seven years old, his grandmother Trimalo acted as regent. Mes-Agtshom also married a Chinese princess to improve relations; but by 719 the Tibetans were trading with the Arabs and fighting together against the Chinese. In 730 Tibet made peace with China and requested classics and histories, which the Emperor sent to Tibet despite a minister's warning they contained defense strategies. During a plague in 740-41 all the foreign monks were expelled from Tibet. After the imperial princess died in 741, a large Tibetan army invaded China. Nanzhao, suffering from Chinese armies, formed an alliance with Tibet in 750. Mes-Agtshom died in 755, according to Tibetan sources by a horse accident; but an inscription from the following reign accused two ministers of assassinating him. During Trisong Detsen's reign (755-97) Tibetans collected tribute from the Pala king of Bengal and ruled Nanzhao. In 763 a large Tibetan army invaded China and even occupied their capital at Chang'an. The Chinese emperor promised to send Tibet 50,000 rolls of silk each year; but when the tribute was not paid, the war continued. In 778 Siamese troops fought with the Tibetans against the Chinese in Sichuan (Szech'uan). Peace was made in 783 when China ceded much territory to Tibet. In 790 the Tibetans regained four garrisons in Anxi they had lost to Chinese forces a century before.
After Mashang, the minister who favored the Bon religion, was removed from the scene, Trisong Detsen sent minister Ba Salnang to invite the Indian pandit Shantirakshita to come from the university at Nalanda. The people believed that Bon spirits caused bad omens, and Shantirakshita returned to Nepal. So Ba Salnang invited Indian Tantric master Padmasambhava, who was able to overcome the Bon spirits by making them take an oath to defend the Buddhist religion. Shantirakshita returned and supervised the building of a monastery that came to be known as Samye. He was named high priest of Tibet, and he introduced the "ten virtues." When Padmasambhava was unable to refute the instantaneous enlightenment doctrine of the Chinese monk Hoshang, Kamalashila was invited from India for a debate at Samye that lasted from 792 until 794. Kamalashila argued that enlightenment is a gradual process resulting from study, analysis, and good deeds. Kamalashila was declared the winner, and King Trisong Detsen declared Buddhism the official religion of Tibet.
Padmasambhava founded the red-hat Adi-yoga school and translated many Sanskrit books into Tibetan. A mythic account of his supernatural life that lasted twelve centuries was written by the Tibetan lady Yeshe Tsogyel. As his name implies, Padmasambhava was said to have been born miraculously on a lotus. His extraordinary and unconventional experiences included being married to 500 wives before renouncing a kingdom, several cases of cannibalism, surviving being burned at the stake, killing butchers, attaining Buddhahood, and teaching spirits and humans in many countries. In the guise of different famous teachers he taught people how to overcome the five poisons of sloth, anger, lust, arrogance, and jealousy.
The Tibetan Book of the Dead was first committed to writing around this time. Its title Bardo Thodol more literally means "liberation by hearing on the after-death plane." Similar in many ways to the Egyptian Book of the Dead, it likely contains many pre-Buddhist elements, as it was compiled over the centuries. The first part, chikhai bardo, describes the psychic experiences at the moment of death and urges one to unite with the all-good pure reality of the clear light. In the second stage of the chonyid bardo karmic illusions are experienced in a dream-like state, the thought-forms of one's own intellect. In the sidpa bardo, the third and last phase, one experiences the judgment of one's own karma; prayer is recommended, but instincts tend to lead one back into rebirth in another body. The purpose of the book is to teach one how to attain liberation in the earlier stages and so prevent reincarnation.
Muni Tsenpo ruled Tibet from 797 probably to 804, although some believed he ruled for only eighteen months. He tried to reduce the disparity between the rich and poor by introducing land reform; but when the rich got richer, he tried two other reform plans. Padmasambhava advised him, "Our condition in this life is entirely dependent upon the actions of our previous life, and nothing can be done to alter the scheme of things."9 Muni Tsenpo had married his father's young wife to protect her from his mother's jealousy; but she turned against her son, the new king, and poisoned him; some believed he was poisoned because of his reforms. Since Muni Tsenpo had no sons, he was succeeded by his youngest brother Sadnaleg; his other brother Mutik Tsenpo was disqualified for having killed a minister in anger. During Sadnaleg's reign the Tibetans attacked the Arabs in the west, invading Transoxiana and besieging Samarqand; but they made an agreement with Caliph al-Ma'mun.
When Sadnaleg died in 815, his ministers chose his Buddhist son Ralpachen as king over his irreligious older brother Darma. After a border dispute, Buddhists mediated a treaty between Tibet and China in 821 that reaffirmed the boundaries of the 783 treaty. Ralpachen decreed that seven households should provide for each monk. By intrigues Darma managed to get his brother Tsangma and the trusted Buddhist minister Bande Dangka sent into exile; then Be Gyaltore and Chogro Lhalon, ministers who were loyal to Darma, went and murdered Bande Dangka. In 836 these same two pro-Bon ministers assassinated King Ralpachen and put Darma on the throne. They promulgated laws to destroy Buddhism in Tibet and closed the temples. Buddhist monks had to choose between marrying, carrying arms as hunters, becoming followers of the Bon religion, or death. In 842 the monk Lhalung Palgye Dorje assassinated King Darma with an arrow and escaped. That year marked a division in the royal line and the beginning of local rule in Tibet that lasted more than two centuries. Central Tibet suffered most from Darma's persecution, but Buddhism was kept alive in eastern and western Tibet. Buddhists helped Darma's son (r. 842-70) gain the throne, and he promoted their religion. As their empire disintegrated into separate warring territories, Tibetan occupation in Turkestan was ended by Turks, Uighurs, and Qarluqs.
In 978 translators Rinchen Zangpo and Lakpe Sherab invited some Indian pandits to come to Tibet, and this marked the beginning of the Buddhist renaissance in Tibet. Atisha (982-1054) was persuaded to come from India in 1042 and reformed the Tantric practices by introducing celibacy and a higher morality among the priests. He wrote The Lamp that Shows the Path to Enlightenment and founded the Katampa order, which was distinguished from the old Nyingmapa order of Padmasambhava. Drogmi (992-1074) taught the use of sexual practices for mystical realization, and his scholarly disciple Khon Konchog Gyalpo founded the Sakya monastery in 1073.
The Kagyupa school traces its lineage from the celestial Buddha Dorje-Chang to Tilopa (988-1069), who taught Naropa (1016-1100) in India. From a royal family in Bengal, Naropa studied in Kashmir for three years until he was fourteen. Three years later his family made him marry a Brahmin woman; they were divorced after eight years, though she became a writer too. In 1049 Naropa won a debate at Nalanda and was elected abbot there for eight years. He left to find the guru he had seen in a vision and was on the verge of suicide when Tilopa asked him how he would find his guru if he killed the Buddha. Naropa served Tilopa for twelve years during which he meditated in silence most of the time. However, twelve times he followed his guru's irrational suggestions and caused himself suffering. Each time Tilopa pointed out the lesson and healed him, according to the biography written about a century later. The twelve lessons taught him about the ordinary wish-fulfilling gem, one-valueness, commitment, mystic heat, apparition, dream, radiant light, transference, resurrection, eternal delight (learned from Tantric sex), mahamudra (authenticity), and the intermediate state (between birth and death). Naropa then went to Tibet where he taught Marpa (1012-96), who brought songs from the Tantric poets of Bengal to his disciple Milarepa.
Milarepa was born on the Tibetan frontier of Nepal in 1040. When he was seven years old, Milarepa's father died; his aunt and uncle took control of the estate, and he and his mother had to work as field laborers in poor conditions. When he came of age, his sister, mother, and he were thrown out of their house. So Milarepa studied black magic, and his mother threatened to kill herself if he failed. Milarepa caused the house to fall down, killing 35 people. Next his teacher taught him how to cause a hail storm, and at his mother's request he destroyed some crops. Milarepa repented of this sorcery and prayed to take up a religious life. He found his way to the lama Marpa the translator, who said that even if he imparted the truth to him, his liberation in one lifetime would depend on his own perseverance and energy. The lama was reluctant to give the truth to one who had done such evil deeds. So he had Milarepa build walls and often tear them down, while his wife pleaded for the young aspirant. Frustrated, Milarepa went to another teacher, who asked him to destroy his enemies with a hail storm, which he did while preserving an old woman's plot.
Milarepa returned to his guru Marpa and was initiated. Then he meditated in a cave for eleven months, discovering that the highest path started with a compassionate mood dedicating one's efforts to universal good, followed by clear aspiration transcending thought with prayer for others. After many years Milarepa went back to his old village to discover that his mother had died, his sister was gone, and his house and fields were in ruins. Describing his life in songs, Milarepa decided, "So I will go to gain the truth divine, to the Dragkar-taso cave I'll go, to practice meditation."10 He met the woman to whom he was betrothed in childhood, but he decided on the path of total self-abnegation. Going out to beg for food he met his aunt, who loosed dogs on him; but after talking he let her live in his house and cultivate his field. Milarepa practiced patience on those who had wronged him, calling it the shortest path to Buddhahood. Giving up comfort, material things, and desires for name or fame, he meditated and lived on nettles and water. He preached on the law of karma, and eventually his aunt was converted and devoted herself to penance and meditation. His sister found his nakedness shameful, but Milarepa declared that deception and evil deeds are shameful, not the body. Believing in karma, thoughts of the misery in the lower worlds may inspire one to seek Buddhahood.
It was said that Milarepa had 25 saints among his disciples, including his sister and three other women. In one of his last songs he wrote, "If pain and sorrow you desire sincerely to avoid, avoid, then, doing harm to others."11 Many miraculous stories are told of his passing from his body and the funeral; Milarepa died in 1123, and it was claimed that for a time no wars or epidemics ravaged the Earth. The biography of his life and songs was written by his disciple Rechung.
A contemporary of Milarepa, the life of Nangsa Obum was also told in songs and prose. She was born in Tibet, and because of her beauty and virtue she was married to Dragpa Samdrub, son of Rinang king Dragchen. She bore a son but longed to practice the dharma. Nangsa was falsely accused by Dragchen's jealous sister Ani Nyemo for giving seven sacks of flour to Rechung and other lamas. Beaten by her husband and separated from her child by the king, Nangsa died of a broken heart. Since her good deeds so outnumbered her bad deeds, the Lord of Death allowed her to come back to life. She decided to go practice the dharma; but her son and a repentant Ani Nyemo pleaded for her to stay. She remained but then visited her parents' home, where she took up weaving.
After quarreling with her mother, Nangsa left and went to study the sutras and practice Tantra. The king and her husband attacked her teacher Sakya Gyaltsen, who healed all the wounded monks. Then the teacher excoriated them for having animal minds and black karma, noting that Nangsa had come there for something better than a Rinang king; her good qualities would be wasted living with a hunter; they were trying to make a snow lion into a dog. The noblemen admitted they had made their karma worse and asked to be taught. Sakya replied that for those who have done wrong repentance is like the sun rising. They should think about their suffering and the meaninglessness of their lives and how much better they will be in the field of dharma. Dragchen and his father retired from worldly life, and Nangsa's 15-year-old son was given the kingdom.
Machig Lapdron (1055-1145) was said to be a reincarnation of Padmasambhava's consort Yeshe Tsogyel and of an Indian yogi named Monlam Drub. Leaving that body in a cave in India the soul traveled to Tibet and was born as Machig. As a child, she learned to recite the sutras at record speed, and at initiation she asked how she could help all sentient beings. In a dream an Indian teacher told her to confess her hidden faults, approach what she found repulsive, help those whom she thinks cannot be helped, let go of any attachment, go to scary places like cemeteries, be aware, and find the Buddha within. A lama taught her to examine the movement of her own mind carefully and become free of petty dualism and the demon of self-cherishing. She learned to wander and stay anywhere, and she absorbed various teachings from numerous gurus. She married and had three children but soon retired from the world. By forty she was well known in Tibet, and numerous monks and nuns came from India to challenge her; but she defeated them in debate. It was said that 433 lepers were cured by practicing her teachings.
A book on the supreme path of discipleship was compiled by Milarepa's disciple Lharje (1077-1152), who founded the Cur-lka monastery in 1150. This book lists yogic precepts in various categories. Causes of regret include frittering life away, dying an irreligious and worldly person, and selling the wise doctrine as merchandise. Requirements include sure action, diligence, knowledge of one's own faults and virtues, keen intellect and faith, watchfulness, freedom from desire and attachment, and love and compassion in thought and deed directed to the service of all sentient beings. "Unless the mind be disciplined to selflessness and infinite compassion, one is apt to fall into the error of seeking liberation for self alone."12 Offering to deities meat obtained by killing is like offering a mother the flesh of her own child. The virtue of the holy dharma is shown in those, whose heavy evil karma would have condemned them to suffering, turning to a religious life.
The black-hat Karmapa order was founded in 1147 by Tusum Khyenpa (1110-93), a native of Kham who studied with Milarepa's disciples. This sect claims to have started the system of leadership by successive reincarnations of the same soul, later adopted by the Dalai and Panchen Lamas. In 1207 a Tibetan council decided to submit peacefully to Genghis Khan and pay tribute. After the death of Genghis Khan in 1227, the Tibetans stopped paying the tribute, and the Mongols invaded in 1240, burning the Rating and Gyal Lhakhang monasteries and killing five hundred monks and civilians. In 1244 Sakya Pandita (1182-1251) went to Mongolia, where he initiated Genghis Khan's grandson Godan. Sakya Pandita instructed him in the Buddha's teachings and persuaded him to stop drowning the Chinese to reduce their population. Sakya Pandita was given authority over the thirteen myriarchies of central Tibet and told the Tibetan leaders it was useless to resist the Mongols' military power. He is also credited with devising a Mongolian alphabet. After Sakya Pandita died, the Mongols invaded Tibet in 1252. After Godan died, Kublai in 1254 invested Phagpa as the supreme ruler in Tibet by giving him a letter that recommended the monks stop quarreling and live peaceably. Phagpa conducted the enthronement of Kublai Khan in 1260. Phagpa returned to Sakya in 1276 and died four years later.
In 1282 Dharmapala was appointed imperial preceptor (tishri) in Beijing. The Sakya administrator Shang Tsun objected to Kublai Khan's plans to invade India and Nepal, and the yogi Ugyen Sengge wrote a long poem against the idea, which Kublai Khan abandoned. After Tishri Dharmapala died in 1287, the myriarchy Drikhung attacked Sakya; but administrator Ag-len used troops and Mongol cavalry to defeat them, marching into Drikhung territory and burning their temple in 1290. Kublai Khan had been a patron of Buddhism in Tibet, but he died in 1295. After his death the influence of the Mongols in Tibet diminished.
Between 1000 and 1027 Ghazni ruler Mahmud invaded India with an army at least twelve times. About 15,000 Muslims took Peshawar and killed 5,000 Hindus in battle. Shahi king Jayapala was so ashamed of being defeated three times that he burned himself to death on a funeral pyre. In 1004 Mahmud's forces crossed the Indus River, then attacked and pillaged the wealth of Bhatiya. On the way to attack the heretical Abu-'l-Fath Daud, Mahmud defeated Shahi king Anandapala. Daud was forced to pay 20,000,000 dirhams and was allowed to rule as a Muslim if he paid 20,000 golden dirhams annually. Mahmud's army again met Anandapala's the next year; after 5,000 Muslims lost their lives, 20,000 Hindu soldiers were killed. Mahmud captured an immense treasure of 70,000,000 dirhams, plus gold and silver ingots, jewels, and other precious goods. After Mahmud defeated the king of Narayan and the rebelling Daud, Anandapala made a treaty that lasted until his death, allowing the Muslims passage to attack the sacred city of Thaneswar. In 1013 Mahmud attacked and defeated Anandapala's successor Trilochanapala, annexing the western and central portions of the Shahi kingdom in the Punjab. Next the Muslims plundered the Kashmir valley, though Mahmud was never able to hold it.
To attack Kanauj in the heart of India, Mahmud raised a force of 100,000 cavalry and 20,000 infantry. Most Hindu chiefs submitted, but in Mahaban nearly 5,000 were killed, causing Kulachand to kill himself. Next the Muslims plundered the sacred city of Mathura, destroying a temple that took two centuries to build and estimated to be worth 100,000,000 red dinars. After conquering more forts and obtaining more booty, Mahmud ordered the inhabitants slain by sword, the city plundered, and the idols destroyed in Kanauj that was said to contain almost 10,000 temples. In 1019 Mahmud returned to Ghazni with immense wealth and 53,000 prisoners to be sold as slaves.
When Mahmud's army returned again to chastise Chandella ruler Vidyadhara for killing the submitting Pratihara king Rajyapala, the resistance of Trilochanapala was overcome, making all of Shahi part of Mahmud's empire. Although he had 45,000 infantry, 36,000 cavalry, and 640 elephants, Vidyadhara fled after a minor defeat. The next year Mahmud and Vidyadhara agreed to a peace. 50,000 Hindus were killed in 1025 defending the Shaivite temple of Somanatha in Kathiawar, as Mahmud captured another 20,000,000 dirhams. In his last campaign Mahmud used a navy of 1400 boats with iron spikes to defeat the Jats with their 4,000 boats in the Indus. Mahmud's soldiers often gave people the choice of accepting Islam or death. These threats and the enslavement of Hindus by Muslims and the Hindus' consequent attitude of considering Muslims impure barbarians (mlechchha) caused a great division between these religious groups.
During this time Mahipala I ruled Bengal for nearly half a century and founded a second Pala empire. In the half century around 1100 Ramapala tried to restore the decreasing realm of the Palas by invading his neighbors until he drowned himself in grief in the Ganges. Buddhists were persecuted in Varendri by the Vangala army. In the 12th century Vijayasena established a powerful kingdom in Bengal; but in spite of the military victories of Lakshmanasena, who began ruling in 1178, lands were lost to the Muslims and others early in the 13th century.
Military campaigns led by the Paramara Bhoja and the Kalachuri Karna against Muslims in the Punjab discouraged Muslim invasions after Punjab governor Ahmad Niyaltigin exacted tribute from the Thakurs and plundered the city of Banaras in 1034. Bhoja and a Hindu confederacy of chiefs conquered Hansi, Thaneswar, Nagarkot, and other territories from the Muslims in 1043. Bhoja also wrote 23 books, patronized writers, and established schools for his subjects. Karna won many battles over various kingdoms in India but gained little material advantage. About 1090 Gahadavala ruler Chandradeva seems to have collaborated with the Muslim governor of the Punjab to seize Kanauj from Rashtrakuta ruler Gopala. In the first half of the 12th century Gahadavala ruler Govindachandra came into conflict with the Palas, Senas, Gangas, Kakatiyas, Chalukyas, Chandellas, Chaulukyas, the Karnatakas of Mithila, and the Muslims.
The Ghuzz Turks made Muhammad Ghuri governor of Ghazni in 1173; he attacked the Gujarat kingdom in 1178, but his Turkish army was defeated by the Chaulukya king Mularaja II. Chahamana Prithviraja III began ruling that year and four years later defeated and plundered Paramardi's Chandella kingdom. In 1186 Khusrav Malik, the last Yamini ruler of Ghazni, was captured at Lahore by Muhammad Ghuri. The next year the Chahamana king Prithviraja made a treaty with Bhima II of Gujarat. Prithviraja's forces defeated Muhammad Ghuri's army at Tarain and regained Chahamana supremacy over the Punjab. Muhammad Ghuri organized 120,000 men from Ghazni to face 300,000 led by Prithviraja, who was captured and eventually executed as the Muslims demolished the temples of Ajmer in 1192 and built mosques. From there Sultan Muhammad Ghuri marched to Delhi, where he appointed general Qutb-ud-din Aybak governor; then with 50,000 cavalry Muhammad Ghuri defeated the Gahadavala army of Jayachandra before leaving for Ghazni. Prithviraja's brother Hariraja recaptured Delhi and Ajmer; but after losing them again to Aybak, he burned himself to death in 1194.
Next the local Mher tribes and the Chaulukya king of Gujarat, Bhima II, expelled the Turks from Rajputana; but in 1197 Aybak invaded Gujarat with more troops from Ghazni, killing 50,000 and capturing 20,000. In 1202 Aybak besieged Chandella king Paramardi at Kalanjara and forced him to pay tribute. In the east a Muslim named Bakhtyar raided Magadha and used the plunder to raise a larger force that conquered much of Bengal; his army slaughtered Buddhist monks, thinking they were Brahmins. However, the Khalji Bakhtyar met tough resistance in Tibet and had to return to Bengal where he died. The Ghuri dynasty ended soon after Muhammad Ghuri was murdered at Lahore in 1206 by his former slave Aybak, who assumed power but died in 1210.
The struggle for power was won by Aybak's son-in-law Iltutmish, who defeated and killed Aybak's successor. Then in 1216 Iltutmish captured his rival Yildiz, who had been driven by Khwarezm-Shah from Ghazni to the Punjab; the next year he expelled Qabacha from Lahore. In 1221 Mongols led by Genghis Khan pushed Khwarezm-Shah and other refugees across the Indus into the Punjab. Iltutmish invaded Bengal and ended the independence of the Khalji chiefs; but he met with Guhilot resistance in Rajputana before plundering Bhilsa and Ujjain in Malwa. Chahadadeva captured and ruled Narwar with an army of over 200,000 men, defeating Iltutmish's general in 1234, but he was later defeated by the Muslim general Balban in 1251. After Qabacha drowned in the Indus, Iltutmish was recognized as the Baghdad Caliph's great sultan in 1229 until he died of disease seven years later.
Factional strife occurred as Iltutmish's daughter Raziyya managed to rule like a man for three years before being killed by sexist hostility; his sons, grandson, and the "Forty" officials, who had been his slaves, struggled for power and pushed back the invading Mongols in 1245. After Iltutmish's son Mahmud became king, the capable Balban gained control. In 1253 the Indian Muslim Raihan replaced Balban for a year until the Turks for racist reasons insisted Balban and his associates be restored. When Mahmud died childless in 1265, Balban became an effective sultan. He said, "All that I can do is to crush the cruelties of the cruel and to see that all persons are equal before the law."13 Mongols invaded again in 1285 and killed Balban's son; two years later the elderly Balban died, and in 1290 the dynasty of Ilbari Turks was replaced by the Khalji Turks with ties to Afghanistan.
Chola king Rajendra I (r. 1012-44) ruled over most of south India and even invaded Sumatra and the Malay peninsula. His son Rajadhiraja I's reign (1018-52) overlapped his father's, as he tried to put down rebellions in Pandya and Chera, invading western Chalukya and sacking Kalyana. Cholas were criticized for violating the ethics of Hindu warfare by carrying off cows and "unloosing women's girdles." Rajadhiraja was killed while defeating Chalukya king Someshvara I (r. 1043-68). In the Deccan the later Chalukyas battled their neighbors; led by Vikramaditya, they fought a series of wars against the powerful Cholas. After battling his brother Vikramaditya, Someshvara II reigned 1068-76; in confederacy with Chaulukya Karna of Gujarat, he defeated the Paramara Jayasimha and occupied Malava briefly. Becoming Chalukya king, Vikramaditya VI (r. 1076-1126) invaded the Cholas and took Kanchi some time before 1085.
When the Vaishnavites Mahapurna and Kuresha had their eyes put out, probably by Kulottunga I in 1079, the famous philosopher Ramanuja took refuge in the Hoysala country until Kulottunga died. Ramanuja modified Shankara's nondualism in his Bhasya and emphasized the way of devotion (bhakti). He believed the grace of God was necessary for liberation. Although he practiced initiations and rituals, Ramanuja recognized that caste, rank, and religion were irrelevant to realizing union with God. He provided the philosophical reasoning for the popular worship of Vishnu and was thought to be 120 when he died in 1137.
In Sri Lanka the Sinhalese harassed the occupying Chola forces until they withdrew from Rohana in 1030, enabling Kassapa VI (r. 1029-40) to govern the south. When he died without an heir, Cholas under Rajadhiraja (r. 1043-54) regained control of Rajarata. After 1050 a struggle for power resulted in Kitti proclaiming himself Vijayabahu I (r. 1055-1110). However, in 1056 a Chola army invaded to suppress the revolt in Rohana. Vijayabahu fled to the hills, and his army was defeated near the old capital of Anuradhapura; yet he recovered Rohana about 1061. The Chola empire was also being challenged by the western Chalukyas during the reign (1063-69) of Virarajendra. The new Chola king Kulottunga I (r. 1070-1120), after being defeated by Vijayabahu, pulled his forces out of Sri Lanka. Vijayabahu took over the north but had to suppress a rebellion by three brothers in 1075 near Polonnaruwa. After his envoys to the Chalukya king at Karnataka were mutilated, Vijayabahu invaded Chola around 1085; but he made peace with Kulottunga in 1088. Vijayabahu restored irrigation and centralized administration as he patronized Buddhism. Vijayabahu was succeeded by his brother Jayabahu I; but a year later Vikramabahu I (r. 1111-32) took control of Rajarata and persecuted monks while the sons of Vijayabahu's sister Mitta ruled the rest of Sri Lanka.
The Hoysala king Vinayaditya (r. 1047-1101) acknowledged Chalukya supremacy; but after his death, the Hoysalas tried to become independent by fighting the Chalukyas. Kulottunga ordered a land survey in 1086. The Cholas under Kulottunga invaded Kalinga in 1096 to quell a revolt; a second invasion in 1110 was described in the Kalingattupparani of court poet Jayangondar. After Vikramaditya VI died, Vikrama Chola (r. 1118-1135) regained Chola control over the Vengi kingdom, though the Chalukyas ruled the Deccan until the Kalachuri king Bijjala took Kalyana from Chalukya king Taila III in 1156; the Kalachuris kept control for a quarter century. Gujarat's Chalukya king Kumarapala was converted to Jainism by the learned Hemachandra (1088-1172) and prohibited animal sacrifices, while Jain king Bijjala's minister Basava (1106-67) promoted the Vira Shaiva sect that emphasized social reform and the emancipation of women. Basava disregarded caste and ritual as shackling and senseless. When an outcaste married an ex-Brahmin bride, Bijjala sentenced them both, and they were dragged to death in the streets of Kalyana. Basava tried to convert the extremists to nonviolence but failed; they assassinated Bijjala, and the Vira Shaivas were persecuted. Basava asked, "Where is religion without loving kindness?" Basava had been taught by Allama Prabhu, who had completely rejected external rituals, converting some from the sacrifice of animals to sacrificing one's bestial self.
In his poem, The Arousing of Kumarapala, which describes how Hemachandra converted King Kumarapala, Somaprabha warned Jains against serving the king as ministers, since that meant harming others and extorting fortunes that one's master may take. In the mid-12th century the island of Sri Lanka suffered a three-way civil war. Ratnavali arranged for her son Parakramabahu to succeed childless Kitsirimegha in Dakkinadesa. Parakramabahu defeated and captured Gajabahu (r. 1132-53), taking over Polonnaruwa. However, his pillaging troops alienated the people, who turned to Manabharana. Parakramabahu allied with Gajabahu, becoming his heir, and defeated Manabharana. Parakramabahu I (r. 1153-86) restored unity but harshly suppressed a Rohana rebellion in 1160 and crushed Rajarata resistance in 1168. He used heavy taxation to rebuild Pulatthinagara and Anuradhapura that had been destroyed by the Cholas. The Culavamsa credits Parakramabahu with restoring or building 165 dams, 3910 canals, 163 major tanks, and 2376 minor tanks. He developed trade with Burma. Sri Lanka aided a Pandya ruler in 1169 when Kulashekhara Pandya defeated and killed Parakrama Pandya, seizing Madura; but Chola king Rajadhiraja II (r. 1163-79) brought the Pandya civil war to an end. This enabled larger Chola armies to defeat the Sri Lanka force by 1174. Parakramabahu was succeeded by his nephew, who was slain a year later by a nobleman trying to usurp the throne. Parakramabahu's son-in-law Nissankamalla stopped that and ruled Sri Lanka for nine years. He also was allied with the Pandyas and fought the Cholas.
During the next eighteen years Sri Lanka had twelve changes of rulers, though Nissankamalla's queen, Kalyanavati reigned 1202-08. Four Chola invasions further weakened Sri Lanka. Queen Lilavati ruled three different times and was supported by the Cholas. In 1212 the Pandyan prince Parakramapandu invaded Rajarata and deposed her; but three years later the Kalinga invader Magha took power. The Culavamsa criticized Magha (r. 1215-55) for confiscating the wealth of the monasteries, taxing the peasants, and letting his soldiers oppress the people. Finally the Sinhalese alliance with the Pandyas expelled Magha and defeated the invasions by Malay ruler Chandrabanu. When his son came again in 1285, the Pandyan general Arya Chakravarti defeated him and ruled the north, installing Parakramabahu III (r. 1287-93) as his vassal at Polonnaruwa. Eventually the capital Polonnaruwa was abandoned; the deterioration of the irrigation system became irreversible as mosquitoes carrying malaria infested its remains. The Tamil settlers withdrew to the north, developing the Jaffna kingdom. Others settled in the wet region in the west, as the jungle was tamed.
Hoysala king Ballala II proclaimed his independence in 1193. Chola king Kulottunga III (r. 1178-1216) ravaged the Pandya country about 1205, destroying the coronation hall at Madura; but a few years later he was overpowered by the Pandyas and saved from worse defeat by Hoysala intervention, as Hoysala king Ballala II (r. 1173-1220) had married a Chola princess. In the reign (1220-34) of Narasimha II the Hoysalas fought the Pandyas for empire, as Chola power decreased. Narasimha's son Someshvara (r. 1234-63) was defeated and killed in a battle led by Pandya Jatavarman Sundara. Chola king Rajendra III (r. 1246-79) was a Pandyan feudatory from 1258 to the end of his reign. The Cholas had inflicted much misery on their neighbors, even violating the sanctity of ambassadors. The Pandyas under their king Maravarman Kulashekhara, who ruled more than forty years until 1310, overcame and annexed the territories of the Cholas and the Hoysalas in 1279 and later in his reign gained supremacy over Sri Lanka.
The dualist Madhva (1197-1276) was the third great Vedanta philosopher after Shankara and Ramanuja. Madhva also opened the worship of Vishnu to all castes but may have picked up the idea of damnation in hell from missionary Christians or Muslims. He taught four steps to liberation: 1) detachment from material comforts, 2) persistent devotion to God, 3) meditation on God as the only independent reality, and 4) earning the grace of God.
Marco Polo on his visit to south India about 1293 noted that climate and ignorant treatment did not allow horses to thrive there. He admired Kakatiya queen Rudramba, who ruled for nearly forty years. He noted the Hindus' strict enforcement of justice against criminals and abstention from wine, but he was surprised they did not consider any form of sexual indulgence a sin. He found certain merchants most truthful but noted many superstitious beliefs. Yet he found that ascetics, who ate no meat, drank no wine, had no sex outside of marriage, did not steal, and never killed any creature, often lived very long lives. Marco Polo related a legend of brothers whose quarrels were prevented from turning to violence by their mother who threatened to cut off her breasts if they did not make peace.
Nizam-ud-din Auliya was an influential Sufi of the Chishti order that had been founded a century before. He taught love as the means to realize God. For Auliya universal love was expressed through love and service of humanity. The Sufis found music inflamed love, and they interpreted the Qur'an broadly in esoteric ways; the intuition of the inner light was more important to them than orthodox dogma. Auliya was the teacher of Amir Khusrau (1253-1325), one of the most prolific poets in the Persian language. Many of Khusrau's poems, however, glorified the bloody conquests of the Muslim rulers so that "the pure tree of Islam might be planted and flourish" and the evil tree with deep roots would be torn up by force. He wrote,
The whole country, by means of the sword of our holy warriors,
has become like a forest denuded of its thorns by fire.
The land has been saturated with the water of the sword,
and the vapors of infidelity have been dispersed.
The strong men of Hind have been trodden under foot,
and all are ready to pay tribute.
Islam is triumphant; idolatry is subdued.
Had not the law granted exemption from death
by the payment of poll-tax,
the very name of Hind, root and branch,
would have been extinguished.
From Ghazni to the shore of the ocean
you see all under the dominion of Islam.14
In 1290 the Khalji Jalal-ud-din Firuz became sultan in Delhi but refused to sacrifice Muslim lives to take Ranthambhor, though his army defeated and made peace with 150,000 invading Mongols. Genghis Khan's descendant Ulghu and 4,000 others accepted Islam and became known as the "new Muslims." This lenient sultan sent a thousand captured robbers and murderers to Bengal without punishment. His more ambitious nephew 'Ala-ud-din Khalji attacked the kingdom of Devagiri, gaining booty and exacting from Yadava king Ramachandra gold he used to raise an army of 60,000 cavalry and as many infantry. In 1296 he lured his uncle into a trap, had him assassinated, and bribed the nobles to proclaim him sultan. Several political adversaries were blinded and killed. The next year 'Ala-ud-din sent an army headed by his brother Ulugh Khan to conquer Gujarat; according to Wassaf they slaughtered the people and plundered the country. Another 200,000 Mongols invaded in 1299, but they were driven back. Revolts by his nephews and an old officer were ruthlessly crushed. Money was extorted; a spy network made nobles afraid to speak in public; alcohol was prohibited; and gatherings of nobles were restricted. Orders were given that Hindus were not to have anything above subsistence; this prejudicial treatment was justified by Islamic law.
In addition to his three plays we also have four poems by Kalidasa. The Dynasty of Raghu is an epic telling the story not only of Rama but of his ancestors and descendants. King Dilipa's willingness to sacrifice himself for a cow enables him to get a son, Raghu. Consecrated as king, Raghu tries to establish an empire with the traditional horse sacrifice in which a horse for a year is allowed to wander into other kingdoms, which must either submit or defend themselves against his army. His son Aja is chosen by the princess Indumati. Their son Dasharatha has four sons by three wives; but for killing a boy while hunting, he must suffer the banishment of his eldest son Rama, whose traditional story takes up a third of the epic. His son Kusha restores the capital at Ayodhya; but after a line of 22 kings Agnivarna becomes preoccupied with love affairs before dying and leaving a pregnant queen ruling as regent.
Another epic poem, The Birth of the War-god tells how the ascetic Shiva is eventually wooed by Parvati, daughter of the Himalaya mountains, after the fire from Shiva's eye kills the god of Love and she becomes an ascetic. After being entertained by nymphs, Shiva restores the body of Love. Their son Kumara is made a general by the god Indra; after their army is defeated by Taraka's army, Kumara kills the demon Taraka. Kalidasa's elegy, The Cloud-Messenger, describes how the Yaksha Kubera, an attendant of the god of Wealth, who has been exiled from the Himalayas to the Vindhya mountains for a year, sends a cloud as a messenger to his wife during the romantic rainy season. Kalidasa is also believed to be the author of a poem on the six seasons in India.
Bana wrote an epic romance on the conquests of Harsha in the 7th century and another called Kadambari. Bana was not afraid to criticize the idea of kings being divine nor the unethical and cruel tactics of the political theorist Kautilya. Bana was one of the few Indian writers who showed concern for the poor and humble.
About the 6th or 7th century Bhartrihari wrote short erotic poems typical of those later collected into anthologies. He reminded himself that virtue is still important.
Granted her breasts are firm, her face entrancing,
Her legs enchanting - what is that to you?
My mind, if you would win her, stop romancing.
Have you not heard, reward is virtue's due?15
Torn between sensual and spiritual love, Bhartrihari found that the charms of a slim girl disturbed him. Should he choose the youth of full-breasted women or the forest? Eventually he moved from the dark night of passion to the clear vision of seeing God in everything. He noted that it is easier to take a gem from a crocodile's jaws or swim the ocean or wear an angry serpent like a flower in one's hair or squeeze oil from sand, water from a mirage, or find a rabbit's horn than it is to satisfy a fool whose opinions are set. Bhartrihari asked subtle questions.
Patience, better than armor, guards from harm.
And why seek enemies, if you have anger?
With friends, you need no medicine for danger.
With kinsmen, why ask fire to keep you warm?
What use are snakes when slander sharper stings?
What use is wealth where wisdom brings content?
With modesty, what need for ornament?
With poetry's Muse, why should we envy kings?16
The erotic poetry of Amaru about the 7th century often expressed the woman's viewpoint. When someone questioned her pining and faithfulness, she asked him to speak softly because her love living in her heart might hear. In another poem the narrator tries to hide her blushing, sweating cheeks but finds her bodice splitting of its own accord. This poet seemed to prefer love-making to meditation. The erotic and the religious were combined in 12th century Bengali poet Jayadeva's "Songs of the Cowherd" (Gita Govinda) about the loves of Krishna. A poet observed that most people can see the faults in others, and some can see their virtues; but perhaps only two or three can see their own shortcomings.
In the late 11th century Buddhist scholar Vidyakara collected together an anthology of Sanskrit court poetry, Treasury of Well-Turned Verse (Subhasitaratnakosa), with verses from more than two hundred poets, mostly from the previous four centuries. Although it begins with verses on the Buddha and the bodhisattvas Lokesvara and Manjughosa, Vidyakara also included verses on Shiva and Vishnu. One poet asked why a naked ascetic with holy ashes needed a bow or a woman. (103) After these chapters the poetry is not religious, with verses on the seasons and other aspects of nature. Love poetry is ample, and it is quite sensual, though none of it is obscene. Women's bodies are described with affection, and sections include the joys of love as well as the sad longing of love-in-separation. An epigram complains of a man whose body smells of blood as his action runs to slaughter because his sense of right and wrong is no better than a beast's. Only courage is admired in a lion, but that makes the world seem cheap. (1091) Another epigram warns that the earth will give no support nor a wishing tree a wish, and one's efforts will come to nothing for one whose sin accumulated in a former birth. (1097) Shardarnava described peace in the smooth flow of a river; but noting uprooted trees along the shore, he inferred concealed lawlessness. (1111)
Dharmakirti's verses describe the good as asking no favors from the wicked, not begging from a friend whose means are small, keeping one's stature in misfortune, and following in the footsteps of the great, though these rules may be as hard to travel as a sword blade. (1213) Another poet found that he grew mad like a rutting elephant when knowing little he thought he knew everything; but after consorting with the wise and gaining some knowledge, he knew himself a fool, and the madness left like a fever. (1217) Another proclaimed good one who offers aid to those in distress, not one who is skillful at keeping ill-gotten gains. (1226) A poet noted that countless get angry with or without a cause, but perhaps only five or six in the world do not get angry when there is a cause. (1236) The great guard their honor, not their lives; fear evil, not enemies; and seek not wealth but those who ask for it. (1239) Small-minded people ask if someone is one of them or an outsider, but the noble mind takes the whole world for family. (1241) An anonymous poet asked these great questions:
Can that be judgment where compassion plays no part,
or that be the way if we help not others on it?
Can that be law where we injure still our fellows,
or that be sacred knowledge which leads us not to peace?17
A poet advised that the wise, considering that youth is fleeting, the body soon forfeited and wealth soon gone, lay up no deeds, though they be pleasurable here, that will ripen into bitter fruit in future lives. (1686)
Although collected from ancient myths and folklore, the eighteen "great" Puranas were written between the 4th and 10th centuries. Originally intended to describe the creation of the universe, its destruction and renewal, genealogies, and chronicles of the lawgivers and the solar and lunar dynasties, they retold myths and legends according to different Vaishnavite and Shaivite sects with assorted religious lore. The Agni Puranam, for example, describes the avatars Rama and Krishna, religious ceremonies, Tantric rituals, initiation, Shiva, holy places, duties of kings, the art of war, judicature, medicine, worship of Shiva and the Goddess, and concludes with a treatise on prosody, rhetoric, grammar, and yoga. Much of this was apparently taken from other books.
The early Vishnu Purana explains that although all creatures are destroyed at each cosmic dissolution, they are reborn according to their good or bad karma; this justice pleased the creator Brahma. In this Purana Vishnu becomes the Buddha in order to delude the demons so that they can be destroyed. The gods complain that they cannot kill the demons because they are following the Vedas and developing ascetic powers. So Vishnu says he will bewitch them to seek heaven or nirvana and stop evil rites such as killing animals. Then reviling the Vedas, the gods, the sacrificial rituals, and the Brahmins, they went on the wrong path and were destroyed by the gods. The Vishnu Purana describes the incarnations of Vishnu, including his future life as Kalkin at the end of the dark age (Kali yuga) when evil people will be destroyed, and justice (dharma) will be re-established in the Krita age. The gradual ethical degeneration is reflected in the change in Hindu literature from the heroic Vedas to the strategic epics and then to deception and demonic methods in the Puranas. The Padma Purana explains the incarnations of Vishnu as fulfilling a curse from lord Bhrigu, because Vishnu killed his wife. Thus Vishnu is born again and again for the good of the world when virtue has declined. By appearing as a naked Jain and the Buddha, Vishnu has turned the demons away from the Vedas to the virtue (dharma) of the sages.
The most popular of all the Puranas, the Srimad Bhagavatam was attributed to the author of the Mahabharata, Vyasa, given out through his son Suta. However, scholars consider this work emphasizing the way of devotion (bhakti) one of the later great Puranas and ascribe it to the grammarian Vopadeva. Bhagavatam retells the stories of the incarnations of the god Vishnu with special emphasis on Krishna. Even as a baby and a child the divine Krishna performs many miracles and defeats demons. The young Krishna is not afraid to provoke the wrath of the chief god Indra by explaining that happiness and misery, fear and security, result from the karma of one's actions. Even a supreme Lord must dispense the fruits of others' karma and thus is dependent on those who act. Thus individuals are controlled by their dispositions they have created by their former actions. Karma, or we might say experience, is the guru and the supreme Lord. Brahmins should maintain themselves by knowledge of the Veda, Kshatriyas by protecting the country, Vaishyas by business, and Sudras by service. Krishna also notes that karma based on desire is the product of ignorance, of not understanding one's true nature.
The king who is listening to the stories of Krishna asks how this Lord could sport with other men's wives; but the author excuses these escapades by explaining that although the superhuman may teach the truth, their acts do not always conform to their teachings. The intelligent understand this and follow only the teachings. The worshiping author places the Lord above good and evil and claims that the men of Vajra did not become angry at Krishna because they imagined their wives were by their sides all the time. Krishna also fought and killed many enemies, "as the lord of the jungle kills the beasts."18 He killed Kamsa for unjustly appropriating cows. Krishna fought the army of Magadha king Jarasandha seventeen times and presented the spoils of war to the Yadu king. He killed Satadhanva over a gem. Krishna carried off by force and thus wed Rukmini by the demon mode. Several other weddings followed, and Krishna's eight principal queens were said to have bore him ten sons each. The author claimed he had 16,000 wives and lived with them all at the same time in their own apartments or houses.
In the 18th battle Jarasandha's army finally defeated Krishna's, and it was said that he captured 20,800 kings; but Krishna got Bhima to kill Jarasandha, and all the confined Kshatriyas were released. Krishna cut off the head of his foe Sishupala with his razor-sharp discus; he also destroyed the Soubha and killed Salva, Dantavakra and his brother. Although the methods of action (karma) and knowledge (jnani) are discussed in relation to Samkhya philosophy and yoga, in the Bhagavatam the practice of devotion (bhakti) to God in the form of Krishna is favored as the supreme means of salvation. The great war between the Kurus and the Pandavas is explained as Krishna's way of removing the burden of the Earth. Krishna tells his own people, the Yadus, to cross the sea to Prabhasa and worship the gods, Brahmins, and cows. There rendered senseless by Krishna's illusion (maya), they indulge in drink and slaughter each other. Krishna's brother Balarama and he both depart from their mortal bodies, Krishna ascending to heaven with his chariot and celestial weapons.
Before the 11th century seventy stories of "The Enchanted Parrot" were employed to keep a wife entertained while her husband was away so that she would not find a lover. A charming parrot satirizes women, comparing them to kings and serpents in taking what is near them. The proverb is quoted that when the gods want to ruin someone they first take away one's sense of right and wrong, and the listener is warned not to set one's heart on riches gained by wickedness nor on an enemy one has humiliated. When the husband returns, the parrot is freed from the curse and flies to heaven amid a rain of flowers.
In the late 11th century Somadeva added to the Great Story (Brihat-katha) of Gunadhya to make the Ocean of the Streams of Story (Katha-sarit-sagara) collection of more than 350 stories in Sanskrit verse. The author noting that jealousy interferes with discernment, a king orders a Brahmin executed for talking with his queen; but on the way to his punishment, a dead fish laughs because while so many men are dressed as women in the king's harem an innocent Brahmin is to be killed. The narrator tells the king this and gains respect for his wisdom and release for the Brahmin. The author also notes that for the wise, character is wealth. Somadeva recounts the legendary stories of Vatsa king Udayana and his marriages to Vasavadatta and the Magadhan princess Padmavati. The former is commended for cooperating in the separation in Yaugandharayana's scheme; he says she is a real queen because she does not merely comply with her husband's wishes but cares for his true interests.
An eminent merchant sends his son to a courtesan to learn to beware of immorality incarnate in harlots, who rob rich young men blinded by their virility. Like all professionals, the prostitute has her price but must guard against being in love when no price is paid. She must be a good actress in seducing and milking the man of his money, deserting him when it is gone, and taking him back when he comes up with more money. Like the hermit, she must learn to treat them all equally whether handsome or ugly. Nonetheless the son is taken in by a courtesan and loses all his money, but he contrives to get it back by using a monkey trained to swallow money and give it back on cue.
From Somadeva also comes the Vampire's Tales of "The King and the Corpse." In an unusual frame for 25 stories a king is instructed to carry a hanged corpse inhabited by a vampire, who poses a dilemma at the conclusion of each tale. For example, when heads are cut off and are put back on each other's bodies, which person is which? After becoming orphans the oldest of four Brahmin brothers tries to hang himself; but he is cut down and saved by a man who asks him why a learned person should despair when good fortune comes from good karma and bad luck from bad karma. The answer to unhappiness, then, is doing good; but to kill oneself would bring the suffering of hell. So the brothers combine their talents to create a lion from a bone; but the lion kills them, as their creation was not intelligent but evil. The last brother, who brought the lion's completed body to life, is judged most responsible by the king because he should have been more aware of what would result.
1. Prince Ilango Adigal, Shilappadikaram, tr. Alain Daniélou, p. 202.
2. Tiruvalluvar, The Kural tr. P. S. Sundarum, 99.
3. Ibid., 311-320.
4. Ibid., 981-990.
5. Shankara, Crest-Jewel of Wisdom tr. Mohini M. Chatterji, 58.
6. Bhasa, Avimaraka tr. J. L. Masson and D. D. Kosambi, p. 73.
7. Ibid., p. 130-131.
8. Kalidasa, Shakuntala tr. Michael Coulson, 1:11.
9. Tibet's Great Yogi Milarepa tr. Kazi Dawa-Samdup, p. 176.
10. Ibid., p. 253.
11. Tibetan Yoga and Secret Doctrines tr. Kazi Dawa-Samdup, p. 75.
12. Majumdar, R. C., An Advanced History of India, p. 292.
13. Speaking of Shiva tr. A. K. Ramanujan, p. 54.
14. Elliot, H. M., The History of India as Told by Its Own Historians, Vol. 3, p. 546.
15. Poems from the Sanskrit tr. John Brough, p. 58.
16. Ibid., p. 71.
17. An Anthology of Sanskrit Court Poetry tr. Daniel H. H. Ingalls, 1629.
18. Srimad Bhagavatam tr. N. Raghunathan, 10:44:40, Vol. 2 p. 321.
| http://www.san.beck.org/AB2-India.html | 13 |
33 | In this activity, students become familiar with radio waves that are used to remotely sense the topography beneath the ice sheet. They experiment with travel time of waves and convert these to distance. The students, in groups, examine time data acquired along profiles of the Antarctic ice sheet and convert these data to depth, resulting in a profile of the topography beneath the ice sheet. The students "pool" their profiles to get a better view of the topography beneath the ice sheet. They compare their findings with radio-echo soundings of the ice sheet and with the map of sub-ice topography. Students contrast this method of data acquisition with that of coring (What's Under There?).
Maps of the Antarctic continent showing the land under the ice have been difficult to make in the past. Today, with the help of satellites, almost the whole Earth and even the sky are being mapped with scientific instruments that use electromagnetic or sound waves.
Grade level: 6th grade and higher. Disciplines: Earth Science, Physical Science, Physics.
The student will:
Teacher Preparation for Activity
Place each of the three maps of Antarctica on a separate wall. The students will approach these maps in groups to examine them; the class also will discuss the maps.
For each group of 4 to 5 students: a slinky, a measuring tape, a stopwatch, graph paper, pencils, and a copy of an RES profile.
Two class periods are needed.
Engagement and Exploration (Student Inquiry Activity)
As a class, discuss the findings of the students (refer to questions). How do the students think the maps were constructed? Do we have data for every single point on the map? Were cores used? Are there other ways to acquire information?
Researchers get information about the thickness of the ice sheet and the depth to basement through cores - a data collection method similar to the way students collected topographic information in their mystery boxes. They also use other techniques based on sound waves and electromagnetic waves. Some sound waves we can hear; electromagnetic waves include the light we can see.
Researchers who use electromagnetic waves to image through the ice sheet work with radio waves. The technique is called "radio-echo sounding" or RES. A more recent term for the tool is "ground penetrating radar" or GPR.
Scientists use devices to generate radio waves at the surface of an ice sheet. The radio waves travel through the ice sheet. When the wave "hits" a surface, such as a layer of more dense ice, or a pool of water, or the rock surface under the ice, part of it bounces back to the surface where the scientists record the return. The scientists can figure out where the object is under the ice using the amount of time the wave traveled to the object and back. They can change "time" into depth (or thickness) because they know how fast the waves travel. Actually, there are computers that make this conversion for the scientists; the data are displayed as a radar image. The students will look at GPR data from the ice sheet later in the activity.
The time has to be divided in half because the time the scientist records includes the wave's trip to the object and the wave's trip back to the surface.
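The arithmetic behind this conversion is simple enough to sketch in a few lines of code. The Python snippet below is an illustration added for this write-up, not part of the original activity; it assumes a radio-wave speed in ice of roughly 1.68 x 10^8 meters per second (a commonly quoted value) and ignores refinements such as the slower, looser firn layer near the surface.

```python
# Minimal sketch: turn a radar echo's two-way travel time into ice thickness.
# The wave speed is an assumed constant; real processing also corrects for firn
# and antenna geometry.

RADIO_SPEED_IN_ICE = 1.68e8  # meters per second (assumed value)

def ice_thickness(two_way_time_seconds: float) -> float:
    """Return ice thickness in meters for a recorded two-way travel time."""
    one_way_time = two_way_time_seconds / 2.0  # the echo travels down AND back up
    return RADIO_SPEED_IN_ICE * one_way_time

# Example: an echo that comes back 35 microseconds after the pulse was sent
print(ice_thickness(35e-6))  # roughly 2940 meters of ice
```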
Elaboration (Polar Applications)
Provide each group with a slinky, a measuring tape, a stopwatch, graph paper, pencils. Have each group work near a wall. The students will explore the d=vt relationship by creating waves with the slinky and timing how long it takes for the wave to travel to the wall and back to the student generating the wave.
Ask the students to measure 3 meters from the wall. One student can sit close to the wall and one student can sit at the 3 meter mark. Have the students quickly move the slinky to the right and then to the left so that a wave is created. What happens to the wave?
This is similar to how waves act in the ice sheet. The scientists have an instrument that generates the wave. The wave travels through the ice (across the floor), hits the rock beneath the ice sheet (the other student), and then bounces back to the top of the ice sheet (the first student).
Teacher Note: The side-to-side wave is similar to a transverse wave. Light, radar, ultraviolet, and TV waves are transverse waves. They are all members of the electromagnetic spectrum. By creating a "pull and push" motion with the slinky on the floor, students can simulate longitudinal waves (compressional). This models how sound waves travel. While GPR uses radar waves, other methods to image the ice sheet employ sound waves. The results are similar.
Ask the students to measure the time the wave takes to travel from the student generating it to the other student and back again. Have the groups measure this same distance several times. What parts of the experiment do we want to keep constant? The students should try to generate the waves in a consistent fashion. Why do we want to repeat the experiment several times? Make sure the students record their data.
Next, have the students repeat the experiment with the students 2.5 meters from each other. Record the data for three trials. Repeat for 2, 1.5, 1, 0.5 meters. What happens to the speed of the wave as the students get closer and closer? At some point, the students may not be able to measure the difference.
Have the student groups graph their results of distance and time. Remind them that they want the time for the wave to travel one way, just like scientists only want the time the wave travels to the rock under the ice sheet. They need to cut the time in half.
How do they want to show the multiple trials? Suggest that they average the three trials so that they get a single number to plot.
Ask the students to calculate how fast the wave was traveling using the equation:
distance = velocity × time (d = vt)
How can they find the speed of the wave? They will need to rearrange the parts of the equation so that:
velocity = distance ÷ time (v = d/t)
What trends do they see? Does the speed vary for the trials? Probably not by much. For older students, this can be discussed in the context of the slope of the line created by plotting distance versus wave travel time.
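A short script can mirror what the students do on graph paper: average the trials at each distance, halve the round-trip time, and compute speed as distance divided by time. This is a minimal sketch; the distances and times below are invented classroom-style numbers, not measured data.

```python
# Minimal sketch of the slinky analysis. Each distance maps to three timed
# round trips; the values are made-up examples for illustration only.

trials = {  # distance between students in meters -> three round-trip times in seconds
    3.0: [1.9, 2.1, 2.0],
    2.5: [1.6, 1.7, 1.7],
    2.0: [1.3, 1.4, 1.3],
    1.5: [1.0, 1.0, 1.1],
}

for distance, times in trials.items():
    round_trip = sum(times) / len(times)   # average the repeated trials
    one_way_time = round_trip / 2.0        # keep only the one-way travel time
    speed = distance / one_way_time        # v = d / t
    print(f"{distance:.1f} m: one-way time {one_way_time:.2f} s, speed {speed:.1f} m/s")
```

With numbers like these the computed speed comes out nearly the same at every distance, which is the trend the discussion above anticipates.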
When the scientists are measuring the thickness of the ice sheet, or the depth to the bottom, they know the speed of the wave in ice and the time that it takes for the wave to return.
Exchange (Students Draw Conclusions)
Have the students present their graphs to the class. Are all the graphs similar? Why or why not? What controls how fast a wave returns to the "surface"?
If the scientists were measuring radar waves on the ice sheet, and the wave returned very quickly from the rock surface beneath the ice sheet, how close would the rock surface be? Close? Far away? If the wave took much longer to bounce back, how close would the rock surface be? Close? Far away?
Give each student group a RES image. Show them on the map where the image was collected. What do they see? Can they identify the surface of the ice sheet? The bottom? The rock under the ice sheet? The layers inside the ice sheet? This image was collected by recording radar waves that penetrated the ice sheet and then bounced back to the surface.
Return to the maps. Ask the students how the data to make the map were collected. Some students may recall the coring activity. Much of our information about the thickness of the ice sheets and the sub-ice topography comes from data collected with the techniques illustrated in this activity - remotely sensing the ice sheet with radar and sound waves. Ask the students why coring is still very important. While more expensive, coring provides new information about the ice sheet, and it provides "ground truth" - evidence that the remote-sensing data are interpreted correctly.
Evaluation (Assessing Student Performance)
Sandra Shutey, Butte High School, Butte, Montana; Stephanie Shipp, Rice University, Houston, Texas; Kristen Bjork, Educational Development Center, Newton, Massachusetts
The International Geophysical Year (IGY) of 1958 marked the initiation of focused investigations into the structure of Antarctica. Our knowledge of the continent increased dramatically as a result of these studies. We continue to investigate the continent with ever-changing technology. During the early years, scientists used explosives to generate sound waves. Charges of dynamite were placed in the ice and exploded. The shock waves, recorded on a seismograph, penetrated the ice and the rock below. The waves bounced off the different layers and returned to the surface where they were recorded by the sensors (Image). Changes in the rate of travel of the waves passing from ice to rock allowed scientists to determine the ice-rock boundary and to measure the true depth of the ice. From these data, scientists created cross-sections that revealed the great thickness of the ice.
Today, scientists use techniques such as radio-echo sounding (RES) to collect data about ice sheet thickness inexpensively and quickly (also a little less dangerously!) (Image). RES involves either sledding or flying over the ice sheet while transmitting and receiving radio signals. The data used in this activity are RES profiles.
Radio-echo sounding (RES) techniques are relatively new to the field of glaciology. A radio pulse between the frequencies of 35 and 300 MHz is transmitted from an instrument at the ice surface. The pulse penetrates the ice and reflects from internal layers and the ice/substrate contact back to surface listening devices. The returned signals are recorded and processed digitally into high-resolution images of the internal structure and thicknesses of ice sheets.
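As a rough illustration of how a recorded echo becomes a thickness, the sketch below assumes radio waves travel through ice at about 1.68 × 10^8 m/s (a commonly used approximation) and uses an invented two-way travel time; it is not part of the original activity.

# Sketch: converting a radio-echo two-way travel time into ice thickness.
RADAR_SPEED_IN_ICE = 1.68e8   # m/s, approximate speed of radio waves in ice

def ice_thickness(two_way_travel_time_s: float) -> float:
    """Thickness = speed * one-way time (half the recorded echo time)."""
    return RADAR_SPEED_IN_ICE * two_way_travel_time_s / 2.0

echo_time = 36e-6   # 36 microseconds, hypothetical echo time
print(f"ice thickness: {ice_thickness(echo_time) / 1000:.1f} km")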
The potential to use RES as a method to measure ice thickness came about from some accidental airplane landings on the Greenland Ice Sheet. During WWII military aircraft had radar devices to indicate their clearance above the ground surface. Ice is partly transparent to radio waves, and portions of the radio signal penetrated the ice sheet and bounced off the rock surface underneath. Thus the radar provided the clearance above the bed of the ice sheet and not the surface of the ice sheet! In consequence, airplanes made unintentional wheels-up landings on the snow on top of the ice sheet. No one was hurt, but scientists learned about this new method.
RES data allow scientists to look into the ice sheet and see its base in a continuous profile. But the RES profiles tell glaciologists much more than just the thickness of the ice sheet. RES data show layers within the sheet that can be traced across great distances; each layer is a time line. These help glaciologists understand how the ice is flowing. By comparing RES profiles at the same location from different times, researchers can trace the movement of a particular pattern or feature and can determine how fast the ice sheet is moving. RES data can even provide information about the temperatures of ice sheets! Radio waves travel at different speeds through ice of different temperatures; these differences can be interpreted to give glaciologists more information about the ice sheet and its activity. Based on the pattern of the returned signal, glacial geologists and glaciologists have identified different types of rock under the ice sheet, the locations of subglacial lakes, and the presence of large crevasses at the bases of ice shelves.
Based on RES and seismic surveys of the Antarctic ice sheet, a better understanding of the size, volume, and conditions within the ice sheet has emerged. RES data provide scientists with a method of data collection that is relatively rapid, and more continuous and less costly than other types of surveys. Remember, however, interpretations of the data must be field tested. Ice cores and other types of sampling are invaluable for verifying that the interpretations are correct!
Background modified from GLACIER supplementary curriculum. Materials are available through GLACIER.
Employing Remote Sensing in the Field:
Kim Giesting Journals; Oceanography
Hubble Space Telescope; Astronomy
Antarctic sub-ice topographic, thickness, and surface elevation maps are available from the Scott Polar Research Institute.
Back to: TEA Activities Page | http://tea.armadaproject.org/activity/tea_activity_shutey_res.html | 13 |
13 |
Some of the most frequently asked questions we get here at Universe Today and Astronomy Cast deal with black holes. Everyone wants to know what conditions would be like at the event horizon, or even inside a black hole. Answering those questions is difficult because so much about black holes is unknown. Black holes can’t be observed directly because their immense gravity won’t let light escape. But in just the past week, three different research teams have released their findings in their attempts to create black holes – or at least conditions analogous to them – in order to advance our understanding.
Make Your Own Accretion Disk
A team of researchers from Osaka University in Japan wanted to sharpen their insights into the behavior of matter and energy in extreme conditions. What could be more extreme than the conditions of the swirling cloud of matter surrounding a black hole, known as the accretion disk? Their unique approach was to blast a plastic pellet with high-energy laser beams.
Accretion disks get crunched and heated by a black hole’s gravitational energy. Because of this, the disks glow in x-ray light. Analyzing the spectra of these x-rays gives researchers clues about the physics of the black hole.
However, scientists don’t know precisely how much energy is required to produce such x-rays. Part of the difficulty is a process called photoionization, in which the high-energy photons conveying the x-rays strip away electrons from atoms within the accretion disk. That lost energy alters the characteristics of the x-ray spectra, making it more difficult to measure precisely the total amount of energy being emitted.
To get a better handle on how much energy those photoionized atoms consume, researchers zapped a tiny plastic pellet with 12 laser beams fired simultaneously and allowed some of the resulting radiation to blast a pellet of silicon, a common element in accretion disks.
The synchronized laser strikes caused the plastic pellet to implode, creating an extremely hot and dense core of gas, or plasma. That turned the pellet into “a source of [immensely powerful] x-rays similar to those from an accretion disk around a black hole,” says physicist and lead author Shinsuke Fujioka. The team said the x-rays photoionized the silicon, and that interaction mimicked the emissions observed in accretion disks. By measuring the energy lost from the photoionization, the researchers could measure total energy emitted from the implosion and use it to improve their understanding of the behavior of x-rays emitted by accretion disks.
The Portable Black Hole
Another group of physicists created a tiny device that can create a black hole by sucking up microwave light and converting it into heat. At just 22 centimeters across, the device can fit in your pocket.
The device uses ‘metamaterials’, specially engineered materials that can bend light in unusual ways. Previously, scientists have used such metamaterials to build ‘invisibility carpets’ and super-clear lenses. This latest black hole was made by Qiang Chen and Tie Jun Cui of Southeast University in Nanjing, China.
Real black holes use their huge mass to warp space around them. Light that travels too close to them can become trapped forever.
The new meta-black hole also bends light, but in a very different way. Rather than relying on gravity, the black hole uses a series of metallic ‘resonators’ arranged in 60 concentric circles. The resonators affect the electric and magnetic fields of a passing light wave, causing it to bend towards the centre of the hole. It spirals closer and closer to the black hole’s ‘core’ until it reaches the 20 innermost layers. Those layers are made of another set of resonators that convert light into heat. The result: what goes in cannot come out. “The light into the core is totally absorbed,” Cui said.
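The inward spiral can be illustrated with a toy ray-trace. The gradient-index profile below, n(r) proportional to 1/r inside the device, is an assumption chosen only because it makes rays cross every circle at a constant angle and therefore spiral toward the core; it is not the actual resonator design published by Chen and Cui.

import math

R = 1.0  # outer radius of the absorbing region (arbitrary units)

def n(x, y):
    """Toy index profile: 1 outside radius R, R/r inside (assumed, not the real design)."""
    r = math.hypot(x, y)
    return 1.0 if r >= R else R / max(r, 1e-6)

def grad_n(x, y):
    """Gradient of the toy profile; zero outside the device."""
    r = math.hypot(x, y)
    if r >= R or r < 1e-6:
        return 0.0, 0.0
    return -R * x / r**3, -R * y / r**3   # partial derivatives of R/r

def trace(x, y, tx, ty, step=1e-3, max_steps=20000):
    """Integrate the geometric-optics ray equation d/ds (n * t_hat) = grad n."""
    vx, vy = n(x, y) * tx, n(x, y) * ty
    for steps in range(max_steps):
        norm = math.hypot(vx, vy)
        x += step * vx / norm              # advance along the ray direction
        y += step * vy / norm
        gx, gy = grad_n(x, y)
        vx += step * gx                    # bend the ray toward higher index
        vy += step * gy
        if math.hypot(x, y) < 0.05 * R:    # treat reaching the core as "absorbed"
            return steps, math.hypot(x, y)
    return max_steps, math.hypot(x, y)

# A ray that starts outside the device and would miss the core if it went straight:
steps, final_r = trace(-2.0, 0.6, 1.0, 0.0)
print(f"captured after {steps} steps, final radius {final_r:.3f}")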
Not only is the device useful in studying black holes, but the research team hopes to create a version of the device that will suck up light of optical frequencies. If it works, it could be used in applications such as solar cells.
Black holes in your computer?
Could you create a black hole in your computer? Maybe if you had a really big one. Scientists at Rochester Institute of Technology (RIT) hope to make use of two of the fastest supercomputers in the world in their quest to “shine light” on black holes. The team was approved for grants and computing time to study the evolution of black holes and other objects with the “NewHorizons,” a cluster consisting of 85 nodes with four processors each, connected via an Infiniband network that passes data at 10-gigabyte-per-second speeds.
The team has created computer algorithms to simulate with mathematics and computer graphics what cannot be seen directly.
“It is a thrilling time to study black holes,” said Manuela Campanelli, center director. “We’re nearing the point where our calculations will be used to test one of the last unexplored aspects of Einstein’s General Theory of Relativity, possibly confirming that it properly describes the strongest gravitational fields in the universe.” | http://www.universetoday.com/43110/could-a-black-hole-fit-in-your-computer-or-in-your-pocket/comment-page-5/ | 13 |
15 | Gamma radiation from black holes heats the diffuse gas in space, which delayed the formation of dwarf galaxies.
The illustration shows a supermassive black hole surrounded by a dust ring (torus). Gas falling onto the black hole produces a high-energy beam of matter and radiation that can travel over cosmological distances. If the beam points in our direction, we speak of a “blazar.”
The influence of supermassive black holes was thought to be limited, cosmically speaking, to their immediate surroundings – at least, that was the previous assumption. An international team of astronomers has now discovered, however, that these black holes of millions to billions of solar masses also affect much more distant objects and, in consequence, may even have influenced the formation of galaxies. The researchers from Germany, Canada and the United States observed that the diffuse gas in space absorbs the gamma radiation from black holes and is heated by it. This surprising finding has important implications for the formation of large structures in the universe.
At the center of every galaxy sits a supermassive black hole. It can emit high-energy gamma radiation and is then called a blazar. Other types of radiation, such as visible light or radio waves, pass through the universe essentially unhindered; this is not true for high-energy gamma radiation. Gamma rays interact with the optical light emitted by galaxies and are converted into elementary particles – electrons and positrons. These particles initially move at nearly the speed of light but are slowed down by the diffuse gas in the universe. Since this braking process generates heat, it heats the surrounding gas to extreme temperatures: the gas is on average ten times hotter, and in cosmic regions of below-average density even more than a hundred times hotter, than previously thought.
Temperature measurement in the forest of lines
“Blazars rewrite the thermal history of the universe,” said Christoph Pfrommer, one of the authors, from the Heidelberg Institute for Theoretical Studies (HITS). But how can such an idea be verified? The optical spectra of distant quasars show a large number of lines, the so-called line forest. The forest is created by the absorption of the quasars’ ultraviolet light by neutral hydrogen atoms in the early stages of the universe. If the gas is hot, then the weakest lines are broadened. This effect provides an excellent way to measure the temperature in the early universe and thus, in effect, to observe the universe in its youth.
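The link between gas temperature and line width can be made concrete with the standard thermal Doppler parameter b = sqrt(2 k_B T / m_H); the temperatures below are illustrative values, not results from the study.

import math

K_B = 1.380649e-23     # Boltzmann constant, J/K
M_H = 1.6735575e-27    # mass of a hydrogen atom, kg

def doppler_b_km_s(temperature_k: float) -> float:
    """Thermal Doppler parameter b = sqrt(2 k_B T / m_H), in km/s.
    Hotter gas -> larger b -> broader (and shallower) absorption lines."""
    return math.sqrt(2 * K_B * temperature_k / M_H) / 1000.0

# Example temperatures: a conventionally assumed photoheated value vs. gas ten times hotter.
for T in (1e4, 1e5):
    print(f"T = {T:8.0f} K  ->  b = {doppler_b_km_s(T):5.1f} km/s")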
The astrophysicists at HITS examined this newly postulated heating process for the first time with detailed computer simulations of the cosmological formation of structures. Surprisingly, the lines were broadened in just such a way that they exactly match the measured line statistics in the quasar spectra. “With these quasar data we can elegantly solve a long-standing problem,” says Ewald Puchwein, who performed the simulations on the supercomputer at HITS.
The impact on galaxy formation
What other consequences result from this new source of heat? The line forest in the quasar spectra is caused by density fluctuations in the universe. Over time, the densest fluctuations collapse to form the galaxies and galaxy clusters we see around us. If the diffuse gas is too hot, however, it cannot collapse, and the formation of dwarf galaxies is delayed or even completely suppressed. This could be the key to solving another long-standing problem in the theory of galaxy formation: why are significantly fewer dwarf galaxies observed near our Milky Way and in dense cosmic regions than predicted by cosmological simulations?
Volker Springel, head of a research group at HITS, said: “The most exciting aspect of the new blazar-heating process is that this single effect can explain several puzzles in cosmological structure formation.” The group now plans to refine the simulation models further in order to better understand the physical nature of blazars and their impact on today’s universe. | http://scienceray.com/astronomy/black-holes-have-an-effect-on-how-entire-galaxies/ | 13 |
19 | January 1, 2005
The relaxing sound of a babbling brook; the happy laughter of a giggling child; the rousing sound of a marching band. All of these and more enrich our daily lives and are dependent on our ability to hear sound. But what exactly is sound and how is it that we are able to hear it? Keep reading to learn the answers to these questions and what logically follows as implications for the theory of macroevolution.
Sound! What’s it all about?
Sound is the sensation we experience when vibrating molecules of our surrounding environment (usually air) strike the ear drum. When these changes in air pressure, as determined by measuring the pressure on the tympanic membrane (ear drum), are plotted on a graph against time, a wave form appears (see Figure 1). In general, the louder the sound, the more energy is required to produce it, and the greater the amplitude of air pressure change.
Figure 1. Wave B is louder than Wave A. Wave C is of a higher frequency than Wave A.
Loudness is defined by the decibel system using as its starting point the threshold for hearing (the level of intensity at which something can just barely be heard by the human ear). The scale is logarithmic: since a decibel is one-tenth of a bel, every ten-decibel step represents a ten-fold increase in intensity. For example, the threshold for hearing is designated as 0 and normal conversation occurs at about 50 decibels, so the difference in intensity is 10 raised to the power of 50 divided by 10, which equals 10 to the fifth power, or one hundred thousand times the intensity of threshold hearing. Or take for example a sound that causes you to feel severe pain in your ears and could be potentially damaging, which usually occurs at about the 140 decibel range; this sound, such as an explosion or a jet plane, would represent a 100 trillion-fold variation in sound intensity from threshold.
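A quick sketch of this arithmetic, using the decibel levels mentioned above (the code itself is only an illustration, not part of the original article):

def intensity_ratio(decibels: float) -> float:
    """Ratio of a sound's intensity to the threshold of hearing (0 dB)."""
    return 10 ** (decibels / 10)

for label, db in [("threshold of hearing", 0),
                  ("normal conversation", 50),
                  ("pain / damage range", 140)]:
    print(f"{label:22s} {db:3d} dB -> {intensity_ratio(db):,.0f} x threshold")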
The shorter the distance between the waves, i.e. the more waves that are packed into one second of time, the higher the pitch, or frequency, of the sound being heard. This is usually designated as cycles per second (cps), or hertz (hz). (see Figure 1) The human ear is generally capable of hearing sounds that range in frequency from 20 hz to 20,000 hz. Normal human speech involves sounds from the frequency range of 120 hz for males, to about 250 hz for females. Middle C on the piano is 256 hz and tuning A done by the oboe for orchestras is 440 hz. The ear is most sensitive to sounds that range between 1,000-3,000 hz.
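For reference, a short sketch relating frequency to period and to wavelength in air; the speed of sound (~343 m/s at room temperature) is a standard value assumed here, not one given in the article.

SPEED_OF_SOUND_AIR = 343.0   # m/s at roughly room temperature (assumed)

for f_hz in (20, 256, 440, 3000, 20000):
    period_ms = 1000.0 / f_hz                 # time for one cycle
    wavelength_m = SPEED_OF_SOUND_AIR / f_hz  # distance between wave crests
    print(f"{f_hz:6d} hz: period {period_ms:7.3f} ms, wavelength {wavelength_m:7.3f} m")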
A Concerto in Three Parts
The ear consists of three general regions designated as the external, the middle and the inner ear. Each plays its own unique and necessary part in allowing us to hear sounds.
Here’s a quick overview of which region does what and the components that are instrumental in accomplishing the function of hearing. (see Figure 2)
Figure 2. Anatomy of the Ear.
The pinna, or auricle, of the external ear acts like your own personal satellite dish by collecting and funneling sound waves to the external auditory meatus (opening to the ear canal). The sound waves then travel down the canal to the ear drum, or tympanic membrane, which by moving in and out in response to these changes in air pressure reproduces the vibration pattern of the sound source.
The three bones (ossicles) in the middle ear, called the malleus (hammer), which is directly connected to the tympanic membrane, the incus (anvil), and the stapes (stirrup), which is directly connected to the oval window of the cochlea, combine to transmit these vibrations to the inner ear. The middle ear is air-filled and is able to maintain the same air pressure on both sides of the tympanic membrane by way of the eustachian tube which connects up just behind the nose, and opens during swallowing to allow ambient air inside the middle ear chamber. Also, there are two skeletal muscles, the tensor tympani and the stapedius, which act to protect the ear from very loud sounds.
The inner ear, which contains the cochlea, first encounters these transmitted vibrations through the oval window, which results in a wave formation being set up in the internal structures of the cochlea. Within the cochlea sits the organ of Corti, which is the true organ of the ear, that is capable of converting these fluid vibrations into a nerve signal that is then sent off to the brain for interpretation.
So there you have it. Now let’s look at some specific aspects of each of these regions.
Evidently, the external ear is where all of the action begins. If we didn’t have an opening in the skull that allowed sound waves to pass on to the ear drum, we wouldn’t be able to talk to each other. For some, maybe that would be considered a good thing! How exactly this opening in the bony skull, called the external auditory meatus, came into being by random genetic mutation or incidental change, remains to be explained.
The pinna, or ear flap if you will, has been shown to be important in sound localization. The underlying tissue that forms the pinna, allowing it to be so flexible, is called cartilage and is similar to the cartilage found in most of the joints of the body. How cells that are capable of cartilage formation acquired this ability, never mind how they ended up extending themselves from each side of the head, to the bane of many young women, would seem to require some sort of satisfactory explanation if one is to espouse the macroevolutionary model for the development of hearing.
Anyone who has ever undergone the experience of having had their ears plugged up with wax can appreciate the fact that although they may not know what benefit wax provides for the ear canal, they’re sure glad that it’s natural composition did not result in a substance that has the consistency of cement. Even more importantly, those who must interact with these unfortunate people appreciate their ability to elevate the volume of their voice in order to generate enough sound wave energy to be heard.
Ear wax, officially called cerumen, is a mixture of secretions from various glands contained in the external ear canal, which combine with the material from shedding cells along the lining of the canal, to form a white to yellow to brown waxy substance. Ear wax serves to lubricate the external ear canal while at the same time protecting the ear drum from dust, dirt, insects, bacteria, fungi, and anything else that the external environment can throw at it.
Interestingly enough, the ear has its own ear wax clearing mechanism. The cells that line the external ear canal form near the middle of the ear drum and migrate out to the walls of the canal and continue outward to the external auditory meatus. Along the way they carry with them the overlying ear wax which is then sloughed off when it reaches the outside opening. Jaw movements appear to enhance this process. In effect the whole scheme is like one big conveyer belt for wax elimination from the ear canal.
The whole understanding of wax formation, its consistency that allows for proper hearing, while at the same time providing an adequate protective function, and how the ear canal naturally eliminates it to prevent hearing loss, would seem to require some logical explanation. How could mere step by step innovations brought on by either genetic mutation or incidental change account for all of these factors and still allow for proper function along the way?
The ear drum, or tympanic membrane, consists of specialized tissue whose consistency, shape, attachments, and exact positioning, allow it be in the right place for the right function. It is all of these factors that need to be accounted for to explain how it is able to resonate in response to incoming sound waves and thereby start the chain reaction that results in the vibration wave within the cochlea. Just because other organisms have somewhat similar features that allow them to hear does not in itself explain how these features came into existence by the undirected forces of nature. In this I am reminded of the quip by G. K. Chesterton in which he said “It is absurd for the evolutionist to complain that it is unthinkable for an admittedly unthinkable God to make everything out of nothing, and then pretend that it is more thinkable that nothing should turn itself into everything.” But I digress.
The middle ear takes on the task of transmitting the vibrations of the ear drum to the inner ear where lies the cochlea in which is contained the organ of Corti, which is the actual “organ of the ear”, much like how the retina is the “organ of the eye”. So the middle ear is essentially the “middle man” in the operation of hearing. As often occurs in business, the middle man takes something away from the monetary efficiency of what is being transacted. So too, the transmission of ear drum vibration through the middle ear does result in some loss of energy resulting in only 60% of the energy being sent down the line. However, if it were not for the energy spread across the larger tympanic membrane being focused on the smaller oval window by the three ossicles, combined with their inherent lever action, this energy transmission would be much less and hearing would be much more difficult for us.
A projection from the malleus, (the first ossicle), called the manubrium, is directly attached to the ear drum. The malleus itself is connected to the second ossicle, the incus, which is itself attached to the stapes (the stirrup) which has a foot plate that is attached to the oval window of the cochlea. As mentioned already, the lever like actions of these three connected ossicles allow the vibration to become amplified on their way to the cochlea.
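The "focusing" of energy from the large ear drum onto the small oval window can be put into rough numbers. The areas and lever ratio below are commonly quoted textbook approximations — assumptions for illustration, not figures from this article.

# Rough pressure amplification by the middle ear.
# Values are commonly cited textbook approximations, used only to
# illustrate the area-ratio-plus-lever idea described in the text.
eardrum_area_mm2 = 55.0      # effective vibrating area of the tympanic membrane
oval_window_area_mm2 = 3.2   # area of the stapes footplate / oval window
lever_ratio = 1.3            # malleus-incus lever arm ratio

pressure_gain = (eardrum_area_mm2 / oval_window_area_mm2) * lever_ratio
print(f"approximate pressure gain: {pressure_gain:.0f}x")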
Review of two of my previous columns, namely “Hamlet Meets Modern Medical Science Parts I & II”, may allow the reader to see what is necessary to be demonstrated regarding bone formation itself. But how these three perfectly formed and interconnected bones ended up in precisely the right position to allow for the proper transmission of sound wave vibration requires one more “just-so” explanation of macroevolution at which we must look askance.
Curiously, within the middle ear exist two skeletal muscles, the tensor tympani and the stapedius. The insertion of the tensor tympani is attached to the manubrium of the malleus and on contraction it pulls the tympanic membrane back into the middle ear, thereby limiting its ability to resonate. The insertion of the stapedius is attached to the foot plate of the stapes and on contraction it pulls it off of the oval window, thereby reducing the amount of vibration that is transmitted to the cochlea.
Together these two muscles reflexively try to protect the ear from overly loud sounds which can cause pain and damage. The time that it takes for the neuromuscular system to react to a loud noise is about 150 milliseconds, which is about 1/6 th of a second. So sudden loud sounds, like from gunfire and explosions, are not as easily protected against as much as prolonged sounds or continuously noisy environments.
Experience tells us that loud sounds can sometimes be painful, just like overly bright light. The functional parts for hearing, such as the ear drum, the ossicles, and the organ of Corti, perform their function by moving in response to sound wave energy. Too much movement can cause damage and pain, just like if you overextend your elbow or your knee. So, it would seem that the ear has developed some sort of protection against self-injury if exposed to prolonged loud sounds.
Review of three of my prior columns, namely “Wired for Much More than Sound Parts I, II and III” which together explain neuromuscular function at a biomolecular and electrophysiological level, will allow the reader to better appreciate the inherent complexity that is contained within what would seem to be a natural protection against hearing loss. What is left to understand is how these two perfectly placed muscles ended up in the middle ear doing what they do, and doing them reflexively. What sort of genetic mutation or incidental changes occurred one step at a time to allow for such a complex development within the temporal bone of the skull?
Anyone, who on landing in an airplane, has experienced the sensation of pressure in their ears, associated with diminished hearing, and feeling like they are talking in a vacuum, has in effect demonstrated to themselves the importance of the Eustachian tube (auditory tube) that runs between the middle ear and the back of the nose.
The middle ear is an enclosed, air-filled chamber, in which the air pressure on either side of the tympanic membrane must be equal in order to allow for adequate mobility, which is referred to as its compliance. This is a measure of how easily the ear drum will move when stimulated by sound waves. The higher the compliance, the easier it is for the ear drum to resonate in response to sound, and the lower the compliance the more difficult it is to move in and out and therefore the threshold at which one can hear is raised i.e. now sounds have to be louder to be heard.
The air in the middle ear tends to be absorbed by the body which can result in the reduction of air pressure in the middle ear causing a reduction in tympanic membrane compliance. This occurs because instead of staying in the right position, the tympanic membrane will tend to be pushed into the middle ear by the ambient air pressure that is being exerted down the external ear canal since it is higher than the pressure in the middle ear.
The Eustachian tube connects the middle ear with the back of the nose and pharynx.
On swallowing, yawning and chewing, the associated muscular action tugs open the Eustachian tube which allows ambient air to enter and go up into the middle ear and replace any air that has been absorbed by the body. In this way, the tympanic membrane can maintain its optimal compliance which allows for adequate hearing.
Now, let’s go back to the airplane scenario. While you are cruising at 35,000 feet, the air pressure on both sides of the ear drum is the same, although the absolute amount is less than it would be at sea level. The important thing here is not the actual air pressure that is being exerted on either side, but that whatever the air pressure is on either side of the ear drum is the same. As you begin to descend, the ambient air pressure in the cabin begins to rise and immediately exerts itself against the ear drum from the external ear canal. The only way to correct this imbalance of air pressure across the ear drum is to be able to open up the Eustachian tube to allow the new ambient air pressure in. This is usually accomplished by chewing gum or sucking on candy that makes you swallow and apply that tugging action on the tube.
The speed at which the descent occurs and the resulting rapidly changing ambient pressure increases makes most people experience at least some sort of plugged sensation in their ears. If someone has or has recently had a cold, sinus problems or a sore throat, their Eustachian tube may not work as well during this pressure stressing event and they may experience severe pain, prolonged congestion and occasionally a severe hemorrhage in their middle ear!
But Eustachian tube dysfunction doesn’t end there. For if someone has chronic problems, over time the vacuum effect in the middle ear can pull fluid out of the capillaries which if not tended to can result in something known as glue ear. This is prevented and treated by myringotomy and tubes. The ENT surgeon puts a small hole in the ear drum and places tubes there so that any fluid that develops can migrate out of the ear and this serves to replace the Eustachian tube function until whatever has been causing it can be corrected, thereby preserving proper hearing and preventing damage to the structures within the middle ear.
It’s great that modern medicine is able to tackle some of these problems when the Eustachian tube doesn’t work right. But one has to immediately ask oneself how this tube came into being in the first place and which parts of the middle ear came first and how did it all function without the others? Does a step by step development based on some as yet unknown genetic mutation or incidental change even make sense here?
A close inspection of the parts of the middle ear and their absolute necessity for proper hearing that would allow for survival shows that there is an air of irreducible complexity about them. But none of what we’ve looked at so far will in itself result in us being able to hear. There’s one more piece of the puzzle to look at that has its own incredibly complex, and might I say, beautiful mechanism that takes the vibrations from the middle ear and converts them into a nerve message for the brain to interpret as sound.
Hardwired for Sound
The nerve cells that are responsible for sending the messages to the brain for hearing are located in the “organ of Corti” which is housed in the cochlea. The cochlea consists of three interconnected coiled tubes which spiral together for about two and a half turns.
(see Figure 3). The upper and lower tubes are both surrounded by bone and are called the scala vestibuli and the scala tympani respectively. Both of these tubes contain a fluid called perilymph whose sodium (Na+) ion and potassium (K+) ion contents are similar to other extracellular fluids (outside the cells) i.e. they have a high Na+ ion concentration and a low K+ ion concentration in contradistinction to intracellular fluid (inside the cells).
Figure 3. Anatomy of the Cochlea.
They communicate with each other at the tip of the cochlea through a small opening called the helicotrema.
The middle tube, which is embedded in membranous tissue, is called the scala media and it contains a fluid called endolymph which has the unique property of being the only extracellular fluid in the body that has a high concentration of K+ ions and a low concentration of Na+ ions. The scala media does not directly communicate with the other tubes and is separated from the scala vestibuli by flexible tissue called Reissner’s membrane and from the scala tympani by a flexible basilar membrane. (see Figure 4)
Figure 4. Anatomy of the Organ of Corti.
The organ of Corti sits suspended, like the Golden Gate Bridge, on the basilar membrane that is located between the scala tympani and the scala media. The nerve cells for hearing, called hair cells, because of their hair-like projections, sit on the basilar membrane which allows the bottom of the cells to be in contact with the perilymph of the scala tympani. (see Figure 4) The hair-like projections of the hair cells, which are known as stereocilia, sit on top of the hair cell and therefore are in contact with the scala media and the endolymph contained within it. The significance of this will become more apparent when we come to discuss the underlying electrophysiological mechanism behind auditory nerve stimulation.
The organ of Corti consists of about 20,000 of these hair cells that sit on the basilar membrane which runs for the entire spiraled cochlea, a distance of about 34 mm. In addition, the thickness of the basilar membrane varies from about 0.1mm at the beginning, the base, to about 0.5mm at the end, the apex, of the cochlea. This feature will become important when we discuss pitch or frequency.
Now remember, sound waves have entered the external ear canal where they have caused the ear drum to resonate at an amplitude and frequency that is inherent within the sound itself. The inward and outward motion of the ear drum allows vibration energy to be transferred to the malleus, which is connected to the incus, which is in turn connected to the stapes. In the ideal circumstance, the air pressure on either side of the ear drum is equal, allowing for the ear drum to have a high compliance for motion, because of the Eustachian tube’s ability to allow ambient air into the middle ear from the back of the nose and throat when yawning, chewing and swallowing occur. This vibration is now transferred from the stapes to the cochlea via the oval window. Now we’re ready for action.
The resulting transfer of vibration energy to the cochlea causes a fluid wave to be transmitted through the perilymph in the scala vestibuli. However, because the scala vestibuli is encased in bone and is separated from the scala media, not by a rigid wall, but by a flexible membrane, this vibration wave is also transmitted to the endolymph in the scala media by way of Reissner’s membrane. The resulting fluid wave in the scala media is itself responsible for causing the flexible basilar membrane to also undulate. These waves peak and then die down quickly somewhere along the basilar membrane in direct relationship to the frequency of the sound being heard. The higher frequency sounds cause more motion at the base or thinner part of the basilar membrane, and the lower frequency sounds cause more motion at the apex or thicker part of the basilar membrane, at the helicotrema. Eventually the wave action comes into the scala tympani via the helicotrema and dissipates through the round window.
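One widely used empirical description of this place-to-frequency relationship is the Greenwood function; the sketch below uses the standard human parameter values, which are an outside assumption rather than something stated in the article.

import math

def greenwood_frequency_hz(fraction_from_apex: float) -> float:
    """Greenwood's empirical place-frequency map for the human cochlea:
    F = A * (10**(a*x) - k), with x the fractional distance from the apex.
    Parameter values (A=165.4, a=2.1, k=0.88) are the usual human fit."""
    A, a, k = 165.4, 2.1, 0.88
    return A * (10 ** (a * fraction_from_apex) - k)

for x in (0.0, 0.25, 0.5, 0.75, 1.0):   # apex ... base
    print(f"{x:4.2f} of the way to the base -> ~{greenwood_frequency_hz(x):7.0f} hz")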
One can immediately see that if the basilar membrane is waving in the “breeze” of endolymphatic motion within the scala media, that the suspended organ of Corti, with its hair cells, is going to undergo a trampoline-like effect in response to this wave motion energy. From here on, in order to appreciate the complexity and to truly understand what is going on for hearing to take place, the reader must have a knowledge of neuron function. If you haven’t read it already, I suggest that you look at “Wired for Much More than Sound Parts I and II” which reviews neuron function.
Hair cells at rest have a membrane potential of about -60mV. Remember from neuron physiology that the resting membrane potential exists because of the tendency for more K+ ions to leave the cell through K+ ion channels than Na+ ions entering through Na+ ion channels when the cell is not stimulated. However, this tendency is predicated on the fact that the cell membrane is in contact with extracellular fluid that is usually low in K+ ions and high in Na+ ions, like the perilymph with which the base of the hair cells are in contact.
When the stereocilia, i.e. the hair-like projections of the hair cell, are stimulated to move by wave action, this causes them to bend. This motion of the stereocilia results in certain transduction channels being opened that are very permeable to K+ ions. So when the organ of Corti experiences this trampoline-like effect from the wave action brought on by the vibration from the resonance of the ear drum through the three ossicles, this results in K+ ions entering the hair cell, which causes it to depolarize, i.e. become less negative in its membrane potential.
“Hold it”, I hear you say. “I just reviewed all of that stuff on neurons and to my way of thinking when the transduction channels open up, K+ ions should flow out of the cell and cause hyperpolarization, not depolarization.” And normally you’d be absolutely right because in the usual set of circumstances when specific ion channels open up to increase the permeability of that specific ion across the membrane, it is Na+ ions that go into the cell and K+ ions that come out because of the relative concentration gradients of Na+ ions and K+ ions across the membrane.
But remember, we’re not dealing with the usual set of circumstances here. The apex of the hair cell is in contact with the endolymph of the scala media and not the perilymph of the scala tympani that is contact with the base of the hair cell. And remember, we stressed the point above that the endolymph has the unique distinction of being the only fluid outside of the cell that has a high concentration of K+ ions. So high that when those transduction channels that are permeable to K+ ions open up in response to the bending motion of the stereocilia, K+ ions now enter the cell and thereby causes it to depolarize.
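One way to see why potassium rushes in rather than out is to compare the equilibrium (Nernst) potential for K+ with the voltage across the apical membrane. The ion concentrations and the roughly +80 mV endocochlear potential below are typical physiology-textbook figures, used here only as illustrative assumptions.

import math

R, T, F = 8.314, 310.0, 96485.0   # gas constant, body temperature (K), Faraday constant

def nernst_mV(conc_out_mM: float, conc_in_mM: float, z: int = 1) -> float:
    """Equilibrium potential E = (RT/zF) * ln([out]/[in]), in millivolts."""
    return 1000.0 * (R * T) / (z * F) * math.log(conc_out_mM / conc_in_mM)

k_in = 140.0            # K+ inside the hair cell (mM), illustrative
k_endolymph = 150.0     # K+ in the endolymph bathing the stereocilia (mM), illustrative
e_k_apex = nernst_mV(k_endolymph, k_in)   # close to 0 mV: K+ is near equilibrium

# Potential across the apical membrane (inside minus outside) when the cell rests at
# -60 mV and the endolymph sits at about +80 mV relative to perilymph.
endocochlear_potential = +80.0
hair_cell_potential = -60.0
apical_membrane_potential = hair_cell_potential - endocochlear_potential  # about -140 mV

driving_force = apical_membrane_potential - e_k_apex   # negative -> inward K+ current
print(f"E_K at the apical face: {e_k_apex:+.1f} mV")
print(f"driving force on K+:    {driving_force:+.1f} mV (inward -> depolarization)")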
The depolarization of the hair cell causes voltage-gated calcium ion (Ca++) channels in its base to open up and to allow Ca++ ions into the cell. This results in a neurotransmitter from the hair cell being released to stimulate a nearby cochlear neuron which will ultimately send the message on to the brain.
The frequency of the sound that generates the fluid wave determines where it will peak along the basilar membrane. As mentioned above, this is dependent on the basilar membrane’s thickness, in which higher pitched sounds cause more activity at the thinner base, and lower frequency sounds result in more activity at the thicker apex.
One can immediately see that the hair cells that are closer to the base will maximally respond to very high pitched sounds at the upper limit of human hearing (20,000 hz) and the hair cells at the extreme apex will maximally respond to sounds at the lower limit of human hearing (20 hz).
The cochlear nerve fibers demonstrate tonotopic mapping in that they are more sensitive to specific frequencies which are ultimately mapped out in the brain. This means that specific cochlear neurons service specific hair cells, and their nerve signals are eventually transmitted to the brain which is then capable of determining the pitch of the sound based on which hair cells were stimulated. In addition, it has been shown that cochlear nerve fibers have spontaneous activity so that when they are stimulated by a sound of a specific pitch with a particular amplitude, this results in a modulation of their activity which is ultimately analyzed by the brain and is interpreted as a particular sound.
In summary, the hair cells that are located on a specific spot on the basilar membrane will maximally bend in response to a particular pitch of sound wave that results in that spot on the basilar membrane receiving the crest of the wave. The resulting depolarization of that hair cell will cause it to release a neurotransmitter which will stimulate a nearby cochlear neuron that sends its message along to the brain where it is interpreted as the sound that was heard with a certain amplitude and frequency based on which cochlear neuron sent the message.
The pathways for all of this auditory nervous activity have largely been mapped out. There are more neurons that are contained in junction boxes that receive these messages and then pass them on to other neurons. Eventually the messages reach the auditory cortex of the brain for final analyses. But how the brain then converts this myriad of neurochemical messages into what we know as hearing is as yet totally unknown.
The impediments to solving this problem may indeed be as mysterious as life itself!
A brief review of cochlear structure and function will provide the reader with many questions to be asked of those who are enamored with the theory that all life came about through the random forces of nature without any intelligent input. Here are a few of the major factors whose development over time are in need of plausible explanations given their absolute necessity for the function of hearing in humans.
Is it possible that these developed one step at a time by the processes of genetic mutation or incidental change? Or failing that, is it likely that each of these parts served some, as yet unknown, function in multiple other progenitors, which then came together to allow for human hearing as we know it?
And if either of these explanations are considered valid, then what exactly were these changes in principle, and in fact, to allow for the development of such a complex system that allows for the transduction of air waves into something that the human brain perceives as sound?
The development of the three coiled tubes, called the scala vestibuli, the scala media and the scala tympani, which form the cochlea
The presence of the oval window to receive the vibration from the stapes and the round window to allow the wave action to dissipate
The presence of Reissner’s membrane to allow the vibration wave to be transmitted to the scala media
The basilar membrane, with its variable thickness and its perfect location between the scala media and the scala tympani, to play a part in the function of human hearing
The construction of the organ of Corti and its position on the basilar membrane so that it may experience the trampoline effect that is instrumental for human hearing
The presence of the hair cells within the organ of Corti whose stereocilia play an all important role for human hearing and without which, it would not exist
The presence of perilymph in the upper and lower scalae and endolymph in the scala media
The presence of the cochlear nerve fiber in close proximity to the hair cells that are located on the organ of Corti
A Final Word
When I set out to write this column I first looked at the original medical physiology text that I used when I was in medical school 30 years ago. In this book the authors noted the unique make-up of endolymph as compared to all of the other extracellular fluids in the body. At this point in time, it was “unsettled” as to the exact reason for this unusual set of circumstances and the authors freely admitted that although it was known that the action potential that was generated by the auditory nerve was related to the movement of the hair cells, how this happened was as yet unknown. So what are we to make of this now that we have a better understanding of how all of this works? Simply this:
Is there anyone who upon listening to their favorite piece of music thinks that the notes being played in that specific order have come about by the random forces of nature?
No! One realizes that the music one is enjoying was written down by a composer so that others would be able to enjoy what he had created and heard in his own mind. He makes sure of this by signing his own name to the original manuscript so that the world will know who created this piece. To think otherwise would result in most people being subjected to ridicule.
Similarly, as one listens to the cadenza being played in a violin concerto, is it likely to come into one’s mind that the notes emanating from that Stradivarius have occurred simply by the random forces of nature? No! One intuitively knows that there is a talented virtuoso who is playing specific notes in order to produce the sounds that she wants the listener to hear and appreciate. So much so, that her name is emblazoned on the CD jacket so that those who know of her talent will be induced to buy it.
But how is it that one is able to hear what is being played in the first place? Did it all come about by the undirected forces of nature as evolutionary biologists believe? Or is it possible that at some point in time an intelligent designer made its presence known, and if so, how would we be able to detect it? Are there any signatures or emblazoned names within nature that may help direct our attention to them?
There are numerous examples of what I consider intelligent design within the human body which I have detailed over the last year within this column. But when I came to the realization of how hair cell motion results in the opening of K+ ion transduction channels causing the movement of K+ ions into the hair cell and its depolarization, I was literally dumbfounded. For I suddenly realized that here was such a “signature”. Here was an example of the intelligent designer letting it be known that just when humanity thinks it knows all there is to know about life and how it developed, it is faced with something that should give it pause.
Remember that the almost universal mechanism for neuron depolarization occurs by the influx of Na+ ions from extracellular fluid into the neuron through Na+ ion channels after sufficient stimulation. The development of this system in itself has as yet not been sufficiently explained by evolutionary biologists. However, the whole system depends upon the existence and stimulation of Na+ ion channels in combination with there being a higher concentration of Na+ ions outside of the cell as compared to inside the cell. This is how the neurons of the body work.
Now we come to find out that there exists within the body a set of neurons that work in exactly the opposite way. They require, not Na+ ions to enter the cell to cause depolarization, but K+ ions. On the face of this, it would seem to be an impossibility because everyone knows that all extracellular fluids in the body contain very low amounts of K+ ions in comparison to the inside of the neuron and therefore it would be physiologically impossible for K+ ions to flood into the neuron to cause depolarization the way that Na+ ions do.
What was once considered as “unsettled” has now become crystal clear as to the reason why endolymph must have the unique property of being the only extracellular fluid in the body with a high K+ ion content and a low Na+ content. It is located precisely in the right place so that when the K+ ion permeable transduction channels open in the membrane of the hair cells, depolarization will take place. Evolutionary biologists must be able to explain how both, seemingly opposing sets of circumstances, could arise and how they could occur in the right place in the body where it is needed. Much like the notes for a concerto being placed just right by the composer and then being played on the violin by the virtuoso. To me, that’s an intelligent designer saying to us; “Do you see the beauty in what I have created?”
Of course, for a person who sees life and how it functions only through a materialistic and naturalistic filter, the idea of an intelligent designer is an impossibility. The fact that all of the questions that I have proposed for macroevolution in this and other columns, are highly unlikely to receive plausible answers in the future, does not seem to deter or even concern the proponents of the theory that all life has developed from natural selection acting on random variation.
As William Dembski so adroitly observed in The Design Revolution: “Darwinists take this present lack of insight into the workings of an unevolved designer, not as remediable ignorance and not as evidence that the designer’s capacities far outstrip ours, but as proof that there is no unevolved designer, period.”
Next month we’ll be looking at how the body coordinates its muscular activity in order to allow us to sit, stand, and stay mobile: in my last installment on neuromuscular function.
See you then in: Wired for Much More than Sound Part VIII: Run for your Life
Howard Glicksman M. D. graduated from the University of Toronto in 1978. He practiced primary care medicine for almost 25 yrs in Oakville, Ontario and Spring Hill, Florida. He recently left his private practice and has started to practice palliative medicine for a Hospice organization in his community. He has a special interest in how the ethos of our culture has been influenced by modern science's understanding and promotion of what it means to be a human being. Comments and questions about this column or any of the previous ones are welcome at [email protected]
Copyright 2004 Dr. Howard Glicksman. All rights reserved. | http://arn.org/docs/glicksman/eyw_050101.htm | 13 |
45 | Ion propulsion, a futuristic technology that for decades catapulted spacecraft through the pages of science fiction novels, is now a reality. A Glenn-designed ion engine, just 12 inches (30 centimeters) in diameter, is the main propulsion source for Deep Space 1, a 20th Century spacecraft now off on its primary mission to validate technologies for 21st century spacecraft. (Image right: The flight hardware ion engine. Credit: NASA)
An ion propulsion system converts power from the spacecraft power system into the kinetic energy of an ionized gas jet. That jet, as it exits the spacecraft, propels it in the opposite direction. The system, or any electric propulsion system, consists of four major components: a computer for controlling and monitoring system performance; a power source (on Deep Space 1 (DS1) this source is the solar concentrator arrays); a power processing unit for converting power from the solar arrays to the correct voltages for the engine; and the thruster, or engine, itself.
The fuel used in DS1's ion engine is xenon, a chemically inert, colorless, odorless, and tasteless gas. The xenon fuel fills a chamber ringed with magnets. When the ion engine is running, electrons emitted from a cathode strike atoms of xenon, knocking away one of the electrons orbiting an atom's nucleus and making it into an ion. The magnets' magnetic field controls the flow of electrons and, by increasing the electrons' residence time in the chamber, increases the efficiency of the ionization. (Image left: Overall ion engine workings. Credit: NASA)
At the rear of the chamber is a pair of metal grids that are charged with 1280 volts of electric potential. The force of this electric field exerts a strong "electrostatic" pull on the xenon ionsmuch like the way that bits of lint are pulled to a pocket comb that has been given a static electric charge by rubbing it on wool. The xenon ions shoot past the grids at speeds of more than 88,000 miles per hour (146,000 kilometers per hour), continuing right on out the back of the engine and into space. These exiting ions produce the thrust that propels the spacecraft. A second electron-emitting cathode, downstream of the grids, neutralizes the positive charge of the ion beam to keep the spacecraft neutral with respect to its environment.
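The quoted exhaust speed follows, at least approximately, from energy conservation for a singly charged xenon ion falling through the grid voltage. The sketch below is idealized — it ignores real-engine losses — so it lands only in the same ballpark as the figure given above.

import math

ELEMENTARY_CHARGE = 1.602e-19        # coulombs
XENON_ION_MASS = 131.3 * 1.661e-27   # kg (atomic mass in amu times kg per amu)

def exhaust_speed_m_s(grid_voltage: float) -> float:
    """Ideal speed of a singly charged ion: (1/2) m v^2 = q V."""
    return math.sqrt(2 * ELEMENTARY_CHARGE * grid_voltage / XENON_ION_MASS)

v = exhaust_speed_m_s(1280.0)
print(f"ideal exhaust speed: {v/1000:.1f} km/s (~{v*3.6:,.0f} km/h)")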
At full throttle, the ion engine consumes about 2300 watts of electrical power and puts out 0.02 pound (90 millinewtons) of thrust. This is comparable to the force exerted by a single sheet of paper resting on the palm of a hand. Typical chemical on-board propulsion systems, on the other hand, produce far greater thrust 100 to 500 pounds (450 to 2250 newtons) but for far shorter times. A chemically propelled spacecraft gets its big boost and then coasts at constant speed until the next boost. But an ion engine can produce its small thrust continually and thereby provide near constant acceleration and, so, shorter travel times.
Ion propulsion is also 10 times more fuel efficient than chemical on-board propulsion systems. This greater efficiency means less propellant is needed for a mission. In turn, the spacecraft can be smaller and lighter, and the launch costs lower.
Deep Space 1 carries 178 pounds (81 kilograms) of xenon propellant, which is capable of fueling engine operation at one-half throttle for over 20 months. Ion propulsion will increase the speed of DS1 by 7900 miles per hour (12,700 kilometers per hour) over the course of the mission.
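The connection between the propellant load and the achievable change in speed can be sketched with the Tsiolkovsky rocket equation. The spacecraft mass and effective exhaust velocity below are illustrative assumptions, not values from this article; with them, the full xenon load corresponds to more delta-v than the mission figure quoted above, which is consistent with only part of the propellant being budgeted for primary thrusting.

import math

def delta_v(exhaust_velocity_m_s: float, wet_mass_kg: float, dry_mass_kg: float) -> float:
    """Tsiolkovsky rocket equation: delta-v = v_e * ln(m0 / mf)."""
    return exhaust_velocity_m_s * math.log(wet_mass_kg / dry_mass_kg)

# Illustrative numbers: a ~490 kg spacecraft carrying 81 kg of xenon, with an
# effective exhaust velocity of ~30 km/s once engine losses are included.
wet_mass, xenon_load = 490.0, 81.0
dv = delta_v(30_000.0, wet_mass, wet_mass - xenon_load)
print(f"available delta-v: {dv:,.0f} m/s (~{dv*3.6:,.0f} km/h)")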
Electric propulsion technology, which includes ion engines, has been studied at Glenn (at that time the NASA Lewis Research Center) since the 1950's. Ion propulsion technology development at Glenn began when Dr. Harold Kaufman, now retired from NASA, designed and built the first broad-beam electron-bombardment ion engine in 1959. It used mercury as fuel, but is otherwise similar to the engine flying today on DS1. The laboratory tests of variations of the original ion engine were promising enough for Glenn to begin suborbital flight tests in the early 1960's. By 1964, an ion engine launched on the Space Electric Rocket Test I (SERT I) operated for all of its planned 31 minutes before returning to Earth.
In 1970, two modified ion engines were launched on SERT II; one operated for nearly three months and the other for more than five. Both engines suffered grid shorts, believed to have been caused by debris from thruster grid wear, before the planned end of the mission. After an attitude control maneuver cleared its grid of the short in 1974, one of the engines was started and was operated on and off for six more years.
The information learned from these genuine space success stories was used to refine and improve the technology that today flies on communications satellites and, of course, on DS1.
Early ion engines used mercury or cesium instead of xenon as propellants. (Glenn researchers had worked on cesium ion engine technology in the mid 1950's.) But both proved to be difficult to work with. At room temperature, mercury is a liquid and cesium is a solid, making them easy to store. But both had to be heated to turn them into gases. Then there was the cleanup. After exiting the ion engine, some mercury or cesium atoms would condense onto the ground test hardware, causing numerous cleanup difficulties. In the 1970's, NASA managers decided that if ion propulsion research was to continue, it would have to be environmentally clean and less hazardous. Glenn researchers soon turned to xenon as a cleaner, simpler fuel for ion engines, with many of the same characteristics as mercury.
One of the first xenon ion-engine-like devices ever flown was a Hughes Research Laboratories design launched in 1979 on the Air Force Geophysics Laboratory's Spacecraft Charging at High Altitude (SCATHA) satellite. It was used, not to propel the spacecraft, but to change its electrical charge. Researchers then studied the effects of the "charging" on spacecraft system performance. In 1997, Hughes launched the first commercial use of a xenon ion engine on the communications satellite PanAmSat 5. This ion engine is used for stationkeeping, that is, keeping the satellite in its proper orbit and orientation with respect to Earth.
In the early 1990's, NASA identified improved electric propulsion as an enabling technology for future deep space missions. Glenn engineers believed that their ion engine technology was the closest to being ready for long, complex missions. NASA Glenn partnered with the Jet Propulsion Laboratory (JPL) in the NASA Solar Electric Power Technology Application Readiness (NSTAR) project. The purpose of NSTAR was to develop a xenon-fueled ion propulsion system for deep space missions. Glenn developed the engines and power processors, and JPL was responsible for the development of the xenon feed system, the diagnostics, and integration of the hardware into the spacecraft. (Image left: Ground test setup in the Glenn Research Center's Electric Propulsion Laboratory. Credit: NASA)
In 1996, the prototype engine built at Glenn endured 8000 hours of operation in a JPL vacuum chamber that simulates conditions of outer space. The results of the prototyping were used to define the design of flight hardware that was built for DS1 by Hughes Electron Dynamics Division and Spectrum Astro Inc.
One of the challenges was developing the compact, lightweight power processing unit that converts power from the solar arrays into the voltages needed by the engine. The NSTAR team contractor, Hughes, designed a 2,500-watt power processor that weighs a little over 33 pounds (15 kilograms) and has an efficiency of 93 percent.
The first spacecraft in NASA's New Millennium Program of missions to flight-test new technologies, DS1 blasted into space in October 1998 aboard a Delta II launch vehicle. Now on its own and headed toward a July 1999 flyby of asteroid 1992 KD, the spacecraft is testing 12 new technologies for use on future space science missions. Among those 12 is DS1's main propulsion source, the Glenn-designed NSTAR ion engine.

Image right: Artist's conception of Deep Space 1 in flight. Credit: NASA
The following March, the spacecraft was 30 million miles from Earth. The ion engine had surpassed its performance goals by thrusting continuously for over 330 hours, the longest continuous thrusting of any deep space propulsion system, and it had operated for over 1,200 hours in total.
The next New Millennium Program mission to use Glenn ion engine technology will be Space Technology 4/Champollion, which will rendezvous (match orbits) with the periodic Comet Tempel 1. Three NSTAR ion engines (with minor modifications) will provide the primary propulsion for the spacecraft. The planned launch date is in 2003.
Glenn engineers are also responding to and anticipating mission planners' needs by developing both higher and lower power ion propulsion systems.
Ion engines with extended performance and higher-power NSTAR engines, in the 5-kilowatt and 0.04-pound-thrust range, are candidates for propelling spacecraft to Europa, Pluto, and other small bodies in deep space. Glenn engineers plan to achieve higher ion engine power levels by retrofitting the NSTAR engine with enhanced components.
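As a rough sanity check on the quoted figures, the relation between input power, exhaust velocity, and thrust (T = 2*eta*P/v_e, with v_e = Isp*g0) can be evaluated directly. The snippet below is an illustrative sketch only; the specific impulse and efficiency values are assumed round numbers, not official NSTAR specifications.

```python
# Rough consistency check: does ~5 kW of input power correspond to
# roughly 0.04 lbf of thrust for an NSTAR-class ion engine?
# Assumed, illustrative values (not official specifications):
#   specific impulse ~ 3300 s, total efficiency ~ 0.6

G0 = 9.80665                     # standard gravity, m/s^2
LBF_PER_NEWTON = 1.0 / 4.448222  # conversion from newtons to pounds-force

def thrust_newtons(power_watts, isp_seconds, efficiency):
    """Thrust from jet power: T = 2 * eta * P / v_e, where v_e = Isp * g0."""
    exhaust_velocity = isp_seconds * G0
    return 2.0 * efficiency * power_watts / exhaust_velocity

t_n = thrust_newtons(power_watts=5000.0, isp_seconds=3300.0, efficiency=0.6)
print(f"Thrust: {t_n:.3f} N = {t_n * LBF_PER_NEWTON:.3f} lbf")
# Prints roughly 0.185 N, i.e. about 0.04 lbf, consistent with the text.
```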
Low power (100 to 500 watts) systems can be used to deliver miniaturized robot spacecraft (launched using small, inexpensive rockets) to interesting space bodies including comets, asteroids, and planets. Such missions will allow for the delivery of instruments, sensors, and mobile vehicles to the bodies. Laboratory tests on low-power, light-weight ion propulsion system components and subsystems are now underway at Glenn.
Aborigines were the first inhabitants of Australia, migrating there at least 40,000 years ago. While Asian explorers had landed in northern Australia well before AD 1500, it was not until the 17th century that the first Europeans from Holland managed to sail to Australia. Of the several Dutch expeditions into the southern oceans, the most successful was that of Abel Tasman, who in 1642 discovered an island now known as Tasmania. However, the Dutch did not formally occupy Australia, finding little there of value for European trade, opening the way for the later arrival of the English. Starting in 1765, Captain James Cook led a series of expeditions to Australia and subsequently supported settlement there. Curiously, it was a rising crime rate in England that led to the occupation of Australia. After the American Revolution ended in 1783, Britain moved quickly to establish its first settlement in Australia as a place to send its convicts, since it could no longer ship British convicts to America. In 1786, the British government announced that it would establish a penal settlement at Botany Bay in Australia, and in 1788, retired Royal Navy captain Arthur Phillip arrived at Botany Bay with more than 1,450 passengers. This included 736 convicts, 211 marines, 20 civil officers, and 443 seamen. Subsequently, he moved the fleet north to Port Jackson, an excellent natural harbor, and began the first permanent settlement on January 26, 1788 (now known as Australia Day). This settlement was subsequently named Sydney in honor of Lord Sydney, Britain's Home Secretary, who was responsible for the colony. Food supply was a major problem in the early settlement days, and needed food supplies came mainly from Norfolk Island, which Phillip had occupied in February 1788, an island that later served as a jail for convicts who committed new crimes while serving their sentence in Australia. (In fact, the later Warden of Norfolk Prison, Captain Alexander Maconochie, is legendary for having instituted a then controversial practice of releasing convicts early for good behavior as a means of managing an unruly population of convicts. This innovation resulted in Maconochie being dubbed "the father of parole," and also led to his dismissal as warden.) The New South Wales Corps replaced the Royal Marines in 1792. They were given grants of land and became excellent farmers. Through controlling the price of rum, used as an internal means of exchange, they posed a threat to the governors. When Captain William Bligh (whose crew aboard the Bounty had mutinied in the Pacific) became governor in 1806 and threatened the corps with the loss of their monopoly, they responded with a so-called Rum Rebellion. Bligh was arrested and sent back to London, giving the leaders of the corps a victory. Coincidentally, one of the corps leaders, John Macarthur, found a solution to the colony's lack of valuable exports by interesting British manufacturers in Australian wool. After 1810, the wool of the Australian merino sheep became the basis for a major economic activity. The New South Wales Corps was sent home by the next governor, to be followed by more free settlers claiming farmland on which convicts could serve as laborers. As convicts completed their sentences, they agitated for land and opportunities and became known as emancipists, opposed by the free settlers, who were known as exclusives.
In 1825 the island settlement of Van Diemen's Land (today's Tasmania) became a separate colony, having been established in 1803 as a penal colony because of fear that the French would claim the island. Sheep grazing expansion caused a growth of land claims by squatters and resulted in the colonization of the Port Phillip district that became the colony of Victoria in 1850, with its capital at Melbourne. Another colony to the north, Queensland, was settled by graziers and separated from New South Wales in 1859. Other settlements of European people were subsequently established elsewhere, resulting in the creation of six independent British colonies: New South Wales, Victoria, Queensland, Western Australia, South Australia and Tasmania. In 1850, the sending of convicts to New South Wales was abolished. Transportation to Van Diemen's Land was abolished in 1852. (More than 150,000 had been sent to the two colonies.) Owing to a movement toward free trade, which nullified the need for colonies, from 1842 to 1850, Australian colonies received constitutions and were given legislative councils (preventing a war of independence which might have unified the Australian colonies). Australia had its own gold rush in the 1850s, which resulted in an influx of Chinese immigrants attracted by gold, a movement that was opposed by the white settlers, who sought to exclude all but European settlers. This became known as a "White Australia" policy, a policy that endured up until recently in Australia. Seemingly, this policy also applied to the Aborigines, who, as the frontier pushed inland, were often poisoned, hunted, abused, and exploited by the settlers. After a constitutional convention in Sydney from 1897 to 1898, the six colonies approved federation. The Commonwealth of Australia was subsequently approved by the British Parliament in 1900 and came into existence on January 1, 1901 (although since then, the Northern Territory and the Australian Capital Territory have been granted self-government). The federal constitution combined British and American practices, with a parliamentary government, but with two houses - the popularly elected House of Representatives and a Senate representing the former colonies (which were now states). However, the Balkanization of Australia into separate unrelated states continued until WWI, when the nation unified, sending 330,000 volunteers to fight with the allies. WWII brought a greater alliance with the United States. This alliance has endured until today through Australian participation alongside the Western alliance in the Korean War and in the Vietnam War as an ally of the United States. The White Australia policy was discarded during the 1950s through the 1970s. Under the Colombo Plan, Asians were admitted to Australian universities in the 1950s. In 1967, a national referendum granted citizenship to Aborigines, and in the 1970s, the entry of immigrants began to be based on criteria other than race. Australia remains part of the British Commonwealth, after a national referendum failed to win a majority vote to change Australia's form of government to a republic. The Commonwealth of Australia has nine separate parliaments or legislatures, most of which have lower and upper houses. There are also several hundred local government authorities, known as councils or shires. The national or Commonwealth Government is responsible for defense, foreign affairs, customs, income tax, post and telegraphs.
The State or Territory Governments have primary responsibility for health, education and criminal justice, although the Commonwealth Government is also influential in these areas. There exists a level of tension between the governments at the State or Territory level and the Government of the Commonwealth. This tension is almost exclusively concerned with the issue of the allocation of monies raised from income tax and the appropriate distribution of power. Since the 1970s, there has been a noticeable shift of power toward the Commonwealth Government.
"Australia." Microsoft Encarta Online Encyclopedia 2002, http://encarta.msn.com (23 June, 2002)
Crime is generally defined in Australia as any conduct which is prohibited by law and which may result in punishment. Crimes can be classified as either felony, misdemeanor or minor offenses, but more commonly they are classified as indictable or not indictable offenses. Indictable offenses are those which are heard by the superior courts and may require a jury, whereas non-indictable offenses, which comprise the vast majority of court cases, are heard in magistrates courts, where no juries are employed. While there are some classification differences among the various jurisdictions, in all jurisdictions indictable offenses generally include homicide, robbery, serious sexual and non-sexual assault, fraud, burglary and serious theft. Homicide includes murder, manslaughter (not by driving) and infanticide. Assault is defined as the direct infliction of force, injury or violence upon a person, including attempts or threats. Sexual assault is a physical assault of a sexual nature, directed toward another person where the person does not give consent; or gives consent as a result of intimidation or fraud; or is legally deemed incapable of giving consent because of youth or temporary/ permanent incapacity. Sexual assault includes: rape, sodomy, incest, and other offenses. Rape is defined as unlawful sexual intercourse with another person by force or without the consent of the other person. Robbery is defined as the unlawful removing or taking of property or attempted removal or taking of property without consent by force or threat of force immediately before or after the event. Unlawful entry with intent (UEWI) is defined as the unlawful entry of a structure with the intent to commit an offense. UEWI offenses include burglary, break and enter and some stealing. Motor vehicle theft is the taking of a motor vehicle unlawfully or without permission. "Other theft" or stealing is defined as the taking of another person's property with the intention of permanently depriving the owner of property illegally and without permission, but without force, threat of force, use of coercive measures, deceit or having gained unlawful entry to any structure even if the intent was to commit theft. In some jurisdictions, such as South Australia, there is a group of "minor indictable" offenses which can be heard in the superior or lower courts, according to the wish of the accused. Criminal justice statistics are based on a classification scheme which divides crimes into offenses against the person, property offenses and "other." The minimum age of criminal responsibility and the upper age limit for hearings in juvenile courts varies among Australian States and Territories. The minimum age of criminal responsibility in juvenile courts is 7, while the minimum age to be tried in an adult court is 16. In all jurisdictions, any child above the age of criminal responsibility who is charged with homicide can be tried in an adult court. In some jurisdictions, juveniles may have their offenses tried in adult courts for offenses such as rape and treason. Drug offenses constitute a major focus of all Australian criminal justice systems. The possession, use, sale, distribution, importation, manufacturing or trafficking of a wide range of drugs is illegal in all Australian jurisdictions. Illegal drugs include: marijuana (cannabis), heroin, designer drugs (ice, ecstasy), amphetamines (speed, LSD) and cocaine (including crack). 
While the possession or use of any of these drugs is illegal, in some jurisdictions, notably South Australia and the Australian Capital Territory, marijuana has been partially decriminalized. Its possession or use may result in the imposition of a relatively small fine without the need to appear in court. Tasmania is one of the world's major suppliers of licit opiate products; the government maintains strict controls over the areas of opium poppy cultivation and the output of poppy straw concentrate.
INCIDENCE OF CRIME
The following data has been compiled by the Australian Institute of Criminology from information contained in the annual reports of Australian police forces for the year 2000. In 2000, there were 346 homicides reported to the police, for a rate of 2.0 per 100,000 population. The percentage of homicides committed with a firearm was 17%. Attempts are not included. In 2000 there were 141,124 assaults reported by the police at a rate of 737 per 100,000 population. There were 15,630 victims of sexual assault recorded by the police in Australia in 2000, about 82 victims per 100,000 population. Police recorded 23,314 victims of robbery during 2000, a rate of 122 per 100,000 population. In 2000, there were 436,865 incidents of unlawful entry with intent to commit an offense, a rate of 2281 victims per 100,000 population. Police recorded 139,094 motor vehicles stolen in 2000, a rate of 726 per 100,000 population. A total of 674,813 victims of "other theft" was recorded by the police in 2000, with 3,523 victims per 100,000 population in Australia. A victim survey of households was conducted by the Australian Bureau of Statistics in 1998 for some of these crimes. From this survey it was estimated that 4.3% of households were victimized by assault, 0.4% by sexual assault, 0.5% by robbery, 5.0% by break-in, and 1.7% by motor vehicle theft. If these were converted to rates per 100,000, the rates would be 4300 for assault, 400 for sexual assault, 500 for robbery, 5000 for break-in, and 1700 for motor vehicle theft, in all cases higher than the incidence recorded by police.
Trend analysis has been done for the years 1973/4 to 1991/2. Trend data using official statistics indicate apparently ever-increasing levels of crime. By contrast, national crime victimization surveys show much more stable trends in crime. Total property crimes reported to police increased from 385,453 to 1,168,423 in 1990/1 (an increase of 203%) before falling in 1991/2 to 1,024,569. Total violent offenses rose from a mere 7,056 to 36,909 in 1991/2, an increase of 423%. Expressed as an annual rate per 100,000 population, property offending went from 2834.4 crimes reported per 100,000 to 6563.8 in this period; violence increased from 51.9 to a startling 213.4. Adjusting for population change, then, these increases are 132% and 311%, respectively. Of particular concern are trends in reported sexual assaults (rape rates up 426%) and other serious assaults (up 452%). In addition, reported drug offenses were up 612% between 1974/5 and 1991/2. However, the national statistics for homicide remain remarkably steady within a range between 1.62 per 100,000 and 2.40 per 100,000. By contrast, National Crime Victims Surveys done for the years 1974/5 through 1991/2 show less of a change. These figures suggest increases in these eighteen years of 51% for break, enter and steal, and no increase at all for motor vehicle theft. For robbery, the survey suggests a 33% increase, a decrease of 9% for assault, and no change in incidence for sexual assault. The discrepancy between police and survey data is partly explained by increased reporting, which itself is explained by an increase in the numbers of police. The number of police rose from 178 per 100,000 population in 1973/4 to 244 in 1991/2.
INTERNATIONAL CRIME RATE COMPARISONS
In a comparison of survey victimization from the International Crime Victim Survey, it appears that Australia has a high rate of crime. Car theft is virtually double the average rate for 21 countries, and in most of the other offenses, including burglary and violence, Australian risks are at least fifty percent higher than the average. These offenses include car theft, theft from car, car damage, burglary, theft of personal property, robbery, sexual assault, and other assault. Only bike theft ranks low in comparison to the average of other countries.
The Commonwealth of Australia is a federalist government composed of a national government and six State governments. If Territories are included, there are in effect nine different criminal justice systems in Australia - six state, two territory, and one federal. The eight States and Territories have powers to enact their own criminal law, while the Commonwealth has powers to enact laws. Criminal law is administered principally through the federal, State and Territory police. There is no independent federal corrective service. State or Territory agencies provide corrective services for federal offenders. The government of the Commonwealth is responsible for the enforcement of its own laws. The most frequently prosecuted Commonwealth offenses are those related to the importation of drugs and the violation of social security laws. Offenses against a person or against property occurring in Commonwealth facilities are also regarded as offenses against the Commonwealth. The States are primarily responsible for the development of criminal law. Queensland, Western Australia, and Tasmania are described as "code" States because they have enacted criminal codes which define the limits of the criminal law. The remaining three States, New South Wales, Victoria, and South Australia, are regarded as "common law" States because they have not attempted codification. In practice, however, there is little difference in the elements of the criminal law between the "code" and "common law" States. Local governments can pass legislation, known as bylaws. These generally include social nuisance offenses as well as traffic and parking rules. Local government officials or the State and Territory police generally enforce the local government bylaws. The maximum penalty that can be imposed for conviction of a bylaw offense is a monetary fine. However, non-payment of fines can result in imprisonment. The structure of the Australian legal system is derived from, and still closely follows, that of the United Kingdom. In addition to parliament-made law, there is the "common law" inherited from the English courts which has since been developed and refined by Australian courts. It should be noted, however, that since 1963 Australian courts have ceased to regard English decisions as superior or even equal in authority to those made by Australian courts. The legal system is adversarial in nature and places a high value on the presumption of innocence. Due to the federalist system of government, there are nine separate legal systems in operation. Although there are some significant differences between these systems, they are essentially similar in structure and operation.
Australia has one police force for each of the six States, the Australian Capital Territory, and the Northern Territory. There is also a Commonwealth agency known as the Australian Federal Police (APF) which provides police services for the Australian Capital Territory and is also involved in preventing, detecting and investigating crimes committed against the Commonwealth, including drug offenses, money laundering, organized crime, and fraud. The APF was brought into existence by the Australian Federal Police Act of 1979. However, because of findings of several Royal Commissions in the late 1970s and early 1980s that revealed the extent of organized crime in Australia, the Commonwealth Government in July 1984 established the National Crime Authority (NCA). Legislation was passed in each State, the Northern Territory, and the Australian Capital Territory, to support the work of the NCA in those jurisdictions. The NCA is the only law enforcement agency in Australia not bound by jurisdictional or territorial boundaries. Its single mission is to combat organized criminal activity. Thus, there are now ten separate police forces for the nation, including the NCA and the AFP, police for the two territories, as well as police for the six states (New South Wales, Victoria, Queensland, South Australia, Western Australia, and Tasmania). There are, however, a large number of other agencies which have specific law enforcement functions, including health inspectors, tax officials, and immigration and customs officers. All Australian police forces have a hierarchical organization. In the larger police forces, the chief officer is known as the Commissioner, except in Victoria, where he or she is known as the Chief Commissioner. The larger forces also have one or more Deputy Commissioners and a number of Assistant Commissioners. Below these ranks are Chief Superintendents, Superintendents, Chief Inspectors and Inspectors. Officers achieving the rank of Inspector or above are known as commissioned officers. The remaining ranks consist of Senior Sergeants, Sergeants, Senior Constables and Constables. In the State and Territory police forces, the administration is divided into geographical districts, which are themselves divided into divisions and subdistricts. There is also a movement towards increasing the autonomy of regional police commanders in many Australian police forces. The Commissioner of Police is directly accountable to a Minister, but the Minister is usually not permitted to influence the operation and decisions of police commanders. An Australian Police Ministers Council (APMC) meets at least once a year and is supported by the Commissioners in this context as the Senior Officers Group (SOG). The APMC and SOG structures have attempted to create a higher level of cooperation and uniformity of police practices throughout Australia. Australian police forces are not closely associated with the military forces. Australian military forces have no responsibility for the maintenance of civil order. However, on very rare occasions the military forces have been required to provide assistance to the police. In the event of a serious natural disaster, such as a flood or bush fire, the military forces are asked to assist the police and other civilian authorities. Australian police recruits are required to have completed their secondary education, although it is not always essential to have been awarded a qualification known as Higher School Certificate. 
A university degree is not generally required of police in Australia except for specialist posts. University training is encouraged for all recruits to the Australian Federal Police and increasingly in other police forces. Recruits must undergo medical and psychological tests and are evaluated on their overall suitability, competence, physical fitness and character. Recruit training is a combination of classroom and field-based experience which takes approximately 18 months to complete. A portion of this training takes place in a police academy and the remainder is conducted on the job. All police officers may use "appropriate" force when encountering violent persons. "Appropriate" is defined by the level of force required to overcome and apprehend the person(s). Police officers may use "lethal" force on a person if they believe their life or the life of another person is in danger. "Lethal" is defined as the level of force that might result in the person's death. All police officers carry handguns and handcuffs. They rarely carry batons; these are usually kept in police cars. In general, a police officer may stop and apprehend any person who appears to be committing, or is about to commit, an offense. The law provides that law enforcement officials may arrest persons without a warrant if there are reasonable grounds to believe a person has committed an offense. The vast majority of arrests are made without a warrant although there are jurisdictional differences concerning prerequisites to arrest. Law enforcement officials can seek an arrest warrant from a magistrate when a suspect cannot be located or fails to appear. Once individuals are arrested, they must be informed immediately of the grounds of arrest and given a "criminal caution," that is, informed of their rights. Police are generally required to obtain a search warrant from a judge or a magistrate before they enter premises and seize property. However, illegal drugs and weapons can be seized without a warrant. Whereas the issue of obtaining confessions from suspected offenders has been a controversial subject in the past, the controversy has diminished with the onset of video. Virtually all interviews with persons suspected of serious offenses are videotaped. Complaints against the police are investigated by different authorities in different jurisdictions.
Once taken into custody a detainee must be brought before a magistrate for a bail hearing at the next sitting of the court. Persons charged with criminal offenses generally are released on bail except when charged with an offense carrying a penalty of 12 months imprisonment or more, or the possibility of violating bail conditions is judged to be high. Attorneys and families are granted prompt access to detainees. Detainees held without bail pending trial generally are segregated from the other elements of the prison population. The law prohibits all such practices; however, there were occasional reports that police mistreated suspects in custody. Some indigenous groups charge that police harassment of indigenous people is pervasive and that racial discrimination among police and prison custodians persists. Amnesty International reported several incidents that involved such abuses. State and territorial police forces have internal affairs units that investigate allegations of abuse and report to a civilian ombudsman. The federal Government oversees six immigration detention facilities located in the country and several offshore facilities in the Australian territory of Christmas Island and in the countries of Nauru and Papua New Guinea. These facilities were established to detain individuals who attempt to enter the country unlawfully, pending determination of their applications for refugee status. Hunger strikes and protests have occurred at immigration detention facilities over allegedly poor sanitary conditions, inadequate access to telephones, and limited recreational opportunities.
All accused persons have the right to defend themselves in court but in serious cases most prefer to be represented by a legal practitioner. A recent decision by the High Court of Australia held that in all serious matters if the accused does not have access to legal advice, the case must be adjourned. In any trial, both the prosecution and the defense have the right to question and cross-examine witnesses. In New South Wales, the accused person also has the right to make an unsworn statement, thus avoiding being cross-examined by the prosecution. This practice has been abolished in all other Australian jurisdictions. A national system for the provision of free legal aid to accused persons was established in 1993 and subsequently some of the States have established legal service commissions which monitor and oversee the provision of this service. Eligibility to receive legal aid depends on the financial means of the individual and the merit of the case being defended. Legal aid is provided either through the salaried staff of a Legal Aid Commission or by assignment to private legal practitioners. Also, an extensive number of Aboriginal legal services throughout Australia receive separate funding from national or state legal services. Arrested persons are brought to a police station where charges are brought against them. Before being charged, the arrested person is usually searched. The police are empowered to use force if the search is resisted. In all serious cases, arrested persons are photographed and fingerprinted before being charged. If no charges are brought, the accused person is released. In most jurisdictions the police allow arrested persons to make a telephone call to a legal adviser, friend or relative. After the charging procedures are completed, the accused is either released on bail or held in custody. The role of the police in pre-trial decision-making includes performing the necessary investigation and detection work, filing charges and, except for the Australian Capital Territory, prosecuting the case in court. In some cases and in all Federal matters, the Director of Public Prosecutions is involved in determining what charges will be brought. If the Director decides that the case should be heard on indictment (heard in a superior court), a committal or preliminary hearing in a lower court is usually held in order to discover whether there is sufficient evidence to proceed with the trial. If the accused pleads guilty to a charge, the judge or magistrate may immediately impose a sentence without setting the case for trial. Thus, guilty pleas help to speed case flow and reduce case overload in the court system. If the accused pleads not guilty, the evidence of the prosecution and defense is heard in an adversarial manner in court. Cases involving serious charges are heard in a higher court with a 12-member jury. However, in some cases, the accused person has the right to waive a jury trial. Police will often conduct the prosecution for lower court cases, but not for those in the higher courts. In some jurisdictions there are alternatives to formal charging and court appearance procedures. These alternatives involve the use of community justice centers or dispute resolution centers to provide for the resolution of disputes between conflicting individuals. The proceedings in these centers are relatively informal and the hearings are less expensive than court procedures.
In addition, most States have small claims tribunals or courts that allow for minor matters to be settled without involving the police or lawyers. Although plea bargaining is not officially permitted in any jurisdiction, some commentators have suggested there exists a form of charge bargaining, an arrangement by which an individual chooses to plead guilty to one or two particular charges with the understanding that other charges will be dropped. Pre-trial incarceration is usually referred to as "remanded in custody." In all jurisdictions there is a strong presumption in favor of granting bail. Bail can be granted either by police or by the courts. There are three main grounds for the denial of bail and remanding an individual in custody: 1) to prevent the offense from being continued or repeated; 2) to ensure that the offender does not abscond and appears in court as required; and 3) to ensure that the accused person does not interfere with the process of justice (for instance, by contacting jurors or witnesses). Generally, suspects brought on very serious charges, such as homicide, are remanded in custody for a substantial period of time while awaiting trial. Approximately 13% of all Australian prisoners are awaiting trial with the period of stay on remand varying between a few days to more than one year in a small number of cases. Australia has a hierarchical system of courts with the High Court of Australia operating at the top. The High Court of Australia is the final court of appeal for all other courts. It is also the court which has sole responsibility for interpreting the Australian Constitution. Within each State and Territory there is a Supreme Court and, in the larger jurisdictions, an intermediate court below it, known as the District Court, District and Criminal Court, or County Court. There is no intermediate court in Tasmania or in the two territories. Below the intermediate courts there are Magistrates Courts at which virtually all civil and criminal proceedings commence. Approximately 95% of criminal cases are resolved at the Magistrates Courts level. Cases passing through the courts generally share the following common elements: lodgment - the initiation of the matter with the court; pre-trial discussion and mediation between parties; trial; and court decision - judgment or verdict followed by sentencing. Cases initiated in Magistrates' Courts account for 98.1% of all lodgments in the criminal courts. The majority of criminal hearings (96%) take place in Magistrate's Court. The duration between the lodgment of a matter with the court and its finalization is referred to as "timeliness." Generally, lower courts complete a greater proportion of their workload more quickly because the disputes and prosecutions heard are less complex than those in higher courts, and cases are of a routine and minor nature. Committals are the first stage of hearing indictable offenses in the criminal justice system. A magistrate assesses the sufficiency of evidence presented against the defendant and decides whether to commit the matter for trial in a superior court. Defendants are often held in custody pending a committal hearing or trial, if ordered. Defendants' cases are finalized at the higher court level in one of the following two ways: adjudicated - determined whether or not guilty of the charges based on the judge's decision; and non-adjudicated - a method of determining the completion of a case thereby making it effectively inactive. 
Overall, 77% of the defendants whose cases are heard by a higher court are found guilty of an offense. Parallel to the Supreme Courts in the States and Territories is a Federal Court that is primarily concerned with the enforcement of Commonwealth Law, such as that related to trade practices, but that also hears appeals from the Supreme Courts of the Territories. Each State and Territory has a children's or juvenile court. Children's courts are invariably closed to the public and the press in order to protect the anonymity of the accused. The High Court of Australia has seven judges. Since its creation in 1901 there have been 37 appointments to the High Court. Except for one, all appointments have been male. A Chief Justice heads the Supreme Courts in each State and Territory. The actual number of judges varies according to the size of the state. In some jurisdictions, lay persons are appointed as Justices of the Peace. Although, in the past, these lay persons were able to convene courts and sentence offenders, this power has largely been removed in recent years. All of the persons appointed to the High Court of Australia have been distinguished members of the legal profession, but a significant minority of them have also had political experience or have been judges in a Supreme or Federal Court. The appointment of judges at each government level is the responsibility of the relevant government. In the case of the High Court and the Federal Court, formal judicial appointments are made by the Governor General. The Governor of the State formally appoints judges to the Supreme Courts. The identification and recommendation of persons to be appointed as judges in each jurisdiction is primarily the responsibility of the corresponding Attorney-General. In cases where a person either pleads guilty or is found guilty, the judge or magistrate responsible for the case determines the sentence. In complex or serious cases there is frequently an adjournment to allow the judicial officer to consider the appropriate sentence and to hear argument from the prosecution and defense in relation to sentence. Victim impact statements may be submitted in South Australia. In other jurisdictions pre-sentence reports are prepared to assist the judicial officer, usually by probation officers. Pre-sentence reports may also include a psychiatric opinion. There are a variety of sentencing options available at each court level; fine, good behavior bond, probation order, suspended sentence, community supervision, community custody, home detention, periodic detention, and imprisonment. All jurisdictions permit the following penalties to be imposed: fines, probation orders (supervision or recognizance orders), community service orders or imprisonment. Some jurisdictions provide for the imposition of home detention. Home detention is usually employed as a post-prison order rather than as an order imposed directly by the sentencing court. Capital punishment and corporal punishment have been abolished in all Australian jurisdictions. The last execution took place in 1967.
Prisons are the responsibility of states or territories. There are no federal penitentiaries or local jails. There are approximately 80 prisons throughout Australia. This number is an approximation because several large institutions are subdivided into administratively independent units. Although most prisons are designated as either high, medium, or low security facilities, prisoners at varying levels of security classification occupy most. In June, 2000, the total number of prisoners in Australia was 21,714, 94% of which were male. The rate for imprisonment in Australia was 148 per 100,000 population. According to a report by the Australian Bureau of Statistics, as of June 30, 2000, aboriginal adults represent 1.6 percent of the adult population but constituted approximately 19 percent of the total prison population, or approximately 14 times the nonindigenous rate of incarceration. The main offenses for which male offenders were sentenced included break and enter, robbery, and sex offenses. For female offenders, the main offenses included drug offenses, fraud, and robbery. Male prisoners sentenced for the violent offenses of homicide, assault, sex offenses, and robbery accounted for almost half of all sentenced male prisoners in 2000, while for females only one-third of sentenced prisoners were incarcerated for violent offenses. Generally, the training period for prison officers varies from 3 to 12 months and always involves a combination of classroom study and on-the-job training. Prison officers are required to undertake further study and pass examinations in order to be considered for promotion in the prison system. In Western Australia, persons who are appointed as superintendents or officers in charge of institutions must obtain some form of tertiary qualification. Until recently all convicted Australian prisoners were entitled to earn remissions or time off for good behavior. This approach has since been changed in New South Wales and Victoria as a result of support for an approach known as "truth in sentencing." This change is said to have resulted in a significant increase in the number of inmates in prisons, particularly in New South Wales. All States and Territories in Australia have provisions for parole and virtually all persons serving sentences of one year or more are released under a parole system. Most of the time, the number of persons serving parole is approximately two thirds of the total number of persons in prison. In addition, for every person in prison, there are approximately four persons serving other forms of non-custodial sentences such as probation or community service. All prisons have provisions for work, education and training, recreation and support. Inmates classified as requiring low security are able to obtain weekend leave. Other privileges are also available.
A number of large victim surveys conducted in Australia have consistently shown that most victims do not report crimes to the police. The main reasons that victims have cited for not reporting are that they consider the offense to be trivial or they believe the police either could not or would not do anything about the crime report. Such surveys have also found that victims are more likely to be men than women, young than old, unemployed and less well educated than the Australian norm. The most recent crime survey data for Australia come from the International Crime Victims Survey (ICVS), which was conducted in March 2000. The most commonly mentioned personal crimes for Australia were consumer fraud (9%), assault (7%) and theft from the person (7%). About one in five persons reported being a victim of personal crime in 1999. The most common household crimes were motor vehicle damage (9%) and theft from a motor vehicle (6%). Just over 4% of households reported being a victim of a completed burglary (break-in). About 10% of households own a firearm in Australia (compared to 33% in the United States). About 66% of murders and 41% of robberies occurring in the United States in 2000 involved the use of a firearm, compared to 20% and 6% of murders and robberies, respectively, in Australia. There are a number of agencies that provide crime victim assistance in all Australian jurisdictions. These agencies include rape crisis centers, women's shelters, safe houses and voluntary organizations such as Victims of Crime Assistance League (VOCAL) and Victims of Crime Services (VOCS). Crime victims do not play an active role in the prosecution or sentencing of an offender in any Australian jurisdiction. South Australia has enacted a Victims of Crime Charter, based on the United Nations Charter. This charter provides for victim impact statements to be prepared and used in certain cases and for victims to be consulted at the various stages in the criminal justice process.
Violence against women is a problem. Social analysts and commentators estimate that domestic violence may affect as many as one family in three or four, but there is no consensus on the extent of the problem. While it is understood that domestic violence is particularly prevalent in certain Aboriginal communities, only the states of Western Australia and Queensland have undertaken comprehensive studies into domestic violence in the Aboriginal community. It is agreed widely that responses to the problem have been ineffectual. The Government recognizes that domestic violence and economic discrimination are serious problems and the statutorily independent Sex Discrimination Commissioner actively addresses these and other areas of discrimination. A 1996 Australian Bureau of Statistics (ABS) study (the latest year for which statistics are available) found that 2.6 percent of 6,333 women surveyed who were married or in a common-law relationship had experienced an incident of violence by their partner in the previous 12-month period. Almost one in four women who have been married or in a common-law relationship have experienced violence by a partner at some time during the relationship, according to the ABS study. Prostitution is legal or decriminalized in many areas of the states and territories. In some locations, state and local governments inspect brothels to prevent mistreatment of the workers and to assure compliance with health regulations. There were 14,074 victims of sexual assault recorded by the police in 1999 (the latest figures publicly available; they do not distinguish by gender), a decrease of 1.8 percent from 1998. This amounts to approximately 74 victims of sexual assault per 100,000 persons. Spousal rape is illegal under the state criminal codes. Though prostitution is legal or decriminalized and occurs throughout the country, child sex tourism is prohibited within the country and overseas. In the past, the occurrence of female genital mutilation (FGM), which is criticized widely by international health experts as damaging to both physical and psychological health, was insignificant. However, in the last few years, small numbers of girls from immigrant communities in which FGM is practiced have been mutilated. The Government has implemented a national educational program on FGM, which is intended to combat the practice in a community health context. Trafficking in women from Asia and the former Soviet Union for the sex trade is a limited problem that the Government is taking steps to address. Sexual harassment is prohibited by the Sex Discrimination Act.
According to the Australian Institute of Criminology (AIC) report released in March, indigenous people were imprisoned nationally at 14 times the rate of nonindigenous people in 1999. The indigenous incarceration rate was 295 per 10,000 persons, while the nonindigenous incarceration rate was 18 per 10,000 persons. The AIC reports that the incarceration rate among indigenous youth was 18.5 times that of the nonindigenous youth population in 1999. Over 45 percent of Aboriginal men between the ages of 20 and 30 years have been arrested at some time in their lives. Aboriginal juveniles accounted for 42 percent of those between the ages of 10 to 17 in juvenile corrective institutions during 2000, according to the AIC. Human rights observers claim that socioeconomic conditions give rise to the common precursors of indigenous crime, for example, unemployment, homelessness, and boredom. Controversy over state mandatory sentencing laws continued throughout the year. These laws set automatic prison terms for multiple convictions of certain crimes. Human rights groups have criticized mandatory sentencing laws, which they allege have resulted in prison terms for relatively minor crimes and indirectly target Aboriginals. In July 2000, the U.N. Human Rights Commission issued an assessment of the country's human rights record that was highly critical of mandatory sentencing. The federal Government decided not to interfere in what it considered to be the states' prerogative, arguing that the laws were passed by democratically elected governments after full political debate, making it inappropriate for the federal government to intervene. The newly-elected government of the Northern Territory repealed the territory's mandatory sentencing laws in October. Australia's Aboriginal and Torres Straits Islander Commission (ATSIC) welcomed this repeal and called upon Western Australia to follow suit. Western Australia continued to retain its mandatory sentencing laws, which provide that a person (adult or juvenile) who commits the crime of home burglary three or more times is subject to a mandatory minimum prison sentence. Indigenous groups charge that police harassment of indigenous people, including juveniles, is pervasive and that racial discrimination among police and prison custodians persists. Human rights groups have alleged a pattern of mistreatment and arbitrary arrests occurring against a backdrop of systematic discrimination.
Although Asians make up less than 5 percent of the population, they account for 40 percent of new immigrants. Public opinion surveys have indicated concern with the numbers of immigrants arriving in the country. Upon coming to power in 1996, the Government reduced annual migrant (nonrefugee) immigration by 10 percent to 74,000; subsequently, it has increased to approximately 80,000. Humanitarian immigration figures remained steady at approximately 12,000 per year from 1996 through this year. The significant increase in unauthorized boat arrivals from the Middle East during the past 3 years has heightened citizens' concern that "queue jumpers" and alien smugglers are abusing the country's refugee program. Leaders in the ethnic and immigrant communities expressed concern that increased numbers of illegal arrivals, as well as violence at migrant detention centers, contributed to a few incidents of vilification of immigrants and minorities. Following the September 11 terrorist attacks on the United States, a mosque in Brisbane was subjected to an arson attack, and cases of vilification against Muslims rose.
TRAFFICKING IN PERSONS
Legislation enacted in late 1999 targets criminal practices associated with trafficking, and other laws address smuggling of migrants. Trafficking in persons from Asia, particularly women (but also children), is a limited problem that the Government is taking steps to address. The Government's response to trafficking in persons is part of a broader effort against "people smuggling," defined as "illegally bringing non-citizens into the country." Smuggling of persons--in all its forms--is prohibited by the Migration Act, which calls for penalties of up to 20 years imprisonment. In September Parliament also enacted the Border Protection Act, which authorizes the boarding and searching of vessels in international waters, if suspected of smuggling of persons. The country is a destination for trafficked women and children. In June the Australian Institute of Criminology (AIC) issued a report entitled Organized Crime in People Smuggling and Trafficking to Australia, which observed that the incidence of trafficking appears to be low. The Department of Immigration and Multicultural and Indigenous Affairs and the Australian Federal Police (AFP) have determined that women and children from Thailand, the Philippines, Malaysia, China, Indonesia, South Korea, Vietnam, and parts of the former Soviet Union have been trafficked into the country. They are believed to be entering primarily via air with fraudulently obtained tourist or student visas, for purposes of prostitution. There also have been reports of women trafficked into the country from Afghanistan and Iraq. The high profit potential combined with factors such as the difficulty of detection, unwillingness (or inability) of witnesses to testify in investigations, apparently short stays in the country by workers in the sex trade, and previously low penalties when prosecuted have contributed to the spread of groups engaged in these activities. There have been some instances of women being forced to work as sex workers in the country by organized crime groups. There are some reports of women working in the sex industry becoming mired in debt or being physically forced to keep working, and some of these women are under pressure to accept hazardous working conditions especially if their immigration status is irregular. Some women have been subjected to what is essentially indentured sexual servitude in order to pay off a "contract debt" to their traffickers in exchange for visas, plane tickets, food, and shelter. However, the available evidence suggests that these cases are not widespread. Some women working in the sex industry were not aware prior to entering the country that this was the kind of work they would be doing. Investigations in past years by DIMIA have found women locked in safe houses with barred windows, or under 24-hour escort, with limited access to medical care or the outside world. These women have been lured either by the idea that they would be waitresses, maids, or dancers or, in some cases, coerced to come by criminal elements operating in their home countries. There are also reports of young women and children, primarily from Asia, being sold into the sex industry by impoverished families. Prostitution is legal or decriminalized in many areas of the states and territories, but health and safety standards are not well enforced and vary widely. In September 1999, the Criminal Code Amendment (Slavery and Sexual Servitude) Act came into force. 
The act modernizes the country's slavery laws, contains new offenses directed at slavery, sexual servitude, and deceptive recruiting, and addresses the growing and lucrative trade in persons for the purposes of sexual exploitation. The act provides for penalties of up to 25 years' imprisonment and is part of a federal, state, and territory package of legislation. No prosecutions have been brought under this federal law. Another government initiative was the 1994 Child Sex Tourism Act, which provides for the investigation and prosecution of citizens who travel overseas and engage in illegal sexual conduct with children. Under the act, there have been 11 prosecutions, resulting in 7 convictions. Another case was pending at year's end. During the year, the Customs Service increased monitoring of all travelers (men, women, and children) entering the country who it suspected were involved in the sex trade, either as employees or employers.
In geometric algebra, a blade is a generalization of the concept of scalars and vectors to include simple bivectors, trivectors, etc. Specifically, a k-blade is any object that can be expressed as the exterior product (informally wedge product) of k vectors, and is of grade k.
- A 0-blade is a scalar. The inner product or dot product of two vectors a and b is a 0-blade and is denoted as a · b.
- A 1-blade is a vector. Every vector is simple.
- A 2-blade is a simple bivector. Linear combinations of 2-blades are also bivectors, but need not be simple, and are hence not necessarily 2-blades (see the worked example after this list). A 2-blade may be expressed as the wedge product of two vectors a and b: a ∧ b.
- A 3-blade is a simple trivector, that is, it may be expressed as the wedge product of three vectors a, b, and c: a ∧ b ∧ c.
- In a space of dimension n, a blade of grade n − 1 is called a pseudovector.
- The highest grade element in a space is called a pseudoscalar, and in a space of dimension n is an n-blade.
- In a space of dimension n, there are k(n − k) + 1 dimensions of freedom in choosing a k-blade, of which one dimension is an overall scaling multiplier.
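A worked example (added here for illustration; e_1, ..., e_4 denote an assumed orthonormal basis) makes the distinction between 2-blades and general bivectors concrete:

```latex
% A 2-blade (simple bivector): a single wedge product of two vectors.
(e_1 + e_2) \wedge e_3 \;=\; e_1 \wedge e_3 + e_2 \wedge e_3 .

% A bivector that is not a 2-blade: in four dimensions,
B \;=\; e_1 \wedge e_2 + e_3 \wedge e_4
% cannot be written as a single wedge product a \wedge b, since any
% 2-blade satisfies (a \wedge b) \wedge (a \wedge b) = 0, whereas
B \wedge B \;=\; 2\, e_1 \wedge e_2 \wedge e_3 \wedge e_4 \;\neq\; 0 .
```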
In an n-dimensional space, there are blades of grade 0 through n. A vector subspace of finite dimension k may be represented by the k-blade formed as a wedge product of all the elements of a basis for that subspace.
For example, in 2-dimensional space scalars are described as 0-blades, vectors are 1-blades, and area elements are 2-blades known as pseudoscalars, in that they are one-dimensional objects distinct from regular scalars.
In three-dimensional space, 0-blades are again scalars and 1-blades are three-dimensional vectors, but in three dimensions, areas have an orientation, so while 2-blades are area elements, they are oriented. 3-blades (trivectors) represent volume elements, and in three-dimensional space, these are scalar-like – i.e., 3-blades in three dimensions form a one-dimensional vector space.
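The representation of a k-dimensional subspace by a k-blade can also be made computational: on a coordinate basis, the components of a_1 ∧ ... ∧ a_k are the k × k minors of the matrix whose rows are the spanning vectors. The following NumPy sketch is an illustration added here, not part of the original article; the function name and example vectors are arbitrary.

```python
import itertools
import numpy as np

def blade_components(vectors):
    """Components of a_1 ^ ... ^ a_k on the basis {e_i1 ^ ... ^ e_ik : i1 < ... < ik}.

    `vectors` is a k x n array whose rows span a k-dimensional subspace;
    each component is the k x k minor formed from the chosen columns.
    """
    v = np.asarray(vectors, dtype=float)
    k, n = v.shape
    return {
        cols: np.linalg.det(v[:, list(cols)])
        for cols in itertools.combinations(range(n), k)
    }

# Example: a 2-blade in 3-dimensional space, a = e1, b = e2 + e3.
a = [1.0, 0.0, 0.0]
b = [0.0, 1.0, 1.0]
for cols, coeff in blade_components([a, b]).items():
    print(cols, round(coeff, 6))
# (0, 1) -> 1.0, (0, 2) -> 1.0, (1, 2) -> 0.0, i.e. a ^ b = e12 + e13.
```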
References
- Marcos A. Rodrigues (2000). "§1.2 Geometric algebra: an outline". Invariants for pattern recognition and classification. World Scientific. p. 3 ff. ISBN 981-02-4278-6.
- William E Baylis (2004). "§4.2.3 Higher-grade multivectors in Cℓn: Duals". Lectures on Clifford (geometric) algebras and applications. Birkhäuser. p. 100. ISBN 0-8176-3257-3.
- John A. Vince (2008). Geometric algebra for computer graphics. Springer. p. 85. ISBN 1-84628-996-3.
- For Grassmannians (including the result about dimension) a good book is: Griffiths, Phillip; Harris, Joseph (1994), Principles of algebraic geometry, Wiley Classics Library, New York: John Wiley & Sons, ISBN 978-0-471-05059-9, MR 1288523. The proof of the dimensionality is actually straightforward. Take k vectors and wedge them together and perform elementary column operations on these (factoring the pivots out) until the top k × k block are elementary basis vectors of . The wedge product is then parametrized by the product of the pivots and the lower k × (n − k) block.
- David Hestenes (1999). New foundations for classical mechanics: Fundamental Theories of Physics. Springer. p. 54. ISBN 0-7923-5302-1.
General references
- David Hestenes, Garret Sobczyk (1987). "Chapter 1: Geometric algebra". Clifford Algebra to Geometric Calculus: A Unified Language for Mathematics and Physics. Springer. p. 1 ff. ISBN 90-277-2561-6.
- Chris Doran and Anthony Lasenby (2003). Geometric algebra for physicists. Cambridge University Press. ISBN 0-521-48022-1.
- A Lasenby, J Lasenby & R Wareham (2004) A covariant approach to geometry using geometric algebra Technical Report. University of Cambridge Department of Engineering, Cambridge, UK.
- R Wareham, J Cameron, & J Lasenby (2005). "Applications of conformal geometric algebra to computer vision and graphics". In Hongbo Li, Peter J. Olver, Gerald Sommer. Computer algebra and geometric algebra with applications. Springer. p. 329 ff. ISBN 3-540-26296-2.
- A Geometric Algebra Primer, especially for computer scientists. | http://en.wikipedia.org/wiki/Blade_(geometry) | 13 |
32 | 6.4. Number counts of faint galaxies
The final classical test I will discuss is that of number counts of distant objects - what radio astronomers call the log(N)-log(S) test. Basically one counts the number of galaxies N brighter than a certain flux limit S. If we lived in a static Euclidean universe, then the number of galaxies out to distance R would be N ∝ R^3, but the flux is related to R as S ∝ R^-2. This implies that N ∝ S^-3/2, or log(N) = -3/2 log(S) + const = 0.6m + const, where m is the magnitude corresponding to the flux S (the factor of 0.6 follows because m = -2.5 log(S) + const).
But we do not live in a static Euclidean universe; we live in an evolving universe with a non-Euclidean geometry where the differential number counts probe dV(z), the comoving volume as a function of redshift. In Fig. 9 we see log(dV/dz) as a function of redshift for three different flat (Ωtot = 1) cosmological models: the matter-dominated Universe, the cosmological-constant-dominated Universe, and the concordance model. For small z, dV/dz increases as z^2 for all models, as would be expected in a Euclidean Universe, but by redshift one the models are obviously diverging, with the models dominated by a cosmological constant having a larger comoving incremental volume. Therefore, if we can observe faint galaxies extending out to a redshift of one or two, we might expect number counts to provide a cosmological probe.
Figure 9. The log of the incremental volume per incremental redshift (in units of the Hubble volume) as a function of redshift for the three flat cosmological models.
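To make the quantity in Fig. 9 concrete, the following is a minimal numerical sketch (Python with NumPy/SciPy; written for illustration only, not the code used to produce the figure) of the all-sky comoving volume per unit redshift, in units of the Hubble volume (c/H0)^3, for a flat universe with matter density Ωm and cosmological constant ΩΛ = 1 − Ωm:

```python
import numpy as np
from scipy.integrate import quad

def E(z, omega_m):
    """Dimensionless Hubble parameter H(z)/H0 for a flat universe."""
    return np.sqrt(omega_m * (1.0 + z) ** 3 + (1.0 - omega_m))

def dV_dz(z, omega_m):
    """All-sky comoving volume per unit redshift, in Hubble volumes (c/H0)^3."""
    # comoving distance in units of the Hubble distance c/H0
    d_c, _ = quad(lambda zp: 1.0 / E(zp, omega_m), 0.0, z)
    return 4.0 * np.pi * d_c ** 2 / E(z, omega_m)

for omega_m, label in [(1.0, "matter dominated"),
                       (0.3, "concordance"),
                       (0.0, "lambda dominated")]:
    print(label, [round(dV_dz(z, omega_m), 2) for z in (0.5, 1.0, 2.0)])
```

Running this reproduces the qualitative behaviour described above: the three curves agree at small z but diverge beyond z ≈ 1, with the cosmological-constant-dominated model enclosing the largest incremental volume.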
There is a long history of counting objects as a function of flux or redshift. Although cosmological conclusions have been drawn from such counts, the overall consensus is that this is not a very good test because the galaxy population evolves strongly with redshift. Galaxies evolve because stars evolve. In the past, the stellar populations were younger and contained relatively more massive, luminous stars. Therefore we expect galaxies to be more luminous at higher redshift. It is also possible that the density of galaxies evolves because of merging, as would be consistent with the preferred model of hierarchical structure formation in the Universe.
The distribution of galaxies by redshift can be used, to some extent, to break this degeneracy between evolution and cosmology. If we can measure the redshifts of galaxies with infrared magnitudes between 23 and 26, for example, that distribution will be skewed toward higher redshift if there is more luminosity evolution.
I have recently reconsidered the number counts of the faint galaxies in the Hubble Deep Fields, north and south [55, 56]. These are two separate small patches of empty sky observed with the Hubble Space Telescope down to a very low flux limit - about mI = 30 (the I band is a far red filter centered around 8000 angstroms). The differential number counts are shown by the solid round points in Fig. 10 where ground based number counts at fainter magnitudes are also shown by the starred points.
Figure 10. The solid points are the faint galaxy number counts from the Hubble Deep Fields (north and south [55, 56]) and the star-shaped points are the number counts from ground-based data. The curves are the no-evolution predictions from three flat cosmological models.
For this same sample of galaxies, there are also estimates of the redshifts based upon the galaxy colors - so-called photometric redshifts. In order to calculate the expected number counts and redshift distribution one must have some idea of the form of the luminosity function - the distribution of galaxies by luminosity. Here, like everyone else, I have assumed that this form is given by the Schechter function:

Φ(L) dL = No (L/L*)^α exp(-L/L*) d(L/L*),

which is characterized by three parameters: α, the power-law slope at low luminosities; L*, a break-point above which the number of galaxies rapidly decreases; and No, a normalization. I take this form because the overall galaxy distribution by luminosity at low redshifts is well fit by such a law, so I am assuming that at least the form of the luminosity function does not evolve with redshift.
But when I consider faint galaxies at high redshift in a particular band I have to be careful to apply the K-correction mentioned above; that is, I must correct the observed flux in that band to the rest frame. Making this correction, but assuming no luminosity or density evolution, I find the differential number counts appropriate to our three flat cosmological models shown by the indicated curves in Fig. 10. We see that the predicted number counts all fall short of the observed counts, but that the cosmological-constant-dominated model comes closest to matching the observations. However, the distribution by redshift of HDF galaxies between I-band magnitudes of 22 and 26 is shown in Fig. 11 (this is obviously the cumulative distribution). Here we see that all three models seriously fail to match the observed distribution, in the sense that the predicted mean redshift is much too small.
Figure 11. The cumulative redshift distribution for galaxies between apparent I-band magnitudes of 23 and 26 (photometric redshifts). The curves are the predicted no-evolution distributions for the three cosmological models.
This problem could obviously be solved by evolution. If galaxies are brighter in the past, as expected, then we would expect to shift this distribution toward higher redshifts. One can conceive of very complicated evolution schemes, involving initial bursts of star formation with or without continuing star formation, but it would seem desirable to keep the model as simple as possible; let's take a "minimalist" model for galaxy evolution. A simple one-parameter scheme with the luminosity brightening proportional to the look-back time squared, i.e., every galaxy brightens by

Δm(z) = q [ t_lb(z) / t_H ]^2 magnitudes,

where q is the free parameter and t_H is the Hubble time, can give a reasonable match to evolution models for galaxies (we also assume that all galaxies are the same - they are not divided into separate morphological classes). I choose the value of q such that the predicted redshift distribution most closely matches the observed distribution for all three models, and the results are shown in Fig. 12.
Figure 12. As in Fig. 10 above, the observed galaxy number counts and the predictions for the cosmological models with luminosity evolution sufficient to explain the number counts.
The required values of q (in magnitudes per t_H^2) for the three cosmological models are: q = 2.0 (ΩΛ = 1.0), q = 3.0 (ΩΛ = 0.7), and q = 11.0 (ΩΛ = 0.0). Obviously, the matter-dominated model requires the most evolution, and with this simple evolution scheme, cannot be made to perfectly match the observed distribution by redshift (this in itself is not definitive because one could always devise more complicated schemes which would work). For the concordance model, the required evolution would be about two magnitudes out to z = 3.
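As a rough consistency check on that last figure, here is a short numerical sketch (Python with NumPy/SciPy; it is only an illustration under the simple assumption, described above, that each galaxy brightens by q (t_lb/t_H)^2 magnitudes, and is not the original calculation) of the brightening at z = 3 in the concordance model:

```python
import numpy as np
from scipy.integrate import quad

def E(z, omega_m):
    """Dimensionless Hubble parameter H(z)/H0 for a flat universe."""
    return np.sqrt(omega_m * (1.0 + z) ** 3 + (1.0 - omega_m))

def lookback_time(z, omega_m):
    """Look-back time t_lb(z) in units of the Hubble time 1/H0."""
    t, _ = quad(lambda zp: 1.0 / ((1.0 + zp) * E(zp, omega_m)), 0.0, z)
    return t

q = 3.0          # magnitudes per Hubble-time-squared (concordance value quoted above)
omega_m = 0.3    # flat concordance model, Omega_Lambda = 0.7
t_lb = lookback_time(3.0, omega_m)
print(round(q * t_lb ** 2, 1), "magnitudes of brightening at z = 3")
```

With these inputs the look-back time to z = 3 is about 0.8 Hubble times, so q = 3.0 gives roughly two magnitudes of brightening, consistent with the figure quoted above.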
For these same evolutionary models, that is, with evolution sufficient to match the number counts, the predicted redshift distributions are shown in Fig. 13. Here we see that the model dominated by a cosmological constant predicts too many low redshift galaxies, the matter dominated model predicts too few, and the model that works perfectly is very close to the concordance model! Performing this operation for a number of flat models with variable ΩΛ, I find that 0.59 < ΩΛ < 0.71 to 90% confidence.
Figure 13. The cumulative redshift distribution for galaxies between apparent I-band magnitudes of 22 and 26 (photometric redshifts). The curves are the predicted distributions for the three cosmological models with evolution sufficient to explain the number counts.
Now there are too many assumptions and simplifications to make this definitive. The only point I want to make is that faint galaxy number counts and redshift distributions are completely consistent with the concordance model when one considers the simplest minimalist model for pure luminosity evolution. One may certainly conclude that number counts provide no contradiction to the generally accepted cosmological model of the Universe (to my disappointment). | http://ned.ipac.caltech.edu/level5/Sept03/Sanders/Sanders6_4.html | 13 |
12 | A simple straight edge and a percentage circle offer children access to profound and fundamental learnings in both art and geometry. Making grids from scratch can exercise children's powers of visualization: the abilities developed through interpreting configurations for their 'hidden' geometric shapes, patterns, symmetries, and other attributes.
When I am teaching, these constructions grow out of children making chords. Counting clockwise by 25 they construct a square; counting by 20, a pentagon; by 30, a 10-pointed star. They learn that an inscribed polygon is made of chords, that a regular polygon is the result of counting correctly, that the chords and the inscribed angles are congruent. They know these things because they have MADE them from scratch!
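To make the counting rule concrete, here is a small sketch (Python, written purely for illustration) that lists which of the 100 marks of a percentage circle are visited when counting clockwise by a fixed step; the number of marks visited is what determines whether the chords close into a square, a pentagon, or a star:

```python
def visited_marks(step, divisions=100):
    """Marks hit on a `divisions`-mark circle when counting by `step`,
    starting at 0, until the path closes back on itself."""
    marks, current = [], 0
    while True:
        marks.append(current)
        current = (current + step) % divisions
        if current == 0:
            return marks

for step in (25, 20, 30):
    marks = visited_marks(step)
    print(f"counting by {step}: {len(marks)} points -> {marks}")
```

Counting by 25 closes after 4 points (the square), by 20 after 5 (the pentagon), and by 30 after 10, tracing a star because consecutive chords skip past neighbouring points.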
Let's begin our grid with a set of inscribed polygons. First, a pair of squares at eight points:
Here we should be perceiving at least 8 triangles, 2 squares, 1 eight-pointed star, and 1 octagon, all inside 1 circle:
Below, you will see more sets of polygons. There are 2 octagons, not 1, and more sets of different triangles, some congruent, some similar. All rotate around the center of the circle.
When I present polygons, inscribed polygons, or the diameter of a circle, students (children or teachers) seem to know what these are. We don't go into formal definitions, especially in the context of my introduction, where I talk about two distinct and contrasting kinds of patterns: regular and random.
Regular patterns have motifs that are 'units of repeat'. Though the units themselves are the same, we can vary the pattern by counting them with different numbers, i.e., repetitions: (1212121) (112112112) (112221122211222).
Random patterns are like camouflage: there is no specific or single motif, but the viewer perceives an overall similarity of shapes and colors which, like the transcendental-numbers-after-the-decimal-sign, cannot be predicted. The motif (unit of repeat) and the pattern (the numbered intervals) are 'random'.
Highlighting the polygons with lines in colors helps children to visualize, to 'seek and find'. Given the opportunity to draw this grid from the initial 8 points on the circumference, using a straight edge, learners discover elemental geometric properties as did ancient geometers long ago. They begin to experience the quintessential beauty of geometry.
| http://mathforum.org/sarah/shapiro/shapiro.inscribed.grids.html | 13
52 | Feathered dinosaurs is a term used to describe dinosaurs, particularly maniraptoran dromaeosaurs, that were covered in plumage, ranging from filament-like integumentary structures with few branches to fully developed pennaceous feathers complete with shafts and vanes. The idea of feathered dinosaurs gained acceptance after it was discovered that dinosaurs are closely related to birds. Since then, the term "feathered dinosaurs" has widened to encompass the entire concept of the dinosaur–bird relationship, including the various avian characteristics some dinosaurs possess, such as a pygostyle, a posteriorly oriented pelvis, elongated forelimbs with clawed hands, and clavicles fused to form a furcula. A substantial amount of evidence demonstrates that birds are the descendants of theropod dinosaurs, and that birds evolved during the Jurassic from small, feathered maniraptoran theropods closely related to dromaeosaurids and troodontids (known collectively as deinonychosaurs). Fewer than two dozen species of dinosaurs have been discovered with direct fossil evidence of plumage since the 1990s, with most coming from Cretaceous deposits in China, most notably Liaoning Province. Together, these fossils represent an important transition between dinosaurs and birds, which allows paleontologists to piece together the origin and evolution of birds.
Although preserved integumentary structures are known from only a limited number of non-avian dinosaurs, and are particularly well documented in maniraptoriforms, fossils do suggest that a large number of theropods were feathered, and it has even been suggested, based on phylogenetic analyses, that Tyrannosaurus at one stage of its life may have been covered in down-like feathers, although there is no direct fossil evidence of this. Based on what is known of the dinosaur fossil record, paleontologists generally think that most of dinosaur evolution happened at relatively large body size (a mass greater than a few kilograms), and in animals that were entirely terrestrial. Small size (<1 kg) and arboreal habits seem to have arisen fairly late during dinosaurian evolution, and only within Maniraptora.
Birds were originally linked with other dinosaurs back in the late 1800s, most famously by Thomas Huxley. This view remained fairly popular until the 1920s when Gerhard Heilmann's book The Origin of Birds was published in English. Heilmann argued that birds could not have descended from dinosaurs (predominantly because dinosaurs lacked clavicles, or so he thought), and he therefore favored the idea that birds originated from the so-called 'pseudosuchians': primitive archosaurs that were also thought ancestral to dinosaurs and crocodilians. This became the mainstream view until the 1970s, when a new look at the anatomical evidence (combined with new data from maniraptoran theropods) led John Ostrom to successfully resurrect the dinosaur hypothesis. Fossils of Archaeopteryx include well-preserved feathers, but it was not until the early 1990s that clearly nonavian dinosaur fossils were discovered with preserved feathers. Today there are more than twenty genera of dinosaurs with fossil feathers, nearly all of which are theropods. Most are from the Yixian Formation in China. The fossil feathers of one specimen, Shuvuuia deserti, have even tested positive for beta-keratin, the main protein in bird feathers, in immunological tests.
Shortly after the 1859 publication of Charles Darwin's The Origin of Species, the ground-breaking book which described his theory of evolution by natural selection, British biologist and evolution-defender Thomas Henry Huxley proposed that birds were descendants of dinosaurs. He compared skeletal structure of Compsognathus, a small theropod dinosaur, and the 'first bird' Archaeopteryx lithographica (both of which were found in the Upper Jurassic Bavarian limestone of Solnhofen). He showed that, apart from its hands and feathers, Archaeopteryx was quite similar to Compsognathus. In 1868 he published On the Animals which are most nearly intermediate between Birds and Reptiles, making the case. The leading dinosaur expert of the time, Richard Owen, disagreed, claiming Archaeopteryx as the first bird outside dinosaur lineage. For the next century, claims that birds were dinosaur descendants faded, while more popular bird-ancestry hypotheses including that of a possible 'crocodylomorph' and 'thecodont' ancestor gained ground.
Since the discovery of such theropods as Microraptor and Epidendrosaurus, paleontologists and scientists in general now have small forms exhibiting some features suggestive of a tree-climbing (or scansorial) way of life. However, the idea that dinosaurs might have climbed trees goes back a long way, and well pre-dates the dinosaur renaissance of the 1960s and 70s.
The idea of scansoriality in non-avian dinosaurs has been considered a 'fringe' idea, and it's partly for this reason that, prior to 2000, nobody had attempted any sort of review on the thoughts that had been published about the subject. The oldest reference to scansoriality in a dinosaur comes from William Fox, the Isle of Wight curator and amateur fossil collector, who in 1866 proposed that Calamospondylus oweni from the Lower Cretaceous Wessex Formation of the Isle of Wight might have been in the habit of 'leaping from tree to tree'. The Calamospondylus oweni specimen that Fox referred to was lost, and the actual nature of the fossil remains speculative, but there are various reasons for thinking that it was a theropod. However, it's not entirely accurate to regard Fox's ideas about Calamospondylus as directly relevant to modern speculations about tree-climbing dinosaurs given that, if Fox imagined Calamospondylus oweni as resembling anything familiar, it was probably as a lizard-like reptile, and not as a dinosaur as they are currently understood.
During the early decades of the 20th century the idea of tree-climbing dinosaurs became reasonably popular as Othenio Abel, Gerhard Heilmann and others used comparisons with birds, tree kangaroos and monkeys to argue that the small ornithopod Hypsilophodon (also from the Wessex Formation of the Isle of Wight) was scansorial. Heilmann had come to disagree with this idea and now regarded Hypsilophodon as terrestrial. William Swinton favored the idea of a scansorial Hypsilophodon, concluding that 'it would be able to run up the stouter branches and with hands and tail keep itself balanced until the need for arboreal excursions had passed', and in a 1936 review of Isle of Wight dinosaurs mentioned the idea that small theropods might also have used their clawed hands to hold branches when climbing.
During the 1970s, Peter Galton was able to show that all of the claims made about the forelimb and hindlimb anatomy of Hypsilophodon supposedly favoring a scansorial lifestyle were erroneous, and that this animal was in fact well suited for an entirely terrestrial, cursorial lifestyle. Nevertheless, for several decades Hypsilophodon was consistently depicted as a tree-climber.
In recent decades, Gregory Paul has been influential in arguing that small theropods were capable climbers, and he not only argued for and illustrated scansorial abilities in coelurosaurs, he also proposed that as-yet-undiscovered maniraptorans were highly proficient climbers and included the ancestors of birds. The hypothesized existence of small arboreal theropods that are as yet unknown from the fossil record later proved integral to George Olshevsky's 'Birds Came First' (BCF) hypothesis. Olshevsky argued that all dinosaurs, and in fact all archosaurs, descend from small, scansorial ancestors, and that it is these little climbing reptiles which are the direct ancestors of birds.
Ostrom, Deinonychus and the Dinosaur Renaissance
In 1964, the first specimen of Deinonychus antirrhopus was discovered in Montana, and in 1969, John Ostrom of Yale University described Deinonychus as a theropod whose skeletal resemblance to birds seemed unmistakable. Since that time, Ostrom has been a leading proponent of the theory that birds are direct descendants of dinosaurs. During the late 1960s, Ostrom and others demonstrated that maniraptoran dinosaurs could fold their arms in a manner similar to that of birds. Further comparisons of bird and dinosaur skeletons, as well as cladistic analysis, strengthened the case for the link, particularly for a branch of theropods called maniraptors. Skeletal similarities include the neck, the pubis, the wrists (semi-lunate carpal), the 'arms' and pectoral girdle, the shoulder blade, the clavicle and the breast bone. In all, over a hundred distinct anatomical features are shared by birds and theropod dinosaurs.
Other researchers drew on these shared features and other aspects of dinosaur biology and began to suggest that at least some theropod dinosaurs were feathered. The first restoration of a feathered dinosaur was Sarah Landry's depiction of a feathered "Syntarsus" (now renamed Megapnosaurus or considered a synonym of Coelophysis), in Robert T. Bakker's 1975 publication Dinosaur Renaissance. Gregory S. Paul was probably the first paleoartist to depict maniraptoran dinosaurs with feathers and protofeathers, starting in the late 1980s.
By the 1990s, most paleontologists considered birds to be surviving dinosaurs and referred to 'non-avian dinosaurs' (all extinct), to distinguish them from birds (aves). Before the discovery of feathered dinosaurs, the evidence was limited to Huxley and Ostrom's comparative anatomy. Some mainstream ornithologists, including Smithsonian Institution curator Storrs L. Olson, disputed the links, specifically citing the lack of fossil evidence for feathered dinosaurs.
Modern research and feathered dinosaurs in China
The early 1990s saw the discovery of spectacularly preserved bird fossils in several Early Cretaceous geological formations in the northeastern Chinese province of Liaoning. South American paleontologists, including Fernando Novas and others, discovered evidence showing that maniraptorans could move their arms in a bird-like manner. Gatesy and others suggested that anatomical changes to the vertebral column and hindlimbs occurred before birds first evolved, and Xu Xing and colleagues proved that true functional wings and flight feathers evolved in some maniraptorans, all strongly suggesting that these anatomical features were already well-developed before the first birds evolved.
In 1996, Chinese paleontologists described Sinosauropteryx as a new genus of bird from the Yixian Formation, but this animal was quickly recognized as a theropod dinosaur closely related to Compsognathus. Surprisingly, its body was covered by long filamentous structures. These were dubbed 'protofeathers' and considered to be homologous with the more advanced feathers of birds, although some scientists disagree with this assessment. Chinese and North American scientists described Caudipteryx and Protarchaeopteryx soon after. Based on skeletal features, these animals were non-avian dinosaurs, but their remains bore fully-formed feathers closely resembling those of birds. "Archaeoraptor," described without peer review in a 1999 issue of National Geographic, turned out to be a smuggled forgery, but legitimate remains continue to pour out of the Yixian, both legally and illegally. Many newly described feathered dinosaurs preserve horny claw sheaths, integumentary structures (filaments to fully pennaceous feathers), and internal organs. Feathers or "protofeathers" have been found on a wide variety of theropods in the Yixian, and the discoveries of extremely bird-like dinosaurs, as well as dinosaur-like primitive birds, have almost entirely closed the morphological gap between theropods and birds.
Archaeopteryx, the first good example of a "feathered dinosaur", was discovered in 1861. The initial specimen was found in the solnhofen limestone in southern Germany, which is a lagerstätte, a rare and remarkable geological formation known for its superbly detailed fossils. Archaeopteryx is a transitional fossil, with features clearly intermediate between those of modern reptiles and birds. Discovered just two years after Darwin's seminal Origin of Species, its discovery spurred the nascent debate between proponents of evolutionary biology and creationism. This early bird is so dinosaur-like that, without a clear impression of feathers in the surrounding rock, at least one specimen was mistaken for Compsognathus.
Since the 1990s, a number of additional feathered dinosaurs have been found, providing even stronger evidence of the close relationship between dinosaurs and modern birds. Most of these specimens were unearthed in Liaoning province, northeastern China, which was part of an island continent during the Cretaceous period. Though feathers have been found only in the lagerstätte of the Yixian Formation and a few other places, it is possible that non-avian dinosaurs elsewhere in the world were also feathered. The lack of widespread fossil evidence for feathered non-avian dinosaurs may be due to the fact that delicate features like skin and feathers are not often preserved by fossilization and thus are absent from the fossil record.
A recent development in the debate centers around the discovery of impressions of "protofeathers" surrounding many dinosaur fossils. These protofeathers suggest that the tyrannosauroids may have been feathered. However, others claim that these protofeathers are simply the result of the decomposition of collagenous fiber that underlaid the dinosaurs' integument. The Dromaeosauridae family, in particular, seems to have been heavily feathered and at least one dromaeosaurid, Cryptovolans, may have been capable of flight.
Because feathers are often associated with birds, feathered dinosaurs are often touted as the missing link between birds and dinosaurs. However, the multiple skeletal features also shared by the two groups represent the more important link for paleontologists. Furthermore, it is increasingly clear that the relationship between birds and dinosaurs, and the evolution of flight, are more complex topics than previously realized. For example, while it was once believed that birds evolved from dinosaurs in one linear progression, some scientists, most notably Gregory S. Paul, conclude that dinosaurs such as the dromaeosaurs may have evolved from birds, losing the power of flight while keeping their feathers in a manner similar to the modern ostrich and other ratites.
Comparisons of bird and dinosaur skeletons, as well as cladistic analysis, strengthens the case for the link, particularly for a branch of theropods called maniraptors. Skeletal similarities include the neck, pubis, wrist (semi-lunate carpal), arm and pectoral girdle, shoulder blade, clavicle, and breast bone.
At one time, it was believed that dinosaurs lacked furculae, long thought to be a structure unique to birds, formed by the fusion of the two collarbones (clavicles) into a single V-shaped structure that helps brace the skeleton against the stresses incurred while flapping. This apparent absence was treated as an overwhelming argument against the dinosaur ancestry of birds in Danish artist and naturalist Gerhard Heilmann's monumentally influential The Origin of Birds (1926). Heilmann reasoned that reptiles ancestral to birds should, at the very least, show well-developed clavicles, yet, as he noted in the book, no clavicles had been reported in any theropod dinosaur. He therefore suggested that birds evolved from a more generalized archosaurian ancestor, such as the aptly named Ornithosuchus (literally, "bird-crocodile"), which is now believed to be closer to the crocodile end of the archosaur lineage. At the time, however, Ornithosuchus seemed to be a likely ancestor of more birdlike creatures.
Contrary to what Heilmann believed, paleontologists since the 1980s now accept that clavicles, and in most cases furculae, are a standard feature not just of theropods but of saurischian dinosaurs. Furculae in dinosaurs are not limited to maniraptorans, as evidenced by an article by Chure & Madson in which they described a furcula in an allosaurid dinosaur, a non-avian theropod. In 1983, Rinchen Barsbold reported the first dinosaurian furcula from a specimen of the Cretaceous theropod Oviraptor. A furcula-bearing Oviraptor specimen had previously been known since the 1920s, but because at the time the theropod origin of birds was largely dismissed, it was misidentified for sixty years.
Following this discovery, paleontologists began to find furculae in other theropod dinosaurs. Wishbones are now known from the dromaeosaur Velociraptor, the allosauroid Allosaurus, and the tyrannosaurid Tyrannosaurus rex, to name a few. Up to late 2007, ossified furculae (i.e. made of bone rather than cartilage) have been found in nearly all types of theropods except the most basal ones, Eoraptor and Herrerasaurus. The original report of a furcula in the primitive theropod Segisaurus (1936) has been confirmed by a re-examination in 2005. Joined, furcula-like clavicles have also been found in Massospondylus, an Early Jurassic sauropodomorph, indicating that the evolution of the furcula was well underway when the earliest dinosaurs were diversifying.
In 2000, Alex Downs reported an isolated furcula found within a block of Coelophysis bauri skeletons from the Late Triassic Rock Point Formation at Ghost Ranch, New Mexico. While it seemed likely that it originally belonged to Coelophysis, the block contained fossils from other Triassic animals as well, and Alex declined to make a positive identification. Currently, a total of five C. bauri furculae have been found in the New Mexico Museum of Natural History's (NMMNH) Ghost Ranch, New Mexico Whitaker Quarry block C-8-82. Three of the furculae are articulated in juvenile skeletons; two of these are missing fragments but are nearly complete, and one is apparently complete. Two years later, Tykoski et al. described several furculae from two species of the coelophysoid genus Syntarsus (now Megapnosaurus), S. rhodesiensis and S. kayentakatae, from the Early Jurassic of Zimbabwe and Arizona, respectively. Syntarsus was long considered to be the genus most closely related to Coelophysis, differing only in a few anatomical details and slightly younger age, so the identification of furculae in Syntarsus made it very likely that the furcula Alex Downs noted in 2000 came from Coelophysis after all. By 2006, wishbones were definitively known from the Early Jurassic Coelophysis rhodesiensis and Coelophysis kayentakatae, and a single isolated furcula was known that might have come from the Late Triassic type species, Coelophysis bauri.
Avian air sacs
Large meat-eating dinosaurs had a complex system of air sacs similar to those found in modern birds, according to an investigation led by Patrick O'Connor of Ohio University. The lungs of theropod dinosaurs (carnivores that walked on two legs and had birdlike feet) likely pumped air into hollow sacs in their skeletons, as is the case in birds. "What was once formerly considered unique to birds was present in some form in the ancestors of birds", O'Connor said. In a paper published in the online journal Public Library of Science ONE (September 29, 2008), scientists described Aerosteon riocoloradensis, the skeleton of which supplies the strongest evidence to date of a dinosaur with a bird-like breathing system. CT-scanning revealed evidence of air sacs within the body cavity of the Aerosteon skeleton.
Heart and sleeping posture
Modern computed tomography (CT) scans of a dinosaur chest cavity conducted in 2000 found the apparent remnants of complex four-chambered hearts, much like those found in today's mammals and birds. The idea is controversial within the scientific community, coming under fire for bad anatomical science or simply wishful thinking. The type fossil of the troodont, Mei, is complete and exceptionally well preserved in three-dimensional detail, with the snout nestled beneath one of the forelimbs, similar to the roosting position of modern birds. This demonstrates that the dinosaurs slept like certain modern birds, with their heads tucked under their arms. This behavior, which may have helped to keep the head warm, is also characteristic of modern birds.
A discovery of features in a Tyrannosaurus rex skeleton recently provided more evidence that dinosaurs and birds evolved from a common ancestor and, for the first time, allowed paleontologists to establish the sex of a dinosaur. When laying eggs, female birds grow a special type of bone in their limbs between the hard outer bone and the marrow. This medullary bone, which is rich in calcium, is used to make eggshells. The presence of endosteally derived bone tissues lining the interior marrow cavities of portions of the Tyrannosaurus rex specimen's hind limb suggested that T. rex used similar reproductive strategies, and revealed the specimen to be female. Further research has found medullary bone in the theropod Allosaurus and ornithopod Tenontosaurus. Because the line of dinosaurs that includes Allosaurus and Tyrannosaurus diverged from the line that led to Tenontosaurus very early in the evolution of dinosaurs, this suggests that dinosaurs in general produced medullary tissue. Medullary bone has been found in specimens of sub-adult size, which suggests that dinosaurs reached sexual maturity rather quickly for such large animals. The micro-structure of eggshells and bones has also been determined to be similar to that of birds.
Brooding and care of young
Several specimens of the Mongolian oviraptorid Citipati were discovered in 1993 in a chicken-like brooding position, resting over the eggs in their nests, which may mean these animals were covered with an insulating layer of feathers that kept the eggs warm. All of the nesting specimens are situated on top of egg clutches, with their limbs spread symmetrically on each side of the nest, front limbs covering the nest perimeter. This brooding posture is found today only in birds and supports a behavioral link between birds and theropod dinosaurs. The nesting position of Citipati also supports the hypothesis that it and other oviraptorids had feathered forelimbs. With the 'arms' spread along the periphery of the nest, a majority of eggs would not be covered by the animal's body unless an extensive coat of feathers was present.
A dinosaur embryo was found without teeth, which suggests some parental care was required to feed the young dinosaur, possibly with the adult regurgitating food into the young animal's mouth (see altricial). This behavior is seen in numerous bird species; parent birds regurgitate food into the hatchling's mouth.
The loss of teeth and the formation of a beak appear to have been favored by selection to suit the newly aerodynamic, flight-adapted bodies of early birds. In the Jehol Biota in China, various dinosaur fossils have been discovered that show a variety of different tooth morphologies with respect to this evolutionary trend. Sinosauropteryx fossils display unserrated premaxillary teeth, while the maxillary teeth are serrated. In the preserved remains of Protarchaeopteryx, four premaxillary teeth are present that are serrated. The diminutive oviraptorosaur Caudipteryx has four hook-like premaxillary teeth, and in Microraptor zhaoianus, the posterior teeth of this species had developed a constriction that led to a less compressed tooth crown. These dinosaurs exhibit a heterodont dentition pattern that clearly illustrates a transition from the teeth of maniraptorans to those of early, basal birds.
Molecular evidence and soft tissue
One of the best examples of soft tissue impressions in a fossil dinosaur was discovered in Petraroia, Italy. The discovery was reported in 1998, and described the specimen of a small, very young coelurosaur, Scipionyx samniticus. The fossil includes portions of the intestines, colon, liver, muscles, and windpipe of this immature dinosaur.
In the March 2005 issue of Science, Dr. Mary Higby Schweitzer and her team announced the discovery of flexible material resembling actual soft tissue inside a 68-million-year-old Tyrannosaurus rex leg bone from the Hell Creek Formation in Montana. After recovery, the tissue was rehydrated by the science team. The seven collagen types obtained from the bone fragments, compared to collagen data from living birds (specifically, a chicken), reveal that older theropods and birds are closely related.
When the fossilized bone was treated over several weeks to remove mineral content from the fossilized bone marrow cavity (a process called demineralization), Schweitzer found evidence of intact structures such as blood vessels, bone matrix, and connective tissue (bone fibers). Scrutiny under the microscope further revealed that the putative dinosaur soft tissue had retained fine structures (microstructures) even at the cellular level. The exact nature and composition of this material, and the implications of Dr. Schweitzer's discovery, are not yet clear; study and interpretation of the specimens is ongoing.
The successful extraction of ancient DNA from dinosaur fossils has been reported on two separate occasions, but upon further inspection and peer review, neither of these reports could be confirmed. However, a functional visual peptide of a theoretical dinosaur has been inferred using analytical phylogenetic reconstruction methods on gene sequences of related modern species such as reptiles and birds. In addition, several proteins have putatively been detected in dinosaur fossils, including hemoglobin.
Feathers are extremely complex integumentary structures that characterize a handful of vertebrate animals. Although it is generally acknowledged that feathers are derived and evolved from simpler integumentary structures, the origin and early diversification of feathers were relatively poorly understood until recently, and research is ongoing. Since the theropod ancestry of birds is widely supported by osteological and other physical lines of evidence, the presence of feather precursors in non-avian dinosaurs is expected, as predicted by those who originally proposed a theropod origin for birds. In 2006, Chinese paleontologist Xu Xing stated in a paper that since many members of Coelurosauria exhibit miniaturization, primitive integumentary structures (and later on feathers) evolved in order to insulate their small bodies.
The functional view on the evolution of feathers has traditionally focussed on insulation, flight and display. Discoveries of non-flying Late Cretaceous feathered dinosaurs in China, however, suggest that flight could not have been the original primary function; feathers in dinosaurs must originally have served some other purpose. Proposed functions range from insulation, acquired as these animals metabolically diverged from their cold-blooded reptilian ancestors, to increasing running speed. It has been suggested that vaned feathers evolved in the context of thrust, with running non-avian theropods flapping their arms to increase their running speed.
The following is the generally acknowledged version of the origin and early evolution of feathers:
- The first feathers evolved; they are single filaments.
- Branching structures developed.
- The rachis evolved.
- Pennaceous feathers evolved.
- Aerodynamic morphologies appeared. (curved shaft and asymmetrical vanes)
This scenario appears to indicate that downy, contour, and flight feathers are more derived forms of the first "feather". However, it is also possible that protofeathers and basal feathers disappeared early on in the evolution of feathers and that more primitive feathers in modern birds are secondary. This would imply that the feathers in modern birds have nothing to do with protofeathers.
A study by Prum and Brush (2002) suggested that the feathers of birds are not homologous with the scales of reptiles. Their model of feather evolution posits that feathers first arose as a follicle emerging from the skin's surface, with no relation to reptilian scales. After this initial event, new morphological characteristics were added to the feather design and more complex feathers evolved. While this model agrees with the distribution of various feather morphologies in coelurosaurs, it is also at odds with other evidence. The feather bristles of modern-day turkeys resemble the hair-like integumentary structures found in some maniraptorans, pterosaurs (see Pterosauria#Pycnofibers), and ornithischians; such structures are widely regarded as homologous to modern feathers, yet they also show distinct, feather-like characteristics of their own. This has led some paleontologists, such as Xu Xing, to theorize that feathers share homology with lizard scales after all.
- Stage I: Tubular filaments and feather-type beta keratin evolved.
- Stage II: The filamentous structure evolved distal branches.
- Stage III: Xu Xing described this stage as being the most important stage. The main part of the modern feather, the feather follicle, appeared, along with the rachis, and planar feather forms developed.
- Stage IV: Large, stiff, well-developed pennaceous feathers evolved on the limbs and tails of maniraptoran dinosaurs. Barbules evolved.
- Stage V: Feather tracts (pennaceous feathers that are located on regions other than the limbs and tail) evolved. Specialized pennaceous feathers developed.
Xu Xing himself stated that this new model was similar to the one put forward by Richard Prum, with the exception that Xu's model posits that feathers "feature a combination of transformation and innovation". This view differs from Prum's model in that Prum suggested that feathers were purely an evolutionary novelty. Xu's new model also suggests that the tubular filaments and branches evolved before the appearance of the feather follicle, while acknowledging that the follicle was an important development in feather evolution, again in contrast to Prum's model.
Primitive feather types
The evolution of feather structures is thought to have proceeded from simple hollow filaments through several stages of increasing complexity, ending with the large, deeply rooted feathers with strong shafts (rachises), barbs and barbules that birds display today. It is logical that the simplest structures were probably most useful as insulation, and that this implies homeothermy. Only the more complex feather structures would be likely candidates for aerodynamic uses.
Models of feather evolution often propose that the earliest prototype feathers were hair-like integumentary filaments similar to the structures of Sinosauropteryx, a compsognathid (Jurassic/Cretaceous, 150-120 Ma), and Dilong, a basal tyrannosauroid from the Early Cretaceous. It is not known with certainty at what point in archosaur phylogeny the earliest simple “protofeathers” arose, or if they arose once or, independently, multiple times. Filamentous structures are clearly present in pterosaurs, and long, hollow quills have been reported in a specimen of Psittacosaurus from Liaoning. It is thus possible that the genes for building simple integumentary structures from beta keratin arose before the origin of dinosaurs, possibly in the last common ancestor with pterosaurs – the basal Ornithodire.
In Prum's model of feather evolution, hollow quill-like integumentary structures of this sort were termed Stage 1 feathers. The idea that feathers started out as hollow quills also supports Alan Brush's idea that feathers are evolutionary novelties, and not derived from scales. However, in order to determine the homology of Stage 1 feathers, it is necessary to determine their proteinaceous content: unlike the epidermal appendages of all other vertebrates, feathers are almost entirely composed of beta-keratins (as opposed to alpha-keratins) and, more specifically, they are formed from a group of beta-keratins called phi-keratins. No studies have yet been performed on the Stage 1 structures of Sinosauropteryx or Dilong in order to test their proteinaceous composition, however, tiny filamentous structures discovered adjacent to the bones of the alvarezsaurid Shuvuuia have been tested for beta-keratin, and the structures were discovered to be composed of beta-keratin. Alvarezsaurids have been of controversial phylogenetic position, but are generally agreed to be basal members of the Maniraptora clade. Due to this discovery, paleontologists are now convinced that beta-keratin-based protofeathers had evolved at the base of this clade at least.
Vaned, pennaceous feathers
While basal coelurosaurs possessed these apparently hollow quill-like 'Stage 1' filaments, they lacked the more complex structures seen in maniraptorans. Maniraptorans possessed vaned feathers with barbs, barbules and hooklets just like those of modern birds.
The first dinosaur fossils from the Yixian formation found to have true flight-structured feathers (pennaceous feathers) were Protarchaeopteryx and Caudipteryx (135-121 Ma). Due to the size and proportions of these animals it is more likely that their feathers were used for display rather than for flight. Subsequent dinosaurs found with pennaceous feathers include Pedopenna and Jinfengopteryx. Several specimens of Microraptor, described by Xu et al. in 2003, show not only pennaceous feathers but also true asymmetrical flight feathers, present on the fore and hind limbs and tail. Asymmetrical feathers are considered important for flight in birds. Before the discovery of Microraptor gui, Archaeopteryx was the most primitive known animal with asymmetrical flight feathers.
However, the bodies of maniraptorans were not covered in vaned feathers as are those of the majority of living birds: instead, it seems that they were at least partly covered in the more simple structures that they had inherited from basal coelurosaurs like Sinosauropteryx. This condition may have been retained all the way up into basal birds: despite all those life restorations clothing archaeopterygids in vaned breast, belly, throat and neck feathers, it seems that their bodies also were at least partly covered in the more simple filamentous structures. The Berlin Archaeopteryx specimen appears to preserve such structures on the back of the neck though pennaceous vaned feathers were present on its back, at least.
Though it has been suggested at times that vaned feathers simply must have evolved for flight, the phylogenetic distribution of these structures currently indicates that they first evolved in flightless maniraptorans and were only later exapted by long-armed maniraptorans for use in locomotion. Of course a well-known minority opinion, best known from the writings of Gregory Paul, is that feathered maniraptorans are secondarily flightless and descend from volant bird-like ancestors. While this hypothesis remains possible, it lacks support from the fossil record, though that may or may not mean much, as the fossil record is incomplete and prone to selection bias.
The discovery of Epidexipteryx represented the earliest known examples of ornamental feathers in the fossil record. Epidexipteryx is known from a well preserved partial skeleton that includes four long feathers on the tail, composed of a central rachis and vanes. However, unlike in modern-style rectrices (tail feathers), the vanes were not branched into individual filaments but made up of a single ribbon-like sheet. Epidexipteryx also preserved a covering of simpler body feathers, composed of parallel barbs as in more primitive feathered dinosaurs. However, the body feathers of Epidexipteryx are unique in that some appear to arise from a "membranous structure." The skull of Epidexipteryx is also unique in a number of features, and bears an overall similarity to the skull of Sapeornis, oviraptorosaurs and, to a lesser extent, therizinosauroids. The tail of Epidexipteryx bore unusual vertebrae towards the tip which resembled the feather-anchoring pygostyle of modern birds and some oviraptorosaurs. Despite its close relationship to avialan birds, Epidexipteryx appears to have lacked remiges (wing feathers), and it likely could not fly. Zhang et al. suggest that unless Epidexipteryx evolved from flying ancestors and subsequently lost its wings, this may indicate that advanced display feathers on the tail may have predated flying or gliding flight.
According to the model of feather evolution developed by Prum & Brush, feathers started out ('stage 1') as hollow cylinders, then ('stage 2') became unbranched barbs attached to a calamus. By stage 3, feathers were planar structures with the barbs diverging from a central rachis, and from there pennaceous feathers developed. The feathers of Epidexipteryx may represent stage 2 structures, but they also suggest that a more complicated sequence of steps in the evolution of feathers took place.
Use in predation
Several maniraptoran lineages were clearly predatory and, given the morphology of their manual claws, fingers and wrists, presumably in the habit of grabbing at prey with their hands. Contrary to popular belief, feathers on the hands would not have greatly impeded the use of the hands in predation. Because the feathers are attached at an angle roughly perpendicular to the claws, they are oriented tangentially to the prey's body, regardless of prey size. It is important to note here that theropod hands appear to have been oriented such that the palms faced medially (facing inwards), and were not parallel to the ground as used to be imagined.
However, feathering would have interfered with the ability of the hands to bring a grasped object up toward the mouth given that extension of the maniraptoran wrist would have caused the hand to rotate slightly upwards on its palmar side. If both feathered hands are rotated upwards and inwards at the same time, the remiges from one hand would collide with those of the other. For this reason, maniraptorans with feathered hands could grasp objects, but would probably not be able to carry them with both hands. However, dromaeosaurids and other maniraptorans may have solved this problem by clutching objects single-handedly to the chest. Feathered hands would also have restricted the ability of the hands to pick objects off of the ground, given that the feathers extend well beyond the ends of the digits. It remains possible that some maniraptorans lacked remiges on their fingers, but the only evidence available indicates the contrary. It has recently been argued that the particularly long second digit of the oviraptorosaur Chirostenotes was used as a probing tool, locating and extracting invertebrates and small mammals and so on from crevices and burrows. It seems highly unlikely that a digit that is regularly thrust into small cavities would have had feathers extending along its length, so either Chirostenotes didn't probe as proposed, or its second finger was unfeathered, unlike that of Caudipteryx and the other feathered maniraptorans. Given the problems that the feathers might have posed for clutching and grabbing prey from the ground, we might also speculate that some of these dinosaurs deliberately removed their own remiges by biting them off. Some modern birds (notably motmots) manipulate their own feathers by biting off some of the barbs, so this is at least conceivable, but no remains in the fossil record have been recovered that support this conclusion.
Some feather morphologies in non-avian theropods are comparable to those on modern birds. Single filament-like structures are not present in modern feathers, although some birds possess highly specialized feathers that are superficially similar to the protofeathers of non-avian theropods. Tuft-like structures seen in some maniraptorans are similar to the natal down of modern birds. Similarly, structures in the fossil record composed of a series of filaments joined at their bases along a central filament bear an uncanny resemblance to the down feathers of modern birds, except for their lack of barbules. Furthermore, structures recovered from Chinese Cretaceous deposits that consist of a series of filaments joined at their bases at the distal portion of a central filament bear a superficial resemblance to filoplumes. More derived, pennaceous feathers on the tails and limbs of feathered dinosaurs are nearly identical to the remiges and rectrices of modern birds.
Feather structures and anatomy
Feathers vary in length according to their position on the body, with the filaments of the compsognathid Sinosauropteryx being 13 mm and 21 mm long on the neck and shoulders respectively. In contrast, the structures on the skull are about 5 mm long, those on the arm about 2 mm long, and those on the distal part of the tail about 4 mm long. Because the structures tend to be clumped together it is difficult to be sure of an individual filament's morphology. The structures might have been simple and unbranched, but Currie & Chen (2001) thought that the structures on Sinosauropteryx might be branched and rather like the feathers of birds that have short quills but long barbs. The similar structures of Dilong also appear to exhibit a simple branching structure.
Exactly how feathers were arranged on the arms and hands of both basal birds and non-avian maniraptorans has long been unclear, and both non-avian maniraptorans and archaeopterygids have conventionally been depicted as possessing unfeathered fingers. However, the second finger is needed to support the remiges, and therefore must have been feathered. Derek Yalden's 1985 study was important in showing exactly how the remiges would have grown off of the first and second phalanges of the archaeopterygid second finger, and this configuration has been widely recognized.
However, there has been some minor historical disagreement over exactly how many remiges were present in archaeopterygids (there were most likely 11 primaries and a tiny distal 12th one, and at least 12 secondaries), and also about how the hand claws were arranged. The claws were directed perpendicularly to the palmar surface in life, and rotated anteriorly in most (but not all) specimens during burial. It has also been suggested on occasion that the fingers of archaeopterygids and other feathered maniraptorans were united in a single fleshy 'mitten' as they are in modern birds, and hence unable to be employed in grasping. However, given that the interphalangeal finger joints of archaeopterygids appear suited for flexion and extension, and that the third finger apparently remained free and flexible in birds more derived than archaeopterygids, this is unlikely to be correct; it's based on a depression in the sediment that was identified around the bones.
Like those of archaeopterygids and modern birds, the remiges of non-avian theropods would also have been attached to the phalanges of the second manual digit as well as to the metacarpus and ulna, and indeed we can see this in the fossils. It's the case in the sinornithosaur NGMC 91-A and Microraptor. Surprisingly, in Caudipteryx, the remiges are restricted to the hands alone, and don't extend from the arm. They seem to have formed little 'hand flags' that are unlikely to have served any function other than display. Caudipteryx is an oviraptorosaur and possesses a suite of characters unique to this group. It is not a member of Aves, despite the efforts of some workers to make it into one. The hands of Caudipteryx supported symmetrical, pennaceous feathers that had vanes and barbs, and that measured between 15–20 centimeters long (6–8 inches). These primary feathers were arranged in a wing-like fan along the second finger, just like primary feathers of birds and other maniraptorans. No fossil of Caudipteryx preserves any secondary feathers attached to the forearms, as found in dromaeosaurids, Archaeopteryx and modern birds. Either these arm feathers are not preserved, or they were not present on Caudipteryx in life. An additional fan of feathers existed on its short tail. The shortness and symmetry of the feathers, and the shortness of the arms relative to the body size, indicate that Caudipteryx could not fly. The body was also covered in a coat of short, simple, down-like feathers.
A small minority, including ornithologists Alan Feduccia and Larry Martin, continues to assert that birds are instead the descendants of earlier archosaurs, such as Longisquama or Euparkeria. Embryological studies of bird developmental biology have raised questions about digit homology in bird and dinosaur forelimbs.
Opponents also claim that the dinosaur-bird hypothesis is dogma, apparently on the grounds that those who accept it have not accepted the opponents' arguments for rejecting it. However, science does not require unanimity and does not force agreement, nor does science settle issues by vote. It has been over 25 years since John Ostrom first put forth the dinosaur-bird hypothesis in a short article in Nature, and the opponents of this theory have yet to propose an alternative, testable hypothesis. However, due to the cogent evidence provided by comparative anatomy and phylogenetics, as well as the dramatic feathered dinosaur fossils from China, the idea that birds are derived dinosaurs, first championed by Huxley and later by Nopcsa and Ostrom, enjoys near-unanimous support among today's paleontologists.
BADD, BAND, and the Birds Came First hypothesis
- Main article: Birds Came First
The non-standard, non-mainstream Birds Came First (or BCF) hypothesis proposed by George Olshevsky accepts that there is a close relationship between dinosaurs and birds, but argues that, given this relationship alone, it is just as likely that dinosaurs descended from birds as the other way around. The hypothesis does not propose that birds in the proper sense evolved earlier than did other dinosaurs or other archosaurs: rather, it posits that small, bird-like, arboreal archosaurs were the direct ancestors of all the archosaurs that came later on (proper birds included). Olshevsky was aware of this, and apparently considered the rather tongue-in-cheek alternative acronym GOODD, meaning George Olshevsky On Dinosaur Descendants. This was, of course, meant as the opposite of the also tongue-in-cheek BADD (Birds Are Dinosaur Descendants): the term Olshevsky uses for the 'conventional' or 'mainstream' view of avian origins outlined in the first two paragraphs above. 'BADD' is bad, according to BCF, because it imagines that small size, feathers and arboreal habits all evolved very late in archosaur evolution, and exclusively within maniraptoran theropod dinosaurs.
Protoavis is a Late Triassic archosaurian whose fossilized remains were found near Post, Texas. These fossils have been described as a primitive bird which, if the identification is valid, would push back avian origins some 60-75 million years.
Though it existed far earlier than Archaeopteryx, its skeletal structure is allegedly more bird-like. The fossil bones are too badly preserved to allow an estimate of flying ability; although reconstructions usually show feathers, judging from thorough study of the fossil material there is no indication that these were present.
However, this description of Protoavis assumes that it has been correctly interpreted as a bird. Almost all paleontologists doubt that Protoavis is a bird, or that all remains assigned to it even come from a single species, because of the circumstances of its discovery and the unconvincing avian synapomorphies in its fragmentary material. When they were found at a Dockum Formation quarry in the Texas panhandle in 1984, in sedimentary strata of a Triassic river delta, the fossils were a jumbled cache of disarticulated bones that may reflect an incident of mass mortality following a flash flood.
Scientists such as Alan Feduccia have cited Protoavis in an attempt to refute the hypothesis that birds evolved from dinosaurs. However, even if Protoavis were a bird, the only consequence would be to push the point of divergence further back in time. At the time when such claims were originally made, the affiliation of birds and maniraptoran theropods, which today is well supported and generally accepted by most ornithologists, was much more contentious; most Mesozoic birds have only been discovered since then. Chatterjee himself has since used Protoavis to support a close relationship between dinosaurs and birds.
"As there remains no compelling data to support the avian status of Protoavis or taxonomic validity thereof, it seems mystifying that the matter should be so contentious. The author very much agrees with Chiappe in arguing that at present, Protoavis is irrelevant to the phylogenetic reconstruction of Aves. While further material from the Dockum beds may vindicate this peculiar archosaur, for the time being, the case for Protoavis is non-existent."
Claimed temporal paradox
The temporal paradox, or time problem, is a controversial issue in the evolutionary relationships of feathered dinosaurs and birds. It was originally conceived of by paleornithologist Alan Feduccia. The concept is based on the following apparent facts. The consensus view is that birds evolved from dinosaurs, but the most bird-like dinosaurs, and those most closely related to birds (the maniraptorans), are known mostly from the Cretaceous, by which time birds had already evolved and diversified. If bird-like dinosaurs are the ancestors of birds they should be older than birds, but Archaeopteryx is 155 million years old, while the very bird-like Deinonychus is 35 million years younger. This idea is sometimes summarized as "you can't be your own grandmother". The development of avian characteristics in dinosaurs supposedly should have led to the first modern bird appearing about 60 million years ago. However, Archaeopteryx lived 150 million years ago, long before any of these bird-like changes took place in dinosaurs. On this view, each of the feathered dinosaur families developed avian-like features in its own way, so there were many different lines of evolution, and Archaeopteryx was merely the result of one such line.
Numerous researchers have discredited the idea of the temporal paradox. Witmer (2002) summarized this critical literature by pointing out that there are at least three lines of evidence that contradict it. First, no one has proposed that maniraptoran dinosaurs of the Cretaceous are the ancestors of birds; researchers have merely found that dinosaurs like dromaeosaurs, troodontids and oviraptorosaurs are close relatives of birds. The true ancestors are thought to be older than Archaeopteryx, perhaps Early Jurassic or even older. The scarcity of maniraptoran fossils from this time is not surprising, since fossilization is a rare event requiring special circumstances, and fossils may simply never have formed for animals in some of the ages they actually inhabited. Secondly, fragmentary remains of maniraptoran dinosaurs actually have been known from Jurassic deposits in China, North America, and Europe for many years. The femur of a tiny maniraptoran from the Late Jurassic of Colorado was reported by Padian and Jensen in 1989. In a 2009 article in the journal Acta Palaeontologica Polonica, six velociraptorine dromaeosaurid teeth were described from a bone bed in the Langenberg Quarry of Oker (Goslar, Germany). These teeth are notable in that they date to the Kimmeridgian stage of the Late Jurassic, roughly 155-150 Ma, and represent some of the earliest dromaeosaurids known to science, further refuting a "temporal paradox". Furthermore, a small, as yet undescribed troodontid known as WDC DML 001 was announced in 2003 as having been found in the Late Jurassic Morrison Formation of eastern/central Wyoming. The presence of this derived maniraptoran in Jurassic sediments is a strong refutation of the "temporal paradox". Third, if the temporal paradox really indicated that birds should not have evolved from dinosaurs, then what animals would make more likely ancestors, considering their age? Brochu and Norell (2001) analyzed this question using several of the other archosaurs that have been proposed as bird ancestors, and found that all of them create temporal paradoxes—long stretches between the ancestor and Archaeopteryx where there are no intermediate fossils—that are actually worse. Thus, even if one used the logic of the temporal paradox, one should still prefer dinosaurs as the ancestors of birds.
Quick & Ruben (2009)
In their 2009 paper, Quick & Ruben argue that modern birds are fundamentally different from non-avian dinosaurs in terms of abdominal soft-tissue morphology, implying that birds cannot be modified dinosaurs. The paper asserts that a specialized 'femoral-thigh complex', combined with a synsacrum and ventrally separated pubic bones, provides crucial mechanical support for the abdominal wall in modern birds, and has thereby allowed the evolution of large abdominal air-sacs that function in respiration. In contrast, say the authors, theropod dinosaurs lack these features and had a highly mobile femur that cannot have been incorporated into abdominal support. Therefore, non-avian theropods cannot have had abdominal air-sacs that functioned like those of modern birds, and non-avian theropods were fundamentally different from modern birds. However, this implication was not explicitly stated in the paper itself, but was of course played up in the press interviews. Papers in this vein never really demonstrate anything; they merely try to shoot holes in a given line of supporting evidence. It has been argued that respiratory turbinates supposedly falsify dinosaur endothermy, even though it has never been demonstrated that respiratory turbinates really are a requirement for any given physiological regime, and even though there are endotherms that lack respiratory turbinates. The innards of Sinosauropteryx and Scipionyx also supposedly falsify avian-like air-sac systems in non-avian coelurosaurs and demonstrate a crocodilian-like hepatic piston diaphragm, even though personal interpretation is required to accept that this claim might be correct. Furthermore, even though crocodilians and dinosaurs are fundamentally different in pelvic anatomy, some living birds have the key soft-tissue traits reported by Ruben et al. in Sinosauropteryx and Scipionyx, and yet still have an avian respiratory system. For a more detailed rebuttal of Quick & Ruben's paper, see the post by Darren Naish at Tetrapod Zoology.
There have been claims that the supposed feathers of the Chinese fossils are a preservation artifact. Despite these doubts, the fossil feathers have roughly the same appearance as those of birds fossilized in the same locality, so there is no serious reason to think they are of a different nature; moreover, no non-theropod fossil from the same site shows such an artifact, though some preserve unambiguous hair (some mammals) or scales (some reptiles).
Some researchers have interpreted the filamentous impressions around Sinosauropteryx fossils as remains of collagen fibers, rather than primitive feathers. Since they are clearly external to the body, these researchers have proposed that the fibers formed a frill on the back of the animal and underside of its tail, similar to some modern aquatic lizards.
If correct, this interpretation would refute the proposal that Sinosauropteryx is the most basal known theropod genus with feathers, and would also call into question the current theory of feather origins itself: the idea that the first feathers evolved not for flight but for insulation, and that they made their first appearance in relatively basal dinosaur lineages that later evolved into modern birds.
The Archaeoraptor fake
- Main article: Archaeoraptor
In 1999, a supposed 'missing link' fossil of an apparently feathered dinosaur named "Archaeoraptor liaoningensis", found in Liaoning Province, northeastern China, turned out to be a forgery. Comparing the photograph of the specimen with another find, Chinese paleontologist Xu Xing came to the conclusion that it was composed of two portions of different fossil animals. His claim made National Geographic review their research and they too came to the same conclusion. The bottom portion of the "Archaeoraptor" composite came from a legitimate feathered dromaeosaurid now known as Microraptor, and the upper portion from a previously-known primitive bird called Yanornis.
Flying and gliding
The ability to fly or glide has been suggested for at least two dromaeosaurid species. The first, Rahonavis ostromi (originally classified as a bird, but found to be a dromaeosaurid in later studies), may have been capable of powered flight, as indicated by its long forelimbs with evidence of quill knob attachments for long, sturdy flight feathers. The forelimbs of Rahonavis were more powerfully built than those of Archaeopteryx, and show evidence that they bore strong ligament attachments necessary for flapping flight. Luis Chiappe concluded that, given these adaptations, Rahonavis could probably fly but would have been more clumsy in the air than modern birds.
Another species of dromaeosaurid, Microraptor gui, may have been capable of gliding using its well-developed wings on both the fore and hind limbs. Microraptor was among the first non-avian dinosaurs discovered with the impressions of feathers and wings. On Microraptor, the long feathers on the forelimbs possess asymmetrical vanes. The external vanes are narrow, while the internal ones are broad. In addition, Microraptor possessed elongated remiges with asymmetrical vanes that demonstrate aerodynamic function on the hind limbs. A 2005 study by Sankar Chatterjee suggested that the wings of Microraptor functioned like a split-level "biplane", and that it likely employed a phugoid style of gliding, in which it would launch from a perch and swoop downward in a 'U' shaped curve, then lift again to land on another tree, with the tail and hind wings helping to control its position and speed. Chatterjee also found that Microraptor had the basic requirements to sustain level powered flight in addition to gliding.
Microraptor had two sets of wings, on both its forelegs and hind legs. The long feathers on the legs of Microraptor were true flight feathers as seen in modern birds, with asymmetrical vanes on the arm, leg, and tail feathers. As in bird wings, Microraptor had both primary (anchored to the hand) and secondary (anchored to the arm) flight feathers. This standard wing pattern was mirrored on the hind legs, with flight feathers anchored to the upper foot bones as well as the upper and lower leg. Chinese scientists proposed that the animal glided and probably lived in trees, pointing out that wings anchored to the feet of Microraptor would have hindered its ability to run on the ground, and suggesting that all primitive dromaeosaurids may have been arboreal.
Sankar Chatterjee determined in 2005 that, in order for the creature to glide or fly, the wings must have been on different levels (as on a biplane) and not overlaid (as on a dragonfly), and that the latter posture would have been anatomically impossible. Using this biplane model, Chatterjee was able to calculate possible methods of gliding, and determined that Microraptor most likely employed a phugoid style of gliding—launching itself from a perch, the animal would have swooped downward in a deep 'U' shaped curve and then lifted again to land on another tree. The feathers not directly employed in the biplane wing structure, like those on the tibia and the tail, could have been used to control drag and alter the flight path, trajectory, etc. The orientation of the hind wings would also have helped the animal control its gliding flight. In 2007, Chatterjee used computer algorithms that test animal flight capacity to determine whether or not Microraptor was capable of true, powered flight, in addition to passive gliding. The resulting data showed that Microraptor did have the requirements to sustain level powered flight, so it is theoretically possible that the animal flew on occasion in addition to gliding.
Saurischian integumentary structures
The hip structure seen in modern birds actually evolved independently within the "lizard-hipped" saurischians (specifically, within a sub-group of saurischians called the Maniraptora) in the Jurassic Period. In this example of convergent evolution, birds developed hips oriented similarly to the earlier ornithischian hip anatomy, in both cases possibly as an adaptation to a herbivorous or omnivorous diet.
In Saurischia, maniraptorans are characterized by long arms and three-fingered hands, as well as a "half-moon shaped" (semi-lunate) bone in the wrist (carpus). Maniraptorans are the only dinosaurs known to have breast bones (ossified sternal plates). In 2004, Tom Holtz and Halszka Osmólska pointed out six other maniraptoran characters relating to specific details of the skeleton. Unlike most other saurischian dinosaurs, which have pubic bones that point forward, several groups of maniraptorans have an ornithischian-like backwards-pointing hip bone. A backward-pointing hip characterizes the therizinosaurs, dromaeosaurids, avialans, and some primitive troodontids. The fact that the backward-pointing hip is present in so many diverse maniraptoran groups has led most scientists to conclude that the "primitive" forward-pointing hip seen in advanced troodontids and oviraptorosaurs is an evolutionary reversal, and that these groups evolved from ancestors with backward-pointing hips.
Modern pennaceous feathers and remiges are known from advanced maniraptoran groups (Oviraptorosauria and Paraves). More primitive maniraptorans, such as therizinosaurs (specifically Beipiaosaurus), preserve a combination of simple downy filaments and unique elongated quills. Powered and/or gliding flight is present in members of Avialae, and possibly in some dromaeosaurids such as Rahonavis and Microraptor. Simple feathers are known from more primitive coelurosaurs such as Sinosauropteryx, and possibly from even more distantly related species such as the ornithischian Tianyulong and the flying pterosaurs. Thus it appears as if some form of feathers or down-like integument would have been present in all maniraptorans, at least when they were young.
Skin impressions from the type specimen of Beipiaosaurus inexpectus indicated that the body was covered predominantly by downy, feather-like fibers, similar to those of Sinosauropteryx but longer, and oriented perpendicular to the arm. Xu et al., who described the specimen, suggested that these downy feathers represent an intermediate stage between Sinosauropteryx and more advanced birds (Avialae).
Unique among known theropods, Beipiaosaurus also possessed a secondary coat of much longer, simpler feathers that rose out of the down layer. These unique feathers (known as EBFFs, or elongated broad filamentous feathers) were first described by Xu et al. in 2009, based on a specimen consisting of the torso, head and neck. Xu and his team also found EBFFs in the original type specimen of B. inexpectus, revealed by further preparation. The holotype also preserved a pygostyle-like structure. The holotype was discovered in two phases. Limb fragments and dorsal and cervical vertebrae were discovered initially. The discovery site was re-excavated later on, and this time an articulated tail and partial pelvis were discovered. All come from the same individual.
The holotype has the largest proto-feathers known of any feathered dinosaur, with describing author and paleontologist Xing Xu stating: "Most integumentary filaments are about 50 mm in length, although the longest is up to 70 mm. Some have indications of branching distal ends." The holotype also preserved dense patches of parallel integumentary structures in association with its lower arm and leg.
Thick, stiff, spine-like structures were recovered sprouting from the new specimen's throat region, the back of its head, its neck and its back. New preparation of the holotype reveals that the same structures are also present on the tail (though not associated with the pygostyle-like structure).
The EBFFs differ from other feather types in that they consist of a single, unbranched filament. Most other primitive feathered dinosaurs have down-like feathers made up of two or more filaments branching out from a common base or along a central shaft. The EBFFs of Beipiaosaurus are also much longer than other primitive feather types, measuring about 100-150 millimeters (4-6 inches) long, roughly half the length of the neck. In Sinosauropteryx, the longest feathers are only about 15% of the neck length. The EBFFs of Beipiaosaurus are also unusually broad, up to 3 mm wide in the type specimen. The broadest feathers of Sinosauropteryx are only 0.2 mm wide, and only slightly wider in larger forms such as Dilong. Additionally, where most primitive feather types are circular in cross section, EBFFs appear to be oval-shaped. None of the preserved EBFFs were curved or bent beyond a broad arc in either specimen, indicating that they were fairly stiff. They were probably hollow, at least at the base.
In a 2009 interview, Xu stated: "Both [feather types] are definitely not for flight, inferring the function of some structures of extinct animals would be very difficult, and in this case, we are not quite sure whether these feathers are for display or some other functions." He speculated that the finer feathers served as an insulatory coat and that the larger feathers were ornamental, perhaps for social interactions such as mating or communication.
Long filamentous structures have been preserved along with skeletal remains of numerous coelurosaurs from the Early Cretaceous Yixian Formation and other nearby geological formations from Liaoning, China. These filaments have usually been interpreted as "protofeathers," homologous with the branched feathers found in birds and some non-avian theropods, although other hypotheses have been proposed. A skeleton of Dilong was described in the scientific journal Nature in 2004 that included the first example of "protofeathers" in a tyrannosauroid from the Yixian Formation of China. Similarly to down feathers of modern birds, the "protofeathers" found in Dilong were branched but not pennaceous, and may have been used for insulation.
The presence of "protofeathers" in basal tyrannosauroids is not surprising, since they are now known to be characteristic of coelurosaurs, found in other basal genera like Sinosauropteryx, as well as all more derived groups. Rare fossilized skin impressions of large tyrannosaurids lack feathers, however, instead showing skin covered in scales. While it is possible that protofeathers existed on parts of the body which have not been preserved, a lack of insulatory body covering is consistent with modern multi-ton mammals such as elephants, hippopotamuses, and most species of rhinoceros. Alternatively, secondary loss of "protofeathers" in large tyrannosaurids may be analogous with the similar loss of hair in the largest modern mammals like elephants, where a low surface area-to-volume ratio slows down heat transfer, making insulation by a coat of hair unnecessary. Therefore, as large animals evolve in or disperse into warm climates, a coat of fur or feathers loses its selective advantage for thermal insulation and can instead become a disadvantage, as the insulation traps excess heat inside the body, possibly overheating the animal. Protofeathers may also have been secondarily lost during the evolution of large tyrannosaurids, especially in warm Cretaceous climates. Tyrannosaurus at one stage of its life may have been covered in down-like feathers, although there is no direct fossil evidence of this.
A few troodont fossils, including specimens of Mei and Sinornithoides, demonstrate that these animals roosted like birds, with their heads tucked under their forelimbs. These fossils, as well as numerous skeletal similarities to birds and related feathered dinosaurs, support the idea that troodontids probably bore a bird-like feathered coat. The discovery of a fully-feathered, primitive troodontid (Jinfengopteryx) lends support to this. The type specimen of Jinfengopteryx elegans is 55 cm long and from the Qiaotou Formation of Liaoning Province, China.
Troodontids are important to research on the origin of birds because they share many anatomical characters with early birds. Crucially, the substantially complete fossil identified as WDC DML 001 ("Lori") is a troodontid from the Late Jurassic Morrison Formation, close to the time of Archaeopteryx. The discovery of this Jurassic troodont is positive physical evidence that derived deinonychosaurs were present very near the time that birds arose, and that basal paravians must have evolved much earlier. This fact strongly invalidates the "temporal paradox" cited by the few remaining opponents of the idea that birds are closely related to dinosaurs (see Claimed temporal paradox above).
There is a large body of evidence showing that dromaeosaurids were covered in feathers. Some dromaeosaurid fossils preserve long, pennaceous feathers on the hands and arms (remiges) and tail (rectrices), as well as shorter, down-like feathers covering the body. Other fossils, which do not preserve actual impressions of feathers, still preserve the associated bumps on the forearm bones where long wing feathers would have attached in life. Overall, this feather pattern looks very much like Archaeopteryx.
The first known dromaeosaur with definitive evidence of feathers was Sinornithosaurus, reported from China by Xu et al. in 1999. NGMC 91-A, the Sinornithosaurus-like theropod informally dubbed "Dave", possessed unbranched fibers in addition to more complex branched and tufted structures. Many other dromaeosaurid fossils have been found with feathers covering their bodies, some with fully-developed feathered wings. Several even show evidence of a second pair of wings on the hind legs, including Microraptor and Cryptovolans. While direct feather impressions are only preserved in fine-grained sediments, some fossils found in coarser rocks show evidence of feathers by the presence of quill knobs, the attachment points for wing feathers possessed by some birds. The dromaeosaurids Rahonavis and Velociraptor have both been found with quill knobs, showing that these forms had feathers despite no impressions having been found. In light of this, it is most likely that even the larger ground-dwelling dromaeosaurids bore feathers, since even flightless birds today retain most of their plumage, and relatively large dromaeosaurids, like Velociraptor, are known to have retained pennaceous feathers. Though some scientists had suggested that the larger dromaeosaurids lost some or all of their insulatory covering, the discovery of feathers in Velociraptor specimens has been cited as evidence that all members of the family retained feathers.
Fossils of dromaeosaurids more primitive than Velociraptor are known to have had feathers covering their bodies, and fully developed, feathered wings. The fact that the ancestors of Velociraptor were feathered and possibly capable of flight long suggested to paleontologists that Velociraptor bore feathers as well, since even flightless birds today retain most of their feathers.
In September 2007, Alan Turner, Peter Makovicky, and Mark Norell reported the presence of quill knobs on the ulna of a Velociraptor specimen from Mongolia. Fourteen bumps approximately 4 mm apart were found in a straight line along the bone, directly corresponding to the quill knobs of living birds, which serve as anchor points for the secondary feathers; their presence on Velociraptor indicates that it too had feathers. According to paleontologist Alan Turner,
A lack of quill knobs does not necessarily mean that a dinosaur did not have feathers. Finding quill knobs on Velociraptor, though, means that it definitely had feathers. This is something we'd long suspected, but no one had been able to prove.
Co-author Mark Norell, Curator-in-Charge of fossil reptiles, amphibians and birds at the American Museum of Natural History, also weighed in on the discovery, saying:
The more that we learn about these animals the more we find that there is basically no difference between birds and their closely related dinosaur ancestors like velociraptor. Both have wishbones, brooded their nests, possess hollow bones, and were covered in feathers. If animals like velociraptor were alive today our first impression would be that they were just very unusual looking birds.
According to Turner and co-authors Norell and Peter Makovicky, quill knobs are not found in all prehistoric birds, and their absence does not mean that an animal was not feathered – flamingos, for example, have no quill knobs. However, their presence confirms that Velociraptor bore modern-style wing feathers, with a rachis and vane formed by barbs. The forearm specimen on which the quill knobs were found (specimen number IGM 100/981) represents an animal 1.5 meters in length (5 ft) and 15 kilograms (33 lbs) in weight. Based on the spacing of the six preserved knobs in this specimen, the authors suggested that Velociraptor bore 14 secondaries (wing feathers stemming from the forearm), compared with 12 or more in Archaeopteryx, 18 in Microraptor, and 10 in Rahonavis. This type of variation in the number of wing feathers between closely related species, the authors asserted, is to be expected, given similar variation among modern birds.
Turner and colleagues interpreted the presence of feathers on Velociraptor as evidence against the idea that the larger, flightless maniraptorans lost their feathers secondarily due to larger body size. Furthermore, they noted that quill knobs are almost never found in flightless bird species today, and that their presence in Velociraptor (presumed to have been flightless due to its relatively large size and short forelimbs) is evidence that the ancestors of dromaeosaurids could fly, making Velociraptor and other large members of this family secondarily flightless, though it is possible the large wing feathers inferred in the ancestors of Velociraptor had a purpose other than flight. The feathers of the flightless Velociraptor may have been used for display, for covering their nests while brooding, or for added speed and thrust when running up inclined slopes.
The preserved impressions of integumentary structures in Sinornithosaurus were composed of filaments, and showed two features that indicate they are early feathers. First, several filaments were joined together into "tufts", similar to the way down is structured. Second, a row of filaments (barbs) was joined together to a main shaft (rachis), making them similar in structure to normal bird feathers. However, they do not have the secondary branching and tiny hooks (barbules) that modern feathers have, which allow the feathers of modern birds to form a discrete vane. The filaments are arranged parallel to each other, and perpendicular to the bones. In specimen NGMC 91, the feathers covered the entire body, including the head in front of the eye, the neck, wing-like sprays on the arms, long feathers on the thighs, and a lozenge-shaped fan on the tail like that of Archaeopteryx.
Pedopenna is a maniraptoran theropod whose avian affinities provide further evidence of the dinosaur-bird evolutionary relationship. Apart from having a very bird-like skeletal structure in its legs, Pedopenna was remarkable for the presence of long pennaceous feathers on the metatarsus (foot). Some deinonychosaurs are also known to have these 'hind wings', but those of Pedopenna differ from those of animals like Microraptor: Pedopenna's hind wings were smaller and more rounded in shape. The longest feathers were slightly shorter than the metatarsus, at about 55 mm (2 in) long. Additionally, the feathers of Pedopenna were symmetrical, unlike the asymmetrical feathers of some deinonychosaurs and birds. Since asymmetrical feathers are typical of animals adapted to flying, it is likely that Pedopenna represents an early stage in the development of these structures. While many of the feather impressions in the fossil are weak, it is clear that each possessed a rachis and barbs, and while the exact number of foot feathers is uncertain, they are more numerous than in the hind wings of Microraptor. Pedopenna also shows evidence of shorter feathers overlying the long foot feathers, evidence for the presence of coverts as seen in modern birds. The fact that the feathers show fewer aerodynamic adaptations than the similar hind wings of Microraptor, and appear to be less stiff, suggests that if they did have some kind of aerodynamic function, it was much weaker than in deinonychosaurs and birds. Xu and Zhang, in their 2005 description of Pedopenna, suggested that the feathers could be ornamental, or even vestigial. It is possible that a hind wing was present in the ancestors of deinonychosaurs and birds, and later lost in the bird lineage, with Pedopenna representing an intermediate stage in which the hind wings were being reduced from a functional gliding apparatus to a display or insulatory function.
Anchiornis is notable for its proportionally long forelimbs, which measured 80% of the total length of the hind limbs. This is similar to the condition in early avians such as Archaeopteryx, and the authors pointed out that long forelimbs are necessary for flight. It is possible that Anchiornis was able to fly or glide, and may have had a functional airfoil. Anchiornis also had a more avian wrist than other non-avian theropods, though its hind leg proportions are more like those of more basal theropod dinosaurs than those of avialans. Faint, carbonized feather impressions were preserved in patches in the type specimen. Feathers on the torso measured an average of 20 mm in length, but they were too poorly preserved to ascertain details of their structure. A cladistic analysis indicated that Anchiornis is part of the avian lineage, but outside of the clade that includes Archaeopteryx and modern birds, strongly suggesting that Anchiornis was a basal member of the Avialae and the sister taxon of Aves. Anchiornis can therefore be considered a non-avian avialan.
All specimens of Sinosauropteryx preserve integumentary structures (filaments arising from the skin) which most paleontologists interpret as very primitive feathers. These short, down-like filaments are preserved all along the back half of the skull, arms, neck, back, and top and bottom of the tail. Additional patches of feathers have been identified on the sides of the body, and paleontologists Chen, Dong and Zheng proposed that the density of the feathers on the back and the randomness of the patches elsewhere on the body indicated the animals would have been fully feathered in life, with the ventral feathers having been removed by decomposition.
The filaments are preserved with a gap between them and the bones, which several authors have noted corresponds closely to the expected amount of skin and muscle tissue that would have been present in life. The feathers adhere close to the bone on the skull and end of the tail, where little to no muscle was present, and the gap increases over the back vertebrae, where more musculature would be expected, indicating that the filaments were external to the skin and do not correspond with sub-cutaneous structures.
The random positioning of the filaments and often "wavy" lines of preservation indicate that they were soft and pliable in life. Examination with microscopes shows that each individual filament appears dark along the edges and light internally, suggesting that they were hollow, like modern feathers. Compared to modern mammals the filaments were quite coarse, with each individual strand much larger and thicker than the corresponding hairs of similarly sized mammals.
The length of the filaments varied across the body. They were shortest just in front of the eyes, with a length of 13 mm. Going further down the body, the filaments rapidly increase in length until they reach 35 mm over the shoulder blades. The length remains uniform over the back until beyond the hips, when the filaments lengthen again and reach their maximum length midway down the tail, at 40 mm. The filaments on the underside of the tail are shorter overall and decrease in length more rapidly than those on the dorsal surface. By the 25th tail vertebra, the filaments on the underside reach a length of only 35 mm. The longest feathers present on the forearm measured 14 mm.
Overall, the filaments most closely resemble the "plumules" or down-like feathers of some modern birds, with a very short quill and long, thin barbs. The same structures are seen in other fossils from the Yixian Formation, including Confuciusornis.
Analysis of the fossils of Sinosauropteryx have shown an alternation of lighter and darker bands preserved on the tail, giving us an idea of what the animal looked like in real life. This banding is probably due to preserved areas of melanin, which can produce dark tones in fossils.
The type specimen of Epidendrosaurus also preserved faint feather impressions at the end of the tail, similar to the pattern found in the dromaeosaurid Microraptor. While the reproductive strategies of Epidendrosaurus itself remain unknown, several tiny fossil eggs discovered in Phu Phok, Thailand (one of which contained the embryo of a theropod dinosaur) may have been laid by a small dinosaur similar to Epidendrosaurus or Microraptor. The authors who described these eggs estimated the dinosaur they belonged to would have had the adult size of a modern Goldfinch.
Scansoriopteryx fossils preserve impressions of wispy, down-like feathers around select parts of the body, forming V-shaped patterns similar to those seen in modern down feathers. The most prominent feather impressions trail from the left forearm and hand. The longer feathers in this region led Czerkas and Yuan to speculate that adult scansoriopterygids may have had reasonably well-developed wing feathers which could have aided in leaping or rudimentary gliding, though they ruled out the possibility that Scansoriopteryx could have achieved powered flight. Like other maniraptorans, Scansoriopteryx had a semilunate (half-moon shaped) bone in the wrist that allowed for bird-like folding motion in the hand. Even if powered flight was not possible, this motion could have aided maneuverability in leaping from branch to branch. Scales were also preserved near the base of the tail. For more on the implications of this discovery, see Scansoriopteryx#Implications.
Oviraptorosaurs, like dromaeosaurs, are so bird-like that several scientists consider them to be true birds, more advanced than Archaeopteryx. Gregory S. Paul has written extensively on this possibility, and Teresa Maryańska and colleagues published a technical paper detailing this idea in 2002. Michael Benton, in his widely-respected text Vertebrate Palaeontology, also included oviraptorosaurs as an order within the class Aves. However, a number of researchers have disagreed with this classification, retaining oviraptorosaurs as non-avialan maniraptorans slightly more primitive than the dromaeosaurs.
Evidence for feathered oviraptorosaurs exists in several forms. Most directly, two species of the primitive oviraptorosaur Caudipteryx have been found with impressions of well-developed feathers, most notably on the wings and tail, suggesting that they functioned at least partially for display. Secondly, at least one oviraptorosaur (Nomingia) was preserved with a tail ending in something like a pygostyle, a bony structure at the end of the tail that, in modern birds, is used to support a fan of feathers. Similarly, quill knobs (anchor points for wing feathers on the ulna) have been reported in the oviraptorosaurian species Avimimus portentosus. Additionally, a number of oviraptorid specimens have famously been discovered in a nesting position similar to that of modern birds. The arms of these specimens are positioned in such a way that they could perfectly cover their eggs if they had small wings and a substantial covering of feathers. Protarchaeopteryx, an oviraptorosaur, is well known for its fan-like array of 12 rectricial feathers, but it also seems to have sported simple filament-like structures elsewhere on the tail. Soft and downy feathers are preserved in the chest region and tail base, and are also preserved adjacent to the femora.
The bodies and limbs of oviraptorosaurs are arranged in a bird-like manner, suggesting the presence of feathers on the arms which may have been used for insulating eggs or brooding young. Members of Oviraptoridae possess a quadrate bone that shows particularly avian characteristics, including a pneumatized, double-headed structure, the presence of the pterygoid process, and an articular fossa for the quadratojugal.
Oviraptorids were probably feathered, since some close relatives were found with feathers preserved (Caudipteryx and possibly Protarchaeopteryx). Another finding pointing to this is the discovery in Nomingia of a pygostyle, a bone that results from the fusion of the last tail vertebrae and in birds serves to hold a fan of feathers in the tail. Finally, the arm position of the brooding Citipati would have been far more effective if feathers were present to cover the eggs.
Because Caudipteryx has clear, unambiguously pennaceous feathers like those of modern birds, and because several cladistic analyses have consistently recovered it as a non-avian oviraptorid dinosaur, it provided, at the time of its description, the clearest and most succinct evidence that birds evolved from dinosaurs. Lawrence Witmer stated:
- "The presence of unambiguous feathers in an unambiguously nonavian theropod has the rhetorical impact of an atomic bomb, rendering any doubt about the theropod relationships of birds ludicrous.”"
However, not all scientists agreed that Caudipteryx was unambiguously non-avian, and some of them continued to doubt that general consensus. Paleornithologist Alan Feduccia sees Caudipteryx as a flightless bird evolving from earlier archosaurian dinosaurs rather than from late theropods. Jones et al. (2000) found that Caudipteryx was a bird based on a mathematical comparison of the body proportions of flightless birds and non-avian theropods. Dyke and Norell (2005) criticized this result for flaws in their mathematical methods, and produced results of their own which supported the opposite conclusion. Other researchers not normally involved in the debate over bird origins, such as Zhou, acknowledged that the true affinities of Caudipteryx were debatable.
In 1997, filament-like integumentary structures were reported to be present in the Spanish ornithomimosaur Pelecanimimus polyodon. Furthermore, one published life restoration depicts Pelecanimimus as having been covered in the same sort of quill-like structures as are present on Sinosauropteryx and Dilong. However, a brief 1997 report that described soft-tissue mineralization in the Pelecanimimus holotype has been taken by most workers as the definitive last word 'demonstrating' that integumentary fibers were absent from this taxon.
However, the report of soft-tissue mineralization concerned preservation seen in only one small patch of tissue, and the absence of integument there does not provide much information about the distribution of integument on the live animal. This might explain why a few theropod workers (notably Paul Sereno and Kevin Padian) have continued to indicate the presence of filamentous integumentary structures in Pelecanimimus. Feduccia et al. (2005) argued that Pelecanimimus possessed scaly arms and figured some unusual rhomboidal structures in an effort to demonstrate this. The objects that they illustrate do not resemble scales, and it remains to be seen whether they have anything to do with the integument of this dinosaur. A full description or monograph of this dinosaur, which might shed more light on this subject, has yet to be published.
Ornithischian integumentary structures
The integument, or body covering, of Psittacosaurus is known from a Chinese specimen, which most likely comes from the Yixian Formation of Liaoning. The specimen, which is not yet assigned to any particular species, was illegally exported from China, in violation of Chinese law, but was purchased by a German museum and arrangements are being made to return the specimen to China.
Most of the body was covered in scales. Larger scales were arranged in irregular patterns, with numerous smaller scales occupying the spaces between them, similarly to skin impressions known from other ceratopsians, such as Chasmosaurus. However, a series of what appear to be hollow, tubular bristles, approximately 16 centimeters (6.3 in) long, was also preserved, arranged in a row down the dorsal (upper) surface of the tail. According to Mayr et al., though, "[a]t present, there is no convincing evidence which shows these structures to be homologous to the structurally different [feathers and protofeathers] of theropod dinosaurs." As the structures are only found in a single row on the tail, it is unlikely that they were used for thermoregulation, but they may have been useful for communication through some sort of display.
Tianyulong is notable for the row of long, filamentous integumentary structures apparent on the back, tail and neck of the fossil. The similarity of these structures with those found on some derived theropods suggests their homology with feathers and raises the possibility that the earliest dinosaurs and their ancestors were covered with analogous dermal filamentous structures that can be considered as primitive feathers (proto-feathers).
The filamentous integumentary structures are preserved on three areas of the fossil: in one patch just below the neck, another on the back, and the largest above the tail. The hollow filaments are parallel to each other and are singular, with no evidence of branching. They also appear to be relatively rigid, making them more analogous to the integumentary structures found on the tail of Psittacosaurus than to the proto-feather structures found in avian and non-avian theropods. Among the theropods, the structures in Tianyulong are most similar to the singular unbranched proto-feathers of Sinosauropteryx and Beipiaosaurus. The estimated length of the integumentary structures on the tail is about 60 mm, which is seven times the height of a caudal vertebra. Their length and hollow nature argue against them being subdermal structures such as collagen fibers.
Phylogenetics and homology
Such dermal structures have previously been reported only in derived theropods and ornithischians, and their discovery in Tianyulong extends the existence of such structures further down in the phylogenetic tree. However, the homology between the ornithischian filaments and the theropod proto-feathers is not obvious. If the homology is supported, the consequence is that the common ancestor of both saurischians and ornithischians was covered by feather-like structures, and that groups for which skin impressions are known, such as the sauropods, were only secondarily featherless. If the homology is not supported, it would indicate that these filamentous dermal structures evolved independently in saurischians and ornithischians, as well as in other archosaurs such as the pterosaurs. The authors (in supplementary information to their primary article) noted that the discovery of similar filamentous structures in the theropod Beipiaosaurus bolstered the idea that the structures on Tianyulong are homologous with feathers. Both the filaments of Tianyulong and the filaments of Beipiaosaurus were long, singular, and unbranched. In Beipiaosaurus, however, the filaments were flattened. In Tianyulong, the filaments were round in cross section, and therefore closer in structure to the earliest forms of feathers predicted by developmental models.
Some scientists have argued that other dinosaur proto-feathers are actually fibers of collagen that have come loose from the animals' skins. However, collagen fibers are solid structures; based on the long, hollow nature of the filaments on Tianyulong the authors rejected this explanation.
After a century of hypotheses without conclusive evidence, especially well-preserved (and legitimate) fossils of feathered dinosaurs were discovered during the 1990s, and more continue to be found. The fossils were preserved in a lagerstätte — a sedimentary deposit exhibiting remarkable richness and completeness in its fossils — in Liaoning, China. The area had repeatedly been smothered in volcanic ash produced by eruptions in Inner Mongolia 124 million years ago, during the Early Cretaceous Period. The fine-grained ash preserved the living organisms that it buried in fine detail. The area was teeming with life, with millions of leaves, angiosperms (the oldest known), insects, fish, frogs, salamanders, mammals, turtles, lizards and crocodilians discovered to date.
The most important discoveries at Liaoning have been a host of feathered dinosaur fossils, with a steady stream of new finds filling in the picture of the dinosaur-bird connection and adding more to theories of the evolutionary development of feathers and flight. Norell et al. (2007) reported quill knobs from an ulna of Velociraptor mongoliensis, and these are strongly correlated with large and well-developed secondary feathers.
List of dinosaur genera preserved with evidence of feathers
A number of non-avian dinosaurs are now known to have been feathered. Direct evidence of feathers exists for the following genera, listed in the order currently accepted evidence was first published. In all examples, the evidence described consists of feather impressions, except those marked with an asterisk (*), which denotes genera known to have had feathers based on skeletal or chemical evidence, such as the presence of quill knobs.
- Avimimus* (1987):536
- Sinosauropteryx (1996)
- Protarchaeopteryx (1997)
- Caudipteryx (1998)
- Rahonavis* (1998)
- Shuvuuia (1999)
- Sinornithosaurus (1999)
- Beipiaosaurus (1999)
- Microraptor (2000)
- Nomingia* (2000)
- Cryptovolans (2002)
- Scansoriopteryx (2002)
- Epidendrosaurus (2002)
- Psittacosaurus? (2002)
- Yixianosaurus (2003)
- Dilong (2004)
- Pedopenna (2005)
- Jinfengopteryx (2005)
- Sinocalliopteryx (2007)
- Velociraptor* (2007)
- Epidexipteryx (2008)
- Anchiornis (2009)
- Tianyulong? (2009)
- Note, filamentous structures in some ornithischian dinosaurs (Psittacosaurus, Tianyulong) and pterosaurs may or may not be homologous with the feathers and protofeathers of theropods.
Phylogeny and the inference of feathers in other dinosaurs
Feathered dinosaur fossil finds to date, together with cladistic analysis, suggest that many types of theropod may have had feathers, not just those that are especially similar to birds. In particular, the smaller theropod species may all have had feathers, and even the larger theropods (for instance T. rex) may have had feathers in their early stages of development after hatching. Whereas these smaller animals may have benefited from the insulation of feathers, large adult theropods are unlikely to have had feathers, since inertial heat retention would likely have been sufficient to manage heat. Excess internal heat may even have become a problem, had these very large creatures been feathered.
Fossil feather impressions are extremely rare; therefore only a few feathered dinosaurs have been identified so far. However, through a process called phylogenetic bracketing, scientists can infer the presence of feathers on poorly-preserved specimens. All fossil feather specimens have been found to show certain similarities. Due to these similarities and through developmental research almost all scientists agree that feathers could only have evolved once in dinosaurs. Feathers would then have been passed down to all later, more derived species (although it is possible that some lineages lost feathers secondarily). If a dinosaur falls at a point on an evolutionary tree within the known feather-bearing lineages, scientists assume it too had feathers, unless conflicting evidence is found. This technique can also be used to infer the type of feathers a species may have had, since the developmental history of feathers is now reasonably well-known.
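To make the logic of this inference concrete, the sketch below encodes the reasoning as a toy program. It is an illustration only: the tree topology, the set of taxa treated as having direct feather evidence, and all function names are simplified assumptions chosen for the example, not a published phylogeny or any particular author's method, and it deliberately ignores complications such as secondary loss of feathers.

```python
# A minimal sketch of the single-origin inference behind phylogenetic bracketing.
# The tree topology and the "direct evidence" set below are illustrative placeholders.

PARENTS = {
    "Coelurosauria": None,
    "Tyrannosauroidea": "Coelurosauria",
    "Dilong": "Tyrannosauroidea",
    "Maniraptora": "Coelurosauria",
    "Oviraptorosauria": "Maniraptora",
    "Caudipteryx": "Oviraptorosauria",
    "Paraves": "Maniraptora",
    "Dromaeosauridae": "Paraves",
    "Microraptor": "Dromaeosauridae",
    "Velociraptor": "Dromaeosauridae",
    "Aves": "Paraves",
}

# Taxa with direct fossil evidence of feathers (a small illustrative subset).
DIRECT_EVIDENCE = {"Dilong", "Caudipteryx", "Microraptor", "Aves"}

def lineage(taxon):
    """Return the taxon and all of its ancestors, from tip to root."""
    chain = []
    while taxon is not None:
        chain.append(taxon)
        taxon = PARENTS[taxon]
    return chain

def feather_bracket_root():
    """Most recent common ancestor of all taxa with direct feather evidence.

    Under the assumption that feathers evolved only once, every descendant of
    this node is inferred to have inherited feathers unless evidence says otherwise.
    """
    chains = [lineage(t) for t in DIRECT_EVIDENCE]
    shared = set(chains[0]).intersection(*map(set, chains[1:]))
    # Walking one chain from tip to root, the first shared node is the most recent.
    return next(node for node in chains[0] if node in shared)

def feathers_inferred(taxon):
    """True if the taxon falls inside the feather-bearing bracket."""
    return feather_bracket_root() in lineage(taxon)

print(feathers_inferred("Velociraptor"))  # True: nested within the feathered clade
```

Real analyses rest on much larger character datasets and formal ancestral-state reconstruction, but the core assumption is the same: a single origin of feathers implies their presence, by default, in every lineage nested within the feathered bracket.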
Nearly all paleontologists regard birds as coelurosaurian theropod dinosaurs. Within Coelurosauria, multiple cladistic analyses have found support for a clade named Maniraptora, consisting of therizinosauroids, oviraptorosaurs, troodontids, dromaeosaurids, and birds. Of these, dromaeosaurids and troodontids are usually united in the clade Deinonychosauria, which is a sister group to birds (together forming the node-clade Eumaniraptora) within the stem-clade Paraves.
Other studies have proposed alternative phylogenies in which certain groups of dinosaurs that are usually considered non-avian are suggested to have evolved from avian ancestors. For example, a 2002 analysis found oviraptorosaurs to be basal avians. Alvarezsaurids, known from Asia and the Americas, have been variously classified as basal maniraptorans, paravians, the sister taxon of ornithomimosaurs, as well as specialized early birds. The genus Rahonavis, originally described as an early bird, has been identified as a non-avian dromaeosaurid in several studies. Dromaeosaurids and troodontids themselves have also been suggested to lie within Aves rather than just outside it.:472
The scientists who described the (apparently unfeathered) Juravenator performed a genealogical study of coelurosaurs, including the distribution of various feather types. Based on the placement of feathered species in relation to those that have not been found with any type of skin impressions, they were able to infer the presence of feathers in certain dinosaur groups. Their results indicate the likely distribution of plumaceous (downy) and pennaceous (vaned) feathers among theropods. Note that the authors inferred pennaceous feathers for Velociraptor based on phylogenetic bracketing, a prediction later confirmed by fossil evidence.
- Origin of birds
- Evolution of birds
- Origin of avian flight
- Birds Came First
- Alan Feduccia
- George Olshevsky
- ^ All known dromaeosaurs have pennaceous feathers on the arms and tail, and substantially thick coat of feathers on the body, especially the neck and breast. Clear fossil evidence of modern avian-style feathers exists for several related dromaeosaurids, including Velociraptor and Microraptor, though no direct evidence is yet known for Deinonychus itself.
- ^ On page 155 of Dinosaurs of the Air by Gregory Paul, there are an accumulated total of 305 potential synapomorphies with birds for all non-avian theropod nodes, 347 for all non-avian dinosauromorph nodes.
Shared features between birds and dinosaurs include:
- A pubis (one of the three bones making up the vertebrate pelvis) shifted from an anterior to a more posterior orientation (see Saurischia), and bearing a small distal "boot".
- Elongated arms and forelimbs and clawed manus (hands).
- Large orbits (eye openings in the skull).
- Flexible wrist with a semi-lunate carpal (wrist bone).
- Double-condyled dorsal joint on the quadrate bone.
- Ossified uncinate processes of the ribs.
- Most of the sternum is ossified.
- Broad sternal plates.
- Ossified sternal ribs.
- Brain enlarged above reptilian maximum.
- Overlapping field of vision.
- Olfaction sense reduced.
- An arm/leg length ratio between 0.5 and 1.0
- Lateral exposure of the glenoid in the humeral joint.
- Hollow, thin-walled bones.
- 3-fingered opposable grasping manus (hand), 4-toed pes (foot); but supported by 3 main toes.
- Fused carpometacarpus.
- Metacarpal III bowed posterolaterally.
- Flexibility of digit III reduced.
- Digit III tightly appressed to digit II.
- Well developed arm folding mechanism.
- Reduced, posteriorly stiffened tail.
- Distal tail stiffened.
- Tail base hyperflexible, especially dorsally.
- Elongated metatarsals (bones of the feet between the ankle and toes).
- S-shaped curved neck.
- Erect, digitigrade (ankle held well off the ground) stance with feet positioned directly below the body.
- Similar eggshell microstructure.
- Teeth with a constriction between the root and the crown.
- Functional basis for wing power stroke present in arms and pectoral girdle (during motion, the arms were swung down and forward, then up and backwards, describing a "figure-eight" when viewed laterally).
- Expanded pneumatic sinuses in the skull.
- Five or more vertebrae incorporated into the sacrum (hip).
- Posterior caudal vertebrae fused to form the pygostyle.
- Large, strongly built, and straplike scapula (shoulder blade).
- Scapula blades are horizontal.
- Scapula tip is pointed.
- Acromion process is developed, similar to that in Archaeopteryx.
- Retroverted and long coracoids.
- Strongly flexed and subvertical coracoids relative to the scapula.
- Clavicles (collarbone) fused to form a furcula (wishbone).
- U-shaped furcula.
- Hingelike ankle joint, with movement mostly restricted to the fore-aft plane.
- Secondary bony palate (nostrils open posteriorly in throat).
- Pennaceous feathers in some taxa. Proto-feathers, filaments, and integumentary structures in others.
- Well-developed, symmetrical arm contour feathers.
- Source 1: Are Birds Really Dinosaurs? Dinobuzz, Current Topics Concerning Dinosaurs. Created 9/27/05. Accessed 7/20/09. Copyright 1994-2009 by the Regents of the University of California, all rights reserved.
- Source 2: Kurochkin, E.N. 2006. Parallel Evolution of Theropod Dinosaurs and Birds. Entomological Review 86 (1), pp. S45-S58. doi:10.1134/S0013873806100046
- Source 3: Paul, Gregory S. (2002). "11". Dinosaurs of the Air: The Evolution and Loss of Flight in Dinosaurs and Birds. Baltimore: Johns Hopkins University Press. pp. 225-227: Table 11.1. ISBN 978-0801867637.
- ^ Xu Xing suggested that the integumentary features present in some pterosaurs and the ornithischian dinosaur Psittacosaurus may be evidence of this first stage.
- ^ Examples in the fossil record may include Sinosauropteryx, Beipiaosaurus, Dilong, and Sinornithosaurus.
- ^ According to Xu Xing, stage III is supported by the fact that feather follicles developed after barb ridges, along with the follicle having a unique role in the formation of the rachis.
- ^ See Caudipteryx, Protarchaeopteryx, and Sinornithosaurus.
Xu Xing also noted that while the pennaceous feathers of Microraptor differ from those of Caudipteryx and Protarchaeopteryx due to the aerodynamic functions of its feathers, they still belong together in the same stage because they both "evolved form-stiffening barbules" on their feathers.
- ^ Remiges are the large feathers of the forelimbs (singular remex). The large feathers that grow from the tail are termed rectrices (singular rectrix).
- ^ Darwin, Charles R. (1859). On the Origin of Species by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life. London: John Murray. p. 502pp. http://darwin-online.org.uk/content/frameset?itemID=F373&viewtype=side&pageseq=16.
- ^ Huxley, Thomas H. (1870). "Further evidence of the affinity between the dinosaurian reptiles and birds". Quarterly Journal of the Geological Society of London 26: 12–31.
- ^ Huxley, Thomas H. (1868). "On the animals which are most nearly intermediate between birds and reptiles". Annals of the Magazine of Natural History 4 (2): 66–75.
- ^ Foster, Michael; Lankester, E. Ray 1898–1903. The scientific memoirs of Thomas Henry Huxley. 4 vols and supplement. London: Macmillan.
- ^ Owen, R. (1863): On the Archaeopteryx of von Meyer, with a description of the fossil remains of a long-tailed species, from the Lithographic Slate of Solenhofen. - Philosophical Transactions of the Royal Society of London, 1863: 33-47. London.
- ^ a b Padian K. and Chiappe LM (1998). The origin and early evolution of birds. Biological Reviews 73: 1-42.
- ^ a b c d e f g h i Xu Xing; Zhou Zhonghe; Wang Xiaolin; Kuang Xuewen; Zhang Fucheng; & Du Xiangke (2003). "Four-winged dinosaurs from China". Nature 421 (6921): 335–340. doi:10.1038/nature01342.
- ^ a b c d Zhang, F., Zhou, Z., Xu, X. & Wang, X. (2002). "A juvenile coelurosaurian theropod from China indicates arboreal habits." Naturwissenschaften, 89(9): 394-398. doi:10.1007/s00114-002-0353-8.
- ^ Fox, W. (1866). Another new Wealden reptile. Athenaeum 2014, 740.
- ^ Naish, D. (2002). The historical taxonomy of the Lower Cretaceous theropods (Dinosauria) Calamospondylus and Aristosuchus from the Isle of Wight. Proceedings of the Geologists' Association 113, 153-163.
- ^ Swinton, W. E. (1936a). Notes on the osteology of Hypsilophodon, and on the family Hypsilophodontidae. Proceedings of the Zoological Society of London 1936, 555-578.
- ^ Swinton, W. E. (1936b). The dinosaurs of the Isle of Wight. Proceedings of the Geologists' Association 47, 204-220.
- ^ Galton, P. M. (1971a). Hypsilophodon, the cursorial non-arboreal dinosaur. Nature 231, 159-161.
- ^ Galton, P. M. (1971b). The mode of life of Hypsilophodon, the supposedly arboreal ornithopod dinosaur. Lethaia 4, 453-465.
- ^ a b Paul, G.S. (1988). Predatory Dinosaurs of the World. New York: Simon & Schuster.
- ^ a b Olshevsky, G. (2001a). The birds came first: a scenario for avian origins and early evolution, 1. Dino Press 4, 109-117.
- ^ a b Olshevsky, G. (2001b). The birds came first: a scenario for avian origins and early evolution. Dino Press 5, 106-112.
- ^ a b Ostrom, John H. (1969). "Osteology of Deinonychus antirrhopus, an unusual theropod from the Lower Cretaceous of Montana". Bulletin of the Peabody Museum of Natural History 30: 1–165.
- ^ Paul, Gregory S. (2000). "A Quick History of Dinosaur Art". in Paul, Gregory S. (ed.). The Scientific American Book of Dinosaurs. New York: St. Martin's Press. pp. 107–112. ISBN 0-312-26226-4.
- ^ El Pais: El 'escándalo archaeoraptor' José Luis Sanz y Francisco Ortega 16/02/2000 Online, Spanish
- ^ a b Swisher III, C.C.; Wang, Y.Q.; Wang, X.L.; Xu, X.; Wang, Y. (2001), "Cretaceous age for the feathered dinosaurs of Liaoning, China", Rise of the Dragon: Readings from Nature on the Chinese Fossil Record: 167, http://books.google.com/books?hl=en, retrieved on 2009-09-02
- ^ a b Swisher, C.C.; Xiaolin, W.; Zhonghe, Z.; Yuanqing, W.; Fan, J.I.N.; Jiangyong, Z.; Xing, X.U.; Fucheng, Z.; et al. (2002), "Further support for a Cretaceous age for the feathered-dinosaur beds of Liaoning, China: …", Chinese Science Bulletin 47 (2): 136–139, http://www.springerlink.com/index/W7724740N2320M80.pdf, retrieved on 2009-09-02
- ^ Sereno, Paul C.; & Rao Chenggang (1992). "Early evolution of avian flight and perching: new evidence from the Lower Cretaceous of China". Science 255 (5046): 845–848. doi:10.1126/science.255.5046.845. PMID 17756432.
- ^ Hou Lian-Hai; Zhou Zhonghe; Martin, Larry D.; & Feduccia, Alan (1995). "A beaked bird from the Jurassic of China". Nature 377 (6550): 616–618. doi:10.1038/377616a0.
- ^ Novas, F. E., Puerta, P. F. (1997). New evidence concerning avian origins from the Late Cretaceous of Patagonia. Nature 387:390-392.
- ^ Norell, M. A., Clark, J. M., Makovivky, P. J. (2001). Phylogenetic relationships among coelurosaurian dinosaurs. In: Gauthier, J. A., Gall, L. F., eds. New Perspectives on the Origin and Early Evolution of Birds. Yale University Press, New Haven, pp. 49-67.
- ^ Gatesy, S. M., Dial, K. P. (1996). Locomotor modules and the evolution of avian flight. Evolution 50:331-340.
- ^ Gatesy, S. M. (2001). The evolutionary history of the theropod caudal locomotor module. In: Gauthier, J. A., Gall, L. F., eds. New Perspectives on the Origin and Early Evolution of Birds. Yale University Press, New Haven, pp. 333-350.
- ^ Xu, X. (2002). Deinonychosaurian fossils from the Jehol Group of western Liaoning and the coelurosaurian evolution (Dissertation). Chinese Academy of Sciences, Beijing.
- ^ a b c d e f g h i j k l m n o p q Xu Xing (2006). Feathered dinosaurs from China and the evolution of major avian characters. Integrative Zoology 1:4-11. doi:10.1111/j.1749-4877.2006.00004.x
- ^ a b Ji Qiang; & Ji Shu-an (1996). "On the discovery of the earliest bird fossil in China and the origin of birds". Chinese Geology 233: 30–33.
- ^ a b c d e f g h i Chen Pei-ji; Dong Zhiming; & Zhen Shuo-nan. (1998). "An exceptionally preserved theropod dinosaur from the Yixian Formation of China". Nature 391 (6663): 147–152. doi:10.1038/34356.
- ^ a b Lingham-Soliar, Theagarten; Feduccia, Alan; & Wang Xiaolin. (2007). "A new Chinese specimen indicates that ‘protofeathers’ in the Early Cretaceous theropod dinosaur Sinosauropteryx are degraded collagen fibres". Proceedings of the Royal Society B: Biological Sciences 274 (1620): 1823–1829. doi:10.1098/rspb.2007.0352.
- ^ a b c d e f Ji Qiang; Currie, Philip J.; Norell, Mark A.; & Ji Shu-an. (1998). "Two feathered dinosaurs from northeastern China". Nature 393 (6687): 753–761. doi:10.1038/31635.
- ^ Sloan, Christopher P. (1999). "Feathers for T. rex?". National Geographic 196 (5): 98–107.
- ^ Monastersky, Richard (2000). "All mixed up over birds and dinosaurs". Science News 157 (3): 38. doi:10.2307/4012298. http://www.sciencenews.org/view/generic/id/94/title/All_mixed_up_over_birds_and_dinosaurs.
- ^ a b c d e Xu Xing; Tang Zhi-lu; & Wang Xiaolin. (1999). "A therizinosaurid dinosaur with integumentary structures from China". Nature 399 (6734): 350–354. doi:10.1038/20670.
- ^ a b c d e f g Xu, X., Norell, M. A., Kuang, X., Wang, X., Zhao, Q., Jia, C. (2004). "Basal tyrannosauroids from China and evidence for protofeathers in tyrannosauroids". Nature 431: 680–684. doi:10.1038/nature02855.
- ^ Zhou Zhonghe; & Zhang Fucheng (2002). "A long-tailed, seed-eating bird from the Early Cretaceous of China". Nature 418 (6896): 405–409. doi:10.1038/nature00930.
- ^ Wellnhofer, P. (1988). Ein neuer Exemplar von Archaeopteryx. Archaeopteryx 6:1–30.
- ^ a b c Zhou Zhonghe; Barrett, Paul M.; & Hilton, Jason. (2003). "An exceptionally preserved Lower Cretaceous ecosystem". Nature 421 (6925): 807–814. doi:10.1038/nature01420.
- ^ a b c d Feduccia, A., Lingham-Soliar, T. & Hinchliffe, J. R. (2005). Do feathered dinosaurs exist? Testing the hypothesis on neontological and paleontological evidence. Journal of Morphology 266, 125-166. doi:10.1002/jmor.10382
- ^ a b c Czerkas, S.A., Zhang, D., Li, J., and Li, Y. (2002). "Flying Dromaeosaurs". in Czerkas, S.J.. Feathered Dinosaurs and the Origin of Flight: The Dinosaur Museum Journal 1. Blanding: The Dinosaur Museum. pp. 16–26.
- ^ a b Norell, Mark, Ji, Qiang, Gao, Keqin, Yuan, Chongxi, Zhao, Yibin, Wang, Lixia. (2002). "'Modern' feathers on a non-avian dinosaur". Nature, 416: pp. 36. 7 March 2002.
- ^ a b c d e f g h Paul, Gregory S. (2002). Dinosaurs of the Air: The Evolution and Loss of Flight in Dinosaurs and Birds. Baltimore: Johns Hopkins University Press. ISBN 978-0801867637.
- ^ Heilmann, G. (1926): The Origin of Birds. Witherby, London. ISBN 0-486-22784-7 (1972 Dover reprint)
- ^ John Ostrom (1975). The origin of birds. Annual Review of Earth and Planetary Sciences 3, pp. 55.
- ^ Bryant, H.N. & Russell, A.P. (1993) The occurrence of clavicles within Dinosauria: implications for the homology of the avian furcula and the utility of negative evidence. Journal of Vertebrate Paleontology, 13(2):171-184.
- ^ Chure, Daniel J.; & Madsen, James H. (1996). "On the presence of furculae in some non-maniraptoran theropods". Journal of Vertebrate Paleontology 16 (3): 573–577.
- ^ Norell, Mark A.; & Makovicky, Peter J. (1999). "Important features of the dromaeosaurid skeleton II: Information from newly collected specimens of Velociraptor mongoliensis". American Museum Novitates 3282: 1–44. http://hdl.handle.net/2246/3025.
- ^ Colbert, E. H. & Morales, M. (1991) Evolution of the vertebrates: a history of the backboned animals through time. 4th ed. Wiley-Liss, New York. 470 p.
- ^ Barsbold, R. et al. (1990) Oviraptorosauria. In The Dinosauria, Weishampel, Dodson & Osmolska (eds) pp 249-258.
- ^ Included as a cladistic definer, e.g. (Columbia University) Master Cladograms or mentioned even in the broadest context, such as Paul C. Sereno, "The origin and evolution of dinosaurs" Annual Review of Earth and Planetary Sciences 25 pp 435-489.
- ^ Lipkin, C., Sereno, P.C., and Horner, J.R. (November 2007). "THE FURCULA IN SUCHOMIMUS TENERENSIS AND TYRANNOSAURUS REX (DINOSAURIA: THEROPODA: TETANURAE)". Journal of Paleontology 81 (6): 1523–1527. doi:10.1666/06-024.1. http://jpaleontol.geoscienceworld.org/cgi/content/extract/81/6/1523. - full text currently online at "The Furcula in Suchomimus Tenerensis and Tyrannosaurus rex". http://www.redorbit.com/news/health/1139122/the_furcula_in_suchomimus_tenerensis_and_tyrannosaurus_rex_dinosauria_theropoda/index.html. This lists a large number of theropods in which furculae have been found, as well as describing those of Suchomimus Tenerensis and Tyrannosaurus rex.
- ^ Carrano, M.R., Hutchinson, J.R., and Sampson, S.D. (December 2005). "New information on Segisaurus halli, a small theropod dinosaur from the Early Jurassic of Arizona". Journal of Vertebrate Paleontology 25 (4): 835–849. doi:10.1671/0272-4634(2005)025[0835:NIOSHA]2.0.CO;2. http://www.rvc.ac.uk/AboutUs/Staff/jhutchinson/documents/JH18.pdf.
- ^ Yates, Adam M.; and Vasconcelos, Cecilio C. (2005). "Furcula-like clavicles in the prosauropod dinosaur Massospondylus". Journal of Vertebrate Paleontology 25 (2): 466–468. doi:10.1671/0272-4634(2005)025[0466:FCITPD]2.0.CO;2.
- ^ Downs, A. (2000). Coelophysis bauri and Syntarsus rhodesiensis compared, with comments on the preparation and preservation of fossils from the Ghost Ranch Coelophysis Quarry. New Mexico Museum of Natural History and Science Bulletin, vol. 17, pp. 33–37.
- ^ The furcula of Coelophysis bauri, a Late Triassic (Apachean) dinosaur (Theropoda: Ceratosauria) from New Mexico. 2006. By Larry Rinehart, Spencer Lucas, and Adrian Hunt
- ^ a b Ronald S. Tykoski, Catherine A. Forster, Timothy Rowe, Scott D. Sampson, and Darlington Munyikwad. (2002). A furcula in the coelophysid theropod Syntarsus. Journal of Vertebrate Paleontology 22(3):728-733.
- ^ Larry F. Rinehart, Spencer G. Lucas, Adrian P. Hunt. (2007). Furculae in the Late Triassic theropod dinosaur Coelophysis bauri. Paläontologische Zeitschrift 81: 2
- ^ a b Sereno, P.C.; Martinez, R.N.; Wilson, J.A.; Varricchio, D.J.; Alcober, O.A.; and Larsson, H.C.E. (September 2008). "Evidence for Avian Intrathoracic Air Sacs in a New Predatory Dinosaur from Argentina". PLoS ONE 3 (9): e3303. doi:10.1371/journal.pone.0003303. http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0003303. Retrieved on 2008-10-27.
- ^ O'Connor, P.M. & Claessens, L.P.A.M. (2005). "Basic avian pulmonary design and flow-through ventilation in non-avian theropod dinosaurs". Nature 436: 253–256. doi:10.1038/nature03716.
- ^ Meat-Eating Dinosaur from Argentina Had Bird-Like Breathing System Newswise, Retrieved on September 29, 2008.
- ^ Fisher, P. E., Russell, D. A., Stoskopf, M. K., Barrick, R. E., Hammer, M. & Kuzmitz, A. A. (2000). Cardiovascular evidence for an intermediate or higher metabolic rate in an ornithischian dinosaur. Science 288, 503–505.
- ^ Hillenius, W. J. & Ruben, J. A. (2004). The evolution of endothermy in terrestrial vertebrates: Who? when? why? Physiological and Biochemical Zoology 77, 1019–1042.
- ^ Dinosaur with a Heart of Stone. T. Rowe, E. F. McBride, P. C. Sereno, D. A. Russell, P. E. Fisher, R. E. Barrick, and M. K. Stoskopf (2001) Science 291, 783
- ^ a b Xu, X. and Norell, M.A. (2004). A new troodontid dinosaur from China with avian-like sleeping posture. Nature 431:838-841.See commentary on the article.
- ^ Schweitzer, M.H.; Wittmeyer, J.L.; and Horner, J.R. (2005). "Gender-specific reproductive tissue in ratites and Tyrannosaurus rex". Science 308: 1456–1460. doi:10.1126/science.1112158. PMID 15933198. http://www.sciencemag.org/cgi/content/abstract/308/5727/1456.
- ^ Lee, Andrew H.; and Werning, Sarah (2008). "Sexual maturity in growing dinosaurs does not fit reptilian growth models". Proceedings of the National Academy of Sciences 105 (2): 582–587. doi:10.1073/pnas.0708903105. PMID 18195356. http://www.pnas.org/cgi/content/abstract/105/2/582.
- ^ Chinsamy, A., Hillenius, W.J. (2004). Physiology of nonavian dinosaurs. In: Weishampel, D.B., Dodson, P., Osmolska, H., eds. The Dinosauria. University of California Press, Berkeley. pp. 643-65.
- ^ Norell, M.A., Clark, J.M., Chiappe, L.M., and Dashzeveg, D. (1995). "A nesting dinosaur." Nature 378:774-776.
- ^ a b Clark, J.M., Norell, M.A., & Chiappe, L.M. (1999). "An oviraptorid skeleton from the Late Cretaceous of Ukhaa Tolgod, Mongolia, preserved in an avianlike brooding position over an oviraptorid nest." American Museum Novitates, 3265: 36 pp., 15 figs.; (American Museum of Natural History) New York. (5.4.1999).
- ^ Norell, M. A., Clark, J. M., Dashzeveg, D., Barsbold, T., Chiappe, L. M., Davidson, A. R., McKenna, M. C. and Novacek, M. J. (November 1994). "A theropod dinosaur embryo and the affinities of the Flaming Cliffs Dinosaur eggs" (abstract page). Science 266 (5186): 779–782. doi:10.1126/science.266.5186.779. PMID 17730398. http://www.sciencemag.org/cgi/content/abstract/266/5186/779.
- ^ Oviraptor nesting Oviraptor nests or Protoceratops?
- ^ Gregory Paul (1994). Thermal environments of dinosaur nestlings: Implications for endothermy and insulation. In: Dinosaur Eggs and Babies.
- ^ Homberger, D.G. (2002). The aerodynamically streamlined body shape of birds: Implications for the evolution of birds, feathers, and avian flight. In: Zhou, Z., Zhang, F., eds. Proceedings of the 5th symposium of the Society of Avian Paleontology and Evolution, Beijing, 1-4 June 2000. Beijing, China: Science Press. p. 227-252.
- ^ a b c Ji, Q., and Ji, S. (1997). "A Chinese archaeopterygian, Protarchaeopteryx gen. nov." Geological Science and Technology (Di Zhi Ke Ji), 238: 38-41. Translated By Will Downs Bilby Research Center Northern Arizona University January, 2001
- ^ a b c d e Xu, X., Zhou, Z., and Wang, X. (2000). "The smallest known non-avian theropod dinosaur." Nature, 408 (December): 705-708.
- ^ Dal Sasso, C. and Signore, M. (1998). Exceptional soft-tissue preservation in a theropod dinosaur from Italy. Nature 392:383–387. See commentary on the article
- ^ Mary H. Schweitzer, Jennifer L. Wittmeyer, John R. Horner, and Jan K. Toporski (2005). Science 307 (5717) pp. 1952-1955. doi:10.1126/science.1108397
- ^ Schweitzer, M.H., Wittmeyer, J.L. and Horner, J.R. (2005). Soft-Tissue Vessels and Cellular Preservation in Tyrannosaurus rex. Science 307:1952–1955. See commentary on the article
- ^ Wang, H., Yan, Z. and Jin, D. (1997). Reanalysis of published DNA sequence amplified from Cretaceous dinosaur egg fossil. Molecular Biology and Evolution. 14:589–591. See commentary on the article.
- ^ Chang, B.S.W., Jönsson, K., Kazmi, M.A., Donoghue, M.J. and Sakmar, T.P. (2002). Recreating a Functional Ancestral Archosaur Visual Pigment. Molecular Biology and Evolution 19:1483–1489. See commentary on the article.
- ^ Embery, et al. "Identification of proteinaceous material in the bone of the dinosaur Iguanodon." Connect Tissue Res. 2003; 44 Suppl 1:41-6. PMID: 12952172
- ^ Schweitzer, et al. (1997 Jun 10) "Heme compounds in dinosaur trabecular bone." Proc Natl Acad Sci U S A.. 94(12):6291–6. PMID: 9177210
- ^ Fucheng, Z., Zhonghe, Z., and Dyke, G. (2006). Feathers and 'feather-like' integumentary structures in Liaoning birds and dinosaurs. Geol . J. 41:395-404.
- ^ a b Cheng-Ming Chuong, Ping Wu, Fu-Cheng Zhang, Xing Xu, Minke Yu, Randall B. Widelitz, Ting-Xin Jiang, and Lianhai Hou (2003). Adaptation to the sky: defining the feather with integument fossils from the Mesozoic China and experimental evidence from molecular laboratories. Journal of Experimental Zoology (MOL DEV EVOL) 298b:42-56.
- ^ Bakker, R.T., Galton, P.M. (1974). Dinosaur monophyly and a new class of vertebrates. Nature 248:168-172.
- ^ Sumida, SS & CA Brochu (2000). "Phylogenetic context for the origin of feathers". American Zoologist 40 (4): 486–503. doi:10.1093/icb/40.4.486. http://icb.oxfordjournals.org/cgi/content/abstract/40/4/486.
- ^ a b c d Chiappe, Luis M., (2009). Downsized Dinosaurs: The Evolutionary Transition to Modern Birds. Evo Edu Outreach 2: 248-256. doi:10.1007/s12052-009-0133-4
- ^ Burgers, P., Chiappe, L.M. (1999). The wing of Archaeopteryx as a primary thrust generator. Nature 399: 60-2. doi:10.1038/19967
- ^ a b c d e f g h Prum, R. & Brush A.H. (2002). "The evolutionary origin and diversification of feathers". The Quarterly Review of Biology 77: 261–295. doi:10.1086/341993.
- ^ a b c d Prum, R. H. (1999). Development and evolutionary origin of feathers. Journal of Experimental Zoology 285, 291-306.
- ^ Griffiths, P. J. (2000). The evolution of feathers from dinosaur hair. Gaia 15, 399-403.
- ^ a b c d e f g Mayr, G. Peters, S.D. Plodowski, G. Vogel, O. (2002). "Bristle-like integumentary structures at the tail of the horned dinosaur Psittacosaurus". Naturwissenschaften 89: 361–365. doi:10.1007/s00114-002-0339-6.
- ^ a b c Schweitzer, Mary Higby, Watt, J.A., Avci, R., Knapp, L., Chiappe, L., Norell, Mark A., Marshall, M. (1999). "Beta-Keratin Specific Immunological reactivity in Feather-Like Structures of the Cretaceous Alvarezsaurid, Shuvuuia deserti." Journal of Experimental Zoology Part B (Mol Dev Evol) 285:146-157.
- ^ Schweitzer, M. H. (2001). Evolutionary implications of possible protofeather structures associated with a specimen of Shuvuuia deserti. In Gauthier, J. & Gall, L. F. (eds) New prespectives on the origin and early evolution of birds: proceedings of the international symposium in honor of John H. Ostrom. Peabody Museum of Natural History, Yale University (New Haven), pp. 181-192.
- ^ Christiansen, P. & Bonde, N. (2004). Body plumage in Archaeopteryx: a review, and new evidence from the Berlin specimen. C. R. Palevol 3, 99-118.
- ^ M.J. Benton, M.A. Wills, R. Hitchin. (2000). Quality of the fossil record through time. Nature 403, 534-537. doi:10.1038/35000558
- ^ Morgan, James (2008-10-22). "New feathered dinosaur discovered". BBC. http://news.bbc.co.uk/2/hi/science/nature/7684796.stm. Retrieved on 2009-07-02.
- ^ a b c d e f Zhang, F., Zhou, Z., Xu, X., Wang, X., & Sullivan, C. (2008). "A bizarre Jurassic maniraptoran from China with elongate ribbon-like feathers." Available from Nature Precedings, doi:10.1038/npre.2008.2326.1 .
- ^ Prum, R. O. & Brush, A. H. (2003). Which came first, the feather or the bird? Scientific American 286 (3), 84-93.
- ^ Epidexipteryx: bizarre little strap-feathered maniraptoran ScienceBlogs Tetrapod Zoology article by Darren Naish. October 23, 2008
- ^ Gishlick, A. D. (2001). The function of the manus and forelimb of Deinonychus antirrhopus and its importance for the origin of avian flight. In Gauthier, J. & Gall, L. F. (eds) New Perspectives on the Origin and Early Evolution of Birds: Proceedings of the International Symposium in Honor of John H. Ostrom. Peabody Museum of Natural History, Yale University (New Haven), pp. 301-318.
- ^ Senter, P. (2006). Comparison of forelimb function between Deinonychus and Bambiraptor (Theropoda: Dromaeosauridae). Journal of Vertebrate Paleontology 26, 897-906.
- ^ JA Long, P Schouten. (2008). Feathered Dinosaurs: The Origin of Birds
- ^ a b Yalden, D. W. (1985). Forelimb function in Archaeopteryx. In Hecht, M. K., Ostrom, J. H., Viohl, G. & Wellnhofer, P. (eds) The Beginnings of Birds - Proceedings of the International Archaeopteryx Conference, Eichstatt 1984, pp. 91-97.
- ^ Chen, P.-J., Dong, Z.-M. & Zhen, S.-N. (1998). An exceptionally well-preserved theropod dinosaur from the Yixian Formation of China. Nature 391, 147-152.
- ^ a b c Currie, Philip J.; Pei-ji Chen. (2001). Anatomy of Sinosauropteryx prima from Liaoning, northeastern China. Canadian Journal of Earth Sciences 38, 1705-1727. doi:10.1139/cjes-38-12-1705
- ^ Bohlin, B. 1947. The wing of Archaeornithes. Zoologiska Bidrag 25, 328-334.
- ^ Rietschel, S. (1985). Feathers and wings of Archaeopteryx, and the question of her flight ability. In Hecht, M. K., Ostrom, J. H., Viohl, G. & Wellnhofer, P. (eds) The Beginnings of Birds - Proceedings of the International Archaeopteryx Conference, Eichstatt 1984, pp. 251-265.
- ^ a b Griffiths, P. J. 1993. The claws and digits of Archaeopteryx lithographica. Geobios 16, 101-106.
- ^ Stephan, B. 1994. The orientation of digital claws in birds. Journal fur Ornithologie 135, 1-16.
- ^ a b c Chiappe, L.M. and Witmer, L.M. (2002). Mesozoic Birds: Above the Heads of Dinosaurs. Berkeley: University of California Press, ISBN 0520200942
- ^ Martin, L. D. & Lim, J.-D. (2002). Soft body impression of the hand in Archaeopteryx. Current Science 89, 1089-1090.
- ^ a b c d Feduccia, A. (1999). The Origin and Evolution of Birds. 420 pp. Yale University Press, New Haven. ISBN 0300078617.
- ^ a b Dyke, G.J., and Norell, M.A. (2005). "Caudipteryx as a non-avialan theropod rather than a flightless bird." Acta Palaeontologica Polonica, 50(1): 101–116. PDF fulltext
- ^ a b c Witmer, L.M. (2002). “The Debate on Avian Ancestry; Phylogeny, Function and Fossils”, Mesozoic Birds: Above the Heads of Dinosaurs : 3–30. ISBN 0-520-20094-2
- ^ Jones T.D., Ruben J.A., Martin L.D., Kurochkin E.N., Feduccia A., Maderson P.F.A., Hillenius W.J., Geist N.R., Alifanov V. (2000). Nonavian feathers in a Late Triassic archosaur. Science 288: 2202-2205.
- ^ Martin, Larry D. (2006). "A basal archosaurian origin for birds". Acta Zoologica Sinica 50 (6): 977–990.
- ^ Burke, Ann C.; & Feduccia, Alan. (1997). "Developmental patterns and the identification of homologies in the avian hand". Science 278 (5338): 666–668. doi:10.1126/science.278.5338.666.
- ^ a b Kevin Padian (2000). Dinosaurs and Birds — an Update. Reports of the National Center for Science Education. 20 (5):28–31.
- ^ Ostrom J.H. (1973). The ancestry of birds. Nature 242: 136.
- ^ a b Padian, Kevin. (2004). "Basal Avialae". in Weishampel, David B.; Dodson, Peter; & Osmólska, Halszka (eds.). The Dinosauria (Second ed.). Berkeley: University of California Press. pp. 210–231. ISBN 0-520-24209-2.
- ^ Olshevsky, G. (1991). A Revision of the Parainfraclass Archosauria Cope, 1869, Excluding the Advanced Crocodylia. Publications Requiring Research, San Diego.
- ^ Olshevsky, G. (1994). The birds first? A theory to fit the facts. Omni 16 (9), 34-86.
- ^ a b Chatterjee, S. (1999): Protoavis and the early evolution of birds. Palaeontographica A 254: 1-100.
- ^ Chatterjee, S. (1995): The Triassic bird Protoavis. Archaeopteryx 13: 15-31.
- ^ Chatterjee, S. (1998): The avian status of Protoavis. Archaeopteryx 16: 99-122.
- ^ Chatterjee, S. (1991). "Cranial anatomy and relationships of a new Triassic bird from Texas." Philosophical Transactions of the Royal Society B: Biological Sciences, 332: 277-342. HTML abstract
- ^ Paul, G.S. (2002). Dinosaurs of the Air: The Evolution and Loss of Flight in Dinosaurs and Birds. Johns Hopkins University Press, Baltimore. ISBN 0-8018-6763-0
- ^ Witmer, L. (2002). "The debate on avian ancestry: phylogeny, function, and fossils." Pp. 3-30 in: Chiappe, L.M. and Witmer, L.M. (eds), Mesozoic birds: Above the heads of dinosaurs. University of California Press, Berkeley, Calif., USA. ISBN 0-520-20094-2
- ^ Nesbitt, Sterling J.; Irmis, Randall B. & Parker, William G. (2007): A critical re-evaluation of the Late Triassic dinosaur taxa of North America. Journal of Systematic Palaeontology 5(2): 209-243.
- ^ Ostrom, J. (1987): Protoavis, a Triassic bird? Archaeopteryx 5: 113-114.
- ^ Ostrom, J.H. (1991): The bird in the bush. Nature 353(6341): 212.
- ^ Ostrom, J.H. (1996): The questionable validity of Protoavis. Archaeopteryx 14: 39-42.
- ^ Chatterjee, S. (1987). "Skull of Protoavis and Early Evolution of Birds." Journal of Vertebrate Paleontology, 7(3)(Suppl.): 14A.
- ^ a b EvoWiki (2004). Chatterjee's Chimera: A Cold Look at the Protoavis Controversy. Version of 2007-JAN-22. Retrieved 2009-FEB-04.
- ^ Chatterjee, S. (1997). The Rise of Birds: 225 Million Years of Evolution. Johns Hopkins University Press, Baltimore. ISBN 0-8018-5615-9
- ^ Feduccia, Alan (1994) "The Great Dinosaur Debate" Living Bird. 13:29-33.
- ^ Why Birds Aren't Dinosaurs. Explore:Thought and Discovery at the University of Kansas. Accessed 8/05/09.
- ^ Jensen, James A. & Padian, Kevin. (1989) "Small pterosaurs and dinosaurs from the Uncompahgre fauna (Brushy Basin member, Morrison Formation: ?Tithonian), Late Jurassic, western Colorado" Journal of Paleontology Vol. 63 no. 3 pg. 364-373.
- ^ Lubbe, T. van der, Richter, U., and Knötschke, N. 2009. Velociraptorine dromaeosaurid teeth from the Kimmeridgian (Late Jurassic) of Germany. Acta Palaeontologica Polonica 54 (3): 401–408. DOI: 10.4202/app.2008.0007.
- ^ a b c d Hartman, S., Lovelace, D., and Wahl, W., (2005). "Phylogenetic assessment of a maniraptoran from the Morrison Formation." Journal of Vertebrate Paleontology, 25, Supplement to No. 3, pp 67A-68A http://www.bhbfonline.org/AboutUs/Lori.pdf
- ^ Brochu, Christopher A. Norell, Mark A. (2001) "Time and trees: A quantitative assessment of temporal congruence in the bird origins debate" pp.511-535 in "New Perspectives on the Origin and Early Evolution of Birds" Gauthier&Gall, ed. Yale Peabody Museum. New Haven, Conn. USA.
- ^ a b Ruben, J., Jones, T. D., Geist, N. R. & Hillenius, W. J. (1997). Lung structure and ventilation in theropod dinosaurs and early birds. Science 278, 1267-1270.
- ^ a b Ruben, J., Dal Sasso, C., Geist, N. R., Hillenius, W. J., Jones, T. D. & Signore, M. (1999). Pulmonary function and metabolic physiology of theropod dinosaurs. Science 283, 514-516.
- ^ Quick, D. E. & Ruben, J. A. (2009). Cardio-pulmonary anatomy in theropod dinosaurs: implications from extant archosaurs. Journal of Morphology doi: 10.1002/jmor.10752
- ^ gazettetimes.com article
- ^ Discovery Raises New Doubts About Dinosaur-bird Links ScienceDaily article
- ^ Ruben, J., Hillenius, W., Geist, N. R., Leitch, A., Jones, T. D., Currie, P. J., Horner, J. R. & Espe, G. (1996). The metabolic status of some Late Cretaceous dinosaurs. Science 273, 1204-1207.
- ^ Theagarten Lingham-Soliar (2003). The dinosaurian origin of feathers: perspectives from dolphin (Cetacea) collagen fibers. Naturwissenschaften 90 (12): 563-567.
- ^ Peter Wellnhofer (2004) "Feathered Dragons: Studies on the Transition from Dinosaurs to Birds. Chapter 13. The Plumage of Archaeopteryx:Feathers of a Dinosaur?" Currie, Koppelhaus, Shugar, Wright. Indiana University Press. Bloomington, IN. USA. pp. 282-300.
- ^ Lingham-Soliar, T et al. (2007) Proc. R. Soc. Lond. B doi:10.1098/rspb.2007.0352.
- ^ Access : Bald dino casts doubt on feather theory : Nature News
- ^ "Transcript: The Dinosaur that Fooled the World". BBC. http://www.bbc.co.uk/science/horizon/2001/dinofooltrans.shtml. Retrieved on 2006-12-22.
- ^ Mayell, Hillary (2002-11-20). "Dino Hoax Was Mainly Made of Ancient Bird, Study Says". National Geographic. http://news.nationalgeographic.com/news/2002/11/1120_021120_raptor.html. Retrieved on 2008-06-13.
- ^ Zhou, Zhonghe, Clarke, Julia A., Zhang, Fucheng. "Archaeoraptor's better half." Nature Vol. 420. 21 November 2002. pp. 285.
- ^ a b Makovicky, Peter J.; Apesteguía, Sebastián; & Agnolín, Federico L. (2005). "The earliest dromaeosaurid theropod from South America". Nature 437 (7061): 1007–1011. doi:10.1038/nature03996.
- ^ Norell, M.A., Clark, J.M., Turner, A.H., Makovicky, P.J., Barsbold, R., and Rowe, T. (2006). "A new dromaeosaurid theropod from Ukhaa Tolgod (Omnogov, Mongolia)." American Museum Novitates, 3545: 1-51.
- ^ a b Forster, Catherine A.; Sampson, Scott D.; Chiappe, Luis M. & Krause, David W. (1998). "The Theropod Ancestry of Birds: New Evidence from the Late Cretaceous of Madagascar". Science 279 (5358): pp. 1915–1919. doi:10.1126/science.279.5358.1915. (HTML abstract).
- ^ a b Chiappe, L.M.. Glorified Dinosaurs: The Origin and Early Evolution of Birds. Sydney: UNSW Press.
- ^ a b Kurochkin, E.N. (2006). Parallel Evolution of Theropod Dinosaurs and Birds. Entomological Review 86 (1), pp. S45-S58. doi:10.1134/S0013873806100046
- ^ Kurochkin, E., N. (2004). A Four-Winged Dinosaur and the Origin of Birds. Priroda 5, 3-12.
- ^ a b c S. Chatterjee. (2005). The Feathered Dinosaur Microraptor: Its Biplane Wing Platform and Flight Performance. 2005 Salt Lake City Annual Meeting.
- ^ a b c d Chatterjee, S., and Templin, R.J. (2007). "Biplane wing platform and flight performance of the feathered dinosaur Microraptor gui." Proceedings of the National Academy of Sciences, 104(5): 1576-1580.
- ^ a b c Holtz, Thomas R.; & Osmólska, Halszka. (2004). "Saurischia". in Weishampel, David B.; Dodson, Peter; & Osmólska, Halszka (eds.). The Dinosauria (Second ed.). Berkeley: University of California Press. pp. 21–24. ISBN 0-520-24209-2.
- ^ a b c d e Xu, Xing; Zheng Xiao-ting; You, Hai-lu (20 January 2009). "A new feather type in a nonavian theropod and the early evolution of feathers". Proceedings of the National Academy of Sciences (Philadelphia). doi:10.1073/pnas.0810055106. PMID 19139401.
- ^ a b c Turner, Alan H.; Hwang, Sunny; & Norell, Mark A. (2007). "A small derived theropod from Öösh, Early Cretaceous, Baykhangor, Mongolia". American Museum Novitates 3557 (3557): 1–27. doi:10.1206/0003-0082(2007)3557[1:ASDTFS]2.0.CO;2. http://hdl.handle.net/2246/5845.
- ^ a b Bryner, Jeanna (2009). "Ancient Dinosaur Wore Primitive Down Coat." http://www.foxnews.com/story/0,2933,479875,00.html
- ^ a b Xu, X., Cheng, C., Wang, X. & Chang, C. (2003). Pygostyle-like structure from Beipiaosaurus (Theropoda, Therizinosauroidea) from the Lower Cretaceous Yixian Formation of Liaoning, China. Acta Geologica Sinica 77, 294-298.
- ^ a b c Xu Xing; Zhou Zhonghe & Prum, Richard O. (2001). "Branched integumental structures in Sinornithosaurus and the origin of feathers". Nature 410 (6825): 200–204. doi:10.1038/35065589.
- ^ Paul, Gregory S. (2008). "The extreme lifestyles and habits of the gigantic tyrannosaurid superpredators of the Late Cretaceous of North America and Asia". in Carpenter, Kenneth; and Larson, Peter E. (editors). Tyrannosaurus rex, the Tyrant King (Life of the Past). Bloomington: Indiana University Press. p. 316. ISBN 0-253-35087-5.
- ^ Martin, Larry D.; & Czerkas, Stephan A. (2000). "The fossil record of feather evolution in the Mesozoic". American Zoologist 40 (4): 687–694. doi:10.1668/0003-1569(2000)040[0687:TFROFE]2.0.CO;2. http://www.bioone.org/perlserv/?request=get-abstract&doi=10.1668%2F0003-1569%282000%29040%5B0687%3ATFROFE%5D2.0.CO%3B2.
- ^ a b T. rex was fierce, yes, but feathered, too.
- ^ Nicholas M. Gardner, David B. Baum, Susan Offner. (2008). No Direct Evidence for Feathers in Tyrannosaurus rex. The American Biology Teacher 70(7):392-392.
- ^ a b Xu, X., Wang, X.-L., and Wu, X.-C. (1999). "A dromaeosaurid dinosaur with a filamentous integument from the Yixian Formation of China". Nature 401: 262–266. doi:10.1038/45769.
- ^ a b c d e f g h i j k Turner, A.H.; Makovicky, P.J.; and Norell, M.A. (2007). "Feather quill knobs in the dinosaur Velociraptor" (pdf). Science 317 (5845): 1721. doi:10.1126/science.1145076. PMID 17885130. http://www.sciencemag.org/cgi/reprint/317/5845/1721.pdf.
- ^ a b Ji, Q., Norell, M. A., Gao, K.-Q., Ji, S.-A. & Ren, D. (2001). The distribution of integumentary structures in a feathered dinosaur. Nature 410, 1084-1088.
- ^ a b American Museum of Natural History. "Velociraptor Had Feathers." ScienceDaily 20 September 2007. 23 January 2008 http://www.sciencedaily.com/releases/2007/09/070920145402.htm
- ^ a b c d e f g h Xu, X., and Zhang, F. (2005). "A new maniraptoran dinosaur from China with long feathers on the metatarsus." Naturwissenschaften, 92(4): 173 - 177.
- ^ a b c d Xu, X., Zhao, Q., Norell, M., Sullivan, C., Hone, D., Erickson, G., Wang, X., Han, F. and Guo, Y. (2009). "A new feathered maniraptoran dinosaur fossil that fills a morphological gap in avian origin." Chinese Science Bulletin, 6 pages, accepted November 15, 2008.
- ^ Currie, PJ & Chen, PJ (2001) Anatomy of Sinosauropteryx prima from Liaoning, northeastern China, Canadian Journal of Earth Sciences, 38: 1,705-1,727.
- ^ Buffetaut, E., Grellet-Tinner, G., Suteethorn, V., Cuny, G., Tong, H., Košir, A., Cavin, L., Chitsing, S., Griffiths, P.J., Tabouelle, J. and Le Loeuff, J. (2005). "Minute theropod eggs and embryo from the Lower Cretaceous of Thailand and the dinosaur-bird transition." Naturwissenschaften, 92(10): 477-482.
- ^ a b c Czerkas, S.A., and Yuan, C. (2002). "An arboreal maniraptoran from northeast China." Pp. 63-95 in Czerkas, S.J. (Ed.), Feathered Dinosaurs and the Origin of Flight. The Dinosaur Museum Journal 1. The Dinosaur Museum, Blanding, U.S.A. PDF abridged version
- ^ Maryanska, T., Osmolska, H., & Wolsan, M. (2002). "Avialan status for Oviraptorosauria". Acta Palaeontologica Polonica 47 (1): 97–116.
- ^ Benton, M. J. (2004). Vertebrate Palaeontology, 3rd ed. Blackwell Science Ltd.
- ^ a b Turner, Alan H.; Pol, Diego; Clarke, Julia A.; Erickson, Gregory M.; and Norell, Mark (2007). "A basal dromaeosaurid and size evolution preceding avian flight" (pdf). Science 317: 1378–1381. doi:10.1126/science.1144066. PMID 17823350. http://www.sciencemag.org/cgi/reprint/317/5843/1378.pdf.
- ^ a b c d Barsbold, R., Osmólska, H., Watabe, M., Currie, P.J., and Tsogtbaatar, K. (2000). "New Oviraptorosaur (Dinosauria, Theropoda) From Mongolia: The First Dinosaur With A Pygostyle". Acta Palaeontologica Polonica, 45(2): 97-106.
- ^ C.M. Chuong, R. Chodankar, R.B. Widelitz (2000). Evo-Devo of feathers and scales: building complex epithelial appendages. Commentary, Current Opinion in Genetics & Development 10 (4), pp. 449-456.
- ^ a b Kurzanov, S.M. (1987). "Avimimidae and the problem of the origin of birds." Transactions of the Joint Soviet-Mongolian Paleontological Expedition, 31: 5-92. [in Russian]
- ^ a b Hopp, Thomas J., Orsen, Mark J. (2004) "Feathered Dragons: Studies on the Transition from Dinosaurs to Birds. Chapter 11. Dinosaur Brooding Behavior and the Origin of Flight Feathers" Currie, Koppelhaus, Shugar, Wright. Indiana University Press. Bloomington, IN. USA.
- ^ Maryańska, T. & Osmólska, H. (1997). The Quadrate of Oviraptorid Dinosaurs. Acta Palaeontologica Polonica 42 (3): 361-371.
- ^ Jones, T.D., Farlow, J.O., Ruben, J.A., Henderson, D.M., and Hillenius, W.J. (2000). "Cursoriality in bipedal archosaurs." Nature, 406(6797): 716–718. doi:10.1038/35021041 PDF fulltext Supplementary information
- ^ Zhou, Z., Wang, X., Zhang, F., and Xu, X. (2000). "Important features of Caudipteryx - Evidence from two nearly complete new specimens." Vertebrata Palasiatica, 38(4): 241–254. PDF fulltext
- ^ Buchholz, P. (1997). Pelecanimimus polyodon. Dinosaur Discoveries 3, 3-4.
- ^ Briggs, D. E., Wilby, P. R., Perez-Moreno, B., Sanz, J. L. & Fregenal-Martinez, M. (1997). The mineralization of dinosaur soft tissue in the Lower Cretaceous of Las Hoyas, Spain. Journal of the Geological Society, London 154, 587-588.
- ^ a b Theagarten Lingham-Soliar. (2008). A unique cross section through the skin of the dinosaur Psittacosaurus from China showing a complex fibre architecture. Proc R Soc B 275: 775-780.
- ^ a b c d e f g h i j k Zheng, X.-T., You, H.-L., Xu, X. and Dong, Z.-M. (2009). "An Early Cretaceous heterodontosaurid dinosaur with filamentous integumentary structures." Nature, 458(19): 333-336. doi:10.1038/nature07856
- ^ Witmer, L.M. (2009), "Dinosaurs: Fuzzy origins for feathers", Nature 458 (7236): 293–295, http://www.nature.com/nature/journal/v458/n7236/full/458293a.html, retrieved on 2009-09-02
- ^ "Tianyulong". Pharyngula. PZ Myers. March 20, 2009. http://scienceblogs.com/pharyngula/2009/03/tianyulong.php. Retrieved on 2009-04-30.
- ^ a b "Tianyulong - a fuzzy dinosaur that makes the origin of feathers fuzzier". Not Exactly Rocket Science:Science for Everyone. Ed Yong. March 18, 2009. http://scienceblogs.com/notrocketscience/2009/03/tianyulong_-_a_fuzzy_dinosaur_that_makes_the_origin_of_feath.php. Retrieved on 2009-07-22.
- ^ Xu, X., Wang, X., Wu, X., (1999). A dromaeosaurid dinosaur with a filamentous integument from the Yixian Formation of China. Nature 401:6750 262-266 doi 10.1038/45769
- ^ Xu, X., Zhao, X., Clark, J.M. (1999). A new therizinosaur from the Lower Jurassic lower Lufeng Formation of Yunnan, China. Journal of Vertebrate Paleontology 21:3 477–483 doi:10.1671/0272-4634
- ^ Xu, X. and Wang, X.-L. (2003). "A new maniraptoran from the Early Cretaceous Yixian Formation of western Liaoning." Vertebrata PalAsiatica, 41(3): 195–202.
- ^ Ji, Q., Ji, S., Lu, J., You, H., Chen, W., Liu, Y., and Liu, Y. (2005). "First avialan bird from China (Jinfengopteryx elegans gen. et sp. nov.)." Geological Bulletin of China, 24(3): 197-205.
- ^ Ji, S., Ji, Q., Lu J., and Yuan, C. (2007). "A new giant compsognathid dinosaur with long filamentous integuments from Lower Cretaceous of Northeastern China." Acta Geologica Sinica, 81(1): 8-15.
- ^ Czerkas, S.A., and Ji, Q. (2002). "A new rhamphorhynchoid with a headcrest and complex integumentary structures." Pp. 15-41 in: Czerkas, S.J. (Ed.). Feathered Dinosaurs and the Origin of Flight. Blanding, Utah: The Dinosaur Museum. ISBN 1-93207-501-1.
- ^ a b c Senter, Phil (2007). "A new look at the phylogeny of Coelurosauria (Dinosauria: Theropoda)". Journal of Systematic Palaeontology 5 (4): 429–463. doi:10.1017/S1477201907002143.
- ^ Osmólska, Halszka; Maryańska, Teresa; & Wolsan, Mieczysław. (2002). "Avialan status for Oviraptorosauria". Acta Palaeontologica Polonica 47 (1): 97–116. http://app.pan.pl/article/item/app47-097.html.
- ^ Martinelli, Agustín G.; & Vera, Ezequiel I. (2007). "Achillesaurus manazzonei, a new alvarezsaurid theropod (Dinosauria) from the Late Cretaceous Bajo de la Carpa Formation, Río Negro Province, Argentina". Zootaxa 1582: 1–17. http://www.mapress.com/zootaxa/2007f/z01582p017f.pdf.
- ^ Novas, Fernando E.; & Pol, Diego. (2002). "Alvarezsaurid relationships reconsidered". in Chiappe, Luis M.; & Witmer, Lawrence M. (eds.). Mesozoic Birds: Above the Heads of Dinosaurs. Berkeley: University of California Press. pp. 121–125. ISBN 0-520-20094-2.
- ^ Sereno, Paul C. (1999). "The evolution of dinosaurs". Science 284 (5423): 2137–2147. doi:10.1126/science.284.5423.2137. PMID 10381873.
- ^ Perle, Altangerel; Norell, Mark A.; Chiappe, Luis M.; & Clark, James M. (1993). "Flightless bird from the Cretaceous of Mongolia". Nature 362 (6421): 623–626. doi:10.1038/362623a0.
- ^ Chiappe, Luis M.; Norell, Mark A.; & Clark, James M. (2002). "The Cretaceous, short-armed Alvarezsauridae: Mononykus and its kin". in Chiappe, Luis M.; & Witmer, Lawrence M. (eds.). Mesozoic Birds: Above the Heads of Dinosaurs. Berkeley: University of California Press. pp. 87–119. ISBN 0-520-20094-2.
- ^ Forster, Catherine A.; Sampson, Scott D.; Chiappe, Luis M.; & Krause, David W. (1998). "The theropod ancestry of birds: new evidence from the Late Cretaceous of Madagascar". Science 279 (5358): 1915–1919. doi:10.1126/science.279.5358.1915. PMID 9506938.
- ^ Mayr, Gerald; Pohl, Burkhard; & Peters, D. Stefan (2005). "A well-preserved Archaeopteryx specimen with theropod features.". Science 310 (5753): 1483–1486. doi:10.1126/science.1120331. PMID 16322455.
- ^ Göhlich, U.B., and Chiappe, L.M. (2006). "A new carnivorous dinosaur from the Late Jurassic Solnhofen archipelago." Nature, 440: 329-332.
- Gauthier, J.; De Queiroz, K. (2001), "Feathered dinosaurs, flying dinosaurs, crown dinosaurs, and the name 'Aves'", New Perspectives on the Origin and Early Evolution of Birds: 7–41.
- Fucheng, Z.; Zhonghe, Z.; Dyke, G. (2006), "Feathers and 'feather-like' integumentary structures in Liaoning birds and dinosaurs", Geological Journal 41.
- Zhou, Z. (2004), "The origin and early evolution of birds: discoveries, disputes, and perspectives from fossil evidence", Naturwissenschaften 91 (10): 455–471.
- Vargas, A.O.; Fallon, J.F. (2005), "Birds have dinosaur wings: the molecular evidence", J Exp Zool (Mol Dev Evol) 304: 86–90.
- Prum, R.O. (2002), "Why ornithologists should care about the theropod origin of birds", The Auk 119 (1): 1–17.
- Clark, J.M.; Norell, M.A.; Makovicky, P.J. (2002). "Cladistic approaches to the relationships of birds to other theropod dinosaurs". Mesozoic birds, above the heads of the dinosaurs. pp. 31–61.
- Perrichot, V.; Marion, L.; Néraudeau, D.; Vullo, R.; Tafforeau, P. (2008), "The early evolution of feathers: fossil evidence from Cretaceous amber of France", Proceedings of the Royal Society B: Biological Sciences 275 (1639): 1197.
- DinoBuzz — dinosaur-bird controversy explained, by UC Berkeley.
- Journal of Dinosaur Paleontology, with many articles on dinosaur-bird links.
- Feathered dinosaurs at the American Museum of Natural History.
- First Dinosaur Found With its Body Covering Intact; Displays Primitive Feathers From Head to Tail — AMNH Press Release
- Notes from recent papers on theropod dinosaurs and early avians
- The evolution of feathers | http://fossil.wikia.com/wiki/Feathered_dinosaurs | 13 |
14 | August 15, 2012
How does one map the sky? It’s a daunting proposal to be sure and no Google cars or cameras are up to the task, but the team behind the Sloan Digital Sky Survey is making headway. The group, now in their third phase of research, recently released the largest ever 3-D map of the sky with some 540,000 galaxies.
Large though it is, the recent map covers a mere eight percent of the sky. By mid-2014, the team, led by Daniel Eisenstein at the Harvard-Smithsonian Center for Astrophysics, will have gathered enough additional information to complete a quarter of the sky.
Other than making a very cool animated video (above) about the project, in which viewers can seem to sail by almost 400,000 galaxies, the map will prove useful in a variety of research projects, from dark energy to quasars and the evolution of large galaxies, and the new information provides more accurate data than any other previous sky survey. Using a combination of imaging and spectroscopy, scientists are able to chart the distance of galaxies and other objects within 1.7 percent precision. In the past, the distances of bodies in space could only be measured by the far less precise Doppler shift observation of Hubble’s Law.
“That’s a very provocative value of precision because astronomers spent a lot of the last century arguing about whether the Hubble Constant was 50 or 100, which is basically arguing about a factor of two in distance. Now we’re using this method to get to precisions approaching a percent,” explains Eisenstein.
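To see that factor-of-two argument in numbers, here is a minimal sketch of my own (not the survey's code) of the older Hubble's Law route, d ≈ cz/H0 for small redshifts; the redshift and Hubble constant values are illustrative:

```python
# Hubble's Law route: distance d ~ c*z / H0. The same redshift gives half the
# distance if H0 = 100 rather than 50 (km/s per megaparsec), which is exactly
# the factor-of-two disagreement described above.
C_KM_S = 299_792.458  # speed of light, km/s

def hubble_distance_mpc(z, h0):
    return C_KM_S * z / h0

for h0 in (50.0, 70.0, 100.0):
    print(h0, round(hubble_distance_mpc(0.05, h0)), "Mpc")
# 50 -> ~300 Mpc, 100 -> ~150 Mpc: every inferred distance shifts by a factor of two.
```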
The mapping method relies on something called the baryon acoustic oscillation, which is "caused by sound waves that propagate in the first million years after the Big Bang," Eisenstein explains. "These sound waves basically cause a tiny correlation between regions of space 500 million light years apart." In the years after the Big Bang, as one galaxy formed and became too dense, it would emit a sound wave. "That sound wave travels out to a distance that corresponds today with 500 million light years and where it ends up produces (a region) slightly more enhanced than its galaxy population." In other words, slightly more pairs of galaxies are separated by 500 million light years than by 400 or 600 million light years.
“Because we know these sound waves pick out a distance of 500 million light years, now we can actually measure distance [in the universe], so in the survey we’ve measured the distance to these galaxies.”
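As a rough illustration of how a standard ruler becomes a distance, the sketch below takes the roughly 500-million-light-year scale described above and applies the small-angle formula; it is a toy picture of the idea, not the collaboration's actual analysis, and the angles are made up:

```python
import math

# The acoustic scale acts as a "standard ruler": if galaxy pairs show an excess
# at a known physical separation, the angle that separation spans on the sky
# tells us how far away those galaxies are (small-angle approximation).
RULER_MLY = 500.0  # acoustic scale from the article, in millions of light years

def distance_mly(angle_degrees):
    return RULER_MLY / math.radians(angle_degrees)

# A ruler spanning 4 degrees on the sky sits ~7 billion light years away;
# at 8 degrees it would be only about half as far.
print(round(distance_mly(4.0)), round(distance_mly(8.0)))
```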
These more accurate measurements mean exciting news for the search for dark energy, the acceleration of the expansion of the universe. “The way we measure dark energy is by measuring distances to certain objects with very high precision,” says Eisenstein.
The method for taking these measurements is surprisingly physical in nature. Initial imaging allows the scientists to get a basic map of what objects are where in a certain region of the sky: quasars, galaxies, stars and other items. They then select which objects would be useful for further study. Since so many teams, including the Lawrence Berkeley National Laboratory and the University of Cambridge, are involved, different groups pick different objects depending on their area of research.
Moving on to spectroscopy, the researchers can measure 1,000 objects at a time. On a large aluminum disk, they drill holes to correspond to each object's position. "On a given plate there might be 700 galaxies and 200 quasar candidates and 100 stars," Eisenstein explains. Then the team will hand-place fiber optic cables into each hole. Light from each object hits the cables and is taken to the instrument. The disk sits for an hour to absorb the light and then it's on to the next portion of the sky. Some nights the team will fill up to nine disks, but that's rare.
Visitors can view some of the materials used by the sky survey team at the Air and Space Museum, including a charge-coupled device that converts light into electrical signals that can be read digitally to create a functional map.
When the project is completed, they will have 2,200 plates and a map of some two million objects. And you’ll have the night sky at your fingertips. Google that!
| http://blogs.smithsonianmag.com/aroundthemall/2012/08/largest-3-d-map-of-the-sky-released/ | 13 |
We draw a triangle and the center of the circle passing through the three vertices of the triangle (the triangle's circumcenter). Taking advantage of Cabri-Geometry's dragging features, we drag the vertices, observing that in some instances the center of the circle lies on one of the triangle's sides. Measuring, in such cases, the opposite angle, we conclude that it is a right angle. The converse statement (in a right triangle the circumcenter always lies on the hypotenuse) can likewise be verified.
We will show, in this simple case, how a computer algebra system is able to automatically "discover" the same result. First of all we must establish a (wrong) conjecture, just involving the given construction, such as: on every triangle the circumcenter lies on one side. Therefore we take as hypotheses the given construction (the given vertices, the center of the circle). As thesis, we state that this center lies on a side. The system will determine that the thesis is generally false, and that it is true if and only if we have a right triangle. (Continues in Next Page)
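A minimal sketch of that automatic-discovery idea, written here in SymPy rather than the system actually used on the next pages; the coordinate choice and variable names are mine:

```python
from sympy import symbols, solve, simplify, expand

# Put B and C on the x-axis; A = (a1, a2) is a generic third vertex.
a1, a2, c = symbols('a1 a2 c', real=True, positive=True)
A, B, C = (a1, a2), (0, 0), (c, 0)

# The circumcenter O = (ox, oy) is equidistant from the three vertices.
ox, oy = symbols('ox oy', real=True)
d2 = lambda P, Q: (P[0] - Q[0])**2 + (P[1] - Q[1])**2
O = solve([d2((ox, oy), A) - d2((ox, oy), B),
           d2((ox, oy), B) - d2((ox, oy), C)], [ox, oy], dict=True)[0]

# Hypothesis "O lies on side BC" amounts to its y-coordinate vanishing.
on_side = simplify(O[oy] * 2 * a2)                  # clear the denominator 2*a2
# Thesis "the angle at A is right" amounts to (B - A).(C - A) = 0.
right_angle = expand((0 - a1) * (c - a1) + (0 - a2) * (0 - a2))

print(simplify(on_side - right_angle) == 0)  # True: the two conditions coincide
```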
| http://mathforum.org/mathed/seville/recio/situation1.html | 13 |
10 | If you look at a cylindrical block from the bottom, you see a circle. If you look at it from the side you see a square.
Imagine a cylindrical block that is spinning around amazingly fast. When you look at it, it stops spinning and snaps into either a circle or a square.
This is similar to how a qubit will behave. Whereas a normal bit has a value of either one or zero, a qubit is both: it has some amount of one and some amount of zero. However, when you measure it, the qubit will always snap into a one or a zero. These measurements are probabilistic and will not be the same each time.
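A toy simulation of that snapping behaviour, with made-up amplitude values (a sketch, not a full quantum simulator):

```python
import random

# A qubit holds amplitudes for 0 and 1; each measurement "snaps" to a definite
# outcome with probability given by the squared amplitude (the Born rule).
def measure(amp0, amp1, shots=10):
    p0 = abs(amp0) ** 2 / (abs(amp0) ** 2 + abs(amp1) ** 2)
    return [0 if random.random() < p0 else 1 for _ in range(shots)]

print(measure(0.894, 0.447))  # roughly 80% zeros, 20% ones, different every run
```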
How do black holes affect light?
Assuming we could get close enough to a black hole without dying, how would it affect what the surroundings look like? How will the bending of light impact what we see?
The reddit user entropyjump synthesized what a black hole would look like (before it sucked up a bunch of stuff). Watch the video and read the description below:
The youtube movie shows a simulated view of a small black hole, if it were suspended in the air about a meter away from the camera. I wrote the simulation in Python, and used a spherical panorama image available online. In the movie clip, the camera is orbiting the black hole to show what the environment looks like as light is traveling through strongly curved spacetime close to the black hole. In some movie frames, a so-called ‘Einstein ring’ can be seen: this feature appears when there is an object exactly behind the black hole as seen by the camera. Light from this object passes around all sides of the black hole on its way toward us, forming a ring around its shadow.Although this black hole is tiny (it has a Schwarzschild radius of about 1.8 centimeters), its mass is about twice that of Earth. Such a black hole would wreak havoc on our planet if it were to come in the vicinity of Earth. So, this is just a visualization of how light would behave close to it, and not a full physical simulation of the other effects the black hole might have on its environment.
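The quoted size is easy to check with the Schwarzschild radius formula; here is a quick sketch of my own (not part of entropyjump's simulation code):

```python
# Schwarzschild radius r_s = 2*G*M / c^2 for a black hole of twice Earth's mass.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_EARTH = 5.972e24   # kg

r_s = 2 * G * (2 * M_EARTH) / C**2
print(round(r_s * 100, 2), "cm")  # ~1.77 cm, matching the ~1.8 cm quoted above
```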
Let's just say I had practically infinite energy. How do I go about turning this into a stream of protons? Don't hold back on the quantum field theory. Smashing existing particles together and filtering out what we want (protons) is not a good enough answer.
The first practical complication is that you cannot (as far as we know) create matter without also creating an equal amount of antimatter. Of course, the fact that the observable universe is mostly regular matter indicates that there is some lopsidedness to this symmetry, and so it may be possible to find conditions that at least create slightly more matter than antimatter. Still, this is a problem that you would need to overcome to get your pure stream of regular-matter protons.
Another problem is the fact that to create particles we simply amass a very large amount of energy in a very small space and see what pops out. We have no way to command that only certain particles be created. For example, even if I amass enough energy to allow for the spontaneous creation of a pair of protons (the proton and its antimatter partner), I have no way to know whether protons are what is going to be created, or other particles whose combined mass and energy add up to the mass of the proton pair. We can only predict the frequency with which certain particles will be created.
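For scale, here is a rough sketch of the minimum energy bill, using the proton's textbook rest energy; real collisions need considerably more, because momentum must also be conserved:

```python
# Minimum energy to create one proton-antiproton pair at rest.
PROTON_REST_ENERGY_MEV = 938.272  # m*c^2 for a proton, in MeV

pair_threshold_mev = 2 * PROTON_REST_ENERGY_MEV
print(pair_threshold_mev, "MeV per pair")  # ~1876.5 MeV, i.e. about 1.9 GeV

# With "practically infinite energy" this is cheap: one kilowatt-hour already
# covers the rest energy of roughly 1.2e16 proton-antiproton pairs.
KWH_IN_MEV = 3.6e6 / 1.602e-13  # joules in a kWh divided by joules per MeV
print(f"{KWH_IN_MEV / pair_threshold_mev:.2e}")
```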
Finally, although the transformation of energy into matter and matter into energy is a common occurrence in nature, and an entire industry (the nuclear power industry) has been made possible by our understanding of the transition, we still aren't anywhere close to having a mass-energy conversion machine.
I can try and explain the conversion machine and our current methods of conversion if you want, but I think it's off the topic of your question, and it looks like I've made an ugly wall of text already.
If any of you reading this see something that I’ve got wrong, or want to explain in more detail please do! This is a topic I’ve been curious about for years.
Edit: I saw your post in r/physics. No, we can't do better than smash particles together and see what comes out. Think of it this way: we don't create matter, we simply create the conditions that allow matter to be created. The conditions that are needed are a very high concentration of energy, and the only way we have to achieve those conditions is particle colliders. Unfortunately, if we have enough energy to allow for the creation of a proton, then we have also allowed for the creation of many smaller particles that will need to be filtered out. So I'm sorry if smashing existing particles together and filtering out what we want is not a good enough answer, because right now it's the only answer.
That’s how I roll
Thunderclouds emit gamma rays in powerful, millisecond-long bursts called terrestrial gamma-ray flashes, first discovered by space observatories.
These bursts can also produce beams of electrons and even of antimatter that can travel halfway around the globe.
All proposed explanations for the phenomena involve strong electric fields unleashing avalanches of electrons inside clouds, but none fully accounts for the sheer energies of the gamma rays.
New dedicated space missions and research aircraft may solve the mystery, as well as find out if the flashes pose radiation exposure risks for airline flights.
In designing this jet-injection mechanism, the engineers relied on what’s known as a Lorentz force actuator (Image 3). The Lorentz force actuator in this case is a small permanent magnet surrounded by a coil of wires. The coil of wires, or solenoid, is part of a piston system that is separate from the permanent magnet which lies in the center. If we recall from high school physics, we know that when a current is passed through the wires of a solenoid, the solenoid becomes an electromagnet which, in turn, creates its own magnetic field. Now, if this new field is opposite that of the permanent magnet, meaning if their fields repel, then a repulsive force will be established. This force will accelerate the piston towards the nozzle, creating a sudden change in pressure which then ejects the medicine out of the nozzle.
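To get a feel for the numbers, here is a back-of-the-envelope sketch of the coil force; every value in it is an illustrative guess rather than the injector's actual specification:

```python
import math

# Force on a current-carrying coil sitting in the permanent magnet's field:
# F = B * I * L, with L the total length of wire inside the field.
B = 0.5               # flux density across the coil, tesla (assumed)
I = 10.0              # drive current, amperes (assumed)
TURNS = 200           # number of coil turns (assumed)
COIL_DIAMETER = 0.01  # metres (assumed)

wire_length = TURNS * math.pi * COIL_DIAMETER  # total wire length in the field
force = B * I * wire_length
print(round(force, 1), "N")  # ~31.4 N accelerating the piston toward the nozzle
```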
We’ve found the Higgs Boson: What next?
The LHC is about to have a $1.82 Billion upgrade to research dark matter.
It might have only just found the elusive "God particle", but the Large Hadron Collider at the CERN laboratory, near Geneva, is to have a $A1.82 billion upgrade at the end of the decade to investigate the mystery of dark matter.
Scientists believe dark matter holds the universe together. Yet while it is all around us, making up 84 per cent of all matter, it has never been seen as it does not produce or reflect light.
Now scientists hope that a 10-fold boost to the power of the beams of particles being smashed together inside CERN’s 27-kilometre tunnels will allow them to create and detect dark matter.
Other experiments at the laboratory will continue until the end of this year, when the collider will close for 20 months for repairs. | http://thequantumlife.tumblr.com/tagged/Physics | 13 |
12 | How are black holes discovered?
How did the first astronomer discover the first black hole? Who had discovered it and when was it found? Please explain how the first black hole was discovered.
No single astronomer has the credit for discovering a black hole. Before I explain how astronomers found evidence for the existence of black holes, let me give you some of the necessary physics background.
1. Any body which is above absolute zero (-273 Celsius) radiates thermal energy, and the peak wavelength of emission depends on the temperature of the object. For example, the sun's surface is about 6000 Kelvin so that its peak emission is in green light. If an object's temperature is about a million degrees, then its peak emission will be in X-rays (a quick calculation of both peak wavelengths is sketched just after this list).
2. Normally stars are prevented from collapsing under gravity by thermal gas pressure and radiation pressure. However, if the thermal energy source (nuclear fusion reactions) stops, then the star will collapse. It turns out that there are forces other than gas pressure which counteract gravity when the star becomes more compact (for instance, a neutron star is only about 10 km across!). But the astrophysicist Chandrasekhar proved that there is a maximum mass beyond which nothing can beat gravity. So, if we detect a compact object in space which is more than this critical mass, then we can be confident that it is a black hole.
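Here is the quick calculation promised in note 1, a small sketch using Wien's displacement law and its standard constant:

```python
# Wien's displacement law: peak wavelength = b / T, with b ~ 2.898e-3 m*K.
WIEN_B = 2.898e-3  # metre-kelvins

def peak_wavelength_nm(temperature_k):
    return WIEN_B / temperature_k * 1e9  # metres -> nanometres

print(round(peak_wavelength_nm(6_000)))         # ~483 nm: blue-green visible light
print(round(peak_wavelength_nm(1_000_000), 1))  # ~2.9 nm: well into the X-ray band
```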
Now to return to the question of finding black holes: How can one detect a black hole if nothing can escape from it? Consider a binary system of stars where one of the stars is a black hole and the other a normal star. If the normal star's envelope gets close enough to the black hole, then the fierce gravity of the black hole can rip out gas from the normal star which is then swallowed by the black hole.
However, due to the conservation of angular momentum, the gas cannot plunge straight into the black hole, but must orbit it for some time before it is swallowed. Thus, a disc-like structure is formed around the black hole, from which gas is pulled slowly into the black hole. As the gas orbits the black hole in the disc, its temperature is raised to several million degrees, so it emits radiation in the X-ray part of the spectrum (see note 1 above). Thus, when we detect X-ray sources in the sky, we know that there is gas which has been heated to several million degrees, and one of the mechanisms that can achieve this is an accretion disc around a black hole.
If the system giving out X-rays turns out to be a binary star, then a case can be made that one of the stars is a compact object (a neutron star or a black hole). Binary stars are very useful to astronomers because they allow us to measure the masses of the stars in the system (by Kepler's laws). If the mass of the compact object turns out to be more than the critical mass mentioned above, then one can be sure that it is a black hole. So that is how black holes are discovered.
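To see how Kepler's laws give a mass, here is a minimal Python sketch using Newton's form of Kepler's third law for the total mass of a binary. The period and separation below are illustrative placeholders, not the measured values for any particular system.

import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30    # solar mass, kg

# Illustrative binary: both numbers are assumed values for the sketch.
P = 5.6 * 86400     # orbital period, seconds (5.6 days)
a = 0.2 * 1.496e11  # semi-major axis of the relative orbit, metres (0.2 AU)

# Kepler's third law: M1 + M2 = 4 * pi^2 * a^3 / (G * P^2)
M_total = 4 * math.pi**2 * a**3 / (G * P**2)
print(f"Total system mass ~ {M_total / M_sun:.0f} solar masses")

In practice the spectroscopic data constrain a combination of the masses and the orbital inclination rather than the total mass directly, and the visible star's mass must be estimated separately; but if the compact companion still comes out well above the critical mass, a black hole is the natural conclusion.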
Now about the actual discovery: in the early 1970s, an intense X-ray source was found in the constellation Cygnus, called Cygnus X-1. In the spring of 1972, Cygnus X-1 was identified with a star known by its classification number HDE 226868 (which is also a radio source). Soon evidence was found that it is a binary star system with a period of about 5.6 days.
By the special theory of relativity, no information can travel faster than the speed of light. Hence, a celestial object cannot change its luminosity on a time scale shorter than the time taken for light to travel from one side of it to the other. Analysis of Cygnus X-1 showed that its emission had luminosity variations on time scales as short as thousandths of a second, suggesting that the emitting object was at most a few hundred kilometers wide. Thus evidence was found that one of the stars was a compact object. Finally, astronomers used the binary star system to determine the mass of the compact object and found that it was greater than the critical mass, so that it was most likely a black hole. That is the story of the discovery of the first black hole in our universe.
Since then, astronomers have detected several black holes in space using several techniques. While one class of black holes has "small" masses (greater than about 5 times the mass of the sun), there are others which have gigantic masses (more than a million times the mass of the sun), called supermassive black holes. These black holes are found in the centers of many galaxies, with our own Milky Way harbouring a black hole of a few million solar masses at its center.
John wants to know the values of the area and perimeter of a rectangle. John can take measurements of the length and width of the rectangle in inches. John's measurements are expected to be accurate to within 0.1 inch.
1. Identify the inputs and outputs of the problem.
Inputs: length of rectangle, width of rectangle
Outputs: area and perimeter of rectangle
2. Identify the processing needed to convert the inputs to the outputs.
Area = length * width
Perimeter = 2W + 2L (W = width, L = length)
3. Design an algorithm in pseudocode to solve the problem. Make sure to include steps to get each input and to report each output.
4. Identify two test cases, one using whole number values, and one using decimal number values. For each of the two test cases show what inputs you will use and what your expected outputs should be.
5. Write the program to implement your algorithm.
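One possible Python implementation of step 5 is sketched below. The prompt wording and output format are my own choices rather than anything specified in the problem.

# Read the rectangle's dimensions (inches), then report area and perimeter.
length = float(input("Enter the length of the rectangle in inches: "))
width = float(input("Enter the width of the rectangle in inches: "))

area = length * width                # Area = length * width
perimeter = 2 * width + 2 * length   # Perimeter = 2W + 2L

print(f"Area:      {area} square inches")
print(f"Perimeter: {perimeter} inches")

For step 4, two suitable test cases would be length 5 and width 3 (expected area 15, perimeter 16) and length 4.5 and width 2.1 (expected area 9.45, perimeter 13.2).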
The Basics of Pie Charts
A pie chart looks – you’ve guessed it – a bit like a pie. The chart is a circle with various-sized slices ‘cut out’ from the middle to the edge. The size of the slices shows the relative size of the categories. You often see a pie chart showing the results of an opinion poll.
Pie charts use angles to show the relative sizes of various categories. Pie charts are circular and are cut into ‘slices’: the bigger the slice of pie, the bigger the group it represents.
Deciding when to use a pie chart
You use a pie chart rather than a bar chart when you’re not very interested in the actual numbers you want to represent but want to see how big the groups are compared with each other.
A good example is if you want to give a presentation about the age groups of your company’s customers but don’t want the audience to know precisely how many customers you have.
You frequently see pie charts on election-night reports on TV to show the distribution of votes. In a very close election race, the slices representing the two front-runners are almost the same size.
Handling angles, percentages and numbers
You probably won’t be surprised to find that you can work out the values associated with pie charts using the Table of Joy. A whole circle contains 360 degrees. In a pie chart, those 360 degrees correspond to the total of the values represented in the chart.
The Table of Joy is a technique for figuring out what sum you need to do when you have two amounts you know to be proportional – that is, if you double the size of one, you double the size of the other. The Table of Joy looks like an oversized noughts and crosses grid.
When you work with a pie chart, you may need to figure out one of the following three things:
The size of the angle in a slice.
The value of a slice.
The total of the values in all the slices.
To find one of these things, you need to know the other two. Here’s how to use the Table of Joy to work with a pie chart:
Draw out a noughts-and-crosses grid.
Leave yourself plenty of room in the grid for labels.
Label the top row with ‘value’ and ‘degrees’.
Label the sides with ‘slice’ and ‘circle’.
Write 360 in the ‘circle/degrees’ cell and the two other pieces of information you have in the appropriate places.
Put a question mark in the remaining cell.
Write down the Table of Joy sum.
The sum is the number in the same row as the question mark times by the number in the same column, all divided by the number opposite.
Work out the sum.
The answer is the value you’re looking for.
To convert an angle into a percentage (or vice versa) you use a similar process. The whole circle – 360 degrees – corresponds to the whole of the data – 100 per cent. Use the same steps, but change the ‘value’ column to ‘per cent’, and in the ‘circle/per cent’ cell write 100.
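The Table of Joy cross-multiplication for a pie chart boils down to one line of arithmetic, as this small Python sketch shows. The survey categories and counts are made-up illustrative numbers.

# Illustrative survey results: category -> number of responses
results = {"Yes": 45, "No": 30, "Undecided": 15}
total = sum(results.values())

for category, value in results.items():
    degrees = value / total * 360   # slice angle: value's share of the full circle
    percent = value / total * 100   # the same share expressed as a percentage
    print(f"{category}: {degrees:.0f} degrees, {percent:.1f} per cent")

Reading the Table of Joy for the 'Yes' slice gives the same sum: degrees = 45 times 360, divided by 90, which is 180.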
The upper limit on the mass of a neutron star is about 3 solar masses. Beyond that mass, the star can no longer support itself against its own gravity, and it must collapse. No known force can prevent the material from collapsing all the way to the point-like singularity, a region of extremely high density where the known laws of physics break down. Surrounding the singularity, at a distance of a few kilometres for a solar-mass object, is a region of space from which even light cannot escape – a black hole. Astronomers believe that the most massive stars form black holes, rather than neutron stars, after they explode in a supernova.
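The "few kilometres for a solar-mass object" figure follows from the Schwarzschild radius, R_s = 2GM/c^2. Here is a quick Python check using standard physical constants; nothing in it is specific to this article.

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg

def schwarzschild_radius(mass_kg):
    """Radius of the event horizon for a non-rotating black hole."""
    return 2 * G * mass_kg / c**2

for n in (1, 3, 10):
    r = schwarzschild_radius(n * M_sun)
    print(f"{n} solar masses -> event horizon radius ~ {r / 1000:.1f} km")

For one solar mass the result is about 3 km, and the radius grows in direct proportion to the mass.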
Conditions in and near black holes cannot be described by Newtonian mechanics. A proper description involves the theories of relativity developed by Albert Einstein early in the twentieth century. Even relativity theory fails right at the singularity.
The "surface" of a black hole is the event horizon. At the event horizon, the escape velocity equals the speed of light. Within this distance, nothing can escape. Even photons passing too close to a black hole are deflected onto paths that cross the event horizon and become trapped.
Relativity theory describes gravity in terms of a warping, or bending, of space by the presence of mass. The more mass, the greater the warping. All particles – including photons - respond to that warping by moving along curved paths. A black hole is a region where the warping is so great that space folds back on itself, cutting off the interior of the hole from the rest of the universe.
To a distant observer, the clock on a spaceship falling into a black hole would show time dilation – it would appear to slow down as the ship approached the event horizon. The observer would never see the ship reach the surface of the hole. At the same time, light leaving the ship would be subject to gravitational redshift as it climbed out of the hole's intense gravitational field. Light emitted just at the event horizon would be redshifted to infinite wavelength. Both phenomena are predictions of the theory of relativity. The gravitational redshifts due to the Earth and the Sun are very small, but they have nonetheless been detected experimentally.
Once matter falls into a black hole, it can no longer communicate with the outside. However, on its way in, it can form an accretion disk and emit X-rays just as in the neutron-star case. The best candidates for black holes are binary systems in which one component is a compact X-ray source. Cygnus X-1, a well-studied X-ray source in the constellation Cygnus, is a long-standing black hole candidate. Studies of orbital motions imply that the compact objects are too massive to be neutron stars, leaving black holes as the only logical alternative.
Vector Addition: Force Table
The objective is to experimentally verify the parallelogram law of vector addition by using a force table.
A force table, a set of weights, a protractor, a metric ruler, a scientific calculator, and graphing paper
Concurrent forces are forces that pass through the same point. A resultant force is a single force whose effect is the same as the sum of a number of forces. The equilibrant of a system of forces is equal in magnitude and opposite in direction to the resultant of those forces. Review the introduction section of Experiment 2 for additional information on different graphical methods as well as the analytical method of finding a resultant, if necessary.
Set up a force table as shown in the following figure with its three 50.0-gram hanging weights.
Be careful about the following points while using the force table:
1) The direction of the forces must be set by adjusting the strings at the desired angles. The angles must be read from directly above the strings to prevent parallax error.
2) The string (not the edge of the clamp) represents the line of action of the force.
3) Each collar that slides around the circular platform for adjusting the angle of each force is grooved. The groove of each slider (collar) must be flush with the edge of the circular platform for correct angle adjustment as well as measurement.
The schematic diagram of a force table
Take the following steps for each vector addition. Complete each row of Table 1 before going to the next row. This means that the %errors in each row must be calculated before the start of the experiment for the next row.
a) In each of the trials 1 and 2, place the weights for F1 and F2 in accordance with Table 1 (below) at the specified angles. The ring is the object under study. This means that all forces are acting on the ring. Place enough weights on the third cord (Force F3) and adjust its angle until the system is in equilibrium. At equilibrium, the ring is exactly at the center of the circular platform as can be judged by the stud at the center. Force F3 that is needed to bring the ring to equilibrium is called the "equilibrant." Record the angle and magnitude of F3.
b) Note that if you add 180° to, or subtract 180° from, the angle of F3, it gives you the angle of R, the resultant of F1 and F2 that you are looking for. Do this as well and record the values for R in Table 1. The magnitude of R is exactly equal to the magnitude of F3, the equilibrant, and angle of R is 180° different from the angle of F3. These last two values are your measured values for vector R, by the force table method (experimental).
c) Now, find the magnitude and direction (angle) of R by calculation (the analytical method). First find Rx and Ry, then R and θ as usual. These calculated values are your accepted values for vector R. (A short numerical sketch of this calculation follows step d.)
d) Not only do you need to add F1 and F2 graphically (by the parallelogram method) in lab as part of the experiment, but you must also include the graphical method (parallelogram) in your report. Calculate a %error on each of R and θ, and record them in the last columns of Table 1.
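The sketch referenced in step c: a minimal Python calculation of the analytical resultant by components. The two forces used here are illustrative values only, since the actual entries for Table 1 are assigned in lab.

import math

# Illustrative forces: (magnitude in gram-force, angle in degrees).
forces = [(200.0, 30.0), (150.0, 120.0)]

# Sum the x and y components of each force.
Rx = sum(m * math.cos(math.radians(ang)) for m, ang in forces)
Ry = sum(m * math.sin(math.radians(ang)) for m, ang in forces)

R = math.hypot(Rx, Ry)                          # magnitude of the resultant
theta = math.degrees(math.atan2(Ry, Rx)) % 360  # direction, 0-360 degrees

print(f"Rx = {Rx:.1f} gf, Ry = {Ry:.1f} gf")
print(f"R  = {R:.1f} gf at {theta:.1f} degrees")
print(f"Equilibrant F3 = {R:.1f} gf at {(theta + 180) % 360:.1f} degrees")

Comparing R computed this way (the accepted value) with the R measured on the force table (the experimental value) gives the %error asked for in step d.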
Given: the given values are in Table 1.
Measured: Record your measured values in Table 1.
|F1 Magn.(gf)|F1 Angle|F2 Magn.(gf)|F2 Angle|F3 (equilibrant) Magn.(gf)|F3 Angle|R (measured) Magn.(gf)|R Angle|R (calculated) Magn.(gf)|R Angle|%error on R|%error on θ|
Comparison of the results:
Provide the percent error formula used as well as the calculated values of percent errors.
State your conclusions of the experiment.
Provide a discussion if necessary.
Include the following questions and their answers in your report:
1) Two forces, one 500gf and the other 800gf, act on a body. What are the maximum and minimum possible magnitudes of the resultant force? Hint: Sketch many parallelograms that have their adjacent sides equal to 5cm and 8cm, for example, to represent 500gf and 800gf, respectively, but with different angles between those adjacent sides. If the different angles you choose are say, 0, 30, 60, 90, 120, 150, and 180 degrees, you will see how the magnitude of the resultant changes case to case and then you will be able to decide the maximum and minimum values for the resultant.
2) Could four forces be placed in the same quadrant or in two adjacent quadrants and still be in equilibrium? Draw a sketch and explain your answer.
3) What is the relationship between the equilibrant vector and the resultant?
The Black Box
In 1859, the scientist Gustav Robert Kirchhoff introduced an interesting problem into the world of physics: the question of blackbody radiation. A "blackbody" is basically a black box that absorbs all the radiation that is directed toward it. The amount of energy that it emits is independent of the size or shape of the box; it depends only on temperature.
For decades, physicists worked to figure out the relationship between the temperature of the blackbody and the distribution of the emitted energy along the electromagnetic spectrum. This was of particular interest to theorists because finding the relationship could yield valuable physical constants that could then be applied to other physics concerns. However, there was a more concrete and technical reason to search for a formula relating energy to temperature. Such an equation could be used as a standard for rating the strength of electric lamps.
For this reason, the imperial bureau of standards–the Physikalisch-Technische Reichsanstalt–took a special interest in finding the formula. And, in 1896, a young German physicist working there, Wilhelm Wien, seemed to have stumbled onto an equation that worked. With the knowledge of the spectral distribution of the energy at one temperature, Wien's equation would produce the distribution for any other temperature. It was an experimentally accurate theory, but Wien had no explanation for why his equation worked; he knew only that it did.
Meanwhile, Planck was hired to take Kirchhoff's old job at the University of Berlin. Planck spent much of the 1890s studying problems of chemical thermodynamics, specifically entropy. His work in this field led him to the puzzle of blackbody radiation, and he set himself the goal of finding a workable theory that would yield Wien's equation.
But just as Planck thought he'd found the answer, a series of experiments proved that Wien's equation was actually incorrect. Rather than assuming his theory was correct and hoping the empirical data would eventually prove him right, Planck chose to trust the experimental results: Wien's theory was wrong, which mean Planck's was, too. So, in 1900, Planck was forced to start all over again.
At this point, Planck took a revolutionary step, although he didn't realize it at the time. Unable to get the numbers to work any other way, he made a bold assumption: Planck posited that energy was emitted by the black box in tiny, finite packets. This was an unprecedented move, as it had always been assumed that energy came in an unbroken continuous wave, not in a series of discrete energy packets. But the assumption led Planck to an equation that worked, the equation that would make him famous: E = hν.
In this equation, E stands for the energy of a single packet of radiation, ν is the frequency of the light, and h is a mathematical constant that came to be known as "Planck's constant." If Planck was right, then energy could only be emitted in certain units–multiples of hν. Planck called these units "quanta," Latin for "how much." This equation challenged everything that had been previously thought about energy. But no one, not even Planck, realized this at the time.
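For a sense of scale, here is a quick Python sketch of the size of one quantum of visible light, using modern values of the constants; the 550 nm wavelength is simply an illustrative choice for green light.

h = 6.626e-34   # Planck's constant, joule-seconds
c = 2.998e8     # speed of light, m/s

wavelength = 550e-9          # green light, metres (illustrative)
frequency = c / wavelength   # about 5.5e14 Hz
E = h * frequency            # energy of a single quantum, E = h * nu

print(f"frequency   ~ {frequency:.2e} Hz")
print(f"one quantum ~ {E:.2e} J ({E / 1.602e-19:.2f} eV)")

The answer, roughly 3.6 x 10^-19 joules, is far too small to notice in any everyday light source, which helps explain why the graininess of energy went unsuspected for so long.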
Planck's equation worked, and by 1908, everyone in the field had accepted it, but even the best physicists of the time failed to see its implications. Like Planck, they considered the quantum assumption to be nothing more than a convenience, a mathematical abstraction with no consequences for the real world.
Despite this oversight, Planck's work was impressive enough to draw the attention and admiration of his peers. The new equation would, in itself, have been enough to make Planck's career. Planck's theory yielded two new universal constants that related mechanical measures of energy to temperature measures: h and K. Planck called K "Boltzmann's constant", a gesture of appreciation to Ludwig Boltzmann, whose theories had led Planck to his own grand solution. In 1900, the value of h meant little to physicists, but K meant a great deal.
Knowing that such a constant as K existed, physicists had composed the equation LKT = pressure of a standard unit of gas. In this equation, L stands for the number of molecules in a standard unit of gas and T stands for the absolute temperature of the gas. They knew that the number of molecules and the temperature of a gas were directly related to the pressure it exerted, but they didn't know how, since the values of both L and K were a mystery.
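Read as an ideal-gas relation for one standard unit of gas, the equation is pV = LKT, where V is the volume of that standard unit. Taking the "standard unit" to be one mole at standard temperature and pressure is my assumption about what the text means, but it shows how a value of K immediately fixes L. A minimal Python sketch with modern values:

k = 1.381e-23     # Boltzmann's constant (the text's K), J/K
p = 101325.0      # standard atmospheric pressure, Pa
T = 273.15        # standard temperature, K
V = 0.022414      # volume of one mole of ideal gas at STP, m^3

# Rearranging pV = L * k * T for the number of molecules L:
L = p * V / (k * T)
print(f"L ~ {L:.3e} molecules per mole")

The result, about 6.02 x 10^23, is Avogadro's number, exactly the kind of quantity the narrative says physicists could finally pin down once Planck had fixed the value of K.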
Thanks to Planck, physicists could finally derive a value for L. And knowing L eventually led to even more discoveries, including a theoretical confirmation of the charge of a single electron. This was one of the earliest connections physicists were able to make between electrodynamics and atomic theory, and bridging the gap between these two fields had been one of Planck's highest goals.
He wasn't the only one with this goal. As the impact of Planck's work grew and grew, his peers sat up and took notice. In 1908, Planck was nominated for the Nobel Prize in physics for the discovery of his two constants and the E = hν formula itself. But Planck's nomination was voted down, not because his work wasn't significant enough, but because someone had finally realized it had even more significant implications. It was pointed out to the Nobel committee that Planck's equation implied that energy did not come in a continuum, and, horrified by the thought, the committee declined to award Planck the prize. Instead, the 1908 Nobel Prize went to Gabriel Lippmann, for his work in the new field of color photography.
Though he lost the prize in 1908 for being too revolutionary, more than ten years later, Planck would finally win his Nobel–not in spite of the revolution his theory was about to cause, but because of it.
Resurrecting mammoth evolutionary dead ends
Opinion: The film Jurassic Park featured dinosaurs cloned from DNA recovered from insects preserved in prehistoric amber. The scientific and technical capacity to do this remains a long way off, but, in the meantime, powerful biotechnological techniques are being applied to study the physiology of extinct animals.
Kevin Campbell and Michael Hofreiter (Scientific American, August 2012) describe their brilliant research into how the physiology of woolly mammoths helped them survive during the ice age.
Woolly mammoths are extinct cousins of today’s Asian elephants. Woolly mammoths’ ancestors originated in sub-tropical Africa and migrated to Siberia less than 2 million years ago, at the beginning of the Pleistocene ice ages. The main problem these animals encountered in Africa was avoiding overheating, but when they moved north and the world froze they had to develop the capacity to conserve and manage heat.
Traditionally, extinct animals are studied by examining their fossilized bones and teeth. This allows the reconstruction of animal size, shape, configuration of musculature and some indication of the nature of the diet. Such studies tell us little or nothing of the physiological processes that sustained these animals, but modern biotechnology is now successfully attacking this problem.
The unit of biological organization is the cell. Every cell is controlled by instructions encoded in its genetic DNA. DNA is a very long molecule made of four different units called nucleotides strung along its length. The nucleotides are denoted by the four letters A, T, G and C, and the genetic instructions are encoded in the linear sequence of these letters.
Most of the work of the cell is carried out by proteins. There are thousands of different proteins. A protein is a long molecule made of units called amino acids strung along its length. There are 20 different types of amino acids and the types and sequence of amino acids in a protein determine the nature and the function of the protein.
The genetic DNA controls the cell by specifying what proteins are made. The linear information encoded in DNA is translated into the linear sequence of amino acids in a protein. The amount of DNA code necessary to code for a protein is called a gene.
DNA from extinct animals can be recovered, with difficulty, from fossilized remains and the nucleotide sequence of genes for critically important proteins from extinct animals can be worked out. This information is compared with the corresponding gene from the modern successor of the extinct animal. If the gene sequences are identical, then the protein products are identical. If they are different, then the extinct animal made a different protein to the modern animal and this different protein probably underpinned a different physiological regime.
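As a toy illustration of how a sequence difference becomes a protein difference, here is a short Python sketch that translates two made-up DNA snippets using a deliberately tiny codon table. The sequences and the five-codon table are illustrative only; real genes run to hundreds or thousands of codons.

# A deliberately tiny subset of the genetic code (codon -> amino acid).
codon_table = {"ATG": "Met", "GAA": "Glu", "GAG": "Glu", "GCA": "Ala", "AAA": "Lys"}

def translate(dna):
    """Translate a DNA string into amino acids, three letters (one codon) at a time."""
    return [codon_table[dna[i:i + 3]] for i in range(0, len(dna), 3)]

modern = "ATGGAAAAA"   # made-up 'modern elephant' snippet
extinct = "ATGGCAAAA"  # made-up 'extinct' snippet differing at one codon

print("modern :", translate(modern))    # ['Met', 'Glu', 'Lys']
print("extinct:", translate(extinct))   # ['Met', 'Ala', 'Lys']

Note that a change such as GAA to GAG would leave the protein unchanged (both code for Glu), which is why only some sequence differences translate into a different protein and hence a different physiology.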
If the extinct gene is different, you can modify a sample of the modern gene in the laboratory to make it match the extinct gene. The extinct gene is then incorporated into the DNA of a bacterium which is grown in culture to produce the protein product of the extinct gene. This “extinct” protein can now be studied in the laboratory to see how it behaves compared to its modern counterpart. This is how biotechnological techniques are being used to study the physiology of extinct animals.
Animals generate energy by oxidizing (“burning”) food in their cells. The necessary oxygen is taken from the air and carried through the bloodstream to the cells, bound to the protein haemoglobin. The haemoglobin releases the oxygen when it reaches the tissue cells. This release of oxygen requires energy input and its efficiency declines greatly as temperatures drop. Consequently modern animals that live in very cold environments have evolved mechanisms to help haemoglobin release its oxygen in tissues. Using the methods described earlier, Campbell and Hofreiter made woolly mammoth haemoglobin, which differs from modern elephant haemoglobin, and tested its oxygen releasing characteristics. They found that the mammoth haemoglobin releases its oxygen much more efficiently at low temperature compared with modern elephant haemoglobin. The woolly mammoth had evolved a variety of haemoglobin capable of coping with very cold conditions as part of a strategy for surviving the ice age. Future research will elucidate further details.
common name: terrestrial snails affecting plants in Florida
scientific name: phylum Mollusca, class Gastropoda
Molluscs are a very diverse group, with at least 85,000 species named, and estimates of up to 200,000 species occurring worldwide. They also inhabit nearly all ecosystems. The best known classes of molluscs are the Gastropoda (snails and slugs), Bivalvia (clams, oysters, mussels and scallops) and Cephalopoda (squids, cuttlefishes, octopuses and nautiluses).
Figure 1. Diagram of typical snail shell showing major features.
Among the most interesting of the molluscs are the snails. They occur in both aquatic (marine and fresh-water) and terrestrial environments. Other snails are amphibious, moving freely between wet and dry habitats. A number of terrestrial snails occur in Florida, some indigenous (native) and others nonindigenous (not native). Most snails are either beneficial or harmless. For example, Florida is host to some attractive but harmless tree-dwelling snails that feed on algae, fungi, and lichens, including at least one that is threatened. However, a few snails may feed on economically important plants and become pests. The terrestrial species that can become plant pests are discussed below.
Snails are best known for their shell (Fig. 1), which can appear in various forms but normally is coiled (helical). Unlike most animals, it is not obvious that snails display bilateral symmetry (the left and right halves of the animal are mirror images). In fact, the bodies of snails are mostly symmetrical, but their shells tend to be asymmetrical. This is due to the helical nature of the shell, which winds to the right (the shell opening is to the right when held spire upwards) most often, but to the left occasionally. The shape of the shell varies considerably. It may range from being quite conical, resulting from an elevated spire, to globose, which is almost spherical in form, to depressed or discoidal, which is nearly flat. The shell is secreted by a part of the body called the mantle, and the shell consists principally of calcium carbonate. Snails secrete an acidic material from the sole of their foot that dissolves calcium in the soil and allows uptake so the shell can be secreted. Calcium carbonate also is deposited in the shell of their eggs. Thus, lack of calcium can impede growth and increase mortality in snails. Slugs, which are snails with little or no shell, are less affected by calcium availability.
The shape of the snail changes with maturity. With immature snails, the lower lip of the aperture seems to droop, extending well away from the whorls. As they mature, the aperture rounds out and eventually becomes more oval, with the bottom lip almost in line with the base of the shell (Fig. 2). For purposes of identification, adults normally are required.
Figure 2. Photographs of young, intermediate, and mature Zachrysia snail shells showing change in shape as the snails mature. Photograph by Lyle Buss, University of Florida.
The snail's body contains all the physiological systems normally associated with higher animals, allowing ingestion, digestion, reproduction, locomotion, etc. Among the more noticeable features are the tentacles and foot. There are two pairs of tentacles in the head region, with the larger pair located dorsally and possessing eyes at their tips. The tentacles are retractable, so change in length is controlled by the animal. The tentacles also are used for tasting and smelling. The foot is a muscle, and is located ventrally. The foot provides waves of muscular contractions that allow locomotion, with the waves beginning at the front (head) end and moving backward. The skin is responsible for water regulation, and contains glands that secrete slime, which aids both in preventing dehydration and in locomotion. Snails also have a breathing pore (pneumostome), which they can open and close, and which leads into the lung for gas exchange. Contained within the mouth is the radula, a tooth-covered rasp that can be used to scrape and cut food.
Many marine snails have a retractable covering on the dorsal end (upper tail) of the foot which serves to close the shell opening (aperture), which is called an operculum. However, it is absent from nearly all terrestrial snails. Some terrestrial snails have a temporary operculum, however, and which is called the epiphragm. The epiphragm is basically a mucus secretion, but sometimes contains calcium carbonate for reinforcement, making it hard and durable. The purpose of this secretion it to seal the shell and prevent dehydration during periods of inactivity, including the winter or dry season.
Among the more unusual features of snail biology is the mode of reproduction. Terrestrial snails are hermaphrodites, which means that they contain both male and female organs. Thus, snails may copulate and inseminate each other simultaneously, and even self-fertilization may occur. Cross-fertilization is thought to be more common, however, because for many snails the male reproductive system matures earlier than the female's. In some snails there is only a single act of copulation, whereas in others, mating can occur repeatedly. Mating requires high humidity, and often occurs following precipitation. Clusters of eggs are normally deposited in nest holes in the soil. The eggs often are white, and the shell contains calcium.
Useful sources of information on terrestrial snail pests include Barker (2001, 2002) for general information, Hubricht (1985) for distribution, Pilsbry (1940) for identification, and the www.jaxshells.org Web site for images and regional information.
Snails are important in the conversion of plant matter (often in the form of algae, fungi, or plant detritus) into animal material. Thus, they are important food for some forms of wildlife that are carnivorous or omnivorous. And, of course, sometimes humans eat snails. They also are important because they serve as intermediate hosts of animal parasites, namely helminths and protozoa. Most often, wildlife suffer the infections of these disease-causing agents, but sometimes humans become infected, though this occurs primarily in tropical climates. Lastly, and not too commonly, snails (including slugs) feed on higher plants, becoming pests of crop and ornamental plants. Florida has only a few problem snails, mostly nonindigenous species that were introduced, either deliberately or accidentally. The snails that are plant pests are discussed here; the plant feeding slugs are covered in Terrestrial Slugs of Florida.
- Cuban brown snail or garden zachrysia, Zachrysia provisoria (L. Pfeiffer, 1858) (Family Pleurodontidae [Camaenidae])
Deliberately introduced to the Miami area from Cuba in the early 1900s, it now is the most abundant of the large terrestrial snails in southern Florida but can be found as far north as Tampa. It also is known from several of the islands in the Caribbean region and from Costa Rica. This snail has proved to be quite voracious, capable of consuming most plants it encounters. It attacks tropical fruit and citrus, most ornamental plants, and vegetable plants. It is readily transported with potted plants, so it is a quarantine issue.
In the adult stage, Z. provisoria is 25–30 mm in width, about 20 mm high, and possesses 4–5 whorls. It lacks an umbilicus (cone-shaped depression at center of the whorls when viewed from below). It is brown or yellowish brown in color, sometimes with brown streaks radiating from the center. The mouth of the shell is not flared, but is edged in white. The shell is marked by pronounced curved ribs (ridges) (Pilsbry 1928).
Figure 3. Cuban brown snail, Zachrysia provisoria (L. Pfeiffer, 1858), eggs and egg shells from which young snails have emerged. Photograph by Lyle Buss, University of Florida.
Figure 4. Newly hatched Cuban brown snail, Zachrysia provisoria (L. Pfeiffer, 1858). Photograph by Lyle Buss, University of Florida.
Figure 5. Dorsolateral view of Cuban brown snail, Zachrysia provisoria (L. Pfeiffer, 1858), with quarter shown for scale. Photograph by Lyle Buss, University of Florida.
A closely related species from Cuba, Zachrysia trinitaria (Pfeiffer, 1858), was first reported from southern Florida in 2004, though it may have been present for many years (Robinson and Fields 2004). As yet, it is rare. It greatly resembles Z. provisoria but can be distinguished by its larger size (41–45 mm). Its potential to cause damage is unknown.
- Asian tramp snail, Bradybaena similaris (Férussac, 1821) (Family Bradybaenidae)
Although it likely originated in eastern Asia, Bradybaena similaris has now spread throughout the tropics and subtropics around the world. In the USA, it was first identified in New Orleans in 1939, but now is found in the Gulf Coast states from Florida to Texas, as well as in Puerto Rico and Hawaii. It is troublesome mostly in southern Florida as far north as Tampa, but because potted plants are regularly moved northward it can appear almost anywhere. Bradybaena similaris can damage crop plants, including citrus, longan, mango, and grape, but it is especially damaging to ornamental plants. Most flowers and foliage plants, as well as vegetable plants, can be attacked. Where it has successfully invaded it sometimes becomes the dominant snail in suburban and urban areas.
Bradybaena similaris is a moderately sized snail, measuring about 12–16 mm in diameter at maturity, and 9–11 mm tall. The shell has 5–5.5 whorls. The umbilicus (cone-shaped depression at center of the whorls) is pronounced when viewed from below. The color is variable, often brownish, yellowish, or tan, and usually with a narrow brown stripe on the perimeter of the whorl. This latter character, though not appearing on all specimens, is rather diagnostic. The mouth of the shell is slightly flared, and edged in white. The ribs (ridges) are fine, not pronounced as in Zachrysia provisoria.
Figure 6. Asian tramp snail, Bradybaena similaris (Férussac, 1821). Note the brown stripe located centrally on the outer whorl; this character is usually present on these snails. Photograph by Lyle Buss, University of Florida.
Figure 7. Asian tramp snail, Bradybaena similaris (Férussac, 1821), with dime shown for scale. Note that it is much smaller than Zachrysia sp. Photograph by Lyle Buss, University of Florida.
It takes about six months for B. similaris to reach sexual maturity and to begin producing eggs. It may live more than a year. A study conducted in Brazil (Medeiros et al. 2008) found that snails produced, on average, 30 eggs over their lifetime, but egg production was highly variable and up to 115 eggs could be found in a single clutch (Carvalho et al. 2008).
- Applesnails, Pomacea spp. and Marisa cornuarietis (Linnaeus, 1758) (Family Ampullariidae)
There are several applesnails in the USA, including five species of Pomacea. Four species of Pomacea occur in Florida:
- Pomacea paludosa (Say 1829) is indigenous to Florida, Cuba, and Hispaniola, and is called Florida applesnail. It does not feed on economically important plants, preferring small organisms such as algae and bacteria.
- Pomacea insularum (d'Orbigny, 1835), the most common of the nonindigenous applesnails, is called island applesnail. It now occurs widely in Florida and also in Georgia and Texas. It also occurs in southern South America.
- Pomacea diffusa Blume 1957 is known as the spike-topped applesnail. It is now found in southern and central Florida, in Cuba, and in South America.
- Pomacea haustrum (Reeve, 1856), the titan applesnail, also is from South America. Although it has been established in Palm Beach County, Florida, for decades it does not appear to be spreading.
The one species that Florida lacks, and which is undoubtedly the most serious plant pest in this group, is P. canaliculata (Lamarck, 1822) or channeled applesnail. It occurs widely in South America and also now is found in Arizona and California. It is a serious pest of rice in Southeast Asia.
Marisa cornuarietis is also an applesnail, though it is known by other names, including giant ramshorn snail. Although the shell of this snail does resemble a ram's horn, the term 'ramshorn' is normally reserved for snails of a different family, Planorbidae, so use of this name is discouraged. Marisa cornarietis is a native of northern South America, but now is found widely in southern Florida, and also locally in Georgia, Arizona, Texas, California, Idaho, and Hawaii, and some islands in the Caribbean.
The Pomacea snails are quite difficult to distinguish morphologically, so the literature is replete with incorrect information. Shell shape characteristics that are used as a rough guide to identification of Florida applesnails follow. This separation is based on the angle of intercept between the upper edge of the shell opening (aperture) and adjacent (interior) whorl.
- The intercept of the aperture and adjacent whorl forms an acute angle (< 90°), and the suture forms a deep indentation or channel: P. haustrum and P. insularum
- The intercept of the aperture and adjacent whorl forms nearly a right angle (90°) at the suture, which lacks a deep indentation: P. diffusa
- The intercept of aperture and adjacent whorl forms only a slight angle (> 90°) at the suture: P. paludosa
Sometimes egg color and size is used in addition to shell shape to distinguish among the Pomacea spp., but egg color can change with age, so it is not entirely reliable, either. However, a recent molecular study (Rawlings et al. 2007) clarified some aspects of their identities and is used as the basis for this discussion of Pomacea.
Marisa cornuarietis is quite easy to distinguish from the Pomacea spp. Marisa has a planorboid (flattened) shell form, and usually bears several dark spiral stripes on the whorls.
The only snail among the Pomacea applesnails in Florida that feeds on higher plants is P. insularum. It feeds on rooted aquatic vegetation, so for most people it is not a concern. As noted previously, the real risk to economically important plants is from P. canaliculata, but it is not known from Florida, despite some reports to the contrary. Marisa similarly feeds on aquatic plants, but is quite omnivorous, and will feed on decaying vegetation and aquatic animals as well. Marisa cornuarietis has been introduced into some bodies of water for vegetation control. It will feed on water hyacinth, and perhaps hydrilla, and can replace other aquatic plant-feeding snails. In Puerto Rico, it is believed to replace Biomphalaria snails, which are intermediate hosts for the disease schistosomiasis (Radke et al. 1961, Seaman and Porterfield 1964).
The shell color of the Pomacea snails ranges from yellow to green or brown, and may lack or possess stripes. The shell is globose and large, measuring about 40–60 mm in diameter and 40–75 mm in height. The shells have 5–6 whorls and possess an operculum (a hard covering of the shell opening). Eggs are deposited in clusters on emergent vegetation or structures. The Apple snails Web site at http://www.applesnail.net/ also provides useful information.
The shell of Marisa cornuarietis may be yellow, tan, brown, or brick red, and usually bears darker stripes. They are 35–50 mm in diameter. There are 3.5–4 whorls, and the aperture is slightly flared. A small operculum is present. The eggs are deposited in a gelatinous clutch below the surface of the water.
Figure 8. The spike-topped applesnail, Pomacea diffusa Blume 1957. Photograph by Bill Frank, Jacksonville Shell Club.
Figure 9. The island applesnail, Pomacea insularum (d'Orbigny, 1835). Photograph by Bill Frank, Jacksonville Shell Club.
Figure 10. The giant ramshorn snail, Marisa cornuarietis (Linnaeus, 1758). Photograph by Bill Frank, Jacksonville Shell Club.
- Milk snails, Otala lactea (Müller, 1774) and Otala punctata (Müller, 1774) (Family Helicidae)
Otala lactea is a native of the western Mediterranean region (Canary Islands, Morocco, Portugal, Spain), but has been relocated to other areas of the world (Argentina, Australia, Bermuda, Cuba, USA), sometimes because it is edible. In the USA, it occurs in Arizona, California, Florida, Georgia, Mississippi, and Texas. In Florida, it has persisted in the Tampa area since 1931. These plant-feeding snails cause only minor damage, and display little indication that they will spread, though they cause concern in some neighborhoods in the Tampa area. In California, which has a Mediterranean climate similar to its native range, it is viewed as a more serious pest. There, this species produced an average of 66 eggs per clutch, and two clutches per month, depositing them in loose soil. It is adapted to arid conditions, so it can aestivate on stones and shrubs until suitable conditions return. It secretes an epiphragm during such periods. Like most snails, activity increases after rainfall (Gammon 1943).
Otala punctata occurs in almost the same areas of Europe, namely Spain, France, and now Italy and Malta. It, too, is edible and has been relocated to North America (California, Florida, Georgia) and South America (Argentina, Chile, Uruguay). In Florida, it is found only at Fernandina Beach (Amelia Island) and shows no sign of expanding its range. It feeds on some ornamental plants at this location, but is not a serious problem.
The shell color of these snails is quite variable, ranging from milky white and nearly lacking pigmentation to quite dark brown, with pronounced stripes. The shell opening is flared, and the shell lacks an umbilicus (cone-shaped depression at center of the whorls). The shells of milk snails are about 28–39 mm wide and 18–24 mm high. There are 4–5 whorls, and the whorls bear only fine ridges. The milk snails can also be recognized by the presence of a strongly extended, thin rim or ridge at the lower lip (columella) of the milk snail's opening. In both species, the ridge may be dark brown to almost black. However, in O. lactea the dark color extends along the rim of the opening to its most distant point from the center of the shell. In contrast, in O. punctata the dark color of the rim tends to be more abbreviated. Also, in O. lactea the rim or ridge (columella) of the shell opening is often elevated to form a blunt tooth; the tooth is lacking in O. punctata.
Superficially, the milk snails may resemble the brown garden snail, Cornu aspersum (Müller, 1774) (also called Helix aspersa or Cantareus aspersus) an important pest snail in California and a quarantine issue for Florida. However, the milk snails are relatively flattened or depressed, being only about 2/3 as high as wide, whereas brown garden snail is globose, almost as tall as wide.
Figure 11. Comparison of Otala lactea (Müller, 1774) and O. punctata (Müller, 1774). Photograph by Bill Frank, Jacksonville Shell Club.
- Jumping snail, Ovachlamys fulgens (Gude, 1900) (Family Chronidae [Helicarionidae])
Originally described from Japan, this small snail is now found in other countries including Singapore, Thailand, Colombia, Costa Rica, Trinidad, Tobago, and probably elsewhere. In the USA, it is known from Hawaii and southern Florida. It is known mostly as a pest of orchids, but also feeds on Heliconia, Dracaena, avocado, and mango.
This snail has a yellow-brown shell with 4 whorls, the last whorl about twice as wide as the preceding whorl. The ribs of the shell are fine, and an umbilicus is present. The shell is 6–7 mm in diameter and about 4.5 mm in height. The common name of this snail is based on the ability of the snail to leap when disturbed, a feat assisted by the presence of an unusual dorsal enlargement at the posterior end of the foot.
In studies conducted in Costa Rica (Barrientos 1998, 2000), this species was most abundant where there was a deep layer of organic matter on the soil, abundant herbaceous vegetation, and abundant moisture. Snails matured and commenced egg deposition in about 42 days, and did not require cross-fertilization to reproduce. The eggs measured 5.12 mm in diameter and were deposited in small clusters of about three eggs in litter or shallow soil crevices. They could deposit an egg cluster nearly daily. Eggs absorbed moisture from the substrate and hatched in 10–14 days. Although widely distributed in Costa Rica, its occurrence was limited to areas with a mean annual temperature of 20–27.6°C.
Figure 12. The jumping snail, Ovachlamys fulgens (Gude, 1900). Photograph by David Robinson, USDA, APHIS-PPQ.
- Southern flatcoil, Polygyra cereolus (Mühlfeld, 1818) (Family Polygyridae)
This indigenous species is found throughout peninsular Florida, and elsewhere in the southeastern USA, west to Texas. Outside of Florida, its occurrence usually is coastal. It inhabits soil, detritus, and dead wood, climbing onto vegetation and structures in and around gardens. This commonly occurring snail will feed on plants, and is documented to inhibit establishment of legumes, particularly white clover and to a lesser degree red clover and alfalfa, in Florida (Kalmbacher et al. 1978). It has been introduced into Abu Dhabi, Dubai, Saudi Arabia and Qatar, probably along with turfgrass sod, and has become quite numerous there, though no damage is documented.
The shell of P. cereolus is usually about 8 mm in diameter, though it may range from 7–18 mm. It is 3–5 mm high. Its color is brownish orange. It has about 8 whorls (range 5–9), and very little elevation, so it is a rather flat shell. The whorls have been accurately described as coiled like a rope, and they are well marked with ridges or ribs, adding to the rope-like appearance. The whorl is flared at the opening (aperture) and the aperture has a pronounced tooth, causing the opening to be heart-shaped. The umbilicus is pronounced.
Figure 13. Southern flatcoil, Polygyra cereolus (Mühlfeld, 1818). Photograph by Lyle Buss, University of Florida.
Figure 14. Southern flatcoil, Polygyra cereolus (Mühlfeld, 1818), dorsal (left) and ventral (right) surfaces. Photograph by Lyle Buss, University of Florida.
Figure 15. Feeding damage to white clover by the southern flatcoil, Polygyra cereolus (Mühlfeld, 1818). Photograph by Lyle Buss, University of Florida.
- White-lipped globe, Mesodon thyroidus (Say, 1816) (Family Polygyridae)
This indigenous species occurs broadly in the eastern USA from New England to Michigan and south to Florida and Texas. It occurs in many habitats, including woods, meadows, marshes, roadsides, and gardens, and is often found hiding in leaf litter. It is considered to be mycophagous, but it will also feed on foliage of wild and garden plants if necessary. Like many snails, it will selectively feed on senescing or unhealthy plant material. It deposits its eggs in shallow holes in the soil, normally in clusters of 20–70 eggs. It has at least a two-year life cycle in more northern areas, but in Florida its biology is unknown. It can produce a thin epiphragm.
This is a moderately sized snail, measuring 18–25 mm in diameter and 11–18 mm high. It is globose in form, finely ribbed, and brown or yellowish brown in color. The opening (aperture) is slightly flared, and often lighter in color, especially in fresh specimens. The aperture may have a single blunt tooth, though this is often absent. It has a narrow umbilicus, which is normally half covered and sometimes difficult to detect (Pilsbry 1940).
Figure 16. The white-lipped globe, Mesodon thyroidus (Say, 1816), lateral view. Photograph by Lyle Buss, University of Florida.
Figure 17. The white-lipped globe, Mesodon thyroidus (Say, 1816), dorsal view. Photograph by Lyle Buss, University of Florida.
- Perforate dome, Ventridens demissus (A. Binney, 1843) (Family Zonitidae)
This is one of three dome snails found in the northern portion of Florida. The other two are V. cerinoideus (Anthony 1865), which is known as wax dome, and V. volusiae (Pilsbry, 1900), which is known as Seminole dome. Ventridens demissus and V. cerinoideus occur widely in eastern North America, but are restricted to the northern counties of Florida, south to Alachua County. Ventridens volusiae, on the other hand, occurs only in Florida, and is found in both the northern and central regions of the peninsula. The dome snails are similar in appearance and habitat. Their biology is largely unknown.
Ventridens demissus is routinely found in leaf litter, and when leaf litter accumulates or gardens are mulched, the population of snails can build to high numbers. These snails readily feed on the leaves and flowers of many annual garden plants, particularly flowers, if they are grown in mulched planting beds. They will travel long distances, especially during rainy evenings, and can frequently be found inactive, but clinging to elevated structures, in the daytime.
The Florida dome snails are small, measuring 5–10 mm in diameter, with a height of 5–7 mm. They display 6–7 whorls. The ribs on the whorls are fine. The shell is yellow-brown in color, and somewhat transparent. The shell has a narrow umbilicus. The opening (aperture) of the shell is only slightly flared. A large, irregular whitish area is present before the margin of the aperture when the shell is viewed from below.
Figure 18. The perforate dome, Ventridens demissus (A. Binney, 1843), dorsal view. Photograph by Lyle Buss, University of Florida.
Figure 19. The perforate dome, Ventridens demissus (A. Binney, 1843), dorsal view. Photograph by Lyle Buss, University of Florida.
Figure 20. Eggs of the perforate dome, Ventridens demissus (A. Binney, 1843). Photograph by Lyle Buss, University of Florida.
- Giant African land snail, Achatina (or Lissachatina) fulica (Férussac, 1821) (Family Achatinidae)
After being eradicated from Florida in the last century, Florida is once again faced with an infestation of the giant African land snail in the Miami area. Discovered again in late 2011, it threatens to cause considerable plant damage due to its large size and broad dietary habits. It is documented to feed on hundreds of different plants throughout the world, but can be expected to do most damage to vegetables, flowers and other ornamental plants, and to annual weeds.
Giant African land snail also has the potential to transmit disease-causing organisms to plants and animals, including humans. It can serve as an intermediate host for rat lungworm, which can cause meningoencephalitis in humans. It also carries a gram-negative bacterium, Aeromonas hydrophila, causing several disease symptoms in people, especially those with compromised immune systems. Thus, should you encounter the giant African land snail, it should be handled with gloves.
The giant African land snail grows to a large size. At maturity, it can attain a length of nearly 20 cm and a diameter of 13 cm. It is conical in shape, tapering to a distinct point at one end, but rounded at the other (Fig. 1). Although varied in appearance, this snail typically is light brown, with dark brown stripes. The large size and conical shape could cause it to be confused with a predatory snail, the rosy wolf snail, Euglandina rosea, but rosy wolf snail lacks the dark brown stripes and does not become as large (about 7.5 cm).
Figure 21. Mature giant African land snail, Achatina (or Lissachatina) fulica (Férussac, 1821), lateral view. Photograph by Lyle Buss, University of Florida.
Figure 22. Egg (right) and newly hatched snail (left) of the giant African land snail, Achatina (or Lissachatina) fulica (Férussac, 1821). Photograph by Lyle Buss, University of Florida.
Figure 23. Young giant African land snail, Achatina (or Lissachatina) fulica (Férussac, 1821). Photograph by Lyle Buss, University of Florida.
Snails (and slugs) are most often managed with chemicals called molluscicides, but there are several other management options in addition to application of chemical pesticides. Some of these options are outlined below.
Cultural control. Snails and slugs are favored by high humidity. Therefore, elimination of mulch, ground cover, wood, and stones will deny them a moist, sheltering environment. Observing plants at night may reveal the presence of marauding molluscs, even where there are no signs of their presence during daylight. Check under flower pots containing damaged plants, for example, as snails and slugs will not move far from their host plants. Reducing the amount of irrigation may similarly deny them the moist environment they prefer.
Mechanical control. Snails and slugs are susceptible to traps (Olkowski et al. 1991). A board, flower-pot saucer, or unglazed flower pot placed in a shady location can serve as a very suitable refuge for molluscs, and then the offending animals can be collected by hand-picking during the daylight hours from beneath the refuge and destroyed. Mollusc traps can easily be created or purchased. The basic idea is to create an environment that is attractive, but once the mollusc enters, it cannot escape. Thus, a saucer or similar structure partly sunk into the soil and with steep sides can be used to capture molluscs, assuming that beer, an apple core, or some other attractive item will lure them to the capture device.
Barriers are also useful for minimizing damage by snails and slugs (Hata et al. 1997). Copper foil and screening is believed to react with mollusc slime to create an electrical current that deters them from crossing the barrier. The legs of greenhouse benches or the trunks of trees, for example, can be ringed with copper strips to deter these animals from crossing. Copper foil designed specifically for deterring mollusk movement is available commercially from garden supply centers and catalogs. Although expensive to implement, copper can be used to ring entire gardens to prevent invasion by molluscs. The copper strip will oxidize with time, however, becoming less effective. Similarly, diatomaceous earth can be sprinkled around a garden or planting bed to exclude molluscs, as they dislike crawling over this abrasive particulate material. As is the case with a copper barrier, however, this does nothing to suppress any that are already present, and the diatomaceous earth is easily disturbed by rainfall and irrigation, so it works best in arid environments.
Biological control. Predatory snails such as the rosy wolf snail, Euglandina rosea (Férussac, 1821) (Figs. 24-28), readily attack other snails. Euglandina rosea is native to the southeastern U.S., and is quite common in woodlands and gardens in Florida. It has been relocated to other parts of the world, including Hawaii, India and many islands in the Pacific region in an attempt to control invasive snails such as giant African land snail, Achatina fulica (Férussac, 1821). It has been used to provide partial control of giant African snail, but it has been quite disruptive to native snail populations, so its use is discouraged outside its natural range (Barker 2004).
Figure 24. The rosy wolf snail, Euglandina rosea (Férussac, 1821), lateral view. Photograph by Lyle Buss, University of Florida.
Figure 25. The rosy wolf snail, Euglandina rosea (Férussac, 1821), fully extended. Photograph by Lyle Buss, University of Florida.
Figure 26. Newly hatched rosy wolf snail, Euglandina rosea (Férussac, 1821). Photograph by Lyle Buss, University of Florida.
Figure 27. A young rosy wolf snail, Euglandina rosea (Férussac, 1821), feeding on another snail. Photograph by Lyle Buss, University of Florida.
Figure 28. Eggs of the rosy wolf snail, Euglandina rosea (Férussac, 1821), with dime shown for scale. Photograph by Lyle Buss, University of Florida.
Chemical control. Many formulations of molluscicide are available for purchase, but nearly all are bait products that contain toxicants. They may kill by ingestion of the bait, or by contact. None are completely effective because molluscs sometimes learn to avoid toxicants or may detoxify pesticides, recovering from sublethal poisoning. Often they are paralyzed and do not die immediately, but eventually succumb, especially in hot, dry weather. It is good practice to apply baits after a site is watered or irrigated, as this stimulates mollusc activity, increasing the likelihood that baits will be eaten. However, do not water immediately after application of baits. Baits can be applied broadcast, or around gardens containing susceptible plants. It is best to scatter the bait material, as this will decrease the probability that pets or vertebrate wildlife will find and eat the toxic bait and become sick or perish.
Metaldehyde-containing baits have long been useful, and remain available (Meredith 2003). Although effective, metaldehyde-containing formulations are quite toxic to pets and wildlife, so care must be exercised if this toxicant is applied. Also, it is a good idea to avoid contamination of edible produce with metaldehyde-containing bait.
There are alternatives to metaldehyde. Some molluscicide-containing products include carbamate pesticides (alone or in combination with metaldehyde), as these also may be toxic to molluscs. Newer mollusc baits may contain an alternative toxicant: iron phosphate. Iron phosphate is normally thought of as a fertilizer. Iron phosphate is much safer than metaldehyde and/or carbamates for use around pets and vertebrate wildlife, and also is effective (Speiser and Kistler 2002). Other bait formulations contain boric acid as a toxicant; while also safer than metaldehyde, boric acid seems to be much less effective than iron phosphate (Capinera, unpublished). Regardless of the toxicant, baits should be scattered thinly in and around vegetation, so as to make it unlikely that pets or wildlife will ingest too much of the bait.
- Barker GM. 2001. The biology of terrestrial molluscs. CABI Publishing, Wallingford, UK. 558 pp.
- Barker GM. 2002. Molluscs as crop pests. CABI Publishing, Wallingford, UK. 468 pp.
- Barker GM (ed.) 2004. Natural enemies of terrestrial molluscs. CABI Publishing, Wallingford, UK. 644 pp.
- Barrientos Z. 1998. Life history of the terrestrial snail Ovachlamys fulgens (Stylommatophora: Helicarionidae) under laboratory conditions. Revista de Biologia Tropical 46: 369-384.
- Barrientos Z. 2000. Population dynamics and spatial distribution of the terrestrial snail Ovachlamys fulgens (Stylommatophora: Helicarionidae) in a tropical environment. Revista de Biologia Tropical 48: 71-87.
- Blinn WC. 1963. Ecology of the land snails Mesodon thyroidus and Allogona profunda. Ecology 44: 498-505.
- Carvalho CM, Bessa ECA, D'Ávila S. 2008. Life history strategy of Bradybaena similaris (Férussac, 1821) (Mollusca, Pulmonata, Bradybaenidae). Molluscan Research 28: 171-174.
- Cowie RH, Dillon Jr. RT, Robinson DG, Smith JW. 2009. Alien non-marine snails and slugs of priority quarantine importance in the United States: a preliminary risk assessment. American Malacological Bulletin 27: 113-132.
- Frank B, Lee H. (2011). Shells | Shell Collecting | Nature - Jacksonville, Florida. http://www.jaxshells.org/ (1 July 2011).
- Gamon ET. 1943. Helicid snails in California. State of California Department of Agriculture Bulletin 32: 173-187.
- Hata TY, Hara AH, Hu BK-S. 1997. Molluscicides and mechanical barriers against slugs, Vaginula plebeia Fischer and Veronicella cubensis (Pfeiffer) (Stylommatophora: Veronicellidae). Crop Protection 16: 501-506.
- Hubricht L. 1985. The distributions of the native land mollusks of the eastern United States. Fieldiana (Zoology) New Series 24. 191 pp.
- Meredith RH. 2003. Slug pellets - risks and benefits in perspective. Pages 235-242 In Dussart GBJ, (editor). Slugs and Snails: Agricultural, Veterinary and Environmental Perspectives. BCPC Symposium Proceedings Series 80. BCPC, Canterbury, U.K.
- Olkowski W, Daar S, Olkowski H. 1991. Common-sense pest control. The Taunton Press, Newtown, Connecticut, USA. 715 pp.
- Pilsbry HA. 1928. Studies on West Indian mollusks: the genus Zachrysia. Proceedings of the Academy of Natural Sciences of Philadelphia 80: 581-606.
- Pilsbry HA. 1940. Land Mollusca of North America (north of Mexico). Philadelphia Academy of Natural Sciences Monograph 3, vol. 1, pt. 2.
- Radke MG, Ritchie LS, Ferguson FF. 1961. Demonstrated control of Australorbis glabratus by Marisa cornuarietis under field conditions in Puerto Rico. American Journal of Tropical Medicine and Hygiene 10: 370-373.
- Rawlings TA, Hayes KA, Cowie RH, Collins TM. 2007. The identity, distribution, and impacts of non-native apple snails in the continental United States. BMC Evolutionary Biology (Supplement 2) Vol. 7: 97-110.
- Robinson DG, Fields A. 2004. The Cuban land snail Zachrysia: The emerging awareness of an important snail pest in the Caribbean basin. In Leal JH, Grimm E, Yorgey C, (editors). Program and Abstracts of the 70th Annual Meeting, American Malacological Society, Sanibel Island, Florida, 30 July - 4 August 2004. Bailey-Matthews Shell Museum, Sanibel, Florida. p. 73.
- Seaman DE, Porterfield WA. 1964. Control of aquatic weeds by the snail Marisa cornuarietis. Weeds 12: 87-92.
- Speiser B, Kistler C. 2002. Field tests with a molluscicide containing iron phosphate. Crop Protection 21: 389-394.
- Stange LA. (March 2006). Snails and slugs of regulatory significance to Florida. Division of Plant Industry. http://www.freshfromflorida.com/pi/enpp/ento/snail_slugs-pa.html (1 July 2011).
Sep. 6, 2000 WEST LAFAYETTE, Ind. – Purdue University chemists have devised a way to remove a major obstacle in designing new materials for use in the atom-size realm of nanotechnology.
Nanoparticles – tailor made of selected metals or other materials and measuring just billionths of a meter in diameter – are the building blocks for this new generation of materials. Scientists are trying to use these to build new, stronger materials one molecule at a time for applications ranging from medicine to aerospace.
But this bottom-up approach has had a downside: Nanoparticles can be so fragile and unstable that if their surfaces touch, they will fuse together, losing their special shape and properties.
Now, researchers at Purdue University have found a way to stabilize nanoparticles made of metal by wrapping the tiny structures in a "plastic coat" of molecular thickness. The coating prevents the nanoparticles from fusing together upon contact and allows them to be easily manipulated.
The new coating process can be used to stabilize nanoparticles with magnetic properties, allowing scientists to develop new materials for use in microelectronic devices and magnetic sensors, says Alexander Wei, assistant professor of chemistry who developed the new stabilization method.
"Though many of the applications are yet to come, our new method opens the doors to a variety of new nano-structured materials," he says. "For example, this coating process may be useful in developing materials for use in biomedicine, such as new drug-delivery systems or probes and sensors designed to target specific cells or tissues."
The research also has been used to process and manipulate nanoparticles that are slightly larger in size, presenting opportunities that have yet to be explored in nanoscale science and technology, Wei says.
Nanoparticles are developed in the laboratory using inorganic or metallic particles one to 100 nanometers in diameter. Their name comes from nanometer, which is one-billionth of a meter, about 100,000 times smaller than the width of a human hair. These building blocks are part of a large scientific effort, called nanotechnology, in progress in laboratories throughout the world aimed at developing new technologies at the molecular level.
Scientists are especially interested in developing nanoparticles made of metals, semiconductors and magnetic materials. These substances have special properties that make them useful for specific tasks. Because nanoparticles' properties depend on their size, scientists can create materials with distinct characteristics, such as electronic function, by fine-tuning the size of the particles.
"Being able to control structures at the nanoscale level will allow scientists to custom design materials to perform very specific functions," Wei says. "Ultra-small devices with unique electronic or magnetic functions, and materials with superior strength and hardness are just two of the many possible benefits of this technology."
Though scientists have been working for the past decade to develop various types of nano-sized particles to use as building blocks for the next generation of materials, stabilizing the tiny structures has remained a challenge, Wei says.
"There are several issues to address in stabilizing nanoparticles," he says. "One is keeping them dispersed, which means keeping them apart from each other when working with them. Another is to stabilize them against degradation, because you don't want them to change shape or get destroyed by chemical interactions."
As the nanoparticles increase in size, they become even more difficult to control.
"Metal particles larger than 10 nanometers in diameter are often challenging to work with because of their strong tendency to stick to each other," Wei says.
His group discovered a novel approach that addresses all these issues. Working with nanoclusters of gold 10 to 20 nanometers in diameter, the researchers first encapsulated the tiny structures in a shell of molecules called resorcinarenes, which have bowl-shaped "heads" with several "tails" fastened at one end.
"The resorcinarenes work well because they have a curvature which is complementary to the surface of the nanoparticles, so they stick to the metal," Wei explains.
Next, the researchers created a polymer cage around the surface of particles by chemically "stitching" the resorcinarene tails together. The porous coating permits the particle inside to interact with substances outside, but keeps the nanoparticles from interacting with each other.
"The result is a very stable, permanent coat that keeps the particles dispersed in solution," Wei says. "And the coating can be customized by adding different chemicals, to make the nanoparticles function in a specific manner."
Wei says the stabilization process also works well with larger size nanoparticles. For example, his group has used the process to stabilize nanoparticles of cobalt – a magnetic material – in sizes up to 40 nanometers in diameter.
"Scientists working with nanoparticles have often been restricted to working with structures one to ten nanometers in diameter," Wei says. "We think that this is going to extend our ability to manipulate and process particles in the 10 to 50 nanometer range."
The Purdue group also has shown that the encaged cobalt particles can be used to create structures in the shapes of rings or chains, suggesting that the magnetic properties of the nanoparticles can be precisely controlled to create new structures.
"The way the magnetic particles behave in an external field is what will allow us to create a lot of exotic structures that haven't been seen yet," Wei says. "Magnetic materials are inherently functional because they respond to magnetic fields, so I think there are new applications just waiting to happen for these particles."
Wei's studies at Purdue are supported by the National Science Foundation. He presented details of his findings in August at the American Chemical Society's national meeting in Washington, D.C.
Ghostly galaxies in the distant universe are almost certainly the culprits behind a mysterious change in intergalactic gas that allows us to see across the cosmos. Although these galaxies are too faint to be spotted by current telescopes, future instruments could soon reveal their presence.
About 300,000 years after the big bang, the hydrogen that filled the universe was neutral and had not yet condensed to form stars and galaxies. This period is called the cosmic dark ages. When the first stars and galaxies formed, this neutral hydrogen was opaque to their ultraviolet light, creating a cosmic fog.
Some type of radiation must have broken up the neutral hydrogen atoms into electrons and protons in a process called reionisation, which ultimately made the universe transparent. But whether galaxies would have been numerous and bright enough back then to produce this radiation was uncertain.
Now the latest observations from the Hubble Ultra Deep Field 2012 survey (UDF12), presented this week at a meeting of the American Astronomical Society in Long Beach, California, have suggested that galaxies could, indeed, have turned the universe clear.
"This is the last uncharted piece of cosmic history," said UDF12 team member Richard Ellis of the California Institute of Technology in Pasadena, California.
Ellis and colleagues used the Hubble Space Telescope to stare at one spot in the sky for 100 hours – twice as long as in previous surveys – and used a filter that made the telescope more sensitive to faint, distant objects. "For the first time with Hubble, we can do this in a systematic way," Ellis says.
In December the team reported that they had spotted seven new galaxies hailing from when the universe was between 380 million and 600 million years old, right in the middle of the period when reionisation was under way. Since then, the team has analysed the radiation from these galaxies.
Using spectral colour as a yardstick of stellar age, James Dunlop and Alexander Rogers of the Institute for Astronomy in Edinburgh, UK, found that the UDF12 galaxies contain surprisingly old stars. "What we're seeing are the second generation of stars," Rogers said at the meeting. "They're already mature – and must have been around for 100 million years."
Older stars do not pump out as much ionising radiation as young ones, so these galaxies, at the limit of Hubble's vision, could not have done the job by themselves.
The team then needed to figure out how many faint galaxies from this era may have gone undetected. Caltech's Matthew Schenker and colleagues used statistical modelling, based on known galactic populations, to show that there must be exponentially more faint galaxies in the early universe than bright ones – enough to supply the radiation needed.
"We can say confidently that galaxies can do the job, but the faintest galaxies that do most of the work are just below the limits of the UDF12 project," said team member Brant Robertson of the University of Arizona in Tucson.
"We're pretty certain it's galaxies now," agrees Steven Finkelstein of the University of Texas at Austin, who was not involved in the new work. Other possible candidates for reionisation, such as colliding dark matter particles, had been all but ruled out by earlier observations.
"I think it's a happy ending," Ellis says. "Reionisation is a normal process produced by things we can see, and not yet another dark something that we don't understand."
The ghost galaxies will probably be detected by Hubble's successor, the James Webb Space Telescope, which is expected to launch later this decade. If James Webb does not manage to see them, that would present a puzzle, says Ellis. "We'd need an additional source of radiation, whether annihilating particles or whatever else." But he said he would be very surprised if the faint galaxies did not turn up.
When this article was first published, the second paragraph was, "About 300,000 years after the big bang, the hydrogen that filled the universe cooled and became neutral and opaque, plunging everything into the so-called cosmic dark ages. Any visible wavelengths from early stars were quickly absorbed by the gas, which formed a cosmic fog that persisted for almost a billion years."
Have your say
Thu Jan 17 14:49:40 GMT 2013 by Eric Kvaalen
"Hubble's survey of fledgling galaxies can help us figure out what drove a phenomenon called reionisation. It ended a mysterious time called the cosmic dark ages, when a shroud of opaque hydrogen gas dominated the universe. Then radiation, probably from the first stars and galaxies, ionised the hydrogen and made it transparent, but details of precisely how this happened are scarce."
I wish NS authors would read my comments. There was an article on this last month ((long URL - click here) ), and I wrote the following comment:
That's not correct. Hydrogen gas is not opaque. There was no "fog" to burn off. The reason the period before Reionization is called the Dark Ages is because there weren't any stars yet to make light. (Actually, at the beginning of the "Dark Ages" there was plenty of intense red light, but this gradually shifted to infrared and then became the Cosmic Microwave Background radiation.)
The reionization wasn't needed to make space transparent.
Fri Feb 01 18:09:50 GMT 2013 by Julian Richards, deputy online editor
We've got there eventually. Thanks for pointing this out.
Squares and Triangles Game
- To review the concept of squares
- To experience the characteristics of shapes
- To identify triangles (2-D shapes)
- To compare and talk about shapes
Squares and triangles in different sizes and colors; a chalk or masking tape triangle on floor; triangles such as the musical instrument; The Father Who Had 10 Children
Read The Father Who Had 10 Children by Benedicte Guettier. As you read, help children notice the triangle shapes in the sailboat.
Have children walk on the masking-tape triangle and notice that it has 3 sides and 3 corners. Show children triangles in different colors and sizes. Give each child a triangle, then ask the children to see, touch, and say "Triangle" and feel its 3 sides and 3 corners.
- Have children play the "What's My Name?" game with squares and triangles. Give each child a square and a triangle. Encourage children to see, touch, and say the name of each shape.
- Ask students to hold up their squares. Say, "A square has 4 sides. Let's count the sides." Then touch and count the sides with the children. "How many sides?" (4)
- Ask students to hold up their triangles. Say, "A triangle has 3 sides. Let's count the sides." Then touch and count the sides with children. "How many sides?" (3) Ask children to compare the shapes. "How are the shapes alike?" (They both have sides and corners.) "How are the shapes different?" (Square has 4 sides and 4 corners; triangle has only 3.)
- Tell children that you are going to say the word square or triangle. When you say "Square," they hold up their squares. When you say "Triangle," they hold up their triangles. Play the game slowly, holding up your square and your triangle to model. When you think that children can play the game successfully, stop holding up your square and triangle.
Play the "What's My Name?" game with a circle, square, and triangle. At art time, have children make shape designs or shape people with shapes of different colors and sizes.
- Proficient - Child can identify a square and a triangle.
- In Process - Child can identify a square, but has difficulty identifying a triangle.
- Not Yet Ready - Child does not yet identify a square or a triangle.
Excerpted from School Readiness Activity Cards. The Preschool Activity Cards provide engaging and purposeful experiences that develop language, literacy, and math skills for preschool children.
From Citizendium, the Citizens' Compendium
Transposons, also called "jumping genes", "mobile DNA" or "mobile genetic elements", are blocks of conserved DNA that can occasionally move to different positions within the chromosomes of a cell. The insertion of a transposon in a new site is called transposition. It can result in DNA duplication and in further multiplication of the original DNA sequence. The transposition process can cause mutations at the site of DNA insertion, and can cause increases in the total amount of DNA in a genome. Transposons were first discovered by Barbara McClintock in 1942-1948, and she was eventually awarded the Nobel Prize in 1983 for this discovery.
Transposons insert at new locations in DNA by enzymatic breakage and reunion of DNA, a process called DNA recombination. The distinctive feature of all transposon movement is that there is no requirement for extensive DNA sequence similarity between the initial DNA region containing the transposon and the final location (that is to say, extensive base pair complementarity), and the process of transposon insertion at a new location is considered to be non-homologous recombination to distinguish it from recombination of similar DNA sequences participating in homologous recombination.
There are a variety of mobile genetic elements, and they can be broadly classified by their mechanism of transposition. Class I mobile genetic elements, or retrotransposons, move in the genome by being transcribed to RNA and then transcribed back to DNA catalyzed by the enzyme reverse transcriptase, while class II mobile genetic elements move directly from one position to another within the genome using a transposase enzyme catalyst to mobilize the DNA without involving an RNA intermediate.
For researchers, transposons are a very useful method to alter DNA inside of a living organism.
Transposons make up a large fraction of the total genome of many animals and plants, as is evident from the C-values of eukaryotic species. As an example, about 45% of the human genome is composed of transposons and their defunct remnants.
Types of transposons
Transposons are classified into three main classes based on their mechanism of transposition.
Class I: Retrotransposons
Retrotransposons work by copying themselves and pasting copies back into the genome in multiple places. Initially, retrotransposons copy themselves to RNA (transcription) but, in addition to being translated, the RNA is copied into DNA by a reverse transcriptase (often coded by the transposon itself) and inserted back into the genome.
Retrotransposons behave very similarly to retroviruses, such as HIV, giving a clue to the evolutionary origins of such viruses. Retrotransposons are common in eukaryotic organisms (for instance maize and humans), but are rarely found in bacteria. They are present in fungi.
There are four main classes of retrotransposons:
- Viral superfamily: similar to retroviruses, have long terminal repeats (LTRs), encode reverse transcriptase (to reverse transcribe RNA into DNA).
- LINES: encode reverse transcriptase (to reverse transcribe RNA into DNA), lack LTRs, transcribed by RNA polymerase II.
- SINES: short elements that do not encode reverse transcriptase but are otherwise similar to LINEs. They include the human Alu elements.
- Nonviral superfamily: do not code for reverse transcriptase, transcribed by RNA polymerase III.
Class II:DNA transposons
The major difference of Class II transposons from retrotransposons is that their transposition mechanism does not involve an RNA intermediate. Class II transposons usually move by cut and paste, rather than copy and paste, using the transposase enzyme. Different types of transposase work in different ways. Some can bind to any part of the DNA molecule, and the target site can therefore be anywhere, while others bind to specific sequences. Transposase makes a staggered cut at the target site producing sticky ends, cuts out the transposon and ligates it into the target site. A DNA polymerase fills in the resulting gaps from the sticky ends and DNA ligase closes the sugar-phosphate backbone. This results in target site duplication and the insertion sites of DNA transposons may be identified by short direct repeats (a staggered cut in the target DNA filled by DNA polymerase) which flank the inverted repeats that are part of the transposon itself (and which are important for the transposon excision by transposase).
Not all DNA transposons transpose through cut and paste mechanism. In some cases replicative transposition is observed in which transposon replicates itself to a new target site without being excised from its original site. In this process a cointegrate structure is formed containing the donor and target elements covalently connected to one-another, and in this replicative transposition resembles homologous recombination. Examples of transposons that use replicative transposition include bacteriophage Mu, Tn3 and IS1.
Both class I and class II of transposons may lose their ability to synthesize reverse transcriptase or transposase through mutation, yet continue to jump through the genome because other transposons are still producing the necessary enzyme. Transposons that themselves encode an ability to move to new locations are called autonomous transposons.
Class III: Miniature inverted-repeat transposable elements
MITEs are sequences of about 400 base pairs and 15 base pair inverted repeats that vary very little. They are found in their thousands in the genomes of both plants and animals (over 100,000 were found in the rice genome). MITEs are too small to encode any proteins.
- The first transposons were discovered in maize (Zea mays, also known as corn) by Barbara McClintock in the 1940s, for which she was awarded the Nobel Prize in 1983. She noticed the results of insertions, deletions, and translocations caused by these transposons. These changes in the genome could, for example, lead to a change in the color of corn kernels. About 50% of the total genome of maize consists of transposons. The Ac/Ds elements McClintock described are class II transposons.
- One family of transposons in the fruit fly Drosophila melanogaster are called P elements. They seem to have first appeared in the species only in the middle of the twentieth century. Within 50 years, they have spread through every population of the species. Artificial P elements can be used to insert genes into Drosophila by injecting the embryo. For the use of P elements as a genetic tool see: "transposons as a genetic tool".
- The simplest bacterial transposons are insertion sequences (IS), which are about 1000 base pairs long and consist largely of inverted DNA repeats flanking a transposase gene. Other more complex transposons in bacteria usually carry an additional gene for a function other than transposition - often for antibiotic resistance. They can consist of two IS elements flanking a gene for antibiotic resistance. IS elements were first discovered in the gal gene of Escherichia coli by James Shapiro in 1968.
- Stimulated by Shapiro's 1968 report, follow-up molecular biology research on transposon involvement in numerous mobile DNA related phenomena in bacteria, plants and insects finally led to Barbara McClintock's earlier discoveries with maize being given wide acclaim among biologists.
- In bacteria, transposons can jump from chromosomal DNA to plasmid DNA and back, allowing for the transfer and permanent addition of genes such as those encoding antibiotic resistance (multi-antibiotic resistant bacterial strains can be generated in this way). Bacterial transposons of this type belong to the Tn family.
- Mu phage in the bacterium Escherichia coli is a well known example of replicative transposition. It can exist as a prophage inserted like a transposon in the E. coli genome. Its replication by transposition mechanism leaves a copy of the prophage at the original location during transposition.
- The most common form of transposon in humans is the Alu sequence, a form of SINE. The Alu sequence is approximately 300 bases long, and roughly one million copies of Alu can be found in the human genome.
Transposons causing diseases
Transposons are mutagens. They can damage the genome of their host cell in different ways:
- A transposon or a retroposon that inserts itself into a functional gene will most likely disable that gene.
- After a transposon leaves a gene, the resulting gap will probably not be repaired correctly.
- Multiple copies of the same sequence, such as Alu sequences, can hinder precise chromosomal pairing during mitosis, resulting in unequal crossovers, one of the main reasons for chromosome duplication.
Diseases that are often caused by transposons include hemophilia A and B, severe combined immunodeficiency, porphyria, predisposition to cancer, and possibly Duchenne muscular dystrophy.
Additionally, many transposons contain promoters which drive transcription of their own transposase. These promoters can cause aberrant expression of linked genes, causing disease or mutant phenotypes.
Evolution of transposons
The evolution of transposons and their effect on genome evolution is currently a very active field of research.
Transposons are found in all major branches of life. They may or may not have originated in the last universal common ancestor, or arisen independently multiple times, or perhaps arisen once and then spread to other kingdoms by horizontal gene transfer. While transposons may confer some benefits on their hosts, they are generally considered to be selfish DNA parasites that live within the genome of cellular organisms. In this way, they are similar to viruses. Viruses and transposons also share features in their genome structure and biochemical abilities, leading to speculation that they share a common ancestor.
Since excessive transposon activity can destroy a genome, many organisms seem to have developed mechanisms to reduce transposition to a manageable level. Bacteria may undergo high rates of gene deletion as part of a mechanism to remove transposons and viruses from their genomes while eukaryotic organisms may have developed the RNA interference (RNAi) mechanism as a way of reducing transposon activity. In the nematode Caenorhabditis elegans, some genes required for RNAi also reduce transposon activity.
Transposons may have been co-opted by the vertebrate immune system as a means of producing antibody diversity. The V(D)J recombination system operates by a mechanism similar to that of transposons.
Evidence exists that transposable elements may act as mutators in bacteria and other asexual organisms.
Transposons in science
Transposons were first discovered in the plant maize. Likewise, the first transposon to be molecularly isolated was from a plant (Snapdragon). Appropriately, transposons have been an especially useful tool in plant molecular biology. Researchers use transposons as a means of mutagenesis. In this context, a transposon jumps into a gene and produces a mutation. The presence of the transposon provides a straightforward means of identifying the mutant allele, relative to chemical mutagenesis methods.
Sometimes the insertion of a transposon into a gene can disrupt that gene's function in a reversible manner; transposase mediated excision of the transposon restores gene function. This produces plants in which neighboring cells have different genotypes. This feature allows researchers to distinguish between genes that must be present inside of a cell in order to function (cell-autonomous) and genes that produce observable effects in cells other than those where the gene is expressed.
- ↑ Transposon Silencing Keeps Jumping Genes in Their Place Gross L PLoS Biology Vol. 4, No. 10, e353 doi:10.1371/journal.pbio.0040353
- ↑ The Barbara McClintock Papers, National Library of Medicine: McClintock, B. "Cytogenetic Studies of Maize and Neurospora." Carnegie Institution of Washington Yearbook 44, (1945): 108-112. McClintock, B. "Maize Genetics." Carnegie Institution of Washington Yearbook 45, (1946): 176-186. McClintock, B. "Mutable Loci in Maize." Carnegie Institution of Washington Yearbook 47, (1948): 155-169.
- ↑ BIOGRAPHICAL MEMOIRS National Academy of Sciences: Barbara McClintock June 16, 1902 — September 2, 1992 By Nina V. Fedoroff
- ↑ International Human Genome Sequencing Consortium Initial sequencing and analysis of the human genome NATURE VOL 409 15 FEBRUARY 2001, p 860-921
- ↑ Bennetzen, J. L., 2000 Transposable element contributions to plant gene and genome evolution. Plant Molecular Biology 42: 251–269, 2000.
- ↑ Snyder, L. and Champness, W. (2003)Transposon and site-specific recombination, Chapter 9 In Snyder, L. and Champness, W. Molecular Genetics of Bacteria. 2nd. Edition. ASM Press, Washington, DC.
- ↑ Shapiro, J.A. (1968). Mutations caused by insertion of genetic material into the galactose operon of Escherichia coli. Journal of Molecular Biology 40, p93-105.
- ↑ Berg, D. E. and Howe, M. M. Mobile DNA ASM Press, Washington DC.
- ↑ Ljungquist, E. and Bukhati, A. I.(1977) State of prophage Mu upon induction. PNAS USA 74, p3143-3147).
- ↑ International Human Genome Sequencing Consortium Initial sequencing and analysis of the human genome NATURE VOL 409 15 FEBRUARY 2001, p 860-921
- ↑ Abeysinghe SS, Chuzhanova N, Krawczak M, Ball EV, Cooper DN.(2003) Translocation and gross deletion breakpoints in human inherited disease and cancer I: Nucleotide composition and recombination-associated motifs. Hum Mutat. 2003 Sep;22(3):229-44.
- ↑ Kolomietz E, Meyn MS, Pandita A, Squire JA. (2002) The role of Alu repeat clusters as mediators of recurrent chromosomal aberrations in tumors.Genes Chromosomes Cancer. 2002 Oct;35(2):97-112.
- ↑ Ganguly A, Dunbar T, Chen P, Godmilow L, Ganguly T. (2003) Exon skipping caused by an intronic insertion of a young Alu Yb9 element leads to severe hemophilia A. Hum Genet. 2003 Sep;113(4):348-52. Epub 2003 Jul 12. PMID: 12884004
- ↑ Li X, Scaringe WA, Hill KA, Roberts S, Mengos A, Careri D, Pinto MT, Kasper CK, Sommer SS.(2001) Frequency of recent retrotransposition events in the human factor IX gene.Hum Mutat. 2001 Jun;17(6):511-9.
- ↑ Markert ML, Hutton JJ, Wiginton DA, States JC, Kaufman RE. (1988) Adenosine deaminase (ADA) deficiency due to deletion of the ADA gene promoter and first exon by homologous recombination between two Alu elements. J Clin Invest. 1988 May;81(5):1323-7.
- ↑ Mustajoki S, Ahola H, Mustajoki P, Kauppinen R.(1999) Insertion of Alu element responsible for acute intermittent porphyria.Hum Mutat. 1999;13(6):431-8.PMID: 10408772
- ↑ Teugels E, De Brakeleer S, Goelen G, Lissens W, Sermijn E, De Greve J.(2005) De novo Alu element insertions targeted to a sequence common to the BRCA1 and BRCA2 genes.Hum Mutat. 2005 Sep;26(3):284.
- ↑ Gibbons R, Dugaiczyk A.(2005) Phylogenetic roots of Alu-mediated rearrangements leading to cancer. Genome. 2005 Feb;48(1):160-7.
- ↑ McNaughton JC, Hughes G, Jones WA, Stockwell PA, Klamut HJ, Petersen GB.(1997) The evolution of an intron: analysis of a long, deletion-prone intron in the human dystrophin gene.Genomics. 1997 Mar 1;40(2):294-304.
- Bennetzen, J. L., 2000 Transposable element contributions to plant gene and genome evolution. Plant Molecular Biology 42: 251–269, 2000.
- Kidwell, M.G. (2005). Transposable elements. In The Evolution of the Genome (ed. T.R. Gregory), pp. 165-221. Elsevier, San Diego.
- Craig NL, Craigie R, Gellert M, and Lambowitz AM (ed.) (2002) Mobile DNA II, ASM Press, Washington, DC.
- Lewin B (2000) Genes VII, Oxford University Press.
- Snyder, L. and Champness, W. (2003) Transposition and site-specific recombination, Chapter 9 In Snyder, L. and Champness, W. Molecular Genetics of Bacteria. 2nd. Edition. ASM Press, Washington, DC
- Why should I bother learning this?
Much of the mathematics used every day is "relational" in nature -- that is, it is the relationship between the numbers which is of importance, not necessarily the numbers themselves. Some examples of these relationships are common in everyday happenings, such as the price of gasoline per gallon, the speed of a bike in miles per hour, the rate of pay per hour and the number of inches for miles when reading a map. These relational aspects of mathematics lead to the idea of proportions and the need for students to do proportional reasoning. For example, when planning a trip, the map may have a scale in which 1 inch equals 12 miles. Thus, if the distance you plan to travel is 8 inches long on the map, the actual distance is 96 miles. If you can average 50 miles per hour, you will get there in a little less than two hours. Encourage students to come up with other ways they might use proportions.
- How do ratios and proportions differ?
Students will often mix up ratios and proportions. A ratio is a comparison of two quantities via division. A proportion is a statement that two ratios are equal to one another. A proportion is an equation and solving a proportion usually involves solving for some unknown in the equation.
- Why does cross multiplication work when solving proportions?
When students cross multiply when solving a proportion, they are really multiplying both sides of the equation by the product of the two denominators. What happens is that the denominators divide out with one of the factors on each side, making it look as though you have just cross multiplied. For example, in a proportion of the form a/35 = b/n, when we multiply both sides by 35n, the 35's divide out on the left side of the equation, leaving a·n. On the right side of the equation, the n's divide out, leaving 35·b.
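The same cancellation can be written out with generic letters a, b, c, d (a general sketch, not the lesson's original numbers): multiplying both sides of a proportion by the product of the denominators leaves exactly the two cross products.

```latex
% Why cross multiplication works, in general (b and d nonzero):
\[
\frac{a}{b} = \frac{c}{d}
\quad\Longrightarrow\quad
bd \cdot \frac{a}{b} = bd \cdot \frac{c}{d}
\quad\Longrightarrow\quad
ad = bc .
\]
% On the left the b's cancel and the factor d remains; on the right the d's
% cancel and the factor b remains -- the familiar "cross products" are equal.
```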
A plasmid is an extrachromosomal DNA molecule separate from the chromosomal DNA and capable of autonomous replication. It is typically circular and double-stranded. It usually occurs naturally in bacteria, and is sometimes found in eukaryotic organisms (e.g., the 2-micrometre-ring in Saccharomyces cerevisiae).
The size of plasmids varies from 1 to over 400 kilobase pairs (kbp). There may be one copy, for large plasmids, to hundreds of copies of the same plasmid in a single cell, or even thousands of copies, for certain artificial plasmids selected for high copy number (such as the pUC series of plasmids). Plasmids can be part of the mobilome, since they are often associated with conjugation, a mechanism of horizontal gene transfer.
Every plasmid contains at least one DNA sequence that serves as an origin of replication, or ori (a starting point for DNA replication), which enables the plasmid DNA to be duplicated independently from the chromosomal DNA (Figure 2). The plasmids of most bacteria are circular, like the plasmid depicted in Figure 2, but linear plasmids are also known, which superficially resemble the chromosomes of most eukaryotes.
An episome is a plasmid that can be added, without integration, to the chromosomal DNA of the host organism (Fig. 3). In this situation, it can stay intact for a long time, be either diluted out or be duplicated with every cell division of the host, and become a basic part of its genetic makeup. The term is no longer commonly used for plasmids, since it is now clear that a region of homology with the chromosome, such as a transposon, will make a plasmid into an episome. In mammalian systems, the term episome refers to a circular DNA (such as a viral genome) that is maintained by noncovalent tethering to the host cell chromosome.
Plasmids used in genetic engineering are called vectors. They are used to transfer genes from one organism to another and typically contain a genetic marker conferring a phenotype that can be selected for or against. Most also contain a multiple cloning site (MCS, or polylinker), which is a short region containing several commonly used restriction sites allowing the easy insertion of DNA fragments at this location. Generally, the multiple cloning site lies within one of the antibiotic marker genes - for example, within the tetracycline resistance gene in the pBR vectors - so that insertion of the target DNA inactivates that marker gene. This is the basis for selecting recombinants, a process known as insertional inactivation. See Applications below.
One way of grouping plasmids is by their ability to transfer to other bacteria. Conjugative plasmids contain so-called tra-genes, which perform the complex process of conjugation, the transfer of plasmids to another bacterium (Fig. 4). Non-conjugative plasmids are incapable of initiating conjugation, hence they can only be transferred with the assistance of conjugative plasmids, by 'accident'. An intermediate class of plasmids are mobilizable, and carry only a subset of the genes required for transfer. They can 'parasitise' a conjugative plasmid, transferring at high frequency only in its presence. Plasmids are now being used to manipulate DNA and may possibly be a tool for curing many diseases.
It is possible for plasmids of different types to coexist in a single cell. Seven different plasmids have been found in E. coli. But related plasmids are often incompatible, in the sense that only one of them survives in the cell line, due to the regulation of vital plasmid functions. Therefore, plasmids can be assigned into compatibility groups.
Another way to classify plasmids is by function. There are five main classes:
- Fertility (F) plasmids, which carry tra genes and are capable of conjugation.
- Resistance (R) plasmids, which carry genes conferring resistance to antibiotics or poisons.
- Col plasmids, which carry genes coding for bacteriocins, proteins that can kill other bacteria.
- Degradative plasmids, which enable the digestion of unusual substances such as toluene or salicylic acid.
- Virulence plasmids, which turn the bacterium into a pathogen.
Plasmids can belong to more than one of these functional groups.
Plasmids that exist only as one or a few copies in each bacterium are, upon cell division, in danger of being lost in one of the segregating bacteria. Such single-copy plasmids have systems which attempt to actively distribute a copy to both daughter cells.
Some plasmids include an addiction system or "postsegregational killing system (PSK)", such as the hok/sok (host killing/suppressor of killing) system of plasmid R1 in Escherichia coli. They produce both a long-lived poison and a short-lived antidote. Daughter cells that retain a copy of the plasmid survive, while a daughter cell that fails to inherit the plasmid dies or suffers a reduced growth-rate because of the lingering poison from the parent cell.
Plasmids serve as important tools in genetics and biochemistry labs, where they are commonly used to multiply (make many copies of) or express particular genes. Many plasmids are commercially available for such uses.
The gene to be replicated is inserted into copies of a plasmid which contains genes that make cells resistant to particular antibiotics. Next, the plasmids are inserted into bacteria by a process called transformation. Then, the bacteria are exposed to the particular antibiotics. Only bacteria which take up copies of the plasmid survive the antibiotic, since the plasmid makes them resistant. In particular, the protecting genes are expressed (used to make a protein) and the expressed protein breaks down the antibiotics. In this way the antibiotics act as a filter to select only the modified bacteria. Now these bacteria can be grown in large amounts, harvested and lysed (often using the alkaline lysis method) to isolate the plasmid of interest.
Another major use of plasmids is to make large amounts of proteins. In this case, researchers grow bacteria containing a plasmid harboring the gene of interest. Just as the bacteria produces proteins to confer its antibiotic resistance, it can also be induced to produce large amounts of proteins from the inserted gene. This is a cheap and easy way of mass-producing a gene or the protein it then codes for, for example, insulin or even antibiotics.
However, a plasmid can only contain inserts of about 1-10 kbp. To clone longer lengths of DNA, lambda phage with lysogeny genes deleted, cosmids, bacterial artificial chromosomes or yeast artificial chromosomes could be used.
Plasmid DNA extraction
As alluded to above, plasmids are often used to purify a specific sequence, since they can easily be purified away from the rest of the genome. For their use as vectors, and for molecular cloning, plasmids often need to be isolated.
There are several methods to isolate plasmid DNA from bacteria, the archetypes of which are the miniprep and the maxiprep/bulkprep. The former can be used to quickly find out whether the plasmid is correct in any of several bacterial clones. The yield is a small amount of impure plasmid DNA, which is sufficient for analysis by restriction digest and for some cloning techniques.
In the latter, much larger volumes of bacterial suspension are grown from which a maxi-prep can be performed. Essentially this is a scaled-up miniprep followed by additional purification. This results in relatively large amounts (several micrograms) of very pure plasmid DNA.
In recent times many commercial kits have been created to perform plasmid extraction at various scales, purity and levels of automation. Commercial services can prepare plasmid DNA at quoted prices below $300/mg in milligram quantities and $15/mg in gram quantities (early 2007).
Plasmid DNA may appear in one of five conformations, which (for a given size) run at different speeds in a gel during electrophoresis. The conformations are listed below in order of electrophoretic mobility (speed for a given applied voltage) from slowest to fastest:
The rate of migration for small linear fragments is directly proportional to the voltage applied at low voltages. At higher voltages, larger fragments migrate at continually increasing yet different rates. Therefore the resolution of a gel decreases with increased voltage.
At a specified, low voltage, the migration rate of small linear DNA fragments is a function of their length. Large linear fragments (over 20kb or so) migrate at a certain fixed rate regardless of length. This is because the molecules 'reptate', with the bulk of the molecule following the leading end through the gel matrix. Restriction digests are frequently used to analyse purified plasmids. These enzymes specifically break the DNA at certain short sequences. The resulting linear fragments form 'bands' after gel electrophoresis. It is possible to purify certain fragments by cutting the bands out of the gel and dissolving the gel to release the DNA fragments.
Because of its tight conformation, supercoiled DNA migrates faster through a gel than linear or open-circular DNA.
This article is licensed under the GNU Free Documentation License. It uses material from the Wikipedia article "Plasmid". A list of authors is available in Wikipedia.
B1. Motion on prescribed trajectory
Inclined plane. Motion on an inclined plane. The inclined plane as a machine
When a mass is being raised, the direction of the motion (vertically upwards) and the direction of the load (weight, vertically downwards) are opposite: they form a straight angle, a special case among angles. In general, the angle between load and motion differs from this; then only a fraction of the load resists the displacement, that is, only that fraction need be overcome.
Let the mass lie on an inclined plane CD (Fig. 28) and let it be moved upwards by a force which acts parallel to CD. Such a plane is inclined with respect to the horizontal, for example, a road leading up a mountain; its angle of inclination is a. The direction of the required motion is inclined to that of the force of gravity MP. What will the mass do when it is allowed to move on its own, that is, when it is only subject to gravity and no friction?
MP represents the force of gravity in magnitude as well as in direction; it causes vertical motion of the mass. The mass cannot move in this direction; it therefore exerts a pressure on the inclined plane - that is, on a resistance to its motion. The driving force MP simultaneously moves the mass, but in a direction other than what it would be without the inclined plane. In fact, MP is replaced by the two simultaneously acting forces MQ and MR. The force MQ acts as a pressure on the plane, which resists it due to its firmness and responds with an equally large pressure in the opposite direction and thus balances it. The mass can follow the force MR, since the plane does not resist motion along it (Fall along an inclined plane).
It must be balanced by some other force, if the motion is to be stopped. Since MR/MP = cos b, you have MR = MP·cos b. Since MP is the weight of the mass, that is, MP = mg, and since b is the complement of a, so that cos b = sin a, you find that MR = mg·sin a. On an inclined plane, the upwards directed force required to balance the load equals the load times the sine of the angle of inclination, that is, it is smaller than the actual load. The force required to keep the mass at rest on the inclined plane (or, if its motion is uniform, in uniform motion) must only be as large as MR and in the opposite direction. However, if the force which is to stop the mass from moving down along the inclined plane acts parallel to the base of the inclined plane (K in Fig. 29), then it must be larger than mg·sin a, because only the component K cos a acts along the inclined plane; hence K cos a = mg sin a, whence K = mg tan a.
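If you want to check these relations numerically, here is a minimal Python sketch; the 20 kg mass and the 30° angle are chosen purely for illustration and are not taken from the text:

```python
import math

# Illustrative values, not taken from the text.
m = 20.0                      # mass in kg
g = 9.81                      # gravitational acceleration in m/s^2
a = math.radians(30)          # angle of inclination

weight = m * g                              # MP = m*g, the load acting vertically
hold_along_incline = weight * math.sin(a)   # MR = m*g*sin a, force applied parallel to the incline
pressure_on_plane = weight * math.cos(a)    # MQ = m*g*cos a, balanced by the plane's firmness
hold_along_base = weight * math.tan(a)      # K  = m*g*tan a, force applied parallel to the base

print(f"load MP                 = {weight:6.1f} N")
print(f"force parallel to plane = {hold_along_incline:6.1f} N  (m*g*sin a)")
print(f"pressure on the plane   = {pressure_on_plane:6.1f} N  (m*g*cos a)")
print(f"force parallel to base  = {hold_along_base:6.1f} N  (m*g*tan a, larger than m*g*sin a)")
```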
The component which acts as a pressure on the inclined plane is made ineffective by its firmness. Thus, the inclined plane becomes a device by which you can balance a force by a smaller force. Such devices are called machines.
Fall along an inclined plane
If you do not balance the component MR (m·g·sin a), the mass will move downwards, that is, "fall" along the inclined plane. Its acceleration is given by force/mass = (m·g sin a)/m = g sin a. If you replace g by g·sin a in the formulas for free fall, you can answer all questions relating to the fall along an inclined plane. For example, the velocity at the end of the t-th second is: v1 = g·t·sin a (it is g·t during free fall!). You can now make the velocity of fall as small as you please by making the angle a sufficiently small, that is, by letting the inclined plane differ only very little from the horizontal plane. (Galileo used it to prove the laws of fall.)
After it has fallen the distance s, the velocity of a freely falling mass is v = √(2gs), whence it is v = √(2g·sin a·s) along the inclined plane with the angle of inclination a. Now let the mass drop from the point D (Fig. 28) to the horizontal plane - the base of the inclined plane - once freely along h and a second time along the inclined plane of length CD = l: What will be the velocities at which the mass arrives in these two experiments?
In the first case, you must replace s by h, whence v = √(2gh). In the second case, you must replace s by l, whence v1 = √(2g·sin a·l); however, since h/l = sin a, then v1 = √(2g·sin a·h/sin a) = √(2gh) = v, that is, the mass arrives with the same velocity at the horizontal plane, irrespective of whether it falls freely through the height of the inclined plane or along the inclined plane.
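A small numerical sketch makes this cancellation visible; the height of 5 m and the angle of 25° below are assumed here purely for illustration:

```python
import math

g = 9.81
h = 5.0                       # height of the inclined plane (illustrative value)
a = math.radians(25)          # angle of inclination (illustrative value)
l = h / math.sin(a)           # length of the inclined plane, l = h / sin a

v_free = math.sqrt(2 * g * h)                    # free fall through the height h
v_incline = math.sqrt(2 * g * math.sin(a) * l)   # fall along the inclined plane of length l

print(f"free fall through h:    v  = {v_free:.4f} m/s")
print(f"fall along the incline: v1 = {v_incline:.4f} m/s")
# The two values agree, because l = h / sin a cancels the factor sin a exactly.
```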
Tautochrone
The fall along an inclined plane is an example of motion along a prescribed trajectory. Something very strange happens if the prescribed trajectory is the concave side of the arc of a cycloid.
What is a cycloid? A point M of a circle which rolls along the straight line ab without slipping describes a cycloid - for example, every point on the periphery of a rolling tire or wheel. In order to reach the lowest point in a fall along an inclined plane, the mass requires the time interval t = √(2s/(g·sin a)), which depends on s, that is, it is longer if the mass starts higher up on the inclined plane than if it starts its fall lower down. However, if it falls along a vertical, upwards concave arc of a cycloid, it always takes the same time to reach the lowest point, irrespective of the point from which it starts to fall (Huygens 1673). The cycloid is therefore also called a tautochrone (Greek: tautos = same, chronos = time).
The inclined plane is a device by which you can balance a force by means of a smaller force. For example, if you want to lift a load by hand from the ground onto a truck, you exert yourself less if you push it up an inclined board onto the truck than if you lift it vertically. The less steeply the board is inclined, the smaller is the downhill force which must be overcome. However, the distance over which the mass must be moved along the inclined plane is larger in the same proportion than the height by which it must be lifted. What you save in the application of force, you must give up in working distance.
If the height is h and the inclination a, then the length of the inclined plane is l = h/sin a. While the (larger) force p must shift its point of application over the (smaller) distance h, that is, altogether perform the work p·h, the smaller force p·sin a must shift its point of application over the larger distance l = h/sin a and perform the work p·sin a · h/sin a = p·h, that is, the same as before: you do not save work. But with the aid of the inclined plane, you can undertake work for which your muscle strength alone is not sufficient; by suitable inclination of the inclined plane, you can make the required force as small as you please. The inclined plane is one of the devices which are called machines - simple machines (in contrast to machines comprising several simple machines).
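The force-distance trade-off can be checked numerically; the load of 500 N, height of 1.2 m and angle of 15° below are assumed values for illustration only:

```python
import math

p = 500.0                     # load in N (illustrative value)
h = 1.2                       # height to be gained in m (illustrative value)
a = math.radians(15)          # inclination of the board (illustrative value)

l = h / math.sin(a)           # distance travelled along the inclined plane
force_along_plane = p * math.sin(a)

work_lifting_straight_up = p * h
work_along_plane = force_along_plane * l

print(f"force along the plane = {force_along_plane:.1f} N instead of {p:.0f} N")
print(f"distance along plane  = {l:.2f} m instead of {h:.2f} m")
print(f"work straight up      = {work_lifting_straight_up:.1f} J")
print(f"work along the plane  = {work_along_plane:.1f} J")
# The smaller force acts over a proportionally longer distance; the work is the same.
```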
In general, a machine is defined as a device which is able to resist forces and makes it possible to balance a force of given magnitude by a smaller force. The demand for a balancing capacity means: the machine must not be changed by the two groups of forces, that is, it must only transfer a force, but not itself use up any of it. This problem is not strictly soluble technically, especially due to the deformability of solid bodies and friction.
Every inclined ladder, every upward road, every staircase is an inclined plane. In an unlimited area of applications, the inclined plane forms the base for two other simple machines: the screw and the wedge. You use the screw in a press (also the primitive copy press) and in devices which with the aid of screws raise loads. Every cutting instrument (knife, scissors, axe) employs a wedge.
Fig. 33 shows that the screw is an inclined plane: The hypotenuse AB of a right-angled triangle, wrapped tightly around a circular cylinder, describes a spiral on it. If the base CB equals the circumference of the cylinder, the points B and C coincide and you have one turn of the spiral. Obviously, the line AB is an inclined plane with height AC, AB its length and BC its base. A flexible bar in place of AB (for example, of square cross-section), which covers the screw line, forms a protruding band, the thread of the screw.
Screws have several such turns, all of which can be imagined to have arisen in the same manner. The screw can transfer forces only after it is given a nut: inside a hollow cylinder of circular cross-section, with the diameter of the cylinder of Fig. 31, you cut the same thread which on the cylinder was a raised thread; that is the nut of the screw. If you insert the screw into the nut, that is, lay one inclined plane against the other, and let gravity act, it slides with its thread in the thread of the nut (assuming that there is no friction; in practice, friction cannot be avoided).
If you wish to eliminate the effect of gravity, it must be opposed by a force, just as on the inclined plane. You can apply this force at the circumference C of the screw, that is, in Fig. 29, parallel to the base of the inclined plane from which the screw was generated. It is smaller than the force to be held in equilibrium in the same ratio as we found for the inclined plane.
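Carrying the same reasoning over to the screw gives the usual frictionless estimate, stated here as a sketch rather than anything in the original text: one turn of the thread, unrolled, is an inclined plane whose base is the circumference 2πr and whose height is the pitch, so a tangential force F at the circumference can hold a load W when F = W·pitch/(2πr). The numbers below are assumed for illustration:

    import math

    W = 500.0      # load carried by the screw, in newtons (assumed)
    pitch = 0.002  # rise per turn, in metres (assumed)
    r = 0.01       # radius at which the tangential force acts, in metres (assumed)

    # Frictionless screw: one turn unrolls into an inclined plane with
    # base 2*pi*r and height equal to the pitch.
    F = W * pitch / (2 * math.pi * r)
    print("tangential force needed:", round(F, 2), "N")   # about 15.9 N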
The forces acting on a wedge are related in a similar way. A wedge is a three-sided prism (Fig. 31) in which one angle is very small compared with the other two. The two planes meeting at the sharp angle are the sides, the third plane is the back, and the edge opposite the back is the edge of the wedge. Once the wedge has forced a body open, the separated parts press on it and would drive it out if there were no friction and the force driving it in ceased. Again, friction is important!
An axe which you drive into a block of wood is not necessarily ejected when no force is applied to it. However, if friction could be almost eliminated, the wedge would be thrown out by the separated parts of the block. In order to keep the axe inside the block, you would then have to apply an appropriate force to its back; the sharper the wedge, the smaller this force. All cutting tools such as knives, chisels, planes, etc. work like wedges.
Figs. 34/35 show the relations between the forces which act on a wedge. A right-angled wedge has been driven under a beam which is to support a wall, in order to stop the wall from falling towards the right-hand side of the figure. The wall presses against the beam, which presses against the wedge; the beam would eject the wedge horizontally if there were no friction, so you would then have to apply a horizontal force to its back in order to keep it in place.
How large must this force be in relation to the pressure L of the beam? The pressure L acts at right angles to the side AB of the wedge, but only its component l at right angles to BC attempts to drive the wedge out; the component at right angles to the ground is of no interest, since the resistance of the ground balances it. The force P which you must apply against the back of the wedge must therefore equal l. The figure shows that l/L = BC/BA, whence l is the smaller, the sharper the wedge, that is, the smaller the ratio of its back BC to its long side BA.
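A small numerical illustration of the relation l/L = BC/BA for a frictionless wedge, with assumed dimensions:

    L_beam = 2000.0   # force of the beam on the side of the wedge, in newtons (assumed)
    BC = 0.03         # thickness of the wedge's back, in metres (assumed)
    BA = 0.20         # length of the wedge's long side, in metres (assumed)

    # Component trying to eject the wedge; for a frictionless wedge this is
    # also the force that must be applied to its back to hold it in place.
    l = L_beam * BC / BA
    print("holding force:", l, "N")   # 300.0 N for these numbers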
Experience confirms this conclusion in the use of a knife, axe, needle, nail, etc.: they penetrate the more easily, the more pointed they are.
Friction as impediment
When answering the question regarding the magnitude of the force required to balance a load of given size by means of a machine, we have neglected friction. However, it is always present. A body on an inclined plane frequently does not slide down it at all; as a rule, it stays put (unless the inclined plane is rather steep or the body has a special shape), because friction holds it in place. The more perfect the surfaces of the inclined plane and of the body, the less steep the inclined plane needs to be before the body slides down. One method for the measurement of friction therefore determines the angle of inclination of a plane at which the body just starts to glide down.
If you attempt to pull the mass m along the table T by means of the spiral spring (Fig. 36), the spring will stretch a certain amount before the mass starts to move. The tension required to set m in motion is larger than Newton's second law alone would demand merely to accelerate it, because friction must first be overcome. This tension can be measured, as on a letter balance, by a weight. For example, if m is 1000 g and you must apply a 600 g weight to stretch the spring before m starts to move, this means that you must employ 3/5 of the force with which the mass presses on the table in order to overcome the friction of m on T; the number 3/5 is the friction coefficient. You obtain the same number if you place m on an inclined plane and find the angle of inclination at which m starts to glide down. (The slope angle of a pile of sand, of grain, etc., its angle of repose, is governed by the same friction angle.)
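The two measurements described here give the same number: μ = F/(m·g) from the spring experiment, and μ = tan α from the tilting experiment, the standard relation for the angle at which sliding just begins. A short sketch using the figures of the example:

    import math

    m = 1.0                  # the 1000 g mass, in kilograms
    g = 9.81
    F_pull = 0.6 * m * g     # pull equivalent to the 600 g weight

    mu = F_pull / (m * g)
    print("friction coefficient from the spring experiment:", round(mu, 2))   # 0.6

    # On a tilted plane, sliding begins when tan(alpha) = mu:
    alpha = math.degrees(math.atan(mu))
    print("sliding would begin at about", round(alpha, 1), "degrees")          # about 31 degrees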
In order to maintain the motion of the already moving mass m, a much smaller tension in the spring is needed, perhaps 2/5 of its weight. This number, the coefficient of sliding friction, not only differs for different pairs of materials, but also changes for the same pair depending on the state of the surfaces, that is, on the lubricant between them (oil, graphite, grease, etc.).
Rolling on well-greased wheels encounters the least frictional resistance among all modes of motion; the friction coefficient is then much smaller than during sliding. A cartwheel has rolling friction on its circumference and sliding friction at its axle; ball-bearings are used in order to convert the latter also into rolling friction (Fig. 37).
If you press surfaces which slide along each other ever more strongly together, the friction becomes substantially larger. You must then apply a much larger force in order to generate motion, and motion already under way is slowed down much more strongly. This is the principle of the brake in cars.
You employ friction all the time, intentionally as well as unintentionally. You could not move on foot or otherwise if friction did not stop you from slipping; nor would you stay at rest while sitting, lying or standing, unless friction stopped you from sliding. Lighting a match on the friction surface of its box by means of frictional heat belongs to the conscious technical applications of friction. Devices like the brake dynamometer also employ friction.
Motions which can be generated by mechanical means and remain visible continue after the moving forces have ceased, but you see them slow down and eventually stop; for example, a carriage keeps on rolling when it is detached from a moving locomotive, a boat which is no longer rowed keeps on drifting, etc.
Frictional effects: But such a motion has only ceased to be visible; in reality, it has converted itself into another motion which is not visible but can be detected through its effect: the contacting surfaces have been heated. As a rule, one imagines, wrongly, that with the end of the visible motion the motion has been destroyed.
As a rule, the heating is not large enough to be noticed without effort. But occasionally it becomes noticeable, for example, when a very fast motion of a mass is suddenly interrupted or very much slowed down. In the case of a railway carriage which is being braked, the brake blocks on the wheels become so hot that you can feel it when you touch them; meteors, on entering the atmosphere from the airless interstellar space, are heated by the atmosphere at their surfaces so strongly that they light up (shooting stars); a flying bullet, stopped by a resisting material, can become so hot that it melts at the surface; etc.
Natural selection is one of the major mechanisms of evolution, but where most evolution textbooks discuss several such mechanisms, often devoting separate chapters to each, Explore Evolution ignores nearly all of them, devoting a chapter and a half to misrepresenting natural selection. The chapter devoted to this concept inaccurately describes natural selection, mischaracterizes several widely-known examples of natural selection at work, and misconstrues the significance of those examples.
Students need to understand natural selection, and there are many inquiry-based techniques for teaching about it. Explore Evolution fails to offer any new additions to this literature. Worse yet, it neglects to even draw on that literature to help students deepen their understanding of this basic aspect of evolutionary biology.
p. 95: "changes in the sub-population take place as genetic information is lost to that population"
Ongoing research by geneticists and evolutionary biologists shows that evolutionary processes (including natural selection) do increase information, including through the evolution of new genes, and of genes that are better able to operate under novel conditions. Explore Evolution fails to confront this ongoing research, preferring to criticize decades-old experiments.
p. 94: "not only does the experiment [on peppered moths] not show what the story says it's supposed to, the experiment itself is highly questionable"
While Explore Evolution was being written, a researcher re-ran Kettlewell's classic experiment on peppered moths, correcting various criticisms offered of the original. The new research confirmed the original findings, and those findings affirm the importance of natural selection as an evolutionary mechanism.
p. 90: "definite, discoverable limits on what artificial [and therefore natural] selection can do"
Shortly before decrying extrapolation from short periods to long-term trends, Explore Evolution claims that limits on what animal breeders can accomplish over a century or two demonstrate that evolution could not produce the diversity of life we see over life's 4 billion year history. In fact, intensive selective breeding over a few hundred years has produced a range of sizes and morphologies among domestic dogs that exceeds the diversity of all other members of the order Carnivora. Whatever limits evolution may reach after intense selection, they are not tight enough to rule out the diversity of life we actually need to explain.
p. 87: "Is it possible that something like [artificial selection] occurs in nature – only without any intelligence to guide it?"
Explore Evolution plays a sleight of hand here, treating "intelligence" interchangeably with the practices of animal breeders, when the difference between artificial and natural selection lies not in the application of intellect, but in the application of selective pressures other than those which would occur naturally. Natural selection will tend to be messier than artificial selection, zigging and zagging to match changing environmental conditions, but the practical aspects of each are identical. Explore Evolution wrongly treats artificial selection as an analogy for natural selection, when natural selection is just a more general process.
Natural and Artificial Selection: The nature of natural selection is obscured, confusing natural selection's use in evolutionary explanations, the relationship between natural selection and artificial selection, and the way in which evolution from generation to generation produces new genes and new anatomical structures.
Experiments: While it is true that many textbooks describe classic experiments on the evolution of beak size in Galápagos finches and of peppered moths in England, they also discuss many other examples. Rather than further supplementing textbooks with new knowledge, Explore Evolution devotes itself to factually misleading accounts of those classic experiments, confusing students rather than deepening their understanding.
Extrapolation: Explore Evolution criticizes scientists for extrapolating from evolutionary changes over short timespans to long-term processes like speciation. Instead, the book encourages students to extrapolate from apparent limits to artificial selection to the existence of absolute limits to evolution. While the book's extrapolation is unjustified, the scientific study of evolution is not rooted in extrapolation but in detailed experimentation, mathematical modeling, and experimental hypothesis testing.
Explore Evolution begins its discussion of natural selection with a discussion of artificial selection. Artificial selection, in which differential survival and reproduction in animals, plants, or other organisms is driven by the choices of human breeders selecting among natural variations in a population, is treated as an analogy for natural selection, in which differential survival and reproduction of organisms is driven by natural processes acting on natural variation in a population.
This is a dubious beginning, as natural and artificial selection are, in fact, different aspects of the same process. While Darwin's early understanding of natural selection was influenced by his ability to draw analogies between natural observations he made and the actions of humans breeding pigeons and dogs for special traits, it is wrong to suggest that our modern understanding of these processes is merely analogical, rather than treating artificial selection as a special application of the principles behind natural selection.
Explore Evolution further errs in presenting results from a few hundred years of intensive breeding in dogs and horses as evidence for limits on evolutionary processes over thousands, millions, and indeed billions of years. Even if horses and dogs demonstrated the limits claimed by the authors, it would be foolish to extrapolate limits found under the special conditions of horse-breeding and dog-breeding to the longer-term and more complex conditions which natural selection must confront in its more general form. Given the track record of Explore Evolution, it is hardly surprising that artificial selection in dogs and in horses has not actually reached clear limits, and what limits can be inferred from those cases shows that the variation which can be produced in even a thousand years or so is greater than that seen in all of the members of the mammalian order Carnivora other than dogs. If such extrapolation is legitimate, the actual evidence undermines the point Explore Evolution seeks to make with those data.
Artificial selection and natural selection are different forms of the same process. Treating the relationship as a mere analogy assumes that differences are greater than they actually are.
Natural selection simply requires certain conditions; when they occur, natural selection will occur: individuals in a population must vary, at least some of that variation must be heritable, and individuals carrying some heritable variants must survive and reproduce more successfully than individuals carrying others.
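To see those conditions at work, here is a minimal simulation sketch (every number in it is assumed purely for illustration): a population with two heritable variants, one of which leaves slightly more offspring on average, moves predictably toward the fitter variant.

    import random

    # Two heritable variants; 'A' reproduces slightly more successfully than 'a'.
    # Starting frequencies and fitness values are assumed for illustration.
    pop = ['A'] * 100 + ['a'] * 900
    fitness = {'A': 1.05, 'a': 1.00}

    for generation in range(100):
        weights = [fitness[ind] for ind in pop]
        # Offspring inherit their parent's variant (heritability); parents are
        # sampled in proportion to fitness (differential reproductive success).
        pop = random.choices(pop, weights=weights, k=len(pop))

    print("frequency of A after 100 generations:", pop.count('A') / len(pop))

With these assumed values the initially rare variant typically rises from 10% to well over 90% of the population, even though each generation's advantage is tiny.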
The only difference between natural selection and artificial selection is whether the difference in reproductive success is driven by naturally occurring processes, or whether the selection is imposed by humans. Explore Evolution obscures this in two ways. First, by asserting that the relationship is an analogy, rather than a generalization from the human activity. Second, by referring not to a human activity, but to the action of "intelligence."
This shift is subtle, but is a powerful rhetorical opening move. After introducing an example of shepherds selectively breeding woollier sheep, Explore Evolution asks:
Is it possible that something like this process occurs in nature—only without any intelligence to guide it?Explore Evolution, p. 87
The same question could as easily be posed whether "something like this process occurs in nature—only without any [human] to guide it," but would seem much less profound. And as Explore Evolution acknowledges, it is easy to see how forces other than humans could exert selective pressure on populations of living things.
Explore Evolution invites readers to imagine a dog as small as a pair of glasses, or larger than a horse, concludes "this is comical," and states that unnamed "critics" think "there are limits to how much an animal can change [via natural selection]" (p. 90). Setting aside that natural selection does not change "an animal," but operates over many generations of a population or species of animals, plants or other organisms, the claim that limits on natural selection are such that they would prevent speciation or other "large-scale changes" is simply not correct.
Explore Evolution states that "Horse breeders have not significantly increased the running speed of thoroughbreds, despite more than 70 years of trying" (p. 90), a claim which is inaccurate on at least one count, and which misrepresents the source they cite. Gaffney and Cunningham (1988), the paper they cite to justify the sentence, do find that winning race times have not changed, but end the paper stating, "We conclude that the explanation for the lack of progress in winning times is not due to a lack of genetic gain in the thoroughbred population as a whole." Genetic gain in the population as a result of selective breeding is the very definition of selection. Furthermore,
breeders and horse-racing enthusiasts state they pay little attention to winning times. Instead, riders, horse owners, breeders, and bettors are rewarded for horses that win races, regardless of time, and little effort is made to "beat the clock." Furthermore, "fast tracks" are notoriously bad for the health of horses, causing damage to bones and tendons. Consequently, track surfaces are often treated to be softer, slower, and less likely to cause stress on the horse. Thus, modern racetracks may be slower than the tracks of 50 years ago.Ernest Bailey (1998), "Odds on the FAST gene," Genome Research, 8(6):569-571
Thus, it is not the case that horse breeders have tried to increase the absolute time in which their horses complete races, but to ensure that their horses run faster than the other horses in a given race. It is therefore impossible to know whether contemporary horses would run faster than famous racehorses like Seabiscuit or Secretariat if they ran against one another, or whether contemporary horses as a whole are faster in absolute terms than horses were 70 years ago.
The book's dismissal of variation within dogs is, if possible, even more disingenuous. Morphometric studies of dog limbs and skulls have found that the variation within the domestic dog, Canis familiaris, is greater than the variation within the entire family to which that species belongs, and indeed greater than the variation within the order Carnivora. The range of sizes is many times greater (axis 1 in both figures). The shapes of the dogs' limbs (axis 2 in the first image) only slightly overlap the shapes found in other canids, including other members of the genus Canis. The shapes of the skulls (axis 2 in the second figure) completely overlap the shapes of non-Canis canid skulls, and the range of dog skull shapes is matched only by variation among other members of the genus.
There is no evidence in these data to suggest that dogs have reached any inherent limits to their evolution or to the powers of natural selection. What these data show is that dog breeders have already managed to produce animals which break new morphological ground. Whatever limits might seem to exist if we look at the shapes and sizes of wild canids have been surpassed by the work of dog breeders. Whatever limits natural selection has, they have not prevented the evolution, within the last few thousand years, of variation beyond that seen within the rest of the entire order Carnivora (cats, bears, foxes, weasels, etc.). Natural selection may well have limits, but if the limits are that loose, they would not prevent the diversification of life as we know it over the course of several billion years.
There is little doubt that limits on natural selection do in fact exist. Because selection operates on existing variation, there is a balance between the rate of mutation and the force of selection. This balance was first described in the 1920s, and modern textbooks describe this mutation-selection balance (e.g., p. 115 in Ridley's Evolution, p. 438 in Futuyma's Evolutionary Biology, or p. 461 in Campbell and Reece's Biology). In a hypothetical case where mutation does not occur, strong enough selection would eventually stabilize all of the genes relevant to a given trait. Similarly, in the absence of selection, mutation would gradually increase the number of mutants in the population to some equilibrium. Depending on the amount of selection and the amount of mutation, the amount of variation available to select on will vary.
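As a concrete illustration, here is the standard one-locus result presented in the textbooks cited above (stated as the usual approximation, with μ the per-generation mutation rate to a deleterious allele and s the strength of selection against it): the equilibrium frequency of the deleterious allele is approximately

    \hat{q} \approx \frac{\mu}{s}            % deleterious allele dominant
    \hat{q} \approx \sqrt{\frac{\mu}{s}}     % deleterious allele recessive

In either case, weaker selection or a higher mutation rate leaves more standing variation in the population for selection to act on.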
The limits selection might face because of limited natural variability within a single generation will get progressively broader as the number of generations increases. Modern racing horses can trace over half of their genes to 10 horses of the late 18th century, and over 80% to only 31 ancestors from that era. Despite that highly constrained gene pool, the speed of horses has risen (whether or not it plateaued in the 1950s as discussed above). Similarly, much of the morphological evolution in dogs took place over a similar time period, beginning in the 18th century as breeders began paying more careful attention to studbooks.
In its effort to debunk natural selection, Explore Evolution reiterates the debunked claims from the creationist book Icons of Evolution. That book claimed that textbooks misrepresented evolution by incorrectly characterizing certain popular experiments. Explore Evolution repeats the earlier book's arguments, reuses several of that book's images without change (or attribution), and does not update its arguments to reflect more recent research.
The most fundamental error here is the claim that research on peppered moths and work on the Galápagos finches are the only, or at least the principal, examples of natural selection offered in textbooks. Those examples are frequently cited, but modern textbooks cite many other examples to show how natural selection works. Nor do modern textbooks cite those bodies of research for the purposes claimed in Explore Evolution. The treatment of natural selection in this book focuses exclusively on whether natural selection can generate biological novelty. That is an interesting topic, but not the only relevant topic for students to learn about natural selection, and Explore Evolution does students a disservice by treating an important and multi-faceted topic like natural selection through such a limited lens.
Turning to the details of the critiques offered for the peppered moth work and the Galápagos finch research, one finds that Explore Evolution describes that research inaccurately, and ignores recent work which directly contradicts the book's claims. For instance, it presents a graph of finch evolution which bears no relationship at all to any measurements reported by any researchers in the field, and criticizes the 50-year-old work of Bernard Kettlewell on peppered moths without any discussion of research from the 1990s which tested several of the authors' criticisms of Kettlewell and found that Kettlewell's results were unaltered by those criticisms.
Explore Evolution claims:
Biology textbooks cite two classic examples to support the claim that natural selection can produce small-scale change over a short time.Explore Evolution, p. 88
Campbell and Reece's Biology (6th edition) has a section in the chapter on evolution entitled, "Examples of natural selection provide evidence of evolution." It begins:
Natural selection and the adaptive evolution it causes are observable phenomena. As described in the interview at the beginning of this unit, Peter and Rosemary Grant of Princeton University are documenting natural selection and evolution in populations of finches in the Galápagos [Darwin's finches]. We will now look at two additional examples of natural selection as a pervasive mechanism of evolution in populations.Neil A. Campbell and Jane B. Reece, Biology, 6th ed.
Those examples include the evolution of HIV and insects in response to drugs and insecticides, yet neither HIV nor insecticide even rates a mention in the index of Explore Evolution. In Chapter 9, Explore Evolution addresses antibiotic resistance, but only to discuss the origins of mutations conferring resistance, not to point out that natural selection is what causes that resistance to spread.
Sickle cell anemia also makes an appearance in Chapter 9, again purely as an example of a mutation. In Raven and Johnson's Biology (5th edition), however, it is the first example of natural selection described in the section entitled "Natural selection explains adaptive microevolution." Explore Evolution mentions that sickle cell anemia can be beneficial under some circumstances, but misses the chance either to discuss how natural selection makes it more common in human populations traditionally occupying malarial areas, or to employ a truly inquiry-based approach by inviting students to develop and test hypotheses about malarial resistance in order to actually explore evolution.
Earlier, Raven and Johnson discuss examples of natural selection including its ability to maintain persistent latitudinal gradients in the oxygen-carrying hemoglobin molecule in ocean fish, differences which make northern fish more efficient in cold water and southern fish more efficient in warmer water. They also describe how selection improves the camouflage of butterfly caterpillars and allows snail populations to adapt to different local ground coloration, as well as pesticide resistance in tobacco budworms and agricultural weeds. Only later do they discuss industrial melanism in peppered moths or the beaks of Darwin's finches.
That introductory textbook authors tend to focus on a few common examples does not detract from the fact that natural selection is commonplace, easy to observe, and widely documented. Specialized textbooks on evolutionary biology present an even wider array of examples of natural selection. For instance, the chapter on natural selection in Futuyma's Evolutionary Biology discusses how selection produces a north-south gradient in the frequency of alleles of a certain gene in Drosophila flies, a pattern repeated on multiple continents. Similar patterns exist for field crickets. Later, Futuyma describes how guppies in streams without predators have brighter coloration than closely related guppies in streams with predators. In the example of snail shells also used by Raven and Johnson, Futuyma points to paleontological studies showing that the genetic polymorphism seen in the population today has persisted for thousands or millions of years — clear evidence of stabilizing selection — and studies of broken snail shells allow an evaluation of rates of predation on various color morphs — allowing an assessment of the selective pressures acting on the population.
The emphasis on two examples of natural selection, and the complete disregard for the myriad other examples in active use by introductory and advanced textbooks, reflects a common creationist strategy. Jonathan Wells, a creationist author at the Discovery Institute, has made a career of attacking the Galápagos finches and the peppered moth, perhaps in the belief that all of the other examples of natural selection would go away if he could disprove one or two well-known examples.
It is noteworthy that several figures in this chapter are drawn from Jonathan Wells' Icons of Evolution, a creationist work aimed at critiquing the content of common biology textbooks and common examples used to illustrate and explain evolutionary processes. While Wells is not credited in this chapter, many of the arguments are the same as in his earlier work (critiqued by NCSE's Alan Gishlick), as are the illustrations. Explore Evolution repeats many of the errors previously identified in Wells' work. Just as the authors of Explore Evolution have a well-documented religious agenda which belies the scientific appearance of their book, Wells is famous for his religious reasons for obtaining a PhD in biology and attacking evolution, rooted in his involvement with the Unification Church (better known as "Moonies"), led by Sun Myung Moon, or, as Wells refers to him, "Father":
I asked God what He wanted me to do with my life, and the answer came not only through my prayers, but also through Father's many talks to us, and through my studies.…
He also spoke out against the evils in the world; among them, he frequently criticized Darwin's theory that living things originated without God's purposeful, creative activity.…
Father's words, my studies, and my prayers convinced me that I should devote my life to destroying Darwinism, just as many of my fellow Unificationists had already devoted their lives to destroying Marxism. When Father chose me (along with about a dozen other seminary graduates) to enter a Ph.D. program in 1978, I welcomed the opportunity to prepare myself for battle.
Explore Evolution, like Wells' earlier work, is rooted in a religious aversion to evolution, not in actual science. Scientists have sought to correct the erroneous claims displayed in Explore Evolution in their earlier incarnations, and the refusal to accept those corrections, or even to acknowledge those criticisms, recommends strongly against adopting this work into science classes.
…after the rains returned [to the Galápagos], the Grants noticed that several separate species of finches were interbreeding. Not only were no new species springing forth, but existing varieties actually seemed to be merging. Critics therefore conclude that the finch beak example of microevolution actually suggests that biological change has limits.Explore Evolution, p. 93
Given that this passage follows right after their complaints about the dangers of extrapolation, it is rich to find them claiming that anything that didn't happen in a 30-year study could never happen. The sole basis for the claim that limits exist seems to be that something did not occur in the course of a single study. By that standard, I could predict that I will never die, since I am thirty and have not been observed to die.
Furthermore, the citation the authors use to explain that hybridization occurred points out that genetic novelty accompanies such unusual matings, and explains why those matings do not mean that the species are merging. The ongoing changes in beak shape (shown in part F of the figure above) cannot be explained by natural selection alone. They can only be explained by invoking the combination of two of the four major evolutionary mechanisms: natural selection and gene flow. Natural selection explains the initial changes, but the flow of genes between species provided the ongoing evolutionary pressure towards blunter beaks. Peter and Rosemary Grant explain:
The proportionally greater gene flow from G. fortis to G. scandens than vice versa has an ecological explanation. Adult sex ratios of G. scandens became male biased after [the extremely hot and wet] 1983 as a result of heavy mortality of the socially subordinate females. High mortality was caused by the decline of their principal dry-season food, Opuntia cactus seeds and flowers; rampantly growing vines smothered the bushes. G. fortis, more dependent on small seeds of several other plant species, retained a sex ratio close to 1:1. Thus, when breeding resumed in 1987 after 2 years of drought, competition among females for mates was greater in G. fortis than in G. scandens. All 23 G. scandens females paired with G. scandens males, but two of 115 G. fortis females paired interspecifically. All their F1 offspring later bred with G. scandens because choice of mates is largely determined by a sexual imprinting-like process on paternal song.Grant, P. R. and B. R. Grant (2002), "Unpredictable evolution in a 30-year study of Darwin's finches." Science 296:707-711
There are two important things to understand about that. First, that the hybridization was a result of unusual environmental conditions and an excess in the number of males of one species. Those males competed for access to any female at all, and were prepared to overcome pre-mating barriers to hybridization. Second, the main force limiting hybridization is that different species of finches select mates with songs similar to those of their fathers. Elsewhere, the Grants explain:
Hybridization occurs sometimes as a result of miscopying of song by a male; a female pairs with a heterospecific male that sings the same song as that sung by her misimprinted father. On Daphne Major island, hybrid females bred with males that sang the same species song as their fathers. All G. fortis × G. scandens F1 hybrid females whose fathers sang a G. fortis song paired with G. fortis males, whereas all those whose fathers sang a G. scandens song paired with G. scandens males. Offspring of the two hybrid groups (the backcrosses) paired within their own song groups as well. The same consistency was shown by the G. fortis × G. fuliginosa F1 hybrid females and all their daughters, which backcrossed to G. fortis. Thus mating of females was strictly along the lines of paternal song.Peter R. Grant and B. Rosemary Grant (1997). "Genetics and the origin of bird species," Proceedings of the National Academy of Sciences, 94:7768–7775
The Grants go on to discuss how this helps explain the speciation of Galápagos finches.
Experiments show that birds are less responsive to the songs of conspecifics from different islands than to songs from their own island. Even though the changes are small, the process of cultural drift is enough to begin isolating these populations. In addition, the finches show a preference for the morphology of birds from their own island to members of the same species from other islands, even independent of song differences. The forces driving natural selection on different islands will differ, and that will produce morphological differences, which will combine with differences in songs to make hybridization less likely.
Thus, the observation of hybridization does not provide evidence that two species will merge into one. Instead, it helps test the process by which species separate. Furthermore, the hybridization observed had an effect which directly contradicts the claim that novelty cannot originate through evolutionary processes. Because of genes flowing in from another species, G. scandens experienced substantial evolutionary change, and acquired novel traits.
The Grants point out that this sort of gene exchange is important to understanding speciation and evolution in general:
Introgressive hybridization [as seen in Darwin's finches] has the potential of leading to further evolutionary change as a result of enhancing genetic variances (in some cases lowering genetic covariances), introducing new alleles, and creating new combinations of alleles, some of which might be favored by natural selection or sexual selection. Svärdson believed that introgression in coregonid fishes has replaced mutation as the major source of evolutionary novelty. Introgression and mutation are not independent; introgressive hybridization may elevate mutation rates.
The relevance to speciation lies in the fact that regions of introgression are peripheral areas, which could become isolated from the main range of the species through a change in climate and habitat: they are potential sites of speciation.Grant, P. R. and B. R. Grant (1997)
Far from demonstrating limits to the power of evolution, the rare hybridizations between species demonstrate how strong pre-mating isolation is, illustrate an important source of variation, and provide evidence about the process of speciation and the origins of genetic novelty. The fact that Explore Evolution never mentions two of the four major mechanisms of evolution (gene flow and genetic drift), speaks poorly of the authors' commitment to a serious examination of evolutionary biology.
Critics question whether the peppered moth story shows that microevolution can eventually produce large-scale change. They point out that nothing new emerged.Explore Evolution, p. 93, emphasis original
Textbooks which present this example typically use it to illustrate the process by which biologists investigate natural selection, not to demonstrate the origins of biological novelty. In Raven and Johnson's Biology (5th edition), the discussion of peppered moths and industrial melanism is in a section titled "Natural selection explains adaptive microevolution," and never claims that the example illustrates anything other than the process by which scientists have investigated the effects of natural selection. Ridley's Evolution (2nd ed.) discusses peppered moths first in a section explaining how "Natural selection operates if some conditions are met," and later in a chapter entitled "The Theory of Natural Selection," in a section discussing how "the model of selection can be applied to the peppered moth." Ridley first demonstrates the reasons why natural selection was invoked by observers of a pattern, and then proceeds to describe the particular ways in which researchers investigated the hypothesis: determining the heritability of coloration, experimenting to determine the fitnesses of various genotypes under different conditions, and concluding with a discussion of ways in which "the details of the story are now known to be more complex."
As Ridley explains:
In conclusion, the industrial melanism of the peppered moth is a classic example of natural selection, and illustrates the one-locus, two-allele model of selection. The model can be used to make a rough estimate of the difference in fitness between the two forms of moth using their frequencies at different times; the fitnesses can also be estimated from mark-recapture experiments. However, the one-locus, two-allele model is only an approximation to reality. In fact, several alleles are present (and their dominance relations are not simple); selection is not simply a matter of bird predation in relation to camouflage; and it seems that migration, as well as selection, is needed to explain the geographic pattern of gene frequencies.Mark Ridley (1996), Evolution, 2nd ed., p. 109
Explore Evolution claims that "the experiment [does] not show what the story says it's supposed to," but misrepresents what scientists claim it illustrates. It is not an experiment meant to illustrate speciation, and Explore Evolution does not discuss those experiments, such as the examples in Drosophila discussed by Ridley in his chapter on speciation.
In fact, Explore Evolution does not even discuss the process by which melanism would have originated in peppered moths. The genetics of melanism have been well understood since the 1960s, when researchers showed how several different mutations to the same genes could produce similar sorts of melanism (Lees, David R., 1968, "Genetic Control of the Melanic Form Insularia of the Peppered Moth Biston betularia (L.)," Nature 220(5173):1249-1250). Natural selection is the process by which those mutations increased in frequency over several generations, exactly what scientists and textbook authors claim this example demonstrates.
In these experiments the moths were placed onto trunks and branches at dawn, not in the daytime, and allowed to take up their own resting places, as described in Kettlewell's 1958 paper "The importance of the micro-environment to evolutionary trends in the Lepidoptera" (Entomologist, 91:214-224). This is exactly what the moths do naturally. In one experiment, as a control, Kettlewell released moths earlier, and allowed them to fly onto the trees themselves. The recapture patterns from this experiment were no different from the recapture patterns with the moths placed on branches and trunks (Kettlewell, 1956, "Further experiments on industrial melanism in Lepidoptera," Heredity, 10: 287-301).
As well, some of the moths that were released in the mark-recapture experiments stayed out for two nights before being captured. That is, they had been flying free at night and had found their own location during the morning. The distribution of those moths that freely chose their own resting places is no different from that of the moths placed on trunks and branches (as shown in Kettlewell, 1956, and in his 1955 paper "Selection experiments on industrial melanism in Lepidoptera," Heredity, 9: 323-342).
While there were legitimate reasons why scientists did criticize Kettlewell’s experiments (including Bruce Grant's 1999 paper "Fine tuning the peppered moth paradigm," Evolution 53. 980-984 and Michael Majerus's 1998 Melanism: evolution in action, Oxford University Press, Oxford, chapters 5 and 6), none of these criticisms (density and resting place choice) involve the moths being sleepy or sluggish, and no serious experimenter suggested that Kettlewell’s results were invalid. Indeed, subsequent experiments to test these criticisms broadly confirmed Kettlewell’s results (again, see Grant, 1999, Majerus 1998, and Majerus' 2007 talk "The Peppered Moth: The Proof of Darwinian Evolution," given at the ESAB meeting in Uppsala on 23 August – also available as Powerpoint, as well as his 2009 paper "Industrial melanism in the peppered moth, Biston betularia: an excellent teaching example of Darwinian evolution in action," Evolution: Education and Outreach 2(1):63-74). Further details of these experiments are discussed in "Where Peppered Moths Rest," below.
Kettlewell was aware that peppered moths rested on both trunks and branches. In Kettlewell’s experiments, he actually placed the moths on trunks and branches, in relatively unexposed locations, thus covering the natural resting places of the peppered moth.
In a comprehensive study of peppered moth resting places in the wild, fully 25% of moths were found resting on trunks (Majerus, 1998, cited above). Of the remainder, roughly 25% were found on branches, and 50% at branch/trunk junctions. Furthermore, in the branch/trunk junction category, the moths are actually resting on the trunks, 2-3 inches below the branch. In a later, extensive 6 year study 37% of peppered moths were found on trunks (Majerus, 2007, cited above).
It is important to note that Kettlewell performed several different experiments; direct observations, filmed observations of birds taking moths from exposed trunks, indirect observations of moth predation where moths were released onto relatively unexposed trunks and branches and allowed to chose their resting places, and mark-recapture experiments, where again moths were released onto relatively unexposed trunks and branches to choose their own resting places (Kettlewell, 1955, 1956, both cited above). So when Kettlewell put his moths on trunks and branches (Kettlewell, 1955, 1956), he was placing them where the majority of all moths rest naturally, as far as we can tell (even more if we count the trunk-resting moths at the trunk/branch junctions).
Michael Majerus has repeated Kettlewell’s experiments using moths resting on the undersides of branches (Majerus 1998, 2007). In both cases, differential predation was found that confirmed Kettlewell’s original observations. Furthermore, in Majerus’s 6-year experiment, measured predation intensity at the experimental sites predicted the population frequencies of moths found in the wild (Majerus, 2007).
While the authors of Explore Evolution could not have been expected to have had access to Majerus’s 2007 results, Majerus’s 1998 results, as well as Kettlewell’s description of the original experiments (Kettlewell, 1955, 1956) alone are enough to show that Explore Evolution is completely wrong on this point.
Since Kettlewell's original experiments were published, they have been independently replicated at least 6 times (See for example Grant 1999, Majerus 1998, and Majerus 2007, all cited above, for reviews). All of these experiments have addressed one or more criticisms of the original study, and all have broadly confirmed Kettlewell's experiments. Thus we can say that Kettlewell's experiments have stood the test of time.
An inquiry-based book could have used this history of successive investigations to explore the practice of science as a self-correcting enterprise, and the importance of replicability to the scientific process. Students could have been asked to devise their own experiments based on criticisms of Kettlewell's early work, and then teachers could reveal data from experiments like those performed by Majerus to evaluate the results of those new experiments. Instead, students are presented with erroneous critiques of Kettlewell's work, given none of the more recent vindicating evidence, and instructed to believe that this flawed exploration demonstrates a weakness in natural selection. In fact, it reflects only the weaknesses of Explore Evolution, and of its authors' approach to evolution and science in general.
Natural selection operates at different speeds under different circumstances. Scientists agree that natural selection over long periods of time can produce larger evolutionary change than natural selection can produce in shorter periods, but exactly how much more is a subject of ongoing research.
Explore Evolution claims that there are inherent limits to the amount of change that evolutionary processes can produce, and that these limits make it improper to extrapolate from short-term research on natural selection in explaining the long-term evolutionary change we see in the fossil record. Alas, their argument for inherent limits to evolutionary change is rooted in exactly the sort of fallacious extrapolation they decry. The work scientists do bears little, if any, resemblance to these sorts of erroneous extrapolation.
To illustrate the claim of improper extrapolation, Explore Evolution actually invents data from whole cloth, presenting a graph of finch beak size "extrapolation" vs. "data" which is actually contradicted by the data obtained from field research. Where the text and graph suggest constant oscillations within fixed limits, research on the Galápagos finches show directional change in beak shape in addition to cyclical changes in beak size. Furthermore, those cycles match the cyclical environment the birds live in, so it is inappropriate to treat those oscillations as inherent limits to the birds' evolutionary capacity, rather than a reflection of their ability to rapidly adapt to large environmental changes with equally large evolutionary change.
As always, Explore Evolution passes up any opportunity to give students the data or opportunity to propose their own tests of any of these claims, belying the book's claim to be inquiry-based. Students are expected to learn by rote that limits exist on evolutionary change. They are never told how researchers actually investigate the ways in which various factors do limit evolutionary change.
Anyone who denies the logical link between genetic changes within a population ("microevolution") and speciation ("macroevolution") is similar to someone who watches the sun come up in the east and move west across the sky, but denies that it will set in the west. The only difference between genetic changes within a population and generation of a new species from that population is time. Given enough time, the sun will set in the west. Given enough time, speciation will occur.
This claim is related to the "young earth" creationist belief that the earth is only a few thousand years old. In this belief system, there has not been enough time for speciation to occur, given the rate of change that we can observe in most populations. So it is necessary for them to deny reality (observations of speciation) in order to validate a creationist perspective on the age of the earth. An age, by the way, that is only about 0.0002% of the approximately 3 billion years over which biological evolution has proceeded.
Nowhere in the discussion of "the information problem" is there any attempt to formally define how students should measure "information." At one point, the authors introduce a strained analogy between upgrading computer software and adding biological information, but never quite explain the analogy. Later they observe that scientists have occasionally referred to DNA as if it were analogous to a computer program. Based on this informal analogical reasoning, they declare "So, biological information is stored in DNA" (p. 94). Teachers who wish to actually discuss this idea in class would be stranded utterly not only by Explore Evolution's treatment of the subject, but by the equally vague attempts by the ID creationists on whose work this section draws.
The field of mathematics known as information theory was developed to address the transmission of information, and it both defines information and describes how information is created. In essence, a mathematically random sequence of symbols (whether letters, DNA bases, or computer bits) has the highest information content possible. A completely predictable sequence contains only as much information as it would otherwise take to accurately predict the sequence. Thus, in information theory, adding random noise actually increases the amount of information being transmitted. Whether that information is useful or not to a listener is a separate matter.
This is where the misuse of "information" throughout Explore Evolution can be confusing. We usually have a very specific expectation for information transmitted over a telephone line, so random static on the line reduces the amount of information we can use. Randomness adds mathematical information, but decreases immediately usable information. A process of selection, mutation, and drift acting on such random information will, in time, extract new elements which are usable.
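A hedged illustration of this point, using Shannon entropy per symbol (the sequences below are made up for the example): a random string of DNA bases carries close to the maximum two bits per base, while a completely predictable one carries essentially none.

    import math
    import random
    from collections import Counter

    def entropy_per_symbol(seq):
        """Shannon entropy of a sequence, in bits per symbol."""
        counts = Counter(seq)
        n = len(seq)
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    random_seq = ''.join(random.choice('ACGT') for _ in range(10000))
    predictable_seq = 'A' * 10000   # completely predictable

    print("random sequence:     ", round(entropy_per_symbol(random_seq), 3), "bits/base")   # close to 2
    print("predictable sequence:", round(entropy_per_symbol(predictable_seq), 3), "bits/base")  # essentially 0

Whether the extra bits in the random sequence are useful to an organism is, as the text notes, a separate question; selection is what sorts usable variants from the rest.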
Evolution itself has no expectations about what data will be transmitted from generation to generation. Random mutations add information to the genome, and natural selection (or artificial selection) acts against those mutations which are not useful at a given moment, promotes those mutations which happen to benefit the organisms possessing them, and has no particular effect on mutations which do not influence the organism's fitness.
Biologists have incorporated this insight into their studies of the evolution of new genes. Gene duplications are common events, resulting from small errors in the process of cell replication. Once a gene is duplicated, it is possible for one copy to mutate, adding information without risking the functioning of the pre-existing gene.
The process of gene duplication has been known since at least 1936; its possible significance for producing the raw material for the evolution of genetic novelty was recognized as early as 1951 (see Zhang 2003 for more on this history).
|Gene|Approximate age (my = million years; y = years)|Notes|
|jingwei|2.5 my|A standard chimeric structure with rapid sequence evolution|
|Sdic|<3 my|Rapid structural evolution for a specific function in sperm tails|
|sphinx|<3 my|A non-coding RNA gene that rapidly evolved new splice sites and sequence|
|Cid|Function diverged in the past 3 my|Co-evolved with centromeres under positive Darwinian selection|
|Dntf-2r|3-12 my|Origin of a new late testis promoter for its male-specific functions|
|Adh-Finnegan|30 my|Recruited a peptide from an unknown source and evolved at a faster rate than its parent gene|
|FOXP2|100,000 y|A selective sweep in this gene, which has language and speech function, took place recently|
|RNASE1B|4 my|Positive selection detected, which corresponds with new biological traits in leaf-eating monkeys|
|PMCHL2|5 my|Expression is specifically and differentially regulated in testis|
|PMCHL1|20 my|A new exon-intron structure in the 3' coding region created de novo, and an intron-containing gene structure created by retroposition|
|Morpheus|12-25 my|Strong positive selection in human-chimpanzee lineages|
|TRE2|21-33 my|A hominoid-specific chimeric gene with testis-specific expression|
|FUT3/FUT6|35 my|New regulatory untranslated exons created de novo in new gene copies; the family has been shaped by exon shuffling, transposition, point mutations and duplications|
|CGβ|34-50 my|One of two subunits of a placentally expressed hormone; the rich biological data clearly detail its function|
|BC200|35-55 my|A non-coding RNA gene that is expressed in nerve cells|
|4.5Si RNA|25-55 my|A non-coding RNA gene that is expressed ubiquitously|
|BC1 RNA|60-110 my|A neural RNA that originated from an unusual source: tRNA|
|Arctic AFGP|2.5 my|Convergent evolution; an antifreeze protein created from an unexpected source, driven by the freezing environment|
|Antarctic AFGP|5-14 my|Convergent evolution; an antifreeze protein created from an unexpected source, driven by the freezing environment|
|Sanguinaria rps1|<45 my|A chimeric gene structure created by lateral gene transfer|
|Cytochrome c1|110 my|Origin of mitochondrial-targeting function by exon shuffling|
|N-acetylneuraminate lyase|<<15 my|A laterally transferred gene from proteobacteria that recruited a signal peptide|
As it has become more practical to trace the sequences of genes in multiple species, scientists have been able to identify genes which went through these processes, acquiring new functions within relatively recent history. That research systematically refutes the claim in Explore Evolution that "whether you're talking about artificial selection or about microevolution that occurs naturally, changes in the sub-population take place as genetic information is lost to that population" (p. 95). In fact, a recent review of the processes by which new genes and new gene functions evolved drew the exact opposite conclusion:
The origination of new genes was previously thought to be a rare event at the level of the genome. This is understandable because, for example, only 1% of human genes have no similarity with the genes of other animals, and only 0.4% of mouse genes have no human homologues, although it is unclear whether these orphan genes are new arrivals, old survivors or genes that lost their identity with homologues in other organisms. However, it does not take many sequence changes to evolve a new function. For example, with only 3% sequence changes from its paralogues, RNASE1B has developed a new optimal pH that is essential for the newly evolved digestive function in the leaf-eating monkey. Although it will take a systematic effort to pinpoint the rate at which new genes evolve, there is increasing evidence from Drosophila and mammalian systems that new genes might not be rare. Patthy compiled 250 metazoan [multicellular animal] modular protein families that were probably created by exon shuffling. Todd et al. investigated 31 diverse structural enzyme superfamilies for which structural data were available, and found that almost all have functional diversity among their members that is generated by domain shuffling as well as sequence changes.Manyuan Long, Esther Betrán, Kevin Thornton and Wen Wang (2003) "The Origin of New Genes: Glimpses from the Young and Old," Nature Reviews: Genetics, 4:865
The table above describes a few well-studied examples of recently evolved genes, and summarizes what scientists have learned about the processes by which those genes evolved. The processes are the same sorts of small-scale mutational changes that we observe in existing populations. It was not necessary to invoke previously undescribed processes, merely to understand how known processes could produce the patterns observed in nature. That is the way scientists typically work, and an inquiry-based textbook ought to teach students to apply those methods. Instead, Explore Evolution ignores actual knowledge, criticizes the scientists who produced that knowledge, and discourages scientific inquiry from students, in favor of vague and untestable speculation.
Biologists do not dispute that limits to evolution may exist, and conduct research to test whether such limits exist. For instance, biologists wonder why no marsupials evolved flight or the sorts of adaptations to swimming seen in other mammals. It is hypothesized that the young marsupials' early crawl to the teat (see our discussion of marsupial reproduction in chapter 12) may place a constraint on the possible final forms the marsupial shoulder can take. While placental mammals give birth to relatively well-developed offspring, marsupials give birth before major nerves, muscles and bones have formed, and the newborn must crawl to the teat (an exception is found in bandicoots of the genus Isoodon, which have a backwards-facing pouch into which the newborn can drop or slither without using its arms). That crawl requires that a functional shoulder exist early in fetal development, and the necessity of forming that functional shoulder so early may prevent the sort of limb diversification seen in other mammals, which ranges from the bat's wing to the cat's leg and on to the whale's flipper.
To test this, Dr. Karen Sears measured the adult and fetal shoulder blades of dozens of marsupial and placental mammals, and performed a statistical analysis of the changes in shape.
As shown in the figure here, the placental mammals changed shoulder shape in many directions as they grew in size, while all of the marsupial limbs moved in the same direction. All but Isoodon, which doesn't use the shoulder during its move from womb to pouch, and so does not face the same developmental constraints.
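For readers who want a feel for the kind of shape analysis described here, this is a sketch of a principal component analysis of morphometric measurements; the data array below is a random placeholder, not Sears' actual measurements:

    import numpy as np

    # Rows are specimens, columns are shoulder-blade measurements.
    # The values are invented placeholders, not real morphometric data.
    rng = np.random.default_rng(0)
    measurements = rng.normal(size=(40, 6))

    # Principal component analysis via the covariance matrix.
    centered = measurements - measurements.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(centered, rowvar=False))
    order = np.argsort(eigvals)[::-1]
    scores = centered @ eigvecs[:, order[:2]]   # position of each specimen on axes 1 and 2

    print(scores[:5])

Plotting such scores for each species, grouped by lineage, is what produces figures like the ones described in the surrounding text, with "axis 1" and "axis 2" as the first two principal components.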
This insight that developmental constraints can limit what evolutionary processes can produce is not new, and is well integrated into textbooks on biology and evolutionary biology (for a recent review, see J. L. Hendrikse, T. E. Parsons and B. Hallgrímsson. 2007. "Evolvability as the proper focus of evolutionary developmental biology," Evolution & Development, 9(4):393–401. For examples of textbook coverage, see pp. 352-365 of Ridley's Evolution, with sections titled: "Genetic constraints may cause imperfect adaptations," "Developmental constraints may cause adaptive imperfection," "Historical constraints may cause adaptive imperfection," "An organism's design may be a trade-off between different adaptive needs" and "Conclusion: constraints on adaptation.")
Other scientists confronting apparent biological constraints did not merely criticize; they proposed new evolutionary mechanisms which would not face those same limitations. The origin of mitochondria and other cellular structures is a case in point. The mitochondrion is the part of the cell in which oxygen is converted into usable energy. Without mitochondria, oxygen would poison every cell in our bodies, and without the molecular energy they produce, each of our cells would starve.
Our cells each have several mitochondria within them. Each of those mitochondria has its own circular genome with which it produces the proteins it needs to process oxygen. Each of the mitochondria possesses two or more cell membranes, rather than the one found around all of our cells. It is impossible to imagine how a cell could exist with only part of a mitochondrion, or why a cell before the era of oxygen might have any of the unique parts present in the mitochondria found in nearly every eukaryotic cell. Even more mysterious was why the mitochondrial genome should be so different from that of every eukaryote. It is much more similar to that of a bacterium.
In the late 1970s, Lynn Margulis proposed that the mitochondria and several other parts of the eukaryotic cell might actually be the descendants of bacteria which were engulfed by the ancestors of all eukaryotes. This would explain the odd genome, and would explain the multiple membranes. The inner membrane is like that possessed by the free-living ancestor of mitochondria, while the outer membranes are the remnants of the vacuole within which that bacterium was captured to be digested. For whatever reason, it wasn't digested, instead helping process oxygen and cellular waste into useful molecular energy.
This theory proposed an entirely novel evolutionary mechanism, endosymbiosis. While some of the endosymbiotic relationships Margulis proposed are seen as unlikely, her explanation of the origin of mitochondria and chloroplasts has become widely accepted within the scientific community. Again, her discovery could form the basis for an inquiry-based discussion of evolutionary mechanisms, and could be enhanced by evidence of transitional stages in the evolution of endosymbiosis found today. Scientists in Japan recently described one such case (Noriko Okamoto and Isao Inouye. 2005. "A Secondary Symbiosis in Progress?" Science, 310(5746):287), and researchers recently showed that a bacterium which commonly invades insect cells sometimes integrates its genes into the host cell, exactly like mitochondria sometimes do (Julie C. Dunning Hotopp, et al. 2007. "Widespread Lateral Gene Transfer from Intracellular Bacteria to Multicellular Eukaryotes," Science [DOI: 10.1126/science.1142490]).
Explore Evolution never mentions this process, despite its obvious pedagogical value, and its utility in addressing the limits of more commonly observed evolutionary mechanisms.
When the sizes of finch beaks oscillate, it is because of an oscillating environment. The size changes within a species are large enough to explain the differences between the various species of Galápagos finches, species Charles Darwin initially thought belonged to several different families of bird. Not all of these changes oscillate; the evolution of Darwin's finches has been directional in some respects.
The figure shown here illustrates that, as the climate in the Galápagos has changed due to the multiyear El Niño cycle, the finches have changed body size, beak size, and beak shape (a measure of width, length and depth). Some of those measurements have returned to levels that were seen historically (inside the faint horizontal lines in the figure), while other measurements continue to diverge.
As the Grants describe in "Unpredictable Evolution in a 30-Year Study of Darwin's Finches", "The temporal pattern of change shows that reversals in the direction of selection do not necessarily return a population to its earlier phenotypic state." This is the opposite of what Explore Evolution describes:
After the heavy rains of 1983, the depth of the average finch beak went back to its pre-drought size, and the so-called "evolutionary change" was reversed.
Explore Evolution, p. 93
One species did wind up at roughly the same beak size, but its beak at the end was sharper than it had been at the beginning, while the other species has a smaller and blunter beak than it had to begin with.
Does this demonstrate that evolution has limits? Not at all. We would not expect finches to evolve more rapidly over 30 typical years. Over the thirty years, the environment oscillated within limits, and the finches evolved and adapted to that changing climate. When the climate returned to the state it had been in at the beginning of the study, the finches became more similar, but not identical, to their initial state. Why students ought to assume that finches could not have changed more if the environment had changed more is not at all clear, especially since the size and shape of finches and their beaks continues to change and to cross historical limits.
Of course, the climate is not constant, either. Peter Grant explains:
The climate of the Galápagos has not remained stable over the last 50,000 years. This is known from an analysis of particles and plant products in cores taken from the sediment of El Junco lake on the summit of San Cristobal (Colinvaux 1972, 1984). Inferences can be made about changes in water level, cloud cover, and heat budget from the composition of the cores at different levels.
The present climate has persisted for the last 3,000 years, and it also prevailed about 6,200 and 8,000 years ago. In the intervening period of 3,200 years it was drier, and possibly hotter, than now. Going back further, it was drier before 8,000 years ago. The most different climate regime from the present one occurred from about 10,000 to 34,000 years ago; this was a time of little precipitation or evaporation.
Grant, P. R. (1999) Ecology and Evolution of Darwin's Finches, Princeton University Press, Princeton, NJ, pp. 29-30
We know that finches can and do undergo significant morphological change when the climate changes. We know the climate changes. Explore Evolution simply insists that we should not follow the syllogism to its conclusion, and decide that the finches would have undergone large changes during periods of large environmental change. Explore Evolution invokes limits, but provides no actual evidence that inherent limits on evolution operate beyond those imposed by limited environmental variability. This is not, needless to say, how science proceeds, and it is not what we would expect from an inquiry-based approach to science. Unanswered questions are not places where scientists draw lines, they are opportunities to make new discoveries. An inquiry-based text should invite students to propose hypotheses about finch evolution, and provide teachers with a suite of data for students to test their hypotheses. | http://ncse.com/book/export/html/960 | 13 |
12 | To hammer home the theory you've just learnt let's look at a simple problem:
Given the digits 0 through 9 and the operators +, -, * and /, find a sequence that will represent a given target number. The operators will be applied sequentially from left to right as you read.
So, given the target number 23, the sequence 6+5*4/2+1 would be one possible solution.
If 75.5 is the chosen number then 5/2+9*7-5 would be a possible solution.
Please make sure you understand the problem before moving on. I know it's a little contrived but I've used it because it's very simple.
First we need to encode a possible solution as a string of bits… a chromosome. So how do we do this? Well, first we need to represent all the different characters available to the solution... that is 0 through 9 and +, -, * and /. This will represent a gene. Each chromosome will be made up of several genes.
Four bits are required to represent the range of characters used:
0: 0000    1: 0001    2: 0010    3: 0011    4: 0100
5: 0101    6: 0110    7: 0111    8: 1000    9: 1001
+: 1010    -: 1011    *: 1100    /: 1101
The above shows all the different genes required to encode the problem as described. The possible genes 1110 & 1111 will remain unused and will be ignored by the algorithm if encountered.
So now you can see that the solution mentioned above for 23, ' 6+5*4/2+1' would be represented by nine genes like so:
0110 1010 0101 1100 0100 1101 0010 1010 0001
6 + 5 * 4 / 2 + 1
These genes are all strung together to form the chromosome:
011010100101110001001101001010100001
A Quick Word about Decoding
Because the algorithm deals with random arrangements of bits it is often going to come across a string of bits like this:
0010001010101110101101110010
Decoded, these bits represent:
0010 0010 1010 1110 1011 0111 0010
2 2 + n/a - 7 2
Which is meaningless in the context of this problem! Therefore, when decoding, the algorithm will just ignore any genes which don’t conform to the expected pattern of: number -> operator -> number -> operator …and so on. With this in mind the above ‘nonsense’ chromosome is read (and tested) as:
2 + 7
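To make the decoding and evaluation steps concrete, here is a small illustrative sketch in Python (the tutorial's downloadable code is C++; the names below are mine). It uses the gene table assumed above, keeps only genes that fit the number -> operator -> number pattern, and applies the operators strictly left to right.

GENE_LENGTH = 4
SYMBOLS = {format(i, '04b'): str(i) for i in range(10)}               # 0000-1001 -> digits 0-9
SYMBOLS.update({'1010': '+', '1011': '-', '1100': '*', '1101': '/'})  # operator genes

def decode(chromosome):
    """Split the bit string into genes, ignoring any gene that does not fit
    the expected number -> operator -> number -> ... pattern."""
    expression = []
    want_number = True
    for i in range(0, len(chromosome) - GENE_LENGTH + 1, GENE_LENGTH):
        symbol = SYMBOLS.get(chromosome[i:i + GENE_LENGTH])           # 1110/1111 -> None
        if symbol is None:
            continue
        if want_number == symbol.isdigit():
            expression.append(symbol)
            want_number = not want_number
    if expression and not expression[-1].isdigit():
        expression.pop()                                              # drop a trailing operator
    return expression

def evaluate(expression):
    """Apply the operators from left to right, as the problem specifies."""
    if not expression:
        return 0.0
    value = float(expression[0])
    for op, number in zip(expression[1::2], expression[2::2]):
        n = float(number)
        if op == '+':
            value += n
        elif op == '-':
            value -= n
        elif op == '*':
            value *= n
        elif n != 0:                                                  # '/': skip division by zero
            value /= n
    return value

print(evaluate(decode('0010001010101110101101110010')))              # the 'nonsense' chromosome -> 2 + 7 = 9.0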
This can be the most difficult part of the algorithm to figure out. It really depends on what problem you are trying to solve but the general idea is to give a higher fitness score the closer a chromosome comes to solving the problem. With regards to the simple project I'm describing here, a fitness score can be assigned that's inversely proportional to the difference between the solution and the value a decoded chromosome represents.
If we assume the target number for the remainder of the tutorial is 42, the chromosome shown above for the target 23 (the one that decodes to 6+5*4/2+1) has a fitness score of 1/(42-23), or 1/19.
As it stands, if a solution is found, a divide by zero error would occur as the fitness would be 1/(42-42). This is not a problem however as we have found what we were looking for... a solution. Therefore a test can be made for this occurrence and the algorithm halted accordingly.
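Here is a matching sketch of the fitness assignment just described, reusing the decode and evaluate helpers from the previous snippet; instead of computing 1/0 when the target is hit, it flags the chromosome as a solution so the run can be halted.

TARGET = 42.0

def fitness(chromosome):
    """Return (score, solved): score is 1/|target - value|, and solved is True
    when the chromosome hits the target exactly."""
    value = evaluate(decode(chromosome))
    if value == TARGET:
        return float('inf'), True
    return 1.0 / abs(TARGET - value), False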
First, please read this tutorial again.
If you now feel you understand enough to solve this problem I would recommend trying to code the genetic algorithm yourself. There is no better way of learning. If, however, you are still confused, I have already prepared some simple code which you can find here. Please tinker around with the mutation rate, crossover rate, size of chromosome etc to get a feel for how each parameter affects the algorithm. Hopefully the code should be documented well enough for you to follow what is going on! If not please email me and I'll try to improve the commenting.
Note: The code given will parse a chromosome bit string into the values we have discussed and it will attempt to find a solution which uses all the valid symbols it has found. Therefore if the target is 42, + 6 * 7 / 2 would not give a positive result even though the first four symbols("+ 6 * 7") do give a valid solution.
(Delphi code submitted by Asbjørn can be found here and Java code submitted by Tim Roberts can be found here)
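If you would like a skeleton to start from before opening the downloadable code, the following minimal Python sketch strings the pieces together with roulette-wheel selection, single-point crossover and bit-flip mutation as described earlier in the tutorial. It reuses GENE_LENGTH, decode and fitness from the snippets above, and the population size and rates are illustrative defaults only, not the values used in the code linked above.

import random

CHROMO_LENGTH = 9 * GENE_LENGTH          # nine genes, as in the worked example
POP_SIZE, CROSSOVER_RATE, MUTATION_RATE = 100, 0.7, 0.001

def random_chromosome():
    return ''.join(random.choice('01') for _ in range(CHROMO_LENGTH))

def roulette_select(population, scores):
    pick, running = random.uniform(0, sum(scores)), 0.0
    for chromo, score in zip(population, scores):
        running += score
        if running >= pick:
            return chromo
    return population[-1]

def crossover(a, b):
    if random.random() < CROSSOVER_RATE:
        point = random.randrange(1, CHROMO_LENGTH)
        return a[:point] + b[point:], b[:point] + a[point:]
    return a, b

def mutate(chromo):
    return ''.join('10'[int(bit)] if random.random() < MUTATION_RATE else bit
                   for bit in chromo)

population = [random_chromosome() for _ in range(POP_SIZE)]
for generation in range(1000):
    results = [fitness(c) for c in population]
    solutions = [c for c, (_, hit) in zip(population, results) if hit]
    if solutions:
        print('solution found:', ' '.join(decode(solutions[0])))
        break
    scores = [score for score, _ in results]
    next_generation = []
    while len(next_generation) < POP_SIZE:
        a, b = crossover(roulette_select(population, scores),
                         roulette_select(population, scores))
        next_generation += [mutate(a), mutate(b)]
    population = next_generation[:POP_SIZE]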
I hope this tutorial has helped you get to grips with the basics of genetic algorithms. Please note that I have only covered the very basics here. If you have found genetic algorithms interesting then there is much more for you to learn. There are different selection techniques to use, different crossover and mutation operators to try and more esoteric stuff like fitness sharing and speciation to fool around with. All or some of these techniques will improve the performance of your genetic algorithms considerably.
Stuff to Try
If you have succeeded in coding a genetic algorithm to solve the problem given in the tutorial, try having a go at the following more difficult problem:
Given an area that has a number of non overlapping disks scattered about its surface as shown in Screenshot 1,
Use a genetic algorithm to find the disk of largest radius which may be placed amongst these disks without overlapping any of them. See Screenshot 2.
As you may have already gathered, I've already written some code that solves this problem so if you get stuck you can find it here. (but you will have a go yourself first eh? ;0)). For those of you without compilers, you can get the executable file here.
| http://www.ai-junkie.com/ga/intro/gat3.html | 13
12 | How to Find the Derivative of a Line
The derivative is just a fancy calculus term for a simple idea that you probably know from algebra — slope. Slope is the fancy algebra term for steepness. And steepness is the fancy word for . . . No! Steepness is the ordinary word you’ve known since you were a kid, as in, Hey, this road sure is steep. Everything you study in differential calculus all relates back to the simple idea of steepness.
Here’s a little vocabulary for you: differential calculus is the branch of calculus concerning finding derivatives; and the process of finding derivatives is called differentiation. Notice that the first and third terms are similar but don’t look like the term derivative. The link between derivative and the other two words is based on the formal definition of the derivative, which is based on the difference quotient. Now you can go and impress your friends with this little etymological nugget.
Don’t be among the legions of people who mix up the slopes of horizontal and vertical lines. How steep is a flat, horizontal road? Not steep at all, of course. Zero steepness. So, a horizontal line has a slope of zero. What’s it like to drive up a vertical road? You can’t do it. And you can’t get the slope of a vertical line — it doesn’t exist, or, as mathematicians say, it’s undefined.
To find points on the line y = 2x + 3 (shown in the figure below), just plug numbers into x and calculate y: plug 1 into x and y equals 5, which gives you the point located at (1, 5); plug 4 into x and y equals 11, giving you the point (4, 11); and so on.
You should remember that the slope of a line is the rise divided by the run:
slope = rise / run
The rise is the distance you go up (the vertical part of a stair step), and the run is the distance you go across (the horizontal part of a step). Now, take any two points on the line — say, (1, 5) and (6, 15) — and figure the rise and the run. You rise up 10 from (1, 5) to (6, 15) because 15 – 5 = 10. And you run across 5 from (1, 5) to (6, 15) because 6 – 1 = 5.
Next, you divide to get the slope:
slope = rise / run = 10 / 5 = 2
You can just plug in the points (1, 5) and (6, 15): slope = (15 - 5) / (6 - 1) = 10 / 5 = 2. | http://www.dummies.com/how-to/content/how-to-find-the-derivative-of-a-line.navId-403861.html | 13
12 | Teaching astronomy and space videos
The resources, built around a series of Teachers TV programmes, aim to support the teaching of astronomy and space to 11-16 year olds.
Produced with generous funding from the Science and Technology Facilities Council, on behalf of the Institute of Physics and Teachers TV, they are now available to watch through a number of websites, including www.schoolsworld.tv/series/teaching-astronomy-and-space.
Within the programmes there are sections to use with students, where astronomers talk about their work in an inspiring and engaging way, as well as guidance and advice on setting up and managing practical activities with students. The activities are supported by full teaching notes. The different sections of the programmes are available to download separately below.
Astronomy and space videos
- Models of the Solar System - Earth, Sun and Moon
Explore the science behind our solar system, and how astronomers are exploring its boundaries
- Saturn and the Scale of the Solar System
Includes stunning images of Saturn and its moons taken from the Cassini spacecraft
- Asteroids and Comets
The risks and dangers of an asteroid collision on Earth
- The Sun
A solar physicist reveals what she knows about the Sun and the latest solar missions
- The Life Cycle of Stars
Explains how we believe stars are born, live and die and the different ends to different sized stars
- The Electromagnetic Spectrum
Explains how astronomers use radiation from across the electromagnetic spectrum to reveal the secrets of our universe
- Exoplanets
An introduction to SuperWASP, one of the most successful exoplanet-finding instruments in the world
- How Big is the Universe?
Explains how astronomers have learnt to measure the distance to the stars
- The Expanding Universe and the Big Bang
Evidence for the Big Bang and the expanding universe.
- The Seasons demo 1
- The Seasons demo 2
- Phases of the Moon
- Solar Eclipses
- Cooking up a Comet
- Elliptical Orbits
- The Earth’s Atmosphere: Why is the Sky Blue?
- Invisible Wavelengths
- Colour and Temperature of Stars
- The Life Cycle of Stars: The Hertzsprung-Russell Diagram
These videos and their teaching notes, as well as additional teaching resources and web links, are all available on a DVD from the education department. Email [email protected] to request a copy. | http://www.iop.org/resources/videos/education/classroom/astronomy/page_51897.html | 13 |
To explain the rules for multiplication of signed numbers, we recall that multiplication of whole numbers may be thought of as shortened addition. Two types of multiplication problems must be examined; the first type involves numbers with unlike signs, and the second involves numbers with like signs.
Consider the example 3(-4), in which the multiplicand is negative. This means we are to add -4 three times; that is, 3(-4) is equal to (-4) + (-4) + (-4), which is equal to -12. For example, if we have three 4-dollar debts, we owe 12 dollars in all.
When the multiplier is negative, as in -3(7), we are to take away 7 three times. Thus, -3(7) is equal to -(7) - (7) - (7), which is equal to -21. For example, if 7 shells were expended in one firing, 7 the next, and 7 the next, there would be a loss of 21 shells in all. Thus, the rule is as follows: The product of two numbers with unlike signs is negative.
The law of signs for unlike signs is sometimes stated as follows: Minus times plus isminus; plus times minus is minus. Thus a problem such as 3(-4) can be reduced to the following two steps:
1. Multiply the signs and write down thesign of’ the answer before working with the numbers themselves.
2. Multiply the numbers as if they were unsigned numbers.
Using the suggested procedure, the sign ofthe answer for 3(-4) is found to be minus. The product of 3 and 4 is 12, and the final answer is -12. When there are more than two numbers to be multiplied, the signs are taken in pairs until the final sign is determined.
When both factors are positive, as in 4(5),the sign of the product is positive. We are to add +5 four times, as follows:
4(5) = 5 + 5 + 5 + 5 = 20
When both factors are negative, as in -4(-5), the sign of the product is positive. We are to take away -5 four times.
Remember that taking away a negative 5 is the same as adding a positive 5. For example, suppose someone owes a man 20 dollars and pays him back (or diminishes the debt) 5 dollars at a time. He takes away a debt of 20 dollars by giving him four positive 5-dollar bills, or a total of 20 positive dollars in all.
The rule developed by the foregoing example is as follows: The product of two numbers with like signs is positive.
Knowing that the product of two positive numbers or two negative numbers is positive, we can conclude that the product of any even number of negative numbers is positive. Similarly, the product of any odd number of negative numbers is negative.
The laws of signs may be combined as follows: Minus times plus is minus; plus times minus is minus; minus times minus is plus; plus times plus is plus. Use of this combined rule may be illustrated as follows:
4(-2)(-5)(6)(-3) = -720
Taking the signs in pairs, the understood plus on the 4 times the minus on the 2 produces a minus. This minus times the minus on the 5 produces a plus. This plus times the understood plus on the 6 produces a plus. This plus times the minus on the 3 produces a minus, so we know that the final answer is negative. The product of the numbers, disregarding their signs, is 720; therefore, the final answer is -720.
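As an illustration (this snippet is not part of the original course), the combined rule amounts to counting the negative factors and multiplying the unsigned numbers; an odd count of negatives makes the product negative.

def signed_product(factors):
    negatives = sum(1 for f in factors if f < 0)
    magnitude = 1
    for f in factors:
        magnitude *= abs(f)
    return -magnitude if negatives % 2 else magnitude

print(signed_product([4, -2, -5, 6, -3]))    # three negative factors -> -720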
Practice problems. Multiply as indicated:
1. 5(-8) = ?
Because division is the inverse of multiplication, we can quickly develop the rules for division of signed numbers by comparison with the corresponding multiplication rules, as in the following examples:
1. Division involving two numbers with unlike signs is related to multiplication with unlike signs, as follows:
3(-4) = -12, therefore -12/3 = -4
Thus, the rule for division with unlike signs is: The quotient of two numbers with unlike signs is negative.
2. Division involving two numbers with like signs is related to multiplication with like signs, as follows:
3(-4) = -12, therefore -12/(-4) = 3
Thus the rule for division with like signs is: The quotient of two numbers with like signs is positive.
The following examples show the application of the rules for dividing signed numbers:
Practice problems. Multiply and divide as indicated: | http://www.tpub.com/math1/4b.htm | 13 |
10 | We have provided a variety of resources to support and extend each chapter in the Level 5 Math Central textbook.
Chapter 1: Whole Numbers and Decimals
Chapter 2: Multiplication of Whole Numbers
Chapter 3: Division of Whole Numbers
Chapter 4: Collecting, Organizing, and Using Data
Chapter 5: Measurement and Geometry
Chapter 6: Multiplication of Decimals
Chapter 7: Division of Decimals
Chapter 8: Geometry
Chapter 9: Fractions and Mixed Numbers
Chapter 10: Addition and Subtraction of Fractions
Chapter 11: Multiplication and Division of Fractions
Chapter 12: Ratio, Percent, and Probability
Chapter 13: Area and Volume
| http://www.eduplace.com/math/mathcentral/grade5/index.html | 13
Diagnosing a black hole flare
May 7th, 2012, in Space & Earth / Astronomy
An optical-IR image showing a galaxy that suddenly brightened when the supermassive black hole at its center shredded and absorbed a star that wandered too close. Credit: NASA; Gezari, Rest, and Chornock
(Phys.org) -- Black holes can come in a wide range of masses. Some, with only about one solar mass, result from the supernova death of a massive star, while those at the center of galaxies (called supermassive black holes) have millions or even billions of solar masses. Supermassive black holes are relatively famous because they are responsible for the powerful jets and other dramatic phenomena seen in some galaxies. The center of our Milky Way galaxy contains a modest-sized supermassive black hole, with about four million solar masses, and (fortunately for us) it is inactive - it lacks the extreme phenomena seen elsewhere.
Black holes are so dense that nothing, not even light, can escape from their gravitational clutches. Still, black holes can be detected because matter that falls into them heats up, and emits bright radiation. A short-lived flare, for example, can result when a body (perhaps a cloud of gas or a star) wanders too close to a black hole and is eaten. Astronomers are particularly interested in measuring the way the brightness of the flare increases, versus its decline, because the shape of the rising emission holds clues to the actual infall process. Observing such events is difficult, though, because the flaring activity may only last for a few months -- by the time it is spotted in the sky the most diagnostic phases of flare activity may have passed. Moreover, flares from smaller supermassive black holes (like the one in the center of the Milly Way) may be correspondingly weaker.
Pan-STARRS (Panoramic Survey Telescope & Rapid Response System) is a telescope with a small mirror (1.8 meters) but a very large field of view, and large digital cameras (1.4 billion pixels) developed especially to look for transient events. It can observe the entire available sky several times a month. In May of 2010 it spotted what appeared to be a flare from a previously inactive, Milky-Way-sized supermassive black hole in a galaxy about two billion light-years away. A team including CfA astronomers Ryan Chornock, Edo Berger, Peter Challis, Gautham Narayan, Ryan Foley, George Marion, Laura Chomiuk, Alicia Soderberg, Bob Kirshner, and Chris Stubbs, then led an aggressive follow-up campaign of observations to see what was going on.
The team reports on their discovery in this week's Nature. They began observing the flare about 40 days after it went off and about 40 days before it peaked, providing excellent data over most of the event. Detailed modeling of the light led the team to conclude that the black hole is less massive than previously thought, only about two million solar masses, and that the object it devoured was probably an evolved star (about 5 billion years old) whose mass was about 0.2 solar masses. These new results provide a particularly impressive, detailed view of what goes on in these exotic cosmic flares, and offer support for the overall model of these flaring events.
Provided by Harvard-Smithsonian Center for Astrophysics
"Diagnosing a black hole flare." May 7th, 2012. http://phys.org/news/2012-05-black-hole-flare.html | http://phys.org/print255599106.html | 13 |
78 | Given that there are thousands of asteroids and probably a hundred thousand million comets, these small bodies must be considered essential components of the solar system. Certainly objects closely similar to the small bodies that remain today were involved in the agglomeration of the larger planets and satellites some 4.5 billion years ago, and much of the importance of the small bodies today derives from the clues that they may contain about the processes that took place in the early solar system. This importance is magnified when we realize that asteroid-like parent bodies are the only solar system objects (other than Earth and the Moon) of which we have samples for detailed laboratory studies.
Although our understanding of small bodies is relatively limited, we know enough to realize that geologically these objects are best studied separately from the larger bodies, such as Earth and the Moon. For one thing, gravity is so much smaller on these bodies that it is difficult to extrapolate our experiences with surface processes on larger objects with any great confidence. For another, many of the small objects are irregular and call for mapping and geodetic techniques quite distinct from those commonly used for the larger (usually almost spherical) planets and satellites.
7.1. What is a Small Body?
It is not easy (nor is it necessary) to give a rigorous definition of a small body. Certainly implicit in the term is that the object has a low surface gravity and small escape velocity. Rather arbitrarily, we can take the largest small body to be the size of the biggest asteroid, Ceres, which has a diameter of some 1000 km. Most small bodies are considerably smaller; the two satellites of Mars, Phobos (21 km) and Deimos (12 km), are more representative.
For an object the size of Phobos, surface gravity is only about 1 cm sec⁻², and the escape velocity is some 10 m sec⁻¹. Weak gravity has several important implications. Since such bodies cannot have atmospheres, their regoliths are immune to weathering processes involving the presence of an atmosphere. On the other hand, they are directly exposed to the whole spectrum of meteoroidal impacts, cosmic rays, solar radiation, and the solar wind. Low gravity also makes it impossible for the body to achieve or retain a spherical shape during its history, and many small bodies tend to be irregular in shape. Additionally, low gravity affects the development of the surface under meteoroidal bombardment. Craters probably tend to remain deeper, ejecta become more dispersed, and the proportion of strongly shocked material retained is smaller than on larger bodies. Furthermore, the chances that an asteroid-like small body will suffer a catastrophic, or nearly catastrophic, impact during its history are non-negligible.
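The quoted figures are easy to reproduce with a back-of-the-envelope calculation; the radius and density in this illustrative sketch are Phobos-like values assumed here, not measurements taken from the report.

import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
radius = 11.0e3        # m, roughly Phobos' mean radius
density = 1.9e3        # kg/m^3, an assumed bulk density

mass = (4.0 / 3.0) * math.pi * radius ** 3 * density
surface_gravity = G * mass / radius ** 2               # about 0.6 cm sec^-2
escape_velocity = math.sqrt(2 * G * mass / radius)     # about 11 m sec^-1

print(surface_gravity * 100.0, 'cm/s^2')
print(escape_velocity, 'm/s')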
The study of meteorites has provided incontrovertible proof that some small parent bodies underwent differentiation (Dodd, 1981). In addition, there is strong evidence of subsurface aqueous processes in some parent bodies (Kerridge and Bunch, 1979) and of surface eruptions of lavas on others (Drake, 1979). The realization of the importance of short-lived nuclides such as 26AI as possible heat sources early in the solar system's history has made it quite plausible that some small bodies should have had early histories of melting and other internal activity (Sonett and Reynolds, 1979). Thus, whereas some small bodies (comet nuclei?) may have had dull evolutionary histories and may rightly be regarded as primitive, others have probably experienced histories almost as complex and certainly as interesting as some larger objects.
The solar system's small bodies can be divided conveniently into three broad categories: (1) rocky objects (asteroids and some small satellites), (2) icy objects (mostly small satellites, but perhaps including such objects as Chiron), and (3) comet nuclei.
The inventory of known small bodies includes thousands of asteroids in the main belt, as well as about 60 Amor, Apollo, and Aten objects. Only about 35 asteroids are larger than 200 km across, although physical measurements have been made of objects as small as 200 meters (Gehrels, 1979). None has yet been studied by spacecraft.
The inventory also includes the small satellites of Mars and of the outer planets. Phobos and Deimos, the two tiny satellites of Mars, are the only very small bodies that have been investigated sufficiently by spacecraft (Mariner 9 and Viking) to permit meaningful discussions of surface geologic processes (Veverka and Thomas, 1979).
Jupiter has at least a dozen small satellites. Except for a few low-resolution images of Amalthea obtained by Voyager, we know almost nothing about the geology of these bodies. There are also at least 70 known Trojan asteroids near the libration points of Jupiter's orbit, and speculations exist that some of Jupiter's outer satellites may be related to them (Degewij and van Houten, 1979).
Recent Earth-based and Voyager observations have greatly expanded our list of Saturn's small satellites, and at least in the case of Mimas and Enceladus, the Voyager data are adequate to support geologic investigation. Beyond Saturn, most of the satellites of Uranus, Neptune's Nereid, and Pluto's Charon probably fall within our definition of small bodies. However, it will be at least 1986 before any spacecraft data on any of these objects are available.
It is worthwhile to stress that the above list is almost certainly incomplete and that new small bodies will continue to be discovered. In addition, there are indications that small, so far undetected satellites are associated with the rings of Uranus and perhaps those of Saturn and Jupiter as well.
Comets are the most abundant small bodies in the solar system: one estimate is that some 10¹¹ exist in the Oort cloud at the fringes of the solar system (Wilkening, 1982). From the geologic point of view, it is only the nuclei of comets that are of interest and not the comas and tails that develop when the nucleus approaches close enough to the Sun for its surface ices to vaporize. Most comet nuclei are believed to be bodies of rock and ice less than 10 km across, but very little direct information about them exists. None has been studied by spacecraft yet. They could be the parent bodies of some volatile-rich meteorites, and there may be an evolutionary connection between them and some asteroids. For example, it has been suggested that some Apollo asteroids are the remnants of extinct short-period comets (Shoemaker and Helin, 1977; Kresak, 1979).
In summary, three facts about small bodies must be kept in mind: (1) their vast number, (2) their great diversity, and (3) our lack of knowledge concerning them.
The next two decades of solar system exploration should remedy our current lack of information about small bodies. We cannot gain a true understanding of the solar system's evolution by ignoring them. They are of interest not only in their own right, but as the solar system's most abundant projectiles, they have influenced, in some cases probably dramatically, the evolution of the surfaces of the larger planets and satellites.
7.3. Why Study Small Bodies?
At least four major reasons for studying small bodies in the geologic context can be given:
It could also be argued that another important reason for studying small bodies is that their geologic record may extend further back in time than that preserved on the surfaces of the larger bodies. Also, many small bodies (including satellites) probably are collisional fragments of large bodies and in some instances could provide accessible information on the differentiation of large parent bodies.
7.3.1. Effects of Small Bodies on Larger Objects
Surfaces in the solar system continue to be modified by impacts, and there is abundant evidence that during the first half billion years of the solar system's existence, the surfaces of planets and satellites were influenced dramatically by collisions with small bodies. From the geologic point of view, we are interested in the time history of the flux and population (size and composition) of the impacting objects at different distances from the Sun. The early fluxes appear to have had a profound influence on the evolution of the crusts of larger bodies, and subsequent fluxes are important in determining relative chronologies of different surface units (chapter 3). The actual nature of the impacting bodies (whether volatile-rich or volatile-poor) may have played a role in determining the evolution of some atmospheres and perhaps even of subsequent weathering processes. For instance, it has been proposed that a significant fraction of some gases in the atmospheres of the terrestrial planets were brought in by comets.
Some of the important questions to be addressed are:
In the above, the term "flux" should be understood to mean not only total flux of bodies of all sizes (or masses) but also information about the relative fluxes of bodies of various sizes (or masses).
A vigorous program of searching for Apollo, Aten, and Amor asteroids, as well as for comets, can answer the first of these questions. The second and third questions are more difficult, but considerable progress is being made in addressing some aspects of them by theoretical calculations.
A closely related issue involves the orbital evolution of the various classes of impacting objects (origin, lifetime, and eventual fate). For example, how do objects end up in Apollo orbits? How long do they stay? What happens to them?
7.3.2. Unique Surface Features and Processes
Not surprisingly, there are processes that are important on small bodies but impossible to predict from an extrapolation of our terrestrial or lunar experience. In fact, it is sometimes even difficult to predict a priori what form a well-known process will take in the small-body environment. For example, a decade ago, there was a legitimate discussion about whether or not there would be recognizable craters on bodies as small as the satellites of Mars. A more serious debate developed about whether appreciable regoliths would form on such small objects. Although we have now learned the answers to such rudimentary questions, we cannot pretend to fully understand the process of cratering and regolith formation on small bodies (Cintala et al., 1978; Housen and Wilkening, 1982). For example, we have no convincing explanation for the grossly different appearance of the surfaces of Phobos and Deimos. Why is it that the surface of the smaller Deimos appears to have retained considerably more regolith than that of the larger Phobos?
Our very limited experience in exploring small bodies has already confirmed that unique and unexpected surface features and processes come into play. No one anticipated the existence of grooves on Phobos, yet this type of feature may well be a common one on many small bodies (Thomas and Veverka, 1979). There is every reason to expect that additional, important surface features and processes will be discovered as our exploration of small bodies proceeds, especially in the cases of small icy satellites and the nuclei of comets.
7.3.3. Small Bodies as Natural Laboratories
Due to their great diversity in size and composition, small bodies provide ideal testing grounds for studying various processes, especially those involving cratering. In principle, one can find small bodies of similar surface gravity but drastically different surface composition (rock versus ice), or bodies of similar composition but very different surface gravity, to test the importance of such variables on crater morphology, ejecta patterns, etc. Much could be learned by comparing surface features and regolith characteristics on three small asteroids of similar surface gravity but of different composition (carbonaceous, stony, or metallic). As a next step, one could investigate the effects of rotation rate on regolith characteristics by comparing two asteroids that are identical in all bulk characteristics except their spin rates. Full exploitation of such possibilities would require an aggressive program of future solar system exploration.
7.3.4. Evolution and Interrelationship
There is ample evidence that some small bodies have had complicated evolutionary histories that involved processes of high interest to planetary geologists. The meteorite record proves that some parent bodies experienced internal differentiation, aqueous metamorphism, and even the eruption of lava onto their surfaces (Dodd, 1981). In many cases, very mature and very complex regoliths were developed (Housen and Wilkening, 1982). Understanding the geologic evolution of such interesting bodies is not only worthy in its own right, but would improve our understanding of the possible interrelationships among small bodies and between the small bodies and larger planets. First, there are questions of the following type to be considered: what styles of eruption and what types of volcanic constructs would one expect on a body as small as Vesta? Or, what kinds of structure control the local emission of gases from a comet nucleus? Second, there are the interrelationship questions; for example, is it geologically reasonable that a comet nucleus can evolve into something like an Apollo asteroid or that some volatile-rich carbonaceous chondrites could come from comets? Unfortunately, in many cases we still lack key observational data to address such important questions meaningfully.
The small bodies of the solar system are of great intrinsic geologic interest that goes beyond their original role as building blocks of planets and their subsequent role as projectiles. They are characterized by vast numbers and by their diversity.
So far, their geologic study has been hampered by a lack of first-hand information of the sort that can be obtained only by direct spacecraft exploration. Even after Viking and Voyager, our inventory of small objects about which enough is known to carry out detailed geological investigations is very meager. It is restricted to a few icy satellites of Saturn and to the two rocky moons of Mars. We have yet to carry out a geologic reconnaissance of an asteroid or a comet nucleus. Although our accumulated knowledge may be adequate to guess what asteroid surfaces may be like in a general way, we really know next to nothing about comet nuclei. Thus, a first-order requirement for progress in our understanding of small bodies is the exploration of at least one asteroid and one comet nucleus during the coming decade. Some important questions, however, can be addressed only by studying a variety of objects.
In the meantime, it is important to continue the ongoing active programs of Earth-based observations of small bodies as well as related laboratory and theoretical investigations. It is especially crucial to continue monitoring the neighborhood of Earth's orbit for small comets and asteroids, since there is no other way of obtaining adequate statistics on the population of such objects.
In terms of data analysis and interpretation, there are enough unresolved questions concerning the small satellites of Mars and of the outer planets to justify a healthy program of analysis of Viking and Voyager data in these areas. For example, the Viking IRTM * measurements of Phobos and Deimos must be fully correlated with imaging data to gain information on regolith characteristics. We must also develop techniques for mapping irregular satellites and making accurate measurements of their topography and volume. We should make a special effort to apply the many lessons we have learned from comparative planetology during the past two decades to considerations of surface and near-surface processes on small bodies. Such extrapolations from our experience with larger bodies will have to be done judiciously, but the effort should prove beneficial to our general understanding of the solar system.
*Infrared Thermal Mapper. | http://history.nasa.gov/SP-467/ch7.htm | 13 |
Between 1900 and 1905, the Wright brothers designed and built three unpowered gliders and three powered aircraft. As they designed each aircraft, how did they know how big to make the wings?
The Wright brothers operated a bicycle shop in Dayton, Ohio, and had a good working knowledge of math and science. They knew about Newton's laws of motion and about forces. They knew that they needed to generate enough lift to overcome the weight of their aircraft. They had written to the Smithsonian when they began their enterprise in 1899 and received technical papers describing the aeronautical theories of the day. There were mathematical equations which could be used to predict the amount of lift that an object would generate. The lift equation is shown on this slide.
The amount of lift generated by an object depends on a number of factors: the density of the air, the velocity between the object and the air, the surface area over which the air flows, the shape of the body, and the body's inclination to the flow, also called the angle of attack.
By the time the Wrights began their studies, it had been determined that lift depends on the square of the velocity and varies linearly with the surface area of the object. Early aerodynamicists characterized the dependence on the properties of the air by a pressure coefficient called Smeaton's coefficient, which represented the pressure force (drag) on a one foot square flat plate moving at one mile per hour through the air. They believed that any object moving through the air converted some portion of the pressure force into lift, and they represented that portion by a lift coefficient. The resulting equation is given as:
L = k * V^2 * A * cl
where L is the lift, k is the Smeaton coefficient, V is the velocity, A is the wing area, and cl is the lift coefficient.
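As an illustration (this sketch is added here and is not part of the original page), the 1900s form of the equation can be written as a small Python function. The historical units are assumed: V in miles per hour, A in square feet, and k, the Smeaton coefficient, in pounds per square foot at one mile per hour, so L comes out in pounds.

def lift_1900(k, velocity_mph, area_sqft, cl):
    # L = k * V^2 * A * cl, in the 1900s form quoted above
    return k * velocity_mph ** 2 * area_sqft * cl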
This equation is slightly different from the modern lift equation used today. The modern equation uses the dynamic pressure of the moving air for the pressure dependence, while this equation uses the Smeaton coefficient. Modern lift coefficients relate the lift force on the object to the force generated by the dynamic pressure times the area, while the 1900's lift coefficients relate the lift force to the drag of a flat plate of equal area. The 1900's equation assumes that you know the perpendicular pressure force on a moving flat plate (Smeaton coefficient). Because of measuring inaccuracies at the time, there were many quoted values for the coefficient ranging from .0027 to .005. Lilienthal had used the .005 value in the design and testing of his wings.
When the Wrights began to design the 1900 aircraft, they used values for the lift coefficient based on the work by Lilienthal, so they too used the .005 value.
During the experiments of 1900 and 1901, the brothers measured the performance of their aircraft. Neither aircraft performed as well as predicted by the lift equation. The 1900 aircraft had been designed to lift itself (100 pounds) plus a pilot (150 pounds) when flown as a kite in a 15 mile per hour wind at 5 degrees angle of attack. But in flight, it could barely lift itself in a 15 mile per hour wind at a much higher angle of attack.
So the brothers began to doubt the .005 value for the Smeaton coefficient and they determined that a value of .0033 more closely approximated their data. The modern accepted value is .00326.
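Using the illustrative function above with a placeholder lift coefficient and a notional wing area (these numbers are assumptions, not values from Lilienthal's tables), the effect of revising Smeaton's coefficient is easy to see: the predicted lift drops by about a third.

area_sqft, speed_mph, cl = 165.0, 15.0, 0.5    # notional design point, placeholder cl
for k in (0.005, 0.0033):
    print(k, lift_1900(k, speed_mph, area_sqft, cl))
# k = 0.005  -> about 93 pounds of predicted lift
# k = 0.0033 -> about 61 pounds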
The brothers also began to doubt the accuracy of Lilienthal's lift coefficients. So in the fall of 1901, they decided to determine their own values for the lift coefficient using a wind tunnel. The brothers built a clever balance to directly measure the ratio of the lift of their models to the drag of an equivalent flat plate.
We have developed an interactive tunnel simulator so that you can duplicate their wind tunnel results.
In the process of testing many airfoil models, the brothers discovered the importance of the aspect ratio of the wing on the lift coefficient. They determined that the Lilienthal data was correct for the wing geometry that he had used, but that the data could not be applied to a wing with a very different geometry. Lilienthal's wings had a rather short span and an elliptical planform, while the brothers used a long, thin, rectangular planform.
The brothers tested over fifty different models to determine how lift and drag are affected by various design parameters, and they used this data to design their later aircraft using the lift equation shown on the slide with their own lift coefficients.
You can view a short movie of "Orville and Wilbur Wright" discussing the lift force and how it affected the flight of their aircraft. The movie file can be saved to your computer and viewed as a Podcast on your podcast player.
- Re-Living the Wright Way
- Beginner's Guide to Aeronautics
- NASA Home Page | http://wright.grc.nasa.gov/airplane/liftold.html | 13 |
10 | Observations of 1 Ceres, the largest known asteroid, have revealed that the object may be a "mini planet," and may contain large amounts of pure water ice beneath its surface.
The observations by NASA's Hubble Space Telescope also show that Ceres shares characteristics of the rocky, terrestrial planets like Earth. Ceres' shape is almost round like Earth's, suggesting that the asteroid may have a "differentiated interior," with a rocky inner core and a thin, dusty outer crust.
"Ceres is an embryonic planet," said Lucy A. McFadden of the Department of Astronomy at the University of Maryland, College Park and a member of the team that made the observations. "Gravitational perturbations from Jupiter billions of years ago prevented Ceres from accreting more material to become a full-fledged planet."
The finding will appear Sept. 8 in a letter to the journal Nature. The paper is led by Peter C. Thomas of the Center for Radiophysics and Space Research at Cornell University in Ithaca, N.Y., and also includes project leader Joel William Parker of the Department of Space Studies at Southwest Research Institute in Boulder, Colo.
Ceres is approximately 580 miles (930 kilometers) across, about the size of Texas. It resides with tens of thousands of other asteroids in the main asteroid belt. Located between Mars and Jupiter, the asteroid belt probably represents primitive pieces of the solar system that never managed to accumulate into a genuine planet. Ceres comprises 25 percent of the asteroid belt's total mass. However, Pluto, our solar system's smallest planet, is 14 times more massive than Ceres.
The astronomers used Hubble's Advanced Camera for Surveys to study Ceres for nine hours, the time it takes the asteroid to complete a rotation. Hubble snapped 267 images of Ceres. From those snapshots, the astronomers determined that the asteroid has a nearly round body. The diameter at its equator is wider than at its poles. Computer models show that a nearly round object like Ceres has a differentiated interior, with denser material at the core and lighter minerals near the surface. All terrestrial planets have differentiated interiors. Asteroids much smaller than Ceres have not been found to have such interiors.
The astronomers suspect that water ice may be buried under the asteroid's crust because the density of Ceres is less than that of the Earth's crust, and because the surface bears spectral evidence of water-bearing minerals. They estimate that if Ceres were composed of 25 percent water, it may have more water than all the fresh water on Earth. Ceres' water, unlike Earth's, would be in the form of water ice and located in the mantle, which wraps around the asteroid's solid core.
Besides being the largest asteroid, Ceres also was the first asteroid to be discovered. Sicilian astronomer Father Giuseppe Piazzi spotted the object in 1801. Piazzi was looking for suspected planets in a large gap between the orbits of Mars and Jupiter. As more such objects were found in the same region, they became known as "asteroids" or "minor planets."
NASA Headquarters, Washington
Goddard Space Flight Center, Greenbelt, Md
Space Telescope Science Institute, Baltimore | http://www.hubblesite.org/newscenter/archive/releases/solar-system/2005/27/text/ | 13 |
69 | In many data analyses in social science, it is desirable to compute a coefficient of association. Coefficients of association are quantitative measures of the amount of relationship between two variables. Ultimately, most techniques can be reduced to a coefficient of association and expressed as the amount of relationship between the variables in the analysis. For instance, with a t test, the correlation between group membership and score can be computed from the t value. There are many types of coefficients of association. They express the mathematical association in different ways, usually based on assumptions about the data. The most common coefficient of association you will encounter is the Pearson product-moment correlation coefficient (symbolized as the italicized r), and it is the only coefficient of association that can safely be referred to as simply the "correlation coefficient". It is common enough so that if no other information is provided, it is reasonable to assume that is what is meant.
Let's return to our data on IQ and achievement in the previous assignment, only this time, disregard the class groups. Just assume we have IQ and achievement scores on thirty people. IQ has been shown to be a predictor of achievement, that is IQ and achievement are correlated. Another way of stating the relationship is to say that high IQ scores are matched with high achievement scores and low IQ scores are matched with low achievement scores. Given that a person has a high IQ, I would reasonably expect high achievement. Given a low IQ, I would expect low achievement. (Please bear in mind that these variables are chosen for demonstration purposes only, and I do not want to get into discussions of whether the relationship between IQ and achievement is useful or meaningful. That is a matter for another class.)
So, the Pearson product-moment correlation coefficient is simply a way of stating such a relationship and the degree or "strength" of that relationship. The coefficient ranges in values from -1 to +1. A value of 0 represents no relationship, and values of -1 and +1 indicate a perfect linear relationships. If each dot represents a single person, and that person's IQ is plotted on the X axis, and their achievement scores is plotted on the Y axis, we can make a scatterplot of the values which allow us to visualize the degree of relationship or correlation between the two variables. The graphic below gives an approximation of how variables X and Y are related at various values of r :
The r value for a set of paired scores can be calculated as follows:
r = SPxy / sqrt(SSx * SSy), where SPxy is the sum of the cross-products of deviations from the means, Σ(X - Mx)(Y - My), and SSx and SSy are the sums of squared deviations for X and Y.
There is another method of calculating r which helps in understanding what the measure actually is. Review the ideas in the earlier lessons of what a z score is. Any set of scores can be transformed into an equivalent set of z scores. The variable will then have a mean of 0 and a standard deviation of 1. The z scores above mean are positive, and z scores below the mean are negative.
The r value for the correlation between the scores is then simply the sum of the products of the z scores for each pair divided by the total number of pairs minus 1:
r = Σ(zx * zy) / (n - 1)
This method of computation helps to show why the r value signifies what it does. Consider several cases of pairs of scores on X and Y. Now, when thinking of how the numerator of the sum above is computed, consider only the signs of the scores and signs of their products. If a person's score on X is substantially below the mean, then their z score is large and negative. If they are also below the mean on Y, their z score for Y is also large and negative. The product of these two z scores is then large and positive. The product is also obviously large and positive if both people score substantially above the mean on both X and Y. So, the more the z scores on X and Y are alike, the more positive the product sum in the equation becomes. Note that if people score opposite on the measures consistently ( negative z scores on X and positive z scores on Y), the more negative the product sum becomes. This system sometimes helps to give insight into how the correlation coefficient works. The r value is then an average of the products between z scores (using n-1 instead of n to correct for population bias). When the signs of the z scores are random throughout the group, there is roughly equal probability of having a positive ZZ product or a negative ZZ product. You should be able to see how this would tend to lead to a sum close to zero.
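To make the two computational routes concrete, here is a small Python sketch (added for illustration; the variable names are arbitrary). The first function uses the deviation-score form of the formula, the second uses the averaged product of z scores; both return the same r.

import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sp = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))    # SPxy
    ssx = sum((xi - mx) ** 2 for xi in x)                      # SSx
    ssy = sum((yi - my) ** 2 for yi in y)                      # SSy
    return sp / math.sqrt(ssx * ssy)

def pearson_r_from_z(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((xi - mx) ** 2 for xi in x) / (n - 1))  # sample standard deviations
    sy = math.sqrt(sum((yi - my) ** 2 for yi in y) / (n - 1))
    zz = sum(((xi - mx) / sx) * ((yi - my) / sy) for xi, yi in zip(x, y))
    return zz / (n - 1)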
Interpretation of r
One interpretation of r is that the square of the value represents the proportion of variance in one variable which is accounted for by the other variable. The square of the correlation coefficient is called the coefficient of determination. It is easy for most people to interpret quantities when they are on a linear scale, but this square relationship creates an exponential relationship which should be kept in mind when interpreting correlation coefficients in terms of "large", "small", etc. Note the graph below which shows the proportion of variance accounted for at different levels of r . Note that not even half of the variance is accounted for until r reaches .71, and that values below .30 account for less than 10% of the variance. Note also how rapidly the proportion of variance accounted for increases between .80 and .90, as compared to between .30 and .40. Note that r = .50 is only 25% of the variance. Be careful not to interpret r in a linear way like it is a percentage or proportion. It is the square which has that quality. That is, don't fall into the trap of thinking of r = .60 as "better than half", because it clearly is not (it is 36%).
There are some obvious caveats in correlation and regression. One has been pointed out by Teri in the last lesson. In order for r to have the various properties needed for it's use in other statistical techniques, and in fact, to be interpreted in terms of proportions of variance accounted for, it is assumed that the relationship between the variables is linear. If the relationship between the variables is curvilinear as shown in the figure below, r will be an incorrect estimate of the relationship.
Notice that although the relationship between the curvilinear variables is actually better than with the linear, the r value is likely to be less for the curvilinear case because the assumption is not met. This problem can be addressed with something called nonlinear regression, which is a topic for advanced statistics. However, it should be obvious that one can transform the y variable (such as with log or square functions) to make the relation linear, and then a normal linear regression can be run on the transformed scores. This is essentially how nonlinear regression works.
Another assumption is called homoscedasticity (HOMO-SEE-DAS-STI-CITY or HOMO-SKEE-DAS-STI-CITY). This is the assumption that the variance of one variable is the same across all levels of the other. The figure below shows a violation of the homoscedasticity assumption. These data are heteroscedastic (HETERO-SKEE-DASTIC). Note that Y is much better predicted at lower levels of X than at higher levels of X :
A related assumption is one of bivariate normality . This assumption is sometimes difficult to understand (and it crops up in even more complicated forms in multivariate statistics), and difficult to test or demonstrate. Essentially, bivariate normality means that for every possible value of one variable, the values of the other variable are normally distributed. You may be able to visualize this by looking at the figure below with thousands of observations (this problem is complicated enough to approach the limits of my artistic ability). Think of the normal curves as being frequency or density at their corresponding values of X or Y. That is, visualize them as perpendicular to the page.
Regression and correlation are very sensitive to these assumptions. The values for this type of analysis should not be over interpreted. That is, quantitative predictions should be tempered by the validity of these assumptions.
It should be intuitive from the explanation of the correlation coefficient that a significant correlation allows some degree of prediction of Y if we know X. In fact, when we are dealing with z scores, the math for this prediction equation is very simple. The predicted Z for the Y score (z'y) is:
When the r value is used in this way, it is called a standardized regression coefficient , and the symbol used to represent it is often a lower case Greek beta (b), so the standardized regression equation for regression of y on x is written as :
When we are not working with z scores, but we are attempting to predict Y raw scores from X raw scores, the equation requires a quantity called the unstandardized regression coefficient. This is usually symbolized as B1, and allows for the following prediction equation for raw scores:
The unstandardized regression coefficient (B1) can be computed from the r value and the standard deviations of the two sets of scores (Equation a). The B0 is the intercept for the regression line, and it can be computed by subtracting the product of B1 and the mean of the x scores from the mean of the y scores (Equation b).
Now, suppose we are attempting to predict Y (achievement) from X (IQ). Assume we have IQ and Achievement scores for a group of 10 people. Suppose I want to develop a regression equation to make the best prediction of a person's Achievement if I am given their IQ score. I would proceed as follows:
First compute r.
Now, it is a simple matter to compute B1 .
B1 = SPxy / SSx = 420 / 512.5 = 0.82
Now compute B0 .
B0 = MY - B1 MX = 94.8 - 0.82(99.5) = 13.2
The regression equation for predicting Achievement from IQ is then
Y' = B0 + B1(X)
ACHIEVEMENT SCORE = 13.2 + 0.82 (IQ)
Error of Prediction
Given an r value between the two variables, what kind of error in my predicted achievement score should be expected? This is a complicated problem, but an over simplified way of dealing with it can be stated which is not too far off for anything other than extreme values. The standard error of the estimate can be thought of roughly as the standard deviation of the expected distribution of true Y values around a predicted Y value. The problem is that this distribution changes as you move across the X distribution, and so the standard error is not correct for most any prediction. However, it does give a reasonable estimate of the confidence interval around predicted scores. For standardized (z) scores, the standard error of the estimate is equation (a). For raw scores, it is equation (b) :
For example, given a predicted Y score of 87, and a standard error of estimate of 5.0, we could speculate that our person's true score is somewhere between 87-2(5) and 87+2(5) for roughly 96% confidence. Again, this is an oversimplification, and the procedures for making precise confidence intervals are best left for another time. | http://jamesstacks.com/stat/pearson.htm | 13 |
19 | The possibility of life on Mars has held human interest for hundreds of years and has recently become an obsession for NASA.
A number of atmospheric probes and surface craft have been sent to Mars to assess the planet’s habitability. The ultimate goal is to send future missions to Mars to directly look for evidence of life, both past and contemporary.
In the midst of all the excitement and anticipation, it’s easy to forget that there have already been missions to Mars specifically designed to detect life. Over thirty years ago in 1976 NASA sent the Viking 1 and 2 spacecraft to Mars. Two landers made it to the surface. These robotic systems harbored four life-detection experiments.
The Viking Biology Experiments
The Gas Exchange Experiment: Martian soil samples were incubated with a nutrient broth. A gas chromatograph monitored headspace samples for the generation of gases like oxygen, carbon dioxide, or methane. Gas production in the soil would indicate biological activity.
The Label Release Experiment: Martian soil samples were incubated with a nutrient broth. Some of the nutrients in the cocktail were labeled with carbon-14. If organisms were present in the soil, they would consume the labeled nutrients and generate radioactive gas. Detection of radioactivity in the headspace would indicate the presence of life.
The Pyrolytic Release Experiment: Martian soil samples were exposed to light, water, carbon-14-labeled carbon dioxide and carbon-14-labeled carbon monoxide. If photosynthetic life was present, the radioactive gases would become incorporated into the soil.
The Gas Chromatograph-Mass Spectrometer Experiment: This instrument was designed to detect and identify organic compounds (both from life and meteoritic infall) in the Martian soil. If life were present, organic materials would be abundant in the Martian soil.
Results of the Viking Biology Experiments
The Gas Exchange Experiment: Gas evolution from the soil was observed.
The Label Release Experiment: Radioactive gas was produced after the soil was incubated with radiolabeled nutrient broth.
The Pyrolytic Release Experiment: The results of this experiment were initially interpreted as evidence for extremely low levels of microbes in the soil. These results were later reinterpreted as a null result.
The Gas Chromatograph-Mass Spectrometer Experiment: No organic compounds were detected in the soil, not even at a trace level.
Interpretation of the Viking Biology Experiments
Even though the Gas Exchange and Label Release experiments gave positive results, the failure to detect organics in the soil was troubling. It is difficult to conceive of life on the Martian surface without organic compounds in the soil.
It appears that a highly oxidizing chemical species in the Martian soil was likely responsible for the release of gases after incubation with nutrients, many of them organic compounds. The oxidizing compounds in the Martian soil would rapidly break down any organic material, generating gases like oxygen and carbon dioxide as the by-products.
The highly oxidizing nature of the Martian soil and the intense exposure of the Martian surface to UV radiation explain why no organics exist in the Martian soil, not even organic materials from meteorite infall. UV radiation, like chemical oxidants, readily destroys organic materials.
The Viking landers looked for life on Mars and failed to detect it.
Revisiting the Interpretation
The interpretation of the Viking results is still discussed by astrobiologists. In fact, during the fall of 2006 a team of scientists published a paper questioning the design of the Gas Chromatograph-Mass Spectrometer Experiment. They argued that the experimental setup was fundamentally unable to detect low levels of organics in the Martian soil. They also maintained that if the organic materials were too refractory, the sample preparation procedure for the Gas Chromatograph-Mass Spectrometer Experiment would fail to release them from the soil, leaving the organics unavailable for detection and analysis. They also raised concerns about oxidation and, hence, destruction, of organics during sample preparation.
In short, these astrobiologists claimed that it was premature to discount the null results for the Gas Chromatograph-Mass Spectrometer experiments aboard the Viking lander. If organics are indeed present on the Martian surface, it means that the results of the Gas Exchange and Label Release experiments very well may be taken as an indication of life on Mars and, at minimum, could motivate future missions to Mars to look for life.
Not So Fast
A recent paper, however, discounts the criticisms leveled against the Gas Chromatograph-Mass Spectrometer Experiment. Klaus Biemann—a world renowned mass spectroscopist—demonstrated that the detection limit of the Viking Gas Chromatograph-Mass Spectrometer Experiment was 1-2 ppb (parts per billion). In fact, when on the surface of Mars, the Gas Chromatograph-Mass Spectrometer successfully detected and identified trace levels of organic contaminants introduced into the system while on Earth. Biemann also showed that the sample preparation procedure would not destroy organics and could readily detect refractory organic materials.
The bottom line: the null results of the Gas Chromatograph-Mass Spectrometer are valid. There are no organics, nor life, on the surface of Mars.
For a more detailed discussion of life on Mars, see the book I wrote with Hugh Ross, Origins of Life: Biblical and Evolutionary Models Face Off. | http://www.reasons.org/articles/viking-invasion-of-mars-thwarted | 13 |
20 | Microwaves are radio waves with wavelengths ranging from as long as one meter to as short as one millimeter, or equivalently, with frequencies between 300 MHz (0.3 GHz) and 300 GHz. This broad definition includes both UHF and EHF (millimeter waves), and various sources use different boundaries. In all cases, microwave includes the entire SHF band (3 to 30 GHz, or 10 to 1 cm) at minimum, with RF engineering often putting the lower boundary at 1 GHz (30 cm), and the upper around 100 GHz (3 mm).
Apparatus and techniques may be described qualitatively as "microwave" when the wavelengths of signals are roughly the same as the dimensions of the equipment, so that lumped-element circuit theory is inaccurate. As a consequence, practical microwave technique tends to move away from the discrete resistors, capacitors, and inductors used with lower-frequency radio waves. Instead, distributed circuit elements and transmission-line theory are more useful methods for design and analysis. Open-wire and coaxial transmission lines give way to waveguides and stripline, and lumped-element tuned circuits are replaced by cavity resonators or resonant lines. Effects of reflection, polarization, scattering, diffraction, and atmospheric absorption usually associated with visible light are of practical significance in the study of microwave propagation. The same equations of electromagnetic theory apply at all frequencies.
The prefix "micro-" in "microwave" is not meant to suggest a wavelength in the micrometer range. It indicates that microwaves are "small" compared to waves used in typical radio broadcasting, in that they have shorter wavelengths. The boundaries between far infrared light, terahertz radiation, microwaves, and ultra-high-frequency radio waves are fairly arbitrary and are used variously between different fields of study.
Microwave technology is extensively used for point-to-point telecommunications (i.e., non broadcast uses). Microwaves are especially suitable for this use since they are more easily focused into narrow beams than radio waves, their comparatively higher frequencies allow broad bandwidth and high data flow, and also allowing smaller antenna size because antenna size is inversely proportional to transmitted frequency (the higher the frequency, the smaller the antenna size). Microwaves are the principal means by which data, TV, and telephone communications are transmitted between ground stations and to and from satellites. Microwaves are also employed in microwave ovens and in radar technology.
At about 20 GHz, decreasing microwave transmission through air is seen, due at lower frequencies from absorption from water and at higher frequencies from oxygen. A spectral band structure causes fluctuations in this behavior (see graph at right). Above 300 GHz, the absorption of microwave electromagnetic radiation by Earth's atmosphere is so great that it is in effect opaque, until the atmosphere becomes transparent again in the so-called infrared and optical window frequency ranges.
|Name||Wavelength||Frequency (Hz)||Photon Energy (eV)|
|Gamma ray||less than 0.02 nm||more than 15 EHz||more than 62.1 keV|
|X-Ray||0.01 nm – 10 nm||30 EHz – 30 PHz||124 keV – 124 eV|
|Ultraviolet||10 nm – 400 nm||30 PHz – 750 THz||124 eV – 3 eV|
|Visible||390 nm – 750 nm||770 THz – 400 THz||3.2 eV – 1.7 eV|
|Infrared||750 nm – 1 mm||400 THz – 300 GHz||1.7 eV – 1.24 meV|
|Microwave||1 mm – 1 meter||300 GHz – 300 MHz||1.24 meV – 1.24 µeV|
|Radio||1 mm – 100,000 km||300 GHz – 3 Hz||1.24 meV – 12.4 feV|
Microwave sources
High-power microwave sources use specialized vacuum tubes to generate microwaves. These devices operate on different principles from low-frequency vacuum tubes, using the ballistic motion of electrons in a vacuum under the influence of controlling electric or magnetic fields, and include the magnetron (used in microwave ovens), klystron, traveling-wave tube (TWT), and gyrotron. These devices work in the density modulated mode, rather than the current modulated mode. This means that they work on the basis of clumps of electrons flying ballistically through them, rather than using a continuous stream of electrons.
Low-power microwave sources use solid-state devices such as the field-effect transistor (at least at lower frequencies), tunnel diodes, Gunn diodes, and IMPATT diodes. Low-power sources are available as benchtop instruments, rackmount instruments, embeddable modules and in card-level formats.
A maser is a device similar to a laser, which amplifies light energy by stimulating photons. The maser, rather than amplifying visible light energy, amplifies the lower-frequency, longer-wavelength microwaves and radio frequency emissions.
Before the advent of fiber-optic transmission, most long-distance telephone calls were carried via networks of microwave radio relay links run by carriers such as AT&T Long Lines. Starting in the early 1950s, frequency division multiplex was used to send up to 5,400 telephone channels on each microwave radio channel, with as many as ten radio channels combined into one antenna for the hop to the next site, up to 70 km away.
Wireless LAN protocols, such as Bluetooth and the IEEE 802.11 specifications, also use microwaves in the 2.4 GHz ISM band, although 802.11a uses ISM band and U-NII frequencies in the 5 GHz range. Licensed long-range (up to about 25 km) Wireless Internet Access services have been used for almost a decade in many countries in the 3.5–4.0 GHz range. The FCC recently[when?] carved out spectrum for carriers that wish to offer services in this range in the U.S. — with emphasis on 3.65 GHz. Dozens of service providers across the country are securing or have already received licenses from the FCC to operate in this band. The WIMAX service offerings that can be carried on the 3.65 GHz band will give business customers another option for connectivity.
Metropolitan area network (MAN) protocols, such as WiMAX (Worldwide Interoperability for Microwave Access) are based on standards such as IEEE 802.16, designed to operate between 2 to 11 GHz. Commercial implementations are in the 2.3 GHz, 2.5 GHz, 3.5 GHz and 5.8 GHz ranges.
Mobile Broadband Wireless Access (MBWA) protocols based on standards specifications such as IEEE 802.20 or ATIS/ANSI HC-SDMA (such as iBurst) operate between 1.6 and 2.3 GHz to give mobility and in-building penetration characteristics similar to mobile phones but with vastly greater spectral efficiency.
Some mobile phone networks, like GSM, use the low-microwave/high-UHF frequencies around 1.8 and 1.9 GHz in the Americas and elsewhere, respectively. DVB-SH and S-DMB use 1.452 to 1.492 GHz, while proprietary/incompatible satellite radio in the U.S. uses around 2.3 GHz for DARS.
Microwave radio is used in broadcasting and telecommunication transmissions because, due to their short wavelength, highly directional antennas are smaller and therefore more practical than they would be at longer wavelengths (lower frequencies). There is also more bandwidth in the microwave spectrum than in the rest of the radio spectrum; the usable bandwidth below 300 MHz is less than 300 MHz while many GHz can be used above 300 MHz. Typically, microwaves are used in television news to transmit a signal from a remote location to a television station from a specially equipped van. See broadcast auxiliary service (BAS), remote pickup unit (RPU), and studio/transmitter link (STL).
Most satellite communications systems operate in the C, X, Ka, or Ku bands of the microwave spectrum. These frequencies allow large bandwidth while avoiding the crowded UHF frequencies and staying below the atmospheric absorption of EHF frequencies. Satellite TV either operates in the C band for the traditional large dish fixed satellite service or Ku band for direct-broadcast satellite. Military communications run primarily over X or Ku-band links, with Ka band being used for Milstar.
Radar uses microwave radiation to detect the range, speed, and other characteristics of remote objects. Development of radar was accelerated during World War II due to its great military utility. Now radar is widely used for applications such as air traffic control, weather forecasting, navigation of ships, and speed limit enforcement.
Radio astronomy
Most radio astronomy uses microwaves. Usually the naturally-occurring microwave radiation is observed, but active radar experiments have also been done with objects in the solar system, such as determining the distance to the Moon or mapping the invisible surface of Venus through cloud cover.
Global Navigation Satellite Systems (GNSS) including the Chinese Beidou, the American Global Positioning System (GPS) and the Russian GLONASS broadcast navigational signals in various bands between about 1.2 GHz and 1.6 GHz.
Heating and power application
A microwave oven passes (non-ionizing) microwave radiation (at a frequency near 2.45 GHz) through food, causing dielectric heating primarily by absorption of the energy in water. Microwave ovens became common kitchen appliances in Western countries in the late 1970s, following development of inexpensive cavity magnetrons. Water in the liquid state possesses many molecular interactions that broaden the absorption peak. In the vapor phase, isolated water molecules absorb at around 22 GHz, almost ten times the frequency of the microwave oven.
Microwave heating is used in industrial processes for drying and curing products.
Microwave frequencies typically ranging from 110 – 140 GHz are used in stellarators and more notably in tokamak experimental fusion reactors to help heat the fuel into a plasma state. The upcoming ITER Thermonuclear Reactor is expected to range from 110–170 GHz and will employ Electron Cyclotron Resonance Heating (ECRH).
Microwaves can be used to transmit power over long distances, and post-World War II research was done to examine possibilities. NASA worked in the 1970s and early 1980s to research the possibilities of using solar power satellite (SPS) systems with large solar arrays that would beam power down to the Earth's surface via microwaves.
Less-than-lethal weaponry exists that uses millimeter waves to heat a thin layer of human skin to an intolerable temperature so as to make the targeted person move away. A two-second burst of the 95 GHz focused beam heats the skin to a temperature of 130 °F (54 °C) at a depth of 1/64th of an inch (0.4 mm). The United States Air Force and Marines are currently using this type of active denial system.
Microwave radiation is used in electron paramagnetic resonance (EPR or ESR) spectroscopy, typically in the X-band region (~9 GHz) in conjunction typically with magnetic fields of 0.3 T. This technique provides information on unpaired electrons in chemical systems, such as free radicals or transition metal ions such as Cu(II). Microwave radiation is also used to perform rotational spectroscopy and can be combined with electrochemistry as in microwave enhanced electrochemistry.
Microwave frequency bands
The microwave spectrum is usually defined as electromagnetic energy ranging from approximately 1 GHz to 100 GHz in frequency, but older usage includes lower frequencies. Most common applications are within the 1 to 40 GHz range. One set of microwave frequency bands designations by the Radio Society of Great Britain (RSGB), is tabulated below:
|Letter Designation||Frequency range||Wavelength range||Typical uses|
|L band||1 to 2 GHz||15 cm to 30 cm||military telemetry, GPS, mobile phones (GSM), amateur radio|
|S band||2 to 4 GHz||7.5 cm to 15 cm||weather radar, surface ship radar, and some communications satellites (microwave ovens, microwave devices/communications, radio astronomy, mobile phones, wireless LAN, Bluetooth, ZigBee, GPS, amateur radio)|
|C band||4 to 8 GHz||3.75 cm to 7.5 cm||long-distance radio telecommunications|
|X band||8 to 12 GHz||25 mm to 37.5 mm||satellite communications, radar, terrestrial broadband, space communications, amateur radio|
|Ku band||12 to 18 GHz||16.7 mm to 25 mm||satellite communications|
|K band||18 to 26.5 GHz||11.3 mm to 16.7 mm||radar, satellite communications, astronomical observations|
|Ka band||26.5 to 40 GHz||5.0 mm to 11.3 mm||satellite communications|
|Q band||33 to 50 GHz||6.0 mm to 9.0 mm||satellite communications, terrestrial microwave communications, radio astronomy, automotive radar|
|U band||40 to 60 GHz||5.0 mm to 7.5 mm|
|V band||50 to 75 GHz||4.0 mm to 6.0 mm||millimeter wave radar research and other kinds of scientific research|
|E band||60 to 90 GHz||3.3 mm to 5 mm||UHF transmissions|
|W band||75 to 110 GHz||2.7 mm to 4.0 mm||satellite communications, millimeter-wave radar research, military radar targeting and tracking applications, and some non-military applications|
|F band||90 to 140 GHz||2.1 mm to 3.3 mm||SHF transmissions: Radio astronomy, microwave devices/communications, wireless LAN, most modern radars, communications satellites, satellite television broadcasting, DBS, amateur radio|
|D band||110 to 170 GHz||1.8 mm to 2.7 mm||EHF transmissions: Radio astronomy, high-frequency microwave radio relay, microwave remote sensing, amateur radio, directed-energy weapon, millimeter wave scanner|
When radars were first developed at K band during World War II, it was not realized that there was a nearby absorption band (due to water vapor and oxygen at the atmosphere). to avoid this problem, the original K band was split into a lower band, Ku, and upper band, Ka see.
Microwave frequency measurement
Microwave frequency can be measured by either electronic or mechanical techniques.
Frequency counters or high frequency heterodyne systems can be used. Here the unknown frequency is compared with harmonics of a known lower frequency by use of a low frequency generator, a harmonic generator and a mixer. Accuracy of the measurement is limited by the accuracy and stability of the reference source.
Mechanical methods require a tunable resonator such as an absorption wavemeter, which has a known relation between a physical dimension and frequency.
In a laboratory setting, Lecher lines can be used to directly measure the wavelength on a transmission line made of parallel wires, the frequency can then be calculated. A similar technique is to use a slotted waveguide or slotted coaxial line to directly measure the wavelength. These devices consist of a probe introduced into the line through a longitudinal slot, so that the probe is free to travel up and down the line. Slotted lines are primarily intended for measurement of the voltage standing wave ratio on the line. However, provided a standing wave is present, they may also be used to measure the distance between the nodes, which is equal to half the wavelength. Precision of this method is limited by the determination of the nodal locations.
Health effects
Microwaves do not contain sufficient energy to chemically change substances by ionization, and so are an example of nonionizing radiation. The word "radiation" refers to energy radiating from a source and not to radioactivity. It has not been shown conclusively that microwaves (or other nonionizing electromagnetic radiation) have significant adverse biological effects at low levels. Some, but not all, studies suggest that long-term exposure may have a carcinogenic effect. This is separate from the risks associated with very high intensity exposure, which can cause heating and burns like any heat source, and not a unique property of microwaves specifically.
During World War II, it was observed that individuals in the radiation path of radar installations experienced clicks and buzzing sounds in response to microwave radiation. This microwave auditory effect was thought to be caused by the microwaves inducing an electric current in the hearing centers of the brain. Research by NASA in the 1970s has shown this to be caused by thermal expansion in parts of the inner ear.
When injury from exposure to microwaves occurs, it usually results from dielectric heating induced in the body. Exposure to microwave radiation can produce cataracts by this mechanism, because the microwave heating denatures proteins in the crystalline lens of the eye (in the same way that heat turns egg whites white and opaque). The lens and cornea of the eye are especially vulnerable because they contain no blood vessels that can carry away heat. Exposure to heavy doses of microwave radiation (as from an oven that has been tampered with to allow operation even with the door open) can produce heat damage in other tissues as well, up to and including serious burns that may not be immediately evident because of the tendency for microwaves to heat deeper tissues with higher moisture content.
History and research
The existence of radio waves was predicted by James Clerk Maxwell in 1864 from his equations. In 1888, Heinrich Hertz was the first to demonstrate the existence of radio waves by building a spark gap radio transmitter that produced 450 MHz microwaves, in the UHF region. The equipment he used was primitive, including a horse trough, a wrought iron point spark, and Leyden jars. He also built the first parabolic antenna, using a zinc gutter sheet. In 1894 Indian radio pioneer Jagdish Chandra Bose publicly demonstrated radio control of a bell using millimeter wavelengths, and conducted research into the propagation of microwaves.
Perhaps the first, documented, formal use of the term microwave occurred in 1931:
- "When trials with wavelengths as low as 18 cm were made known, there was undisguised surprise that the problem of the micro-wave had been solved so soon." Telegraph & Telephone Journal XVII. 179/1
In 1943, the Hungarian engineer Zoltán Bay sent ultra-short radio waves to the moon, which, reflected from there, worked as a radar, and could be used to measure distance, as well as to study the moon.
Perhaps the first use of the word microwave in an astronomical context occurred in 1946 in an article "Microwave Radiation from the Sun and Moon" by Robert Dicke and Robert Beringer. This same article also made a showing in the New York Times issued in 1951.
In the history of electromagnetic theory, significant work specifically in the area of microwaves and their applications was carried out by researchers including:
|Work carried out by||Area of work|
|Barkhausen and Kurz||Positive grid oscillators|
|Hull||Smooth bore magnetron|
|Varian Brothers||Velocity modulated electron beam → klystron tube|
|Randall and Boot||Cavity magnetron|
See also
- Block upconverter (BUC)
- Cosmic microwave background radiation
- Electron cyclotron resonance
- International Microwave Power Institute
- Low-noise block converter (LNB)
- Microwave transmission
- Microwave chemistry
- Microwave auditory effect
- Microwave cavity
- Microwave radio relay
- Orthomode transducer (OMT)
- Plasma-enhanced chemical vapour deposition
- Rain fade
- RF switch matrix
- Thing (listening device)
- Tropospheric scatter
- Pozar, David M. (1993). Microwave Engineering Addison–Wesley Publishing Company. ISBN 0-201-50418-9.
- R. Sorrentino, Giovanni Bianchi, Microwave and RF Engineering, John Wiley & Sons, 2010, p. 4
- Microwave Oscillator notes by Herley General Microwave
- Liou, Kuo-Nan (2002). An introduction to atmospheric radiation. Academic Press. p. 2. ISBN 0-12-451451-0. Retrieved 12 July 2010.
- "IEEE 802.20: Mobile Broadband Wireless Access (MBWA)". Official web site. Retrieved August 20, 2011.
- "the way to new energy". ITER. 2011-11-04. Retrieved 2011-11-08.
- "Electron Cyclotron Resonance Heating (ECRH)". Ipp.mpg.de. Retrieved 2011-11-08.
- Raytheon's Silent Guardian millimeter wave weapon[dead link]
- "eEngineer – Radio Frequency Band Designations". Radioing.com. Retrieved 2011-11-08.
- PC Mojo – Webs with MOJO from Cave Creek, AZ (2008-04-25). "Frequency Letter bands – Microwave Encyclopedia". Microwaves101.com. Retrieved 2011-11-08.
- For other definitions see Letter Designations of Microwave Bands.
- Merrill I. Skolnik, Introduction to Radar Systems, Third Ed., Page 522, McGraw Hill, 2001.
- Goldsmith, JR (December 1997). "Epidemiologic evidence relevant to radar (microwave) effects". Environmental Health Perspectives 105 (Suppl. 6): 1579–1587. doi:10.2307/3433674. JSTOR 3433674. PMC 1469943. PMID 9467086.
- Philip L. Stocklin, US Patent 4,858,612, December 19, 1983
- "''The work of Jagdish Chandra Bose: 100years of MM-wave research'', retrieved 2010 01 31". Tuc.nrao.edu. Retrieved 2011-11-08.
|Wikimedia Commons has media related to: Microwaves (radio)|
- EM Talk, Microwave Engineering Tutorials and Tools
- Microwave Technology Video
- Millimeter Wave and Microwave Waveguide dimension chart. | http://www.digplanet.com/wiki/Microwave | 13 |
43 | From Math Images
|Gradients and Directional Derivatives|
|Change of Coordinate Systems|
|Math for Computer Graphics and Computer Vision|
Vectors are quantities that are specified by both a magnitude and direction. Perhaps the simplest vector is a Euclidean vector, represented by an arrow in Euclidean space. Refer to Image 1 below. This arrow has a length and points in some direction.
The length of a vector is called the magnitude. The magnitude is denoted by . The magnitude of a vector is a scalar quantity, a numerical value.
Image 1 is a vector. However, Image 1 is not very useful because it's unclear where it the vector is positioned. We need to introduce the Cartesian coordinate system (the x-y graph) to properly give the direction of a vector. Assigning x and y coordinates to a vector allows us to be more precise when we talk about the location of a vector. Refer to Modified Image 1 below. In this updated image, we now know the x and y coordinates. Our vector is 6 units in the x direction and 8 units in the y direction.
Another way to locate a vector uses unit vectors. In the two-dimensional coordinate system, the vectors and are our unit vectors. It's clear that unit vectors have a length of one. In textbooks, unit vectors can be written in bold text or have a hat placed above the variables like so, .
Any vector may be written in terms of our unit vectors and through scalar multiplication and addition (this is discussed in the graphical introduction below). For now, imagine the unit vector is a rubber substance that can stretch or shrink. With this property, we change change the magnitude of the unit vector such that it can express any vector in the coordinate system.
For example, can be written as the following:
We can write the vector in the Modified Image 1 as .
In order to give a vector's position using unit vectors, we write it as a combination of unit vectors that are placed along the the coordinate axes. Unit vectors correspond to the x-y-z coordinate system this is in three-dimensions. points along the x-axis points along the y-axis and points along the z-axis. Unit vectors (sometimes called the standard basis vectors) are used in physics, engineering and linear algebra.
The notation is used to emphasize the "vector" nature of a vector while the coordinate notation is use to emphasize the "point" nature of a vector.
When we have a vector quantity we put an arrow on top of the labeling letter to remind us that it is a vector. It looks like so . One way of writing vectors is by components, like this: . For example, suppose we want to write a specific vector in components, and we know the vector goes 3 units in the x direction, 2 units in the y direction and 0 units in the z direction. Then we can simply write: .
The components of the same vector can also be written as: . This vector has an x-component of 3, a y-component of 2 and a z-component of 0. This is shown in the image on the below.
With as any numerical values we can also write any vector in terms of standard unit vectors: . If we are given we know that vector A is 3 units in the x and y direction and 2 units in the z direction. This vector could equivalently be written as or . It is shown in the image below.
If a vector is still a bit abstract to you then think of a compass. The arrow has a certain length, this is our magnitude, and it points in any direction (north, south, east, and west).
Click here for a graphical introduction to vectors:
Click here for an algebraic introduction to vectors:
Click here for an interactive demonstration: | http://mathforum.org/mathimages/index.php/Vector | 13 |
21 | The inverse of the sine function is known as arc sine, most math libraries shorten this to asin
The inverse of the cosine function is known as arc cosine, most math libraries shorten this to acos
The inverse of the tangent function is known as arc tangent, most math libraries shorten this to atan or atan2
The trig functions are many to one, therefore the inverse trig functions have
many possible results. We usually assume that:
acos returns the angle between 0 and pi
asin returns the angle between -pi/2 and pi/2
atan returns the angle between -pi/2 and pi/2
atan2 returns the angle between -pi and pi
Which value to use?
As we can see from the above graphs, trig functions are many to one. That is, for a given value, it could be produced by many angles. In fact since the graph repeats every 2 pi (360 degrees) there are an infinite number of angles. The values returned by the inverse trig functions are shown above. Usually we want the smallest angle that will represent the required rotation.
Rectangular To Polar using atan function
For information about polar coordinates see here.
If we want to convert the rectangular coordinates x,y to the polar coordinates θ,r then we can do so as follows:
We can calculate r from:
r2 = x2 + y2
and θ from:
tan(θ) = y / x
θ = atan(y / x)
There are some potential problems with the above approach:
- It does not work for a full range of angles from 0° to 360°, only angles between -90° and +90° will be returned, other angles will be 180° out of phase. For example: the point x=-1,y=-1 will produce the same angle as x=1,y=1 but the above diagram shows that these are in different quadrants.
- points on the vertical axis have x=0, so when we calculate the intermediate result y/x we will get infinity which will generate an exception when calculated on the computer.
Most maths libraries have a atan2(y,x) function which takes both x and y as operands, which allows it to get round the above problems.
In the first quadrant atan2(y,x) is equivalent to atan(y/x) since:
= sin(a)/cos(a) = opposite/adjacent = y/x
atan2(y,x) = atan2(opposite,adjacent)
Most maths libraries, for example java, define the order of operands as: atan2(y,x) but some, for example Excel spreadsheet reverse the order of the operands as follows:
X_num is the x-coordinate of the point.
Y_num is the y-coordinate of the point.
Therefore you should always check the order of the order of operands for the maths library you are using. | http://www.euclideanspace.com/maths/geometry/trig/inverse/index.htm | 13 |
11 | In this section we need to take a look at the velocity and
acceleration of a moving object.
From Calculus I we know that given the position function of
an object that the velocity of the object is the first derivative of the
position function and the acceleration of the object is the second derivative
of the position function.
So, given this it shouldn’t be too surprising that if the
position function of an object is given by the vector function then the velocity and acceleration of the
object is given by,
Notice that the velocity and acceleration are also going to
be vectors as well.
In the study of the motion of objects the acceleration is
often broken up into a tangential
component, aT, and a normal component, aN. The
tangential component is the part of the acceleration that is tangential to the
curve and the normal component is the part of the acceleration that is normal
(or orthogonal) to the curve. If we do
this we can write the acceleration as,
where and are the unit tangent and unit normal for the
If we define then the tangential and normal components of
the acceleration are given by,
where is the curvature
for the position function.
There are two formulas to use here for each component of the
acceleration and while the second formula may seem overly complicated it is
often the easier of the two. In the
tangential component, v, may be messy
and computing the derivative may be unpleasant. In the normal component we will already be
computing both of these quantities in order to get the curvature and so the
second formula in this case is definitely the easier of the two.
Let’s take a quick look at a couple of examples.
Example 1 If
the acceleration of an object is given by find the object's velocity and position
functions given that the initial velocity is and the initial position is .
We’ll first get the velocity. To do this all (well almost all) we need to
do is integrate the acceleration.
To completely get the velocity we will need to determine
the “constant” of integration. We can
use the initial velocity to get this.
The velocity of the object is then,
We will find the position function by integrating the
Using the initial position gives us,
So, the position function is,
Example 2 For
the object in the previous example determine the tangential and normal
components of the acceleration.
There really isn’t much to do here other than plug into
the formulas. To do this we’ll need to
Let’s first compute the dot product and cross product that
we’ll need for the formulas.
Next, we also need a couple of magnitudes.
The tangential component of the acceleration is then,
The normal component of the acceleration is, | http://tutorial.math.lamar.edu/Classes/CalcII/Velocity_Acceleration.aspx | 13 |
21 | Student Reasoning about Average
Average is a term which has common meanings such as "That's average" (meaning not very good), as well as mathematical meanings such as mean, mode and median. The news often reports average ambiguously where the use of mean, mode or median is not clear.
Learning Sequence on Average with teacher comments
Mean, mode and median are different ways to express the central tendency of a data set. Each has particular usefulness depending on the type of data and its variation. For example:
- Mode (most frequent) is often used when reporting categorical data. "The average man drinks beer."
- Median describes the middle position of a data set that has been ordered from smallest to largest. It is useful in giving a sense of a central value when data sets have a few high numbers which could skew results, for example in providing average house prices.
- Mean is a formulaic approach to analysing a data set of numbers. It uses procedures of addition and division. "The average family has 2.3 kids."
Research has found that students' concept of average usually starts with a notion of mode which is a more intuitive concept - what has the most? It then progresses to median - what is the middle value? The concept of mean requires an understanding of formal maths and calculation and is developed quite a bit later. Younger students might have difficulty in understanding what an average of 2.3 kids actually means - "Um, does it mean there are two older children and then one under ten or something?"
Development of thinking
Looking at the central tendencies of a data set can be problematic without also considering the variation. Students can get an intuitive sense of variation through graphing and comparison of different data sets within the same context.
Over the primary and middle school years it is likely that student understanding of average will develop in the following sequence, encompassing the three conventional definitions of mean, median, and mode.
|Average Student - Grade 7 class|
Once students have the procedure for finding the mean, they develop the ability to solve problems using it.
- Working a mean value backward knowing the number of data values, can produce the total of all values and sometimes missing data values.
- Working with weighted means can combine means for different sized sets.
- Means can be used to compare sets of different sample sizes.
Eventually students will have an intuition for when it is appropriate to use the mean to answer questions about data sets and will consider the variation present as well as the single mean value. | http://www.simerr.educ.utas.edu.au/numeracy/student_reasoning/average.htm | 13 |
10 | Teacher introduces the alphabet; fairy tales, folk tales and nature study; form drawing; reading approached through writing; addition, subtraction, multiplication, division; beginning foreign language, knitting, singing, and recorder.
Listening Skills. Given oral presentations of stories (primarily fairy tales and nature stories) of progressive duration up to 20 minutes, children will, after a 24-hour interval, recall and sequence the principal characters and details of the story. They will create drawings and plays will be created from previously told stories.
The teacher will present rhythms, short plays, and poems orally, as well as games such as “Simon Says” and rhythmical activities requiring attention to verbal directions. Children will increase their retention of, and/or their response time to these activities.
Children learn to play simple pentatonic songs on a recorder. Call and response methods of teaching enhance auditory recall.
Speaking Skills. Through the introduction of poems, rhymes, tongue twisters and dramatic activities, children will be introduced to various sounds and sequences of sounds to develop diction and fluidity of speech.
Letters of the alphabet and phonetic aspects of speech will also be introduced through verse.
Plays, songs and verse will be performed before parents and/or assemblies in a choral format with particular attention to diction and expression.
Writing/Spelling and Reading Skills. Through stories and pictures presented by the teacher, children will become familiar with writing and recognizing the upper case alphabet. Letters and letter sounds will be practiced through movement and action games with appropriate gestures, forms, or activities.
Stories will be illustrated and short sentences will be written to form the first readers. These readers will be used in developing word recognition and the practice of intrinsic phonetic approaches to reading. Initial sounds and word families will be emphasized.
After confidence in recognition of letters and letterform is displayed, lower case letters may be introduced and capitalization will be emphasized as children continue to copy stories and verse from the blackboard. Attention will increasingly be given to clarity of form, word recognition, phonetic value and word families. Initial capitalization and full stop sentence closure will be introduced and emphasized.
Basic Sensory-Integrative Skills
Visual-Motor. Through playing a pentatonic flute children will learn to isolate and control individual finger movements.
Through wet-on-wet watercolor paintings, children will learn to control a medium through proper use of a paintbrush.
Through drawing large symmetrical forms, the children will practice control of the hand and also be asked to create matching sides in a mirror form. Pencil grip and pressure will thereby be introduced.
Balance and Movement. Through repeated circle games, children will gain control of bodily movement and balance.
Through adaptive movement work children will gain control of sensory processes (such as impulse control and static balance) foundational to academic performance.
Numbers 1—12. Through stories, games, picture symbols and arithmetic activities the qualities, quantities and writing of the numbers 1—12 will be explored.
Through the use of story, nature objects, movement and rhythmical activities, the concept of the whole being divided into many parts will be demonstrated.
Counting. Through games, song, movement, calendar work and stories, children will work with numbers to 144. Children will learn to count forward and backwards connecting gross motor movements to speech for memory enhancement.
Through drawings, games and “hands on” activities with various classroom materials, children will be exposed to, and practice sequencing, grouping and writing of numbers with cognitive experience of values.
By using games and rhythmical activities, students will experience number patterns of 1’s, 2’s, 3’s, 4’s, 5’s and 10’s. Values and relationships of these patterns will also be experienced through these exercises. Multiplication and division facts will later be drawn from the students’ use of these number patterns.
The Four Mathematical Processes. Through story and picture, the uses and qualities of the four processes (addition, subtraction, multiplication, and division) will be introduced.
The relationships between these processes will be explored through story, rhyme and picture.
Through games involving manipulation of materials (e.g., stones, beads, beans, blocks) and various rhythmical games and activities, the various processes and math facts will be practiced, memorized and explored in concrete ways.
through experience with concrete operations of the four processes, the students will practice mental problem solving.
Time. Routines will be introduced to provide practical and meaningful situations for learning to tell duration of time.
Through observations of the natural world as well as stories, songs, poems, games, and festivals relating to natural processes, children will learn the markers of seasonal change (see sciences).
Through constructing a daily calendar of events, children will learn the names of the months, the days of the week, and sequences of events.
Ways of Family and Neighborhood. Through stories and class discussions, the students will become conscious of the ways of family and neighborhood life among children in the class.
Social Conventions. Through stories and modeling, students will become aware of social conventions that make life run smoothly: good manners, traffic patterns (e.g., how best to ride one’s bike on the road, how best to cross the street) keeping home, yard and street tidy.
Social Conduct. Through practice of appropriate classroom and school ground behaviors, such as waiting to speak, helping others with materials, waiting turn, etc., children will learn skills of social conduct.
Children will develop awareness of, and appreciation for, natural surroundings as well as an understanding of seasonal changes and their effects on nature through a combination of nature walks, nature stories, songs, and fingerplays. During the walks, observation and informal discussion will take place. The stories will explore natural laws in an accurate but imaginative way, e.g., the water cycle as the journey of a raindrop, the metamorphosis of a caterpillar to a butterfly, the development from seed to flower, and the effect of the seasons on animals. | http://pineforestschool.org/the_grades/grade1/ | 13 |
19 | Information is not readily found at a bargain price. Gathering it is costly in terms of salaries, expenses and time. Taking samples of information can help ease these costs because it is often impractical to collect all the data. Sound conclusions can often be drawn from a relatively small amount of data; therefore, sampling is a more efficient way to collect data. Using a sample to draw conclusions is known as statistical inference. Making inferences is a fundamental aspect of statistical thinking.
There are four primary sampling strategies:
Before determining which strategy will work best, the analyst must determine what type of study is being conducted. There are normally two types of studies: population and process. With a population study, the analyst is interested in estimating or describing some characteristic of the population (inferential statistics).
With a process study, the analyst is interested in predicting a process characteristic or change over time. It is important to make the distinction for proper selection of a sampling strategy. The “I Love Lucy” television show’s “Candy Factory” episode can be used to illustrate the difference. For example, a population study, using samples, would seek to determine the average weight of the entire daily run of candies. A process study would seek to know whether the weight was changing over the day.
Random samples are used in population sampling situations when reviewing historical or batch data. The key to random sampling is that each unit in the population has an equal probability of being selected in the sample. Using random sampling protects against bias being introduced in the sampling process, and hence, it helps in obtaining a representative sample.
In general, random samples are taken by assigning a number to each unit in the population and using a random number table or Minitab to generate the sample list. Absent knowledge about the factors for stratification for a population, a random sample is a useful first step in obtaining samples.
For example, an improvement team in a human resources department wanted an accurate estimate of what proportion of employees had completed a personal development plan and reviewed it with their managers. The team used its database to obtain a list of all associates. Each associate on the list was assigned a number. Statistical software was used to generate a list of numbers to be sampled, and an estimate was made from the sample.
Like random samples, stratified random samples are used in population sampling situations when reviewing historical or batch data. Stratified random sampling is used when the population has different groups (strata) and the analyst needs to ensure that those groups are fairly represented in the sample. In stratified random sampling, independent samples are drawn from each group. The size of each sample is proportional to the relative size of the group.
For example, the manager of a lending business wanted to estimate the average cycle time for a loan application process. She knows there are three types (strata) of loans (large, medium and small). Therefore, she wanted the sample to have the same proportion of large, medium and small loans as the population. She first separated the loan population data into three groups and then pulled a random sample from each group.
Systematic sampling is typically used in process sampling situations when data is collected in real time during process operation. Unlike population sampling, a frequency for sampling must be selected. It also can be used for a population study if care is taken that the frequency is not biased.
Systematic sampling involves taking samples according to some systematic rule – e.g., every fourth unit, the first five units every hour, etc. One danger of using systematic sampling is that the systematic rule may match some underlying structure and bias the sample.
For example, the manager of a billing center is using systematic sampling to monitor processing rates. At random times around each hour, five consecutive bills are selected and the processing time is measured.
Rational subgrouping is the process of putting measurements into meaningful groups to better understand the important sources of variation. Rational subgrouping is typically used in process sampling situations when data is collected in real time during process operations. It involves grouping measurements produced under similar conditions, sometimes called short-term variation. This type of grouping assists in understanding the sources of variation between subgroups, sometimes called long-term variation.
The goal should be to minimize the chance of special causes in variation in the subgroup and maximize the chance for special causes between subgroups. Subgrouping over time is the most common approach; subgrouping can be done by other suspected sources of variation (e.g., location, customer, supplier, etc.)
For example, an equipment leasing business was trying to improve equipment turnaround time. They selected five samples per day from each of three processing centers. Each processing center was formed into a subgroup.
When using subgrouping, form subgroups with items produced under similar conditions. To ensure items in a subgroup were produced under similar conditions, select items produced close together in time.
This article focused on basic sampling strategies. An analyst must determine which strategy applies to a particular situation before determining how much data is required for the sample. Depending on the question the analyst wants to answer, the amount of sample data needed changes. The analyst should collect enough baseline data to capture an entire iteration (or cycle) of the process.
An iteration should account for the different types of variation seen within the process, such as cycles, shifts, seasons, trends, product types, volume ranges, cycle time ranges, demographic mixes, etc. If historical data is not available, a data collection plan should be instituted to collect the appropriate data.
Factors affecting sample size include:
Sample size calculators are available to make the determination of sample size much easier; it is best, however, that an analyst consults with a Master Black Belt and/or Black Belt coach until he or she is comfortable with determining sample size. | http://www.isixsigma.com/tools-templates/sampling-data/basic-sampling-strategies-sample-vs-population-data/ | 13 |
Lesson: Electrons on the Move
Contributed by: Integrated Teaching and Learning Program, College of Engineering, University of Colorado at Boulder
Educational Standards
Pre-Req Knowledge
atoms, electrons, electric charge
Learning Objectives
After this lesson, students should be able to:
Introduction/Motivation
Ask the students: Have you ever had to replace the batteries in a flashlight? (Many will answer yes.) Why did you have to replace the batteries? (Possible answers: The batteries were dead, the flashlight did not work or the light was dim.) Once you place new batteries in the flashlight, you complete an electric circuit: the flashlight operates and the light shines brightly. Remind students that atoms are made of smaller parts called protons, neutrons and electrons. Electrons carry a negative electric charge and can move from atom to atom, creating current electricity. Tell students that during this lesson, they will learn how the electrons' charge can help light a bulb in a flashlight and what is trying to stop the charge from lighting the bulb!
If you look closely at a battery, you will see a small number with the letter "V" next to it. Does anyone know what the letter represents? (Answer: Volts.) Let students know that during this lesson, they will find out what volts have to do with charge in a circuit.
Ask the students: Does anyone know of any alternatives to generating current electricity at a power plant? (Possible answers: Photovoltaic cells/solar cells, wind farms.) Photovoltaic (PV) cells, commonly called solar cells, have been powering satellites in space for decades. Most people have seen solar cells on calculators, and on road signs and lights along highways. Photovoltaic cells use sunlight to make electricity. Using photovoltaic cells to produce electricity does not produce the polluting emissions that conventional power plants produce. Conventional fossil fuels require costly operations to extract, while sunlight is freely available everywhere. Unfortunately, photovoltaic cells are still expensive to manufacture (and require non-solar power to manufacture!). Engineers and scientists are working to make solar electricity affordable for everyone.
Lesson Background & Concepts for Teachers
Electrical Potential Energy and Voltage
The force between any two charges depends on both the product of the charges and the distance between them. The force between two like-charged objects is repulsive, whereas the force between two oppositely-charged objects is attractive. Therefore, it takes energy to push two like-charged objects together or to pull two unlike-charged objects apart. For example, if we were to take two negatively-charged objects and compare the energy required to hold them at different distances from each other, we would find that the amount of energy we need to expend is increased as we bring the two negatively-charged objects closer together. This is analogous to the effect you experience when trying to push the like poles of two magnets together.
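For teachers who want the quantitative version of the relationship described above, the force between two small charged objects is given by Coulomb's law (this is background only; the constant and symbols below are standard physics, not part of the student lesson):

\[
F = k\,\frac{q_1 q_2}{r^2},\qquad k \approx 8.99\times10^{9}\ \mathrm{N\,m^2/C^2}
\]

Here q1 and q2 are the two charges and r is the distance between them: the force grows with the product of the charges and falls off with the square of the distance, so doubling the separation cuts the force to one quarter. Like charges give a repulsive force; unlike charges give an attractive one.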
The closer together the two like-charged objects are (or the farther apart two oppositely-charged objects are) the more electrical potential energy they have. The amount of electrical potential energy per charge is called the voltage. It may be helpful to present voltage as the "electrical pressure" that causes the electrons to move in a conductor. If electric current is analogous to water moving in a pipe, then electrical pressure (voltage) is analogous to water pressure in a pipe. A pump in a water line would be analogous to a voltage source. Batteries, generators, photovoltaic cells and other voltage sources all provide electrical energy that can be used to do work.
The SI unit (SI is the abbreviation for the International System of measurement, from the French Système Internationale) of electrical potential, or voltage, is the volt [V]. Small batteries have voltages ranging from 1.5 V to 9 V; a 1.5 V battery, for example, maintains a potential difference of 1.5 V across its terminals. The electric outlets in homes provide electricity at 120 V or 220 V. Power lines are at 10,000 V, or higher, in order to reduce energy losses due to the resistance of the transmission cables.
Charge Moves Due to a Voltage Difference
There is a flow of electric charge, an electric current, if the ends of a conducting wire are held across a voltage source (potential difference). The "electrical pressure" due to the difference in voltage between the terminals of a battery causes the charge (electrons) to move through the wire from the negative terminal to the positive terminal. When a device such as a light bulb or radio is attached to the battery, the voltage difference appears across that device as a voltage drop. A voltage source, such as a battery, generator or photovoltaic cell, can provide the sustained "electrical pressure" required to maintain a current. Current is measured in amperes (or amps) [A] in the SI system. One amp is the flow of about 6.25 x 10^18 electrons per second.
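As a quick check on that last number, the electrons-per-second figure follows directly from the charge carried by a single electron. The short Python sketch below uses the standard value of the elementary charge (a physical constant not given in the lesson) and reproduces the rounded figure quoted above.

```python
# One ampere is one coulomb of charge passing a point each second.
# Dividing by the charge of a single electron gives electrons per second.

ELEMENTARY_CHARGE = 1.602e-19   # coulombs per electron (standard physical constant)

current_amps = 1.0              # 1 A = 1 coulomb per second
electrons_per_second = current_amps / ELEMENTARY_CHARGE

print(f"{electrons_per_second:.2e}")  # about 6.24e+18, matching the rounded figure above
```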
Any path through which charges can move is called an electric circuit. If there is a break in the path, there cannot be a current; such a circuit is called an open circuit. However, if the path for movement of charge is complete, then the circuit is closed. There can only be a current in a closed circuit. Electrons cannot pile up or disappear in a circuit. A circuit can be as simple as a wire connected to both terminals of a battery, or as complicated as an integrated circuit in a home computer.
Resistance, Conductors and Insulators
Different materials oppose the movement of charge to varying degrees. The resistance of an object is a measurement of the degree of opposition to charge movement within that object. Conductors (such as metals) have lower resistances while insulators (such as wood or plastic) have higher resistances. An object's resistance depends on the materials that make up the object, its length, cross-sectional area and temperature.
If we continue with the water analogy for electric current, we can think of the resistance of a material as being like boulders in a river, which slow the flow of water. Two objects made of the same material can have different resistances if their physical dimensions are different. Water in a wide riverbed (or a hose with a large diameter) has less resistance to flow than water in a narrow riverbed (or a hose with a small diameter). The resistance of a thick copper wire is less than the resistance of a thin copper wire. Longer pieces of a material have greater resistances than shorter pieces. Thus, we can see that the resistance to charge movement is cumulative in a material. Finally, for most conductors, lowering the temperature decreases the resistance. The SI unit of resistance is the ohm [Ω], which is equal to one volt per amp [V/A].
It is important to note that any material can conduct electricity if there is a high enough voltage across it. This is what happens both in lightning and electrocution. Air is normally an insulator, but during thunderstorms, a very high electrical potential difference between the clouds and the ground forces a current through the air briefly. In the body, the skin acts as an electrical insulator. When there is a high voltage across the body, there is a brief discharge through the body, damaging the tissues and possibly causing death. The likelihood of electrocution is increased if the skin is wet. This is because salts (from perspiration or soils) on the body dissolve in the water, producing a conducting solution.
Current, Voltage and Resistance Relationships
The current in a circuit is directly proportional to the voltage across the circuit and inversely proportional to the resistance of the circuit. This relationship is called Ohm's law. For a given voltage, there is greater current in a circuit element with a lower relative resistance. Also, for a given resistance, there is greater current in a circuit element if there is a greater voltage across it. The following equations, Ohm's law, describe the relationship:
I = V / R
V = I * R
Where I is current, V is voltage and R is resistance. For example, suppose a flashlight's batteries provide a total of 2 V across a light bulb with a resistance of 10 ohms. What is the current? (Answer: I = V / R = 2 V / 10 Ω = 0.2 A.)
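For teachers who want to show Ohm's law as a calculation, here is a minimal Python sketch of the flashlight example above; the function name and structure are illustrative only, not part of the lesson.

```python
def current_from_ohms_law(voltage_volts, resistance_ohms):
    """Ohm's law: I = V / R, returning current in amperes."""
    return voltage_volts / resistance_ohms

# Flashlight example from the text: 2 V across a 10-ohm bulb.
print(current_from_ohms_law(2.0, 10.0))   # 0.2 (amperes)
```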
How Do Batteries Work?
In a battery, chemical energy is converted to electrical energy. Whenever a battery is connected in a closed circuit, a chemical reaction inside the battery produces electrons. The electrons produced in this reaction collect on the negative terminal of the battery. Next, electrons move from the negative terminal, through the circuit, and back to the positive battery terminal. Without a good conductor connecting the negative and positive terminals of the battery, the chemical reaction that produces electrons would not occur.
There are many different types of batteries, each using different materials in the chemical reaction and each producing a different voltage. A battery is actually several galvanic cells (devices in which chemical energy is converted to electrical energy) connected together. Every cell has two electrodes, the anode and cathode, and an electrolyte solution. Electrons are produced in the reaction at the anode, while electrons are used in the reaction at the cathode. The electrolyte solution allows ions to move between the cathode and the anode, where they are involved in chemical reactions that balance the movement of electrons.
Possibly the most familiar battery reaction takes place in a car battery. This reaction involves the disintegration of lead in sulfuric acid. In a lead-acid battery, each cell has two lead grids, one filled with spongy lead and one filled with lead oxide, immersed in sulfuric acid. The grid with spongy lead is the anode: electrons are produced as the lead reacts with sulfuric acid. These electrons collect on the negative terminal of the battery. The grid with lead oxide is the cathode in a lead-acid battery. Electrons that have gone through the circuit and returned to the cathode are used in a reaction that takes place at the cathode. Each cell produces 2 V. In a car battery, there are six of these lead-acid cells linked together in series to produce a total voltage of 12 V.
The lead-acid cell is called a wet cell because the reaction takes place in a liquid electrolyte. Dry cells have a moist, pasty electrolyte. Most batteries used in consumer electronics are dry cells. Alkaline batteries are dry cells that use zinc and manganese-oxide electrodes with a basic (pH greater than 7) electrolyte. In inexpensive batteries, there is usually an acid electrolyte with zinc and carbon electrodes.
Engineers, who design computers, cars, cell phones, satellites, spacecraft, portable electronic devices, etc., must understand batteries because they are integral to a device's functioning. Batteries are also used to store the energy generated from solar electric panels and wind turbines. Many engineers are working to develop batteries that last longer, are more efficient, weigh less, are less harmful to the environment, require less maintenance and/or are more powerful.
Vocabulary/Definitions (Return to Contents)
Associated Activities (Return to Contents)
Lesson Closure (Return to Contents)
Ask students to give examples of devices that use current electricity. Have the students categorize the devices by the source of electricity, whether from solar cells, typical chemical batteries, a wall outlet (ultimately from a power plant) or a portable generator. Ask students to list some advantages and disadvantages of using the different power sources. As a class, discuss the functions of various devices, paying attention to the role of current electricity and the transformations of energy in the device. For example, contrast the use of current electricity to power a lamp and a fan. (The electricity is converted to light in the lamp, and to the movement of the blades in the fan.)
Assessment (Return to Contents)
Brainstorming: In small groups, have students engage in open discussion. Remind students that in brainstorming, no idea or suggestion is "silly." All ideas should be respectfully heard. Encourage wild ideas and discourage criticism of ideas. Ask the students:
Know / Want to Know / Learn (KWL) Chart: Before the lesson, ask students to write down in the top left corner of a piece of paper (or as a group on the board) under the title, Know, all the things they know about electricity. Next, in the top right corner under the title, Want to Know, ask students to write down anything they want to know about electricity. After the lesson, ask students to list in the bottom half of the page under the title, Learned, all of the things that they have learned about electricity.
Discussion Question: Solicit, integrate, and summarize student responses.
Lesson Summary Assessment
Numbered Heads: Divide the class into teams of three to five. Have students on each team number off so each member has a different number. Ask the students one of the questions below (give them a time frame for solving it, if desired). The members of each team should work together to answer the question. Everyone on the team must know the answer. Call a number at random. Students with that number should raise their hands to give the answer. If not all the students with that number raise their hands, allow the teams to work a little longer. Ask the students:
Know / Want to Know / Learn (KWL) Chart: Finish the remaining section of the KWL Chart as described in the Pre-Lesson Assessment section. After the lesson, ask students to list in the bottom half of the page under the title, Learned, all of the things that they have learned about electricity.
Lesson Extension Activities (Return to Contents)
Have students learn more about solar cells by conducting an Internet search. Photovoltaic cells can only be made of certain materials, called semiconductors, which are between conductors and insulators in their ability to conduct electricity. Silicon is the most commonly used semiconductor in photovoltaic cells. Whenever light hits a PV cell, some of the energy is absorbed by the cell. This energy can knock electrons loose from the atoms that make up the semiconductor material. An internal electric field in the PV cell forces these loose electrons to move in a particular direction, thus creating an electric current. Metal contacts at the top and bottom of a photovoltaic cell, like the terminals on a battery, connect the PV cell to an electric circuit. This "circuit" may be the electrical system of a building or a single device. PV cells produce direct current (DC), current in one direction only, just like a battery. Most household appliances use alternating current (AC), which in the U.S. cycles back and forth 60 times per second. The direct current from a PV cell can be modified to produce alternating current so it can be used by any electrical appliance. PV cells can be linked together in different ways to make panels for various applications. The photovoltaic system for a home might require a dozen panels while a calculator may have only one PV cell. For more information on photovoltaic cells, see: http://www.howstuffworks.com/solar-cell.htm.
Have students learn more about solar panels and systems by conducting an Internet search to find companies that make or sell photovoltaic (PV) panels. What are the typical costs of a solar panel? What are some applications? (Possible answers: Rural electrification, pumping water, electricity for homes and businesses.) Have students find out which parts of the world are the best for using photovoltaic systems to produce electricity. Where are the largest PV systems?
What is Volta's Pile? (Answer: A famous device, essentially the first battery, built by Alessandro Volta in 1800 that produced electricity by chemical means and spurred intense research in the field of electricity.) Have students investigate and build a variation of Volta's Pile. See instructions at: http://www.funsci.com/fun3_en/electro/electro.htm
References (Return to Contents)
Guyton, Arthur C. and Hall, John E. Textbook of Medical Physiology. 10th Edition. Philadelphia, PA: W.B. Saunders, 2000.
Hewitt, Paul G. Conceptual Physics. 8th Edition. New York, NY: Addison Publishing Company, 1998.
How Batteries Work, How Stuff Works, Inc., Media Network, accessed March 2004. http://www.howstuffworks.com/battery.htm
How Solar Cells Work, How Stuff Works, Inc., Media Network, accessed March 2004. http://www.howstuffworks.com/solar-cell.htm
Contributors
Xochitl Zamora Thompson, Sabre Duren, Joe Friedrichsen, Daria Kotys-Schwartz, Malinda Schaefer Zarske, Denise Carlson
Copyright © 2004 by Regents of the University of Colorado.
The contents of this digital library curriculum were developed under a grant from the Fund for the Improvement of Postsecondary Education (FIPSE), U.S. Department of Education and National Science Foundation GK-12 grant no. 0226322. However, these contents do not necessarily represent the policies of the Department of Education or National Science Foundation, and you should not assume endorsement by the federal government.
Supporting Program (Return to Contents)
Integrated Teaching and Learning Program, College of Engineering, University of Colorado at Boulder
Last Modified: March 1, 2013 | http://www.teachengineering.org/view_lesson.php?url=collection/cub_/lessons/cub_electricity/cub_electricity_lesson03.xml&std_open=true | 13 |
11 | How to Subtract Vectors
You don’t come across vector subtraction very often in physics problems, but it does pop up. To subtract two vectors, you put their feet (or tails, the non-pointy parts) together; then draw the resultant vector, which is the difference of the two vectors, from the head of the vector you’re subtracting to the head of the vector you’re subtracting it from.
To make heads or tails of this, check out the above figure, where you subtract A from C (in other words, C – A). As you can see, the result is B, because C = A + B.
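In component form, the same picture reduces to subtracting coordinates. The short Python sketch below uses made-up example vectors (they are not taken from the figure) to show that C – A recovers B whenever C = A + B.

```python
def subtract(c, a):
    """Component-wise vector subtraction: returns c - a."""
    return tuple(ci - ai for ci, ai in zip(c, a))

# Hypothetical vectors: if A = (1, 2) and B = (3, 1), then C = A + B = (4, 3).
A = (1, 2)
B = (3, 1)
C = (4, 3)

print(subtract(C, A))   # (3, 1), which is B
```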
Another (and for some people, easier) way to do vector subtraction is to reverse the direction of the second vector (A in C – A) and use vector addition; that is, reverse the direction of A, making it –A, and add it to C: C – A = C + (–A), which gives B as the resultant vector. | http://www.dummies.com/how-to/content/how-to-subtract-vectors.html | 13 |
10 | As we discussed in pre-algebra, percent is a ratio that compares a number to 100. Percent means per hundred and is usually expressed with the percent symbol %.
Percent problems are usually solved by using proportions.
In a classroom, 14 of the 21 students are female. What percent does that correspond to?
We know that the ratio of girls to all students is 14/21, and we know that this ratio is in proportion to a ratio with the denominator 100:
14/21 = x/100
As we saw in the last section, from here we can calculate x:
x = (100 · 14)/21 ≈ 67
i.e. 67% of the students in the class are female.
One of the ratios in these proportions is always a comparison of two numbers (above, 14/21). These numbers are called the percentage (14) and the base (21). The other ratio is called the rate and always has the denominator 100.
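If you want to check the arithmetic with a few lines of Python, the classroom example looks like this (the variable names simply mirror the percentage/base/rate terms above):

```python
# Solve 14/21 = x/100 for x, the rate per hundred.
percentage = 14   # the part being compared (the girls)
base = 21         # the whole (all the students)

rate = 100 * percentage / base
print(round(rate))   # 67 -> about 67% of the students are female
```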
Another way of saying this is that
percentage / base = rate / 100
Percent of change, or p%, indicates how much a quantity has increased or decreased in comparison with the original amount. It is calculated as
p% = (amount of change / original amount) · 100
Johnny is at the store, where a big sign tells him that there is a $4.99 discount on a shirt that originally costs $39.99. But how big is the discount in percent?
p% = (4.99 / 39.99) · 100 ≈ 12
The price of the shirt has decreased by about 12%.
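The percent-of-change formula can also be wrapped in a small helper and applied to the shirt example (the function is only an illustration):

```python
def percent_of_change(original, change):
    """Return the change expressed as a percent of the original amount."""
    return 100 * change / original

# Shirt example: a $4.99 discount on a shirt that originally costs $39.99.
print(round(percent_of_change(39.99, 4.99)))   # 12 -> about a 12% decrease
```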
Video lesson: A price increases from $500 to $585. How big is the increase in percent? | http://www.mathplanet.com/education/algebra-1/how-to-solve-linear-equations/calculating-with-percents | 13 |
52 | Washington, D.C.—The Moon has much more water than previously thought, a team of scientists led by Carnegie's Erik Hauri has discovered. Their research, published May 26 in ScienceExpress, shows that inclusions of magma trapped within crystals collected during the Apollo 17 mission contain 100 times more water than earlier measurements. These results could markedly change the prevailing theory about the Moon's origin.
The research team used a state-of-the-art NanoSIMS 50L ion microprobe to measure seven tiny samples of magma trapped within lunar crystals as so-called "melt inclusions." These samples came from volcanic glass beads—orange in appearance because of their high titanium content—which contained crystal-hosted melt inclusions. These inclusions were prevented from losing the water within when explosive volcanic eruptions brought them from depth and deposited them on the Moon's surface eons ago.
"In contrast to most volcanic deposits, the melt inclusions are encased in crystals that prevent the escape of water and other volatiles during eruption. These samples provide the best window we have to the amount of water in the interior of the Moon," said James Van Orman of Case Western Reserve University, a member of the science team. The paper's authors are Hauri; Thomas Weinreich, Alberto Saal and Malcolm Rutherford from Brown University; and Van Orman.
Compared with meteorites, Earth and the other inner planets of our solar system contain relatively low amounts of water and volatile elements, which were not abundant in the inner solar system during planet formation. The even lower quantities of these volatile elements found on the Moon have long been claimed as evidence that it must have formed following a high-temperature, catastrophic giant impact. But this new research shows that aspects of this theory must be reevaluated. The study also provides new momentum for returning similar samples from other planetary bodies in the solar system.
"Water plays a critical role in determining the tectonic behavior of planetary surfaces, the melting point of planetary interiors, and the location and eruptive style of planetary volcanoes," said Hauri, a geochemist with Carnegie's Department of Terrestrial Magnetism (DTM). "We can conceive of no sample type that would be more important to return to Earth than these volcanic glass samples ejected by explosive volcanism, which have been mapped not only on the Moon but throughout the inner solar system."
Three years ago the same team, in a study led by Saal, reported the first evidence for the presence of water in lunar volcanic glasses and applied magma degassing models to estimate how much water was originally in the magmas before eruption. Building on that study, Weinreich, a Brown University undergraduate, found the melt inclusions, allowing the team to measure the pre-eruption concentration of water in the magma and estimate the amount of water in the Moon's interior.
"The bottom line," said Saal, "is that in 2008, we said the primitive water content in the lunar magmas should be similar to the water content in lavas coming from the Earth's depleted upper mantle. Now, we have proven that is indeed the case."
The study also puts a new twist on the origin of water ice detected in craters at the lunar poles by several recent NASA missions. The ice has been attributed to comet and meteoroid impacts, but it is possible that some of this ice could have come from the water released by past eruptions of lunar magmas.
These findings should also be taken into account when analyzing samples from other planetary bodies in our solar system. The paper's authors say these results show that their method of analysis is the only way to accurately and directly determine the water content of a planet's interior.
Hauri and his team looked at bits of rock brought back to Earth in 1972 by astronauts on NASA's Apollo 17 mission. Specifically, the researchers analyzed pieces called melt inclusions, which are minuscule globules of lunar magma encased within solid crystals. [Infographic: Inside Earth's Moon]
These crystals prevented the magma's water from gassing out during the eruption, thereby largely preserving the original water content of the underground rock.
So melt inclusions are special. They're also rare, and finding the tiny structures in the small store of moon rocks available to researchers was by no means a given. But co-author Thomas Weinreich, at the time a freshman at Brown University, spotted some while poring over the Apollo 17 samples.
"A kid a year out of high school found these for us," Hauri told SPACE.com "That was pretty amazing in and of itself."
Other researchers had found melt inclusions in lunar samples before, but until now nobody had been able to measure their water content. Using a specialized ion microprobe, the team scrutinized seven melt inclusions, the largest just 30 microns across — smaller than the diameter of a human hair.
Backscatter electron image of a lunar melt inclusion from Apollo 17 sample 74220, enclosed within an olivine crystal. The inclusion is 30 microns in diameter. CREDIT: John Armstrong, Geophysical Laboratory, Carnegie Institution of Washington
The general consensus is that the Moon formed and evolved through a single catastrophic heating event, or a series of them, in which most of the highly volatile elements, especially hydrogen, were evaporated away. That notion has changed with the new report showing evidence of indigenous water in lunar volcanic glasses. Because these glasses are the most primitive melts erupted on the surface of the satellite, this result represents the best evidence for the presence of a deep source within the Moon relatively rich in volatiles. Here we report new volatile data (C, H2O, F, S, Cl) for over 200 individual Apollo 15 lunar glasses with compositions ranging from very-low to high Ti contents (samples 15427,41; 15426,138; 15426,32). Our new SIMS detection limits (~0.15 ppm C, ~0.4 ppm H2O, ~0.05 ppm F, ~0.21 ppm S, ~0.04 ppm Cl by weight, determined by the repeated analysis of synthetic forsterite located on each sample mount) represent at least 2 orders of magnitude improvement over previous analytical techniques. After background correction the volatile contents have the following ranges: 0-0.14 ± 0.13 ppm for C (within background); 0-70 ± 0.4 ppm for H2O; 1.6-60 ± 0.1 ppm for F; 58-885 ± 1.3 ppm for S; and 0-3 ± 0.02 ppm for Cl. Our new values represent an increase in the volatile concentrations by a factor of 2 from previously reported data [1]. Two outstanding features of the data are the significant correlation among the H2O, Cl, F and S contents, and the clear relationship between the volatile and the major element contents of the glasses. The data support the hypothesis that there were significant differences in the initial volatile content, and/or that the mechanism of degassing and eruption among these glasses was different. Most importantly, the data suggest that the measured H2O is indigenous to the Moon. Our results suggest that, contrary to the prevailing ideas, the bulk Moon is not uniformly depleted in highly volatile elements, and the presence of water, in particular, must be included to constrain models for the thermal and chemical evolution of the Moon's interior.
Water on the Moon 100 X Higher Than Previously Measured: A Watershed Discovery
A team of NASA-funded researchers has measured for the first time water from the moon in the form of tiny globules of molten rock, which have turned to glass-like material trapped within crystals. Data from these newly-discovered lunar melt inclusions indicate the water content of lunar magma is 100 times higher than previous studies suggested.
The inclusions were found in lunar sample 74220, the famous high-titanium "orange glass soil" of volcanic origin collected during the Apollo 17 mission in 1972. The scientific team used a state-of-the-art ion microprobe instrument to measure the water content of the inclusions, which were formed during explosive eruptions on the moon approximately 3.7 billion years ago.
The results published in the May 26 issue of Science Express raise questions about aspects of the "giant impact theory" of how the moon was created. That theory predicted very low water content of lunar rock due to catastrophic degassing during the collision of Earth with a Mars-sized body very early in its history.
"Water plays a critical role in determining the tectonic behavior of planetary surfaces, the melting point of planetary interiors and the location and eruptive style of planetary volcanoes," said Erik Hauri, a geochemist with the Carnegie Institution of Washington and lead author of the study. "I can conceive of no sample type that would be more important to return to Earth than these volcanic glass samples ejected by explosive volcanism, which have been mapped not only on the moon but throughout the inner solar system."
"First, I believe that this nation should commit itself to achieving the goal, before this decade is out, of landing a man on the moon and returning him safely to the Earth. No single space project in this period will be more impressive to mankind, or more important for the long-range exploration of space; and none will be so difficult or expensive to accomplish."—President John F. Kennedy, Joint Session of Congress, May 25, 1961
Was President Kennedy a dreamer, a visionary, or simply politically astute? We may never know, but he had the courage to make that bold proposal 50 years ago Wednesday. The Soviet Union's Yuri Gagarin had completed an orbit of the Earth the previous month and electrified the world. The United States had taken only one human, Alan Shepard, above 100 miles altitude and none into orbit. Americans, embarrassed by the successes of our Cold War adversary, were eager to demonstrate that we too were capable of great achievements in space.
A half century has passed since Kennedy challenged our citizenry to do what most thought to be impossible. The subsequent American achievements in space were remarkable: Mercury, Gemini, Apollo and Skylab. Our efforts enhanced international cooperation with Apollo-Soyuz, the space shuttle and the International Space Station. The compelling fascination of our space achievements among young people spurred their interest in education.
By 2005, in keeping with President Kennedy's intent and America's resolve, NASA was developing the Constellation program, focusing on a return to the moon while simultaneously developing the plans and techniques to venture beyond, and eventually to Mars.
The response to Kennedy's bold challenge a half-century ago has led to America's unchallenged leadership in space. We take enormous pride in all that has been accomplished in the past 50 years. And we have the people, the skills and the wherewithal to continue to excel and reach challenging goals in space exploration.
But today, America's leadership in space is slipping. NASA's human spaceflight program is in substantial disarray with no clear-cut mission in the offing. We will have no rockets to carry humans to low-Earth orbit and beyond for an indeterminate number of years. Congress has mandated the development of rocket launchers and spacecraft to explore the near-solar system beyond Earth orbit. But NASA has not yet announced a convincing strategy for their use. After a half-century of remarkable progress, a coherent plan for maintaining America's leadership in space exploration is no longer apparent.
Former Senator Schmitt Proposes Dismantling of NASA and Creation of a New, National Space Exploration Administration (NSEA)
On May 25, 1961, President John F. Kennedy announced to a special joint session of Congress the dramatic and ambitious goal of sending an American to the Moon and returning him safely to Earth by the end of that decade. President Kennedy’s confidence that this Cold War goal could be accomplished rested on the post-Sputnik decision by President Dwight D. Eisenhower to form the National Aeronautics and Space Administration and, in January 1960, to direct NASA to begin the development of what became the Saturn V rocket. This release of a collection of essays on Space Policy and the Constitution commemorates President Kennedy’s decisive challenge 50 years ago to a generation of young Americans and the remarkable success of those young Americans in meeting that challenge.
How notions of leadership have changed since Eisenhower and Kennedy! Immense difficulties now have been imposed on the Nation and NASA by the budgetary actions and inactions of the Bush and Obama Administrations between 2004 and 2012. Space policy gains relevance today comparable to 50 years ago as the dangers created by the absence of a coherent national space policy have been exacerbated by subsequent adverse events. Foremost among these events have been the Obama Administration’s and the Congress’s spending and debt spree, the continued aggressive rise of China, and, with the exception of operations of the Space Shuttle and International Space Station, the loss of focus and leadership within NASA headquarters.
By Dr. Harrison H. Schmitt. Preface: (“Is there a path forward for United States’ space policy? When a new President takes office in 2013, he or she should propose to Congress that we start space policy and its administration from scratch. A new agency, the National Space Exploration Administration (NSEA), should be charged with specifically enabling America’s and its partners’ exploration of deep space, inherently stimulating education, technology, and national focus. The existing component parts of NASA should be spread among other agencies with the only exception being activities related to U.S. obligations to its partners in the International Space Station (ISS).” — HHS). The Foreword was written by Michael D. Griffin, noted physicist, aerospace engineer and NASA Administrator (2005-2009): (“Jack makes the case for space as no one else can, and he shows how and why we are on the wrong path— leaving the rest of us with the question: what can we do to obtain the leadership we need instead of the leadership we have?”— MDG).
WASHINGTON -- Fifty years ago, a young president struggling with deepening international issues set a fledgling space agency on a course that would change the history of human exploration. NASA commemorates President John F. Kennedy's historic speech that sent humans safely to the moon with a series of activities and a commitment to continue the journey of discovery and exploration that started with a desperate race into space.
"We are moving into a bright new future that builds on a challenge presented to us 50 years ago," said NASA Administrator Charles Bolden. "It is important that we remember our history but we must always look forward toward a brighter future. Our advantage now is that we have five decades of accomplishment and world leadership in space on which to build. The dreams President Kennedy helped make real for our world, and the dreams we still hold, may appear to be just out of reach but they are not out of our grasp."
On this date in 1961, Kennedy addressed a joint session of Congress, with a worldwide television audience, and announced, "I believe that this nation should commit itself to achieving the goal, before this decade is out, of landing a man on the moon and returning him safely to Earth." This was seen as a bold mandate because America's experience up to this point was Alan Shepard's suborbital Freedom 7 mission, which launched just a few weeks earlier and lasted about 15 minutes.
"Today, we have another young and vibrant president who has outlined an urgent national need to out-innovate, out-educate, and out-build our competitors and create new capabilities that will take us farther into the solar system, and help us learn even more about our place in the universe," Bolden added. "We stand at a moonshot moment once again, where we have a chance to make great leaps forward to new destinations, develop new vehicles and technologies, and new ways of exploring."
To commemorate the address that launched NASA into history, the agency has scheduled several events and historic multimedia perspectives, including:
-- A special concert at 7 p.m. EDT tonight at the John F. Kennedy Center for the Performing Arts in Washington. The one-hour concert will feature the Space Philharmonic, Administrator Bolden, astronauts, Kennedy family representatives and special guests. There are a limited number of tickets available for the public. For more information, visit: http://go.nasa.gov/jTOKZt
-- Video and other multimedia material from President Kennedy's speech are available on NASA Television and on the agency's Internet homepage http://www.nasa.gov along with information about the agency's future exploration initiatives.
-- A message from the administrator about NASA's next moonshot moment and moving beyond Earth orbit is available on his blog at: http://bit.ly/fNjTS2
-- An announcement later today that represents an important step in executing the president's exploration objectives and could pave the way for extending humanity's reach beyond low-Earth orbit and further.
-- NASA and the Smithsonian's National Air and Space Museum in Washington present "NASA | ART," from May 28 to Oct. 9. The exhibit features more than 70 paintings, drawings, photographs, sculptures, and other forms of art illustrating the agency's mission. Admission is free, and the exhibit is located at the Air and Space Museum's building at Sixth Street and Independence Ave. SW. | http://lunar-update.blogspot.com/2011_05_01_archive.html | 13 |
28 | EXPERIMENT 17: OXIDATION - REDUCTION
The following preparatory questions should be answered before coming to class. They are intended to introduce you to several ideas important to aspects of the experiment. You must turn in your work to your teaching assistant before you will be allowed to begin the experiment. Be sure to bring a calculator and paper to the laboratory.
1. Define the following terms.
(a) oxidizing agent
(b) reducing agent
2. Given that the following three reactions occur in the direction written:
Complete the table.
Rank the strength of the oxidizing and reducing agents identified in the table.
EXPERIMENT 17: OXIDATION - REDUCTION
PART I: Constructing a Qualitative Potential Series (WORK IN PAIRS)
Using a 24-well plate, half fill two wells each with Cu2+, Pb2+ and Zn2+ solutions. Be sure to note the location (well number) of each solution. To each of the wells add a shiny piece of Cu, Pb or Zn metal to form all of the combinations listed in Table I. (If necessary, shine the metal pieces with steel wool or sand paper.) Complete the table by describing your observations and writing balanced chemical equations for each reaction you observe. If no reaction occurs, write NR.
Based on your observations answer the following questions. Remember to always show the charge on the ions.
I. (a) Prepare a potential series for Cu, Pb and Zn and their ions.
(b) Which metal ion is the strongest oxidizing agent?
(c) Which metal ion is the weakest oxidizing agent?
(d) Which metal is the strongest reducing agent?
(e) Which metal is the weakest reducing agent?
Using clean wells on the 24-well plate, half fill three wells with 6 M HCl. Scrub pieces of Cu, Pb and Zn metal carefully with steel wool. Do not touch the clean metal surfaces with your hands. Use forceps or a paper towel. To each of the three wells add a different metal piece. Complete Table II by describing your observations and writing balanced chemical equations for each reaction you observe. If no reaction occurs, write NR. (Note: Some of the reactions may occur very slowly. Allow at least five minutes before deciding that no reaction has occurred.)
(g) Position H2 - H+ in the potential series prepared in I (a).
EXPERIMENT 17: OXIDATION - REDUCTION
PART II: Semi-Micro Voltaic Cells
Cut four Styrofoam coffee cups about 2 cm from the bottom to form shallow cups.
Tape the bottom of one of the cups to the center of a piece of cardboard or stiff paper. Tape the remaining three cups around the center cup as shown in Figure II.
Fill the center cup about half full of 1.0 M NH4NO3 solution (5 to 10 mL). Be careful not to splash any solution into the surrounding cups. Fill one of the surrounding cups about half full of 0.1 M CuSO4. Half fill another with 0.1 M FeSO4 and the third with 0.1 M ZnSO4. Label the cups by writing the name of each solution on the cardboard sheet.
Using sandpaper or steel wool, polish copper, zinc and iron electrodes until they shine. Do not handle the electrodes with your fingers, use forceps or paper towels.
Carefully dip one end of the copper electrode in the copper (II) sulfate solution in the cup. Bend the electrode or use small pieces of tape as necessary to secure the electrode to the cup so that one end is below the surface of the solution and the other extends out of the cup (see Figure III).
Repeat the process placing the zinc electrode in the zinc sulfate solution and the iron electrode in the ferrous sulfate solution. Label each cup by writing on the cardboard.
Fold three sheets of filter paper into strips about 1 cm wide and several layers thick.
Bend one of the strips into a 'U' shape. Invert the 'U' and submerge one end of the strip in the ammonium nitrate solution in the center cup and the other end in the solution in one of the surrounding cups. Repeat the procedure with the remaining two surrounding cups so that each of the three surrounding solutions is connected to the center solution by a strip of filter paper. These connections form a salt bridge between any two of the surrounding cups.
Connect one of the leads of the voltmeter (be sure your voltmeter is set on the 5-volt scale or lower) to the copper electrode with an alligator clip. Connect the other lead to the zinc electrode. Read the voltage on the voltmeter scale. If the deflection is negative, remove the clips and attach them to the opposite electrodes. Wait a few seconds for the reading to stabilize and then record the potential.
The cathode will be attached to the red pole of the voltmeter and the anode to the black.
Which electrode is the anode?
Which electrode is the cathode?
Write the half reaction occurring at the anode. Is this oxidation or reduction?
Write the half reaction occurring at the cathode. Is this oxidation or reduction?
Write the overall reaction for the cell.
Record the information (Obs. #1 - #6) for the zinc and copper cell in Table I. Complete the table by measuring the potential of each voltaic cell by attaching the voltmeter lead clips to each combination of electrodes producing a positive voltage.
Based on the measured cell potentials, prepare a potential series similar to that obtained in Part I of this experiment. List the metals from strongest to weakest reducing agent, and the metal ions from weakest oxidizing agent to strongest.
Compare this result with that obtained in Part I. The only new metal tested this time is iron. Can you insert Fe into the series generated in Part I? Explain your answer.
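For comparison only, the ordering and approximate cell voltages can be predicted from standard reduction potentials (textbook values at standard conditions, not measurements from this lab) using E_cell = E_cathode − E_anode. The Python sketch below is a rough illustration; measured values in the semi-micro cells will typically fall below these ideal numbers because the solutions are not at standard concentrations.

```python
# Standard reduction potentials in volts (textbook values, 25 C, 1 M solutions).
E_STANDARD = {
    "Cu2+/Cu": +0.34,
    "Pb2+/Pb": -0.13,
    "Fe2+/Fe": -0.44,
    "Zn2+/Zn": -0.76,
}

def cell_potential(cathode, anode):
    """Predicted voltaic cell potential: E_cell = E_cathode - E_anode."""
    return E_STANDARD[cathode] - E_STANDARD[anode]

# The metal with the more negative reduction potential is the stronger reducing
# agent and serves as the anode in each pairing.
print(cell_potential("Cu2+/Cu", "Zn2+/Zn"))  # ~1.10 V
print(cell_potential("Cu2+/Cu", "Fe2+/Fe"))  # ~0.78 V
print(cell_potential("Fe2+/Fe", "Zn2+/Zn"))  # ~0.32 V
```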
Replace the voltmeter leads on the copper and zinc electrodes to give a positive cell potential. Using forceps, remove the strips of filter paper connecting the copper(II) solution to the NH4NO3 solution. Describe what happens.
What is the purpose of the salt bridge? For the zinc/copper cell, in which direction do ions move from the center NH4NO3 solution?
Replace the filter paper strip. Measure and record the cell potential in Table II. Add some (2-3 mL) 6 M NH3(aq) to the cup containing CuSO4. Observe and record the cell potential. (The deep blue color is due to the formation of Cu(NH3)42+ complex.)
Now add some (2-3 mL) 2 M Na2S to the copper solution. Measure and record the cell potential.
Explain the changes in the cell potential caused by the addition of NH3(aq) and Na2S.
To clean up, remove the electrodes and filter paper strips from the cups. Carefully lift each cup, and slide one finger underneath to detach the cup from the tape. Dispose of the solutions as directed by your instructor. Clean and dry the electrodes and return them to their labeled containers. Place the filter paper strips in the trash.
EXPERIMENT 17: OXIDATION - REDUCTION
PART III: Electrolysis of a Sulfuric Acid Solution
Clean a copper electrode by scrubbing it with steel wool until all surfaces shine. Rinse the electrode thoroughly with water and wipe with a paper towel to remove any material clinging to the metal. Rinse the electrode with a small amount of acetone and allow it to air dry. When the electrode is dry weigh it on the laboratory balance (handle the dry electrode only by the edges or use forceps) and record the initial mass in observation 1.
Pour about 100 mL of 1 M sulfuric acid into a 250 mL beaker. Fill a 50 mL graduated cylinder level full with the same acid solution. Very carefully cap the graduated cylinder with a small square of parafilm. Invert the cylinder and place it in the beaker so that its mouth is below the level of the acid but not touching the bottom of the beaker. Clamp the graduated cylinder to a ring stand to hold it in place. Remove the parafilm below the surface of the acid solution using forceps. There should be no air bubbles inside the graduated cylinder.
Obtain a long piece of insulated wire with exposed wire ends. Coil the longer exposed end around a pen or pencil. About 2 cm below the coil, bend the wire as shown in Figure II.
Place the coiled end of the wire in the beaker. Maneuver the wire so that the coil and all uninsulated wire is inside the graduated cylinder. (See Figure III)
Bend the wire down the side of the beaker to hold it in place. Attach the free end of the wire to the negative terminal of a 9 volt battery. Obtain an insulated wire with alligator clips on either end. Attach one clip to the positive terminal of the battery and the other to a copper electrode. In a moment the end of this copper electrode will be placed in the beaker containing the solution of sulfuric acid. Before placing the electrode in the beaker, put a sheet of white paper under the beaker.
Record the exact time when you dip the copper electrode into the solution in the beaker.
Your experimental set-up should resemble Figure IV.
Position the wire so that the electrode remains partially submerged in the solution. Do not allow the alligator clip to contact the solution. Observe and record what happens when the electrode enters the solution and as the reaction progresses.
While the reaction progresses, measure the temperature of the solution and the atmospheric pressure. Record the values below.
Allow the reaction to continue until about 45 mL of gas are collected in the graduated cylinder. Then remove the copper electrode, being careful to record the exact time in Obs. #2. What happens when the electrode is removed from the solution?
Rinse the electrode thoroughly with water and wipe it with a paper towel to remove any solid that may adhere to the surface. Rinse the electrode with acetone and allow it to air dry. When dry, weigh the electrode on the laboratory balance and record the final mass in Obs. #1. Determine the change in mass of the electrode.
Remove the coiled wire from the graduated cylinder and beaker. Raise or lower the cylinder so that the water levels inside and outside the cylinder are equal. If this is not possible because the solution level inside the cylinder is much higher than the level outside, lower the cylinder until its mouth almost touches the bottom of the beaker. Add water to the beaker until the water levels inside and outside the graduated cylinder are equal. Read the volume of gas in the cylinder. Remember that the scale on the cylinder is upside down! Record the volume below.
We know that this gas has been generated at the negative electrode, but we do not know its chemical identity. To determine its identity we must examine its physical properties. Describe the appearance of the gas below.
Consider that the gas is generated from an aqueous solution of H2SO4. The gases generated from such a solution are H2 or O2. Does the description in Obs. #7 help to determine which of these gases has been generated? _______________
One property that hydrogen and oxygen do not share is flammability. When exposed to an open flame hydrogen explodes while oxygen simply makes the flame glow more brightly.
Raise the graduated cylinder out of the beaker, allowing the water to fall back into the beaker. Clamp the graduated cylinder, still inverted in the air well away from all equipment. HAVE YOUR INSTRUCTOR pass a lit match near the opening of the cylinder. If hydrogen is present an explosion (really just a small pop) will occur. Describe the result.
Identify the gas formed and write a half reaction representing the process. Is the reaction which produces the gas an oxidation or reduction reaction? Name the electrode (anode or cathode) at which it occurs.
Name (anode or cathode) the positive electrode and write the reaction that occurs there. What evidence do you have to support the reaction you have written?
1. In addition to the gas generated in the chemical reaction the graduated cylinder also contains water vapor. In order to find the partial pressure of the gas we use Dalton's Law of partial pressure.
P(total) = P(gas) + P(water vapor)
Look up the vapor pressure of water at the temperature of your solution in your text, in the CRC Handbook, or in another reference book. Calculate the partial pressure of the gas generated.
2. Using Obs. #4 and #6 complete the table. Show any necessary unit conversions.
Pressure __________ atm
Volume __________ L
Gas Constant __________L-atm/mol K
Temperature __________ K
3. Use the ideal gas law to calculate the number of moles of gas generated.
4. Calculate the number of moles of Cu metal lost by the copper electrode.
5. (a) Write the oxidation half reaction occurring at the anode.
(b) Write the reduction half reaction occurring at the cathode.
(c) Write the overall balanced chemical reaction.
6. Compare the results of problems 3 and 4 with the equation obtained in 5(c). Is the mole ratio what is predicted by the equation? Calculate the percent error.
7. Calculate the number of moles of electrons used in the experiment.
8. Use Faraday's constant to calculate the number of coulombs of current used in the experiment.
9. Determine the number of seconds elapsed during the experiment.
10. Calculate the current in amperes produced by the battery (1 Amp = 1 coulomb/sec). Compare your result with those of other students or groups. Is the current output constant for all 9 volt batteries?
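Calculations 1 through 10 chain together as in the sketch below. The numbers are placeholders, not your data: substitute your own pressure, temperature, volume and elapsed time, and note that the assumed cathode half reaction 2 H+ + 2 e- -> H2 supplies the two electrons per mole of gas.

```python
# --- Placeholder observations: replace with your own measurements. ---
total_pressure_atm = 0.98     # barometric pressure during the run
water_vapor_atm = 0.031       # vapor pressure of water at ~25 C, from a reference table
gas_volume_L = 0.045          # ~45 mL of gas collected
temperature_K = 298.15        # 25 C
elapsed_seconds = 1800        # 30 minutes of electrolysis

R = 0.08206                   # gas constant, L*atm/(mol*K)
FARADAY = 96485               # coulombs per mole of electrons

# Calculation 1. Dalton's law: partial pressure of the collected gas.
gas_pressure_atm = total_pressure_atm - water_vapor_atm

# Calculations 2-3. Ideal gas law: n = PV / RT
moles_of_gas = gas_pressure_atm * gas_volume_L / (R * temperature_K)

# Calculation 7. Two electrons are transferred per mole of H2 (2 H+ + 2 e- -> H2).
moles_of_electrons = 2 * moles_of_gas

# Calculation 8. Faraday's constant converts moles of electrons to coulombs.
charge_coulombs = moles_of_electrons * FARADAY

# Calculations 9-10. Current in amperes (1 A = 1 C/s).
current_amps = charge_coulombs / elapsed_seconds

print(f"n(gas) = {moles_of_gas:.2e} mol")
print(f"charge = {charge_coulombs:.0f} C, current = {current_amps:.2f} A")
```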
1. Will the reaction performed in this experiment proceed without the battery? Explain your answer.
2. A color change is observed in the solution as the reaction progresses. What is the significance of the color?
3. Why is it necessary to adjust the level of the solution inside the graduated cylinder to equal the level of the solution in the beaker before reading the volume of gas in the cylinder?
4. Consider the error determined in calculation #6. Discuss the sources of error in the experiment, excluding human errors.
Post-lab Questions:
Answer the following questions after you have completed all parts of the oxidation-reduction experiment.
1. Describe the differences and similarities between voltaic and electrolytic cells.
2. Suppose that you needed to construct a temporary battery to power a small lamp. Using any materials available in this experiment (excluding the 9 volt battery in Part III, of course!) how would you proceed?
3. An electrolytic cell, like the one used in Part III of this experiment, was found to produce gas very slowly. In order to speed up the process, a chemistry student proposed using concentrated (18 M) sulfuric acid rather than the 1 M acid called for in the experiment.
(a) Would the student's suggestion be effective? Why?
(b) Suggest a method of increasing the reaction rate and tell how your method would work.
4. Locate the "NR" observations recorded in Table I of Part I of this experiment. Is it ever possible for these combinations of chemicals to react? If so, explain the circumstances under which reactions could occur. If not, explain why not.
| http://intro.chem.okstate.edu/HTML/SEXP17.HTM | 13 |
65 | A screw thread, often shortened to thread, is a helical structure used to convert between rotational and linear movement or force. A screw thread is a ridge wrapped around a cylinder or cone in the form of a helix, with the former being called a straight thread and the latter called a tapered thread. A screw thread is the essential feature of the screw as a simple machine and also as a fastener. More screw threads are produced each year than any other machine element.
The mechanical advantage of a screw thread depends on its lead, which is the linear distance the screw travels in one revolution. In most applications, the lead of a screw thread is chosen so that friction is sufficient to prevent linear motion being converted to rotary motion; that is, the screw does not slip even when linear force is applied, so long as no external rotational force is present. This characteristic is essential to the vast majority of its uses. The tightening of a fastener's screw thread is comparable to driving a wedge into a gap until it sticks fast through friction and slight plastic deformation.
Screw threads have several applications:
- Gear reduction via worm drives
- Moving objects linearly by converting rotary motion to linear motion, as in the leadscrew of a jack.
- Measuring by correlating linear motion to rotary motion (and simultaneously amplifying it), as in a micrometer.
- Both moving objects linearly and simultaneously measuring the movement, combining the two aforementioned functions, as in a leadscrew of a lathe.
In all of these applications, the screw thread has two main functions:
- It converts rotary motion into linear motion.
- It prevents linear motion without the corresponding rotation.
Every matched pair of threads, external and internal, can be described as male and female. For example, a screw has male threads, while its matching hole (whether in nut or substrate) has female threads. This property is called gender.
The helix of a thread can twist in two possible directions, which is known as handedness. Most threads are oriented so that the threaded item, when seen from a point of view on the axis through the center of the helix, moves away from the viewer when it is turned in a clockwise direction, and moves towards the viewer when it is turned counterclockwise. This is known as a right-handed (RH) thread, because it follows the right hand grip rule. Threads oriented in the opposite direction are known as left-handed (LH).
By common convention, right-handedness is the default handedness for screw threads. Therefore, most threaded parts and fasteners have right-handed threads. Left-handed thread applications include:
- Where the rotation of a shaft would cause a conventional right-handed nut to loosen rather than to tighten due to fretting-induced precession. Examples include the left-hand pedal spindle on a bicycle and, historically, the left-side lug nuts on some vehicles.
- In combination with right-handed threads in turnbuckles and clamping studs.
- In some gas supply connections to prevent dangerous misconnections, for example in gas welding the flammable gas supply uses left-handed threads.
- In a situation where neither threaded pipe end can be rotated to tighten/loosen the joint, e.g. in traditional heating pipes running through multiple rooms in a building. In such a case, the coupling will have one right-handed and one left-handed thread.
- In some instances, for example early ballpoint pens, to provide a "secret" method of disassembly.
- In some mechanisms, to give a more intuitive action.
- Some Edison base lamps and fittings (such as formerly on the New York City Subway) have a left-hand thread to deter theft, since they cannot be used in other light fixtures.
The term chirality comes from the Greek word for "hand" and concerns handedness in many other contexts.
The cross-sectional shape of a thread is often called its form or threadform (also spelled thread form). It may be square, triangular, trapezoidal, or other shapes. The terms form and threadform sometimes refer to all design aspects taken together (cross-sectional shape, pitch, and diameters).
Most triangular threadforms are based on an isosceles triangle. These are usually called V-threads or vee-threads because of the shape of the letter V. For 60° V-threads, the isosceles triangle is, more specifically, equilateral. For buttress threads, the triangle is scalene.
The theoretical triangle is usually truncated to varying degrees (that is, the tip of the triangle is cut short). A V-thread in which there is no truncation (or a minuscule amount considered negligible) is called a sharp V-thread. Truncation occurs (and is codified in standards) for practical reasons:
- The thread-cutting or thread-forming tool cannot practically have a perfectly sharp point; at some level of magnification, the point is truncated, even if the truncation is very small.
- Too-small truncation is undesirable anyway, because:
- The cutting or forming tool's edge will break too easily;
- The part or fastener's thread crests will have burrs upon cutting, and will be too susceptible to additional future burring resulting from dents (nicks);
- The roots and crests of mating male and female threads need clearance to ensure that the sloped sides of the V meet properly despite (a) error in pitch diameter and (b) dirt and nick-induced burrs.
- The point of the threadform adds little strength to the thread.
Ball screws, whose male-female pairs involve bearing balls in between, show that other variations of form are possible. Roller screws use conventional thread forms but introduce an interesting twist on the theme.
The angle characteristic of the cross-sectional shape is often called the thread angle. For most V-threads, this is standardized as 60 degrees, but any angle can be used.
Lead, pitch, and starts
Lead (pronounced "leed") and pitch are closely related concepts. They can be confused because they are the same for most screws. Lead is the distance along the screw's axis that is covered by one complete rotation of the screw (360°). Pitch is the distance from the crest of one thread to the next. Because the vast majority of screw threadforms are single-start threadforms, their lead and pitch are the same. Single-start means that there is only one "ridge" wrapped around the cylinder of the screw's body. Each time that the screw's body rotates one turn (360°), it has advanced axially by the width of one ridge. "Double-start" means that there are two "ridges" wrapped around the cylinder of the screw's body. Each time that the screw's body rotates one turn (360°), it has advanced axially by the width of two ridges. Another way to express this is that lead and pitch are parametrically related, and the parameter that relates them, the number of starts, very often has a value of 1, in which case their relationship becomes equality. In general, lead is equal to S times pitch, in which S is the number of starts.
Whereas metric threads are usually defined by their pitch, that is, how much distance per thread, inch-based standards usually use the reverse logic, that is, how many threads occur per a given distance. Thus inch-based threads are defined in terms of threads per inch (TPI). Pitch and TPI describe the same underlying physical property—merely in different terms. When the inch is used as the unit of measurement for pitch, TPI is the reciprocal of pitch and vice versa. For example, a 1⁄4-20 thread has 20 TPI, which means that its pitch is 1⁄20 inch (0.050 in or 1.27 mm).
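Because lead, pitch, starts and TPI are tied together by two short relations (lead = starts × pitch; pitch = 1/TPI for inch-based threads), a few lines of Python make the 1⁄4-20 example concrete. The double-start case at the end is hypothetical, added only to show the effect of multiple starts.

```python
def pitch_from_tpi(tpi):
    """Pitch in inches for an inch-based thread with the given threads per inch."""
    return 1.0 / tpi

def lead(pitch, starts=1):
    """Axial advance per full turn: lead = starts * pitch."""
    return starts * pitch

# 1/4-20 example from the text: 20 TPI -> 0.050 in (1.27 mm) pitch.
p = pitch_from_tpi(20)
print(p, p * 25.4)          # 0.05 in, 1.27 mm

# A hypothetical double-start thread with the same pitch advances twice as far per turn.
print(lead(p, starts=2))    # 0.1 in per revolution
```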
As the distance from the crest of one thread to the next, pitch can be compared to the wavelength of a wave. Another wave analogy is that pitch and TPI are inverses of each other in a similar way that period and frequency are inverses of each other.
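A minimal numeric sketch of these relationships, assuming nothing beyond the definitions above (lead = starts × pitch; pitch = 1/TPI for inch threads):

```python
def lead(pitch, starts=1):
    """Axial advance per full 360-degree turn of the screw."""
    return starts * pitch

def pitch_from_tpi(tpi):
    """Pitch in inches for an inch-based thread, given threads per inch."""
    return 1.0 / tpi

# Single-start 1/4-20 thread: 20 TPI -> pitch = 0.050 in, lead = 0.050 in
p = pitch_from_tpi(20)
print(p, lead(p))          # 0.05 0.05

# A double-start thread of the same pitch advances twice as far per turn.
print(lead(p, starts=2))   # 0.1
```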
Coarse versus fine
Coarse threads are those with larger pitch (fewer threads per axial distance), and fine threads are those with smaller pitch (more threads per axial distance). Coarse threads have a larger threadform relative to screw diameter, whereas fine threads have a smaller threadform relative to screw diameter. This distinction is analogous to that between coarse teeth and fine teeth on a saw or file, or between coarse grit and fine grit on sandpaper.
The common V-thread standards (ISO 261 and Unified Thread Standard) include a coarse pitch and a fine pitch for each major diameter. For example, 1⁄2-13 belongs to the UNC series (Unified National Coarse) and 1⁄2-20 belongs to the UNF series (Unified National Fine).
A common misconception among people not familiar with engineering or machining is that the term coarse implies here lower quality and the term fine implies higher quality. The terms when used in reference to screw thread pitch have nothing to do with the tolerances used (degree of precision) or the amount of craftsmanship, quality, or cost. They simply refer to the size of the threads relative to the screw diameter. Coarse threads can be made accurately, or fine threads inaccurately.
There are several relevant diameters for screw threads: major diameter, minor diameter, and pitch diameter.
Major diameter
Major diameter is the largest diameter of the thread. For a male thread, this means "outside diameter", but in careful usage the better term is "major diameter", since the underlying physical property being referred to is independent of the male/female context. On a female thread, the major diameter is not on the "outside". The terms "inside" and "outside" invite confusion, whereas the terms "major" and "minor" are always unambiguous.
Minor diameter
Minor diameter is the smallest diameter of the thread.
Pitch diameter
Pitch diameter (sometimes abbreviated PD) is a diameter in between major and minor. It is the diameter at which each pitch is equally divided between the mating male and female threads. It is important to the fit between male and female threads, because a thread can be cut to various depths in between the major and minor diameters, with the roots and crests of the threadform being variously truncated, but male and female threads will only mate properly if their sloping sides are in contact, and that contact can only happen if the pitch diameters of male and female threads match closely. Another way to think of pitch diameter is "the diameter on which male and female should meet".
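As an illustration of how these three diameters relate in one concrete case, the 60° basic profile shared by ISO metric and UTS threads places the basic pitch and minor diameters at fixed fractions of the pitch below the major diameter. The sketch below assumes that basic profile; the function name is illustrative.

```python
import math

def basic_diameters(major_d, pitch):
    """Basic pitch and minor diameters for the 60-degree ISO/UTS profile."""
    H = math.sqrt(3) / 2 * pitch          # height of the sharp fundamental triangle (~0.866 p)
    pitch_d = major_d - 2 * (3 / 8) * H   # major - 0.6495 p
    minor_d = major_d - 2 * (5 / 8) * H   # major - 1.0825 p
    return pitch_d, minor_d

# M10 x 1.5 coarse thread: basic pitch diameter ~9.026 mm, basic minor ~8.376 mm
print(basic_diameters(10.0, 1.5))
```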
Classes of fit
The way in which male and female fit together, including play and friction, is classified (categorized) in thread standards. Achieving a certain class of fit requires the ability to work within tolerance ranges for dimension (size) and surface finish. Defining and achieving classes of fit are important for interchangeability. Classes include 1, 2, 3 (loose to tight); A (external) and B (internal); and various systems such as H and D limits.
Standardization and interchangeability
To achieve a predictably successful mating of male and female threads and assured interchangeability between males and between females, standards for form, size, and finish must exist and be followed. Standardization of threads is discussed below.
Thread depth
Screw threads are almost never made perfectly sharp (no truncation at the crest or root), but instead are truncated, yielding a final thread depth that can be expressed as a fraction of the pitch value. The UTS and ISO standards codify the amount of truncation, including tolerance ranges.
A perfectly sharp 60° V-thread will have a depth of thread ("height" from root to crest) equal to .866 of the pitch. This fact is intrinsic to the geometry of an equilateral triangle—a direct result of the basic trigonometric functions. It is independent of measurement units (inch vs mm). However, UTS and ISO threads are not sharp threads. The major and minor diameters delimit truncations on either side of the sharp V, typically about 1/8p (although the actual geometry definition has more variables than that). This means that a full (100%) UTS or ISO thread has a height of around .65p.
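Spelled out, with H as the height of the sharp fundamental triangle, p as the pitch, and the crest and root truncations taken here as roughly H/8 each (consistent with the rough 1⁄8 p figure above):

```latex
H = p\cos 30^\circ = \tfrac{\sqrt{3}}{2}\,p \approx 0.866\,p,
\qquad
h \approx H - 2\cdot\tfrac{H}{8} = \tfrac{3}{4}H \approx 0.65\,p
```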
Threads can be (and often are) truncated a bit more, yielding thread depths of 60% to 75% of the .65p value. This makes the thread-cutting easier (yielding shorter cycle times and longer tap and die life) without a large sacrifice in thread strength. The increased truncation is quantified by the percentage of thread that it leaves in place, where the nominal full thread (where depth is about .65p) is considered 100%. For most applications, 60% to 75% threads are used. In many cases 60% threads are optimal, and 75% threads are wasteful or "over-engineered" (additional resources were unnecessarily invested in creating them). To truncate the threads below 100% of nominal, different techniques are used for male and female threads. For male threads, the bar stock is "turned down" somewhat before thread cutting, so that the major diameter is reduced. Likewise, for female threads the stock material is drilled with a slightly larger tap drill, increasing the minor diameter. (The pitch diameter is not affected by these operations, which vary only the major or minor diameter.)
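One everyday application of this percentage-of-thread idea is choosing a tap drill for a female thread. The approximation below is a common machinist rule of thumb rather than anything specified in the text above, and the names are illustrative:

```python
def tap_drill(major_d_in, tpi, percent_thread=75):
    """Approximate tap drill diameter (inches) for a given percentage of thread.

    The constant 0.01299 is 1.299/100; 1.299*p is twice the ~0.65*p
    nominal thread depth discussed above.
    """
    return major_d_in - 0.01299 * percent_thread / tpi

# 1/4-20 at ~75% thread -> about 0.201 in, the familiar #7 tap drill.
print(round(tap_drill(0.250, 20, 75), 3))   # 0.201

# Dropping to ~60% thread calls for a slightly larger drill.
print(round(tap_drill(0.250, 20, 60), 3))   # 0.211
```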
This balancing of truncation versus thread strength is common to many engineering decisions involving material strength and material thickness, cost, and weight. Engineers use a number called the safety factor to quantify the increased material thicknesses or other dimension beyond the minimum required for the estimated loads on a mechanical part. Increasing the safety factor generally increases the cost of manufacture and decreases the likelihood of a failure. So the safety factor is often the focus of a business management decision when a mechanical product's cost impacts business performance and failure of the product could jeopardize human life or company reputation. For example, aerospace contractors are particularly rigorous in the analysis and implementation of safety factors, given the incredible damage that failure could do (crashed aircraft or rockets). Material thickness affects not only the cost of manufacture, but also the device's weight and therefore the cost (in fuel) to lift that weight into the sky (or orbit). The cost of failure and the cost of manufacture are both extremely high. Thus the safety factor dramatically impacts company fortunes and is often worth the additional engineering expense required for detailed analysis and implementation.
Tapered threads are used on fasteners and pipe. A common example of a fastener with a tapered thread is a wood screw.
The threaded pipes used in some plumbing installations for the delivery of fluids under pressure have a threaded section that is slightly conical. Examples are the NPT and BSP series. The seal provided by a threaded pipe joint is created when a tapered externally threaded end is tightened into an end with internal threads. Normally a good seal requires the application of a separate sealant in the joint, such as thread seal tape or a liquid or paste pipe sealant such as pipe dope; however, some threaded pipe joints do not require a separate sealant.
Standardization of screw threads has evolved since the early nineteenth century to facilitate compatibility between different manufacturers and users. The standardization process is still ongoing; in particular there are still (otherwise identical) competing metric and inch-sized thread standards widely used. Standard threads are commonly identified by short letter codes (M, UNC, etc.) which also form the prefix of the standardized designations of individual threads.
Additional product standards identify preferred thread sizes for screws and nuts, as well as corresponding bolt head and nut sizes, to facilitate compatibility between spanners (wrenches) and other tools.
ISO standard threads
These were standardized by the International Organization for Standardization (ISO) in 1947. Although metric threads were mostly unified in 1898 by the International Congress for the standardization of screw threads, separate metric thread standards were used in France, Germany, and Japan, and the Swiss had a set of threads for watches.
Other current standards
In particular applications and certain regions, threads other than the ISO metric screw threads remain commonly used, sometimes because of special application requirements, but mostly for reasons of backwards compatibility:
- ASME B1.1 Unified Inch Screw Threads (UN and UNR thread form), considered an American National Standard (ANS), widely used in the US and Canada
- Unified Thread Standard (UTS), which is still the dominant thread type in the United States and Canada. This standard includes:
- Unified Coarse (UNC), commonly referred to as "National Coarse" or "NC" in retailing.
- Unified Fine (UNF), commonly referred to as "National Fine" or "NF" in retailing.
- Unified Extra Fine (UNEF)
- Unified Special (UNS)
- National pipe thread (NPT), used for plumbing of water and gas pipes, and threaded electrical conduit.
- NPTF (National Pipe Thread Fuel)
- British Standard Whitworth (BSW), and other Whitworth threads, including:
- British standard pipe thread (BSP), which exists in taper and non-taper variants; used for other purposes as well
- British Standard Pipe Taper (BSPT)
- British Association screw threads (BA), used primarily for electronic/electrical equipment, moving-coil meters, and mounting optical lenses
- British Standard Buttress Threads (BS 1657:1950)
- British Standard for Spark Plugs BS 45:1972
- British Standard Brass, a fixed-pitch 26 TPI thread
- Glass Packaging Institute threads (GPI), primarily for glass bottles and vials
- Power screw threads
- Camera case screws, used to mount a camera on a photographic tripod:
- ¼″ UNC used on almost all small cameras
- ⅜″ UNC for larger (and some older small) cameras
(many older cameras use ¼″ BSW or ⅜″ BSW threads, which, in low-stress applications and if machined to wide tolerances, are for practical purposes compatible with the UNC threads)
- Royal Microscopical Society (RMS) thread, also known as society thread, is a special 0.8" diameter x 36 thread-per-inch (tpi) Whitworth thread form used for microscope objective lenses.
- Microphone stands:
- ⅝″ 27 threads per inch (tpi) Unified Special thread (UNS, USA and the rest of the world)
- ¼″ BSW (not common in the USA, used in the rest of the world)
- ⅜″ BSW (not common in the USA, used in the rest of the world)
- Stage lighting suspension bolts (in some countries only; some have gone entirely metric, others such as Australia have reverted to the BSW threads, or have never fully converted):
- ⅜″ BSW for lighter luminaires
- ½″ BSW for heavier luminaires
- Tapping screw threads (ST) – ISO 1478
- Aerospace inch threads (UNJ) – ISO 3161
- Aerospace metric threads (MJ) – ISO 5855
- Tyre valve threads (V) – ISO 4570
- Metal bone screws (HA, HB) – ISO 5835
- Panzergewinde (Pg) (German) is an old German 80° thread (DIN 40430) that remained in use until 2000 in some electrical installation accessories in Germany.
- Fahrradgewinde (Fg) (English: bicycle thread) is a German bicycle thread standard (per DIN 79012 and DIN 13.1), which encompasses many CEI and BSC threads as used on cycles and mopeds everywhere (http://www.fahrradmonteur.de/fahrradgewinde.php)
- CEI (Cycle Engineers Institute, used on bicycles in Britain and possibly elsewhere)
- Edison base incandescent light bulb holder screw thread
- Fire hose connection (NFPA standard 194)
- Hose Coupling Screw Threads (ANSI/ASME B1.20.7-1991 [R2003]) for garden hoses and accessories
- Löwenherz thread, a German metric thread used for measuring instruments
- Sewing machine thread
History of standardization
The first historically important intra-company standardization of screw threads began with Henry Maudslay around 1800, when the modern screw-cutting lathe made interchangeable V-thread machine screws a practical commodity. During the next 40 years, standardization continued to occur on the intra-company and inter-company level. No doubt many mechanics of the era participated in this zeitgeist; Joseph Clement was one of those whom history has noted. In 1841, Joseph Whitworth created a design that, through its adoption by many British railroad companies, became a national standard for the United Kingdom called British Standard Whitworth. During the 1840s through 1860s, this standard was often used in the United States and Canada as well, in addition to myriad intra- and inter-company standards. In April 1864, William Sellers presented a paper to the Franklin Institute in Philadelphia, proposing a new standard to replace the U.S.'s poorly standardized screw thread practice. Sellers simplified the Whitworth design by adopting a thread profile of 60° and a flattened tip (in contrast to Whitworth's 55° angle and rounded tip). The 60° angle was already in common use in America, but Sellers's system promised to make it and all other details of threadform consistent.
The Sellers thread, easier for ordinary machinists to produce, became an important standard in the U.S. during the late 1860s and early 1870s, when it was chosen as a standard for work done under U.S. government contracts, and it was also adopted as a standard by highly influential railroad industry corporations such as the Baldwin Locomotive Works and the Pennsylvania Railroad. Other firms adopted it, and it soon became a national standard for the U.S., later becoming generally known as the United States Standard thread (USS thread). Over the next 30 years the standard was further defined and extended and evolved into a set of standards including National Coarse (NC), National Fine (NF), and National Pipe Taper (NPT). Meanwhile, in Britain, the British Association screw threads were also developed and refined.
During this era, in continental Europe, the British and American threadforms were well known, but also various metric thread standards were evolving, which usually employed 60° profiles. Some of these evolved into national or quasi-national standards. They were mostly unified in 1898 by the International Congress for the standardization of screw threads at Zurich, which defined the new international metric thread standards as having the same profile as the Sellers thread, but with metric sizes. Efforts were made in the early 20th century to convince the governments of the U.S., UK, and Canada to adopt these international thread standards and the metric system in general, but they were defeated with arguments that the capital cost of the necessary retooling would drive some firms from profit to loss and hamper the economy. (The mixed use of dueling inch and metric standards has since cost much, much more, but the bearing of these costs has been more distributed across national and global economies rather than being borne up front by particular governments or corporations, which helps explain the lobbying efforts.)
Sometime between 1912 and 1916, the Society of Automobile Engineers (SAE) created an "SAE series" of screw thread sizes to augment the USS standard.
During the late 19th and early 20th centuries, engineers found that ensuring the reliable interchangeability of screw threads was a multi-faceted and challenging task that was not as simple as just standardizing the major diameter and pitch for a certain thread. It was during this era that more complicated analyses made clear the importance of variables such as pitch diameter and surface finish.
A tremendous amount of engineering work was done throughout World War I and the following interwar period in pursuit of reliable interchangeability. Classes of fit were standardized, and new ways of generating and inspecting screw threads were developed (such as production thread-grinding machines and optical comparators). Therefore, in theory, one might expect that by the start of World War II, the problem of screw thread interchangeability would have already been completely solved. Unfortunately, this proved to be false. Intranational interchangeability was widespread, but international interchangeability was less so. Problems with lack of interchangeability among American, Canadian, and British parts during World War II led to an effort to unify the inch-based standards among these closely allied nations, and the Unified Thread Standard was adopted by the Screw Thread Standardization Committees of Canada, the United Kingdom, and the United States on November 18, 1949 in Washington, D.C., with the hope that they would be adopted universally. (The original UTS standard may be found in ASA (now ANSI) publication, Vol. 1, 1949.) UTS consists of Unified Coarse (UNC), Unified Fine (UNF), Unified Extra Fine (UNEF) and Unified Special (UNS). The standard was not widely taken up in the UK, where many companies continued to use the UK's own British Association (BA) standard.
However, internationally, the metric system was eclipsing inch-based measurement units. In 1947, the ISO was founded; and in 1960, the metric-based International System of Units (abbreviated SI from the French Système International) was created. With continental Europe and much of the rest of the world turning to SI and the ISO metric screw thread, the UK gradually leaned in the same direction. The ISO metric screw thread is now the standard that has been adopted worldwide and has mostly displaced all former standards, including UTS. In the U.S., where UTS is still prevalent, over 40% of products contain at least some ISO metric screw threads. The UK has completely abandoned its commitment to UTS in favour of the ISO metric threads, and Canada is in between. Globalization of industries produces market pressure in favor of phasing out minority standards. A good example is the automotive industry; U.S. auto parts factories long ago developed the ability to conform to the ISO standards, and today very few parts for new cars retain inch-based sizes, regardless of being made in the U.S.
Even today, over a half century since the UTS superseded the USS and SAE series, companies still sell hardware with designations such as "USS" and "SAE" to convey that it is of inch sizes as opposed to metric. Most of this hardware is in fact made to the UTS, but the labeling and cataloging terminology is not always precise.
Engineering drawing
In American engineering drawings, ANSI Y14.6 defines standards for indicating threaded parts. Parts are indicated by their nominal diameter (the nominal major diameter of the screw threads), pitch (number of threads per inch), and the class of fit for the thread. For example, “.750-10UNC-2A” is male (A) with a nominal major diameter of 0.750 in, 10 threads per inch, and a class-2 fit; “.500-20UNF-1B” would be female (B) with a 0.500 in nominal major diameter, 20 threads per inch, and a class-1 fit. An arrow points from this designation to the surface in question.
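As a sketch of how such a designation might be decomposed programmatically (the regular expression and field names below are hypothetical and cover only the simple callouts shown above):

```python
import re

# Hypothetical pattern for simple UTS callouts such as ".750-10UNC-2A".
CALLOUT = re.compile(
    r"(?P<diameter>\d*\.\d+|\d+)-(?P<tpi>\d+)\s*(?P<series>UNC|UNF|UNEF|UNS|UN)"
    r"-(?P<fit_class>[123])(?P<gender>[AB])"
)

def parse_callout(text):
    m = CALLOUT.fullmatch(text.strip())
    if m is None:
        raise ValueError(f"unrecognized thread callout: {text!r}")
    return {
        "major_diameter_in": float(m.group("diameter")),
        "threads_per_inch": int(m.group("tpi")),
        "series": m.group("series"),
        "class_of_fit": int(m.group("fit_class")),
        "external": m.group("gender") == "A",  # A = external (male), B = internal (female)
    }

print(parse_callout(".750-10UNC-2A"))
# {'major_diameter_in': 0.75, 'threads_per_inch': 10, 'series': 'UNC',
#  'class_of_fit': 2, 'external': True}
```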
There are many ways to generate a screw thread, including traditional subtractive processes (various kinds of cutting, such as single-pointing, taps and dies, die heads, and milling; grinding; and occasionally lapping to follow the other processes); deformative processes (thread forming and rolling); molding and casting (including die casting and sand casting); newer additive techniques; and combinations thereof.
- Inspection of thread geometry is discussed at Threading (manufacturing) > Inspection.
See also
- Acme Thread Form
- Bicycle thread
- Buttress Thread Form
- Dryseal Pipe Threads Form
- Filter thread
- Garden hose thread form
- Metric: M Profile Thread Form
- National Thread Form
- National Pipe Thread Form
- Nut (hardware)
- Tapered thread
- Thread pitch gauge
- Degarmo, Black & Kohser 2003, p. 741.
- Brown, Sheldon. "Bicycle Glossary: Pedal". Sheldon Brown. Retrieved 2010-10-19.
- Bhandari, p. 205.
- ISO 1222:2010 Photography -- Tripod connections
- Löwenherz thread
- Ryffel 1988, p. 1603.
- Sewing machine thread
- Roe 1916, pp. 9–10.
- ASME 125th Anniversary: Special 2005 Designation of Landmarks: Profound Influences in Our Lives: The United States Standard Screw Threads
- Roe 1916, pp. 248–249.
- Roe 1916, p. 249.
- Wilson pp. 77–78 (page numbers may be from an earlier edition).
- Bhandari, V B (2007), Design of Machine Elements, Tata McGraw-Hill, ISBN 978-0-07-061141-2.
- Degarmo, E. Paul; Black, J T.; Kohser, Ronald A. (2003), Materials and Processes in Manufacturing (9th ed.), Wiley, ISBN 0-471-65653-4.
- Green, Robert E. et al. (eds) (1996), Machinery's Handbook (25 ed.), New York, NY, USA: Industrial Press, ISBN 978-0-8311-2575-2.
- Roe, Joseph Wickham (1916), English and American Tool Builders, New Haven, Connecticut: Yale University Press, LCCN 16011753. Reprinted by McGraw-Hill, New York and London, 1926 (LCCN 27-24075); and by Lindsay Publications, Inc., Bradley, Illinois, (ISBN 978-0-917914-73-7).
- Wilson, Bruce A. (2004), Design Dimensioning and Tolerancing (4th ed.), Goodheart-Wilcox, ISBN 1-59070-328-6.
- International Thread Standards
- ModelFixings - Thread Data
- NASA RP-1228 Threaded Fastener Design Manual
These instructions are written in a simple format to explain how to find the diameter of a circle.
The diameter of a circle is 2 times the radius: it spans two radii placed end to end through the circle's center point, forming a straight line across the entire circle. The formula is expressed as d = 2r, where d is the diameter and r is the radius.
For example, if r = 4, then you multiply 4 by 2. In this case the diameter would be 8: d = 8.
You always multiply the radius by 2 to get the diameter of a circle.
You can divide the diameter by 2 to get the radius of a circle if the diameter is known.
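A minimal sketch of the two calculations above:

```python
def diameter(radius):
    """Diameter of a circle from its radius: d = 2r."""
    return 2 * radius

def radius(diameter_value):
    """Radius of a circle from its diameter: r = d / 2."""
    return diameter_value / 2

print(diameter(4))   # 8
print(radius(8))     # 4.0
```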
Several years ago, NASA started making plans to send robots to explore the deep, dark craters on the Moon. As part of these plans, NASA needed modeling tools to help engineer unique electronics to withstand extremely cold temperatures.
According to Jonathan Pellish, a flight systems test engineer at Goddard Space Flight Center, “An instrument sitting in a shadowed crater on one of the Moon’s poles would hover around 43 K”—that is, 43 kelvin, equivalent to -382 °F. Such frigid temperatures are one of the main factors that make the extreme space environments encountered on the Moon and elsewhere so extreme.
Radiation is another main concern.
“Radiation is always present in the space environment,” says Pellish. “Small to moderate solar energetic particle events happen regularly and extreme events happen less than a handful of times throughout the 7 active years of the 11-year solar cycle.” Radiation can corrupt data, propagate to other systems, require component power cycling, and cause a host of other harmful effects.
In order to explore places like the Moon, Jupiter, Saturn, Venus, and Mars, NASA must use electronic communication devices like transmitters and receivers and data collection devices like infrared cameras that can resist the effects of extreme temperature and radiation; otherwise, the electronics would not be reliable for the duration of the mission.
Since 1987, NASA has partnered with Huntsville, Alabama-based CFD Research Corporation (CFDRC), a company that specializes in engineering simulations and innovative designs and prototypes for aerospace and other industries. A few years ago, CFDRC received funding from Marshall Space Flight Center’s Small Business Innovation Research (SBIR) program to refine an existing software tool to predict the behavior of electronics in the cold, radiation-filled environment of space.
During the first phase of its work, in collaboration with Georgia Tech, CFDRC enhanced and demonstrated a technology called NanoTCAD for predicting the response of silicon-germanium (SiGe) semiconductor technology to radiation. During its second phase, the company demonstrated and validated NanoTCAD for temperatures ranging from -382 °F to 266 °F.
Marek Turowski, the director of the nanoelectronic and plasma technology group at CFDRC explains how, as electronic parts become smaller, the effects of radiation and temperature become more severe. “When radiation particles bombard a microchip, it is like hail hitting a car,” he says.
Even though hail may not damage a large truck, the same hail could cause significant damage to a truck the size of a toy. Likewise, as electronic devices decrease in size, radiation particles can damage them more easily.
Being able to predict the behavior of nanoelectronics in the extreme space environment reduces the risk of failure during a critical NASA mission. Using NanoTCAD, designers can better evaluate performance and response of electronics early in the design stage, thereby reducing the costs and testing time involved. As Turowski explains it, “The purpose of NanoTCAD tools and models is to predict the behavior of electronics in space before they actually go to space. The prediction happens on the computer screen and accurately takes temperature and radiation into account.”
Pellish says NanoTCAD has already been used to evaluate key technologies for the Ice, Cloud, and land Elevation Satellite-2 (ICESat-2), scheduled for launch in 2016. ICESat-2 will look at polar ice, sea-level change, vegetation canopy height, and climate. “The NanoTCAD research on SiGe semiconductor technology processes provided a portion of the necessary insight into this technology so that it can be used in space,” he says.
NanoTCAD software is now available from CFDRC as a nanotechnology computer aided design (CAD) tool to predict the effects of extreme thermal and radiation environments on electronic systems. It is also used by CFDRC in its modeling and simulation services provided to the aerospace industry. The “nano” part of the product’s name means the software can address nano-size devices while “TCAD” stands for “technology computer aided design.”
“It solves basic physics equations,” says Turowski. “It looks at how electrons flow, how fields inside the devices behave, and how the varying temperature affects their behavior.”
Today, CFDRC’s NanoTCAD customers include electronic chip designers at Georgia Tech and Vanderbilt University. The electronics, chips, circuits, and devices that the universities are modeling with NanoTCAD are often for NASA missions. The European Space Agency and the Japanese Aerospace Exploration Agency are also potential customers of CFDRC’s NASA-improved technology.
The tool is also being employed for Department of Defense applications for space communication and surveillance systems for satellites. Entities like the Air Force and Navy design electronics that can suffer the same problems as NASA spacecraft. CFDRC also uses NanoTCAD to provide modeling, simulations, and radiation-hardening design services to national nuclear laboratories and commercial satellite designers.
According to CFDRC, the technology has led to approximately $2 million in revenue for the company, created new jobs, and led to partnerships with other defense and industrial customers.
“NASA has given us the opportunity to develop valuable technology,” says Turowski. “Now the technology is being adapted and enhanced for every new generation of electronics.”
Whether it is for the Moon, on-orbit, or other applications, CFDRC’s work with NASA is helping to make future space missions possible.