Dataset Viewer (First 5GB)
content
string | pred_label
string | pred_score
float64 |
---|---|---|
Distance Calculator?
In order to calculate distance, you need to have both time and velocity. Multiply the two together in order to get distance. You can use a distance calculator in order to find out how to calculate distance.
1 Additional Answer
Ask.com Answer for: distance calculator
City, State or Zip
City, State or Zip
Q&A Related to "Distance Calculator?"
There are many ways that you can calculate the distance between two objects depending on just how far apart they are. If they are relatively close you can figure the size of a human
1. Draw and label your two points on a grid. This will help you to visualize the problem at hand. For example, if you're given two points A and C as (1, 2) and (2,4) respectively,
1 Use this triangle. Ad 2 Cover up the factor you wish to find to get your formula e.
1. Calculate your X distance by subtracting the first point's X value from the second point's X value. For example, if we have the points (0,0,0) and (3,4,5) then the X distance would
Explore this Topic
A nautical distance calculator determines the distance between two points anywhere on the map. A nautical distance calculator finds the most direct path by tracing ...
A driving distance calculator is available online. You can use a quick trip calculator where you put in the origin and destination and it calculates the total ...
You can easily calculate driving distances online using a tool such as MapQuest. This is done by entering the address of the starting point, and address of the ...
|
__label__pos
| 0.998779 |
sap beetle
sap beetle (family Nitidulidae), any of at least 2,000 species of beetles (insect order Coleoptera) usually found around souring or fermenting plant fluids (e.g., decaying fruit, moldy logs, fungi). Sap beetles are about 12 mm (0.5 inch) or less in length and oval or elongated in shape. In some species the elytra (wing covers) cover the abdomen, while in others the tip of the abdomen is exposed. The picnic beetle (Glischrochilus fasciatus), a common North American species, is shiny black with two yellow-orange bands across the elytra.
|
__label__pos
| 0.998392 |
Link Details
Link 390877 thumbnail
User 285573 avatar
By alashcraft
Published: Apr 07 2010 / 07:23
In .NET 4.0, we have a set of new API's to simplify the process of adding parallelism and concurrency to applications. This set of API's is called the "Task Parallel Library (TPL)" and is located in the System.Threading and System.Threading.Tasks namespaces. The Parallel class found in the System.Threading.Tasks namespace “provides library-based data parallel replacements for common operations such as for loops, for each loops, and execution of a set of statements”. In this article, we will use the Invoke method of the Parallel class to call multiple methods, possibly in parallel.
• 12
• 0
• 2818
• 0
Add your comment
Spring Integration
Written by: Soby Chacko
Featured Refcardz: Top Refcardz:
1. Search Patterns
2. Python
3. C++
4. Design Patterns
5. OO JS
1. PhoneGap
2. Spring Integration
3. Regex
4. Git
5. Java
|
__label__pos
| 0.980109 |
Smoke Bathing
Newer Older
Smoke triggers the same odd contorsions as ants do. The same passive /active movements and postures are used. The rook picks up a burning cigarette bud in its beak, goes into a magnificent anting posture and rubs it up and down the inside of its arched wings.
These movements are combined with the "passive " form. The bird simply sits on top of the burning cigarette, with streched wings, thus allowing the smoke to pass through its feathers. This behaviour appears to be more common in corvids than in other birds. In the midle ages, crows, rooks and jackdaws engaged in the same odd behaviour, using not cigarettes, but smoldering embers.They sometimes carried them back to their nest. This is why crows were thoght to be responsible for starting fires. Because of this reason, these birds were known by then as "Aves Incendiaria"(fire birds). While fumigating its feathers, the rook's posture, with wings spread and head turned to one side, resembles the mythical Phoenix, the bird reborn from fire. So it may well be that these birds indulging in pest control spawned the Phoenix legend...
|
__label__pos
| 0.920956 |
Home Flora of Pakistan
Name Search
District Map
Grid Map
Inventory Project
Published In: A Voyage to Terra Australis 2(App. 3): 547. 1814. (18 Jul-10 Aug 1814) (Voy. Terra Austral.) Name publication detail
Acceptance : Accepted
Project Data (Last Modified On 4/5/2012)
Contributor Text:
Contributor Institution:
E-mail:[email protected]; [email protected]
A small genus of c. 30 species, endemic to Australia and New Caledonian; introduced in other countries as garden ornamentals. Represented by the following introduced species in our flora.
Export To PDF Export To Word
Trees or low shrubs with silky hairy young shoots. Leaves often leathery, alternate, terete or narrow, entire, acute or acuminate. Flowers bisexual, sessile, showy, variously colored, arranged in bottlebrush-like, pseudoterminal spikes and axis of inflorescence grows beyond to produce leafy shoot, bracts absent or membranous and fugacious, rarely foliaceous and persistent. Calyx tube campanulate or urceolate, adnate below to the ovary, limb 5-lobed, lobes imbricate, ±scarious, deciduous. Petals 5, orbicular, longer than sepals, patent, green, yellow, white, pink or red. Stamens numerous, conspicuous, filaments much longer than petals, showy, free or basally shortly united into a ring. Ovary 3- or 4-loculed, ovules numerous in each locule; style filiform, stigma small. Fruit a globose to urn-shaped, woody capsule, often forming dense spikes. Seeds fine, brown.
Export To PDF Export To Word
Flowers bright red. Stamens fused into a ring at thebase.
1. C. viminalis
Flowers crimson, sometimes purplish-red or lilac. Stamens not or inconspicuously united at the base.
2. C. citrinus
|
__label__pos
| 0.892249 |
Subscribe English
look up any word, like drinking watermelon:
Pounding your cranium against an obstacle till your ears hemorrhage and you lose vision due to an unattainable, impractical objective that all others prior to you have failed to accomplish using equivalent techniques.
Man 1: I've been trying to hook up with Lissy for two months now.
Man 2: Dude, she's a lesbian.
Man 1: I'm determined to make her my girl.
by Mark the man Jor Dan July 04, 2008
56 32
|
__label__pos
| 0.836726 |
Frequently Asked Questions
I can't hear any sound when I click the "Play" button on my player. What's wrong?
This may happen for a number of reasons. Try the following:
1. Check your volume. If it's on, try turning it up.
2. If you have external speakers, make sure the speakers are plugged in and turned on, and that their volume is turned up.
3. Check your computer's sound control. Make sure it's on, that the volume is turned up, and that the speakers are chosen as the sound output source.
4. Some older computers don't come with sound cards. You may need to have one installed.
Did this answer your question?
Related Questions:
|
__label__pos
| 0.896737 |
Mineral processing
From Wikipedia, the free encyclopedia
Jump to: navigation, search
A set of Cornish stamps
Before the advent of heavy machinery the raw ore was broken up using hammers wielded by hand, a process called "spalling". Before long, mechanical means were found to achieve this. For instance, stamp mills were used in Samarkand as early as 973. They were also in use in medieval Persia. By the 11th century, stamp mills were in widespread use throughout the medieval Islamic world, from Islamic Spain and North Africa in the west to Central Asia in the east.[1] A later example was the Cornish stamps, consisting of a series of iron hammers mounted in a vertical frame, raised by cams on the shaft of a waterwheel and falling onto the ore under gravity.
The simplest method of separating ore from gangue consists of picking out the individual crystals of each. This is a very tedious process, particularly when the individual particles are small. Another comparatively simple method relies on the various minerals having different densities, causing them to collect in different places: metallic minerals (being heavier) will drop out of suspension more quickly than lighter ones, which will be carried further by a stream of water. The process of panning and sifting for gold uses both of these methods. Various devices known as 'buddles' were used to take advantage of this property.[when?] Later, more advanced machines were used such as the Frue vanner, invented in 1874.
Other equipment used historically includes the hutch, a trough used with some ore-dressing machines and the keeve or kieve, a large tub used for differential settlement.
Unit operations[edit]
Mineral processing can involve four general types of unit operation: comminution – particle size reduction; sizing – separation of particle sizes by screening or classification; concentration by taking advantage of physical and surface chemical properties; and dewatering – solid/liquid separation. In all of these processes, the most important considerations are the economics of the processes and this is dictated by the grade and recovery of the final product. To do this, the mineralogy of the ore needs to be considered as this dictates the amount of liberation required and the processes that can occur. The smaller the particles processes, the greater the theoretical grade and recovery of the final product, but this however is difficult to do with fine particles as they prevent certain concentration processes from occurring.
Comminution is particle size reduction of materials. Comminution may be carried out on either dry materials or slurries. Crushing and grinding are the two primary comminution processes. Crushing is normally carried out on "run-of-mine"[2] ore, while grinding (normally carried out after crushing) may be conducted on dry or slurried material. In comminution, the size reduction of particles is done by three types of forces-compression, impact and attrition. Compression and impact forces are extensively used in crushing operations while attrition is the dominant force in grinding. The primarily used equipments in crushing are-jaw crushers, gyratory crushers and cone crushers whereas rod mills and ball mills, closed circuited with a classifier unit, are generally employed for grinding purposes in a mineral processing plant. Crushing is a dry process whereas grinding is generally performed wet and hence is more energy intensive.
Sizer 2000 for screening coarse to small particles
The simplest sizing process is screening, or passing the particles to be sized through a screen or number of screens. Screening equipment can include grizzlies,[3] bar screens,wedge wire screens, radial sieves, banna screens, multi-deck screens, vibratory screen, fine screens, flip flop screens and wire mesh screens. Screens can be static (typically the case for very coarse material), or they can incorporate mechanisms to shake or vibrate the screen. Some considerations in this process includes the screen material, the aperture size, shape and orientation, the amount of near sized particles, the addition of water, the amplitude and frequency of the vibrations, the angle of inclination, the presence of harmful materials, like steel and wood, and the size distribution of the particles.
Classification refers to sizing operations that exploit the differences in settling velocities exhibited by particles of different size. Classification equipment may include ore sorters, gas cyclones, hydrocyclones, rotating trommels, rake classifiers or fluidized classifiers.
An important factor in both comminution and sizing operations is the determination of the particle size distribution of the materials being processed, commonly referred to as particle size analysis. Many techniques for analyzing particle size are used, and the techniques include both off-line analyses which require that a sample of the material be taken for analysis and on-line techniques that allow for analysis of the material as it flows through the process.
Gravity concentration[edit]
Gravity separation is the separation of two or more minerals of different specific gravity by their relative movement in response to the force of gravity and one or more other forces (such as centrifugal forces, magnetic forces), one of which is resistance to motion (drag force) by a viscous medium such as heavy media or water.
It is necessary to determine the suitability of a gravity concentration process before it is employed for concentration of an ore. A criteria called as concentration criterion is commonly used for this purpose. It is defined as-
Concentration Criterion (CC)= (SG of heavy mineral-SG of fluid)÷(SG of light mineral-SG of fluid), where SG=specific gravity
• for CC < 1.25, not suitable for any size
There are several methods that make use of the density differences of particles-
• Heavy media or dense media separation (these include, baths, drums, larcodems, dyana whirlpool separators, and dense medium cyclones)
• Shaking tables, such as the Wilfley table[4]
• Spiral separators
• Reflux Classifier
• Jig concentrators are continuous processing gravity concentration devices using a pulsating fluidized bed.(RMS-Ross Corp. Circular Jig Plants)
• Centrifugal bowl concentrators, such as the Knelson concentrator and Falcon Concentrator
• Multi gravity separators (Falcon Concentrator, Knelson, Mozley (Enhanced Gravity Separator), Salter Cyclones (Multi-Gravity Separator) and the Kelsey Jig)
• Inline pressure Jigs
• Reichert Cones
• Sluices
These processes can be classified as either dense medium separation or gravity separation. The difference between the two that gravity separation does not use a dense medium to operate, only water or air. Dense medium separation can be performed with a variety of mediums. These include, organic liquids, aqueous solutions, suspensions in water and suspensions in air. Of these, most industrial processes use suspensions in water. The organic liquids are not used due to their toxicity and difficulties in handling. The aqueous solution as a dense medium is used in coal processing in the form of a belknap wash and the suspension in air is used in water-deficient areas, like china, where sand is used to separate coal from the gangue minerals. The dense medium separation is also classified as absolute gravity separation as the sinks and the floats travel in different directions. The gravity separation is also called relative gravity separation as they separate particles due to their differences in the magnitude of the particle response to a driving force.
These processes can also be classified into multi-G and single G processes. The difference is the magnitude of the driving force for the separation. Multi-G processes allow the separation of fine particles to occur and these particles can be in the range of 10 to 50 micron. The single G process are only capable of processing particles that are greater than 80 micron in diameter.
Of the gravity separation processes, the spiral concentrators and circular jigs are two of the most economical due to their simplicity and use of space. They operate by flowing film separation and can either use washwater or be washwater-less. The washwater spirals separate particles more easily but can have issues with entrainment of gangue with the concentrate produced..
Froth flotation cells used to concentrate copper and nickel sulfide minerals, Falconbridge, Ontario.
Froth flotation[edit]
This process was invented in the 19th century in Australia. It was used to recover a sphalerite concentrate from tailings, produced using gravity concentration. Further improvements have come from Australia in the form of the Jameson Cell, developed at the University of Newcastle, Australia. This operated by the use of a plunging jet that generates fine bubbles. These fine bubbles have a higher kinetic energy and as such they can be used for the flotation of fine grained minerals, such as those produced by the isamill.
Electrostatic separation[edit]
Magnetic separation[edit]
Automated Ore Sorting[edit]
Modern, automated sorting applies optical sensors (visible spectrum, near infrared, X-ray, ultraviolet), that can be coupled with electrical conductivity and magnetic susceptibility sensors, to control the mechanical separation of ore into two or more categories on an individual rock by rock basis. Also new sensors have been developed which exploit material properties such as electrical conductivity, magnetization, molecular structure and thermal conductivity. Sensor based sorting has found application in the processing of nickel, gold, copper, coal and diamonds.
Other processes[edit]
See also[edit]
2. ^ Run-of-mine: The raw mined material as it is delivered prior to treatment of any sort. "Dictionary of Mining, Mineral, and Related Terms". Hacettepe University - Department of Mining Engineering. Retrieved 2010-08-07.
3. ^ Grizzly: a grid of iron bars that allows ore of the correct size to travel down the ore pass to the bottom of the mine, ready for hoisting to the surface. An active, articulating "grizzly" that is able to roll, scrub, clean and discharge oversize rock and boulder of up to 4 foot (1220 mm minus)diameter, while recovering all the 2 inch minus (51 mm minus) slurry material for further screening, separation and recovery of target metals/minerals is the DEROCKER system (RMS-Ross Corporation) "Geevor Tin Mine: Grizzly men". Geevor Tin Mine Museum. Retrieved 2010-08-07.
4. ^ "Mill Machines: The Wilfley Table". Copper Country Explorer. Retrieved 2010-08-07.
• Dobby, G.S., and Finch, J.A., 1991, Column Flotation: A Selected Review, Part II, 4(7-11) 911-923
• Finch, J.A., 1995, Column Flotation: A Selected Review-Part IV: Novel Flotation Devices, Minerals Engineering, 8(6), 587-602
• Miettinen, T, Ralston, J., and Fornasiero, D., The Limits of Fine Particle Flotation, Minerals Engineering, 23, 420-437 (2010)
• Nguyen, A.V., Ralston, J., Schulze, H.S., 1988, On modelling of bubble–particle attachment probability in flotation, Int. J. Min. Proc., 53(4) 225-249
• Probstein, R. F. (2003) Physicochemical Hydrodynamics: An introduction, Hoboken, New Jersey, John Wiley & Sons, Inc., 141-142.
• Ralston, J. Fornasiero, D., Hayes, R., 1999, Bubble Particle Attachment and Detachment in Flotation, Int. J. Min. Proc., 56(1-4) 133-164
• Various articles in J. Day & R. F. Tylecote, Metals in the Industrial Revolution (Institute of Metals, London 1991).
|
__label__pos
| 0.812047 |
Take the 2-minute tour ×
How are traits leveled up for the Competent, Specialist, and Expert achievements?
share|improve this question
add comment
1 Answer
up vote 6 down vote accepted
According to the manual
All traits start at level 1 and may be improved up to level 5 by successfully completing missions and scenarios using these traits. Improving a trait either increases a positive effect granted by the trait or decreases a negative effect 10 imposed by the trait. Trait level is saved automatically. Traits can never go down a level.
So it sounds like the key is to use each trait during a successful game.
share|improve this answer
add comment
Your Answer
|
__label__pos
| 0.825784 |
List of bones of the human skeleton
From Wikipedia, the free encyclopedia - View original article
Jump to: navigation, search
Front view of a skeleton of an adult human
Back view of a skeleton of an adult human
A typical adult human skeleton consists of 206 bones. Individuals may have more or fewer bones than this owing to anatomical variations. The most common variations include additional (i.e. supernumerary) cervical ribs or lumbar vertebra. Sesamoid bone counts also may vary among individuals. The figure of 206 bones is commonly repeated, but must be noted to have some peculiarities in its method of counting. As noted below, the craniofacial bones are counted separately despite the synostoses, which occur naturally in the skull. Some reliable sesamoid bones (e.g., pisiform) are counted, while others (e.g., hallux sesamoids) are not. The count of bones also changes with age, as multiple ossific nuclei joined by synchondroses fuse into fewer mature bones, a process which typically reaches completion in the third decade of life.
Bones of an adult:
Cranial (8)
Facial bones (14):
In the middle ears (6):
In the throat (1):
In the shoulder girdle (4):
In the thorax (25):
In the vertebral column (24):
In the arms (2):
In the forearms (4):
In the hands excluding sesamoid bones (54):
In the pelvis (4):
In the thighs (2):
In the legs (6):
In the feet excluding sesamoid bones (52):
|
__label__pos
| 0.981507 |
where the writers are
Serkalem Fasil | Serkalem Fasil
aberjhani's picture
Ethiopian Journalist and "political prisoner" Eskinder Nega. Political relations between China and the United States may have been visibly strained due to Chinese activist and lawyer Chen Guangcheng’s unexpected bid for asylum last week but diplomacy and fellowship between authors from...
|
__label__pos
| 0.985684 |
What's the Expression - Autism
What's the Expression - Autism
100 - 500 downloads
Add this app to your lists
As humans we always express what we need. With expressions one can explain himself/herself in a better way. Children with autism spectrum disorders usually don’t understand how to express themselves in different situations. To address this developmental issue, we has brought an app called What's the Expression - Autism to help child with special needs learn different expressions such as happiness, sadness, anger, and surprise.
*** About WebTeam Corporation ***
** About ABA **
Tags: feelings, autism learn, feelings pictures for autistic, learn feelings apk, feelings test, autism feelings, feelings autism, autism learning tools, visual script autism, feelings images pictures.
Comments and ratings for What's the Expression - Autism
• (32 stars)
by A Google User on 12/04/2012
I thinkis too simple. I thought it would have more levels. I think having the spelling of the feeling doesnt help ro focus on the expressions which is what our kids need to identify. My son looked at the words instead of the expressions of the faces.
|
__label__pos
| 0.999332 |
Childrens Cartoons for T-Shirts
Job Description
Looking for 6 cartoon illustrations...
The finished product should just be the black and white line Art. Friendly and Happy for young children. - we will add the colors in later so ONLY THE BLACK AND WHITE LINE ART.
Attached are some examples of the type of finished products we are looking for.
The 5 animals that we need.
1. Axolotl
2. Tenerec
3. Star Nosed Mole
4. Leafy Sea Dragon
5. Frill Necked Lizard
6. The Last one is a Monkey floating in Space, wearing a space suit.
|
__label__pos
| 0.890827 |
[1] The identification of past climatic extremes and norms is important for a better understanding of the climate systems and the way they change. Here we present an almost continuous tree-ring and climate record from Vancouver Island, Canada for the last four millennia from Douglas-fir trees (Pseudotsuga menziesii (Mirb.) Franco var. menziesii) that are sensitive to precipitation variation. Spring droughts more severe than that of the mid-1920s occurred in the late 1840s, mid-1460s AD, and ∼ mid-1860s BC. A remarkable climatic anomaly occurred in ∼ the 19th century BC during which strong pentadecadal oscillation prevailed and radial growth decreased by 71% in four years. This event could have been the final stage in the process of climatic and environmental transition beginning 2–3 centuries earlier that led to major cultural transformation in regions sensitive to climate change.
|
__label__pos
| 0.735799 |
x = independently organized TED event
Theme: Colouring Life
London, United Kingdom
March 8th, 2014
About this event
Situated in historic Somerset House, The Courtauld is a unique institution dedicated exclusively to the study, preservation and conservation of art.
This year, TEDxCourtauldInstitute recognises The Courtauld as a centre for art yet pushes beyond its borders, exploring the ways in which we as humans reach our full potential and bring about change.
As students of art history, we constantly oscillate between the past, present, and future. We discover the history of an object and examine how it functioned and what contributed to its production. We pick apart that object, finding what materials and techniques were used. We explore the relevance of that object in the present and why that object matters. Today, we ask you to do the same about yourself, your work and your inspiration.
How have your experiences shaped who you are today? What tools did you use to carve your achievements? Why does what you do matter, and how does it impact life?
‘Colouring life’ challenges us to consider the brushstrokes that helped create personal achievement and how individuals and organizations can make a difference to people’s lives.
Just as art does, TEDxCourtauldInstitute: Colouring Life aims educate, inspire and bring awareness of human potential.
Venue and Details
Courtauld Institute of Art
Courtauld Institute of Art
Somerset House
London, WC2R ORN
United Kingdom
Event Type (what is this?) University
This event occurred in the past.
See more TEDxCourtauldInstitute events »
Organizer Default_165x165_male
Penelope du Jeu
View Profile »
|
__label__pos
| 0.986468 |
First: Mid: Last: City: State:
Robt Quitter
Get exclusive access to more than a billion public records when you sign up with USA-People-Search.com. Our sophisticated system will instantly generate accurate and extensive information about everyone named Robt Quitter. From there, you can comfortably browse the results to find the exact Robt you're looking for.
Did you find the right Robt Quitter yet? If not, simply modify your search by including extra details like previous residences or other known aliases. Any small piece of information you might have can help. Once you locate the Robt Quitter you're looking for, check out the other data we have on them, including addresses, phone numbers, and email addresses.
Name/AKAsAgeLocationPossible Relatives
|
__label__pos
| 0.983461 |
TY - SOUND DB - /z-wcorg/ DP - http://worldcat.org ID - 46345896 LA - English T1 - The Maltese falcon A1 - Hammett, Dashiell,, Prichard, Michael, PB - Books on Tape CY - Newport Beach, Calif. Y1 - 2000/// SN - 073666047X 9780736660471 AB - Sam Spade's partner is murdered while working on a case, and it is Spade's responsibility to find the killer. In his search, Spade runs mortal risks as he comes closer to the answer. ER -
|
__label__pos
| 0.998309 |
Kids Number World
Kids Number World
(5 stars)
Download for Android
100 - 500 downloads
Add this app to your lists
Numbers are presented in two ways. One way teaches in
reach us at [email protected]
Voice powered by iSpeech.
Features in Kids Number World
* Number sounds
* Counting
* Number order
* Number matching
* Numbers - This activity helps the kids to recognize numbers.
Each number from 1 to 10 has a different scenario which
* Flash Cards
Colorful flashcards for kids to learn numbers.
* Match it!
Helps kids to improve counting skills and number recognition
* Count and Drag
Help children practice counting while having fun
* Help the bug
A fun game which helps kids to learn the number order.
Overall it is a joyful ride for kids to learn numbers.
Tags: numbers for kids, learn numbers for kids, number with pictures for kids, number for kids, 110 numbers or kids, numberforkids, the number of kids in the world, numbers for kids with picture, learning number kid s, how to learn numbers for kids.
Comments and ratings for Kids Number World
|
__label__pos
| 0.999956 |
Search Images Maps Play YouTube News Gmail Drive More »
Sign in
1. Advanced Patent Search
Publication numberUS5488700 A
Publication typeGrant
Application numberUS 08/100,087
Publication dateJan 30, 1996
Filing dateJul 30, 1993
Priority dateJul 30, 1993
Fee statusPaid
Publication number08100087, 100087, US 5488700 A, US 5488700A, US-A-5488700, US5488700 A, US5488700A
InventorsAndrew Glassner
Original AssigneeXerox Corporation
Export CitationBiBTeX, EndNote, RefMan
External Links: USPTO, USPTO Assignment, Espacenet
Image rendering system with local, adaptive estimation of incident diffuse energy
US 5488700 A
A method for coloring pixels of a pixelated display to present a simulated image of a view of objects illuminated by light from light sources and light reflected off the objects is used in an apparatus for coloring pixels to create an image. According to the method, a space is defined, along with objects and light sources in the space. A view point is taken and a view surface is divided into unit areas corresponding to pixels in the image. As the illumination of points on surfaces of objects in the space is found, a data structure is saved for that point indicating its location, its orientation, and the rays of illumination which reach the point, each ray described by a direction, source, color, propagator object, and propagator type. Rays from those propagator types which identify diffuse reflections off significantly diffuse objects are saved as nearby diffuse estimators, and are used for finding the illumination at a nearby shading point without searching the entire space above the shading point for diffuse reflections.
Previous page
Next page
What is claimed is:
1. A method for coloring pixels of a pixelated display to present a simulated image of a view of objects illuminated by light from light sources and light reflected off the objects, the method comprising the steps of:
defining a space;
positioning objects and light sources in said space;
positioning a view point in said space;
positioning a view surface in said space;
dividing said view surface into a plurality of unit areas;
associating each unit area of said plurality of unit areas with a pixel of the pixelated display;
initializing a calculation storage area which includes storage for each of a plurality of calculation results, a calculation result including at least an indication of an incident light ray color, an incident light ray direction, and a reflection type selected from a specular reflection type and a diffuse reflection type;
calculating, for each unit area, a color value, said step of calculating a color value comprising the steps of:
tracing at least one ray from said view point through said unit area to intersect a shading point to be shaded on a surface of an object in the space, or to intersect a point on a background surface if no object is intersected by said traced ray;
identifying light rays incident on said shading point, wherein said light rays are identified either by determining which objects and light sources cast light rays onto said shading point, or by determining which objects and light sources provide a specular reflection of light which is incident on said shading point and retrieving a saved indication of diffuse reflections incident on said shading point;
saving an indication of said identified light rays in said calculation storage area, saving at least an indication of an incident light color and direction, and saving an indication of a reflection type for each identified light ray if more than one reflection type is saved, said saving step performed when a new indication is calculated;
calculating a reflected light which would reflect off said intersected surface at said shading point in a direction of an origin of said traced ray when illuminated by said identified light rays, including at least one calculation which is a function of incident light direction at least when the reflection type is a specular reflection type; and
assigning a color of said reflected light in said direction of said traced ray to the pixel associated with said unit area, or a combination of colors of reflected light in the direction of multiple rays if multiple rays are used.
2. The method of claim 1, wherein a color of a light is an intensity and a color.
3. The method of claim 1, wherein a color of a light is described by at least three spectral components.
4. The method of claim 1, wherein said steps are performed by an appropriately programmed digital computer.
5. The method of claim 1, wherein the pixelated display is a computer monitor.
6. The method of claim 1, wherein the pixelated display is a printer.
7. The method of claim 1, wherein said step of identifying incident rays includes the step of identifying a unique initial source for said incident rays so quantities of light energy from a light source are not accumulated more than once in determining the incident light on said shading point.
8. The method of claim 1, wherein said step of calculating a color value further comprises the steps of:
sending multiple rays from said viewpoint through different points in said unit area; and
combining the color values from each of the multiple rays into a single color value for said unit area.
9. An apparatus for rendering an image which is a simulated view of objects illuminated by light from light sources and light reflected off the objects, the apparatus comprising:
a pixelated display, wherein a plurality of pixels are displayable, each pixel colorable by a constant shade over the extent of said pixel;
a model memory, for storing representations of the objects and the light sources positioned in a space with a view point and a view surface, where said view surface is divided into a plurality of unit areas, each unit area corresponding to a pixel of said pixelated display;
an illumination memory, comprising storage for a plurality of illumination sets, wherein an illumination set describes, for a given shading point on an object in said space, incident light rays according to their color, incident direction, and ray type, said ray type being one of a direct light ray, a specularly reflected light ray, or a diffusely reflected light ray;
a calculation unit, coupled to said model memory and said illumination memory, and including an output for said pixelated display, wherein said output includes a calculation of a color of light passing through each unit area to said view point, said calculation based on which point of which object in said model memory is visible in said unit area from said view point and based on an illumination set for either said point of said object, or nearby points on a surface of said object near said point, if illumination sets are stored in said illumination memory for said nearby points, said calculation also a function of at least a ray type and an incident direction of an incident light ray described by the illumination set used; and
storage means, coupled to said calculation unit and said illumination memory, for storing illumination sets calculated when an illumination set is calculated for a point on an object.
10. The apparatus of claim 9, wherein an illumination set is a nearby diffuse estimator for its given shading point, in that the illumination set indicates the diffusely reflected light energy incident on the given shading point.
11. A method for estimating light incident on a shading point from objects in a space, where the location of the objects in the space is known and the light given off by the surfaces of the portion of the objects visible from the shading point is known, the method comprising the steps of:
tracing a plurality of sample rays from the shading point to intersect, if at all, a surface of an object in the space;
identifying visible points for each of said plurality of sample rays, where a visible point is an intersection point of each sample ray and a surface visible from the shading point;
continuing said each sample ray beyond the visible point for the sample ray to invisible intersection points, invisible intersection points being points on a surface which are not visible from the shading point;
at least for one surface containing at least one visible point and one invisible point, dividing the one surface into cells, where a cell is associated with each intersection point on a divided surface and boundaries of the cell are determined such that a metric between a point in the cell and said intersection point of the cell is lower than said metric between said point in said cell and any other intersection point on said surface;
returning to the shading point, for each sample ray, a color of light incident on the shading point from a surface area defined by a cell associated with the sample ray; and
accumulating said returned color values into a total illumination value indicating the light incident on the shading point.
12. The method of claim 11, wherein said metric is either the distance on said surface between two points or the distance in space between the two points.
The present invention relates to the field of image rendering. More specifically, in one embodiment the invention provides an improved system for rendering images which takes into account multiple diffuse reflections of light.
The process of deriving an image from a model is called "rendering" the image. The model is typically a geometrical description of a viewpoint, objects, and light sources, and their positions and orientations in a three dimensional space, often referred to as the model space, or "world". Locations in the model space are described by world coordinates, which is contrasted with image coordinates which describe a position in a view of the model space.
One well-known method of rendering an image is the process of "ray-tracing". Ray-tracing is illustrated in FIG. 1, which shows a three dimensional geometric model space 10 containing an object 12 with a surface 14, an object 16 with a surface 18, a light source 20, a view surface 22, and a center of projection, or "view" point, P. View surface 22 is divided into unit areas which have a one-to-one correspondence with pixels of a pixelated display device 24, and the pixels of device 24 collectively form an image of the model space. The image is a view of the model space when looking through view surface 22 from the perspective of view point P.
View surface 22 and the display surface of display 24, for simplicity, are shown as planar surfaces, however they need not be planar. View surface 22 also does not necessarily have to be the same shape as the display surface of display device 24 nor need viewing point P be a constant for all rays passing through view surface 22. Pixelated display device 24 could be a computer graphics monitor, a bit-mappable printer, or a device which stores bit-mapped images for later display.
The goal of ray-tracing, or any rendering process, is to determine color information each pixel in the image, such as exemplary pixel A.sub.ij of an image formed on display device 24 from the collection of pixels displayed on device 24. The color of pixel A.sub.ij depends on what objects and light sources (often referred to just as "objects", with light sources being self-luminous objects, or luminaires) are present in the model space and where they are located in that model space. More particularly, ray tracing determines the color of pixel A.sub.ij by tracing one or more rays, such as ray R', from point P through unit area a.sub.ij on view surface 22 and continuing until an object is encountered. The ray tracer then determines what color of light is given off the intersected object in the direction opposite the ray being traced (shown by ray R). Of course, ray tracing is iterative, in that rays must be sent out from the intersected point to see what light illuminates the intersected point, in order to determine what light is reflected or transmitted in the direction of ray R.
Depending on the implementation, multiple rays R' might be sent out for each unit area, and the resulting color for the multiple rays are combined (by averaging or other means) to arrive at a single color value for the unit area, and thus for the pixel. Aliasing causes edges to appear jagged, or "stair-stepped". Using multiple rays per pixel reduces aliasing, by providing "smoothed" colors in adjacent pixels.
Color can be represented in a number of ways, but is generally reducible to an n-tuple of component color values. The n-tuple of values might be an RGB representation (triplet of red, green, and blue intensities), or CYMK (quartet of cyan, yellow, magenta, and black intensities), or even a 20- or n-tuple of intensities of specific wavelengths covering the visible light spectrum. In rendering processes, color can be treated as an n-tuple or each color component could be processed as a separate monochromatic image.
Color, when used below, refers to the intensity as well as shade, so a specific color could refer to the intensity of lightness or darkness in a monochromatic image, such as a black ink intensity on a printed page, or it could refer to different shades of similar intensity, such as the color red and the color blue-green. Although colors viewed by the human eye might be suitably represented by intensity values of three components, more color components are often needed to accurately model physical effects such as reflections from a surface where the specular energy reflected from the surface is a function of wavelength. The units of color are generally the power per normalized unit of area at the point the light is being measured.
The following discussion will assume that ray R' does intersect an object. This assumption is not limiting, since a "background" object can be provided if needed, to catch all rays which would not otherwise intersect an object. The background object is behind (relative to the view) all the other objects in the model space and occupies the entire view of view surface 22 which is not occupied by other objects in model 10. If the model space has a background at infinity, then a finite background object which gives off no light should be used.
In FIG. 1, the first surface encountered by ray R' is surface 18 of object 16, at the intersection point O. Because the first intersection of ray R' is surface 18 at point O, the color value for pixel A.sub.ij is simply the color of light given off (emitted, reflected, or transmitted) by object 16 at point O along ray R. The light given off by object 16, which is non-self-luminous, is just the light reflected from object 16 at point O and the light transmitted through object 16 at point O. Since reflection and transmission (through the bottom of a fully or partially translucent surface) are handled in nearly identical ways, the discussion herein is in terms of reflection, with the understanding that transmissive effects are also included.
Although ray R' is shown as a line, it is actually a pyramid with an apex at point P and sides defined by the sides of the unit area a.sub.ij (or a portion of the unit area when multiple rays per pixel are traced). However, for small enough unit areas, the pyramid is suitably approximated by a ray through the center of the unit area, or multiple rays distributed over the unit area. Pixel A.sub.ij, which by definition can only have one color value, then has the color value of the light reflected by point O in the direction of ray R, or a suitable combination of the multiple rays passing through A.sub.ij.
Thus, the problem of ray tracing is reduced to finding the color of light reflected off the point O along ray R. The color of light reflected off a point is dependent on the light incident on that point. Because of the iterative nature of shading a point (finding its color), shading points on objects intersected by tracing rays occupies most of the computing power required for rendering an image.
Light reflected by point O is well approximated by a linear combination of a specular reflection and a diffuse reflection, with self-illumination ignored for now. Specular reflection off a point is light directionally reflected in a direction opposite the light incident on the point, i.e., a mirror-type reflection. The light being reflected strikes the surface at a point with a given angle with respect to a normal to that surface at the point, and the reflected light leaves the surface in a direction of a ray which is in the plane defined by the normal and the incident ray, although the reflected ray might be dispersed somewhat. This dispersal can be modelled by the "Phong" illumination model, which approximates the intensity of the specularly reflected light at any angle as a function of the difference between that angle and the angle of the reflected ray (call this quantity θ) and which is proportional to cos.sup.n θ, where n, the specular reflection exponent, is a characteristic of the surface which reflects the light. For higher values of n, the specular qualities of the surface approach a perfect mirror, where a single incident ray results in a single reflective ray. For lower values of n, the surface approaches a diffuse surface.
There is another kind of specular light, refractive light. Refractive light is light which arrives at point O from below surface 18, such as when object 16 is translucent an illuminated from behind. This light is treated as specular light, except that the refractive index of the surface and light sources or lit objects below the surface of the object must be taken into account. This is also known as transmissive reflection. The extension of these concepts to refractive light is straightforward, so refracted light will be ignored for now.
The other type of reflected light, diffuse reflected light, is reflected in all directions evenly, i.e. the intensity in any given direction is not a function of the outgoing direction, only a function of surface properties and the direction and color of the incident light. Objects which do not appear shiny reflect primarily diffuse reflections, whereas shiny objects have more specular reflection. Most real objects have some of both.
As hinted at above, reflected light depends not only on the light incident on the surface, but also depends on what the surface does to the light it is given. The surface converts incident light into specular reflected light, diffusely reflected light, and absorbed light. The absorption of a surface is easily modelled, and as a first approximation, the object's color indicates its absorption. The amounts of a given light which will be specularly reflected and diffusely reflected are also easily modelled.
Thus, from surface characteristics of a point on the surface of an object and the descriptions of all incident light rays on that point, the reflected light from the point can be calculated. To find the reflected light along a specific ray from a point on a surface, such as ray R leaving point O, one needs to know only the light incident on point O from one direction (the specular component), the light incident on point O from all directions (diffuse components), and the surface characteristics at point O. Thus, since diffuse reflections are omnidirectional, a diffusely reflected ray of light which could reflect in the direction of a traced ray could come from anywhere. To find specular reflections in the direction of the traced ray, one only needs to look in one direction (or a small number of directions for dispersed specular reflections), namely the direction of reflection.
FIG. 2 illustrates this point, showing surface 18 of object 16 in greater detail. The light given off of surface 18 in the direction of ray R is the light of interest, and N is a vector normal to surface 18 at point O. The light of interest for calculating the specular reflection in the direction of ray R is found by measuring the light incident on point O from the direction of ray S and factoring in the specular reflection characteristics of the surface (S' is the refractive component) at point O. The light of interest for calculating the diffuse reflections in the direction of ray R is another matter, since the region above the surface 18 must be searched for objects in all directions. Only some of these directions are illustrated by the rays D in FIG. 2. The computation becomes intractable when multiple reflections are considered--to find the color of light arriving at point O from a point on another object, such as point Q on object 12, all the rays of light arriving at point Q must be determined to find the color of light given off by object 12 at point Q in the direction of point O.
The problem of the infinite directions of rays D is simplified somewhat if the point O is visible by only a few points on illuminating objects, but most realistic looking images are complex enough that a means for dealing with the large number of light sources diffusely reflected by a point on a surface is needed. Several solutions to the problem of diffuse reflections have been proposed, but none are entirely satisfactory.
One solution is called the "radiosity" method. With radiosity, every object is divided into finite elements, and starting with the proposition that all light in a closed system is conserved, the light coupling between each finite element and each other finite element is calculated. Then, to calculate the image, the light sources illuminate all the finite elements in their field in a first pass, and then the light is coupled to other finite elements in subsequent passes, with some arbitrary limit on the number of iterations. After the radiosity of each finite element is calculated, a view point (center of projection) and a view plane are placed in the modelling space, and rays from the view point through the unit areas of the view plane are intersected with finite elements. The radiosity of an intersected finite element indicates the color of the pixel associated with that unit area.
The problem with radiosity methods is that the division of object surfaces into finite elements is an art. If the finite elements are too small, computation time is wasted, and if the finite elements are too large, the final image might appear with jagged edges (aliasing). Even with the proper division of surfaces into finite elements, computation time might be wasted calculating radiosity parameters for points of surfaces which are not visible in the desired view plane from the desired view point.
Another proposed solution is to ignore the effect of diffuse reflections from points such as point Q when determining the light incident on point O. This simplifies the calculations in many cases, since most points will appear black to point O, and only the light sources and rays of specular reflection directed at point O need to be considered. Of course, in a model space containing light sources and two objects with diffuse surfaces, no light from one object would reflect off the other. FIG. 3 illustrates the error caused by this simplification.
FIG. 3 is a model space 48, which has a light source 50 directly illuminating a wall 52 and an enclosure 54, each of which are not translucent and have diffusely reflecting surfaces facing light source 50. Wall 52 blocks light source 54 from direct view at a view point V. Another wall 56 is located behind enclosure 54 with respect to view point V and also has a diffuse surface which faces enclosure 54 and view point V. Suppose light source 50 outputs white light, the surface of enclosure 54 visible from light source 50 is black, the surface of wall 52 visible from light source 50 is red, and the surface of wall 56 facing enclosure 54 and view point V is white. In such a model, light from light source 50 will reach view point V only through two (or more) diffuse reflections along a path such as paths P, and the light reaching view point V is red. Given the geometry of model 48, specular reflections off wall 52 and then off wall 56 cannot reach view point V.
In the above example, a rendered image of model 48 will appear completely black if multiple diffuse reflections are ignored. A correct rendering is one in which the visible portion of wall 56 (area 58) is shaded with various intensities of red, the red light being brightest near the edge of wall 52.
One method of accounting for the elimination of multiple diffuse reflections in a rendering is to add in an "ambient" light source. The model for ambient light is a constant color light striking an object from an unspecified angle and diffusely reflecting from that object. Ambient light, however, does not account for the interplay of light off various objects in a model and the subtle shadings in shadows. In the example of FIG. 3, the ambient light would not necessarily be red, so area 58 would appear to be whatever color is chosen for the ambient light. Furthermore, the side of wall 52 visible from view point V would be lit by the ambient light, when wall 52 is totally dark in a correct rendering.
Ward [Ward, G. J., "A Ray Tracing Solution for Diffuse Interreflection", Computer Graphics, Vol. 22, No. 4, August 1988, pp. 85-92] presents a method of averaging indirect illumination (light from non-self-luminous surfaces) incident on a surface. At the start of a rendering, the indirect illumination at a point is calculated, used in the ray tracing process and then stored. When calculating the illumination of a point nearby the first point, the indirect illumination values stored for the first point are used for the nearby point, if the nearby point is "close enough" to the first point.
As calculations are done for points, the points are given weights which indicate the span of the surface around the points over which their calculated indirect illumination values might be usable, and for some points, a weighted average of indirect illumination values of multiple nearby points is used. Thus, at the beginning of the rendering, most points are evaluated by the primary process in which indirect illumination is calculated for the point and stored, and later in the rendering, more points are evaluated using a secondary process in which stored values are averaged from values stored during the primary process for nearby points.
One of the drawbacks of such a system is with models having rapidly changing light sources, which allows fewer evaluations to be done using the secondary process. At the limit, indirect illumination values are calculated at every point using the primary process and none of these calculations are reused at other points in the secondary process.
Further efficiencies in computing diffuse reflection contributions are still needed. Ward suggests that after some number of reflections, the light be replaced with an ambient light term, to limit the number of calculations. While this may be an improvement over the prior method of treating all reflections beyond the first diffuse bounce as ambient light, an improved method and apparatus for quickly rendering an image of a geometrical model in a defined space is still needed.
An improved method and apparatus for rendering an image of a geometrical model in a defined space is provided by virtue of the present invention.
In one method according to the present invention, pixels of a pixelated display are colored based on models of light reflected from points of objects visible from a view point through a view surface divided into unit area for each pixel. The light reflected from a point is calculated from properties of the object whose surface includes the point and from the light incident on the point. The light incident on the point is either calculated by further ray tracing, with the results of the ray tracing stored, or is calculated from a weighted average of stored ray tracing results for nearby points. The light incident on a point is stored as ray information, indicating the direction and color/intensity of the light, as well as an indication of whether the source of the light is a specular reflector, a light source, or a diffuse reflector, however, in some cases, only the diffuse contributions are stored and the other contributions are calculated as needed. In some embodiments, the intersection points, where the ray intersects an object surface, are also stored. The incident light information also includes information indicating direction of the incident light. Direction is equivalent to a point on an illumination sphere centered on the illuminated point. Of course, on opaque surfaces, the illumination sphere is actually an illumination hemisphere, although anything that applies to an illumination hemisphere also applies equally to an illumination sphere.
To evaluate incident light at a given point from the stored illumination of nearby points (either an illumination hemisphere, or a nearby diffuse estimator, or NDE), the direct illumination can be found by ray tracing to the light sources (to ensure suitable shadows), the specular illumination is found by either ray tracing to the sources of the specular reflection, via a search in regions previously found to contain specular reflections, or by interpolating values from ray information stored in suitable NDE's, and the diffuse illumination is found by averaging values from suitable NDE's, or using the directional information from suitable NDE's to narrow the search for diffuse sources. Alternatively, some illumination may be found through the use of invisibility information. The suitability of an NDE is a user-settable parameter and is also a function of the geometry between the NDE's shading point and the shading point being evaluated.
In some embodiments, an NDE contains data indicating the location of the shading point for the NDE, as well as an indication of the orientation of a surface tangent plane (or alternately, the direction of a normal to the surface) at the shading point of the NDE. In some embodiments, an improved means for determining the light associated with each returned light sampling ray is provided. Each sampling ray returns not only a color value indicative of the light incident on a shading point, but also returns an indication of the object hit and other objects intersected by a continuation of the ray. This additional information, termed "invisibility information", is used to divide the surfaces of intersected objects into cells of finite areas around each intersected point. The points intersected by the continuation of the sampling rays are labelled "invisible" intersection points, since they are not visible from the shading point. The cells around the intersection points are determined by creating a Voronoi diagram over the surfaces of the objects intersected. The light returned for each sampling ray is then just the light from that ray's cell. The light from the cells surrounding invisibility points is not calculated, as those cells are just used to affect the proper boundaries of cells which are visible from the shading point. An invisible point is due to either being on the far side of an object (i.e., a self-shadowed surface) or by being a point on an object obscured by another object.
FIG. 4 is an illustration of an apparatus which renders images according to the present invention. Digital computer 100 is shown comprising a central processing unit (CPU) 102, random access memory 104, a rendering program 106, a model database 108, a nearby diffuse estimator (NDE) database 110, and an image output bus 112 which outputs images in a form suitable to be displayed on a graphics monitor 114, printed by a bit-mappable printer 116, or stored in a bit-map graphics file format in a storage device 118. A means for user input 120 allows for user control of CPU 102, rendering program 106, and the entries of model database 108.
Rendering program 106 is read from disk storage media or from read-only memory (ROM) and is executed by CPU 102 in order to generate a bit-mapped image based on the elements of model database 108. An operation of rendering program 106 is described below, with reference to FIGS. 6(a)-(b).
Model database 108 contains entries describing elements in a model space, including elements such as a view point, a view surface, light sources, backgrounds, and light-reflecting or light-absorbing objects. Some of these objects are described by reference to geometric primitives using a three dimensional world coordinate system. For example, a view point is described by three values, the point's x, y, and z coordinates in the world coordinate system. Some objects are described by more complex functions, such as a toroid with a surface having a random texture. Some objects cannot be described by functions alone, such as an object whose surface is a painting digitized and wrapped over the object. The descriptions of these types of objects are stored in tabular, or bit-mapped, form.
In some embodiments, model database 108 does not include information on the location of the view point and the view surface. In those embodiments, the view information is provided by another database, or by user input 120 to digital computer 100. Model database 108 could be a conventional model database containing descriptions of objects in the model space stored in one of more files of a disk storage device.
Often, the model described by model database 108 is a geometrical approximation to a collection of physical objects for which an image is needed. This is useful, for example, for previewing a room in a house before it is built, or for showing the look of a room once furniture is installed. Thus, in most image rendering applications, the accurate reproduction of light interactions between light sources and objects is essential to create the visual impression that the image is in fact a physical image of an existing physical space in which the described objects are contained.
NDE database 110 contains data relating to the incident light at points on surfaces of objects located in the model space. As explained below, NDE database 110 is initially empty, but is populated by the actions of rendering program 106. The structure of data records in NDE database 110 is described below in connection with FIG. 5. CPU 102 clears NDE database 110 in an initialization step of rendering program 106 and adds records in an incident light determination step of rendering program 106.
In the process of rendering an image specified by the elements of model database 108, CPU 102 builds up a two dimensional array of pixel color values, which might be output to image output bus 112 as pixel color values are calculated, or might be cached in RAM 104 until all the color values for the image are calculated, after which they are output as a block onto image output bus 112. In some cases, the format of the image data output onto image output bus 112 is independent of the device which uses the image, however in other cases, the format of the output data is different for a given image output to different devices. For example, if monitor 114 is an RGB monitor and printer 116 is a CYMK printer, then the image might be output with color values in the RGB color space if it is output to monitor 114 but output with color values in the CYMK color space if it is output to printer 116.
FIG. 5 illustrates one representation of the light incident on a shading point, showing shading point N on a surface 150 illuminated by light from an object 001, an object 002, and a light source 154. The representation shown is an illumination hemisphere 152. The light striking shading point N will necessarily pass through hemisphere 152 (if surface 150 is translucent, a sphere is used instead of a hemisphere), therefore the incident light on shading point N can be found by sampling rays passing through hemisphere 152 and shading point N. The sample rays need not be evenly-spaced; in fact, with adaptive point-sampling, sample rays are denser in solid angles of particularly bright illumination and in solid angles which span large changes in illumination.
The sample rays sample light arriving at point N from propagator objects, which are either light sources or reflective objects. Each sample is characterized as a type I sample or a type II sample. Type I samples represent light which is from a diffuse reflection off a "significantly diffuse" surface of a propagator, and type II samples are those which are not classed into type I. A surface is significantly diffuse if the diffuse reflection off the surface is more than a predetermined threshold. This threshold is a measure of the irradiance (radiant flux per unit area) of the surface. Thus, a sample is treated as a diffuse sample if more light than the threshold arrives at the shading point as a diffuse reflection off the surface being sampled. Type II samples include direct illumination by a light source, illumination by a specular reflection, and illumination by a diffuse reflection from a surface which is not a significantly diffuse surface. It is possible for shading point N to be illuminated by more than one type of light from a given point on a propagator, in which case multiple samples would occur on hemisphere 152 with the same position. For example, rays S1 and D1 represent light from the same point on the surface of object 001. If an object is both reflective and translucent, the specular reflection and the direct light can be represented in separate type II samples or they can be combined into a single type II sample.
Each sample is shown in FIG. 5 by a ray. Ray L1 is direct illumination; ray S1 is a specular reflection, and rays D1-5 are diffuse reflections. Suppose that facet 158 of object 001 and the surface of object 002 are classified as being significantly diffuse surfaces, but that facet 160 is not. In that case, D1, D4, and D5 are type I rays and L1, S1, D2, and D3 are type II rays.
Each sample is described by the propagator of the ray (an object or a light source), the approach direction of the ray, the type of propagator, and the intensity of the light. Some embodiments include the intersection point of the ray and the propagator surface, in UV surface coordinates or XYZ world coordinates. The propagators shown in FIG. 5 are light source 154, object 001, and object 002. The approach direction of the ray indicates where the ray intersects the surface of hemisphere 152, and can be expressed as a polar angle and an azimuthal angle (or UV coordinates on the surface of the hemisphere). The type of propagator indicates whether the ray is a type I or type II ray. The intensity of the sample is expressed as the energy density of the light in that sample distributed over various components in a color space. For example, one color space is the visible spectrum sampled at n evenly-spaced wavelengths, and the value of one component of the intensity value is the energy of the sampled wavelength which is incident on a unit area normal to the ray and includes the shading point. Table 1 shows what an illumination data structure for hemisphere 152 might look like, assuming monochromatic light for simplicity.
TABLE 1__________________________________________________________________________Illumination data structure for point NRAY.sub.-- ID PROP.sub.-- OBJ PROP.sub.-- PNT PROP.sub.-- TYPE DIRECTION INTENSITY__________________________________________________________________________L1 L154 (X,Y,Z).sub.L1 II (U.sub.L1, V.sub.L1) E.sub.L1S1 OB001 (X,Y,Z).sub.S1 II (U.sub.S1, V.sub.S1) E.sub.S1D1 OB001 (X,Y,Z).sub.D1 I (U.sub.D1, V.sub.D1) E.sub.D1D2 OB001 (X,Y,Z).sub.D2 II (U.sub.D2, V.sub.D2) E.sub.D2D3 OB001 (X,Y,Z).sub.D3 II (U.sub.D3, V.sub.D3) E.sub.D3D4 OB002 (X,Y,Z).sub.D4 I (U.sub.D4, V.sub.D4) E.sub.D4D5 OB002 (X,Y,Z).sub.D5 I (U.sub.D5, V.sub.D5) E.sub.D5__________________________________________________________________________
RAY.sub.-- ID identifies the sample ray in the data structure; PROB.sub.-- OBJ identifies which object is intersected by the identified ray; PROP.sub.-- PNT identifies the intersection point in world coordinates of the identified ray and the intersected object; PROP.sub.-- TYPE identifies the propagation type of the identified ray as either type I or type II; DIRECTION identifies the direction of the identified ray in polar coordinates (or UV coordinates on a unit hemisphere); and INTENSITY identifies the intensity of the sampled light associated with the identified ray. The data structure also includes header elements identifying the point N for which the structure applies, usually by its location in world coordinates, and elements identifying the surface and the object containing point N and the local orientation of the surface at point N. Using this data structure and referencing the surface parameters of the object identified by the data structure, the reflected light from point N in any direction can be calculated.
In some embodiments, each ray record (i.e., a line in Table 1) includes a field (column) identifying the location of the intersection point on the propagator which produces the light for the ray (PROP.sub.-- PNT). This information can be used to calculate the distance between the shading point and the intersection point, which is useful for approximating light attenuation due to intervening atmosphere, among other things.
If the location of the intersection point on the propagator is fully specified in world coordinates, the direction of the ray in polar coordinates does not necessarily need to be stored, as it can be calculated from the intersection point and shading point coordinates.
Since the light in any direction can be easily calculated from an illumination hemisphere for a shading point in the view, the bulk of the effort of the rendering process for that shading point is in generating the shading point's illumination hemisphere. Additional hemispheres are also needed on surfaces which reflect light towards a shading point, in order to determine the light arriving from those surfaces. Thus, much of the computational effort of rendering can be eliminated if the information in the illumination hemispheres can be reused. This reusable information is stored as nearby diffuse estimators (NDE), as described below.
An NDE is essentially an already-calculated illumination hemisphere saved in NDE database 110. Each NDE in NDE database 110 is associated with the shading point in the model space which was the shading point of the illumination hemisphere. An NDE comprises data elements for the shading point's location, the orientation of the surface at that point, and a plurality of data elements each describing a sample of light incident on the point such as the lines of Table 1. An NDE also contains an additional data element not necessarily found in the illumination data structure: a "diffuse sum" indicating the sum irradiance at the point. In some embodiments, only the type I rays are stored in an NDE data structure, in which case, the propagator type field is not needed in the NDE structure.
FIGS. 6(a)-(b) form a flowchart illustrating one method of operation of rendering program 106 (shown in FIG. 4) when executed by CPU 102. The operation of rendering program 106 is described below with reference to model space 10, shown in FIG. 1, although other model spaces work equally well. CPU 102 begins the program in block 200, and proceeds to block 202. In block 202, CPU 102 reads model database 108 which describes the model space, and the CPU initializes its internal tables with the necessary limits and program variables, then proceeds to block 204.
In block 204, the program initializes NDE database 110, however, for some conditions, such as where a new view is to be generated from a model for which a view has already been generated, the prior contents of NDE database 110 are used and NDE database 110 is not cleared. The program then proceeds to block 206.
In block 206, the program selects an uncolored pixel to color. CPU 102 maintains a program variable indicating which pixel, if any, was last colored. If the pixels are colored in a determined order, such as top row to bottom row and from left to right within a row, then the next pixel to be colored is determined from the last pixel variable. Once the pixel (A.sub.ij for example) is selected, the program proceeds to block 208 to evaluate a color viewed through unit area A.sub.ij, Alternatively, in block 206, an inner loop is performed once for each of a plurality of tracing rays through one pixel, and the results of multiple samplings are combined to form a single color, by averaging or other combination method. With the proper set of multiple tracing rays through a single pixel, "staircase" edges and other aliasing effects can be reduced in the image formed by the pixels.
In block 208, the program traces ray R' from view point P through the center of unit area A.sub.ij until intersecting an object at a shading point, in this case, object 16 at point 0. Once the shading point O is located in model space 10, the program proceeds to block 210. Shading point O is relevant to the rendering process because the light reflected off shading point O in the direction of ray R determines the color of pixel A.sub.ij.
In block 210, the program determines the light incident on the shading point. FIG. 6(b) shows more detail as to how this step is done. Once the light incident on the shading point is determined, the program proceeds to block 212. In either block 208 or block 210, if the orientation of surface 18 at point O is not known, it is extracted from the information in model database 108. The orientation can be described either by local tangents to the surface, or a normal vector, such as vector N in FIG. 2.
In block 212, the program uses the illumination calculated in block 210 to determine how shading point O is illuminated. Model database 108 includes, for each object, surface properties of the object. An example of a surface property is a surface which specularly reflects 90% of the light energy incident on the surface, diffusely reflects 10% of all red wavelengths, and absorbs the rest. This surface might describe the surface of a fairly shiny red object. The surface properties need not be constant over the surface of the object; a multi-colored object with varied degrees of shine could be placed in the model space. From the surface property at the shading point, the point's surface orientation, and the illumination which was determined in block 210, the light reflected in the direction of ray R is calculated, in block 212. Because the illumination information includes directional information, the specular reflection off the shading point can be calculated as well as the diffuse reflections.
In block 214, pixel A.sub.ij is assigned a color of the light reflected along ray R. Color of light describes the shade and the intensity of the light. Thus, the color assigned to pixel A.sub.ij visually indicates what light arrives at unit area A.sub.ij in the direction of ray R. Any necessary conversions to a color space of the means for display can be done at this time.
In block 216, the program checks to see if all N colored. If not, the program proceeds again to block 206 to select a next pixel. If all pixels have been colored, then those pixels form the desired image of model space 10, and the rendering program ends. At this point, CPU 102 might output the image to an output device over image output bus 112. Pixels may be colored in any order, but if nearby pixels are colored near in time to each other, memory paging might enhance the speed of computation. For example, if RAM 104 can hold the NDE's and pixel color values for a block of 8 portions of NDE database 110 can be paged into RAM 104 and accessed repeatedly, thereby reducing the total number of access of NDE database 110 for a given rendering.
FIG. 6(b) illustrates the operation of block 210 of FIG. 6(a) in more detail. To begin, the program proceeds to block 250. In block 250, the program checks whether at least three suitable NDE's are available in NDE database 110. If they are, the program proceeds to block 252 and then returns at block 258, otherwise the program proceeds to block 254, then to block 256, and the returns at block 258.
In block 250, the program determines the existence of suitable NDE's by searching NDE database 110 heuristically and testing the NDE's therein against suitability parameters. Each NDE contains a field which indicates the location of a center point for that NDE (which was the shading point for the illumination hemisphere from which the NDE is derived), as well as a field indicating the orientation of the surface of an object through the center point, a field identifying the object the point is on, and a field indicating a diffuse sum (the sum irradiance at the center point from all directions). The suitability of an NDE for estimating light at a nearby shading point is guided by three quantities: 1) the distance between the NDE and the shading point, 2) the curvature of the surface between the NDE and the shading point, and 3) the difference between the diffuse sums of various suitable NDE's. The lower each of these quantities are, the more suitable the NDE.
In the ideal limiting case, all of these quantities are zero, and the only suitable NDE's are those exactly at the shading point, which makes the ideal limiting case equivalent to evaluating NDE's for every point. Thus, maximum allowed values for the three quantities will depend on the image accuracy desired and the allowable computational cost for rendering the image. The allowable maximums might not be independent maximums, but instead some allowed maximums might rise if other allowed maximums are reigned in. Furthermore, the actual maximum values might be user-settable when an image is rendered.
If three suitable NDE's are found, the NDE's are used to generate the illumination hemisphere for the shading point in block 252. In alternate embodiments, the number of suitable NDE's required by the query of block 250 might be one, two, or more than three. Requiring a higher number of NDE's might provide more accurate results, but at a cost of generating NDE's for more points.
In block 252, an illumination hemisphere is generated for the selected shading point using the suitable NDE's found in block 250. FIG. 7 illustrates this process. FIG. 7 shows a surface 150 with an illumination hemisphere 162 centered on a shading point B, and NDE's 164a-c centered on three center points N.sub.a-c, along with light source 154 and objects 001 and 002 which illuminate point B. In FIG.7, only type II illumination rays (L1, S1, D2, D3) are shown.
The type I rays for point B are determined from the type I rays stored in the NDE's. Typically, this is done by averaging the diffuse illumination of the three NDE's and setting the diffuse illumination of point B to that average. The type II rays are determined either by scanning for type II sources or by interpolating between saved type II rays in the NDE's. Alternatively, the type II rays are determined by scanning only parts of the illumination hemisphere for samples, where the parts to scan are indicated by the direction of saved type II rays in the NDE's. Optionally, the newly created illumination hemisphere is saved as a new NDE, as described above.
If suitable NDE's are not found in block 250, the program proceeds to block 254, where an illumination hemisphere is generated as described above in connection with FIG. 5. When an illumination hemisphere is generated without reference to NDE's, it is saved as an NDE, since points nearby a point lacking suitable NDE's will likely also be lacking suitable NDE's. Finally, the program proceeds from either block 252 or block 256 to block 258 and the program returns to the flow shown in FIG. 6(a). The process shown in FIG. 6(b) might be recursive, as illumination of intersection points on propagators needs to be calculated to find the illumination of shading points. The recursion begins the same process as described in connection with FIG. 6(b), performed for the intersection point on the propagator. Typically, the recursion has a limit to prevent an infinite recursion. This limit could be either a limit on the number of recursion layers allowed (i.e., illumination after three diffuse reflections is not considered) or a limit on the lowest light level (i.e., a string of multiple reflections is truncated so that the total attenuation along the reflection path is less than a set limit).
In FIG. 6(b), incident light on a shading point (which is on a "shading object") is determined by sending out sampling rays from the shading point, with each sampling ray going out in a direction from the shading point until it intersects an object which gives off light, either an original light source or an object which reflects light incident on it. If a model does not have a bounding object or surface, such as the "walls" of a room being modelled, a bounding surface can be added without affecting the model, even if it is necessary to make the bounding surface a completely black, nonreflective surface. With a bounding surface, every sample ray is assured of hitting something and is assured that when the bounding surface is hit, nothing else would have been hit (i.e., the bounding surface does not obscure any other objects in the model space). For the following discussion on methods of determining incident light from the direction of a ray, it is generally irrelevant whether the object is the originator of the light or whether it is a non-self-luminous object, so the discussion will refer to everything as objects without loss of generality.
If the shading object is opaque, then the sampling rays pass through the illumination hemisphere which is centered over the shading point and oriented with its circular edge in the plane tangential to the shading point on the surface of the shading object. For translucent objects, sampling rays pass through an illumination sphere. The sampling rays are sent out until they hit an object, and then the rays "return" a sample color of the light given off by the object. With adaptive sampling, one or more initial rays are sent out and their returned samples are examined to decide where, if at all, to send additional rays. For example, where some of the rays return samples which indicate a high spatial frequency (rapid color variations over a solid angle, such as edges, shadows, or complex patterns on object surfaces), additional rays may be sent out in the direction of the high frequencies. Once the adaptive sampling process reaches a point where no more rays are sent out, a light accumulation process accumulates the color samples returned by the rays into an illumination data structure. This accumulation step might just be a gathering step where all the rays sent out are stored, or an actual accumulation step where the returned rays are "summarized" by a smaller number of rays.
The color returned by a ray is a measure of the light from the sampling ray's direction. However, the light from a single point in the illumination sphere is infinitesimal unless the ray happens to exactly intersect a point light source. More accurately, the sample ray should return the light from a finite solid angle around the ray, where the light in that finite solid angle is the light from the surfaces of objects visible from the shading point through that finite solid angle. Which solid angles go with which sample rays is not known completely until it is determined which rays are sent in which directions, since the solid angle associated with a ray is generally those angles closer to that ray than any other ray. Of course, the term "closer" might refer to a quantity or metric which is not just angular distance, as explained below.
Since the size of the finite solid angles around the rays it not fixed until the adaptive sampling process is done sending out rays, the prior art sampling processes have generally avoided using finite solid angles due to this problem. In one prior art method, sample rays are sent out and return a color value of the object intersected by the ray, where the returned color value is the light intensity of the entire object. If more sample rays are sent out, they too return the intensity of the entire object. When sampling is complete, an accumulation process compares nearby rays to determine if more than one ray hit a particular object. If only one ray hits the object, then the contribution to the total illumination for that ray is the light of the entire object, on the theory that all the light from the object illuminates the shading point through the solid angle subtended by the object. This process ignores any obscured portions of the intersected object, resulting in the light incident on the shading point being over-estimated. The problem is that the information returned for the sample rays is only "visibility" data and doesn't include "invisibility" data.
When the accumulation process detects multiple rays hitting an object, the samples from those rays are not added, since each sample represents the entirety of the light from that object. Instead, the accumulation routine might perform a step of eliminating all but one of the rays intersecting each object, or a step of dividing the total illumination of the object all the rays intersecting the object. However, since the sampling process only returns a color value, the accumulation routine must guess which rays hit which object by looking at the returned color. If the objects in the model all have different total light intensities, then the detection of which object goes with which ray might work acceptably. However in the general case, a better method is needed.
FIG. 8 is a flowchart of a sampling and accumulation process according to the present invention which illustrates the above concept and the improvement provided by the present invention. In the embodiment shown in the drawings, the sampling and accumulation process is performed by rendering program 106, which by the way, may use parallel processing to complete the sampling process in a shorter time. Program 106 begins the process at block 300 for a given shading point, and proceeds to block 302. The program flow from block 302 is to operation blocks 304, 306, 308, 310, 312, 314, then to decision block 316. With a "yes" decision, the program proceeds to operation blocks 318, 320, 322, and then back to operation block 310. With a "no" decision, the program proceeds to operation block 324, and then ends the sampling and accumulation process at block 326.
In block 302, the program sends out an initial set of rays from the shading point to sample the objects visible from the shading point. These initial rays might be rays directed at known objects or previously calculated directions.
In block 304, the program intersects objects with the sampling rays to identify visible points. A visible point is a points of intersection of the sampling ray and a surface which is visible from the shading point. Because each ray begins at the shading point, the first point intersected by the ray is visible from the shading point. Other intersection points may also be visible, as explained below, if the first intersection point is not on an opaque surface. Because of the presence of the bounding object, a ray is guaranteed to intersect an object somewhere.
In block 306, the program extends the sample rays to intersect any surfaces beyond the visible point first intersected. These later intersection points are called invisible points, because they are obscured from the view of the shading point by whatever object was first intersected. In some embodiments, some intersection points are partially visible points, which are points on a sampling ray beyond a visible point where the visible point is on a translucent surface. A partially visible point is treated as a visible point with a light contribution adjusted according to the point's partial visibility. Because of the placement of the bounding object, the rays need not be extended once the extension of the ray intersects the bounding object.
In block 308, the surfaces intersected by at least one visible point are divided into Voronoi domains, or cells, based on all intersected points, visible or invisible. If a surfaces has no visible intersection points, it need not be divided, since no light from the object reaches the shading point, at least in the estimation of the sampling rays. Each cell is a finite area of a surface associated with an intersection point, and the area of the cell comprises all the points on the surface for which the cell's associated intersection point is the closest intersection point. The term closest refers to the minimization of a metric, which in this example is distance on the surface. This is illustrated in FIG. 9 and discussed in more detail below. Other metrics besides the separation distance of two points on the surface of the object are also possible, such as separation distance of the projection of the two points onto the illumination hemisphere, or the separation between the two points in world coordinates.
Once the cells associated with intersection points are found, in block 310, the light contribution from each cell is calculated. Depending on the desired accuracy, the light contribution is calculated by one of a number of ways. The light from each cell might be assumed to be constant, or the light from the cell could be found by integrating a two-dimensional illuminance function over the cell's area.
In block 312, the color of the cell's light is returned for each ray. Although it might not be needed for the immediate process, the intersection point is also returned, at least for the visible intersection points. In some embodiments, information related to the boundaries of the cell is also returned.
In block 314, the returned colors are examined to see if any solid angles of the illumination sphere need additional samplings, as would sometimes be required in areas of high spatial frequency color variations.
In block 316, if more samples are needed, the program proceeds to block 318, otherwise it skips to block 324.
In block 318, more sample rays are sent out.
In block 320, visible points and invisible points for the additional sample rays are found.
In block 322, the Voronoi cells are adjusted to account for the additional intersection points. Because invisible points are taken into account when generating the Voronoi diagram, obscured portions of surfaces will be approximately covered by cells of invisible points, while visible portions of surfaces will be approximately covered by cells of visible points, with sufficient samples. Thus, this technique provides, as a byproduct, indications of hidden surfaces, which is useful in other processes beyond light estimation. After the cells are adjusted for the additional points, the program loops back to block 310.
Eventually, the program will be satisfied with the number of samples, and will proceed to block 324, where the color values for each sample ray are accumulated and/or stored. Using this method, the color values for each sample ray are not the light from the entire object intersected by the sample ray, but just the contribution from a solid angle around the sample ray direction. Therefore, the total illumination of the shading point is the straight sum of all the samples.
The program returns at block 326. At this point, with the illumination of the shading point determined, some of the information collected in the sampling and accumulation routine is saved as an NDE.
FIG. 9 is an illustration of the above process applied to a model space 400. Model space 400 is shown with a shading point S, a sphere 402 and a surface 404 in the view of shading point S with sphere 402 partially obscured by surface 404, and bounding surface 406 visible from all directions from shading point S except for the solid angles subtended by sphere 402 and surface 404. Also shown are rays R.sub.A-D. Ray R.sub.A intersects sphere 402 at one visible intersection point (A.sub.O) and one invisible intersection point (A.sub.1), and also intersects bounding surface 406 at A.sub.x. Similarly, rays R.sub.B-D intersect surface 404 at visible intersection points (B.sub.0, C.sub.0, D.sub.0), bounding surface 406 at invisible intersection points (B.sub.x, C.sub.x, D.sub.x), and ray R.sub.B also intersects sphere 402 at two invisible intersection points (B.sub.1, B.sub.2). This example assumes that neither sphere 402 or surface 404 is translucent. FIG. 9 also shows a 2-D Voronoi diagram for surface 404, with cells 408B-D identified.
Suppose all four rays R.sub.A-D are sent out as the initial set of sample rays (extension to adaptive sampling is straightforward). Each ray is extended in its assigned direction from point S up to bounding surface 406, and all the intersections with objects along the ray are noted, whether visible or not. Bounding surface 406 need not explicitly exist, as it is only used to stop the extension of the ray, and any other means of limiting the search for intersection points on a ray can be used in place of bounding surface 406. If a bounding surface is used, it should be placed "behind" the visible objects. While one bounding surface is usually possible for all shading points, different bounding surfaces for different points are possible.
Once all the intersection points, visible or invisible are found, the surface of sphere 402 and surface 404 are divided into cells according to a Voronoi diagram. Note that since bounding surface 406 happens not to contain any visible intersection points in this example, it does not need to be divided into cells. Sphere 402 is divided into four cells, each associated with one of the points A.sub.0, A.sub.1, B.sub.1, or B.sub.2. Of these four cells, a light value is only returned for the A.sub.0 cell, since the other three points are invisible. Light is not returned for the other three points, but they are used find the right boundaries for the visible cells. Once the visible cells are identified, it is a simple matter to find the light contribution from those cells.
As the front view of surface 404 in FIG. 9 shows, the Voronoi diagram for the three points (B.sub.0, C.sub.0, D.sub.0) is formed by three line segments through midpoints of line segments between the points, all coming together at a point equidistant from all three points.
The above description is illustrative and not restrictive. Many variations of the invention will become apparent to those of skill in the art upon review of this disclosure. The scope of the invention should, therefore, be determined not with reference to the above description, but instead should be determined with reference to the appended claims along with their full scope of equivalents.
A further understanding of the nature and advantages of the inventions herein may be realized by reference to the remaining portions of the specification and the attached drawings.
FIG. 1 is an illustration of a model space containing light sources and objects to be imaged on a pixelated display;
FIG. 2 is an illustration of a surface of an object in the model space;
FIG. 3 is an illustration of several objects in a model space where diffuse light dominates a view;
FIG. 4 is a block diagram of a digital computer for rendering images according to the present invention;
FIG. 5 is a representation of an illumination hemisphere, which might also be a nearby diffuse estimator (NDE) stored in the NDE database shown in FIG. 4;
FIG. 6(a) is a flowchart of a rendering operation;
FIG. 6(b) is a more detailed flowchart of the incident light determining step in FIG. 6(a);
FIG. 7 is an illustration of a point on a surface in a model space which has nearby diffuse estimators;
FIG. 8 is a flowchart of a ray sampling and accumulate process for estimating the light incident on a shading point; and
FIG. 9 shows a model space which illustrates the ray sampling and accumulate operation.
Patent Citations
Cited PatentFiling datePublication dateApplicantTitle
US4865423 *Jul 5, 1988Sep 12, 1989International Business Machines CorporationMethod for generating images
US4928250 *Jul 2, 1986May 22, 1990Hewlett-Packard CompanySystem for deriving radiation images
Non-Patent Citations
1Burger, Peter, et al., "Illumination and Colour Models for Solid Objects" Interactive Computer Graphics, Chpt. 7, pp. 301-307, 1990.
2 *Burger, Peter, et al., Illumination and Colour Models for Solid Objects Interactive Computer Graphics, Chpt. 7, pp. 301 307, 1990.
3Ward, Gregory, J., et al., "A Ray Tracing Solution for Diffuse Interreflection", Computer Graphics, vol. 22, No. 4, pp. 85-92, Aug. 1988.
4 *Ward, Gregory, J., et al., A Ray Tracing Solution for Diffuse Interreflection , Computer Graphics, vol. 22, No. 4, pp. 85 92, Aug. 1988.
Referenced by
Citing PatentFiling datePublication dateApplicantTitle
US5936629 *Nov 20, 1996Aug 10, 1999International Business Machines CorporationAccelerated single source 3D lighting mechanism
US6023523 *Jun 30, 1997Feb 8, 2000Microsoft CorporationMethod and system for digital plenoptic imaging
US6222937Jun 30, 1997Apr 24, 2001Microsoft CorporationMethod and system for tracking vantage points from which pictures of an object have been taken
US6262742 *Mar 3, 1999Jul 17, 2001Discreet Logic Inc.Generating image data
US6313842Mar 3, 1999Nov 6, 2001Discreet Logic Inc.Generating image data
US6329988 *Sep 9, 1998Dec 11, 2001Seta CorporationPicture-drawing method and apparatus, and recording medium
US6366283Mar 3, 1999Apr 2, 2002Discreet Logic Inc.Generating image data
US6411297Mar 3, 1999Jun 25, 2002Discreet Logic Inc.Generating image data
US6487322Mar 3, 1999Nov 26, 2002Autodesk Canada Inc.Generating image data
US6496597Mar 3, 1999Dec 17, 2002Autodesk Canada Inc.Generating image data
US6525730Jan 29, 2002Feb 25, 2003Autodesk Canada Inc.Radiosity with intersecting or touching surfaces
US6753859 *Sep 29, 2000Jun 22, 2004Bentley Systems, Inc.Method and system for hybrid radiosity
US7034825Aug 24, 2001Apr 25, 2006Stowe Jason AComputerized image system
US7084869 *Oct 9, 2002Aug 1, 2006Massachusetts Institute Of TechnologyMethods and apparatus for detecting and correcting penetration between objects
US7202867Jan 31, 2003Apr 10, 2007Microsoft CorporationGeneration of glow effect
US7242408Aug 19, 2005Jul 10, 2007Microsoft CorporationGraphical processing of object perimeter information
US7265753 *Dec 9, 2002Sep 4, 2007Bentley Systems, Inc.Particle tracing with on-demand meshing
US7268780 *Mar 25, 2004Sep 11, 2007Matsushita Electric Works, Ltd.Simulation method, program, and system for creating a virtual three-dimensional illuminated scene
US7274365 *Jan 31, 2003Sep 25, 2007Microsoft CorporationGraphical processing of object perimeter information
US7411592Aug 19, 2005Aug 12, 2008Microsoft CorporationGraphical processing of object perimeter information
US7414625Nov 30, 2006Aug 19, 2008Microsoft CorporationGeneration of glow effect
US7626585 *Apr 19, 2007Dec 1, 2009Panasonic CorporationImage processing method, image processor, and image processing program
US8189002 *Oct 31, 2005May 29, 2012PME IP Australia Pty, Ltd.Method and apparatus for visualizing three-dimensional and higher-dimensional image data sets
US8325185 *Nov 20, 2007Dec 4, 2012Digital Fashion Ltd.Computer-readable recording medium which stores rendering program, rendering apparatus and rendering method
US20090289940 *Nov 20, 2007Nov 26, 2009Digital Fashion Ltd.Computer-readable recording medium which stores rendering program, rendering apparatus and rendering method
US20100103172 *Oct 28, 2008Apr 29, 2010Apple Inc.System and method for rendering ambient light affected appearing imagery based on sensed ambient lighting
US20100309203 *Jun 1, 2010Dec 9, 2010Kabushiki Kaisha Square Enix (Also Trading As Square Enix Co., Ltd.)Polygon processing apparatus, program and information storing medium
US20130079983 *Aug 22, 2012Mar 28, 2013Tobias EhlgenMethod for determining an object class of an object, from which light is emitted and/or reflected to a vehicle
US20130120385 *Aug 11, 2010May 16, 2013Aravind KrishnaswamyMethods and Apparatus for Diffuse Indirect Illumination Computation using Progressive Interleaved Irradiance Sampling
WO1998047108A1 *Apr 14, 1997Oct 22, 1998Connor Michael OImage composition method and apparatus
WO2000019379A1 *Sep 29, 1999Apr 6, 2000Mel SlaterEnergy propagation modelling apparatus
WO2008080172A2 *Dec 26, 2007Jul 3, 2008Ofer AlonSystem and method for creating shaders via reference image sampling
U.S. Classification345/426, 345/589
International ClassificationG06T15/50
Cooperative ClassificationG06T15/506
European ClassificationG06T15/50M
Legal Events
Jun 27, 2007FPAYFee payment
Year of fee payment: 12
Oct 31, 2003ASAssignment
Effective date: 20030625
May 20, 2003FPAYFee payment
Year of fee payment: 8
Jun 28, 2002ASAssignment
Effective date: 20020621
May 10, 1999FPAYFee payment
Year of fee payment: 4
Oct 4, 1993ASAssignment
Effective date: 19930927
|
__label__pos
| 0.82099 |
Auction Rate Bond - ARB
Filed Under:
Dictionary Says
Definition of 'Auction Rate Bond - ARB'
A debt security with an adjustable interest rate and fixed term of 20-30 years. An auction rate bond's (ARB) interest rate is determined through a modified Dutch auction (where the price starts high and gets lower and lower until buyers are found) on a set schedule every seven, 14, 28 or 35 days. Non-profit institutions and municipalities utilize ARBs as a means to reduce borrowing costs for long-term financing.
Investopedia Says
Investopedia explains 'Auction Rate Bond - ARB'
The rates on ARBs are set in a similar way to how rates on new U.S. Treasury bills are set when they are issued. However, when an auction fails due to a lack of buyers, both bondholders and bond issuers are negatively impacted. The bondholders can't sell what is supposed to be a liquid investment and issuers are forced to pay higher default rates (set when the bonds were initially sold).
Articles Of Interest
1. Learn Simple And Compound Interest
2. Accelerating Returns With Continuous Compounding
3. A Study On The Wealth Effect And The Economy
4. Diversification Beyond Stocks
5. Wealth Effect
6. Calculating Covariance For Stocks
7. Seven Market Anomalies Investors Should Know
Though they're unpredictable and heavily contested, market anomalies can often work in an investor's favor.
8. Financial Markets: Random, Cyclical Or Both?
9. The Intelligent Investor: Benjamin Graham
10. What Are The Odds Of Scoring A Winning Trade?
comments powered by Disqus
Hot Definitions
1. Ease Of Movement
2. Dangling Debit
3. Cabinet Crowd
4. Backdating
5. Abnormal Spoilage
6. "Just Say No" Defense
Trading Center
|
__label__pos
| 0.768138 |
This topic has not yet been rated - Rate this topic
BigInteger::ToByteArray Method
Converts a BigInteger value to a byte array.
Namespace: System.Numerics
Assembly: System.Numerics (in System.Numerics.dll)
array<unsigned char>^ ToByteArray()
Return Value
Type: array<System::Byte>
The value of the current BigInteger object converted to an array of bytes.
The individual bytes in the array returned by this method appear in little-endian order. That is, the lower-order bytes of the value precede the higher-order bytes. The first byte of the array reflects the first eight bits of the BigInteger value, the second byte reflects the next eight bits, and so on. For example, the value 1024, or 0x0400, is stored as the following array of two bytes:
Byte value
Negative values are written to the array using two's complement representation in the most compact form possible. For example, -1 is represented as a single byte whose value is 0xFF instead of as an array with multiple elements, such as 0xFF, 0xFF or 0xFF, 0xFF, 0xFF, 0xFF.
Because two's complement representation always interprets the highest-order bit of the last byte in the array (the byte at position Array::Length - 1) as the sign bit, the method returns a byte array with an extra element whose value is zero to disambiguate positive values that could otherwise be interpreted as having their sign bits set. For example, the value 120 or 0x78 is represented as a single-byte array: 0x78. However, 128, or 0x80, is represented as a two-byte array: 0x80, 0x00.
You can round-trip a BigInteger value by storing it to a byte array and then restoring it using the BigInteger(array<Byte>) constructor.
Caution noteCaution
If your code modifies the value of individual bytes in the array returned by this method before it restores the value, you must make sure that you do not unintentionally change the sign bit. For example, if your modifications increase a positive value so that the highest-order bit in the last element of the byte array becomes set, you can add a new byte whose value is zero to the end of the array.
The following example illustrates how some BigInteger values are represented in byte arrays.
.NET Framework
Supported in: 4.5.1, 4.5, 4
.NET Framework Client Profile
Supported in: 4
Portable Class Library
Supported in: Portable Class Library
.NET for Windows Store apps
Supported in: Windows 8
Did you find this helpful?
(1500 characters remaining)
Thank you for your feedback
© 2014 Microsoft. All rights reserved.
|
__label__pos
| 0.945223 |
Spirits That Surprise: 12 Weird Alcoholic Drinks
By Sara Schwartz
Yury Minaev/iStockphoto
Back Next
Kvass (Fermented Bread Liquor)
A mildly alcoholic beer made from fermented rye or black bread, kvass is a popular home-brewed tradition in Russia that is also commercially produced in bigger towns and cities.
The fermentation process involves pouring boiling water over dried bread, adding yeast, sugar or molasses, and a little flour, and allowing the mixture to stand for 12 hours in a warm place. The resulting brew is then strained and poured into bottles and often dried fruit, like a raisin or two, is added before sealing and letting sit an additional two days. In addition to the original version, kvass flavored with fruit and herbs are quite common.
Restart Slideshow
Culinary Intrigue
|
__label__pos
| 0.985905 |
Fantasy Fitness
1 6325 Windsor Mill Rd Gwynn Oak, MD 21207
Fantasy Fitness LLC’s classes for women fuse sexy and exotic movements with a fitness regimen that tightens and tones muscles throughout the body. It’s a system that elicits an energetic, positive vibe and bolsters self-confidence as participants shimmy down poles, dance around chairs in high heels, or gyrate hips to Latin beats. The studio's list of 10 classes also includes a mommy-and-me dance class, where moms can burn calories while the little ones learns dance steps and the importance of exercise.
Nearby Places
|
__label__pos
| 0.884612 |
Books tagged: greenbriar ghost
Full Search
Found: 1 result
The Adventure of the Greenbriar Ghost
By Jonathan Maberry
Price: $0.99 USD. Words: 7,770. Language: English. Published: November 19, 2011. Category: Fiction
Sherlock Holmes has investigated some strange cases in his celebrated career, but none stranger than when a woman claims that she has received vital clues to an unsolved murder from the ghost of her own murdered daughter. Based on an actual 19th century legal case, join Sherlock Holmes and Dr. Watson as they journey to America to solve the Adventure of the Greenbriar Ghost!
|
__label__pos
| 0.787476 |
International Language Environments Guide
Mouse Selection
The user makes a primary selection with mouse button 1. Pressing this button deselects any existing selection and moves the insertion cursor and the anchor to the position in the text where the button is pressed. Dragging while holding down mouse button 1 selects all text between the anchor and the pointer position, deselecting any text outside the range.
The text selected is influenced by the resource XmNeditPolicy, which can be set to XmEDIT_LOGICAL or XmEDIT_VISUAL. If the XmNeditPolicy is set to XmEDIT_LOGICAL and the text selected is bidirectional, the selected text is not contiguous visually and is a collection of segments. The text in the logical buffer does not have a one-to-one correspondence with the display.
As a result, the contiguous buffer of logical characters of bidirectional text is not rendered in a continuous stream of characters. Conversely, when the XmNeditPolicy is set to XmEDIT_VISUAL, the selected text can be contiguous visually but is segmented in the logical buffer. Therefore, the sequence of selection, deletion, and insertion of bidirectional text at the same cursor point does not result in the same string.
|
__label__pos
| 0.878377 |
Anatomical terminology
From Wikipedia, the free encyclopedia
(Redirected from Flexor)
Jump to: navigation, search
Anatomists and health care providers use anatomical terminology and medical terminology intermittently. These languages can be bewildering to the uninitiated, however the purpose of this language is not to confuse, but rather to increase precision and reduce medical errors. For example, is a scar “above the wrist” located on the forearm two or three inches away from the hand? Or is it at the base of the hand? Is it on the palm-side or back-side? By using precise anatomical terminology, ambiguity is eliminated. Anatomical terms derive from ancient Greek and Latin words, and because these languages are no longer used in everyday conversation, the meaning of their words does not change.[1]
Anatomical vocabulary[edit]
Anatomical terms are made up of roots, prefixes, and suffixes. The root of a term often refers to an organ, tissue, or condition, whereas the prefix or suffix often describes the root. For example, in the disorder hypertension, the prefix “hyper-” means “high” or “over,” and the root word “tension” refers to pressure, so the word “hypertension” refers to abnormally high blood pressure. The roots, prefixes and suffixes are often derived from Greek or Latin, and often quite dissimilar from their English-language variants.[1]
Latin names of structures such as musculus biceps brachii can be split up and refer to, musculus for muscle, biceps for "two-headed", brachii as in the brachial region of the arm.
The first word tells us what we are speaking about, the second describes it, and the third points to location.[citation needed]
Relative location[edit]
The anatomical position, with terms of relative location noted.
Anatomical terminology is often chosen to highlight the relative location of body structures. For instance, an anatomist might describe one band of tissue as “inferior to” another or a physician might describe a tumor as “superficial to” a deeper body structure. Terms are used to define the relative location of body structures in a body that is positioned in the anatomical position, which is standing, feet apace, with palms forward and thumbs facing outwards.[1]
To further increase precision, anatomists standardize the way in which they view the body. Just as maps are normally oriented with north at the top, the standard body “map,” or anatomical position, is that of the body standing upright, with the feet at shoulder width and parallel, toes forward. The upper limbs are held out to each side, and the palms of the hands face forward. Using the standard anatomical position reduces confusion. It does not matter how the body being described is oriented, the terms are used as if it is in anatomical position. For example, a scar in the “anterior (front) carpal (wrist) region” would be present on the palm side of the wrist. The term “anterior” would be used even if the hand were palm down on a table.[1]
When anatomists refer to the right and left of the body, it is in reference to the right and left of the subject, not the right and left of the observer. When observing a body in the anatomical position, the left of the body is on the observer’s right, and vice versa.
These standardized terms avoid confusion. Examples of terms include:[2]:4
• Anterior and posterior, which describe structures at the front (anterior) and back (posterior) of the body. For example, the toes are anterior to the heel, and the popliteus is posterior to the patella.
• Superior and inferior, which describe a position above (superior) or below (inferior) another part of the body. For example, the orbits are superior to the oris, and the pelvis is inferior to the abdomen.
• Proximal and distal, which describe a position that is closer (proximal) or further (distal) from the trunk of the body. For example, the shoulder is proximal to the arm, and the foot is distal to the knee.
• Superficial and deep, which describe structures that are closer to (superficial) or further from (deep) the surface of the body. For example, the skin is superficial to the bones, and the brain is deep to the skull. Sometimes profound is used synonymously with deep.
• Medial and lateral, which describe a position that is closer to (medial) or further from (lateral) the midline of the body. For example, the nose is medial to the eyes, and the thumb is lateral to the other fingers.
• Ventral and Dorsal, which describe structures derived from the front (ventral) and back (dorsal) of the embryo, before limb rotation.
• Cranial and caudal, which describe structures close to the top of the skull (cranial), and towards the bottom of the body (caudal).
• Occasionally, sinister for left, and dexter for right are used.[citation needed]
The skull uses different terminology, due to its embryonic origin of neuraxis.
Skull and brain[edit]
Different terms are used when it comes to the skull in compliance with its embryonic origin and its tilted position compared to in other animals.
• Rostral refers to proximity to the front of the nose, and is particularly used when describing the skull.[2]:4
When speaking of the arm different terminology is often used, so as to take account of the supination action the arm can perform. Therefor the terms ventral for anterior and dorsal for posterior are used preferentially. Aside from this additional terms are employed:
• Radial referring to the radius bone, seen laterally in the anatomical position.
• Ulnar referring to the ulna bone, medially positioned when in the anatomical position.
The three anatomical planes of the body: the sagital, transverse (or horizontal), frontal planes.
Anatomy is often described in planes, referring to two-dimensional sections of the body. A section is a two-dimensional surface of a three-dimensional structure that has been cut. A plane is an imaginary two-dimensional surface that passes through the body. Three planes are commonly referred to in anatomy and medicine:[2] :4
• The sagittal plane is the plane that divides the body or an organ vertically into right and left sides. If this vertical plane runs directly down the middle of the body, it is called the midsagittal or median plane. If it divides the body into unequal right and left sides, it is called a parasagittal plane, or less commonly a longitudinal section.
• The frontal plane is the plane that divides the body or an organ into an anterior (front) portion and a posterior (rear) portion. The frontal plane is often referred to as a coronal plane, following Latin corona, which means "crown".
• The transverse plane is the plane that divides the body or organ horizontally into upper and lower portions. Transverse planes produce images referred to as cross sections.
Functional state[edit]
Anatomical terms may be used to describe the functional state of an organ:[citation needed]
• Anastamoses refers to the connection between two structures previously branched out, such as blood vessels or leaf veins.
• Patent, meaning a structure such as an artery or vein that abnormally remains open, such as a patent ductus arteriosus, referring to the ductus arteriosus which normally becomes ligamentum arteriosum within three weeks of birth.
• Visceral and parietal' describe structures that relate to an organ (visceral), or the wall of the cavity that the organ is in (parietal). For example, the parietal peritoneum surrounds the abdominal cavity.
• Paired, referring to a structure that is present on both sides of the body. For example, the hands are paired structures.
A body that is lying down is described as either prone or supine. Prone describes a face-down orientation, and supine describes a face up orientation. These terms are sometimes used in describing the position of the body during specific physical examinations or surgical procedures.[1]
The human body is shown in anatomical position in an anterior view and a posterior view. The regions of the body are labeled in boldface.
The human body’s numerous regions have specific terms to help increase precision. Notice that the term “brachium” or “arm” is reserved for the “upper arm” and “antebrachium” or “forearm” is used rather than “lower arm.” Similarly, “femur” or “thigh” is correct, and “leg” or “crus” is reserved for the portion of the lower limb between the knee and the ankle.[1]
When describing the position of anatomical structures, landmarks may be used to describe location. These landmarks may include structures, such as the umbilicus or sternum, or anatomical lines, such as the midclavicular line from the centre of the clavicle.
Body cavities[edit]
Different body cavities (anterior mediastinum not visible)
The ventral cavity includes the thoracic and abdominopelvic cavities and their subdivisions. The dorsal cavity includes the cranial and spinal cavities. This illustration shows a lateral and anterior view of the body and highlights the body cavities with different colors.[1]
Abdominal regions are used for example to localize pain.
To promote clear communication, for instance about the location of a patient’s abdominal pain or a suspicious mass, health care providers typically divide up the cavity into either nine regions or four quadrants.[1]
The abdomen may be divided into four quadrants, more commonly used in medicine, subdivides the cavity with one horizontal and one vertical line that intersect at the patient’s umbilicus (navel).. The right upper quadrant (RUQ) includes the lower right ribs, right side of the liver, and right side of the transverse colon. The left upper quadrant (LUQ) includes the lower left ribs, stomach, spleen, and upper left area of the transverse colon. The right lower quadrant (RLQ) includes the right half of the small intestines, ascending colon, right pelvic bone and upper right area of the bladder. The left lower quadrant (LLQ) contains the left half of the small intestine and left pelvic bone.[1]
The more detailed regional approach subdivides the cavity with one horizontal line immediately inferior to the ribs and one immediately superior to the pelvis, and two vertical lines drawn as if dropped from the midpoint of each clavicle, resulting in nine regions. The upper right square is the right hypochondriac region and contains the base of the right ribs. The upper left square is the left hypochondriac region and contains the base of the left ribs. The epigastric region is the upper central square and contains the bottom edge of the liver as well as the upper areas of the stomach. The diaphragm curves like an upside down U over these three regions. The central right region is called the right lumbar region and contains the ascending colon and the right edge of the small intestines. The central square contains the transverse colon and the upper regions of the small intestines. The left lumbar region contains the left edge of the transverse colon and the left edge of the small intestine. The lower right square is the right iliac region and contains the right pelvic bones and the ascending colon. The lower left square is the left iliac region and contains the left pelvic bone and the lower left regions of the small intestine. The lower central square contains the bottom of the pubic bones, upper regions of the bladder and the lower region of the small intestine.[1]
Serous membrane
A serous membrane (also referred to as a serosa) is a thin membrane that covers the walls of organs in the thoracic and abdominal cavities. The serous membranes have two layers; parietal and visceral, surrounding a fluid filled space.[1] The visceral layer of the membrane covers the organ (the viscera), and the parietal layer lines the walls of the body cavity (pariet- refers to a cavity wall). . Between the parietal and visceral layers is a very thin, fluid-filled serous space, or cavity.[1] An example of a serous cavities include the pericardium, which surrounds the heart.[1]
Body Movements I.jpg
Body Movements II.jpg
Joints, especially synovial joints allow the body a tremendous range of movements. Each movement at a synovial joint results from the contraction or relaxation of the muscles that are attached to the bones on either side of the articulation. The type of movement that can be produced at a synovial joint is determined by its structural type.
Movement types are generally paired, with one being the opposite of the other. Body movements are always described in relation to the anatomical position of the body: upright stance, with upper limbs to the side of body and palms facing forward.[1]
General movements[edit]
General motion[edit]
Terms describing motion in general include:
• Flexion and Extension, which refer to a movement that decreases (flexion) or increases (extension) the angle between body parts. For example, when standing up, the knees are extended.
• Abduction and adduction refers to a motion that pulls a structure away from (abduction) or towards (adduction) the midline of the body or limb. For example, a star jump requires the legs to be abducted.
• Internal rotation (or medial rotation) and External rotation (or lateral rotation) refers to rotation towards (internal) or away from (external) the center of the body. For example, the asana posture in yoga requires the legs to be externally rotated.[citation needed]
• Elevation and Depression refers to movement in a superior (elevation) or inferior (depression) direction. For example, a person saluting must elevate their arm.[clarification needed]
Special motions of the hands and feet[edit]
These terms refer to movements that are regarded as unique to the hands and feet:[3] :590–7
• Dorsiflexion and Plantarflexion refers to flexion (dorsiflexion) or extension of the foot at the ankle. For example, dorsiflexion occurs when pressing the brake pedal of a car.
• Palmarflexion and dorsiflexion refer to movement of the flexion (palmarflexion) or extension (dorsiflexion) of the hand at the wrist. For example, prayer is often conducted with the hands dorsiflexed.
• Pronation and Supination refer to rotation of the forearm or foot so that in the anatomical position the palm or sole is facing anteriorly (supination) or posteriorly (pronation) rotation of the forearm. For example, a person skiing must pronate their arms in order to grasp the skis.
• Eversion and Inversion refer to movements that tilt the sole of the foot away from (eversion) or towards (inversion) the midline of the body.
Other special motions[edit]
Other terms include:
• Protraction and Retraction refer to an anterior (protraction) or posterior (retraction) movement of the arm at the shoulders.
• Circumduction refers to the circular (or, more precisely, conical) movement of a body part, such as a ball-and-socket joint or the eye. It consists of a combination of flexion, extension, adduction, and abduction. "Windmilling" the arms or rotating the hand from the wrist are examples of circumductive movement.
• Opposition – A motion involving a grasping of the thumb and fingers.
• Reposition – To release an object by spreading the fingers and thumb.
• Reciprocal motion of a joint – Alternating motion in opposing directions, such as the elbow alternating between flexion and extension.
• Protrusion and Retrusion are sometimes used to describe the anterior (protrusion) and posterior (retrusion) movement of the jaw.
Muscle action that moves the axial skeleton work over a joint with an origin and insertion of the muscle on respective side. The insertion is on the bone deemed to move towards the origin during muscle contraction. Muscles are often present that engage in several actions of the joint; able to perform for example both flexion and extension of the forearm as in the biceps and triceps respectively.[1] This is not only to be able to revert actions of muscles, but also brings on stability of the actions though muscle coactivation.[citation needed]
Agonist and antagonist muscles[edit]
The muscle performing an action is the agonist, while the muscle which contraction brings about an opposite action is the antagonist. For example an extension of the lower arm is performed by the triceps as the agonist and the biceps as the antagonist (which contraction will perform flexion over the same joint). Muscles that work together to perform the same action are called synergists. In the above example synergists to the biceps can be the brachioradialis and the brachialis muscle.[1]
Skeletal and smooth muscle[edit]
The skeletal muscles of the body typically come in seven different general shapes. Muscle Shapes and Fiber Alignment This figure shows the human body with the major muscle groups labeled.
The gross anatomy of a muscle is the most important indicator of its role in the body. One particularly important aspect of gross anatomy of muscles is pennation or lack thereof. In most muscles, all the fibers are oriented in the same direction, running in a line from the origin to the insertion. In pennate muscles, the individual fibers are oriented at an angle relative to the line of action, attaching to the origin and insertion tendons at each end. Because the contracting fibers are pulling at an angle to the overall action of the muscle, the change in length is smaller, but this same orientation allows for more fibers (thus more force) in a muscle of a given size. Pennate muscles are usually found where their length change is less important than maximum force, such as the rectus femoris.[4]
Skeletal muscle is arranged in discrete muscles, an example of which is the biceps brachii. The tough, fibrous epimysium of skeletal muscle is both connected to and continuous with the tendons. In turn, the tendons connect to the periosteum layer surrounding the bones, permitting the transfer of force from the muscles to the skeleton. Together, these fibrous layers, along with tendons and ligaments, constitute the deep fascia of the body.[4]
Movement is not limited to only synovial joints, although they allow for most freedom. Muscles also run over symphysis, which allow for movement in for example the vertebral column by compression of the intervertebral discs. Additionally, synovial joints can be divided into different types, depending on their axis of movement.[citation needed]
Systemic and regional approaches to anatomy[edit]
In the systematic approach to the study of anatomy, each system of the body, such as the digestive system, is studied as a thing in itself.[5] The regional approach observes all aspects of a particular region at one time.[5] For example, each muscle, organ, bone, nerve and any other structure in the abdomen is studied together, then the student continues in this fashion with each region. The regional approach better facilitates study with dissection, while the systematic approach is appreciated for its comprehensive look at each system throughout the body.
Anatomical variation[edit]
The term anatomical variation is used to refer to a difference in anatomical structures that is not regarded as a disease. Many structures vary slightly between people, for example muscles that attach in slightly different places. For example, the presence or absence of the palmaris longus tendon. Anatomical variation is unlike congenital anomalies, which are considered a disorder.[citation needed]
Additional images[edit]
See also[edit]
This Wikipedia entry incorporates text from the freely licenced Connexions [1] edition of Anatomy & Physiology [2] text-book by OpenStax College
1. ^ a b c d e f g h i j k l m n o p q r s t "Anatomy & Physiology". Openstax college at Connexions. Retrieved November 16, 2013.
3. ^ Swartz, Mark H. (2010). Textbook of physical diagnosis : history and examination (6th ed. ed.). Philadelphia, PA: Saunders/Elsevier. ISBN 978-1-4160-6203-5.
4. ^ a b Moore, Keith L., Dalley, Arthur F., Agur Anne M. R. (2010). Moore's Clinically Oriented Anatomy. Phildadelphia: Lippincott Williams & Wilkins. pp. 29–35. ISBN 978-1-60547-652-0.
5. ^ a b "Introduction page, "Anatomy of the Human Body". Henry Gray. 20th edition. 1918". Retrieved 19 March 2007.
Further reading[edit]
• Calais-Germain, Blandine (1993). Anatomy of Movement. Eastland Press. ISBN 0-939616-17-3.
• Drake, Richard; Vogl, Wayne; Mitchell, Adam (2004). Gray’s Anatomy for Students. Churchill Livingstone. ISBN 0-443-06612-4.
• Martini, Frederic; Timmons, Michael; McKinnley, Michael (2000). Human Anatomy (3rd ed.). Prentice-Hall. ISBN 0-13-010011-0.
• Marieb, Elaine (2000). Essentials of Human Anatomy and Physiology (6th ed.). Addison Wesley Longman. ISBN 0-8053-4940-5.
• Muscolino, Joseph E. (2005). The Muscular System Manual: The Skeletal Muscles of the Human Body (2nd ed.). C.V. Mosby. ISBN 0-323-02523-4.
|
__label__pos
| 0.936719 |
Abonner Norwegian
søk opp hvilket som helst ord, som fuck:
Reffered to as a friend. The same as "Homeboy" and "homeslice"
person 1: what up crumbucket hows it hangin?
person 2: its hangin to the left bud
av Christopher T-D 3. mars 2008
1 2
Words related to crumbucket:
buckage bucket crumb crumbage homeboy homeslice homie
|
__label__pos
| 0.999949 |
P2X purinoreceptor (IPR001429)
Short name: P2X_purnocptor
Family relationships
P2X purinoceptors are cell membrane ion channels, gated by adenosine 5'-triphosphate (ATP) and other nucleotides; they have been found to be widely expressed on mammalian cells, and, by means of their functional properties, can be differentiated into three sub-groups. The first group is almost equally well activated by ATP and its analogue alpha,betamethylene-ATP, whereas, the second group is not activated by the latter compound. A third type of receptor (also called P2Z) is distinguished by the fact that repeated or prolonged agonist application leads to the opening of much larger pores, allowing large molecules to traverse the cell membrane. This increased permeability rapidly leads to cell death, and lysis.
Molecular cloning studies have identified seven P2X receptor subtypes, designated P2X1-P2X7. These receptors are proteins that share 35-48% amino acid identity, and possess two putative transmembrane (TM) domains, separated by a long (~270 residues) intervening sequence, which is thought to form an extracellular loop. Around 1/4 of the residues within the loop are invariant between the cloned subtypes, including 10 characteristic cysteines.
Studies of the functional properties of heterologously expressed P2X receptors, together with the examination of their distribution in native tissues, suggests they likely occur as both homo- and heteromultimers in vivo [PMID: 10414359, PMID: 12270951].
This entry represents all P2X purinoreceptor subtypes.
GO terms
Biological Process
GO:0006811 ion transport
Molecular Function
GO:0005524 ATP binding
GO:0005216 ion channel activity
GO:0001614 purinergic nucleotide receptor activity
Cellular Component
GO:0016020 membrane
Contributing signatures
Signatures from InterPro member databases are used to construct an entry.
PROSITE patterns
|
__label__pos
| 0.871542 |
First: Mid: Last: City: State:
Carson Quam
It’s very easy to pinpoint Carson Quam with the assistance of USA-People-Search.com. A simple search will provide accurate results for each and every Carson Quam in our database. The matching profiles have been sorted efficiently to allow you to determine the exact Carson you need immediately.
Have you managed to zero in on the exact Carson Quam you are hunting for? If you haven’t, modify your search to include any other bit of information you might know about the person, like a nickname or middle initial. When you are successful in locating the correct Carson Quam, browse through their complete personal profile, which may include phone numbers, addresses, and much more.
Name/AKAsAgeLocationPossible Relatives
|
__label__pos
| 0.712595 |
First: Mid: Last: City: State:
Charissa Quade
You can discover the Charissa Quade you've been in search of quickly and easily with our reliable people search tools. USA-People-Search.com has a comprehensive information database to help you find individuals based on age, past residences, relatives, aliases, and more. Find known relatives of Charissa Quade.
For any assistance to help you locate the correct Charissa Quade, we'll help you by utilizing our comprehensive data that we have in our records. Find the right Charissa using such info as previous residences and known aliases. Check out more personal information, including background checks, criminal profiles, and email addresses on USA-People-Search.com. If this Charissa is not the person you are looking for, refer to the list of people with the last name Quade below. This list could include name, age, location, and relatives.
Put in additional details into the search fields on the left for more refined results. A first name, middle name, last name, city, state and/or age can be essential to finding Charissa Quade. To get a visual sense of their whereabouts, utilize the map. When you locate the Charissa Quade you desire, feel free to access all the public records data we have on that person in our expansive database. View people search results for Charissa Quade in just a matter of seconds with our reliable tools.
Name/AKAsAgeLocationPossible Relatives
|
__label__pos
| 0.794113 |
The site is under maintenance.
XML-Writer-Compiler-1.112060 has the following 2 errors.
no_pod_errorsXML Writ er Compiler 1 112060/lib/XML/Writer/Compiler pm POD ERRORS Hey The above document had some coding errors which are explained below: Around line 352: Unterminated C sequence Around line 358: Unterminated C sequence
|
__label__pos
| 0.978407 |
Marvel Database
Gammenon (Earth-616)
121,715pages on
this wiki
Information-silk Real Name
Information-silk Aliases
The Gatherer
Information-silk Affiliation
Information-silk Base Of Operations
The known universe
Information-silk Alignment
Information-silk Identity
Information-silk Citizenship
Information-silk Marital Status
Information-silk Occupation
Gatherer, Cosmic Being
Information-silk Gender
Information-silk Height
Information-silk Eyes
Information-silk Hair
Information-silk Skin
Information-silk Unusual Features
Gammenon appears similar to a robot or suit of armour.
Information-silk Origin
Celestial (Cosmic Being)
Information-silk Universe
Information-silk Created by
First appearance
Gammenon is one of the Celestials, and is sent out to find various plant, animal and humanoid specimens for experimentation. He reports to Jemiah and often travels with Eson. Gammenon is the Celestials' data collector and gatherer of specimens.
Gammenon was among the First Celestial Host that visited Earth approximately one million years ago. He captured man-ape specimens which became Earth's first Deviants and Eternals.
As part of the Fourth Celestial Host, Gammenon landed on Earth, where he was sighted by Ajak and Dr. Daniel Damian. Gammenon captured S.H.I.E.L.D. agents Tyler, Parks, and Stevenson, and reduced them to atoms. Gammenon allowed Ajak to temporarily restore the captured S.H.I.E.L.D. agents.
Gammenon also encountered Thor, and captured an airliner full of passengers, including Don Blake, S.H.I.E.L.D. agent Johnson, and Ereshkigal. Gammenon witnessed the transformation of Tyrannus by the Deviants's Flame of Life and his dissipation into space. Gammenon stood assembled with nine of the other members of the Fourth Host in Peru, and participated in the repulsion of the attack by the Destroyer, the Uni-Mind, and Thor. He then left Earth with the other Celestials.
Gammenon was one of the Celestials allegedly encountered by the Beyonder during the second Secret War.
Gammenon later witnessed the near-destruction of the universe by a black hole created by Maelstrom.
Powers and AbilitiesEdit
Strength level
Beyond Class 100.
As part of the First Celestial Host, Gammenon used a gathering device composed of living metal which captured and tranquilized specimens collected by Gammenon for study and analysis by his fellow Celestials. As part of the Fourth Celestial Host, Gammenon used a gather-rod, approximately 1,250 feet (380 m) in length, holding in stasis a cluster of life-seed capsules containing the stored atoms of living organisms collected by Gammenon.
Celestial starships.
Discover and Discuss
Like this? Let us know!
Smb twitter
Smb facebook
Start a Discussion Discussions about Gammenon (Earth-616)
Advertisement | Your ad here
Around Wikia's network
Random Wiki
|
__label__pos
| 0.772355 |
Take the 2-minute tour ×
Example code:
#include <iostream>
int main()
if(int a = std::cin.get() && a == 'a')
When I compile this code, visual studio gives me a nice warning: warning C4700: uninitialized local variable 'a' used. So I understand that a is uninitialized. However, I wanted to fully understand how the expression is evaluated. Is it the case that the if statement above is equivalent to if(int a && a == 'a') { a = std::cin.get(); }? Could someone explain exactly what happens?
share|improve this question
add comment
3 Answers
up vote 5 down vote accepted
The and operator && has higher precedence than the assignment operator =. So in other words, your statement is being executed like this:
if (int a = (std::cin.get() && a == 'a'))
You really want to use explicit parentheses:
int a;
if ((a = std::cin.get()) && a == 'a')
Even better, write clear code:
int a = std::cin.get();
if (a == 'a')
share|improve this answer
Thanks, I understand now. – Jesse Good May 22 '12 at 21:35
And, as long as you have to do it in two separate statements, you may as well do the initialization with the declaration and simplify the conditional: int a = std::cin.get(); if (a == 'a'). – Rob Kennedy May 22 '12 at 21:38
@RobKennedy: Yes, I've always avoided issues like this by breaking the statements up, but I just wanted a good understanding. – Jesse Good May 22 '12 at 21:42
@RobKennedy Very true; I was tempted to put that, but I wanted to get my answer out quickly. :-) Editing shortly. – Platinum Azure May 22 '12 at 21:42
add comment
The expression gets evaluated just as if it was its own statement. Like this:
So it's equivalent to initialize a variable a with the result from std::cin.get() AND-ed with the comparison between an uninitialized variable and the literal char 'a'.
share|improve this answer
add comment
You are using the variable to initialize itself. First the memory is allocated then whatever was in that memory is compared to 'a' and the result used to initialize the variable.
share|improve this answer
add comment
Your Answer
|
__label__pos
| 0.774092 |
Take the 2-minute tour ×
my $Parser = new MIME::Parser;
my $entity = $Parser->parse_data( $body );
my @parts = $entity->parts;
for $part(@parts){
my $type=$part->mime_type;
my $bhandle=$part->bodyhandle;
$header = $part->head();
$content_disp = $header->get('Content-Disposition');
if ($type =~ /text/i){
$bodydata = "";
if (my $io = $part->open("r")) {
while (defined($_ = $io->getline)) {
$bodydata .= $_;
print $bodydata;
share|improve this question
add comment
1 Answer
up vote 2 down vote accepted
I think you're looking for the recommended_filename method:
$header = $part->head();
$filename = $header->recommended_filename;
Be sure to check the return value for sanity. Note that it can also be undef.
share|improve this answer
add comment
Your Answer
|
__label__pos
| 0.998342 |
The topic Carausius is discussed in the following articles:
reproductive and protective behaviour
• TITLE: insect (arthropod class)
SECTION: Reproduction
A few insects (e.g., the stick insect Carausius) rarely produce males, and the eggs develop without fertilization in a process known as parthenogenesis. During summer months in temperate latitudes, aphids occur only as parthenogenetic females in which embryos develop within the mother (viviparity). In certain gall midges (Diptera) oocytes start developing parthenogenetically in the...
• TITLE: insect (arthropod class)
SECTION: Protection from enemies
...the form of camouflage (cryptic coloration) in which the insect blends into its background. The coloration of many insects copies a specific background with extraordinary detail. Stick insects (Carausius) can change their colour to match that of the background by moving pigment granules in their epidermal cells. Some caterpillars also have patterns that develop in response to a...
|
__label__pos
| 0.780602 |
Questions about specific heat...need help quickly
0 pts ended
1) Analyze effect of errors in temperature measurementson the calculation of specific heat of metal sample.
- Not sure what to analyze as I have the specific heat ofmetal sample.
2) Discuss effect of calculation of specific heat if theamount of water were 4 times the amount used.
3) Why should initital temperature of the water andcalorimeter cup be close to or slightly below roomtemperature?
I just need something to write down for these.
Answers (0)
|
__label__pos
| 0.982164 |
Home Features
Many types of grinding and honing operations require the use of a "process fluid" for cooling and lubrication. Remanufacturing procedures such as crankshaft grinding and surface grinding generate a tremendous amount of heat and require a fluid primarily for cooling. Heat control is absolutely essential for a good finish and accurate tolerances. Use of a coolant also helps prolong the life of the grinding wheel.
With honing, the situation is a little different. Some type of process fluid is also required, but primarily to lubricate the honing stones as they cut the cylinder bore. Lubrication reduces friction so less rotational force and pressure are needed to hone the cylinder, and it allows the abrasives to cut more cleanly. The fluid also provides cooling, but heat buildup is less of a factor in honing because the rate at which the stones travel across the metal in surface feet per minute (sfpm) is only about 85 to 150 sfpm, compared to 5,000 to 6,000 sfpm for crankshaft grinding.
The ability of a fluid to provide lubrication is especially important when honing with superabrasives such as polycrystaline diamond (PCD) and cubic boron nitride (CBN). Superabrasives are much harder and longer lived than traditional vitrified abrasives such as aluminum oxide and silicon carbide, but the superabrasive particles are duller and have more rounded edges. This requires a stronger metal bond to hold and support the superabrasive particles, as well as more force to hone a cylinder bore. Because of this, superabrasives typically generate more heat than vitrified abrasives. So to limit bore distortion, a superabrasive honing fluid must also provide cooling as well as lubrication.
A third function that a coolant provides is to rinse away metal and abrasive particles from the work surface. Removing debris keeps the pores in grinding wheels and honing stones open so the abrasive doesn
Share it now!
The following two tabs change content below.
Larry Carley
Larry Carley
|
__label__pos
| 0.917851 |
No recent wiki edits to this page.
Julie Langford had success as a botanist and professor at a University in the United States during the 1920s. When World War II broke out, she worked with the United States government on various projects, one of which involved the assault on Iwo Jima. She was so renowned and appreciated that when she left for Rapture, an investigation was launched to find out where she went. The people on the surface were unable to locate her.
A Little Sister running through Arcadia
Langford's skill allowed her to create Arcadia, a living forest in Rapture that provides the vital oxygen that the city needs to survive. Originally, Arcadia was open to the public; however, her and Andrew Ryan eventually got so greedy that they began charging people, a move Ryan believed was justified since he thought that businesses should be allowed to make a profit off of their products.
As ADAM began tearing Rapture apart, Langford started working on the Lazarus Vector which would bring the trees back to life if they died. When Ryan gasses Arcadia during the events of Bioshock, the protagonist, Jack, saves the trees and oxygen by creating and then deploying the Lazarus Vector.
This edit will also create new pages on Giant Bomb for:
Comment and Save
|
__label__pos
| 0.809635 |
Human Capital
Filed Under:
Dictionary Says
Definition of 'Human Capital'
A measure of the economic value of an employee's skill set. This measure builds on the basic production input of labor measure where all labor is thought to be equal. The concept of human capital recognizes that not all labor is equal and that the quality of employees can be improved by investing in them. The education, experience and abilities of an employee have an economic value for employers and for the economy as a whole.
Investopedia Says
Investopedia explains 'Human Capital'
Economist Theodore Schultz invented the term in the 1960s to reflect the value of our human capacities. He believed human capital was like any other type of capital; it could be invested in through education, training and enhanced benefits that will lead to an improvement in the quality and level of production.
Articles Of Interest
1. What You Need To Know About The Employment Report
2. Human Capital: The Most Overlooked Asset Class
3. Intangible Assets Provide Real Value To Stocks
4. The Nash Equilibrium
Nash Equilibrium is a key concept of game theory, which helps explain how people and groups approach complex decisions. Named after renowned mathematician John Nash, the idea of Nash Equilibrium ...
5. The Basics Of A Financial Analysis Report
6. How Education And Training Affect The Economy
7. Viewing The Market As Organized Chaos
8. Qualitative Analysis: What Makes A Company Great?
9. Evaluate Your Investments With SWOT Analysis
10. What It Really Takes To Succeed In Business
comments powered by Disqus
Hot Definitions
1. Dangling Debit
2. Cabinet Crowd
3. Backdating
4. Abnormal Spoilage
5. "Just Say No" Defense
6. Zaraba Method
Trading Center
|
__label__pos
| 0.956493 |
September 30, 2012
Southern Miss vs. Louisville - By the Numbers
Each week we break down the raw numbers from the weekend and see if they give any insight into why the game went the way it did and if they offer any explanations. Here's some raw data from the Southern Miss vs. UofL game.
|
__label__pos
| 0.971932 |
How long does it take a duckling to absorb the yolk and arteries ones it breaks the membrane inside the egg?
when a baby duckling breaks through the membrane inside the egg, it starts breathing using the air pocket that's there. i know that that is when they start to absorb the rest of the yolk and draw the arteries into their bodies, but how long does that actually take before they pip through the shell?
Report as
|
__label__pos
| 0.929968 |
A more satisfactory approach, though not without problems, is echo sounding, widely used today, in which a sound pulse travels from the vessel to the ocean floor, is reflected, and returns. By calculations involving the time elapsed between generation of the pulse and its return and the speed of sound in water, a continuous record of seafloor topography can be made. Most echo sounders perform these calculations mechanically, producing a graphic record in the form of a paper chart. Misleading reflections caused by the presence of undersea canyons or mountains plus variations in the speed of sound through water caused by differences in temperature, depth, and salinity limit the accuracy of echo sounding, though these problems can be met somewhat by crossing and recrossing the same area. Sonar has also been employed in bathymetric studies, as have underwater cameras.
|
__label__pos
| 0.974906 |
Smiley-Face Tricks
Document Sample
Smiley-Face Tricks Powered By Docstoc
Smiley-Face Tricks
Ways to improve your writing and make your teacher smile!
1. Magic 3- Three items in a series, separated by commas that create a poetic rhythm or
add support for a point, especially when the items have their own modifiers
2. Figurative Language- Non-literal comparisons- such as similes, metaphors, and
personification- add “spice” to the writing and can help paint a more vivid picture for
the reader
3. Specific Details for Effect- Instead of general, vague descriptions, specific sensory
details help the reader visualize the person, place, thing, or idea that you are describing
4. Repetition for Effect- writers often repeat specially chosen words or phrases to make a
point, to stress certain ideas for the reader
5. Explode in the moment- instead of “speeding” past a moment, writers often emphasize
it by “expanding” the actions
6. Humor- professional writers know the value of laughter; even subtle humor can help turn
a boring paper into one that can raise someone’s spirits
7. Hyphenated Modifiers- sometimes a new way of saying something can make all the
difference; hyphenated adjectives often cause the reader to “sit up and take notice”
8. Full Circle Ending- sometimes students need a special ending, one that effectively
“wraps up” the piece.
Shared By:
|
__label__pos
| 0.972916 |
Matt Pruitt Guitar
1 434 Houston Street Nashville, TN 37203
With 28 years of experience backed by a Master's in guitar from the Berklee College of Music, Matt Pruitt helps guitar students reach their potential one strum at a time. Pruitt operates out of the Cotten Music Center, where he typically meets one-on-one with students for lessons tailored to suit the individual's skill level, goals, and interest in smashing guitars.
Nearby Places
|
__label__pos
| 0.93283 |
Colleagues have suggested we put Rivera's signature pitch side-by-side with Dickey's and compare the two of them. So let's do that here.
Since Dickey joined the Mets in 2010, he has thrown 5,570 knuckleballs. When a plate appearance ends with a Dickey knuckler, opponents are hitting .228 with a .603 OPS. They've averaged a home run for every 192 knuckleballs thrown.
That sounds impressive until you look at Rivera's numbers in that same span. His 1,638 cutters thrown over the past three seasons have netted opponents a .202 batting average and .508 OPS. He's allowed a home run with the cutter at a rate of once for every 410 cutters thrown.
|
__label__pos
| 0.932719 |
It would be unlikely to see this in the British or American press.
The inflationary phase of the early universe may be because of the Higgs boson field, but what about the initial singularity. Some other explanation is needed.
In the meantime the civilized peoples of Europe, such as the british and germans, should unite against the hostile and primitive invaders.
Together we can do what could not be done alone
|
__label__pos
| 0.914926 |
How to Think Like a Computer Scientist: Learning with Python 2nd Edition/Strings
From Wikibooks, open books for an open world
Jump to: navigation, search
7.1 A compound data type[edit]
So far we have seen five types: int, float, bool, NoneType and str. Strings are qualitatively different from the other four because they are made up of smaller pieces --- characters.
The bracket operator selects a single character from a string:
>>> fruit = "banana"
>>> letter = fruit[1]
>>> print letter
The first letter of "banana" is not a, unless you are a computer scientist. For perverse reasons, computer scientists always start counting from zero. The 0th letter ( zero-eth ) of "banana" is b. The 1th letter ( one-eth ) is a, and the 2th ( two-eth ) letter is n.
If you want the zero-eth letter of a string, you just put 0, or any expression with the value 0, in the brackets:
>>> letter = fruit[0]
>>> print letter
7.2 Length[edit]
The len function returns the number of characters in a string:
>>> fruit = "banana"
>>> len(fruit)
length = len(fruit)
last = fruit[length] # ERROR!
That won't work. It causes the runtime error IndexError: string index out of range. The reason is that there is no 6th letter in "banana". Since we started counting at zero, the six letters are numbered 0 to 5. To get the last character, we have to subtract 1 from length:
length = len(fruit)
last = fruit[length-1]
7.3 Traversal and the for loop[edit]
index = 0
while index < len(fruit):
letter = fruit[index]
print letter
index += 1
#fruit = "banana"
#while index is less than 6.
#6 is the length of fruit
#letter = fruit[index]
#Since index = 0, "b" is equal to letter in loop 1
#letter is printed
#1 is added to whatever the value of index is
#the loop continues until index < 6
Using an index to traverse a set of values is so common that Python provides an alternative, simpler syntax --- the for loop:
for letter in fruit:
print letter
Each time through the loop, the next character in the string is assigned to the variable letter. The loop continues until no characters are left.
The following example shows how to use concatenation and a for loop to generate an abecedarian series. Abecedarian refers to a series or list in which the elements appear in alphabetical order. For example, in Robert McCloskey's book Make Way for Ducklings, the names of the ducklings are Jack, Kack, Lack, Mack, Nack, Ouack, Pack, and Quack. This loop outputs these names in order:
prefixes = "JKLMNOPQ"
suffix = "ack"
for letter in prefixes:
print letter + suffix
The output of this program is:
Of course, that's not quite right because Ouack and Quack are misspelled. You'll fix this as an exercise below.
7.4 String slices[edit]
A substring of a string is called a slice. Selecting a slice is similar to selecting a character:
>>> print s[0:5]
>>> print s[7:11]
>>> print s[17:21]
The operator [n:m] returns the part of the string from the n-eth character to the m-eth character, including the first but excluding the last. This behavior is counterintuitive; it makes more sense if you imagine the indices pointing between the characters, as in the following diagram:
Banana python.png
>>> fruit = "banana"
>>> fruit[:3]
>>> fruit[3:]
What do you think s[:] means?
String comparison[edit]
Other comparison operations are useful for putting words in lexigraphical order_:
This is similar to the alphabetical order you would use with a dictionary, except that all the uppercase letters come before all the lowercase letters. As a result:
Strings are immutable[edit]
Instead of producing the output Jello, world!, this code produces the runtime error TypeError: 'str' object doesn't support item assignment.
The in operator[edit]
The in operator tests if one string is a substring of another:
Note that a string is a substring of itself:
Combining the in operator with string concatenation using +, we can write a function that removes all the vowels from a string:
Test this function to confirm that it does what we wanted it to do.
7.8 A find function[edit]
What does the following function do?
def find(strng, ch):
index = 0
while index < len(strng):
if strng[index] == ch:
return index
index += 1
return -1
#assume strng is "banana" and ch is "a"
#if strng[index] == ch:
#return index
#the above 2 lines check if strng[index#] == a
#when the loop runs first index is 0 which is b (not a)
#so 1 is added to whatever the value of index is
#when the loop runs second time index is 1 which is a
#the loop is then broken, and 1 is returned.
#if it cannot find ch in strng -1 is returned
This is the first example we have seen of a return statement inside a loop. If strng[index] == ch, the function returns immediately, breaking out of the loop prematurely.
This pattern of computation is sometimes called a eureka traversal because as soon as we find what we are looking for, we can cry Eureka! and stop looking.
Looping and counting[edit]
The following program counts the number of times the letter a appears in a string, and is another example of the counter pattern introduced in :ref:`counting`:
7.10 Optional parameters[edit]
To find the locations of the second or third occurrence of a character in a string, we can modify the find function, adding a third parameter for the starting position in the search string:
def find2(strng, ch, start):
index = start
while index < len(strng):
if strng[index] == ch:
return index
index += 1
return -1
The call find2('banana', 'a', 2) now returns 3, the index of the first occurrence of 'a' in 'banana' after index 2. What does find2('banana', 'n', 3) return? If you said, 4, there is a good chance you understand how find2 works.
Better still, we can combine find and find2 using an optional parameter:
def find(strng, ch, start=0):
index = start
while index < len(strng):
if strng[index] == ch:
return index
index += 1
return -1
#index = start = 0 by default
#while index is less than the length of string:
#if strng[index] equals ch
#return index i.e. location of ch in strng -- note return breaks out of loop
#else add 1 to index and continue until index equals the length of sting
#if no match return -1
The call find('banana', 'a', 2) to this version of find behaves just like find2, while in the call find('banana', 'a'), start will be set to the default value of 0.
Adding another optional parameter to find makes it search both forward and backward:
def find(strng, ch, start=0, step=1):
index = start
while 0 <= index < len(strng):
if strng[index] == ch:
return index
index += step
return -1
Passing in a value of len(strng)-1 for start and -1 for step will make it search toward the beginning of the string instead of the end. Note that we needed to check for a lower bound for index in the while loop as well as an upper bound to accommodate this change.
The string module[edit]
To see what is inside it, use the dir function with the module name as an argument.
which will return the list of items inside the string module:
['Template', '_TemplateMetaclass', '__builtins__', '__doc__', '__file__', '__name__', '_float', '_idmap', '_idmapL', '_int', '_long', '_multimap', '_re', 'ascii_letters', 'ascii_lowercase', 'ascii_uppercase', 'atof', 'atof_error', 'atoi', 'atoi_error', 'atol', 'atol_error', 'capitalize', 'capwords', 'center', 'count', 'digits', 'expandtabs', 'find', 'hexdigits', 'index', 'index_error', 'join', 'joinfields', 'letters', 'ljust', 'lower', 'lowercase', 'lstrip', 'maketrans', 'octdigits', 'printable', 'punctuation', 'replace', 'rfind', 'rindex', 'rjust', 'rsplit', 'rstrip', 'split', 'splitfields', 'strip', 'swapcase', 'translate', 'upper', 'uppercase', 'whitespace', 'zfill']
To find out more about an item in this list, we can use the type command. We need to specify the module name followed by the item using dot notation.
Since string.digits is a string, we can print it to see what it contains:
Not surprisingly, it contains each of the decimal digits.
string.find is a function which does much the same thing as the function we wrote. To find out more about it, we can print out its docstring, __doc__, which contains documentation on the function:
The parameters in square brackets are optional parameters. We can use string.find much as we did our own find:
This example demonstrates one of the benefits of modules --- they help avoid collisions between the names of built-in functions and user-defined functions. By using dot notation we can specify which version of find we want.
Actually, string.find is more general than our version. it can find substrings, not just characters:
Like ours, it takes an additional argument that specifies the index at which it should start:
Unlike ours, its second optional parameter specifies the index at which the search should end:
Character classification[edit]
It is often helpful to examine a character and test whether it is upper- or lowercase, or whether it is a character or a digit. The string module provides several constants that are useful for these purposes. One of these, string.digits, we have already seen.
Alternatively, we can take advantage of the in operator:
As yet another alternative, we can use the comparison operator:
String formatting[edit]
The most concise and powerful way to format a string in Python is to use the string formatting operator, %, together with Python's string formatting operations. To see how this works, let's start with a few examples:
The syntax for the string formatting operation looks like this:
It begins with a format which contains a sequence of characters and conversion specifications. Conversion specifications start with a % operator. Following the format string is a single % and then a sequence of values, one per conversion specification, separated by commas and enclosed in parenthesis. The parenthesis are optional if there is only a single value.
In the first example above, there is a single conversion specification, %s, which indicates a string. The single value, "Arthur", maps to it, and is not enclosed in parenthesis.
In the second example, name has string value, "Alice", and age has integer value, 10. These map to the two conversion specifications, %s and %d. The d in the second conversion specification indicates that the value is a decimal integer.
In the third example variables n1 and n2 have integer values 4 and 5 respectively. There are four conversion specifications in the format string: three %d's and a %f. The f indicates that the value should be represented as a floating point number. The four values that map to the four conversion specifications are: 2**10, n1, n2, and n1 * n2.
s, d, and f are all the conversion types we will need for this book. To see a complete list, see the String Formatting Operations_ section of the Python Library Reference.
The following example illustrates the real utility of string formatting:
This program prints out a table of various powers of the numbers from 1 to 10. In its current form it relies on the tab character ( \t) to align the columns of values, but this breaks down when the values in the table get larger than the 8 character tab width:
i i**2 i**3 i**5 i**10 i**20
1 1 1 1 1 1
2 4 8 32 1024 1048576
3 9 27 243 59049 3486784401
4 16 64 1024 1048576 1099511627776
5 25 125 3125 9765625 95367431640625
6 36 216 7776 60466176 3656158440062976
7 49 343 16807 282475249 79792266297612001
8 64 512 32768 1073741824 1152921504606846976
9 81 729 59049 3486784401 12157665459056928801
10 100 1000 100000 10000000000 100000000000000000000
One possible solution would be to change the tab width, but the first column already has more space than it needs. The best solution would be to set the width of each column independently. As you may have guessed by now, string formatting provides the solution:
Running this version produces the following output:
1 1 1 1 1 1
2 4 8 32 1024 1048576
3 9 27 243 59049 3486784401
4 16 64 1024 1048576 1099511627776
5 25 125 3125 9765625 95367431640625
6 36 216 7776 60466176 3656158440062976
7 49 343 16807 282475249 79792266297612001
8 64 512 32768 1073741824 1152921504606846976
9 81 729 59049 3486784401 12157665459056928801
10 100 1000 100000 10000000000 100000000000000000000
The - after each % in the converstion specifications indicates left justification. The numerical values specify the minimum length, so %-13d is a left justified number at least 13 characters wide.
Summary and First Exercises[edit]
This chapter introduced a lot of new ideas. The following summary and set of exercises may prove helpful in remembering what you learned:
1. Write the Python interpreter's evaluation to each of the following expressions:
• >>> 'Python'[1]
• >>> "Strings are sequences of characters."[5]
• >>> len("wonderful")
• >>> 'Mystery'[:4]
• >>> 'p' in 'Pinapple'
• >>> 'apple' in 'Pinapple'
• >>> 'pear' in 'Pinapple'
• >>> 'apple' > 'pinapple'
• >>> 'pinapple' < 'Peach'
2. Write Python code to make each of the following doctests pass:
Question 1[edit]
prefixes = "JKLMNOPQ"
suffix = "ack"
for letter in prefixes:
print letter + suffix
so that Ouack and Quack are spelled correctly.
Question 2[edit]
fruit = "banana"
count = 0
for char in fruit:
if char == 'a':
count += 1
print count
in a function named count_letters, and generalize it so that it accepts the string and the letter as arguments.
Question 3[edit]
Now rewrite the count_letters function so that instead of traversing the string, it repeatedly calls find (the version from Optional parameters), with the optional third parameter to locate new occurences of the letter being counted.
Question 4[edit]
• Which version of is_lower do you think will be fastest? Can you think of other reasons besides speed to prefer one version or the other?
Question 5[edit]
Create a file named and put the following in it:
Add a function body to reverse to make the doctests pass.
Add mirror to .
Write a function body for it that will make it work as indicated by the doctests.
Include remove_letter in .
Finally, add bodies to each of the following functions, one at a time
until all the doctests pass.
Try each of the following formatted string operations in a Python shell and record the results:
1. "%s %d %f" % (5, 5, 5)
2. "%-.2f" % 3
3. "%-10.2f%-10.2f" % (7, 1.0/2)
4. print " $%5.2fn $%5.2fn $%5.2f" % (3, 4.5, 11.2)
The following formatted strings have errors. Fix them:
1. "%s %s %s %s" % ('this', 'that', 'something')
2. "%s %s %s" % ('yes', 'no', 'up', 'down')
3. "%d %f %f" % (3, 3, 'three')
|
__label__pos
| 0.995719 |
Dear Wiki user, You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change notification. The "IdeasOnLdapConfiguration" page has been changed by SomeOtherAccount. -------------------------------------------------- One key feature around LDAP is the ability to search objects using a simple, RPN-style system. Let's say we have an object class that has this definition: - { { { + {{{ objectclass: node hostname: string domain: string - } } } + }}} and in our LDAP server, we have placed the following objects:
|
__label__pos
| 0.999415 |
Torsten Förtsch > Apache2-Translation-0.34 > Apache2::Translation::_base
Annotate this POD
View/Report Bugs
Module Version: 0.03 Source
Apache2::Translation::_base - The Apache2::Translation provider interface
A translation provider must implement the following interface. It is free to support more functions.
new( NAME=>VALUE, ... )
This method is optional. If defined it is called from a PerlChildInitHandler and can be used to do some initializations. The DB provider connects here to the database and decides to use a singleton or not.
is called after each uri translation.
fetch( $key, $uri, $with_notes )
([block, order, action],
[block, order, action],
If the adminstration WEB interface is to be used fetch must return a list of:
([block, order, action, id],
[block, order, action, id],
where id is a unique key.
If the $with_notes parameter is true fetch is called from the admin interface and wants to fetch also notes. In this case the return value is a list like this:
([block, order, action, id, note],
[block, order, action, id, note],
Notes are comments on actions for the user of the admin interface. They are not evaluated otherwize.
returns true if a provider supports notes in its current configuration.
returns a sorted list of known keys.
list_keys_and_uris( $key )
$key is a string.
The function returns a sorted list of [KEY, URI] pairs. If $key is empty all pairs are returned. Otherwise only pairs where $key eq KEY are returned.
A change conducted via the WEB interface is a sequence of update, insert or delete operations. Before it is started begin is called. If there has no error occured commit is called otherwise rollback. commit must save the changes to the storage. rollback must cancel all changes.
update( [@old], [@new] )
insert( [@new] )
delete( [@old] )
All these functions return something >0 on success. @old is a list of KEY, URI, BLOCK, ORDER, ID that specifies an existing action. If there is no such action the functions must return 0. @new is a list of KEY, URI, BLOCK, ORDER, ACTION that is to be inserted or has to replace an existing action.
The following interface is optional.
deletes all entries from the provider. Is to be called within a begin - commit wrapper. Returns boolean.
returns a function reference that can be used the following way to step all entries currently hold by the provider. Lists of blocks are traversed in ascending alphabetical order with KEY as the major ordering element and URI the minor. Within a block list elements are traversed in ascending numerical order with BLOCK as the major ordering element and ORDER the minor.
my $iterator=$other->iterator;
while( my $el=$iterator->() ) {
# $el is an array ref as expected by insert().
The following interface is implemented by Apache2::Translation::_base itself and can be used.
append( $other_provider, %options )
Expects a provider object that implements the iterator function. append then insert()s all elements of $other_provider.
If drop_notes is passed as a true value in %options then notes are not copied.
diff( $other_provider, %options )
If Algorithm::Diff and JSON::XS are installed this method computes a difference between 2 providers. If key or uri are given in %options they act as filters. The difference is calculated only for elements that pass that filter. The value of key or uri can either be a string in which case the matching operation is a simple eq or a Regexp object (qr/.../).
If notes is specified in %options as a false value differences in notes only are disregarded.
If numbers is specified in %options as a false value differences in BLOCK and ORDER numbers only are disregarded.
For more information about the output format see diff() in Algorithm::Diff.
sdiff( $other_provider, %options )
Does the same as the diff method but differs in the output format.
For more information see sdiff() in Algorithm::Diff.
dump( $format, $filehandle )
Requires the iterator function to be implemented and dumps all elements formatted according to $format to $filehandle.
Both parameters are optional. Standard $filehandle is STDOUT, standard format is:
%{KEY} & %{URI} %{BLOCK}/%{ORDER}/%{ID}
%{paction> ;ACTION}
%{pnote> ;NOTE}
$format is an arbitrary string that contains substrings of the form
%{flags NAME}
where NAME is on of KEY, URI, BLOCK, ORDER, ACTION, NOTE or ID. These substrings are then replaced by the values for KEY, etc.
flags is optional. It is a semicolon separated list of strings. If given it must also be separated from NAME by a semicolon.
Currently 2 flags are known:
• p string
Trailing spaces are cut from the current value. Then all occurences of \r?\n are replaced by \nstring. Also, string is inserted at start if the current value.
Suppose an ACTION holds a multilined value:
PerlHandler: sub {
my $r=shift;
$r->print( "OK\n" );
return 0;
Then %{paction> ;ACTION} will be formatted as:
action> PerlHandler: sub {
action> my $r=shift;
action> $r->content_type( 'text/plain' );
action> $r->print( "OK\n" );
action> return 0;
action> }
• s l|t
sl strips off leading spaces and st trailing spaces.
Torsten Foertsch, <>
Copyright (C) 2005-2008 by Torsten Foertsch
syntax highlighting:
|
__label__pos
| 0.928329 |
Take the 2-minute tour ×
Very confusing times here adding various requests to a project in development. For two objects so far setObjectMapping:forResourcePathPattern: works fine. These have resource patterns like @"customer/login/" and @"customer/getDuckStatus/". A query that fails to create an object with correct mapping is @"customer/getPondsForDuck/"
// this define is used for object mapping and for creating the query,
// it can't be a mismatch
#define kResourcePathDuckPond @"customer/getPondsForDuck/"
The only difference I can see in the queries is that the keypath in the returned JSON is nil for the first two queries that work. When the keypath is nil and forResourcePathPattern is used to set object mapping, objectLoader:(RKObjectLoader *)objectLoader didLoadObjects:(NSArray *)objects (RKObjectLoaderDelegate method) receives a valid object with correct mapping. When the keypath is non-nil, using forResourcePathPattern always sends didLoadObjects: a nil object.
[[RKObjectManager sharedManager].mappingProvider setObjectMapping:duckMapping
fails. Using
[[RKObjectManager sharedManager].mappingProvider setObjectMapping:duckMapping
(where "ponds" is the keypath given in the returned JSON) works fine.
Is that the rule, forResourcePathPattern will fail unless keypath is nil? I'd like to have all my objects recognized the same way. While I can get it working I don't like using key path for one query and resource path for others. I wanted to use forResourcePathPattern for everything since I don't have control of the API and some queries return a keypath while some don't.
share|improve this question
add comment
Your Answer
Browse other questions tagged or ask your own question.
|
__label__pos
| 0.912807 |
Take the 2-minute tour ×
How would I go about using the printf() function to print a floating point number in such a way that I only print the decimal part if it is not 0? Examples:
1.0 -> 1
2.0 -> 2
1.5 -> 1.5
2.25 -> 2.25
share|improve this question
Which format specifier are you using right now? Have you tried %g? – Greg Hewgill Oct 3 '12 at 22:54
add comment
1 Answer
up vote 2 down vote accepted
share|improve this answer
add comment
Your Answer
|
__label__pos
| 0.976577 |
Take the 2-minute tour ×
In a language such as (since I'm working in it now) PHP, which supports procedural and object-oriented paradigms, is there a good rule of thumb for determining which paradigm best suits a new project? If not, how can you make the decision?
share|improve this question
add comment
2 Answers
up vote 5 down vote accepted
It all depends on the problem you're trying to solve. Obviously you can solve any problem in either style (procedural or OO), but you usually can figure out in the planning stages before you start writing code which style suits you better.
Some people like to write up use cases and if they see a lot of the same nouns showing up over and over again (e.g., a person withdraws money from the bank), then they go the OO route and use the nouns as their objects. Conversely, if you don't see a lot of nouns and there's really more verbs going on, then procedural or functional may be the way to go.
Steve Yegge has a great but long post as usual that touches on this from a different perspective that you may find helpful as well.
share|improve this answer
add comment
If you're doing something for yourself, or if you're doing just a prototype, or testing an idea... use the free style that script languages gives you.
After that: always think in objects, try to organize your work around the OO paradigm even if you're writing procedural stuff. Then, refactorize, refactorize, refactorize.
share|improve this answer
add comment
Your Answer
|
__label__pos
| 0.864632 |
Take the 2-minute tour ×
I need a macro which will copy each unique row within a spreadsheet and insert copied rows in the two rows directly beneath the original copied row and then repeat for each row there after.
It would be great if the macro could also input the following text strings - "(A)" in the original copied row, "(B)" in the second and "(C)" in the third.
The text string part isnt hugely important as I can always just use a concatenate formula if required.
Screenshot of what Im trying to achieve:
share|improve this question
What have you tried? – assylias Mar 7 '12 at 15:22
I have tried using an Index combined with an MROUND formula but that doesnt seem to be working. Im literally out of ideas and it cant be a manual process as there are thousands of rows. – user1254997 Mar 7 '12 at 15:55
Solved: C2: ="(A) "&INDEX(A:A,MROUND((ROW()+3)/3,1)) C3: ="(B) "&INDEX(A:A,MROUND((ROW()+3)/3,1)) C4: ="(C) "&INDEX(A:A,MROUND((ROW()+3)/3,1)) – user1254997 Mar 7 '12 at 16:18
That works too - Your question stated that you needed a macro so I did not propose a formula. – assylias Mar 7 '12 at 16:24
add comment
1 Answer
Assuming the data is in column A and you want the result in column C (as per your picture), this should work:
Public Sub doIt()
Dim data As Variant
Dim modifiedData As Variant
Dim i As Long
Dim j As Long
data = ActiveSheet.UsedRange.Columns(1)
ReDim modifiedData(1 To (UBound(data, 1) - 1) * 3 + 1, 1 To 1) As Variant
modifiedData(1, 1) = data(1, 1) 'header
j = 2
For i = 2 To UBound(data, 1)
modifiedData(j, 1) = "(A) - " & data(i, 1)
modifiedData(j + 1, 1) = "(B) - " & data(i, 1)
modifiedData(j + 2, 1) = "(C) - " & data(i, 1)
j = j + 3
Next i
With ActiveSheet
.Cells(1, 3).Resize(UBound(modifiedData, 1), 1) = modifiedData
End With
End Sub
share|improve this answer
add comment
Your Answer
|
__label__pos
| 0.783865 |
impulse momentum & work energy
0 pts ended
The conveyor belt delivers each 12kg crate to the fixed ramp atpoint A such that each crates velocity is 2.5 m/s as it starts toslide down the ramp. If the coefficient of kinetic frictionbetween each crate and the ramp is uk=0.3, complete thefollowing.
Develop the impulse momentum equations for the crate as it slidesdown the inclined ramp from point A to point B in the figurebelow.
Using impulse momentum equations, derive an expression than can beused to determine the velocity of the crate for any time t as thecrate slides down the ramp. What is the velocity of the crateas it slides off the ramp at point B.
And lastly using the conservation of energy relationship,V1+T1+WNC=V2+T2,verify that the velocity determined in the previous step iscorrect. assume positions given in diagram
Answers (0)
|
__label__pos
| 0.999426 |
novelties stories
Now you can listen to music while you're whitening your teeth, thanks to the Beaming White Forever White Teeth Whitening Headset. Stretch your lips back into an insane, scary grin with its included cheek retractor, slather on a dollop of hydrogen peroxide gel, and flip on the 2.5 W blue LED light.
Readers, you can probably instantly figure out every one of the equations on this Pop Quiz Clock, but I'm having trouble with 6 x 2. Oh, wait a second, we don't need no stinking equations to figure out what time it is — it's an analog clock!
|
__label__pos
| 0.719242 |
Newer Older
Aircrew assigned to the 20th Bomb Squadron prepare to board a 2nd Bomb Wing B-52H at Barksdale Air Force Base, La., June 10 for a flight in support of Exercise BALTOPS 2012. Airmen from the 20th and 96th Bomb Squadrons teamed with Airmen from the 307th Bomb Wing's 343rd Bomb Squadron to generate aircraft in support of the largest multinational maritime exercise this year in the Baltic Sea. The Barksdale B-52 aircrews conducted flight missions lasting more than 25 hours during the exercise involving 12 countries during the first two weeks in June. In its 40th year, Exercise BALTOPS aims to improve maritime security in the Baltic Sea through increased interoperability and cooperation among regional allies. The 2 BW routinely participates in worldwide exercises to constantly refine and improve operational procedures and capabilities with other U.S. services and our allies. Wing Airmen train often to ensure base units are ready to fight any challenge, anywhere at any time. (U.S. Air Force photo/Airman 1st Class Andrew Moua)(RELEASED)
|
__label__pos
| 0.798035 |
British Rail Class 252 by Keith Larby
Currently unavailable for purchase
When originally built, in 1972, the prototype High Speed Train (HST) units were considered to be formed of two locomotives at either end of a rake of carriages. As a result, the power cars were designated Class 41 and numbered 41001/41002, while the carriages were given numbers in the new Mark 3 carriage number series.
Shortly after their introduction, it was decided to classify the unit as a Diesel Electric Multiple Unit. It was allocated Class 252, and the whole formation was renumbered into a new carriage number series for HST and Advanced Passenger Train vehicles (4xxxx). Two coaches were not included in the renumbering (one Trailer First and one Trailer Second), as these were transferred for use in the Royal Train as part of its upgrade before Queen Elizabeth II’s Silver Jubilee. The power cars were allocated numbers in the 43xxx series, and the two prototype cars took the numbers 43000/43001. Thus, the production-run cars were numbered from 43002 onwards.
Ironically, the situation reversed again in the 1980s, and the production power cars were then considered to be class 43, as this time round no power car or carriage was renumbered. By this time, the prototype cars had been transferred into departmental (non-revenue earning) service and had taken numbers in the departmental carriage 975xxx series, so they were not involved in this redesignation, and they retained their departmental carriage numbers rather than being transferred to the departmental locomotive list.
The former 41002/43001 has now been scrapped, but the other prototype loco, 41001/43000, has been preserved as part of the National Collection, held at the National Railway Museum, York. Of the passenger-carrying vehicles, all remain in use on the mainline, except for one of the former restaurant cars, which was scrapped in 1993, whilst many other vehicles since been scrapped due to other accidents. Her Power Cars can sometimes be seen while entering York from the northbound coming from the Newcastle area.
british rail class 252 keith larby, akphotos, steam train, railway, travel, passengers, driver, historic, wheels, brass, coal, steam, chimney, coal fire, fireman, guard, rail track, smoke, soot, funnel, boiler, water, railfest, 2012
|
__label__pos
| 0.925968 |
Shared publicly -
this is lame compared to #EchoSign an Adobe product that integrates directly into #Salesforce . Why would you want to pay for storage of raster contract images when you can contextually have the digital copies with signature attached to the appropriate record in your CRM?
funny what gets reported on mashable these days
Add a comment...
|
__label__pos
| 0.701866 |
Take the 2-minute tour ×
Is there a special term for couple of measured values of single event in statistics? This is called record, tuple or table row in other sciences.
share|improve this question
add comment
1 Answer
It's often called an observation. That's also the langauge used in e.g. SAS.
share|improve this answer
add comment
Your Answer
|
__label__pos
| 0.999753 |
1. NOOK Sample
Go Back
You've Reached the End of Your Sample
The Noonday Demon: An Atlas Of Depression
Customers Who Bought This Also Bought
2. Depression
3. Monkey Mind: A Memoir of Anxiety
4. Lincoln's Melancholy: How Depression Challenged a President and Fueled His Greatness
|
__label__pos
| 0.995371 |
Provider: ingentaconnect Database: ingentaconnect Content: application/x-research-info-systems TY - ABST AU - Czaplewski, Raymond L. TI - Multistage Remote Sensing: Toward an Annual National Inventory JO - Journal of Forestry PY - 1999-12-01T00:00:00/// VL - 97 IS - 12 SP - 44 EP - 48 N2 - Remote sensing can improve efficiency of statistical information. Landsat data can identify and map a few broad categories of forest cover and land use. However, more-detailed information requires a sample of higher-resolution imagery, which costs less than field data but considerably more than Landsat data. A national remote sensing program would be a major undertaking, requiring unprecedented partnerships between federal programs and stakeholders. UR - ER -
|
__label__pos
| 0.980251 |
Last updated on March 17, 2014 at 6:53 EDT
Researchers Study Root Of Dyslexia
September 21, 2012
Connie K. Ho for redOrbit.com — Your Universe Online
Five percent. That´s the number of people who suffer from dyslexia worldwide, according to researchers at the College of Science at Northeastern University. Even with the number of people who suffer from the disorder, there still isn´t a clear reason as to what causes the disorder.
With this in mind, a collaborative study was completed by researchers from Harvard Medical School, Western Galilee College, McGill University and Northeastern University College of Science, which highlighted how dyslexia may be the result of impairment of a different linguistic system than previously understood.
To begin, dyslexia is considered a reading disorder and can influence how people respond to spoken language. Problems related to dyslexia can be seen early on, even before reading skills are acquired by infants. The findings were recently featured in the open access journal PLoS ONE.
“Our research demonstrates that a closer analysis of the language system can radically alter our understanding of the disorder, and ultimately, its treatment,” commented Iris Berent, a researcher from Northeastern University, in a prepared statement.
The researchers explained how speech perception is thought to be a part of two different linguistic systems. One system is based off phonetics, allowing the individual to extract discrete sound units from an acoustic input. The other, a phonological system, joins the units together to create individual words. Based on past studies, researchers believed that dyslexia was due to a phonological impairment. However, the results from the new study show that the phonetic system may be the cause of dyslexia.
“Research has long recognized that reading and language are closely linked, but this recognition has had little impact on how dyslexia is studied. Our research demonstrates that a closer analysis of the language system can radically alter our understanding of the disorder, and ultimately, its treatment,” the authors wrote in a statement.
The study featured a group of Hebrew-speaking college students who were able to track abstract phonological patterns, but had problems being able to tell the difference between similar speech sounds. The participants included both skilled and dyslexic readers who were given fake words in Hebrew, which was the language utilized in the study. Some of the dyslexic participants had difficulties telling the difference between the real and fake words. Another portion of the study showed that dyslexics couldn´t distinguish between digital sounds copying human speech and real human speech.
“Some researchers identify phonology as any process related to speech processing, whether it is speech perception, or the map ping of letters to speech sounds,” commented Berent in an article by News @ Northeastern. “I think the contribution of our work, is saying, ℠Look at the linguistics, look at what the two systems really are doing in human languages and maybe that will help you understand dyslexia.´”
The researchers believe the results of the study show that there were issues participants had with their phonetic system as opposed to the phonological system. The cause of the disorder could also be related to a lower-level part of speech perception, like the auditory system. Or it could be due to problems in early development of the human brain.
“Our findings confirm that dyslexia indeed compromises the language system, but the locus of the deficit is in the phonetic, not the phonological system, as had been previously assumed,” noted Berent in the statement.
Based on the findings, researchers are better able to understand the learning disorder but do not provide a specific solution for people with dyslexia.
“Our present demonstration that these two components can be dissociated underscores the urgent need for a more precise definition of the phonological- and phonetic-deficit hypotheses,” concluded the authors in the paper.
Source: Connie K. Ho for redOrbit.com – Your Universe Online
|
__label__pos
| 0.821557 |
This is a text description of cw6.gif.
This screen capture shows the Step1: Select database table and procedure type page. It shows a list of available tables in the SCOTT schema that were found to contain one or more media columns.
Select a table. The PHOTOS table button is selected.
Choose either to create a standalone PL/SQL procedure or to generate the source of a PL/SQL procedure for inclusion into a PL/SQL package. The Standalone procedure button is selected.
The following actions can be selected:
|
__label__pos
| 0.935485 |
End of preview. Expand
in Data Studio
No dataset card yet
- Downloads last month
- 72