Microbes are a vast and varied bunch, responsible for an extraordinary range of transformations. They create rivers of acid, eat arsenic, and made the first oxygen that led to animals like ourselves. But figuring out just who is doing what – and possibly applying those findings for useful purposes – has long been the holy grail of microbial ecology.

Traditionally, standard operating procedure for figuring this out called for the bulk acquisition of data. Just purify a sample's DNA and sequence like crazy. That way, when you line up the bits that convincingly overlap, you can piece genes together, assembling a catalog of the potential biological functions of the constituent microbes. The problem with this shotgun sequencing approach to environmental samples is that you can't link function and identity. By reading a lot of short pieces of DNA, it's possible to acquire some 16S rRNA gene sequences (which tell you the identities of organisms in the sample) and some functional genes (which tell you the proteins that could be around and the biochemical reactions that may be possible), but tying the functional genes to identity genes isn't really an option.

Over the last few years, though, sequencing the entire genome of a single cell – a way to easily connect 16S genes to other functions encoded on the same strand of DNA – has become a viable option. It starts by isolating an individual microbial cell, either by cell sorting or by isolation in a microfluidic chamber. Next comes the finesse. To release genomic DNA, you need to break the cell wall, a bit like cracking an egg to get to the yolk. Too harsh a cell lysis method and the DNA itself could be compromised; too weak and the genetic material could remain ensconced within the cell wall.

Once the target DNA is out in the open, it's time to start the industrial-scale reproduction of its code. After all, a single microbe may have just a few picograms (10⁻¹² grams) of DNA; sequencing machines insist on micrograms of material (10⁻⁶ grams). Multiple displacement amplification, or MDA, is the tool of choice, but it remains the most controversial aspect of the entire process.

Making millions of copies of an entire genome starts with sets of six random nucleotides (the "A"s, "T"s, "G"s, and "C"s that comprise DNA). These "primers" will almost certainly find a patch of template DNA to link up with, and once they do, a DNA polymerase enzyme gets to work, recruiting loose nucleotides to build a complementary chain of DNA. Once the elongating chain runs into another bound primer further up the track, the polymerase nudges the obstructing strand aside and keeps going. This way, each polymerase copies really long stretches of the genome – which will eventually allow you to see the placement of genes in relation to each other. And the dislodged strands serve as initiation points for unattached primers floating in the solution; ultimately, many copies of each genome segment are produced in a frenzy of multiplicative DNA synthesis.

There are some potential issues with MDA, however. Because it amplifies the initial pool of DNA so extensively, even a tiny amount of contamination – or the six-base primers themselves – can be magnified exponentially. The initial placement of primers is random, and adjacent regions of DNA are often over-represented in the final product. The pool of amplified DNA is then sequenced in bits and pieces, and computers stitch it all together again. And while a fully "closed" genome is the goal, critics point out that it's never been done.
The best results have achieved roughly 90% reconstruction. It's incredible, really: the notion that we can pluck a solitary microbial cell from nearly any environment on the planet and (almost) spell out its genome. Environmental microbiologists often ask the eternal question of ecology: "Who's doing what where?" By linking identity to functional capabilities with single-cell genomes, those answers might not be so far away.
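To get a feel for why the random priming described above produces uneven coverage, here is a minimal simulation sketch in Python. It is a cartoon, not a model of the real enzyme kinetics: the genome size, number of priming events, and extension lengths are all invented for illustration.

```python
import random
from collections import Counter

# Toy model of MDA coverage bias: primers land at uniform random
# positions and each priming event copies a long, variable stretch.
# All parameters are invented for illustration only.

random.seed(0)
GENOME_KB = 100          # toy genome length, in 1 kb bins
PRIMING_EVENTS = 60      # number of random priming events
MEAN_EXTENSION_KB = 10   # average length copied per event

coverage = Counter()
for _ in range(PRIMING_EVENTS):
    start = random.randrange(GENOME_KB)
    length = max(1, int(random.expovariate(1 / MEAN_EXTENSION_KB)))
    for kb in range(start, min(start + length, GENOME_KB)):
        coverage[kb] += 1

depths = [coverage[kb] for kb in range(GENOME_KB)]
print("mean coverage:", sum(depths) / GENOME_KB)
print("max coverage: ", max(depths))
print("bins with zero coverage:", depths.count(0))
```

Even with perfectly uniform priming, some bins come out several-fold over-represented while others are missed entirely, which is one reason single-cell assemblies tend to top out short of a fully closed genome.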
Source: http://www.wired.com/2013/06/microbiological-magic-why-single-cell-genomes-are-a-game-changer/
Corktown Common Park is a beautiful urban oasis. The 18-acre park, situated in the West Don Lands district of Toronto, boasts a wildlife-filled marsh, athletic fields, playgrounds, and plenty of places to sprawl out on the grass or host a barbecue. But the coolest of the park's features is the one you can't see. Built into the sprawling parkland is a plan to protect the surrounding neighborhoods from flood waters. The landscape architects from Michael van Valkenburgh Associates partnered with engineering firm Arup to build a park that looks like nature but works like a dyke.

Ten years ago, if you visited the West Don Lands area of Toronto, you wouldn't find a lot there. The neighborhood, which is situated at the mouth of the Don River near Lake Ontario, has traditionally been a post-industrial site, playing host to brick-making companies and taxi depots. "It was left fallow for many, many years," says Emily Mueller De Celis, an associate principal at MVVA. "For the longest time, Toronto really didn't know what to do in terms of developing it." The area was hard to develop, and for good reason: it's one of the most vulnerable areas of the city for flooding, thanks to its proximity to the lake and river.

In recent years, though, the district has seen a boom in development, even amidst worries that a natural disaster could devastate it. Office buildings and residential towers are being constructed, and it was chosen as the location of the 2015 Pan Am Games. This is in large part possible because of Corktown Common Park.

A Growing Concern, Because of Climate Change

Like other cities around the world, Toronto is figuring out how best to safeguard its at-risk neighborhoods from rising waters and the potential for more frequent floods caused by climate change. A good example is in New York City, with BIG's plan to build a sloped, elevated barrier that would stretch 8 miles along Manhattan's coastline. Corktown Common is a smaller, more understated version of BIG's ideas, with the flood-protection infrastructure integrated directly into the park's design. The architects say the infrastructure is robust enough to protect vulnerable areas against a 500-year flood.

Because Corktown Common was developed on a flood plain, the team began by building up the area's natural elevation. Nearly nine meters of land was added, creating a natural barrier to rising waters. "We had to make sure that the park and the infrastructure were well integrated so that in the end it didn't feel like a piece of pure infrastructure but felt like a welcoming park that is connected to the urban fabric," explains Mueller De Celis. This required MVVA to add an additional six meters of topography on top of the original infrastructure. It comes in the form of rolling hills, playgrounds, and open green space.

The park is split into a wet and a dry side. As water falls on the dry side (whether from rainfall, flood waters, or the water playground), it gets collected and directed through a series of underground pipes into a cistern. This water is then reused for irrigation. MVVA says it expects the water to be used anywhere from two to four times before it evaporates. Beyond sustainability, this system has the added benefit of relieving pressure at the mouth of the Don River by slowing the water flow that dumps into Lake Ontario. All of this infrastructure is masked by more than 700 trees and more than 120 species of plants (95 percent of which are native to the area).
Mueller De Celis says that as soon as the marsh was implemented, wildlife bloomed in what used to be a browned-out, post-industrial area. She recalls one day when she was giving a tour of the park. There was construction happening in the neighborhood, as usual. "The people who were touring couldn't hear me, not because of the construction but because of the frogs," she recalls. In the process of building development-enabling infrastructure, Toronto has found itself with a real ecosystem in the middle of the city (no wildlife was reintroduced). As Mueller De Celis puts it: "It might be a constructed landscape, but the wildlife don't know that."
Source: http://www.wired.com/2014/08/a-gorgeous-park-designed-with-a-double-purpose-flood-protection/
Detectives are investigators who perform detective work for local, state, and other jurisdictional agencies. These professionals are responsible for initiating investigations, analyzing evidence, and following up on leads. Some detectives specialize in different types of detective work, such as auto crimes, money laundering, and art theft.

Private detectives are independent contractors who do detective work for private companies and citizens. They often perform surveillance on claimants for workers' compensation attorneys and insurance companies. Other tasks a private detective takes on are running background checks and investigating allegations of infidelity for customers. Some detectives work as part of a security team for high-profile clients such as celebrities and government officials.

A missing-person detective does detective work with the Federal Bureau of Investigation (FBI) and other law enforcement agencies. This kind of detective looks into cases of missing adults and children from all states and jurisdictions. The investigator has to review the case file, interview family members and friends, and make sure the media gets out the information about the missing person. Detectives who assist in missing-persons cases may work on recent cases or cold cases that are 10 to 20 years old.

Local police departments usually hire narcotics detectives for detective work regarding the sale of illegal narcotics. These detectives work on undercover operations and meet with suspected drug dealers to confirm drug deals. Narcotics personnel also review cases with drug-related offenses to see if there are any connections to other alleged drug dealers. In federal cases, the narcotics detective may have to provide testimony about what he witnessed during the drug buy. Some situations may require the investigator to infiltrate groups of people suspected of drug trafficking.

Violent-crimes detectives conduct detective work with state and federal agencies as well as other jurisdictional departments. Some of the vicious crimes that these professionals investigate are homicides, assault and battery, and sexual assaults. Most of the detectives who oversee these cases speak with victims or witnesses, request medical records from hospitals, and collect evidence from crime scenes. Assignments also include searching computer databases and conducting voice-analysis tests.

Some detectives have achieved amazing success. François Vidocq, for example, organized the Sûreté, France's first plainclothes investigative police unit. A Scottish-American detective named Allan Pinkerton became famous for creating the Pinkerton Detective Agency, the first private detective agency in the United States. The stories about the fictional master detective Sherlock Holmes, written by the British writer Sir Arthur Conan Doyle, have sold millions of copies.

@lovelife--There are several agencies that will hire a detective or investigator. The best thing to do is to contact the agency you're interested in working at. Most places need at least a Bachelor's degree, but not always. Some places will have their own certification classes that you will complete along with logging on-the-job training.

Great information. I didn't realize that there are so many different types of detectives. Does anyone know what type of schooling you need to become a detective?
Source: http://www.wisegeek.com/what-are-the-different-types-of-detective-work.htm
Pollution is shutting down Florida beaches in record numbers. The Natural Resources Defense Council released its annual beach report this week, showing that Florida's beach closings have gone up 128 percent over the past year. The Panhandle area was the worst, partly because of all the bays in northwest Florida. High bacteria levels caused 88 percent of the closings and advisory days.

Linda Young of the Clean Water Network says it's cause for concern. "These are extremely disappointing numbers for us to have a look at in writing the report. Look at the data and think of the consequences and the reality of what this means. In this state, to have that many of our beaches unsafe for people to get in the water, it's terrible."

Bay County had the highest number of days in the state when beaches were closed or under a pollution advisory, across its 13 locally monitored beaches. Okaloosa County came in second, with high pollution levels on 12 beaches. Here's how the Bay County beach report breaks down:

- The 8th Street coast in Mexico Beach was closed 49 days
- Beach Drive in Panama City came in with 84 days closed or under a health advisory
- The beach at the south end of Beckrich Road at Panama City Beach had 28 days
- Bid-a-Wee was closed or under advisory for 21 days
- Here's the big one: Carl Gray Park was closed for 200 days
- Delwood Beach had 42 days
- Laguna Beach - 39 days
- Panama City Beach pier waters were closed or under advisory for 28 days
- Rick Seltzer Park - 14 days
- Spyglass Drive beach waters were closed 38 days
- The Bay/Walton County line had 7 days of closings or advisories

A spokesman for the Florida Department of Environmental Protection says the increase in beach closings is not because the beaches are dirtier. He says environmental officials are now doing more frequent testing that includes additional types of bacteria. To see how your beaches did, log on to www.nrdc.org and click on state summaries.
Source: http://www.wjhg.com/home/headlines/909892.html?site=full
atom

part of speech: noun

definition: the smallest possible unit of a chemical element. Water is a substance that contains two kinds of atoms.

word history: atom is from an ancient Greek word that means "not able to be cut or divided."
Source: http://www.wordsmyth.net/?rid=2561&dict=1
Category: Fruit & Citrus

02:34PM, 20 Oct 2009

I notice that on all of my dwarf fruit trees, planted some 8 months ago, the new leaf growth is getting white marks on the leaves; the leaves go translucent and then fall off. Some of the existing adult leaves curl, but seem to stay like this. The fruit trees have been planted in a raised bed some 600 mm high and are on my trickle system, getting watered every other day. I fertilised several weeks ago with a 'fruit' fertiliser as per the directions, taking care to keep the product at least 10 cm from the trunk.

The affected foliage on your tree should be pruned off now, but when new foliage appears, get in early and spray with PestOil. This pest (citrus leaf miner) is active during the summer/autumn months. You are doing the correct thing by making sure your citrus receive enough water and are fed with a good quality citrus food. We would recommend applying Dynamic Lifter Plus Organic Fruit Food. This product will encourage strong new growth and flowering to produce a good supply of fruit. Keep up the good work.

This area is for general comments from members of the public. Some questions or comments may not receive a reply from Yates. For specific gardening advice, visit Ask an Expert. Alternatively, you may wish to contact us.
Source: http://www.yates.com.au/garden-expert/answers/fruit-citrus/8937-how-to-identify-and-control-citrus-leaf-miner
yum

After a bite of ice cream, this child would say "yum!" An example of yum is what a child would say after tasting ice cream for the first time.

Origin of yum: echoic; see yummy.

- Used to express appreciation of or eagerness for a tasty food or beverage.
- Used to express appreciation for something attractive or to express eagerness for a pleasurable experience.

Origin of yum: imitative.

- An expression used to indicate delight in regard to a certain food's flavor: "Yum! This apple pie is delicious!"
Source: http://www.yourdictionary.com/yum
Buildings are responsible for around 40% of the anthropogenic greenhouse gas emissions in the western world. And while the technology to mitigate these GHG emissions is available, economic as well as convenience considerations have so far resulted in very few implementations. This is where our research is focused: providing the technological innovations in a manner that is both beneficial for the environment and increases the well-being of the occupants.

The above research directions are all implemented and tested in real-life conditions (in "Living Labs"):

The HiLo lightweight living unit is being constructed as part of the NEST project at EMPA. Two research groups of the Institute of Technology in Architecture, ETH Zurich, are responsible for the project: the Assistant Professorship of Building Structure (BLOCK Research Group / BRG) and the Assistant Professorship of Architecture and Sustainable Building Technologies (SuAT). The two teams develop their ideas with international partners, Supermanoeuvre from Sydney and Zwarts & Jansma Architects (ZJA) from Amsterdam. The two-story penthouse is designed as an energy-plus building and, according to plan, should be completed in the summer of 2015. The adaptive facade and the occupant-centered control are only two of the five innovations that will be displayed in this living lab.

The ETH House of Natural Resources (HoNR) is an office building for the Laboratory of Hydraulics, Hydrology and Glaciology at ETH Zürich and will serve as a showcase of sustainable and reliable timber construction for students and researchers, among others. The building has multiple innovative aspects, such as the use of beech for structural elements, the implementation of a permanent sensor network, and the performance of in situ tests at different construction stages. We will implement the adaptive solar facade as well as the occupant-centered control approach here.

Retrofit measures are an effective means to improve both the heating energy demand and the carbon footprint of a building. On one hand, reducing the losses through the envelope reduces the energy consumption. On the other hand, updating the heating from a fossil-fuel-based system to an emission-free one bears the potential for CO2-emission-free operation. This project explores how simple measurements can provide useful insight into the energetic behavior of a building and allow us to derive models that predict the effect of the retrofit measures.

Z. Nagy, D. Rossi, Ch. Hersberger, S.D. Irigoyen, C. Miller, and A. Schlueter, Balancing Envelope and Heating System Parameters for Zero Emissions Retrofit using Building Sensor Data, Applied Energy, in press.
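To give a flavor of what a simple measurement-driven model can look like, here is a minimal sketch in Python. It only illustrates the general degree-day idea; the synthetic data and the assumed 30% envelope improvement are invented for this example and are not taken from the project or the cited paper.

```python
import numpy as np

# Sketch of a data-driven heating model: regress daily heating energy
# on heating degree-days to estimate an overall loss coefficient, then
# predict the effect of an envelope retrofit. All numbers are synthetic.

rng = np.random.default_rng(1)
t_out = rng.uniform(-5.0, 18.0, 90)          # daily mean outdoor temperature [C]
hdd = np.clip(18.0 - t_out, 0.0, None)       # heating degree-days (base 18 C)
energy = 35.0 + 12.0 * hdd + rng.normal(0.0, 8.0, 90)  # "metered" kWh/day

# Least-squares fit of energy = base + UA * HDD
A = np.column_stack([np.ones_like(hdd), hdd])
base_fit, ua_fit = np.linalg.lstsq(A, energy, rcond=None)[0]

# Predicted seasonal savings if a retrofit cut envelope losses by 30%
savings = 0.3 * ua_fit * hdd.sum()
print(f"fitted loss coefficient: {ua_fit:.1f} kWh per degree-day")
print(f"predicted savings over the period: {savings:.0f} kWh")
```

Comparing such fitted coefficients before and after an intervention is one way simple measurements can support the kind of retrofit predictions described above.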
[...] it would be interesting in surgery if you could swallow the surgeon. You put the mechanical surgeon inside the blood vessel and it goes into the heart and looks around. [...] How do we make such a tiny mechanism? I leave that to you. - Richard Feynman, There's Plenty of Room at the Bottom, 1959.

We are developing a modular robotic system that can be swallowed and will assemble inside the G.I. tract for therapeutic and diagnostic procedures. ETH Zürich is one of four European universities participating in this project, led by Paolo Dario at Scuola Superiore Sant'Anna. My research involves the investigation of the self-assembly of the ARES robot inside the stomach. Using a specific magnet configuration on the connection face, assembly success rates of up to 90% are possible.

Another project aims at developing magnetic microrobotics for ophthalmic surgery. I am involved in developing the magnetic model for assembled-MEMS microrobots. The model is validated through FEM simulations and experiments, and captures the characteristics of complex 3-D structures. It allows us, for the first time, to consider full 6-DOF control of untethered devices. The Swiss French Television (TSR, Nouvo, Dec. 5, 2008) reported on both projects; a German version (SF1, Einstein, Feb. 5, 2009) is also available.

In a further project, we consider the world's first truly untethered microrobots that are driven by oscillating magnetic fields. The oscillations are converted into mechanical energy and rectified using a spring-mass impact system with friction, leading to stick-slip motion. I model this system using non-smooth multi-body dynamics and can explain several unintuitive behaviors that are not explicable experimentally because of the nature of the devices. The microrobots are fabricated and characterized experimentally by Dominic Frutiger.

Publications:
- C. Miller, Z. Nagy, and A. Schlueter, A review of unsupervised statistical learning and visual analytics techniques applied to performance analysis of non-residential buildings, under review
- G.P. Lydon, J. Hofer, B. Svetozarevic, Z. Nagy, and A. Schlueter, The energy concept of a net plus energy building using multifunctional elements, under review
- P. Block, A. Schlueter, D. Veenendaal, J. Bakker, M. Begle, J. Hofer, P. Jayathissa, G. Lydon, I. Maxwell, T. Mendez Echenagucia, Z. Nagy, D. Pigram, B. Svetozarevic, R. Torsing, J. Verbeek, and A. Willmann, NEST HiLo: Research and innovation unit for lightweight construction and building energy systems integration, under review
- P. Jayathissa, M. Jansen, N. Heeren, Z. Nagy, and A. Schlueter, Life Cycle Assessment of Dynamic Building Integrated Photovoltaics, Solar Energy Materials and Solar Cells
- Z. Nagy, F.Y. Yong, and A. Schlueter, Occupant Centered Lighting Control: A user study on balancing comfort, acceptance, and energy consumption, Energy & Buildings
- Z. Nagy, B. Svetozarevic, P. Jayathissa, M. Begle, J. Hofer, G. Lydon, A. Willmann, and A. Schlueter, The Adaptive Solar Facade: From Concept to Prototypes, Frontiers of Architectural Research
- J. Hofer, A. Groenewolt, P. Jayathissa, Z. Nagy, and A. Schlueter, Parametric analysis and systems design of dynamic photovoltaic shading modules, Energy Science and Engineering
- A. Groenewolt, J. Bakker, J. Hofer, Z. Nagy, and A. Schlueter, Methods for modelling, analysis and optimisation of bendable photovoltaic modules on irregularly curved surfaces, under review
- L. Yang, Z. Nagy, Ph. Goffin, and A. Schlueter, Reinforcement Learning for Optimal Control of Low Exergy Buildings, Applied Energy, Vol. 156, pp. 577-586, October 2015
- Z. Nagy, F.Y. Yong (co-first), M. Frei, and A. Schlueter, Occupant Centered Lighting Control for Comfort and Energy Efficient Building Operation, Energy & Buildings, Vol. 94, pp. 100-108, May 2015
- T. Kristensen, M. Ohlson, P. Bolstad, and Z. Nagy, Spatial variability of organic layer thickness and carbon stocks in mature boreal forest stands - Implications and suggestions for sampling designs, Environmental Monitoring and Assessment, Vol. 187, No. 8, pp. 1-19
- S. Miyashita, Ch. Audretsch, Z. Nagy, R.M. Füchslin, and R. Pfeifer, Mechanical catalysis on the centimetre scale, J. R. Soc. Interface, Vol. 12, No. 104, March 2015
- D. Rossi, Z. Nagy, and A. Schlueter, Soft Robotics for Architects, Soft Robotics, Vol. 1, No. 2, pp. 147-153, 2014
- C. Miller, Z. Nagy, and A. Schlueter, Automated Daily Pattern Filtering of Measured Building Performance Data, Automation in Construction, Vol. 49, Part A, pp. 1-17, January 2015
- D. Rossi, Z. Nagy, and A. Schlueter, Adaptive Distributed Robotics for Environmental Performance, Occupant Comfort and Architectural Expression, Int'l. Journal of Architectural Computing, Vol. 10, No. 3, pp. 341-360, September 2012
- S. Miyashita, K. Nakajima, Z. Nagy, and R. Pfeifer, Self-organized Translational Wheeling Motion in Stochastic Self-assembling Modules, Artificial Life, Early Access (2013), pp. 1-17
- Z. Nagy, R.I. Leine, D.R. Frutiger, C. Glocker, and B.J. Nelson, Modeling the Motion of Microrobots on Surfaces Using Non-Smooth Multibody Dynamics, IEEE Transactions on Robotics, Vol. 28, No. 5, pp. 1058-1068, October 2012
- Z. Nagy and B.J. Nelson, Lagrangian Modeling of the Magnetization and the Magnetic Torque on Assembled Soft-Magnetic MEMS Devices for Fast Computation and Analysis, IEEE Transactions on Robotics, Vol. 28, No. 4, pp. 787-797, August 2012
- S. Miyashita, Z. Nagy, B.J. Nelson, and R. Pfeifer, The Influence of Shape on Parallel Self-Assembly, Entropy, 11(4), pp. 643-666, 2009
- Z. Nagy, K. Harada, M. Fluckiger, E. Susilo, I.K. Kaliakatsos, A. Menciassi, E. Hawkes, J.J. Abbott, P. Dario, and B.J. Nelson, Assembling Reconfigurable Endoluminal Surgical Systems: Opportunities and Challenges, International Journal of Biomechatronics and Biomedical Robotics, Vol. 1, No. 1, pp. 3-16, 2009
- M. Probst, M. Flueckiger, S. Pane, O. Ergeneman, Z. Nagy, and B.J. Nelson, Manufacturing of a Hybrid Acoustic Transmitter Using an Advanced Microassembly System, IEEE Trans. Indust. Electronics, Vol. 56, Issue 7, pp. 2657-2666
- M. Flueckiger, Z. Nagy, M. Probst, O. Ergeneman, S. Pane, and B.J. Nelson, A Microfabricated and Microassembled Wireless Resonator, Sensors & Actuators A: Physical, Vol. 154, No. 1, pp. 109-116, 2009
- Z. Nagy, D. Rossi, Ch. Hersberger, S.D. Irigoyen, C. Miller, and A. Schlueter, Balancing Envelope and Heating System Parameters for Zero Emissions Retrofit using Building Sensor Data, Applied Energy, Vol. 131, October 2014
- F. Beyeler, S. Muntwyler, Z. Nagy, C. Graetzel, M. Moser, and B.J. Nelson, Design and calibration of a MEMS sensor for measuring the force and torque acting on a magnetic microrobot, J. Micromech. Microeng. (18) 025004 (7pp), 2008
- J.J. Abbott, Z. Nagy, F. Beyeler, and B.J. Nelson, Robotics in the Small, Part I: Microrobotics, IEEE Robotics & Automation Magazine, Vol. 14, No. 2, pp. 92-103, 2007
- Y. Peng, A.M. Rysanek, Z. Nagy, and A. Schlueter, Case Study Review: Prediction Techniques in Intelligent HVAC Control Systems, 9th Int. Conf. on Indoor Air Quality Ventilation & Energy Conservation In Buildings (IAQVEC), Seoul, Rep. Korea, 2016
- A. Willmann, S. Cisar, Z. Nagy, and A. Schlueter, Energy and the City: Investigating spatial and architectural consequences of a shift in energy systems on district level, Sustainable Built Environment (SBE16), Zurich, Switzerland
- Z. Nagy, F.Y. Yong, and A. Schlueter, What should a building be controlled for? Ask the occupants!, Sustainable Built Environment (SBE16), Zurich, Switzerland
- J. Hofer, Z. Nagy, and A. Schlueter, Electrical design and layout optimization of flexible thin-film photovoltaic modules, EU PVSEC, 2016
- B. Svetozarevic, Z. Nagy, J. Hofer, D. Jacob, M. Begle, E. Chatzi, and A. Schlueter, SoRo-Track: A Two-Axis Soft Robotic Platform for Solar Tracking and Building-Integrated Photovoltaic Applications, in Proc. IEEE Int. Conference on Robotics and Automation (ICRA), 2016, Stockholm, Sweden
- P. Jayathissa, Z. Nagy, N. Offeddu, and A. Schlueter, Numerical simulation of energy performance, and construction of the adaptive solar facade, Advanced Building Skins, 2015
- J. Hofer, A. Groenewolt, P. Jayathissa, Z. Nagy, and A. Schlueter, Parametric analysis and systems design of dynamic photovoltaic shading modules, EU PVSEC, 2015
- H. Zhao, Z. Nagy, D. Thomas, and A. Schlueter, Service-Oriented Architecture For Data Exchange Between A Building Information Model And A Building Energy Model, CISBAT, Lausanne, Switzerland, 2015
- J. Hofer, B. Svetozarevic, Z. Nagy, and A. Schlueter, DC building networks and local storage for BIPV integration, CISBAT, Lausanne, Switzerland, 2015
- M. Frei, Z. Nagy, and A. Schlueter, Towards Data-Driven Building Retrofit, CISBAT, Lausanne, Switzerland, 2015
- G. Lydon, A. Willmann, J. Hofer, Z. Nagy, and A. Schlueter, Balancing operational and embodied emissions for the energy concept of an experimental research and living unit, CISBAT, Lausanne, Switzerland, 2015
- C. Miller, Z. Nagy, and A. Schlueter, A seed dataset for a public, temporal data repository for energy informatics research on commercial building performance, 3rd Conf. on Future Energy Business & Energy Informatics, Rotterdam, NED, 2014
- B. Svetozarevic, Z. Nagy, D. Rossi, and A. Schlueter, Experimental Characterization of a 2-DOF Soft Robotic Platform for Architectural Applications, Robotics: Science and Systems, Workshop on Advances on Soft Robotics, Berkeley, CA, USA, 2014
- Z. Nagy, M. Hazas, M. Frei, D. Rossi, and A. Schlueter, Illuminating Adaptive Comfort: Dynamic Lighting for the Active Occupant, in Proc. 8th Windsor Conference: Counting the Cost of Comfort in a Changing World, April 2014, London, UK
- C. Miller, Z. Nagy, A. Schlueter, et al., BIM-Extracted EnergyPlus Model Calibration for Retrofit Analysis of a Historically Listed Building, Building Simulation Conference, ASHRAE/IBPSA-USA, Atlanta, GA, USA, 2014
- D. Rossi, Z. Nagy, and A. Schlueter, Simulation Framework for Design of Adaptive Solar Facade Systems, CISBAT, Lausanne, Switzerland, 2013
- D. Rossi, Z. Nagy, and A. Schlueter, Soft Pneumatics for Sustainable Architecture, Int'l. Workshop on Soft Robotics and Morphological Computation, Ascona, Switzerland, 2013
- D. Rossi, Z. Nagy, and A. Schlueter, Adaptive Distributed Architectural Systems, ACADIA - Synthetic Digital Ecologies, San Francisco, CA, USA, October 2012
- Z. Nagy, D. Rossi, and A. Schlueter, Sustainable architecture and human comfort through adaptive distributed systems, IEEE Pervasive Computing and Communications Workshop (PERCOM), March 2012, Lugano, Switzerland
- Z. Nagy and B.J. Nelson, On the Feasibility of Magnetic Self-Assembly for Swallowable Modular Robots, Workshop on MesoScale Robotics for Medical Interventions at the IEEE Int. Conference on Robotics and Automation (ICRA), 2010, Anchorage, AK, USA
- Z. Nagy, D. Frutiger, R.I. Leine, C. Glocker, and B.J. Nelson, Modeling and analysis of wireless resonant magnetic microactuators, in Proc. IEEE Int. Conference on Robotics and Automation (ICRA), 2010, Anchorage, AK, USA
- Z. Nagy, S. Miyashita, S. Muntwyler, A.K. Cherukuri, J.J. Abbott, R. Pfeifer, and B.J. Nelson, Morphology Detection for Magnetically Self-Assembled Modular Robots, in Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2009, St. Louis, MO, USA
- Z. Nagy, M. Flueckiger, O. Ergeneman, S. Pane, M. Probst, and B.J. Nelson, A Wireless Acoustic Emitter for Passive Localization in Liquids, in Proc. IEEE International Conference on Robotics and Automation (ICRA), 2009, Kobe, Japan
- Z. Nagy, R. Oung, J.J. Abbott, and B.J. Nelson, Experimental Investigation of Magnetic Self-Assembly for Swallowable Modular Robots, in Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2008, Nice, France
- Z. Nagy, O. Ergeneman, J.J. Abbott, M. Hutter, A.M. Hirt, and B.J. Nelson, Modeling Assembled-MEMS Microrobots for Wireless Magnetic Control, in Proc. IEEE International Conference on Robotics and Automation (ICRA), 2008, Pasadena, CA, USA
- F. Beyeler, S. Muntwyler, Z. Nagy, M. Moser, and B.J. Nelson, A Multi-Axis MEMS Force-Torque Sensor for Measuring the Load on a Microrobot Actuated by Magnetic Fields, in Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2007, San Diego, CA, USA
- Z. Nagy, J.J. Abbott, and B.J. Nelson, The Magnetic Self-Aligning Hermaphroditic Connector: A Scalable Approach for Modular Microrobots, in Proc. IEEE/ASME Int. Conf. Advanced Intelligent Mechatronics, 2007, Zurich, Switzerland
- Z. Nagy and A. Sadowski, Systems and Methods for Data Management, U.S. Provisional Patent Application No. 61/286,835
- A. Schlueter, Z. Nagy, and D. Rossi, Energie sparen mit lernfähiger Fassade, Bulletin SEV/VSE, August 2014
- Z. Nagy and F. Beyeler, Die Mikrowelt entdecken: Die Arbeit mit Mikroobjekten, Bulletin SEV/VSE, November 2009

Student member of the Institute of Electrical and Electronics Engineers (IEEE) and the American Society of Mechanical Engineers (ASME).

Teaching:
| Term | Course | Role | Students |
| Fall'06, '07, '08 | Introduction to Robotics and Mechatronics | Supervision of weekly laboratory sessions and exam grading | 60-100 |
| Fall'07, '08, '09 | Theory of Robotics and Mechatronics | Exam grading | 40-90 |
| Spring'08, Fall'09, '10 | Microrobotics | Prepared and gave lectures and assignments on magnetism | 20-40 |
| Fall'13 | Low-Ex + Architecture Seminar | Prepared and gave lecture on Photovoltaics | 10-20 |

Education and experience:
| MIT, Cambridge, USA | 10/2009-11/2009 | Visiting Graduate Student with Prof. Daniela Rus (CSAIL) |
| ETH Zürich | 2006-2011 | Dr. Sc. ETH. Advisor: Prof. Brad Nelson. PhD thesis: Modeling and analysis of the magnetization, torque and dynamics of untethered soft-magnetic microrobots |
| ETH Zürich | 2001-2006 | Dipl.-Ing. (M.S.) Mechanical Engineering. Advisors: Dr. Jake Abbott and Prof. Brad Nelson. MSc thesis: Numerical approaches to 3D magnetic MEMS |
| Sensirion AG | 07/2005-10/2005 | Internship (Micro Flow Sensors) |
| DTU Copenhagen | 02/2005-06/2005 | Academic Exchange Semester (ERASMUS Scholarship) |

Skills:
| Mathematical Modeling | MATLAB/Simulink, Mathematica |
| Numerical Analysis (FEM) | ANSYS, COMSOL Multiphysics (FEMLAB), Maxwell-3D |
| Computer Aided Design (CAD) | Unigraphics NX, SolidWorks |
| Programming and OS | C/C++, PHP/MySQL, Windows, Linux |
| Documentation and Design | MS Office, LaTeX, Adobe Illustrator & InDesign |

The only real valuable thing is intuition. - Albert Einstein
Source: http://www.zoltan-nagy.net/
Assessing Local Needs Related to Parent Involvement

The relative family-friendliness of a school refers to how inviting it feels to the families of its students: do families feel they would be welcome to ask questions, to contribute somehow in their children's classroom, to make suggestions, or to otherwise support their children's education? The degree to which parents feel at ease in their children's school is influenced by such factors as who initially greets them and whether they are met with a smile, with a frown, or ignored entirely; whether there is a physical space for parents to meet and find information and resources related to the school and education in general; whether they receive timely information (e.g., about school events, student productions, upcoming assessments) on a regular basis, as in a weekly newsletter coming home with their children; whether teachers and the principal seem open to questions or feedback; and whether the only time parents hear from anyone at school is when there is a problem with their child.

Parents who have made an initial effort to come to school to meet their children's teachers and principal are less likely to return if their experience is not positive. On the other hand, if parents are enthusiastically invited into schools, warmly greeted, and engaged in ways that make them feel comfortable and assure them that their input and questions are valued, they may be willing to come back and become involved at levels they might not even have considered. By assessing both parents' current thoughts on the climate of the school and staff feelings about parent involvement, schools can get a better idea of how they need to improve in the area of family friendliness, and they can solicit targeted help from their PIRC.

While personal interviews and focus groups can be used to solicit in-depth information about parent and staff attitudes, few schools can manage such intensive ways of gathering information. Written surveys are a much more efficient method that can still yield good results. The act of conducting a survey is itself a parent-friendly message that a school cares what parents think. It gives both parents and staff a voice in articulating what works and what does not work in the particular school community as related to parent involvement. In yielding site-specific information, it offers important guidance. As one parent noted when talking about the value of a school survey, "It gives us data about our actual community. It's not just something we got from someplace else that may or may not really fit us."

Provide Surveys on Schools' Family Friendliness as a PIRC Service

While some schools and districts develop and conduct their own school surveys for various purposes, both the Indiana Partnerships Center and ADI's PIRC recognized that not all education agencies have this capacity. Six years ago, the Indiana PIRC contracted with an outside agency to develop the "Are We Family Friendly?" survey for distribution to Indiana schools. This perception survey asks parents how comfortable they feel in the school; how informed they feel about their children's performance and how to help them; whether or not they feel invited to participate in the school's activities, and at what level; and how empowered they feel in addressing any issues and concerns they might have.
Teachers, in turn, are asked how often and in what capacity parents are invited to participate in their children's education in the classroom and at home; how informed they keep the parents; whether they make home visits and go into students' communities; and how much they solicit information. The PIRC's intent was to have schools across the state administer the survey, with the PIRC analyzing and feeding the results back to them. But over the years it had become clear that many schools were unable to ensure enough of a response to make the survey worthwhile; sending surveys home with students or mailing them to a family's home was not effective.

In 2005, the new superintendent of IPS required that all Title I schools in the district administer the survey to assess their family friendliness. The Indiana Partnerships Center collaborated with IPS to revise the survey and also create a Spanish-language version. To further ensure a greater parent response rate, parent liaisons were used to disseminate the survey. Given the nature of their work, which entails developing strong relationships with parents at their site, the liaisons seemed well positioned to encourage parents to respond to the survey, to answer their questions, to monitor survey returns, and to provide follow-up if parents needed additional encouragement to respond. As a result of this approach, some 4,900 parents completed the survey. Equally important, 880, or 18 percent, of the parent respondents were Spanish speakers, whose voices may have remained silent in the absence of a translated survey.

ADI also offers a school survey, which was first developed in 1996 in a project with the Regional Educational Laboratory at Temple University. The survey has evolved and expanded over the years; today, in addition to asking parents and teachers about parent-related issues at their school, it includes questions for principals. If the survey is administered for a high school, students are also included. The topics covered for parents and teachers are similar to those in the Indiana survey, while principals are asked more about what existing services and structures are already in place to support parent involvement: what types of written policies have been developed to promote parent involvement (e.g., homework policy, school-parent compact), what mechanisms exist to invite parents into the school (e.g., family nights, conferences), what resources are available at the school for parents (e.g., parent resource library, trainings), and what methods are used to communicate with parents (e.g., home visits, newsletters). (See fig. 8, Academic Development Institute: Principal Element From School Survey.) The survey is given to principals to administer to their school populations. ADI then analyzes the data and generates a detailed report, which is shared with the school community, administration, and faculty; the school board; parent organizations; and other interested parties.

Use Survey Results to Inform Parent-related School Practice

Both ADI and the Indiana Partnerships Center take steps to help ensure that survey results are easily understandable and are used by schools in meaningful ways. ADI's survey-results report goes into considerable depth comparing and contrasting how parents and staff view issues and identifying areas where more work is needed to generate effective partnering between parents and school.
Its purpose is to help school communities draw conclusions about areas of success and challenge and to aid them in creating an action plan to strengthen their community. In addition to administering the survey, ADI offers a consulting service that includes up to three site visits: a pre-survey visit, a visit to review results and develop an action plan that is often tied to the goals of the school improvement plan, and a final visit three to six months later to assess progress. For Solid Foundation schools, the survey is administered at the beginning of the program and again at the end of the two-year Solid Foundation process. The results of these two surveys are then compared to identify areas of progress and areas still in need of improvement. ADI also administers progress reports twice a year for two years, in December and June. These reports track implementation of the action plan through factors such as how many home visits have been made.

At one school, survey results identified homework as a significant issue for many parents, although they did not necessarily agree on how much or what type of homework there should be. As a result, at the time of this study, the school was considering a new homework policy that might include, for example, ensuring that all teachers use what ADI has identified as a best-practice approach to assigning homework (i.e., 10 minutes of homework in first grade, 20 minutes in second grade, 30 minutes in third grade, and so on) and sending parents tips on how to help with homework.

Because ADI employs an evaluator, the organization has the capacity to handle its survey analysis in-house. The Indiana PIRC does not have this same internal capacity, so it includes in its annual budget the funds to contract with an evaluator from a state university who analyzes the survey data and writes a report based on the findings. Committed to making findings accessible to those surveyed, including parents, the Indiana Partnerships Center has summarized survey findings into two pages of parent-friendly text with easy-to-read graphs and advice on next steps based on the findings. "We know from responses to our newsletter that people like things simple and they like information in graphs," says the center director, adding, "Less is better." In addition to preparing the written report, the evaluator consults with the PIRC about any implications for policy and practice, and the PIRC, in turn, incorporates this into its subsequent discussions with the client school or district. Once parents and educators realize that their voices have been heard and their input considered, they might be more willing to support any proposed changes in policy and practice. (See fig. 9, Indiana Partnerships Center: Example of Parent and Educator Survey Results Presentation.)

The analysis of IPS's 2005-06 survey identified "parents as decision-makers" as the area most in need of improvement across the schools surveyed. Based on this information, individual schools began considering how to get parents more involved in school decision-making; the district started reviewing its parent involvement policies and supports; and, for its part, the Indiana Partnerships Center undertook a review of its leadership training.

Figure 9. Indiana Partnerships Center: Example of Parent and Educator Survey Results Presentation

Tips for Assessing Local Needs Regarding the Parent-Friendly Nature of Schools
Source: http://www2.ed.gov/admins/comm/parents/parentinvolve/report_pg18.html?exp=5
Linguistic terminology is so spiffy! There are plosives and flaps, approximants and glides, fricatives and affricates, nasals and trills. Consonants can be velars, alveolars, labials, labiovelars, bilabials, labiodentals, palatals, uvulars, pharyngeals, or glottals. They can be voiced or unvoiced. And more ... all of which I am incompetent to address. But, like the person who was delighted to learn that for his whole life he had been speaking prose without knowing it, I just love the sound of the words that describe the sounds of human language. And, in the context of the word zhurnal, that initial "zh" (the eighth letter of the Russian alphabet) is, I recently learned, a voiced postalveolar fricative. What a mouthful! (for pronunciation see TreasureKnowledge (26 Oct 2002), ...) (but according to Wikipedia, "The sound in Russian denoted by <ж> is commonly transcribed as a postalveolar fricative but is actually a laminal retroflex fricative.")
Source: http://zhurnaly.com/cgi-bin/wiki/VoicedPostalveolarFricative
Learning Center Search Results

Showing 1-2 of 2 results

Researching Maximilian Parker, the father of Butch Cassidy, using the Internet, Lancashire, England, records, and LDS records.

The third of a three-part series, this lesson introduces the FamilySearch online collections for South Africa. The class will discuss the types of records available, various ways to access the records, and how to use the collections. There will be a short quiz/activity at the end of the lesson; pause the recording when the question is read, work to answer the question, and then resume the recording for the answer. The handout includes the activity questions.
Source: https://familysearch.org/learningcenter/results.html?q=*&fq=languages_en%3A%22English%22&fq=format%3A%22Presentation%22&fq=skill_level%3A%22Beginner%22
Law Library Research Guides are online bibliographies that feature resources for legal professionals. These guides can be a good starting point for your legal research. The guides include both secondary and primary legal resources: for example, books from the online catalog, legal treatises, legal journals, case law reporters, federal and state codes, administrative rules, agency decisions, and legal forms. The library guides also feature many online resources: the journal indexes HeinOnline and Index to Legal Periodicals, for example; subject-specific legal databases like the Bloomberg/BNA online newsletters and law centers; and, when available, links to free Internet resources for cost-effective legal research.

Listed below are several guides that may prove useful for the clinical programs at the Law School:

Law and Entrepreneurship Clinic
Center for Patient Partnerships
Economic Justice Institute & Family Law Project
Frank J. Remington Center
Great Lakes Indian Law Center
Government and Legislative Law Clinic

If you do not see a research guide on your topic, you can request a new guide by emailing the Law Library at [email protected].

Submitted by Jenny Zook, Reference Librarian on October 30, 2013

This article appears in the categories: Law Library
Source: https://law.wisc.edu/newsletter/Law_Library_and_IT/Research_Guides_for_the_Law_Cent_2013-10-30
Popular Science posted an article on Pangaea a week ago or so (8/8/2013). It had a beautiful graphic that caught my eye, showing what the supercontinent looked like if we superimpose today's geopolitical boundaries upon it for reference. This is the source of my musing today. Pangaea was a supercontinent that existed during the late Paleozoic and early Mesozoic eras, forming about 300 million years ago.

I look at this image and it immediately evoked a series of questions in my mind:

- Did any landmasses disappear, and should the map be fuller?
- For example, Atlantis: would it have appeared near the England/France water areas?
- How about the four great rivers of Genesis?
- Did islands like Hawaii or the Azores appear after Pangaea?
- Were there other islands that existed when Pangaea did, but have disappeared since?
- Would Canada's Hudson Bay and Great Lakes areas be land or water?
- What about the Caribbean islands?
- Does this supercontinent shed any light on dinosaur fossil finds, if plotted against this map?

Well, those were some of my musings when I saw that map. How about you? Did it give you pause to wonder? Email me!
Source: https://mikeeliasz.wordpress.com/tag/supercontinent/
- For the time in history, see Middle Ages.

In most countries, people think that by middle age a person should be mature, perhaps with a good, steady job and a family. Due to promotion, middle age is sometimes a time when adults have greater wealth and influence than earlier in their careers, although they may have less disposable income due to having children. However, it is also common for some adults to suffer a mid-life crisis, when they are unsure about their life and sometimes become depressed because of it. Women and men often have different experiences of middle age, partly depending on whether they choose to work or care for their family full-time.

Health

Middle-aged females find it harder to become pregnant. If they do have a baby, there is a higher chance that it will have a genetic disease (Down syndrome, for example). Females usually go through menopause in middle age, when they stop bleeding every month. When that happens, they cannot have children anymore.
Source: https://simple.wikipedia.org/wiki/Middle_age
The "A" Essay

Content: The argument is clear, focused, and supported. These papers engage with the text thoroughly and competently, and clearly demonstrate how the analysis fits in with the larger point of the essay. Further, they make efforts to engage counterarguments, consider the implications of the argument for other areas of the text, or demonstrate an awareness of the complexity of the topic. Finally, papers that push beyond simply "answering" the assignment and attempt to work with the material in provocative, interesting, and complex ways will fall into the "A" category.

Structure: Structurally, they follow a clear development of ideas, providing evidence where appropriate. The paragraphs are developed and progress logically from what precedes them. The introduction and conclusion are effective; the introduction provides necessary contextual information in an engaging and original way, and the conclusion moves beyond simply restating the introduction by suggesting the implications of the argument.

Style & Mechanics: Aside from being virtually free of grammatical errors, "A" papers read with a certain ease and clarity. They also demonstrate stylistic variety, on the level of the word, sentence, and paragraph, by varying length and word choice.

The "B" Essay

Content: Put simply, these papers are good essays. They accomplish the task at hand and do not have any significant structural, grammatical, or analytical errors that detract from the overall success of the paper. They demonstrate a thorough understanding of the text and have arguments that are adequate (but could be sharpened). They employ close reading and analysis successfully and are able to work textual support into the fabric of the argument. If counterarguments are not addressed directly, their absence does not weaken the argument. Though well executed, these essays do not push their analysis into the realm of the provocative.

Structure: The structure of the paper is logical (tying points back to the whole), contains relatively few areas of disorganization, and attempts to use transitions. This paper provides enough detail in the body to satisfy the reader and presents an effective introduction and conclusion.

Style & Mechanics: While they may contain a small number of awkward sentences, "B" papers are, for the most part, well written and contain few grammatical errors.

The "C" Essay

Content: Overall, "C" papers demonstrate an understanding of and ability to work with a topic. The essay makes an effort to answer the assignment; however, it may fall short when considering all aspects of the issue at hand. The approach is not as well-rounded. These papers attempt to make a point or argument, yet run into some problems when following the argument through the entire paper. Arguments are often defined only generally and do not address the complexity of the topic. The supporting evidence, gathered responsibly and used accurately, is nevertheless often obvious or easily accessible.

Structure: While the paper does follow a structure, this organization is sometimes lost and unclear. Transitions may be used, but are often mechanical.

Style & Mechanics: Some patterns of grammatical error emerge and take away from the strength of the essay. Sentence structure is relatively simple, and the writing style seems slightly awkward and choppy at moments.

The "D" Essay

Content: These papers may gesture at an overall point but do not articulate it clearly. Some attempt is made to answer the assignment, but clarity and depth of analysis give way to summary. Points are made yet may exist without textual support and without a clear tie to the rest of the paper. Understanding of the text is uncertain and possibly incorrect.

Structure: Transitions are non-existent, and the movement from paragraph to paragraph and sentence to sentence seems arbitrary at multiple points.

Style & Mechanics: The paper contains a significant number of grammatical errors. Coupled with the overall lack of organization, this makes the essay difficult to read.

The "F" Essay

This paper makes no effort to respond to the assignment. It contains no argument or point and does not engage with the text in any coherent, analytical way. Organization is absent and grammatical mistakes are numerous. The writing style is both awkward and inappropriate. This paper may also be plagiarized.
<urn:uuid:6bf95abc-add8-47a6-9b62-4263c1ccf972>
CC-MAIN-2016-26
https://sites.google.com/site/hustcolloquium/home/grading-rubric
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403826.29/warc/CC-MAIN-20160624155003-00080-ip-10-164-35-72.ec2.internal.warc.gz
en
0.934175
840
3.09375
3
Scientists think they understand why Jupiter’s Great Red Spot doesn’t die

Jupiter, the largest planet in the solar system, hosts one of the largest known storms. Nearly twice as wide as Earth, this storm looks like a big, reddish-brown eye in Jupiter’s southern hemisphere. It’s known as the Great Red Spot. Its winds have churned at least since the storm was first observed. That was nearly 200 years ago. Most studies predict it should have fizzled out ages ago. But a team of scientists now says that gases flowing vertically — meaning up and down — may explain the storm’s surprising staying power.

“We have lots of publications that show how the Red Spot dies,” Philip Marcus told Science News. He is a computational physicist at the University of California, Berkeley. Computational physicists like Marcus use mathematics and computer programs to test ideas in physics, the study of energy and matter.

Marcus and Pedram Hassanzadeh, a physicist at Harvard University, used math to build a computer model, or simulation, of the Great Red Spot. Their calculations may finally explain the spot’s longevity. Gases exit the swirling storm at both its top and bottom, their model suggests. These gases then pick up energy from nearby jet streams — strong, narrow air currents that blow through the atmosphere — before plunging back into the storm. This cycle may help keep the storm going, year after year, say the scientists. The pair presented its findings November 25 at a meeting of physicists in Pittsburgh.

Saturn, Jupiter and Earth all have jet streams. They sometimes lead to the formation of whirlwinds called vortices. (Tornadoes are one example of vortices.) Astronomers once thought that the Great Red Spot — a giant vortex — gained energy by swallowing up smaller vortices spun off by jet streams. But studies in the last few decades had suggested that Jupiter’s jet streams don’t make enough vortices to power the big one.

Previous studies had considered only winds that blow across the planet. Marcus and Hassanzadeh took a different approach. They included precise calculations of winds that blow vertically through and near the big red spot. When they included those vertical winds in their model, it showed the storm had enough oomph to keep spinning for as long as 800 years. That means Jupiter’s big storm could be around for a long, long time. (Or not: Scientists still don’t know when it started.)

Physicist Robert Ecke at Los Alamos National Laboratory in New Mexico called the idea that vertical winds keep the spot spinning “very reasonable.” He told Science News that though the new findings need to be examined by other scientists, they open a window on a new way to think about giant vortices.

computational physics: The use of computer models, or simulations of complex, real events, and mathematics to test and study ideas from physics.
computer model: A program that runs on a computer and uses math to create a model, or simulation, of a real-world phenomenon or event.
jet stream: A narrow current of air that races through the atmosphere, usually from west to east.
physics: The scientific study of the nature and properties of matter and energy.
simulation: An event or process that serves as a working imitation — or model — of the real thing. Many simulations are now developed by computers to provide a virtual (computer) model of an event, such as a storm or the burning of a fuel.
vortex (plural: vortices): A swirling whirlpool of some liquid or gas.
Tornadoes are vortices, and so are the tornado-like swirls inside a glass of tea that’s been stirred with a spoon. Smoke rings are doughnut-shaped vortices.
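The balance the model describes, steady energy loss offset by resupply from vertical flows, can be mimicked with a toy calculation. The sketch below is purely illustrative, with made-up numbers; it is nothing like the team's detailed fluid model:

```python
# Track a storm "energy" E that decays over a timescale TAU (years) but is
# partly topped up each year by gases cycling vertically through the storm.
def years_of_spin(resupply, tau=50.0, threshold=0.05, dt=0.1, t_max=2000.0):
    e, t = 1.0, 0.0
    while e > threshold and t < t_max:
        e += (-e / tau + resupply) * dt   # loss vs. vertical resupply
        t += dt
    return round(t)

print(years_of_spin(0.0))    # no vertical flows: fizzles out in ~150 years
print(years_of_spin(0.018))  # with resupply: still spinning at the 2000-year cap
```

The point of the toy model is only this: a vortex that loses energy steadily must die, but even a modest, continuous energy input can keep it above the "dead" threshold indefinitely.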
<urn:uuid:932f195d-48fe-4423-960c-2a21920b2077>
CC-MAIN-2016-26
https://student.societyforscience.org/article/jupiter%E2%80%99s-long-lasting-storm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403826.29/warc/CC-MAIN-20160624155003-00080-ip-10-164-35-72.ec2.internal.warc.gz
en
0.945458
801
3.953125
4
by Helen Peppe, guest blogger and Stonecoast Alumna. Helen will participate in a faculty panel at the 2013 Stonecoast Summer Residency titled “What We Talk About When We Talk About Race.” This post has been reblogged from Write Here, Write Now with Sheila Boneham. Once upon a time there was a troll, the most evil troll of them all; he was called the devil. One day he was particularly pleased with himself, for he had invented a mirror which had the strange power of being able to make anything good or beautiful that it reflected appear horrid; and all that was evil and worthless seem attractive and worthwhile. This is the first paragraph of “The Snow Queen” by Hans Christian Andersen, who embedded moral lessons in fairy tales and other short works, many of which do not end happily ever after. Andersen created his characters using the rules of polarity: good and evil, beautiful and ugly, greedy and generous. He recognized that people universally think in terms of opposites, and that opposition pervades our physical environment: north and south, night and day, dark and light, hot and cold. Andersen kept his characters deceptively basic, a flat land of generic stereotype. There is the wicked witch and the beautiful princess, the conniving hag and unsuspecting king, and their differences create conflict. It’s as simple as yes and no, as right and wrong. But it isn’t. Read the full post here.
<urn:uuid:3f1fa35f-3314-45b3-85c2-e0be9ccc6885>
CC-MAIN-2016-26
https://usm.maine.edu/stonecoastmfa/focus-character-development-view-behind-lens
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403826.29/warc/CC-MAIN-20160624155003-00080-ip-10-164-35-72.ec2.internal.warc.gz
en
0.972923
307
2.875
3
What Is a Muscle Biopsy?

A muscle biopsy is a procedure that removes a small sample of tissue for testing in a laboratory. The test can help your doctor see if you have an infection or disease in your muscles. A muscle biopsy is a relatively simple procedure. It is usually done on an outpatient basis, which means you will be free to leave on the same day as the procedure. You may receive local anesthesia. This will numb the area the doctor is removing the tissue from, but you will remain awake for the procedure.

Why Is a Muscle Biopsy Done?

A muscle biopsy is performed if you are experiencing problems with your muscle and your doctor suspects an infection or disease could be the cause. The biopsy can help your doctor rule out a certain condition as a cause for your symptoms. It can also help them make a diagnosis and initiate a treatment plan. Your doctor may order a muscle biopsy for various reasons. They may suspect you have:
- defects in the way your muscles metabolize, or use, energy
- diseases that affect blood vessels or connective tissue, such as polyarteritis nodosa (which causes the arteries to become inflamed)
- infections related to the muscles, such as trichinosis (an infection caused by a type of roundworm)
- muscular disorders, including types of muscular dystrophy (genetic disorders that lead to muscle weakness and other symptoms)

Your doctor might also use this test to tell if your symptoms are being caused by one of the muscle-related conditions above, or by a problem with your nerves.

Risks of a Muscle Biopsy

Any medical procedure that breaks the skin carries some risk of infection or bleeding. Bruising is also possible. However, since the incision made during a muscle biopsy is small — especially in needle biopsies — the risk is much lower. Your doctor will not take a biopsy of your muscle if it was recently damaged in another procedure — for instance, by a needle during an electromyography (EMG) test — or if it’s already known to have nerve damage. There is a small chance of damage to the muscle where the needle enters, but this is rare. Always talk with your doctor about any risks before a procedure and share your concerns.

How to Prepare for a Muscle Biopsy

You don’t need to do much to prepare for this procedure. Depending on the type of biopsy you will have, your doctor may give you some instructions to carry out before the test. These instructions typically apply to open biopsies. It’s always a good idea to tell your doctor about any prescription drugs, over-the-counter medications, and herbal supplements you are taking prior to a procedure. You should discuss with them whether you should stop taking them before and during the test, or if you should change the dosage.

How a Muscle Biopsy Is Performed

There are two different ways to perform a muscle biopsy. The most common method is called a needle biopsy. For this procedure, your doctor will insert a thin needle through your skin to remove your muscle tissue. Depending on your condition, the doctor will use a certain type of needle. These include:
- core needle biopsy: a medium-sized needle extracts a column of tissue, similar to the way core samples are taken from the earth
- fine needle biopsy: a thin needle is attached to a syringe, allowing fluids and cells to be drawn out
- image-guided biopsy: this kind of needle biopsy is guided with imaging procedures — like X-rays or computed tomography (CT) scans — so your doctor can avoid specific areas like your lung, liver, or other organs
- vacuum-assisted biopsy: this biopsy uses suction from a vacuum to collect more cells

You will receive local anesthesia for a needle biopsy, and should not feel any pain or discomfort. In some cases, you may feel some pressure in the area where the biopsy is being taken. Following the test, the area may be sore for about a week.

If the muscle sample is hard to reach — as may be the case with deep muscles, for instance — your doctor may choose to perform an open biopsy. In this case, your doctor will make a small cut in your skin and remove the muscle tissue from there. If you are having an open biopsy, you may receive general anesthesia. This means you will be sound asleep throughout the procedure.

After a Muscle Biopsy

After the tissue sample is taken, it’s sent to a laboratory for testing. It could take up to a few weeks for the results to be ready. Once the results are back, your doctor may call you or have you come to their office for a follow-up appointment to discuss the findings. If your results come back abnormal, it could mean you have an infection or disease in your muscles, which may be causing them to weaken or die. Your doctor may need to order more tests to confirm a diagnosis or see how far the condition has progressed. They will discuss your treatment options with you and help you plan your next steps.
<urn:uuid:d5199263-eaa4-496d-affd-9a6d7407d0bf>
CC-MAIN-2016-26
https://www.aarpmedicareplans.com/health/muscle-biopsy
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403826.29/warc/CC-MAIN-20160624155003-00080-ip-10-164-35-72.ec2.internal.warc.gz
en
0.934618
1,116
3.21875
3
M.Ed., San Francisco State Univ. Jonathan has been teaching since 2000 and currently teaches chemistry at a top-ranked high school in San Francisco.

Here are some tips and tricks for writing acid-base net ionic equations. Now we all know that if you add an acid and a base, you always get two products: a salt and water. With a strong acid and a strong base, for example, you have HCl plus NaOH, and you end up with NaCl plus H2O. But that's not the net ionic equation. What happens is the NaCl drops out, because the NaCl is aqueous: it ionizes, and you're left with the water. So you always end up with H+ plus OH- yields H2O, because HCl and NaOH, since they're strong, fully dissociate. The Na+ and the Cl- are spectator ions. So with a strong acid and strong base, the net ionic equation is always H+ plus OH- yields H2O. Super easy.

In number two, say we have HCl and ammonia, NH3. Ammonia is a weak base. When you add them together, you end up with NH4Cl in the solution. The HCl fully ionizes, or dissociates, so you end up with H+, but the Cl- is a spectator ion here. The NH3 only partially dissociates, so you keep the NH3 together: keep "weak" together. Then you end up with NH4+. So the shortcut is: with a strong acid, you always write the H+, plus the NH3, which stays together because it's weak. Then you just combine what you have to make your product: H+ plus NH3 yields NH4+. That should make that easy.

We'll use the same philosophy with number three: weak acid, strong base. Say I have a weak acid, HF, hydrofluoric acid, and I add it to a strong base. Let's use sodium hydroxide again. I make my products: NaF plus H2O. Now take a look. HF is a weak acid, so keep it together, because it only partially dissociates. NaOH is strong, so it fully dissociates; the Na+ is a spectator ion, which leaves me the OH-. So I write HF plus OH-. To make the products, you want to make water, so you take the H from the acid and join it to the OH-. That's how you get the water, and you're left with the F- from the HF. So: HF plus OH- yields F- plus H2O. The shortcut is to keep the weak acid together, and when you say strong base, just use hydroxide.

The last one: weak acid and weak base. Let's do HF plus NH3. Since they're both weak, keep them both together. The acid donates its proton, the H+, to the weak base, so you're left with F- plus NH4+: HF plus NH3 yields F- plus NH4+. That's all you've got to do: keep them together if it's a weak acid and a weak base, then donate the proton from the acid to the weak base, and you get your products. So hopefully this mini-tutorial gives you some ideas of the patterns that you need for writing acid-base net ionic equations. Have a good one.
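The four patterns above are mechanical enough to put into a short script. Here is a minimal Python sketch of the idea; the function and its inputs are made up for illustration and are not part of the lesson:

```python
# Apply the tutorial's patterns: strong species are written fully
# dissociated (H+ or OH-); weak species are kept together.
def net_ionic(acid, conj_base, base, conj_acid, acid_strong, base_strong):
    if acid_strong and base_strong:
        return "H+ + OH- -> H2O"                     # all spectators drop out
    if acid_strong:                                  # strong acid, weak base
        return f"H+ + {base} -> {conj_acid}"
    if base_strong:                                  # weak acid, strong base
        return f"{acid} + OH- -> {conj_base} + H2O"
    return f"{acid} + {base} -> {conj_base} + {conj_acid}"  # both weak

# The four examples from the tutorial:
print(net_ionic("HCl", "Cl-", "NaOH", None, True, True))    # H+ + OH- -> H2O
print(net_ionic("HCl", "Cl-", "NH3", "NH4+", True, False))  # H+ + NH3 -> NH4+
print(net_ionic("HF", "F-", "NaOH", None, False, True))     # HF + OH- -> F- + H2O
print(net_ionic("HF", "F-", "NH3", "NH4+", False, False))   # HF + NH3 -> F- + NH4+
```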
<urn:uuid:988b3a2e-2caf-4d3c-80d6-77f104d3b22c>
CC-MAIN-2016-26
https://www.brightstorm.com/science/chemistry/acids-and-bases/tips-for-acid-base-net-ionic-equations/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403826.29/warc/CC-MAIN-20160624155003-00080-ip-10-164-35-72.ec2.internal.warc.gz
en
0.956614
836
3.953125
4
When my son was in preschool, I did what many parents of excessively energetic and impulsive preschoolers have surely done: I worried whether his behavior might be a sign of attention-deficit hyperactivity disorder (ADHD). Then I sought input from two pediatricians and a family therapist. The experts thought that his behavior was developmentally normal, but said it was still too early to tell for sure. They offered some tips on managing his behavior and creating more structure at home. One pediatrician worked with my son on self-calming techniques such as breathing deeply and pushing on pressure points in his hands. He also suggested an herbal supplement, Valerian Super Calm, for him to take with meals and advised us on dietary adjustments such as increasing my son's intake of fatty acids. Studies have shown that a combination of omega-3 (found in foods such as walnuts, flaxseed and salmon) and omega-6 fatty acids (from food oils such as canola and flax) can reduce hyperactivity and other ADHD symptoms in some children. In the couple of years since trying these techniques, my son has outgrown most of those worrisome behaviors. I had just about written off the possibility of ADHD until a few weeks ago, when his kindergarten teacher mentioned that she was going to keep an eye on him for possible attention issues. Hearing that left me worried and heavy-hearted. Why is it still so hard to diagnose ADHD? And why is there so much emotional baggage associated with treating it? There are no firm numbers for the number of children with ADHD in the United States. The Centers for Disease Control and Prevention estimates that 9 percent of U.S. children ages 5 to 17 had received diagnoses of ADHD as of 2009. It is far more prevalent in boys than in girls. Among those given the diagnosis, a small minority suffers extreme symptoms, and in those cases, diagnosis is fairly straightforward. Children with extreme cases tend to have trouble staying engaged in tasks, even those that they enjoy, for any length of time and find it impossible to stay still, particularly in classroom settings. But for the vast majority of children who are not so severely affected or who only partially fit the criteria, symptoms are often blurred, making it much more difficult to assess the disorder. "There is no line" that defines who does and does not have ADHD, says Lawrence Diller, a behavioral developmental pediatrician and an assistant clinical professor at the University of California at San Francisco. Except in the extreme, diagnosing ADHD is a "judgment call based on subjective opinion," he says. Schools play a major role in whether a child ends up with an ADHD diagnosis and is treated with stimulant medications. A large majority of referrals are generated by problems reported at school, Diller says, yet schools typically do not investigate the context of learning disorders and behavioral problems. "The whole system of diagnosis [of ADHD] is based primarily on symptoms of behavior only." Many doctors and some schools rely on the Vanderbilt Assessment Scale, a questionnaire meant to flag symptoms of ADHD and identify other underlying conditions. It includes general statements -- such as "Is distracted by extraneous stimuli" and "Is forgetful in daily activities" -- and asks the person completing the form to rank how often each applies to the child throughout the day. 
But the test does not provide the necessary insights into a child's home life -- discipline patterns, inadequate learning environments, familial difficulties, Diller says. "If the behavior crosses the threshold on these forms, the parent is likely to be told the child has ADHD, even though there can be a host of other reasons why the kid is acting that way." The child may also have other problems that have little to do with attention but result in ADHD-type behaviors. For instance, a child with an auditory processing problem -- a disorder in which the ears and the brain are not properly coordinated -- will hear oral instructions, but then those instructions might get scrambled. Instead of getting out the blue notebook and turning to Page 20, he or she may take out the wrong book and look lost, stare out the window or bother a friend. "That will be reported on the Vanderbilt as being distractible and not completing tasks," Diller says. Diller recommends that parents first address discipline and learning issues before turning to medications, particularly in children younger than 6. He shows parents how to be more immediate with setting limits, such as using a timer to let kids know how long they can play or being clear about consequences (for instance, if cleanup isn't sufficient, toys are removed immediately for a brief period of time), and he recommends "1-2-3 Magic," a book that gives parents tools for effective discipline. For "the kids who are in this gray zone, it can be difficult," says Thomas Insel, director of the National Institute of Mental Health. "What we usually say is to err on the side of trying to provide kids with structure and feedback. If that doesn't help, then you think about medication." Researchers are beginning to understand the neural pathways that underlie ADHD, progress that is identifying potential new strategies for treatment. One promising area of research has found that dopamine, a chemical messenger in the brain commonly associated with motivation and reward, is reduced in adults with ADHD. (Such studies have not been done in children since they require the use of small amounts of radioactivity, which is not recommended for people younger than 18.) In a 2009 study in the Journal of the American Medical Association, a team led by Nora Volkow, director of the National Institute on Drug Abuse, reported that decreased dopamine signaling in the ventral striatum, an area of the brain involved with reward and motivation, was associated with attention problems in adults with untreated ADHD. The results suggest that low dopamine levels in the reward center might explain why many children and adults with ADHD struggle with a lack of motivation about certain tasks. In a 2012 study in the Journal of Neuroscience, Volkow and colleagues showed that methylphenidate, a stimulant that is the active ingredient in Concerta and Ritalin, restored dopamine to normal levels and significantly improved inattention and hyperactivity in adults. Notably, they found that the dopamine messages were enhanced in the ventral striatum following treatment. This showed that increased dopamine transmission in the reward center of the brain was key to improving their patients' ADHD symptoms. Even though stimulants have been proved safe and effective in children with ADHD, the decision to medicate is controversial and fraught with anxiety for many parents. "We just tend to fight against" treating a disorder whose diagnosis is based on behavioral symptoms, Insel says. 
He emphasizes that behavioral interventions should be tried first in those with moderate symptoms but says that medication can be remarkably helpful for children with the disorder. Deferring treatment for children who need help can have serious consequences, he notes; self-esteem begins to suffer because the children are constantly being corrected for not sitting still or paying attention. "The cost of not doing something about it becomes more severe," Insel says. Even if your child is identified as having ADHD, it remains an open question whether he will outgrow the diagnosis. A 2013 study in the journal Pediatrics found that just 30 percent of people who had received such diagnoses as children still had symptoms as adults. But other research has shown that the number is as high as 65 percent. In our case, we plan to observe our son closely and stay in touch with the teacher, but we don't yet have major concerns. He's happy in school and progressing well. But as any parent surely understands, it can be nerve-racking to wait and see, especially when a child's well-being is at stake.
<urn:uuid:8d84f222-d783-41d1-92cd-07af0ad67efb>
CC-MAIN-2016-26
https://www.dailyherald.com/article/20131111/entlife/711119999/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403826.29/warc/CC-MAIN-20160624155003-00080-ip-10-164-35-72.ec2.internal.warc.gz
en
0.971837
1,579
2.578125
3
Teaching Love and Compassion (TLC)

What is TLC? Teaching Love and Compassion (TLC) is a six-week, intensive after-school program run in collaboration with Light House Community Charter school. TLC is a violence prevention and intervention program; it aims to teach empathy and respect for all living beings through a combination of dog training and humane education. During the program, 14 middle-school-age students spend about half of their program time learning about important topics, including spaying and neutering, animal welfare, grooming, anger management, public speaking, conflict resolution, and the web of life. With the guidance of the TLC staff, students spend the other half of their program time training EBSPCA shelter dogs. View a KGO ABC 7 piece about our TLC program!

And who, you may ask, are the TLC dogs? Over the six-week program period, students are paired up and then matched with one of seven individually selected dogs. The students train their TLC dogs using only positive reinforcement. The diverse group of TLC dogs learn basic commands from their student trainers such as sit, watch, touch, down, and dance, and even some fun agility tricks such as jumping through a hula hoop! Students learn responsibility for their actions by training their own dogs, and the dogs become more viable candidates for adoption thanks to their newly learned manners!

If you are interested in adopting a TLC dog, please speak to our front desk staff to find out when TLC dogs have completed their commitment to the program and are ready to go to their forever home. Each dog is usually ready to go home once the TLC program session has concluded; however, it is also possible to put TLC dogs on hold for potential adopters.
<urn:uuid:5f1bec6b-c40c-430d-be2d-4e3bb1bec9b0>
CC-MAIN-2016-26
https://www.eastbayspca.org/page.aspx?pid=965
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403826.29/warc/CC-MAIN-20160624155003-00080-ip-10-164-35-72.ec2.internal.warc.gz
en
0.96957
367
2.53125
3
No fermentable sugars, then what is the 30 +/- 5 %? You are familiar with long-chain unfermentable sugars, right? From "How to Brew." Chapter 20 - Experiment! 20.1 Increasing the Body Very often brewers say that they like a beer but wish it had more body. What exactly is "more body"? Is it a physically heavier, more dense beer? More flavor? More viscosity? In most cases it means a higher final gravity (FG), but not at the expense of incomplete fermentation. On a basic level, adding unfermentables is the only way to increase the FG and increase the body/weight/mouthfeel of the beer. There are two types of unfermentables that can be added: unfermentable sugars and proteins. Unfermentable sugars are highly caramelized sugars, like those in caramel malts, and long chain sugars referred to as dextrins. Dextrin malt and malto-dextrin powder have been previously mentioned in the ingredients chapters. Dextrins are tasteless carbohydrates that hang around, adding some weight and viscosity to the beer. The effect is fairly limited and some brewers suspect that dextrins are a leading cause of "beer farts," when these otherwise unfermentable carbohydrates are finally broken down in the intestines. Dark caramel and roasted malts like Crystal 80, Crystal 120, Special B, Chocolate Malt, and Roast Barley have a high proportion of unfermentable sugars due to the high degree of caramelization (or charring). The total soluble extract (percent by weight) of these malts is close to that of base malt, but just because it's soluble does not mean it is fermentable. These sugars are only partially fermentable and contribute both a residual sweetness and higher FG to the finished beer. These types of sugars do not share dextrin's digestive problems and the added flavor and color make for a more interesting beer. The contribution of unfermentable sugars from enzymatic and caramel malts can be increased by mashing at a higher temperature (i.e. 158°F) where the beta amylase enzyme is deactivated. Without this enzyme, the alpha amylase can only produce large sugars (including dextrins) from the starches and the wort is not as fermentable. The result is a higher final gravity and more body. Proteins are also unfermentable and are the main contributor to the mouthfeel of a beer. Compare an oatmeal stout to a regular stout and you will immediately notice the difference. There is a special term for these mouthfeel-enhancing proteins - "medium-sized proteins." During the protein rest, peptidase breaks large proteins into medium proteins and protease breaks medium proteins into small proteins. In a standard well-modified malt, a majority of the large proteins have already been broken down into medium and small proteins. A protein rest is not necessary for further protein breakdown, and in fact, would degrade the beer's mouthfeel. A protein rest to produce medium-sized proteins for increased body is only practical when brewing with moderately-modified malts, wheat, or oatmeal, which are loaded with large proteins. To add more body to an extract-based beer, add more caramel malt or some malto-dextrin powder. You can also increase the total amount of fermentables in the recipe which will raise both the OG and FG, and give you a corresponding increase in alcohol too. Grain brewers can add dextrin malt, caramel malt, unmalted barley or oatmeal in addition to using the methods above. Grain brewing lends more flexibility in fine tuning the wort than extract brewing.
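A quick way to see what that higher final gravity means in numbers: apparent attenuation ties OG to FG, and a hotter mash lowers attenuation. This little Python sketch is my own illustration, not from the book, and the attenuation figures are rough assumptions:

```python
# Gravity "points" are (SG - 1) * 1000; unfermented points remain in the FG.
def estimated_fg(og, apparent_attenuation):
    points = (og - 1.0) * 1000.0
    return 1.0 + points * (1.0 - apparent_attenuation) / 1000.0

# A wort mashed hot (~158F) might only attenuate ~65%; mashed low, ~80%.
print(estimated_fg(1.050, 0.65))  # ~1.0175 -> fuller body
print(estimated_fg(1.050, 0.80))  # ~1.010  -> drier, thinner beer
```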
<urn:uuid:70a4daa8-d28b-45d7-9c86-0e8fefed5682>
CC-MAIN-2016-26
https://www.homebrewersassociation.org/forum/index.php?topic=17471.msg221771
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403826.29/warc/CC-MAIN-20160624155003-00080-ip-10-164-35-72.ec2.internal.warc.gz
en
0.925761
775
2.625
3
Title: Cyclone climatology of the Great Lakes
Author(s): Angel, James R.
Doctoral Committee Chair(s): Isard, Scott A.
Department / Program: Geography
Degree Granting Institution: University of Illinois at Urbana-Champaign
Subject(s): Physics, Atmospheric Science
Abstract: Cyclones are an important feature of the Great Lakes region with significant impacts on ice cover, thermal structure, water quality, aquatic life, shipping, and shoreline property. For this research, a historical cyclone dataset was constructed for the period 1900 to 1990. This dataset was used to address the following five research topics: (a) the trends and fluctuations in the characteristics of cyclones, (b) the balance between cyclone frequency and intensity, (c) the sensitivity of cyclone characteristics to climate variables, particularly temperature and precipitation regimes, (d) the preferred tracks of cyclones passing over the region and changes over time, and (e) the influence of the Great Lakes on passing cyclones. The historical dataset was constructed from those cyclones with a central pressure ≤ 992 hPa when they were in the Great Lakes region. An extensive search of the climatological literature suggests that this is the first study to document a statistically significant increase in the frequency of strong cyclones over the Great Lakes during the 20th century in both November and December. This is a time of year when Great Lakes cyclones cause important economic damage (40% of the NOAA Storm Damage reports associated with cyclones occurred in those two months). Studies of the impacts of future climate change in the region generally assume that the cyclone frequency will not change over time. The results of this research suggest that this assumption is invalid. The increase in the frequency of strong cyclones in the Great Lakes region for November and December is believed to be the result of a general increase in intensity of all cyclones, which yielded more cyclones in the strong cyclone category. An analysis of changes in cyclone characteristics, temperature, and precipitation yields a positive relationship between cyclone frequency and precipitation. This relationship should be useful in climate change studies and for applications with the NWS long-range forecasts. This research also provides climatological evidence (as opposed to case studies or models) of the important influence of the Great Lakes on passing cyclones. During the unstable season, cyclones accelerate into the region, slow and deepen over the lakes, and then return to their prior speed and rate of deepening after they exit the region. The influence of the Great Lakes on passing cyclones is important not only during the unstable season (October–February), but also in late spring and early summer.
Rights Information: Copyright 1996 Angel, James Randal
Date Available in IDEALS: 2011-05-07
Identifier in Online Catalog: AAI9702447

This item appears in the following Collection(s):
- Graduate Dissertations and Theses at Illinois
- Graduate Theses and Dissertations at Illinois
- Dissertations and Theses - Geography and Geographic Information Science
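The trend claim at the heart of the abstract is the kind of thing that can be sanity-checked with a simple regression of annual counts on year. The sketch below is illustrative only: it is not the dissertation's method, and the counts are random placeholders standing in for the real ≤ 992 hPa dataset.

```python
import numpy as np
from scipy.stats import linregress

years = np.arange(1900, 1991)              # the study period
counts = np.random.poisson(3, years.size)  # placeholder annual strong-cyclone counts
trend = linregress(years, counts)
print(f"slope = {trend.slope:+.4f} cyclones/yr, p = {trend.pvalue:.3f}")
# A positive slope with a small p-value (say < 0.05) would support a
# statistically significant increase like the one reported for Nov/Dec.
```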
<urn:uuid:3676039a-f242-4848-87eb-71d5f64c71c9>
CC-MAIN-2016-26
https://www.ideals.illinois.edu/handle/2142/20820
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403826.29/warc/CC-MAIN-20160624155003-00080-ip-10-164-35-72.ec2.internal.warc.gz
en
0.882588
669
2.859375
3
Principle to Emphasize

“The use of audiovisual resources can be made more inspiring if students are invited to participate in the learning experience” (Teaching the Gospel: A Handbook, 40).

Suggested Training Activities (50 minutes)

Write on the board the headings “Purposes” and “Techniques.” Invite teachers to carefully read the section entitled “Audiovisual Presentations” (handbook, 40). Ask them to look for the purposes of using audiovisual resources and what techniques make these resources even more effective. List their findings under the appropriate headings on the board.

Have teachers scan 1 Nephi 11 and note how the Spirit of the Lord taught Nephi. Ask: As the Spirit of the Lord taught Nephi, how did He demonstrate some of the purposes and techniques noted on the board? How might Nephi’s experience apply to your use of audiovisual presentations in the classroom?

Invite teachers to prepare to watch a video presentation by reading 2 Kings 5:1–14. Write on the board little maid, king of Israel, servant of Naaman, and Naaman. Invite teachers to watch the video looking for the contrasting levels of faith of the characters listed on the board. Demonstrate the appropriate use of media by showing presentation 33, “Naaman and Elisha” (14:25). In this presentation, Naaman, the Syrian, comes to Elisha to be healed of leprosy (see 2 Kings 5).

Pause the presentation after the little maid tells Naaman’s wife about the prophet Elisha. Ask teachers: How do you think the little maid might have developed such strong faith? What influence can faithful youth today have on others through their simple testimonies?

Pause the presentation after the scene with the king of Israel. Ask teachers: How much faith did the king of Israel demonstrate?

Pause the presentation again after Naaman’s servant talks to Naaman about bathing in the River Jordan. Ask teachers: How did Naaman’s servant demonstrate his faith in God? How do you think Naaman felt about Elisha at this point?

At the end of the video, ask teachers: How do you think Naaman felt now about Elisha and the Lord? What made the difference?

As you conclude the discussion, review with teachers the techniques you taught:
- Writing on the board what students should look for as they watch or listen
- Pausing during the presentation
- Inviting students to look for how the message of the story applies to their lives

Invite teachers to consider how they would apply the techniques from the previous training activity if they were to use the presentation you are about to show. Distribute copies of handout 39, and invite the teachers to write their responses on the handout as they view the presentation. Show a brief audiovisual presentation from the seminary material or another Church resource. After they have completed the handout, invite teachers to share their responses with the in-service group.

Invite teachers to carefully read the section entitled “Cautions” (handbook, 40–41) and underline the four questions that teachers should ask themselves when using visual and audio resources. Ask:
- What four questions should teachers ask themselves when using visual and audio resources? (see handbook, 40)
- How are visual and audio resources sometimes misused by teachers? (see handbook, 40)
- Why is it inappropriate to use an audiovisual product that may carry a good message but has undesirable features? (see handbook, 41)
- What conditions must be met when using commercially produced videos? (see handbook, 41)
- What conditions must be met when using radio and television programs taped off the air? (see handbook, 41)
- What impact do copyright violations have on the presence of the Spirit?
- What are the restrictions on the duplication of Church-produced materials? (see handbook, 41)
- What are the laws governing the duplication of music? (see handbook, 41)
- Why is it important that students and teachers be cautioned about copyright laws?

Invite teachers to use audiovisual resources more effectively in their upcoming lessons by carefully considering their purposes for using these resources, using techniques that involve the students, and heeding cautions about the proper use of such materials. Have teachers share their experience of applying what they have learned (with a colleague or in the next in-service meeting).
<urn:uuid:185f8b0d-b941-4ce3-9e13-eca81cb86008>
CC-MAIN-2016-26
https://www.lds.org/manual/teaching-the-gospel-a-ces-resource-for-teaching-improvement/36-using-audiovisual-presentations?lang=eng
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403826.29/warc/CC-MAIN-20160624155003-00080-ip-10-164-35-72.ec2.internal.warc.gz
en
0.915709
947
3.671875
4
Parasite's ill effects on rodents persist long after disease clears

Mice may permanently shed a fear of felines when infected with a parasite. The effects linger long after the parasites disappear, a study shows. The protozoan parasite Toxoplasma gondii can infect most mammals, including humans (SN: 1/26/13, p. 24). But the parasite can reproduce only in the feline gut, so cats need to eat animals infected with T. gondii to keep the parasite generations going. Perhaps increasing the likelihood that it will wind up in the belly of a cat, the parasite makes infected rodents lose their innate aversion to cat urine, researchers discovered in 2000. The parasite strain used in that early work was so potent that it killed the mice quickly, so researchers had no way of knowing whether the rodents’ loss of cat aversion could persist.
<urn:uuid:6d718e8f-68b1-4a5a-9f7c-1cd37b080029>
CC-MAIN-2016-26
https://www.sciencenews.org/article/mice-lose-cat-fear-good-after-infection?mode=topic&context=87
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403826.29/warc/CC-MAIN-20160624155003-00080-ip-10-164-35-72.ec2.internal.warc.gz
en
0.899327
185
3.15625
3
Manipulating Plants' Circadian Clock May Make All-Season Crops Possible
http://www.sciencedaily.com/releases/2011/09/110901134643.htm

ScienceDaily (Sep. 3, 2011) — Yale University researchers have identified a key genetic gear that keeps the circadian clock of plants ticking, a finding that could have broad implications for global agriculture. The research appears in the Sept. 2 issue of the journal Molecular Cell.

"Farmers are limited by the seasons, but by understanding the circadian rhythm of plants, which controls basic functions such as photosynthesis and flowering, we might be able to engineer plants that can grow in different seasons and places than is currently possible," said Xing Wang Deng, the Daniel C. Eaton Professor of Molecular, Cellular, and Developmental Biology at Yale and senior author of the paper.

The circadian clock is the internal timekeeper found in almost all organisms that helps synchronize biological processes with day and night. In plants, this clock is crucial for adjusting growth both to the time of day and to the seasons. The clock operates through the cooperative relationship between "morning" genes and "evening" genes. Proteins encoded by the morning genes suppress evening genes at daybreak, but by nightfall levels of these proteins drop and evening genes are activated. Intriguingly, these evening genes are necessary to turn on morning genes, completing the 24-hour cycle.

The Yale researchers solved one of the last remaining mysteries in this process when they identified the gene DET1 as crucial in helping to suppress expression of the evening genes in the circadian cycle. "Plants that make less DET1 have a faster clock and they take less time to flower," said lead author On Sun Lau, a former Yale graduate student who is now at Stanford University. "Knowing the components of the plant's circadian clock and their roles would assist in the selection or generation of valuable traits in crop and ornamental plants."

Other authors from Yale are Xi Huang, Jae-Hoon Lee, Gang Li and Jean-Benoit Charron, now of McGill University. The research was funded by the National Institutes of Health and the National Science Foundation. Lau was supported in part by the Croucher Foundation.
<urn:uuid:6b5761b2-4c30-4afd-bffc-94fd7ce168e3>
CC-MAIN-2016-26
https://www.thcfarmer.com/community/threads/manipulating-plants-circadian-clock-may-make-all-season-crops-possible.39757/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403826.29/warc/CC-MAIN-20160624155003-00080-ip-10-164-35-72.ec2.internal.warc.gz
en
0.94886
464
3.4375
3
JOE (ca. 1813–?). Joe, slave of William B. Travis and one of the few Texan survivors of the battle of the Alamo, was born about 1813. He was listed as a resident of Harrisburg in May 1833. Joe claimed that when Gen. Antonio López de Santa Anna's troops stormed the Alamo on March 6, 1836, he armed himself and followed Travis from his quarters into the battle, fired his gun, then retreated into a building from which he fired several more times. After the battle, Mexican troops searched the buildings within the Alamo and called for any blacks to reveal themselves. Joe did so and was struck by a pistol shot and bayonet thrust before a Mexican captain intervened. Sam, James Bowie's slave, was also reported to have survived the battle, but no further record of him is known to exist. Joe was taken into Bexar, where he was detained. He observed a grand review of the Mexican army before being interrogated by Santa Anna about Texas and its army. Accounts of his departure from the Alamo differ, but he later joined Susanna W. Dickinson and her escort, Ben, Santa Anna's black cook, on their way to Gen. Sam Houston's camp at Gonzales. On March 20 Joe was brought before the Texas Cabinet at Groce's Retreat and questioned about events at the Alamo. William F. Gray reported that Joe impressed those present with the modesty, candor, and clarity of his account. After his report to the Texas Cabinet Joe was returned to Travis's estate near Columbia, where he remained until April 21, 1837, the first anniversary of the battle of San Jacinto. On that day, accompanied by an unidentified Mexican man and taking two fully equipped horses with him, he escaped. A notice offering fifty dollars for his return was published by the executor of Travis's estate in the Telegraph and Texas Register on May 26, 1837. Presumably Joe's escape was successful, for the notice ran three months before it was discontinued on August 26, 1837. Joe was last reported in Austin in 1875.

William Fairfax Gray, From Virginia to Texas, 1835 (Houston: Fletcher Young, 1909, 1965). Paul D. Lack, "Slavery and the Texas Revolution," Southwestern Historical Quarterly 89 (July 1985). Phil Rosenthal and Bill Groneman, Roll Call at the Alamo (Fort Collins, Colorado: Old Army, 1985). Telegraph and Texas Register, March 24, 1836, May 26, August 26, 1837. Amelia W. Williams, A Critical Study of the Siege of the Alamo and of the Personnel of Its Defenders (Ph.D. dissertation, University of Texas, 1931; rpt., Southwestern Historical Quarterly 36–37 [April 1933–April 1934]).

The following, adapted from the Chicago Manual of Style, 15th edition, is the preferred citation for this article: Handbook of Texas Online, Nolan Thompson, "Joe," accessed July 01, 2016, http://www.tshaonline.org/handbook/online/articles/fjo01. Uploaded on June 15, 2010. Modified on June 30, 2016.
Published by the Texas State Historical Association.
<urn:uuid:038cc5fe-df85-4c7c-9d9e-a487852eeb16>
CC-MAIN-2016-26
https://www.tshaonline.org/handbook/online/articles/fjo01
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403826.29/warc/CC-MAIN-20160624155003-00080-ip-10-164-35-72.ec2.internal.warc.gz
en
0.971172
809
2.90625
3
WALLER CREEK (Cooke County). Waller Creek rises seven miles northwest of Muenster in western Cooke County (at 33°42' N, 97°28' W) and runs south for five miles, through a dam, to its mouth on the Elm Fork of the Trinity River (at 33°38' N, 97°28' W). The surrounding low rolling to flat terrain is surfaced by sandy to clay loams that support scrub brush and some hardwood trees near the banks of the creek. For most of Cooke County's history, the Waller Creek area has been used as range and crop land.

The following, adapted from the Chicago Manual of Style, 15th edition, is the preferred citation for this article: Handbook of Texas Online, "Waller Creek (Cooke County)," accessed July 01, 2016. Uploaded on June 15, 2010. Published by the Texas State Historical Association.
<urn:uuid:0ae79b05-0ec4-49a6-8b6e-f4d980def785>
CC-MAIN-2016-26
https://www.tshaonline.org/handbook/online/articles/rbw10
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403826.29/warc/CC-MAIN-20160624155003-00080-ip-10-164-35-72.ec2.internal.warc.gz
en
0.92257
347
3.046875
3
Elizabeth A. Scarbrough

Introduction to ethics through in-depth study of one or more selected topics (e.g., limits of moral community, animal rights, moral education, and freedom). Topics vary.

Topics in Ethics: The Intersection of Ethics and Aesthetics. The focus of this course is on the interplay and intersection of ethics and aesthetics. This is a 200-level course; although no previous instruction in ethics or aesthetics is required, familiarity with normative ethics and/or aesthetics is helpful. At least one previous course in philosophy is recommended. The course will begin with an introduction to normative ethics, with a focus on how to apply these ethical constructs to issues in aesthetics. We will briefly discuss Utilitarianism, Kantianism, Virtue Ethics, and Pluralism. After our unit on normative ethics, we will turn to the following issues in aesthetics: issues in public art (Should tax dollars be spent on public art? Can we destroy works of public art? What should the aim of public art be?), ethical issues in kitsch & sentimentality (Do sentimental or kitschy artworks engage us in morally bad forms of self-deception?), fakes & forgeries (Is the fact that an artwork is a forgery merely a moral flaw in its creation, or is it also an aesthetic flaw?), and the moral criticism of art (Are artworks immune from moral criticism? Can artworks that depict morally bad content have positive aesthetic value?). There will be several short writing assignments, a midterm, and a final.
<urn:uuid:95a4083a-9960-4598-bd5a-1edb0d0d71b9>
CC-MAIN-2016-26
https://www.washington.edu/students/icd/S/phil/241lizscar.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403826.29/warc/CC-MAIN-20160624155003-00080-ip-10-164-35-72.ec2.internal.warc.gz
en
0.915281
326
3.0625
3
This is a picture of Nereid. Image from: NASA

Nereid was discovered by G. Kuiper in 1949. Of Neptune's 8 moons, it is the farthest from the planet, with an orbital distance of 5,513,400 km. Nereid is one of the small moons: its diameter of 340 km (226 mi) is only about the distance from Los Angeles to San Francisco. Because the moon is so small, its composition is unknown. Shown in the picture above is Nereid. This is the best picture we have of this small moon!
<urn:uuid:d612a102-213e-41ad-8280-705841c865f3>
CC-MAIN-2016-26
https://www.windows2universe.org/neptune/moons/nereid.html&edu=high
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403826.29/warc/CC-MAIN-20160624155003-00080-ip-10-164-35-72.ec2.internal.warc.gz
en
0.928101
564
3.734375
4
This graphic shows the orbits of Mercury and Venus within the orbit of the Earth, and the maximum angular distance between these planets and the Sun as viewed from the Earth. Courtesy of NASA.

The Innermost Planets as Bright Stars

Mercury and Venus, the innermost planets in the solar system, always appear only a small distance away from the Sun in the sky. Mercury is so small and so close to the Sun (always within 28 degrees) that it is difficult to see from Earth, since it is usually lost in the Sun's glare. The innermost planet can be seen with the naked eye only at twilight, very low in the sky, near the horizon. From Earth, Venus can appear up to 47 degrees away from the Sun. During these times, when it rises or sets a few hours before or after the Sun, it can be seen just before sunrise or just after sunset as a bright morning or evening star. At these times, Venus is up to 15 times brighter than the brightest star, Sirius, and can even be seen in daylight.
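Those maximum angles follow from simple geometry: treating the orbits as circles, the greatest elongation of an inner planet is the arcsine of the ratio of its orbital radius to Earth's. A short sketch (the circular-orbit assumption is an approximation; Mercury's eccentric orbit lets its true maximum reach about 28 degrees):

```python
import math

# Mean orbital radii in astronomical units (Earth = 1.0)
orbits = {"Mercury": 0.387, "Venus": 0.723}

for planet, a in orbits.items():
    max_elongation = math.degrees(math.asin(a / 1.0))
    print(f"{planet}: maximum elongation ~ {max_elongation:.0f} degrees")
# Mercury: ~23 degrees (eccentricity stretches this to ~28 at most)
# Venus:   ~46 degrees, in line with the 47 degrees quoted above
```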
<urn:uuid:2106ba18-c9f7-4e83-947b-0c0bbe692784>
CC-MAIN-2016-26
https://www.windows2universe.org/venus/morning_star.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403826.29/warc/CC-MAIN-20160624155003-00080-ip-10-164-35-72.ec2.internal.warc.gz
en
0.910406
575
3.890625
4
This is an argument for specificity in using the word “right” in the healthcare debate. Two statements commonly seen are: “everyone has the right to health care” and “health care is a human right.” When statements like these are used the usual implication is that health care is not currently a right in the U.S. However, a right to health care does exist in various forms. For instance, there exists the right to purchase health care or health insurance. There exists the right to get health care if you are poor. Granted, assistance programs are having problems now, and you might have to lose your life savings and go bankrupt to qualify. There exists the right to walk into a hospital ER and demand health care, but you'll get a big bill. What doesn't exist is the right to health care which will not put anyone into financial distress. The seminal documents of the U.S. reference rights endowed by a Creator, but the right to be given health care is not evident in these documents, nor in Western religious teaching, unless it can be considered a part of charity, which most religions consider a duty. The 1948 United Nations Universal Declaration of Human Rights, Article 25, says everyone has a right to a standard of living adequate for health and well being, and it specifically mentions medical care (www.un.org/en/documents/udhr/index.shtml). Unfortunately, since medical care depends on working people and is not free, medical care will be competing with other rights, such as the right to an education and property rights. There are also questions of fairness. For example, should smokers have the right to take your money for lung cancer treatment? Do sexually promiscuous people have a right to your resources for HIV treatment? We are all doing something unhealthy. Can we demand others pay for the consequences of our actions? It is probable that the word “right” is often used in health care arguments because the concept of a right is powerful, emotional, seems simple, and implies that there can be no argument against it. But as noted above, the right to health care can mean many different things. Providing health care rights requires infringement upon other rights, moral judgements, and a complex allocation of resources. Simple blanket statements like: “everyone has a right to health care” are meaningless. Arguing over such a statement is useless. Appropriate arguments are more specific: exactly what will constitute future health care rights, and how will adequate resources be developed to provide for them. Mecikalski MB. Right to health care. What does that mean? J Clin Sleep Med 2011;7(5):437. This was not an industry supported study. The author has indicated no financial conflicts of interest.
<urn:uuid:6235376b-c791-48d5-b0cc-99acb468de4f>
CC-MAIN-2016-26
http://aasmnet.org/JCSM/ViewAbstract.aspx?pid=28285
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395039.24/warc/CC-MAIN-20160624154955-00028-ip-10-164-35-72.ec2.internal.warc.gz
en
0.956681
572
3.015625
3
Software reads online content aloud and printers generate Braille text, but there hasn't been a fast and easy way to create recognizable images for the blind. Now, computer scientists in Arizona are generating social networking profile pictures the blind can "see." "The face image -- that's very important for people in their social life, emotional life," said Baoxin Li, an associate professor of computer science at Arizona State University who is leading the software work. Li said the idea was inspired by a blind ASU researcher who wished she could access more graphical information. Making all digital graphics accessible to the blind would have been an overwhelming challenge, so Li and his colleagues focused on profile pictures. They had to find the right balance of information so the person would be recognizable. "We convert the photo in such a way so the major facial landmarks are nicely kept -- that's very important because we can't render all the features into tactile form," Li said. "That would be too disorienting." Instead, an algorithm pares down crucial facial information without oversimplifying it. Their software allows a blind user to take a photo of a face, put it into a computer application, and automatically generate a new printable image. The image comes out of a special tactile printer with raised lines along the facial features. "At the moment it's within one minute or so, but we can further optimize the software to do it faster," Li said. Tactile printers are usually found at centers that assist the blind, and institutions such as the Center for Cognitive Ubiquitous Computing at ASU. However, Li said that even the least expensive ones cost several thousand dollars. In the future, he expects the software will work with paperless tactile displays that are in development. Their automated approach was described last year in the journal IEEE Transactions on Multimedia. This week, Li demonstrated the software at the International Conference on Intelligent User Interfaces in Palo Alto, Calif. Other technology exists for creating tactile images, Li said, but it's designed to help sighted professionals with the time-consuming process of making intricate images for the blind. The ASU software is stable to the point where the scientists are talking with software producers about bringing it to market. Beyond profile images, the scientists would like to create software that can generate tactile images from online mapping sites. John Gardner is a former Oregon State University physics professor who lost his vision in 1988. Frustrated by a lack of access to information, he founded the assistive technology company ViewPlus Technologies in Corvallis, Ore. ViewPlus developed the tactile printer that Li uses, he said. "But we never had the software to make a nose feel like a nose and an eye feel like an eye," Gardner said. "It's a tour de force that he can analyze a face and make it feel like a face." He added that he'd like Li's software to render the Mona Lisa. At the demonstration this week in California, attendees were invited to have their photos taken and receive tactile versions. Some sighted visitors called the printouts works of art, Li said. "They asked me to put my signature on their copies."
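Li's software isn't public, but the basic move the article describes (keep the strong brightness boundaries that outline facial features, and raise those as lines) can be illustrated with a toy gradient threshold. Everything below (the synthetic image, the threshold value, the function names) is invented for illustration; it is a sketch of the general idea, not the ASU software.

-- Toy sketch: reduce a grayscale image (a grid of brightness values in
-- [0,1]) to a binary map of "raised" pixels by thresholding the local
-- brightness gradient.
type Image = [[Double]]

-- Brightness at (row, col), treating out-of-range pixels as black.
pixel :: Image -> Int -> Int -> Double
pixel img r c
  | r < 0 || c < 0 || r >= length img || c >= length (head img) = 0
  | otherwise = img !! r !! c

-- Crude gradient magnitude from the right-hand and lower neighbours.
gradient :: Image -> Int -> Int -> Double
gradient img r c = abs (p - pixel img r (c+1)) + abs (p - pixel img (r+1) c)
  where p = pixel img r c

-- Keep a raised dot wherever the gradient exceeds the threshold.
toTactile :: Double -> Image -> [[Bool]]
toTactile threshold img =
  [ [ gradient img r c > threshold | c <- [0 .. length (head img) - 1] ]
  | r <- [0 .. length img - 1] ]

-- Render raised pixels as '#' so the result can be eyeballed in a terminal.
render :: [[Bool]] -> String
render = unlines . map (map (\raised -> if raised then '#' else '.'))

-- A synthetic "face": a bright oval on a dark background.
face :: Image
face = [ [ if (x-4)^2 + 2*(y-4)^2 < 12 then 1 else 0 | x <- [0..8] ]
       | y <- [0..8] ]

main :: IO ()
main = putStr (render (toTactile 0.5 face))

Run on the synthetic image, this prints only the outline of the oval: the flat interior and background disappear, which is the same information reduction the tactile printer needs.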
<urn:uuid:c99debdf-c0e5-4f85-9155-148ac4927ec6>
CC-MAIN-2016-26
http://abcnews.go.com/Technology/printed-photos-blind/story?id=12951372
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395039.24/warc/CC-MAIN-20160624154955-00028-ip-10-164-35-72.ec2.internal.warc.gz
en
0.955107
649
2.875
3
Forty years ago Monday, Neil Armstrong made his "giant leap for mankind." Since that triumphant moment, astronauts in the U.S. space program have gone no farther. The first footsteps on the moon — made by Armstrong on July 20, 1969, on the mission known as Apollo 11 — came 3½ years before the last ones. Since then, astronauts have been stuck close to the Earth, mostly circling a few hundred miles overhead in a spacecraft that's little more than a glorified cargo truck. So now what? That question preoccupies NASA and worries the Obama administration. The president said in March that NASA is beset by "a sense of drift." Even some of the men who once walked on the moon are divided on how to proceed. Options could include going back to the moon, landing on an asteroid, shooting for Mars or even ending human exploration of space altogether. Former president George W. Bush tried to give NASA a sense of purpose, ordering the agency in 2004 to retire the space shuttle and return humans to the moon. The public yawned. Bush never publicly mentioned the plan again and didn't add much to NASA's budget for it. NASA is still trying to carry out Bush's goals, but the effort is in doubt. At the White House's request, a panel of independent space experts is giving NASA's human spaceflight program a top-to-bottom review. The panel, which will make recommendations at the end of the summer, could tell Obama that NASA is on track. Or, it could send the agency back to the drawing board. No matter what the panel decides, the federal deficit and competition from programs such as health care mean that NASA is unlikely to get enough money to do anything truly ambitious. Already Obama's proposed budget for 2010 shows that the administration plans to slash funding later this decade for the rocket and spacecraft needed to take astronauts back to the moon. If that stands, it's "an absolutely going-out-of-business budget," says former NASA official Scott Pace, now at George Washington University. Many space historians and even NASA veterans agree that the glory days of Apollo — which spawned countless songs, movies and books — can't be recaptured. Gone is the vast budget for building spaceships. Gone is the Cold War with the Soviet Union, which unified the nation and lent urgency to the effort to put an American on the moon. "The Apollo program was such a success because it did have complete support," Aaron Cohen, a top Apollo official, said last month at an MIT symposium on the 40th anniversary of man's first step on the moon. "This may be very difficult to achieve in the near future." America "is a different place" now than during Apollo, says Rep. Bart Gordon, D-Tenn., head of the House Science and Technology Committee. "We were in a space race with the Soviet Union. (Apollo) was about geopolitics, not space exploration." All the same, polls regularly show that Americans have a warm feeling for the human spaceflight program and don't want it to end. That means figuring out what astronauts should do next. Should they forge outward into the solar system, despite the huge cost and a soaring deficit? And if so, where? The decision is not just technical, says David Mindell, who directs MIT's Department of Science, Technology and Society. "It's emotional and it's political, because human spaceflight is primarily a symbolic activity," he says. "If you really are looking strictly (at the) technical, you wouldn't be sending people." Some possible destinations for human space explorers include the following. The moon: Yes, America has been there.
That doesn't mean it's not worth going back, say scientists and an astronaut who's been to the lunar surface. Humans went to the moon six times from 1969 to 1972, spending fewer than 13 days there. Lunar advocates say that's hardly time enough to plumb the moon's mysteries. Sending humans back to the moon could help unlock the secrets of the early solar system, says Jack Burns, a University of Colorado astronomer. The forces that shaped the Earth have not scarred the lunar surface, making the moon a pristine record of how planets formed, he says. Burns scoffs at the idea that because Americans have landed on the moon, there's no reason to go back. "It's like Thomas Jefferson sending Lewis and Clark to the West, and … people saying, 'We're done, we don't need to go there anymore,' " he says. NASA's plans for the moon include not just short, Apollo-style stopovers but eventually a moon base. The agency hopes to send the astronauts back to the moon around 2020. Operating a moon base would allow astronauts to practice living on another planet, NASA's Jeff Hanley says. Crews would need that experience before pressing on to Mars, the long-term goal of most space enthusiasts. "The fastest way to get to Mars is through the moon," says Harrison Schmitt, who in 1972 was one of the last two men on the moon. "We need to learn how to work in deep space again. That's what the moon does for us." It may sound crazy, but preliminary NASA studies indicate it's possible to send humans to visit asteroids, huge chunks of rock and gravel that orbit the sun. Telescopes have spotted at least nine asteroids that astronauts could reach using the spaceship and giant rocket NASA is designing to return humans to the moon, says the space agency's Rob Landis, who headed a study of such missions. Total travel time would be 90 to 180 days, he says. That's much longer than the six-day round trip to the moon but much shorter than the one-year round trip to Mars. Asteroids, unlike the moon, have negligible gravity, so a spaceship could fly to an asteroid and just pull up next to it. Then an astronaut could clamber out and explore. Going to the moon requires not just a spaceship but an expensive lander, one equipped with rockets so it could blast off from the lunar surface. There's a big incentive to learn more about asteroids: They could wipe out humanity. A wallop from even a medium-size asteroid could unleash as much energy as a large nuclear bomb, NASA says. Many scientists blame a collision with a huge asteroid for the extinction of the dinosaurs 65 million years ago. Asteroids also are of interest because they're loaded with minerals that could be useful for space crews headed into the solar system, says Russell "Rusty" Schweickart, who flew on the 1969 Apollo 9 mission that tested the lunar module. "Asteroids are a combination of long-term resource, potential threat and great scientific interest," he says. "In my mind, (that) sells a heck of a lot better to the general public than going back to the moon." Scientists have debated the existence of life on Mars for more than a century. Mars boosters say it's time to settle the arguments by sending humans to the Red Planet. Humans would learn "whether we're … in a living universe, where life is common, or a dead universe," says Robert Zubrin, president of the Mars Society, a group dedicated to Mars exploration. In all the vast universe, the Earth is the only place known to support life. Mars may be the next best place to nurture living things. 
It's not only the most Earth-like planet but also has stores of water, as confirmed by a NASA robot last year. Two of Apollo 11's three crewmembers are Mars partisans. "As celestial bodies go, the moon is not a particularly interesting place, but Mars is," Apollo 11 astronaut Michael Collins said in a statement from NASA. A return to the moon is "not very attractive," says Buzz Aldrin, who was the second man on the moon. "After 50 years, do we want to be known for returning to the moon?" He favors human colonization of Mars. He envisions crews testing their skills and spaceships on Phobos, a Mars moon, before pushing on to the Red Planet. A Mars trip may seem far-fetched, but such a mission was proposed by the first President Bush in 1989. The idea went nowhere then, but the younger President Bush's 2004 plan for NASA also included the goal of sending humans to Mars. NASA took that directive so seriously that its engineers started work on a giant rocket so powerful it could launch spacecraft not just to the moon but also to Mars. Work on the rocket is on hold while the Obama administration sorts out its plans for space. Even some strong supporters of space exploration say the best place to send America's astronauts would be nowhere at all. Opponents of human spaceflight say robots can do the job just as well as astronauts, pose no safety worries and work cheaply. Sending humans into space isn't worth it, they say. "The cost and risks are just too high," says physicist Robert Park of the University of Maryland, who wants NASA's manned program to be phased out. Human space exploration also has run into trouble in Congress. In its spending bill for 2008, lawmakers ordered NASA not to spend any money to study sending humans to Mars. "Manned space travel adds far more cost than is justified in terms of scientific return," says Rep. Barney Frank, D-Mass. Frank says he doesn't want to end the astronaut program but doesn't want to send humans to Mars or the moon. He'd restrict astronauts to tasks robots can't handle, such as the recent upgrade of the Hubble Space Telescope by a seven-astronaut team. Opposition to NASA's astronaut program stretches across the political spectrum. Republican Newt Gingrich, former speaker of the House, wrote in Aviation Week & Space Technology last year that NASA should get out of the business of sending humans to space to make way for private space entrepreneurs. For NASA, the most opposition may be from the people who pay the bills: the public. In a 2005 USA TODAY poll, 58% opposed spending money on a human mission to Mars. Americans may support human spaceflight, but they don't make it a high priority, says historian Roger Launius of the Smithsonian Institution's National Air and Space Museum. Nor do political leaders, he says. "That leaves us in low-Earth orbit for the foreseeable future," Launius says. "I hope it doesn't come to that, but I'm afraid it might."
<urn:uuid:68a2c297-4c2e-4975-8ab9-807bb56ea27e>
CC-MAIN-2016-26
http://abcnews.go.com/Technology/story?id=8105921&page=3
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395039.24/warc/CC-MAIN-20160624154955-00028-ip-10-164-35-72.ec2.internal.warc.gz
en
0.957481
2,152
3.21875
3
Children with low muscle tone and/or oral motor delays (commonly found in post-institutionalized kids) often prefer foods with high flavor and/or crunch, which makes it easier for them to chew and swallow safely. For such children, experiment with a variety of textures and flavors (salty, sour, spicy, and even bitter). Begin by introducing a small amount of a new condiment alone; if it passes muster, move to pairing it with other foods. When buying packaged snacks, choose highly flavored varieties like BBQ or Salt and Vinegar. If your child likes to crunch, carrots, snap peas, whole wheat crackers, pretzels, dehydrated fruits and veggies, banana chips, and sesame sticks are wholesome options (though keep in mind they could be choking hazards for some children). Fry thin slices of turkey bacon, Canadian bacon, or tofu to achieve a chip-like crunch. Transform bread into toast sticks or garlic toast. Munchies in snack containers are easy to take along to all the fun places where little ones like to go.
<urn:uuid:7a09704f-490e-4771-b8af-55f4e9716056>
CC-MAIN-2016-26
http://adoptionnutrition.org/diet-tips-tricks/punching-up-interest-in-food/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395039.24/warc/CC-MAIN-20160624154955-00028-ip-10-164-35-72.ec2.internal.warc.gz
en
0.918222
225
2.78125
3
ALEX Lesson Plans Subject: Arts Education (3), or English Language Arts (3) Title: Who Are You Looking At? Description: This lesson allows students to brainstorm and use kinesthetic learning. The primary focus of the lesson is to teach young drama students how relationships with others and interactions with their environment help when developing a theatrical character. Students will also make connections about how the relationships, actions, and interactions of characters in literature make stories more interesting. Thinkfinity Lesson Plans Subject: Language Arts Title: Writing a Movie: Summarizing and Rereading a Film Script Description: Lights! Camera! Action! In this lesson, students view a scene with no dialogue from E.T., write a script for that scene, and perform a dramatic reading while the scene plays. Thinkfinity Partner: ReadWriteThink Grade Span: 3,4,5
<urn:uuid:b4e7ca4b-278e-4e46-a0a9-ae346f100d8f>
CC-MAIN-2016-26
http://alex.state.al.us/plans2.php?std_id=43781
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395039.24/warc/CC-MAIN-20160624154955-00028-ip-10-164-35-72.ec2.internal.warc.gz
en
0.856232
185
3.484375
3
St. Everilda (also spelled Everildis, Everild, Averil) of Everingham was a Saxon saint in the seventh century who was born into the Wessex nobility. In 635, she was converted to Christianity by St. Birinus, along with King Cynegils of Wessex. While still a young girl, she fled from home to become a nun, and was joined by Sts. Bega and Wuldreda. St. Wilfrid of York made them all nuns at a place called the Bishop’s Farm, later known as Everildisham. This place has been identified with present-day Everingham. St. Everilda eventually became abbess of a monastery where she gathered some eighty women. She fell peacefully asleep in the Lord in 700. By permission of www.orthodoxeurope.org
<urn:uuid:3030d565-8c2e-4520-8c23-81dd3eabb8d3>
CC-MAIN-2016-26
http://antiochian.org/print/18890
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395039.24/warc/CC-MAIN-20160624154955-00028-ip-10-164-35-72.ec2.internal.warc.gz
en
0.985558
179
2.578125
3
The Texas agricultural exemption is not technically an exemption. It is a county appraisal district assessment valuation based on agricultural use. Therefore, it is actually an agricultural appraisal. Landowners may apply for this special appraisal status based on their land's productivity value rather than on what the land would sell for on the open market. Typically, a productivity value is lower than the market value, which results in a lower property tax. Landowners must use their land for agriculture. There is a rollback tax for taking agricultural land out of its productivity use. Property owners may qualify for an agricultural appraisal status if their land meets the following criteria: - The land must be devoted principally to agricultural use. Agricultural use includes producing crops, livestock, poultry, fish, or cover crops. It also can include leaving the land idle for a government program or for normal crop or livestock rotation. Land used for raising certain exotic animals (including exotic birds) to produce human food or other items of commercial value qualifies. - Using land for wildlife management is an agricultural use, if such land was previously qualified open-space land and is actively used for wildlife management. Wildlife management land must be used in at least three of seven specific ways to propagate a breeding population of wild animals for human use. - Agricultural land must be devoted to production at a level of intensity that is common in the local area. - The land must have been devoted to agricultural production for at least five of the past seven years. However, land within the city limits must have been devoted continuously for the preceding five years, unless the land did not receive substantially equal city services as other properties in the city. If land receiving an agricultural appraisal changes to a non-agricultural use, the property owner who changes the use will owe a rollback tax. The rollback tax is due for each of the previous five years in which the land received the lower appraisal. The rollback tax is the difference between the taxes paid on the land's agricultural value and the taxes paid if the land had been taxed on its higher market value. Plus, the owner pays 7 percent interest for each year from the date the taxes would have been due. (A worked example of this calculation appears at the end of this page.) The form used to apply for a Texas agricultural and timber exemption registration number that can be used to claim an exemption may be downloaded below. Download Agricultural and Timber Exemption Registration Number form. PDF Form Reader The county appraisal district forms and documents that may be downloaded from our website are in Adobe Acrobat Reader PDF format. If you do not already have Adobe Acrobat Reader, you will need to download the latest version to view and print the forms. Go to Acrobat Reader download for the latest version.
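To make the rollback arithmetic concrete, suppose the taxes actually paid under the agricultural appraisal were $200 per year and the taxes at market value would have been $2,000 per year. The dollar figures and the simple-interest treatment below are illustrative assumptions only; check the current Tax Code for the governing rule:

$$\text{rollback} = \sum_{i=1}^{5} \left(T_i^{\text{market}} - T_i^{\text{ag}}\right)\left(1 + 0.07\,n_i\right),$$

where $n_i$ is the number of years since year $i$'s taxes were due. With a constant $1,800 annual difference and $n_i = 1, 2, \ldots, 5$, the owner would owe $1{,}800 \times 5 + 1{,}800 \times 0.07 \times (1+2+3+4+5) = 9{,}000 + 1{,}890 = \$10{,}890$.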
<urn:uuid:f7ab165f-0359-4e8f-bb41-c3a818c67c56>
CC-MAIN-2016-26
http://appraisaldistrictguide.com/texas/exemption/agricultural.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395039.24/warc/CC-MAIN-20160624154955-00028-ip-10-164-35-72.ec2.internal.warc.gz
en
0.932788
555
2.609375
3
Some of us remember calling cards as credit-card-like devices carried in wallets to cheaply call friends and loved ones while on business or vacation. For those that lived during the Victorian era, calling cards were also a means of communication. Before the telephone was invented, when someone came to visit, that was known as "paying a call." Thus when you paid a call you either announced yourself with your calling card (a small card with your name or your husband's name written on it) or left your card to let someone know you had been to visit. At one point this custom had grown so elaborate that there was a system of folding the card in certain ways to let someone know why you had called or whether a return call was requested. Today few people carry such cards anymore, but they are still sometimes used in formal correspondence, and many professionals carry cards with their business information on them to make certain potential clients or collaborators know how to contact them. The cards of long ago were sometimes very simple or incredibly ornate. At the latest A&H Family Day, families were encouraged to create their own calling cards. Below are some highlights from the August 18th family outing. Great work everyone! The namesake of this blog post and homage to this summer's Carly Rae Jepsen hit.
<urn:uuid:72778e8f-0d57-4c01-adc2-41c6a91b942e>
CC-MAIN-2016-26
http://artandhistoryeducation.blogspot.com/2012/08/call-me-maybe-calling-cards-at.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395039.24/warc/CC-MAIN-20160624154955-00028-ip-10-164-35-72.ec2.internal.warc.gz
en
0.988244
263
2.5625
3
This is especially common with stone fruits like plums, pluots and apricots, but also with apples and pears. Fruiting trees require different pruning strategies than ornamental trees. Apples and apricots, for instance, bear fruit on the same spurs year after year. Pruning all the little dead-looking stubs off the tree in winter is a sure way to guarantee no fruit the following year. Peaches, lemons, pomegranates, avocados, oranges, figs, persimmons, etc. – they're all pruned differently. I need some suggestions for plants in rather deep shade. I've tried camellias, impatiens, azaleas and a few others, but they haven't done very well. Lauren, Huntington Beach Answer: If you have deep shade you will need to be very selective. A few plants to consider are fatsia, aucuba, mahonia, osmanthus, clivia, ligularia, pachysandra and several ferns, such as giant chain fern, sword fern and holly fern. A woodland effect with some of these blended to contrast their foliage patterns and growth habits can be quite soothing and beautiful. If the area is warm enough in the winter you can add some indoor plants for a splash of color, such as spathiphyllum (peace lily), variegated pothos and various brightly colored crotons.
<urn:uuid:572ed83f-437c-4335-aff0-1b2c4a2555b3>
CC-MAIN-2016-26
http://articles.dailypilot.com/2010-07-02/news/tn-dpt-0703-vanderhoff-20100702_1_fruit-trees-planting-stone-fruits/3
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395039.24/warc/CC-MAIN-20160624154955-00028-ip-10-164-35-72.ec2.internal.warc.gz
en
0.938753
307
2.890625
3
The three main types of brake fluid now available are DOT3, DOT4 and DOT5. DOT3 and DOT4 are glycol-based fluids, and DOT5 is silicon-based. The main difference is that DOT3 and DOT4 absorb water, while DOT5 doesn't. One of the important characteristics of brake fluid is its boiling point. Hydraulic systems rely on an incompressible fluid to transmit force. Liquids are generally incompressible while gases are compressible. If the brake fluid boils (becomes a gas), it will lose most of its ability to transmit force. This may partially or completely disable the brakes. To make matters worse, the only time you are likely to boil your brake fluid is during a period of prolonged braking, such as a drive down a mountain -- certainly not the best time for brake failure!
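The force being transmitted is worth quantifying. By Pascal's principle (a standard textbook relation, with made-up piston areas for illustration), pressure is the same everywhere in a confined incompressible fluid, so pedal force is multiplied by the ratio of piston areas:

$$\frac{F_{\text{master}}}{A_{\text{master}}} = \frac{F_{\text{caliper}}}{A_{\text{caliper}}} \quad\Rightarrow\quad F_{\text{caliper}} = F_{\text{master}} \cdot \frac{A_{\text{caliper}}}{A_{\text{master}}}.$$

With, say, 100 N on a 1 cm² master-cylinder piston driving a 4 cm² caliper piston, the caliper sees 400 N. Vapor bubbles compress instead of transmitting that pressure, which is why boiled fluid means a soft or dead pedal.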
<urn:uuid:55991ee6-188d-4226-b106-7675791a7f64>
CC-MAIN-2016-26
http://auto.howstuffworks.com/auto-parts/brakes/brake-parts/types-of-brake-fluid.htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395039.24/warc/CC-MAIN-20160624154955-00028-ip-10-164-35-72.ec2.internal.warc.gz
en
0.939549
170
2.640625
3
As a public service, I'm reprinting my hops varieties chart for you to consult during the Fest. No real research has been done into the flavor and aroma these hops contribute when wet, or how their constituents (oils and acids) vary when wet ... or really much of anything. But we can at least compare the wet versions to the dry versions, and so here are the details on standard hops. - History. Amarillo was discovered growing on Virgil Gamache Farms as a wild hop cross. - Flavor/Aroma. Described as a “Super Cascade” with pronounced citrus (orange) and tropical fruit character. High in beta acids and a good aroma hop. (alpha acid: 8-11% / beta acid: 6-7%. Total oils 1.5-1.9 ml.) - History. A super-high alpha hop with principally Zeus and Nugget parentage released by SS Steiner in 2006. - Flavor/Aroma. Not much available on this new hop, which is described in generic terms as "fruity" and "floral." (alpha acid: 14-17% / beta acid: 3-5%. Total oils 1.6 - 2.4 ml.) - History. A British bittering hop developed in 1919. Both Brewer's Gold and Bullion are seedlings found wild in Manitoba. It's an English/wild Canadian cross. Many modern high alpha hops were developed from Brewer's Gold. - Flavor/Aroma. It has a resiny, spicy aroma/flavor with hints of black currant and a pungent English character. (alpha acid: 8-10% / beta acid: 3.5-4.5%. Total oils 1.6-1.9 ml.) - History. The first commercial hop from the USDA-ARS breeding program, it was bred in 1956 but not released for cultivation until 1972. It was obtained by crossing an English Fuggle with a male plant, which originated from the Russian variety Serebrianka with a Fuggle male plant. - Flavor/Aroma. The most-used Northwest hop, with a lovely mild citrus and floral quality. (alpha acid: 4.5-7% / beta acid: 4.5-7%. Total oils 0.6-0.9 ml.) - History. Centennial is an aroma-type cultivar, bred in 1974 and released in 1990. The genetic composition is 3/4 Brewers Gold, 3/32 Fuggle, 1/16 East Kent Golding, 1/32 Bavarian and 1/16 unknown. Akin to a high-alpha Cascade. - Flavor/Aroma. One of the classic "C" hops, along with Cascade, Chinook, and Columbus. Character is not as citrusy and fruity as Cascade; considered to have medium intensity. Some even use it for aroma as well as bittering. Clean bitterness with floral notes. (alpha acid: 9.5-11.5% / beta acid: 3.5-4.5%. Total oils 1.5-2.5 ml.) - History. Another of the recent proprietary strains, Citra is a relatively high-alpha dual-use hop that can be used either for bittering or aroma. Purported parentage includes Hallertauer, American Tettnanger, and East Kent Goldings. - Flavor/Aroma. Lots of American citrus character, but tending toward tropical fruit. (alpha acid: 11 - 13% / beta acid: 3.5 - 4.5%. Total oils 2.2-2.8 ml.) - History. Chinook hops were developed in the early 1980s in Washington state by the USDA as a variant of the Goldings Hop. - Flavor/Aroma. An herbal, smoky/earthy character. (alpha acid: 12-14% / beta acid: 3-4%. Total oils 0.7-1.2 ml.) - History. The breeding nursery from which these varieties were bred contained 20-30 female plants from which seeds were gathered. Exact parentage is unknown. - Flavor/Aroma. Hops have a very distinctive skunky/marijuana flavor and a sticky, resinous quality. (alpha acid: 14.5 - 16.5% / beta acid: 4-5%. Total oils 2-3 ml.) - History. Crystal was released in 1993, developed in Corvallis a decade earlier. Crystal is a half-sister of Mt. Hood and Liberty. - Flavor/Aroma. A spicy, sharp, clean flavor. 
It is not complex like Cascade but offers a clear note when used with other hops. (alpha acid: 4-6% / beta acid: 5-6.7%. Total oils 0.8-2.1 ml.) - History. A dwarf hop developed in England derived from a dwarf male and a Whitbread Golding variety. - Flavor/Aroma. Similar to Goldings--spicy and earthy. (alpha acid: 6.5-8.5% / beta acid: 3-4%. Total oils 0.7-1.5 ml.) - History. Traditional German hop from Hallertau region. One of the classic “noble hops” originating in Germany’s most famous hop-growing region. Many cultivars. - Flavor/Aroma. Pleasant herbal character with an excellent bittering and flavoring profile. US Hallertau exhibits a mild, slightly flowery and somewhat spicy traditional German hop aroma. (alpha acid: 3.5-5.5% / beta acid: 3.5-5.5%. Total oils 1.5-2.0 ml.) - History. Another cross of the Hallertauer Mittelfrüher, with characteristics similar to those of Mt. Hood, released in the mid-80s around the time of Mt. Hood's release. - Flavor/Aroma. Mild and spicy, closely akin to Mt. Hood and Hallertauer. (alpha acid: 3.5-4.5% / beta acid: 3-3.5%. Total oils 1.0-1.8 ml.) - History. An Oregon State University product, Mt Hood was developed in 1985. It is a half-sister to Ultra, Liberty and Crystal. Mt. Hood is an aromatic variety with marked similarities to the German Hallertauer and Hersbrucker varieties. - Flavor/Aroma. It has a refined, mild, pleasant and clean, somewhat pungent resiny/spicy aroma and provides clean bittering. A good choice for lagers. (alpha acid: 4-6% / beta acid: 5-7.5%. Total oils 1.0-1.3 ml.) - History. Also an Oregon State University product, Mt Rainiers were bred from a variety of plants, including Galena, Hallertauer, Golden Cluster, Fuggles, and Landhopen (?). It was released commercially in 2008 or '09. - Flavor/Aroma. An interesting hop that contributes a minty or anise note. (alpha acid: 7-9.5% / beta acid: around 7%. Total oils: NA.) - History. Nugget is a bittering-type cultivar, bred in 1970 from the USDA 65009 female plant and USDA 63015M. The lineage of Nugget is 5/8 Brewers Gold, 1/8 Early Green, 1/16 Canterbury Golding, 1/32 Bavarian and 5/32 unknown. - Flavor/Aroma. A sharply bitter hop with a pungent, heavy herbal aroma. (alpha acid: 12-14% / beta acid: 4-6%. Total oils 1.7-2.3 ml.) - History. Bred in Germany in 1978 from English Northern Brewer stock. - Flavor/Aroma. Combines qualities of spicy English hops and rich, floral German hops. Excellent, clean bittering and aroma. (alpha acid: 6-8% / beta acid: 3 - 4%. Total oils 1 - 1.5 ml.) - History. A triploid hop resulting from a cross between 1/3 German Tettnanger, 1/3 Hallertauer Mittelfrüh, and an American hop (possibly Cascade). The first seedless Tettnang-type hop. An OSU hop released in 1998. - Flavor/Aroma. Noble hop character, herbal, floral, but with a little American character. (alpha acid: 5.5-7% / beta acid: 7-8.5%. Total oils 1.3 - 1.7 ml.) - History. A propriety strain bred by Yakima Chief. - Flavor/Aroma. Simcoe is best characterized as having a pronounced pine or woody aroma. The cultivar was bred by Yakima Chief in the USA. It is sometimes described as being “like Cascade, but more bitter - and with pine.” (alpha acid: 12-14% / beta acid: 4-5%. Total oils 2.0-2.5 ml.) - History. Sterling is an aroma cultivar, made in 1990 with parentage of 1/2 Saaz, 1/4 Cascade, 1/8 unknown German aroma hop, 1/16 Brewers Gold, 1/32 Early Green, and 1/32 unknown. - Flavor/Aroma. Similar to Saaz in aroma and flavor. Aromas are fine, rustic, earthy, and spicy. 
Used in this year’s Full Sail LTD 03. (alpha acid: 4.5-5% / beta acid: 5-6%. Total oils 0.6-1.0 ml.) - History. Summit is a recently-released super-high-alpha hop variety. It is a dwarf variety grown on a low trellis system. Because the low trellis is not machine harvestable, these hops are picked by hand in the field. - Flavor/Aroma. Strongly pronounced orange/tangerine aroma and flavor. A favorite hop of Rob Widmer and used in recent releases (W ’07, Drifter). (alpha acid: 17-19% / beta acid: 4% - 6%. Total oils 1.5 - 2.5 ml.) - History. An older US-bred hop with Fuggles parentage. - Flavor/Aroma. A classic earthy/spicy hop with great versatility. (alpha acid: 4-6% / beta acid: 3.5% - 4.5%. Total oils 1 - 1.5 ml.) Information assembled from the following sources: Beer Advocate, Brew 365, Hopsteiner, Yakima Chief, Winning Homebrew, Global Hops
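For anyone wanting to turn these alpha acid percentages into bitterness estimates, the Tinseth formula is one widely used approximation (the recipe numbers below are hypothetical, not from the chart):

$$\mathrm{IBU} = \frac{\alpha \, W \times 1000}{V} \times 1.65 \times 0.000125^{\,(G_b - 1)} \times \frac{1 - e^{-0.04\,t}}{4.15},$$

with $\alpha$ the alpha acid fraction, $W$ the hop weight in grams, $V$ the batch volume in liters, $G_b$ the boil gravity, and $t$ the boil time in minutes. For 30 g of a 6% alpha hop boiled 60 minutes in 20 L of 1.050 wort, the two right-hand factors come to roughly $1.05 \times 0.22 \approx 0.23$, so $\mathrm{IBU} \approx 90 \times 0.23 \approx 21$.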
<urn:uuid:48de5a07-5a8f-4264-ab85-1ebcd13376b2>
CC-MAIN-2016-26
http://beervana.blogspot.ca/2011/09/hop-varieties-cheat-sheet.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395039.24/warc/CC-MAIN-20160624154955-00028-ip-10-164-35-72.ec2.internal.warc.gz
en
0.871535
2,251
2.578125
3
Source: Courtesy of Wikimedia Commons MURDOCH, BEAMISH, writer, lawyer, and politician; b. to Andrew Murdoch and Elizabeth Beamish at Halifax, N.S., 1 Aug. 1800; d. in Lunenburg, N.S., 9 Feb. 1876. Beamish Murdoch was raised and educated by a maiden aunt, Harriet Jane Ott Beamish, after his father, a merchant, became involved in an expensive lawsuit and was jailed as a debtor for seven years. In 1822 Murdoch was admitted to the bar of Nova Scotia and began a legal practice. He also began to contribute articles to the Acadian Recorder, which was owned by Philip J. Holland (d. 1839), and to the Acadian Magazine or Literary Mirror, which began publication in Halifax in 1826. Although his grandfather, the Reverend James Murdoch, had been a missionary in the Antiburgher wing of the Church of Scotland, Beamish Murdoch was raised in the Church of England and belonged to St Paul’s Church. In the disruption of that church in 1824 he joined the faction which included Thomas Chandler Haliburton*, and moved to St George’s Church in Halifax [see J. W. Johnston]. In 1826, Murdoch, aided by his uncle, Thomas Ott Beamish, ran for the House of Assembly for Halifax Township. In spite of opposition from the city merchants, he had sufficient strength in the township to carry the election. Murdoch, who in 1824 and 1825 had been vice-president of the Charitable Irish Society, also received aid from the Irish, and in the assembly he worked to remove civil disabilities from the Roman Catholics. He quickly became an active member, generally following Haliburton’s lead. Thus, in 1827, when a question arose concerning the right of the legislature to control customs revenue, Murdoch supported Haliburton’s proposal that the assembly should petition the crown seeking a compromise measure which would give it some control over future revenue expenditures. Murdoch was irritated over the refusal of the Legislative Council, which included Hibbert N. Binney, the collector of customs, to support the move. He was further annoyed when the upper house failed to support his motion requesting that the crown reverse its decision to collect quitrents. He did not deny the constitutional right of Great Britain to enforce payment but argued that the measure was unjust. A real conflict with the upper house did not develop until 1830 when the two houses clashed over the duty on brandy [see Enos Collins]. Murdoch, who followed the lead of Samuel George William Archibald*, regarded the action of the council as unconstitutional and argued that the council’s stand denied the lower house the authority which it should possess as the representative of the people. In the election of 1830 Murdoch ran against Stephen Wastie DeBlois. He received support from Joseph Howe and the Novascotian despite the fact that Howe had criticized Murdoch during the legislative session of 1830. Prior to 1830 Murdoch had supported public grants to Pictou Academy, but he, like Haliburton, apparently objected to attacks on Bishop John Inglis* by the academy’s president, Thomas McCulloch*. Howe, in turn, felt that Murdoch had made unwarranted attacks on McCulloch. Any chance of Murdoch’s carrying the election in 1830 was ended when he was provoked into complaining about so-called loyalists who fled the United States to escape bad debts and monopolized public offices in the province. After the election, Murdoch withdrew from public affairs until the campaign of 1836 when he ran, unsuccessfully, against a Reformer. 
In the 1840 election he ran against Joseph Howe and William Annand* and again was defeated. During the 1840 campaign, Murdoch complained that the Reformers’ demands for responsible government threatened the tie with England and would upset the balance in the British constitutional system. By the time he wrote his history of the province, however, he had come to regard cabinet government and self-government as being compatible with association in the British empire. During his withdrawal from public affairs in the early 1830s, he prepared his four-volume Epitome of the laws of Nova Scotia, printed by Joseph Howe in 1832–33. This work, which involved a detailed study of the provincial and English law, was modelled after Sir William Blackstone’s Commentaries. Murdoch’s work was well received by the Maritime press and was apparently a significant contribution to both lawyers and law students until the growing body of provincial law made it obsolete and more specialized works made it unnecessary. Throughout his life Murdoch showed a keen interest in education, in charitable institutions, and in moral issues. In January 1825 he was appointed joint secretary of the Poor Man’s Friend Society and in the 1830s he served on the Nova Scotia Philanthropic Society. His interest may have been sparked by the experiences of his father, and in 1826 he wrote a pamphlet in which he supported the introduction of a bankruptcy law. Murdoch was an early supporter of temperance and, by 1842, was president of the Halifax Temperance Society, which had been established in 1832. His concern with public education led him to serve on the Halifax Library Committee in the 1840s and 1850s. He assumed a more significant role in provincial education when he became clerk of the Central Board of Education in April 1841. As clerk, he earned an annual salary of £150 and played an important part in the board’s attempts to establish a uniform school system in the province. He prepared a summary of the ordinances of the city of Halifax in 1851 and, in October 1852, was appointed recorder for the city with an annual salary of £200. As recorder he was required to offer legal advice to the city and to try cases before the mayor’s court. When he retired in 1860 Murdoch began to prepare A history of Nova Scotia, or Acadie, which was published in installments between 1865 and 1867. He originally intended the history to end with the year 1807 but extended it to the year 1827. He even considered going as far as 1867 but his energy, or perhaps the public response to the first three volumes, was not equal to the task. In his work he adopted a severely chronological approach, with extensive quotations from documents and earlier books. There was no critical appraisal of the documents, nor was there any sense of development through time. Murdoch was so convinced of the universal truth of his beliefs that he expected his reader to perceive the real nature of liberty, loyalty, and progress merely by seeing the actual words of the pioneers. He felt no compulsion to expound on his beliefs, but assumed that they were inherent in the British race. Thus, according to his History, as soon as the English arrived in Nova Scotia, the province began to take on an English aspect. The British belief in law, freedom, and industry helped preserve the province from the convulsions of revolution which racked the United States and was gradually adapted to the local environment. 
Thus, he was able to reconcile a faith in a Nova Scotian nationalism with a continued loyalty to Great Britain. Intended as a delineation of the Nova Scotian character, Murdoch’s work stands as a monument to chronology as history. Beamish Murdoch, The charter and ordinances of the city of Halifax in the province of Nova Scotia with the provincial acts concerning the city, collected and revised by authority of the city council (Halifax, 1851); An epitome of the laws of Nova Scotia (4v., Halifax, 1832–33); An essay on the mischievous tendency of imprisoning for debt (2nd ed., Halifax, 1831); A history of Nova Scotia, or Acadie (3v., Halifax, 1865–67); A narrative of the late fires at Miramichi, New Brunswick: with an appendix containing the statements of many of the sufferers, and a variety of interesting occurrences; together with a poem, entitled “The conflagration” (Halifax, 1825). PANS, Beamish Murdoch papers. Duncan Campbell, Nova Scotia in its historical, mercantile, and industrial relations (Montreal, 1873), 268–77. Directory of N.S. MLAs (Fergusson), 262. G. E. Hart, “The Halifax Poor Man’s Friend Society, 1820–27. An early social experiment,” CHR, XXXIV (1953), 109–23. D. C. Harvey, “History and its uses in pre-confederation Nova Scotia,” CHA Report, 1938, 5–16. D. C. Harvey, “Nova Scotia’s Blackstone,” Can. Bar Rev., XI (1933), 339–44. Gene Morison, “The Brandy Election of 1830,” N.S. Hist. Soc. Coll., XXX (1954), 151–83. H. L. Stewart, The Irish in Nova Scotia: annals of the Charitable Irish Society of Halifax (1786–1836) (Kentville, N.S., ), 138–41. Norah Story, “The church and state ‘party’ in Nova Scotia, 1749–1851,” N.S. Hist. Soc. Coll., XXVII (1947), 35–57. K. N. Windsor, “Historical writing in Canada to 1920,” Lit. hist. of Can. (Klinck), 208–50.
<urn:uuid:4e92e8c0-fcd8-482c-b84a-e2c9b0c67bf9>
CC-MAIN-2016-26
http://biographi.ca/en/bio/murdoch_beamish_10E.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395039.24/warc/CC-MAIN-20160624154955-00028-ip-10-164-35-72.ec2.internal.warc.gz
en
0.974272
1,968
2.96875
3
The Materials Science Research Rack is slated to leave Marshall Space Flight Center on its way to Kennedy Space Center in Florida this afternoon. There it will be prepared to fly on the space shuttle to the International Space Station. Marshall scientists hope the new materials research rack will enable enhanced research to take place in the low gravity conditions of the space station. Researchers have used microgravity, more commonly known as the weightlessness of space, to develop new materials and medicines over the past four decades. The science equipment will be used to study a variety of materials - including metals, ceramics, semiconductor crystals, and glass - onboard the orbiting laboratory.
<urn:uuid:fa5f88ba-887f-409e-97da-55ae4d3dfdd5>
CC-MAIN-2016-26
http://blog.al.com/breaking/2008/12/nasa_to_deliver_huntsville_sci.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395039.24/warc/CC-MAIN-20160624154955-00028-ip-10-164-35-72.ec2.internal.warc.gz
en
0.92029
136
3.046875
3
Raw fruits and veggies provide us with necessary calories, carbohydrates, proteins, fats, minerals, vitamins and the life-giving, energizing enzymes. In fact they provide us with the best sources of enzymes. This is because any plant needs enzymes to live, just as we and our dogs need them. Unfortunately these enzymes are lost when food is cooked. Researchers have known for a long time that high intakes of fruits and veggies are associated with a decrease in a number of diseases including coronary heart disease, diabetes, obesity, and several forms of cancer. It was discovered that people with a low intake of fruits and veggies had about twice the risk of cancer compared with those with a high intake. So we can say that fruits and veggies are life-giving factors for sure. This is another reason why we include fruits and veggies in all our formulations. posted by Rob Mueller
<urn:uuid:faab447e-e2ff-4e92-99df-39c36e6110bc>
CC-MAIN-2016-26
http://blog.barfworld.com/2008/07/07/the-beneficial-chemicals-found-in-fruits-and-veggies/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395039.24/warc/CC-MAIN-20160624154955-00028-ip-10-164-35-72.ec2.internal.warc.gz
en
0.969266
168
2.984375
3
LANSING - Reusing industrial byproducts may soon be the better option for manufacturers in an eco-friendly, cost-efficient world. The Michigan Manufacturers Association (MMA) is promoting recycling industrial byproducts as part of its legislative priorities, but first wants to change the law regarding whether or not these byproducts can be used. "Every industry has some sort of product that comes out of it that is essentially benign. But millions of tons of products are currently required by law to go into a landfill," said Mike Johnston, MMA vice president for governmental and regulatory affairs. "It's about the constraints under the law about when something is a waste and when something is a product," Johnston said. "Generally, the concept is, if it comes out of a manufacturing facility as a byproduct, it must be waste. "This is what gets in the way of the concept of recycling. If it's first a waste, then the Solid Waste Act says you have to put it in a landfill. If it's inert, you can use it for something. But usually if you're going to use it for composting or some other use, it's probably not exactly inert, it's probably in between solid waste and inertness and it can never be reused," said Johnston. But one byproduct success story that has seen increased usage not only in the state but nationally is reusing fly ash - a product of coal-fired power plants. Johnston explained that two kinds of ash disperse from burning coal - fly ash and bottom ash. Fly ash, a product that can be used in road paving material, is the ash that is captured by environmental control equipment, and is one of the more researched and used byproducts. David Hand, a civil and environmental engineering professor at Michigan Technological University, said he and colleagues are researching the most efficient ways to use fly ash in construction materials. Hand said they are developing a standard procedure to add the proper materials - which include fly ash - to concrete mixtures so that the cement will be not only environmentally sound, but last longer as well. "When you make cement, the carbon dioxide content is really high," Hand said, suggesting that reducing such emissions is the purpose of reusing this industrial byproduct. Hand said Minneapolis recently completed a bridge made with 40 percent fly ash in its road material. Dennis Leonard, principal environmental engineer for Detroit Edison, said the major use for this ash in Michigan is as a substitute for Portland cement, which is, according to him, the most expensive ingredient in concrete. "Some projects require fly ash because the strength is better, and some because the sulfate resistance is better," he said. An instance of something that would need fly-ash concrete because of sulfate would be a sewage treatment plant. Leonard said the EPA is working with the Department of Energy to issue a federal buying preference for concrete with fly ash in it for federal projects. "Just about any concrete ready-mix company in the state considers fly ash concrete for its projects," he said. Leonard also said fly ash can be good for increasing long-term strength and is easier to pump - which is useful in construction of high-rise buildings. He also said it is less costly and agreed with Hand that it reduces carbon dioxide emissions.
<urn:uuid:16fb6c78-5aaa-4df1-a568-b9394e524b23>
CC-MAIN-2016-26
http://blog.mlive.com/cns/2009/04/reuse_fly_ash_byproducts_manuf.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395039.24/warc/CC-MAIN-20160624154955-00028-ip-10-164-35-72.ec2.internal.warc.gz
en
0.96452
679
2.78125
3
By Mark Lawrence Today (16 October 2013) is World Food Day and it's sobering to think that globally more than two billion people are affected by micronutrient malnutrition, most commonly presenting as vitamin A deficiency, iron-deficiency anaemia, and iodine deficiency disorders. Alleviating micronutrient malnutrition is now firmly on the agendas of United Nations agencies and countries around the world. Food fortification, that is the addition of one or more nutrients to a food whether or not they are normally contained in the food, is receiving much attention as a potential solution for preventing or correcting a demonstrated nutrient deficiency. It is a powerful technology for rapidly increasing the nutrient intake of populations. Political agendas and technological capacities are combining to significantly increase the number of staple foods that are being fortified, the number of added nutrients they contain and their reach. Across the world approximately one third of the flour processed in large roller mills is now being fortified. Yet food fortification is a complex and contested technological intervention. Proponents of food fortification point to the ability of this technology to be introduced relatively quickly and delivered efficiently through centralised food systems to increase an individual's nutrient exposure without the need for dietary behaviour change. Public-private partnerships involving collaborations between governments and the private sector are substantially increasing the capacity of governments to develop and implement food fortification interventions around the world. For example, the Global Alliance for Improved Nutrition (GAIN) is the primary vehicle globally brokering among governments, non-government organisations, the private sector, and civil society to promote food fortification. GAIN receives funding from a number of public and private sector donors, including the Bill and Melinda Gates Foundation and USAID. Those who are more cautious about food fortification highlight that there are alternative policy interventions available to correct nutrient deficiencies. There are public health, social, and agriculture development measures available to promote food security and healthy dietary behaviours. For example, the Food and Agriculture Organization has proclaimed that "Sustainable Food Systems for Food Security and Nutrition" will be the focus of World Food Day in 2013. Some suggest that food fortification policy decisions should be made with regard to uncertainties about scientific evidence and ethical considerations. Scientific uncertainties relate to the public health effectiveness as well as safety implications of food fortification. Ethical considerations – associated with increasing nutrient exposure resulting from food fortification – relate to balancing the rights of individuals, population groups and the population as a whole. How do we decide when to select food fortification and/or an alternative policy intervention as the preferred approach to tackle inadequate nutrient intake within a population? It is my view that an evidence-informed approach to food fortification policy-making starts with the recognition that not all causes of inadequate nutrient intake are the same. Too often food fortification policy is made because of the perception that it offers a relatively easy and immediate quick fix to a health problem resulting from an inadequate nutrient intake, irrespective of the cause of the inadequacy. 
A failure to consider the underlying cause of an inadequate nutrient intake risks putting in place an ineffective policy intervention, or worse, one that carries safety concerns as well as adverse ethical implications. Successful food fortification interventions have been those where the technology has been used to directly tackle the underlying cause of an inadequate nutrient intake, such as addressing inherent nutrient deficiencies in the food supply. For example, universal salt iodization has been highly effective in reducing the prevalence of iodine deficiency disorders around the world with minimal risks (when implemented in accordance with policy guidelines) and minimal adverse ethical implications. Conversely, potential health risks and adverse ethical implications have resulted when food fortification policies have been developed and implemented with a lack of consideration of the underlying cause of an inadequate nutrient intake. For example, mandatory flour fortification with folic acid has been implemented in approximately 70 countries around the world as an intervention to reduce the prevalence of neural tube defects (NTDs). This policy intervention has been based on compelling epidemiological evidence that folic acid can reduce the risk of a relatively small number of women experiencing an NTD-affected pregnancy. However, NTDs have an uncertain multifactorial aetiology in which genetics plays a major role. What the evidence also shows is that for the small number of women who may be genetically predisposed to a raised folic acid requirement, folic acid is exerting its protective influence by acting more as a therapeutic agent than as a conventional nutrient. In these circumstances targeted folic acid supplementation or voluntary flour fortification with folic acid are policy interventions that more directly tackle the underlying genetic cause of the policy problem. These policy interventions are associated with fewer health risks and adverse ethical implications than mandatory flour fortification with folic acid. The mandatory fortification policy intervention results in the exposure of everyone in the population who consumes flour, including infants, children, teenagers, men and older adults, to a lifetime of raised synthetic folic acid intake despite little evidence of any health benefit, but possible harm.
<urn:uuid:ac21685c-a194-41bd-a6e5-91acf502255e>
CC-MAIN-2016-26
http://blog.oup.com/2013/10/food-fortification-world-food-day/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395039.24/warc/CC-MAIN-20160624154955-00028-ip-10-164-35-72.ec2.internal.warc.gz
en
0.934987
1,173
3.1875
3
1. Simply introduce a new global constant. You could name it x and write something like x=1.23456 and refer to x throughout your code. This has the advantage of being easy to implement. 2. Write all of your code in monadic style and make use of the reader monad. This is intrusive in the sense that you may have to make many changes to your code to support it. But it has the advantage that all of your functions now explicitly become functions of your global constant. Now I'm going to roughly sketch a more categorical view of both of these approaches. So let's restrict ourselves to the subset of Haskell that corresponds to typed lambda calculus without general recursion, so that we know all of our functions will be total and correspond to the mathematical notion of a function. Then all of our functions become arrows in the category that we'll call Hask. Firstly consider approach (1). Suppose we want to introduce a new constant, x, of type A. Category theory talks about arrows rather than elements of objects, so instead of introducing x of type A, introduce the function x:1->A, where 1 is the terminal object in Hask, normally called (). An element of A is the same thing as an element of 1->A, but in the latter case we have an arrow in the category Hask. Before continuing, let me digress to talk about polynomials. Suppose we have a ring (with an identity) R. We define R[x], where x is an indeterminate, to be the ring of polynomials in x. Another way to describe that is to say that R[x] is the smallest ring containing R and an indeterminate x that makes no assumptions about x other than those required to make R[x] a ring. For example, we know that (1+x)(1-x)=1-x² because that must hold in any ring. Given a polynomial p in R[x] we can think of it as a function f_p from R to R: f_p(a) is the value we get when substituting the value of a for x in p. So a polynomial in R[x] is the same as a function from R to R that can be written in terms of elements of R, multiplication and addition. We can do the same with category theory. Given a category A we can ask for the smallest category extending A and containing an indeterminate arrow x:1 -> A. Just as with polynomials, we have to allow all possible arrows that can be made by composing arrows of A with x. The resulting expressions for arrows will contain x as a free variable, just like the way x appears in polynomials. In fact, by analogy we can call the resulting category, A[x], the category of polynomials in x:1->A. In the special case A=Hask, you can see that Hask[x] is the category of Haskell functions extended by a new constant of type x:1->A, but assuming no equations other than those necessary to make Hask[x] a category. Just as an arrow in Hask is a Haskell function, an arrow in Hask[x] is a Haskell function making use of an as yet undefined constant x. (I've glossed over some subtleties. Just as we need a suitable equivalence relation to ensure that (1+x)(1-x)=1-x² in R[x], we need suitable equivalence relations in our category. I'll be showing you where to find the missing details later.) Here's the implementation of a function, h, making use of a constant x. (Note that I'll be using Edward Kmett's category-extras shortly, so I need some imports.)

> import Control.Monad.Reader
> import Control.Comonad
> import Control.Comonad.Reader

> x = 1.23456
> f a = 2*a+x
> g a = x*a
> h a = f (g a)

> test1 = h 2

Now consider the second approach.
The easiest thing is to just give an implementation of the above using the reader monad:

> f' a = do
>     x <- ask
>     return $ 2*a+x

> g' a = do
>     x <- ask
>     return $ x*a

> h' a = return a >>= g' >>= f'

> test2 = runReader (h' 2) 1.23456

Note how, as is typical in monadic code, I have to plumb f' and g' together using >>= so that 1.23456 is passed through f' and g'. Previously I've described another way to think about the composition of monadic functions. Using >>= we can compose functions of type a->m b and b->m c to make a function of type a->m c. The result is that given a monad we can form the Kleisli category of the monad. The objects are the same as in Hask, but an arrow from a->b in the Kleisli category is an arrow of type a->m b in Hask. It's not hard to show this satisfies all of the axioms of a category.

When we program in the reader monad it's a bit like we've stopped using Hask and switched to the Kleisli category of the reader monad. It's not quite like that because we used functions like +. But in theory we could use lifted versions of those functions too, and then we'd be programming by composing things in the Kleisli category. If we call the reader monad R then we can call the corresponding Kleisli category Hask_R. (Strictly speaking, that R needs a subscript telling us the type of the value we intend to ask for.)

So here's the important point: Hask[x] is the same category as Hask_R. In both cases the arrows are things which, when supplied with a value of the right type (like 1.23456), give arrows in Hask from their head object to their tail object.

But there's another way to do this. We can use the reader comonad:

> f'' a = 2*extract a+askC a
> g'' a = extract a*askC a

> h'' a = a =>> g'' =>> f''

> test3 = runCoreader (h'' (Coreader 1.23456 2))

In a similar way, we're dealing with arrows of the form w a -> b and we can compose them using =>>. These arrows form the coKleisli category of the reader comonad, S, which we can write Hask_S. So we must have Hask[x] = Hask_R = Hask_S.

Now some back story. Over 20 years ago I was intrigued by the idea that logic might form a category with logical 'and' and 'or' forming a product and coproduct. I came across the book Introduction to Higher Order Categorical Logic by Lambek and Scott for £30.00. That's £60.00 at today's prices, or about $120.00. On a student grant? What was I thinking? And as it bore no relation to anything I was studying at the time, I barely understood a word of it. I was probably fairly applied at that point, doing courses in stuff like solid state physics and electromagnetism as well as a bit of topology and algebra. I doubt I'd heard of lambda calculus, though I could program in BASIC and APL. So there it sat on my bookshelf for 22 years. Periodically I'd look at it, realise that I still didn't understand enough of the prerequisites, and put it back on the shelf.

And then a month or so ago I picked it up again and realised that the first third or so of it could be interpreted as being about almost trivial Haskell programs. For example, on page 62 was:

The category A[x] of all polynomials in the indeterminate x:1->A over the cartesian or cartesian closed category A is isomorphic to the Kleisli category A_{S_A} of the cotriple (S_A, ε_A, δ_A).

The language is a little different. Lambek and Scott used the term cotriple instead of comonad, and Kleisli category where I'd say coKleisli category. δ and ε are cojoin and coreturn. And Lambek and Scott's theorem applies to any cartesian closed category.
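To see concretely why those categories coincide, here is a small sketch of my own (not from Lambek and Scott): a Kleisli arrow of the reader monad and a coKleisli arrow of the reader comonad are interconvertible, because both are just functions that consume an environment alongside an argument. The primed names are local stand-ins I'm defining here so the sketch doesn't clash with the imports above.

> newtype Reader' r a = Reader' { runReader' :: r -> a }
> data Coreader' r a = Coreader' r a

> -- Every coKleisli arrow gives a Kleisli arrow...
> toKleisli :: (Coreader' r a -> b) -> (a -> Reader' r b)
> toKleisli k a = Reader' (\r -> k (Coreader' r a))

> -- ...and back again; the two conversions are mutually inverse.
> fromKleisli :: (a -> Reader' r b) -> (Coreader' r a -> b)
> fromKleisli f (Coreader' r a) = runReader' (f a) r

Either way round, the environment 1.23456 gets threaded through exactly as test2 and test3 do above.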
But after staring at this claim for a while it dawned on me that all it was really saying was this: here are two ways to introduce new constants into a category. But there's no way I would have seen that without having practical experience of programming with monads. Learning Haskell has finally paid off. It's given me enough intuition about category theory for me to get some return on my £30.00 investment paid to Heffers all those years ago. I expected to take this book to my deathbed, never having read it.

Anyway, for the details I left out above, especially the correct equivalence relation on Hask[x], you'll just have to read the book yourself.

Also, note the similarity to the deduction theorem. This theorem says that if we can prove B, assuming A, then we can deduce A implies B without making any assumptions. It unifies two ways to introduce a proposition A: either as a hypothesis, or as an antecedent in an implication. In fact, the above theorem is just a categorical version of the deduction theorem.

Also note the connection with writing pointfree code. In fact, the pointfree lambdabot plugin makes good use of the reader monad to eliminate named parameters from functions.

I'm amazed to see a book from 1986 that describes how to use a comonad to plumb a value through some code. As far as I know, this predates the explicit use of the reader monad in a program, Wadler and Moggi's papers on monads, and certainly Haskell. Of course monads and comonads existed in category theory well before this date, but not, as far as I know, for plumbing computer programs. I'd love to hear from anyone who knows more about the history of these ideas.
http://blog.sigfpe.com/2008/06/categories-of-polynomials-and-comonadic.html?showComment=1213670880000
It's kind of amazing that with nearly 500 planets discovered orbiting other stars, we're still finding ones that are really weird. Massive planets orbiting so close to their stars they are practically plowing through the stellar atmosphere; hot spots on the planet not aligned with their stars; planets orbiting so far out it's a struggle to understand how they got there. And now we can add the planets NN Serpentis c and d to that list.

Lying about 1500 light years from Earth, NN Ser is a binary star — most stars in the sky are part of multiple systems, so that in itself isn't all that odd. But NN Ser is weird: it's a very dinky red dwarf orbiting very close to a white dwarf. And by very close, I mean really close: they're separated by only 600,000 km (360,000 miles), which isn't much farther apart than the Earth and the Moon!

I'll get back to the stars in a sec. The planets found (named c and d because the two stars are a and b, according to the naming conventions) are Jupiter-scale beasts, with masses of about 6 and 2 times Jupiter's, orbiting the binary stars at a distance of roughly 825 and 450 million km (500 million and 270 million miles). Those numbers don't seem too odd; lots of planets have been found with similar characteristics. But when you take a closer look at the system…
http://blogs.discovermagazine.com/badastronomy/tag/nn-ser/
In breaking avian-aphrodisiacal news, researchers have found that a bird's scent has a lot to do with how successfully — and with whom — it mates.

While most animals rely on smell as the primary sexual signal in finding a partner, scientists have held for years that birds were all stuffed up when it came to odor-related romance, believing their attraction to be based instead on traits like size, song and plumage. But researchers at Michigan State University demonstrated that odor can reliably predict reproductive success among some birds — both who those birds jump in the nest with and how many offspring they produce.

The team, led by Danielle Whittaker, examined the preen oil of dark-eyed juncos, a species of North American songbird. Preen oil is secreted by a gland near the tail; the bird rubs its head and bill in the gland and then spreads the oil all over its body, a behavior thought to maintain and strengthen the feathers (while also achieving an ultra-cool, slicked-back coif).

The study published in the current issue of Animal Behaviour shows that the preen oil actually does double duty. First, aromatic volatile compounds contained within the oil can indicate a bird's fertility. Second, for males specifically, the volatiles hint at how paternal they'll be or even if they're likely to be cuckolded (i.e., male birds that get stuck raising baby birds that aren't biologically their own).

Whittaker found that males with more male-like scent and females with more female-like scent were predictably more prolific. She calls the results "very intuitive," but says she was "a little surprised at the strength of the correlation."

It turns out female dark-eyed juncos in the study weren't just sniffing out a mate; they were also trying to find a good father. Whittaker reported that females would often use odor to select a particularly masculine mate, because males with more "male-like" compounds tended to have more surviving offspring and were also more successful at raising the nestlings to fledglings. In contrast, Whittaker found that those males with a higher abundance of "female-like" compounds in their preen oil (those that smelled like the ladies) lost more paternity, as the offspring in their home nests were more likely to have been sired by another male.

It's far too early to know if the findings extend beyond juncos to cuckolded cockatoos, much less people. But, Whittaker says, "there is an interesting parallel between birds and humans here" because neither has functional vomeronasal organs — that is, both possess smaller olfactory gene repertoires than other scent-dependent mammals, so they rely primarily on visual and acoustic communication. While the prevailing scientific belief is that odor does not play a large role in sexual behavior for birds or humans — disregarding the deodorant-eschewing or Axe-body-spray-wearing outliers — Whittaker says that notion could change. "As we are learning that this assumption is incorrect in birds," she says, "I would expect that we may begin re-examining our assumptions about humans as well."
http://blogs.discovermagazine.com/d-brief/2013/09/06/birds-can-whiff-a-winner-of-a-mate/
NASA didn't announce the discovery of extraterrestrial life today. Would it be exciting to you, though, if I told you it discovered a way that might allow wastewater treatment plants to operate without phosphorus? Or, to put it in terms of the NASA news release:

NASA-funded astrobiology research has changed the fundamental knowledge about what comprises all known life on Earth.

NASA discovered an organism that's figured out how to do without phosphorus. Phosphorus is necessary for life — at least the kind of life with which we're most familiar. The "green revolution," for example, was fueled by phosphorus, whose reserves on Earth are declining rapidly as it is mined for fertilizer. It's also the chemical backbone of our DNA. Now, an organism has been discovered that apparently doesn't need phosphorus; it uses arsenic to build itself.

"We know that some microbes can breathe arsenic, but what we've found is a microbe doing something new — building parts of itself out of arsenic," said Felisa Wolfe-Simon, a NASA Astrobiology Research Fellow in residence at the U.S. Geological Survey in Menlo Park, Calif., and the research team's lead scientist. "If something here on Earth can do something so unexpected, what else can life do that we haven't seen yet?"

Pamela Conrad of NASA called the discovery "delightful" because it expands her thinking of what life beyond Earth might look like. "It opens up a whole new line of chemistry. The implication is we still don't know everything there is to know about what might make a habitable environment on another planet."

Will this answer questions about how we got here and are we alone? "Probably not in our lifetime," Wolfe-Simon said. But without the discovery, earthlings looking for life on another planet could go to all the trouble of getting somewhere, only to not recognize life that exists there as life at all. For example, was there life in this picture, but we didn't know it?

Here on earth, another scientist said, the discovery could lead to the creation of bioenergy organisms without depleting the phosphorus supply on Earth.

One excited scientist said today the discovery should inspire more U.S. students to study science. That would be a new form of life, too.

By the way, ever wonder what gets a roomful of science reporters excited while they're covering a news conference at which new forms of life are revealed? Seeing themselves on a monitor:
http://blogs.mprnews.org/newscut/2010/12/life_as_we_didnt_know_it/
You exercise your body to stay physically in shape, so why shouldn't you exercise your brain to stay mentally fit? With these daily exercises you will learn how to flex your mind, improve your creativity and boost your memory. As with any exercise, repetition is necessary for you to see improvement, so pick your favorite exercises from our daily suggestions and repeat them as desired. Try to do some mentalrobics every single day!

You use different parts of your brain for routine tasks and for new, interesting tasks. Even after a few minutes of performing a task, your brain becomes accustomed to it. For example, close your eyes and touch your arm. You will certainly feel that, but keep your finger there for a few moments. Eventually your sense of touch becomes accustomed to the feeling of your arm and no longer reports it to your brain; you will no longer feel anything. The same thing happens for all your senses (which is why you can't tell when your own breath smells bad!).

The brain thirsts for novel experiences. Unique experiences activate different parts of the brain, strengthen your synapses and pump up the production of neurotrophins. So, break up your routines and try something new.
http://braingle.com/mind/351.html
February 13, 2014

A TAD TOO MISERLY, BUT A START: How Giving $1,000 to Every Baby in America Could Reduce Income Inequality (Norm Ornstein, February 12, 2014, National Journal)

And the risk level needs to be automatically adjusted by age.

It is called KidSave, and it was devised in the 1990s by then-Sen. Bob Kerrey of Nebraska, with then-Sen. Joe Lieberman as cosponsor. The first iteration of KidSave, in simple terms, was this: Each year, for every one of the 4 million newborns in America, the federal government would put $1,000 in a designated savings account. The payment would be financed by using 1 percent of annual payroll-tax revenues. Then, for the first five years of a child's life, the $500 child tax credit would be added to that account, with a subsidy for poor people who pay no income tax. The accounts would be administered the same way as the federal employees' Thrift Savings Plan, with three options--low-, medium-, and high-risk--using broad-based stock and bond funds. Under the initial KidSave proposal, the funds could not be withdrawn until age 65, when, through the miracle of compound interest, they would represent a hefty nest egg. At 5 percent annual growth, an individual would have almost $700,000.

The initial idea of KidSave was to provide a retirement supplement to Social Security, making it easier in some ways to reform Social Security to achieve fiscal solvency. But the concept can serve multiple purposes at a very small cost.

Posted by Orrin Judd at February 13, 2014 3:00 PM
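The "miracle of compound interest" above is just repeated multiplication by a growth factor. A minimal sketch of the arithmetic (mine, not Ornstein's; the 5 percent rate and age-65 horizon come from the excerpt, and the deposit schedule shown is only the one the excerpt itemizes):

def future_value(deposit, rate, years):
    # One deposit compounding annually: FV = deposit * (1 + rate) ** years
    return deposit * (1 + rate) ** years

# A dollar left alone for 65 years at 5 percent grows roughly 24-fold:
print(future_value(1.0, 0.05, 65))   # ~23.84

# $1,000 at birth, plus the $500 credit in each of the first five years,
# all left to compound until age 65:
balance = future_value(1000, 0.05, 65)
balance += sum(future_value(500, 0.05, 65 - y) for y in range(1, 6))
print(round(balance))                # ~75,000 under these assumptions

The gap between that figure and the article's "almost $700,000" suggests the proposal assumes contributions beyond the ones itemized in this excerpt.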
http://brothersjuddblog.com/archives/2014/02/a_tad_too_miserly_but_a_start.html
Although the recent Paris attack and conflict between Russia and Turkey have dominated the news headlines, COP 21—which opened in late November—is still the most important and historic international issue that the world faces at present. Six years after the greatly disappointing Copenhagen Climate Change Conference (COP 15), the ongoing Climate Change Conference in Paris epitomizes the arduous, combined efforts that participating countries are undertaking within the UN framework to combat this global challenge. The world is facing another do-or-die moment, and this may very well be the last such chance. Can the international community achieve a legally binding agreement to cope with the effects of climate change, a crisis that will shape the future of the human race? The two-week conference underway in Paris will tell us.

With the whole world watching, China has taken on an unwavering sense of responsibility on the issue of climate change. The course of action to which China is committing itself at the UN Climate Change Conference in Paris will also help the country deepen reforms in its energy sector, building on the decisions of the Third Plenary Session of the Eighteenth Central Committee of the Communist Party of China (CPC) in November 2013.

More than 160 countries have submitted intended nationally determined contributions (INDCs), which collectively cover more than 90 percent of global greenhouse gas emissions. More than 140 heads of state or government are making appearances at COP 21, a reflection of the international community's strong determination to reach a climate agreement. China, as the world's second-largest economy and largest emitter of greenhouse gases, obviously attaches great importance to COP 21. Whereas then premier Wen Jiabao attended the Copenhagen Conference in 2009, President Xi Jinping visited Paris himself—a sign of greater commitment this time around. This past July, China submitted its INDC in a document entitled Enhanced Actions on Climate Change. Beijing also announced joint presidential statements with the United States and France over the past few months, further indications that China is determined to respond to climate change.

With its share of the global economy and energy markets increasing, China is taking a more proactive attitude in international governance of energy and environmental issues. By launching the AIIB and implementing the Belt and Road initiative, China has expressed willingness to play a greater role on the international stage. The international community also has greater demands and expectations of China during these climate change negotiations.

After the Chinese government proposed in 2009 to lower the carbon intensity of GDP by 40 to 45 percent below 2005 levels by 2020, the November 2014 China-U.S. Joint Announcement on Climate Change declared that China would peak its CO2 emissions before 2030. This goal was considered radical even by academics in 2009, but it has now become a pledge made by the government itself.

Even more unexpectedly, between 2009 and 2015 China and the United States swapped places in terms of economic prospects. The U.S. economy fell into a tailspin due to the 2008 economic crisis, while China performed well after implementing a 4 trillion renminbi stimulus plan. This explains, to some degree, why the United States rejected emissions targets proposed by the EU in Copenhagen, fearing that such an outcome might cause economic harm.
But today China is faced with a slower rate of economic growth than it has experienced for the past several decades, while the United States is in the midst of a robust economic recovery, buoyed by prosperity stemming from the shale gas industry. Moreover, the U.S. Federal Reserve is planning to raise interest rates after years of quantitative easing. This raises the question: will the Chinese government restrict its efforts to cope with climate change because of the country's domestic economic downturn?

However, this should not cause China's determination on climate change to waver. On the contrary, China should use its pledge to implement gas and oil sector reforms that have proceeded slowly up until now. A leading cause of China's economic downturn is the pervasive influence of an economic model centered on coal consumption and the resulting overcapacity among high-pollution, energy-hungry industries. This trend, in concert with ever more robust domestic environmental standards and the growing share of the service sector in China's overall economy, has ushered in an era of economic structural adjustments known as the New Normal.

Coal has been the first of the three fossil fuels to decline in use. After China's year-over-year coal consumption fell 2.9 percent in 2014, it is all but certain to fall again in 2015. As the Chinese economy's carbon intensity decreases, there is no longer a strong correlation between the country's economic growth and its energy consumption. Economic growth has become less dependent on energy consumption since China's twelfth five-year plan took effect (2011–2015). Consequently, in 2014 the elasticity ratio of Chinese energy consumption (the ratio of the growth rate of energy consumption to the growth rate of GDP) was only 0.30, its lowest level in several years.

Deeper reforms are needed in the gas and oil sectors, especially in terms of encouraging the use of natural gas instead of coal, given that natural gas produces far fewer air pollutants and about 50 percent lower carbon emissions than coal. Currently natural gas accounts for only 6 percent of China's energy consumption. With low prices and adequate supplies, natural gas is a clean energy source that should take on a greater role in domestic energy consumption. However, due to the relatively slow pace of reforms in the gas and oil sectors, not only is improved efficiency in these industries constrained, but some heavy industries have even stopped using natural gas and switched back to coal. Anemic growth in natural gas consumption would also hold back domestic industrial upgrades, which may hinder the transformation of China's energy and economic landscape.

If the severe air pollution of 2013 prompted Chinese efforts to reduce coal consumption over the short term, China should use its response to climate change as a push to continue its energy transition, especially by sustaining momentum for domestic natural gas and oil reforms. During the Fifth Plenary Session of the Eighteenth Central Committee of the CPC in October 2015, China listed green development as a strategy in its next five-year plan. The State Council unequivocally aims to advance energy reform in China's Thirteenth Five-Year Plan (2016–2020). These oil and gas reforms will aim to open up monopoly-dominated industries and increase competition, with the goal of introducing market forces into these sectors of the economy. China must combine its commitment to combat climate change with its need for domestic reforms.
The government must undertake a series of reforms to improve the efficiency of the country's natural gas industry and increase its size relative to the rest of the energy sector. This will require further legislative reforms on mineral rights, oil and gas pipeline networks, state-owned enterprises, taxation, and government spending. In addition, China must introduce effective mechanisms to allow greater market competition and also strengthen the management of its natural gas industry.

Deepening reforms in the gas and oil sectors will not only help China respond to climate change. It could also advance China's ongoing economic and energy transition, providing a long-term basis for high-quality economic growth.

Yang Yifang is a researcher at the CBN Research Institute.
http://carnegieendowment.org/2011/11/17/implications-of-rising-inequality-in-emerging-markets/imr1
guys, that stuff looks really technical and I can't really understand it... I'll definitely spend a lot of today trying to make something like you have suggested work, or possibly you could help me with this idea. This is my current code, and I realised that with this for loop I'm able to convert the numbers into binary like I need to, but I'm not able to work it into my printf. I couldn't figure out how.

FILE *inputdata;
unsigned int hex;
int i = 0;

inputdata = fopen("numbers.txt", "r");
printf("Line #  Hex      Decimal  Binary\n");
while (fscanf(inputdata, "%x", &hex) == 1)
    printf("%i %8x %u\n", i++, hex, hex);  /* binary column still missing */

I tried to put it inside the printf, then I decided that's probably not even possible; then I was trying to work it into storing it into a new variable, but I just really couldn't grasp the concept of what was needed to achieve this. So if anyone could explain how I could use that to convert to binary, that would be great. It does work, I just can't work it in properly:

for (i = 0; i < 32; i++)
    printf("%d", (value >> (31 - i)) & 1);
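One way to tie the two pieces together, as a sketch rather than the thread's definitive answer (it assumes the same numbers.txt input of hex values, and hard-codes 32 bits to match the unsigned int loop above):

#include <stdio.h>

int main(void)
{
    FILE *inputdata = fopen("numbers.txt", "r");
    unsigned int hex;
    int line = 1, i;

    if (inputdata == NULL)
        return 1;

    printf("Line #  Hex       Decimal     Binary\n");
    while (fscanf(inputdata, "%x", &hex) == 1) {
        /* print the line number, hex and decimal columns first,
           leaving the cursor on the same output line */
        printf("%-7d %8x  %-10u  ", line++, hex, hex);

        /* then emit the bits from most to least significant */
        for (i = 0; i < 32; i++)
            putchar(((hex >> (31 - i)) & 1) ? '1' : '0');
        putchar('\n');
    }

    fclose(inputdata);
    return 0;
}

The point is simply that one output row can come from several printf/putchar calls, so the bit loop just continues the line the earlier printf started.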
http://cboard.cprogramming.com/c-programming/55541-converting-hex-dec.html
using namespace std;

This is all I have and I know I'm totally wrong... What's my problem?? I can't think in my head how to do this...

string f = female;
string m = male;
cout << "Please Enter Your Gender By Using M or F" << flush;
cin >> gender;
cout << endl;
if ( gender >= M )
    cout << "Your Gender Is:" << " " << M << endl;
cout << "Are you female?" << endl;

If the input is 'M' then the output should be "Male". If the gender is "F" then the output should be "Female". And invalid gender otherwise... I kinda gave up!!!!
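A repaired version, as a sketch (the names follow the post): the string values need quotes, gender needs to be declared, and the test should compare for equality against character literals like 'M' rather than using >= with an undeclared name M:

#include <iostream>
#include <string>
using namespace std;

int main()
{
    string female = "Female";
    string male = "Male";
    char gender;

    cout << "Please Enter Your Gender By Using M or F: " << flush;
    cin >> gender;
    cout << endl;

    // Accept either case of each letter; anything else is invalid.
    if (gender == 'M' || gender == 'm')
        cout << "Your Gender Is: " << male << endl;
    else if (gender == 'F' || gender == 'f')
        cout << "Your Gender Is: " << female << endl;
    else
        cout << "Invalid gender." << endl;

    return 0;
}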
http://cboard.cprogramming.com/cplusplus-programming/71055-can-someone-help-me-im-totally-stumped.html
PARTICIPATORY ASSESSMENT AND RESEARCH
Guiding and Stimulating the Community to Look at Itself
by Phil Bartle, PhD
Core Document in the Module
How to encourage participation in the assessment, appraisal and evaluation of a community by community members

A very important responsibility that you have as a community mobilizer is to ensure that the community members objectively and accurately assess and appraise their own community, cataloguing its various problems, and evaluating the differences in community priorities for solving those problems. Without an objective and collective community evaluation, different community members will have different ideas of what is more important and what is less important, and many myths and inaccurate assumptions will continue to be held by different members of the community. This contributes to disunity, and it hinders transparent and effective action to improve self-reliance and reduce poverty.

This means that you, as a mobilizer, need to learn techniques of encouraging and stimulating participation, and that you must train community members to understand the principles and learn the skills of participation in evaluation, assessment and appraisal.

When you reach a later stage in the mobilization cycle, designing a community project, you must determine what is the priority problem to be solved. There must be agreement and consensus among community members that the chosen problem to be solved is the one with the highest priority. Without unity organizing and an objective community evaluation, there will not be the needed agreement about which task to undertake first.

Without this community participation in evaluation, different factions will choose different priorities. Educated members will see different problems than uneducated. Men will see different ones than women. Landowners will see different problems than tenants or squatters. People of different age groups, ethnic groups, language groups, or religious groups will not automatically agree what are the priority problems, as they each see the universe from different perspectives, and have different value systems.

A good way to start the community appraisal process is to arrange a map making session. Set a day or afternoon for preparing the map. Ask that as many community members attend as possible. With everybody in attendance, walk through the village or neighbourhood. Do not simply walk around the perimeters of the area, but traverse it, with enough lines of traverse that everyone can see everything between them. As you walk, you observe things, discuss them, and mark them on the map. As the mobilizer, you need to keep the discussion going whenever it does not continue spontaneously. The making of the map, as a group process, including the discussion and the choices of what to mark down, is as important, if not more important, than the map itself.

On the map you include the major buildings, roads and installations (latrines, water points, playgrounds, shrines, garbage dumps). You also include observations about installations that are in a state of disrepair, have fallen down, or are not working. Ensure that you discuss each of these as you mark them on the map. This will help to limit opposition and contradictions later in the appraisal; it contributes to "transparency" in the process.

At the end of the walk through the neighbourhood or village, everyone should meet (perhaps at a convenient school building) to discuss the walk, and to finalize the map.
This debriefing is important, because it supports the transparency you wish to promote, and which was started by discussing every problem as it was marked down on the map. The map can then be used in the next phase of the appraisal, making a village or neighbourhood inventory.

On the day of making the map, or as soon as possible after that, it is time to make a community inventory. It is important that the inventory be done in a participatory manner; the community members participate in constructing the inventory. Do not, as a mobilizer, make the inventory for the community; that defeats its purpose.

It would be useful here, in your role of mobilizing and training, to review the principles and techniques used in the brainstorm. Discourage cross talk and feedback; mark down all contributions on the board; shuffle and sort the contributions later as a group exercise. Ensure that individual contributions are given at arm's length (do not focus in on individual contributors), allow apparently contradictory contributions (write every suggestion on the board), and reassert at the end that this is a group product, not the product of any one or more factions or individuals.

Be aware that different groups or factions in the whole community will have different concerns. The local headmaster might see the need for a new school as most important. Men might see a need for access to fertilizers while women might see a need for available potable water as the highest priority. The local imam might see the need for a new mosque as highest priority, while other individuals and factions will see other needs as highest priority. That is why it would be misleading to consult only with a few community leaders in determining communal priorities. A group process, involving as many members of the community as possible, is more transparent, and will result in a more accurate assessment of whole-community needs.

To encourage objectivity, suggest that the community inventory include both assets and problems. If a clean and well-used latrine is a positive asset, include it, not only the latrines that are broken. Refer to the map. Post it on the wall. Ask what assets and liabilities were observed in the map making process.

What's in a Name?

You may see the acronym PRA, or sometimes PAR, used in reference to this participatory method of making an assessment of community resources and problems. There are several interpretations and definitions of these.

Once upon a time, there was a method called RRA, Rapid Rural Appraisal. In essence this was used when an aid agency called in a high-priced foreign specialist, who parachuted in for a few days and stayed in the closest five-star hotel for the duration, and wrote up a needs assessment that the agency could use to justify its project. At most, the specialist might consult with a few of the community leaders before writing his final report.

In opposition to this "top down" approach, it became apparent (especially to community workers) that such an appraisal would be more accurate if it were more participatory and less rapid. Furthermore, sociologists noted that if the community members were involved in decision-making from the start, they would more likely take responsibility for the project, and therefore contribute to its maintenance and the sustainability of its installation. When the whole community were involved, the project would be more valid than if only a few representatives or leaders of the community were consulted. A new acronym was coined, PRA.
This acronym was more consistent than what the letters represented: Participatory Rural Appraisal, Participatory Research and Assessment. What was common among these was that the process should be participatory. Some people tried to bypass the plethora of interpretations of PRA, and coined the new acronym, PAR. This too, however, has sprouted several interpretations, including Participatory Action Research, but the consistent feature is still that they both (PRA, PAR) emphasize participation.

What is essential here is that the assessment process should be participatory, that participation should involve the whole community, not merely a few factions, and that the assessment of needs and potentials should reflect the community as a whole.

Information for Whom?

You might hear, especially from non community-oriented project managers (eg engineers, central planners), that community appraisal is unnecessary. "We already have a social sector base study, why should we duplicate it with a village inventory?" is a typical lament. You may be called upon to defend this part of your work, especially if you are part of a sector-specific project (eg water supply). Managers are in a hurry to get physical results (building the water point), and this participatory assessment takes up time.

The information collected by the map making and inventory by the community may or may not duplicate information resulting from other sources. It is an incorrect assumption that the information is primarily for the project or agency to make plans. The purpose of the assessment process is to involve the whole community in decision making, and to encourage community members to take responsibility for any facility or service that may be installed in the future.

That said, the information produced is very useful in adding to other sources of information (baseline survey, census data, other reports) in getting an accurate picture of the current situation. As a mobilizer, you will contribute to the process of poverty reduction and community empowerment if you make the information available to your agency or project, to local authorities, and to district and central government officials, especially those in planning, community development and management.

Training the Community Members

Where communities are characterized by much poverty and many marginalised persons, it is more likely that many members will be unfamiliar with participating in making community decisions. Furthermore, many will be unfamiliar with map-making and making an inventory, and many will not be able to read and write. These are skills they need in order to participate in decision making that leads to community empowerment.

Formal training is not the answer here. You as a mobilizer will familiarize community members with all of these simply by carrying them out. Even more important, your encouraging them to participate supports their self-confidence and motivates them in contributing to their community development. In the process of carrying them out, remember that community members are learning new skills, and ensure that you are transparent in your work.

The skills needed by community members to carry out an appraisal are not sophisticated and difficult. Community members are normally willing to engage in the process and will easily learn the skills in the process. Your job is to facilitate that learning.
The participation of community members in making a community appraisal goes far beyond laying the groundwork for community action. The results of their assessments can be used as baseline data for measuring progress, and therefore as an element of community-based monitoring and evaluation.

Where From Here?

This document shows you how to encourage participation in the assessment, appraisal or evaluation of a community by community members. Throughout your work, participation of community members, all members rather than only some factions or individuals, should be stimulated and encouraged.

While a participatory approach, in which the trainer is a facilitator rather than a lecturer, is generally best in training activities, the PAR/PRA methodology should not be blindly applied in all areas. Where specific skills are needed, for example, especially if they have already been identified by the participants, it may be appropriate to employ other methods, such as demonstration, presentation, and dialogue. Given this, allowing trainees to learn by doing should be emphasized.

For more discussion on this approach, see the Robert Chambers files.

Making a Community Map

© Copyright 1967, 1987, 2007 Phil Bartle
http://cec.vcn.bc.ca/cmp/modules/par-par.htm
The special symbols described here are used as a notational convenience within this document, and are part of neither the Common Lisp language nor its environment.

(+ 4 5) => 9

This means that the result of evaluating the form (+ 4 5) is 9. If a form returns multiple values, those values might be shown separated by spaces, line breaks, or commas. For example:

(truncate 7 5) => 1 2

(truncate 7 5)
=> 1
   2

(truncate 7 5) => 1, 2

Each of the above three examples is equivalent, and specifies that (truncate 7 5) returns two values, which are 1 and 2. Some conforming implementations actually type an arrow (or some other indicator) before showing return values, while others do not.

(char-name #\a)
=> NIL
OR=> "LOWERCASE-a"
OR=> "Small-A"
OR=> "LA01"

indicates that nil, "LOWERCASE-a", "Small-A", "LA01" are among the possible results of (char-name #\a), each with equal preference. Unless explicitly specified otherwise, it should not be assumed that the set of possible results shown is exhaustive. Formally, the above example is equivalent to

(char-name #\a) => implementation-dependent

but it is intended to provide additional information to illustrate some of the ways in which it is permitted for implementations to diverge.

(function-lambda-expression
  (funcall #'(lambda (x) #'(lambda () x)) nil))
=> NIL, true, NIL
OR=> (LAMBDA () X), true, NIL
NOT=> NIL, false, NIL
NOT=> (LAMBDA () X), false, NIL

(gcd x (gcd y z)) == (gcd (gcd x y) z)

This means that the results and observable side-effects of evaluating the form (gcd x (gcd y z)) are always the same as the results and observable side-effects of (gcd (gcd x y) z) for any x, y, and z.

For example, conforming implementations are permitted to differ in issues of how interactive input is terminated. For example, the function read terminates when the final delimiter is typed on a non-interactive stream. In some implementations, an interactive call to read returns as soon as the final delimiter is typed, even if that delimiter is not a newline. In other implementations, a final newline is always required. In still other implementations, there might be a command which "activates" a buffer full of input without the command itself being visible on the program's input stream.

In the examples in this document, the notation ">> " precedes lines where interactive input and output occurs. Within such a scenario, "this notation" notates user input. For example, the notation

(+ 1 (print (+ (sqrt (read)) (sqrt (read)))))
>> 9 16
>> 7
=> 8

shows an interaction in which "(+ 1 (print (+ (sqrt (read)) (sqrt (read)))))" is a form to be evaluated, "9 16" is interactive input, "7" is interactive output, and "8" is the value yielded from the evaluation. The use of this notation is intended to disguise small differences in interactive input and output behavior between implementations.

Sometimes, the non-interactive stream model calls for a newline. How that newline character is interactively entered is an implementation-defined detail of the user interface, but in that case, either the notation "<Newline>" or "<NEWLINE>" might be used.

(progn (format t "~&Who? ") (read-line))
>> Who? Fred, Mary, and Sally<NEWLINE>
=> "Fred, Mary, and Sally", false
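As a quick supplementary illustration (not part of the original text), the gcd equivalence above can be spot-checked for particular values, reading the result in the same notation:

(let ((x 12) (y 18) (z 27))
  (list (gcd x (gcd y z)) (gcd (gcd x y) z)))
=> (3 3)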
http://clhs.lisp.se/Body/01_dac.htm
HEAT GENERATION IN UPPER AND LOWER-BODY WORK

Habib, C. M., Canine, K. M., Bothorel, B., Trone, D. W., & Vurbeff, G. K. (1997). Effect of exercise mode on cooling and heat strain. Medicine and Science in Sports and Exercise, 29(5), Supplement abstract 554.

The effects of exercise mode on heat strain when whole-body cooling was provided by a liquid cooling undergarment were examined. Males (N = 8) exercised for 120 min in a hot environment (49 degrees Celsius) while dressed in a chemical protective overgarment. The following activities were performed: arm-cranking with no cooling, arm-cranking with cooling, treadmill with cooling, and treadmill with no cooling. Each test consisted of repeated intervals of 20 min exercise followed by 10 min rest.

Results showed that heart rate was higher with upper-body exercise than with lower-body exercise and that thermal strain was unrelated to the mode of exercise when no cooling was provided. Heart rate was higher for upper-body work when cooling was provided. Greater heat extraction was required to maintain thermal balance during lower-body exercise than during upper-body exercise when work rates were matched.

Implication. Greater amounts of heat will be generated with lower-body work than with upper-body work, and thus natural or artificial cooling will need to be greater for those forms of exercise. Fluid loss is likely to be greater with lower-body work.
http://coachsci.sdsu.edu/csa/vol36/habib.htm
"We are expanding our iPad technology fairly significantly at the high school at the present time," principal Marcia Lawrence said. "Of course, we're continuing to expand our PC technology also, but the biggest push we've had this year — and what I foresee perhaps in the spring — is more expansion of iPad technology."

Lawrence said the district has allocated nearly 100 iPads to be used by students at the school. The tablets allow the students to use different applications — programs downloaded from Apple's app store — to supplement teachers' lessons.

"We like the fact that the teacher can choose an app that helps them to approach the learning objective that they have, and that every child can have the same application open and working on it at one time," Lawrence said. "That relieves us of some of the difficulties that we have with wireless technology, when they're using PCs and they have to access things through web browsers."

Lawrence said the variety of educational applications available makes the iPads an invaluable tool in the classroom.

"There are a million applications for iPads. It's just been an exploding kind of thing," she said. "If you wanted to see an app on the French Revolution, you can find one tailor-made for that one historical event that would talk about the culture and the causes. It might involve a game, it might involve some kind of matching game, or it might involve some research."

The high school has two classroom sets of iPads, which teachers can reserve through the media center for different class periods. The classroom sets are housed in iPad Learning Labs, essentially mobile carts with slots inside to fit 30 iPads. The learning lab charges the iPads and also allows selected apps to be downloaded to all the iPads at once.

While large numbers of schools have begun phasing out textbooks in hopes of shifting solely to digital formats, Lawrence sees the two as learning tools, both still applicable to a classroom setting.

"At this point, we're not planning on phasing out textbooks," she said. "I wouldn't say that computers are taking the place of textbooks for us, but I would say they are a major supplement to textbooks."

And the application of new technology has predictably seen its fair share of glitches, along with what sometimes becomes an overdependence on the technology.

"The biggest negative in technology is that it has glitches," Lawrence said. "Sometimes, I think in the last decade, our teachers have worked so hard to become technologically literate, that sometimes the technology becomes the lesson, instead of the technology being the tool to teach the lesson. So we want to be very cautious about that."

"We want teachers to use technology to enhance those objectives," she added.
http://couriernews.com/view/full_story/21384333/article-Dardanelle-HS-increasing-iPad-use-
Wilfred Campbell (1860-1918)

William Wilfred Campbell was a member of the Canadian school of "Confederation Poets" who were born in the mid-19th century around the date of the constitution of Canada as a confederated Dominion of Britain in 1867. Northrop Frye saw their distinctive Canadian romantic style and effect on Canadian poetry as "very much like the impact of the Group of Seven painting two decades later... like the later painters, these poets were lyrical in tone and romantic in attitude". Still using the brushes of the Victorian Romantics, they moved away from heavy classical and religious metaphor to paint in verse their personal relationships with nature and modern civilization.

They never considered themselves a cohesive group. Indeed, some regard their School as having been arbitrarily defined to provide a powerful post facto canon to celebrate the new Dominion into the first quarter of the 20th century, with the effect of retarding the development of Modernist Canadian poetry. The Confederation School is considered to have two geographic branches: the Ottawa poets, including Archibald Lampman (1861-1899), Duncan Campbell Scott (1862-1947) and William Wilfred Campbell (1860-1918), and the maritime poets, including Charles G. D. Roberts (1860-1943) and his cousin, Bliss Carman (1861-1929). Others have been added to the School, including Frederick George Scott (1861–1944), Francis Joseph Sherman (1871–1926), Pauline Johnson (1861–1913), George Frederick Cameron (1854–1885), and Isabella Valancy Crawford (1850–1887).

William Wilfred was born in 1860 in Newmarket, Upper Canada, the son of an alcoholic clergyman whose wife was a gifted pianist and composer. After teaching in the Wiarton district for several years, he studied divinity and theology and was ordained in 1885. He secretly married Mary Louisa Debelle in 1883 so that she would not lose her teaching position. They had four children. He was a pastor at West Claremont, N.H., St Stephen, N.B., and Southampton, Ont., but alienated his last congregation as his religious beliefs evolved away from classical dogma. In poor health, he moved to Ottawa in 1891 for a federal civil service job that fell through. In 1915, Campbell moved with his family to an old stone farmhouse on the outskirts of Ottawa, which he named "Kilmorie". The house still exists with its original stone wall at 21 Withrow Avenue, (City View) Nepean, off Merivale Road.

By 1891, Campbell was a well-recognized and highly productive poet whose lyrical and beautiful compositions were featured in the most prestigious magazines in North America. His muse was God's presence expressed in Nature. Sir John A. Macdonald habitually hired poets; in 1891 he hired Campbell as a temporary clerk for $1.50 a day in the Department of Railways and Canals and then in the Department of Secretary of State in 1892. His success as a poet prompted debates in the House of Commons and the Senate to give him a permanent position; both were defeated as creating unwanted patronage precedents for artists. However, he was so insistent that he was quietly given a permanent job, firstly in the Department of Militia and Defence (1893), then the Privy Council Office (1897), the Archives Branch of the Department of Agriculture (1908) and the Dominion Archives in 1909. From 1892 to 1893, he joined fellow civil servants and Confederation poets Duncan Campbell Scott and Archibald Lampman, his next-door neighbor, in writing a column of essays for the Toronto Globe newspaper called "At the Mermaid Inn".
It helped to pay his bills but collapsed as William continued to express his liberal and unorthodox religious theories, seeking to reconcile religion, science and sociology. This blend, however, appealed to the members of the Royal Society of Canada, who elected him a member in 1894, and their vice-president (1899-1900), president (1900-1901) and secretary (1903-1911). His poetry, verse dramas, pamphlets, five novels and three works of non-fiction expressed this blended philosophy and his patriotic British Imperialist politics. These principles guided his choices for Poems of Loyalty by British and Canadian authors (London, 1913) and for The Oxford Book of Canadian Verse (Toronto, 1913); he somewhat egocentrically devoted more pages in the latter to his own poetry than to anyone else!

Throughout the Great War, he distributed pamphlets of poems and was seconded from the archives branch, where he was working on Loyalist historical projects, to the Imperial Munitions Board, where he began a history of the Canadian munitions industry. He died of pneumonia on New Year's morning, 1918, and was buried in Beechwood Cemetery; he was a frank and highly gifted Canadian poet, author and provocative philosopher. William Lyon Mackenzie King and Violet Markham bought his plot and memorial.

The following poem by Campbell, "Bird on a Bough", could not be found in the usual anthologies of his work. It is one of his Nature poems with only a slight hint of God, represented by the Sun. It celebrates springtime. It was sent to W. P. Lett, signed personally and from an early address for Campbell at "24 Lisgar Street, Ottawa, Canada". It is an original, typed manuscript and must have been composed shortly after his arrival in Ottawa in 1891, judging by the address and the fact that the recipient, W. P. Lett, died in 1892.

*Biographical information has been abstracted from:
PoemHunter.com at http://www.poemhunter.com/william-wilfred-campbell/biography/
The Dictionary of Canadian Biography at http://www.biographi.ca/en/bio/campbell_william_wilfred_14E.html
Confederation Poets at http://en.wikipedia.org/wiki/Confederation_Poets
Poets' Pathway at http://www.poetspathway.ca/bio_campbell.htm

Context for Wilfred Campbell

During my research of the life and poetry of William Pittman Lett, I discovered two "lost" and important poems of William Wilfred Campbell. I have already blogged the first in my article on Sir Sandford Fleming; the second is reproduced at the end of this article. However, I thought we should know something about him! I shamelessly compiled the condensed biography from the sources listed. The poem is a copy of the original discovery. The entire article was published in the newsletter of The Ottawa Historical Society.
http://cprcook.blogspot.com/
Continuing our series of five installments analyzing nanotechnology and risk, we turn now to molecular manufacturing. Earlier this week, Part 1 gave an overview of existing nanoscale technologies, and Part 2 assessed the risks of nanoscale technology. Part 3 (today) is an overview of molecular manufacturing. Part 4 will address the risks of molecular manufacturing and Part 5 is a conclusion with recommendations.

Part 3: Molecular Manufacturing Overview

Molecular manufacturing is based on a simple idea: use programmable chemistry and assembly to create complex products with nanoscale features. This is a more precise way of manufacturing than today's methods, and should make products with far higher performance. Computers and motors could be a million times smaller, and materials could be 100 times stronger. Precision automated manufacturing using tiny machines would allow a complete, self-contained, general-purpose factory to sit on a desktop. And because smaller things work faster, the factory could fabricate its own mass (including a duplicate factory, if desired) in just a few hours.

The products of molecular manufacturing would be extremely inexpensive to make. Strong, chemically bonded components would be very reliable, especially since the chemical fabrication operations would be simple and repetitive. High reliability would allow complete automation, eliminating labor and maintenance cost. The R&D cost of the first factory could be spread over all the factories it could produce, and all the factories they could produce, and so on.

The major remaining cost is raw materials and energy. The material supply for a nanofactory would be simple chemicals. The energy required could be high, but still a good payoff considering the value of the product. As building materials, the products are expected to be competitive with steel given their greater strength, even at today's energy prices. As computers, medical devices, and aerospace hardware, their value would be orders of magnitude higher than their cost. To put it in perspective, a few milligrams of motors could power your car, a few milligrams of circuitry would be a world-class supercomputer, and a few milligrams of artificial red cells could keep a person alive for many minutes even with their heart stopped.

But making kilogram-scale products should not be much more difficult than making milligram-scale products. If a factory could build a duplicate in a few hours, then the number of available factories could grow exponentially, multiplying by 100 or 1000 each week. Production capacity will be essentially unlimited.

An important product of the nanofactory would be solar cells. A lightweight design could collect enough energy to build another of the same size in a day or two. This implies that energy will not be a limiting factor in production capacity. And the main chemical element required would be carbon, which is plentifully available in many forms.

Because a variety of physics factors work together to make molecular fabrication and nanoscale manipulation simpler than large-scale industrial robotics, the factories are expected to be completely automated; this implies that labor costs (and manufacturing jobs) will disappear. The process of designing products could be greatly accelerated by the ability to build prototypes very quickly and cheaply. Also, the vastly higher performance of components implies that most human-scale products would be mostly empty space, reducing the effort required to balance engineering tradeoffs.
All in all, the process of product design might resemble software engineering more than hardware engineering.

A present-day illustration of the effects of rapid prototyping can be found in the electronics industry. Two kinds of chips, ASICs (application-specific integrated circuits) and FPGAs (field-programmable gate arrays), have very similar functionality: they can be configured for a particular product to carry out a complex task. But ASICs take months to manufacture, while FPGAs take seconds to reprogram. As a result, ASICs may require an order of magnitude more design effort because the cost of mistakes is so much higher.

These considerations imply that the development of the first molecular manufacturing system will create a sudden and substantial technological advance in our ability to manufacture large, complex, nanostructured products. How rapidly this could be adopted in diverse applications remains a subject of debate. However, it seems clear that there is potential for abrupt transformation of several industries, infrastructures, and strategic military factors. The potential benefits include replacement of inefficient or ecologically harmful infrastructure, rapid medical advances, and inexpensive large-scale humanitarian projects. This suggests substantial opportunity for profit. However, molecular manufacturing also creates a variety of risks and disruptions that will require careful policymaking to avoid. These will be considered in our next installment.

Tune in tomorrow for Part 4: Risks of Molecular Manufacturing.
Prota 1: Cereals and pulses/Céréales et légumes secs
Vigna unguiculata (L.) Walp.
Protologue: Repert. bot. syst. 1: 779 (1843).
Family: Papilionaceae (Leguminosae - Papilionoideae, Fabaceae)
Chromosome number: 2n = 22
Synonyms: Vigna sinensis (L.) Hassk. (1844).
Vernacular names
– Cowpea, black-eye bean, black-eye pea, China pea, marble pea (En). Niébé, haricot à l’œil noir, pois yeux noirs, cornille, voème, haricot dolique, dolique mongette (Fr). Caupi, feijão frade, feijão da China, feijão miúdo, feijão macundi, makunde (Po). Mkunde (Sw).
– Yard-long bean, asparagus bean (En). Haricot-kilomètre, dolique asperge (Fr). Feijão de metro, feijão chicote, feijão espargo, feijão frade alfange (Po).
– Catjang cowpea, Bombay cowpea (En). Catjang (Fr).
Origin and geographic distribution
Vigna unguiculata originated in Africa, where a large genetic diversity of wild types occurs throughout the continent, southern Africa being the richest. It has been introduced to Madagascar and other Indian Ocean islands, where it is sometimes found as an escape from cultivation. The greatest genetic diversity of cultivated cowpea is found in West Africa, in the savanna region of Burkina Faso, Ghana, Togo, Benin, Niger, Nigeria and Cameroon. Cowpea was probably brought to Europe around 300 BC and to India around 200 BC. As a result of human selection in China, India and South-East Asia, cowpea underwent further diversification to produce two cultivar-groups: Sesquipedalis Group, with long pods used as a vegetable, and Biflora Group, grown for the pods, dry seeds and fodder. Cowpea was probably introduced to tropical America in the 17th century by the Spanish and is widely grown in the United States, the Caribbean region and Brazil. Cowpea is the most important pulse crop in the savanna regions of West and Central Africa, where it is also an important vegetable and a valuable source of fodder. In East and southern Africa it is also important both as a vegetable and a pulse. Only in humid Central Africa is it less prominent.
Cowpea is the preferred pulse in large parts of Africa. The mature seeds are cooked and eaten alone or together with vegetables, spices and often palm oil, to produce a thick bean soup, which accompanies the staple food (cassava, yam, plantain). In West Africa the seeds are decorticated, ground into a flour, mixed with chopped onions and spices, and made into cakes which are either deep-fried (‘akara balls’) or steamed (‘moin moin’). In Malawi the seeds are boiled with their seed coat, or the latter is removed by soaking and leaving the seeds in the soil for a few hours. Small quantities of cowpea flour are processed into crackers, composite flour and baby foods in Senegal, Ghana and Benin. The leaves and the immature seeds and pods of cowpea are eaten as vegetables. Cowpea leaves are served boiled or fried and are usually eaten with a porridge. The leaf may be preserved by sun-drying, or by boiling and then sun-drying, to be used during the dry season. Leaves to be preserved for later use are generally plucked towards the end of the season. It is believed that leaves developed towards the end of the season are tastier as they tend to grow under conditions of stress. In Botswana and Zimbabwe boiled cowpea leaves are kneaded to a pulp and squeezed into small balls, which are dried and stored. Immature, green and still soft seeds are cooked to a thick soup and used as relish. The tender seedless cowpea pods are sometimes used as a cooked vegetable, as are young pods of yard-long bean. In Asia this is the most important use of cowpea; in Africa it is uncommon.
In Benue State, Nigeria, the stringless coiled pods with little parchment of a landrace called ‘Eje-O’Ha’ are parboiled for a few minutes, opened and split in half. The seeds are eaten directly while the pod walls are dried and preserved for later use. Pods are also eaten locally in Benin. The roots are sometimes eaten, e.g. in Ethiopia and Sudan. Cowpea is used as fodder in West Africa, Asia (especially India) and Australia; it is used for grazing or cut and mixed with dry cereals for animal feed. In the United States and elsewhere cowpea is grown as a green manure and cover crop. In Nigeria special cultivars are grown for the fibre extracted from the peduncle after retting; the strong fibre is especially suitable for fishing gear, and produces a good-quality paper. The dry seeds have been used as a coffee substitute. Various medicinal uses of cowpea have been reported: leaves and seeds are applied as a poultice to treat swellings and skin infections, leaves are chewed to treat tooth ailments, powdered carbonized seeds are applied on insect stings, the root is used as an antidote for snakebites and to treat epilepsy, chest pain, constipation and dysmenorrhoea, and unspecified plant parts are used as a sedative in tachycardia and against various pains.
Production and international trade
According to FAO statistics, the total annual world production of dry cowpea seeds in 1999–2003 was about 3.6 million t from 9.5 million ha. Other estimates indicate a higher production: over 4.5 million t from about 14 million ha. According to FAO, 3.3 million t was produced annually in sub-Saharan Africa, from 9.3 million ha, mainly in West Africa (3 million t/year from 8.8 million ha), the main producers being Nigeria (2.2 million t/year from 5.1 million ha) and Niger (400,000 t/year from 3.3 million ha). Brazil, which is not included in the FAO cowpea statistics, is estimated to produce about 0.6–0.7 million t/year from 1.1–1.9 million ha. Cowpea seeds are produced for local consumption and surpluses are sold in local markets. International trade is mainly within West Africa, with the exporting countries in the drier Sahelian zone and the importing countries in the more densely populated humid region along the coast. It has been estimated that at least 285,000 t was traded between West African countries in 1998, mainly from Niger to Nigeria, but the total trade is probably larger. There are no statistical data on the quantity of leaves and pods harvested, but it is likely to be considerable. Fresh and dried leaves are much sold in urban markets and some are traded to neighbouring countries. Dried leaves in the form of black balls are exported from Zimbabwe to Botswana and South Africa. Yard-long bean is grown in Asia on hundreds of thousands of hectares, but is of minor importance in Africa.
The nutritional composition of leafy stem tips of cowpea per 100 g edible portion is: water 89.8 g, energy 121 kJ (29 kcal), protein 4.1 g, fat 0.3 g, carbohydrate 4.8 g, Ca 63 mg, Mg 43 mg, P 9 mg, Fe 1.9 mg, Zn 0.3 mg, vitamin A 712 IU, thiamin 0.35 mg, riboflavin 0.2 mg, niacin 1.1 mg, folate 101 μg, ascorbic acid 36 mg. Young cowpea pods with seeds contain per 100 g edible portion: water 86.0 g, energy 184 kJ (44 kcal), protein 3.3 g, fat 0.3 g, carbohydrate 9.5 g, Ca 65 mg, Mg 58 mg, P 65 mg, Fe 1.0 mg, Zn 0.3 mg, vitamin A 1600 IU, thiamin 0.15 mg, riboflavin 0.15 mg, niacin 1.2 mg, folate 53 μg, ascorbic acid 33 mg.
Yard-long bean pods contain per 100 g edible portion: water 87.9 g, energy 197 kJ (47 kcal), protein 2.8 g, fat 0.4 g, carbohydrate 8.4 g, Ca 50 mg, Mg 44 mg, P 59 mg, Fe 0.5 mg, Zn 0.4 mg, vitamin A 865 IU, thiamin 0.1 mg, riboflavin 0.1 mg, niacin 0.4 mg, folate 62 μg, ascorbic acid 19 mg. Immature cowpea seeds contain per 100 g edible portion: water 77.2 g, energy 377 kJ (90 kcal), protein 3.0 g, fat 0.4 g, carbohydrate 18.9 g, fibre 5.0 g, Ca 126 mg, Mg 51 mg, P 53 mg, Fe 1.1 mg, Zn 1.0 mg, vitamin A 0 IU, thiamin 0.1 mg, riboflavin 0.15 mg, niacin 1.45 mg, folate 168 μg, ascorbic acid 2.5 mg. Mature cowpea seeds contain per 100 g edible portion: water 12.0 g, energy 1407 kJ (336 kcal), protein 23.5 g, fat 1.3 g, carbohydrate 60.0 g, fibre 10.6 g, Ca 110 mg, Mg 184 mg, P 424 mg, Fe 8.3 mg, Zn 3.4 mg, vitamin A 50 IU, thiamin 0.85 mg, riboflavin 0.23 mg, niacin 2.1 mg, vitamin B6 0.36 mg, folate 633 μg, ascorbic acid 1.5 mg. The essential amino-acid composition per 100 g mature, raw cowpea seeds is: tryptophan 290 mg, lysine 1591 mg, methionine 335 mg, phenylalanine 1373 mg, threonine 895 mg, valine 1121 mg, leucine 1802 mg and isoleucine 956 mg. The principal fatty acids per 100 g edible portion are: linoleic acid 343 mg, palmitic acid 254 mg, linolenic acid 199 mg and oleic acid 88 mg (USDA, 2004). The approximate fatty acid composition of fat from cowpea seeds is: saturated fatty acids 25%, mono-unsaturated fatty acids 8%, polyunsaturated fatty acids 42%. Cowpea protein is relatively rich in lysine, but poor in S-containing amino acids. Cowpea seed is lower in antinutritional components such as lectins and trypsin inhibitors than common bean (Phaseolus vulgaris L.), and is easier and quicker to cook.
Adulterations and substitutes
The pods of common bean are often used for the same dishes as yard-long bean, although the taste is not the same. Immature seeds of several leguminous plants are used as substitutes for immature cowpea seeds, e.g. those of pea (Pisum sativum L.), common bean and lima bean (Phaseolus lunatus L.).
Climbing, trailing or more or less erect annual or perennial herb, cultivated as an annual; taproot well developed, with many lateral and adventitious roots; stem up to 4 m long, angular or nearly cylindrical, slightly ribbed. Leaves alternate, 3-foliolate; stipules ovate, 0.5–2 cm long, spurred at base; petiole up to 15(–25) cm long, grooved above, swollen at base, rachis (0.5–)2.5–4.5(–6.5) cm long; stipels small; leaflets ovate or rhombic to lanceolate, (1.5–)7–14(–20) cm × (1–)4–10(–17) cm, basal ones asymmetrical, apical one symmetrical, entire, sometimes lobed, glabrous or slightly pubescent, 3-veined from the base. Inflorescence an axillary or terminal false raceme up to 35 cm long, with flowers clustered near the top; rachis tuberculate. Flowers bisexual, papilionaceous; pedicel 1–3 mm long, with spatulate, deciduous bracteoles; calyx campanulate, tube c. 5 mm long, lobes narrowly triangular, c. 5 mm long; corolla pink to purple, sometimes white or yellowish, standard very broadly obovate, hood-shaped, c. 2.5 cm long, wings obovate, c. 2 cm long, keel boat-shaped, c. 2 cm long; stamens 10, 9 fused and 1 free; ovary superior, c. 1.5 cm long, laterally compressed, style upturned, with fine hairs in upper part, stigma obliquely globular. Fruit a linear-cylindrical pod 8–30(–120) cm long, straight or slightly curved, with a short beak, glabrous or slightly pubescent, pale brown when ripe, 8–30-seeded.
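As a rough cross-check of the energy figure for mature seeds, here is a sketch using the general Atwater factors (an approximation of 4 kcal/g for protein and carbohydrate and 9 kcal/g for fat; these factors are a standard convention, not part of the PROTA data):

```python
# Mature cowpea seeds, per 100 g edible portion (figures from the text).
protein_g, fat_g, carb_g = 23.5, 1.3, 60.0

# General Atwater factors (kcal per gram) -- a standard approximation.
kcal = 4 * protein_g + 9 * fat_g + 4 * carb_g
print(f"estimated energy: {kcal:.0f} kcal (text lists 336 kcal)")  # ~346
```

The estimate of roughly 346 kcal slightly overshoots the listed 336 kcal, plausibly because the 60 g of carbohydrate includes 10.6 g of fibre, which yields less usable energy than the Atwater carbohydrate factor assumes.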
Seeds oblong to almost globose, often laterally compressed, 0.5–1 cm long, black, brown, pink or white; hilum oblong, covered with a white tissue, with a blackish rim-like aril. Seedling with epigeal germination; cotyledons oblong or sickle-shaped, thick; first two leaves simple and opposite, subsequent leaves alternate, 3-foliolate.
Other botanical information
Vigna comprises about 80 species and occurs throughout the tropics. However, the tropical American species are likely to be placed in a separate genus in the near future, which would reduce the genus to 50–60 species. Vigna unguiculata is extremely variable, both in wild and cultivated plants. Several subspecies (up to 10) have been distinguished, most of them comprising perennial wild types, but subsp. unguiculata includes annual wild types and cultivated ones. In cultivated Vigna unguiculata 5 cultivar-groups are generally recognized, although the groups can be crossed readily and overlap:
– Unguiculata Group (common cowpea): pulse and vegetable types, grown for the dry or immature seeds, young pods or leaves; plant habit prostrate to erect, up to 80 cm tall, late flowering, pods 10–30 cm long, pendent, hard and firm, not inflated when young, many-seeded and seeds not spaced; most African cultivars belong to this group.
– Sesquipedalis Group (yard-long bean, synonyms: Dolichos sesquipedalis L., Vigna sesquipedalis (L.) Fruhw.): grown for the young pods; plant climbing, stem up to 4 m long, pods 30–120 cm long, pendent, inflated when young, many-seeded and seeds spaced; important vegetable in South-East Asia, but of minor importance in tropical Africa, where only cultivars introduced from Asia are grown.
– Biflora Group (catjang cowpea): grown for the seeds, tender green pods and for fodder; plant habit prostrate to erect, up to 80 cm tall, early flowering, pods 7.5–12 cm long, erect or ascending, hard and firm, not inflated when young, few-seeded and seeds not spaced; important in India and South-East Asia, locally also in Africa (e.g. Ethiopia).
– Melanophthalmus Group: originating from West Africa; plant able to flower quickly from the first nodes under inductive conditions, pods comparatively few-seeded, seed coat thin, often wrinkled, partly white.
– Textilis Group: a small group only grown in Nigeria for the fibre extracted from the long peduncles; at the beginning of the 20th century this group was distributed from the interior delta of the Niger river eastward to the Lake Chad basin, but it is gradually disappearing.
In Africa there are numerous landraces and improved cultivars within Unguiculata Group. Leaves are traditionally picked in cowpea fields grown primarily for the dry seed and belong to the top ten most popular leafy vegetables in many African countries. In addition, special types with erect plant habit or prostrate stems with long tender shoots are grown as a leafy vegetable, sometimes also for the immature seeds or young pods. The use of dual-purpose types (seeds and leaves) is becoming very popular in some countries as the leaves are the main vegetable during the early rainy season. Various cultivars of yard-long bean are offered by Asian seed companies, with a large variation in plant characters.
Growth and development
Germination of cowpea takes 3–5 days at temperatures above 22°C. The optimum temperature for germination is about 35°C. Flowers open in the morning and close before noon; they fall the same day.
In dry climates cowpea is almost entirely self-pollinated, but in areas with high air humidity cross-pollination by insects may amount to 40%. Only fairly large insects are heavy enough to open the keel. The length of the reproductive period is very variable, with the earliest cultivars taking 30 days from planting to flowering, and less than 60 days to mature seeds. When leaves are harvested during the early growth stages, senescence starts 1.5–2 months after sowing and the plant dies after 3–4 months, depending on crop health and intensity of harvesting. Late cultivars with indeterminate growth take 90–100 days to flower and up to 240 days for the last pods to mature. Cowpea forms N-fixing nodules with Sinorhizobium fredii and several Bradyrhizobium species.
Wild types of Vigna unguiculata grow in savanna vegetation, often in disturbed localities or as a weed, up to 1500 m altitude, but some can be found in grassland subject to regular burning, sandy localities close to the coast, woodland, forest edges or swampy areas, occasionally up to 2500 m altitude. Cowpea grows best at day temperatures of 25–35°C; night temperatures should not be less than 15ºC, and consequently cultivation is restricted to low and medium altitudes. At altitudes above 700 m growth is retarded. Cowpea does not tolerate frost, and temperatures above 35°C cause flower and pod shedding. It performs best under full sunlight but tolerates some shade. Cowpea is generally grown as a rainfed crop in sub-Saharan Africa, but in Asia it is sometimes grown on residual moisture after an irrigated rice crop. Short-duration determinate types can be grown with less than 500 mm rainfall per year; in experiments in Senegal ‘Ein al Ghazal’ produced 2400 kg/ha of seeds with only 450 mm rain. Long-duration types require 600–1500 mm. Yard-long bean tolerates high rainfall; a fully-grown crop has a water requirement of 6–8 mm per day. Cultivation in the dry season with ample irrigation is practised, as well as cultivation during the rainy season, although sowing during the rainy season can result in damage to the emerging or young plants. Most cowpea cultivars are quantitative short-day plants, but day-neutral types also exist. Cowpea can be grown on a wide range of soil types with pH 5.5–6.5(–7.5), provided they are well drained. It is moderately sensitive to salinity and exhibits greater salt tolerance during later stages of growth.
Propagation and planting
Farmers normally use farm-saved seed for planting. The 1000-seed weight of cowpea is 150–300 g. The seed rate for pure stands is 15–30 kg/ha. Seed dressing with an insecticide and a fungicide (e.g. thiram) prior to planting is recommended. In tropical Africa cowpea is mostly grown intercropped or in relay with other crops such as yam, maize, cassava, groundnut, sorghum or pearl millet. Pure stands are not common except in the coastal areas of East Africa, and also in Asia and Western countries. In the forest and Guinea savanna zones of West Africa cowpea is mainly intercropped with maize, cassava, yam or groundnut, at a very low density (1000–5000 hills/ha). In the northern Guinea savanna zone cowpea is intercropped with groundnut and/or sorghum. The component crops are normally planted in rows with systematic intercropping patterns, which may vary from alternate-row intercropping to within-row intercropping with varying distance, giving a grid of groundnut or sorghum rows crossed by the cowpea rows every 2–3 m.
The cowpea population is low, with individual plants spread over a 2–3 m radius. In the Sudan savanna cowpea is intercropped with pearl millet, sorghum and/or groundnut, in diverse and complex traditional intercropping patterns with varying interplant distances and planting sequences of component crops. For instance, in some areas of Kano state in Nigeria (Minjibir and Gezawa areas) pearl millet is planted first in rows 1.5–3 m apart at the onset of the rains (May–June), with 1 m distance within the row, resulting in 4000–6000 hills/ha. When the rains become more stable towards the end of June, pulse-type early cowpea cultivars are planted between alternate pearl millet rows at a distance of 1 m. Fodder-type, late-maturing cowpea is planted later, in mid-July, in the remaining rows. When grown as a sole crop, cowpea is sown at densities ranging from 22,000 plants/ha for prostrate types to 100,000 plants/ha for erect types. Recommended planting distances for sole-cropped cowpea in Kenya are 60 cm between rows and 20 cm within the row. In Swaziland spacings are 50 cm between rows and 15 cm within the row for erect cultivars. For landraces the spacings are much wider, especially for the dual purpose types. Often 2–3 seeds are sown per pocket, with thinning afterwards, e.g. during weeding. The sowing depth is 4–5 cm. Cowpea requires soil with fine tilth for good root growth. Generally, deep ploughing followed by harrowing provides an adequate tilth. In intercropping systems, tillage normally follows the crop in which cowpea is interplanted. Peri-urban vegetable farmers use special cultivars for ratoon cropping of the leaves. They broadcast the seed on raised beds, made on well-manured soil, aiming at a dense stand of about 25 plants per m2. Farmers in Africa use yard-long bean seed harvested from a previous crop, in contrast to South-East Asia, where many farmers procure healthy seed from improved cultivars. The 1000-seed weight of yard-long bean is lower than that of cowpea, 100–150 g. Seed is sown in pockets of 2–4 seeds. Cultivation is usually on raised beds for good drainage and easy surface irrigation and for easy staking and harvesting. Earthing-up the young plants protects the shallow root system and gives support to the seedlings. Some farmers apply mulch of rice straw, but this is not a common practice. Cowpea derives a significant amount of its nitrogen requirements from the atmosphere and may leave 75–150 kg/ha in the soil for the benefit of the succeeding crop. If cowpea is grown in localities where it has not been grown recently, inoculation with nitrogen-fixing bacteria has been found to be beneficial. Cowpea requires phosphorus for nodulation and root growth. Incorporation of 25 kg/ha P is adequate for plant growth in phosphorus-deficient soils. In soils known to be deficient in potassium, application of 25 kg/ha K is recommended. Cowpea must be kept weed free during the early stages of growth. Two to three weedings during the first 6 weeks after planting are recommended; once the crop is established it outcompetes weeds. Weeding is usually done by superficial hoeing. Cowpea grown as a vegetable and yard-long bean have a high mineral uptake. In soils of average fertility an application is recommended of 5–10 t/ha of farmyard manure during soil preparation, together with N 20 kg/ha, K 25 kg/ha and P 40 kg/ha. Three weeks after emergence a top dressing of 50 kg/ha urea is given. 
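A quick arithmetic sketch relating the spacings and seed rates quoted above to per-hectare numbers (the conversion is straightforward geometry; the agronomic recommendations themselves come from the text):

```python
def plants_per_ha(row_cm: float, within_row_cm: float) -> float:
    """Plants per hectare for a rectangular grid (1 ha = 10^8 cm^2)."""
    return 1e8 / (row_cm * within_row_cm)

# Kenyan recommendation for sole-cropped cowpea: 60 cm x 20 cm.
print(f"{plants_per_ha(60, 20):,.0f} plants/ha")  # ~83,333

# Seeds per hectare implied by the 15-30 kg/ha seed rate at a
# 1000-seed weight of 150-300 g (i.e. 0.15-0.30 g per seed):
for rate_kg_ha, seed_wt_g in ((15, 0.30), (30, 0.15)):
    seeds = rate_kg_ha * 1000 / seed_wt_g
    print(f"{rate_kg_ha} kg/ha at {seed_wt_g} g/seed -> {seeds:,.0f} seeds/ha")
```

At the 60 cm × 20 cm spacing this gives about 83,000 plants/ha, in line with the roughly 100,000 plants/ha quoted for erect types; the seed-rate range works out to some 50,000–200,000 seeds/ha, consistent with sowing 2–3 seeds per pocket and thinning.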
In yard-long bean, 2–2.5 m long stakes are inserted near the seed beds before sowing or during the first two weeks after emergence, before the plants have reached a height of 30 cm. A cheap method of staking is to relay-plant yard-long bean next to the stems of maize before or just after the cobs are harvested.
Diseases and pests
Cowpea is susceptible to a wide range of diseases and pests. Yard-long bean suffers from the same diseases and pests as cowpea but seems less susceptible than cowpea under humid conditions. Fungal diseases are more troublesome during the rainy season, whereas insect and mite pests and virus diseases cause more damage during the dry season. The major fungal diseases are anthracnose (Colletotrichum lindemuthianum), Ascochyta blight (Phoma exigua), brown blotch (Colletotrichum truncatum), leaf smut (Protomycopsis phaseoli), leaf spot (Cercospora canescens, Septoria vignae, Mycosphaerella cruenta, synonym: Pseudocercospora cruenta), brown rust (Uromyces appendiculatus), scab (Elsinoë phaseoli), powdery mildew (Erysiphe polygoni), pythium soft stem rot (Pythium aphanidermatum), stem canker (Macrophomina phaseolina) and web blight (Thanatephorus cucumeris, synonym: Rhizoctonia solani). Crop rotation and the use of chemicals and resistant cultivars are necessary for integrated disease control. Bacterial diseases include bacterial blight (Xanthomonas campestris pv. vignicola), which occurs worldwide, and bacterial pustules (Xanthomonas axonopodis pv. glycines, synonym: Xanthomonas campestris pv. vignaeunguiculatae) reported from Nigeria. These bacteria are seed-transmitted and secondary spread occurs by wind-driven rain. Control measures include the use of pathogen-free seeds, seed treatment with a mixture of antibiotics and fungicides such as streptocycline plus captan, and strict crop rotation. Resistance genes are available for bacterial blight and bacterial pustules. Many viruses attack Vigna unguiculata. Some viruses of economic importance are cowpea aphid-borne mosaic potyvirus (CABMV), cowpea mottle carmovirus (CPMoV), cowpea yellow mosaic virus (CYMV), black eye cowpea mosaic potyvirus or bean common mosaic potyvirus (BCMV), cucumber mosaic cucumovirus (CMV-CS) and cowpea golden mosaic virus (CPGMV). Some of the viruses are seedborne, while aphids, white flies and beetles perform field transmission. Control measures include use of healthy seed of resistant cultivars if available, and weeding to remove alternative hosts. In poor sandy soils, cowpea is attacked by root-knot nematodes (Meloidogyne spp.). It is also a host plant of, among others, reniform nematodes (Rotylenchus spp.), root-lesion nematodes (Pratylenchus spp.) and lance nematodes (Hoplolaimus spp.). Crop rotation and resistant cultivars are used to control nematodes. Insect pests are also a major factor limiting cowpea production and may even cause total seed loss. In tropical Africa much damage is caused by cowpea aphids (Aphis craccivora), flower thrips (Megalurothrips sjostedti), legume pod borers (Maruca vitrata, Etiella zinckenella), and pod bugs and seed suckers (e.g. Clavigralla tomentosicollis, synonym: Acanthomia tomentosicollis). Lygus beetle (Lygus hesperus), cowpea curculio (Chalcodermus aeneus) and green leafhoppers (Empoasca spp.) are of less importance. Yard-long bean is especially attractive to aphids (Myzus persicae, Aphis gossypii), green stink bug (Nezara viridula) and red spider mite (Tetranychus spp.); greasy cutworms (Agrotis ipsilon) often cause damage just after emergence.
The bean shoot fly (Ophiomyia phaseoli) is a common pest; the larvae tunnel in the leaves and stems, and severely attacked young plants will die, whereas older plants will suffer from hampered growth and serious yield reduction. Lodging incidence is generally high in infested fields; tolerant cultivars may produce aerial roots above the wound. Another common pest is the bean pod fly (Melanagromyza sojae). The larvae damage the petioles and young pods. Control of insect pests involves protecting the seed with a systemic insecticide (e.g. carbofuran) at sowing, or applied as a solution to the emerging seedlings in the planting holes. Plant debris and affected plants must be burned. Cowpea seeds are extremely vulnerable to storage pests, with the cosmopolitan cowpea weevil (Callosobruchus maculatus) being the major storage pest. Measures to reduce pest damage include application of vegetable oil, neem (Azadirachta indica A.Juss.) oil or wood ash, roasting, bagging the seeds in airtight plastic bags, and storing as whole pods. Use of chemicals, resistant cultivars, biological control and proper crop management such as intercropping and weeding are necessary for integrated pest management. Chemical control of insects is common practice on yard-long bean, but not on cowpea. Because of the risks for farmer and consumer (especially when leaves are harvested), these sprayings must be reduced to the strict minimum. Two parasitic weeds are a serious problem: Alectra vogelii Benth., prevalent in the southern savanna regions of West Africa, East Africa and southern Africa, and Striga gesnerioides (Willd.) Vatke, prevalent in the savanna regions of West and Central Africa. Crop rotation, deep cultivation, intercropping, early planting and use of resistant cultivars reduce infestation by these parasitic weeds.
Cowpea leaves are picked in a period from 4 weeks after emergence of the seedlings to the onset of flowering. In crops grown for the seed, farmers often harvest 10–20% of the leaves before the start of flowering with little detrimental effect on the seed yield. Stronger defoliation increasingly reduces flowering, fruiting and seed yield. Growers of leafy cowpea types cut the plants at about 10 cm above the ground for a succession of new shoots (ratooning). Green pods are harvested when the seed is still immature, 12–15 days after flowering. Harvesting of dry seed is done when at least two-thirds of the pods are dry and yellow. In indeterminate types harvesting is complicated by prolonged and uneven ripening; for some landraces harvesting may require 5–7 rounds. Mature seeds are usually harvested by hand. Sometimes plants are pulled out when most of the pods are mature. In the complex traditional intercrop patterns of Kano state (Nigeria), early cowpea and sorghum cultivars are harvested at the end of August or the beginning of September. The late cowpea and sorghum cultivars are harvested after the onset of the dry season, between October and November, when the leaves show signs of wilting. The fodder types are uprooted or cut from the base and rolled into bundles with the leaves intact. These bundles are then kept on roof tops or in tree forks for drying, and are used or sold in the peak dry season. The first picking of yard-long bean pods in the desirable stage takes place 6–7 weeks after planting, depending on cultivar and market requirements. Normally the pods are picked when the outline of the seeds is just visible.
Picking must be meticulous, because pods which are passed over until the next harvest will become tough and discoloured, with swollen seed, and may exhaust the plant. Successive harvests take place at least once a week (twice a week for better-tuned grading) during 4–8 weeks.
Farmers may harvest up to 400 kg/ha of cowpea leaves in a few rounds with no noticeable reduction of seed yields. In Nigeria climbing cultivars yielded 9–17 t/ha of fresh pods, whereas decumbent cultivars yielded 6–15 t/ha. The mean dry seed yield of the same cultivars was 1.4–1.7 t/ha. The world average yield of dry cowpea seed is low, 240 kg/ha, and for fodder it is 500 kg/ha (air-dried leafy stems). Average yield of dry cowpea seeds under subsistence agriculture in tropical Africa is 100–500 kg/ha. The average seed yield in Niger is 120 kg/ha, in Nigeria 400 kg/ha, and in the United States 900 kg/ha. Apart from the effects of diseases and pests, the low yields are partly explained by the fact that the crop is mostly grown at low densities in intercropping systems, shaded by taller cereals. Furthermore, cowpea is often sown later in the rainy season, which results in a shorter crop duration due to photoperiod-sensitivity. A yield potential of 3 t/ha of seed and 4 t/ha of hay can be achieved in sole-cropping with good management. In the United States seed yields up to 7 t/ha have been obtained. For yard-long bean, a total yield of 15 t/ha in a harvest period of at least one month is considered satisfactory, but yields as high as 30 t/ha have been reported.
Handling after harvest
Harvested leaves cannot be kept for long; they have to be sold within 2 days. The shoots can be kept longer by putting them in a basin with water. Cowpea leaves are frequently dried in the sun for preservation, either after boiling and squeezing to black balls, or directly as whole or broken leaves, or as powder. Green yard-long bean pods are tied in bundles of 20–40 and packed in baskets or crates for transport to the market. Yard-long bean is less susceptible to loss of weight by transpiration and to transport damage than most other vegetables. In cool storage (8ºC) the pods will keep for 4 weeks. Immature fresh cowpea seeds have a limited shelf life if stored at ambient temperatures, but at 8°C they can stay fresh for 8 days. In Europe, the United States and Japan, immature tender green pods are sometimes frozen or canned. As a pulse, the threshed seed should be dried thoroughly to a moisture content of 14% or less for good storability.
The International Institute of Tropical Agriculture (IITA), Ibadan, Nigeria holds a collection of over 15,000 accessions of the cultivated cowpea and 1000 accessions of related wild Vigna; the University of California, Riverside, United States holds 5000 accessions. IITA characterized 8500 accessions for resistance to Maruca pod borer and sucking bugs, and 4000 for resistance to flower thrips, bruchids and viruses. The level of resistance to insect pests is high in the wild species Vigna vexillata (L.) A.Rich., especially to pod sucking bugs and Maruca pod borer. Many accessions of wild Vigna species possess high levels of resistance to the storage weevil. Small collections of yard-long bean are present at the Asian Vegetable Research and Development Center (AVRDC), Shanhua, Taiwan, and the Institute of Crop Germplasm Resources (CAAS), Beijing, China, and in national institutes in Asia. Only very small collections of catjang cowpea exist.
In Asia landraces of vegetable and pulse types of Vigna unguiculata are in danger of being lost since improved cultivars are widely grown. This process has also started in Africa. Much work has been performed on Vigna unguiculata breeding, mostly for cultivars grown as a pulse, and in South-East Asia for yard-long bean. In the United States special cowpea cultivars for harvesting pods and young seeds have been developed. Selection criteria for cowpea concern resistances (to insect pests, diseases, nematodes, parasitic weeds, drought), plant type, seed type, yield and cropping system. IITA has a large breeding programme and distributes cowpea germplasm, breeding material and cultivars. In collaboration with the International Livestock Research Institute (ILRI), IITA initiated a breeding programme to develop improved cowpea cultivars that provide both seed for human consumption and fodder for livestock in the dry season. Improved cultivars have also been developed for intercropping. National programmes in many countries have released improved cowpea cultivars with resistances to bacterial blight, cowpea aphid-borne mosaic potyvirus, cowpea aphids, cowpea curculio, root-knot nematodes, cowpea weevil and parasitic weeds. New early maturing cultivars were developed for hot and dry conditions, e.g. ‘Ein al Ghazal’ and ‘Mouride’. Improved cultivars are often short, erect, determinate types selected for optimal dry seed production and less suitable for the traditional leaf picking. Wild African Vigna species have been successfully crossed with Vigna unguiculata. Breeding work on African vegetable types is scarce. Simlaw Seeds in Kenya has commercialized ‘Kenduke-1’, a semi-trailing type selected for large leaves with an attractive green colour and good taste and that can be picked for a long time. In Senegal the leaf vegetable ‘Fuuta’ with a vegetative period of up to 50 days was selected. The Crop Breeding Institute in Harare, Zimbabwe, selected dual-purpose cultivars with high leaf and seed yield; the Zimbabwean cultivar ‘Chigwa’ is specially suited for use as a leaf vegetable because of late flowering. ‘Melakh’ is a dual-purpose cultivar bred for dry and fresh seed production in Senegal. Breeding of improved cultivars of yard-long bean by backcrossing and pedigree selection has been performed in South-East Asia. Yield is strongly correlated with pod length and the number of pods per plant. Resistance to bean flies would be welcome but seems difficult to achieve. East-West Seed Company in Thailand selected cultivars adapted to a wide range of growing conditions, e.g. ‘Aba’, with early maturity (first harvest 45 days after sowing), high yield, greyish green pods 60–70 cm long, and excellent market quality. Genetic linkage maps of cowpea have been constructed using RAPD, AFLP and RFLP; the linkage maps have been used to locate genes conferring resistance to Striga gesnerioides, several viruses and root-knot nematodes, as well as to locate quantitative trait loci (QTLs) for time to flowering, time to maturity, pod length, pod and seed weight, and resistance to aphids. Direct organogenesis of cowpea has been achieved using hypocotyl, epicotyl or cotyledon tissue. Regeneration of cowpea via somatic embryogenesis has been attempted, but callus failed to regenerate plants at an acceptable frequency. Genetic transformation has been proposed, e.g. 
to achieve resistance to pests by incorporating Bacillus thuringiensis (Bt) genes and α-amylase inhibitor genes, but a robust system for stable genetic transformation of cowpea is not yet available. Cowpea serves as a cheap source of plant protein, especially in West Africa. It plays an important role in multiple cropping systems and is a major component of integrated crop/livestock systems in West Africa. Diseases and pests are the major constraints in cowpea production. Resistance breeding could be of utmost importance to overcome these constraints, with an increasingly important role for biotechnological tools. Future improvement also relies on the collection of landraces and their wild relatives and their incorporation into breeding programmes. The prospects for vegetable cowpea in Africa are bright. Apart from traditional dual-purpose cowpea cultivars (harvested as pulse and for the leaves) there is a need for special vegetable types. As a leaf vegetable: dwarf plants with erect or prostrate habit, long vegetative period, tender shoots and leaves. For immature seed: dwarf plants with erect or prostrate, determinate habit. For fresh pods: pods about 15 cm long (replacing French bean in hot lowland regions). As a fruit vegetable, it seems logical to replace cowpea by yard-long bean, because of its superior yield and quality. Asian cultivars should be tested on suitability for tropical African conditions because, if combined with market development, yard-long bean has the potential to become an excellent enrichment of the available vegetable assortment. • Ehlers, J.D., 1997. Cowpea (Vigna unguiculata). Field Crops Research 53(1–3): 187–204. • Grubben, G.J.H., 1993. Vigna unguiculata (L.) Walp. cv. group Sesquipedalis. In: Siemonsma, J.S. & Kasem Piluek (Editors). Plant Resources of South-East Asia No 8. Vegetables. Pudoc Scientific Publishers, Wageningen, Netherlands. pp. 274–278. • Hall, A.E. & Coyne, D. (Editors), 2003. Research highlights of the Bean/Cowpea Collaborative Research Support Program 1981–2002. Field Crops Research 82(2–3), Special Issue. 242 pp. • Langyintuo, A.S., Lowenberg-deBoer, J., Faye, M., Lambert, D., Ibro, G., Moussa, B., Kergna, A., Kushwaha, S., Musa, S. & Ntoukam, G., 2003. Cowpea supply and demand in West and Central Africa. Field Crops Research 82(2–3): 215–231. • Ng, N.Q. & Singh, B.B., 1997. Cowpea. In: Fuccillo, D., Sears, L. & Stapleton, P. (Editors). Biodiversity in trust: conservation and use of plant genetic resources in CGIAR Centres. Cambridge University Press, Cambridge, United Kingdom. pp. 82–99. • Pandey, R.K. & Westphal, E., 1989. Vigna unguiculata (L.) Walp. In: van der Maesen, L.J.G. & Somaatmadja, S. (Editors). Plant Resources of South-East Asia No 1. Pulses. Pudoc, Wageningen, Netherlands. pp. 77–81. • Pasquet, R.S. & Baudoin, J.-P., 1997. Le niébé. In: Charrier, A., Jacquot, M., Hamon, S. & Nicolas, D. (Editors). L’amélioration des plantes tropicales. Centre de coopération internationale en recherche agronomique pour le développement (CIRAD) & Institut français de recherche scientifique pour le développement en coopération (ORSTOM), Montpellier, France. pp. 483–505. • Singh, S.R. & Rachie, K.O. (Editors), 1985. Cowpea research production and utilization. John Wiley and Sons, Chichester, United Kingdom. 460 pp. • Singh, B.B., Mohan Raj, D.R., Dashiell, K.E. & Jackai, L.E.N. (Editors), 1997. Advances in cowpea research. International Institute of Tropical Agriculture, Ibadan, Nigeria. 375 pp. • Vanderborght, T. & Baudoin, J.P., 2001. Cowpea. 
In: Raemaekers, R.H. (Editor). Crop production in tropical Africa. DGIC (Directorate General for International Co-operation), Ministry of Foreign Affairs, External Trade and International Co-operation, Brussels, Belgium. pp. 334–348. • Allen, D.J., Thottappilly, G., Emechebe, A.M. & Singh, B.B., 1998. Diseases of cowpea. In: Allen, D.J. & Lenné, J.M. (Editors). The pathology of food and pasture legumes. CAB International, Wallingford, United Kingdom. pp. 267–324. • Bhat, R.B., Etejere, E.O. & Oladipo, V.T., 1990. Ethnobotanical studies from Central Nigeria. Economic Botany 44(3): 382–390. • Burkill, H.M., 1995. The useful plants of West Tropical Africa. 2nd Edition. Volume 3, Families J–L. Royal Botanic Gardens, Kew, Richmond, United Kingdom. 857 pp. • de Vries, J. & Toenniessen, G., 2001. Securing the harvest: biotechnology, breeding and seed systems for African crops. CAB International, Wallingford, United Kingdom. 224 pp. • Ezedinma, F.O.C., 1973. Effects of defoliation and topping on semi-upright cowpeas (Vigna unguiculata (L.) Walp.) in a humid tropical environment. Experimental Agriculture 9(3): 203–207. • Hall, A.E., Cisse, N., Thiaw, S., Elawad, H.O.A., Ehlers, J.D., Ismail, A.M., Fery, R.L., Roberts, P.A., Kitch, L.W., Murdock, L.L., Boukar, O., Phillips, R.D. & McWatters, K.H., 2003. Development of cowpea cultivars and germplasm by the Bean/Cowpea CRSP. Field Crops Research 82(2–3): 103–134. • Kahn, J., 1993. Studies on interference between newly defined bean-infecting potyviruses. [Internet] WAU Dissertation Abstracts No 1689, Wageningen, Netherlands. http://library.wur.nl/wda/abstracts/ab1689.html. Accessed June 2004. • Madamba, R., 1997. The nutritive value of indigenous grain legumes and their food role at the household level. In: Adipala, E., Tenywan, J.S. & Openga-Latingo, M.W. (Editors). Proceedings of the conference of African crop science, 13–17 January, 1996, Pretoria, South Africa. Volume 3. Pretoria, South Africa. pp. 1255–1258. • Madamba, R., 2001. Cowpea leaf, an alternative vegetable in Zimbabwe. Report of DFID’s Crop Post-Harvest Programme’s Indigenous Vegetable Project. Crop Breeding Institute, Harare, Zimbabwe. • Magkoko, C., 2001. Overview of production and post-harvest constraints of cowpea in Botswana. In: Kitch, L. & Tafadzwa Sibanda (Editors). Post-harvest storage technologies for cowpea (Vigna unguiculata) in Southern Africa. Copublication of Food and Agriculture Organisation (FAO), Bean/Cowpea Collaborative Research Support Programme (CRSP) and Crop Post-harvest Programme (CPHP), Harare, Zimbabwe. pp. 82–83. • Messiaen, C.-M., 1989. Le potager tropical. 2nd Edition. Presses Universitaires de France, Paris, France. 580 pp. • Ouédraogo, J.T., Gowda, B.S., Jean, M., Close, T.J., Ehlers, J.D., Hall, A.E., Gillaspie, A.G., Roberts, P.A., Ismail, A.M., Bruening, G., Gepts, P., Timko, M.P. & Belzile, F.J., 2002. An improved genetic linkage map for cowpea (Vigna unguiculata L.) Combining AFLP, RFLP, RAPD, biochemical markers, and biological resistance traits. Genome 45(1): 175–188. • Pasquet, R.S., 1998. Morphological study of cultivated cowpea (Vigna unguiculata (L.) Walp.). Importance of ovule number and definition of cv gr Melanophthalmus. Agronomie 18: 61–70. • Popelka, J.C., Terryn, N. & Higgins, T.J.V., 2004. Gene technology for grain legumes: can it contribute to the food challenge in developing countries? Plant Science 167: 195–206. • Schippers, R.R., 2000. African indigenous vegetables. An overview of the cultivated species. 
Natural Resources Institute/ACP-EU Technical Centre for Agricultural and Rural Cooperation, Chatham, United Kingdom. 214 pp. • Singh, B.B., Ajeigbe, H.A., Tarawali, S.A., Fernandez-Rivera, S. & Musa Abubakar, 2003. Improving the production and utilization of cowpea as food and fodder. Field Crops Research 84(1–2): 169–177. • Ubi, B.E., Mignouna, H. & Thottapilly, G., 2000. Construction of a genetic linkage map and QTL analysis using a recombinant inbred population derived from an intersubspecific cross of cowpea (Vigna unguiculata (L.) Walp.). Breeding Science 50(3): 161–172. • Uguru, M.I., 1996. A note on Nigerian vegetable cowpea. Genetic Resources and Crop Evolution 43(2): 125–128. • Uguru, M.I., 1998. Traditional conservation of vegetable cowpea in Nigeria. Genetic Resources and Crop Evolution 45: 135–138. • USDA, 2004. USDA national nutrient database for standard reference, release 17. [Internet] U.S. Department of Agriculture, Agricultural Research Service, Nutrient Data Laboratory, Beltsville Md, United States. http://www.nal.usda.gov/fnic/foodcomp. Accessed December 2004.
Sources of illustration
• Pandey, R.K. & Westphal, E., 1989. Vigna unguiculata (L.) Walp. In: van der Maesen, L.J.G. & Somaatmadja, S. (Editors). Plant Resources of South-East Asia No 1. Pulses. Pudoc, Wageningen, Netherlands. pp. 77–81.
Correct citation of this article:
Madamba, R., Grubben, G.J.H., Asante, I.K. & Akromah, R., 2006. Vigna unguiculata (L.) Walp. In: Brink, M. & Belay, G. (Editors). PROTA 1: Cereals and pulses/Céréales et légumes secs. [CD-Rom]. PROTA, Wageningen, Netherlands.
[Captions of illustrations not reproduced here: wild and planted (distribution map); 1, inflorescence; 2, fruiting branch; 3, seed (line drawing); flowering plants in the field; flowering and fruiting plant; Sesquipedalis Group, plants on the market; hedge of Sesquipedalis Group; the stringless landrace ‘Eje-O’Ha’.]
A torsional pendulum consists of an object suspended by a wire of a certain stiffness. The object is turned through an angle and released from rest, resulting in harmonic motion as the object rotates back and forth. This Demonstration illustrates this type of harmonic motion, which follows Newton's second law for rotations, τ = Iα. The torque on this pendulum is directly proportional to the angle θ through which it is turned, by a factor of the torsional constant κ, which is a measure of the stiffness of the wire: τ = −κθ. Since α = d²θ/dt², the equation of motion is I d²θ/dt² = −κθ, which is a second-order differential equation. The solution of this equation as a function of time is θ(t) = θ₀ cos(ωt), where ω = √(κ/I) is the angular frequency. Select any of the four object types, and change the mass, radius/length, torsional constant, and initial angular displacement. The left graphic is a 2D representation of the angle through which the object is currently turned. The right graphic is a 3D depiction of the object rotating in space. Open the "time" slider and press play to animate the rotation of the object.
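A minimal numerical sketch of this motion, assuming the suspended object is a solid disk (I = ½mr²; the Demonstration's other object types would use different moment-of-inertia formulas) and using illustrative parameter values, not values from the Demonstration:

```python
import math

# Illustrative parameters (assumptions for the sketch).
mass = 0.5                  # kg
radius = 0.1                # m
kappa = 0.05                # N*m/rad, torsional constant of the wire
theta0 = math.radians(30)   # initial displacement; released from rest

# Moment of inertia of a solid disk about its central axis: I = (1/2) m r^2
inertia = 0.5 * mass * radius ** 2
omega = math.sqrt(kappa / inertia)   # angular frequency, rad/s
period = 2 * math.pi / omega

print(f"omega = {omega:.2f} rad/s, period = {period:.2f} s")

# theta(t) = theta0 * cos(omega * t): sample one full oscillation.
for i in range(9):
    t = i * period / 8
    theta_deg = math.degrees(theta0 * math.cos(omega * t))
    print(f"t = {t:5.2f} s   theta = {theta_deg:6.1f} deg")
```

With these numbers, ω ≈ 4.47 rad/s and the period is about 1.4 s; the sampled angles swing from +30° down to −30° and back, as expected for release from rest.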
DENVER (AP) — Planners for a prescribed burn that apparently sparked a wildfire in Jefferson County acknowledged there was a potential for fires to escape and pose a “significant threat” to nearby homes.

However, Colorado State Forest Service officials thought it was more likely that they would be able to put out any escaped fires before they got that far, partly because of the crews and water on site. The plan, released Friday, also states that forest thinning would help protect those homes from a potential wildfire in the future.

The plan dates from 2006 and covers a series of burns being done in the area for Denver Water. Under the plan, nearby residents were supposed to get warning letters. The Forest Service has declined to say whether that happened, citing an independent review of the burn.

(© Copyright 2012 The Associated Press. All Rights Reserved. This material may not be published, broadcast, rewritten or redistributed.)
Silent dog whistles, also called Galton whistles after the inventor, Francis Galton, are useful whenever you need to give Rover a clear signal. Although you can't hear it, a blast on one of these small devices reaches your dog's ears easily, allowing you to communicate a training cue or simply get a distracted pooch's attention back where it belongs.
Why Do Silent Whistles Work?
Humans can hear sounds up to about 23,000 hertz, but Rover can hear far higher frequencies than you can. Silent whistles produce a noise between 23,000 and 45,000 hertz; it may sound like a low hiss of air to you, but your canine buddy perceives it as a shrill blast. Not all dogs respond equally to the sound of a silent whistle, but many -- especially small dogs -- seem to show an immediate interest in the noise, which may be reminiscent of the high squeaks made by tasty rodents in the wild.
Whistling for Attention
Because the noise made by a silent dog whistle attracts the attention of many dogs naturally, it can be a useful tool for getting Rover's focus back on you when he is distracted or doing something wrong. It's also a lot more dignified than yelling at the top of your lungs when you catch your beloved pup digging in the rosebushes at the furthest corner of the yard; the neighbors won't even have to know what's going on behind your privacy fence.
Whistling for Training
Rover's no dummy. Just as he can learn to respond to dozens of voice commands or hand signals, he can also learn to respond to a specific pattern of whistles with a specific response. A silent whistle makes a perfect training cue for dogs working in the field, far from their owners. It's clear, it's consistent and it carries over a distance. With careful training, Rover can become an expert, responding instantly to seemingly undetectable cues from you.
Whistling for Unwanted Barking
Several manufacturers offer collars that automatically produce an ultrasonic whistle whenever your dog barks. The theory behind these products is that the same whistle that distracts Rover from digging in the rosebushes can distract him from barking as well. According to the Humane Society of the United States, however, using such a method to control barking does not address the underlying issue and is therefore undesirable, since it does not reduce the feeling of stress that causes the dog to bark.
- University of Toledo: High-Frequency Hearing
- Louisiana State University: How Well Do Dogs and Other Animals Hear?
- Oregon State University: 4-H Sporting Dog Project Member Guide
- Power of the Dog: Things Your Dog Can Do That You Can't; Les Krantz
- The Humane Society of the United States: Dog Collars
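A toy sketch of the hearing-range arithmetic above, using the article's cutoff figures (real hearing limits vary with the individual animal and with age, so treat the numbers as rough):

```python
# Cutoffs taken from the article's figures; actual hearing ranges
# vary by individual and age.
HUMAN_UPPER_HZ = 23_000    # approximate upper limit of human hearing
WHISTLE_UPPER_HZ = 45_000  # top of the silent-whistle range

def who_hears(freq_hz: float) -> str:
    """Classify a tone by who can hear it, per the article's cutoffs."""
    if freq_hz <= HUMAN_UPPER_HZ:
        return "audible to humans (and dogs)"
    if freq_hz <= WHISTLE_UPPER_HZ:
        return "silent to humans, a shrill blast to a dog"
    return "above the silent-whistle range"

for f in (10_000, 30_000, 50_000):
    print(f"{f:>6} Hz: {who_hears(f)}")
```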
In my eighth-grade history class, we recently were challenged to learn about something that could change the world. An ongoing problem in America is the shortage of organ donors. As of now, there are more than 112,777 people in the United States who need transplants, but there are only 39,392 active donors available. Recently, a desperate mother used Facebook to find a donor for her son, who was diagnosed with Goodpasture Disease, a disease that shuts down the kidneys. She estimated that, within two days, about 300,000 people saw the video she posted on Facebook requesting a kidney match for her son. In that time, 10 donors became available, eventually resulting in one perfect match.
House of Representatives candidate sees parallel with today
Germany tried to “monetize” its World War One reparations debt through hyperinflation during the period after the victors imposed reparations on the struggling Weimar Republic. At the height of the hyperinflation in 1923, a decade before Adolf Hitler and the Nazi Party seized power, it was actually cheaper to burn marks as fuel for heating than it was to use them to buy firewood to burn for the same purpose. At the time, for every 1% of government spending raised through taxes, the remaining 99% was financed with hyperinflated fiat currency.
We caught up with Constitutionalist candidate for U.S. House of Representatives District 25 Wes Riddle at his office in Belton. Mr. Riddle is a retired Colonel of the U.S. Army and a historian trained at Oxford University who has taught history at two area universities. Asked if there are any historical events that parallel those of today, Col. Riddle said that the current debt crisis, the U.S. Treasury's policy of monetizing its debt to China, and the fact that German bond raters have already declared U.S. Treasury bonds in default remind him of depression-era conditions in the Weimar Republic. Unlike Germany's Weimar Republic, America seems to have won the war, but lost the peace.
On September 15, the long process begins to dismantle two fish-killing dams on the Elwha River. On that day, removal will begin at Glines Canyon Dam. Two days later, on September 17, tribal dignitaries, politicians and hundreds, or thousands, of people will gather to celebrate the removal of the Elwha dams. The largest dam removal undertaking in U.S. history was started more than 20 years ago by local tribal members and visionary activists with legal support from Earthjustice.
[Photo: The Elwha Dam. Once the Elwha’s waters flow free again, experts predict that the river’s salmon population will swell. (NPS)]
“What will happen on the Elwha River with the dams coming down is a historic return of a wild river and its legendary fish runs,” said Todd True, a long-time attorney for Earthjustice. “It is also a story about how enforcing our environmental laws might take years to show results but eventually can bring about lasting change.”
According to experts, removing the two massive dams on the Elwha River, which runs through Washington's Olympic Peninsula, is considered one of the most promising acts of salmon habitat restoration in the region and the nation. Once the Elwha’s waters flow free again, experts predict that the river’s salmon population will swell from the current number of about 3,000 to nearly 400,000 fish spawning annually by 2039. The river will be especially important as climate change reduces salmon habitat elsewhere. The Elwha is expected to remain a cold, clean river because it flows protected through undisturbed forests in Olympic National Park.
Completed in 1913, the 108-foot-high Elwha Dam is about 4 miles from the mouth of the Elwha River. The 210-foot-high Glines Canyon Dam, completed in 1927, is about 10 miles farther upriver. Both dams, built to provide electricity for a paper mill in Port Angeles, were constructed without fish ladders, which blocked salmon from most of their historic spawning habitat.
The dams’ removal had been proposed back in the 1970s and early 1980s. Richard Rutz, with advice and legal support from Earthjustice, raised what is believed to be the first challenge to a dam’s operating permit, arguing the dams should come out. One of the two dams slated for removal was built on a part of the river that later became part of Olympic National Park. The dam was never legally grandfathered out of the national park, and laws protecting national parks made clear the dam couldn’t legally be relicensed.
Rutz, working with local environmental groups and Earthjustice, investigated the history and technical information for the two Elwha dams, which had come due for relicensing in the late 1970s. “Back then, hydropower licensing and relicensing were routinely rubber-stamped by the Federal Energy Regulatory Commission (FERC),” explained Rutz. “Participation was limited to exclude the public and pro forma environmental reviews and approvals were conducted internally.”
An Earthjustice lawyer named Ron Wilson provided the legal expertise for Rutz. They argued that FERC was legally prohibited from relicensing the upper dam. Furthermore, the two dams operated together and the lower dam was unsafe alone, and consequently they both could not be relicensed and should come out. An independent review of the environmental impacts of relicensing the dams, sought by Rutz and others, concluded that the preferred alternative would be to remove the dams. The tribe and the environmental groups began to work together to achieve tear-down of the two dams.
Finally, with increasing public support from Rep. John Miller, R-Washington, Sen. Dan Evans, R-Washington, Sen. Bill Bradley, D-New Jersey and others, Congress passed the Elwha River Ecosystem and Fisheries Restoration Act, which was signed by President George H.W. Bush in 1992. The new law called for removal of the dams.

"When contractors start removing these fish-killing dams next month, it will be a great day for the Elwha River, its salmon, Olympic National Park and all the people of Washington," said Rutz. "A river once known for epic salmon runs will now likely become legendary again—our own Copper River."

The removal of the Elwha dams may inspire other restoration projects across the country. "People will see a truly remarkable event—an iconic river coming back to life," explained True. "They will start asking important questions about their own rivers after witnessing the benefits of the Elwha River returning to its historic state. Perhaps they'll be inspired to seek restoration of their own rivers too."

Demolition and removal of the dams are expected to take three years and eventually will allow the 45-mile Elwha River to run free once again.

Media contact: Todd True, Earthjustice, (206) 343-7340, ext. 1030

Earthjustice is the premier nonprofit environmental law organization. We wield the power of law and the strength of partnership to protect people's health, to preserve magnificent places and wildlife, to advance clean energy, and to combat climate change. We are here because the earth needs a good lawyer.
<urn:uuid:ab292dee-d3b2-46ee-9929-ad120115f94c>
CC-MAIN-2016-26
http://earthjustice.org/news/press/2011/biggest-u-s-dam-removal-to-restore-salmon-runs
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395039.24/warc/CC-MAIN-20160624154955-00028-ip-10-164-35-72.ec2.internal.warc.gz
en
0.953713
1,100
3.703125
4
Data Sources and Methods for the Air Health Indicator

Canadian communities for which the ground-level ozone (O3) and fine particulate matter (PM2.5) concentrations were used for the National Air Quality Indicators of the Canadian Environmental Sustainability Indicators (CESI) were considered. Selection for the Air Health Indicator (AHI) is based on the criteria of having a reasonably complete time series of pollution and weather measurements and enough daily mortality data. For each community, three types of data were used for the AHI: daily numbers of cause-specific deaths, air pollution concentrations, and potential confounders to the mortality-air pollution association.

3.1 Data source

3.1.1 Daily numbers of cause-specific deaths

The daily numbers of cause-specific deaths (non-accidental mortality data) were obtained from the national mortality database (Vital Statistics Database–Deaths) maintained by Statistics Canada. Based on the International Classification of Diseases (ICD), the mortality data included only deaths from internal causes (ICD-9 codes < 800 for years up to 1999 and ICD-10 codes A00–R99 for years 2000 onwards), excluding external causes such as injuries. Regarding cause-specific deaths in particular, we were interested in cardiopulmonary mortality related to the circulatory or respiratory system. For this specification, our mortality data were categorized into a cardiopulmonary group (ICD-9 codes between 390 and 520, and ICD-10 codes I00–I99 and J00–J99). The cardiopulmonary mortality data were extracted for a specified census division only where the census division of residence was the same as the census division of death occurrence.

3.1.2 Air pollution concentrations

The daily O3 and PM2.5 concentration data were obtained from the National Air Pollution Surveillance (NAPS) Network operated by Environment Canada. Established in 1969, NAPS provides accurate, long-term air quality data of a uniform standard across Canada, monitoring the quality of ambient (outdoor) air in populated regions through specific procedures for the selection and positioning of monitoring stations. For each NAPS monitoring station, the daily average concentration for a given day was calculated only if at least 75% of the 24 hourly concentrations for that day (i.e. at least 18 hourly concentrations) were available. Otherwise, it was recorded as missing. For each census division, the daily average concentration was averaged over monitoring stations if two or more stations were located in that census division. The metrics used for the concentrations were the daily 8-hour maximum (April to October) for O3 and the daily mean (April to October) for PM2.5.

3.1.3 Potential confounders to the mortality-air pollution association

As potential confounding variables to the exposure-mortality association, three factors were considered: time, temperature, and indicators for day of the week. Calendar time is included to control for both temporal and seasonal variation; daily temperature controls for the short-term effect of weather on daily mortality; and day of the week accounts for mortality that varies by day of the week. Specifically, to account for the weather effect, daily mean temperature data were obtained from the National Climate Data and Information Archive of Environment Canada. As for lifestyle factors such as smoking or cholesterol in the community, they do not vary meaningfully from day to day and thus can be ignored as confounders.
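To make the averaging rules in section 3.1.2 concrete, here is a minimal sketch of the two aggregation steps, assuming plain Python lists as input; the function names are ours for illustration, not taken from the AHI codebase.

```python
def daily_average(hourly_values):
    """Daily mean concentration, or None (recorded as missing) when fewer
    than 75% of the 24 hourly values (i.e. fewer than 18) are available."""
    valid = [v for v in hourly_values if v is not None]
    if len(valid) < 18:
        return None
    return sum(valid) / len(valid)

def census_division_average(station_daily_averages):
    """Average over the stations in a census division, skipping stations
    whose daily value is missing for that day."""
    valid = [v for v in station_daily_averages if v is not None]
    return sum(valid) / len(valid) if valid else None

# A station with 20 of 24 hours reported is kept; 17 of 24 is dropped.
print(daily_average([30.0] * 20 + [None] * 4))   # -> 30.0
print(daily_average([30.0] * 17 + [None] * 7))   # -> None
```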
3.2 Spatial coverage

Twenty Canadian communities were selected for O3. Eighteen communities were selected for PM2.5. Each community's geographic boundaries were defined by the census division associated with the city.

3.3 Temporal coverage

Yearly data for the years 1990 to 2010 were used for O3, and yearly data for the years 2001 to 2010 were used for PM2.5.

3.4 Data completeness

At the time of the modeling of the AHI, only the 1990 to 2007 mortality data were sufficiently complete and available in the correct format. The indicator values reported for years 2008 to 2010 should be considered preliminary, as they are approximated using the averages of annual national risk estimates from the previous periods (1990 to 2007 for O3 and 2001 to 2007 for PM2.5). A reasonable assumption was made that the consistency observed in these estimates continued. The latest year used for the air pollution concentrations is 2010.

3.5 Data timeliness

Due to the complexity of mortality data collection, the AHI runs a few years behind the other data (air pollution concentrations).
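This excerpt does not spell out the AHI's statistical model, so as a purely illustrative aside: short-term pollution-mortality associations of this kind are commonly estimated with Poisson-family time-series regressions adjusted for the confounders listed in section 3.1.3. The sketch below is a generic example under that assumption; the input file and column names are hypothetical, and a production model would typically use smooth (spline) terms for time and temperature rather than the simple linear terms shown here.

```python
# Illustrative only: a generic Poisson regression of daily deaths on
# PM2.5, adjusted for a time trend, temperature and day of week.
# Not the AHI's actual model; file and column names are assumptions.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("community_daily.csv")            # hypothetical input
df["dow"] = pd.to_datetime(df["date"]).dt.dayofweek

X = pd.get_dummies(df["dow"], prefix="dow", drop_first=True)
X["time"] = range(len(df))                         # crude linear time trend
X["temperature"] = df["temperature"]
X["pm25"] = df["pm25"]
X = sm.add_constant(X.astype(float))

fit = sm.GLM(df["deaths"], X, family=sm.families.Poisson()).fit()
print(fit.params["pm25"])   # approximate log relative risk per ug/m3 PM2.5
```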
<urn:uuid:2948e807-9f13-49e0-9ae7-e96fe6151650>
CC-MAIN-2016-26
http://ec.gc.ca/indicateurs-indicators/default.asp?lang=En&n=A48B53E6-1&offset=3&toc=hide
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395039.24/warc/CC-MAIN-20160624154955-00028-ip-10-164-35-72.ec2.internal.warc.gz
en
0.953609
942
3.203125
3
Indoor air pollution (IAP) is dangerously high for many poor families in Bangladesh. Concentrations of respirable airborne particulates (PM10) of 300 ug/m3 or greater are common in Bangladeshi households, implying widespread exposure to a serious health hazard. To promote a better understanding of IAP, in 2003 the World Bank's research department investigated IAP in Bangladesh using the latest air monitoring technology and a national household survey (IAP Research, Phase I). The analysis of determinants of IAP verifies the IAP-reducing potential of clean fuels (kerosene, natural gas, etc.), but the nationwide survey results revealed: Poor households in Bangladesh (as in other parts of Asia and Latin America) almost always use "dirty" biomass fuels, because in most rural areas clean fuels are not available at all. Even where a clean fuel is available, poor households prefer and use "dirty" fuels because the relative price of the "clean" fuel is simply too high. Improved stoves for biomass combustion could help but, as other studies in Asia and Latin America have also discovered, the World Bank survey found almost no adoption of improved stoves despite widespread promotional efforts in Bangladesh. Households report non-adoption for a variety of reasons, including capital and maintenance costs, inconvenience, and incompatibility with food preparation traditions. Thus, neither clean fuels nor improved stoves offer strong prospects for reducing IAP in rural areas in the near future.

Fortunately, the World Bank study has identified another option that looks much more promising. In Bangladesh, common variations in certain household characteristics -- construction materials, space configurations, cooking locations and use of doors and windows -- have produced large differences in IAP exposure. As a result, some poor households using "dirty" fuels enjoy indoor air quality normally associated with clean fuels, while others suffer from pollution levels ten times the international safe standard. Since many poor households already have some of the relevant characteristics, these measures are clearly acceptable and affordable in Bangladesh. The IAP Phase I research therefore tentatively concluded that a national "clean household" promotion program, combined with effective public education on the associated health benefits, could reduce IAP exposure to much safer levels for many poor families.

Although the general results are quite robust, the first-round research was only able to consider a subset of feasible measures that might yield significant benefits in this context. Before proposing a national "clean household" program to policy makers, one needs to establish a broader and more rigorously confirmed set of clean characteristics as well as to assess their cost-effectiveness in different regions of Bangladesh. The current research in Bangladesh has therefore conducted a program of direct, controlled experimentation and cost-effectiveness analysis to provide the needed evidence. The experimentation is confined to structural arrangements (building materials, cooking locations, window/door configurations, etc.) that are already common among poor households in Bangladesh. The World Bank study used two types of equipment: real-time monitors that record PM10 at 2-minute intervals, and air samplers that measure 24-hour average PM10 concentrations. 1. The real-time monitoring instrument is the Thermo Electric Personal DataRAM (pDR-1000).
The pDR-1000 uses a light-scattering photometer (nephelometer) to measure airborne particle concentrations. The operative principle is real-time measurement of light scattered by aerosols, integrated over as wide a range of angles as possible. At each location, the instrument operated continuously, without intervention, for a 24-hour period to record PM10 concentrations at 2-minute intervals. 2. The other instrument used in the study is the Airmetrics MiniVol Portable Air Sampler (Airmetrics, 2004), a more conventional device that samples ambient air for 24 hours. While the MiniVol is not a reference method sampler, it gives results that closely approximate data from U.S. Federal Reference Method samplers. The MiniVols were programmed to draw air at 5 liters/minute through PM10 particle size separators (impactors) and then through filters. The particles were caught on the filters, and the filters were weighed pre- and post-exposure with a microbalance.

Access to Datasets

The readings of the pDR-1000 and the MiniVol air sampler provide a detailed record of IAP concentrations in each house.

pDR-1000 monitored PM10 data

Primary data on PM10 concentrations in the indoor air of 89 combinations constructed for the experiment in Bangladesh. In each case PM10 concentrations were recorded by the Thermo Electric Personal DataRAM (pDR-1000) at 2-minute intervals for a 24-hour period. Indoor air in these houses was monitored during the two time periods. The detailed information includes: (i) construction material of the house, (ii) configuration of the kitchen, and (iii) type of cooking fuel.

MiniVol monitored PM10 data

Primary data on 24-hour average PM10 concentrations in the indoor air of 337 combinations constructed for the experiment in Bangladesh, as recorded by the Airmetrics MiniVol Portable Air Sampler during the pre-monsoon period of 2005 and the post-monsoon period of 2005 and early 2006. The detailed information includes: (i) construction material of the kitchen and living room, (ii) configuration of the kitchen, (iii) type of fuel used for cooking, and (iv) position of the stove.

Kitchen configurations in Bangladesh (MS PowerPoint file, 18kb)
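As a footnote to the instrument descriptions above, the arithmetic behind each measurement is simple enough to sketch. The snippet below uses made-up numbers and our own function names; it illustrates the stated sampling parameters, and is not code or data from the study.

```python
SAMPLER_FLOW_L_PER_MIN = 5.0          # MiniVol programmed flow rate
MINUTES_PER_DAY = 24 * 60

def minivol_pm10_ug_per_m3(pre_filter_mg, post_filter_mg):
    """24-hour average PM10 from the filter's pre/post-exposure weights."""
    mass_ug = (post_filter_mg - pre_filter_mg) * 1000.0             # mg -> ug
    volume_m3 = SAMPLER_FLOW_L_PER_MIN * MINUTES_PER_DAY / 1000.0   # 7.2 m3
    return mass_ug / volume_m3

def pdr_daily_mean(two_minute_readings):
    """24-hour mean PM10 from the pDR-1000's 2-minute real-time record."""
    return sum(two_minute_readings) / len(two_minute_readings)

# A hypothetical 0.9 mg mass gain on the filter over 24 hours:
print(round(minivol_pm10_ug_per_m3(12.0, 12.9)))   # -> 125 ug/m3
```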
<urn:uuid:bab42996-b852-4ddc-a570-45debfb582bc>
CC-MAIN-2016-26
http://econ.worldbank.org/WBSITE/EXTERNAL/EXTDEC/EXTRESEARCH/0,,contentMDK:22318663~pagePK:64214825~piPK:64214943~theSitePK:469382,00.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395039.24/warc/CC-MAIN-20160624154955-00028-ip-10-164-35-72.ec2.internal.warc.gz
en
0.934305
1,138
3.21875
3
Thomas Alva Edison

The accomplishments and life of electrical engineer and entrepreneur Thomas Edison, 1847-1931

Thomas Edison is arguably the most famous of all engineers in history, and the most famous electrical engineer, although during his time the electrical field had no official "EE" designation and complex mathematics and science were not fully integrated into the field of invention.

1.) The controversy of the "Shakespeare" of his field

Electric cars, talking dolls, concrete pre-fab housing and a working lightbulb - Edison worked on all of this and more, but it would be impossible to talk about Edison these days without mention of the many issues featured in countless online publications and blogs about his negative side. Edison was persistent, ruthless, egotistical and sometimes inhuman. While you won't find many people who admire his personality, you cannot deny his genius, his work habits, which focused on pumping out useful inventions, and his undying love for technology. Edison was intense and driven. One admirable goal was his drive to make technology cheaper for the masses, while other inventors of the time focused solely on luxury and industrial customers. He may not have been the "first" to come up with ideas, but he tirelessly worked day and night to come up with mechanical or electrical designs that mostly resulted in reliable, mass-producible products for the customer. What good is a fragile and unreliable prototype in a lab anyway? That said, Edison had plenty of designs that did fail to work on a mass scale; however, he didn't give up. While Edison possessed a unique genius and understanding of physics, he understood the limitations of one man (at least sometimes!) and learned to depend on a team to push the limits of invention.

We like to think of Edison as the "Shakespeare" of his field. His crack team of inventors and technicians came up with many of Edison's companies' innovations and deserve credit; however, Edison needed his marketing machine to keep his name on everything that came out, so many of these individuals didn't get credit. Edison guided the direction of research, often stepping in personally to help in his operations, so he does deserve credit for being a hard-driving team leader. Shakespeare took credit for the works of his team, but in all systems we need people who have an eye for the best ideas and act as a master filter. A genius invention lost in a myriad of bad inventions will become lost to humanity, so in this sense the role of Edison is great, just as the role of Steve Jobs was important to the financial success and widespread adoption of Apple's products. Many geniuses of the electrical industry, including C.S. Bradley and N. Tesla, owe a part of their success to their time working for Edison.

Thomas Edison's usefulness to the industry diminished as a new crop of engineers with mathematics and physics training rose up in the 1890s. When General Electric was formed, Edison (the man, not the mythical marketing figure) was pushed aside as a working member of the board of directors and remained just a figurehead and symbol of the company. While many vilify J.P. Morgan and other major investors, we can thank those investors for being level-headed and not allowing Thomas Edison to continue to manage. The world and technology moved forward, and Edison had done his critical part.
Other leading electrical engineers like Elihu Thomson and George Westinghouse did learn from the foundation that Edison started, and they were able to adapt better to changing times, retaining their important management positions. So let's take a moment to read about this larger-than-life and critical person of history called Thomas Alva Edison. Whether you look up to Thomas Edison or don't like him (and people like him, such as Steve Jobs), we ask that you please avoid the idiotic simplification of history into the Tesla vs. Edison myth. Edison fulfilled his critical role in commercializing electric power; Elihu Thomson, William Stanley and Nikola Tesla (whom I consider equally important) fulfilled theirs. History is more rich and interesting than a rivalry of personalities concocted and amplified by journalists trying to appeal to the lowest common denominator of the public.

2.) Short Biography: American Innovator (total of 1,093 patents) 1847-1931, Milan, Ohio, USA

Telegraph Years: Edison began his career in the era of the telegraph. The telegraph was the most widespread use of electrical technology in the world at the time. He became a telegrapher in Port Huron, Michigan at age 16. In 1869 the young innovator Edison patented an electric vote recorder, but its unconventionality led to commercial failure, forcing Edison to focus on marketability as he innovated. Knowing the telegraph's extensive use led the newly entrepreneurial Edison to develop a telegraph that could receive two messages while simultaneously sending two - a "quadruplex" capacity.

[Image: schematic of Edison's carbon microphone with transformer]

Western Union purchased Edison's technology, financially enabling him to relocate from Newark, NJ to nearby Menlo Park, where he created a premier industrial research lab in 1876. In 1878 Edison founded the Edison Electric Light Company in lower Manhattan to produce his new incandescent filament bulbs. Edison worked on many technologies besides the lightbulb. He revolutionized the telephone industry with his invention of the carbon-button microphone in the late 1870s.

[Image: A "Long-Legged Mary-Ann" generator by Edison. This generator was studied by competitors and was a milestone in generator design.]

In 1886 Edison relocated the Edison Machine Works to Schenectady, NY. In a setting reminiscent of his rural birthplace, a town involved in shipping grain by canal, he saw the Erie Canal as advantageous in receiving materials and shipping products. Three years later, Edison merged Edison Machine Works with the Edison Electric Light Company, Bergmann & Company and the Edison Lamp Company to form the Edison General Electric Company in Schenectady. Edison General Electric then merged with the Thomson-Houston Company to form the General Electric Company in 1892. Edison's difficult personality and reluctance to deal with AC power led the board of General Electric to reduce his influence in the company. He remained a figurehead with little power after he sold his 10% stake in the company. By the 1890s a new crop of innovators like Steinmetz, William Stanley, Dr. Louis Bell, and Thomson took the reins as leaders of AC power innovation at GE. Edison was left to pursue his passions and projects freely without the control of GE's board.

Thomas Edison had a long-time interest in batteries of all sizes. When electric vehicles came into commercial use in the 1890s, it renewed Edison's interest in batteries. Edison looked to improve the battery by first improving durability.
Batteries made of fragile materials like clays and glass, with liquid electrolyte solutions, did not work well for cars. He developed alkaline and rechargeable batteries. In the Edison-Lalande cell he replaced powdered copper oxide with briquettes, and his batteries would last for an entire year. Edison's NiFe (iron-nickel) batteries were designed for electric cars and ended up in many other applications, including railroad signal backup and countless other uses that require long-lasting storage and recharge ability. Even though Edison's NiFe battery helped make electric cars more practical, the internal combustion engine had taken over. Thomas Edison (and members of General Electric's board) had personal relationships with Henry Ford and were in no position to go into competition to push electric cars. Edison's batteries became an industry standard for many other uses, and some specimens can still be made to work even today. Learn more about this period: More on Edison's Batteries

3.) Extended Biography: By Dr. Edwin Reilly Jr.

Thomas A. Edison was born in Milan, Ohio, February 11, 1847. In 1854 the family moved to Port Huron, Michigan, where seven-year-old Tom Edison set up his first chemical laboratory in the cellar of their large house. When he was 12 he got his first job as train-boy on the Grand Trunk Railroad. It was on this run between Detroit and Port Huron that he acquired exclusive newsdealer's rights selling candy and papers on the train. Edison's career as a telegraph operator began when he saved the station agent's young son from the path of a moving freight car. Out of gratitude the father taught Edison the new science of telegraphy. By the time he was seventeen, Edison was "on the road" as a telegraph operator. He drifted from Stratford, Canada, to Adrian, Michigan, Fort Wayne, Indianapolis and Boston. When he was 21 years old Edison went to New York, almost penniless. By fixing a broken-down machine at the Gold and Stock Telegraph Company, he landed a $300-a-month job as superintendent of the company. At the same time he was making many inventions, among them the "Universal" stock ticker. For this and other inventions he received $40,000, and with this money he opened a manufacturing shop in Newark, making stock tickers. At the age of 29 he went to Menlo Park to make perhaps the greatest invention of all - a successful incandescent electric lamp.

Out of the Edison laboratory in the important years between 1876 and 1886 came the carbon telephone transmitter, the phonograph, the Edison dynamo and the Edison incandescent lamp. When the electrical system with which he hoped to light whole cities required a new piece of machinery or a new device, Edison developed it. And if after developing it he could find no manufacturer, he would set up his own plants for manufacturing the equipment he had invented. By the very force of necessity the wizard of Menlo Park became a manufacturer. On September 4, 1882, Edison started operating the Pearl Street Station, the first central generating station to light New York City. The Edison interests were expanding, and in 1886 Edison sent his agents to look for suitable sites for a new factory. On the outskirts of Schenectady stood two unfinished factory buildings, which were to have been the McQueen Locomotive Works. The location of these buildings impressed Edison, and he negotiated to purchase the two plants, which were soon turning out the dynamos needed by the Edison generating stations.
Other buildings sprang up alongside the original shops, and in 1892 this plant became the headquarters of the newly formed General Electric Company. It began to be apparent early in the 1890s that electrical development was being held up because no company controlled the patents on all the necessary elements for installing an efficient and serviceable system. The conviction was taking shape that the incandescent lamp and the alternating-current transformer system belonged together. The outcome in 1892 was the formation of the General Electric Company with the consolidation of the Thomson-Houston and the Edison General Electric Companies. Edison's was one of the many distinguished names which appeared on the first Board of Directors of the new Company. At this period, however, he concerned himself less and less with manufacturing activities and soon devoted his entire time to his laboratory in West Orange to perfect a modernized phonograph, a motion picture camera, and an electrical storage battery.

During World War I Edison experimented on many war problems for the US Government, among them the sound detection of guns and submarines, airplane detection, increasing the power and effectiveness of torpedoes, improving submarines and mining harbors. But some of Edison's greatest contributions to America's war efforts were in developing synthetic products for goods we could no longer get from Europe. Honors and awards were bestowed lavishly on Mr. Edison by persons, societies and countries throughout the world. His greatest honor, perhaps, was the Congressional Gold Medal, the nation's highest recognition of service. Edison died October 18, 1931 in Llewellyn Park, New Jersey at the age of eighty-four.

Article by Dr. Edwin Reilly Jr., Breanna Day and M. Whelan. Dr. Edwin Reilly Jr. is Professor emeritus, State University of New York at Albany.

Sources: Baldwin, Neil. Edison. University of Chicago Press, 2001. Wikipedia, "Thomas Alva Edison". The MiSci (Schenectady Museum). The Thomas Edison Papers, Rutgers.
<urn:uuid:34cf604d-69b7-4887-b020-d87cccaf785b>
CC-MAIN-2016-26
http://edisontechcenter.org/ThomasAlvaEdison.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395039.24/warc/CC-MAIN-20160624154955-00028-ip-10-164-35-72.ec2.internal.warc.gz
en
0.967739
2,651
2.703125
3
Idiopathic Pulmonary Fibrosis

The term pulmonary fibrosis describes a heterogeneous group of pulmonary diseases characterized by interstitial inflammation (alveolitis), thickening of the alveolar walls, and varying degrees of parenchymal destruction and fibrosis. More than 160 individual diseases associated with pulmonary fibrosis have been identified. Despite this long list, an underlying etiology is not found in more than 50% of patients even after intensive investigation. These cases are referred to as idiopathic pulmonary fibrosis (IPF), also called Hamman-Rich syndrome, diffuse interstitial fibrosis, and diffuse or cryptogenic fibrosing alveolitis. The exact incidence is unknown, but is estimated to be 3 to 5 cases per 100,000 population. Patients are often middle-aged, usually between 40 and 70 years of age. Familial cases do exist and follow a simple autosomal dominant pattern of transmission.

I. Clinical presentation

A. Symptoms and signs

The insidious and progressive development of shortness of breath, initially during exercise, and a nonproductive cough are the most common complaints (80-100%) and are often present 12-24 months before presentation. A small percentage of patients may present with abnormal chest x-rays without respiratory symptoms but invariably develop symptoms as the disease progresses. Up to 50% of patients develop systemic or constitutional symptoms (e.g., fatigue, weight loss, fever, myalgias, and arthralgias). Examination of the chest reveals late inspiratory fine dry crackles ("Velcro rales") at the bases. Late in the course of disease, clubbing of the fingers and evidence of cor pulmonale and pulmonary hypertension (augmented P2, S3 gallop, right ventricular heave) are often found.

B. Laboratory findings

Hypergammaglobulinemia (80%), an elevated erythrocyte sedimentation rate (50%), positive rheumatoid factor (30%), positive antinuclear antibodies (15%-20%), and circulating immune complexes are all relatively common with IPF but are nonspecific. Polycythemia rarely occurs even with hypoxemia.

C. Radiographic and nuclear medicine findings

Initially, the chest x-ray reveals a nonspecific, bilateral, fine reticular or reticulonodular pattern, which is most apparent in the lung bases. Serial chest x-rays show progressive coarse reticulation, 5- to 10-mm thin-walled cysts (honeycomb lung), and loss of lung volume. Pleural involvement is not a part of IPF. The correlation between plain chest x-ray and clinical or histopathologic stage is poor, and up to 10% of patients with IPF have normal chest x-rays. High-resolution computed tomography (HRCT) is very sensitive in detecting IPF and is especially useful in patients with normal chest x-rays. HRCT reveals the peripheral predominance of the interstitial densities and the patchy nature of the involvement, with areas of normal tissue. Gallium-67 lung scintigraphy does suggest active alveolitis when positive, but is nonspecific, correlates poorly with clinical or histopathologic stage, and, if negative, does not exclude disease. The utility of gallium scintigraphy is therefore unclear.

D. Physiologic tests

Pulmonary function studies invariably reveal a restrictive ventilatory defect, with reduction in vital capacity, total lung capacity, and diffusing capacity. Arterial blood gas analysis reveals hypoxemia at rest secondary to ventilation/perfusion mismatch and respiratory alkalosis (hyperventilation) induced by stimulation of intrapulmonary stretch-receptors.
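The alveolar-arterial (A-a) oxygen gradient referred to in this section is conventionally estimated from the alveolar gas equation. The sketch below is a generic illustration using the usual sea-level, room-air constants; the numbers are made up and are not taken from this text.

```python
def a_a_gradient(pao2_mmHg, paco2_mmHg, fio2=0.21,
                 patm_mmHg=760.0, ph2o_mmHg=47.0, rq=0.8):
    """A-a gradient = PAO2 - PaO2, with PAO2 from the alveolar gas
    equation: PAO2 = FiO2 * (Patm - PH2O) - PaCO2 / RQ."""
    alveolar_po2 = fio2 * (patm_mmHg - ph2o_mmHg) - paco2_mmHg / rq
    return alveolar_po2 - pao2_mmHg

# Room air, resting PaO2 60 mm Hg with PaCO2 32 mm Hg (hyperventilation):
print(round(a_a_gradient(60, 32)))   # -> 50, a markedly widened gradient
```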
With exercise the alveolar-arterial oxygen gradient increases and oxygen saturation falls, in part because of diffusion impairment and ventilation/perfusion mismatch.

A. Desquamative interstitial pneumonia (DIP)

Desquamative interstitial pneumonia (DIP) is characterized by a relatively uniform appearance with intra-alveolar collections of macrophages and lymphocytes and concurrent hyperplasia of type II alveolar cells. Alveolar-wall thickening and interstitial inflammatory cell infiltration are present, but fibrosis is minimal.

B. Usual interstitial pneumonia (UIP)

Usual interstitial pneumonia (UIP) is characterized by varying degrees of parenchymal edema, fibrinous exudates, mononuclear cell infiltration, and fibroblast proliferation. Areas of dense fibrosis and destruction of normal lung architecture are common.

C. End-stage fibrosis

End-stage fibrosis is associated with marked connective tissue alterations and derangement of parenchymal structures. Cystic spaces (honeycomb lung) lined with metaplastic bronchial epithelium and evidence of pulmonary hypertension (cholesterol-ester clefts, smooth-muscle proliferation, pulmonary arteriolar fibrointimal thickening, and obliteration) are late findings.

Transbronchial biopsy and bronchoalveolar lavage (BAL) using the fiberoptic bronchoscope may be useful in excluding alternative diagnoses, especially sarcoidosis and infection. However, the small sample size of the biopsy is often insufficient to make an accurate diagnosis of IPF, and the cellularity observed on the BAL is a poor indicator of the interstitial inflammatory response. Bronchoscopy, therefore, rarely gives a definitive diagnosis of IPF or an accurate assessment of its level of activity.

C. Lung biopsy

A definitive determination of the cause and activity state of diffuse interstitial fibrosis can only be made by examining tissue obtained by lung biopsy. Both thoracoscopic and open lung biopsy provide adequate tissue samples. At least two lobes (avoiding the dependent portions of the right middle lobe and lingula) should be biopsied to sample both an area of obvious abnormality and an apparently uninvolved area, since histologic activity can vary from one area to another and since the changes of end-stage pulmonary fibrosis are nonspecific.

D. Differential diagnosis

IPF is a diagnosis of exclusion but can be diagnosed with a high degree of accuracy on clinical and laboratory grounds. Since the injured lung responds to many diseases with similar clinical and histologic changes, determination of the etiology and pathogenesis is difficult unless there is historic or physical evidence of infection, occupational or environmental exposure, or multisystem involvement (e.g., collagen vascular disease). The many diseases that should be considered are listed in Table 8-2. Many of these diseases are serious, and management and prognosis will be influenced by the specific diagnosis. Every effort should be made to identify treatable diseases such as tuberculosis and other infections, collagen vascular diseases, sarcoidosis, and hypersensitivity pneumonitis.

A. Oral corticosteroids

Prednisone, beginning at 1.5-2 mg/kg/day (not to exceed 100 mg/day), is the preferred treatment. The initial dose is continued for 6 weeks and then reduced to 1 mg/kg/day for an additional 6 weeks and then 0.5 mg/kg/day for 3 months. If the patient responds (has stabilized or improved), the dose is slowly tapered (1-2 mg/week) to 0.25 mg/kg/day.
After approximately 1 year, it may be possible to taper the prednisone further, but immediate or late relapses are common and repeat therapy with prednisone may be required. No well-controlled studies demonstrate the superiority of any particular regimen of corticosteroids, and many other regimens may be equally effective. Precise guidelines are not available.

B. Immunosuppressive therapy

In patients who have failed or cannot tolerate corticosteroid therapy, therapy with cyclophosphamide may be initiated at 2 mg/kg/day (not to exceed 200 mg/day) with prednisone 0.25 mg/kg/day. The dose of cyclophosphamide should be adjusted to maintain the neutrophil count > 1500 cells/mm3. Therapy should be continued for at least 3 months and, if stabilization or clinical improvement is documented, then continued for 9-12 months. Azathioprine may be less effective than cyclophosphamide but may have more manageable side effects. Azathioprine may be initiated at 2 mg/kg/day (not to exceed 200 mg/day) with prednisone 0.25 mg/kg/day. It too should be continued for at least 3 months and, if stabilization or clinical improvement is documented, then continued for 9-12 months.

C. Monitoring of response

In general, responsive patients report a decrease in symptoms, demonstrate clearing of chest x-ray findings, and experience improvement or no further decline on physiologic tests. A 25% increase in vital capacity, a 40% increase in diffusing capacity, and reduction or normalization of oxygen desaturation during exercise are all significant. Any therapeutic trial should continue at least 3 months (if not prevented by side effects) before a decision is reached as to its ineffectiveness. The course of idiopathic pulmonary fibrosis is variable and frequently chronic and slowly progressive, ultimately ending in death. Recent studies have shown a mean survival of 12 years in patients with a predominantly DIP pattern, as compared with 6 years in patients with a predominantly UIP pattern. Without treatment, 22% with DIP but none with UIP improved. With corticosteroid therapy, 62% with DIP and only 12% with UIP improved. The main causes of death are respiratory failure, cor pulmonale, infection, and lung carcinoma.

NEJM October 21, 1999 -- Vol. 341, No. 17

A Preliminary Study of Long-Term Treatment with Interferon Gamma-1b and Low-Dose Prednisolone in Patients with Idiopathic Pulmonary Fibrosis

Rolf Ziesche, Elisabeth Hofbauer, Karin Wittmann, Ventzislav Petkov, Lutz-Henning Block

Patients with idiopathic pulmonary fibrosis have progressive scarring of the lung and usually die within four to five years after symptoms develop. Treatment with oral glucocorticoids is often ineffective. We conducted an open, randomized trial of treatment with a combination of interferon gamma-1b, which has antifibrotic properties, and an oral glucocorticoid. We studied 18 patients with idiopathic pulmonary fibrosis who had not had responses to glucocorticoids or other immunosuppressive agents. Nine patients were treated for 12 months with oral prednisolone alone (7.5 mg daily, which could be increased to 25 to 50 mg daily), and nine with a combination of 200 µg of interferon gamma-1b (given three times per week subcutaneously) and 7.5 mg of prednisolone (given once a day). All the patients completed the study. Lung function deteriorated in all nine patients in the group given prednisolone alone: total lung capacity decreased from a mean (±SD) of 66±8 percent of the predicted value at base line to 62±6 percent at 12 months.
In contrast, in the group receiving interferon gamma-1b plus prednisolone, total lung capacity increased (from 70±6 percent of the predicted value at base line to 79±12 percent at 12 months, P<0.001 for the difference between the groups). In the group that received interferon gamma-1b plus prednisolone, the partial pressure of arterial oxygen at rest increased from 65±9 mm Hg at base line to 76±8 mm Hg at 12 months, whereas in the group that received prednisolone alone it decreased from 65±6 to 62±4 mm Hg (P<0.001 for the difference in the change from base-line values between the two groups); on maximal exertion, the value increased from 55±6 to 65±8 mm Hg in the group that received combined treatment and decreased from 55±6 mm Hg to 52±5 mm Hg in the group given prednisolone alone (P<0.001). The side effects of interferon gamma-1b, such as fever, chills, and muscle pain, subsided within the first 9 to 12 weeks. In a preliminary study, 12 months of treatment with interferon gamma-1b plus prednisolone was associated with substantial improvements in the condition of patients with idiopathic pulmonary fibrosis who had had no response to glucocorticoids alone. (N Engl J Med 1999;341:1264-9; see also the accompanying Editorial.)

Further reading:

Lawlor, G. et al. Manual of Allergy & Immunology.

Gross, T. J. and Hunninghake, G. W. "Progress: Idiopathic Pulmonary Fibrosis." N Engl J Med 2001; 345:517-525, Aug 16, 2001.

Ryu, J. H., Olson, E. J., Midthun, D. E., and Swensen, S. J. "Approach to the Patient With Diffuse Lung Disease." Mayo Clinic Proceedings, November 2002, Volume 77, Number 11.
<urn:uuid:1c2593e6-5205-4b73-936d-7abe0a8d8b46>
CC-MAIN-2016-26
http://enotes.tripod.com/pulm-fibrosis.htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395039.24/warc/CC-MAIN-20160624154955-00028-ip-10-164-35-72.ec2.internal.warc.gz
en
0.897879
2,785
2.765625
3
In a quest to reduce dependence on foreign oil, the United States government is increasing its mandatory minimum levels of renewable biofuel production each year. Because the US's first large-scale foray into biofuels—corn for ethanol—was heavily criticized, many non-food plant species are now under consideration for biofuel production. However, this search for non-food biofuels has another, currently underappreciated, impact: the introduction and spread of invasive plant species across the US.

The problem with using nonnative plants for biofuel is that successful biofuel crop traits—short generation time, pest resistance, high growth rates, high water-use efficiency—are the same traits found in many invasive plants. Nonnative plants are those humans introduce into an area from far-off geographic regions. If these plants spread far beyond the place where they were originally planted, they are considered invasive. Not all nonnative plants turn invasive, but recent research indicates that the species the US government is considering for biofuels are three times more likely to be invasive than a random sampling of nonnative species. For more on the plants currently under consideration, see the sidebar Potential Invasives Awaiting Approval below.

To address this, the National Environmental Coalition on Invasive Species brought invasive plant experts from around the nation to Washington, DC last week to meet with members of Congress, congressional staff, and federal agencies. The goal of the meetings was to dissuade policymakers from providing federal support for the use of nonnative, invasive plants as biofuel feedstock. The Environmental Protection Agency gave the green light nearly a year ago to businesses wishing to grow two well-known invasive grasses: giant reed (Arundo donax) and napiergrass (Pennisetum purpureum). Ironically, the EPA is not the only governmental agency thinking about these species. Giant reed, an aptly named grass that can easily grow stalks over 30 feet in height, is growing unabated along the US-Mexico border. This weed presents such a problem for border patrol agents with the Department of Homeland Security that DHS has commissioned the US Department of Agriculture's help in coming up with a method to reduce the giant reed populations in Texas.

So why would the EPA approve the widespread planting of invasive species? It comes down to strict and literal adherence to laws passed by Congress a few years back. Currently, the EPA reviews potential biofuel feedstocks as part of the Renewable Fuel Standards (RFS) Program, created under the Energy Policy Act of 2005 and revised in the Energy Independence and Security Act of 2007. These laws, in short, demand that transportation fuel be a blend of traditional carbon-intensive oil and renewable fuels with lower carbon emissions. EPA conducts a greenhouse gas (GHG) lifecycle analysis on potential biofuel feedstocks to determine if they have lower carbon emissions than traditional fuels. Biofuel producers and purchasers can (and must) petition the EPA to consider their specific biofuel "pathway" to see if it is eligible for renewable fuel standard credits. Because the only explicit requirement in the Energy Independence and Security Act is for EPA to perform GHG analysis, the EPA is sticking to this bare minimum in its environmental review, and has chosen to ignore other existing mandates, such as a presidential Executive Order requiring federal agencies to prevent the introduction and spread of invasive species.
Although EPA doesn't explicitly consider the potential invasiveness of a plant species under the RFS program, the agency did respond to the unanimous outcry from scientists in 2012 when it first approved giant reed and napiergrass as RFS compliant. However, EPA's concession did not signal a commitment to consider the ecological impacts of potential feedstocks. Instead, EPA determined that if these invasive plants spread beyond the original planting, necessary control and management efforts would increase their "carbon costs." In other words, EPA determined that in some cases, invasion may have climate implications. EPA ended up withdrawing the original 2012 ruling and replacing it in 2013 with a supplemental ruling that requires producers to submit a "Risk Mitigation Plan" laying out how they will keep these species from spreading beyond the biofuel plantations. So far, no company has submitted a plan. And the scientific community is skeptical about the effectiveness of any self-enforced plan.

For those of us who think using invasive plants for biofuels is a bad idea, the ultimate frustration is that many other plants could make excellent feedstock. There does not have to be a "business vs. environment" trade-off when choosing renewable biofuel plants. Although the traits of biofuels and invasive plants strongly overlap, scientists have a resoundingly solid track record of predicting which species are at "high risk" of becoming invasive, and they've developed many practical and useful Weed Risk Assessment tools that allow users to evaluate the potential invasiveness of a species (a toy sketch of this kind of screening logic appears at the end of this article). These tools are so accurate that some governments, including Australia and New Zealand, require that all plant species pass an assessment before introduction into the country. The scientific recommendation is that Weed Risk Assessments be made a fundamental component of any federal decision on biofuel production. Plants that are considered "low-risk" should be prioritized and incentivized over those that are "high-risk" for invasive potential.

Last week, there was some indication that this could be a possibility. Scientists with the National Environmental Coalition on Invasive Species met with a positive reception from some agency staff, namely the Department of Energy's Bioenergy Technologies Office, which provided R&D funding for many potential biofuel feedstocks. These staffers were already aware of the invasive potential of some biofuel feedstocks, and seemed receptive to using more formalized assessment tools in their own internal decisions on which species should receive federal funding.

However, it appears that under the current status quo, ecological invasions are likely to increase. The passing of the Energy Independence and Security Act increased EPA's workload without increasing staffing to complete the task. This has, in part, probably led to EPA's decision to stick with only the limited consideration of lifecycle GHG emissions. And, in another round of agency irony, the Department of Agriculture is touting the transformation of field pennycress (Thlaspi arvense) from "nuisance weed to biofuel" as if the new use will change its ecological properties or limit its invasion. USDA has a long history of importing invasive plants into the United States. Through the Department of Agriculture Soil Conservation program, many nonnative species were promoted for preventing soil erosion and improving wildlife habitat.
The most infamous of these species is kudzu (Pueraria lobata), "the vine that ate the south," but the list also includes the highly invasive bush honeysuckle (Lonicera maackii), autumn olive (Elaeagnus umbellata) and Russian olive (Elaeagnus angustifolia). Would it really be too much to ask for our federal agencies to learn from their past mistakes and avoid promoting kudzu's successor?

Want to know more about invasive species and US biofuel policies? Check out these good reads:

Lewis KC and RD Porter (2014). Global approaches to addressing biofuel-related invasive species risks and incorporation into U.S. laws and policies. Ecological Monographs 171. http://dx.doi.org/10.1890/13-1625.1

Quinn LD, Gordon D, Glaser A, Lieurance D, and SL Flory (accepted, in press). Bioenergy feedstocks at low risk for invasion in the US: A white-list approach. Bioenergy Research.

Potential Invasives Awaiting Approval

While it seems highly unlikely that the EPA will revise its final ruling on giant reed and napiergrass, more potentially invasive plants are sitting in the EPA's docket. Most of these petition listings are so vague that it is impossible to evaluate the invasiveness potential without further clarification of the exact species under consideration. Currently, the EPA has four different petitions for "grain sorghum," one for "biomass sorghum," one for "jatropha," and one for "pennycress." Although scientists and taxonomists purposely use a consistent and widespread convention for naming plants and animals so that they can avoid confusion between different languages or even different regional slang, these petitions are most likely intentionally vague to protect proprietary information about the exact variety of the plant under consideration. For example—and using the proper conventional nomenclature—the genus Sorghum contains a few highly invasive plant species: Sorghum bicolor (which has a slew of common names including shattergrass, Sudangrass, and, sometimes, grain sorghum) is listed as a noxious weed in six states, and its close relative Sorghum halepense (Johnsongrass) is listed in 19 states. Likewise, the genus Jatropha contains two members of the IUCN's infamous "100 of the World's Worst Invasive Alien Species." Some proactive researchers have already red-flagged these species because of Weed Risk Assessment results: Jatropha curcas was resoundingly rejected by three different assessments, Sorghum halepense and Thlaspi arvense (field pennycress) by one, and Sorghum bicolor was recommended for further evaluation three times.
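To make the screening logic mentioned above concrete, here is a toy sketch of a trait-based risk screen. It is emphatically not one of the validated Weed Risk Assessment protocols, whose question sets and scoring are far more extensive; the traits, weights and thresholds below are invented for illustration.

```python
# Invented weights for illustration; real WRA protocols ask dozens of
# questions, with prior invasion history typically weighted most heavily.
RISK_TRAITS = {
    "short_generation_time": 1,
    "high_growth_rate": 1,
    "pest_resistance": 1,
    "congener_is_invasive": 2,   # a close relative is a known invader
    "invasive_elsewhere": 3,     # documented invasion history
}

def screen_feedstock(traits):
    """Classify a candidate feedstock as low-risk, evaluate-further,
    or high-risk from a dict of boolean trait flags."""
    score = sum(w for t, w in RISK_TRAITS.items() if traits.get(t))
    if score <= 1:
        return "low-risk"        # white-list candidate
    if score <= 3:
        return "evaluate further"
    return "high-risk"

print(screen_feedstock({"high_growth_rate": True, "invasive_elsewhere": True}))
# -> high-risk
```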
<urn:uuid:313a32be-3190-462a-bd5d-1a08930b43d0>
CC-MAIN-2016-26
http://environment.yale.edu/envirocenter/post/homegrown-energy-and-homeland-security/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395039.24/warc/CC-MAIN-20160624154955-00028-ip-10-164-35-72.ec2.internal.warc.gz
en
0.939848
1,992
3.171875
3
Discussion of Phylogenetic Relationships

While carabid phylogeny has been extensively studied, the convergences and reversals present in morphological traits have led to a great deal of controversy about many groups. Two of these groups, the tiger beetles (Cicindelitae) and wrinkled bark beetles (Rhysodini), are often considered outside the carabid clade. The phylogeny of carabid tribes shown on this and other pages is a conservative consensus view, in which a large number of "basal" groups give rise to a middle and upper grade of carabids. Within this latter group is a large, relatively uniform clade, the Harpalinae, which includes many of the larger, more common carabids. Included below the tree are a number of especially enigmatic groups, including Gehringiini and Rhysodini, which may be older lineages related to groups on this page, or they may instead be related to groups within the Carabidae Conjunctae. Their placement, along with the resolution of other aspects of carabid phylogeny, awaits numerical analysis of available morphological and molecular data.
<urn:uuid:031d0b7c-6f21-47c2-8bbd-da04588d141c>
CC-MAIN-2016-26
http://eol.org/data_objects/10109504
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395039.24/warc/CC-MAIN-20160624154955-00028-ip-10-164-35-72.ec2.internal.warc.gz
en
0.906315
250
3.359375
3
Instructor G. Cook

Modern folklore suggests women look at a man's relationship with his mother to predict how he will treat other women in his life. Hamlet is a good example of a son's treatment of his mother reflecting how he will treat the woman he loves, because when considering Hamlet's attitude toward and treatment of Ophelia in William Shakespeare's play, Hamlet, one must first consider how Hamlet treated his mother. A characteristic of Hamlet's personality is to make broad, sweeping generalizations, and nowhere is this more evident than in his treatment of women. Very early in the play, while discussing his mother's transgressions, he comments, "Frailty, thy name is woman." Hamlet appears to believe all women act in the same manner as his mother.

The first time the audience meets Hamlet, he is angry and upset at Queen Gertrude, his mother, for remarrying his uncle so soon after the death of his father. In his first soliloquy he comments on the speed of her remarriage: Within a month, Ere yet the salt of most unrighteous tears Had left the flushing in her galled eyes, She married. O, most wicked speed, to post With such dexterity to incestuous sheets! It is not, nor it cannot come to good. It is understandable that Hamlet is upset with his mother for forgetting about his father and marrying his uncle, Claudius. In Hamlet's eyes, his father deserves more than one month of mourning, and by remarrying so quickly the queen has sullied King Hamlet's memory. This remarriage is a sin and illegal; however, special dispensation was made.

Hamlet's opinion of his mother worsens as the play progresses because his father, who appears as a ghost, tells him of his mother's adulterous behavior and his uncle's shrewd and unconscionable murder. Although Hamlet promises to seek revenge on King Claudius for murdering his father, he is initially more concerned with the ghost's revelations regarding his mother. King Hamlet tells Hamlet not to be concerned with his mother, but after the apparition leaves, it is the first thing Hamlet speaks of. Before vowing to avenge his father's death, he comments on the sins his mother committed. Although Hamlet decides to pretend to be insane in order to plot against the King, it is clear he really does go mad. His madness seems to amplify his anger toward his mother. During the play scene, he openly embarrasses her, and he acts terribly toward her in the closet scene.

The closet scene explains much about Hamlet's treatment of women and his feelings toward his mother. Hamlet yells at his mother for destroying his ability to love. He accuses her of such an act That blurs the grace and blush of modesty, Calls virtue hypocrite, takes off the rose From the fair forehead of an innocent love And sets a blister there. Hamlet curses his mother for being responsible for his inability to love Ophelia. Queen Gertrude's actions have caused Hamlet to see all women in a different light, because she has taken away his innocence and love for women. After Hamlet kills Polonius, he tests Queen Gertrude to see if she knows about the murder of his father, and both he and the audience seem satisfied she was not party to that knowledge.
Hamlet takes it upon himself to tell the queen her new husband killed the former king; however, he is interrupted by the ghost, who warns Hamlet not to tell his mother. The ghost tells Hamlet he should be more concerned with King Claudius, suggesting revenge must be taken soon. During this scene Queen Gertrude is unable to see her dead husband, which in Elizabethan times implied she was unable to see the gracious figure of her husband because her eyes are held by the adultery she has committed. The ghost steals away from the closet when he realizes his widow cannot see him, causing Hamlet to hate Gertrude even more, because he felt the same rejection when Ophelia rejected him. He can feel his father's grief as a son and as a lover. It was devastating to see his father rejected by the queen in the same manner he was rejected by Ophelia.

Understanding Hamlet's hatred toward his mother is pivotal in understanding his relationship with Ophelia, because it provides insight into his treatment of Ophelia. In Hamlet's eyes, Ophelia did not treat him with the love and respect she should have. Hamlet and Ophelia loved each other, but very early in the play she is told by her father to break off all contact with him. Hamlet is understandably upset and bewildered when Ophelia severs their relationship with no explanation. The audience does not see the next interaction between Hamlet and Ophelia, but hears Ophelia tell her father about Hamlet's distress, causing them both to believe Hamlet is mad and thus fall for his plot. According to Ophelia, Hamlet's appearance was one of a madman. She described for her father the length of time he stayed in her bedroom and how: He raised a sigh so piteous and profound As it did seem to shatter all his bulk, And end his being. That done, he lets me go, And with his head over his shoulder turned He seemed to find his way without his eyes, For out a doors he went without their helps, And to the last bended their light on me. Hamlet comes to Ophelia on the brink of a breakdown, partly caused by his mother's infidelities, and when he turns to his lover for support, his mother's lessons are reinforced: through her actions, Ophelia confirms in Hamlet's mind that women cannot be trusted.

Although Hamlet was pretending to be mad, he still loved Ophelia and was devastated by her disloyalty. Although Ophelia was only following the wishes of her father, her actions suggest to Hamlet she can be no more trusted than Queen Gertrude. In a cryptic way, Hamlet is incredibly rude to Polonius in Act II, calling him a fishmonger, or a bawd, and his daughter a prostitute. This is the jilted lover speaking in this scene, more so than the madman Hamlet is pretending to be. Hamlet's anger toward Ophelia deepens when he hears of the King, Queen and Polonius's plot to use Ophelia to find out if he has gone mad for love of her. Poor Ophelia, just wanting to please her father and the royalty, sadly overplays her role during the nunnery scene. Ophelia anxiously jumps into her role at the beginning of their conversation, barely even greeting Hamlet before she tries to return his gifts. Although he claims not to have given such gifts, she says: My honored lord, you know right well you did, And with them words of so sweet breath composed As made the things more rich. Their perfume lost, Take these again, for to the noble mind Rich gifts wax poor when givers prove unkind. There, my lord. With this speech, Ophelia wanted to provoke Hamlet into declaring his love, but instead he called her a liar.
The entire rest of this scene is meant for Polonius and the King, who are listening. Hamlet recognizes Ophelia's dismal attempt at acting and gives her one last chance to redeem herself: Ham. Where's your father? Oph. At home, my lord. Ophelia has failed the final test, because Hamlet knows her father is listening. At this point in the play, Hamlet is very unstable; in his mind, all women are adulterous like his mother and cannot be trusted. Ophelia has just proved this to him, and he acts terribly toward her, telling her: Get thee to a nunnery, farewell. Or if thou wilt needs marry, marry a fool, for wise men know well enough what monsters you make of them. To a nunnery, go, and quickly too. Farewell. Hamlet seems to be talking about women in general when he says a wise man knows what a monster a woman can make of him. He is being very cruel to all women, not just Ophelia, in this scene, because they are all the same to him. Hamlet goes as far as calling Ophelia a prostitute, as a nunnery also referred to a bawdy house.

For someone who is presumably in love, Hamlet treats Ophelia terribly in this play. His anger and hatred toward his mother, on top of his insanity, make it difficult for him to see that Ophelia was following her father's orders, not purposefully betraying Hamlet. This treatment of women is unbecoming of a hero in a tragedy and really shows the extent of his insanity. It was too much for Hamlet to accept the death of his father by the hand of his uncle and the adulterous behavior of his mother, so consequently he was very harsh on Ophelia. Hamlet could not bear any more rejection and despair in his life, which Ophelia, whether she meant to or not, brought into it.
Wednesday, April 04, 2007
Childhood Nutrition Series: Complete Index
Wow. It's done. And I will be happy to get back to some recipes I have been working on! If you are looking for the article series, here is the index of the complete set of posts:
- Children's Nutrition Series (Intro)
- The State of Our Union's Children — A detailed overview of what trends are occurring in our children's diets, and the factors that contribute to the issues.
- Our Children Are What They Eat — A look at what our children are eating and the nutritional issues parents face.
- Why Kids Eat What They Do (or Don't) Part I: Parents' Role — A look at all the sources of dietary influence on our children's food choices. Part I includes the parents' role in influencing our children's diets.
- Why Kids Eat What They Do (or Don't) Part II: Outside Influences — A look at all the other sources of dietary influence on our children's food choices, including schools, social activity, marketing, food supply and culture. The post examines each of the outside influences and how it affects our kids.
- Food Marketing and Your Child Part I: The Small Screen with Big Impact — This topic belongs under the sources post, but it has become such a huge issue that it needs to be reviewed in depth. An estimated $12 billion is spent annually to market foods to children and youth. Often these marketing messages are targeted to pre-schoolers who are too young to be able to differentiate commercial messages from educational messages. Part I covers television advertising.
- Food Marketing and Your Child Part II: When the TV is Off, the Marketing is Still On — Part II covers all the other forms of advertising, including marketing in our schools.
- We Shall Overcome: Recommendations for Parents — A set of ten actionable steps we can take as parents to encourage a better diet and lifestyle for our children and minimize the impact of food marketing to our kids.
- Links and Resources — Want to learn more on this topic? These links and resources are a great place to start.
Geriatric Staff Nurse
Geriatric staff nurses focus on caring for older adults. As the U.S. population ages, this career is in high demand. According to the U.S. Census, by 2050 more than 20% of Americans – 88 million people – will be over age 65. Yet less than 1% of registered nurses and 3% of advanced practice registered nurses are certified in geriatrics, according to the American Geriatrics Society.
Geriatric nurses are educated to understand and treat the often complex physical and mental health needs of older people. They try to help their patients protect their health and cope with changes in their mental and physical abilities, so older people can stay independent and active as long as possible. Geriatric nurses must enjoy working with older people. They must be patient, listen extremely carefully and balance the needs of their patients with sometimes conflicting demands from family members.
When working with their patients, geriatric nurses take on a wide range of roles. Many older people have health conditions that do not require hospitalization, but must be treated with medication, changes in diet, use of special equipment (such as a blood sugar monitor or walker), daily exercises or other adaptations. Geriatric nurses help design and explain these healthcare regimens to patients and their families. They often function as "case managers," linking families with community resources to help them care for elderly members.
Geriatric nurses work in a variety of practice settings such as hospitals, nursing homes, rehabilitation facilities, senior centers, retirement communities and patients' homes. They often work as part of a care team that includes physicians, social workers, nursing aides, physical and occupational therapists and other caring professionals. In hospitals, geriatric nurses tend to work with treatment teams that have large older patient populations, such as outpatient surgery, cardiology, rehabilitation, ophthalmology, dermatology and geriatric mental health (treating older patients with psychiatric conditions, such as Alzheimer's, anxiety and depression). In rehabilitation and long-term care facilities, geriatric nurses manage patient care from initial assessment through development, implementation and evaluation of the care plan. They may also take on administrative, training and leadership roles.
Outlook and Salary Range
Because of the aging population, there is increasing demand for geriatric nurses, especially in nursing homes and health care facilities that have a high older patient population. Bilingual nurses, particularly those fluent in both Spanish and English, are needed. The average salary for a geriatric nurse is $63,382, but salaries vary greatly depending on your experience, education and where you work.
Note: The American Association of Colleges of Nursing has reviewed this profile.
In preparation for a career in geriatric nursing, many individuals volunteer at a local senior center, nursing home or hospice and seek experiences working with patients who have mobility issues, sensory (hearing and sight) deficits, cognitive impairments, and chronic and terminal disease. It is important to assess your ability to handle the physical and emotional challenges of working with patients who may not ever "get well."
To become a geriatric nurse, you must become a registered nurse by first earning a Bachelor of Science in Nursing at an accredited four-year college, or an associate degree or diploma. After graduation, you must pass a national licensing exam called the NCLEX-RN before you can practice as a nurse. Once you have gained some work experience, you can pursue certification as a geriatric nurse. With additional education at the graduate level, you can become a gerontological nurse practitioner or geriatric clinical nurse specialist. Graduate education is typically required for specialist, administrative or supervisory roles, and for geriatric nursing research.
MU news media
Duane Dailey, Writer, University of Missouri Extension
Phone: 573-882-9181, Email: [email protected]
Photos available for this release: brown marmorated stink bug (Credit: Gary Bernon, USDA APHIS, Bugwood.org; Credit: U.S. Army)
Published: Tuesday, Jan. 3, 2012
Source: Wayne C. Bailey, 573-864-9905
COLUMBIA, Mo. – A new, stinkier stink bug may hitchhike into Missouri this year to destroy crops and upset homeowners, says a University of Missouri Extension entomologist.
The brown marmorated stink bug, a pest found in 33 states, mostly to the east and south, will likely be found for the first time this year in Missouri, says Wayne Bailey of the MU Division of Plant Sciences. Dead specimens were found in Columbia in a stored travel trailer from the East Coast. Live stink bugs were found at the end of the growing season at an Interstate 70 rest stop near Kansas City, Kan.
The new stink bug destroys fruit, vegetable and field crops. However, homeowners may be the first to detect the pest, Bailey says. It invades homes in addition to injuring crops.
"A crushed marmorated stink bug can be quite repugnant," Bailey said. "The smell makes some people sick and some have had to vacate their homes for a few hours.
"The stink bug invasion might make ladybug home intrusions seem like nothing," he adds. "Like the ladybug, the stink bug enters homes in large numbers seeking overwintering sites. Stink bugs are winter-hardy. However, they seek warm places to live."
First found in Pennsylvania in 1998, the pest has spread slowly. Starting in the Mid-Atlantic states, stink bugs are now working their way through the Midwest. The stink bug probably came in cargo from China or a neighboring country, Bailey says. It travels as a stowaway.
The new stink bug has become a problem for truck farms and orchards. As it moved west it gained an appetite for corn and soybeans. The marmorated stink bug joins local stink bugs that already attack crops.
"It is a juice-sucking insect that heads for the developing fruit or pods," Bailey said. "It can shrivel all of the kernels on an ear of corn. Heaviest crop damage has been on soybeans, a concern to Missouri farmers."
All stink bugs are difficult to control with pesticides, Bailey says. They don't eat foliage, but pierce the plant to suck juices. The new pest has proven more resistant to control. However, more insecticides are becoming available for use on the various host crops, Bailey adds. Truck farmers have found that insect netting offers one method of control on fruits and vegetables.
Researchers at the U.S. Department of Agriculture are working on finding biological controls. "The most likely control will be from wasps that attack the eggs," Bailey says.
In the United States, the pest has not been as prolific as in China, where it has five or six generations in a crop season. Here, the pest has one generation a year. It is not a prolific egg layer.
The adult insect grows to about three-fourths of an inch in length. The shield-shaped body has alternating black and white triangles on the back edge of the wings. White bands show on the pair of long antennae and the hind legs. The distinguishing feature is a white underbelly. Common stink bugs have brown or green undersides, Bailey says.
When disturbed, the bug emits a powerful odor. "If one is attacked, the other stink bugs around it emit the defensive odor.
Inside a house, that odor can be repulsive," Bailey says.
For crop farmers, the new insect will require weekly scouting of fields. As pod feeders, stink bugs can be quite destructive, Bailey adds.
The Narcolepsy Gene and Man's Best Friend
By Ellen Kuwana, Neuroscience for Kids Staff Writer
March 4, 2000
Narcolepsy
Narcolepsy is a sleep disorder that disrupts both the onset of sleep and the sleep cycle, including REM sleep. Because it is difficult to study human sleep behavior and physiology, researchers have turned to other mammals that have narcolepsy.
Clues from Man's Best Friend
Stanford University's Sleep Clinic has a colony of dogs that have narcolepsy. Researchers, led by Emmanuel Mignot, PhD, have found that these dogs have the gene for narcolepsy. They used a technique called positional cloning and published their results in the August 6, 1999 issue of the journal called Cell.
Positional Cloning
Briefly, positional cloning starts by isolating DNA from a blood sample. This does not harm the donor (in this case, the dog). DNA is made up of building blocks called nucleotides (just like a necklace is made up of links or beads). There are four nucleotides, called A, T, G, and C. The Stanford scientists compared the patterns of these nucleotides to look for differences between healthy dogs and dogs with narcolepsy. The scientists identified a unique region (marker) within individuals that allowed them to track the disease with the marker. This is called linkage analysis. Then the researchers searched for the disease gene near that marker. This approach identified the narcolepsy gene.
The scientists dubbed the gene hypocretin receptor 2. This gene carries the instructions to make a protein that acts like an antenna on certain cells, picking up messages from other cells. Because the gene is defective in narcoleptic dogs, the hypocretin signal is not received by these cells. Therefore, the neurons that they connect to are not stimulated appropriately. This may be one reason why signals that tell the brain and body to be awake and alert go unheeded in individuals with narcolepsy. It was already known that hypocretins play a role in feeding behavior, but their role in arousal was a surprise.
But What Does this Mean for Humans?
Dr. Mignot and others know that a similar gene exists in humans, so the task is to find the defective human gene or genes that cause narcolepsy. While this is not of immediate help to narcoleptics, knowledge about how sleep works may help treat sleep disorders. Not only could this information lead to drugs which promote wakefulness, it could also influence how medications to help people sleep are made in the future.
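To make the marker-comparison idea concrete, here is a toy sketch of the "find a pattern shared by affected individuals but absent in healthy ones" step. Everything in it is invented for illustration — the sequences, the marker length and the exact-matching approach are stand-ins; real positional cloning uses statistical linkage mapping across whole genomes, not simple string comparison.

```python
# Toy illustration of marker hunting. All sequences below are invented;
# real linkage analysis works on full genomes with statistical mapping.

def kmers(seq, k):
    """Return the set of all k-nucleotide substrings in a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

# Hypothetical DNA snippets (A, T, G, C nucleotides) from blood samples.
narcoleptic_dogs = ["ATGGCCTTAGGA", "TTAGGATGCCAT", "CCATTAGGATGC"]
healthy_dogs     = ["ATGGCCTTCGGA", "TTCGGATGCCAT", "CCATTCGGATGC"]

k = 6
# Candidate markers: present in every affected dog...
shared = set.intersection(*(kmers(s, k) for s in narcoleptic_dogs))
# ...and absent from every healthy dog.
background = set.union(*(kmers(s, k) for s in healthy_dogs))

print("Candidate disease-linked markers:", sorted(shared - background))
# -> Candidate disease-linked markers: ['TTAGGA']
```

Once a marker like this tracks with the disease through a pedigree, researchers search the surrounding DNA for the gene itself — which is how the hunt narrowed to hypocretin receptor 2.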
Food for the Hungry (FH), our partners and the poor work together to develop food security through agriculture programs. The United Nations Food and Agriculture Organization (FAO) defines food security as constant access to nutritious food to ensure dietary health. Promoting food security is a holistic process with multiple approaches, including training, the use of agricultural experts and economic development.
Connecting to global markets
BURUNDI — Coffee farmers were poor due to unproductive farming methods, overworked soil and erratic harvests. FH trained farmers to improve coffee bean quality and helped connect farmers to local and international markets. Now these farmers have a stable income to feed their families.
How we work: working with farmers
FH and poor farmers identify agricultural problems, such as water access, weather conditions, soil, seeds and livestock health. FH experts research to find solutions. FH experts teach farmers new planting techniques, pest control and irrigation for increased crop yields. Livestock thrive with improved care. Farmers get connected to marketplaces. Farmers raise better crops and livestock. The result is more food and income. Poor farming communities become food secure and thriving producers for markets.
A mysterious condition has wiped out 40 to 50 percent of the hives needed to pollinate many of the nation's fruits and vegetables over the last year, commercial beekeepers told Michael Wines of The New York Times. Colony Collapse Disorder (CCD) first surfaced in 2005 — when annual honeybee losses jumped from 5 to 10 percent to 30 percent — and is now decimating populations at an unprecedented rate. "They looked so healthy last spring," Bill Dahle, who owns Big Sky Honey in Fairview, Mont., told the Times. "Then, about the first of September, they started to fall on their face, to die like crazy. We've been doing this 30 years, and we've never experienced this kind of loss before." Since their introduction in the 1990s, neonicotinoid pesticides have been used to treat 94 percent of all corn seeds in the U.S. The problem is that the pesticide permeates corn plants and ends up in the pollen, nectar and water that bees rely on as a key protein source. The Pesticide Action Network of North America, noting that bees often bring contaminated pollen back to the hive, claims that CCD symptoms first arose around the same time that seed treatment with neonicotinoids increased five-fold. "Honeybees are caught in the crossfire," Steve Ellis, owner of Old Mill Honey Co., told NBC Nightly News. "Honey bees, like mine, are subjected to an increasingly toxic load of pesticides in corn fields." Wines notes that a quarter of the American diet — from apples to cherries to watermelons to onions to almonds — depends on pollination by honeybees, and fewer bees mean smaller harvests and higher food prices.
Coal has been used for nearly as long as mankind has thrived. From the times of the cavemen to the present day, coal has been used for everything from cooking to heating to running steam-powered trains to generating electricity. Today, coal is burned as fuel or gasified to create a synthesis gas (syngas) that can then be used as a feedstock for the production of chemicals, fertilizer and electric power. Coal is also used for producing heat through combustion. The USA, Russia, Australia, China, India and South Africa have the largest coal reserves in the world. Coal is produced in 25 states in the US, spread across three coal-producing regions. The majority of current production originates in just five states: Wyoming, West Virginia, Kentucky, Pennsylvania and Montana. The importance of coal as a source of generating power has increased over time with the rise in industrialization. In due course, alternatives to coal for generating power have curbed the dominance of coal to some extent.
Coal Dominates U.S. Power Generation: Coal as a major source of fuel for power generation dominates the utility industry. Coal is used to generate about half of the electricity consumed in the U.S. and is also the largest domestically produced source of energy. Electricity generation absorbs about 93% of total U.S. coal consumption. The reason is simple: coal is by far the least expensive and most abundant fossil fuel in the country. Coal will continue to dominate as the major source of electricity production. Taking into consideration the long-term prospects of coal, one of its key producers, Arch Coal Inc. (ACI), expanded its reserves in the Powder River Basin ("PRB") through the successful bidding of a coal lease. The coal produced from the South Hilight reserves is of high quality. This high-quality, ultra-low-sulfur-dioxide coal is in huge global demand due to stringent government regulations on emission (pollution) standards. In contrast, petroleum and nuclear power as sources of power generation have been losing market share, displaced by the strong growth of renewable sources of generation and natural gas-fired generation. Petroleum is losing out to coal because it is becoming increasingly expensive. After the Japan earthquake/tsunami incident in 2011, nuclear power's contribution to total energy generation declined from the prior year.
Not Just Electric Generation: Electricity generation is just one use of coal in the U.S. Manufacturing plants and industries use coal to make chemicals, cement, paper, ceramics and metal products, to name a few. Methanol and ethylene, which can be made from coal gas, are used to make products such as plastics, medicines, fertilizers and tar. Certain industries consume large amounts of coal. For example, concrete and paper companies burn coal, and the steel industry uses coke and coal by-products to make steel for bridges, buildings and automobiles.
Coal as an Input for the Steel Industry: Due to its heat-producing properties, hard coal (metallurgical or coking coal) forms a key ingredient in the production of steel. Nearly 70% of global steel production depends on coal. Steel companies foresee improving prospects in 2012 due to recovering demand from end markets. According to an Energy Information Administration (EIA) report, U.S. coal exports in 2011 were 107 million short tons (MMst), which reflected growth of 31% year over year. Flooding in Australian mines during 2011 disrupted coal exports, which benefited US producers.
The upsurge in coal exports during 2011 mainly emanated from demand from Asian countries. As per the EIA report, with Australian mines back in operation, U.S. coal exports are expected to decline to 100 MMst in 2012. The projected average delivered coal price to the electric power sector, which was $2.40 per MMBtu in 2011, is expected to fall to $2.38 per MMBtu in 2012 and $2.30 per MMBtu in 2013. The downside is attributed to lower demand for coal in generating electricity.
Demand Upsurge in Asian Countries: The increase in coal demand in Asian economies like China and India has been a key price driver since the end of the recession in 2009. We expect this trend to continue, mainly due to the growing energy needs of India, China and South Korea. Of the Asian countries, economic growth in China and India will be the fastest. These two countries do produce coal, but their domestic production has yet to match the growing demand, resulting in a continuous need to import coal. These countries rely heavily on coal for electricity generation. A major portion of the new electricity generation units expected to come up in these two countries will utilize coal as a source of fuel. As per The Economic Times, coal imports are projected to touch 1 billion tons in China in 2030, up from 175 million tons in 2011. Indian coal imports are expected to reach 400 million tons in 2030, up from 80 million tons in 2011. Given the growing demand from the fast-growing Asian economies, companies find it attractive to export coal to the emerging regions. Some of the names making the most from overseas coal exports are Peabody Energy Corporation (BTU) and CONSOL Energy Inc. (CNX). To cater to the increasing demand for coal in Asian countries, Peabody has acquired Macarthur Coal in Australia and expanded its footprint in high-demand regions. Elsewhere, certain coal master limited partnerships (MLPs), including Penn Virginia Resource Partners L.P. (PVR), Natural Resource Partners L.P. (NRP) and Alliance Resource Partners L.P. (ARLP), are also good investment bets for people seeking exposure to the coal sector.
According to the EIA's report, U.S. coal production in 2012 will experience a dip from the last five-year average. The projected decline is attributed to lower demand due to adverse weather conditions, large stocks of coal and increasing competition from natural gas as an alternate fuel. In the ensuing year, the demand for coal to produce power is likely to fall 10% from the previous year due to the increasing use of natural gas to generate power. The EIA forecasts coal use in the U.S. power sector to fall below 900 million short tons in 2012 and 2013.
Coal is plentiful and fairly cheap relative to the cost of other sources of electricity, but its use produces emissions that adversely affect the environment. Coal emits sulfur dioxide, nitrogen oxide and mercury, which have been linked to acid rain, smog and health issues. Coal also emits carbon dioxide, a greenhouse gas that contributes to climate change. Without proper care, coal mining can have a negative impact on ecosystems and water, and can alter landscapes and scenic views. With governments becoming more and more stringent on environmental issues, electricity generators are implementing new measures to bring down emission levels of greenhouse gases.
As per an EIA report, the combined impact of the usage of natural gas and renewable sources to generate power will gradually reduce the share of coal in electricity production to 39% in 2035 from the high of 49% in 2007.
Environmental Legislation: Coal has been losing its importance as a fuel source over the last few years, particularly in the U.S., vis-à-vis other sources that have a lesser impact on the environment. Concerns about the emission of greenhouse gases and global climate change have resulted in the formulation of new legislation and policies that emphasize the use of environmentally friendly fuel sources, particularly in the power sector. This has considerably slowed the expansion of coal-fired capacity in the power sector, with utility companies now building new natural gas-fired plants and resorting to alternative sources of energy generation like wind, solar and hydro power. To meet the environmental regulations, American Electric Power (AEP) has decided to retire 4,600 megawatts (MW) of coal-fired generation from its portfolio.
Natural Gas Substituting for Coal: A major substitute for coal in energy generation is natural gas. Coal is being dumped in favor of natural gas, which, due to extensive exploration and production, is seeing significantly lower prices than in the past. Natural gas is usually an attractive choice for new generating plants because of its relative fuel efficiency, low emissions, quick construction timelines and low capital costs. There is an abundance of natural gas in the U.S. markets, resulting in lower prices. This trend is encouraging power generators not only to convert their existing plants to gas-fired ones but to build new natural gas units. Electric generation through gas-fired plants is likely to become more competitive over the coming years given natural gas's abundant domestic availability and the threat of regulation hanging over the coal mining industry. As per EIA reports, 96.65 gigawatts (GW) of new electric generation will be added in the U.S. between 2009 and 2015, of which 20% will be natural gas-fired plants. Large electricity generators in the U.S., like Exelon Corporation (EXC) and FirstEnergy Corp. (FE), are turning to natural gas for additional electrical capacity.
Competition from Alternative Energy Sources: Apart from natural gas, the coal industry has been losing a major share of its electric generation demand to renewable sources of energy like wind, solar and hydro power. Production of power from renewable sources has also been supported by various U.S. states. At present there is no national consensus regarding the percentage of energy to be generated from renewable sources by power generators. Undoubtedly, though, state legislators are placing more emphasis on producing power from renewables. At present, 30 U.S. states and the District of Columbia have enforceable renewable portfolio standards or other renewable generation policies. These policies were designed to spread awareness and encourage power generators to produce more from renewable sources. The share of renewable fuels (including conventional hydro) in energy generation is projected to grow from 10% in 2010 to 16% in 2035, as per the EIA's long-term outlook.
Though there is ample pressure on coal from legislation and increasing competition from natural gas and renewable energy sources, we believe the global power industry will continue to depend on coal for a large part of its generation.
Coal as a fuel source will continue to power growth in emerging nations like China and India, both for utility companies and steel makers, as it is cheaper than other energy sources. On the flip side, the debt crisis in Europe is still lingering, despite relief packages that have already been announced to revive the economy. The uncertain economic climate continues to impact the industry and curb its growth prospects. The lackluster demand for steel, which is widely used in different industries, could be an indicator of where we are heading. ArcelorMittal (MT), a major producer in the global steel industry, has so far idled 5 of its 25 blast furnaces in Europe due to tepid demand. Likewise, demand for coal is expected to decline in Europe as the steel industry, which consumes a large volume of high-quality coal, continues to struggle.
The EIA estimates that, even if no new reserves are added, the present U.S. coal reserve will last 168 years, taking into consideration the incremental production rate. This is promising because, in addition to the many existing ways to use coal, the future holds new methods and potential for growth. Products from coal may soon be part of communications and transportation systems, computer networks and even space expeditions. In addition to these new and increased uses of coal, new technologies will continue to enhance the ability to identify the shape and composition of untapped coal reserves. Emerging know-how is also likely to seek solutions to the adverse effects of coal on the environment, mitigating greenhouse effects and other environmental concerns. For example, dry sorbent injection pollution control technology can play an important part in coal usage in power plants. This technology will aid power plant operators using coal to lower SO2 emissions and enable them to comply with the Environmental Protection Agency's Mercury and Air Toxics Standards (MATS). These new technologies focused on achieving near-zero emissions open up avenues for potential long-term industry growth. Clean-coal technology development in the U.S. also has funding earmarked under the American Recovery and Reinvestment Act of 2009. This is an encouraging sign for coal producers.
Even if alternate sources for generating power are available, coal's advantage lies in its price, which is far lower than that of other fuel sources. We believe reinvigorated demand from the growing economies and steady demand from the U.S. will drive the coal industry in the future.
All fired up on the missile front. In May 1989, the Agni technology demonstrator was successfully launched with a range of around 1,000 km. In two decades, the technology demonstrator has transformed into a large programme, which has seen the successful development and launch of four versions of the missile — Agni-I to the latest Agni-IV. This ambitious journey — begun by India's missile man, A. P. J. Abdul Kalam, and now led by "missile woman" Tessy Thomas (Agni-IV) — is marked by indigenous technology effort. In the face of stiff technology denials, the country's missile scientists have made some significant contributions in technology, establishing new materials and spurring the domestic industry to play a greater role. There has been a fair share of failures and delays in this multi-million dollar Agni programme, which is targeted to give India the capability to launch Inter-Continental Ballistic Missiles (ICBMs) and provide a strong deterrent. After 23 years, Agni-I (700 km) and Agni-II (1500 km) have been inducted into the armed forces, while Agni-III (2500 km) is in an advanced stage and Agni-IV (3500 km) was successfully test-fired on November 15. The Agni-IV is not just India's longest-range missile; it is lighter, manoeuvrable, robust and capable of high acceleration. It has established a wide range of indigenous technologies and provided the perfect platform for launching Agni-V in early 2012. The success of the ICBM would put India in the exclusive club of nations which can launch them — the US, Russia, China and France.
Perhaps the most important and visible indigenous contribution to the Agni missile is the composite materials used in its fabrication. Composites are lightweight, non-corrosive, tough materials. Composites are used in most of the Agni-IV missile — beginning with the critical nose tip (which is crucial as the missile re-enters the earth's atmosphere at around 3000 degrees C) to nearly 60 per cent of the 20-metre-tall, 17-tonne structure. This makes the missile lighter, manoeuvrable and easy to operate and launch. Moreover, the higher the share of composites, the lower the cost of manufacturing the missile. All this means a cost-effective missile with a longer reach and greater destructive ability. Composites are also of use in making lightweight boots for the polio-affected, in tennis racquets and in medical devices. The DRDO also established its own in-house composite production centre. The composite rocket motor casing, which has been successfully tested in Agni-IV, was developed by the Advanced Systems Laboratory (ASL) a few years ago. It is made of carbon filament-wound composite. Interestingly, a private firm based in Vijayawada, Andhra Pradesh, has fabricated it for the DRDO. In the civil and defence sectors, maraging steels are commonly used to make motor casings. The Hyderabad-based, state-owned Midhani Steels produces the special steels to meet the needs of the strategic sector. Maraging steel is tough but heavier. Since a lighter missile can be transported quickly and can carry higher payloads over longer distances, big players have been looking to composite materials. The ASL is working towards the goal of making a missile completely out of composites. A major advantage of a composite casing is that it cuts down costs by nearly half compared with the maraging steel version. Since composites are not prone to corrosion, the life of a stored missile is longer. EADS, the leading European consortium, the US and Russia are capable of making it now.
For the coming generation of long-range missiles, composites would be the key material, says Mr. Avinash Chander, Chief Controller (Strategic Missile Systems). The synergy between the DRDO labs, Indian industry and the user (the armed forces) has resulted in the successful march of the missile programme in the last decade. The missile system is homegrown. The fabrication, airframes, propulsion systems, fuel, flex nozzle system, on-board communication and control systems, software, mobile launchers, tracking systems and integrated safety and security systems have all been developed and tested. The defence scientists also came up with a technology that helps increase the range of missiles, as well as satellite launch vehicles, by approximately 40 per cent. Giants like L&T, the Tatas and Godrej; PSUs like BDL, HAL, BEL, Keltech and ECIL; and a number of private players like Data Patterns, Sameer, Vem Tech, SEC Industries, Astra Microwave, Resins Allied Products and Walchandnagar have played a big role. In the entire cycle from missile development to production, a couple of areas where domestic expertise still needs to be built are select electronics, sensors and radar systems. There are collaborations with Israel and France in the area of radars. Similarly, some of the private industry players have forged joint ventures and tie-ups to make sophisticated electronic components. But with technology denials on India still not eased, it is vital to acquire the technology in these areas for the acceleration and reliability of the missile armoury. It can also help in cutting down the time from design to delivery, which is around 10 years, to a more desirable five or seven years.
The stage is now set for Agni-V. The platform is ready and efforts are geared up for a test-firing in February 2012. All that is required is to scale up Agni-IV, say confident missile scientists. With an expected distance of over 5,000 km to be traversed by the three-stage missile, on trial will be the quality and robustness of components and systems fabricated by the industry and critical technologies indigenously developed by the scientists.
Since I was young I've known from statistics in encyclopedias that redwoods are the planet's tallest trees – the tallest of which grows in excess of 100 meters. While my mind could probably read the numerical figure, it probably doesn't comprehend the actual immensity and awe of it. And that's where the following video from National Geographic helps to give some perspective: Such amazing mother nature, and what dedication and tenacity on the part of the photographers to capture that magnificent image!
This concept design for the UK plug's been making the rounds round the web like wildfire – probably as a testament to how much people loathe the big, fat, bulky UK 3-pin plug. Here's how it works: It's a concept design by designer Min Kyu Choi. There are certainly still many technical issues to resolve – putting numerous moving parts and hinges into that small an area will probably require a hell of a lot of (costly or difficult?) engineering to realize in a large-scale, cheap manner; the live wire looks perilously close to the neutral wire in the assembly, etc. The final comparison with the 3-way plug was also somewhat unfair, as the bulkiness of the plug-heads was also due to the transformer circuits (e.g. in Apple's plug). That said, I loved how the design approaches this prickly problem and tackles it with an elegant and innovative solution (loved the fuse idea – makes it easier to change too!), while still maintaining compatibility with current sockets. Kudos to the designer!
Typically in the plant/botany section of a nature museum, you'd find specimens of various plant species pressed flat and preserved in formaldehyde. These flat-pressed clippings lose much of their vibrancy in color, as well as the 3-dimensionality that one would naturally find in real, live plants. In come glass artists Leopold Blaschka and his son Rudolf. Using glass, they are able to sculpt and replicate a plant's 3-dimensional properties and color, giving a form almost indistinguishable from the real plants, including every intricate detail. Just how good are they? Apart from the samples in the photos above: "The astonishing accuracy of Harvard's glass flowers has surprised many of the museum's visitors, who, on seeing the display, ask to see the glass flowers."
Wow, this is simply an incredible motion sculpture. Initially it looks like it's simply a heart composed of carved gears of various proportions coming together to form a shape – but when the motion starts, magic happens, as gears of various ratios engage each other in a most harmonious way. If you look closely enough you'd also notice that on each gear the spacing between the teeth varies to accommodate the variation due to turning. It must have taken a gazillion tries (or genius mathematical calculation) before the gears could be totally in sync – and even reform back into the heart shape after a few cycles.
While the world was ooh-ing and ah-ing over Microsoft Surface some time ago for its engaging and intuitive interaction, researchers within the campus are moving on to yet another interesting interface – touch control beyond the screen. Called SideSight, the interface allows you to control a phone placed on a table by wiggling your fingers in the space around it. This helps to solve the problem that a touch screen is limited by the need for fingers to touch it – thereby limiting how small the screen can go. Personally I see applications of this more outside of the phone though – how often do you place a phone down on a table?
But think about things like ultramobile laptops and such – a virtual trackpad, if you will – and things start getting more interesting.
This, I think, is one of the holy grails of 3D design, be it product, character or others. ILoveSketch is an absolutely awesome program that straddles the sweet spot between sketching and 3D modeling – sketching on 3D planes and turning those sketches into curves in 3D space on the fly, giving the quickness and agility of sketches while also delivering the multi-view perspective capabilities of 3D models. Of course, nothing will replace a pair of good hands. No matter what software it is, if you can't throw a line the way you want it, or even conjure aesthetically pleasing designs in your mind before sketching (proportion, form, weight, curves, etc.), software alone isn't going to help. What it does, though, is increase the sweet spot and reduce the turnaround time between a sketch idea and a 3D representation. Now I'm just waiting for it to have a 'paint' function where you can render the views and have it turn out 3D surfaces based on the shading (now I'm thinking too much). That would be the holy grail.
The Formula 1 Grand Prix in Singapore in 2008 was the first for the republic – it was also the first time a Grand Prix took place at night. I've always been partial to street circuits as I feel they give a raw and yet romantic sense of speed – perhaps something that is easier to relate to for the average fan. Anyway, it was a magnificent night, with the floodlights gracing the track and the Singapore downtown skyline as a backdrop. Most Singaporeans probably haven't watched a single F1 race in their lives ("too boring!") – but last weekend droves turned up to check it out, and I'm sure many found a new appreciation for the sport. I was down near the track too as the cars zipped around on practice days – and certainly for me it felt quite a bit different from what I've seen: the noise, the smell and the sense of proximity (that the cars aren't just doing overhyped roundabouts on some circuit far away) gave me a different perception of the sport.
As seen from the examples above, Boston.com seems to be getting into a habit of amassing great events-reportage pictures (see their Olympics coverage too).
Thursday, September 13, 2012
Death Valley Retroactively Claims Title for Hottest Place on the Planet (see the New Yorker response here)
I came across an interesting abstract that outlines how meteorological researchers have retroactively declared Death Valley National Park the hottest place in the world, based on an observation in 1913 when the temperature reached 56.7 degrees Celsius (134 degrees Fahrenheit). The story was picked up by NBC News, where I first came across the information. The gist of the argument is that the meteorologists have doubts about the circumstances of the measurement in Libya: an obsolete thermometer, an inexperienced observer and improper positioning of the apparatus (apparently over asphalt). Nearby areas did not record corresponding record temperatures. So the record reverts to the 1913 reading at Death Valley.
It's an interesting convergence of records, because Death Valley made the news this year for another meteorological world record: the highest overnight temperature ever recorded on Earth. On July 12 of this year the low temperature at Death Valley, California dropped to only 107°F (41.7°C). Because the daytime high was 128°F (53.3°C), the average temperature of 117.5°F made that day the warmest 24-hour period on record. Ever.
PS: Jeff Masters was on this story first today.
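For anyone checking the arithmetic, the 24-hour average is just the mean of the two daily extremes: (128 °F + 107 °F) / 2 = 117.5 °F — or, in Celsius, (53.3 °C + 41.7 °C) / 2 = 47.5 °C.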
Group 24 - GM Inline Four Cylinder Engine
Executive Summary
Over the course of September through December we have been in the process of reverse engineering a GM 2.2L Four Cylinder Inline Engine. This process had four stages, with the first involving a basic overview of the capabilities of the group and an inspection of the product. After this we dissected it and re-evaluated the time we believed would be required for the project. We then analyzed every component based on a series of criteria and modeled a few of the parts that we deemed to be of greatest importance. During this step we also analyzed the effect of a specific type of failure that our engine could have: a small leak in the oil pan. Finally we reassembled the entire product and ensured that it worked as well as it did when we received it. This process has given all members of the group a much more in-depth understanding of the workings of a gasoline-powered engine. Although we were not able to actually test the engine, we were able to see everything that goes into the engine to make it run.
1. Request for Proposal
The first step in the reverse engineering process is to take an initial look at the engine and estimate what will be required for the process. For this reason we have put together a work and management proposal to cover the initial phase, product break-down. We have also created a road-map of where we believe we should be, and when, for the rest of the project. This plan can be seen in the Gantt chart in the management proposal.
Work proposal has been moved here
Management proposal has been moved here
Initial Product Assessment
The product we have received is a GM four-cylinder inline engine. The intended use of this product is, by definition, to convert energy (in this case, the chemical energy of fuels and oxidizers that undergo combustion to produce very high temperatures and pressures) into mechanical work or torque. This engine produces mechanical work from the energy in the high-pressure, high-temperature combustion chambers to spin the wheels, in most cases, of an automobile.
- This product would lie mainly under the field of home or personal use because of its commercial use in automobiles; however, this does not exclude it from professional use. In fact, this product can be very helpful in professional use, whether in research, in company cars, or in producing mechanical work in any innovative way outside of automotive use, such as using the mechanical work produced to generate electricity or to power generators.
- The four-cylinder engine's main function is to produce mechanical work for automobiles out of combustion, which produces very high pressures and temperatures. The mechanical work, or torque, produced by the engine is used to spin the tires of an automobile. The other functions of this product all relate back to the need for work. Whether linear or rotational work is needed, the engine can provide both. If rotational work is needed to spin the wheels of an automobile or to power a generator to produce heat or electricity, or if torque or mechanical work is needed to operate a large pulley or gear system, the engine can perform the task.
How It Works
Most parts in an engine are designed to make possible or increase the efficiency of the cylinders. The explosive expansion of gases is what provides the force driving the output shaft. There are four steps for an Otto cycle engine to work. The fuel-air mixture enters the cylinder via the open valve.
A small starter motor drives the piston upward, compressing the mixture. This mixture is ignited by the spark plug once the cylinder has reached its maximum compression. The increased pressure from the mixture's combustion drives the piston down, providing energy to the crankshaft. The gases in the cylinder are forced out through a second valve and then the process repeats, but the starter motor is no longer needed due to the momentum of the crankshaft.
There are many kinds of energy that are used and converted within a combustion engine. Electrical energy is used to power the starter motor, provide the spark to ignite the mixture in the cylinders and recharge the battery. Chemical energy can be found in the gasoline-air mixture which enters the cylinders for combustion, as well as in the material in the battery which provides electrical energy. Thermal energy is a result of the combustion process and is mostly a waste by-product. Mechanical energy can be found in all the moving parts of the engine, and producing it is the ultimate goal of an engine. This mechanical energy is usually harnessed to do work such as generating electricity or moving a vehicle.
When an engine is not running, something must provide the initial energy to compress and ignite the mixture. For this reason modern engines have a battery to provide power and a small starter motor to power the first cycle. Chemical energy in the battery is converted to electricity and flows through the wires to the starter motor. This produces a magnetic field which spins the output shaft of the motor, converting electricity to mechanical energy. This rotational mechanical energy is converted by the piston connecting rod to linear motion, compressing the fuel-air mixture in the cylinder. The spark plug receives electrical energy from the battery and ignites the compressed mixture, converting its chemical energy into thermal energy as well as creating a large amount of pressure against the cylinder and piston. This pressure moves the piston, converting the thermal energy to mechanical energy. The piston's linear motion is converted by the connecting rod to rotation of the crankshaft. The gases in the cylinder leave through another valve, taking much of their thermal energy with them and dumping it into the environment. Once the crankshaft starts spinning, the above process is repeated without further use of the starter motor. (A rough numerical sketch of this cycle's ideal efficiency appears below.)
Without some major overhauling, our engine is destined to remain a paperweight. Many of the parts have had sections cut away to allow a view of the inside, rendering them unusable. Among the cut-open parts are the oil filter, the main housing and the oil pan, and many of the tubes have also been cut open. If we were to replace quite a few parts it would most likely function, but the cost and time required to accomplish this make it hardly a worthwhile task. Therefore, it will remain a large, complex paperweight.
In current engine designs there are those considered to be extremely simple and others that are incredibly complex. On the simple end of the spectrum is the steam engine, a functional but very simplistic engine. Since the invention of the steam engine, other more efficient and more powerful cycles have been discovered and put into use. Of the designs that are actually in use, the jet engine is the most complex. Its complexity is based on the engineering that has gone into the design of every piece in order to optimize its efficiency and thrust.
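To put a rough number on the four-stroke cycle described above, the air-standard (ideal) Otto analysis gives thermal efficiency as a function of compression ratio alone: η = 1 − r^(1−γ). The sketch below is a minimal illustration under textbook assumptions — the compression ratios and the cold-air value γ = 1.4 are generic figures, not measurements from this particular engine.

```python
# Air-standard (ideal) Otto cycle: thermal efficiency depends only on
# the compression ratio r and the specific-heat ratio gamma of the gas.

GAMMA = 1.4  # cold-air-standard value for air; an idealization

def otto_efficiency(r, gamma=GAMMA):
    """Ideal Otto-cycle thermal efficiency: eta = 1 - r**(1 - gamma)."""
    if r <= 1:
        raise ValueError("compression ratio must exceed 1")
    return 1.0 - r ** (1.0 - gamma)

# Typical gasoline-engine compression ratios (illustrative values only).
for r in (8.0, 9.5, 11.0):
    print(f"r = {r:4.1f}  ->  ideal efficiency = {otto_efficiency(r):.1%}")
```

Real engines fall well short of these ideal figures because of heat loss, friction and incomplete combustion, but the trend — higher compression, higher efficiency — holds.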
Midway between these two extremes are two-stroke engines, four-stroke engines and diesel engines, in order of increasing complexity. The two-stroke cycle is simpler but less efficient than the four-stroke, and the diesel uses a different type of cycle entirely, relying on compression to ignite the fuel mixture. Our engine, on a scale of one to ten with the given extremes, would probably be about a 6.
- There are innumerable nuts, bolts and small pieces that make up each component of our product. The outside shell is basically large pieces of steel from molds, held together with bolts and attached in some way to every other piece of the engine. There are pipes running into and out of this shell to provide fluid flow, whether that be air, exhaust, coolant, fuel or oil. Inside the engine block are 4 pistons (1 per cylinder) with 2 valve assemblies per piston. The valves are opened and closed by the cams on the rotating camshaft. The pistons spin the crankshaft, which then goes through a system of gears to provide the desired output. Among the other important parts are the oil pan, oil pump and oil filter, without which the engine would quickly cease to function. Our engine also contains a carburetor to regulate the blending of air and gasoline that the cylinders receive. One final important piece of the engine is the spark plug, without which there would be no combustion.
- Engines as a whole are fairly complex, but when broken down most of their parts are quite simple. The majority of parts in an engine are metal from molds, mostly steel with some aluminum. What makes an engine complex is the number of simple parts that it combines and how tight the tolerances are on these parts. Parts in an engine have very tight tolerances in order to prevent the leak of high-pressure fluids. If the parts don't fit perfectly it is potentially hazardous as well as messy and inefficient. Although relatively simple in shape, every piece has been extensively tested and modified to optimize performance and efficiency. The majority of the engine is made out of only a couple of different metals, the most heavily used being steel. Aluminum is used in our engine for a few different components, but it is not nearly as prevalent as steel. As with most systems that use electricity, the engine contains copper wires. Copper is also used for small pipes in a few locations. A few small components, like caps on pipes and snaps connecting wires, are made out of plastic. Rubber is used in all the hoses on our product and in covers for a few other components. Prior to our receiving it most of the oil was removed, but in the oil filter there is still a small amount of oil as well as the expected filter paper.
- Due to the holes cut in our product, more components are visible than would be seen in a functional engine. Steel makes up the majority of what can be seen; the entire outside shell is composed of it, as are some of the components inside the engine block itself. Among other things, aluminum is used in the headers. Plastic coats all of the wires and is the material used in the caps on the oil reservoir and a few other things. Rubber and copper tubes provide fluid flow throughout the engine, with rubber forming the ones designed to be disconnected and moved easily. Copper and rubber also form the wires and insulation that connect the spark plugs and a few other components which require electric power. Oil and filter paper can be seen in the cross-section of the oil filter.
- We know that the engine contains spark plugs, so there is also a small amount of porcelain. In addition to what we can see, we know there is a lot more steel and aluminum in use. Aluminum camshafts and pistons are used, and the springs on the valve assemblies are made out of steel.
If I had to use this product I would be very happy with it. The majority of the developed world's population relies on engines very similar to this every day for transportation. This engine and others very similar to it provide power to personal generators, cars and most anything which runs on gasoline. Every time I've used a gasoline-fueled engine I have been very happy with it, given that the alternative is performing the same task by muscle alone.
- Engines are not thought of by most as being particularly comfortable. However, very few would say that their engines cause them any amount of discomfort either. Therefore, we can tentatively conclude that engines fall at about a 5 when it comes to comfort. While during operation they make a fairly large amount of noise, this is offset by the use of sound-deadening materials. As a result, the sound of the engine is barely noticeable when driving. Because driving provides an immensely more convenient and comfortable way of traveling than walking, this raises the overall comfort rating to about a 7 or 8, since there are no real drawbacks.
- The ease with which this product can be used varies somewhat from person to person, but for the most part it is considered quite simple. Starting a properly functioning engine inside a well-running automobile consists of inserting and turning the key. There are other small requirements that must be met, such as having the car in park, but for the most part it is as simple as a twist of the hand. For someone with no basic training (and therefore a lack of the knowledge above), getting the engine to start could be a small challenge, but for anyone with a driver's license it is incredibly easy to use and takes virtually no thought.
- An engine requires a few different types of "regular" maintenance. The first type is something done often, such as filling the tank with gasoline. This is something that anyone who drives is capable of doing. The other type of "regular" maintenance is that which is not required often but follows a regular cycle, such as once every x number of miles. This includes changing the oil, filters and hoses. While these tasks are not incredibly complex, the average person does not have the knowledge and experience to do them. Most people take their engines into a shop to have the work done for them by trained professionals.
Although the gasoline engine has become the most common, there are many different alternatives, each with their own advantages and disadvantages. The gasoline engine is by far the most common because of a combination of its complexity, cost and power output. Currently hydrogen and electric motors are the new big things in the field of automobiles, while steam was abandoned long ago and nuclear power cannot be used safely or effectively on a small scale such as a car. Currently electric motors are used for small portable tools, with gas engines being used in the higher-end versions of these same tools. Hydrogen is being tested in vehicles but is not mainstream yet. For cars, at least for a few more years, gasoline engines will continue to be the standard.
They are cheap compared to hybrids, have a tried-and-true design, and greater efficiency can still be squeezed out of them.
- Steam engines were the pioneers that preceded gas; they used steam to push a piston both forward and backward, producing two power strokes where a conventional gas engine gets only one. Unfortunately, steam engines were given up in favor of gas engines and saw little engineering after that, so they lack power and reliability compared to gas engines.
- On the other side of the spectrum, jet engines are some of the most advanced and powerful made to date. Because they are so powerful, though, they are rarely used for anything but airplanes, and they are very expensive to build, maintain and run.
- Electric motors have arguably been gasoline's main competitor in everything from weed-whackers to cars: they are clean, efficient, very reliable, and deliver instant power, but very few are able to match the power output of gas motors. The biggest downside to electric motors is that they need batteries to power them, which becomes a chore when powering large machines.
- After those come diesel motors, which have been around about as long as gas motors, yet have mostly been used for heavy loads, such as 18-wheelers, submarines, and earth-moving vehicles. Diesel engines are very powerful, do not require spark plugs, and run on fuel that is usually more resistant to price fluctuation. However, the engines are louder, have a distinctive smell, and do not work well in cold conditions.
- Another power source is the nuclear reactor. Unfortunately reactors are not very efficient, produce radioactive waste, cost far too much for commercial use, and are extremely large. Though they are an alternative, it looks very unlikely that nuclear power will become mainstream anytime in the near future.
- Finally, the newest technologies have yielded a new style of motor, powered by hydrogen, which has the potential to replace the gas engine. Hydrogen power is clean and efficient, and the only byproduct is water, but the technology is still new and will need years of engineering and adaptation to become a threat to gas engines. Hydrogen's reactivity also poses a safety risk in the event of an accident.

2. Preliminary Project Review
At this point we have done an initial overview of the project and given estimates as to the time required for each step in the process. We then took apart the engine and documented each step in the process. Once this was completed we made the necessary revisions to our Gantt chart based on the actual time things took compared to what we expected them to take.

Product Dissection Plan
This product is not something that is considered easy to take apart. It has many small pieces, and if things are not kept track of and organized as they are removed, putting it back together becomes a nightmare. Also, upon reassembly, things need to be tightened and attached very specifically in order for the engine to function properly. Because of this, most people take their engines into a shop to be worked on by professionals. While we are by no means professionals, the product does not have to be returned in working condition (it did not function when we received it). Bolts of various sizes hold most of the pieces of the engine together. This is because bolts are sturdy, yet removable, and can also be adjusted with a few twists of a socket wrench.
All three of these properties are essential, because an engine must be solidly built but must also be capable of being disassembled and adjusted when problems occur. The tools needed for this project were exactly what we expected; no other tools were needed for dissection. As expected, the engine was already on an engine mount, giving us easy access to any part of the engine and allowing us to turn it over to get at the bottom. Crescent wrenches and a socket set were used to unscrew every bolt and nut, with none requiring any other tool. For disassembly the pliers proved unnecessary; all nuts and bolts were easily accessed with sockets or crescents. As expected, the mallet was required to get the bearing mounts off the crankshaft and to get the pistons out of the cylinders. The chart below is a guide to taking apart an engine similar to this one. Included are difficulty rankings for each step, ranging from “1” to “3”, where:
- 1: The part required very little effort to remove, usually involving unscrewing a few bolts or pulling the piece off.
- 2: The part required some effort to remove, usually because of hard-to-reach bolts or parts that require force to remove.
- 3: The part was difficult or time-intensive to remove, usually because tight spaces within the engine made it hard to get at, or because of many long fasteners.
[Photos: front, back, right side, left side, and top views of the engine]

Causes for Corrective Action
Our group sat down before even beginning to take the engine apart and decided how we would go about doing this in an organized and efficient manner. We realized that, when it came to an engine, the planning would be more important than the actual disassembly. Our first action was getting into the lab and looking at what we had to work with. Adam, the car expert of the group, outlined the general plan by which we would take the engine apart. We later went back and began taking it apart piece by piece, starting from the outside and working our way in. As we went along we kept notes on the order in which each piece was removed. The pieces were then bagged, and a description of the piece, as well as the tool used to remove it, was placed on each bag. This plan worked perfectly: by doing so we essentially already had the disassembly chart (seen below) done. By reversing those steps and following the information on the bags, we will be able to easily reassemble the product when the time comes. In this way we have already overcome many of the future problems we could have run into, by staying organized and adhering to the original plan, which called for "removing each piece in the order which they are available" and "labeling and keeping track of parts regardless of size and shape."

3. Coordination Review
After completion of the product disassembly, each piece must be analyzed for various characteristics, along with an assessment of that part's complexity. For this reason we have put together a chart of all the parts with this information as well as a short summary answering the important questions about each part. For this we created a table of all the parts and came up with a rating system for the complexity of each one. Parts with a complexity of one are simple, either having a very simple shape or requiring very few processes to create. Parts with a complexity of five are incredibly complex, requiring many processes to form and a very intricate shape.
As expected, anything between these two extremes has characteristics of both, with three being an even mix of the requirements of the two extremes.
- Exhaust Pipe - Sand-molded steel is used for high wear resistance. No force is applied to this component except its weight. This component requires a particular shape in order to connect to other parts, but the shape does not affect the manufacturing process. A sand mold is used to make this component because of its good accuracy and low cost. It is a functional component.
- Intake - This component is made of plastic, so the temperature of the fuel-air mixture is not easily affected by the engine. No force is applied to this component except its weight. This component requires a particular shape to control the flow rate of fuel. The shape does not affect the manufacturing process. An injection mold is used to make this component because of its good accuracy and low cost. The manufacturing process would be more complicated if it were not made of plastic. It is a functional component.
- Head Gasket - Steel is used for strength. A portion of the force generated by combustion transfers to this component. This component requires a particular shape to fit between parts. The shape does not affect the manufacturing process: the part can simply be stamped and cut to the desired shape. The manufacturing process would be the same regardless of the chosen metal. It is a functional component.
- Dip Stick Tube - Painted steel is used for low cost and wear resistance. No force acts on this component except its weight. It does not require a particular shape as long as the dip stick fits inside. The shape does not affect the manufacturing process. This part is machined, bent from a tube, for low cost. The manufacturing process would be the same regardless of the chosen metal. It is a functional component.
- Dip Stick - Plastic and steel were used for low cost. No force acts on this component except its weight. This component does not require a particular shape. The shape does not affect the manufacturing process. This part is machined, cut out from a large piece of metal, so the cost is very low. The manufacturing process would be the same regardless of the chosen metal. It is a functional component.
- Oil Filter - This component is made up of steel and filter paper. No force is applied to it except its weight. It does not require a particular shape as long as it does the job. The shape does not affect the manufacturing process. It is machined, bent and cut from a piece of metal, for low cost. The manufacturing process would be the same regardless of the chosen metal. It is a functional component.
- Distributor - Aluminum transfers electricity to the spark plugs. Plastic insulates the aluminum from other components of the engine. No force acts on this component except its weight. It does not require a particular shape. The shape does not affect the manufacturing process. This part is molded plastic, for good accuracy. The manufacturing process would be more complicated if it were not made of plastic. It is a functional component.
- Distributor Mount - Aluminum is used for low cost. Very little force acts on this component, since it just holds the distributor in place. It requires a particular shape depending on the distributor. The shape does not affect the manufacturing process. It is die cast for high accuracy. The manufacturing process would be the same regardless of the chosen metal. It is a functional component.
- Distributor Mount Gasket - Aluminum is used for low cost. No force is applied to it except its weight. This component requires a particular shape to hold the distributor in place. The shape does not affect the manufacturing process. It is die cast for high accuracy. The manufacturing process would be the same regardless of the chosen metal. It is a functional component.
- Spark Plug Wire - Copper is used to transfer electricity; rubber and plastic insulate the copper from other components. No force is applied to it except its weight. It does not require a particular shape. The wires are encased in plastic and rubber. The manufacturing process would be the same regardless of the chosen metal, although the metal would affect the performance. It is a functional component.
- Header Cover - Aluminum is used for low cost. It seals the header, so some of the force created by combustion transfers to it. This component requires a particular shape to do its work. The shape does not affect the manufacturing process. It is die cast for high accuracy. The manufacturing process would be the same regardless of the chosen metal. It is a functional component.
- Coolant Tube - Steel is used for low cost. No force is applied to it except its weight. This component does not require a particular shape. The shape does not affect the manufacturing process. This part is machined and bent to the desired shape because of low cost. The manufacturing process would be the same regardless of the chosen metal. It is a functional component.
- Oil Pump - Aluminum is used for low cost. No force is applied to it except its weight. This component requires a particular shape. The shape does not affect the manufacturing process. This part is partly die cast for high accuracy. The manufacturing process would be the same regardless of the chosen metal. It is a functional component.
- Oil Pan - Steel is used for strength and is painted to increase wear resistance. No force is applied to it except its weight. This component does not require a particular shape. The shape does not affect the manufacturing process. This part is machined, cut out from sheet metal and bent to the desired shape, for low cost. The manufacturing process would be the same regardless of the chosen metal. It is a functional component.
- Piston Bearing - Aluminum was used for low cost. Little force is applied to it, since it holds the connecting rod in place. This component requires a very particular shape to connect to other parts. The shape does affect the manufacturing process because of the size: machining would give a high percentage error. This part is die cast for high accuracy. The manufacturing process would be the same regardless of the chosen metal. It is a functional component.
- Piston Head - Aluminum was used for strength. A high-magnitude force created by the combustion process transfers through the piston head. The shape of this component is very important, but it does not affect the manufacturing process. Die casting is necessary for high accuracy because the piston has to fit the cylinder perfectly. The manufacturing process would be the same regardless of the chosen metal. It is a functional component.
- Piston Connecting Rod - Steel was used for strength. It transfers the force generated by the combustion process to the crankshaft. This component requires a particular shape to connect to other parts. The shape does not affect the manufacturing process. It is sand cast for lower cost and good accuracy. The manufacturing process would be the same regardless of the chosen metal. It is a functional component.
- Crank Shaft Bearing - Aluminum was used for low cost. The rotation and vibration of the crankshaft create a force acting on it. This component requires a particular shape to connect to other parts. The shape does affect the manufacturing process because of the size: machining would give a high percentage error. It is die cast and machined for high accuracy. The manufacturing process would be the same regardless of the chosen metal. It is a functional component.
- Engine Block - This is the housing for internal combustion. Steel was used for strength, since the expansion of gas creates an enormous force on the surroundings. This component requires a particular shape so that other parts fit inside. The shape does not affect the manufacturing process. This part is die cast and machined for high accuracy. The manufacturing process would be the same regardless of the chosen metal. It is a functional component.
- Crank Shaft - Steel was used for strength. A large amount of force acts on it from the cylinders. It requires a particular shape to connect to other parts. The shape does not affect the manufacturing process. This part is sand cast for low cost and good accuracy. The manufacturing process would be the same regardless of the chosen metal. It is a functional component.
- Rocker Assembly - Steel was used for strength and low cost. Very little force is applied to it. It requires a particular shape to connect to other parts. The shape does affect the manufacturing process because of the size: machining would give a high percentage error. This part is die cast for high accuracy. The manufacturing process would be the same regardless of the chosen metal. It is a functional component.
- Push Rod - Steel was used for strength. Very little force transfers from the camshaft through the push rod. This component does not require a particular shape. The shape does not affect the manufacturing process. This part is machined, a piece of metal cut to the desired shape, for low cost. The manufacturing process would be the same regardless of the chosen metal. It is a functional component.
- Coolant Valve - Steel was used for strength and low cost. No force is applied to it except its weight. This component requires a particular shape. The shape does not affect the manufacturing process. This part is sand cast for low cost and good accuracy. The manufacturing process would be the same regardless of the chosen metal. It is a functional component.
- Spark Plug - Porcelain and steel were used: the steel conducts electricity and the porcelain insulates it. No force is applied to it except its weight. It does not require a particular shape. The shape does not affect the manufacturing process. This part is machined to the desired shape because of low cost. The manufacturing process would be the same regardless of the chosen metal. It is a functional component.
- Metal Bracket - Steel was used for strength. No force is applied to it except its weight. The painting process increases its wear resistance. It does not require a particular shape, and the shape does not affect the manufacturing process. This part is machined, cut from a large sheet to the desired shape. The manufacturing process would be the same regardless of the chosen metal. It is a functional component.
- Header - Steel was used for strength and low cost. Very little force is actually applied to this part. It requires a particular shape. The shape does not affect the manufacturing process. This part is cut out of a large sheet, bent to the desired shape and then painted.
The manufacturing process would be the same regardless of the chosen metal. It is a functional component.

An enormous amount of time and energy has already been spent by professional engineers to optimize this engine for its intended audience and usage. That said, we believe a few revisions could be made to increase efficiency and performance. These modifications can be made by the end user or by revising the molds used to make the parts.
- The first design revision is to expand the holes of the intake. Larger holes would increase the amount of air flowing into the engine, allowing more oxygen to mix with fuel in the engine. This results in a more powerful combustion reaction, increasing the engine's horsepower and overall performance. If done by an individual this is a very simple and easy modification which will noticeably increase the engine's performance. For the manufacturer, the change would be fairly expensive, requiring a new mold incorporating the changes. If the change were made when the old mold needed replacing anyway, it would not increase the manufacturer's cost much and would provide the user with a better product. The changes would have to be made to the intake ports on the engine block as well as to the plastic tubes that make up the intake system. Unless the company is getting new molds anyway, this revision does not make sense, due to the high cost of a new mold.
- Our second design revision is to increase the diameter of the exhaust pipes coming out of the cylinders. An increased bore size can greatly increase the flow of exhaust gases out of the cylinders (flow area scales with the square of the diameter, so even a modest increase in bore pays off), reducing the work done by the crankshaft to expel the exhaust and decreasing the amount remaining in the cylinder for the next cycle. Both of these effects will increase the efficiency, and thus the power output, of the engine. This revision would require some simple design modification, increasing the hole diameter in the block for the exhaust and increasing the diameter of the pipes leading from the engine. This change would require a new mold for the engine block, so it is fairly costly for the manufacturer to make.
- Our third design revision is to increase the distance between the intake system and the engine block, or to put heat shielding between the two. This revision would decrease the temperature of the air entering the engine, which yields two desired results: more power and better efficiency. By lowering the temperature of the charge going into the cylinder we get a more efficient combustion process, yielding more horsepower from a given amount of fuel. This change does have the drawback of increasing the size or weight of the assembled engine, but for many applications the increased size would not prove a problem.

Solid Modeled Assembly
Of the many components of an engine to choose from, we selected a piston for our solid model. A piston is a key component of an engine, in addition to being one of the parts most commonly recognized by the average person. Also, since we have no specialists in 3D solid modeling, it was convenient to select a part of moderate complexity, and in order to model the part we needed to be able to transport it, which was easy with a piston. These reasons all factored into why we selected the piston for our solid model. As for our selected CAD package, we decided to go with Autodesk Inventor.
This was one of the most user-friendly, easy-to-learn packages readily available. It has a free downloadable student version, giving members of the group access to the software from their own computers without requiring trips to a computer lab. Another reason we chose it was that some members of the group already had experience with Inventor, which will help our group in the long run.

An important part of creating a product is designing it so that it can stand up to extensive wear and tear in the real world. Consumers are not known for taking good care of the things they purchase, which is why an enormous number of products exist solely to protect consumer products from the carelessness of consumers. In the design and testing stages, engineering analysis is used to find weak points in a product and remedy them. This can be done by physically testing the product, usually with an automated system that exercises the product as a whole or specific components of it. Parts are also analyzed for weaknesses before they are built, using sophisticated modeling software that can simulate forces acting on pieces of the desired material. This relies on ideal properties of the objects, but it is a good test of a part's strength.

A problem that occurs relatively frequently in consumer automobiles is an oil leak. It could be caused by damage to any part with oil flowing through it, or by any point that is not sealing correctly and is letting fluid through. In either case the engine is losing vital oil and will tear itself apart if the level drops too low. Here we evaluate how long it would take for a slow leak to reduce the amount of oil to a dangerous level. We assume that the speed of the leak is constant and unrelated to the remaining volume of oil, and we treat oil as an incompressible substance.
Starting volume of oil: 4.5 quarts (260 in^3)
Dangerous volume of oil: 2.5 quarts (144.5 in^3)
Hole diameter: 0.1 inches
Leak speed (v): 1 in/min
With these numbers, the hole area is π(0.05 in)^2, about 0.0079 in^2, giving a leak rate of roughly 0.0079 in^3/min; draining the 115.5 in^3 between the starting and dangerous volumes therefore takes on the order of 14,700 minutes, or about 245 hours of running time. (A short script at the end of this report works through the same calculation.) As can be seen, even a very small leak can reduce the volume of oil available in an engine to dangerous levels. The decreased volume will also tend to have a higher percentage of foreign particles in it, such as metal shavings from wear and soot from the cylinders. Depending on when the car last had its oil changed, the minimum safe volume of oil could be significantly higher than what we assumed for this problem, due to the amount of junk mixed in with the oil.

4. Critical Design Review
At this stage in the project, reassembly of the unit takes place, along with an assessment of its current condition compared to the condition it was in initially. This is the last major step in the process; the next step is to put together a presentation giving an overview of the entire project and to make any necessary revisions to the wiki page. As expected, the tools required for reassembly are almost identical to those required for dissection. A socket set and crescent wrenches were used to tighten all the bolts, which ranged from 8 mm to 16 mm. A special tool was required to compress the rings on the piston heads, which was not needed during dissection. Pliers were useful for one step in which one part needed to remain still while another turned. The mallet was needed to get the pistons into place.
No other tools were needed to reassemble the product. We rated the difficulty of each step on a scale of one to five, with one being very simple and five very complex. A complexity of one means the step was very straightforward to perform: there were very few parts to reattach, they did not require special tools, and positioning of the part was fool-proof. A complexity of five means the part required special tools to reattach, that part location or orientation was not certain even with excellent documentation, or that there were a large number of parts with slight differences among them.
Q. Does your product run the same as it did before you disassembled it?
A. Without replacing a significant number of parts, our engine is destined to remain a paperweight. Many of the parts have had sections cut away to allow a view of the inside, rendering them unusable. Among the cut-open parts are the oil filter, the engine block, the oil pan and the headers. If we were to replace all of these parts the engine would most likely function, but given the cost in time, labor, and parts, it would be more economical to simply replace the whole engine.
Q. What were the differences between the disassembly/reassembly processes? Were the same sets of tools used? Were you able to reassemble the entire product?
A. We were able to reassemble our product entirely; everything went back together the way it came apart. For the most part assembly was identical to disassembly, but in reverse. Putting the pistons back in took a special tool to compress the rings, which was not needed during disassembly. Other than that one tool, the tools used were identical: a combination of sockets and crescent wrenches enabled us to put the entire thing back together.
Q. Are there any additional recommendations your group would make at the product level (operation, manufacturing, assembly, design, configuration, etc.)?
A. The one thing that could be simplified is the number of different socket sizes needed to handle all of the nuts and bolts. Often we would be working with one size of socket and the next set of bolts would require the size 1 mm larger or smaller. Simplifying the system so it used 2 sizes instead of 5 would greatly reduce the number of tools needed to put the engine together. As a whole the product seems well designed and the manufacturing process seems streamlined. There were a few spots on the engine where it looked like something could be attached, which we can only assume is because this engine design is used for a variety of tasks. The unused features could be removed, but that would require a different engine-block mold for each application.
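As a footnote to the engineering analysis above, here is the oil-leak drain-time estimate worked through as a small script. This is a minimal sketch under the same assumptions stated there (a constant leak velocity through a round hole, incompressible oil); the variable and function names are ours, chosen for illustration.

```python
import math

# Figures from the oil-leak analysis in the engineering analysis section.
V_START_IN3 = 260.0           # starting volume: 4.5 quarts, in cubic inches
V_DANGER_IN3 = 144.5          # dangerous volume: 2.5 quarts
HOLE_DIAMETER_IN = 0.1        # diameter of the leak hole, in inches
LEAK_SPEED_IN_PER_MIN = 1.0   # assumed constant, independent of oil level

def minutes_until_dangerous(v_start, v_danger, hole_d, leak_speed):
    """Time in minutes for a constant-velocity leak through a round hole
    to drain the oil from v_start down to v_danger (volumes in in^3)."""
    hole_area = math.pi * (hole_d / 2.0) ** 2   # cross-section of the hole, in^2
    flow_rate = hole_area * leak_speed          # volumetric leak rate, in^3/min
    return (v_start - v_danger) / flow_rate

t = minutes_until_dangerous(V_START_IN3, V_DANGER_IN3,
                            HOLE_DIAMETER_IN, LEAK_SPEED_IN_PER_MIN)
print(f"{t:,.0f} minutes, about {t / 60:,.0f} hours of running time")
```

Run as-is, this prints roughly 14,700 minutes (about 245 hours). Doubling HOLE_DIAMETER_IN cuts the drain time by a factor of four, since the leak rate grows with the square of the hole diameter.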
Sure, traipsing about the lunar surface is all fun and games when you've got a golf club and a flag for planting, but if you're there to work, those puffy, sausage-fingered space suits are more hindrance than help. Just look at Apollo 16 astronaut Charlie Duke as he valiantly muscles Lunar Sample 61016 from the ground at Plum Crater in 1972. The rock—dubbed "Big Muley" after NASA field geology team leader Bill Muehlberger—weighed 26 pounds and was composed of shocked anorthosite melded into a troctolitic fragment, most likely generated during the impact 1.8 million years ago that formed South Ray Crater, near the Apollo 16 landing site. The sample is now housed in the Lunar Sample Laboratory Facility at the Lyndon B. Johnson Space Center. [It's OK to Be Smart - Wikipedia]
Mobile devices as teaching tools are becoming a more and more common part of the American education experience in classrooms, from preschool through graduate school. A recent Pew Research Center survey found that 58% of U.S. teachers own smartphones — 10 percentage points higher than the national average for adults. Those teachers are building that tech-savviness into their lesson plans, too, by embracing bring-your-own-device policies and leading the push for an iPad for every student. In 2013, an estimated 25% of U.S. schools had BYOD policies in place and it's reasonable to assume those numbers have risen in the past two years. What do these mobile devices really add, though? Is there more to this tech trend than just grabbing the attention of students? Is mobile technology boosting classroom instruction, or is it all just a flashy way to accomplish the same things as analog instruction?

Research finds benefits of mobile technology
That same Pew Research Center survey asked a group of Advanced Placement and National Writing Project teachers about the educational impact of Internet technology in the classroom. Here's what those teachers had to say about mobile technology specifically:
- 73% of the teachers reported using mobile technology in their classrooms, either through their own instruction or by allowing students to use it to complete assignments
- English teachers are more likely to use mobile technology in the classroom than math teachers
- 47% of teachers strongly agreed, and an additional 44% somewhat agreed, that students need digital literacy courses to be successful academically and beyond.
As far back as 2010, reports were surfacing that mobile apps are not only engaging, but educational, for children as young as preschool. PBS Kids, in partnership with the US Department of Education, found that the vocabulary of kids ages three to seven who played its Martha Speaks mobile app improved by up to 31%. Abilene Christian University conducted research around the same time that found math students who used the iOS app "Statistics 1" saw improvement in their final grades. They were also more motivated to finish lessons on mobile devices than through traditional textbooks and workbooks. More recently, two studies that separately followed fifth and eighth graders who used tablets for learning in class and at home found that learning experiences improved across the board. 35% of the 8th graders said that they were more interested in their teachers' lessons or activities when they used their tablet, and the students exceeded teachers' academic expectations when using the devices. When self-reporting, 54% of students say they get more involved in classes that use technology and 55% say they wish instructors used more educational games or simulations to teach lessons. My own college students report back from student teaching in P-12 classrooms and say kids do seem to respond well to the stimulus of mobile devices. They stay on task, they correct mistakes in real-time and, most importantly, they get excited about learning.

Mobile devices also bring challenges
Alongside the benefits, mobile devices certainly come with their share of complications. Teacher authority, for example, is one area that can easily be undermined when mobile technology is allowed in classrooms. One of the often-mentioned benefits of mobile devices in classrooms is that they allow simultaneous work to take place — but does that undercut the master lesson plan? There is also the question of cost.
Of course there’s a price associated with schools purchasing the technology (and bringing teachers up to speed). But even having kids bring their own devices can be an issue. Bring-your-own-device policies may draw attention to situations where some students are more privileged than others, and there is always the potential for theft. Tech policies are also more difficult to implement on personal electronics than on school-owned ones. A tablet that is owned by a particular school district, for example, can come pre-installed with the right programs and apps and not allow for any outside play. A device that goes home with a student, however, can’t have the same rules. There are privacy issues to consider, too, especially now that tracking cookies are so prevalent on personal mobile devices. Do we really want third parties following our students on their learning paths? And should teachers have access to what students do on their mobile devices when outside the classroom? Mobile tech in classrooms: what works? Simply using mobile technology in the classroom does not guarantee a rise in comprehension or even the attention of students. So what types of mobile technology use make the most sense for classrooms? • E-readers. Part of the issue with traditional textbooks is that they’re so quickly outdated, both regarding subject matter and which formats best reach readers. E-readers eliminate that issue and allow real-time updates that are useful to students and teachers immediately, not the next school year when the new textbook is released. • Individual mobile modules. Within educational apps and games are options for individual student logins. This gives students the chance to work at their own pace, taking extra time in the areas where they need it most. • Text-response programs. Websites that allow teachers to send homework or test questions to students via text, and then ask for responses, do result in a more interactive approach to learning. Most of the programs that facilitate this technology allow for real-time feedback on the answers, allowing students to learn from mistakes and put it all in context in the moment. Pew Research found that American teens send an average of 60 text messages per day, making this an effective way to reach students in a medium that is close to universally used. The OneVille Project has tracked teachers and their experiences with texting high school students and has found that students become more motivated to come to school and to complete work on time when they have text message access to teachers. • Seamless cloud learning. Using mobile technology that is connected to the cloud means that students can transition from working in the classroom to working at home — or anywhere else — easily, as long as they have access to a phone, tablet or computer. This saves time and improves organizational skills for students. Mobile learning can and does make a positive difference in how students learn, and it’s not just because of the “cool” factor. When used the right way, mobile technology has the potential to help students learn more and comprehend that knowledge. In an ideal world, every student would have his or her own mobile device that syncs information between school and home, those devices would stay on task and the students would see significant gains in their academic achievement. Real-life classrooms are never picture perfect, though, not for any learning initiative. Mobile devices are not a silver bullet. 
In 1995, Steve Jobs famously said that the problems facing education need more than technology to be fixed. Competent, engaged teachers are more necessary than ever in the Information Age, and balancing mobile educational advantages with healthy teaching interaction is the key to maximizing the worth of both. This article was originally published on The Conversation. Read the original article. Matthew Lynch is Dean of Syphax School of Education, Psychology & Interdisciplinary Studies at Virginia Union University. Image by Brad Flickinger under Creative Commons license.
Florida has its oranges, Georgia has its peaches, and California has its lettuce. These leafy greens are the Golden State's biggest vegetable crop, bringing in $1.6 billion annually. Problem is, they require a lot of attention to raise properly. Historically, California has relied on its abundant, affordable work force. But with that labor pool shrinking and foreign competition increasing, lettuce farmers in America's Salad Bowl are facing rising labor costs and worker shortages. And that's where the fully automated Lettuce Bot comes in. The problem is put simply by Stavros G. Vougioukas, professor of biological and agricultural engineering at the University of California, Davis. "We need to increase our efficiency," he told the Economist. "But nobody wants to work in the fields." And since all those acres of lettuce won't thin, weed, and harvest themselves, a pair of Stanford engineers, Jorge Heraud and Lee Redden, developed a robot to do the work that humans wouldn't. The Lettuce Bot is a tractor-towed device that images a row of plants as it rolls past and compares the visual data against a million-point database of other pictures of lettuce (which must have been super exciting to compile) using a custom-designed computer-vision algorithm. It's reportedly 98 percent accurate, and if it spots a weed or a lettuce plant in need of thinning (lettuce will remain dwarfed if planted too close together), the Lettuce Bot gives it a shot of concentrated fertilizer, killing the offending plant while improving the growth prospects of the rest. Incredibly, even though it dawdles through the fields at just 1.2 mph, the Lettuce Bot can still thin a field as accurately and as quickly as 20 field hands. And the Lettuce Bot is only the start. Farmers across the country are finding it harder and more expensive to find enough human workers and are starting to look to robots to augment the labor force. In response, both private and public ventures have started pouring money into agrimech (agricultural mechanization) technology, and research is advancing quickly. Robots are being outfitted with suites of EO sensors, nimble manipulator arms, GPS guidance, and more processing power than the robots in Runaway. Other harvesting-edge agrimechs include an automated tulip-bulb planter and the seedling transplanter at the Vineland Research and Innovation Centre in Ontario, Canada, as well as an integrated mushroom harvester/trimmer/packager and a potted plant packer; the Harvest Vehicle HV-100 by Harvest Automation, which shuttles potted plants around nurseries; and the John Deere 7760, king of the mechanized cotton pickers. Picking fresh fruit designated for consumption rather than processing remains the ultimate goal of the industry, given how easily the product bruises. There's also the challenge of teaching the robot to accurately judge a fruit's quality and ripeness, as well as matching the dexterity of its human counterparts. "The hand-eye coordination workers have is really amazing, and they can pick incredibly fast. To replicate that in a machine, at the speed humans do and in an economical manner, we're still pretty far away," said Daniel L. Schmoldt at the U.S. Agriculture Department's National Institute of Food and Agriculture in a press statement. One company, Agrobot, is working to overcome those challenges with its 24-armed strawberry picking machine. The device reportedly evaluates each berry for size, color, and quality before plucking and depositing it on a conveyor belt to be packed by a human worker.
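To make the thin-or-spray behavior described above concrete, here is a rough sketch of such a decision loop in Python. It is purely illustrative: Blue River's actual vision pipeline is proprietary, and the class names, spacing threshold, and confidence cutoff below are our assumptions, not theirs.

```python
from dataclasses import dataclass

MIN_SPACING_IN = 10.0     # assumed target spacing between keeper lettuce plants
CONFIDENCE_CUTOFF = 0.9   # assumed; the reported 98% figure is overall accuracy,
                          # not a per-plant decision threshold

@dataclass
class Detection:
    label: str            # classifier output: "lettuce" or "weed"
    confidence: float     # classifier confidence, 0..1
    position_in: float    # position along the row, in inches

def positions_to_spray(detections):
    """Return row positions that should get a shot of concentrated
    fertilizer: confident weeds, plus lettuce growing too close to the
    previously kept plant (thinning)."""
    spray, last_kept = [], None
    for d in sorted(detections, key=lambda d: d.position_in):
        if d.confidence < CONFIDENCE_CUTOFF:
            continue                          # too uncertain: leave it alone
        if d.label == "weed":
            spray.append(d.position_in)
        elif d.label == "lettuce":
            if last_kept is not None and d.position_in - last_kept < MIN_SPACING_IN:
                spray.append(d.position_in)   # too close to a keeper: thin it
            else:
                last_kept = d.position_in     # far enough apart: keep it
    return spray

row = [Detection("lettuce", 0.99, 0.0), Detection("weed", 0.95, 4.0),
       Detection("lettuce", 0.97, 6.0), Detection("lettuce", 0.99, 12.0)]
print(positions_to_spray(row))  # -> [4.0, 6.0]
```

The real system would, of course, fold in the imaging and classification steps that this sketch simply takes as given.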
Unfortunately, many experts estimate that it will be another decade before a majority of our fresh fruit and vegetables will be picked by the likes of Lettuce Bot toiling in the fields. When we actually get there, though, it'll be nothing short of amazing. Just nobody tell Gene Simmons. [Physorg - Economist - Australian News - Blue River - top image: Runaway - Bottom image: Marcio Jose Sanchez / AP]
MANILA, Philippines—On Sept. 16, 1991, the Senate voted 12 to 11 to reject a new bases treaty with the United States, citing the pact as "one-sided and unequal." The agreement would have allowed the US to keep Subic Naval Base for 10 more years in exchange for about $200 million in aid. The last US ship, the helicopter carrier USS Belleau Wood, sailed away from Subic Bay in November 1992, ending nearly a century of US military presence in the Philippines. For the first time in so many years, the Philippine flag flew alone over the base. In May 1999, the Senate ratified the Visiting Forces Agreement (VFA), voting 18 to 5. The VFA paved the way for large-scale joint military exercises, called Balikatan (shoulder to shoulder), between Philippine and US forces. It allowed US servicemen to return to the Philippines after the closure of the US bases in Clark and Subic in 1991 and 1992. US soldiers arrived for the first Balikatan exercise in February 2000, the first of many. Last April, some 2,000 Filipino and 4,500 US troops participated in the annual large-scale military exercises, the 28th such bilateral exercise. Source: Inquirer Archives
University of Wisconsin scientists are studying how mixing the water in a lake could eliminate an invasive fish. The technology works by moving large air bladders up and down the depth of a lake, mixing the water and raising its temperature to where it is intolerable for the fish, said Jake Vander Zanden, supervisor of the study. The bladders are much like gigantic trampolines, Vander Zanden said. They're about 25 feet across. Air is pumped in and out so that each one rises and falls. The project is designed to eliminate invasive rainbow smelt from the small Crystal Lake in Vilas County, Wis. If successful, it may be applied to other lakes where smelt have invaded and decimated native populations of yellow perch, lake whitefish, northern cisco and commercially important walleye. "They are highly predatory and voracious," said Vander Zanden. "(They have) really big teeth and they specialize in feeding on the young of other fish species." The smelt can only live in the cold water at the bottom of the lakes. That's where the mixing comes in. A typical northern Wisconsin lake is stratified, with warm waters on top and colder waters on the bottom, said Jordan Read, the developer of the technology. "That's because warmer water is less dense than colder water." When the mixer, called a Gradual Entrainment Lake Inverter, homogenizes the temperature of the lake, native species are unaffected. But rainbow smelt will become stressed and perhaps die, the scientists said. After some initial testing on Crystal Lake last summer, Vander Zanden said it's hard to tell exactly what's happening with the rainbow smelt. "They didn't have a massive die off where, you know, they're all floating up at the surface," he said. "Our goal was to get the lake up to 21 degrees Celsius because laboratory studies that put rainbow smelt in 21 degrees show that (they) die almost immediately. "We warmed up the lake until about 21 degrees for a long period of time. Rainbow smelt were acting very weird. They clearly were stressed, but it looks like there are still some rainbow smelt out there in the lake." The scientists need to collect data up until the lake freezes before they can calculate how many live smelt remain, he said. "A lot of them have died, but there are still some that have somehow managed to survive." Young rainbow smelt like warm waters, but the adults are cold-water dwellers, said Zachary Lawson, a graduate student on the project. So it takes more than a year of mixing to kill both the adults and the young fish, which are invulnerable until they grow into adults. "The idea is to mix the lake the first year to remove the adults, the second year to remove the young from the previous year and then mix it the third year just to make sure we get all of them," Lawson said. Also, "the stressed fish from this past summer still have to make it through the entire ice-on season, which is normally a pretty difficult environment for fish anyways, (those) that aren't stressed," he said. The researchers mixed the lake for the first time last summer. Other methods for mixing lakes have been used to control water quality, such as providing aeration, said Read. That technique works by pumping air to the lake bottom and letting bubbles rise to the surface. But for deep lakes like Crystal Lake, the energy required to push compressed air 65 to 70 feet down makes that approach very expensive, he said. The new inverters do something similar but more efficiently.
The larger the size, the larger the wake, which means more water is mixed. “The end result is that we use less air to generate a similar amount of mixing potential,” Read said. The testing will go on for another two summers, after which the scientists hope to intensively monitor the lake for another year or two, Lawson said.
The MPC is the official body that deals with astrometric observations and orbits of minor planets (asteroids, natural satellites) and comets. We receive observations from all over the world, process and catalog them, then make them available via our public database. Maybe the most important thing the MPC does is keep track of NEOs (Near Earth Objects), flying pieces of rock or ice that live in the Earth's neighbourhood and might one day pose a threat to our planet if we cross paths and they crash into us. Fear not, there are no large-body impacts anticipated within the next few decades. If you want to check for yourself, we keep a list of what's coming close to us within the next 33 years. Past areas of research include X-ray binaries, both high- and low-mass, with neutron stars or black holes, both within the Milky Way and in other galaxies, such as the Small Magellanic Cloud, M31 (Andromeda), NGC 922 or IC 10.
If air embolism is suspected during a human autopsy, the head should be opened first and the surface vessels of the brain examined for gas bubbles, which must be prominent and definite, not merely a segmental breakup of the blood in the vessels with collapsed segments in between. Care should be taken not to pull on the sternum and ribs, to avoid creating negative pressure in the tissues, which may result in aspiration of air into the vessels. Before handling the thoracic organs, the pericardium is opened, the heart is lifted upwards and the apex is cut with a knife. The left ventricle will be filled with frothy blood if air is present in sufficient quantity to cause death. If the right ventricle contains air, the heart will float in water. Another method of demonstrating air embolism is to cut the pericardium anteriorly and grasp the edges with a hemostat on each side. The pericardial sac is filled with water and the heart is punctured with a scalpel and twisted a few times; bubbles of air will escape if air is present. Alternatively, a wide-bore needle attached to a 50 ml syringe filled with water is inserted into the right ventricle; if air is present, it will bubble out through the water. For the pyrogallol test, 4 ml of freshly prepared 2% pyrogallol solution is drawn into each of two 10 ml syringes. To the first syringe, four drops of 0.5 M sodium hydroxide solution are added. Gas is aspirated from the right side of the heart, the needle is removed and replaced with a stopper, and the syringe is shaken. If air (oxygen) is present, the mixture turns brown. In the second syringe some air is deliberately introduced and the test repeated as a control; this solution should turn brown, confirming that the reagent is working. A chest X-ray can also demonstrate air embolism. Air in the inferior vena cava can be demonstrated by puncturing it under water and looking for the escape of gas bubbles. If fat embolism is suspected, the pulmonary artery should be dissected under water and the escape of fat droplets noted.
Damage from concussions and the progressive deterioration of neurons in Alzheimer's look similar on brain scans, according to the latest study, and produce similar symptoms as well. In studying a group of concussion patients to determine which ones experienced the most severe symptoms, researchers from the University of Pittsburgh School of Medicine report that those who experienced mild traumatic brain injury after a blow to the head or a fall had brains that looked similar to those of Alzheimer's patients. Previous studies have documented changes in the brain resulting from trauma to the head, and some analyses have associated concussions with a higher risk of learning problems, depression and early death. The latest study, published in the journal Radiology, looked at 64 patients who experienced concussions and compared their MRI brain scans a year after their injury to those of 15 healthy controls over the same time period. The images picked up white matter, which is made up of nerves and their protective coating, myelin; this coating facilitates connections between nerves in different regions of the brain. Networks of these nerves are responsible for cognitive functions such as memory, planning and reasoning. The scans revealed that the damage to the white matter in the concussion patients was similar to that of Alzheimer's patients, whose nerves gradually die after being strangled by expanding plaques of amyloid proteins. The study also showed that concussion patients suffered from the same sleep-wake disturbances that plague Alzheimer's patients. These problems tend to make other cognitive issues, such as memory lapses and changes in behavior, worse. Both groups of patients also complained of being distracted by white noise, a common result of dysfunctional white matter that makes it increasingly difficult to filter irrelevant sounds and concentrate on specific ones. "When we sleep, the brain organizes our experiences into memories, storing them so that we can later find them. The parahippocampus is important for this process, and involvement of the parahippocampus may, in part, explain the memory problems that occur in many patients after concussion," says study author Dr. Saeed Fakhran, an assistant professor of radiology in the Division of Neuroradiology at the University of Pittsburgh, in a statement. The connection between concussions and Alzheimer's pathology could lead to a better understanding of how concussions affect the brain over time. The similarity to Alzheimer's nerve damage, for example, suggests that the damage caused by the initial trauma continues to spur other harmful changes, just as it does in Alzheimer's. "Our preliminary findings suggest that the initial traumatic event that caused the concussion acts as a trigger for a sequence of degenerative changes in the brain that results in patient symptoms and that may be potentially prevented. Furthermore, these neurodegenerative changes are very similar to those seen in early Alzheimer's dementia," says Fakhran. That doesn't mean that every concussion patient will develop Alzheimer's, but the growing body of knowledge in each field could lead to improvements in diagnosing and treating both conditions. Recognizing that brain injury from concussions progresses long after the trauma, for example, could heighten efforts to protect athletes at high risk of concussions from getting injured in the first place.
The logging grapple used in swing yarding is not moved by hydraulics but by cables. To open and close the tongs of the grapple, two cables are used: one is tensioned and the other is slacked off to move the tongs. A third cable goes back to the tail hold and then to the yarder. This third cable is used to pull the grapple out into the setting and to create the tension for lifting the grapple into the air. A grapple can be mounted to a tractor or excavator with a movable arm that may lift, extend/retract, and move side-to-side (pivot or rotate). Some machines also have a separate control for rotating the grapple. Simpler grapple machines consist of a hydraulically liftable fork, rake ("grapple rake"), or bucket and a movable, opposing "thumb" (one or more hooks or levers) that enclose and grip materials for lifting or dragging. A "demolition bucket" or "multi-purpose bucket" on a loader may also operate as a grapple, whereby the bottom and rear side of the bucket are hinged and can be forced apart or together with hydraulic cylinders. A lifting grapple is a type of hardware that can be attached to most large, heavy or bulky objects to provide a feature on the item to which material handling equipment can attach. Lifting grapples sometimes double as tie-downs, allowing heavy items to be held firmly in place by providing a point to which ropes or chains can be attached.
Issues related to the gender digital divide have been prominent in discussions of the information society. However, the paucity of statistical data on the subject makes it difficult, if not impossible, to make the case to policymakers, particularly those in developing countries, for the inclusion of gender issues in ICT policies, plans, and strategies. This paper surveys available gender ICT statistics and indicators and makes recommendations for filling the gaps that exist. Few gender ICT statistics are available because many governments do not collect ICT statistics consistently and regularly, and rarely are the data disaggregated by sex. The best practices are generally found in developed countries, with most developing countries lagging behind. Recent work that sheds light on women, gender, and the information society includes a major six-country study on the gender digital divide in francophone countries of West Africa and Orbicom's 2005 research on women in the information society. Although major composite ICT indices do not publish gender and ICT statistics, the potential remains for them to do so, and some indices encourage others to enrich their work with gender data.
Keywords: Gender; Digital Divide; Information Society; Gender Issues; ICTs; ICT Policy
The Fusco Brothers cartoon for April 13 (see it here) had a woman saying to a man in a bar: I REALLY DON'T CARE WHOM YOU CLAIM YOUR ANCESTOR WAS. The woman's remark is a little threefold grammar lesson in its own right. Here's the issue: is the whom correct? Well, there are three or four layers to this.

First, the word whom is understood as the predicative NP complement of was. In ordinary English this is a function that goes with accusative case on a pronoun: if you knock on my door and I call out Who is it?, you, as a normal person, knowing that I would recognize your voice, would say It's me. If you said It is I, I would not be nearly so inclined to let you in. It is I is an extremely formal usage, encouraged by really old-fashioned prescriptivists but not seriously used these days by anyone except the unbearably affected. That, other things being equal, would mean that the case on who should also be accusative.

But other things are not equal. Whom is very restricted indeed for most speakers, and it is highly implausible that it would be used as the complement of be. That is, hardly anyone would be inclined to say You claim your ancestor was whom?. We would be much more likely to say You claim your ancestor was who?. The case marking with the pronoun who is the exact opposite of what we find with personal pronouns like he or I.

In addition, accusative case on who does not typically survive when the word is shunted to the beginning of an interrogative or relative clause. That is, even for people who would say You were talking to whom? (e.g., to re-query an answer that wasn't heard correctly), it is highly unlikely that if they started the sentence with the wh-word they would use the accusative form (Whom were you talking to?). In normal conversation, the frequency of whom at the beginning of a clause (as opposed to preceded by a preposition) is now virtually zero. And this does not indicate near-universal error: there is no way Who were you talking to? can be regarded as incorrect use of the language. If you are teaching English to foreign learners, you should unquestionably teach them to use who in such contexts, not whom.

So, although we would expect accusative on an ordinary personal pronoun after was (as in It's me), what is typical on a wh-pronoun at the beginning of a clause is the nominative (Who were you talking to?). So in fact the accusative in the cartoon is not grammatical in Standard English as normally used. It is what is known as a hypercorrection.

But I should also mention that there seem to be some people who regularly and unconsciously say things like I wonder whom they imagined was going to believe them. That is, they appear to convert who to whom whenever it ends up following a verb, regardless of whether it would have been nominative if left in its logical position (compare Who was going to believe them?). We would expect those people to say I really don't care whom regardless of what followed. Possibly the woman in the cartoon is one of those speakers. In that case whom would indeed be the expected form. But it is not clear that the resultant variety of English could properly be called standard. Prescriptivists would regard I wonder whom they imagined was going to believe them as clearly an error, because whom is actually understood as a subject (the subject of was).

Do you find this confusing? I certainly hope so. Anyone who wasn't a bit confused by this point couldn't have been paying attention. Things are in a confused state. The form whom is dying.

For lots of speakers it is really only used right after a preposition in a relative clause (anyone to whom this is confusing), and perhaps sometimes at the beginning of a relative clause (those whom I have succeeded in confusing). It hardly occurs in interrogatives at all (I looked for whom in a couple of months of my recent email, mostly from fellow professors, and I didn't find a single example of it in an interrogative).

It isn't true, as the grammar pontificators often imply, that the rules are fixed and perfectly simple and everyone ought to know them and it's only laziness if you don't. Often the rules are quite difficult to puzzle out, and very complex and awkward when you've identified them and stated them explicitly. Recently the college-educated daughter of a linguistics professor I know wrote to her dad to ask about a perfectly simple example:

>> Hi Dad - I have a grammar question for you (actually, its my
>> co-worker's, but I'm the only one with a linguist for a dad...)
>>
>> The sentence is: There are 3.6 million New Yorkers on Medicaid, of
>> who/whom 2.4 million reside in New York City...
>>
>> Is it who or whom?

He was astonished to get this, because of course here it is very simple: unquestionably, whom would be normal in this case because it's right after a preposition, and that's the one place whom is still common. But young people in their twenties are beginning to lose their grasp even of that last bastion of whom.

It's not surprising. The present situation is multi-layered, subtle, and devilishly complex to describe. At least one linguist has decided there is no correct description of it at all; the situation is just chaos. It's also thoroughly confusing, and of course just about totally irrelevant to understanding. The study of grammar interests me academically, and although I am prepared to rage and fume against people who pontificate about it mistakenly, I don't blame people who find the who/whom distinction deeply puzzling. The woman in the Fusco Brothers cartoon probably guessed wrong about whether to say who or whom, but you can hardly say she had no excuse.

Posted by Geoffrey K. Pullum at April 17, 2004 04:23 PM
Jewelers have always recycled gold. It has intrinsic value, so no one simply tosses this precious metal into the trash bin. Those who wish to part with old pieces sell them to local retailers or metal refiners, who weigh each item and pay cash for the percentage of pure gold present. (Note: pure gold is 24K; anything less, i.e., 10K, 14K, or 18K, is a percentage of pure gold mixed with other, less valuable metals. This mixture is called an alloy.)

Generally speaking, most consumers are not aware of the origin of the gold in their jewelry. However, media attention is focusing on the harmful effects of unethical mining on communities and the environment, and the term "dirty" gold gets its moniker from such practices. Critics say that a single band of gold leaves behind more than twenty tons of mine waste. Some of this is simply rock; however, toxic metals and acid are also exposed, and these can leach into groundwater, creating a dangerous health hazard to wildlife and humans alike.

Concerned jewelry manufacturers and designers are examining their gold sources far more closely, and some have joined the "No Dirty Gold" campaign founded by EARTHWORKS, a non-profit organization dedicated to protecting communities and the environment from the destructive impacts of mineral development, in the U.S. and worldwide.

There are those who market their use of recycled gold as better for the environment. This permits us, as consumers, to be more responsible for and sensitive to the repercussions of our jewelry purchases. But the fact remains that the use of recycled gold has essentially no impact on the issues surrounding gold mining today.
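The karat figures mentioned at the top reduce to simple arithmetic: karat divided by 24 gives the fraction of pure gold in the alloy. Here is a minimal sketch of how a melt value could be estimated from that fraction; the function names and the spot price are illustrative assumptions, not market data or any refiner's actual payout formula.

    def gold_fraction(karat: int) -> float:
        """Fraction of pure gold in an alloy of the given karat (24K = 100%)."""
        return karat / 24

    def melt_value(weight_grams: float, karat: int, spot_per_gram: float) -> float:
        """Value of the pure-gold content of a scrap item at a given spot price."""
        return weight_grams * gold_fraction(karat) * spot_per_gram

    # A 10 g, 14K ring contains 10 * (14/24), or about 5.83 g, of pure gold.
    print(round(melt_value(10.0, 14, 60.0), 2))  # 350.0 at a hypothetical $60/g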
According to a recent report, "nearly five times as many people have celiac disease today than did during the 1950s" and "the rate of celiac disease has doubled every 15 years since 1974 and is now believed to affect one in every 133 U.S. residents."

I know a lot of people think gluten allergies (and a lot of allergies in general) are all in the mind, or that people just make them up, but as someone who is allergic to oh so many things, I can assure you that most of the time that is not the case. I would definitely not choose to "make up" an allergy that often forces me to miss out on eating anything other than a side salad at a dinner function, but I can only speak for myself. I'll let Dr. Alessio Fasano do the rest of the talking:

"There are many theories out there, not all independent of each other and not all of them true," Fasano said. Celiac disease is an inherited autoimmune disorder that causes the body's immune system to attack the small intestine, according to the U.S. National Institutes of Health and the University of Chicago Celiac Disease Center. The attack is prompted by exposure to gluten, a protein found in grains such as wheat, rye, and barley. The disease interferes with proper digestion and, in children, prompts symptoms that include bloating, vomiting, diarrhea, or constipation. Adults with celiac disease are less likely to show digestive symptoms but will develop problems such as anemia, fatigue, osteoporosis, or arthritis as the disorder robs their bodies of vital nutrients.

There are plenty of theories as to why this is happening, including the "we're just too clean a society, so our immune systems aren't as developed as they should be" theory, courtesy of Carol McCarthy Shilson of the University of Chicago Celiac Disease Center, and the theory that "breast-feeding will protect you, or prevent celiac disease." Either way, Shilson says the key is early intervention. If you think you may have celiac disease (even if you were previously symptom-free), it's worth getting screened just to make sure.

And if you do find out you have it and you're worried about your ability to carry on a life of bread-based happiness post-diagnosis, I will say this: 1. Plenty of other people have it. 2. It's 2011. You have the internet and health food stores (hell, even most grocery stores) carrying a ton of gluten-free products.

Celiac disease on the rise in the U.S. [USAToday]
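As a side note, the "doubled every 15 years" figure quoted above is ordinary compound growth. The sketch below makes the arithmetic concrete; the anchor year attached to the 1-in-133 prevalence is an assumption chosen for illustration, and this is a toy projection, not epidemiology.

    def celiac_rate(year: int, anchor_year: int = 2005,
                    anchor_rate: float = 1 / 133,
                    doubling_years: float = 15) -> float:
        """Prevalence in `year`, assuming it doubles every `doubling_years`."""
        return anchor_rate * 2 ** ((year - anchor_year) / doubling_years)

    # Rewinding two doublings (30 years) from 1 in 133 gives about 1 in 532,
    # in the same ballpark as the "nearly five times" increase quoted above.
    print(f"1975: roughly 1 in {1 / celiac_rate(1975):.0f}")
    print(f"2005: roughly 1 in {1 / celiac_rate(2005):.0f}")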