INPUT OUTPUT DEVICES Presentation Description: The various input, output and secondary storage devices are presented here. Presentation Transcript: Slide 1: Input devices Keyboard : Keyboard Keyboard : Keyboard A keyboard has many small buttons on it called keys. There are 101 keys (approx.) on a keyboard. It is used to give input. The keyboard is connected to the CPU by a wire. Keyboards : Keyboards Keyboards are of FOUR types: Normal keyboard, Flexible keyboard, Ergonomics keyboard, Virtual keyboard. Slide 5: Normal keyboard Flexible keyboard : Flexible keyboard Ergonomics keyboard : Ergonomics keyboard Slide 9: Ergonomics keyboard & mouse Virtual Keyboard : Virtual Keyboard Slide 11: Working using a Bluetooth network Slide 13: Mouse Slide 14: Mechanical mouse Slide 15: Optical mouse Slide 16: Infrared (IR) or radio frequency cordless mouse Slide 17: A mouse with many buttons Slide 18: Trackball mouse Slide 19: Stylus mouse Slide 20: Cordless 3-D mouse Slide 21: Scanners Slide 22: Flatbed Scanners Slide 23: Transparency Scanners Slide 24: Handheld Scanners Slide 25: Drum Scanners Slide 26: Barcode reader A barcode is an optical machine-readable representation of data relating to the object to which it is attached. Originally barcodes represented data by varying the widths and spacings of parallel lines, and may be referred to as linear or one-dimensional (1D). A barcode reader (or barcode scanner) is an electronic device for reading printed barcodes. Slide 27: BARCODE READER Slide 28: Digital camera Slide 29: DIGITAL CAMERA Slide 30: Touch screen Slide 31: TOUCH SCREEN Slide 33: MICR Magnetic Ink Character Recognition, or MICR, is a character recognition technology used primarily by the banking industry to facilitate the processing of cheques; MICR characters make up the routing number and account number at the bottom of a cheque. The technology allows computers to read information (such as account numbers) from printed documents. Unlike barcodes or similar technologies, however, MICR codes can be easily read by humans. Slide 34: MICR Slide 35: OCR Optical character recognition, usually abbreviated to OCR, is the mechanical or electronic conversion of scanned images of handwritten, typewritten or printed text into machine-encoded text. It is widely used as a form of data entry from some sort of original paper data source, whether documents, sales receipts, mail, or any number of printed records. Slide 36: OCR The dual illumination OCR reader captures code lines from Machine Slide 37: OMR Optical Mark Recognition (also called Optical Mark Reading and OMR) is the process of capturing human-marked data from document forms such as surveys and tests.
An OMR machine is used in evaluating MCQs. Slide 38: OMR Slide 39: LIGHT PENS A light pen is a computer input device in the form of a light-sensitive wand used in conjunction with the computer's CRT monitor. It allows the user to point to displayed objects, or draw on the screen, in a similar way to a touch screen but with greater positional accuracy. A light pen can work with any CRT-based monitor, but not with LCD screens, projectors or other display devices. However, because the user was required to hold his or her arm in front of the screen for long periods of time, the light pen fell out of use as a general-purpose input device. Slide 40: LIGHT PEN Slide 41: MAGNETIC READER The Universal Magnetic Swipe Reader can read any combination of up to three tracks of magnetic information with a single swipe in either direction. An audible tone and a visual LED signal indicate a successful read. The Universal Magnetic Swipe Reader accepts both high- and low-coercivity magnetic cards, and can read all ISO 7811, AAMVA, California driver's license, and custom data formats. Slide 42: MAGNETIC READER Slide 43: SMART CARD READER Smart card reader/writers make access control and read/write applications more powerful and more versatile, and, most importantly, offer enhanced security through encryption and mutual authentication. Slide 44: SMART CARD READER Slide 45: NOTES TAKER A smart note taker (or digital pen) is a portable handwriting capture device that uses handwriting recognition technology to capture your handwritten notes, drawings and sketches anytime, anywhere, and then upload, file or email them once connected to a computer. Slide 46: NOTE TAKER Slide 47: A microphone (a mic or mike) is an acoustic-to-electric transducer or sensor that converts sound into an electrical signal. Microphone Slide 48: Output devices Monitor : Monitor Monitor : Monitor The screen of the computer is known as the monitor. It looks like a TV, but it is not a TV. It displays alphabets, numbers, pictures and movies. The size of the monitor is measured in INCHES. TYPES OF MONITORS : TYPES OF MONITORS CRT (Cathode Ray Tube) LCD (Liquid Crystal Display) Plasma Touch Screen OLED (Organic Light Emitting Diode) CRT Monitor : CRT Monitor Nowadays this kind of monitor is no longer in production. LCD (Liquid Crystal Display) : LCD (Liquid Crystal Display) Plasma : Plasma Slide 57: Speakers, or multimedia speakers, are speakers external to a computer that disable the lower-fidelity built-in speaker. They often have a low-power internal amplifier. SPEAKERS Slide 58: PRINTER A printer is a peripheral which produces text or graphics from documents stored in electronic form, usually on physical print media such as paper or transparencies. TYPES OF PRINTERS : TYPES OF PRINTERS IMPACT PRINTERS Dot-Matrix Printers Daisy-Wheel Printers Line Printers NON-IMPACT PRINTERS Inkjet printers Laser printers Solid ink printers Dye-sublimation printers Thermal wax printers Thermal autochrome printers Slide 60: IMPACT PRINTERS Impact printers are the oldest printing technologies still in active production. Some of the largest printer vendors continue to manufacture, market, and support impact printers, parts, and supplies. Impact printers are most functional in specialized environments where low-cost printing is essential. Slide 61: 1. Dot-Matrix Printers Here the paper is pressed against a drum (a rubber-coated cylinder) and is intermittently pulled forward as printing progresses.
The electromagnetically driven print head moves across the paper and strikes the printer ribbon situated between the paper and the print head pins. The impact of the print head against the printer ribbon imprints ink dots on the paper, which form human-readable characters. Most dot-matrix printers have a maximum resolution of around 240 dpi (dots per inch). This technology is ideal for environments that must produce carbon copies. Slide 62: Dot-Matrix Printers Slide 63: If you have ever worked with a manual typewriter before, then you understand the technological concept behind daisy-wheel printers. These printers have print heads composed of metallic or plastic wheels cut into petals. Each petal has the form of a letter (in capital and lower-case), number, or punctuation mark on it. When the petal is struck against the printer ribbon, the resulting shape forces ink onto the paper. Daisy-wheel printers are loud and slow. They cannot print graphics, and cannot change fonts unless the print wheel is physically replaced. 2. Daisy-Wheel Printers Slide 64: Metal daisy wheel for Xerox & Diablo printers Daisy-Wheel Printers Slide 65: 3. Line Printers Somewhat similar to the daisy-wheel printer is the line printer. Line printers allow multiple characters to be simultaneously printed on the same line. The mechanism may use a large spinning print drum or a looped print chain. As the drum or chain is rotated over the paper's surface, electromechanical hammers behind the paper push the paper (along with a ribbon) onto the surface of the drum or chain, marking the paper with the shape of the character on the drum or chain. Line printers are much faster than dot-matrix or daisy-wheel printers, but they tend to be quite loud, have limited multi-font capability, and often produce lower print quality than more recent printing technologies. Because line printers are used for their speed, they use special tractor-fed paper with pre-punched holes along each side. This arrangement makes continuous unattended high-speed printing possible, with stops only required when a box of paper runs out. Slide 66: Line Printers Slide 67: NON-IMPACT PRINTERS All impact printers print slowly due to the slow mechanical movement of the print head. Efforts were made to eliminate the mechanical motion of the print head to increase the speed of printers, and as a result non-impact printers were developed. Non-impact printers use methods for creating an image that don't involve actually touching the paper. Slide 68: 1. Inkjet printers Inkjet printers spray tiny drops of ink onto the paper. Slide 69: 2. Laser printers Laser printers use static electricity to arrange toner on paper to form an image. The toner is then bonded to the paper with heat. Slide 70: 3. Solid ink printers Solid ink color printers and MFPs are also very efficient in their use of ink. Because there is no cartridge, virtually all the ink gets used, so customers get more out of their printer. Solid ink sticks are easy to store: the packaging is so small that you can store your extra set of ink sticks in a desk drawer. Slide 71: 4. Dye-sublimation printers (dye sublimation ink for digital inkjet printers) Dye-sublimation printers use rolls of transparent film that are embedded with solid dyes. The film is heated to vaporize the dye, which then permeates the paper's surface and returns to solid form. Slide 72: 5. Thermal wax printers (thermal ribbons) Thermal wax printers use a ribbon that passes in front of tiny heated pins that melt the wax from the ribbon onto the paper. Slide 73: 6.
Thermal autochrome printers Thermal autochrome printers use paper embedded with dye. Slide 74: Plotter A plotter is sometimes confused with a printer, but a plotter uses line drawings to form an image instead of using dots. A common type of plotter is one that uses a pen or pencil, usually held by a mechanical "arm," to draw lines on paper as the image is produced. It may be a component that is added to a computer system or it may have its own internal computer. It can be used to create layouts, diagrams, specs, and banners. Slide 75: Plotter Slide 76: A hard disk drive (HDD; also hard drive, hard disk, or disk drive) is a device for storing and retrieving digital information, primarily computer data. Hard Disk Drive It consists of one or more rigid (hence "hard") rapidly rotating discs (platters) coated with magnetic material, and with magnetic heads arranged to write data to the surfaces and read it from them. Slide 77: Magnetic tape is a medium for magnetic recording, made of a thin magnetizable coating on a long, narrow strip of plastic film. It was developed in Germany, based on magnetic wire recording. Devices that record and play back audio and video using magnetic tape are tape recorders and video tape recorders. A device that stores computer data on magnetic tape is a tape drive (tape unit, streamer). MAGNETIC TAPE Slide 78: Compact Cassette; 7-inch reel of ¼-inch-wide audio recording tape, typical of consumer use in the 1950s–70s. MAGNETIC TAPE Slide 79: FLOPPY DISK A floppy disk, or diskette, is a disk storage medium composed of a disk of thin and flexible magnetic storage medium, sealed in a rectangular plastic carrier lined with fabric that removes dust particles. They are read and written by a floppy disk drive (FDD). Floppy disks, initially as 8-inch (200 mm) media and later in 5.25-inch (133 mm) and 3.5-inch (89 mm) sizes, were a ubiquitous form of data storage and exchange from the mid-1970s well into the first decade of the 21st century. Slide 80: 8-inch, 5 1⁄4-inch, and 3 1⁄2-inch floppy disks with respective drives Slide 81: In 1971, IBM introduced the 8-inch floppy disk; its initial capacity was about 100K bytes (100,000 characters). In 1980, Sony introduced the 3 1/2-inch floppy disk; initially holding about 400K, its current capacity is 1.4 MB per disk. In 1984, the Apple Macintosh had an internal 3 1/2-inch floppy drive capable of storing 400K of data. Slide 82: An optical disc (OD) is a flat, usually circular disc which encodes binary data (bits) in the form of pits (binary value of 0 or off, due to lack of reflection when read) and lands (binary value of 1 or on, due to a reflection when read) on a special material (often aluminium) on one of its flat surfaces. OPTICAL DISK Slide 84: The end
Introduction to the Tegra series The Tegra series is a system-on-chip developed by NVIDIA. Unlike Tegra 1, which is based on the ARM11 design (slow!), Tegra 2 skips the ARM Cortex A8 design (pretty fast!) and goes straight to the Cortex A9 design (very fast!). Cortex A9 is expected to significantly outperform Cortex A8 based processors, but currently no benchmark results exist to prove this yet (well, almost - read further). Currently no smartphones on the market are using Cortex A9 based processors. Future Cortex A9 based processors include 3rd generation Qualcomm processors, Samsung's Orion and Texas Instruments' OMAP 4 platform. Here are the phones you will find today that use the respective ARM designs: - ARM 11: iPhone 3, HTC G1, Magic, Hero, etc. - ARM Cortex A8: iPhone 3GS, iPhone 4, Nexus One, Motorola Droid, Droid X, Droid 2, Samsung Galaxy S, HTC Evo, HTC Desire HD, etc. - ARM Cortex A9: None yet. Tegra 2's GPU core is NVIDIA's own design. It is essentially the same design as the one found in Tegra 1, but faster. NVIDIA claims about a 2-3x performance improvement. I believe it, since the bandwidth alone between the processor and the memory increased 2x. They also claim far less power consumption when the Tegra 2 chip is decoding music and video compared to Qualcomm's Snapdragon processors. If this is true, Tegra 2 based phones will last significantly longer when listening to music and watching YouTube videos. So what makes the Tegra 2 CPU core go fast? This is really asking what makes ARM Cortex A9 fast. ARM has implemented out-of-order execution on its dual-issue pipeline. It also features a shorter pipeline, which helps complete instructions faster. Cortex A9 is expected to feature significantly better IPC (instructions per clock) than Cortex A8. In other words, a 1GHz Cortex A9 core will perform significantly better than a 1GHz Cortex A8 core (about a 25% improvement!). And on top of that, Cortex A9 is known to clock well - up to 2GHz. However, whether NVIDIA can scale the other components in Tegra 2 is a separate question altogether. So in short, a 1GHz dual-core Cortex A9 will destroy any 1GHz single-core Cortex A8 based processor. Also, NVIDIA's design can be extended to quad-core packaging. Eventually, we may see quad-core Tegra 2 processors if there is a market for them. What we are seeing here is essentially a repeat of the evolution of desktop processors in a smaller, more efficient package. Devices using Tegra 2 Some vendors are producing devices based on Tegra 2 already. One of the early adopters is Toshiba. The Toshiba AC100 is a netbook that uses a Tegra 2 processor. Check out the Quadrant benchmark score for the AC100: Not impressed? Based on this observation, Samsung's Galaxy S (which is based on the ARM Cortex A8 design) outperforms the AC100! I think the reasons are as follows: - Quadrant measures overall system performance, including GPU performance. The Galaxy S features a great-performing GPU for the current generation - Hummingbird's PowerVR SGX540. No phones can perform at this level as of today. Only the HTC G2/Desire Z/Desire HD are closing the gap using their second-generation Snapdragon processors. Perhaps the GPU in Tegra 2 isn't as good as the PowerVR SGX540. Although 2-3x faster than the GPU found in Tegra 1, Tegra 1 was not a speed demon by today's standards. - Quadrant isn't dual-core ready - I doubt this application utilizes two cores in Tegra 2. I would be very surprised if ANY smartphone applications are multi-core aware yet. This will change quickly, however, as more dual-core hardware arrives on the scene.
Once Quadrant is updated to handle dual-core processors, I believe the score will improve significantly. - The AC100 ran at a netbook screen resolution, which is higher than that of today's smartphones. It is more than likely that this would have impacted the 2D/3D results. Motorola, LG and Samsung have announced that they will produce smartphones based on Tegra 2 processors in the near future. UPDATED (Oct 4, 2010): Verizon/Motorola have announced a new model called Droid T2 to be released by Christmas this year. Notice that they claim this will be a dual-core phone, and note the name "T2" (as in Tegra 2). What do I think of the Tegra 2 processor? I like it. It IS based on the latest-generation ARM Cortex A9 design. Tegra 2 is the first SoC with this state-of-the-art design to come to the market for smartphones. With the combination of the IPC improvement and the dual-core design, it will run any typical smartphone applications being used today (and perhaps in the near future) very well. Its GPU appears to be a bit weak based on first impressions. Samsung's Hummingbird GPU might still have an edge here. So if 3D games are all you do on your phone, the Galaxy S may work out better for you. So until other Cortex A9 based processors from Samsung, Qualcomm and Texas Instruments arrive on the scene, this might be the fastest solution. NVIDIA has intentionally left the modem component out of the Tegra 2 SoC. I don't blame them for not tackling the mess we are currently in with the presence of GSM/CDMA/HSPA/LTE using different bands depending on where you live. But it does mean individual phone manufacturers will have to add their own modems instead, which is an overhead in both time and money. Will LG and Samsung deliver competitive phones using Tegra 2 chipsets? We will find out soon. The specification for the Tegra 2 series can be found here: Some interesting points from the specification: - 1080p H.264/VC-1/MPEG-4 Video Decode - 1080p H.264 Video Encode - True dual-display support - 1080p (1920x1080) HDMI 1.3 capable - WSXGA+ (1680x1050) LCD capable - NTSC/PAL TV output UPDATED (Sep 20, 2010): Apparently there is some question about Tegra 2's usefulness due to its inability to handle High Profile 1080p H.264 encoded files. And indeed this appears to be true based on the recent news of Boxee switching from Tegra 2 to Intel's Atom CE4100. I am also curious to see what NVIDIA has to say about this. This will probably not affect smartphones much, but on tablets and netbooks it may become an issue for some users.
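To put the CPU claims above in concrete terms, here is a back-of-the-envelope sketch (my own illustration, not an NVIDIA or ARM figure) built only from the numbers quoted in this post: roughly a 25% IPC advantage for Cortex A9 over Cortex A8 at the same clock, and two cores in Tegra 2. The perfect-scaling and 70%-scaling cases are assumptions for illustration; real multi-threaded workloads will fall somewhere in between.

```python
# Back-of-the-envelope CPU comparison using only the figures quoted in the post.
# Assumptions (mine, not NVIDIA's or ARM's): throughput ~ clock x IPC x effective cores,
# no memory-bandwidth limits, and a chosen parallel-scaling factor.

def relative_throughput(clock_ghz, ipc, cores, scaling=1.0):
    """Crude relative throughput: clock x IPC x effective core count."""
    effective_cores = 1 + (cores - 1) * scaling  # scaling=1.0 means perfect dual-core scaling
    return clock_ghz * ipc * effective_cores

baseline_a8 = relative_throughput(1.0, 1.00, 1)              # 1GHz single-core Cortex A8
single_a9   = relative_throughput(1.0, 1.25, 1)              # ~25% IPC gain at the same clock
tegra2_best = relative_throughput(1.0, 1.25, 2)              # dual-core A9, perfect scaling
tegra2_70   = relative_throughput(1.0, 1.25, 2, scaling=0.7) # assumed 70% scaling

print(f"1GHz Cortex A8 (baseline):        {baseline_a8:.2f}x")
print(f"1GHz Cortex A9, single core:      {single_a9:.2f}x")
print(f"1GHz dual-core A9, ideal scaling: {tegra2_best:.2f}x")
print(f"1GHz dual-core A9, 70% scaling:   {tegra2_70:.2f}x")
```

Under these assumptions the dual-core part lands at roughly 2.1-2.5x the single-core A8 baseline, which is the intuition behind the "will destroy any 1GHz single-core Cortex A8" remark, provided the software can actually use both cores.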
By Sanjay Badri-Maharaj On 1st December 1948, Costa Rica (pop. 4.5 million), under the leadership of President José Figueres Ferrer, abolished the Costa Rican army.1 Figueres, leader of the Social Democratic party, had emerged victorious in a 44-day civil war during which his forces – based in part around the 700-strong Caribbean Legion – defeated Communist guerillas and the Costa Rican army, and established a provisional junta known as the Junta Fundadora (Founding Junta), which held power for 18 months. This junta enacted a series of far-reaching reforms to Costa Rica's social and political structure before voluntarily demitting office (paving the way for democratic elections). One of the most far-reaching reforms was the abolition of the army, which was later enshrined in Article 12 of the Costa Rican Constitution, which states:2 The Army as a permanent institution is proscribed. For the vigilance and conservation of the public order, there will be the necessary forces of police. Military forces may only be organized by a continental agreement or for the national defense; one and the other will always be subordinate to the civil power: they may not deliberate, or make manifestations or declarations in an individual or collective form. This single step, never altered by successive governments, has ensured that Costa Rica, unique among the countries of Central America, has never been plagued by the bane of civilian or military dictatorships in its political history post-1948, and has been viewed as having established strong democratic and constitutional credentials supported by independent institutions.3 Through the darkest days of the Cold War, when guerilla movements, insurgencies and death-squads plagued many of its neighbours, Costa Rica remained a bastion of stable democratic governance that served as a peacemaker and mediator with its neighbours. Yet Costa Rica is far from immune to the security challenges that plague South and Central America. The country has become a major hub for transnational crime and drug cartels, moving away from merely being a transit point to becoming a storage and collection point for Colombian and Mexican drug cartels. Colombian cartels ship drugs to Costa Rica where they are stored and then retrieved by such groups as the Mexican Sinaloa cartel.4 With rival Mexican cartels sensing opportunities in a country which lacks an army or a large cadre of paramilitary police and which is mindful of the rights of its citizens, the courts and security establishment of Costa Rica are facing an unprecedented challenge.5 It should be noted that while Costa Rica faces no serious threat of external aggression, it does have a border dispute with its northern neighbour – Nicaragua. Nicaraguan troops have established a camp on Portillos Island, which was deemed to be within Costa Rican territory by the International Court of Justice (ICJ) in a ruling issued on 16th December 2015.6 The new incursion led to Costa Rica approaching the ICJ with a fresh complaint in January 2017.7 However, even this apparent territorial violation has sparked no moves in Costa Rica to develop any sort of viable military capability. This is perhaps in recognition of the fact that the country would always be overwhelmingly outmatched by the large and well-equipped Nicaraguan army, but is perhaps more a reflection of Costa Rica's view that this dispute is one which should be settled through the arbitration mechanisms of the ICJ.
Costa Rica's Southern border with Panama is free from such disputes, but the peaceful relationship between the two countries, as well as a border that has little by way of demarcation, has made this frontier a major route for narcotics and human trafficking.8 The thinly spread Policia de Fronteras are unable to do much more than token patrols along this frontier, with a free flow of people – legal and illegal – being nearly impossible to halt. With no tangible external threat, it is understandable that Costa Rica has felt no need to break with its antipathy towards military forces. What is puzzling is the decision not to develop paramilitary police forces to deal with the scourge of narcotics trafficking and the epidemic of violence that inevitably follows. Costa Rica's decision not to adopt this approach was a conscious one, made with due regard to the country's concern to preserve the civilian identity of its police force. Indeed, a decree by then President Oscar Arias in 2008 to allow the police to carry automatic weapons was nullified by the courts.9 Costa Rica is therefore unique among its neighbours in having neither an army nor a fully militarised police force. Since the abolition of the army in 1948, the closest Costa Rica came to re-establishing any form of military force was during the 1980s when, in response to the turmoil in neighbouring Nicaragua, the then Civil Guard provided the nucleus for two USSF-trained border rapid reaction battalions (the Relampago and Binicio Battalions).10 These two Rapid Intervention Infantry Battalions were followed by a third (Batallón Frontera Sur).11 The Costa Rican Guardia Civil had some M113 APCs, one UR-416 APC and two M3A1 armoured cars, but these have not been seen in use for close to three decades.12 Old 20mm anti-aircraft guns that were briefly deployed to counter aerial incursions by the Sandinista regime's air force have long been discarded. In the aftermath of the Nicaraguan conflict (which ended in 1990), Costa Rica began a major overhaul of its Guardia Civil, and the resulting formation continues to shoulder the security burden of that country. In 1996, the Costa Rican Fuerza Publica (FP – Public Force) was formed under the Ministerio de Seguridad Publica (MSP). The FP has grown into a force that combines police, coast-guard, air surveillance and quasi-military functions. With a strength of some 12,600, the FP has incorporated the old Civil Guard, the Rural Guard and the two border security battalions.13 The FP principally functions as a police force and, within the region, it enjoys a good reputation for professionalism; while not immune from corruption, it is noticeably less corrupt than its counterparts in the rest of Central America.14 The general human-rights environment in Costa Rica is much better than anywhere else in Central America and, despite some lapses, the FP is not viewed as a predatory force by the Costa Rican population. Nonetheless, the FP suffers from chronic shortages of equipment and, despite strenuous efforts to improve and sustain training, the FP is still under-resourced.15 It is noteworthy that the FP is roughly as large as the Mauritius Police Force, which is responsible for a population three times smaller than Costa Rica's.
Military capabilities of a very modest degree are retained by the successors of the two border security battalions, which are now constituted into the Policia de Fronteras, comprising seven border security companies distributed between the Southern and Northern Commands.16 While usually clad in variations on police apparel, the Policia de Fronteras have been known to don military-style camouflage and carry assault rifles and machine guns while being supported by a limited number of 60mm and 81mm mortars. While the Policia de Fronteras has a small but effective riverine force, it lacks organic air support. The Policia de Fronteras also has control of Costa Rica's small air and naval components. The former, termed the Servicio de Vigilancia Aérea (SVA – Air Vigilance Service), has 13 light liaison-cum-transport aircraft and two helicopters. Some of the aircraft were seized from narcotics traffickers and, while useful assets, none of the SVA's aircraft have specialized surveillance equipment.17 The shortage of helicopters is an acute problem given the inaccessibility of some parts of the country and the potential need for rapid deployment of forces to such regions. The Costa Rican Servicio Nacional de Guardacosta (SNG – National Coastguard Service) comprises 10 obsolete patrol boats, the most capable of which are three 82-foot Point-class cutters.18 This modest force will receive a significant boost in capability when two Island-class 110-foot vessels are transferred from the United States in 2017.19 While the larger vessels of the SNG can be armed with 0.50-cal M2HB heavy machine guns and/or 20mm Mk68 Oerlikon cannon, it was revealed in 1995 that none of the personnel assigned to the vessels knew how to operate the weapons. It is not known whether this situation has changed.20 Augmenting these units are two elite police special operations units – Comisaria 9 – Unidad de Operaciones Especiales (Special Operations Unit), and Comisaria 5 – Unidad Tactica de Policia (Police Tactical Unit). The former is largely American-trained while the latter has close training ties to the Chilean Carabineros.21 These units are akin to elite riot control and Special Weapons and Tactics (SWAT) teams and provide support to the FP constabulary units which, while armed, do not normally carry automatic weapons. Outside the MSP and under the control of the country's Departamento de Inteligencia y Seguridad (DIS – Department of Intelligence and Security) is Costa Rica's elite Unidad Especial de Intervención (UEI – Special Intervention Unit), which is a company-sized commando unit with a high standard of training and equipment.22 Despite its militarised nature, the Costa Rican government seeks to downplay its capability and insists it is a police rather than a military unit.23 Yet the UEI has an excellent regional reputation, is one of the region's finest special forces units, and exercises regularly with its Central and North American counterparts, where it has consistently proven to be a capable outfit. As has been noted, Costa Rica is facing a major challenge from violent transnational organised crime largely linked to the trade in illegal narcotics. Though the country registered a decline in homicides between 2010 (527) and 2012 (407), by 2014 that figure had increased to 471.24 This prompted calls for the establishment of a dedicated unit to combat organized crime, but to date this has been limited to the 50-strong Policia de Control de Drogas.
This unit relies heavily on support from other units – in particular the Policia de Fronteras and the SNG – to deal with the dual threat of Colombian and Mexican cartels using Costa Rica as a transit, storage, collection and trans-shipment point. In recognition of these challenges, Costa Rica boosted its security budget by 123% between 2006 and 2012.25 However, unlike its neighbours, Costa Rica declined to deploy its elite and/or militarised police units in anything more than a supporting role, with the FP constabulary bearing the brunt of the fight against organised crime. This has been accompanied by aggressive social programs in local municipalities aimed at conflict resolution and at providing training and employment opportunities.26 Whether this enlightened approach will produce the desired results is as yet an open question, but what is not in doubt is that Costa Rica is intent on maintaining a demilitarised approach to internal security challenges, preserving at all times its reputation as a stable and democratic country with an enviable record for the protection of the human rights of its nationals. Views expressed are of the author and do not necessarily reflect the views of the IDSA or of the Government of India. Originally published by Institute for Defence Studies and Analyses (www.idsa.in) at http://idsa.in/idsacomments/costa-rica-challenge-maintaining-internal-security_sbmaharaj_230317 - 1. Abolición del Ejército, Guias Costa Rica at http://guiascostarica.info/acontecimientos/abolicion-del-ejercito/ (Accessed 22nd February 2017) - 2. Costa Rica 1949 (rev. 2011), Constitute Project (English Translation of Costa Rican Constitution) at https://www.constituteproject.org/constitution/Costa_Rica_2011?lang=en#38 (Accessed 22nd February 2017) - 3. Costa Rica's Democracy a Role Model for Central America, Democracy Chronicles at https://democracychronicles.com/costa-ricas-democracy/ (Accessed 22nd February 2017) - 4. D. Marin, Costa Rica Becomes Hub of Drug Cartels, Latin America Herald Tribune at http://laht.com/article.asp?ArticleId=349013&CategoryId=23558 (Accessed 22nd February 2017) - 5. N. Miroff, Costa Rica struggles to maintain its 'pure life' as drug cartels move in, The Sydney Morning Herald, 31st December 2011 at http://www.smh.com.au/world/costa-rica-struggles-to-maintain-its-pure-life-as-drug-cartels-move-in-20111230-1pfju.html (Accessed 22nd February 2017) - 6. L. Arias, The Hague Court: Territory disputed with Nicaragua belongs to Costa Rica, Tico Times, 16th December 2015 at http://www.ticotimes.net/2015/12/16/hague-court-calero-island-belongs-costa-rica (Accessed 22nd February 2017) - 7. L. Arias, Costa Rica sues Nicaragua over military camp near border, Tico Times, 16th January 2017 at http://www.ticotimes.net/2017/01/16/costa-rica-border-dispute-hague (Accessed 22nd February 2017) - 8. Z. Dyer, On patrol with the Costa Rican Border Police, Tico Times, 27th June 2016 at http://www.ticotimes.net/2016/06/27/costa-rica-border-police (Accessed 22nd February 2017) - 9. E. Fieser, Costa Rica doubles down on security, Christian Science Monitor, 8th February 2014 at http://www.csmonitor.com/World/Americas/2014/0208/Costa-Rica-doubles-down-on-security (Accessed 22nd February 2017) - 10. C. Caballero Jurado & N. Thomas, Central American Wars: 1959-1989 (London, Osprey: 1990), p. 33 - 11. J. A. Montes, Small Arms of the Costa Rican Paradise, Small Arms Review at http://www.smallarmsreview.com/display.article.cfm?idarticles=2814 (Accessed 23rd February 2017) - 12. A.
English, Regional Defence Profile: Latin America (London, Jane's: 1988), p. 116 - 13. Fuerza Publica, Ministerio de Seguridad Publica at http://www.seguridadpublica.go.cr/direccion/fuerza_publica/ (Accessed 23rd February 2017) - 14. Costa Rica Country Profile, The International Security Sector Advisory Team at the Geneva Centre for the Democratic Control of Armed Forces – updated 2nd February 2015 at http://issat.dcaf.ch/Learn/Resource-Library/Country-Profiles/Costa-Rica-Country-Profile (Accessed 23rd February 2017) - 15. Costa Rica: An Army-less Nation in a Problem-Prone Region, Council on Hemispheric Affairs, 2nd June 2011 at http://www.coha.org/costa-rica-an-army-less-nation-in-a-problem-prone-region/comment-page-1/ (Accessed 23rd February 2017) - 16. Op. cit. n. 11 - 17. Servicio de Vigilancia Aérea, Ministerio de Seguridad Publica at http://www.seguridadpublica.go.cr/direccion/vigilancia_aerea/ (Accessed 23rd February 2017) - 18. E. Wertheim, Combat Fleets of the World: 16th Edition (Annapolis, Naval Institute Press: 2013), pp. 145-146 - 19. Z. Dyer, US donates $19 million to Costa Rica Coast Guard, Tico Times, 22nd June 2016 at http://www.ticotimes.net/2016/06/22/us-donates-19-million-to-costa-rica-coast-guard (Accessed 23rd February 2017) - 20. Op. cit. n. 11 - 21. Ibid. - 22. S. Blaskey, Costa Rican special operations unit participates in regional 'war games', Tico Times, 9th August 2014 at http://www.ticotimes.net/2014/08/09/costa-rican-special-operations-unit-participates-in-regional-war-games (Accessed 23rd February 2017) - 23. R. Beckhusen, Costa Rica Doesn't Have a Military? Not So Fast, War is Boring, 10th August 2014 at https://warisboring.com/costa-rica-doesnt-have-a-military-not-so-fast-499b5d67e160#.ol3ebb9ep (Accessed 23rd February 2017) - 24. Z. Dyer, Costa Rica's Public Security Minister calls for new organized crime unit after spike in killings, Tico Times, 15th October 2015 at http://www.ticotimes.net/2015/10/15/security-minister-calls-new-organized-crime-unit-killings-spike-costa-rica (Accessed 23rd February 2017) - 25. Op. cit. n. 9 - 26. Ibid.
View in PDF: Tax News & Comment — February 2013 Elder Law Planning: Deciphering the Puzzle I. Social Security The Social Security program, begun during the Great Depression under President Roosevelt, is the forerunner of Medicare and Medicaid. The largest program under the Social Security Act is that which provides for retirement benefits. The monthly retirement benefit is a function of two variables: the recipient's earnings record (which is tracked by the Social Security Administration automatically) and the age at which the recipient chooses to begin receiving benefits. The recipient's monthly income benefit is based on the highest 35 years of the recipient's "covered earnings." Covered earnings in any year cannot exceed the Social Security Wage Base, which is also the maximum amount of earnings subject to the FICA payroll tax. In 2013, that amount is $113,700. FICA (Federal Insurance Contributions Act) imposes a Social Security withholding tax of 6.2 percent on employers and employees alike. During the tax years 2011 and 2012, the employee's contribution was temporarily reduced to 4.2 percent. Wages in excess of $113,700, though not subject to FICA, are subject to a separate payroll tax of 2.9 percent. Liability for the amount above FICA is divided equally between the employer and the employee, with each paying 1.45 percent. If a recipient has fewer than 35 years of covered earnings, the years required to reach 35 are assigned a zero value. Nevertheless, the formula utilized in calculating benefits is progressive, and significant benefits can be achieved even if the recipient has far fewer than 35 covered years. The earliest time at which Social Security benefits become payable to a covered worker is age 62. A person who chooses to begin receiving benefits at age 62 will receive only about 75 percent of the amount which the person would have received had payments been deferred until age 66. A person who defers benefits past age 66 will earn delayed retirement credits that increase benefits until age 70. Past age 70, no further benefits will accrue. These benefits will inure to the surviving spouse. Married couples may also take advantage of spousal benefits. Under current law, a spouse cannot claim a spousal benefit unless the main beneficiary claims benefits first. Once full retirement age is reached at age 66, a beneficiary can "file and suspend." By doing this, the beneficiary's spouse can claim a spousal benefit, but the beneficiary's own retirement benefit will continue to grow until age 70. Spouses are entitled to receive 50 percent of the benefits of a covered employee. If a beneficiary age 66 is entitled to receive $2,000 per month, his spouse would, at age 66, be entitled to receive $1,000 per month in spousal benefits. If the spouse had not worked, or the benefits that the spouse would have received are less than $1,000 per month, it might make sense for the couple to follow this strategy. This strategy will (i) allow one spouse to receive benefits in excess of that which he or she would have received based upon his or her own entitlement; (ii) allow the benefit to continue to grow at 8 percent per year up to age 70; and (iii) allow the benefit of the spouse taking spousal benefits to continue to grow until that spouse also reaches 70, at which time he or she can begin claiming retirement benefits based on his or her own record.
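The claiming arithmetic described above can be illustrated with a short sketch. This is only an approximation of the rules as summarized in this article (about 75 percent of the full benefit at age 62, roughly 8 percent per year in delayed retirement credits up to age 70, and a spousal benefit of 50 percent); the straight-line early-claiming reduction is an assumption for illustration, and the Social Security Administration's actual schedule is computed month by month.

```python
# Illustrative only: approximates the claiming rules summarized above, not the
# Social Security Administration's exact month-by-month reduction schedule.

FULL_RETIREMENT_AGE = 66  # the full retirement age assumed throughout the article

def monthly_benefit(full_benefit, claiming_age):
    """Approximate monthly benefit for a worker claiming at a given age."""
    if claiming_age < FULL_RETIREMENT_AGE:
        # Article: claiming at 62 yields only about 75% of the age-66 amount,
        # so assume a straight-line reduction of ~6.25% per year before 66.
        return full_benefit * (1 - 0.0625 * (FULL_RETIREMENT_AGE - claiming_age))
    # Article: delayed retirement credits of about 8% per year, up to age 70.
    credited_years = min(claiming_age, 70) - FULL_RETIREMENT_AGE
    return full_benefit * (1 + 0.08 * credited_years)

full = 2000  # the article's example of a $2,000/month benefit at age 66
for age in (62, 66, 70):
    print(f"Claim at {age}: about ${monthly_benefit(full, age):,.0f} per month")
print(f"Spousal benefit at 66: about ${0.5 * full:,.0f} per month")  # 50% of the worker's benefit
```

On these assumptions, claiming at 62 yields about $1,500 per month, claiming at 66 yields $2,000, and waiting until 70 yields about $2,640, which is the trade-off the file-and-suspend strategy is designed to exploit.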
Surviving spouses are also entitled to Social Security benefits. The benefits of a surviving spouse depend upon when the covered worker began taking benefits: If the covered worker takes benefits at full retirement age, the surviving spouse may take 100 percent of the benefits provided he or she is also at full retirement age. If the covered worker takes benefits before full retirement age, then the surviving spouse — again provided he or she was of retirement age — would take 100 percent of the (reduced) amount that the covered worker was taking. A surviving spouse need not wait until full retirement age to take survivor benefits: A surviving spouse may elect to begin receiving benefits as early as age 60, or age 50 if he or she is disabled. However, those benefits will be reduced. Divorced spouses are eligible to receive benefits if the marriage lasted at least 10 years. (Survivor benefits will not be available to same-sex partners, since same-sex marriages are not recognized under federal law.) The benefits received by a divorced spouse have no effect on the benefits of the current spouse. Unmarried children, as well as dependent parents, may also qualify to receive survivor benefits. Social Security benefits were historically not subject to income tax. However, in order to avoid the projected insolvency of the Social Security system, beginning in 1984 up to 50 percent of benefits became potentially subject to income tax. In 1994, the potentially taxable portion increased to 85 percent of Social Security benefits. II. National Health Care The Affordable Health Care Act established a national health insurance program. Under the Act, the existing Medicare program will be expanded to include all U.S. residents. The goal of the legislation is to provide all residents access to the "highest quality and most cost effective healthcare services regardless of their employment, income, or healthcare status." Every person living or visiting the United States will receive a United States National Health Insurance ("USNHI") Card and ID number upon registration. The program will cover all medically necessary services, including primary care, inpatient care, outpatient care, emergency care, prescription drugs, durable medical equipment, long-term care, mental health services, dentistry, eye care, chiropractic, and substance abuse treatment. Patients have their choice of physicians, providers, hospitals, clinics, and practices. No co-pays or deductibles are permitted under the Act. The system will convert to a non-profit healthcare system over a period of fifteen years. Private health insurers will be prohibited from selling coverage that duplicates the benefits of the USNHI program. However, nonprofit health maintenance organizations (HMOs) that deliver care at their own facilities may participate. Exceptions from coverage will also be made for cosmetic surgery and other medically unnecessary treatments. The National USNHI will (i) set annual reimbursement rates for physicians; (ii) allow annual lump sums for operating expenses for healthcare providers; and (iii) negotiate prescription drug prices. A "Medicare for All Trust Fund" will be established to ensure constant funding for the program. The Act also requires states to expand Medicaid or face the loss of federal funds. Advocates of universal health care believe that lower administrative costs will result in savings of approximately $200 billion per year, which is more than the cost of insuring all of those who are now uninsured.
A "single-payer" system is said to reduce administrative costs because it would not have to devote resources to screening out high-risk persons or charging them higher fees. Critics argue that the nationalization of health care will raise taxes for higher-income individuals, and note that the government has a poor track record in effectively managing large bureaucratic organizations (e.g., the Postal Service and AMTRAK). In June of 2012, the Supreme Court, in an opinion written by Chief Justice Roberts, narrowly upheld the constitutionality of the Affordable Care Act as a proper exercise of the federal government's power to tax. Medicare is a federal insurance program that provides health insurance to persons age 65 and over, and to persons under age 65 who are permanently disabled. Medicare and Medicaid were signed into law by President Johnson in 1965 as part of the Social Security Act. Medicare is administered by the federal government. In general, all five-year legal residents of the U.S. over the age of 65 are eligible for Medicare. Medicare consists of four parts, A through D. Medicare Part A covers inpatient hospital stays of up to 90 days, provided an official order from a doctor states that inpatient care is required to treat the illness. For days 1 through 60, a $1,184 deductible is required for each benefit period in 2013. For days 61 through 90, a daily copay of $296 is required. After 90 days, a daily copay of $592 must be paid for up to 60 "lifetime reserve days." Once lifetime reserve days have been exhausted, Medicare hospitalization benefits will have terminated. Part A also covers stays for convalescence in a skilled nursing facility (SNF) for up to 100 days following hospitalization. To be eligible, (i) the patient must have a qualifying hospital stay of at least three days (and have unused days); (ii) a physician must order the stay; and (iii) the SNF facility must be approved by Medicare. No copay is required for the first 20 days. Thereafter, a daily copay of $148 is required until day 100. Part A is funded by the separate payroll tax of 2.9 percent, and the recently enacted 3.8 percent Medicare surtax. Medicare Part B covers physician and some outpatient services. Eligibility depends on satisfying the requirements for Part A, and the payment of monthly premiums, which range from $100 for those with adjusted gross income of up to $85,000 to $319 for those with higher incomes. The patient is responsible for 20 percent of physician's fees. Medicare pays only the "approved" rate. A patient may seek care from a nonparticipating doctor and pay the rate differential. Private Medicare supplement (Medigap) insurance is available to those who have Medicare Part A and Part B. This insurance will cover such items as copays and deductibles. Those without a Medigap policy are required to pay these costs out of pocket. Medicare Part C consists of plans offered by private companies ("Medicare Advantage") that contract with the government to provide all Part A and Part B benefits. By law, these plans must be "equivalent" to regular Part A and Part B coverage. Some Part C plans provide prescription drug coverage. Medicare Advantage supersedes Medigap coverage; if a person is enrolled in a Medicare Advantage Plan under Part C, Medigap will not pay. Medicare Part D provides prescription drug benefits. Under Part D, a patient pays an initial deductible of $325 and is subject to a co-pay of 25 percent for amounts up to $2,970. The patient must pay all costs incurred between $2,970 and $4,750, but is required to pay only 5 percent of drug costs over $4,750.
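As a rough illustration of the Part A cost-sharing figures quoted above (the 2013 inpatient hospital amounts only), the sketch below computes a patient's share for a single benefit period. It ignores the skilled nursing facility copays, Part B, Part D, and any Medigap coverage, and the treatment of days beyond the lifetime reserve is simplified.

```python
# Illustrative only: a patient's share of 2013 Medicare Part A inpatient hospital costs
# for one benefit period, using the figures quoted above ($1,184 deductible covering
# days 1-60, $296/day for days 61-90, $592/day for up to 60 lifetime reserve days).

def part_a_patient_share(days_in_hospital, reserve_days_remaining=60):
    cost = 1184.0  # deductible, which covers days 1 through 60
    if days_in_hospital > 60:
        cost += 296.0 * (min(days_in_hospital, 90) - 60)        # copay for days 61-90
    if days_in_hospital > 90:
        reserve_used = min(days_in_hospital - 90, reserve_days_remaining)
        cost += 592.0 * reserve_used                            # lifetime reserve days
        # Days beyond the lifetime reserve are not covered by Part A at all (not priced here).
    return cost

for stay in (10, 75, 120):
    print(f"{stay}-day stay: patient pays about ${part_a_patient_share(stay):,.0f}")
```

Even under Part A, a 120-day stay in this example leaves the patient responsible for roughly $27,800, which is the exposure that drives the Medicaid planning discussion that follows.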
III. The Intersection of Medicare and Medicaid Without planning, the elderly are at significant risk of losing an entire life's savings in the event of catastrophic illness following the expiration of Medicare hospitalization and nursing care benefits. At that point, the choice for extended care is in general either Medicaid or payment out of pocket. Many who opt for private payment of long-term care costs will risk exhausting their resources. This loss will have implications for other family members as well. (It should be noted that long-term care insurance has generally been less than effective in managing these costs, for a variety of reasons.) The principal difference between Medicare and Medicaid is that Medicaid is need-based and Medicare is (theoretically) entitlement-based. While Medicare is federally funded and administered, Medicaid is a federal program jointly funded and administered by the states. Financial resources play no role in determining Medicare eligibility. Medicaid eligibility is limited to persons with limited income and limited financial resources. Medicaid covers more health care services than Medicare and, unlike Medicare, will cover long-term care for elderly and disabled persons who cannot afford such care. IV. Medicaid Eligibility Elderly and disabled persons are more likely to require continuing long-term care not covered by Medicare. For example, a serious illness such as a stroke could require a long period of convalescence, the costs of which are not covered by Medicare. Medicaid will cover many long-term health costs that Medicare will not. Medicaid now pays for about half of all nursing home costs incurred in the country. Ownership of substantial assets, or in some states the right to monthly income above certain thresholds ("resources"), will preclude Medicaid qualification. Before qualifying for Medicaid, a person with substantial assets would be required to deplete ("spend down") those assets. To avoid the scenario in which nearly all of one's assets might be required to be paid to a nursing home before becoming eligible for Medicaid, some persons choose to transfer in advance assets that would impair Medicaid eligibility. Such transfers might be outright to a spouse, or to other family members such as children, or to a trust. Recognizing the increased cost to the government of intentional transfers made to become eligible for Medicaid, Congress, in the Deficit Reduction Act of 2005 (DRA), created a five-year "look back" period relating to the transfer of assets either outright or in trust. Essentially, any transfers made during the five-year period preceding a Medicaid application are examined for purposes of determining eligibility. Despite the existence of a five-year look-back period, courts have upheld Medicaid planning as an appropriate objective, and have held that such transfers do not violate public policy or constitute fraudulent transfers. Under pre-DRA law, the period of ineligibility began on the date of the transfer. Under the DRA, the period of ineligibility does not commence until the person is in the nursing home and applies for Medicaid. Therefore, if within five years after making a transfer of assets to qualify for future Medicaid a person requires Medicaid assistance, the transferred assets must be "repaid" before Medicaid payments will commence.
The upshot is that if long-term care appears likely in the foreseeable future, transferring assets to qualify for future Medicaid eligibility may actually be counterproductive, as it will trigger a penalty. Transfer penalties are based on the monthly cost of nursing home care in the applicant's state. If the cost of nursing home care in New York is $7,000 per month and the person transferred $70,000, he would be ineligible for Medicaid for a period of ten months (i.e., the amount transferred divided by the cost per month of nursing home care). An exception provides that transferring assets, including one's house, to a spouse will not trigger the transfer penalty. Nursing homes will generally be unaware of the transfer prior to admission. As a result, the facility may have no source of reimbursement during the penalty period. Federal law prohibits a nursing home from discharging a patient unless it has found replacement care. Determining the appropriate amount of assets to transfer is important. Transferring too many assets, while helpful for future Medicaid eligibility purposes, may leave the future applicant with insufficient assets. Transferring too few assets will create a larger reserve to be "paid down" after the five-year look-back period ends and Medicaid would, but for the reserve, begin to pay. Federal law requires states to investigate gifts made during the look-back period. New York does not specify a threshold amount, but some counties will examine all transfers over $1,000. Even common gifts, such as those made for birthdays, for tuition, or for weddings, may be considered an available asset for Medicaid purposes. Care must be taken to ensure that gifts made under a power of attorney do not have unintended and deleterious Medicaid implications.
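The penalty-period arithmetic described above reduces to a single division, shown in the short sketch below. The $7,000 monthly figure is the article's New York example; actual determinations use each state's (or region's) official average nursing home rate.

```python
# Penalty period from the article: months of Medicaid ineligibility equal the amount
# transferred divided by the average monthly cost of nursing home care in the state.

def penalty_months(amount_transferred, monthly_nursing_home_cost=7000):
    return amount_transferred / monthly_nursing_home_cost

print(penalty_months(70000))   # 10.0 months, matching the article's example
print(penalty_months(105000))  # 15.0 months
```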
V. Medicaid Exempt Assets Certain resources are exempt in determining Medicaid eligibility. New York Medicaid exempts up to $786,000 in home equity. If the residence is worth substantially more than this amount, the home may be gifted, but the five-year look-back period will apply. If the home is gifted but the transferor retains a special power of appointment, the transfer will achieve the objective of starting the five-year look-back period. In addition, although the transfer will be complete for Medicaid purposes, it will be incomplete for federal estate tax purposes. This means that children will benefit from a stepped-up basis at the death of the parent. A spouse may also transfer title to the other spouse without risking Medicaid eligibility. A residence may also be transferred to a sibling with an equity interest who has lived in the residence for more than one year, or to a child who has been a caregiver and lived in the home for at least two years before the parent enters a nursing home. There are advantages to using a trust. If an irrevocable income-only trust is used, the parent may continue to live in the house, or sell it. (See discussion below.) A retirement account may also be an exempt asset if it is in "payout" status. So too, an automobile may be an exempt asset. If the applicant is in a nursing home, the applicant's residence remains an exempt asset provided the applicant has a "subjective intent" to return to his home. Exempt assets may be transferred without penalty because such assets would not impair Medicaid eligibility even if not transferred. Therefore, if government assistance will be required within 60 months, excess resources could be converted into exempt assets. For example, a residence could be improved, a mortgage on the residence repaid, or other exempt assets could be purchased. New York may impose a lien on a personal residence even though ownership of the residence would not impair Medicaid eligibility. However, no lien may be imposed unless the person is permanently absent and is not reasonably expected to be discharged. N.Y. Soc. Serv. Law §369(2)(a)(ii); 18 N.Y.C.R.R. §360-7.11(a)(3)(ii). Prior to filing a lien, New York must satisfy notice and due process requirements, and must show that the person cannot reasonably be expected to return home. 42 U.S.C. §1396p(a)(2). If the person does return home, the lien is extinguished by operation of law. 18 N.Y.C.R.R. §360-7.11(a)(3)(i). Conversion of cash into an annuity may reduce the available resources that would otherwise be required to be "paid down" before becoming eligible for Medicaid. Such annuities must be structured so that (i) their payout period is not longer than the actuarial life of the annuitant; (ii) payments are made in equal installments; and (iii) the annuities are paid to the state following the death of the annuitant to the extent required to reimburse the state for amounts paid for Medicaid benefits. VI. Recovery From Estate New York has the right to recover from the estate exempt assets that had no bearing on Medicaid eligibility. However, no recovery from an estate may be made until the death of a surviving spouse. N.Y. Soc. Serv. Law §366(2)(b)(ii). For purposes of New York Medicaid recovery, the term "estate" means property passing by will or by intestacy. No right of recovery exists with respect to property passing in trust, by right of survivorship in a joint tenancy, or to the beneficiary of a bank or retirement account. Therefore, to avoid recovery of estate assets, avoidance of probate and intestacy is paramount. The amount that may be recovered from an estate for a lien against a personal residence cannot exceed the value of services provided while the Medicaid recipient was absent from the home. N.Y. Soc. Serv. Law §369(2)(a)(ii). A lien may be waived in cases of undue hardship. N.Y. Soc. Serv. Law §369(5). VII. Medicaid Trusts A Medicaid trust provides income to the grantor or to the grantor's spouse. Transfers to a Medicaid trust may facilitate eligibility for Medicaid, since assets so transferred are excluded when determining Medicaid eligibility (provided the five-year look-back period is satisfied). Furthermore, transfer to an irrevocable trust will effectively bar any right of recovery by New York if a lien had been placed on the residence or other asset. Most trusts created to facilitate Medicaid planning will be drafted to be irrevocable, since assets transferred to a revocable trust can be reacquired by the settlor, and Medicaid can reach anything the settlor can reach. Therefore, avoidance of probate is not enough: the transferor must also be unable to reacquire the assets. If a residence is transferred to a Medicaid trust, the "income" permitted to be paid to the grantor will consist of the grantor's retained right to reside in the residence. If the Medicaid trust is structured as a grantor trust, the capital gains exclusion provided by IRC §121 will be available if the residence is sold by the trust. The grantor of a Medicaid trust will want the assets to be included in his or her estate at death in order to receive a basis step-up under IRC §1014. This can be achieved if the grantor retains a limited power of appointment.
Retaining a limited power of appointment will also enable the grantor to retain the ability to direct which beneficiaries will ultimately receive trust assets. Irrevocable trusts granting the trustee discretion to distribute income or principal pursuant to an ascertainable standard are effective for asset protection, but are ineffective for purposes of Medicaid: Federal law treats all assets in discretionary trusts in which the applicant or his spouse is a discretionary beneficiary of principal or income as an available resource for purposes of determining Medicaid eligibility. The trust may provide that the grantor or spouse has a right to a fixed income amount, but that will be considered an available resource. The trust may provide for discretionary trust distributions to beneficiaries (other than the grantor or the grantor's spouse) during the grantor's lifetime or at death, and may provide that upon the death of the grantor, the trust corpus will be distributed outright to beneficiaries, or held in further trust. Inclusion of trust assets in the estate of a Medicaid recipient will ordinarily not be problematic, since the federal estate tax exemption is $5 million and the NYS lifetime exemption is $1 million. If outright gifts and transfers to a Medicaid trust are both anticipated, transferring low-basis property to the trust will be preferable, since a step-up in basis will be possible with respect to those assets if the trust is includible in the applicant's estate (unless the trust has been structured as a grantor trust). The recipients of lifetime gifts, on the other hand, will take a transferred basis in the gifted assets. The trustee of a Medicaid trust should be a person other than the grantor or the grantor's spouse. However, the trust may allow the grantor to replace the trustee. Since assets transferred to a Medicaid trust are transferred irrevocably, it is important that the grantor and his or her spouse consider the nature and extent of assets to be retained, since assets must remain to provide for daily living expenses. VIII. Special Needs Trusts A Special (or "Supplemental") Needs Trust (SNT) established for a person with severe and chronic disabilities may enable a parent or family member to supplement Medicaid or Supplemental Security Income (SSI) without adversely affecting eligibility under these programs, both of which impose restrictions on the amount of "income" or "resources" which the beneficiary may possess. 42 U.S.C. § 1382a. Assets owned by the SNT will not be deemed to be owned by the beneficiary. Federal law authorizes the creation of SNTs that will not be considered "resources" for purposes of determining SSI or Medicaid eligibility where the disabled beneficiary is under age 65, provided the trust is established by a parent, grandparent, legal guardian, or a court. Thus, personal injury recoveries may be set aside to supplement state assistance. The beneficiary's income (which includes gifts, inheritances and additions to trusts) will reduce available SSI benefits. The SNT may be created by either an inter vivos or a testamentary instrument. If an inter vivos trust is used, the trust may, but is not required to be, irrevocable. Provided the beneficiary may not revoke the trust, trust assets will not constitute income or resources for SSI or Medicaid purposes. A revocable inter vivos trust could permit the parent or grandparent, for example, to modify the trust to meet changing circumstances.
If the trust is revocable, the trust assets will be included in the settlor’s estate under IRC §2038. The trust may be funded with life insurance, gifts, or bequests. An SNT may be established at the death of a surviving parent for a disabled adult child. Such a trust may be established either by will or by revocable inter vivos trust. An SNT may even be formed by a court. In Matter of Ciraolo, NYLJ Feb. 9, 2001 (Sur. Ct. Kings Cty), the court allowed reformation of a will to create an SNT out of an outright residuary bequest for a chronically disabled beneficiary.

Neither the beneficiary nor the beneficiary’s spouse should be named as trustee, as this might result in a failure to qualify under the SSI and Medicaid resource and income rules. A family member or a professional trustee would be a preferable choice as trustee. The trustee may make disbursements for items not paid for by government programs. These could include dental care or clothing, or a case manager or companion.

EPTL 7-1.12 expressly provides for special needs trusts and includes suggested trust language. The statute imposes certain requirements for the trust. The trust must (i) evidence the creator’s intent to supplement, rather than impair, government benefits; (ii) prohibit the trustee from expending trust assets in any way that might impair government benefits; (iii) contain a spendthrift provision; and (iv) not be self-settled (except in narrowly defined circumstances). EPTL 7-1.12 further provides that, notwithstanding the general prohibition imposed on the trustee against making distributions that might impair qualification under federal programs, the trustee may have discretionary power to make distributions in the best interests of the beneficiary. However, distributions should not be made directly to the beneficiary, as this could disqualify the beneficiary from receiving governmental assistance.

A “payback” provision mandates that on trust termination the trustee must reimburse Medicaid for benefits paid to the beneficiary. Only a “first-party” (self-settled) SNT, which is an SNT funded by the beneficiary himself, must include a payback provision. However, a first-party SNT funded by a personal injury award will not be required to pay back Medicaid. A “third-party” SNT is a trust created by a person other than the beneficiary (e.g., a parent for a developmentally disabled child). Third-party SNTs do not require a “payback” provision. Inclusion of a payback provision requiring Medicaid reimbursement in a third-party SNT is unnecessary, and its inclusion where none is required could result in a windfall to Medicaid at the expense of remainder beneficiaries.

The purpose of a special needs trust will be defeated if the beneficiary receives gifts or bequests outright. Therefore, it is imperative that the beneficiary not receive gifts, bequests, or intestate legacies, and not be made the beneficiary of a retirement account. No trust should be established for these purposes under the Uniform Transfers to Minors Act. All of these intended gifts (except an intestate legacy) should instead be made payable directly to the SNT.
Maintenance and Control

Contents
- 1 Introduction
- 2 Goals & Principles
- 3 Context Diagram
- 4 Maintenance Responsibilities
- 5 Evolutionary Responsibilities
- 6 Change Control Systems and Processes
- 7 Types of System Change Releases
- 8 Summary
- 9 Key Maturity Frameworks
- 10 Key Competence Frameworks
- 11 Key Roles
- 12 Standards
- 13 References

1 Introduction

Maintenance and Control are two sides of the same process. Maintenance activities ensure that a system remains operational and does not degrade over time. Maintenance activities preserve existing function. Control of a system manages the approval process for requested changes to a system, including defect fixes, evolution of third-party components, and in-house enhancements. Control activities evaluate and determine approval and schedules for changes to existing function, which are then implemented through the transition of a solution to operations (see Transition into Operations).

Considerable research over the past several decades has shown that the majority of expenses (75-80%) reported as maintenance costs are actually due to changes to systems in response to enhancement requests. Changes that add functionality almost always add value, and therefore should not properly be called “maintenance.” They are instead properly seen as stages in the system’s evolution.

An EIT system (asset or solution) is maintained by activities performed to ensure that the system continues to be operational over time. The maintenance team is responsible for two main functions:
- The maintenance team plans for, designs, and directs the maintenance necessary to prevent the deterioration and failure of a system, which can be due to defects, obsolescence, or environmental conditions.
- The maintenance team recommends changes to assure continuing environmental compatibility of a system as its vendor-provided hardware and software evolve.

Maintenance is different from operations; however, the understanding of a system’s maintenance needs is closely connected to its operational function.
- Operations and Support activities ensure that production systems operate consistently in a steady state of defined functionality. Operational support focuses outwardly on preserving execution of systems providing service to users.
- Maintenance evaluates the system’s operations over time in comparison to changing environment component standards (upgrades from vendors) and the increasing age of components (failures due to normal use). Maintenance focuses inwardly on proactively preserving a system’s ability to provide defined functionality services to users over time.

Maintenance is different from evolution or enhancement (as shown in Table 1). In short, evolutionary development and enhancement change the functionality of a system or add functionality to a system. Maintenance does not.
Table 1. Comparison of Maintenance with Operational Support and Evolutionary Development and Enhancement

|Operational Support||Maintenance||Evolutionary Development and Enhancement|
|Functionality effect||Provides consistent functionality and recovers from failures||Predicts activities necessary to preserve functionality or prevents failures||Changes or adds functionality|
|Focus||Activity||Process||Process and activity|
|User effect||Low to none||Low to none||High|
|Frequency||Continual||Regularly scheduled||Scheduled for inclusion in development projects|
|Standard Activities||Executes maintenance processes||Creates and implements maintenance processes||N/A|

To illustrate the differences, consider these concepts as they relate to a hot air balloon.
- Maintenance is an important part of risk management. It scans for and analyzes changes in current safety regulations, wind and weather conditions, and the condition of the heater, basket, and balloon fabric, which will determine what parts need to be replenished, repaired, or replaced. Maintenance determines the schedule and defines the standard checklist of pre- and post-flight tasks, as well as the schedule and criteria for performing occasional maintenance tasks, such as examining parts for signs of wear or failure, resulting in activities to patch the basket or balloon, or replace hoses, straps, heating elements, or ropes before they reach end-of-life.
- Operational support executes the scheduled standard tasks from Maintenance that keep the balloon safely airborne when in use, such as pre-flight unpacking, fuel tank loading, fuel consumption, equipment inventory, and post-flight packing.
- Evolution or enhancement adds new features, or changes out the heater, basket, or balloon to enable longer flights, more passengers, or more comfort during flight.

Control is the process of ensuring only expected and approved changes are implemented in any system. Change requests may come through defect reports, external drivers such as patches or revisions from third-party software providers, changes to relevant laws or regulations (such as for tax or payroll systems, or privacy protection), or through feature enhancement requests. Change requests are collected and reviewed by a team of stakeholders in the organization, including members of the Operations, Finance, and Architecture teams, other business users, and software portfolio managers. Changes may be deferred (to be done later when more feasible), rejected (will never be done), or approved and prioritized for implementation (see Figure 1: Change Request Life Cycle).

2 Goals & Principles

The basic goal for all of Enterprise IT ([http://eitbokwiki.org/Glossary#eit EIT]) is to keep systems operating to provide value to the organization, despite defects discovered after installation, changes in laws, and advances in technology. The main goal for Maintenance and Control is to preserve operations over time through asset lifecycle management and control of changes to assets. This goal has three parts:
- Ensure service levels and stability through standard maintenance activities to prevent service disruption.
- Preserve service levels and functionality through approved changes to assets as provided by suppliers, required by law or regulation, or to repair a defect.
- Reduce risk to assets by designing achievable maintenance activities, removing obsolete assets, reviewing all requested changes, implementing only stakeholder-approved changes, and ensuring changes occur through defined standard construction, acquisition, and transition processes.

EIT risk management responsibilities include coordinating with disaster recovery planning, testing, and evaluation. This responsibility is especially important for EIT, because the business of the enterprise almost certainly cannot be conducted without its EIT systems in operation.

2.1 Guiding Principles

- Design maintenance activities to be simple, straightforward, and consistent across systems to minimize the need for specialized skills.
- Have good relationships with suppliers to enable open and honest discussions about any offered upgrades or fixes, and to ensure that no side-effects or unintended consequences occur.
- Determine how much risk the organization is willing to take on when considering an upgrade – early adopters may receive consideration for any defects they discover, while late adopters usually have fewer implementation issues.
- Ensure that the Business stakeholders have a presence in change request reviews.
- Ensure that change request reviews include prioritization based upon the value to the business.
- Change no more than necessary to reduce risk of failure, fix a defect, or meet a requirement.
- Do not approve changes to data that are not implemented through existing applications or interfaces.

3 Context Diagram

Figure 1: The Context Diagram for Maintenance and Control

4 Maintenance Responsibilities

Maintenance is defined as activities required to keep a system operational and responsive after it is accepted and placed into production. The maintenance of EIT systems includes preventive actions (risk reduction) and corrective actions (fixes) that preserve consistent operations. In EIT systems, maintenance can be performed on hardware, software, and data.

As part of portfolio management, EIT and the Enterprise are expected to set policies about support levels for operational systems. The support level for any given system will be determined by weighing the value of the service provided against the cost to support it. The goal is to not spend more on a system than its value to the enterprise.

The capability to be maintainable must be designed into all systems. This includes the system’s architecture and, by extension, its maintainability requirements. The ability to maintain a system is determined by the processes required (a function of the system’s design) and the availability of resources to execute those processes. Maintenance processes must be monitored and measured for continuous improvement.

As part of system evaluations, maintenance tools and processes must be included. Systems should include, as part of the package, expected maintenance activities (much like regular oil changes on cars), tools to monitor system behavior, and possibly tools to perform maintenance activities.

4.1 Define Maintenance Activities

There are four types of maintenance: corrective, preventive, adaptive, and perfective.

- Corrective Maintenance: Corrective maintenance can be either unscheduled (emergency) or scheduled.
  - In an incident (ITIL) or emergency situation, maintenance activities occur to recover and bring a system back into operation. These occurrences can be reduced by proper implementation and execution of the other types of maintenance, as well as proper control, which reduces the risk of emergencies.
  - Scheduled corrective maintenance removes existing defects in a system, whether they are related to a known problem or are due to an issue with a change applied to the system.
- Preventive Maintenance: Preventive maintenance is scheduled. It is set up based upon analysis of similar systems to find patterns of flaws and to replace components before they fail. This type of maintenance may be required by vendors, regulations, or laws, especially in safety-related systems. Even when not required by vendors, regulations, or laws, preventive maintenance keeps track of such things as the aging of components against their expected useful lives, and inspects wires and connections for signs of wear. This is an important part of risk management. By extension, preventive maintenance activities require certain interactions with facilities management, for example, with regard to provisions for power back-ups for EIT systems in case of power failures.
- Adaptive Maintenance: The less common adaptive maintenance occurs when EIT changes one system to adapt to changes in another system. This is actually a type of enhancement, because the entire environment is enhanced when one part is upgraded. An adaptive maintenance task can be as simple as changing a configuration in one system to adapt to an upgrade in another system, using a different driver to connect databases because the other system's database software was upgraded, or increasing data capacity via a parameter change. On the other hand, it could be a complex set of operations, such as ones that would enable increasing the number of concurrent users.
- Perfective Maintenance: Perfective maintenance is a misnomer and the term is used less often. It is defined as the process of improving or evolving a system in some manner, which is actually enhancement, not maintenance (see Evolutionary Responsibilities).

Defining maintenance activities depends on:
- The type of system and component. Activities required will vary based on the system, and the component type within the system. Maintenance of (for example) disk drives may vary depending on whether they are installed in individual servers or in a storage cluster.
- Expected support levels and system priorities. As assigned by the organization, some systems will be designated as 'mission-critical' and therefore will be maintained to carry the least amount of risk of failure, whereas non-production systems may be on a slower maintenance schedule, or have lower priority for resource assignment during times of peak production usage or risk. A high-priority system may be assigned maintenance activities that cycle out components that have a predictable lifecycle, while a low-priority system may be assigned a support level that allows only mandatory support activities in response to component failure.
- System economics. The economics of a system can be described as the difference between the benefits the service brings to the business and the cost to maintain and support it.
Some measurements that determine the benefits of a solution are:
- Criticality of the business processes supported
- Number of users accessing the system
- Number of new transactions being processed
- Amount of time saved by using the functions (e.g., versus manual procedures)

These measurements should be compared with the total cost of maintenance and support of the system, which includes:
- Vendor support costs (maintenance, subscription or licensing, or leasing)
- Infrastructure costs (server, storage, rack space, power, cooling)
- Technical stack costs (operating system, utilities, printing, data transfer, reporting, and so on)
- FTE costs of the support, maintenance, and operations staff

When the cost becomes greater than the benefits, it is time to retire or replace the service. Aging out, and therefore eliminating utility/maintenance/licensing costs, should reduce EIT costs, as long as the functional replacement does not result in an added maintenance burden for EIT staff, or provide a reduced benefit to the business (see Transition into Operations for the planning and processes for these functions).

Generally, maintenance activity design should take into account:
- The importance of the system to the organization (priority) – all systems vary in importance to the organization, depending on the function each provides, and whether the organization can continue to transact business without that system operational. Mission-critical systems will have a higher priority for recovery from failure, and therefore will have more monitoring and maintenance activities performed to prevent any failures from occurring at all.
- Maintenance requirements due to regulations and laws – some industries have monitoring and maintenance regulations, rather than letting each organization define its own. In these cases, there may be reporting of monitoring and maintenance activities to an outside organization as well, to manage compliance.
- The risk and business cost of a failure occurring –
  - Some components may have a low risk of failure, but once a failure occurs, the business cost and recovery costs are high. Older systems may need parts that become scarce or expensive over time. Technical debt (the additional cost of maintaining and/or upgrading systems that lag behind current releases or technology) increases as components age, which may make the cost of a system failure catastrophic.
  - Some components may have a high risk of failure, but recovery is cheap and quick, such as by swapping out drives or cards in arrays, such that the system overall has a low risk of failure, even though components are replaced frequently.
- Maintenance recommendations from vendors – any system purchased or leased will have vendor-recommended activities to keep the system operational. Of course, the vendor will probably err on the side of more frequent and/or expensive activities, so each organization must determine its own needs. Cloud systems remove this responsibility from the lessee as part of Platform-as-a-Service, although maintenance activity costs are built into the contract.
- The expected life of each component (lifecycle) under normal use – all components will have an expected life under normal conditions. Some components are less durable than others, or are consumable, and therefore will need more frequent maintenance.
- The probability of each component's failure at or before scheduled maintenance – as a function of the expected life, the probability that a failure will occur grows over time, or after extraordinary use or strain.
- The cost of the maintenance activity in both parts and labor – a balance must be found between overprotection (continuous monitoring and replacement at the first sign of trouble, which is costly) and negligence (inadequate attention to monitoring or delayed maintenance, which leads to technical debt).

4.2 Define Maintenance Schedules

Maintenance activities almost always interfere with normal processing. Components must be made unavailable, or must incur additional strain from production and maintenance activities occurring simultaneously. Only in extreme situations should maintenance activities occur on online components. All systems must be designed to enable offline maintenance, even if infrequent, as that ability will also be used in disaster (Incident) situations (see Disaster Recovery).

Maintenance activities occur either on a schedule or in response to an event. For some components, both are recommended. Either kind may be automated, so that the maintenance activity is performed automatically at a given time or when a given event occurs.
- Scheduled maintenance activities have defined time periods, and each activity is placed on the schedule according to the organization's needs. This type of maintenance is proactive.
- Event-based maintenance occurs when a monitoring threshold is reached (for example, it is time to clean out the shared temp area), or a component signals a need for attention. This type of maintenance is reactive.

Almost all vendors provide suggested maintenance schedules or monitoring thresholds for the components they support. Schedules are expressed in terms of activities to be performed per time period (week, month, etc.); otherwise, a list of events and suggested actions is provided.

In distributed (failover) or high-availability systems, maintenance on components may occur while the system is online (even if the component is not), as other components will take over processing from the ones undergoing maintenance, so maintenance activities may optionally occur during business hours. For systems without failover, plan the most intrusive maintenance activities to occur during non-peak times, and include options for taking components offline for a time to perform activities that may adversely affect business processing. Most maintenance activities occur during times of lower business processing, such as overnight or on weekends, although with the advances in distributed systems and networks, these activities require less downtime and lower frequency.

4.3 Design and Implement Standard Maintenance Processes

Maintenance processes should be designed to be simple, easy to perform, repeatable activities that occur based on a schedule or on an event and that, ideally, can be automated as much as possible (see 4.2 above). In many cases, those performing manual maintenance activities are not knowledgeable about the component or system; instead, these operators follow the instructions provided. Manual maintenance activities that are onerous, intricate, difficult, or not clearly documented (and therefore not understood by the operator) are less likely to be performed correctly without monitoring.
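To make the scheduled/event-based distinction in 4.2 and the automation goal in 4.3 concrete, here is a minimal sketch of a maintenance task registry. The task names, intervals, and thresholds are illustrative assumptions, not part of any standard; a real implementation would log runs and alert on failures.

    import shutil
    from dataclasses import dataclass
    from datetime import datetime, timedelta
    from typing import Callable, Optional


    def free_space_pct(path: str) -> float:
        """Percentage of free space on the filesystem containing `path`."""
        usage = shutil.disk_usage(path)
        return 100.0 * usage.free / usage.total


    @dataclass
    class MaintenanceTask:
        name: str
        action: Callable[[], None]                      # the (ideally automated) activity
        interval: Optional[timedelta] = None            # set for scheduled (proactive) tasks
        trigger: Optional[Callable[[], bool]] = None    # set for event-based (reactive) tasks
        last_run: Optional[datetime] = None

        def due(self, now: datetime) -> bool:
            if self.interval and (self.last_run is None or now - self.last_run >= self.interval):
                return True
            return bool(self.trigger and self.trigger())


    def run_due_tasks(tasks: list, now: Optional[datetime] = None) -> None:
        now = now or datetime.now()
        for task in tasks:
            if task.due(now):
                task.action()          # in practice: record the run and alert on failure
                task.last_run = now


    # Hypothetical registry: one scheduled task, one event-based task.
    tasks = [
        MaintenanceTask("purge_temp_area",
                        action=lambda: print("purging temp data older than 30 days"),
                        interval=timedelta(days=7)),
        MaintenanceTask("expand_shared_space",
                        action=lambda: print("requesting additional shared space"),
                        trigger=lambda: free_space_pct("/") < 10.0),
    ]
    run_due_tasks(tasks)

A registry like this can be handed to Operations and Support to run on the organization's standard scheduler, which is the hand-off pattern described below.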
Design each task to be simple (breaking up complex tasks into simpler parts), clearly document the activity, automate as much as possible, and train the operators on any remaining manual tasks to increase the likelihood of success. For example, one important maintenance activity is the ongoing standard purging of temp data areas, such as shared databases, and the standard cleanup of old files and logs.
- An automatic activity can be scheduled to regularly remove data older than a specific age. The maintenance team develops and tests the scripts that do the cleanup, and then submits them to the Operations and Support teams to run on a regular basis (in a smaller shop this may be the same team).
- If DBAs report that temporary work areas are running out of space, maintenance activities need to identify acceptable remedial actions, such as temporarily adding space, temporarily restricting access to the shared space, or automatically dropping temporary data objects based on previously identified criteria.

4.4 Design Standard Alert Thresholds, Reports, and Forecasts

Over time, the use of infrastructure resources such as storage, CPUs, network bandwidth, and data transfer grows. This 'capacity' growth should be monitored and trends reported to help determine the future capacity needs of the system. Commonly, system capacity is designed to meet an initial workload and to handle projected usage changes going forward. There are two main usage patterns:
- Slow steady growth – in certain industries (healthcare, for example), there are few times when the average usage spikes either up or down by more than a standard deviation. Capacity increases can then be planned in advance to stay ahead of the usage slope.
- Peaks and valleys – in certain industries (such as retail), there are standard times when the capacity needs to handle several times the average usage. Capacity can be planned either to always be able to handle the highest peak (which means that for most of the year there is unused capacity being supported), or to handle an average load with the ability to temporarily expand capacity during peak periods (which can also be expensive if the capacity is leased from a vendor).

Capacity limits affect availability: if a system begins to experience downtime because it cannot allocate storage or CPU power when needed, the resources allocated to the system will have to be increased to meet future business demands.
- Thresholds must provide enough headroom or lead time to allow for analysis before taking action.
- Reports must provide enough information to enable appropriate decisions.
- Forecasts must be based on enough data to rule out anomalies, which can result in either excessive or inadequate capacity growth, both of which are costly.

Excessive capacity may have unintended consequences such as:
- More time needed for back-ups or other scheduled maintenance
- Increased power needed for equipment
- The need to upgrade computer chassis to support capacity increases
- Increased floor space / footprint for equipment
- Increased cooling needs to maintain preferred data center temperatures
- Changes in UPS needs
- Assumptions by users that space is infinite and therefore efficiency in storage and processing is unnecessary

Inadequate capacity may result in more frequent system additions, which may also cost more through reduced bulk discounts and more frequent maintenance events to add the capacity.
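As a minimal illustration of the threshold-and-forecast idea, the sketch below fits a simple linear trend to monthly utilization samples and estimates when a warning threshold would be crossed. The sample data and the 80% threshold are invented for the example; a real forecast would use more history and discount anomalies.

    def forecast_threshold_crossing(samples, threshold_pct):
        """samples: list of (month_index, utilization_pct) pairs.
        Returns the estimated month index at which utilization reaches
        threshold_pct, or None if there is no upward trend."""
        n = len(samples)
        xs = [x for x, _ in samples]
        ys = [y for _, y in samples]
        mean_x = sum(xs) / n
        mean_y = sum(ys) / n
        slope = sum((x - mean_x) * (y - mean_y) for x, y in samples) / \
                sum((x - mean_x) ** 2 for x in xs)
        intercept = mean_y - slope * mean_x
        if slope <= 0:
            return None                          # flat or shrinking usage
        return (threshold_pct - intercept) / slope


    # Twelve months of storage utilization (percent), slow steady growth.
    history = [(m, 55 + 1.5 * m) for m in range(12)]
    month = forecast_threshold_crossing(history, threshold_pct=80)
    print(f"80% threshold expected around month {month:.1f}")   # ~16.7

The answer is the lead time available for analysis and procurement, which is exactly the headroom the thresholds above are meant to guarantee.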
Today's technologies provide great advancements in on-demand capacity allocation for CPU processing (compute capacity) and storage. This capability is available both for in-house infrastructure and with cloud computing.

5 Evolutionary Responsibilities

Requests to add functionality or to change the way existing functionality works are Enhancement Requests (ERs). Acting on enhancement requests without sufficient analysis can be very dangerous to the overall health of the system. In fact, enhancement requests, done in isolation, contribute to the problem of spaghetti code often encountered in legacy systems. For that reason, the standard practice is now to recognize that enhancement activities evolve systems. In other words, evolution is not the same as simply maintaining a system. Enhancement requests should be collected and addressed in groups, within development projects. See Construction for the tools and techniques.

An EIT organization can submit ERs to third-party vendors, which may or may not be acted upon. Vendors have their own internal systems for evaluating ERs, whether from customers or generated internally. Thus, vendor-provided components evolve independently from any customer using those components. When vendors notify organizations of upgrades (new versions or patches), the maintenance team must assure that all changes to the component, and their potential impacts, are well understood before recommending installation of an upgrade and going through the Transition process (see Transition into Operations).

If a component has been customized by the organization (not just configured, but significantly changed from the off-the-shelf installed version), it can become increasingly difficult to retain those customizations as new versions become available. This leads to components falling behind, which increases technical debt, both in the opportunity cost of being unable to take advantage of functional improvements (for example, security improvements) and in the growing eventual cost when an upgrade is unavoidable. Often, local modifications of a third-party system make it very difficult to accept new versions, because so much work would be required to carry those modifications forward to the new version, and the vendor may not be inclined (without significant cost) to include the customizations in its base product.

Evolution is a continuous change from a lesser, simpler, or worse state to a higher or better state. However, acting on vendor notifications without sufficient analysis can be very dangerous to the overall health of the system. No organization has components provided by only a single vendor, so an evaluation of the entire environment must be made to assess the impact an upgrade to one component may have on other systems (the ripple effect). Some upgrades may require other systems to change how they interface or connect to the component being upgraded (adaptive maintenance). It is also often the case that one component has an upgrade available, but other connected components are not compatible with the upgrade until a later time. Careful evaluation of the entire component inventory is essential to prevent an upgrade causing an Incident in another system, disrupting business, and requiring remediation, such as backing the change out (see Transition into Operations).
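A minimal sketch of such an inventory check follows. The component names, versions, and compatibility constraints are invented for illustration; in practice they would come from the configuration management database and vendor compatibility matrices.

    # Hypothetical component inventory: name -> installed (major, minor) version.
    inventory = {"db_engine": (12, 1), "report_tool": (4, 0), "erp_core": (9, 3)}

    # Compatibility constraints: dependent -> (component it depends on,
    # highest major version of that component it supports).
    compat_rules = {
        "report_tool": ("db_engine", 12),
        "erp_core": ("db_engine", 13),
    }


    def impacted_by_upgrade(component: str, new_version: tuple) -> list:
        """Return the dependents that would need adaptive maintenance if
        `component` were upgraded to `new_version`."""
        impacted = []
        for dependent, (needed, max_major) in compat_rules.items():
            if needed == component and new_version[0] > max_major:
                impacted.append(dependent)
        return impacted


    # Proposal: upgrade db_engine from 12.1 to 13.0.
    print(impacted_by_upgrade("db_engine", (13, 0)))   # ['report_tool']

If the check returns a non-empty list, the upgrade either waits or is bundled with adaptive changes to the affected systems, which is the evaluation the paragraph above calls for.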
Both EIT management and the business product owners have a responsibility to ensure that a solution does not fall behind in either service currency (i.e., meeting the business need) or product currency (i.e., vendor support and maintenance). Allowing a system to lapse out of vendor supportability is negligence on the organization's part, unless the component has been placed in "sunset status" with a defined retirement date; in that case, there may be little point in upgrading to the latest version. Keeping a system around only to be used for historical reference is a waste of resources: convert the data to a currently readable archive and disconnect the system.

6 Change Control Systems and Processes

The maintenance function has the responsibility for establishing change control mechanisms for any and all types of changes requested for installed systems. While Operations and Support uses the change management system for recording and tracking, the Maintenance function assures the orderly progression of requests through to resolution.

6.1 Define and Implement a Standard Change Management Process

A Change Management (CM) system is a set of processes that defines, at a high level, how subsystems can be introduced or changed. The CM tracking system includes a change request process and a defect handling process. These processes are generic across all component types. For example, a change request to add new hardware is treated exactly the same as a change request to enhance functionality (i.e., both CRs are assigned, approved, and so on).

In order for this process to work effectively, a number of Change Management mechanisms must be established and consistently used. First and foremost, a change control authority (a Change Control Board (CCB) or Change Advisory Board (CAB)) has to be defined and established. It is typically chaired by the maintenance function, and includes representatives of all stakeholders: product owners, developers, testers, users, and operations and support. In addition, for some types of requests, specialists (such as enterprise architects and the original business analysts) may be called in. In order to support and facilitate the functioning of the Board, specific mechanisms need to be in place.
These include things such as:
- A numbering scheme for defect reports, enhancement requests, adaptive requests, and prevention requests
- A scheme for categorizing, assessing the risk of, and prioritizing the requests, taking into account how severe an incident is and how many users it might affect
- A scheme for siphoning off enhancement requests into queues for bundling requests into development projects
- Assuring that all requests are entered and tracked in a change management system
- Defining a closed-loop change management process so that:
  - All requests are tracked through resolution (Deferred, Rejected, Approved, Change Made, Change Tested, Change Released)
  - A clear path of request reporting, reviews, approvals, and resolution is defined
  - All requests come through the same system that the Operations help desk (aka Support) uses
  - Tracking and reporting provides trend analysis for such things as error-prone areas or modules and the volatility of change requests (especially defect reports)
- Assuring that action on requests is reflected in the Operations CM database system and the development CM system

6.2 Define Request Intake and Evaluation Process

A typical change control process looks something like this:
- Operations receives software change requests via its Help Desk function and enters them into the incident-tracking system. Operations does not make changes to software.
- Defects and adaptive requests are automatically sent to the responsible development team (or to the vendor relationship manager for acquired components or systems), where they are assigned by the appropriate manager according to relative priority.
- Approved enhancement requests are periodically reviewed by the Product Manager for inclusion in later releases of the system. In EIT organizations that do not have a Product Manager function, a suitable user representative is tapped for this role in the CCB.
- Preventive requests are reviewed by the Product Manager and the CCB.
- The status of all changes to a system is reviewed by the CCB prior to release.
- The CCB is composed of representatives of all stakeholders (development, testing, documentation and training, operations, and product management).

6.3 Change Request Processing and Approval Flow

Configuration Management is the foundation of a software project. It is the management of change to components and systems. Without it, no matter how talented the staff, how large the budget, how robust the development and test processes, or how technically superior the development tools, project discipline will collapse and success will be left to chance.

Once a Change Request (CR) is assigned and approved, its "owner" manages the necessary change via the defined CM process. Procedures may differ depending on the type of change. For example, a developer may be required to apply an operating system patch, and an operating system patch will be applied differently than a system release. The owner, at various stages of the configuration control process, will be able to identify where in the high-level CM process the change is. For example, if an operating system patch is ready to be applied in a test environment, the CR should be marked as Ready_for_testing. Once the patch has been successfully tested, the CR should be marked Tested.

The CM process and its supporting mechanisms should provide a clear, documented trail of change requests, their disposition, and the changes introduced into the system, enabling better team communication and the collection of meaningful project metrics.
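To make the closed-loop idea concrete, here is a minimal sketch of a change-request record whose state can only move along allowed transitions. The state names are drawn from the process described in this section, but the transition table itself is a simplifying assumption rather than a prescribed workflow.

    # Allowed state transitions for a change request (simplified assumption).
    ALLOWED = {
        "Submitted": {"Assigned", "Postponed", "Rejected", "Duplicate"},
        "Assigned":  {"Opened"},
        "Opened":    {"Resolved"},
        "Resolved":  {"Tested", "Opened"},     # back to Opened if validation fails
        "Tested":    {"Released"},
        "Postponed": {"Assigned"},
        "Rejected":  {"Submitted"},            # re-opened with new information
    }


    class ChangeRequest:
        def __init__(self, cr_id: str, summary: str):
            self.cr_id, self.summary = cr_id, summary
            self.state = "Submitted"
            self.history = [("Submitted", "submitter")]   # audit trail for the CCB

        def transition(self, new_state: str, actor: str) -> None:
            if new_state not in ALLOWED.get(self.state, set()):
                raise ValueError(f"{self.state} -> {new_state} is not an allowed transition")
            self.state = new_state
            self.history.append((new_state, actor))


    cr = ChangeRequest("CR-1042", "Apply operating system patch to reporting server")
    cr.transition("Assigned", "CCB")
    cr.transition("Opened", "Configuration Manager")
    cr.transition("Resolved", "assigned developer")
    cr.transition("Tested", "tester")
    cr.transition("Released", "release manager")
    print(cr.state, len(cr.history))   # Released 6

Because every transition is recorded along with its actor, the same record doubles as the documented trail that the CCB and auditors need.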
The request itself, the requestor, the approvers, and all actions taken in response to the CR should be available through the CM process and tools to everyone on the project. The generic approval flow defines a generic CR. The CR may be used to represent and track defects, enhancements, greenfield development, documentation, etc.

The Change Control Board (CCB) is a central control mechanism to ensure that every change request is properly considered, authorised, and co-ordinated. The full CCB should meet on a regular basis, probably once a week. Emergency meetings can be called as necessary. All decisions made by the CCB should be documented in the CM system. A CCB member is the top level of the change management hierarchy and can also act in any role defined lower in the hierarchy. For example, if a team leader is not present, a CCB member can act on behalf of the team leader. The CCB includes the following members:
- Configuration process manager
- CM system administrator
- Respective system/component development managers
- Key stakeholders, such as Operations & Support, and user representatives

This table describes the basic actions performed on a change request.

|Action||Description||Actor|
|Submit CR||Any stakeholder on the project can submit a Change Request (CR). This logs the CR in the CM system, places it into the CCB Review Queue, and sets its state to Submitted.||Submitter|
|Review CR||This CCB action reviews Submitted Change Requests. The CR's content is initially reviewed in the CCB Review meeting to determine if it is a valid request. If it is, a determination is made whether the change is in or out of scope for the current release(s), based on priority, schedule, resources, level-of-effort, risk, severity, and any other relevant criteria as determined by the group. The state of a valid CR is set to Assigned or Postponed accordingly.||CCB|
|Confirm Duplicate or Reject||If a CR is suspected of being a duplicate or invalid request (e.g., operator error, not reproducible, works as designed), a delegate of the CCB is assigned to confirm the duplicate or rejected CR and to gather more information from the submitter, if necessary. The CR state is set to Duplicate or Closed as appropriate.||CCB Delegate|
|Re-open||If more information is needed, or if a CR is rejected at any point in the process, the submitter is notified and may update the CR with new information. The updated CR is then re-submitted to the CCB Review Queue for consideration of the new data.||Submitter|
|Open & Work-on||Once a CR is assigned by the CCB, the Project Lead will assign the work to the appropriate user – depending on the type of request (enhancement request, defect, documentation change, test defect, etc.) – and make any needed updates to the project schedule. The CR state is set to Opened.||Configuration Manager|
|Resolve||The assigned worker performs the set of activities defined within the appropriate section of the process (e.g., requirements, analysis & design, implementation, produce user-support materials, design test, etc.) to make the changes requested. These activities will include all normal review and unit test activities as described within the normal development process. The CR will then be marked as Resolved.||Assigned user|
|Validate||After the changes are Resolved by the assigned user (analyst, developer, tester, etc.), the changes are placed into a test queue to be assigned to a tester and validated in a test build of the product.||Tester|

7 Types of System Change Releases

7.1 "Patch" Release (patch only)

A patch is a relatively small change, generally to source code, to fix a defect. However, data fixes may be required to rectify invalid data that has been created by bad code or user error. Either patch type, although small, can still have widespread impact on the system, especially if the source code change affects a critical component of the system or the data fix changes millions of data records. Therefore, all patches applied must be fully tested before being scheduled for production implementation.

7.2 Full Release

A full release generally means that most or all system components are packaged on a release medium. This is usually called a version upgrade.

7.3 Traceability and Auditability

Processes and systems that assist in the management of patch and version upgrades are a part of configuration management. They track past, current, and future versions of software and infrastructure components (e.g., databases, utilities, hardware) that have been, or will be, implemented. For large systems with thousands of modules, like ERPs, manual processes become error-prone and unmanageable, so automated tracking is required to ensure major downtime is not experienced due to user error.

A configuration management system can provide a great audit trail for implemented changes; however, this is not the only tracking necessary in most cases. Many companies today have regulatory requirements to comply with accounting and other regulations or standards, such as Sarbanes-Oxley (SOX), or internal auditing control functions. Both internal and external EIT auditors will use this history to ensure that control processes are followed by IT. Specific EIT staff are allocated responsibility and oversight for these control processes, and are responsible and accountable for ensuring that the defined processes are followed and that the audits are completed accurately and on a timely basis.

8 Summary

Systems should be designed and built to be easily maintained. Maintenance is the responsibility of EIT, and should be an auditable process, with mechanisms for tracking and reporting. Systems need to be monitored, measured, and validated to ensure this happens.

9 Key Maturity Frameworks

Capability maturity for EIT refers to its ability to reliably perform. Maturity is measured by an organization's readiness and capability expressed through its people, processes, data, and technologies, and the consistent measurement practices that are in place. Please see Appendix F for additional information about maturity frameworks.

Many specialized frameworks have been developed since the original Capability Maturity Model (CMM) that was developed by the Software Engineering Institute in the late 1980s. This section describes how some of those apply to the activities described in this chapter.

9.1 IT-Capability Maturity Framework (IT-CMF)

The IT-CMF was developed by the Innovation Value Institute in Ireland. It helps organizations to measure, develop, and monitor their EIT capability maturity progression.
It consists of 35 IT management capabilities that are organized into four macro capabilities:
- Managing IT like a business
- Managing the IT budget
- Managing the IT capability
- Managing IT for business value

The three most relevant critical capabilities are Technical Infrastructure Management (TIM), Service Provisioning (SRP), and Business Planning (BP).

9.1.1 Technical Infrastructure Management Maturity

The following statements provide a high-level overview of the Technical Infrastructure Management (TIM) capability at successive levels of maturity.

|Level 1||Management of the IT infrastructure is reactive or ad hoc.|
|Level 2||Documented policies are emerging relating to the management of a limited number of infrastructure components. Predominantly manual procedures are used for IT infrastructure management. Visibility of capacity and utilization across infrastructure components is emerging.|
|Level 3||Management of infrastructure components is increasingly supported by standardized tool sets that are partly integrated, resulting in decreased execution times and improving infrastructure utilization.|
|Level 4||Policies relating to IT infrastructure management are implemented automatically, promoting execution agility and achievement of infrastructure utilization targets.|
|Level 5||The IT infrastructure is continually reviewed so that it remains modular, agile, lean, and sustainable.|

9.1.2 Service Provisioning Maturity

The following statements provide a high-level overview of the Service Provisioning (SRP) capability at successive levels of maturity.

|Level 1||The service provisioning processes are ad hoc, resulting in unpredictable IT service quality.|
|Level 2||Service provisioning processes are increasingly defined and documented, but execution is dependent on individual interpretation of the documentation. Service level agreements (SLAs) are typically defined at the technical operational level only.|
|Level 3||Service provisioning is supported by standardized tools for most IT services, but may not yet be adequately integrated. SLAs are typically defined at the business operational level.|
|Level 4||Customers have access to services on demand. Management and troubleshooting of services are highly automated.|
|Level 5||Customers experience zero downtime or delays, and service provisioning is fully automated.|

9.1.3 Business Planning Maturity

The following statements provide a high-level overview of the Business Planning (BP) capability at successive levels of maturity.

|Level 1||The IT business plan is developed only for the purpose of budget acquisition, and offers little value beyond this to the organization.|
|Level 2||The IT business plan typically covers the resource requirements for a limited number of key areas that contribute to the objectives of the IT strategy.|
|Level 3||The IT business plan includes standardized details regarding required resources and identifies some of the ways in which the planned activities will contribute to the objectives of the IT strategy. Some input from some other business units is considered.|
|Level 4||The IT business plan is comprehensively validated by the IT function and all other business units, and identifies all required resources and the expected benefits.|
|Level 5||Relevancy of the IT business plan is continually reviewed, with regular input from relevant business ecosystem partners, to identify opportunities for organization-wide benefits.|

10 Key Competence Frameworks

While many large companies have defined their own sets of skills for purposes of talent management (to recruit, retain, and further develop the highest quality staff members that they can find, afford, and hire), the advancement of EIT professionalism will require common definitions of EIT skills that can be used not just across enterprises, but also across countries. We have selected 3 major sources of skill definitions. While none of them is used universally, they provide a good cross-section of options. Creating mappings between these frameworks and our chapters is challenging, because they come from different perspectives and have different goals. There is rarely a 100% correspondence between the frameworks and our chapters, and, despite careful consideration, some subjectivity was used to create the mappings. Please take that into consideration as you review them.

10.1 Skills Framework for the Information Age

The Skills Framework for the Information Age (SFIA) has defined nearly 100 skills. SFIA describes 7 levels of competency which can be applied to each skill. Not all skills, however, cover all seven levels. Some reach only partially up the seven-step ladder. Others are based on mastering foundational skills, and start at the fourth or fifth level of competency. It is used in nearly 200 countries, from Britain to South Africa, South America, to the Pacific Rim, to the United States. (http://www.sfia-online.org)

|Skill||Skill Description||Competency Levels|
|Application support||The provision of application maintenance and support services, either directly to users of the systems or to service delivery functions. Support typically includes investigation and resolution of issues and may also include performance monitoring. Issues may be resolved by providing advice or training to users, by devising corrections (permanent or temporary) for faults, making general or site-specific modifications, updating documentation, manipulating data, or defining enhancements. Support often involves close collaboration with the system's developers and/or with colleagues specialising in different areas, such as Database administration or Network support.||2 - 5|
|Business risk management||The planning and implementation of organisation-wide processes and procedures for the management of risk to the success or integrity of the business, especially those arising from the use of information technology, reduction or non-availability of energy supply or inappropriate disposal of materials, hardware or data.||4 - 7|
|Capacity management||The management of the capability, functionality and sustainability of service components (including hardware, software, network resources and software/infrastructure as a Service) to meet current and forecast needs in a cost efficient manner aligned to the business. This includes predicting both long-term changes and short-term variations in the level of capacity required to execute the service, and deployment, where appropriate, of techniques to control the demand for a particular resource or service.||4 - 6|
|Conformance review||The independent assessment of the conformity of any activity, process, deliverable, product or service to the criteria of specified standards, accepted practice, or other documented requirements. May relate to, for example, asset management, network security tools, firewalls and internet security, sustainability, real-time systems, application design and specific certifications.||3 - 6|
|Customer service support||The management and operation of one or more customer service or service desk functions. Acting as a point of contact to support service users and customers reporting issues, requesting information, access, or other services.||1 - 6|
|Database administration||The installation, configuration, upgrade, administration, monitoring and maintenance of databases.||2 - 5|
|Digital forensics||The collection, processing, preserving, analysing, and presenting of computer-related evidence in support of security vulnerability mitigation and/or criminal, fraud, counterintelligence, or law enforcement investigations.||4 - 6|
|Facilities management||The planning, control and management of all the facilities which, collectively, make up the IT estate. This involves provision and management of the physical environment, including space and power allocation, and environmental monitoring to provide statistics on energy usage. Encompasses physical access control, and adherence to all mandatory policies and regulations concerning health and safety at work.||3 - 6|
|Financial management||The overall financial management, control and stewardship of the IT assets and resources used in the provision of IT services, including the identification of materials and energy costs, ensuring compliance with all governance, legal and regulatory requirements.||4 - 6|
|Incident management||The processing and coordination of appropriate and timely responses to incident reports, including channelling requests for help to appropriate functions for resolution, monitoring resolution activity, and keeping clients appraised of progress towards service restoration.||2 - 5|
|IT Infrastructure||The operation and control of the IT infrastructure (typically hardware, software, data stored on various media, and all equipment within wide and local area networks) required to deliver and support IT services and products to meet the needs of a business. Includes preparation for new or changed services, operation of the change process, the maintenance of regulatory, legal and professional standards, the building and management of systems and components in virtualised computing environments and the monitoring of performance of systems and services in relation to their contribution to business performance, their security and their sustainability.||1 - 4|
|IT management||The management of the IT infrastructure and resources required to plan for, develop, deliver and support IT services and products to meet the needs of a business. The preparation for new or changed services, management of the change process and the maintenance of regulatory, legal and professional standards. The management of performance of systems and services in terms of their contribution to business performance and their financial costs and sustainability. The management of bought-in services. The development of continual service improvement plans to ensure the IT infrastructure adequately supports business needs.||5 - 7|
|Network support||The provision of network maintenance and support services. Support may be provided both to users of the systems and to service delivery functions. Support typically takes the form of investigating and resolving problems and providing information about the systems. It may also include monitoring their performance. Problems may be resolved by providing advice or training to users about the network's functionality, correct operation or constraints, by devising work-arounds, correcting faults, or making general or site-specific modifications.||2 - 5|
|Problem management||The resolution (both reactive and proactive) of problems throughout the information system lifecycle, including classification, prioritisation and initiation of action, documentation of root causes and implementation of remedies to prevent future incidents.||3 - 5|
|Security administration||The provision of operational security management and administrative services. Typically includes the authorisation and monitoring of access to IT facilities or infrastructure, the investigation of unauthorised access and compliance with relevant legislation.||1 - 6|
|Storage management||The planning, implementation, configuration and tuning of storage hardware and software covering online, offline, remote and offsite data storage (backup, archiving and recovery) and ensuring compliance with regulatory and security requirements.||3 - 6|
|System software||The provision of specialist expertise to facilitate and execute the installation and maintenance of system software such as operating systems, data management products, office automation products and other utility software.||3 - 5|

10.2 European Competency Framework

The European Union's European e-Competence Framework (e-CF) has 40 competences and is used by a large number of companies, qualification providers and others in public and private sectors across the EU. It uses five levels of competence proficiency (e-1 to e-5). No competence is subject to all five levels. The e-CF is published and legally owned by CEN, the European Committee for Standardization, and its National Member Bodies (www.cen.eu). Its creation and maintenance has been co-financed and politically supported by the European Commission, in particular, DG (Directorate General) Enterprise and Industry, with contributions from the EU ICT multi-stakeholder community, to support competitiveness, innovation, and job creation in European industry. The Commission works on a number of initiatives to boost ICT skills in the workforce. Versions 1.0 to 3.0 were published as CEN Workshop Agreements (CWA). The e-CF 3.0 CWA 16234-1 was published as an official European Norm (EN), EN 16234-1. For complete information, please see http://www.ecompetences.eu.

|e-CF Dimension 2||e-CF Dimension 3|
|C.3. Service Delivery (RUN) Ensures service delivery in accordance with established service level agreements (SLAs). Takes proactive action to ensure stable and secure applications and ICT infrastructure to avoid potential service disruptions, attending to capacity planning and to information security. Updates operational document library and logs all service incidents. Maintains monitoring and management tools (i.e. scripts, procedures). Maintains IS services. Takes proactive measures.
|C.4. Problem Management (RUN) Identifies and resolves the root cause of incidents. Takes a proactive approach to avoidance or identification of root cause of ICT problems. Deploys a knowledge system based on recurrence of common errors. Resolves or escalates incidents. Optimises system or component performance.
|E.3. Risk Management (MANAGE) Implements the management of risk across information systems through the application of the enterprise defined risk management policy and procedure. Assesses risk to the organisation's business, including web, cloud and mobile resources. Documents potential risk and containment plans.
|E.4. Relationship Management (MANAGE) Establishes and maintains positive business relationships between stakeholders (internal or external) deploying and complying with organisational processes. Maintains regular communication with customer / partner / supplier, and addresses needs through empathy with their environment and managing supply chain communications. Ensures that stakeholder needs, concerns or complaints are understood and addressed in accordance with organisational policy.
|E.8. Information Security Management (MANAGE) Implements information security policy. Monitors and takes action against intrusion, fraud and security breaches or leaks. Ensures that security risks are analysed and managed with respect to enterprise data and information. Reviews security incidents, makes recommendations for security policy and strategy to ensure continuous improvement of security provision.

10.3 i Competency Dictionary

The Information Technology Promotion Agency (IPA) of Japan has developed the i Competency Dictionary (iCD), translated it into English, and describes it at https://www.ipa.go.jp/english/humandev/icd.html. It is an extensive skills and tasks database, used in Japan and southeast Asian countries. It establishes a taxonomy of tasks and the skills required to perform the tasks. The IPA is also responsible for the Information Technology Engineers Examination (ITEE), which has grown into one of the largest scale national examinations in Japan, with approximately 600,000 applicants each year.

The iCD consists of a Task Dictionary and a Skill Dictionary. Skills for a specific task are identified via a "Task x Skill" table. (Please see Appendix A for the task layer and skill layer structures.) EITBOK activities in each chapter require several tasks in the Task Dictionary. The table below shows a sample task from iCD Task Dictionary Layer 2 (with Layer 1 in parentheses) that corresponds to activities in this chapter. It also shows the Layer 2 (Skill Classification), Layer 3 (Skill Item), and Layer 4 (knowledge item from the IPA Body of Knowledge) prerequisite skills associated with the sample task, as identified by the Task x Skill Table of the iCD Skill Dictionary. The complete iCD Task Dictionary (Layer 1-4) and Skill Dictionary (Layer 1-4) can be obtained by returning the request form provided at http://www.ipa.go.jp/english/humandev/icd.html.

|Task Dictionary||Skill Dictionary|
|Task Layer 1 (Task Layer 2)||Skill Classification||Skill Item||Associated Knowledge Items|
|System operation design |System maintenance, operation, and evaluation||System operations management requirements definition||

11 Key Roles

Both SFIA and the e-CF have described profiles (similar to roles) for providing examples of skill sets (skill combinations) for various roles. The iCD has described tasks performed in EIT and associated those with skills in the IPA database. The following roles are common to ITSM.
- 1st, 2nd, 3rd Level Support
- Access Manager
- Facilities Manager
- Incident Manager
- IT Operations Manager
- IT Operator
- Major Incident Team
- Problem Manager
- Service Request Fulfillment

Other key roles are:
- Product Owner
- Solution Architect
- Solution Manager
- Technology Architect
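The "Task x Skill" idea described in section 10.3, and the role profiles that SFIA and the e-CF publish, amount to a simple lookup: a role maps to tasks, and each task maps to the skills it requires. The sketch below is only an illustration of that structure, not an excerpt from SFIA, the e-CF, or the iCD; apart from the sample iCD task and skill-classification strings and the role names listed above, every entry is an invented placeholder.

```python
# Minimal sketch of a "Task x Skill" style lookup. All entries other than
# the sample iCD row and the role names from this chapter are hypothetical.
from typing import Dict, List

TASK_SKILLS: Dict[str, List[str]] = {
    "System operation design": [
        "System operations management requirements definition",  # from the sample iCD row
        "Monitoring and event management",                        # placeholder
    ],
    "Problem management": [
        "Root cause analysis",                                    # placeholder
        "Known-error database maintenance",                       # placeholder
    ],
}

ROLE_TASKS: Dict[str, List[str]] = {
    "IT Operations Manager": ["System operation design"],
    "Problem Manager": ["Problem management"],
}


def skills_for_role(role: str) -> List[str]:
    """Collect, without duplicates, the skills implied by a role's tasks."""
    skills: List[str] = []
    for task in ROLE_TASKS.get(role, []):
        for skill in TASK_SKILLS.get(task, []):
            if skill not in skills:
                skills.append(skill)
    return skills


if __name__ == "__main__":
    print(skills_for_role("Problem Manager"))
```

In practice the dictionaries themselves (obtainable via the IPA request form linked above) would supply these tables; the value of holding them in a structure like this is that a chapter's activities can be traced mechanically from role to task to prerequisite skills.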
Imagine trying to find Earth in a photograph of the entire universe on a cell phone screen and you'll have an inkling of the task facing an IT team helping biomedical researchers view huge datasets on a PC.

"Trying to view these datasets on a typical monitor can be analogous to looking through a straw into the haystack," said David Lee, BioWall Application Engineer for the National Center for Microscopy and Imaging Research (NCMIR) and the San Diego Supercomputer Center (SDSC) at the University of California, San Diego, Calif.

One half of NCMIR's technology development facility houses scientists; the other half houses the IT group, which develops life sciences applications, imaging technologies, and systems for remote instrumentation control, and explores visualization methods. Overall, the expertise of NCMIR's 70 employees ranges from biology, mathematics, physics, and computer science to mechanical engineering.

NCMIR uses more than 30 servers running Linux, Windows, Solaris, and Mac OS on various architectures, plus two clusters, one 64-bit and one 32-bit. "Most scientific applications are now being developed on Linux due to its flexibility, available software, ease of deployment, cluster operating systems such as [open source project] Rocks and management tools," said Lee. The Rocks Linux distribution is a high-performance cluster toolkit.

At NCMIR, Lee's IT team has modified instrumentation and written applications to take large images from light and electron microscopes. Unfortunately, the size of these images is far greater than the resolution of typical computer displays. "Even with the high resolution tiled display, we are only able to view a fraction of the entire dataset simultaneously," Lee said. Researchers' productivity is reduced because, on a small screen, they can't detect the slight grey-scale changes that distinguish the useful data in electron microscope images.

For the past few years, NCMIR scientists have been frustrated with computer technology, particularly digital imaging. Many have gone back to using older recording media, such as film. Frustrated himself, Lee began to search for a way to 'supersize' a computer display.

First, Lee's team evaluated IBM T221 displays. They worked well for viewing very large amounts of data. "Because of the physical size of the displays, they are only useful for a single person sitting directly in front of the displays," said Lee. Most of the work on the biological side is collaborative, so a one-person display didn't fit the bill.

Then the IT team took a cue from the kind of postmodern art installation that displays images on walls with banks of video screens. They created a tiled wall, the BioWall Tiled Display, of high-resolution flat-panel displays placed five across in four connecting rows. The 40-megapixel display shows detailed two- and three-dimensional images, creating more realistic and easier-to-see depictions of biological structures, and of cells and tissues in relation to each other.

Lee's team ran into no problems installing the BioWall's 21 dual-Opteron Sun Java Workstations with 8 GB of RAM, NVIDIA Quadro 3000G graphics cards, dual gigabit Ethernet, and a single 10K RPM SCSI drive. Rocks Linux plays an important role for BioWall.
"The operating system was deployed with the Rocks mechanisms with a special package to make Rocks clusters suitable for graphics and multi-display clusters," said Lee. "Administration requirements are very low with Rocks clusters, requiring only the occasional maintenance from systems administrators." The BioWall was a hit with the scientists, even the recalcitrant biologists who had clung to using older film technologies. Productivity increases have been dramatic. The only shortcoming of BioWall, Lee said, is that "it is made of many small pieces and is not easily portable." That's a hint to display manufacturers, and visions of a single-paneled BioWall probably dance through Lee's dreams. Rather than dwell on dreams, however, Lee has concrete plans for adding a 3D overlay on top of the BioWall displays. Also, he's evaluating motion tracking devices that would increase biologists' interactivity, enabling them to experiment with usability scenarios with other input methods for the display.
Yoko Ono at the Museum of Contemporary Art of the University of São Paulo, Brazil, in 2007

Yoko Ono (born February 18, 1933) is a Japanese multimedia artist, singer, songwriter, and peace activist who is also known for her work in performance art and filmmaking. She performs in both English and Japanese. She was the second wife and widow of singer-songwriter John Lennon of the Beatles.

Ono grew up in Tokyo and studied at Gakushuin. She withdrew from her course after two years and rejoined her family in New York in 1953. She spent some time at Sarah Lawrence College, and then became involved in New York City's downtown artists scene, including the Fluxus group. She first met Lennon in 1966 at her own art exhibition in London, and they became a couple in 1968. With their performance Bed-Ins for Peace in Amsterdam and Montreal in 1969, Ono and Lennon famously used their honeymoon at the Hilton Amsterdam as a stage for public protests against the Vietnam War. She brought feminism to the forefront in her music, influencing artists as diverse as the B-52s and Meredith Monk. Ono achieved commercial and critical acclaim in 1980 with the chart-topping album Double Fantasy, a collaboration with Lennon that was released three weeks before his death.

Public appreciation of Ono's work has shifted over time, helped by a retrospective at a Whitney Museum branch in 1989 and the 1992 release of the six-disc box set Onobox. Retrospectives of her artwork have also been presented at the Japan Society in New York City in 2001, in Bielefeld, Germany, and the UK in 2008, in Frankfurt and Bilbao, Spain, in 2013, and at MoMA in New York City in 2015. She received a Golden Lion Award for lifetime achievement from the Venice Biennale in 2009 and the 2012 Oskar Kokoschka Prize, Austria's highest award for applied contemporary art.

As Lennon's widow, Ono works to preserve his legacy. She funded Strawberry Fields in Manhattan's Central Park, the Imagine Peace Tower in Iceland, and the John Lennon Museum in Saitama, Japan (which closed in 2010). She has made significant philanthropic contributions to the arts, peace, Philippine and Japan disaster relief, and other causes. In 2012 Yoko Ono received the Dr. Rainer Hildebrandt Human Rights Award, endowed by Alexandra Hildebrandt, which is given annually in recognition of extraordinary, non-violent commitment to human rights. Ono continues her social activism, inaugurating a biennial $50,000 LennonOno Grant for Peace in 2002 and co-founding the group Artists Against Fracking in 2012.

She has a daughter, Kyoko Chan Cox, from her marriage to Anthony Cox, and a son, Sean Taro Ono Lennon, from her marriage to Lennon. She collaborates musically with Sean Lennon.

Early life and family

Ono was born on February 18, 1933, in Tokyo, to Isoko Ono (小野 磯子 Ono Isoko) and Eisuke Ono (小野 英輔 Ono Eisuke), a banker and former classical pianist.
Isoko's father was ennobled in 1915. Isoko's maternal grandfather Zenjiro Yasuda (安田 善次郎 Yasuda Zenjirō) was an affiliate of the Yasuda clan and zaibatsu. Eisuke came from a long line of samurai warrior-scholars. The kanji translation of Yoko means "ocean child". Two weeks before Yoko's birth, Eisuke was transferred to San Francisco by his employer, the Yokohama Specie Bank. The rest of the family followed soon after, with Yoko meeting Eisuke when she was two. Her younger brother Keisuke was born in December 1936. Yoko was enrolled in piano lessons from the age of 4. In 1937, the family was transferred back to Japan and Ono enrolled at Tokyo's Gakushuin (also known as the Peers School), one of the most exclusive schools in Japan. In 1940, the family moved to New York City. The next year, Eisuke was transferred from New York City to Hanoi, and the family returned to Japan. Ono was enrolled in Keimei Gakuen, an exclusive Christian primary school run by the Mitsui family.

She remained in Tokyo through the great fire-bombing of March 9, 1945, during which she was sheltered with other family members in a special bunker in the Azabu district of Tokyo, far from the heavy bombing. Ono later went to the Karuizawa mountain resort with members of her family. Ono and her family were forced to beg for food while pulling their belongings in a wheelbarrow. It was during this period in her life, Ono says, that she developed her "aggressive" attitude and understanding of "outsider" status when children taunted her and Keisuke, who were once well-to-do. Other stories have her mother bringing a large number of goods with them to the countryside, where they bartered them for food. In one anecdote, her mother bartered a German-made sewing machine for 60 kilograms (130 lb) of rice to feed the family. Her father remained in the city, unknown to the family, who believed he was in a prisoner of war camp in China. Ono told Amy Goodman of Democracy Now on October 16, 2007, that "He was in French Indochina, which is Vietnam actually... in Saigon. He was in a concentration camp."

By April 1946, Gakushuin was reopened and Ono re-enrolled. The school, located near the Tokyo Imperial Palace, had not been damaged by the war, and Ono found herself a classmate of Prince Akihito, the future emperor of Japan. She graduated in 1951 and was accepted into the philosophy program of Gakushuin University as the first woman to enter the department. However, she left the school after two semesters.

New York City

College and downtown beginnings

Ono's family moved to Scarsdale, New York (an affluent town 25 miles north of mid-town Manhattan) without her after the war. When she later rejoined her family in the U.S., she enrolled nearby in Sarah Lawrence College. While her parents approved of her college choice, Ono said they disapproved of her lifestyle and chastised her for befriending people they felt were beneath her. In spite of her parents' disapproval, Ono loved meeting artists, poets, and others who represented the bohemian lifestyle to which she aspired. Visiting galleries and art happenings in the city whetted her desire to display her own artistic endeavors publicly. American avant-garde artist, composer, and musician La Monte Young, her first important contact in the New York art world, helped Ono start her career by using her Chambers Street loft in Tribeca as a performance space.
After Ono set a painting on fire at one performance, her mentor John Cage advised her to treat the paper with flame retardant. Return to Japan, early career, and motherhood In 1956, Ono left college to elope with Japanese composer Toshi Ichiyanagi, a star in Tokyo's experimental community. After living apart for several years, they filed for divorce in 1962. Ono returned home to live with her parents and, suffering from clinical depression, was briefly placed in a mental institution. Later that year, on November 28, 1962, Ono married Anthony Cox, an American jazz musician, film producer, and art promoter, who was instrumental in securing her release from the Japanese mental institution. However, because Ono had neglected to finalize her divorce from Ichiyanagi, her second marriage was annulled on March 1, 1963. After finalizing the divorce, Cox and Ono married again on June 6, 1963. She gave birth to their daughter Kyoko Chan Cox two months later on August 8, 1963. The marriage quickly fell apart, but the Coxes stayed together for the sake of their joint careers. They performed at Tokyo's Sogetsu Hall, with Ono lying atop a piano played by John Cage. Soon, the couple returned to New York with Kyoko. In the early years of the marriage, Ono left most of Kyoko's parenting to Cox while she pursued her art full-time. Cox also managed her publicity. She and Cox divorced on February 2, 1969. However, in 1971, Cox disappeared with their then-eight-year-old daughter, in the midst of the custody battle. He won custody after claiming Ono was an "unfit mother" due to her drug use. Ono's ex-husband subsequently raised Kyoko under the name Ruth Holman in an organization known as the Church of the Living Word (or "the Walk"). Ono and her third husband John Lennon searched for Kyoko for years, to no avail. She first saw Kyoko again in 1998. Fluxus, a loose association of Dada-inspired avant-garde artists that developed in the early 1960s, was active in New York and Europe. Ono visited London to meet artist and political activist Gustav Metzger's Destruction in Art Symposium in September 1966, as the only woman artist chosen to perform her own events and only one of two invited to speak. There are two versions of the story regarding how Lennon met Ono. According to the first account, on November 9, 1966 Lennon went to the Indica Gallery in London, where Ono was preparing her conceptual art exhibit, and they were introduced by gallery owner John Dunbar. Lennon was initially unimpressed with the exhibits he saw, including a pricey bag of nails, but one piece had a ladder with a spyglass at the top. When he climbed the ladder, Lennon felt a little foolish, but he looked through the spyglass and saw the word "YES" which he said meant he didn't walk out, as it was positive, whereas most concept art he encountered was "anti" everything. Lennon was also intrigued by Ono's Hammer a Nail. Viewers hammered a nail into a wooden board, creating the art piece. Although the exhibition had not yet opened, Lennon wanted to hammer a nail into the clean board, but Ono stopped him. Dunbar asked her, "Don't you know who this is? He's a millionaire! He might buy it." Ono supposedly had not heard of the Beatles, but relented on the condition that Lennon pay her five shillings, to which Lennon replied, "I'll give you an imaginary five shillings and hammer an imaginary nail in." Paul McCartney told a second version of the first meeting of Ono and Lennon. 
In 1965, Ono was in London compiling original musical scores for a book entitled Notations'; John Cage was working on the book. McCartney declined to give her any of his own manuscripts, but suggested that Lennon might oblige. Lennon did, giving Ono the original handwritten lyrics to "The Word." In a 2002 interview, she said, "I was very attracted to him. It was a really strange situation." The two began corresponding and, in September 1967, Lennon sponsored Ono's solo show at Lisson Gallery in London. When Lennon's wife Cynthia asked for an explanation for Ono's telephoning their home, he told her that Ono was only trying to obtain money for her "avant-garde bullshit." In early 1968, while the Beatles were making their famous visit to India, Lennon wrote "Julia" and included a reference to Ono: "Ocean child calls me", referring to the translation of Yoko's Japanese spelling. In May 1968, while his wife was on holiday in Greece, Lennon invited Ono to visit. They spent the night recording what would become the Two Virgins album, after which, he said, they "made love at dawn". When Lennon's wife returned home, she found Ono wearing her bathrobe and drinking tea with Lennon who simply said, "Oh, hi." On September 24 and 25, 1968, Lennon wrote and recorded "Happiness Is a Warm Gun", which contains sexual references to Ono. A few weeks after Lennon's divorce from Cynthia was granted, Ono became pregnant, but she suffered the miscarriage of a male child on November 21, 1968. Bed-Ins and other early collaborations During the last two years of the existence the Beatles, Lennon and Ono created and attended their own public protests against the Vietnam War. On March 20, 1969, they were married at the registry office in Gibraltar and spent their honeymoon in Amsterdam, campaigning with a week-long Bed-In for Peace. They planned another Bed-In in the US, but were denied entry to the country. They held one instead at the Queen Elizabeth Hotel in Montreal, where they recorded "Give Peace a Chance". Lennon later stated his regrets about feeling "guilty enough to give McCartney credit as co-writer on my first independent single instead of giving it to Yoko, who had actually written it with me." The famous couple often combined advocacy with performance art, such as in "bagism", first introduced during a Vienna press conference, where they satirised prejudice and stereotyping by wearing a bag over their entire bodies. Lennon detailed this period in the Beatles' song "The Ballad of John and Yoko". Lennon changed his name by deed poll on April 22, 1969, switching out Winston for Ono as a middle name. Although he used the name John Ono Lennon thereafter, official documents referred to him as John Winston Ono Lennon, since he was not permitted to revoke a name given at birth. The couple settled at Tittenhurst Park at Sunninghill, Berkshire, in southeast England. When Ono was injured in a car accident, Lennon arranged for a king-sized bed to be brought to the recording studio as he worked on the Beatles' last recorded album, Abbey Road. The two collaborated on many albums, beginning in 1968 when Lennon was still a Beatle, with Unfinished Music No.1: Two Virgins, an album of experimental musique concrète. The same year, the couple contributed an experimental piece to The White Album called "Revolution 9". Also on The White Album, Ono contributed backing vocals on "Birthday", and one line of lead vocals on "The Continuing Story of Bungalow Bill." 
The latter marked the only occasion in a Beatles recording in which a woman sings lead vocals. The Plastic Ono Band Ono influenced Lennon to produce more "autobiographical" output and, after "The Ballad of John and Yoko", they decided it would be better to form their own band rather than put the material out under the Beatles name. In 1969, the Plastic Ono Band's first album, Live Peace in Toronto 1969, was recorded during the Toronto Rock and Roll Revival festival. This first incarnation of the group also consisted of guitarist Eric Clapton, bass player Klaus Voormann, and drummer Alan White. The first half of their performance consisted of rock standards. During the second half, Ono took to the microphone and performed an avant-garde set along with the band, finishing with music that consisted mainly of feedback, while she screamed and sang. First solo album and Fly Ono released her first solo album, Yoko Ono/Plastic Ono Band, in 1970 as a companion piece to Lennon's better-known John Lennon/Plastic Ono Band. The two albums also had companion covers: Ono's featured a photo of her leaning on Lennon, and Lennon's a photo of him leaning on Ono. Her album included raw, harsh vocals, which bore a similarity with sounds in nature (especially those made by animals) and free jazz techniques used by wind and brass players. Performers included Ornette Coleman, other renowned free jazz performers, and Ringo Starr. Some songs on the album consisted of wordless vocalizations, in a style that would influence Meredith Monk and other musical artists who have used screams and vocal noise in lieu of words. The album reached No. 182 on the US charts. When Lennon was invited to play with Frank Zappa at the Fillmore (then the Filmore West) on June 5, 1971, Ono joined them. Later that year, she released Fly, a double album. In it, she explored slightly more conventional psychedelic rock with tracks including "Midsummer New York" and "Mind Train", in addition to a number of Fluxus experiments. She also received minor airplay with the ballad "Mrs. Lennon". The track "Don't Worry, Kyoko (Mummy's Only Looking for Her Hand in the Snow)" was an ode to Ono's missing daughter, and featured Eric Clapton on guitar. That same year, while studying with Maharishi Mahesh Yogi in Majorca, Spain, Ono's ex-husband Anthony Cox accused Ono of abducting their daughter Kyoko from his hotel. Accusations flew between the two, as well as the matter of custody. Cox eventually moved away with Kyoko; Ono would not see her daughter until 1998. It was during this time that she wrote "Don't Worry Kyoko", which also appears on Lennon and Ono's album Live Peace in Toronto 1969, in addition to Fly. Kyoko is also referenced in the first line of "Happy Christmas (War Is Over)" when Yoko whispers "Happy Christmas, Kyoko", followed by Lennon whispering, "Happy Christmas, Julian." The song reached No. 4 in the UK, where its release was delayed until 1972, and has periodically reemerged on the UK Singles Chart. Originally a protest song about the Vietnam War, "Happy Xmas (War Is Over)" has since become a Christmas standard. That August the couple appeared together at a benefit in Madison Square Garden with Roberta Flack, Stevie Wonder, and Sha Na Na for mentally handicapped children organized by WABC-TV's Geraldo Rivera. Separation from Lennon and reunion After the Beatles disbanded in 1970, Ono and Lennon lived together in London and then in New York—the latter to escape tabloid racism towards Ono. 
Their relationship became strained because Lennon was facing the threat of deportation due to drug charges that had been filed against him in England, and also because of Ono's separation from her daughter. The couple separated in July 1973, with Ono pursuing her career and Lennon living between Los Angeles and New York with personal assistant May Pang; Ono had given her blessing to Lennon and Pang. By December 1974, Lennon and Pang considered buying a house together, and he refused to accept Ono's phone calls. The next month, Lennon agreed to meet Ono, who claimed to have found a cure for smoking. After the meeting, he failed to return home or call Pang. When Pang telephoned the next day, Ono told her that Lennon was unavailable, because he was exhausted after a hypnotherapy session. Two days later, Lennon reappeared at a joint dental appointment with Pang; he was stupefied and confused to such an extent that Pang believed he had been brainwashed. He told her that his separation from Ono was now over, though Ono would allow him to continue seeing her as his mistress. Ono and Lennon's son, Sean, was born on October 9, 1975, Lennon's 35th birthday. John did not help relations with his first son when he described Julian in 1980 as being part of the "ninety percent of the people on this planet [who resulted from an unplanned pregnancy]" and that "Sean is a planned child, and therein lies the difference." He said, "I don't love Julian any less as a child. He's still my son, whether he came from a bottle of whiskey or because they didn't have pills in those days." The couple maintained a low profile for the next five years. Sean has followed in his parents' footsteps with a musical career, performing solo work, working with Ono, and forming a band, the Ghost of a Saber Tooth Tiger. Lennon's murder, tributes, and memorials Lennon took a hiatus from music and became a househusband to care for Sean. He resumed his songwriting career shortly before his December 1980 murder, which Ono witnessed at close range. She has stated that the couple was thinking about going out to dinner after spending several hours in a recording studio, but were returning to their apartment instead, because Lennon wanted to see Sean before he was put to bed. Following the murder, Ono went into complete seclusion for an extended period. Ono funded the construction and maintenance of the Strawberry Fields memorial in Manhattan's Central Park, directly across from the Dakota Apartments, which was the scene of the murder and remains Ono's residence to this day. It was officially dedicated on October 9, 1985, which would have been his 45th birthday. In 1990, Ono collaborated with music consultant Jeff Pollack to honor what would have been Lennon's 50th birthday with a worldwide broadcast of "Imagine". Over 1,000 stations in over 50 countries participated in the simultaneous broadcast. Ono felt the timing was perfect, considering the escalating conflicts in the Middle East, Eastern Europe, and Germany. In 2000, she founded the John Lennon Museum in Saitama, Saitama, Japan. In March 2002, she was present with Cherie Blair at the unveiling of a 7-foot statue of Lennon, to mark the renaming of Liverpool airport to Liverpool John Lennon Airport. (Julian and Cynthia Lennon were present at the unveiling of the John Lennon Peace Monument next to ACC Liverpool in the same city eight years later.) 
On October 9, 2007, she dedicated a new memorial called the Imagine Peace Tower, located on the island of Viðey, 1 km outside the Skarfabakki harbour in Reykjavík, Iceland. Each year, between October 9 and December 8, it projects a vertical beam of light high into the sky. In 2009, Ono created an exhibit called "John Lennon: The New York City Years" for the NYC Rock and Roll Hall of Fame Annex. The exhibit used music, photographs, and personal items to depict Lennon's life in New York, and a portion of the cost of each ticket was donated to Spirit Foundation, a charitable foundation set up by Lennon and Ono. Every time Chapman has a parole hearing – which occurs every two years – Ono has come out strongly against his release from prison. As the widow of the victim, her opinion has a strong influence on the parole board. Ono is often associated with the Fluxus group, whose founder George Maciunas, her friend during the 1960s, admired and promoted her work enthusiastically, giving Ono her US show at his AG Gallery in 1961. Maciunas invited her formally join the Fluxus group, but she declined because she wanted to remain independent. She did however, collaborate with him, Charlotte Moorman, George Brecht, and the poet Jackson Mac Low, among others associated with the group. John Cage and Marcel Duchamp were significant influences on Ono's art. She learned of Cage at Sarah Lawrence and met him through his student Ichiyanagi Toshi in Cage's legendary experimental composition class at the New School for Social Research: She was thus introduced to more of Cage's unconventional neo-Dadaism first hand and his New York City protégés Allan Kaprow, Brecht, Mac Low, Al Hansen and the poet Dick Higgins. After Cage finished teaching at the New School in the summer of 1960, Ono was determined to rent a place to present her works along with the work of other avant-garde artists in the city. She eventually found a cheap loft in downtown Manhattan at 112 Chambers Street that she used as a studio and living space. Supporting herself through secretarial work and lessons in the traditional Japanese arts at the Japan Society, Ono allowed composer La Monte Young to organize concerts in the loft. Both began organizing a series of events there from December 1960 through June 1961, with people such as Marcel Duchamp and Peggy Guggenheim attending, and both Ono and Young claimed to have been the primary curator of these events, with Ono claiming to have been eventually pushed into a subsidiary role by Young. The Chambers Street series hosted some of Ono's earliest conceptual artwork, including Painting to Be Stepped On, which was a scrap of canvas on the floor that became a completed artwork upon the accrual of footprints. With that work, Ono suggested that a work of art no longer needed to be mounted on a wall and inaccessible. She showed this work and other instructional work again at Macunias's AG Gallery in July 1961. Cut Piece, 1964 Ono was a pioneer of conceptual art and performance art. A seminal performance work is Cut Piece, first performed in 1964 at the Sogetsu Art Center in Tokyo. The piece consisted of Ono, dressed in her best suit, kneeling on a stage with a pair of scissors in front of her. She invited and then instructed audience members to join her on stage and cut pieces of her clothing off. Confronting issues of gender, class and cultural identity, Ono sat silently until the piece concluded at her discretion. 
The piece was subsequently performed at New York's Carnegie Hall in 1965 and London's Africa Center in 1966. Of the piece, John Hendricks in the catalogue to Ono's Japan Society retrospective wrote: "[Cut Piece] unveils the interpersonal alienation that characterizes social relationships between subjects, dismantling the disinterested Kantian aesthetic model..... It demonstrates the reciprocity between artists, objects, and viewers and the responsibility beholders have to the reception and preservation of art." Other performers of the piece have included Charlotte Moorman and John Hendricks. Ono reprised the piece in Paris in 2003, in the low post-9/11 period between the US and France, saying she hoped to show that this is "a time where we need to trust each other." In 2013, the Canadian singer, Peaches reprised it at the multi-day Meltdown festival at the Southbank Centre in London, which Ono curated. Grapefruit book, 1964 Ono's small book titled Grapefruit is another seminal piece of conceptual art. First published in 1964, the book reads as a set of instructions through which the work of art is completed-either literally or in the imagination of the viewer participant. One example is "Hide and Seek Piece: Hide until everybody goes home. Hide until everybody forgets about you. Hide until everybody dies." Grapefruit has been published several times, most widely distributed by Simon & Schuster in 1971, who reprinted it again in 2000. David Bourdon, art critic for The Village Voice and Vogue, called Grapefruit "one of the monuments of conceptual art of the early 1960's." He noted that her conceptual approach was made more acceptable when white male artists like Joseph Kosuth and Lawrence Weiner came in and "did virtually the same things" she did, and that her take also has a poetic and lyrical side that sets it apart from the work of other conceptual artists. Ono would enact many of the book's scenarios as performance pieces throughout her career, which formed the basis for her art exhibitions, including the highly publicized retrospective exhibition, This Is Not Here in 1971 at the Everson Museum in Syracuse, New York, that was nearly closed when it was besieged by excited Beatles fans, who broke several of the art pieces and flooded the toilets. It was her last major exhibition until 1989's Yoko Ono: Objects, Films retrospective at the Whitney. Experimental films, 1964–72 Ono was also an experimental filmmaker who made 16 short films between 1964 and 1972, gaining particular renown for a 1966 Fluxus film called simply No. 4, often referred to as Bottoms. The five-and-a-half-minute film consists of a series of close-ups of human buttocks walking on a treadmill. The screen is divided into four almost equal sections by the elements of the gluteal cleft and the horizontal gluteal crease. The soundtrack consists of interviews with those who are being filmed, as well as those considering joining the project. In 1996, the watch manufacturing company Swatch produced a limited edition watch that commemorated this film. Wish Tree, 1981–present Another example of Ono's participatory art was her Wish Tree project, in which a tree native to the installation site is installed. Her 1996 Wish Piece had the following instructions: - Make a wish - Write it down on a piece of paper - Fold it and tie it around a branch of a Wish Tree - Ask your friends to do the same - Keep wishing - Until the branches are covered with wishes. 
Her Wish Tree installation in the Sculpture Garden of the Museum of Modern Art, New York, established in July 2010, has attracted contributions from all over the world. Other installation locations include London; St. Louis; Washington, DC; San Francisco; the Stanford University campus in Palo Alto, California; Japan; Venice; and Dublin.

In 2014 Ono's Imagine Peace exhibit opened at the Bob Rauschenberg Gallery at Florida SouthWestern State College in Fort Myers, FL. Ono installed a billboard on U.S. Route 41 in Fort Myers to promote the show and peace. In October 2016, Ono unveiled her first permanent art installation in the United States, which is located in Jackson Park, Chicago, and promotes peace. Ono was inspired during a visit to the Garden of the Phoenix in 2013 and feels a connection to the city of Chicago.

Recognition and retrospectives

John Lennon once described her as "the world's most famous unknown artist: everybody knows her name, but nobody knows what she does." Her circle of friends in the New York art world has included Kate Millett, Nam June Paik, Dan Richter, Jonas Mekas, Merce Cunningham, Judith Malina, Erica Abeel, Fred DeAsis, Peggy Guggenheim, Betty Rollin, Shusaku Arakawa, Adrian Morris, Stefan Wolpe, Keith Haring, and Andy Warhol (she was one of the speakers at Warhol's 1987 funeral), as well as George Maciunas and La Monte Young. In addition to Mekas, Maciunas, Young, and Warhol, she has also collaborated with DeAsis, Yvonne Rainer, and Zbigniew Rybczyński.

In 1989, the Whitney Museum held a retrospective of her work, Yoko Ono: Objects, Films, marking Ono's reentry into the New York art world after a hiatus. At the suggestion of Ono's live-in companion at the time, interior decorator Sam Havadtoy, she recast her old pieces in bronze after some initial reluctance. "I realized that for something to move me so much that I would cry, there's something there. There seemed like a shimmering air in the 60s when I made these pieces, and now the air is bronzified. Now it's the 80s, and bronze is very 80s in a way - solidity, commodity, all of that. For someone who went through the 60s revolution, there has of course been an incredible change... I call the pieces petrified bronze. That freedom, all the hope and wishes are in some ways petrified."

Over a decade later, in 2001, Y E S YOKO ONO, a 40-year retrospective of Ono's work, received the International Association of Art Critics USA Award for Best Museum Show Originating in New York City, considered one of the highest accolades in the museum profession. YES refers to the title of a 1966 sculptural work by Yoko Ono, shown at Indica Gallery, London: viewers climb a ladder to read the word "yes", printed on a small canvas suspended from the ceiling. The exhibition's curator Alexandra Munroe wrote that "John Lennon got it, on his first meeting with Yoko: when he climbed the ladder to peer at the framed paper on the ceiling, he encountered the tiny word YES. 'So it was positive. I felt relieved.'" The exhibition traveled to 13 museums in the U.S., Canada, Japan, and Korea from 2000 through 2003. In 2001, she also received an honorary Doctorate of Laws from Liverpool University and, in 2002, was presented with the honorary degree of Doctor of Fine Arts from Bard College, as well as the Skowhegan Medal for work in assorted media. The next year, she was awarded the fifth MOCA Award to Distinguished Women in the Arts from the Museum of Contemporary Art Los Angeles.
In 2005, she received a lifetime achievement award from the Japan Society of New York, which had hosted Yes Yoko Ono and where she had worked in the late 1950s and early 1960s. In 2008, she showed a large retrospective exhibition, Between The Sky and My Head, at the Kunsthalle Bielefeld, Bielefeld, Germany, and the Baltic Centre for Contemporary Art in Gateshead, England. The following year, she showed a selection of new and old work as part of her show "Anton's Memory" in Venice, Italy. She also received a Golden Lion Award for lifetime achievement from the Venice Biennale in 2009. In 2012, Ono held a major exhibition of her work To The Light at the Serpentine Galleries, London. She was also the winner of the 2012 Oskar Kokoschka Prize, Austria's highest award for applied contemporary art. In February 2013, to coincide with her 80th birthday, the largest retrospective of her work, Half-a-Wind Show, opened at the Schirn Kunsthalle Frankfurt and travelled to Denmark's Louisiana Museum of Modern Art, Austria's Kunsthalle Krems, and Spain's Guggenheim Museum Bilbao. In 2014 she contributed several artworks to the triennial art festival in Folkestone, England. In 2015 MoMA held a retrospective exhibition of her early work, "Yoko Ono: One Woman Show, 1960–1971".

Ono studied piano from the age of 4 to 12 or 13. She attended kabuki performances with her mother, who was trained in shamisen, koto, otsuzumi, kotsuzumi, and nagauta, and could read Japanese musical scores. At 14 Yoko took up vocal training in lieder-singing. At Sarah Lawrence, she studied poetry with Alastair Reid, English literature with Kathryn Mansell, and music composition with the Viennese-trained André Singer. Of this time Ono has said that her heroes were the twelve-tone composers Arnold Schoenberg and Alban Berg. She said, "I was just fascinated with what they could do. I wrote some twelve-tone songs, then my music went into [an] area that my teacher felt was really a bit off track, and... he said, 'Well, look, there are people who are doing things like what you do and they're called avant-garde.'" Singer introduced her to the work of Edgar Varèse, John Cage, and Henry Cowell.

She left college and moved to New York in 1957, supporting herself through secretarial work and lessons in the traditional Japanese arts at the Japan Society. She met Cage through Ichiyanagi Toshi in Cage's legendary composition class at the New School for Social Research, and in the summer of 1960, she found a cheap loft in downtown Manhattan at 112 Chambers Street and allowed composer La Monte Young to organize concerts in the loft with her, with people like Marcel Duchamp and Peggy Guggenheim attending. Ono only presented work once during the series.

In 1961, years before meeting Lennon, Ono had her first major public performance in a concert at the 258-seat Carnegie Recital Hall (smaller than the "Main Hall"). This concert featured radical experimental music and performances. She had a second engagement at the Carnegie Recital Hall in 1965, in which she debuted Cut Piece. She premiered The Fog Machine during her Concert of Music for the Mind at the Bluecoat Society of Arts in Liverpool, England in 1967.

In early 1980, John Lennon heard Lene Lovich and the B-52's' "Rock Lobster" on vacation in Bermuda. The latter reminded him of Ono's musical sound and he took this as an indication that she had reached the mainstream (the band in fact had been influenced by Ono).
In addition to her collaborations with experimental artists including John Cage and jazz legend Ornette Coleman, many other musicians, particularly those of the new wave movement, have paid tribute to Ono (both as an artist in her own right, and as a muse and iconic figure). For example, Elvis Costello recorded a version of Ono's song "Walking on Thin Ice", the B-52's (who drew from her early recordings) covered "Don't Worry, Kyoko (Mummy's Only Looking for Her Hand in the Snow)" (shortening the title to "Don't Worry"), and Sonic Youth included a performance of Ono's early conceptual "Voice Piece for Soprano" on their experimental album SYR4: Goodbye 20th Century. On December 8, 1980, Lennon and Ono were in the studio working on Ono's song "Walking on Thin Ice". When they returned to The Dakota, their home in New York City, Lennon was shot dead by Mark David Chapman, a deranged fan who had been stalking Lennon for two months. "Walking on Thin Ice (For John)" was released as a single less than a month later, and became Ono's first chart success, peaking at No. 58 and gaining major underground airplay. In 1981, she released the album Season of Glass, which featured the striking cover photo of Lennon's bloody spectacles next to a half-filled glass of water, with a window overlooking Central Park in the background. This photograph sold at an auction in London in April 2002 for about $13,000. In the liner notes to Season of Glass, Ono explained that the album was not dedicated to Lennon because "he would have been offended—he was one of us." The album received highly favorable reviews and reflected the public's mood after Lennon's assassination. In 1982, she released It's Alright. The cover featured Ono in her famous wrap-around sunglasses, looking towards the sun, while on the back the ghost of Lennon looks over her and their son. The album scored minor chart success and airplay with the single "Never Say Goodbye". In 1984, a tribute album titled Every Man Has a Woman was released, featuring a selection of Ono songs performed by artists such as Elvis Costello, Roberta Flack, Eddie Money, Rosanne Cash, and Harry Nilsson. It was one of Lennon's projects that he never got to finish. Later that year, Ono and Lennon's final album, Milk and Honey, was released as an unfinished demo. It peaked at No. 3 in the UK and No. 11 in the U.S., going gold in both countries as well as in Canada. Ono's final album of the 1980s was Starpeace, a concept album that she intended as an antidote to Ronald Reagan's "Star Wars" missile defense system. On the cover, a warm, smiling Ono holds the Earth in the palm of her hand. Starpeace became Ono's most successful non-Lennon effort. The single "Hell in Paradise" was a hit, reaching No. 16 on the US dance charts and No. 26 on the Billboard Hot 100, and the video, directed by Zbigniew Rybczyński received major airplay on MTV and won "Most Innovative Video" at Billboard Music Video Awards in 1986. In 1986, Ono set out on a goodwill world tour for Starpeace, primarily visiting Eastern European countries. Ono went on a musical hiatus until signing with Rykodisc in 1992 to release the comprehensive six-disc box set Onobox. It included remastered highlights from all of Ono's solo albums, as well as unreleased material from the 1974 "lost weekend" sessions. She also released a one-disc sampler of highlights from Onobox, simply titled Walking on Thin Ice. 
That year, she sat down for an extensive interview with music journalist Mark Kemp for a cover story in the alternative music magazine Option. The story took a revisionist look at Ono's music for a new generation of fans more accepting of her role as a pioneer in the merger of pop and the avant-garde. In 1994, Ono produced her own off-Broadway musical entitled New York Rock, featuring Broadway renditions of her songs. In 1995, she released Rising, a collaboration with her son Sean and his then-band, Ima. Rising spawned a world tour that traveled through Europe, Japan, and the United States. The following year, she collaborated with various alternative rock musicians for an EP entitled Rising Mixes. Guest remixers of Rising material included Cibo Matto, Ween, Tricky, and Thurston Moore. In 1997, Rykodisc reissued all her solo albums on CD, from Yoko Ono/Plastic Ono Band through Starpeace. Ono and her engineer Rob Stevens personally remastered the audio, and various bonus tracks were added, including outtakes, demos, and live cuts. 2001, saw the release of Ono's feminist concept album Blueprint for a Sunrise. In 2002, Ono joined The B-52's in New York for their 25th anniversary concerts, coming out for the encore and performing "Rock Lobster" with the band. Starting the next year, some DJs remixed other Ono songs for dance clubs. For the remix project, she dropped her first name and became known simply as "ONO", in response to the "Oh, no!" jokes that dogged her throughout her career. Ono had great success with new versions of "Walking on Thin Ice", remixed by top DJs and dance artists including Pet Shop Boys, Orange Factory, Peter Rauhofer, and Danny Tenaglia. In April 2003, Ono's Walking on Thin Ice (Remixes) was rated number 1 on Billboard's Dance/Club Play chart, gaining Ono her first no. 1 hit. She returned to no. 1 on the same chart in November 2004 with "Everyman ... Everywoman ...", a reworking of her song "Every Man Has a Woman Who Loves Him", in January 2008, with "No No No", and in August 2008, with "Give Peace a Chance". In June 2009, at the age of 76, Ono scored her fifth no. 1 hit on the Dance/Club Play chart with "I'm Not Getting Enough". Ono released the album Yes, I'm a Witch in 2007, a collection of remixes and covers from her back catalog by various artists including The Flaming Lips, Cat Power, Anohni, DJ Spooky, Porcupine Tree, and Peaches, released in February 2007, along with a special edition of Yoko Ono/Plastic Ono Band. Yes I'm a Witch has been critically well received. A similar compilation of Ono dance remixes entitled Open Your Box was also released in April of that year. In 2009, Ono recorded Between My Head and the Sky, her first album to be released as "Yoko Ono/Plastic Ono Band" since 1973's Feeling the Space. The all-new Plastic Ono Band lineup included Sean Lennon, Cornelius, and Yuka Honda. On February 16, 2010, Sean organized a concert at the Brooklyn Academy of Music called "We Are Plastic Ono Band", at which Yoko performed her music with Sean, Clapton, Klaus Voormann, and Jim Keltner for the first time since the 1970s. Guests including Bette Midler, Paul Simon and his son Harper, and principal members of Sonic Youth and the Scissor Sisters interpreted her songs in their own styles. In April 2010, RCRD LBL made available free downloads of Junior Boys' mix of "I'm Not Getting Enough", a single originally released 10 years prior on Blueprint for a Sunrise. 
That song and "Wouldnit (I'm a Star)", released September 14, made it to Billboard's end of the year list of favorite Dance/Club songs at No. 23 and No. 50 respectively. The next year, "Move on Fast" became her sixth consecutive number-one hit on the Billboard Hot Dance Club Songs chart and her eighth number-one hit overall. In January 2012, a Ralphi Rosario mix of her 1995 song "Talking to the Universe" became her seventh consecutive No. 1 hit on the Billboard Hot Dance Club Songs chart, and both songs charted again as favorites on Billboard's year-end lists for Dance/Club songs for 2011. In 2013, She and her band released the LP Take Me to the Land of Hell which featured numerous guests including Yuka Honda, Cornelius, Hirotaka "Shimmy" Shimizu, mi-gu's Yuko Araki, Wilco's Nels Cline, tUnE-yArDs, Questlove, Lenny Kravitz, and Ad-Rock and Mike D of the Beastie Boys. Her online video for "Bad Dancer" released in November 2013, which featured some of these guests, was well-liked by the press. By the end of the year she had become one of three artists with two songs in the Top 20 Dance/Club and had two consecutive number 1 hits on Billboard's Hot Dance Club Play Charts. On the strength of the singles "Hold Me" (Featuring Dave Audé) and "Walking on Thin Ice", the then-80-year-old beat Katy Perry, Robin Thicke, and her friend Lady Gaga. In 2014, "Angel" was Ono's twelfth number one on the US Dance chart. During her career, Ono also has collaborated with Earl Slick, David Tudor, Fred DeAsis, and Richard Maxfield. As a dance music artist, Ono has worked with re-mixers/producers including Basement Jaxx, Bill Kates, Keiji Haino, Nick Vernier Band, Billy Martin, DJ Spooky, Apples in Stereo, Damien Price, DJ Chernobyl, Bimbo Jones, DJ Dan, Craig Armstrong, Jorge Artajo, Shuji Nabara, and Konrad Behr. In 2012, the album Yokokimthurston was released featuring a collaboration with Sonic Youth's Thurston Moore and Kim Gordon. Notable also as the first collaboration between Moore and Gordon after their divorce, it was characterized by AllMusic as "focused and risk-taking" and "above the best" of the couple's experimental music, with Ono's voice described as "one-of-a-kind." BMI Foundation's John Lennon Scholarships In 1997, together with the BMI Foundation, Yoko Ono established an annual music competition program for songwriters of contemporary musical genres, to honor John Lennon's memory and his large creative legacy. Over $350,000 have been given through BMI Foundation's John Lennon Scholarships to talented young musicians in the United States, making it one of the most respected awards for emerging songwriters. Ono was frequently criticized by the press and the public for many years. She was blamed for the breakup of the Beatles and repeatedly criticized for her influence over Lennon and his music. Her experimental art was also not popularly accepted. The English press were particularly negative, and prompted the couple's move to the US. As late as December 1999, NME was calling her a "no-talent charlatan", and in October 2013, the mother of tennis pro Andy Murray took over a Twitter handle entitled Destroying Yoko Ono on Twitter. Her name still connotes the figure of the evil female interloper to the mainstream. Courtney Love, Kurt Cobain's widow, has endlessly been compared to Ono for her supposed bothersome role in Nirvana's businesses and as a scapegoat for Cobain's suicide. 
In 2007, when American singer Jessica Simpson was dating Dallas Cowboys quarterback Tony Romo, the Simpson-Romo relationship was blamed for Romo's poor performances. In response, some Cowboys' fans gave her the moniker "Yoko Romo". In March 2015, Perrie Edwards, member of English girl group Little Mix, was compared to Yoko Ono and criticised for being the supposed reason for Zayn Malik's departure from the British boy band One Direction, creating tension within the group and causing widespread controversy. One month after the 9/11 attacks, she organized the concert "Come Together: A Night for John Lennon's Words and Music" at Radio City Music Hall. Hosted by the actor Kevin Spacey and featuring Lou Reed, Cyndi Lauper and Nelly Furtado, it raised money for September 11 relief efforts and aired on TNT and the WB. During the Liverpool Biennial in 2004, Ono flooded the city with two images on banners, bags, stickers, postcards, flyers, posters and badges: one of a woman's naked breast, the other of the same model's vulva. (During her stay in Lennon's city of birth, she said she was "astounded" by the city's renaissance.) The piece, titled My Mummy Was Beautiful, was dedicated to Lennon's mother, Julia, who had died when he was a teenager. According to Ono, the work was meant to be innocent, not shocking; she was attempting to replicate the experience of a baby looking up at its mother's body, those parts of the mother's body being a child's introduction to humanity. Ono performed at the opening ceremony for the 2006 Winter Olympic Games in Turin, Italy, wearing white, like many of the other performers during the ceremony, to symbolize the snow of winter. She read a free verse poem calling for world peace as an introduction to Peter Gabriel's performance of "Imagine". On December 13, 2006, one of Ono's bodyguards was arrested after he was allegedly taped trying to extort $2 million from her, threatening to release private conversations and photographs. His bail was revoked, and he pleaded not guilty to two counts of attempted grand larceny. In February 16, 2007 a deal was reached where extortion charges were dropped, and he pleaded guilty to attempted grand larceny in the third degree, a felony, and sentenced to the 60 days he had already spent in jail. After reading an unapologetic statement, he was released to immigration officials because he had also been found guilty of overstaying his business visa. On June 26, 2007, Ono appeared on Larry King Live along with McCartney, Ringo Starr, and Olivia Harrison. She headlined the Pitchfork Music Festival in Chicago on July 14, 2007, performing a full set that mixed music and performance art. She sang "Mulberry," a song about her time in the countryside after the Japanese collapse in World War II for only the third time ever, with Thurston Moore: She had previously performed the song with John and with Sean. On October 9 of that year, the Imagine Peace Tower on Viðey Island in Iceland, dedicated to peace and to Lennon, was turned on with her, Sean, Ringo, George Harrison's widow Olivia in attendance. Ono returned to Liverpool for the 2008 Liverpool Biennial, where she unveiled Sky Ladders in the ruins of Church of St Luke (which was largely destroyed during World War II and now stands roofless as a memorial to those killed in the Liverpool Blitz). 
Two years later, on March 31, 2009, she went to the inauguration of the exhibition "Imagine: The Peace Ballad of John & Yoko" to mark the 40th anniversary of the Lennon-Ono Bed-In at the Queen Elizabeth Hotel in Montreal, Canada, from May 26 to June 2, 1969. (The hotel has been doing steady business with the room they stayed in for over 40 years.) That year she became a grandmother, when Emi was born to Kyoko. In May 2009, she designed a T-shirt for the second Fashion Against AIDS campaign and collection of HIV/AIDS awareness, NGO Designers Against AIDS, and H&M, with the statement "Imagine Peace" depicted in 21 languages. Ono appeared onstage at Microsoft's June 1, 2009, E3 Expo press conference with Olivia Harrison, Paul McCartney, and Ringo Starr to promote the Beatles: Rock Band video game, which was universally praised by critics. Ono appeared on the Basement Jaxx album Scars, featuring on the single "Day of the Sunflowers (We March On)". On February 16, 2010, Ono revived an early Plastic Ono Band lineup with Eric Clapton, and special guests including Paul Simon and Bette Midler. On April 1 of that year, she was named the first "Global Autism Ambassador" by the Autism Speaks organization. She had created an artwork the year before for autism awareness and allowed it to be auctioned off in 67 parts to benefit the organization. Ono appeared with Ringo Starr on July 7 at New York's Radio City Music Hall in celebration of Starr's 70th birthday, performing "With a Little Help from My Friends" and "Give Peace a Chance". On September 16, she and Sean attended the opening of Julian Lennon's photo exhibition at the Morrison Hotel in New York City, appearing for the first time photos with Cynthia and Julian. She also promoted his work on her website. On October 2, Ono and the Plastic Ono Band performed at the Orpheum Theatre in Los Angeles, with special guest Lady Gaga, whom she deeply admires. On February 18, 2011, her 78th birthday, Ono took out a full page advert in the UK free newspaper Metro for "Imagine Peace 2011". It took the form of an open letter, inviting people to think of, and wish for, peace. With son Sean, she held a benefit concert to aid in the relief efforts for earthquake and tsunami-ravaged Japan on March 27 in New York City. The effort raised a total of $33,000. In July 2011, she visited Japan to support earthquake and tsunami victims and tourism to the country. During her visit, Ono gave a lecture and performance entitled "The Road of Hope" at Tokyo's Mori Art Museum, during which she painted a large calligraphy piece entitled "Dream" to help raise funds for construction of the Rainbow House, an institution for the orphans of the Great East Japan earthquake. She also collected the 8th Hiroshima Art Prize for her contributions to art and for peace, that she was awarded the year prior. In January 2012, a Ralphi Rosario mix of her 1995 song "Talking to the Universe" became her seventh consecutive No. 1 hit on the Billboard Hot Dance Club Songs chart. In March of the same year, she was awarded the 20,000-euro ($26,400) Oskar Kokoschka Prize in Austria. From June 19 to September 9, her work To the Light was exhibited at the Serpentine Gallery in London. It was held in conjunction with the London 2012 Festival, a 12-week UK-wide celebration featuring internationally renowned artists from Midsummer's Day (June 21) to the final day of the Paralympic Games on September 9. On June 29, 2012, Ono received a lifetime achievement award at the Dublin Biennial. 
During this (her second) trip to Ireland (the first was with John before they married), she visited the crypt of Irish leader Daniel O'Connell at Glasnevin Cemetery and Dún Laoghaire, from where the Irish departed for England to escape the famine. In February 2013, Ono accepted the Rainer Hildebrandt Medal at Berlin's Checkpoint Charlie Museum, awarded to her and Lennon for their lifetime of work for peace and human rights. The next month, she tweeted an anti-gun message with the Season of Glass image of Lennon's bloodied glasses on what would have been her and Lennon's 44th anniversary, noting that more than 1 million people had been killed by guns since Lennon's death in 1980. She was also given a Congressional citation from the Philippines for her monetary aid to the victims of typhoon Pablo. She also donated to disaster relief efforts after typhoon Ondoy in 2009, and she assists Filipino schoolchildren. In June 2013, she curated the Meltdown festival in London, where she played two concerts: one with the Plastic Ono Band, and a second on backing vocals during Siouxsie Sioux's rendition of "Walking on Thin Ice" at the Double Fantasy show. In July, OR Books published Ono's sequel to 1964's Grapefruit, another book of instruction-based "action poems", this time entitled Acorn. She was made an honorary citizen of Reykjavík, Iceland, on October 9, 2013. That same year, she became an honorary patron of Alder Hey Charity.

Ono has been an activist for peace and human rights since the 1960s. After their wedding, she and Lennon held a "Bed-In for Peace" in their honeymoon suite at the Amsterdam Hilton Hotel in March 1969, where the pair of newlyweds in pajamas invited visitors and members of the press, eager to talk about and promote world peace. Another Bed-In two months later at the Queen Elizabeth Fairmont in Montreal resulted in the recording of their first single, "Give Peace A Chance", a top-20 hit for the newly christened Plastic Ono Band. Other performances and demonstrations with John included "bagism", iterations of the Bag Pieces she had introduced in the early 1960s, which encouraged a disregard for physical appearance in judging others. In December 1969, the two continued to spread their message of peace with billboards in 12 major world cities reading "WAR IS OVER! If You Want It - Happy Christmas from John & Yoko." In the 1970s, Ono and Lennon became close to many radical, counterculture leaders, including Bobby Seale, Abbie Hoffman, Jerry Rubin, Michael X, John Sinclair (for whose rally in Michigan they flew to sing Lennon's song "John Sinclair", a performance that effectively released the poet from prison), Angela Davis, and street musician David Peel. Friend and Sexual Politics author Kate Millett has said Ono inspired her activism. Ono and Lennon appeared on The Mike Douglas Show, taking over hosting duties for a week. Ono spoke at length about the evils of racism and sexism. She remained outspoken in her support of feminism, and openly bitter about the racism she had experienced from rock fans, especially in the UK. Her reception within the UK media was not much better: an Esquire article of the period, for example, was titled "John Rennon's Excrusive Gloupie" and featured an unflattering David Levine cartoon. In 1999, after the Columbine High School massacre, Ono paid for billboards to be put up in New York City and Los Angeles that bore the image of Lennon's blood-splashed spectacles.
Early in 2002 she paid about £150,000 ($213,375) for a billboard in Piccadilly Circus with a line from Lennon's "Imagine": "Imagine all the people living life in peace." Later the same year, she inaugurated a peace award, the LennonOno Grant for Peace, giving $50,000 (£31,900) in prize money, originally to artists living "in regions of conflict". The award is given out every two years in conjunction with the lighting of the Imagine Peace Tower, and was first given to Israeli and Palestinian artists. Its program has since expanded to include writers such as Michael Pollan and Alice Walker, activists such as Vandana Shiva and Pussy Riot, organizations such as New York's Center for Constitutional Rights, and even an entire country (Iceland). On Valentine's Day 2003, on the eve of the US and UK invasion of Iraq, Ono heard about a couple, Andrew and Christine Gale, who were holding a love-in protest in their tiny bedroom in Addingham, West Yorkshire. She phoned them and said, "It's good to speak to you. We're supporting you. We're all sisters together." The couple said that songs like "Give Peace a Chance" and "Imagine" inspired their protest. In 2004, Ono remade her song "Everyman... Everywoman..." to support same-sex marriage, releasing remixes that included "Every Man Has a Man Who Loves Him" and "Every Woman Has a Woman Who Loves Her". In August 2011, she made Bed Peace, the documentary film about the Bed-Ins, available for free on YouTube and as part of her website Imagine Peace. In January 2013, the 79-year-old Ono, along with Sean Lennon and Susan Sarandon, took to rural Pennsylvania in a bus under the banner of Artists Against Fracking, the group she and Sean had created with Mark Ruffalo in August 2012 to protest against hydraulic fracturing. Other group members include Lady Gaga and Alec Baldwin. Ono promotes her art and shares inspirational messages and images through a robust and active Twitter, Instagram, and Facebook presence. In April 2014 her Twitter followers reached 4.69 million, while her Instagram followers exceeded 99,000. Her tweets are short instructional poems, comments on media and politics, and notes about performances.

Relationship with the Beatles

According to journalist Barry Miles, after Lennon and Ono had been injured in a car accident in June 1969, partway through recording Abbey Road, a bed was installed in the studio with a microphone so that Ono could make artistic comments about the album. Miles thought Ono's continual presence in the studio during the latter part of the Beatles' career put strain on Lennon's relationship with the other band members. George Harrison verbally assaulted her after she took one of his chocolate digestive biscuits without asking. The English press dubbed her "the woman who broke up the Beatles", but Ono has stated that the Beatles broke up themselves without any direct involvement from her, adding "I don't think I could have tried even to break them up." In an interview with Dick Cavett, Lennon explicitly denied that Ono broke up the Beatles, and even Harrison said in an interview with Cavett that the Beatles had problems long before Ono came on the scene. While the Beatles were together, every song written by Lennon or McCartney was credited as Lennon–McCartney regardless of whether the song was a collaboration or written solely by one of the two (except for those appearing on their first album, Please Please Me, which originally credited the songs to McCartney–Lennon).
In 1976, McCartney released a live album called Wings over America, which credited the five Beatles tracks as P. McCartney–J. Lennon compositions, but neither Lennon nor Ono objected. After Lennon's death, however, when McCartney again attempted to change the order to McCartney–Lennon for songs that were solely or predominantly written by him, such as "Yesterday", Ono would not allow it, saying she felt this broke an agreement the two had made while Lennon was still alive; the surviving Beatle argued that such an agreement never existed. A spokesman for Ono said McCartney was making "an attempt to rewrite history". In a Rolling Stone interview in 1987, Ono pointed out McCartney's place in the process of the disintegration of the band. On the 1998 John Lennon anthology Lennon Legend, the composer credit of "Give Peace a Chance" was changed to "John Lennon" from its original composing credit of "Lennon–McCartney". Although the song was written by Lennon during his tenure with the Beatles, it was both written and recorded without the help of the band, and released as Lennon's first independent single under the "Plastic Ono Band" moniker. Lennon subsequently expressed regret that he had not instead given the co-writing credit to Ono, who actually helped him write the song. In 2002, McCartney released another live album, Back in the U.S. Live 2002, on which the 19 Beatles songs included were described as "composed by Paul McCartney and John Lennon", reigniting the debate over credits with Ono. Her spokesperson Elliott Mintz called it "an attempt to rewrite history", but Ono did not sue. In 1995, after the Beatles released Lennon's "Free as a Bird" and "Real Love", with demos provided by Ono, McCartney and his family collaborated with her and Sean to create the song "Hiroshima Sky is Always Blue", which commemorates the 50th anniversary of the atomic bombing of that Japanese city. Of Ono, McCartney stated: "I thought she was a cold woman. I think that's wrong... she's just the opposite... I think she's just more determined than most people to be herself." Two years later, however, Ono publicly compared Lennon to Wolfgang Amadeus Mozart, while McCartney, she said, more closely resembled his less-talented rival Antonio Salieri. This remark infuriated McCartney's wife Linda, who was dying from breast cancer at the time, and when Linda died less than a year later, McCartney did not invite Ono to his wife's memorial service in Manhattan. Accepting an award at the 2005 Q Awards, Ono mentioned that Lennon had once felt insecure about his songwriting. She had responded, "You're a good songwriter. It's not June with spoon that you write. You're a good singer, and most musicians are probably a little bit nervous about covering your songs." In an October 2010 interview, Ono spoke about Lennon's "lost weekend" and her subsequent reconciliation with him. She credited McCartney with helping save her marriage to John: "I want the world to know that it was a very touching thing that [Paul] did for John." While visiting with Ono in March 1974, McCartney, on leaving, asked, "[W]hat will make you come back to John?" McCartney subsequently passed her response to Lennon while visiting him in Los Angeles. "John often said he didn't understand why Paul did this for us, but he did." In 2012, McCartney revealed that he did not blame Ono for the breakup of the Beatles and credited Ono with inspiring much of Lennon's post-Beatles work.
Relationship with Julian Lennon

Ono had a difficult relationship with her stepson, Lennon's son Julian, which has improved over the years. He has expressed disappointment at her handling of Lennon's estate, and at the difference between his upbringing and Sean's, adding, "when Dad gave up music for a couple of years to be with Sean, why couldn't he do that with me?" More egregiously, however, Julian was left out of his father's will, and he battled Ono in court for years, settling in 1996 for an unspecified amount that the papers reported was believed to be in the region of £20 million, a figure Julian has denied. He has admitted that he is his "mother's boy", which Ono has cited as the reason why she was never able to get close to him: "Julian and I tried to be friends. Of course, if he's too friendly with me, then I think that it hurts his other relatives. He was very loyal to his mother. That was the first thing that was in his mind." Nevertheless, she and Sean attended the opening of Julian's photo exhibition at the Morrison Hotel in New York City in 2010, appearing for the first time for photos with Cynthia and Julian. She also promoted the exhibition on her website, and Julian and Sean are close.

In popular culture

Canadian rock band Barenaked Ladies' debut single was "Be My Yoko Ono", first released in 1990 and later appearing on their 1992 album Gordon. The lyrics are "a shy entreaty to a potential girlfriend, caged in terms that self-deflatingly compare himself to one of pop music's foremost geniuses." It also has a "sarcastic imitation of Yoko Ono's unique vocal style in the bridge". In 2000, American folk singer Dar Williams recorded a song titled "I Won't Be Your Yoko Ono." Bryan Wawzenek of the website Ultimate Classic Rock described the song as "us[ing] John and Yoko as a starting point for exploring love, and particularly, love between artists." The British band Elbow mentioned Ono in their song "New York Morning" from their 2014 album The Take Off and Landing of Everything ("Oh, my giddy aunt, New York can talk / It's the modern Rome and folk are nice to Yoko"). In response, Ono posted an open letter to the band on her website, thanking them and reflecting on her and Lennon's relationship with the city.

Discography

|Year||Album||US chart peak||Notes|
|1970||Yoko Ono/Plastic Ono Band||182|
|1972||Approximately Infinite Universe||193|
|1973||Feeling the Space||-|
|1981||Season of Glass||49|
|2001||Blueprint for a Sunrise||-|
|2009||Between My Head and the Sky||-|
|2013||Take Me to the Land of Hell||-|

Albums with John Lennon
|Year||Album||US chart peak|
|1968||Unfinished Music No.1: Two Virgins||124|
|1969||Unfinished Music No.2: Life with the Lions||174|
|Live Peace in Toronto 1969||10|
|1972||Some Time in New York City||48|
|1984||Milk and Honey||11|

Compilations, soundtrack albums and EPs
- Onobox (1992)
- Walking on Thin Ice (1992)
- New York Rock (1994) (Original cast recording)
- A Blueprint for the Sunrise (2000) (3-track EP included with YES YOKO ONO book)
- Don't Stop Me! EP (2009)
- Rising Mixes (1996)
- Yes, I'm a Witch (2007)
- Open Your Box (2007)
- Onomix (2012)
- Yes, I'm a Witch Too (2016)
- Every Man Has a Woman (1984)
- Mrs. Lennon (2010)

Singles
|1971||"Touch Me"/"Open Your Box"||–||–||Non-album single|
|"Mrs. Lennon"/"Midsummer New York"||–||–||Fly|
|"Mind Train"/"Listen, the Snow Is Falling"||–||–||Fly (b-side non-album single)|
|1972||"Now or Never"/"Move on Fast"||–||–||Approximately Infinite Universe|
|1973||"Death of Samantha"/"Yang Yang"||–||–|
|"Josejoi Banzai (Part 1)"/"Josejoi Banzai (Part 2)" (Japan-only)||–||–||Non-album single|
|"Woman Power"/"Men, Men, Men"||–||–||Feeling the Space|
|"Run, Run, Run"/"Men, Men, Men"||–||–|
|1974||"Yume O Motou (Let's Have a Dream)"/"It Happened" (Japan-only)||–||–||Non-album single|
|1981||"Walking on Thin Ice"/"It Happened"||35||13|
|"No, No, No"/"Will You Touch Me"||–||–||Season of Glass|
|1982||"My Man"/"Let the Tears Dry"||–||–||It's Alright (I See Rainbows)|
|"Never Say Goodbye"/"Loneliness"||–||–|
|1985||"Hell in Paradise"/"Hell in Paradise" (instrumental)||–||12||Starpeace|
|"Cape Clear"/"Walking on Thin Ice" (promo)||–||–|
|2001||"Open Your Box" (remixes)||144||25||Non-album singles|
|2002||"Kiss Kiss Kiss" (remixes)||–||20|
|"Yang Yang" (remixes)||–||17|
|2003||"Walking on Thin Ice" (remixes)||35||1|
|"Will I" (remixes)/"Fly" (remixes)||–||19|
|2004||"Hell in Paradise" (remixes)||–||4|
|"Everyman... Everywoman..." (remixes)||–||1|
|2007||"You're the One" (remixes)||–||2|
|"No, No, No" (remixes)||–||1|
|2008||"Give Peace a Chance" (remixes)||–||1|
|2009||"I'm Not Getting Enough" (remixes)||–||1|
|2010||"Give Me Something" (remixes)||–||1|
|"Wouldnit (I'm a Star)" (remixes)||–||1|
|2011||"Move on Fast" (remixes)||–||1|
|"Talking to the Universe" (remixes)||–||1|
|2012||"She Gets Down on Her Knees" (remixes)||–||5|
|"Early in the Morning"||–||–||Yokokimthurston|
|"I'm Moving On" (remixes)||–||4||Non-album single|
|2013||"Hold Me" (featuring Dave Audé) (remixes)||–||1|
|"Walking on Thin Ice 2013" (remixes)||–||1|
|2015||"Woman Power" (remixes)||–||6|
|"I Love You, Earth" (Antony & Yoko Ono) / "I'm Going Away Smiling" (Antony) (10″ vinyl single + download)||–||–|
|"Blink" (Yoko Ono & John Zorn) (10″ vinyl single + download)||–||–|
|"Happy Xmas (War Is Over)" (Yoko Ono & Flaming Lips) / "Atlas Eets Christmas" (Yoko Ono & Flaming Lips) (7" vinyl single)||–||–|
|2016||"Hell in Paradise 2016" (remixes)||–||1||Yes, I'm a Witch Too|

B-side appearances on John Lennon singles
- "Remember Love" (on "Give Peace a Chance") (1969)
- "Don't Worry, Kyoko" (on "Cold Turkey") (1969)
- "Who Has Seen the Wind?" (on "Instant Karma!") (1970)
- "Why" (on "Mother") (1971)
- "Open Your Box" (on "Power to the People") (1971)
- "Listen, the Snow is Falling" (on "Happy Xmas (War Is Over)") (1971)
- "Sisters, O Sisters" (on "Woman Is the Nigger of the World") (1972)
- "Kiss Kiss Kiss" (on "(Just Like) Starting Over") (1980)
- "Beautiful Boys" (on "Woman") (1981)
- "Yes, I'm Your Angel" (on "Watching the Wheels") (1981)
- "O'Sanity" (on "Nobody Told Me") (1984)
- "Your Hands (あなたの手)" (on "Borrowed Time") (1984)
- "Sleepless Night" (on "I'm Stepping Out") (1984)
- "It's Alright" (on "Every Man Has a Woman Who Loves Him") (1985)

Books and monographs
- Grapefruit (1964)
- Summer of 1980 (1983)
- ただの私 (Tada-no Watashi – Just Me!) (1986)
- The John Lennon Family Album (1990)
- Instruction Paintings (1995)
- Grapefruit Juice (1998)
- YES YOKO ONO (2000)
- Odyssey of a Cockroach (2005)
- Imagine Yoko (2005)
- Memories of John Lennon (editor) (2005)
- 2:46: Aftershocks: Stories From the Japan Earthquake (contributor) (2011)
- Acorn (2013)

Films
- Eye blink (1966, 5 min)
- Bottoms (1966, 5½ min)
- Match (1966, 5 min)
- Cut Piece (1965, 9 min)
- Wrapping Piece (1967, approx. 20 min, music by Delia Derbyshire)
- Film No. 4 (Bottoms) (1966/1967, 80 min)
- Bottoms, advertisement/commercial (1966/1967, approx. 2 min)
- Two Virgins (1968, approx. 20 min), a portrait film consisting of super-impositions of John's and Yoko's faces
- Film No. Five (Smile) (1968, 51 min)
- Rape (1969, 77 min), filmed by Nick Rowland; a young woman is relentlessly pursued by a camera crew
- Apotheosis (1970, 18½ min)
- Freedom (1970, 1 min), a slow-motion film showing a woman attempting to take off her bra
- Making of Fly (1970, approx. 30 min)
- Up Your Legs Forever (1970, 70 min), a film consisting of continuous panning shots up a series of 367 human legs
- Erection (1971, 20 min), a film of a hotel's construction over many months, based on still photographs by Iain McMillan
- Sisters, O Sisters (1971, 4 min)
- Luck of the Irish (1971, approx. 4 min)
- Blueprint for the Sunrise (2000, 28 min)
- Onochord (2004, continuous loop)
- With John Lennon, Bed-In (1969, 74 min)
- With Jonas Mekas, Fly (1970, 25 min), in which a fly crawls slowly across a woman's naked body; premiered at the Cannes Film Festival in May 1971

Actress or as self
- Satan's Bed (as an actress), directed by Michael Findlay
- Let It Be (1970, 81 min)
- Imagine (1971, 70 min)
- Flipside (Canadian TV show, 1972, approx. 25 min)
- Mad About You (American TV show, guest star in 1995 episode "Yoko Said")
- Isle of Dogs (upcoming, voice)
The process used to produce the Resuscitation Council (UK) Guidelines 2015 has been accredited by the National Institute for Health and Care Excellence. The guidelines process includes:
- Systematic reviews with grading of the quality of evidence and strength of recommendations. This led to the 2015 International Liaison Committee on Resuscitation (ILCOR) Consensus on Cardiopulmonary Resuscitation and Emergency Cardiovascular Care Science with Treatment Recommendations.1,2
- The involvement of stakeholders from around the world, including members of the public and cardiac arrest survivors.
- Details of the guidelines development process can be found in the Resuscitation Council (UK) Guidelines Development Process Manual.
- These Resuscitation Council (UK) Guidelines have been peer reviewed by the Executive Committee of the Resuscitation Council (UK), which comprises 25 individuals and includes lay representation and representation of the key stakeholder groups.

- Guidelines 2015 highlights the critical importance of the interactions between the emergency medical dispatcher, the bystander who provides cardiopulmonary resuscitation (CPR) and the timely deployment of an automated external defibrillator (AED). An effective, co-ordinated community response that draws these elements together is key to improving survival from out-of-hospital cardiac arrest.
- The emergency medical dispatcher plays an important role in the early diagnosis of cardiac arrest, the provision of dispatcher-assisted CPR (also known as telephone CPR), and the location and dispatch of an AED. The sooner the emergency services are called, the earlier appropriate treatment can be initiated and supported.
- The knowledge, skills and confidence of bystanders will vary according to the circumstances of the arrest, level of training and prior experience. The bystander who is trained and able should assess the collapsed victim rapidly to determine if the victim is unresponsive and not breathing normally, and then immediately alert the emergency services. Whenever possible, alert the emergency services without leaving the victim.
- The victim who is unresponsive and not breathing normally is in cardiac arrest and requires CPR. Immediately following cardiac arrest, blood flow to the brain is reduced to virtually zero, which may cause seizure-like episodes that may be confused with epilepsy. Bystanders and emergency medical dispatchers should be suspicious of cardiac arrest in any patient presenting with seizures and carefully assess whether the victim is breathing normally.

The community response to cardiac arrest is critical to saving lives. Each year, UK ambulance services respond to approximately 60,000 cases of suspected cardiac arrest. Resuscitation is attempted by ambulance services in less than half of these cases (approximately 28,000).3 The main reasons are that either the victim has been dead for several hours or has not received bystander CPR, so by the time the emergency services arrive the person has died. Even when resuscitation is attempted, less than one in ten victims survive to go home from hospital.
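A little arithmetic makes the scale of those figures concrete. The short Python sketch below simply combines the approximate numbers quoted above; the implied survivor count is a rough, order-of-magnitude illustration rather than a statistic from the guideline, and the 10% value is only an upper-bound reading of "less than one in ten".

# Rough arithmetic on the UK figures quoted above; illustrative only.
suspected_arrests = 60_000       # suspected cardiac arrests attended by UK ambulance services per year
resuscitation_attempts = 28_000  # cases in which resuscitation is attempted
survival_to_discharge = 0.10     # "less than one in ten", treated here as an upper bound

attempt_share = resuscitation_attempts / suspected_arrests
implied_survivors = resuscitation_attempts * survival_to_discharge

print(f"Resuscitation attempted in about {attempt_share:.0%} of suspected arrests")
print(f"Implied survivors per year: fewer than {implied_survivors:,.0f}")

On those numbers, doubling survival, which the next paragraph argues is achievable with more bystander CPR and wider AED use, is what turns the implied figure into the "thousands of lives each year" quoted below.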
Strengthening the community response to cardiac arrest by training and empowering more bystanders to perform CPR and by increasing the use of automated external defibrillators (AEDs) at least doubles the chances of survival and could save thousands of lives each year.4,5 This guideline is based on the International Liaison Committee on Resuscitation (ILCOR) 2015 Consensus on Science and Treatment Recommendations (CoSTR) for Basic Life Support and Automated External Defibrillation and the European Resuscitation Council Guidelines for Resuscitation 2015, Section 2, Adult basic life support and automated external defibrillation.2,6 These contain all the reference material for this section.

The Chain of Survival (Figure 1) describes four key, inter-related steps which, if delivered effectively and in sequence, optimise survival from out-of-hospital cardiac arrest.7

1: Early recognition and call for help
If untreated, myocardial ischaemia leads to cardiac arrest in a quarter to a third of patients within the first hour after the onset of chest pain. Once cardiac arrest has occurred, early recognition is critical to enable rapid activation of the ambulance service and prompt initiation of bystander CPR.

2: Early bystander CPR
The immediate initiation of bystander CPR can double or quadruple survival from out-of-hospital cardiac arrest.5,8-13 Despite this compelling evidence, only 40% of victims receive bystander CPR in the UK.14

3: Early defibrillation
Defibrillation within 3–5 min of collapse can produce survival rates as high as 50–70%.15 This can be achieved through public access defibrillation, when a bystander uses a nearby AED to deliver the first shock.4,15-17 Each minute of delay to defibrillation reduces the probability of survival to hospital discharge by 10% (see the illustrative calculation below). In the UK, fewer than 2% of victims have an AED deployed before the ambulance arrives.18

4: Early advanced life support and standardised post-resuscitation care
Advanced life support with airway management, drugs and the correction of causal factors may be needed if initial attempts at resuscitation are unsuccessful. The quality of treatment during the post-resuscitation phase affects outcome and is addressed in the Adult advanced life support and Post-resuscitation care sections.

Figure 1. The Chain of Survival

The Resuscitation Council (UK) recommends that, to improve survival from cardiac arrest:
- All school children are taught CPR and how to use an AED.
- Everyone who is able to should learn CPR.
- Defibrillators are available in places where there are large numbers of people (e.g. airports, railway stations, shopping centres, sports stadiums), increased risk of cardiac arrest (e.g. gyms, sports facilities) or where access to emergency services can be delayed (e.g. aircraft and other remote locations).
- Owners of defibrillators should register the location and availability of devices with their local ambulance services.
- Systems are implemented to enable ambulance services to identify and deploy the nearest available defibrillator to the scene of a suspected cardiac arrest.
- All out-of-hospital cardiac arrest resuscitation attempts are reported to the National Out-of-Hospital Cardiac Arrest Audit. www.warwick.ac.uk/ohcao
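The figures in the Chain of Survival also support a rough back-of-the-envelope check on why delay matters. The Python sketch below is not part of the guideline: it assumes the "10% per minute" figure is a relative reduction that compounds for each minute before defibrillation, and it uses an assumed baseline chosen so that a shock within about 3 minutes falls inside the quoted 50–70% range. The printed values are only meant to illustrate the steepness of the decline.

# Illustrative only; not a clinical model and not taken from the guideline.
def estimated_survival(minutes_to_defibrillation, baseline=0.70, relative_decay_per_minute=0.10):
    """Rough estimate of survival to hospital discharge for a shockable arrest.

    baseline: assumed survival with a near-immediate shock, chosen so that
    defibrillation within about 3 minutes lands in the 50-70% range quoted above.
    Reading "10% per minute" as a compounding relative decay is also an assumption.
    """
    return baseline * (1.0 - relative_decay_per_minute) ** minutes_to_defibrillation

if __name__ == "__main__":
    for minutes in (1, 3, 5, 8, 11):
        print(f"Shock at {minutes:2d} min: estimated survival {estimated_survival(minutes):.0%}")

Under these assumptions the estimate roughly halves by 6–7 minutes of delay, which is why public access defibrillation and the recommendations above on AED placement and registration carry so much weight.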
The remainder of this section contains guidance on the initial resuscitation of an adult cardiac arrest victim where the cardiac arrest occurs outside a hospital. This includes basic life support (BLS: airway, breathing and circulation support without the use of equipment other than a protective barrier device) and the use of an automated external defibrillator (AED). Simple techniques used in the management of choking (i.e. foreign body airway obstruction) are also included. Guidelines for the use of manual defibrillators and starting in-hospital resuscitation are found in the Advanced life support guidelines section. The guidelines are based on the ILCOR 2015 Consensus on Science and Treatment Recommendations (CoSTR) for BLS/AED and the European Resuscitation Council Guidelines for BLS/AED.2,6

- Ensure it is safe to approach the victim.
- Promptly assess the unresponsive victim to determine if they are breathing normally.
- Be suspicious of cardiac arrest in any patient presenting with seizures and carefully assess whether the victim is breathing normally.
- For the victim who is unresponsive and not breathing normally:
  - Dial 999 and ask for an ambulance. If possible stay with the victim and get someone else to make the emergency call.
  - Start CPR and send for an AED as soon as possible.
  - If trained and able, combine chest compressions and rescue breaths, otherwise provide compression-only CPR.
  - If an AED arrives, switch it on and follow the instructions.
  - Minimise interruptions to CPR when attaching the AED pads to the victim.
  - Do not stop CPR unless you are certain the victim has recovered and is breathing normally, or a health professional tells you to stop.
- Treat the victim who is choking by encouraging them to cough. If the victim deteriorates, give up to 5 back slaps followed by up to 5 abdominal thrusts. If the victim becomes unconscious, start CPR.
- The same steps can be followed for resuscitation of children by those who are not specifically trained in resuscitation for children; it is far better to use the adult BLS sequence for resuscitation of a child than to do nothing.

The sequence of steps for the initial assessment and treatment of the unresponsive victim is summarised in Figure 2. Further technical information on each of the steps is presented in Table 1 and below. The sequence takes the reader through recognition of cardiac arrest, calling an ambulance, starting CPR and using an AED. The number of steps has been reduced to focus on the key actions. The intent of the revised algorithm is to present the steps in a logical and concise manner that is easy for all types of rescuers to learn and remember, so that they can perform CPR and use an AED.

Figure 2. Adult basic life support algorithm

SAFETY: Make sure you, the victim and any bystanders are safe.

RESPONSE: Check the victim for a response.
- Gently shake his shoulders and ask loudly: "Are you all right?"
If he responds, leave him in the position in which you find him, provided there is no further danger; try to find out what is wrong with him and get help if needed; reassess him regularly.

AIRWAY: Open the airway.
- Turn the victim onto his back.
- Place your hand on his forehead and gently tilt his head back; with your fingertips under the point of the victim's chin, lift the chin to open the airway.

BREATHING: Look, listen and feel for normal breathing for no more than 10 seconds.
In the first few minutes after cardiac arrest, a victim may be barely breathing, or taking infrequent, slow and noisy gasps. Do not confuse this with normal breathing.
If you have any doubt whether breathing is normal, act as if they are not breathing normally and prepare to start CPR.

DIAL 999: Call an ambulance (999).
- Ask a helper to call if possible, otherwise make the call yourself.
- Stay with the victim when making the call if possible.
- Activate the speaker function on the phone to aid communication with the ambulance service.

SEND FOR AED: Send someone to get an AED if available.
If you are on your own, do not leave the victim; start CPR.

CIRCULATION: Start chest compressions.
- Kneel by the side of the victim.
- Place the heel of one hand in the centre of the victim's chest (which is the lower half of the victim's breastbone (sternum)).
- Place the heel of your other hand on top of the first hand.
- Interlock the fingers of your hands and ensure that pressure is not applied over the victim's ribs.
- Keep your arms straight.
- Do not apply any pressure over the upper abdomen or the bottom end of the bony sternum (breastbone).
- Position your shoulders vertically above the victim's chest and press down on the sternum to a depth of 5–6 cm.
- After each compression, release all the pressure on the chest without losing contact between your hands and the sternum.
- Repeat at a rate of 100–120 min-1.

GIVE RESCUE BREATHS: After 30 compressions, open the airway again using head tilt and chin lift and give 2 rescue breaths.
- Pinch the soft part of the nose closed, using the index finger and thumb of your hand on the forehead.
- Allow the mouth to open, but maintain chin lift.
- Take a normal breath and place your lips around his mouth, making sure that you have a good seal.
- Blow steadily into the mouth while watching for the chest to rise, taking about 1 second as in normal breathing; this is an effective rescue breath.
- Maintaining head tilt and chin lift, take your mouth away from the victim and watch for the chest to fall as air comes out.
- Take another normal breath and blow into the victim's mouth once more to achieve a total of two effective rescue breaths. Do not interrupt compressions by more than 10 seconds to deliver two breaths. Then return your hands without delay to the correct position on the sternum and give a further 30 chest compressions.
Continue with chest compressions and rescue breaths in a ratio of 30:2.
If you are untrained or unable to do rescue breaths, give chest-compression-only CPR (i.e. continuous compressions at a rate of 100–120 min-1).

IF AN AED ARRIVES: Switch on the AED.
- Attach the electrode pads on the victim's bare chest.
- If more than one rescuer is present, CPR should be continued while electrode pads are being attached to the chest.
- Follow the spoken/visual directions.
- Ensure that nobody is touching the victim while the AED is analysing the rhythm.
If a shock is indicated, deliver the shock:
- Ensure that nobody is touching the victim.
- Push the shock button as directed (fully automatic AEDs will deliver the shock automatically).
- Immediately restart CPR at a ratio of 30:2.
- Continue as directed by the voice/visual prompts.
If no shock is indicated, continue CPR:
- Immediately resume CPR.
- Continue as directed by the voice/visual prompts.

CONTINUE CPR: Do not interrupt resuscitation until:
- A health professional tells you to stop
- You become exhausted
- The victim is definitely waking up, moving, opening eyes and breathing normally
It is rare for CPR alone to restart the heart. Unless you are certain the person has recovered, continue CPR.
Table 1: BLS/AED detailed sequence of steps
RECOVERY POSITION: If you are certain the victim is breathing normally but is still unresponsive, place him in the recovery position. Be prepared to restart CPR immediately if the victim deteriorates or stops breathing normally.
- Remove the victim’s glasses, if worn.
- Kneel beside the victim and make sure that both his legs are straight.
- Place the arm nearest to you out at right angles to his body, elbow bent with the hand palm-up.
- Bring the far arm across the chest, and hold the back of the hand against the victim’s cheek nearest to you.
- With your other hand, grasp the far leg just above the knee and pull it up, keeping the foot on the ground.
- Keeping his hand pressed against his cheek, pull on the far leg to roll the victim towards you on to his side.
- Adjust the upper leg so that both the hip and knee are bent at right angles.
- Tilt the head back to make sure that the airway remains open.
- If necessary, adjust the hand under the cheek to keep the head tilted and facing downwards to allow liquid material to drain from the mouth.
- Check breathing regularly.
For clarity, the algorithm is presented as a linear sequence of steps. It is recognised that the early steps of ensuring the scene is safe, checking for a response, opening the airway, checking for breathing and calling the ambulance may be accomplished simultaneously or in rapid succession.
Open the airway using the head tilt and chin lift technique whilst assessing whether the person is breathing normally. Do not delay assessment by checking for obstructions in the airway. The jaw thrust and finger sweep are not recommended for the lay provider.
Agonal breaths are irregular, slow and deep breaths, frequently accompanied by a characteristic snoring sound. They originate from the brain stem, which remains functioning for some minutes even when deprived of oxygen. The presence of agonal breathing can be interpreted incorrectly as evidence of circulation and a sign that CPR is not needed. Agonal breathing may be present in up to 40% of victims in the first minutes after cardiac arrest and, if correctly identified as a sign of cardiac arrest, is associated with higher survival rates.20-29 The significance of agonal breathing should be emphasised during basic life support training. Bystanders should suspect cardiac arrest and start CPR if the victim is unresponsive and not breathing normally.
Immediately following cardiac arrest, blood flow to the brain is reduced to virtually zero. This may cause a seizure-like episode that can be confused with epilepsy. Bystanders should be suspicious of cardiac arrest in any patient presenting with seizures. Although bystanders who have witnessed cardiac arrest events report changes in the victims’ skin colour, notably pallor and bluish changes associated with cyanosis, these changes are not diagnostic of cardiac arrest. Checking the carotid pulse (or any other pulse) is an inaccurate method for confirming the presence or absence of circulation.30-34
Early contact with the ambulance service will facilitate dispatcher assistance in the recognition of cardiac arrest, telephone instruction on how to perform CPR, and locating and dispatching the nearest AED. If possible, stay with the victim while calling the ambulance.
If the phone has a speaker facility, switch it to speaker mode as this will facilitate continuous dialogue with the dispatcher including (if required) CPR instructions.6 It seems reasonable that CPR training should include how to activate the speaker phone. Additional bystanders may be used to call the ambulance service.
In adults needing CPR, there is a high probability of a primary cardiac cause for their cardiac arrest. When blood flow stops after cardiac arrest, the blood in the lungs and arterial system remains oxygenated for some minutes. To emphasise the priority of chest compressions, start CPR with chest compressions rather than initial ventilations.
Deliver compressions ‘in the centre of the chest’
Experimental studies show better haemodynamic responses when chest compressions are performed on the lower half of the sternum. Teach this location simply, such as, “place the heel of your hand in the centre of the chest with the other hand on top”. Accompany this instruction by a demonstration of placing the hands on the lower half of the sternum.
Chest compressions are most easily delivered by a single CPR provider kneeling by the side of the victim, as this facilitates movement between compressions and ventilations with minimal interruptions. Over-the-head CPR for single CPR providers and straddle-CPR for two CPR providers may be considered when it is not possible to perform compressions from the side, for example when the victim is in a confined space.
Compress the chest to a depth of 5–6 cm
Fear of doing harm, fatigue and limited muscle strength frequently result in CPR providers compressing the chest less deeply than recommended. Four observational studies, published after the 2010 Guidelines, suggest that a compression depth range of 4.5–5.5 cm in adults leads to better outcomes than all other compression depths during manual CPR.35-38 The Resuscitation Council (UK) endorses the ILCOR recommendation that it is reasonable to aim for a chest compression depth of approximately 5 cm but not more than 6 cm in the average sized adult.2,6 In making this recommendation, the Resuscitation Council (UK) recognises that it can be difficult to estimate chest compression depth and that compressions that are too shallow are more harmful than compressions that are too deep. Training should continue to prioritise achieving adequate compression depth.
Compress the chest at a rate of 100–120 per minute (min-1)
Two studies, with a total of 13,469 patients, found higher survival among patients who received chest compressions at a rate of 100–120 min-1.6 Very high chest compression rates were associated with declining chest compression depths.39,40 The Resuscitation Council (UK) therefore recommends that chest compressions are performed at a rate of 100–120 min-1.
Minimise pauses in chest compressions
Delivery of rescue breaths, defibrillation shocks, ventilations and rhythm analysis all lead to pauses in chest compressions. Pre- and post-shock pauses of less than 10 seconds, and a chest compression fraction (the proportion of the resuscitation attempt spent delivering chest compressions) of more than 60%, are associated with improved outcomes.41-45 Pauses in chest compressions should be minimised and training should emphasise the importance of close co-operation between CPR providers to achieve this.
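To make the chest compression fraction concrete, here is a small worked example. The timings are hypothetical and chosen only for illustration; they are not taken from the guideline or from the cited studies.

% Hypothetical timings, for illustration only.
\[
\text{chest compression fraction (CCF)} \;=\; \frac{\text{time spent delivering compressions}}{\text{total duration of the resuscitation attempt}}
\]
\[
\text{e.g.}\quad \mathrm{CCF} \;=\; \frac{120\,\mathrm{s} - 3 \times 10\,\mathrm{s}}{120\,\mathrm{s}} \;=\; \frac{90}{120} \;=\; 0.75 \;(>0.6)
\]

In other words, a two-minute period containing three 10-second interruptions still keeps the fraction well above the 60% associated with improved outcomes; longer or more frequent pauses quickly pull it below that level.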
Leaning on the chest preventing full chest wall recoil is common during CPR.46,47 Allowing complete recoil of the chest after each compression results in better venous return to the chest and may improve the effectiveness of CPR.46,48-50 CPR providers should, therefore, take care to avoid leaning forward after each chest compression.
The proportion of a chest compression spent in compression compared to relaxation is referred to as the duty cycle. There is very little evidence to recommend any specific duty cycle and, therefore, insufficient new evidence to prompt a change from the currently recommended ratio of 50%.
Feedback on compression technique
CPR feedback and prompt devices (e.g. voice prompts, metronomes, visual dials, numerical displays, waveforms, verbal prompts, and visual alarms) should be used when possible during CPR training. Their use during clinical practice should be integrated with comprehensive CPR quality improvement initiatives rather than as an isolated intervention.51,52
CPR provider fatigue
Chest compression depth can decrease as soon as two minutes after starting chest compressions. If there are sufficient trained CPR providers, they should change over approximately every two minutes to prevent a decrease in compression quality. Changing CPR providers should not interrupt chest compressions.
CPR providers should give rescue breaths with an inflation duration of 1 second and provide sufficient volume to make the victim’s chest rise. Avoid rapid or forceful breaths. The maximum interruption in chest compression to give two breaths should not exceed 10 seconds.53 Mouth-to-nose ventilation is an acceptable alternative to mouth-to-mouth ventilation. It may be considered if the victim’s mouth is seriously injured or cannot be opened, the CPR provider is assisting a victim in the water, or a mouth-to-mouth seal is difficult to achieve. Mouth-to-tracheostomy ventilation may be used for a victim with a tracheostomy tube or tracheal stoma who requires rescue breathing.
Barrier devices for use with rescue breaths
Barrier devices decrease transmission of bacteria during rescue breathing in controlled laboratory settings. Their effectiveness in clinical practice is unknown. If a barrier device is used, care should be taken to avoid unnecessary interruptions in CPR. Manikin studies indicate that the quality of CPR is improved when a pocket mask is used, compared to a bag-mask or simple face shield, during basic life support.
CPR providers trained and able to perform rescue breaths should perform chest compressions and rescue breaths as this may provide additional benefit for children and those who sustain an asphyxial cardiac arrest or where the EMS response interval is prolonged.54-57 Only if rescuers are unable to give rescue breaths should they do compression-only CPR. The Resuscitation Council (UK) has carefully considered the balance between potential benefit and harm from compression-only CPR compared to standard CPR that includes ventilation. Our confidence in the equivalence between chest-compression-only and standard CPR is not sufficient to change current practice. The Resuscitation Council (UK), therefore, endorses the ILCOR and ERC recommendations that CPR providers should perform chest compressions for all patients in cardiac arrest.
When an untrained bystander dials 999, the ambulance dispatcher should instruct him to give chest-compression-only CPR while awaiting the arrival of trained help. Further guidance on dispatcher-assisted CPR is given in the Prehospital resuscitation guidelines.
AEDs are safe and effective when used by laypeople, including those who have had minimal or no training.58 AEDs may make it possible to defibrillate many minutes before professional help arrives. CPR providers should continue CPR with minimal interruption to chest compressions both while attaching an AED and during its use. CPR providers should concentrate on following the voice prompts, particularly when instructed to resume CPR, and on minimising interruptions in chest compression.
Public access defibrillation (PAD) programmes
Public access AED programmes should be actively implemented in public places with a high density and movement of people such as airports, railway stations, bus terminals, sport facilities, shopping malls, stadiums, centres, offices, and casinos – where cardiac arrests are frequently witnessed and trained CPR providers can quickly be on scene.15,59-62 AEDs should also be provided in remote locations where an emergency ambulance response would be likely to be delayed (e.g. aircraft, ferries and off-shore locations). The potential benefit of placing AEDs in schools, as a method to raise awareness and familiarity with this lifesaving equipment, is highlighted in the Education and implementation of resuscitation guidelines.
Registration of defibrillators with the local ambulance services is highly desirable so that dispatchers can direct CPR providers to the nearest AED.63 When implementing an AED programme, community and programme leaders should consider factors such as the development of a team with responsibility for monitoring and maintaining the devices, training and retraining individuals who are likely to use the AED, and identification of a group of volunteer individuals who are committed to using the AED in victims of cardiac arrest.64 Funds must be allocated on a permanent basis to maintain the programme. The Resuscitation Council (UK) and British Heart Foundation have produced information, endorsed by the National Ambulance Service Medical Directors Group, about AEDs and how they can be deployed in the community – A guide to Automated External Defibrillators.
Risks to recipients of CPR
It is extremely rare for bystander CPR to cause serious harm in victims who are eventually found not to be in cardiac arrest. Those who are in cardiac arrest and exposed to longer durations of CPR are likely to sustain rib and sternal fractures. Damage to internal organs can occur but is rare.65 The balance of benefits from bystander CPR far outweighs the risks. CPR providers should not, therefore, be reluctant to start CPR because of the concern of causing harm.
Risks to the CPR provider
CPR training and actual performance are safe in most circumstances. Although rare occurrences of muscle strain, back symptoms, shortness of breath, hyperventilation, pneumothorax, chest pain, myocardial infarction and nerve injury have been described in rescuers, the incidence of these events is extremely low.
Individuals undertaking CPR training should be advised of the nature and extent of the physical activity required during the training programme. Learners and CPR providers who develop significant symptoms (e.g. chest pain or severe shortness of breath) during CPR training should be advised to stop and seek medical attention. Although injury to the CPR provider from a defibrillator shock is extremely rare, standard surgical or clinical gloves do not provide adequate electrical protection. CPR providers, therefore, should not continue manual chest compressions during shock delivery. Avoid direct contact between the CPR provider and the victim when defibrillation is performed. Implantable cardioverter defibrillators (ICDs) can discharge without warning during CPR and rescuers may therefore be in contact with the patient when this occurs. However, the current reaching the rescuer from the ICD is minimal and harm to the rescuer is unlikely. Adverse psychological effects after performing CPR are relatively rare. If symptoms do occur, the CPR provider should consult their general practitioner.
Choking is an uncommon but potentially treatable cause of accidental death. As most choking events are associated with eating, they are commonly witnessed. As victims are initially conscious and responsive, early interventions can be life-saving. Recognition of airway obstruction is the key to a successful outcome, so do not confuse this emergency with fainting, myocardial infarction, seizure or other conditions that may cause sudden respiratory distress, cyanosis or loss of consciousness. Choking usually occurs while the victim is eating or drinking. People at increased risk of choking include those with reduced consciousness, drug and/or alcohol intoxication, neurological impairment with reduced swallowing and cough reflexes (e.g. stroke, Parkinson’s disease), respiratory disease, mental impairment, dementia, poor dentition and older age.66
Table 2 and Figure 3 present the treatment for the adult with choking. Foreign bodies may cause either mild or severe airway obstruction. It is important to ask the conscious victim “Are you choking?” The victim who is able to speak, cough and breathe has mild obstruction. The victim who is unable to speak, has a weakening cough, and is struggling or unable to breathe, has severe airway obstruction.
SUSPECT CHOKING: Be alert to choking, particularly if the victim is eating.
ENCOURAGE TO COUGH: Instruct the victim to cough.
GIVE BACK BLOWS: If the cough becomes ineffective, give up to 5 back blows.
- Stand to the side and slightly behind the victim.
- Support the chest with one hand and lean the victim well forwards so that when the obstructing object is dislodged it comes out of the mouth rather than goes further down the airway.
- Give five sharp blows between the shoulder blades with the heel of your other hand.
GIVE ABDOMINAL THRUSTS: If back blows are ineffective, give up to 5 abdominal thrusts.
- Stand behind the victim and put both arms round the upper part of the abdomen.
- Lean the victim forwards.
- Clench your fist and place it between the umbilicus (navel) and the ribcage.
- Grasp this hand with your other hand and pull sharply inwards and upwards.
- Repeat up to five times.
- If the obstruction is still not relieved, continue alternating five back blows with five abdominal thrusts.
START CPR: Start CPR if the victim becomes unresponsive.
- Support the victim carefully to the ground.
- Immediately activate the ambulance service.
- Begin CPR with chest compressions.
Table 2: Sequence of steps for managing the adult victim who is choking
Figure 3. Adult choking algorithm
Treatment for mild airway obstruction
Coughing generates high and sustained airway pressures and may expel the foreign body. Aggressive treatment with back blows, abdominal thrusts and chest compressions at this stage may cause harm and can worsen the airway obstruction. These treatments are reserved for victims who have signs of severe airway obstruction. Victims with mild airway obstruction should remain under continuous observation until they improve, as severe airway obstruction may subsequently develop.
Treatment for severe airway obstruction
The clinical data on choking are largely retrospective and anecdotal. For conscious adults and children over one year of age with complete airway obstruction, case reports show the effectiveness of back blows or ‘slaps’ and abdominal thrusts. Approximately half of cases of airway obstruction are not relieved by a single technique. The likelihood of success is increased when combinations of back blows or slaps, and abdominal and chest thrusts, are used.
Treatment of choking in an unresponsive victim
Higher airway pressures can be generated using chest thrusts compared with abdominal thrusts. Bystander initiation of chest compressions for unresponsive or unconscious victims of choking is associated with improved outcomes. Therefore, start chest compressions promptly if the victim becomes unresponsive or unconscious. After 30 compressions, attempt 2 rescue breaths, and continue CPR until the victim recovers and starts to breathe normally.
Aftercare and referral for medical review
Following successful treatment of choking, foreign material may nevertheless remain in the upper or lower airways and cause complications later. Victims with a persistent cough, difficulty swallowing or the sensation of an object still being stuck in the throat should, therefore, seek medical advice. Abdominal thrusts and chest compressions can potentially cause serious internal injuries, and all victims successfully treated with these measures should be examined afterwards for injury.
Many children do not receive resuscitation because potential CPR providers fear causing harm if they are not specifically trained in resuscitation for children.
This fear is unfounded: it is far better to use the adult BLS sequence for resuscitation of a child than to do nothing. For ease of teaching and retention, laypeople are taught that the adult sequence may also be used for children who are not responsive and not breathing normally. The following minor modifications to the adult sequence will make it even more suitable for use in children: - Give 5 initial rescue breaths before starting chest compressions. - If you are on your own, perform CPR for 1 minute before going for help. - Compress the chest by at least one third of its depth, approximately 4 cm for the infant and approximately 5 cm for an older child. Use two fingers for an infant under 1 year; use one or two hands as needed for a child over 1 year to achieve an adequate depth of compression. The same modifications of 5 initial breaths and 1 minute of CPR by the lone CPR provider before getting help may improve outcome for victims of drowning. This modification should be taught only to those who have a specific duty of care to potential drowning victims (e.g. lifeguards). Back to top These guidelines have been adapted from the European Resuscitation Council 2015 Guidelines. We acknowledge and thank the authors of the ERC Guidelines for Adult basic life support and automated external defibrillation: Gavin D Perkins, Anthony J Handley, Rudolph W. Koster, Maaret Castrén, Michael A Smyth, Theresa Olasveengen, Koenraad G. Monsieurs, Violetta Raffay, Jan-Thorsten Gräsner, Volker Wenzel, Giuseppe Ristagno, Jasmeet Soar. Back to top NICE has accredited the process used by Resuscitation Council (UK) to produce its Guidelines development Process Manual. Accreditation is valid for 5 years from March 2015. More information on accreditation can be viewed at www.nice.org.uk/accreditation - Nolan JP, Hazinski MF, Aicken R, et al. Part I. Executive Summary: 2015 International Consensus on Cardiopulmonary Resuscitation and Emergency Cardiovascular Care Science with Treatment Recommendations. Resuscitation 2015:95:e1-e32. - Perkins GD, Travers AH, Considine J, et al. Part 3: Adult basic life support and automated external defibrillation: 2015 International Consensus on Cardiopulmonary Resuscitation and Emergency Cardiovascular Care Science With Treatment Recommendations. Resuscitation 2015:95:e43-e70. - Perkins GD, Lockey AS, de Belder MA, Moore F, Weissberg P, Gray H. National initiatives to improve outcomes from out of hospital cardiac arrest in England. Emergency Medicine Journal 2015. doi: 10.1136/emermed-2015-204847 - Blom MT, Beesems SG, Homma PC, et al. Improved survival after out-of-hospital cardiac arrest and use of automated external defibrillators. Circulation 2014;130:1868-75. - Hasselqvist-Ax I, Riva G, Herlitz J, et al. Early cardiopulmonary resuscitation in out-of-hospital cardiac arrest. N Engl J Med 2015;372:2307-15. - Perkins GD, Handley AJ, Koster KW, et al. European Resuscitation Council Guidelines for Resuscitation 2015 Section 2 Adult basic life support and automated external defibrillation. Resuscitation 2015:95:81-98. - Nolan J, Soar J, Eikeland H. The chain of survival. Resuscitation 2006;71:270-1. - Waalewijn RA, Tijssen JG, Koster RW. Bystander initiated actions in out-of-hospital cardiopulmonary resuscitation: results from the Amsterdam Resuscitation Study (ARRESUST). Resuscitation 2001;50:273-9. - Valenzuela TD, Roe DJ, Cretin S, Spaite DW, Larsen MP. Estimating effectiveness of cardiac arrest interventions: a logistic regression survival model. 
Circulation 1997;96:3308-13. - Holmberg M, Holmberg S, Herlitz J, Gardelov B. Survival after cardiac arrest outside hospital in Sweden. Swedish Cardiac Arrest Registry. Resuscitation 1998;36:29-36. - Holmberg M, Holmberg S, Herlitz J. Factors modifying the effect of bystander cardiopulmonary resuscitation on survival in out-of-hospital cardiac arrest patients in Sweden. Eur Heart J 2001;22:511-9. - Wissenberg M, Lippert FK, Folke F, et al. Association of national initiatives to improve cardiac arrest management with rates of bystander intervention and patient survival after out-of-hospital cardiac arrest. JAMA 2013;310:1377-84. - Sasson C, Rogers MA, Dahl J, Kellermann AL. Predictors of survival from out-of-hospital cardiac arrest: a systematic review and meta-analysis. Circ Cardiovasc Qual Outcomes 2010;3:63-81. - Perkins GD, Lall R, Quinn T, et al. Mechanical versus manual chest compression for out-of-hospital cardiac arrest (PARAMEDIC): a pragmatic, cluster randomised controlled trial. Lancet 2015;385:947-55. - Valenzuela TD, Roe DJ, Nichol G, Clark LL, Spaite DW, Hardman RG. Outcomes of rapid defibrillation by security officers after cardiac arrest in casinos. N Engl J Med 2000;343:1206-9. - Berdowski J, Blom MT, Bardai A, Tan HL, Tijssen JG, Koster RW. Impact of onsite or dispatched automated external defibrillator use on survival after out-of-hospital cardiac arrest. Circulation 2011;124:2225-32. - Ringh M, Rosenqvist M, Hollenberg J, et al. Mobile-phone dispatch of laypersons for CPR in out-of-hospital cardiac arrest. N Engl J Med 2015;372:2316-25. - Deakin CD, Shewry E, Gray HH. Public access defibrillation remains out of reach for most victims of out-of-hospital sudden cardiac arrest. Heart 2014;100:619-23. - Nolan JP, Soar J, Cariou A, et al. European Resuscitation Council and European Society of Intensive Care Medicine Guidelines for Resuscitation 2015 Section 5 Post Resuscitation Care. Resuscitation 2015:95:201-21. - Dami F, Fuchs V, Praz L, Vader JP. Introducing systematic dispatcher-assisted cardiopulmonary resuscitation (telephone-CPR) in a non-Advanced Medical Priority Dispatch System (AMPDS): implementation process and costs. Resuscitation 2010;81:848-52. - Nurmi J, Pettila V, Biber B, Kuisma M, Komulainen R, Castren M. Effect of protocol compliance to cardiac arrest identification by emergency medical dispatchers. Resuscitation 2006;70:463-9. - Lewis M, Stubbs BA, Eisenberg MS. Dispatcher-assisted cardiopulmonary resuscitation: time to identify cardiac arrest and deliver chest compression instructions. Circulation 2013;128:1522-30. - Hauff SR, Rea TD, Culley LL, Kerry F, Becker L, Eisenberg MS. Factors impeding dispatcher-assisted telephone cardiopulmonary resuscitation. Ann Emerg Med 2003;42:731-7. - Bohm K, Stalhandske B, Rosenqvist M, Ulfvarson J, Hollenberg J, Svensson L. Tuition of emergency medical dispatchers in the recognition of agonal respiration increases the use of telephone assisted CPR. Resuscitation 2009;80:1025-8. - Bohm K, Rosenqvist M, Hollenberg J, Biber B, Engerstrom L, Svensson L. Dispatcher-assisted telephone-guided cardiopulmonary resuscitation: an underused lifesaving system. Eur J Emerg Med 2007;14:256-9. - Bang A, Herlitz J, Martinell S. Interaction between emergency medical dispatcher and caller in suspected out-of-hospital cardiac arrest calls with focus on agonal breathing. A review of 100 tape recordings of true cardiac arrest cases. Resuscitation 2003;56:25-34. - Roppolo LP, Westfall A, Pepe PE, et al. 
Dispatcher assessments for agonal breathing improve detection of cardiac arrest. Resuscitation 2009;80:769-72. - Vaillancourt C, Verma A, Trickett J, et al. Evaluating the effectiveness of dispatch-assisted cardiopulmonary resuscitation instructions. Acad Emerg Med 2007;14:877-83. - Tanaka Y, Taniguchi J, Wato Y, Yoshida Y, Inaba H. The continuous quality improvement project for telephone-assisted instruction of cardiopulmonary resuscitation increased the incidence of bystander CPR and improved the outcomes of out-of-hospital cardiac arrests. Resuscitation 2012;83:1235-41. - Bahr J, Klingler H, Panzer W, Rode H, Kettler D. Skills of lay people in checking the carotid pulse. Resuscitation 1997;35:23-6. - Nyman J, Sihvonen M. Cardiopulmonary resuscitation skills in nurses and nursing students. Resuscitation 2000;47:179-84. - Tibballs J, Russell P. Reliability of pulse palpation by healthcare personnel to diagnose paediatric cardiac arrest. Resuscitation 2009;80:61-4. - Tibballs J, Weeranatna C. The influence of time on the accuracy of healthcare personnel to diagnose paediatric cardiac arrest by pulse palpation. Resuscitation 2010;81:671-5. - Moule P. Checking the carotid pulse: diagnostic accuracy in students of the healthcare professions. Resuscitation 2000;44:195-201. - Hostler D, Everson-Stewart S, Rea TD, et al. Effect of real-time feedback during cardiopulmonary resuscitation outside hospital: prospective, cluster-randomised trial. BMJ 2011;342:d512. - Stiell IG, Brown SP, Christenson J, et al. What is the role of chest compression depth during out-of-hospital cardiac arrest resuscitation?*. Crit Care Med 2012;40:1192-8. - Stiell IG, Brown SP, Nichol G, et al. What is the optimal chest compression depth during out-of-hospital cardiac arrest resuscitation of adult patients? Circulation 2014;130:1962-70. - Vadeboncoeur T, Stolz U, Panchal A, et al. Chest compression depth and survival in out-of-hospital cardiac arrest. Resuscitation 2014;85:182-8. - Idris AH, Guffey D, Pepe PE, et al. Chest compression rates and survival following out-of-hospital cardiac arrest. Crit Care Med 2015;43:840-8. - Idris AH, Guffey D, Aufderheide TP, et al. Relationship between chest compression rates and outcomes from cardiac arrest. Circulation 2012;125:3004-12. - Cheskes S, Schmicker RH, Verbeek PR, et al. The impact of peri-shock pause on survival from out-of-hospital shockable cardiac arrest during the Resuscitation Outcomes Consortium PRIMED trial. Resuscitation 2014;85:336-42. - Cheskes S, Schmicker RH, Christenson J, et al. Perishock pause: an independent predictor of survival from out-of-hospital shockable cardiac arrest. Circulation 2011;124:58-66. - Vaillancourt C, Everson-Stewart S, Christenson J, et al. The impact of increased chest compression fraction on return of spontaneous circulation for out-of-hospital cardiac arrest patients not in ventricular fibrillation. Resuscitation 2011;82:1501-7. - Sell RE, Sarno R, Lawrence B, et al. Minimizing pre- and post-defibrillation pauses increases the likelihood of return of spontaneous circulation (ROSC). Resuscitation 2010;81:822-5. - Christenson J, Andrusiek D, Everson-Stewart S, et al. Chest compression fraction determines survival in patients with out-of-hospital ventricular fibrillation. Circulation 2009;120:1241-7. - Niles DE, Sutton RM, Nadkarni VM, et al. Prevalence and hemodynamic effects of leaning during CPR. Resuscitation 2011;82 Suppl 2:S23-6. - Fried DA, Leary M, Smith DA, et al. 
The prevalence of chest compression leaning during in-hospital cardiopulmonary resuscitation. Resuscitation 2011;82:1019-24. - Zuercher M, Hilwig RW, Ranger-Moore J, et al. Leaning during chest compressions impairs cardiac output and left ventricular myocardial blood flow in piglet cardiac arrest. Crit Care Med 2010;38:1141-6. - Aufderheide TP, Pirrallo RG, Yannopoulos D, et al. Incomplete chest wall decompression: a clinical evaluation of CPR performance by EMS personnel and assessment of alternative manual chest compression-decompression techniques. Resuscitation 2005;64:353-62. - Yannopoulos D, McKnite S, Aufderheide TP, et al. Effects of incomplete chest wall decompression during cardiopulmonary resuscitation on coronary and cerebral perfusion pressures in a porcine model of cardiac arrest. Resuscitation 2005;64:363-72. - Couper K, Salman B, Soar J, Finn J, Perkins GD. Debriefing to improve outcomes from critical illness: a systematic review and meta-analysis. Intensive Care Med 2013;39:1513-23. - Couper K, Kimani PK, Abella BS, et al. The System-Wide Effect of Real-Time Audiovisual Feedback and Postevent Debriefing for In-Hospital Cardiac Arrest. Crit Care Med 2015:1. doi: 10.1097/CCM.0000000000001202 - Beesems SG, Wijmans L, Tijssen JG, Koster RW. Duration of ventilations during cardiopulmonary resuscitation by lay rescuers and first responders: relationship between delivering chest compressions and outcomes. Circulation 2013;127:1585-90. - Kitamura T, Iwami T, Kawamura T, et al. Conventional and chest-compression-only cardiopulmonary resuscitation by bystanders for children who have out-of-hospital cardiac arrests: a prospective, nationwide, population-based cohort study. Lancet 2010;375:1347-54. - Goto Y, Maeda T, Goto Y. Impact of dispatcher-assisted bystander cardiopulmonary resuscitation on neurological outcomes in children with out-of-hospital cardiac arrests: a prospective, nationwide, population-based cohort study. J Am Heart Assoc 2014;3:e000499. - Kitamura T, Iwami T, Kawamura T, Nagao K, Tanaka H, Hiraide A. Bystander-Initiated Rescue Breathing for Out-of-Hospital Cardiac Arrests of Noncardiac Origin. Circulation 2010;122:293-9. - Iwami T, Kawamura T, Hiraide A, et al. Effectiveness of bystander-initiated cardiac-only resuscitation for patients with out-of-hospital cardiac arrest. Circulation 2007;116:2900-7. - Yeung J, Okamoto D, Soar J, Perkins GD. AED training and its impact on skill acquisition, retention and performance--a systematic review of alternative training methods. Resuscitation 2011;82:657-64. - Caffrey SL, Willoughby PJ, Pepe PE, Becker LB. Public use of automated external defibrillators. N Engl J Med 2002;347:1242-7. - Page RL, Hamdan MH, McKenas DK. Defibrillation aboard a commercial aircraft. Circulation 1998;97:1429-30. - O'Rourke MF, Donaldson E, Geddes JS. An airline cardiac arrest program. Circulation 1997;96:2849-53. - The Public Access Defibrillation Trial Investigators. Public-access defibrillation and survival after out-of-hospital cardiac arrest. N Engl J Med 2004;351:637-46. - Zijlstra JA, Stieglis R, Riedijk F, Smeekes M, van der Worp WE, Koster RW. Local lay rescuers with AEDs, alerted by text messages, contribute to early defibrillation in a Dutch out-of-hospital cardiac arrest dispatch system. Resuscitation 2014;85:1444-9. - Priori SG, Bossaert LL, Chamberlain DA, et al. Policy statement: ESC-ERC recommendations for the use of automated external defibrillators (AEDs) in Europe. Resuscitation 2004;60:245-52. 
- Miller AC, Rosati SF, Suffredini AF, Schrump DS. A systematic review and pooled analysis of CPR-associated cardiovascular and thoracic injuries. Resuscitation 2014;85:724-31. - Wong SC, Tariq SM. Cardiac arrest following foreign-body aspiration. Respir Care 2011;56:527-9. Back to top
Digital X-ray radiogrammetry of hand or wrist radiographs can predict hip fracture risk—a study in 5,420 women and 2,837 men
Wilczek, M.L., Kälvesten, J., Algulin, J. et al. Eur Radiol (2013) 23: 1383. doi:10.1007/s00330-012-2706-9
To assess whether digital X-ray radiogrammetry (DXR) analysis of standard clinical hand or wrist radiographs obtained at emergency hospitals can predict hip fracture risk. A total of 45,538 radiographs depicting the left hand were gathered from three emergency hospitals in Stockholm, Sweden. Radiographs with insufficiently included metacarpal bone, fractures in measurement regions, foreign material or unacceptable positioning were manually excluded. A total of 18,824 radiographs from 15,072 patients were analysed with DXR, yielding a calculated BMD equivalent (DXR-BMD). Patients were matched with the national death and inpatient registers. Inclusion criteria were age ≥ 40 years, no prior hip fracture and observation time > 7 days. Hip fractures were identified via ICD-10 codes. Age-adjusted hazard ratio per standard deviation (HR/SD) was calculated using Cox regression. 8,257 patients (65.6 % female, 34.4 % male) met the inclusion criteria. One hundred twenty-two patients suffered a hip fracture after their radiograph. The fracture group had a significantly lower DXR-BMD than the non-fracture group when adjusted for age. The HR/SD for hip fracture was 2.52 and 2.08 in women and men respectively. The area under the curve was 0.89 in women and 0.84 in men. DXR analysis of wrist and hand radiographs obtained at emergency hospitals predicts hip fracture risk in women and men.
• Digital X-ray radiogrammetry of emergency hand/wrist radiographs predicts hip fracture risk.
• Digital X-ray radiogrammetry (DXR) predicts hip fracture risk in both women and men.
• Osteoporosis can potentially be identified in patients with suspected wrist fractures.
• DXR can potentially be used for selective osteoporosis screening.
Keywords: Osteoporosis, Hip fracture, Digital X-ray radiogrammetry, Cohort, BMD
Abbreviations and acronyms
DXR: Digital X-ray radiogrammetry
QCT: Quantitative computed tomography
DXA: Dual-energy X-ray absorptiometry
BMD: Bone mineral density
T-score: Standard deviations above or below the mean for a healthy young adult of the same sex and ethnicity as the patient
Z-score: The number of standard deviations above or below the mean for the patient’s age, sex and ethnicity
AUC: Area under the curve
ICD-10: International Statistical Classification of Diseases and Related Health Problems, 10th Revision
According to epidemiological studies, about one third of women over 50 years of age will experience a fragility fracture [1, 2]. The number of fractures can be reduced by adequate measures, such as lifestyle adaptations [3, 4, 5, 6] and pharmaceutical interventions [7, 8, 9, 10]. To be cost effective and avoid unnecessary and potentially harmful pharmaceutical treatment, it is imperative that those at risk are properly identified and targeted. Previous studies have shown a correlation between bone mineral density (BMD) and the risk of fracture [11, 12, 13, 14, 15]. Consequently, BMD measurement has a prominent position in fracture risk assessment and diagnosis of osteoporosis. Dual-energy X-ray absorptiometry (DXA) of the hip or spine is considered the gold standard [12, 15, 16]. Unfortunately, most patients with a high risk of osteoporosis are not scanned by DXA, even when the clinician suspects osteoporosis.
This is due to the relatively high costs [17, 18] and low availability of equipment [19, 20, 21, 22]. The objective of this study was to assess whether DXR analysis can predict hip fracture risk by using standard clinical hand or wrist radiographs obtained at emergency hospitals.
Materials and methods
This cohort study includes data from 1 January 2000 to 31 December 2008. The data were retrieved in 2009 and 2010. The ethical committee in Stockholm approved the study. Database queries based on examination codes were used to extract all left hand and wrist radiographs from the digital archives of three major emergency hospitals in the Stockholm region. Clinical indications for left hand or wrist radiographs included suspicion of fracture, luxation, foreign body or arthritis. All patients whose radiographs were included for DXR analysis were identified in the National Patient Register provided by the National Board of Health and Welfare. Patients who subsequently had a hip fracture were identified via ICD-10 codes (S72.0, S72.1, S72.2). Inclusion criteria were age >40 years, no hip fracture prior to acquisition of the radiograph and observation time >7 days. To minimise the risk of erroneous registrations, only those coded for both diagnosis and adequate intervention (either upper femur fracture surgery or hip replacement, i.e. ICD-10 surgical codes NFJ and NFB) were registered as hip fracture. In order not to exclude patients with a hip fracture who were too critically ill for surgery, patients who died within 3 days after a registered fracture were also included. Date of fracture diagnosis, date of death or study end date (31 December 2008) was used as censoring time. The National Cause of Death Register provided date of death.
Digital X-ray radiogrammetry and BMD
Digital X-ray radiogrammetry (DXR) (Onescreen, Sectra Imtec AB, Linköping, Sweden) is a development of the traditional technique of radiogrammetry. On a standard projection radiograph, measurement regions are automatically placed around the narrowest parts of metacarpals II-IV. A BMD equivalent measurement (DXR-BMD) is then computed. The calculation is defined as DXR-BMD = c × VPA × (1 − p), where c is a density constant empirically determined so that DXR-BMD on average is equal to the mid-distal forearm region of the Hologic QDR-2000 densitometer (Hologic, Bedford, MA, USA), VPA is cortical bone volume per area and p is porosity. When comparing an individual’s DXR-BMD to the mean DXR-BMD of a young, healthy, normal reference population, a DXR T-score can be derived. When compared to a healthy reference population of the same age, a DXR Z-score is obtained. Any digital or CR radiography equipment that is applicable for acquiring hand X-ray images can be used to acquire images for DXR-BMD analysis. The DXR analysis process is automated and operator independent. However, there are some requirements about positioning and exposure settings when acquiring radiographs intended for DXR analysis. Some requirements are generic (posterior-anterior X-ray image of one hand, palm flat to detector table/image plate surface, focus centred on metacarpal III) and some are specific per modality type and model (image postprocessing settings, focus distance, exposure settings, location on detector).
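Written out, the quantities described above are as follows. The first equation restates the DXR-BMD definition given in the text; the T- and Z-score expressions follow the standard densitometric definitions listed in the abbreviations. The symbols μ and σ for the reference-population mean and standard deviation are introduced here for illustration only and do not appear in the paper.

% \mu and \sigma are illustrative notation for the reference-population mean and SD.
\[
\mathrm{DXR\text{-}BMD} = c \cdot \mathrm{VPA} \cdot (1 - p)
\]
\[
T\text{-score} = \frac{\mathrm{DXR\text{-}BMD} - \mu_{\mathrm{young}}}{\sigma_{\mathrm{young}}},
\qquad
Z\text{-score} = \frac{\mathrm{DXR\text{-}BMD} - \mu_{\mathrm{age\text{-}matched}}}{\sigma_{\mathrm{age\text{-}matched}}}
\]

The T-score references a young, healthy population of the same sex and ethnicity, while the Z-score references a population matched for age as well, which is why treatment thresholds such as T ≤ −2.5 are expressed on the T-score scale.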
[Table: Distribution of numbers, bone mineral density (DXR-BMD), DXR T-score and fracture rate by 5-year age groups. Columns: mean DXR-BMD (standard deviation), mean DXR T-score (standard deviation), mean annual fracture rate (%).]
[Table: Distribution of numbers, observation time and annual fracture rate (%) by DXR T-score band, from −6 < −5 up to 4 < 5.]
[Table: Differences between fracture and non-fracture groups by sex, including age at exam (years). One standard deviation is given within parentheses. All differences were statistically significant at the P < 0.001 level.]
[Table: The sensitivity and specificity of DXR to predict hip fracture at DXR T-score ≤−2.5, with 95 % confidence intervals and the corresponding annual fracture rate (%).]
The age-adjusted AUC for the DXR T-score to predict hip fracture was 0.89 in women and 0.84 in men (Fig. 3). In the 55–85-year age group, the corresponding AUCs were 0.83 and 0.80, respectively.
[Table: Age-adjusted hazard ratios for hip fracture for different DXR T-score bands (<−2 to −1, <−3 to −2, <−4 to −3, <−5 to −4) by sex, with 95 % confidence intervals; DXR T-score >−1.0 was defined as HR = 1.]
In this study, DXR-BMD of archived radiographs of the wrist or hand was used to evaluate the association with hip fractures occurring after the radiograph had been obtained. In order to evaluate whether DXR of radiographs of the wrist and hand obtained at emergency hospitals can be used for selective screening for osteoporosis, so-called area under the curve (AUC) calculations were made. These describe the relationship between sensitivity and specificity. Ideally such curves are close to 1, where only those who will suffer from a hip fracture are discriminated from those who will not. In this study, the age-adjusted AUC for the DXR T-score to predict hip fracture was 0.89 in women and 0.84 in men. These values are similar to, or higher than, those previously published for central DXA. Bousson et al. compared quantitative computed tomography (QCT) and hip DXA in a female study population with a similar age distribution to our cohort and obtained AUCs almost as large as ours (0.84 for QCT and 0.80 for DXA). In the EPIDOS cohort, AUC calculations for hip fracture risk based on central DXA measurements ranged from 0.68 to 0.72 depending on the region measured. However, the EPIDOS population was older and the age distribution narrower (average age 80.5, SD 3.8), which results in smaller AUCs. Cummings’ study, based on a younger population (average age 73.2) than the EPIDOS cohort, provided a slightly greater AUC, with a maximum of 0.78 for women undergoing DXA of Ward’s triangle (the AUC was 0.76 when measured at the femoral neck). A meta-analysis from 1996 estimated that the increased risk of hip fracture with a decrease of 1.0 in T-score is 2.6 for hip BMD and 1.9 for lumbar spine BMD in women. Later studies have shown similar results [13, 30]. When using DXR, Bouxsein et al. reported a hazard ratio per decrease of 1.0 in DXR T-score of 1.8 for hip fractures in a case-cohort study of a sample from the Study of Osteoporotic Fractures (SOF) cohort, i.e., community-dwelling US women aged >65 years. In their prospective study DXR compared well to other peripheral BMD measurements, but was inferior to central BMD measurements when predicting hip fracture risk.
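The age-adjusted hazard ratio per standard deviation and the AUC discussed above are standard survival-analysis outputs. As a rough illustration of how such figures can be produced, the following is a minimal sketch using the open-source lifelines and scikit-learn packages. The column names (dxr_tscore, age, time_years, hip_fracture) and the input file are hypothetical, and the published analysis was not necessarily performed with these tools.

```python
# Minimal sketch: age-adjusted Cox hazard ratio per SD of the DXR T-score and a
# crude AUC for hip-fracture prediction. Column names and data are hypothetical;
# this is illustrative only and is not the authors' actual analysis code.
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.metrics import roc_auc_score

df = pd.read_csv("dxr_cohort.csv")  # hypothetical file, one row per patient

# Standardise the predictor so the Cox coefficient is expressed "per standard deviation".
df["dxr_tscore_std"] = (df["dxr_tscore"] - df["dxr_tscore"].mean()) / df["dxr_tscore"].std()

cph = CoxPHFitter()
cph.fit(
    df[["time_years", "hip_fracture", "dxr_tscore_std", "age"]],
    duration_col="time_years",   # follow-up until fracture or censoring
    event_col="hip_fracture",    # 1 = hip fracture, 0 = censored
)
# exp(coef) for dxr_tscore_std is the age-adjusted HR per SD; a protective
# (negative) coefficient corresponds to an HR/SD > 1 per SD *decrease* in BMD.
print(cph.summary[["coef", "exp(coef)"]])

# Crude discrimination estimate: AUC of the negated T-score against the fracture
# indicator, ignoring censoring and age adjustment for simplicity.
print("AUC:", roc_auc_score(df["hip_fracture"], -df["dxr_tscore"]))
```

In practice the published AUCs were age-adjusted and the follow-up was censored at death or study end, so a faithful reproduction would need the register linkage described in the methods rather than this simplified cross-sectional shortcut.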
In a DXR analysis of archived radiographs from a subgroup of the Copenhagen City Heart Study cohort, consisting of women >55 years with self-reported joint or bone pain, Bach-Mortensen et al. reported a HR/SD of 1.4 for hip fractures (P = 0.052) . In our study the HR/SD for DXR in women aged 55–85 was 2.33 and in all women 2.52. Thus, DXR provided predictive values comparable to those previously published for central BMD measurements and considerably higher than those previously reported for DXR [31, 32]. However, our study differs from the previous studies by Bouxsein’s and Bach-Mortensen’s groups with regard to the study population. Their studies included patients recruited from selected cohorts (SOF cohort and Copenhagen City Heart Study cohort, respectively). Our study reflects all patients undergoing standard X-ray examination of the left hand at three emergency hospitals in Stockholm. These groups might differ in terms of fracture and disease prevalence. The hazard ratio was also high in the male group; 2.08 in all men and 2.00 in men aged 55–85. To our knowledge, DXR’s predictability has not previously been studied in men. Due to the inclusion of all left hand/wrist radiographs obtained at three emergency hospitals, the patients included were most likely those with a suspected wrist fracture, i.e. trauma, and patients undergoing routine control for rheumatoid arthritis. Distal forearm fractures have been shown to be associated with other fragility fractures and have therefore been suggested as a possible predictor in fracture risk assessment [12, 33, 34, 35]. The correlation between wrist fracture and other fragility fractures appears to be stronger in men . Rheumatoid arthritis has been shown to increase the risk of fracture significantly [30, 37, 38]. Hence, the patients in our study were most likely at higher risk of fragility fractures than a normal population. This might also explain the improved performance of DXR observed in our study compared to previous reports [31, 32]. Defining clinical thresholds There are no previous recommendations on when patients should receive treatment based on DXR measurements. According to the National Osteoporosis Foundation clinician’s guide to prevention and treatment of osteoporosis, health-care providers should consider FDA-approved medical therapies in postmenopausal women and men aged 50 years and older at T-score ≤−2.5 at the femoral neck or spine or in the case of low bone mass (T-score between −1.0 and −2.5 at the femoral neck or spine) and a 10-year probability of a hip fracture ≥3 % . In clinical fracture outcome trials, a T-score of −2.5 is one of the most commonly used inclusion criteria, for example in the FREEDOM trial . A combination of low BMD and prevalent vertebral fracture has also been used for inclusion. In the HORIZON trial subjects could be included if they had femoral neck BMD with a T-score of ≤−2.5 or if they had a T-score of ≤−1.5 plus at least two prevalent vertebral fractures . When calculating the annual hip fracture rate in women receiving a placebo during those two clinical trials, the rate was 0.37 % and 0.76 % respectively. If the threshold DXR T-score ≤−2.5 is applied for all women in our material, the sensitivity to find hip fractures would be 0.81 and the specificity 0.79. However, in guidelines treatment is normally recommended for the 55–85-year age group. 
Used in that age group, the threshold DXR T-score ≤−2.5 would result in a sensitivity of 0.76 and a specificity of 0.72, with an annual fracture rate of 1.6 %. That rate is clearly higher than those observed in most pivotal fracture trials. This higher fracture incidence illustrates the difference in fracture incidence between clinical cohorts and that of clinical trials with selected subjects. It can therefore be discussed whether treatment should be given at higher DXR T-scores than <−2.5 in order to give treatment at the same fracture risk level as those shown to be efficient in the clinical trials. However, if the threshold were based on annual hip fracture rates similar to those observed in the placebo group of the HORIZON trial (0.76 %) , it would result in a severely reduced specificity. Women aged 55 to 85 would then receive treatment already at DXR T-scores of ≤−0.59, resulting in a sensitivity of 0.97, but with the poor specificity of 0.23. This would lead to substantially higher numbers needed to treat in order to avoid one hip fracture. Defining the threshold level for men is more challenging. At DXR T-score <−2.5 the sensitivity for all men >40 years is only 0.36 and specificity 0.93, with an annual fracture rate of 2.0 %. Applying a threshold of DXR T-score <−2.5 would therefore imply severe undertreatment, excluding most men at risk. In men the threshold value of osteopenia seems more reasonable when applied to the 55–85-year age group, resulting in a sensitivity of 0.52 and specificity of 0.76, with an annual hip fracture rate of 0.81 %. However, there were only 20 hip fractures in that age group, so the threshold for treatment in men when using DXR cannot be robustly estimated. The radiographs used in this study were not intentionally intended for DXR analysis. Only 41 % of the retrieved radiographs were considered suitable for analysis with DXR. This percentage might appear small, but one has to consider that all radiographs with plaster, external fixation or fractures that affected the metacarpals (i.e. measurement regions) were excluded. When retrieving radiographs from the digital archives, examinations of fingers or carpal bones were also included as they carry the same examination code as a radiograph of the hand. Such radiographs could not be included as they do not fully depict the metacarpal bones. In some included cases, the radiographs considered acceptable for DXR analysis were actually slightly suboptimal because they were not obtained according to the DXR protocol. This source of error might affect DXR’s predictive value in this study. According to population registers from Statistics Sweden, 78 % of the population in Stockholm was born in Sweden, 15 % in other Caucasian countries, 2.6 % in Asian countries, 2.2 % in African countries and 1.7 % in Hispanic countries. Due to the retrospective data collection of radiographs only, there was no information on patient race, body mass index or menopause age. Neither were there records on prior or subsequent treatment for osteoporosis or other diseases. This could influence the predictive value found in this study. If a great proportion of our material had treatment with cytostatics or corticosteroids, this would lead to an overestimation of the predictive value, but on the other hand osteoporosis treatment would lead to an underestimation. The DXR analysis is intended for use on the non-dominant hand. As there was no patient record regarding hand dominance, the left hand was analysed in all patients. 
Approximately 10 % of the population has a left hand dominance . This is a source of error, probably resulting in slightly higher DXR-BMD in left-dominant individuals, which could lead to a slight underestimation of DXR’s ability to predict future hip fractures. The use of The Swedish National Patient Register and National Cause of Death Register minimised the risk of loss to follow-up. This is a major strength of this study due to the high quality of the registers with 99 % completion . Unfortunately, we were not able to identify whether any of the patients had been exposed to high-energy trauma that could have caused their hip fracture. This is a limitation of our study as non-fragility fractures might have been included in the fracture group. However, this effect is believed to be minimal, as patients with hip fractures were on average significantly older than those without fracture and had lower DXR-BMD. Furthermore, included non-fragility, i.e. high-energy trauma, hip fractures would most likely affect DXR’s predictive ability unfavorably, resulting in an underestimation of the method’s predictive value. In conclusion, hip fractures can be predicted in both women and men by DXR analysis of clinical wrist and hand radiographs obtained at emergency hospitals. DXR may therefore be useful to identify patients with increased hip fracture risk already at the emergency department and might provide an alternative where access to central BMD measurements is limited. However, further studies are warranted to determine DXR’s ability in a normal population. Financial support was provided through the regional agreement on medical training and clinical research (ALF) between Stockholm County Council and Karolinska Institutet. Johan Kälvesten and Jakob Algulin are employees of Sectra Imtec AB, Linköping, Sweden. The authors wish to thank Ali Waqar for the data retrieval of all digital radiographs used in this study. This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.
A mind map (or mind-map) is a diagram used for linking words and ideas to a central key word or idea, often used in permaculture teaching, meetings and other events, e.g. for note-taking, recording minutes, etc. It is used to visualize, classify, structure, and generate ideas, as well as being an aid in study, problem solving, and decision making. It is similar to a semantic network or cognitive map but there are no formal restrictions on the kinds of links used. Most often the map involves images, words, and lines. The elements are arranged intuitively according to the importance of the concepts and they are organized into groupings, branches, or areas. In other words, a mind map is an image-centered radial diagram that represents semantic or other connections between portions of information. The uniform graphic formulation of the semantic structure of information on the method of gathering knowledge may aid recall of existing memories. It is also advertised as a way of increasing motivation to work on a task. For example, the map can graphically illustrate the structure of government institutions within a state. A well-structured and well-established mind map can be subject to review (e.g. with spaced repetition). Mind maps (or similar concepts) have been used for centuries, for learning, brainstorming, memory, visual thinking, and problem solving by educators, engineers, psychologists and people in general. Some of the earliest examples of mind maps were developed by Porphyry of Tyros, a noted thinker of the 3rd century, who graphically visualised the concept categories of Aristotle. Ramon Llull also used structures of the mind map form. People have been using image-centered radial graphic organization techniques, referred to variably as mental or generic mind maps, for centuries in areas such as engineering, psychology, and education, although the claim to the origin of the mind map has been made by a British popular psychology author, Tony Buzan. He claimed the idea was inspired by the general semantics of science fiction novels, such as those of A. E. van Vogt and L. Ron Hubbard. He argues that 'traditional' articles rely on the reader to scan left to right and top to bottom, whilst what actually happens is that the brain will scan the entire page in a non-linear fashion. He also uses popular assumptions about the cerebral hemispheres in order to promote the exclusive use of mind mapping over other forms of note making. More recently the semantic network was developed as a theory to understand human learning, and developed into mind maps by the renaissance man Dr Allan Collins and the noted researcher M. Ross Quillian during the early 1960s. As such, due to his commitment and published research, and his work with learning, creativity, and graphical thinking, Dr Allan Collins can be considered the father of the modern mind map. The mind map continues to be used in various forms, and for various applications including learning and education (where it is often taught as 'Webs' or 'Webbing'), planning, and engineering diagramming. When compared with the earlier original concept map (which was developed by learning experts in the 1960s), the structure of a mind map is similar, but simplified into a radial form with one central key word.
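Because a mind map is essentially a radial hierarchy, a tree with one central key word and branches radiating outwards, its structure can be captured with a very small data model. The following Python sketch is illustrative only; the Node class and the example topics are invented for demonstration and are not tied to any particular mind-mapping tool.

```python
# Illustrative data model for a mind map: a radial hierarchy with a single
# central node and branches radiating outwards. The example content is invented.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Node:
    """One idea in the map; children are the branches radiating from it."""
    text: str
    children: List["Node"] = field(default_factory=list)

    def add(self, text: str) -> "Node":
        """Attach a new branch to this node and return it for chaining."""
        child = Node(text)
        self.children.append(child)
        return child

    def outline(self, depth: int = 0) -> str:
        """Render the radial hierarchy as an indented text outline."""
        lines = ["  " * depth + "- " + self.text]
        for child in self.children:
            lines.append(child.outline(depth + 1))
        return "\n".join(lines)


# Build a tiny map around one central key word.
centre = Node("Permaculture design")
water = centre.add("Water")
water.add("Swales")
water.add("Rain tanks")
soil = centre.add("Soil")
soil.add("Composting")

print(centre.outline())
```

The same tree can just as easily be rendered radially by a drawing tool; the outline form above simply makes the single-centre hierarchy, the feature that distinguishes a mind map from a general concept map, explicit.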
Uses of mind maps
Mind maps have many applications in personal, family, educational, and business situations, including note-taking, brainstorming (wherein ideas are inserted into the map radially around the center node, without the implicit prioritization that comes from hierarchy or sequential arrangements, and wherein grouping and organizing is reserved for later stages), summarizing, revising and general clarifying of thoughts. For example, one could listen to a lecture and take down notes using mind maps for the most important points or keywords. One can also use mind maps as a mnemonic technique or to sort out a complicated idea. Mind maps are also promoted as a way to collaborate in colour pen creativity sessions. Software and technique research have concluded that managers and students find the techniques of mind mapping to be useful, being better able to retain information and ideas than by using traditional 'linear' note taking methods. [citation needed]
Mind maps can be drawn by hand, either as 'rough notes', for example during a lecture or meeting, or can be more sophisticated in quality. Examples of both are illustrated. There are also a number of software packages available for producing mind maps (see below).
Mind map guidelines
These are the foundation structures of a Mind Map, although these are open to free interpretation by the individual:
- Start in the centre with an image of the topic, using at least 3 colours.
- Use images, symbols, codes and dimensions throughout your Mind Map.
- Select key words and print using upper or lower case letters.
- Each word/image must be alone and sitting on its own line.
- The lines must be connected, starting from the central image. The central lines are thicker, organic and flowing, becoming thinner as they radiate out from the centre.
- Make the lines the same length as the word/image.
- Use colours – your own code – throughout the Mind Map.
- Develop your own personal style of Mind Mapping.
- Use emphasis and show associations in your Mind Map.
- Keep the Mind Map clear by using radiant hierarchy, numerical order or outlines to embrace your branches.
(See: BUZAN, Tony. The Mind Map Book. Chapter "Mind Mapping Guidelines").
Scholarly research on mind maps
Buzan (1991) claims that the mind map is a vastly superior note taking method because it does not lead to the alleged "semi-hypnotic trance" state induced by other note forms. Buzan also claims that the mind map utilizes the full range of left and right human cortical skills, balances the brain, taps into the 99% of your unused mental potential, and taps into your intuition (which he calls "superlogic"). There has been research conducted on the technique which suggests that such claims may actually be marketing hype based on misconceptions about the brain and the cerebral hemispheres (see Human brain#Popular misconceptions). [citation needed] There are benefits to be gained by applying a wide range of graphic organizers, and it follows that the mind map, specifically, is suited to only a few learning tasks. Research by Farrand, Hussain, and Hennessy (2002) found that the mind map technique had a limited but significant impact on recall only, in undergraduate students (a 10% increase over baseline for a 600-word text only) as compared to preferred study methods (a −6% increase over baseline).
This improvement was only robust after a week for those in the mind map group, and there was a significant decrease in motivation compared to the subjects' preferred methods of note-taking. They suggested that learners preferred to use other methods because using a mind map was an unfamiliar technique, and its status as a "memory enhancing" technique engendered reluctance to apply it. Pressley, VanEtten, Yokoi, Freebern, and VanMeter (1998) found that learners tended to learn far better by focusing on the content of learning material rather than worrying over any one particular form of note-making.

Mind mapping software

These tools can be used effectively to organise large amounts of information, combining spatial organisation, dynamic hierarchical structuring and node folding.

Free software
- FreeMind is a free open-source mind-mapping application written in Java.
- VYM for Linux and Mac OS; Free Software (GNU GPL).
- Kdissert for Linux only; Free Software (GNU GPL) for the creation of other documents from mind maps: presentations, reports, applets.
- DeepaMehta, an open-source mind map desktop.
- WikkaWiki is a free PHP/MySQL wiki engine with native support for FreeMind maps.

Commercial / proprietary software
- Eylean Board is a project management tool based on the visual representation of tasks.
- FlexusNet: resources for mind mapping and business mapping in Spanish.
- MindManager is commercial mind-mapping software running on MS Windows and integrated with MS Office.
- NovaMind is a commercial mind-map application for Mac OS X. Features include arbitrary branch shapes, a branch proposal system, and integrated screenplay support.
- MyMind is a mind mapper with built-in outlining functionality. It is "donationware" for Mac OS X.
- i2Brain takes the next step - away from a flat tree to a network of ideas with depth. Multi-platform.
- ConceptDraw MINDMAP: mind mapping, brainstorming and project planning software that works on both Windows and Mac OS X.
- OpenMind: software used by British schools.
- MindGenius is commercial mind-mapping software for MS Windows with a vast array of features. This product is sleek and simple, with excellent export.
- SmartDraw, a Visio-like product.
- Mindjet is one of the more robust programs, with lots of visual features including on-screen notes, highlighting of sections, arrows outside of the tree structure, etc.
- BrainMine's major advantages come from an extensive graphics (icon/image) package, a convenient overview map, and an object attribute panel. The final products can be visually impressive, but the interface can be overwhelming (even distracting) for basic idea organizing.
- Visual Concept touts itself as a mind mapping program. The final product is more like Visio, but seems to emphasize hexagon-shaped maps.
- InfoRapid KnowledgeMap achieves some of the same mind-map type organization, but is constructed to seem more like an outline.
- Cornerstone is a visual thinking tool that supports a variety of visual styles.
- Aviz Thought Mapper is the cheapest commercial mind mapping tool available in the market with full functionality. It integrates with MS Office and Outlook. It is built on Java technology and is available for Windows as well as Linux operating systems.

Mind mapping in contrast with concept mapping

The mind map can be contrasted with the similar idea of concept mapping. The former is based on radial hierarchies and tree structures, whereas concept maps are based on connections between concepts.
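As a rough illustration of that structural difference (this sketch is not from the original article, and the class names are invented for the example), a mind map can be modeled as a tree radiating from a single central topic, while a concept map is closer to a graph whose links carry labels:

```python
# Illustrative sketch only: minimal models of a mind map (radial tree)
# versus a concept map (labeled graph). Names are invented for this example.

class MindMapNode:
    """A mind map is a tree: one central topic, with branches radiating outward."""
    def __init__(self, label):
        self.label = label
        self.children = []

    def add_branch(self, label):
        child = MindMapNode(label)
        self.children.append(child)
        return child

class ConceptMap:
    """A concept map is a graph: any concept may link to any other,
    and each link carries its own label."""
    def __init__(self):
        self.links = []  # (concept_a, relation_label, concept_b)

    def link(self, a, relation, b):
        self.links.append((a, relation, b))

# Mind map: everything hangs off one central key word.
centre = MindMapNode("Holiday")
travel = centre.add_branch("Travel")
travel.add_branch("Flights")
centre.add_branch("Budget")

# Concept map: labeled connections between concepts, with no single centre required.
cmap = ConceptMap()
cmap.link("Plants", "need", "Water")
cmap.link("Water", "comes from", "Rain")
```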
Concept maps also encourage one to label the connections one makes between nodes, while mind maps are based on separate, focused topics; both have been found to enhance meaningful learning and to offer potential as true cognitive, intuitive, spatial and metaphorical mapping. The use of the term "Mind Maps" is trademarked by The Buzan Organisation, Ltd. in the UK and the USA, though the trademark does not appear in the records of the Canadian Intellectual Property Office.

References
- Buzan, T. (1991). The Mind Map Book. New York: Penguin.
- Farrand, P., Hussain, F., Hennessy, E. (2002). "The efficacy of the 'mind map' study technique". Med Educ 36(5):426–431.
- Novak, J. D. (1993). "How do we learn our lesson?: Taking students through the process". The Science Teacher, 60(3), 50–55.
- Pressley, M., VanEtten, S., Yokoi, L., Freebern, G., & VanMeter, P. (1998). "The metacognition of college studentship: A grounded theory approach". In: D. J. Hacker, J. Dunlosky, & A. C. Graesser (Eds.), Metacognition in Theory and Practice (pp. 347–367). Mahwah, NJ: Erlbaum.
- Novak, A., Hermann, W., Bovo, V. (2005). Mapas Mentais: Enriquecendo Inteligências - Manual de Aprendizagem e Desenvolvimento de Inteligências (pp. XI, 27, 331). Ed. IDPH.

External links
- Flexus Group: a commercial site with many business and personal mind-mapping applications
- MindGenius: commercial, with many resources and mapping ideas
- Mind mapping software for Mac and PC
- Thinking with Pictures: resources and information about model maps and concept maps
- Mind mapping software for writing dissertations and essays
- Buzan Centres
- High-quality free mind mapping software
- Some good pictures at msoworld
- http://www.mapyourmind.com/ - a commercial site promoting mind-mapping and mind-mapping software
- IAResearch - a commercial site promoting mind-mapping and mind-mapping software
- http://web.archive.org/web/20091027093829/http://www.geocities.com/buzanguru/MindMapping1.html - mind map overview and advocacy
- Some more example maps on a commercial site
- http://www.petillant.com/ - French portal dedicated to mind mapping
- http://www.innovationtools.com/resources/mindmapping.asp - mind mapping resource center
- I-Navigation: mind mapping software with focus+context visualization
- Mindmapping Outliners - an analysis of mindmapping and an overview of various products for Macs
- Beyond Crayons - a series of essays about the professional use of mapping software

This page uses content from the English-language version of Wikipedia. The original article was at Mind map. The list of authors can be seen in the page history. As with PermaWiki, the text of Wikipedia is available under the GNU Free Documentation License.
The first time-sharing systems at MIT, in the early sixties, provided remote access to users at Flexowriters connected to the machine with direct wires. These typewriter-like devices were some of the first remote terminals. By the time I began using the CTSS time-sharing machine at MIT Project MAC, in 1963, terminals were connected to the machine by modems, and one dialed a single digit to call the computer via a separate internal phone exchange, a "private branch exchange" or PBX. That products such as the Bell 103A modem and the IBM 7750 communications computer were available shows that MIT was by no means the first to connect terminals to computers. Airline reservations like SABRE and stock trading applications were already commercial applications by then; such applications usually provided computer service to terminals used as "remote inquiry stations." (The 103A modem came out in 1962. BBN patented the modem in 1963.) The first terminals I used on CTSS were IBM 1050 systems, with Selectric-style golf ball typewriters built into a substantial desk, with a large knee-side box of electronics and a dozen control switches on the front that selected different operation modes. A few 1050 systems had additional peripherals such as paper tape readers and punches. Teletype model 35s were also used on CTSS, but these devices were less desirable because they were slower, noisier, uppercase only devices, and because they used narrow roll-fed paper, as opposed to the 1050's wider paper. Terminals were scarce at MIT, and were usually shared by many users. Some of the Computation Center staff had terminals in their offices, and some departments had terminals, usually shared by groups of several users. Most users, though, got access to the time-sharing machines by using the terminals in public or semi-public terminal rooms. (I was using CTSS in the 8th floor terminal room at MIT Project MAC when they announced that the Institute was closing because President Kennedy had been shot.) The first people to have home terminals were those system programmers who might be called at any hour to investigate or repair a problem with the time-sharing system. These folks took home a machine that might cost as much as half a year's salary, and had a leased phone line connected to the MIT data PBX installed at home. They discovered that having a machine at home was useful not only for fixing the operating system, but also for programming and writing documents from home, and for sending electronic mail to other users on the MIT system. I got my first home terminal in 1967, when I was working on Multics at Project MAC. It was an IBM 2741, the standard machine for the programming staff. Like the 1050, the 2741 had a Selectric mechanism built into a desk, but one smaller than the 1050's, and with a slimmer electronics box and fewer switches. The original 2741s were designed as "inquiry" terminals: the keyboard was normally locked, and the user was supposed to hit the ATTN button to get the attention of the computer, which would unlock the keyboard and let the user type one line, and then lock the keyboard on carriage return. This mode of operation was no good for time-sharing use, and we had to have two special features installed on the 2741's for CTSS (and later Multics) use. The 2741 used paper with perforations on each side, like printer paper, and had a tractor feed that kept the paper from going crooked and jamming. 
Annoyingly, the 2741 platen was a little narrower than the 14 7/8 inches wide regular line printer paper, and so Operations had to stock two sizes of paper, and more than once I brought home a box of the wrong size. I remember when my home terminal was installed at my apartment at 140 Huron Avenue in Cambridge. The 2741 weighed over 300 pounds and took four people to get it up four flights of narrow stairs. Then I had three IBM CEs and two phone men in the apartment, and the phone guys were talking to a chain of colleagues at the various switching centers across Cambridge to MIT, setting up the leased line and "conditioning" it so that it would be free enough of noise to transmit data. The 2741 transmitted at 134.5 baud, somewhat faster than the 110 baud speed of the Teletypes, and used the EBCDIC character set. When we began to bring Multics up, we needed mixed case terminals that used the ASCII character set, and so we began with the new Model 37 Teletypes, which ran at the relatively high speed of 150 baud. Not many MIT folks had Model 37s at home, but 37s were the preferred terminal for the Bell Labs folks working on Multics, such as Ken Thompson and Dennis Ritchie. Then the GE folks obtained TermiNet 300s, which ran in ASCII at 300 baud, and those were highly prized as home terminals. The easiest way to understand the cost of all these devices is to say that they cost about as much as a new Buick did. [Dick Snyder] You mentioned the Terminet300. When we first got them at CISL they only ran at 150 baud. We couldn't get them up to 300 until we got the Datanet 300 communications front end. What a happy day when the Datanet 300 went up and the Terminets doubled in speed. The keys on the Terminet had little magnets on them and a keystroke was made by depressing the key which passed thru a wire loop that noticed the current induced by the magnet. One time I was out in Phoenix working night shift with Noel Morris bringing up the 6180. Noel had a bit of a temper you will recall. At one point he got angry and smashed his fist down on the Terminet 300 keyboard. All of the keys in the shape of a fist outline broke as the magnets were smashed. Noel never 'fessed up to it - I'll bet the people who had to fix that Terminet wondered about the strange pattern of the broken keys. The high cost of terminals meant that the development groups tried to find ways to get as much terminal per dollar as possible. The early 70s saw many manufacturers entering the terminal market, with machines that undersold IBM and Teletype. I moved my 2741 to several apartments, and then replaced it with a Datel-30, assembled by Datel with a lighter-duty version of the Selectric mechanism that they bought from IBM. It broke down a lot, and I can remember having to bring it in for repair repeatedly. Then MIT found another terminal manufacturer who made a device using a heavy duty Selectric, and this worked quite well for me as a home terminal. Other programmers had to share portable terminals, such as Texas Instruments Silent 700s and Execuports, carrying them home for an evening and returning them the next day. One nice thing about a Selectric as a home machine was that its output was very high quality, suitable for correspondence. Teletype 37 output, TermiNet 300 output, and even line printer output weren't really letter quality. Another advantage of Selectric-based mechanisms was that you could replace the typeball, to produce output in a different font, or to run APL programs. 
About 1970, GE also produced a 1200 baud terminal, the TermiNet 1200, but I don't think anybody had one at home; for one thing, this high speed was beyond the ability of the 103A modem. To use a TN1200, one used an asymmetric modem, the Bell 202C6, which supported a 1200 baud computer-to-user channel and a 75 baud "reverse" channel. Alternatives to Bell modems were slow to enter the market because the phone company had complex rules to "protect the network" from nonstandard voltages and signals. Acoustic couplers, that clamped a phone handset into some rubber cups and used the phone's microphone and speaker to transmit data, were used by some -- I used one with my Datel -- but they were most often used with the portable terminals. When I changed jobs from MIT to Honeywell, I gave up the 2741 and got a TermiNet 300 at home, and my leased line to MIT was changed to be a leased line to the Honeywell PBX instead. Having a PBX line at home had advantages besides being able to log in; the data PBX had tie lines to the regular voice PBX, and so one could call colleagues in Phoenix easily, and even make long distance calls charged to the company, a nice perk. In the early 70s, the ARPANet also began to bridge the electronic mail systems of individual machines, so that one could log into the MIT Multics system and then communicate with other computer users around the world. Having a home terminal allowed programmers to be flexible in their hours of work, and to get more done. If the choice was between competing for a pool terminal in a terminal room at work, and staying home programming in quiet and comfort, staying home often won. Home terminals helped programmers get access to the time-sharing systems late at night, when system response was much better. In addition to regular assignments, people also felt free to do more personal hacks from their home terminals, many of which became system enhancements that nobody had asked for, but that were accepted since they were a fait accompli. I did a lot of computing from home over the years, writing both programs and memos, and working on projects both assigned and unassigned. System programmers didn't have to wait for the invention of the personal computer to have unmetered computing at home. Often, I would log in to the mainframe from home as soon as I got there, and leave the terminal logged in until I went to bed. If your process was inactive for an hour, the operating system logged you out, so I wrote a tiny program that typed the time every half hour, like a high-tech cuckoo clock. If the time stopped typing, I knew the system had crashed, and could call the operators (if they didn't call me). Another reason I used to stay logged in was to sign on to the "online consultant" facility, which would send a message from a user with a problem to whatever consultant was logged in. I wrote this facility in the early 70s when I worked at the MIT Information Processing Center; descendants of it are still in use on MIT's Athena systems today. When the Big Snow of 1978 hit, it came with plenty of warning. So I took two cases of my favorite beer and a full box of terminal paper home, and reminded the MIT operators to set the Multics machine for unattended reboot. The blizzard paralyzed the Boston area for a week, but I was able to work. The MIT Multics had only a few users logged in, and response was great. I wrote most of the initial transaction processing facility for Multics during that time, and rigged up a way to test it using absentee processes. 
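The little keep-alive program mentioned above is easy to picture in modern terms. Here is a rough sketch of the idea in Python (obviously not the original CTSS/Multics code, which predates Python by decades):

```python
# Sketch of the "high-tech cuckoo clock": print the time every half hour so the
# time-sharing system sees terminal activity and does not log the idle session out.
import time

INTERVAL_SECONDS = 30 * 60  # half an hour

while True:
    # Writing to the terminal counts as activity; if the chime stops appearing,
    # the remote system has probably crashed.
    print(time.strftime("%H:%M:%S"))
    time.sleep(INTERVAL_SECONDS)
```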
At the end of the 70s, Honeywell replaced my TN300 with a device called a Rosy-21, which used a wire-matrix print head. These were great terminals, with few moving parts and much better reliability. About this time, also, video terminals began to become widely available, from several manufacturers. Using Multics Emacs over a 300 baud line was painful but possible; higher modem speeds, first 1200 and then 2400 baud, arrived in the 80s just in time for full screen editing. About this time I left the Multics team and took a job at Tandem, which used a very different kind of terminal. Characters were still sent asynchronously, but the terminal could be programmed to hold a "form" and to send a whole screenful in "block mode." The Tandem full-screen editor was a big step down from Emacs. But the company did supply me with a home 6520 terminal (made by Zentec), and later a 6530 with a big green-screen monitor, and these terminals were adequate to let me program and handle mail from home via 1200 baud dialup. Terminal emulators on PCs replaced terminals in the late 80s. There were four phases to switching over: first, just replacing the terminal with a PC that ran a terminal emulator; second, as disk storage shrank in price and size, moving mainframe functions like address books, calendar, resume updating, and personal databases to the PC; third, switching over to editing, compiling, and debugging on the PC, accessing shared source database by file copying; and fourth, dispensing with terminal sessions except for rare maintenance operations, doing all mail, news, bulletin board, appointments, file sharing, and cooperative work using local PC clients that connected to servers without using an interactive session. Modem speeds also jumped during the 90s: in 1990, 1200 baud was respectable; five years later, 28,800 baud was considered limiting. My wife brought home a company Mac Plus in the late 80s, and we used this machine over dialup to access Tandem's systems with a terminal emulator for several years, as well as external services such as CompuServe and AOL. I carried a company supplied 18-pound Mac Portable for a couple of years. Logging in from home tended to blur the difference between the work place and the home place: you worked when you had the time, inclination, and inspiration. Portable computers and virtual private networks take this another step further, allowing one to carry one's office anywhere and work. The next step was wireless connection using smart phones, which allowed your portable office in your pocket to access all the information in the company. The late Bob Bemer wrote an interesting article for Datamation in 1984, describing his experiences working at home, titled "Working on Moon Mountain."
An 8 MHz Intel 80286 microprocessor
- Produced: from 1982 to the early 1990s
- Max. CPU clock rate: 6 MHz (4 MHz for a short time) to 25 MHz
- Instruction set: x86-16 (with MMU)
- Predecessor: 8086, 8088 (while the 80186 was contemporary)

The Intel 80286 (also marketed as the iAPX 286 and often called Intel 286) is a 16-bit microprocessor that was introduced on 1 February 1982. It was the first 8086-based CPU with separate, non-multiplexed address and data buses and also the first with memory management and wide protection abilities. The 80286 used approximately 134,000 transistors in its original nMOS (HMOS) incarnation and, just like the contemporary 80186, it could correctly execute most software written for the earlier Intel 8086 and 8088 processors. The 80286 was employed for the IBM PC/AT, introduced in 1984, and then widely used in most PC/AT compatible computers until the early 1990s.

History and performance

Intel's first 80286 chips were specified for a maximum clock rate of 4, 6 or 8 MHz, and later releases for 12.5 MHz. AMD and Harris later produced 16 MHz, 20 MHz and 25 MHz parts. Intersil and Fujitsu also designed fully static CMOS versions of Intel's original depletion-load nMOS implementation, largely aimed at battery-powered devices. On average, the 80286 was reportedly measured to have a speed of about 0.21 instructions per clock on "typical" programs, although it could be significantly faster on optimized code and in tight loops, as many instructions could execute in 2 clock cycles each. The 6 MHz, 10 MHz and 12 MHz models were reportedly measured to operate at 0.9 MIPS, 1.5 MIPS and 2.66 MIPS, respectively. The later E-stepping level of the 80286 was free of the several significant errata that caused problems for programmers and operating system writers in the earlier B-step and C-step CPUs (common in the AT and AT clones). The 80286 was designed for multi-user systems with multitasking applications, including communications (such as automated PBXs) and real-time process control. It had 134,000 transistors and consisted of four independent units: address unit, bus unit, instruction unit and execution unit, organized into a loosely coupled (buffered) pipeline, just as in the 8086. The significantly increased performance over the 8086 was primarily due to the non-multiplexed address and data buses, more address-calculation hardware (most importantly a dedicated adder) and a faster (more hardware-based) multiplier. It was produced in 68-pin packages, including PLCC (plastic leaded chip carrier), LCC (leadless chip carrier) and PGA (pin grid array). The performance increase of the 80286 over the 8086 (or 8088) could be more than 100% per clock cycle in many programs (i.e., doubled performance at the same clock speed). This was a large increase, fully comparable to the speed improvements around a decade later when the i486 (1989) or the original Pentium (1993) were introduced. This was partly due to the non-multiplexed address and data buses, but mainly to the fact that address calculations (such as base+index) were less expensive. They were performed by a dedicated unit in the 80286, while the older 8086 had to do effective address computation using its general ALU, consuming several extra clock cycles in many cases. Also, the 80286 was more efficient than its predecessor in instruction prefetch, buffering, execution of jumps, and complex microcoded numerical operations such as MUL/DIV.
The 80286 included, in addition to all of the 8086 instructions, all of the new instructions of the 80186: ENTER, LEAVE, BOUND, INS, OUTS, PUSHA, POPA, PUSH immediate, IMUL immediate, and immediate shifts and rotates. The 80286 also added new instructions for protected mode: ARPL, CLTS, LAR, LGDT, LIDT, LLDT, LMSW, LSL, LTR, SGDT, SIDT, SLDT, SMSW, STR, VERR, and VERW. Some of the instructions for protected mode can (or must) be used in real mode to set up and switch to protected mode, and a few (such as SMSW and LMSW) are useful for real mode itself. The Intel 80286 had a 24-bit address bus and was able to address up to 16 MB of RAM, compared to the 1 MB addressability of its predecessor. However, memory cost and the initial rarity of software using the memory above 1 MB meant that 80286 computers were rarely shipped with more than one megabyte of RAM. Additionally, there was a performance penalty involved in accessing extended memory from real mode (in which DOS, the dominant PC operating system until the mid-1990s, ran), as noted below. The 286 was the first of the x86 CPU family to support Protected virtual address mode, commonly called "protected mode". In addition, it was the first commercially available microprocessor with on-chip MMU capabilities. (Systems using the contemporaneous Motorola 68010 and NS320xx could be equipped with an optional MMU controller.) This would allow IBM compatibles to have advanced multitasking OSes for the first time and compete in the Unix-dominated server/workstation market. Several additional instructions were introduced in protected mode of 80286, which are helpful for multitasking operating systems. Another important feature of 80286 is Prevention of Unauthorized Access. This is achieved by: - Forming different segments for data, code, and stack, and preventing their overlapping - Assigning Privilege levels to each segment. Segment with lower privilege level cannot access the segment with higher privilege level. In 80286 (and in its co-processor Intel 80287), arithmetic operations can be performed on the following different types of numbers: - unsigned packed decimal, - unsigned binary, - unsigned unpacked decimal, - signed binary, and - floating point numbers (only with an 80287). By design, the 286 could not revert from protected mode to the basic 8086-compatible Real address mode ("real mode") without a hardware-initiated reset. In the PC/AT introduced in 1984, IBM added external circuitry as well as specialized code in the ROM BIOS and the 8042 peripheral microcontroller to enable software to cause the reset, allowing real-mode reentry while retaining active memory and returning control to the program that initiated the reset. (The BIOS is necessarily involved because it obtains control directly whenever the CPU resets.) Though it worked correctly, the method imposed a huge performance penalty. In theory, real-mode applications could be directly executed in 16-bit protected mode if certain rules (newly proposed with the introduction of the 80286) were followed; however, as many DOS programs did not conform to those rules, protected mode was not widely used until the appearance of its successor, the 32-bit Intel 80386, which was designed to go back and forth between modes easily and to provide an emulation of real mode within protected mode. 
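To make the addressing difference concrete, here is a small illustrative sketch (not from the article, with an invented descriptor table) of how a physical address is formed in real mode versus 286 protected mode, where a segment register holds a selector that indexes a descriptor containing a 24-bit base and a limit:

```python
# Illustrative sketch of 80286 address formation; simplified, not cycle- or fault-accurate.

def real_mode_address(segment: int, offset: int) -> int:
    """Real mode: physical address = segment * 16 + offset (24-bit bus on the 286)."""
    return ((segment << 4) + offset) & 0xFFFFFF

# A protected-mode descriptor holds a 24-bit base, a 16-bit limit, and access rights.
DESCRIPTOR_TABLE = {
    0x08: {"base": 0x120000, "limit": 0xFFFF},   # hypothetical segment placed above 1 MB
}

def protected_mode_address(selector: int, offset: int) -> int:
    """Protected mode: the selector picks a descriptor; the offset is checked
    against the segment limit, then added to the 24-bit base."""
    desc = DESCRIPTOR_TABLE[selector & 0xFFF8]   # strip RPL/TI bits of the selector
    if offset > desc["limit"]:
        raise MemoryError("general protection fault: offset exceeds segment limit")
    return (desc["base"] + offset) & 0xFFFFFF

print(hex(real_mode_address(0xB800, 0x0010)))      # 0xb8010, within the first megabyte
print(hex(protected_mode_address(0x08, 0x0010)))   # 0x120010, beyond the real-mode 1 MB limit
```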
When Intel designed the 286, it was not designed to be able to multitask real-mode applications; real mode was intended to be a simple way for a bootstrap loader to prepare the system and then switch to protected mode; essentially, in protected mode the 80286 was designed to be a new processor with many similarities to its predecessors, while real mode on the 80286 was offered for smaller-scale systems that could benefit from a more advanced version of the 80186 CPU core, with advantages such as higher clock rates, faster instruction execution (measured in clock cycles), and unmultiplexed buses, but not the 24-bit (16 MB) memory space. To support protected mode, new instructions have been added: ARPL, VERR, VERW, LAR, LSL, SMSW, SGDT, SIDT, SLDT, STR, LMSW, LGDT, LIDT, LLDT, LTR, CLTS. There are also new exceptions (internal interrupts): Invalid opcode, Coprocessor not available, Double Fault, Coprocessor segment overrun, Stack fault, Segment overrun/General protection fault, and others only for protected mode. The protected mode of the 80286 was not utilized until many years after its release, in part because of the high cost of adding extended memory to a PC, but also because of the need for software to support the large user base of 8086 PCs. For example, in 1986 the only program that made use of it was VDISK, a RAM disk driver included with PC DOS 3.0 and 3.1. A DOS could utilize the additional RAM available in protected mode (extended memory) either via a BIOS call (INT 15h, AH=87h), as a RAM disk, or as emulation of expanded memory. The difficulty lay in the incompatibility of older real mode DOS programs with protected mode. They simply could not natively run in this new mode without significant modification. In protected mode, memory management and interrupt handling were done differently than in real mode. In addition, DOS programs typically would directly access data and code segments that did not belong to them, as real mode allowed them to do without restriction; in contrast, the design intent of protected mode was to prevent programs from accessing any segments other than their own unless special access was explicitly allowed. While it was possible to set up a protected mode environment that allowed all programs access to all segments (by putting all segment descriptors into the GDT and assigning them all the same privilege level), this undermined nearly all of the advantages of protected mode except the extended (24-bit) address space. The choice that OS developers faced was either to start from scratch and create an OS that would not run the vast majority of the old programs, or to come up with a version of DOS that was slow and ugly (i.e., ugly from an internal technical viewpoint) but would still run a majority of the old programs. Protected mode also did not provide a significant enough performance advantage over the 8086-compatible real mode to justify supporting its capabilities; actually, except for task switches when multitasking, it actually yielded only a performance disadvantage, by slowing down many instructions through a litany of added privilege checks. In protected mode, registers were still 16-bit, and the programmer was still forced to use a memory map composed of 64k segments, just like in real mode. In January 1985, Digital Research previewed the Concurrent DOS 286 operating system developed in cooperation with Intel. The product would function strictly as an 80286 native mode (i.e. 
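As a rough sketch of the protection idea discussed above (invented values, not the processor's actual microcode), access to a data segment is permitted only when the descriptor's privilege level (DPL) is numerically greater than or equal to both the current privilege level (CPL) and the selector's requested privilege level (RPL), with 0 being most privileged:

```python
# Simplified model of the 286 data-segment privilege check (levels 0-3, 0 most privileged).

def can_access_data_segment(cpl: int, rpl: int, dpl: int) -> bool:
    """A data segment is accessible when DPL >= max(CPL, RPL),
    i.e. the segment is no more privileged than the requester."""
    return dpl >= max(cpl, rpl)

# An OS that loads every descriptor with the same privilege level (as described above)
# effectively disables this protection:
print(can_access_data_segment(cpl=3, rpl=3, dpl=0))  # False - user code blocked from ring-0 data
print(can_access_data_segment(cpl=3, rpl=3, dpl=3))  # True  - everything equally accessible
```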
protected mode) operating system, allowing users to take full advantage of the protected mode to perform multi-user, multitasking operations while running 8086 emulation. This worked on the B-1 prototype step of the chip, but Digital Research discovered problems with the emulation on the production-level C-1 step in May, which would not allow Concurrent DOS 286 to run 8086 software in protected mode. The release of Concurrent DOS 286 was delayed until Intel would develop a new version of the chip. In August, after extensive testing on E-1 step samples of the 80286, Digital Research acknowledged that Intel had corrected all documented 286 errata, but said there were still undocumented chip performance problems with the prerelease version of Concurrent DOS 286 running on the E-1 step. Intel said the approach Digital Research wished to take in emulating 8086 software in protected mode differed from the original specifications. Nevertheless, in the E-2 step, they implemented minor changes in the microcode that would allow Digital Research to run emulation mode much faster. IBM originally chose DR Concurrent DOS 286, under the name IBM 4680 OS, as the basis of their IBM 4680 computer for IBM Plant System products and point-of-sale terminals in 1986. Digital Research's FlexOS 286 version 1.0, a derivation of Concurrent DOS 286, was developed in 1986, introduced in January 1987, and later adopted by IBM for their IBM 4690 OS, but the same limitations affected it. The problems led to Bill Gates famously referring to the 80286 as a "brain dead chip", since it was clear that the new Microsoft Windows environment would not be able to run multiple MS-DOS applications with the 286. It was arguably responsible for the split between Microsoft and IBM, since IBM insisted that OS/2, originally a joint venture between IBM and Microsoft, would run on a 286 (and in text mode). Other operating systems that used the protected mode of the 286 were Microsoft Xenix (around 1984), Coherent, and Minix. These were less hindered by the limitations of the 80286 protected mode because they did not aim to run MS-DOS applications or other real-mode programs. In its successor 80386 chip, Intel enhanced the protected mode to address more memory and also added the separate virtual 8086 mode, a mode within protected mode with much better MS-DOS compatibility, in order to satisfy the diverging needs of the market.

See also
- U80601 – almost identical copy of the 80286, manufactured 1989/90 in East Germany. In the Soviet Union a clone of the 80286 was designated KR1847VM286 (Russian: КР1847ВМ286).
- LOADALL – undocumented 80286/80386 instruction that could be used to gain access to all available memory in real mode.
- iAPX, for the iAPX name

References
- "Microprocessor Hall of Fame". Intel. Archived from the original on 2007-07-06. Retrieved 2007-08-11.
- Official Intel iAPX 286 programmers' manual (page 1-1)
- A simpler cousin in the 8086 line with integrated peripherals, intended for embedded systems.
- "Intel Museum – Microprocessor Hall of Fame". Intel.com. 2009-05-14. Archived from the original on 2009-03-12. Retrieved 2009-06-20.
- "Intel Architecure [sic] Programming and Information". Intel80386.com. 2004-01-13. Retrieved 2009-04-28.
- "80286 Microprocessor Package, 1982". Content.cdlib.org. Retrieved 2009-04-28.
- Edward Foster (26 August 1985). "Intel shows new 80286 chip – Future of DRI's Concurrent DOS 286 still unclear after processor fixed". InfoWorld. InfoWorld Media Group. 7 (34): 21. ISSN 0199-6649.
- Bahadure, Nilesh B. (2010).
"15 Other 16-bit microprocessors 80186 and 80286". Microprocessors: 8086/8088, 80186/80286, 80386/80486 and the Pentium Family. PHI Learning Pvt. Ltd. pp. 503–537. ISBN 8120339428. - "Intel 80286 microprocessor family". CPU-World. Retrieved 19 May 2012. - Petzold, Charles (1986). "Obstacles to a grown up operating system". PC Magazine. 5 (11): 170–74. - Edward Foster (13 May 1985). "Super DOS awaits new 80286 – Concurrent DOS 286 – delayed until Intel upgrades chip – offers Xenix's power and IBM PC compatibility". InfoWorld. InfoWorld Media Group. 7 (19): 17–18. ISSN 0199-6649. - Melissa Calvo and Jim Forbes (1986-02-10). InfoWorld, ed. IBM to use a DRI operating system. p. 12. Retrieved 2011-09-06. - Dewar, Robert B. K.; Smosna, Matthew (1990). Microprocessors: A Programmer's View. New York: McGraw-Hill. ISBN 0-07-016638-2. - Charles Petzold, Intel's 32-bit Wonder: The 80386 Microprocessor, PC Magazine, November 25, 1986, pp. 150-152 - "Soviet microprocessors, microcontrollers, FPU chips and their western analogs". CPU-world. Retrieved 24 March 2016. |Wikimedia Commons has media related to Intel 80286.| - Intel Datasheets - Intel 80286 and 80287 Programmer's Reference Manual at bitsavers.org - Intel 80286 Programmer's Reference Manual 1987(txt). Hint: use e.g. Hebrew (IBM-862) encoding. - Linux on 286 laptops and notebooks - Intel 80286 images and descriptions at cpu-collection.de - CPU-INFO: 80286, in-depth processor history - Overview of all 286 compatible chips
Game Boys may be old tech, but they still provide challenges to modern hackers. [Dhole] has come up with a cartridge emulator which uses an STMicroelectronics STM32F4 discovery board to do all the work. Until now, most flash cartridges used programmable logic devices, either CPLDs or FPGAs, to handle the high-speed logic requirements. [Alex] proved that a microcontroller could emulate a cartridge by using an Arduino to display the "Nintendo" Game Boy boot logo. The Arduino wasn't fast enough to actually handle the high-speed accesses required for game play. [Dhole] kicked the speed up by moving to the ARM Cortex-M4 based 168 MHz STM32F4. The F4's 70 GPIO pins can run via internal peripherals at up to 100 MHz, which is plenty to handle the 1 MHz clock speed of the Game Boy's bus. Logic levels are an issue, as the STM32 uses 3.3V logic while the Game Boy is a 5V device. Thankfully the STM32's inputs are 5V tolerant, so things worked just fine. Simple Game Boy cartridges like Tetris were able to directly map a ROM device into the Game Boy's memory space. More complex titles used memory bank controller (MBC) chips to map sections of ROM and perform other duties. There were several MBC chips used for various titles, but [Dhole] can emulate MBC1, which is compatible with the largest code base. One of the coolest tricks [Dhole] implemented was displaying a custom boot logo. The Game Boy used the "Nintendo" logo as a method of copyright protection. If a cartridge didn't have the logo, the Game Boy wouldn't run. The logo is actually read twice – once to check the copyright info, and once to display it on the screen. By telling the emulator to change the data available at those addresses after the first read, any graphic can be displayed. If you're wondering what a cartridge emulator would be useful for (other than pirating games), you should check out [Jeff Frohwein's] Gameboy Dev page! [Jeff] has been involved in Game Boy development since the early days. There are literally decades of demos and homebrew games out there for the Game Boy and various derivatives. Continue reading "Game Boy Cartridge Emulator Uses STM32"

While [Rob] was digging around in his garage one day, he ran across an old Commodore 64 cartridge. With no ROM to be found online, he started wondering what was stored in this ancient device. Taking a peek at the bits stored in this cartridge would require dumping the entire thing to a modern computer, and armed with an Arduino, he created a simple cart dumper, capable of reading standard 8k cartridges without issue. The expansion port for the C64 has a lot of pins corresponding to the control logic inside these old computers, but the only ones [Rob] was really interested in were the eight data lines and the sixteen address lines. With a little bit of code, [Rob] got an Arduino Mega to step through all the address pins and read the corresponding data at that location in memory. This data is then sent over USB to a C app that dumps everything in HEX and text. While the ROM for just about every C64 game can be found online, [Rob] was unlucky enough to find one that wasn't. It doesn't really matter, though, as we don't know if [Rob] has the 1541 disk drive that makes this cart useful. Still, it's a good reminder of how useful an Arduino can be when used as an electronic Swiss Army knife.

From [Basami Sentaku] in Japan comes this 8-bit harmonica. [Basami] must remember those golden days of playing Famicom (or Nintendo Entertainment System for non-Japanese players).
As the systems aged, the contacts would spread. In the case of the NES, this would often mean the infamous blinking red power light. The solution for millions of players was simple. Take the cartridge out, blow on it, say a few incantations, and try again. In retrospect, blowing on the cartridges probably did more harm than good, but it seemed like a good idea at the time. We'd always assumed that the Famicom, being a top-loading design, was immune from the issues that plagued the horizontal slot on the NES. Either [Basami] spent some time overseas, or he too took to tooting his own cartridge. Blowing into cartridges has inspired a few crafty souls to stuff real harmonicas into cartridge cases. [Basami] took a more electronic route. A row of 8 microphones picks up the player's breath. Each microphone is used to trigger a specific note. The katakana in the video shows the traditional Solfège musical scale: do, re, mi, fa, so, la, ti, do. A microcontroller monitors the signal from each microphone and determines which one is being triggered. The actual sound is created by a Yamaha YMZ294. The '294 is an 18-pin variant of the venerable General Instrument AY-3-8910, a chip long associated with video game music and sound effects. While we're not convinced that the rendition of Super Mario Bros' water theme played in the video wasn't pre-recorded, we are reasonably sure that the hardware is capable of doing everything the video shows. Continue reading "The 8 Bit Harmonica Blows In From Japan"

[Chris Osborn] had an old Atari 800 collecting dust and decided to pull it out and get to work. The problem is that it's seen some rough storage conditions over the years, including what appears to be moisture damage. He's read about a cartridge called SALT II which can run automatic diagnostics. Getting your hands on that original hardware can be almost impossible, but if he had a flashable cartridge he could just download an image. So he bought the cheapest cartridge he could find and modified it to use an EPROM. When he cracked open his new purchase he was greeted with what you see on the left. It's a PCB with the edge connector and two 24-pin sockets. These are designed to take 4k ROMs. He dropped in an EPROM of the same size, but the pin-out doesn't match what the board layout had in mind. After following the traces he found that it is pretty much an exact match for an Intel 2764 chip. The one problem is that the chip has 28 pins, four too many for the footprint. The interesting thing is that the larger footprint (compared to the 2732) uses all the same pins, simply adding to the top and moving the power pins. A small amount of jumper wire soldering and [Chris] is in business.

We've sure been seeing a lot of original NES cases used in projects lately. This time around the thing still plays the original cartridges. This was one of the main goals [Maenggu] set for himself when integrating the LCD screen with the gaming console. There is a quick video clip which shows off the functionality of the device. It's embedded after the break along with a few extra images. To our eye the NES looks completely unmodified when the case is closed. The cartridge slot still accepts games, but you don't have to lower the frame into place once that cartridge has been inserted. The image above shows a ribbon cable connecting the top and bottom halves of the build. It routes the signals for both the LCD screen and the cartridge adapter to the hardware in the base.
He mentions that he used the original power supply. We’re not sure if the original motherboard is used as well or if this is using some type of emulator. Continue reading “Hinged NES case hides an integrated LCD screen” [Dan] has his own Stratasys Dimension SST 768 3D printer. It’s a professional grade machine which does an amazing job. But when it comes time to replace the cartridge he has to pay the piper to the tune of $260. He can buy ABS filament for about $50 per kilogram, so he set out to refill his own P400 cartridges. Respooling the cartridge must be quite easy because he doesn’t describe the process at all. But the physical act of refilling it doesn’t mean you can keep using it. The cartridge and the printer both store usage information that prevents this type of DIY refill; there’s an EEPROM in the cartridge and a log file on the printer’s hard drive. [Dan] pulled the hard drive out and used a Live CD to make an image. He loaded the image in a virtual machine, made some changes to enable SSH and zap the log file at each boot, then loaded the image back onto the printer’s drive. A script that he wrote is able to backup and rewrite the EEPROM chip, which basically rolls back the ‘odometer’ on how much filament has been used. [Ed] needed a bunch of edge connectors for video game cartridges. He was unable to source parts for Neo Geo Pocket games and ended up building his own from PCI sockets. But it sounds like this technique would work with other console cartridges as well. From the picture you can see that this is a bit more involved than just slapping a cartridge into a socket. Because there are multiple steps, and many connectors were needed, [Ed’s] dad lent a hand and built a few jigs to help with the cutting. The first step was to cut off the key and the narrow end of the socket. These NGP cartridges are one-sided, so the socket was cut in half using a board with a dado cut in it as a jig. From there the plastic bits can be cleaned up before pulling out two center pins and cutting a groove to receive the cartridge key. There are also two shoulder cuts that need to be made after trimming the piece to length. The video after the break will walk you through this whole process. These PCI sockets are versatile. One of our other favorite hacks used them to make SOIC programming clips. Continue reading “Machining cartridge connectors from PCI sockets”
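Circling back to the Game Boy cartridge emulator at the top of this section: the MBC1 bank switching and the boot-logo swap it describes are easy to model in a few lines. This is only an illustrative sketch of the logic (in Python rather than the STM32 firmware's C, with invented helper names), not [Dhole]'s actual code:

```python
# Illustrative model of MBC1-style ROM banking and the boot-logo swap trick.
# Simplified: a real MBC1 also handles RAM banking, mode select, and upper bank bits.

ROM_BANK_SIZE = 0x4000  # 16 KiB banks

class CartridgeEmulator:
    def __init__(self, rom_bytes, custom_logo=None):
        self.rom = rom_bytes
        self.rom_bank = 1              # bank mapped at 0x4000-0x7FFF
        self.custom_logo = custom_logo # optional 48-byte replacement for the second logo read
        self.logo_passes = 0

    def write(self, addr, value):
        # Writes to 0x2000-0x3FFF select the switchable ROM bank (MBC1 uses the low 5 bits).
        if 0x2000 <= addr <= 0x3FFF:
            bank = value & 0x1F
            self.rom_bank = bank if bank != 0 else 1  # bank 0 maps to bank 1

    def read(self, addr):
        # The Nintendo logo lives at 0x0104-0x0133 and is read twice at boot:
        # first for the copyright check, then again to draw it on screen.
        if 0x0104 <= addr <= 0x0133 and self.custom_logo is not None:
            serve_custom = self.logo_passes >= 1   # second pass onward gets the custom art
            if addr == 0x0133:                     # finished one full pass over the logo
                self.logo_passes += 1
            if serve_custom:
                return self.custom_logo[addr - 0x0104]
        if addr < 0x4000:                          # fixed bank 0
            return self.rom[addr]
        if addr < 0x8000:                          # switchable bank
            return self.rom[self.rom_bank * ROM_BANK_SIZE + (addr - 0x4000)]
        return 0xFF
```

On the real hardware the firmware watches the bus for the two passes over 0x0104–0x0133 and changes what it drives onto the data lines in between; the same idea, just in C at bus speed.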
Malignant Mesothelioma Mortality — United States, 1999–2015
Weekly / March 3, 2017 / 66(8);214–218

Summary

What is already known about this topic?
Malignant mesothelioma is a neoplasm associated with inhalation exposure to asbestos fibers and other elongate mineral particles (EMPs). The median survival after malignant mesothelioma diagnosis is approximately 1 year. The latency period between the first exposure to asbestos fibers or other EMPs and mesothelioma development ranges from 20 to 71 years. Occupational exposure has occurred in industrial operations including mining and milling, manufacturing, shipbuilding and repair, and construction. Current occupational exposure occurs predominantly during maintenance and remediation of asbestos-containing buildings. The projected number of malignant mesothelioma deaths was expected to increase to 3,060 annually by 2001–2005, and after 2005, mortality was projected to decrease.

What is added by this report?
During 1999–2015, a total of 45,221 malignant mesothelioma deaths were reported, increasing from 2,479 (1999) to 2,597 (2015). Mesothelioma deaths increased for persons aged ≥85 years, for both sexes, persons of white, black and Asian or Pacific Islander race, and all ethnic groups. Continuing occurrence of malignant mesothelioma deaths in persons aged <55 years suggests ongoing inhalation exposure to asbestos fibers and possibly other causative EMPs.

What are the implications for public health practice?
Despite regulatory actions and decline in asbestos use, the annual number of malignant mesothelioma deaths remains substantial. Contrary to past projections, the number of malignant mesothelioma deaths has been increasing. The continuing occurrence of mesothelioma deaths, particularly among younger populations, underscores the need for maintaining efforts to prevent exposure and for ongoing surveillance to monitor temporal trends.

Malignant mesothelioma is a neoplasm associated with occupational and environmental inhalation exposure to asbestos* fibers and other elongate mineral particles (EMPs) (1–3). Patients have a median survival of approximately 1 year from the time of diagnosis (1). The latency period from first causative exposure to malignant mesothelioma development typically ranges from 20 to 40 years but can be as long as 71 years (2,3). Hazardous occupational exposures to asbestos fibers and other EMPs have occurred in a variety of industrial operations, including mining and milling, manufacturing, shipbuilding and repair, and construction (3). Current exposures to commercial asbestos in the United States occur predominantly during maintenance operations and remediation of older buildings containing asbestos (3,4). To update information on malignant mesothelioma mortality (5), CDC analyzed annual multiple cause-of-death records† for 1999–2015, the most recent years for which complete data are available. During 1999–2015, a total of 45,221 deaths with malignant mesothelioma mentioned on the death certificate as the underlying or contributing cause of death were reported in the United States, increasing from 2,479 deaths in 1999 to 2,597 in 2015 (in the same time period the age-adjusted death rates§ decreased from 13.96 per million in 1999 to 10.93 in 2015). Malignant mesothelioma deaths increased for persons aged ≥85 years, both sexes, persons of white, black, and Asian or Pacific Islander race, and all ethnic groups.
Despite regulatory actions and the decline in use of asbestos, the annual number of malignant mesothelioma deaths remains substantial. The continuing occurrence of malignant mesothelioma deaths underscores the need for maintaining measures to prevent exposure to asbestos fibers and other causative EMPs and for ongoing surveillance to monitor temporal trends. For this report, malignant mesothelioma deaths during 1999–2015 were identified from death certificates and included deaths for which International Classification of Diseases (ICD), 10th Revision codes for malignant mesothelioma¶ were listed as either the underlying or contributing cause of death in the multiple cause-of-death mortality data. The analysis was restricted to deaths of persons aged ≥25 years, as they were more likely to have been occupationally exposed than were younger decedents. Age-adjusted death rates per 1 million persons aged ≥25 years by demographics, neoplasm anatomical site, and year were calculated using the 2000 U.S. Census standard population estimate. Industry and occupation information was available from death certificates for decedents reported from 23 states for 1999, 2003, 2004, and 2007, and was coded** using the U.S. Census 2000 Industry and Occupation Classification System. Proportionate mortality ratios (PMRs)†† for malignant mesothelioma by industry and occupation were calculated. Confidence intervals (CIs) were calculated assuming a Poisson distribution of the data. During 1999–2015, a total of 45,221 deaths with malignant mesothelioma mentioned on the death certificate as the underlying or contributing cause of death among persons aged ≥25 years were reported in the United States; 16,914 (37.4%) occurred among persons aged 75–84 years, 36,093 (79.8%) occurred among males, 42,778 (94.6%) among whites, and 43,316 (95.8%) among non-Hispanics (Table 1). Malignant mesothelioma was classified as mesothelioma of pleura (3,351; 7.4%), peritoneum (1,854; 4.1%), pericardium (74; 0.2%), other anatomic site (5,280; 11.7%), and unspecified anatomic site (35,068; 77.5%). Among 42,470 (93.9%) decedents, malignant mesothelioma was coded as the underlying§§ cause of death. During 1999–2015, the annual number of malignant mesothelioma deaths increased 4.8% overall, from 2,479 in 1999 to 2,597 in 2015 (p-value for linear time trend <0.001). The number of malignant mesothelioma deaths increased among persons aged ≥85 years, both sexes, white, black, and Asian or Pacific Islander race, and all ethnic groups; and patients with mesothelioma of the peritoneum and unspecified anatomic site. Malignant mesothelioma deaths decreased among persons aged 35–44, 45–54, and 55–64 years, and among persons with mesothelioma of the pleura and other anatomic sites. During 1999–2015, the mesothelioma age-adjusted death rate decreased 21.7% from 13.96 per million population (1999) to 10.93 (2015) (p-value for time trend <0.001). This trend in the standardized rate is a weighted average of the trends in the age-specific rates and masks the differences in individual age groups. The age-specific death rate decreased significantly among persons aged 45–54 (p<0.001), 55–64 (p<0.001), and 65–74 (p<0.001) years and increased significantly among persons aged ≥85 years (p<0.001). During 1999–2015, the annualized state mesothelioma age-adjusted death rate exceeded 20 per million per year in two states: Maine (22.06) and Washington (20.10) (Figure).
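The two summary measures used above are straightforward to compute. The sketch below is illustrative only, with made-up counts and weights rather than the report's data; it shows direct age-adjustment of a rate to a standard population and a proportionate mortality ratio with an approximate 95% confidence interval assuming Poisson counts:

```python
# Illustrative calculations only; the rates, populations, and counts below are invented,
# not the values behind Table 1 or Table 2 of the report.
from math import sqrt

def age_adjusted_rate(age_specific, standard_pop):
    """Direct standardization: weight each age-specific rate (per 1,000,000)
    by the standard population's share in that age group."""
    total_std = sum(standard_pop.values())
    return sum(rate * standard_pop[age] / total_std for age, rate in age_specific.items())

def pmr_with_ci(observed, expected, z=1.96):
    """PMR = observed / expected mesothelioma deaths in an industry or occupation.
    The CI here uses a simple normal approximation to the Poisson count."""
    pmr = observed / expected
    lower = max(observed - z * sqrt(observed), 0) / expected
    upper = (observed + z * sqrt(observed)) / expected
    return pmr, lower, upper

# Hypothetical age-specific rates (per million) and a standard population.
rates = {"25-44": 0.5, "45-64": 8.0, "65-84": 45.0, "85+": 60.0}
std_pop = {"25-44": 83_000_000, "45-64": 62_000_000, "65-84": 30_000_000, "85+": 4_000_000}
print(round(age_adjusted_rate(rates, std_pop), 2))

# Hypothetical industry: 40 observed mesothelioma deaths where 6.0 were expected.
print(tuple(round(x, 1) for x in pmr_with_ci(40, 6.0)))
```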
Industry and occupation data were available for 1,830 (96.3%) of 1,900 malignant mesothelioma deaths that occurred in residents of 23 states during 1999, 2003, 2004, and 2007 (Table 2).¶¶ Among 207 industries and 274 occupations, significantly elevated PMRs for malignant mesothelioma were found for 11 industries and 17 occupations. By industry, the highest PMRs were for ship and boat building and repairing (6.7; 95% CI = 4.3–9.9); petroleum refining (4.1; CI = 2.6–6.0); and industrial and miscellaneous chemicals (3.8; CI = 2.9–5.0). By occupation, the highest PMRs were for insulation workers (26.9; CI = 16.2–42.0); chemical technicians (4.9; CI = 2.1–9.6); and pipelayers, plumbers, pipefitters, and steamfitters (4.8; CI = 3.7–6.1). The annual number of malignant mesothelioma deaths is increasing, particularly among persons aged ≥85 years, most likely representing exposure many years ago. However, although malignant mesothelioma deaths decreased in persons aged 35–64 years, the continuing occurrence of mesothelioma deaths among persons aged <55 years suggests ongoing occupational and environmental exposures to asbestos fibers and other causative EMPs, despite regulatory actions by the Occupational Safety and Health Administration (OSHA)*** and the Environmental Protection Agency††† aimed at limiting asbestos exposure. OSHA established a permissible exposure limit for asbestos of 12 fibers per cubic centimeter (f/cc) of air as an 8-hour time-weighted average in 1971. This initial permissible exposure limit was reduced to 5 f/cc in 1972, 2 f/cc in 1976, 0.2 f/cc in 1986, and 0.1 f/cc in 1994 (6). Although inspection data during 1979–2003 indicated a general decline in the proportion of samples exceeding designated occupational exposure limits, 20% of air samples collected in the construction industry in 2003 for compliance purposes exceeded the OSHA permissible exposure limit. Moreover, asbestos products remain in use, and new asbestos-containing products continue to be manufactured in or imported§§§ into the United States. Although most deaths from malignant mesothelioma in the United States are the result of exposures to asbestos 20–40 years prior, new cases might result from occupational exposure to asbestos fibers during maintenance activities, demolition and remediation of existing asbestos in structures, installations, and buildings if controls are insufficient to protect workers. The OSHA asbestos standard describes engineering and work practice controls (e.g., use of wet methods, local exhaust ventilation, and vacuum cleaners equipped with high-efficiency particulate air [HEPA] filters) during asbestos handling, mixing, removal, cutting, application, and cleanup and requires the use of respiratory protection if these controls are not sufficient to reduce employee exposure to levels at or below the permissible limit. Moreover, family members of workers engaged in activities placing them at risk for asbestos exposures also have the potential for exposure to asbestos (3). In addition, ongoing research is focusing on the potential nonoccupational and environmental exposures to asbestos fibers and other EMPs (e.g., erionite, a naturally occurring fibrous mineral that belongs to a group of minerals called zeolites), and nonmineral elongate particles (e.g., carbon nanotubes) to assess exposures and potential health risks (7,8). 
Among the 96.3% of deaths in 23 states for which industry and occupation were known, shipbuilding and construction industries were major contributors to malignant mesothelioma mortality (4). The large number of deaths among construction workers is consistent with large number of construction workers with prior direct and indirect exposure to asbestos fibers through most of the 20th century (the construction industry accounted for 70%–80% of asbestos consumption) (4). For example, direct exposure to asbestos has occurred during installation of asbestos-cement pipes, asbestos-cement sheets, architectural panels, built-up roofing, and removal of roofing felts or asbestos insulation. Workers also might have been exposed to asbestos during spraying of asbestos insulation in multistoried structures during 1958–1972 (asbestos-containing materials were banned for fireproofing/insulating in 1973) (4). In addition, workers in other occupations (e.g., carpenters, electricians, pipefitters, plumbers, welders) might also have been exposed if they were present on-site during spraying activities. A review of studies projecting the number of deaths from asbestos-related malignant mesothelioma in the United States indicated that the number of deaths during 1985–2009 would range from 620 to 3,270 annually (9). Based on an estimated 27.5 million workers with some exposure to asbestos during 1940–1972, a 1982 study estimated that the number of malignant mesothelioma deaths would rise to 3,060 annually by 2001–2005 (4). After 2005, mortality was projected to decrease but would continue for three decades. Based on asbestos consumption and malignant mesothelioma incidence data, it was estimated that the number of mesothelioma cases among males would peak during 2000–2004 (approximately 2,000 cases) and after that period, the number of mesothelioma cases was expected to decline and return to background levels by 2055 (10). The number of mesothelioma cases among females (approximately 560 in 2003) was projected to increase slightly over time. The results of the current study indicate an increase in the number of malignant mesothelioma deaths during 1999–2015. This discrepancy might be explained, in part, by the methodology of the projection studies, which were based on multiple assumptions including variations in the number of employed workers at risk, exposure levels and timing, and the linear dose–response relationship between asbestos exposure¶¶¶ and malignant mesothelioma. Moreover, additional persons who might have been exposed to asbestos and be at risk for malignant mesothelioma (e.g., family contacts of asbestos-exposed workers, persons exposed to naturally occurring asbestos, persons exposed to asbestos in surfacing materials or as fireproofing material in buildings) were not considered (4,10). The findings in this report are subject to at least five limitations. First, information on exposure to asbestos or a specific work history was not available to assess the potential source of exposure. The industry and occupation listed on a death certificate might not be the industry and occupation in which the decedent's exposures occurred. Second, the state issuing a death certificate might not be the state or country in which the decedent's exposures occurred. Third, malignant mesothelioma did not have a discrete ICD code until the 10th revision of the ICD; thus, evaluation of mortality trends before 1999 was not possible. 
Fourth, some mesothelioma cases might not be included in this analysis because of misdiagnosis and the use of incorrect ICD-10 codes (1). Finally, information on decedents’ industry and occupation was available only for selected states of residence and years, and might not be nationally representative. Despite regulatory actions and the decline in use of asbestos, the annual number of malignant mesothelioma deaths remains substantial. Effective asbestos exposure prevention strategies for employers recommended by OSHA and CDC’s National Institute for Occupational Safety and Health (https://www.cdc.gov/niosh/topics/asbestos/) are available. The continuing occurrence of malignant mesothelioma deaths underscores the need for maintaining asbestos exposure prevention efforts and for ongoing surveillance to monitor temporal trends. Robert Cohen, MD, School of Public Health, University of Illinois at Chicago; Martin Harper, PhD, Health Effects Laboratory Division, National Institute for Occupational Safety and Health, CDC. Corresponding author: Jacek M. Mazurek, [email protected], 304-285-5983. * “Asbestos” is a term used for certain minerals that have crystallized in a particular macroscopic habit with certain commercially useful properties. “Asbestiform” is a term applied to minerals with a macroscopic habit similar to that of asbestos. https://www.cdc.gov/niosh/docs/2011-159/pdfs/2011-159.pdf. † CDC WONDER. https://wonder.cdc.gov/. § Age-adjusted death rates were calculated by applying age-specific death rates to the 2000 U.S. Census standard population age distribution. https://wonder.cdc.gov/wonder/help/mcd.html#Age-Adjusted Rates. ¶ ICD-10 codes C45.0 (mesothelioma of pleura), C45.1 (mesothelioma of peritoneum), C45.2 (mesothelioma of pericardium), C45.7 (mesothelioma of other sites), and C45.9 (mesothelioma, unspecified). The death counts reported are the number of times each specific cause of death is mentioned in the record, or any mention of the specified cause of death. Up to 20 causes can be indicated on any single death certificate. †† PMR was defined as the observed number of deaths with malignant mesothelioma in a specified industry/occupation, divided by the expected number of deaths with malignant mesothelioma. The expected number of deaths was the total number of deaths in industry or occupation of interest multiplied by a proportion defined as the number of malignant mesothelioma deaths in all industries and/or occupations, divided by the total number of deaths in all industries/occupations. The malignant mesothelioma PMRs were internally adjusted by 5-year age groups, gender, and race. §§ Underlying cause of death is defined as the disease or injury that initiated the chain of morbid events leading directly to death, or the circumstances of the accident or violence which produced the fatal injury. ¶¶ For 70 residents of these 23 states, deaths have occurred in states that did not provide the industry and occupation information to the National Institute for Occupational Safety and Health. *** U.S. Department of Labor. OSHA. Asbestos. OSHA Standards (29 CFR 1910, 1915, and 1926). https://www.osha.gov/SLTC/asbestos/standards.html. ††† U.S. Environmental Protection Agency (EPA). Asbestos Laws and Regulations (https://www.epa.gov/asbestos/asbestos-laws-and-regulations) and EPA Asbestos Materials Bans: Clarification. May 18, 1999 (http://www.ewg.org/asbestos/documents/pdf/asb-bans2.pdf). On July 12, 1989, EPA issued a final rule banning most asbestos-containing products. 
In 1991, many aspects of this standard were set aside by the U.S. Fifth Circuit Court of Appeals. The following specific asbestos-containing products remain banned: flooring felt, rollboard, and corrugated, commercial, or specialty paper (58 Federal Register 58964). In addition, the regulation continues to ban the use of asbestos in products that have not historically contained asbestos, otherwise referred to as “new uses” of asbestos. Asbestos-containing product categories no longer subject to the 1989 ban include asbestos-cement corrugated sheet, asbestos-cement flat sheet, asbestos clothing, pipeline wrap, roofing felt, vinyl-asbestos floor tile, asbestos-cement shingle, millboard, asbestos-cement pipe, automatic transmission components, clutch facings, friction materials, disc brake pads, drum brake linings, brake blocks, gaskets, non-roofing coatings, and roof coatings. §§§ In the United States, approximately 340 metric tons of asbestos were imported in 2016; nearly all asbestos was used by the chloralkali industry to manufacture semipermeable diaphragms. An unknown quantity of asbestos was imported within manufactured products, including brake linings and pads, building materials, gaskets, millboard, and yarn and thread, among others (Flanagan DM. U.S. Geological Survey, 2017, Mineral Commodity Summaries. Asbestos. https://minerals.usgs.gov/minerals/pubs/mcs/2017/mcs2017.pdf). ¶¶¶ Malignant mesothelioma can develop after short-term asbestos exposures of only a few weeks, and from very low levels of exposure. There is no evidence of a threshold level below which there is no risk for mesothelioma. The risk for mesothelioma increases with intensity and duration of asbestos exposure. - Lemen RA. Mesothelioma from asbestos exposures: epidemiologic patterns and impact in the United States. J Toxicol Environ Health B Crit Rev 2016;19:250–65. CrossRef PubMed - Lanphear BP, Buncher CR. Latent period for malignant mesothelioma of occupational origin. J Occup Med 1992;34:718–21. PubMed - National Institute for Occupational Safety and Health. Current intelligence bulletin 62. Asbestos fibers and other elongate mineral particles: state of the science and roadmap for research. Cincinnati, Ohio: US Department of Health and Human Services, CDC, National Institute for Occupational Safety and Health; 2011.https://www.cdc.gov/niosh/docs/2011-159/pdfs/2011-159.pdf - Nicholson WJ, Perkel G, Selikoff IJ. Occupational exposure to asbestos: population at risk and projected mortality—1980–2030. Am J Ind Med 1982;3:259–311. CrossRef PubMed - Bang KM, Mazurek JM, Storey E, Attfield MD, Schleiff PL, Wassell JT. Malignant mesothelioma mortality—United States, 1999–2005. MMWR Morb Mortal Wkly Rep 2009;58:393–6. PubMed - Martonik JF, Nash E, Grossman E. The history of OSHA’s asbestos rule makings and some distinctive approaches that they introduced for regulating occupational exposure to toxic substances. AIHAJ 2001;62:208–17. CrossRef PubMed - Carlin DJ, Larson TC, Pfau JC, et al. Current research and opportunities to address environmental asbestos exposures. Environ Health Perspect 2015;123:A194–7. CrossRef PubMed - Gordon RE, Fitzgerald S, Millette J. Asbestos in commercial cosmetic talcum powder as a cause of mesothelioma in women. Int J Occup Environ Health 2014;20:318–32. CrossRef PubMed - Lilienfeld DE, Mandel JS, Coin P, Schuman LM. Projection of asbestos related diseases in the United States, 1985–2009. I. Cancer. Br J Ind Med 1988;45:283–91. PubMed - Price B, Ware A. 
Mesothelioma trends in the United States: an update based on Surveillance, Epidemiology, and End Results Program data for 1973 through 2003. Am J Epidemiol 2004;159:107–12. CrossRef PubMed
TABLE 1. Malignant mesothelioma deaths and age-adjusted rates* among decedents aged ≥25 years, by selected characteristics — United States, 1999–2015
|Characteristics||No. of deaths||Death rate|
|Black or African American||1,870||5.84|
|Asian or Pacific Islander||440||3.52|
FIGURE. Malignant mesothelioma annualized age-adjusted death rate* per 1 million population,† by state — United States, 1999–2015
* Age-adjusted death rates were calculated by applying age-specific death rates to the 2000 U.S. standard population age distribution (https://wonder.cdc.gov/wonder/help/mcd.html#Age-Adjusted Rates). In two states (Maine and Washington), the age-adjusted death rate exceeded 20 per million per year.
† Decedents aged ≥25 years for whom the International Classification of Diseases, 10th Revision codes C45.0 (mesothelioma of pleura), C45.1 (mesothelioma of peritoneum), C45.2 (mesothelioma of pericardium), C45.7 (mesothelioma of other sites), or C45.9 (mesothelioma, unspecified) were listed on death certificates were identified using CDC multiple cause-of-death data for 1999–2015.
TABLE 2. Industries and occupations with significantly elevated proportionate mortality ratios, 1,830 malignant mesothelioma decedents aged ≥25 years — 23 states,* 1999, 2003, 2004, and 2007
|Characteristic||No. of deaths||PMR† (95% CI)|
|Ship and boat building||24||6.7 (4.3–9.9)|
|Petroleum refining||25||4.1 (2.6–6.0)|
|Industrial and miscellaneous chemicals||58||3.8 (2.9–5.0)|
|Labor unions||7||3.7 (1.5–7.6)|
|Miscellaneous nonmetallic mineral product manufacturing||5||3.6 (1.2–8.4)|
|Electric and gas and other combinations||7||3.1 (1.3–6.5)|
|Water transportation||12||2.3 (1.2–3.9)|
|Electric power generation transmission and distribution||24||2.2 (1.4–3.3)|
|U.S. Navy||11||2.0 (1.0–3.6)|
|Architectural, engineering, and related services||23||1.9 (1.2–2.8)|
|All other industries||1,312||—|
|Insulation workers||19||26.9 (16.2–42.0)|
|Chemical technicians||8||4.9 (2.1–9.6)|
|Pipelayers, plumbers, pipefitters, and steamfitters||67||4.8 (3.7–6.1)|
|Chemical engineers||12||4.0 (2.1–7.1)|
|Sheet metal workers||17||3.5 (2.0–5.5)|
|Sailors and marine oilers||5||3.4 (1.1–8.0)|
|Structural iron and steel workers||10||3.3 (1.6–6.0)|
|Stationary engineers and boiler operators||15||2.9 (1.6–4.8)|
|Welding, soldering, and brazing workers||30||2.1 (1.4–3.0)|
|Construction managers||37||2.0 (1.4–2.8)|
|Engineers, all other||12||2.0 (1.0–3.5)|
|Mechanical engineers||14||1.9 (1.0–3.2)|
|First-line supervisors or managers of mechanics, installers, and repairers||27||1.8 (1.2–2.6)|
|First-line supervisors or managers of production and operating workers||40||1.4 (1.0–2.0)|
|All other occupations||1,362||—|
Suggested citation for this article: Mazurek JM, Syamlal G, Wood JM, Hendricks SA, Weston A. Malignant Mesothelioma Mortality — United States, 1999–2015. MMWR Morb Mortal Wkly Rep 2017;66:214–218. DOI: http://dx.doi.org/10.15585/mmwr.mm6608a3. Use of trade names and commercial sources is for identification only and does not imply endorsement by the U.S. Department of Health and Human Services. References to non-CDC sites on the Internet are provided as a service to MMWR readers and do not constitute or imply endorsement of these organizations or their programs by CDC or the U.S. 
Department of Health and Human Services. CDC is not responsible for the content of pages found at these sites. URL addresses listed in MMWR were current as of the date of publication. All HTML versions of MMWR articles are generated from final proofs through an automated process. This conversion might result in character translation or format errors in the HTML version. Users are referred to the electronic PDF version (https://www.cdc.gov/mmwr) and/or the original MMWR paper copy for printable versions of official text, figures, and tables. Questions or messages regarding errors in formatting should be addressed to [email protected].
|Synonyms||Intellectual developmental disability (IDD), general learning disability|
|Frequency||153 million (2015)|
Children with intellectual disabilities or other developmental conditions can compete in the Special Olympics.
Intellectual disability (ID), also known as general learning disability and formerly called mental retardation (MR), is a generalized neurodevelopmental disorder characterized by significantly impaired intellectual and adaptive functioning. It is defined by an IQ score under 70 in addition to deficits in two or more adaptive behaviors that affect everyday, general living. Once focused almost entirely on cognition, the definition now includes both a component relating to mental functioning and one relating to individuals' functional skills in their environments. As a result of this focus on the person's abilities in practice, a person with an unusually low IQ may not be considered to have intellectual disability. Intellectual disability is subdivided into syndromic intellectual disability, in which intellectual deficits associated with other medical and behavioral signs and symptoms are present, and non-syndromic intellectual disability, in which intellectual deficits appear without other abnormalities. Down syndrome and fragile X syndrome are examples of syndromic intellectual disabilities. Intellectual disability affects about 2–3% of the general population. Seventy-five to ninety percent of the affected people have mild intellectual disability. Non-syndromic or idiopathic cases account for 30–50% of cases. About a quarter of cases are caused by a genetic disorder, and about 5% of cases are inherited from a person's parents. Cases of unknown cause affect about 95 million people as of 2013. The terms used for this condition are subject to a process called the euphemism treadmill. This means that whatever term is chosen for this condition, it eventually becomes perceived as an insult. The terms mental retardation and mentally retarded were invented in the middle of the 20th century to replace the previous set of terms, which were deemed to have become offensive. By the end of the 20th century, these terms themselves had come to be widely seen as disparaging, politically incorrect, and in need of replacement. The term intellectual disability is now preferred by most advocates and researchers in most English-speaking countries. As of 2015, the term "mental retardation" is still used by the World Health Organization in the ICD-10 codes, which have a section titled "Mental Retardation" (codes F70–F79). In the next revision, the ICD-11 is expected to replace the term mental retardation with either intellectual disability or intellectual developmental disorder, which the DSM-5 already uses. Because of its specificity and lack of confusion with other conditions, the term "mental retardation" is still sometimes used in professional medical settings around the world, such as formal scientific research and health insurance paperwork.
Signs and symptoms
Intellectual disability (ID) begins during childhood and involves deficits in mental abilities, social skills, and core activities of daily living (ADLs) when compared to same-aged peers. 
There often are no physical signs of mild forms of ID, although there may be characteristic physical traits when it is associated with a genetic disorder (e.g., Down syndrome). The level of impairment ranges in severity for each person. Some of the early signs can include:
- Delays in reaching or failure to achieve milestones in motor skills development (sitting, crawling, walking)
- Slowness learning to talk or continued difficulties with speech and language skills after starting to talk
- Difficulty with self-help and self-care skills (e.g., getting dressed, washing, and feeding themselves)
- Poor planning or problem solving abilities
- Behavioral and social problems
- Failure to grow intellectually or continued infant-like behavior
- Problems keeping up in school
- Failure to adapt or adjust to new situations
- Difficulty understanding and following social rules
In early childhood, mild ID (IQ 50–69) may not be obvious or identified until children begin school. Even when poor academic performance is recognized, it may take expert assessment to distinguish mild intellectual disability from specific learning disability or emotional/behavioral disorders. People with mild ID are capable of learning reading and mathematics skills to approximately the level of a typical child aged nine to twelve. They can learn self-care and practical skills, such as cooking or using the local mass transit system. As individuals with intellectual disability reach adulthood, many learn to live independently and maintain gainful employment. Moderate ID (IQ 35–49) is nearly always apparent within the first years of life. Speech delays are particularly common signs of moderate ID. People with moderate intellectual disability need considerable supports in school, at home, and in the community in order to fully participate. While their academic potential is limited, they can learn simple health and safety skills and to participate in simple activities. As adults, they may live with their parents, in a supportive group home, or even semi-independently with significant supportive services to help them, for example, manage their finances; they may also work in a sheltered workshop. People with severe or profound ID need more intensive support and supervision their entire lives. They may learn some ADLs, but an intellectual disability is considered severe or profound when individuals are unable to independently care for themselves without ongoing significant assistance from a caregiver throughout adulthood. Individuals with profound ID are completely dependent on others for all ADLs and to maintain their physical health and safety, although they may be able to learn to participate in some of these activities to a limited degree.
Among children, the cause of intellectual disability is unknown for one-third to one-half of cases. About 5% of cases are inherited from a person's parents. Genetic defects that cause intellectual disability but are not inherited can be caused by accidents or mutations in genetic development. Examples of such accidents are development of an extra chromosome 18 (trisomy 18) and Down syndrome, which is the most common genetic cause. Velocardiofacial syndrome and fetal alcohol spectrum disorders are the two next most common causes. However, doctors have found many other causes. The most common are:
- Genetic conditions. Sometimes disability is caused by abnormal genes inherited from parents, errors when genes combine, or other reasons. 
The most prevalent genetic conditions include Down syndrome, Klinefelter syndrome, Fragile X syndrome (common among boys), neurofibromatosis, congenital hypothyroidism, Williams syndrome, phenylketonuria (PKU), and Prader–Willi syndrome. Other genetic conditions include Phelan-McDermid syndrome (22q13del), Mowat–Wilson syndrome, genetic ciliopathy, and Siderius type X-linked intellectual disability (OMIM 300263) as caused by mutations in the PHF8 gene (OMIM 300560). In the rarest of cases, abnormalities with the X or Y chromosome may also cause disability. 48,XXXX and 49,XXXXX syndromes affect a small number of girls worldwide, while boys may be affected by 49,XXXXY or 49,XYYYY. 47,XYY is not associated with significantly lowered IQ, though affected individuals may have slightly lower IQs than non-affected siblings on average.
- Problems during pregnancy. Intellectual disability can result when the fetus does not develop properly. For example, there may be a problem with the way the fetus' cells divide as it grows. A pregnant person who drinks alcohol (see fetal alcohol spectrum disorder) or gets an infection like rubella during pregnancy may also have a baby with intellectual disability.
- Problems at birth. If a baby has problems during labor and birth, such as not getting enough oxygen, he or she may have developmental disability due to brain damage.
- Exposure to certain types of disease or toxins. Diseases like whooping cough, measles, or meningitis can cause intellectual disability if medical care is delayed or inadequate. Exposure to poisons like lead or mercury may also affect mental ability.
- Iodine deficiency, affecting approximately 2 billion people worldwide, is the leading preventable cause of intellectual disability in areas of the developing world where iodine deficiency is endemic. Iodine deficiency also causes goiter, an enlargement of the thyroid gland. More common than full-fledged cretinism, as intellectual disability caused by severe iodine deficiency is called, is mild impairment of intelligence. Certain areas of the world are severely affected due to natural deficiency and governmental inaction. India is the most outstanding, with 500 million suffering from deficiency, 54 million from goiter, and 2 million from cretinism. Among other nations affected by iodine deficiency, China and Kazakhstan have instituted widespread iodization programs, whereas, as of 2006, Russia had not.
- Malnutrition is a common cause of reduced intelligence in parts of the world affected by famine, such as Ethiopia.
- Absence of the arcuate fasciculus.
According to both the American Association on Intellectual and Developmental Disabilities (in Intellectual Disability: Definition, Classification, and Systems of Supports, 11th edition) and the American Psychiatric Association Diagnostic and Statistical Manual of Mental Disorders (DSM-IV), three criteria must be met for a diagnosis of intellectual disability: significant limitation in general mental abilities (intellectual functioning), significant limitations in one or more areas of adaptive behavior across multiple environments (as measured by an adaptive behavior rating scale, i.e. communication, self-help skills, interpersonal skills, and more), and evidence that the limitations became apparent in childhood or adolescence. In general, people with intellectual disability have an IQ below 70, but clinical discretion may be necessary for individuals who have a somewhat higher IQ but severe impairment in adaptive functioning. 
It is formally diagnosed by an assessment of IQ and adaptive behavior. A third criterion, onset during the developmental period, is used to distinguish intellectual disability from other conditions, such as dementia (including Alzheimer's disease) or traumatic brain injuries. The first English-language IQ test, the Stanford–Binet Intelligence Scales, was adapted from a test battery designed for school placement by Alfred Binet in France. Lewis Terman adapted Binet's test and promoted it as a test measuring "general intelligence." Terman's test was the first widely used mental test to report scores in "intelligence quotient" form ("mental age" divided by chronological age, multiplied by 100). Current tests are scored in "deviation IQ" form, with a performance level by a test-taker two standard deviations below the median score for the test-taker's age group defined as IQ 70. Until the most recent revision of diagnostic standards, an IQ of 70 or below was a primary factor for intellectual disability diagnosis, and IQ scores were used to categorize degrees of intellectual disability. Since current diagnosis of intellectual disability is not based on IQ scores alone, but must also take into consideration a person's adaptive functioning, the diagnosis is not made rigidly. It encompasses intellectual scores, adaptive functioning scores from an adaptive behavior rating scale based on descriptions of known abilities provided by someone familiar with the person, and also the observations of the assessment examiner, who is able to find out directly from the person what he or she can understand, communicate, and the like. IQ assessment must be based on a current test. This enables the diagnosis to avoid the pitfall of the Flynn effect, which is a consequence of changes in population IQ test performance changing IQ test norms over time.
Distinction from other disabilities
Clinically, intellectual disability is a subtype of cognitive deficit or disabilities affecting intellectual abilities, which is a broader concept and includes intellectual deficits that are too mild to properly qualify as intellectual disability, or too specific (as in specific learning disability), or acquired later in life through acquired brain injuries or neurodegenerative diseases like dementia. Cognitive deficits may appear at any age. Developmental disability is any disability that is due to problems with growth and development. This term encompasses many congenital medical conditions that have no mental or intellectual components, although it, too, is sometimes used as a euphemism for intellectual disability.
Limitations in more than one area
Adaptive behavior, or adaptive functioning, refers to the skills needed to live independently (or at the minimally acceptable level for age). To assess adaptive behavior, professionals compare the functional abilities of a child to those of other children of similar age. To measure adaptive behavior, professionals use structured interviews, with which they systematically elicit information about persons' functioning in the community from people who know them well. There are many adaptive behavior scales, and accurate assessment of the quality of someone's adaptive behavior requires clinical judgment as well. 
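To make the scoring and diagnostic conventions above concrete, the hedged Python sketch below converts a raw test score into a deviation IQ (rescaled to a mean of 100 and a standard deviation of 15, so that two standard deviations below the mean corresponds to 70) and then applies the three criteria described earlier. The function names, argument names, and example numbers are illustrative assumptions, not part of any cited test or diagnostic instrument, and real diagnosis also involves clinical judgment that a simple rule cannot capture.

```python
# Illustrative sketch only -- not a clinical instrument. Names and numbers are
# assumptions for demonstration; the cutoff of 70 and the mean-100/SD-15
# scaling follow the conventions described in the text.

def ratio_iq(mental_age_years: float, chronological_age_years: float) -> float:
    """Terman's original 'ratio IQ': mental age divided by chronological age,
    multiplied by 100."""
    return 100.0 * mental_age_years / chronological_age_years

def deviation_iq(raw_score: float, age_group_mean: float, age_group_sd: float) -> float:
    """Modern 'deviation IQ': express the raw score in standard-deviation units
    relative to the test-taker's age group, then rescale to mean 100, SD 15."""
    z = (raw_score - age_group_mean) / age_group_sd
    return 100.0 + 15.0 * z

def meets_diagnostic_criteria(iq: float,
                              impaired_adaptive_domains: int,
                              onset_in_developmental_period: bool) -> bool:
    """All three criteria described above must hold: significantly limited
    intellectual functioning (roughly IQ below 70), deficits in two or more
    adaptive-behavior domains, and onset during childhood or adolescence."""
    return iq < 70 and impaired_adaptive_domains >= 2 and onset_in_developmental_period

print(ratio_iq(mental_age_years=6, chronological_age_years=10))        # -> 60.0
iq = deviation_iq(raw_score=65, age_group_mean=100, age_group_sd=15)   # -> 65.0
print(meets_diagnostic_criteria(iq, impaired_adaptive_domains=2,
                                onset_in_developmental_period=True))   # -> True
```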
Certain skills are important to adaptive behavior, such as: - Daily living skills, such as getting dressed, using the bathroom, and feeding oneself - Communication skills, such as understanding what is said and being able to answer - Social skills with peers, family members, spouses, adults, and others By most definitions, intellectual disability is more accurately considered a disability rather than a disease. Intellectual disability can be distinguished in many ways from mental illness, such as schizophrenia or depression. Currently, there is no "cure" for an established disability, though with appropriate support and teaching, most individuals can learn to do many things. There are thousands of agencies around the world that provide assistance for people with developmental disabilities. They include state-run, for-profit, and non-profit, privately run agencies. Within one agency there could be departments that include fully staffed residential homes, day rehabilitation programs that approximate schools, workshops wherein people with disabilities can obtain jobs, programs that assist people with developmental disabilities in obtaining jobs in the community, programs that provide support for people with developmental disabilities who have their own apartments, programs that assist them with raising their children, and many more. There are also many agencies and programs for parents of children with developmental disabilities. Beyond that, there are specific programs that people with developmental disabilities can take part in wherein they learn basic life skills. These "goals" may take a much longer amount of time for them to accomplish, but the ultimate goal is independence. This may be anything from independence in tooth brushing to an independent residence. People with developmental disabilities learn throughout their lives and can obtain many new skills even late in life with the help of their families, caregivers, clinicians and the people who coordinate the efforts of all of these people. There are four broad areas of intervention that allow for active participation from caregivers, community members, clinicians, and of course, the individual(s) with an intellectual disability. These include psychosocial treatments, behavioral treatments, cognitive-behavioral treatments, and family-oriented strategies. Psychosocial treatments are intended primarily for children before and during the preschool years as this is the optimum time for intervention. This early intervention should include encouragement of exploration, mentoring in basic skills, celebration of developmental advances, guided rehearsal and extension of newly acquired skills, protection from harmful displays of disapproval, teasing, or punishment, and exposure to a rich and responsive language environment. A great example of a successful intervention is the Carolina Abecedarian Project that was conducted with over 100 children from low SES families beginning in infancy through pre-school years. Results indicated that by age 2, the children provided the intervention had higher test scores than control group children, and they remained approximately 5 points higher 10 years after the end of the program. By young adulthood, children from the intervention group had better educational attainment, employment opportunities, and fewer behavioral problems than their control-group counterparts. Core components of behavioral treatments include language and social skills acquisition. 
Typically, one-to-one training is offered in which a therapist uses a shaping procedure in combination with positive reinforcements to help the child pronounce syllables until words are completed. Sometimes involving pictures and visual aids, therapists aim at improving speech capacity so that short sentences about important daily tasks (e.g. bathroom use, eating, etc.) can be effectively communicated by the child. In a similar fashion, older children benefit from this type of training as they learn to sharpen their social skills such as sharing, taking turns, following instruction, and smiling. At the same time, a movement known as social inclusion attempts to increase valuable interactions between children with an intellectual disability and their non-disabled peers. Cognitive-behavioral treatments, a combination of the previous two treatment types, involve a combined strategic-metastrategic learning technique that teaches children math, language, and other basic skills pertaining to memory and learning. The first goal of the training is to teach the child to be a strategic thinker through making cognitive connections and plans. Then, the therapist teaches the child to be metastrategic by teaching them to discriminate among different tasks and determine which plan or strategy suits each task. Finally, family-oriented strategies delve into empowering the family with the skill set they need to support and encourage their child or children with an intellectual disability. In general, this includes teaching assertiveness skills or behavior management techniques as well as how to ask for help from neighbors, extended family, or day-care staff. As the child ages, parents are then taught how to approach topics such as housing/residential care, employment, and relationships. The ultimate goal for every intervention or technique is to give the child autonomy and a sense of independence using the skills he or she has acquired.
Although there is no specific medication for intellectual disability, many people with developmental disabilities have further medical complications and may be prescribed several medications. For example, autistic children with developmental delay may be prescribed antipsychotics or mood stabilizers to help with their behavior. Use of psychotropic medications such as benzodiazepines in people with intellectual disability requires monitoring and vigilance as side effects occur commonly and are often misdiagnosed as behavioral and psychiatric problems.
Intellectual disability affects about 2–3% of the general population. 75–90% of the affected people have mild intellectual disability. Non-syndromic or idiopathic ID accounts for 30–50% of cases. About a quarter of cases are caused by a genetic disorder. Cases of unknown cause affect about 95 million people as of 2013.
Intellectual disability has been documented under a variety of names throughout history. Throughout much of human history, society was unkind to those with any type of disability, and people with intellectual disability were commonly viewed as burdens on their families. Greek and Roman philosophers, who valued reasoning abilities, disparaged people with intellectual disability as barely human. The oldest physiological view of intellectual disability is in the writings of Hippocrates in the late fifth century BCE, who believed that it was caused by an imbalance in the four humors in the brain. 
Until the Enlightenment in Europe, care and asylum was provided by families and the church (in monasteries and other religious communities), focusing on the provision of basic physical needs such as food, shelter and clothing. Negative stereotypes were prominent in social attitudes of the time. In the 13th century, England declared people with intellectual disability to be incapable of making decisions or managing their affairs. Guardianships were created to take over their financial affairs. In the 17th century, Thomas Willis provided the first description of intellectual disability as a disease. He believed that it was caused by structural problems in the brain. According to Willis, the anatomical problems could be either an inborn condition or acquired later in life. In the 18th and 19th centuries, housing and care moved away from families and towards an asylum model. People were placed by, or removed from, their families (usually in infancy) and housed in large professional institutions, many of which were self-sufficient through the labor of the residents. Some of these institutions provided a very basic level of education (such as differentiation between colors and basic word recognition and numeracy), but most continued to focus solely on the provision of basic needs of food, clothing, and shelter. Conditions in such institutions varied widely, but the support provided was generally non-individualized, with aberrant behavior and low levels of economic productivity regarded as a burden to society. Individuals of higher wealth were often able to afford higher degrees of care such as home care or private asylums. Heavy tranquilization and assembly line methods of support were the norm, and the medical model of disability prevailed. Services were provided based on the relative ease to the provider, not based on the needs of the individual. A survey taken in 1891 in Cape Town, South Africa shows the distribution between different facilities. Out of 2046 persons surveyed, 1,281 were in private dwellings, 120 in jails, and 645 in asylums, with men representing nearly two thirds of the number surveyed. In situations of scarcity of accommodation, preference was given to white men and black men (whose insanity threatened white society by disrupting employment relations and the tabooed sexual contact with white women). In the late 19th century, in response to Charles Darwin's On the Origin of Species, Francis Galton proposed selective breeding of humans to reduce intellectual disability. Early in the 20th century the eugenics movement became popular throughout the world. This led to forced sterilization and prohibition of marriage in most of the developed world and was later used by Adolf Hitler as a rationale for the mass murder of people with intellectual disability during the holocaust. Eugenics was later abandoned as an evil violation of human rights, and the practice of forced sterilization and prohibition from marriage was discontinued by most of the developed world by the mid-20th century. Although ancient Roman law had declared people with intellectual disability to be incapable of the deliberate intent to harm that was necessary for a person to commit a crime, during the 1920s, Western society believed they were morally degenerate. Ignoring the prevailing attitude, Civitans adopted service to people with developmental disabilities as a major organizational emphasis in 1952. 
Their earliest efforts included workshops for special education teachers and daycamps for children with disabilities, all at a time when such training and programs were almost nonexistent. The segregation of people with developmental disabilities was not widely questioned by academics or policy-makers until the 1969 publication of Wolf Wolfensberger's seminal work "The Origin and Nature of Our Institutional Models", drawing on some of the ideas proposed by SG Howe 100 years earlier. This book posited that society characterizes people with disabilities as deviant, sub-human and burdens of charity, resulting in the adoption of that "deviant" role. Wolfensberger argued that this dehumanization, and the segregated institutions that result from it, ignored the potential productive contributions that all people can make to society. He pushed for a shift in policy and practice that recognized the human needs of those with intellectual disability and provided the same basic human rights as for the rest of the population. The publication of this book may be regarded as the first move towards the widespread adoption of the social model of disability in regard to these types of disabilities, and was the impetus for the development of government strategies for desegregation. Successful lawsuits against governments and an increasing awareness of human rights and self-advocacy also contributed to this process, resulting in the passing in the U.S. of the Civil Rights of Institutionalized Persons Act in 1980. From the 1960s to the present, most states have moved towards the elimination of segregated institutions. Normalization and deinstitutionalization are dominant. Along with the work of Wolfensberger and others including Gunnar and Rosemary Dybwad, a number of scandalous revelations around the horrific conditions within state institutions created public outrage that led to change to a more community-based method of providing services. By the mid-1970s, most governments had committed to de-institutionalization, and had started preparing for the wholesale movement of people into the general community, in line with the principles of normalization. In most countries, this was essentially complete by the late 1990s, although the debate over whether or not to close institutions persists in some states, including Massachusetts. In the past, lead poisoning and infectious diseases were significant causes of intellectual disability. Some causes of intellectual disability are decreasing, as medical advances, such as vaccination, increase. Other causes are increasing as a proportion of cases, perhaps due to rising maternal age, which is associated with several syndromic forms of intellectual disability. Along with the changes in terminology, and the downward drift in acceptability of the old terms, institutions of all kinds have had to repeatedly change their names. This affects the names of schools, hospitals, societies, government departments, and academic journals. For example, the Midlands Institute of Mental Subnormality became the British Institute of Mental Handicap and is now the British Institute of Learning Disability. This phenomenon is shared with mental health and motor disabilities, and seen to a lesser degree in sensory disabilities. Terms that denote mental deficiency have been subjected to the euphemism treadmill. 
The several traditional terms that long predate psychiatry are simple forms of abuse in common usage today; they are often encountered in such old documents as books, academic papers, and census forms. For example, the British census of 1901 has a column heading including the terms imbecile and feeble-minded. Negative connotations associated with these numerous terms for intellectual disability reflect society's attitude about the condition. Some elements of society seek neutral medical terms, while others want to use such terms as weapons of abuse. Today, new expressions like developmentally disabled, special, or challenged are replacing the term mentally retarded. The term developmental delay is popular among caretakers and parents of individuals with intellectual disability because delay suggests that a person is slowly reaching his or her full potential rather than having a lifelong condition. Usage has changed over the years and differed from country to country. For example, mental retardation in some contexts covers the whole field but previously applied to what is now the mild MR group. Feeble-minded used to mean mild MR in the UK, and once applied in the US to the whole field. "Borderline intellectual functioning" is not currently defined, but the term may be used to apply to people with IQs in the 70s. People with IQs of 70 to 85 used to be eligible for special consideration in the US public education system on grounds of intellectual disability. - Cretin is the oldest and comes from a dialectal French word for Christian. The implication was that people with significant intellectual or developmental disabilities were "still human" (or "still Christian") and deserved to be treated with basic human dignity. Individuals with the condition were considered to be incapable of sinning, thus "christ-like" in their disposition. This term has not been used in scientific endeavors since the middle of the 20th century and is generally considered a term of abuse. Although cretin is no longer in use, the term cretinism is still used to refer to the mental and physical disability resulting from untreated congenital hypothyroidism. - Amentia has a long history, mostly associated with dementia. The difference between amentia and dementia was originally defined by time of onset. Amentia was the term used to denote an individual who developed deficits in mental functioning early in life, while dementia included individuals who develop mental deficiencies as adults. During the 1890s, amentia meant someone who was born with mental deficiencies. By 1912, ament was a classification lumping "idiots, imbeciles, and feeble minded" individuals in a category separate from a dement classification, in which the onset is later in life. - Idiot indicated the greatest degree of intellectual disability, where the mental age is two years or less, and the person cannot guard himself or herself against common physical dangers. The term was gradually replaced by the term profound mental retardation (which has itself since been replaced by other terms). - Imbecile indicated an intellectual disability less extreme than idiocy and not necessarily inherited. It is now usually subdivided into two categories, known as severe intellectual disability and moderate intellectual disability. - Moron was defined by the American Association for the Study of the Feeble-minded in 1910, following work by Henry H. 
Goddard, as the term for an adult with a mental age between eight and twelve; mild intellectual disability is now the term for this condition. Alternative definitions of these terms based on IQ were also used. This group was known in UK law from 1911 to 1959–60 as feeble-minded. - Mongolism and Mongoloid idiot were medical terms used to identify someone with Down syndrome, as the doctor who first described the syndrome, John Langdon Down, believed that children with Down syndrome shared facial similarities with Blumenbach's "Mongolian race." The Mongolian People's Republic requested that the medical community cease use of the term as a referent to intellectual disability. Their request was granted in the 1960s, when the World Health Organization agreed that the term should cease being used within the medical community. - In the field of special education, educable (or "educable intellectual disability") refers to ID students with IQs of approximately 50–75 who can progress academically to a late elementary level. Trainable (or "trainable intellectual disability") refers to students whose IQs fall below 50 but who are still capable of learning personal hygiene and other living skills in a sheltered setting, such as a group home. In many areas, these terms have been replaced by use of "moderate" and "severe" intellectual disability. While the names change, the meaning stays roughly the same in practice. - Retarded comes from the Latin retardare, "to make slow, delay, keep back, or hinder," so mental retardation meant the same as mentally delayed. The term was recorded in 1426 as a "fact or action of making slower in movement or time." The first record of retarded in relation to being mentally slow was in 1895. The term mentally retarded was used to replace terms like idiot, moron, and imbecile because retarded was not then a derogatory term. By the 1960s, however, the term had taken on a partially derogatory meaning as well. The noun retard is particularly seen as pejorative; a BBC survey in 2003 ranked it as the most offensive disability-related word, ahead of terms such as spastic (or its abbreviation spaz) and mong. The terms mentally retarded and mental retardation are still fairly common, but currently the Special Olympics, Best Buddies, and over 100 other organizations are striving to eliminate their use by referring to the word retard and its variants as the "r-word", in an effort to equate it to the word nigger and the associated euphemism "n-word", in everyday conversation. These efforts have resulted in federal legislation, sometimes known as "Rosa's Law", to replace the term mentally retarded with the term intellectual disability in some federal statutes. The term mental retardation was a diagnostic term denoting the group of disconnected categories of mental functioning such as idiot, imbecile, and moron derived from early IQ tests, which acquired pejorative connotations in popular discourse. It acquired negative and shameful connotations over the last few decades due to the use of the words retarded and retard as insults. This may have contributed to its replacement with euphemisms such as mentally challenged or intellectually disabled. While developmental disability includes many other disorders, developmental disability and developmental delay (for people under the age of 18) are generally considered more polite terms than mental retardation. 
- In North America, intellectual disability is subsumed into the broader term developmental disability, which also includes epilepsy, autism, cerebral palsy, and other disorders that develop during the developmental period (birth to age 18). Because service provision is tied to the designation 'developmental disability', it is used by many parents, direct support professionals, and physicians. In the United States, however, in school-based settings, the more specific term mental retardation or, more recently (and preferably), intellectual disability, is still typically used, and is one of 13 categories of disability under which children may be identified for special education services under Public Law 108-446.
- The phrase intellectual disability is increasingly being used to describe people with significantly below-average cognitive ability. These terms are sometimes used as a means of separating general intellectual limitations from specific, limited deficits as well as indicating that it is not an emotional or psychological disability. It is not specific to congenital disorders such as Down syndrome. The American Association on Mental Retardation changed its name to the American Association on Intellectual and Developmental Disabilities (AAIDD) in 2007, and soon thereafter changed the names of its scholarly journals to reflect the term "intellectual disability." In 2010, the AAIDD released the 11th edition of its terminology and classification manual, which also used the term intellectual disability.
In the UK, mental handicap had become the common medical term, replacing mental subnormality in Scotland and mental deficiency in England and Wales, until Stephen Dorrell, Secretary of State for Health for the United Kingdom from 1995–97, changed the NHS's designation to learning disability. The new term is not yet widely understood, and is often taken to refer to problems affecting schoolwork (the American usage), which are known in the UK as "learning difficulties." British social workers may use "learning difficulty" to refer to both people with intellectual disability and those with conditions such as dyslexia. In education, "learning difficulties" is applied to a wide range of conditions: "specific learning difficulty" may refer to dyslexia, dyscalculia or developmental coordination disorder, while "moderate learning difficulties", "severe learning difficulties" and "profound learning difficulties" refer to more significant impairments. In England and Wales between 1983 and 2008, the Mental Health Act 1983 defined "mental impairment" and "severe mental impairment" as "a state of arrested or incomplete development of mind which includes significant/severe impairment of intelligence and social functioning and is associated with abnormally aggressive or seriously irresponsible conduct on the part of the person concerned." As behavior was involved, these were not necessarily permanent conditions: they were defined for the purpose of authorizing detention in hospital or guardianship. The term mental impairment was removed from the Act in November 2008, but the grounds for detention remained. However, English statute law uses mental impairment elsewhere in a less well-defined manner (e.g., to allow exemption from taxes), implying that intellectual disability without any behavioral problems is what is meant. A BBC poll conducted in the United Kingdom came to the conclusion that 'retard' was the most offensive disability-related word. 
Conversely, when a contestant on Celebrity Big Brother live used the phrase "walking like a retard", despite complaints from the public and the charity Mencap, the communications regulator Ofcom did not uphold the complaint, saying "it was not used in an offensive context [...] and had been used light-heartedly". It was, however, noted that two previous similar complaints from other shows were upheld. In the past, Australia has used British and American terms interchangeably, including "mental retardation" and "mental handicap". Today, "intellectual disability" is the preferred and more commonly used descriptor.
Society and culture
People with intellectual disabilities are often not seen as full citizens of society. Person-centered planning and approaches are seen as methods of addressing the continued labeling and exclusion of socially devalued people, such as people with disabilities, encouraging a focus on the person as someone with capacities and gifts as well as support needs. The self-advocacy movement promotes the right of self-determination and self-direction by people with intellectual disabilities, which means allowing them to make decisions about their own lives. Until the middle of the 20th century, people with intellectual disabilities were routinely excluded from public education, or educated away from other typically developing children. Compared to peers who were segregated in special schools, students who are mainstreamed or included in regular classrooms report similar levels of stigma and social self-conception, but more ambitious plans for employment. As adults, they may live independently, with family members, or in different types of institutions organized to support people with disabilities. About 8% currently live in an institution or a group home. In the United States, the average lifetime cost of a person with an intellectual disability amounts to $1,014,000 per person, in 2003 US dollars. This is slightly more than the costs associated with cerebral palsy, and double that associated with serious vision or hearing impairments. About 14% is due to increased medical expenses (not including what is normally incurred by the typical person), 10% is due to direct non-medical expenses, such as the excess cost of special education compared to standard schooling, and 76% is indirect costs accounting for reduced productivity and shortened lifespans. Some expenses, such as costs associated with being a family caregiver or living in a group home, were excluded from this calculation (a rough dollar breakdown of the listed shares appears in the sketch at the end of this section).
People with intellectual disability as a group have higher rates of adverse health conditions such as epilepsy and neurological disorders, gastrointestinal disorders, and behavioral and psychiatric problems compared to people without disabilities. Adults also have a higher prevalence of poor social determinants of health, behavioral risk factors, depression, diabetes, and poor or fair health status than adults without intellectual disability. In the United Kingdom, people with intellectual disability live on average 16 years less than the general population.
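As a quick check of the lifetime-cost figures cited above, the sketch below simply applies the stated percentage shares to the $1,014,000 total (2003 US dollars) to show the approximate dollar value of each component; it is arithmetic on the published shares only, not an independent estimate, and the component labels are paraphrased.

```python
# Back-of-the-envelope breakdown of the cited average lifetime cost.
# Only the total and the percentage shares come from the text above;
# the dollar splits are derived arithmetic, rounded for readability.

total_lifetime_cost = 1_014_000  # 2003 US dollars

shares = {
    "increased medical expenses": 0.14,
    "direct non-medical expenses (e.g., excess special-education cost)": 0.10,
    "indirect costs (reduced productivity, shortened lifespans)": 0.76,
}

for component, share in shares.items():
    print(f"{component}: ~${share * total_lifetime_cost:,.0f}")
# -> roughly $142,000, $101,000, and $771,000 respectively
```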
"Global, regional, and national incidence, prevalence, and years lived with disability for 310 diseases and injuries, 1990-2015: a systematic analysis for the Global Burden of Disease Study 2015.". Lancet (London, England). 388 (10053): 1545–1602. PMID 27733282. - Tidy, Colin (25 January 2013). "General Learning Disability". Patient.info. The term general learning disability is now used in the UK instead of terms such as mental handicap or mental retardation. The degree of disability can vary greatly, being classified as mild, moderate, severe or profound. - "Rosa's Law" (PDF). Washington, D.C.: U.S.G.P.O. 2010. Retrieved 13 September 2013. - Ansberry, Clare (20 November 2010). "Erasing a Hurtful Label From the Books". The Wall Street Journal. Retrieved 4 December 2010. Decades-long quest by disabilities advocates finally persuades state, federal governments to end official use of 'retarded'. - Daily DK, Ardinger HH, Holmes GE (February 2000). "Identification and evaluation of mental retardation". Am Fam Physician. 61 (4): 1059–67, 1070. PMID 10706158. - "Definition of mentally retarded". Gale Encyclopedia of Medicine. - Global Burden of Disease Study 2013, Collaborators (5 June 2015). "Global, regional, and national incidence, prevalence, and years lived with disability for 301 acute and chronic diseases and injuries in 188 countries, 1990–2013: a systematic analysis for the Global Burden of Disease Study 2013". The Lancet. 386: 743–800. PMC . PMID 26063472. doi:10.1016/S0140-6736(15)60692-4. - Cummings, Nicholas A.; Rogers H. Wright (2005). "Chapter 1, Psychology's surrender to political correctness". Destructive trends in mental health: the well-intentioned path to harm. New York: Routledge. ISBN 0-415-95086-4. - American Psychiatric Association (2013). Diagnostic and Statistical Manual of Mental Disorders (Fifth ed.). Arlington, VA: American Psychiatric Publishing. ISBN 978-0-89042-555-8. Lay summary (15 July 2013). - Salvador-Carulla L, Reed GM, Vaez-Azizi LM, et al. (October 2011). "Intellectual developmental disorders: towards a new name, definition and framework for "mental retardation/intellectual disability" in ICD-11". World Psychiatry. 10: 175–180. PMC . PMID 21991267. doi:10.1002/j.2051-5545.2011.tb00045.x. - John Cook (5 July 2001). "The "R" Word". Slate. - Kaneshiro, Neil K. (April 21, 2015), "Intellectual disability", MedlinePlus, U.S. National Library of Medicine, retrieved October 27, 2016 - Queensland Government (July 30, 2015), "Intellectual disability", qld.gov.au, retrieved October 27, 2016 - Badano, Jose L.; Norimasa Mitsuma; Phil L. Beales; Nicholas Katsanis (September 2006). "The Ciliopathies : An Emerging Class of Human Genetic Disorders". Annual Review of Genomics and Human Genetics. 7: 125–148. PMID 16722803. doi:10.1146/annurev.genom.7.080505.115610. Retrieved 2008-06-15. - Siderius LE, Hamel BC, van Bokhoven H, et al. (2000). "X-linked mental retardation associated with cleft lip/palate maps to Xp11.3-q21.3". Am. J. Med. Genet. 85 (3): 216–220. PMID 10398231. doi:10.1002/(SICI)1096-8628(19990730)85:3<216::AID-AJMG6>3.0.CO;2-X. - Laumonnier F, Holbert S, Ronce N, et al. (2005). "Mutations in PHF8 are associated with X linked mental retardation and cleft lip/cleft palate". J. Med. Genet. 42 (10): 780–786. PMC . PMID 16199551. doi:10.1136/jmg.2004.029439. - Bender, Bruce G. (1986). Genetics and Learning Disabilities. San Diego: College Hill Press. pp. 175–201. Figure 8-3. 
Estimated full-scale IQ distributions for SCA and control children: 47,XXX (mean ~83), 45,X & Variant (mean ~85), 47,XXY (mean ~95), 47,XYY (mean ~100), Controls and SCA Mosaics (mean ~104) - Leggett, Victoria; Jacobs, Patricia; Nation, Kate; Scerif, Gaia; Bishop, Dorothy V M (2010-02-01). "Neurocognitive outcomes of individuals with a sex chromosome trisomy: XXX, XYY, or XXY: a systematic review*". Developmental Medicine & Child Neurology. 52 (2): 119–129. ISSN 1469-8749. PMC . PMID 20059514. doi:10.1111/j.1469-8749.2009.03545.x. - McNeil, Donald G., Jr. (2006-12-16). "In Raising the World's I.Q., the Secret's in the Salt". The New York Times. Retrieved 2009-07-21. - Wines, Michael (2006-12-28). "Malnutrition Is Cheating Its Survivors, and Africa's Future". The New York Times. Retrieved 2009-07-21. - Sundaram SK, Sivaswamy L, Makki MI, Behen ME, Chugani H (2008). "Absence of arcuate fasciculus in children with global developmental delay of unknown etiology: a diffusion tensor imaging study". J Pediatr. 152 (2): 250–5. PMID 18206698. doi:10.1016/j.jpeds.2007.06.037. - "What Is Intellectual Disability?". - Lawyer, Liz (2010-11-26). "Rosa's Law to remove stigmatized language from law books". Ithaca, New York: The Ithaca Journal. Retrieved 2010-12-04. The resolution ... urges a change from the old term to "developmental disability" - Mash, E., & Wolfe, D. (2013). Abnormal child psychology. (5th ed., pp. 308-313). Wadsworth Cengage Learning. - Hodapp, R.M., & Burack, J.A. (2006). Developmental approaches to children with mental retardation: A second generation? In D. Cicchetti & D. J. Cohen (Eds.), Developmental psychopathology, Vol. 3: Risk, disorder, and adaptation (2nd ed., pp. 235-267). Hoboken, NJ: Wiley. - Ramey S.L.; Ramey C.T. (1992). "Early educational intervention with disadvantaged children—To what effect?". Applied and Preventive Psychology. 1: 131–140. doi:10.1016/s0962-1849(05)80134-9. - Campbell F.A.; Ramey C.T.; Pungello E.; Sparling J.; Miller-Johnson S. (2002). "Early childhood education: Young adult outcomes from the Abecedarian Project". Applied Developmental Science. 6: 42–57. doi:10.1207/s1532480xads0601_05. - Matson J.L.; Matson M.L.; Rivet T.T. (2007). "Social-skills treatments for children with autism spectrum disorders: an overview". Behavior Modification. 31 (5): 682–707. PMID 17699124. doi:10.1177/0145445507301650. - Van der Schuit M, Segers E, van Balkom H, Verhoeven L (2011). "Early language intervention for children with intellectual disabilities: a neurocognitive perspective". Research in Developmental Disabilities. 32 (2): 705–12. doi:10.1016/j.ridd.2010.11.010. - Kemp C.; Carter M. (2002). "The social skills and social status of mainstreamed students with intellectual disabilities". Educational Psychology. 22: 391–411. doi:10.1080/0144341022000003097. - Siperstein G.N.; Glick G.C.; Parker R. (2009). "The social inclusion of children with intellectual disabilities in an out of school recreational setting". Intellectual and Developmental Diasbilities. 47 (2): 97–107. doi:10.1352/1934-9556-47.2.97. - Hay I.; Elias G.; Fielding-Barnsley R.; Homel R.; Freiberg K. (2007). "Language delays, reading delays and learning difficulties: Interactive elements requiring multidimensional programming". Journal of Learning Disabilities. 40 (5): 400–409. doi:10.1177/00222194070400050301. - Bagner D.M.; Eyberg S.M. (2007). "Parent-child interaction therapy for disruptive behavior in children with mental retardation: A randomized controlled trial". 
Journal of Clinical Child and Adolescent Psychology. 36: 418–429. doi:10.1080/15374410701448448. - Kalachnik, JE.; Hanzel, TE.; Sevenich, R.; Harder, SR. (Sep 2002). "Benzodiazepine behavioral side effects: review and implications for individuals with mental retardation". Am J Ment Retard. 107 (5): 376–410. ISSN 0895-8017. PMID 12186578. doi:10.1352/0895-8017(2002)107<0376:BBSERA>2.0.CO;2. - Wickham, Parnell. Encyclopedia of Children and Childhood in History and Society. Retrieved 8 October 2010. - Roy Porter; David Wright (7 August 2003). The Confinement of the Insane: International Perspectives, 1800-1965. Cambridge University Press. ISBN 978-0-521-80206-2. Retrieved 11 August 2012. - Armbrester, Margaret E. (1992). The Civitan Story. Birmingham, AL: Ebsco Media. pp. 74–75. - Wolf Wolfensberger (January 10, 1969). "The Origin and Nature of Our Institutional Models". Changing Patterns in Residential Services for the Mentally Retarded. President's Committee on Mental Retardation, Washington, D.C. - "The ARC Highlights — Beyond Affliction: Beyond Affliction Document". Disabilitymuseum.org. Retrieved 2010-06-29. - "Christmas in Purgatory & Willowbrook". Arcmass.org. Retrieved 2010-06-29. - "Fernald School Closing and RICCI Class". Arcmass.org. Retrieved 2010-06-29. - Columbia Electronic Encyclopedia, 2013 - "cretin". The American Heritage Dictionary of the English Language, Fourth Edition. Houghton Mifflin Company. 2006. Retrieved 2008-08-04. - Howard-Jones, Norman (1979). "On the diagnostic term "Down's disease"". Medical History. 23 (1): 102–04. PMC . PMID 153994. doi:10.1017/s0025727300051048. - "Worst Word Vote". Ouch. BBC. 2003. Archived from the original on 2007-03-20. Retrieved 2007-08-17. - "SpecialOlympics.org". SpecialOlympics.org. Retrieved 2010-06-29. - "R-Word.org". R-Word.org. 2010-06-18. Retrieved 2010-06-29. - "Intellectual Disability: Definition, Classification, and Systems of Supports (11th Edition)". - "Frequently Asked Questions on Intellectual Disability". American Association on Intellectual and Developmental Disabilities (AAIDD). Retrieved 12 September 2013. The term intellectual disability covers the same population of individuals who were diagnosed previously with mental retardation in number, kind, level, type, duration of disability, and the need of people with this disability for individualized services and supports. - "mencap". Retrieved 2010-12-07. Website of the UK's leading learning disability charity, which uses that term throughout. - "Learning Disabilities: Prevalence". Social Work, Alcohol & Drugs. University of Bedfordshire. Retrieved 2014-10-18. - "Special Educational Needs and Disability: A. Cognition and Learning Needs". teachernet. Archived from the original on 2010-05-01. Retrieved 2010-12-08. - Vickerman, Philip (2009-07-08). "Severe Learning Difficulties". Teacher Training Resource Bank. Retrieved 2014-10-19. Extensive further references. - "Draft Illustrative Code of Practice" (PDF). Retrieved 2007-08-23. - Rohrer, Finlo (2008-09-22). "The path from cinema to playground". BBC News. Retrieved 2010-06-29. - Beckford, Martin (2010-03-11). "Ofcom says TV channels have 'human right' to broadcast offensive material". Telegraph. Retrieved 2010-06-29. - "Australian Psychological Society : Psychologists and intellectual disability". Psychology.org.au. Retrieved 2010-06-29. - Cooney G, Jahoda A, Gumley A, Knott F (June 2006). 
"Young people with intellectual disabilities attending mainstream and segregated schooling: perceived stigma, social comparison and future aspirations". J Intellect Disabil Res. 50 (Pt 6): 432–44. PMID 16672037. doi:10.1111/j.1365-2788.2006.00789.x. - Centers for Disease Control and Prevention (CDC) (January 2004). "Economic costs associated with mental retardation, cerebral palsy, hearing loss, and vision impairment--United States, 2003". MMWR Morb. Mortal. Wkly. Rep. 53 (3): 57–9. PMID 14749614. - Krahn GL, Fox MH (2013). "Health disparities of adults with intellectual disabilities: what do we know? What do we do?". Journal of Applied Research in Intellectual Disability. 27 (5): 431–446. doi:10.1111/jar.12067. - Haider SI, Ansari Z, Vaughan L, Matters H, Emerson E (2013). "Health and wellbeing of Victorian adults with intellectual disability compared to the general Victorian population". Research in Developmental Disabilities. 34 (11): 4034–4042. PMID 24036484. doi:10.1016/j.ridd.2013.08.017. - Schraer, Rachel (8 April 2017). "Patients with learning disabilities missing out on health checks". BBC News. Retrieved 10 April 2017. |Wikimedia Commons has media related to Intellectual disability.|
UNIX System V

UNIX System V (pronounced: "System Five") is one of the first commercial versions of the Unix operating system. It was originally developed by AT&T and first released in 1983. Four major versions of System V were released, numbered 1, 2, 3, and 4. System V Release 4, or SVR4, was commercially the most successful version, being the result of an effort, marketed as "Unix System Unification", which solicited the collaboration of the major Unix vendors. It was the source of several common commercial Unix features. System V is sometimes abbreviated to SysV.

As of 2012, the Unix market is divided between four System V variants: IBM's AIX, Hewlett-Packard's HP-UX, Oracle's Solaris, and the illumos distributions, which continue the open-source OpenSolaris line.

System V was the successor to 1982's UNIX System III. While AT&T sold their own hardware that ran System V, most customers instead ran a version from a reseller, based on AT&T's reference implementation. A standards document called the System V Interface Definition outlined the default features and behavior of implementations. During its formative years, AT&T went through several phases of System V software groups, beginning with the Unix Support Group (USG), followed by Unix System Development Laboratory (USDL), followed by AT&T Information Systems (ATTIS), and finally Unix System Laboratories (USL).

Rivalry with BSD

In the 1980s and early 1990s, System V was considered one of the two major versions of UNIX, the other being the Berkeley Software Distribution (BSD). Historically, BSD was also commonly called "BSD Unix" or "Berkeley Unix". Eric S. Raymond summarizes the longstanding relationship and rivalry between System V and BSD during the early period:

In fact, for years after divestiture the Unix community was preoccupied with the first phase of the Unix wars – an internal dispute, the rivalry between System V Unix and BSD Unix. The dispute had several levels, some technical (sockets vs. streams, BSD tty vs. System V termio) and some cultural. The divide was roughly between longhairs and shorthairs; programmers and technical people tended to line up with Berkeley and BSD, more business-oriented types with AT&T and System V.

While HP, IBM and others chose System V as the basis for their Unix offerings, other vendors such as Sun Microsystems and DEC extended BSD. Throughout its development, though, System V was infused with features from BSD, while BSD variants such as DEC's Ultrix received System V features. Since the early 1990s, due to standardization efforts such as POSIX and the commercial success of Linux, the division between System V and BSD has become less important.

System V, known inside Bell Labs as Unix 5.0, succeeded AT&T's previous commercial Unix, System III, in January 1983. There was never an external release of Unix 4.0, which would have been System IV. This first release of System V (called System V.0, System V Release 1, or SVR1) was developed by AT&T's UNIX Support Group (USG) and based on the Bell Labs internal USG UNIX 5.0. System V also included features such as the vi editor and curses from 4.1 BSD, developed at the University of California, Berkeley; it also improved performance by adding buffer and inode caches. It also added support for inter-process communication using messages, semaphores, and shared memory, developed earlier for the Bell-internal CB UNIX.
The concept of the "porting base" was formalized, and the DEC VAX-11/780 was chosen for this release. The "porting base" is the so-called original version of a release, from which all porting efforts for other machines emanate. Educational source licenses for SVR2 were offered by AT&T for US$800 for the first CPU, and $400 for each additional CPU. A commercial source license was offered for $43,000, with three months of support, and a $16,000 price per additional CPU. Maurice J. Bach's book, The Design of the UNIX Operating System, is the definitive description of the SVR2 kernel. System V Release 3 was released in 1986. It included STREAMS, the Remote File System (RFS), the File System Switch (FSS) virtual file system mechanism, a restricted form of shared libraries, and the Transport Layer Interface (TLI) network API. The final version was Release 3.2 in 1988, which added binary compatibility to Xenix on Intel platforms (see Intel Binary Compatibility Standard). User interface improvements included the "layers" windowing system for the DMD 5620 graphics terminal, and the SVR3.2 curses libraries that offered eight or more color pairs and other at this time important features (forms, panels, menus, etc.). The AT&T 3B2 became the official "porting base." SCO UNIX was based upon SVR3.2, as was ISC 386/ix. Among the more obscure distributions of SVR3.2 for the 386 were ESIX 3.2 by Everex and "System V, Release 3.2" sold by Intel themselves; these two shipped "plain vanilla" AT&T's codebase. System V Release 4.0 was announced on October 18, 1988 and was incorporated into a variety of commercial Unix products from early 1989 onwards. A joint project of AT&T Unix System Laboratories and Sun Microsystems, it combined technology from: New features included: - From BSD: TCP/IP support, sockets, UFS, support for multiple groups, C shell. - From SunOS: the virtual file system interface (replacing the File System Switch in System V release 3), NFS, new virtual memory system including support for memory mapped files, an improved shared library system based on the SunOS 4.x model, the OpenWindows GUI environment, External Data Representation (XDR) and ONC RPC. - From Xenix: x86 device drivers, binary compatibility with Xenix (in the x86 version of System V). - Korn shell. - ANSI X3J11 C compatibility. - Multi-National Language Support (MNLS). - Better internationalization support. - An application binary interface (ABI) based on Executable and Linkable Format (ELF). - Support for standards such as POSIX and X/Open. Many companies licensed SVR4 and bundled it with computer systems such as workstations and network servers. SVR4 systems vendors included Atari (Atari System V), Commodore (Amiga Unix), Data General (DG/UX), Fujitsu (UXP/DS), Hitachi (HI-UX), Hewlett-Packard (HP-UX), NCR (Unix/NS), NEC (EWS-UX, UP-UX, UX/4800), OKI (OKI System V), Pyramid Technology (DC/OSx), SGI (IRIX), Siemens (SINIX), Sony (NEWS-OS), Sumitomo Electric Industries (SEIUX), and Sun Microsystems (Solaris) with illumos in 2010's as the only open-source platform. Software porting houses also sold enhanced and supported Intel x86 versions. SVR4 software vendors included Dell (Dell UNIX), Everex (ESIX), Micro Station Technology (SVR4), Microport (SVR4), and UHC (SVR4). The primary platforms for SVR4 were Intel x86 and SPARC; the SPARC version, called Solaris 2 (or, internally, SunOS 5.x), was developed by Sun. 
The relationship between Sun and AT&T was terminated after the release of SVR4, meaning that later versions of Solaris did not inherit features of later SVR4.x releases. In 2005 Sun released most of the source code for Solaris 10 (SunOS 5.10) as the open-source OpenSolaris project, creating, with its forks, the only open-source (albeit heavily modified) System V implementation available. After Oracle took over Sun, Solaris development continued as a proprietary release, while illumos, the continuation project, is developed as open source.

A consortium of Intel-based resellers including Unisys, ICL, NCR Corporation, and Olivetti developed SVR4.0MP with multiprocessing capability (allowing system calls to be processed from any processor, but interrupt servicing only from a "master" processor).

SVR4.2 / UnixWare

In 1992, AT&T USL engaged in a joint venture with Novell, called Univel. That year saw the release of System V Release 4.2 as Univel UnixWare, featuring the VERITAS File System. Other vendors included UHC and Consensys. Release 4.2MP, completed in late 1993, added support for multiprocessing; it was released as UnixWare 2 in 1995. Eric S. Raymond warned prospective buyers about SVR4.2 versions, as they often did not include on-line man pages. In his 1994 buyer's guide, he attributes this change in policy to Unix System Laboratories.

SVR5 / UnixWare 7

The Santa Cruz Operation (SCO), owners of Xenix, eventually acquired the UnixWare trademark and the distribution rights to the System V Release 4.2 codebase from Novell, while other vendors (Sun, IBM, HP) continued to use and extend System V Release 4. Novell transferred ownership of the Unix trademark to The Open Group. Any operating system that meets the Single Unix Specification (SUS), effectively a successor to the System V Interface Definition, may be granted Unix rights. The SUS is met by Apple's Mac OS X, a BSD derivative, as well as several other operating systems not derived from either BSD or System V. System V Release 5 was developed in 1997 by the Santa Cruz Operation (SCO) as a merger of SCO OpenServer (an SVR3 derivative) and UnixWare, with a focus on large-scale servers. It was released as SCO UnixWare 7. SCO's successor, The SCO Group, also based SCO OpenServer 6 on SVR5, but the codebase is not used by any other major developer or reseller.

Availability during the '90s on x86 platforms

In the 1980s and 1990s, a variety of SVR4 versions of Unix were available commercially for the x86 PC platform. However, the market for commercial Unix on PCs declined after Linux and BSD became widely available. In late 1994, Eric S. Raymond discontinued his PC-clone UNIX Software Buyer's Guide on USENET, stating, "The reason I am dropping this is that I run Linux now, and I no longer find the SVr4 market interesting or significant." In 1998, a confidential memo at Microsoft stated, "Linux is on track to eventually own the x86 UNIX market," and further predicted, "I believe that Linux – moreso than NT – will be the biggest threat to SCO in the near future." An InfoWorld article from 2001 characterized SCO UnixWare as having a "bleak outlook" due to being "trounced" in the market by Linux and Solaris, and IDC predicted that SCO would "continue to see a shrinking share of the market."

Project Monterey was started in 1998 to combine major features of existing commercial Unix platforms, as a joint project of Compaq, IBM, Intel, SCO, and Sequent Computer Systems.
The target platform was meant to be Intel's new IA-64 architecture and Itanium line of processors. However, the project was abruptly canceled in 2001 after little progress.

System V and the Unix market

By 2001, several major Unix variants such as SCO UnixWare, Compaq Tru64 UNIX, and SGI IRIX were all in decline. The three major Unix versions doing well in the market were IBM AIX, Hewlett-Packard's HP-UX, and Sun Solaris. In 2006, when SGI declared bankruptcy, analysts questioned whether Linux would replace proprietary Unix altogether. In a 2006 article written for Computerworld by Mark Hall, the economics of Linux were cited as a major factor driving the migration from Unix to Linux:

Linux's success in high-end, scientific and technical computing, like Unix's before it, preceded its success in your data center. Once Linux proved itself by executing the most complex calculations possible, IT managers quickly grasped that it could easily serve Web pages and run payroll. Naturally, it helps to be lucky: Free, downloadable Linux's star began to rise during one of the longest downturns in IT history. With companies doing more with less, one thing they could dump was Unix.

The article also cites trends in high-performance computing applications as evidence of a dramatic shift from Unix to Linux:

A look at the Top500 list of supercomputers tells the tale best. In 1998, Unix machines from Sun and SGI combined for 46% of the 500 fastest computers in the world. Linux accounted for one (0.2%). In 2005, Sun had 0.8% — or four systems — and SGI had 3.6%, while 72% of the Top500 ran Linux.

In a November 2015 survey of the top 500 supercomputers, Unix was used by only 1.2% (all running IBM AIX), while Linux was used by 98.8%. System V derivatives continued to be deployed on some proprietary server platforms. The principal variants of System V that remain in commercial use are AIX (IBM), Solaris (Oracle), and HP-UX (HP). According to a study done by IDC, in 2012 the worldwide Unix market was divided between IBM (56%), Oracle (19.2%), and HP (18.6%). No other commercial Unix vendor had more than 2% of the market. Industry analysts generally characterize proprietary Unix as having entered a period of slow but permanent decline.

OpenSolaris and illumos distributions

OpenSolaris and its derivatives are the only SVR4 descendants that are open-source software. Core system software continues to be developed as illumos, which is used in illumos distributions such as SmartOS, OpenIndiana and others.

System V compatibility

The System V interprocess communication mechanisms are available in Unix-like operating systems not derived from System V; in particular, in Linux (a reimplementation of Unix) as well as the BSD derivative FreeBSD. POSIX 2008 specifies a replacement for these interfaces. FreeBSD maintains a binary compatibility layer for the COFF format, which allows FreeBSD to execute binaries compiled for some SVR3.2 derivatives such as SCO UNIX and Interactive UNIX. Modern System V, Linux, and BSD platforms use the ELF file format for natively compiled binaries.

- "The Last Days of Unix". Network World. 19 August 2013. Retrieved 26 Jun 2014.
- "Whither OpenSolaris? Illumos Takes Up the Mantle". Archived from the original on 26 September 2015.
- Garfinkel, Simson. Spafford, Gene. Schwartz, Alan. Practical UNIX and Internet Security. 2003. pp. 15-20
- Raymond, Eric S. The Art of Unix Programming. 2003. p. 38
- Lévénez, Éric. "Unix History (Unix Timeline)". Archived from the original on 2010-12-29.
Retrieved 2010-12-29. - Overview of the XENIX 286 Operating System (PDF). Intel Corporation. November 1984. p. 1.10. There was no System IV. - Dale Dejager (1984-01-16). "UNIX History". Newsgroup: net.unix. - Tanenbaum, Andrew S. (2001). Modern Operating Systems (2nd ed.). Upper Saddle River, NJ: Prentice Hall. p. 675. ISBN 0-13-031358-0. Whatever happened to System IV is one of the great unsolved mysteries of computer science. - Kerrisk, Michael (2010). The Linux Programming Interface. No Starch Press. p. 921. - Goodheart, Berny; Cox, James (1994), The Magic Garden Explained, Prentice Hall, p. 11, ISBN 0-13-098138-9 - "UNIX System V and add on applications prices" (PDF). AT&T International. 24 February 1983. Retrieved 27 April 2014. - Kenneth H. Rosen (1999). UNIX: The Complete Reference. McGraw-Hill Professional. - Bach, Maurice (1986), The Design of the UNIX Operating System, Prentice Hall, ISBN 0-13-201799-7 - Rargo, Stephan A. (April 10, 1993), UNIX System V Network Programming, Addison-Wesley Professional, ISBN 978-0-201-56318-4 - Jeff Tye (10 July 1989). Other OSs That Run Unix on a 386. InfoWorld. p. 62. ISSN 0199-6649. - "SEVERAL MAJOR COMPUTER AND SOFTWARE COMPANIES ANNOUNCE STRATEGIC COMMITMENT TO AT&T'S UNIX SYSTEM V, RELEASE 4.0" (Press release). Amdahl, Control Data Corporation, et al. October 18, 1988. Retrieved 2007-01-01. - John R. Levine (1999). Linkers and Loaders. Morgan–Kauffman. - Technologists notes — A brief history of Dell UNIX, 10 January 2008, retrieved 2009-02-18 - Eric S. Raymond, A buyer's guide to UNIX versions for PC-clone hardware, posted to Usenet November 16, 1994. - Unix Internatl. and USL release early version of SVR4 multiprocessing software, 17 June 1991, retrieved 2009-04-22 - William Fellows (13 August 1992). "Unix International reviews the Unix System V.4 story so far". Computer Business Review. Retrieved 2008-10-31. - Bishop, Matt (December 2, 2002), Computer Security, Addison Wesley, p. 505, ISBN 0-201-44099-7 - UnixWare 2 Product Announcement Questions& Answers, 1995 - Eric S. Raymond (16 November 1994). "PC-clone UNIX Software Buyer's Guide". Retrieved 6 May 2014. - SCO updates Unix, OpenServer product plans InfoWorld, August 19, 2003 - SCO UNIX Roadmap at Archive.is - Eric S. Raymond (16 November 1994). "PC-clone UNIX Software Buyer's Guide". Retrieved 3 February 2014. - Vinod Valloppillil (11 August 1998). "Open Source Software: A (New?) Development Methodology". Retrieved 3 February 2014. - Tom Yager (19 November 2001). "Vital Signs for Unix". Computerworld. Retrieved 5 June 2015. - Raymond, Eric S. The Art of Unix Programming. 2003. p. 43 - Mark Hall (15 May 2006), The End of Unix?, retrieved 5 June 2015 - "TOP500 Supercomputer Sites - List Statistics". Retrieved 28 January 2016. - Patrick Thibodeau (12 December 2013). "As Unix fades away from data centers, it's unclear what's next". Retrieved 6 June 2015. - Linux Programmer's Manual – Overview, Conventions and Miscellanea – - FreeBSD System Calls Manual – - Lehey, Greg. The Complete FreeBSD: Documentation from the Source. 2003. pp. 164-165
Occasionally we find articles so compelling, we ask for permission to repost them here in our Guest Article series. Today's article is by Cecilia Casarini, who is a PhD researcher in the Centre for Ultrasonic Engineering at the University of Strathclyde, in Glasgow, Scotland. Her article is on the subject of Bell Labs, their legendary scientists Claude Shannon & Harry Nyquist, the Nyquist sampling theorem, and aliasing, including audio samples. One slight edit we made to Casarini's original concerns units: sampling rates (samples/sec) are often improperly labeled "Hertz" (Hz, cycles/sec). This misattribution started c.1983, about the time audio CDs came into the mainstream, with their 44,100 samples/sec rate. With that brief introduction, here is Ms Casarini's excellent article — Professor W Marshall Leach (obituary) smilingly approves!

Last weekend I was in Paris to meet a friend and we decided to visit the Musée des Arts et Métiers, which was hosting a very nice exposition on Claude Shannon (1916–2001) titled "Le magicien des codes." It is impressive to see how much Shannon contributed to the world of communication, as it seems that almost everything we do to communicate today, from writing a message on Facebook to calling a friend on Skype on the other end of the world, or paying the bill with our bank card, is the result of what he discovered years ago. Shannon is considered to be the father of information theory, and a good amount of his work is included in his book "A Mathematical Theory of Communication." Here's a documentary on his life and impact from University of California Television.

Among other things he influenced the field of Boolean algebra; contributed to the concepts of digitization and compression, which today allow us to stream videos; and worked on speech encryption together with Alan Turing. He also in some way anticipated the ideas behind machine learning by building Theseus, an electrically controlled mouse that can find its way home by trial and error, as explained by Shannon himself in this video:

Well, of course, Shannon was also helped by a fascinating and inspiring working environment: Bell Labs. The north-central New Jersey Bell Labs [in fierce competition with RCA's nearby David Sarnoff Research Center in Princeton ~DLS] contributed with Shannon to the development of Information Theory, and also to the invention of the transistor, the LASER, Unix, C, C++… Just think that eight Nobel Prizes have been won by engineers & scientists working there.

[Editor's note: Back in the day, competition was indeed fierce between Bell Labs and RCA's Sarnoff Labs in Princeton, which wasn't chopped liver, either. The first time I walked into the lobby, in 1980, the 1956 Emmy Award for Color TV was there on a pedestal. As you can see from their 70+ year timeline, other inventions, many in support of their NBC division, included the way to display computer characters on a screen, the LCD display you are reading this on, videotape, and so much more (and that doesn't count the AEGIS missile defense system developed at Moorestown, the VCR, the color TV camera, the videodisk, and so much more at Camden). In between Sarnoff Labs and Bell Labs is the town of Menlo Park, New Jersey, home of the original engineering laboratory of Thomas Edison — take that, Silicon Valley Millennials!]
Nyquist is without doubt well known to the DSP and audio engineering community for the development of the Nyquist–Shannon sampling theorem, which explains aliasing and defines the Nyquist sampling rate. He was already quite revered at Bell Labs though, as testified by point 8 of this picture:

THE DAILY LIFE AT BELL LABS IS…
- 20 to 30% of work time has to be devoted to a free project, chosen by the employee.
- Smart dress code is required.
- There are no personal profits: all the patents are sold for $1 to Bell Labs.
- The leaders have a role of advisors: they cannot interfere in the work of their subordinates.
- It is forbidden to flirt with the secretaries!
- It is allowed to collaborate without referring to one's superior.
- Everyone has to give a scientific explanation to anyone who demands it.
- Whoever produces the highest number of patents has the right to have lunch with Harry Nyquist, one of the stars of our Lab!
- The employees are loyal to Bell Labs: one stays there for all his or her career.
- Work schedules are often substantial.

While dreaming of a lunch with Harry Nyquist, let's then dive into the world of aliasing. In order to record and digitize an analog signal we need to sample it. Sampling means that we check the value of that signal, we store it, and then we check it again after a certain period of time. We do not have any information regarding what happens between samples. If our sampling rate is not high enough in comparison to the highest frequency contained in our analog signal, we may experience aliasing. "Aliasing" comes from the Latin word "alias," meaning "otherwise" or "under another name": an undersampled signal reappears under the identity of a different, lower frequency.

ALIASING EXAMPLE WITH MATLAB:

Let's simulate a continuous signal sc composed of a simple sinusoid at a frequency f of 1000 Hz (I say "simulate" because of course nothing will ever be continuous in MATLAB).

% Number of points
NP = 50000;
% Signal length (s) - increase tmax if you want to hear a longer signal
% (but you will have a more dense graph)
tmin = 0;
tmax = 0.01;
% Time axis
tc = linspace(tmin, tmax, NP);
% Frequency of the signal (Hz)
f = 1000;
sc = sin(2*pi*f*tc);

And let's plot it. Now, if we sample it at 44,100 samples/sec, which is the most common sample rate for audio, we can clearly see that we have enough points to represent the signal.

% Sample rate (samples/sec)
Fs1 = 44100;
% Sampling period
Ts1 = 1/Fs1;
% Time line
t1 = tmin:Ts1:tmax;
s1 = sin(2*pi*f*t1);

In MATLAB you can also play the signal to listen to the frequency:

% Playing the resulting frequency
soundsc(s1, Fs1);

You can listen to it here:

Let's now sample at a lower sample rate, for example 1500 samples/sec.

% Sample rate (samples/sec)
Fs2 = 1500;
% Sampling period
Ts2 = 1/Fs2;
% Time line
t2 = tmin:Ts2:tmax;
s2 = sin(2*pi*f*t2);

We can notice that there are not enough points to represent the signal: there are many signals that could be represented by sinusoids passing through these orange points! It could be a signal at 500 Hz, 1000 Hz, 2000 Hz, 2500 Hz, and so on. In this case the 1000 Hz original signal will be aliased and we will hear a 500 Hz signal, because 500 Hz is lower than the Nyquist frequency (1500 samples/sec ÷ 2 = 750 Hz). If you listen to the signal with soundsc you will indeed hear a 500 Hz frequency. You can also listen to it here:

You can download or have a look at the MATLAB .m file on aliasing that I created here. Just to give you now a more musical example, I took the most famous bit of Der Hölle Rache and created an aliased version of it. As you probably know, the highest note in this aria is an F6, which corresponds to a frequency of 1396.91 Hz.
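(An editors' aside, not part of Casarini's original script: for a pure sinusoid you can predict the aliased frequency directly, without plotting, by folding the tone onto the nearest multiple of the sample rate. A minimal MATLAB sketch of that idea, using the same naming style as the code above:)

% Predicted alias of a pure tone at frequency f (Hz) sampled at Fs (samples/sec):
% fold f onto the nearest multiple of Fs and take the distance to it;
% the result always lands in the band 0 .. Fs/2.
alias_of = @(f, Fs) abs(f - Fs .* round(f ./ Fs));

alias_of(1000, 44100)     % returns 1000  -> no aliasing at the CD rate
alias_of(1000, 1500)      % returns 500   -> the aliased tone heard above
alias_of(1396.91, 2756)   % returns ~1359 -> the aliased F6 discussed next

The last line predicts the figure quoted in the aria example that follows.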
By sampling at a rate of 2756 samples/sec, which is lower than the Nyquist rate for this note, the F6 will be aliased. We'll hear a frequency of 1359 Hz instead, which already sounds a bit like an E6, a nightmare for singers trying to sing this aria!

Here you can listen to the original 44,100 samples/sec version: And here you can enjoy (or probably not) the aliased version:

DISCLAIMER: This may be really painful to hear if you love this piece!

Conclusion: choose your sampling rate carefully when recording! You can download the MATLAB script used to obtain the aliased version of Der Hölle Rache. (Edda Moser (s), Queen of the Night)

About the author: Cecilia Casarini is a PhD researcher in the Centre for Ultrasonic Engineering at the University of Strathclyde, Glasgow, Scotland. At this time she is working on 3D printed acoustic metamaterials, and her research interests are in general acoustics, bioinspired technologies, auditory systems, hearing aids, psychoacoustics, spatial & binaural audio, DSP, and speech and language processing. Her first encounter with the world of acoustics was during her years spent studying piano at the Conservatory. She also followed some courses on organology related to the history and the acoustics of musical instruments and to the physics of sound. It was only some years later, when she moved to Scotland, that she realized what an amazing subject acoustics actually is. She enrolled in the MSc in Acoustics and Music Technology at the University of Edinburgh, where she attended course subjects including acoustics, digital signal processing, speech and language processing, and automatic speech recognition. In her final project she built a MATLAB model which simulated the phenomenon of otoacoustic emissions (OAEs), and also measured them using dedicated equipment and LabVIEW. While editing and further researching this article, we stumbled across her fascinating basilar membrane oscillation simulation, which in itself is worth a look:

Ms Casarini publishes the new "It's Acoustics Time!" blog; and you can find her original version of this article here.
General Yeast Resources
- General Yeast Topics describes the yeast model organisms: Saccharomyces cerevisiae (budding, bakers', and sometimes brewers'), Schizosaccharomyces pombe (fission), and Candida albicans. Includes information for non-specialists and teachers.
- BioSci Yeast Archives search the Yeast Biosci Newsgroup

Nucleic Acid Data Resources
- DDBJ sequence repository at Mishima, Japan
- GenePalette software application, freely available to academic users, for visualizing annotated features and other sequence elements in GenBank sequences
- OriDB catalog of confirmed and predicted DNA replication origin sites, currently limited to S. cerevisiae
- PROSPECT: Promoter Inspection Tool explore promoter regions by searching for genes or a sequence. Also contains a list of yeast transcription factor binding sites.
- Regulatory Sequence Analysis Tools search for regulatory signals in the non-coding sequences of S. cerevisiae, S. pombe, and other organisms
- SCPD: The Promoter Database of Saccharomyces cerevisiae explore the promoter regions of all ORFs in the yeast genome. Note that this web site is not exhaustive, and some of the information is outdated.
- Search for conserved patterns in Saccharomyces spp. search tool for identification and analysis of conserved patterns in Saccharomyces promoters. For more information, please refer to Kohli DK, et al (2004) In Silico Biol 4(3):0034
- Yeast Introns yeast intron data from the Ares Lab, U.C. Santa Cruz
- Yeast snoRNAs yeast small nucleolar RNA (snoRNA) data from Dmitry A. Samarsky and Maurille J. Fournier, Univ. of Massachusetts
- Yeast tRNAs yeast tRNA information from Todd Lowe, UC-Santa Cruz

Genome and Protein Resources
- Yeast Genome Directory A collection of papers describing the sequencing of each chromosome of the S. cerevisiae genome
- Structural assignments of proteins This website provides structural assignments to protein sequences at the superfamily level. You can browse an overview of all the superfamilies currently identified in S. cerevisiae; the site also provides several different ways to search this information. See Gough et al. for more information.
- Genome Sequence Center: BLAST Server at the Washington University GSC in St. Louis. BLAST searches may be performed against genomic sequences from five Saccharomyces species: S. mikatae, S. kudriavzevii, S. bayanus, S. castellii, and S. kluyveri.
- Yeast Project at MIPS yeast information from the Munich Information Center for Protein Sequences (MIPS)
- Yeast Proteome Database (YPD) yeast protein information from Incyte Genomics, Inc. Access to this resource requires a subscription.
- Genome-Wide Protein Function Prediction Links between functionally related yeast proteins are used to predict functions for about two thirds of all predicted yeast proteins in Marcotte et al. (1999) Nature 402:83-86.
- TRIPLES - a database of transposon-insertion phenotypes, localization and expression Yeast transposon tagging data from the Yale Genome Analysis Center.
- Yeast Protein Linkage Map The Fields lab's systematic Two-Hybrid project. Results are described in Uetz et al. (2000) Nature 403:623-627.
- General Repository for Interaction Datasets (BioGRID) a database of genetic and physical interactions. BioGRID, developed in Mike Tyers' group, contains interaction data from many sources, including both small-scale and genome/proteome-wide studies, the MIPS database, and BIND.
- Osprey: A Network Visualization System a powerful application for graphically representing physical and genetic biological interactions. It provides many features that are helpful in analysis of interaction data. Osprey is also coupled with the General Repository of Interaction Datasets (BioGRID) and as a result brings a large dataset of interactions to your fingertips. - Yeast Resource Center Public Data Repository provides protein searching from multiple yeast databases and provides experimental data from mass spectrometry, yeast two-hybrid, fluorescence microscopy, protein structure prediction and protein complex predictions for S. cerevisiae proteins. Please see Riffle et al. (2005) Nucleic Acids Res 33(Database issue):D378-82 for more information. - The Global Proteome Machine Database (GPMDB) a database of mass spectrometry based proteomics information, populated by the general proteomics community. - Yeast Gene Order Browser a tool used to visualize the syntenic context of any gene from S. cerevisiae, S. castellii, C. glabrata, A. gossypii, K. lactis, K. waltii, and S. kluyveri. This tool was developed by Kevin Byrne and Ken Wolfe (Trinity College, Dublin, Ireland), as described in Byrne and Wolfe. - YOGY:eukarYotic OrtholoGY a tool to view orthologous proteins from eukaryotic orgranisms (Homo sapiens, Mus musculus, Rattus norvegicus, Arabidopsis thaliana, Drosophila melanogaster, Caenorhabditis elegans, Plasmodium falciparum, Schizosaccharomyces pombe, and Saccharomyces cerevisiae). This tool provides information from KOGs, Inparanoid, Homologene, OrthoMCL and manually curated orthologs between S. cerevisiae and S. pombe. This tool was developed by Penkett CJ, Morris JA, Wood V, and Bahler J (Wellcome Trust Sanger Institute, Cambridge, UK). - iProto Yeast an iPhone application containing proteome information for several different Saccharomyces and Schizosaccharomyces strains - Yeast Nucleosome Positions a compiled and systematic reference map of nucleosome positions across the S. cerevisiae genome - InterologFinder A database of conserved and predicted novel protein-protein interactions (interologs or interologues) in human, mouse, fly, worm, and yeast based on interaction of orthologous proteins found in the other organisms. For more information please see Wiles et al., (2010) BMC Syst Biol. Mar 30;4:36.. Expression Data Resources - YEASTRACT (Yeast Search for Transcriptional Regulators And Consensus Tracking) A curated database of regulatory associations between transcription factors and their target genes, and information on transcription factor binding sites (see Monteiro et al.</a, (2008), Nucleic Acids Research 2008 36(Database issue):D132-D136). - SCEPTRANS A website for visualizing and studying periodic transcription in yeast (see Kudlicki et al. (2007) Bioinformatics) - Yeast Cell Cycle Analysis Projects Comprehensive identification of genes whose mRNA levels are regulated by the cell cycle; accompanies Spellman et al. (1998) Mol Cell Biol 9:3273-3297. These data were re-analyzed in: (1) Zhao LP, et al. (2001) Statistical modeling of large microarray data sets to identify stimulus-response profiles. Proc Natl Acad Sci U S A 98(10):5631-6. View data, and (2) Alter O, Brown PO, Botstein D (2000) Singular value decomposition for genome-wide expression data processing and modeling. Proc Natl Acad Sci U S A 97(18):10101-6. View data. 
- Periodic Genes of the Yeast Saccharomyces cerevisiae A combined analysis of five cell cycle microarray data sets has been published in G&D by Pramila et al. (in press). This includes access to three new expression data sets generated using spotted cDNA arrays after sampling alpha factor synchronized cells across a time-course. Two of these data sets and three additional data sets from the public domain have been analyzed using a permutation based method published by de Lichtenberg et al. (2005) Bioinformatics 21:1164-71 to calculate a weighted average peak time base. This interactive resource allows users to sort and filter data, as well as access expression plots and heat maps.
- Yeast Microarray Global Viewer a database containing most of the published yeast microarray expression datasets as described in Marc et al. (2001) Nucleic Acids Res 29(13):e63
- PAM: Prediction Analysis for Microarrays a software tool for interpretation of microarray expression datasets
- GenMAPP a Visual Basic application that displays expression data on biochemical and cellular pathways as well as groups of genes. MAPPFinder is a related tool that integrates GO annotations with GenMAPP to create a global expression profile (see Doniger et al. (2003) Genome Biology 4:R7)
- Yeast SAGE homepage for Serial Analysis of Gene Expression project at Johns Hopkins University

Localization Data Resources
- Yeast Resource Center Public Image Repository (YRC PIR) Very large repository of images and metadata from fluorescence microscopy experiments, including localization, colocalization and FRET experiments. Mostly contains data from S. cerevisiae, but also currently includes some S. pombe. Submission of data is welcome and encouraged. Please see Riffle et al (2010) for more information.
- Yeast Protein Localization database (YPL.db) S. cerevisiae protein localization data from the University of Graz. Users may also submit images.
- TRIPLES - a database of transposon-insertion phenotypes, localization and expression Yeast transposon tagging data from the Yale Genome Analysis Center.
- Yeast Membrane Protein Library A collection of polytopic membrane protein sequences (containing two or more predicted membrane spanning domains) from S. cerevisiae
- Yeast GFP Fusion Localization Database S. cerevisiae protein localization data from the laboratories of Erin O'Shea and Jonathan Weissman at the University of California San Francisco hosted by SGD.

Phenotype Data Resources
- Agria Triterpene Glycoside (TTG) Phenotype Query Page A chemical genomics phenotype database to query for phenotypes of yeast deletion strains grown in the presence of triterpene glycosides (TTGs) from the cactus Stenocereus gummosus (common name agria). Pre-publication access provided by Scott Erdman at Syracuse University.
- The Saccharomyces Cerevisiae Morphological Database (SCMD) A collection of micrographs of budding yeast mutants. For more information, please refer to Saito et al. (2004) Nucleic Acids Res 32:D319-22.
- PROPHECY A database that provides quantitative information regarding growth rate, growth efficiency, and adaptation time for haploid deletion strains. For more information, please refer to Warringer et al. (2003) Proc Natl Acad Sci USA 100:15724-9.
- GenAge GenAge provides a compiled list of genes associated with aging and longevity in yeast and other model organisms.
Additional Yeast Research Resources
- Yeast Deletion Strains Deletion strains created by the Saccharomyces Genome Deletion Project are available via ATCC (online catalog available), Open Biosystems, and Invitrogen (Research Genetics). The Community Posting Page provided by the Saccharomyces Genome Deletion Project enables users of the mutant collection to share information about the collection. EUROSCARF makes deletion strains available from the EUROFAN projects.
- NCRR Yeast Resource Center Information about services offered by the NCRR Yeast Resource Center at the University of Washington in Seattle.
- The Saccharomyces Genome Resequencing Project (SGRP) A collaboration between the Sanger Institute and Prof. Ed Louis' group at the Institute of Genetics, University of Nottingham to analyze sequences from multiple strains of S. cerevisiae and S. paradoxus.
- Yeast Systems Biology Network Promotes the study of S. cerevisiae systems biology by facilitating cooperation between experimental scientists and theoreticians, generating quantitative data, and developing new resources. Download YSBN's brochure (http://www.yeastgenome.org/community/ysbn.pdf) for more information and contacts.
- Addgene Repository and distributor of plasmid constructs described in published literature. Search for a plasmid or deposit your plasmid to the repository.
- SGD Tutorial Developed by OpenHelix, this online tutorial describes navigation of SGD and many features of the database.
- Gene Ontology Gene Ontology (GO) Consortium home page
- KEGG metabolic reactions and pathways from Kyoto University, Kyoto, Japan
- Molecular Biology resources from CMS. An impressive set of hyperlinks with a great presentation on all aspects of molecular biology, biotechnology, molecular evolution, biochemistry, and biomolecular modeling.
- WWW Virtual Library: Biosciences provided by the Harvard Biolabs
- Microbes.info The Microbiology Information Portal. This site contains resources, news, and information about many different aspects of microbiology. Its General Microbiology section contains links to various databases, culture collections, genetic analysis sites, and method sites.
- Encyclopedia of Life Sciences a collection of articles on a wide variety of biological topics, from the Nature Publishing Group
- Yahoo list of WWW biological information resources
- Scitable A free library providing overviews of key science concepts, with a focus on genetics, compiled by the Nature Publishing Group.
An organic light-emitting diode (OLED) is a light-emitting diode (LED) in which the emissive electroluminescent layer is a film of organic compound that emits light in response to an electric current. This layer of organic semiconductor is situated between two electrodes; typically, at least one of these electrodes is transparent. OLEDs are used to create digital displays in devices such as television screens, computer monitors, portable systems such as mobile phones, handheld game consoles and PDAs. A major area of research is the development of white OLED devices for use in solid-state lighting applications.

There are two main families of OLED: those based on small molecules and those employing polymers. Adding mobile ions to an OLED creates a light-emitting electrochemical cell (LEC) which has a slightly different mode of operation. An OLED display can be driven with a passive-matrix (PMOLED) or active-matrix (AMOLED) control scheme. In the PMOLED scheme, each row (and line) in the display is controlled sequentially, one by one, whereas AMOLED control uses a thin-film transistor backplane to directly access and switch each individual pixel on or off, allowing for higher resolution and larger display sizes.

An OLED display works without a backlight; thus, it can display deep black levels and can be thinner and lighter than a liquid crystal display (LCD). In low ambient light conditions (such as a dark room), an OLED screen can achieve a higher contrast ratio than an LCD, regardless of whether the LCD uses cold cathode fluorescent lamps or an LED backlight.

- 1 History
- 2 Working principle
- 3 Carrier balance
- 4 Material technologies
- 5 Device architectures
- 6 Fabrication
- 7 Advantages
- 8 Disadvantages
- 9 Manufacturers and commercial uses
- 10 Research
- 11 See also
- 12 References
- 13 Further reading
- 14 External links

History

André Bernanose and co-workers at the Nancy-Université in France made the first observations of electroluminescence in organic materials in the early 1950s. They applied high alternating voltages in air to materials such as acridine orange, either deposited on or dissolved in cellulose or cellophane thin films. The proposed mechanism was either direct excitation of the dye molecules or excitation of electrons.

In 1960 Martin Pope and some of his co-workers at New York University developed ohmic dark-injecting electrode contacts to organic crystals. They further described the necessary energetic requirements (work functions) for hole and electron injecting electrode contacts. These contacts are the basis of charge injection in all modern OLED devices. Pope's group also first observed direct current (DC) electroluminescence under vacuum on a single pure crystal of anthracene and on anthracene crystals doped with tetracene in 1963 using a small area silver electrode at 400 volts. The proposed mechanism was field-accelerated electron excitation of molecular fluorescence. Pope's group reported in 1965 that in the absence of an external electric field, the electroluminescence in anthracene crystals is caused by the recombination of a thermalized electron and hole, and that the conducting level of anthracene is higher in energy than the exciton energy level. Also in 1965, W. Helfrich and W. G.
Schneider of the National Research Council in Canada produced double injection recombination electroluminescence for the first time in an anthracene single crystal using hole and electron injecting electrodes, the forerunner of modern double-injection devices. In the same year, Dow Chemical researchers patented a method of preparing electroluminescent cells using high-voltage (500–1500 V) AC-driven (100–3000 Hz) electrically insulated one millimetre thin layers of a melted phosphor consisting of ground anthracene powder, tetracene, and graphite powder. Their proposed mechanism involved electronic excitation at the contacts between the graphite particles and the anthracene molecules.

Roger Partridge made the first observation of electroluminescence from polymer films at the National Physical Laboratory in the United Kingdom. The device consisted of a film of poly(N-vinylcarbazole) up to 2.2 micrometers thick located between two charge injecting electrodes. The results of the project were patented in 1975 and published in 1983.

The first practical OLEDs

Hong Kong-born American physical chemist Ching W. Tang and his co-worker Steven Van Slyke at Eastman Kodak built the first practical OLED device in 1987. This was a revolution for the technology. This device used a novel two-layer structure with separate hole transporting and electron transporting layers such that recombination and light emission occurred in the middle of the organic layer; this resulted in a reduction in operating voltage and improvements in efficiency. Research into polymer electroluminescence culminated in 1990 with J. H. Burroughes et al. at the Cavendish Laboratory in Cambridge reporting a high efficiency green light-emitting polymer based device using 100 nm thick films of poly(p-phenylene vinylene).

Working principle

A typical OLED is composed of a layer of organic materials situated between two electrodes, the anode and cathode, all deposited on a substrate. The organic molecules are electrically conductive as a result of delocalization of pi electrons caused by conjugation over part or all of the molecule. These materials have conductivity levels ranging from insulators to conductors, and are therefore considered organic semiconductors. The highest occupied and lowest unoccupied molecular orbitals (HOMO and LUMO) of organic semiconductors are analogous to the valence and conduction bands of inorganic semiconductors.

Originally, the most basic polymer OLEDs consisted of a single organic layer. One example was the first light-emitting device synthesised by J. H. Burroughes et al., which involved a single layer of poly(p-phenylene vinylene). However, multilayer OLEDs can be fabricated with two or more layers in order to improve device efficiency. As well as conductive properties, different materials may be chosen to aid charge injection at electrodes by providing a more gradual electronic profile, or block a charge from reaching the opposite electrode and being wasted. Many modern OLEDs incorporate a simple bilayer structure, consisting of a conductive layer and an emissive layer. More recent developments in OLED architecture improve quantum efficiency (up to 19%) by using a graded heterojunction. In the graded heterojunction architecture, the composition of hole and electron-transport materials varies continuously within the emissive layer with a dopant emitter.
The graded heterojunction architecture combines the benefits of both conventional architectures by improving charge injection while simultaneously balancing charge transport within the emissive region. During operation, a voltage is applied across the OLED such that the anode is positive with respect to the cathode. Anodes are picked based upon the quality of their optical transparency, electrical conductivity, and chemical stability. A current of electrons flows through the device from cathode to anode, as electrons are injected into the LUMO of the organic layer at the cathode and withdrawn from the HOMO at the anode. This latter process may also be described as the injection of electron holes into the HOMO. Electrostatic forces bring the electrons and the holes towards each other and they recombine forming an exciton, a bound state of the electron and hole. This happens closer to the emissive layer, because in organic semiconductors holes are generally more mobile than electrons. The decay of this excited state results in a relaxation of the energy levels of the electron, accompanied by emission of radiation whose frequency is in the visible region. The frequency of this radiation depends on the band gap of the material, in this case the difference in energy between the HOMO and LUMO. As electrons and holes are fermions with half integer spin, an exciton may either be in a singlet state or a triplet state depending on how the spins of the electron and hole have been combined. Statistically three triplet excitons will be formed for each singlet exciton. Decay from triplet states (phosphorescence) is spin forbidden, increasing the timescale of the transition and limiting the internal efficiency of fluorescent devices. Phosphorescent organic light-emitting diodes make use of spin–orbit interactions to facilitate intersystem crossing between singlet and triplet states, thus obtaining emission from both singlet and triplet states and improving the internal efficiency. Indium tin oxide (ITO) is commonly used as the anode material. It is transparent to visible light and has a high work function which promotes injection of holes into the HOMO level of the organic layer. A typical conductive layer may consist of PEDOT:PSS as the HOMO level of this material generally lies between the work function of ITO and the HOMO of other commonly used polymers, reducing the energy barriers for hole injection. Metals such as barium and calcium are often used for the cathode as they have low work functions which promote injection of electrons into the LUMO of the organic layer. Such metals are reactive, so they require a capping layer of aluminium to avoid degradation. Experimental research has proven that the properties of the anode, specifically the anode/hole transport layer (HTL) interface topography plays a major role in the efficiency, performance, and lifetime of organic light emitting diodes. Imperfections in the surface of the anode decrease anode-organic film interface adhesion, increase electrical resistance, and allow for more frequent formation of non-emissive dark spots in the OLED material adversely affecting lifetime. Mechanisms to decrease anode roughness for ITO/glass substrates include the use of thin films and self-assembled monolayers. Also, alternative substrates and anode materials are being considered to increase OLED performance and lifetime. 
Possible examples include single crystal sapphire substrates treated with gold (Au) film anodes, yielding lower work functions, operating voltages, and electrical resistance values, and increasing the lifetime of OLEDs. Single carrier devices are typically used to study the kinetics and charge transport mechanisms of an organic material and can be useful when trying to study energy transfer processes. As current through the device is composed of only one type of charge carrier, either electrons or holes, recombination does not occur and no light is emitted. For example, electron only devices can be obtained by replacing ITO with a lower work function metal which increases the energy barrier of hole injection. Similarly, hole only devices can be made by using a cathode made solely of aluminium, resulting in an energy barrier too large for efficient electron injection.

Carrier balance

Balanced charge injection and transfer are required to get high internal efficiency, pure emission from the luminescent layer without contaminating emission from the charge-transporting layers, and high stability. A common way to balance charge is optimizing the thickness of the charge-transporting layers, but this is hard to control. Another way is to use an exciplex. An exciplex formed between hole-transporting (p-type) and electron-transporting (n-type) side chains localizes electron-hole pairs. Energy is then transferred to the luminophore, providing high efficiency. One example of this approach is grafting oxadiazole and carbazole side units onto a red diketopyrrolopyrrole-doped copolymer main chain, which showed improved external quantum efficiency and color purity in an un-optimized OLED.

Material technologies

Efficient OLEDs using small molecules were first developed by Dr. Ching W. Tang et al. at Eastman Kodak. The term OLED traditionally refers specifically to this type of device, though the term SM-OLED is also in use. Molecules commonly used in OLEDs include organometallic chelates (for example Alq3, used in the organic light-emitting device reported by Tang et al.), fluorescent and phosphorescent dyes and conjugated dendrimers. A number of materials are used for their charge transport properties, for example triphenylamine and derivatives are commonly used as materials for hole transport layers. Fluorescent dyes can be chosen to obtain light emission at different wavelengths, and compounds such as perylene, rubrene and quinacridone derivatives are often used. Alq3 has been used as a green emitter, electron transport material and as a host for yellow and red emitting dyes.

The production of small molecule devices and displays usually involves thermal evaporation in a vacuum. This makes the production process more expensive, and of more limited use for large-area devices, than other processing techniques. However, contrary to polymer-based devices, the vacuum deposition process enables the formation of well controlled, homogeneous films, and the construction of very complex multi-layer structures. This high flexibility in layer design, enabling distinct charge transport and charge blocking layers to be formed, is the main reason for the high efficiencies of the small molecule OLEDs.

Coherent emission from a laser dye-doped tandem SM-OLED device, excited in the pulsed regime, has been demonstrated. The emission is nearly diffraction limited with a spectral width similar to that of broadband dye lasers.

Researchers report luminescence from a single polymer molecule, representing the smallest possible organic light-emitting diode (OLED) device.
With such single-molecule devices, scientists will be able to optimize substances to produce more powerful light emissions. Finally, this work is a first step towards making molecule-sized components that combine electronic and optical properties; similar components could form the basis of a molecular computer.

Polymer light-emitting diodes (PLED), also called light-emitting polymers (LEP), involve an electroluminescent conductive polymer that emits light when connected to an external voltage. They are used as a thin film for full-spectrum colour displays. Polymer OLEDs are quite efficient and require a relatively small amount of power for the amount of light produced. Vacuum deposition is not a suitable method for forming thin films of polymers. However, polymers can be processed in solution, and spin coating is a common method of depositing thin polymer films. This method is better suited to forming large-area films than thermal evaporation. No vacuum is required, and the emissive materials can also be applied on the substrate by a technique derived from commercial inkjet printing. However, as the application of subsequent layers tends to dissolve those already present, formation of multilayer structures is difficult with these methods. The metal cathode may still need to be deposited by thermal evaporation in vacuum. An alternative to vacuum deposition is to deposit a Langmuir-Blodgett film.

Typical polymers used in PLED displays include derivatives of poly(p-phenylene vinylene) and polyfluorene. Substitution of side chains onto the polymer backbone may determine the colour of emitted light or the stability and solubility of the polymer, affecting performance and ease of processing. While unsubstituted poly(p-phenylene vinylene) (PPV) is typically insoluble, a number of PPVs and related poly(naphthalene vinylene)s (PNVs) that are soluble in organic solvents or water have been prepared via ring-opening metathesis polymerization. These water-soluble polymers, or conjugated polyelectrolytes (CPEs), can also be used as hole injection layers, alone or in combination with nanoparticles like graphene.

Phosphorescent organic light-emitting diodes use the principle of electrophosphorescence to convert electrical energy in an OLED into light in a highly efficient manner, with the internal quantum efficiencies of such devices approaching 100%. Typically, a polymer such as poly(N-vinylcarbazole) is used as a host material to which an organometallic complex is added as a dopant. Iridium complexes such as Ir(mppy)3 are currently the focus of research, although complexes based on other heavy metals such as platinum have also been used. The heavy metal atom at the centre of these complexes exhibits strong spin-orbit coupling, facilitating intersystem crossing between singlet and triplet states. By using these phosphorescent materials, both singlet and triplet excitons are able to decay radiatively, improving the internal quantum efficiency of the device compared to a standard PLED, where only the singlet states contribute to the emission of light.

Applications of OLEDs in solid-state lighting require the achievement of high brightness with good CIE coordinates (for white emission). The use of macromolecular species like polyhedral oligomeric silsesquioxanes (POSS) in conjunction with phosphorescent species such as Ir for printed OLEDs has exhibited brightnesses as high as 10,000 cd/m2.
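A small sketch of how the spin statistics discussed earlier feed into device efficiency: external quantum efficiency (EQE) is commonly factorised as charge balance × fraction of emissive excitons × radiative yield × out-coupling. Harvesting triplets through phosphorescence raises the emissive-exciton fraction from roughly 25% to roughly 100%; the remaining factor values below are assumed, illustrative numbers only.

```python
# Minimal sketch of the standard EQE factorisation
#   EQE = gamma * eta_exciton * q * eta_out
# gamma:        charge-balance factor
# eta_exciton:  fraction of excitons allowed to emit (~0.25 fluorescent,
#               ~1.0 when triplets are harvested by phosphorescence)
# q:            radiative yield of the emitter (assumed)
# eta_out:      out-coupling efficiency (assumed)

def eqe(gamma: float, eta_exciton: float, q: float, eta_out: float) -> float:
    return gamma * eta_exciton * q * eta_out

fluorescent    = eqe(gamma=1.0, eta_exciton=0.25, q=0.9, eta_out=0.2)
phosphorescent = eqe(gamma=1.0, eta_exciton=1.00, q=0.9, eta_out=0.2)

print(f"fluorescent    EQE ~ {fluorescent:.1%}")    # ~4.5%
print(f"phosphorescent EQE ~ {phosphorescent:.1%}")  # ~18%
```

The roughly fourfold gap between the two cases mirrors the 25% versus ~100% exciton utilisation described above, all other factors being equal.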
- Bottom or top emission - The bottom or top distinction refers not to the orientation of the OLED display, but to the direction in which emitted light exits the device. OLED devices are classified as bottom-emission devices if the emitted light passes through the transparent or semi-transparent bottom electrode and the substrate on which the panel was manufactured. Top-emission devices are classified based on whether the emitted light exits through the lid that is added following fabrication of the device. Top-emitting OLEDs are better suited to active-matrix applications, as they can be more easily integrated with a non-transparent transistor backplane. The TFT array attached to the bottom substrate on which AMOLEDs are manufactured is typically non-transparent, resulting in considerable blockage of transmitted light if the device followed a bottom-emitting scheme.
- Transparent OLEDs - Transparent OLEDs use transparent or semi-transparent contacts on both sides of the device to create displays that can be made both top- and bottom-emitting (transparent). TOLEDs can greatly improve contrast, making it much easier to view displays in bright sunlight. This technology can be used in head-up displays, smart windows or augmented reality applications.
- Graded heterojunction - Graded heterojunction OLEDs gradually decrease the ratio of electron holes to electron-transporting chemicals. This results in almost double the quantum efficiency of existing OLEDs.
- Stacked OLEDs - Stacked OLEDs use a pixel architecture that stacks the red, green, and blue subpixels on top of one another instead of next to one another, leading to a substantial increase in gamut and color depth and greatly reducing pixel gap. Other display technologies currently have the RGB (and RGBW) pixels mapped next to each other, decreasing potential resolution.
- Inverted OLED - In contrast to a conventional OLED, in which the anode is placed on the substrate, an inverted OLED uses a bottom cathode that can be connected to the drain end of an n-channel TFT, especially for the low-cost amorphous silicon TFT backplanes useful in the manufacturing of AMOLED displays.

Patternable organic light-emitting devices use a light- or heat-activated electroactive layer. A latent material (PEDOT-TMA) is included in this layer that, upon activation, becomes highly efficient as a hole injection layer. Using this process, light-emitting devices with arbitrary patterns can be prepared. Colour patterning can be accomplished by means of a laser, as in radiation-induced sublimation transfer (RIST). Organic vapour jet printing (OVJP) uses an inert carrier gas, such as argon or nitrogen, to transport evaporated organic molecules (as in organic vapour phase deposition). The gas is expelled through a micrometre-sized nozzle or nozzle array close to the substrate as it is being translated, which allows printing of arbitrary multilayer patterns without the use of solvents. Conventional OLED displays are formed by vapor thermal evaporation (VTE) and are patterned by a shadow mask: a mechanical mask has openings that allow the vapor to pass only at the desired locations. Like inkjet material deposition, inkjet etching (IJE) deposits precise amounts of solvent onto a substrate designed to selectively dissolve the substrate material and induce a structure or pattern. Inkjet etching of polymer layers in OLEDs can be used to increase the overall out-coupling efficiency.
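To put a rough number on the out-coupling problem that inkjet etching targets (the trapping mechanism itself is explained in the next paragraph): for a flat emitting layer of refractive index n, only rays inside the critical cone escape into air, and a common isotropic-emitter estimate puts the escaping fraction near 1/(2n²). The refractive indices below are typical assumed values, not data from this text.

```python
# Sketch of the classical out-coupling limit for a flat layer of refractive
# index n: critical angle from Snell's law and the ~1/(2*n^2) estimate for
# the fraction of isotropically emitted light that escapes into air.
import math

def critical_angle_deg(n: float) -> float:
    return math.degrees(math.asin(1.0 / n))

def outcoupled_fraction(n: float) -> float:
    return 1.0 / (2.0 * n * n)

for name, n in [("organic layer (n ~ 1.7, assumed)", 1.7),
                ("glass substrate (n ~ 1.5, assumed)", 1.5)]:
    print(f"{name}: critical angle ~{critical_angle_deg(n):.0f} deg, "
          f"escape fraction ~{outcoupled_fraction(n):.0%}")
```

With typical indices this estimate leaves roughly 80% of the generated light trapped in guided modes, which is why structuring the layers to scatter light out is worthwhile.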
In OLEDs, light produced in the emissive layers is partially transmitted out of the device and partially trapped inside the device by total internal reflection (TIR). This trapped light is wave-guided along the interior of the device until it reaches an edge, where it is dissipated by either absorption or emission. Inkjet etching can be used to selectively alter the polymeric layers of OLED structures to decrease overall TIR and increase the out-coupling efficiency of the OLED. Compared to a non-etched polymer layer, the structured polymer layer produced by the IJE process helps to decrease the TIR of the OLED device. IJE solvents are commonly organic rather than water-based, due to their non-acidic nature and their ability to dissolve materials effectively at temperatures below the boiling point of water.

For a high-resolution display such as a TV, a TFT backplane is necessary to drive the pixels correctly. Currently, low-temperature polycrystalline silicon (LTPS) thin-film transistors (TFTs) are used for commercial AMOLED displays. LTPS-TFTs show performance variation across a display, so various compensation circuits have been reported. Because of the size limitation of the excimer laser used for LTPS, AMOLED panel size was limited; to cope with this hurdle, amorphous-silicon/microcrystalline-silicon backplanes have been reported, with large display prototype demonstrations.

Transfer-printing is an emerging technology for assembling large numbers of parallel OLED and AMOLED devices efficiently. It takes advantage of standard metal deposition, photolithography, and etching to create alignment marks, commonly on glass or other device substrates. Thin polymer adhesive layers are applied to enhance resistance to particles and surface defects. Microscale ICs are transfer-printed onto the adhesive surface and then baked to fully cure the adhesive layers. An additional photosensitive polymer layer is applied to the substrate to account for the topography caused by the printed ICs, reintroducing a flat surface. Photolithography and etching remove some polymer layers to uncover conductive pads on the ICs. Afterwards, the anode layer is applied to the device backplane to form the bottom electrode. OLED layers are applied to the anode layer with conventional vapor deposition and covered with a conductive metal electrode layer. As of 2011, transfer-printing could print onto target substrates up to 500 mm × 400 mm. This size limit needs to expand for transfer-printing to become a common process for the fabrication of large OLED/AMOLED displays.

The different manufacturing process of OLEDs lends itself to several advantages over flat panel displays made with LCD technology.
- Lower cost in the future - OLEDs can be printed onto any suitable substrate by an inkjet printer or even by screen printing, theoretically making them cheaper to produce than LCD or plasma displays. However, fabrication of the OLED substrate is currently more costly than that of a TFT LCD, until mass production methods lower costs through scalability. Roll-to-roll vapor-deposition methods for organic devices do allow mass production of thousands of devices per minute for minimal cost; however, this technique also induces problems: devices with multiple layers can be challenging to make because of registration, that is, lining up the different printed layers to the required degree of accuracy.
- Lightweight and flexible plastic substrates - OLED displays can be fabricated on flexible plastic substrates, leading to the possible fabrication of flexible organic light-emitting diodes for other new applications, such as roll-up displays embedded in fabrics or clothing. If a substrate like polyethylene terephthalate (PET) can be used, the displays may be produced inexpensively. Furthermore, plastic substrates are shatter-resistant, unlike the glass displays used in LCD devices.
- Better picture quality - OLEDs enable a greater contrast ratio and wider viewing angle compared to LCDs, because OLED pixels emit light directly. Furthermore, OLED pixel colors appear correct and unshifted even as the viewing angle approaches 90° from the normal.
- Better power efficiency and thickness - LCDs filter the light emitted from a backlight, allowing only a small fraction of the light through; thus, they cannot show true black. An inactive OLED element, by contrast, produces no light and consumes no power, allowing true blacks. Removing the backlight also makes OLEDs lighter, because some substrates are not needed. For top-emitting OLEDs, thickness also matters with respect to index match layers (IMLs): emission intensity is enhanced when the IML thickness is 1.3–2.5 nm, with the refractive index and optical matching of the IML, together with the device structure parameters, further enhancing the emission intensity at these thicknesses.
- Response time - OLEDs also have a much faster response time than LCDs. Using response time compensation technologies, the fastest modern LCDs can reach response times as low as 1 ms for their fastest color transition and are capable of refresh frequencies as high as 240 Hz. According to LG, OLED response times are up to 1,000 times faster than LCD, putting conservative estimates at under 10 μs (0.01 ms), which could theoretically accommodate refresh frequencies approaching 100 kHz (100,000 Hz). Due to their extremely fast response time, OLED displays can also easily be designed to be strobed, creating an effect similar to CRT flicker in order to avoid the sample-and-hold behavior seen on both LCDs and some OLED displays, which creates the perception of motion blur.

OLED technology also has significant drawbacks.
- Lifespan - The biggest technical problem for OLEDs has been the limited lifetime of the organic materials. One 2008 technical report on an OLED TV panel found that "After 1,000 hours the blue luminance degraded by 12%, the red by 7% and the green by 8%." In particular, blue OLEDs historically have had a lifetime of around 14,000 hours to half original brightness (five years at 8 hours a day) when used for flat-panel displays. This is lower than the typical lifetime of LCD, LED or PDP technology; each of those is currently rated for about 25,000–40,000 hours to half brightness, depending on manufacturer and model. Degradation occurs because of the accumulation of nonradiative recombination centers and luminescence quenchers in the emissive zone. The chemical breakdown in the semiconductors is thought to occur in four steps: 1) recombination of charge carriers through the absorption of UV light, 2) homolytic dissociation, 3) subsequent radical addition reactions that form π radicals, and 4) disproportionation between two radicals resulting in hydrogen-atom transfer reactions. However, some manufacturers' displays aim to increase the lifespan of OLED displays, pushing their expected life past that of LCD displays by improving light outcoupling, thus achieving the same brightness at a lower drive current.
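For a feel of what the quoted 1,000-hour degradation figures would imply, the sketch below extrapolates them to a time-to-half-brightness under a simple exponential-decay assumption. Real OLED ageing typically follows a stretched exponential and depends strongly on drive level, so this is purely an illustration of the arithmetic, not a lifetime prediction.

```python
# Back-of-the-envelope extrapolation of the quoted 1,000-hour figures
# ("blue -12%, red -7%, green -8%") to a half-brightness lifetime, under
# the simplifying assumption of exponential decay L(t) = L0 * exp(-t/tau).
import math

def half_life_hours(remaining_fraction: float, elapsed_hours: float = 1000.0) -> float:
    """Hours to reach 50% of initial luminance, assuming exponential decay."""
    tau = -elapsed_hours / math.log(remaining_fraction)
    return tau * math.log(2.0)

for colour, loss in [("blue", 0.12), ("red", 0.07), ("green", 0.08)]:
    print(f"{colour}: ~{half_life_hours(1.0 - loss):,.0f} h to half brightness")
```

Under this crude model blue again comes out shortest-lived, consistent with the blue-lifetime concerns discussed here, though the absolute numbers should not be read as predictions.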
In 2007, experimental OLEDs were created which could sustain 400 cd/m2 of luminance for over 198,000 hours for green OLEDs and 62,000 hours for blue OLEDs.
- Color balance - Additionally, as the OLED material used to produce blue light degrades significantly more rapidly than the materials that produce other colors, blue light output will decrease relative to the other colors of light. This variation in differential color output will change the color balance of the display and is much more noticeable than a decrease in overall luminance. This can be partially avoided by adjusting the color balance, but doing so may require advanced control circuits and interaction with the user, which may be unacceptable to users. More commonly, manufacturers optimize the size of the R, G and B subpixels to reduce the current density through each subpixel in order to equalize lifetime at full luminance. For example, a blue subpixel may be 100% larger than the green subpixel, while the red subpixel may be 10% smaller than the green (see the sketch following this list).
- Efficiency of blue OLEDs - Improvements to the efficiency and lifetime of blue OLEDs are vital to the success of OLEDs as replacements for LCD technology. Considerable research has been invested in developing blue OLEDs with high external quantum efficiency as well as a deeper blue color. External quantum efficiency values of 20% and 19% have been reported for red (625 nm) and green (530 nm) diodes, respectively. However, blue diodes (430 nm) have only been able to achieve maximum external quantum efficiencies in the range of 4% to 6%.
- Water damage - Water can instantly damage the organic materials of the displays, so improved sealing processes are important for practical manufacturing. Water damage may especially limit the longevity of more flexible displays.
- Outdoor performance - As an emissive display technology, OLEDs rely completely upon converting electricity to light, unlike most LCDs, which are to some extent reflective. E-paper leads the way in efficiency with roughly 33% ambient light reflectivity, enabling the display to be used without any internal light source. The metallic cathode in an OLED acts as a mirror, with reflectance approaching 80%, leading to poor readability in bright ambient light such as outdoors. However, with the proper application of a circular polarizer and antireflective coatings, the diffuse reflectance can be reduced to less than 0.1%. With 10,000 fc incident illumination (a typical test condition for simulating outdoor illumination), that yields an approximate photopic contrast of 5:1. Recent advances in OLED technologies, however, have enabled OLEDs to perform better than LCDs in bright sunlight; the Super AMOLED display in the Galaxy S5, for example, was found to outperform all LCD displays on the market in terms of brightness and reflectance.
- Power consumption - While an OLED will consume around 40% of the power of an LCD displaying an image that is primarily black, for the majority of images it will consume 60–80% of the power of an LCD. However, an OLED can use more than three times as much power to display an image with a white background, such as a document or website. This can lead to reduced battery life in mobile devices when white backgrounds are used.
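As a sketch of the subpixel-sizing approach mentioned under colour balance above: assuming the current each subpixel must carry to reach its white-point luminance is fixed, current density scales inversely with subpixel area, so enlarging the blue subpixel lowers its current density and slows its degradation. The area ratios below follow the example given in the text; the drive current is an arbitrary placeholder value.

```python
# Illustration of equalising ageing by resizing subpixels: current density
# J = I / A falls as the subpixel area A grows, for a fixed per-subpixel
# current I (a simplifying assumption). Area ratios follow the example in
# the text (blue 100% larger than green, red 10% smaller).
GREEN_AREA = 1.0                       # arbitrary units
areas = {"green": GREEN_AREA,
         "blue": 2.0 * GREEN_AREA,
         "red": 0.9 * GREEN_AREA}
DRIVE_CURRENT = 1.0                    # assumed identical current per subpixel

for colour, area in areas.items():
    j_relative = (DRIVE_CURRENT / area) / (DRIVE_CURRENT / GREEN_AREA)
    print(f"{colour}: relative current density {j_relative:.2f}x that of green")
```

Halving the blue subpixel's current density in this way buys extra blue lifetime without changing the emitter chemistry, which is why the trick is common in commercial panel layouts.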
Manufacturers and commercial uses
OLED technology is used in commercial applications such as displays for mobile phones and portable digital media players, car radios and digital cameras, among others. Such portable applications favor the high light output of OLEDs for readability in sunlight and their low power drain. Portable displays are also used intermittently, so the lower lifespan of organic displays is less of an issue. Prototypes have been made of flexible and rollable displays which use OLEDs' unique characteristics. Applications in flexible signs and lighting are also being developed. Philips Lighting has made OLED lighting samples under the brand name "Lumiblade" available online, and Novaled AG, based in Dresden, Germany, introduced a line of OLED desk lamps called "Victory" in September 2011.

OLEDs have been used in most Motorola and Samsung color cell phones, as well as some HTC, LG and Sony Ericsson models. Nokia has also introduced some OLED products, including the N85 and the N86 8MP, both of which feature an AMOLED display. OLED technology can also be found in digital media players such as the Creative ZEN V, the iriver clix, the Zune HD and the Sony Walkman X Series. The Google and HTC Nexus One smartphone includes an AMOLED screen, as do HTC's own Desire and Legend phones. However, due to supply shortages of the Samsung-produced displays, certain HTC models will use Sony's SLCD displays in the future, while the Google and Samsung Nexus S smartphone will use "Super Clear LCD" instead in some countries. OLED displays were also used in watches made by Fossil (JR-9465) and Diesel (DZ-7086).

DuPont stated in a press release in May 2010 that it can produce a 50-inch OLED TV in two minutes with a new printing technology. If this can be scaled up in terms of manufacturing, the total cost of OLED TVs would be greatly reduced. DuPont also stated that OLED TVs made with this less expensive technology can last up to 15 years if left on for a normal eight-hour day. The use of OLEDs may be subject to patents held by Universal Display Corporation, Eastman Kodak, DuPont, General Electric, Royal Philips Electronics, numerous universities and others; there are by now thousands of patents associated with OLEDs, both from larger corporations and smaller technology companies.

Flexible OLED displays are already being produced and are used by manufacturers to create curved displays such as the Galaxy S7 Edge, but so far they are not used in devices that can be flexed by the consumer. Apart from the screen itself, the circuit boards and batteries would also need to be flexible. Samsung demonstrated a roll-out display in 2016. Textiles incorporating OLEDs are an innovation in the fashion world and offer a way to integrate lighting to bring inert objects to a whole new level of fashion. The hope is to combine the comfort and low-cost properties of textiles with the OLED's properties of illumination and low energy consumption. Although this scenario of illuminated clothing is highly plausible, challenges remain, including the lifetime of the OLED, the rigidity of flexible foil substrates, and the lack of research into making more fabric-like photonic textiles.

By 2004 Samsung, South Korea's largest conglomerate, was the world's largest OLED manufacturer, producing 40% of the OLED displays made in the world, and as of 2010 it had a 98% share of the global AMOLED market. The company led the global OLED industry, generating $100.2 million of the total $475 million in revenues in the global OLED market in 2006. As of 2006, it held more than 600 American patents and more than 2,800 international patents, making it the largest owner of AMOLED technology patents.
Samsung SDI announced in 2005 the world's largest OLED TV at the time, at 21 inches (53 cm). This OLED featured the highest resolution at the time, of 6.22 million pixels. In addition, the company adopted active matrix based technology for its low power consumption and high-resolution qualities. This was exceeded in January 2008, when Samsung showcased the world's largest and thinnest OLED TV at the time, at 31 inches (78 cm) and 4.3 mm. In May 2008, Samsung unveiled an ultra-thin 12.1 inch (30 cm) laptop OLED display concept, with a 1,280×768 resolution with infinite contrast ratio. According to Woo Jong Lee, Vice President of the Mobile Display Marketing Team at Samsung SDI, the company expected OLED displays to be used in notebook PCs as soon as 2010. In October 2008, Samsung showcased the world's thinnest OLED display, also the first to be "flappable" and bendable. It measures just 0.05 mm (thinner than paper), yet a Samsung staff member said that it is "technically possible to make the panel thinner". To achieve this thickness, Samsung etched an OLED panel that uses a normal glass substrate. The drive circuit was formed by low-temperature polysilicon TFTs. Also, low-molecular organic EL materials were employed. The pixel count of the display is 480 × 272. The contrast ratio is 100,000:1, and the luminance is 200 cd/m2. The colour reproduction range is 100% of the NTSC standard. In the same month, Samsung unveiled what was then the world's largest OLED Television at 40-inch with a Full HD resolution of 1920 × 1080 pixels. In the FPD International, Samsung stated that its 40-inch OLED Panel is the largest size currently possible. The panel has a contrast ratio of 1,000,000:1, a colour gamut of 107% NTSC, and a luminance of 200 cd/m2 (peak luminance of 600 cd/m2). At the Consumer Electronics Show (CES) in January 2010, Samsung demonstrated a laptop computer with a large, transparent OLED display featuring up to 40% transparency and an animated OLED display in a photo ID card. Samsung's latest AMOLED smartphones use their Super AMOLED trademark, with the Samsung Wave S8500 and Samsung i9000 Galaxy S being launched in June 2010. In January 2011 Samsung announced their Super AMOLED Plus displays, which offer several advances over the older Super AMOLED displays: real stripe matrix (50% more sub pixels), thinner form factor, brighter image and an 18% reduction in energy consumption. At CES 2012, Samsung introduced the first 55" TV screen that uses Super OLED technology. On January 8, 2013, at CES Samsung unveiled a unique curved 4K Ultra S9 OLED television, which they state provides an "IMAX-like experience" for viewers. On August 13, 2013, Samsung announced availability of a 55-inch curved OLED TV (model KN55S9C) in the US at a price point of $8999.99. On September 6, 2013, Samsung launched its 55-inch curved OLED TV (model KE55S9C) in the United Kingdom with John Lewis. Samsung introduced the Galaxy Round smartphone in the Korean market in October 2013. The device features a 1080p screen, measuring 5.7 inches (14 cm), that curves on the vertical axis in a rounded case. The corporation has promoted the following advantages: A new feature called "Round Interaction" that allows users to look at information by tilting the handset on a flat surface with the screen off, and the feel of one continuous transition when the user switches between home screens. The Sony CLIÉ PEG-VZ90 was released in 2004, being the first PDA to feature an OLED screen. 
Other Sony products to feature OLED screens include the MZ-RH1 portable minidisc recorder, released in 2006 and the Walkman X Series. At the 2007 Las Vegas Consumer Electronics Show (CES), Sony showcased 11-inch (28 cm, resolution 960×540) and 27-inch (68.5 cm), full HD resolution at 1920 × 1080 OLED TV models. Both claimed 1,000,000:1 contrast ratios and total thicknesses (including bezels) of 5 mm. In April 2007, Sony announced it would manufacture 1000 11-inch (28 cm) OLED TVs per month for market testing purposes. On October 1, 2007, Sony announced that the 11-inch (28 cm) model, now called the XEL-1, would be released commercially; the XEL-1 was first released in Japan in December 2007. In May 2007, Sony publicly unveiled a video of a 2.5-inch flexible OLED screen which is only 0.3 millimeters thick. At the Display 2008 exhibition, Sony demonstrated a 0.2 mm thick 3.5 inch (9 cm) display with a resolution of 320×200 pixels and a 0.3 mm thick 11 inch (28 cm) display with 960×540 pixels resolution, one-tenth the thickness of the XEL-1. In July 2008, a Japanese government body said it would fund a joint project of leading firms, which is to develop a key technology to produce large, energy-saving organic displays. The project involves one laboratory and 10 companies including Sony Corp. NEDO said the project was aimed at developing a core technology to mass-produce 40 inch or larger OLED displays in the late 2010s. In October 2008, Sony published results of research it carried out with the Max Planck Institute over the possibility of mass-market bending displays, which could replace rigid LCDs and plasma screens. Eventually, bendable, see-through displays could be stacked to produce 3D images with much greater contrast ratios and viewing angles than existing products. Sony exhibited a 24.5" (62 cm) prototype OLED 3D television during the Consumer Electronics Show in January 2010. On February 17, 2011, Sony announced its 25" (63.5 cm) OLED Professional Reference Monitor aimed at the Cinema and high end Drama Post Production market. On June 25, 2012, Sony and Panasonic announced a joint venture for creating low cost mass production OLED televisions by 2013. As of 2010, LG Electronics produced one model of OLED television, the 15 inch 15EL9500 and had announced a 31" (78 cm) OLED 3D television for March 2011. On December 26, 2011, LG officially announced the "world's largest 55" OLED panel" and featured it at CES 2012. In late 2012, LG announces the launch of the 55EM9600 OLED television in Australia. In January 2015, LG Display signed a long term agreement with Universal Display Corporation for the supply of OLED materials and the right to use their patented OLED emitters. Lumiotec is the first company in the world developing and selling, since January 2011, mass-produced OLED lighting panels with such brightness and long lifetime. Lumiotec is a joint venture of Mitsubishi Heavy Industries, ROHM, Toppan Printing, and Mitsui & Co. On June 1, 2011, Mitsubishi installed a 6-meter OLED 'sphere' in Tokyo's Science Museum. Recom Group/video name tag applications On January 6, 2011, Los Angeles based technology company Recom Group introduced the first small screen consumer application of the OLED at the Consumer Electronics Show in Las Vegas. This was a 2.8" (7 cm) OLED display being used as a wearable video name tag. At the Consumer Electronics Show in 2012, Recom Group introduced the world's first video mic flag incorporating three 2.8" (7 cm) OLED displays on a standard broadcaster's mic flag. 
The video mic flag allowed video content and advertising to be shown on a broadcasters standard mic flag. BMW plans to use OLEDs in tail lights and interior lights in their future cars; however, OLEDs are currently too dim to be used for brake lights, headlights and indicators. On January 6, 2016, Dell announced the Ultrasharp UP3017Q OLED monitor at the Consumer Electronics Show in Las Vegas. The monitor was announced to feature a 30" 4K UHD OLED panel with a 120 Hz refresh rate, 0.1 millisecond response time, and a contrast ratio of 400,000:1.The monitor was set to sell at a price of $4,999 and release in March, 2016, just a few months later. As the end of March rolled around, the monitor was not released to the market and Dell did not speak on reasons for the delay. Reports suggested that Dell canceled the monitor as the company was unhappy with the image quality of the OLED panel, especially the amount of color drift that it displayed when you viewed the monitor from the sides. On April 13, 2017, Dell finally released the UP3017Q OLED monitor to the market at a price of $3,499 ($1,500 less than its original spoken price of $4,999 at CES 2016). In addition to the price drop, the monitor featured a 60 Hz refresh rate and a contrast ratio of 1,000,000:1. As of June, 2017, the monitor is no longer available to purchase from Dell's website. In 2014, Mitsubishi Chemical Corporation (MCC), a subsidiary of the Mitsubishi Chemical Holdings developed an organic light-emitting diode (OLED) panel with a life of 30,000 hours, twice that of conventional OLED panels. The search for efficient OLED materials has been extensively supported by simulation methods. By now it is possible to calculate important properties completely computationally, independent of experimental input. This allows cost-efficient pre-screening of materials, prior to expensive synthesis and experimental characterisation. - Comparison of display technology - Field emission display - Flexible electronics - List of emerging technologies - Molecular electronics - Organic light-emitting transistor - Printed electronics - Rollable display - Quantum dot display - Surface-conduction electron-emitter display - Kamtekar, K. T.; Monkman, A. P.; Bryce, M. R. (2010). "Recent Advances in White Organic Light-Emitting Materials and Devices (WOLEDs)". Advanced Materials. 22 (5): 572–582. PMID 20217752. doi:10.1002/adma.200902148. - D'Andrade, B. W.; Forrest, S. R. (2004). "White Organic Light-Emitting Devices for Solid-State Lighting". Advanced Materials. 16 (18): 1585–1595. doi:10.1002/adma.200400684. - Chang, Yi-Lu; Lu, Zheng-Hong (2013). "White Organic Light-Emitting Diodes for Solid-State Lighting". Journal of Display Technology. PP (99): 1. Bibcode:2013JDisT...9..459C. doi:10.1109/JDT.2013.2248698. - "PMOLED vs AMOLED - what's the difference? | OLED-Info". www.oled-info.com. Archived from the original on 2016-12-20. Retrieved 2016-12-16. - Bernanose, A.; Comte, M.; Vouaux, P. (1953). "A new method of light emission by certain organic compounds". J. Chim. Phys. Elsevier Science. 50: 64. - Bernanose, A.; Vouaux, P. (1953). "Organic electroluminescence type of emission". J. Chim. Phys. Elsevier Science. 50: 261. - Bernanose, A. (1955). "The mechanism of organic electroluminescence". J. Chim. Phys. Elsevier Science. 52: 396. - Bernanose, A. & Vouaux, P. (1955). J. Chim. Phys. Elsevier Science. 52: 509. Missing or empty - Kallmann, H.; Pope, M. (1960). "Positive Hole Injection into Organic Crystals". The Journal of Chemical Physics. 32: 300. 
Bibcode:1960JChPh..32..300K. doi:10.1063/1.1700925. - Kallmann, H.; Pope, M. (1960). "Bulk Conductivity in Organic Crystals". Nature. 186 (4718): 31–33. Bibcode:1960Natur.186...31K. doi:10.1038/186031a0. - Mark, Peter; Helfrich, Wolfgang (1962). "Space-Charge-Limited Currents in Organic Crystals". Journal of Applied Physics. 33: 205. Bibcode:1962JAP....33..205M. doi:10.1063/1.1728487. - Pope, M.; Kallmann, H. P.; Magnante, P. (1963). "Electroluminescence in Organic Crystals". The Journal of Chemical Physics. 38 (8): 2042. Bibcode:1963JChPh..38.2042P. doi:10.1063/1.1733929. - Sano, Mizuka; Pope, Martin; Kallmann, Hartmut (1965). "Electroluminescence and Band Gap in Anthracene". The Journal of Chemical Physics. 43 (8): 2920. Bibcode:1965JChPh..43.2920S. doi:10.1063/1.1697243. - Helfrich, W.; Schneider, W. (1965). "Recombination Radiation in Anthracene Crystals". Physical Review Letters. 14 (7): 229–231. Bibcode:1965PhRvL..14..229H. doi:10.1103/PhysRevLett.14.229. - Gurnee, E. and Fernandez, R. "Organic electroluminescent phosphors", U.S. Patent 3,172,862, Issue date: March 9, 1965 - Partridge, Roger Hugh, "Radiation sources" U.S. Patent 3,995,299, Issue date: November 30, 1976 - Partridge, R (1983). "Electroluminescence from polyvinylcarbazole films: 1. Carbazole cations". Polymer. 24 (6): 733–738. doi:10.1016/0032-3861(83)90012-5. - Partridge, R (1983). "Electroluminescence from polyvinylcarbazole films: 2. Polyvinylcarbazole films containing antimony pentachloride". Polymer. 24 (6): 739–747. doi:10.1016/0032-3861(83)90013-7. - Partridge, R (1983). "Electroluminescence from polyvinylcarbazole films: 3. Electroluminescent devices". Polymer. 24 (6): 748–754. doi:10.1016/0032-3861(83)90014-9. - Partridge, R (1983). "Electroluminescence from polyvinylcarbazole films: 4. Electroluminescence using higher work function cathodes". Polymer. 24 (6): 755–762. doi:10.1016/0032-3861(83)90015-0. - Tang, C. W.; Vanslyke, S. A. (1987). "Organic electroluminescent diodes". Applied Physics Letters. 51 (12): 913. Bibcode:1987ApPhL..51..913T. doi:10.1063/1.98799. - Burroughes, J. H.; Bradley, D. D. C.; Brown, A. R.; Marks, R. N.; MacKay, K.; Friend, R. H.; Burns, P. L.; Holmes, A. B. (1990). "Light-emitting diodes based on conjugated polymers". Nature. 347 (6293): 539–541. Bibcode:1990Natur.347..539B. doi:10.1038/347539a0. - Kho, Mu-Jeong, Javed, T., Mark, R., Maier, E., and David, C. (2008) Final Report: OLED Solid State Lighting - Kodak European Research, MOTI (Management of Technology and Innovation) Project, Judge Business School of the University of Cambridge and Kodak European Research, Final Report presented on 4 March 2008 at Kodak European Research at Cambridge Science Park, Cambridge, UK., pp. 1-12 - Piromreun, Pongpun; Oh, Hwansool; Shen, Yulong; Malliaras, George G.; Scott, J. Campbell; Brock, Phil J. (2000). "Role of CsF on electron injection into a conjugated polymer". Applied Physics Letters. 77 (15): 2403. Bibcode:2000ApPhL..77.2403P. doi:10.1063/1.1317547. - D. Ammermann, A. Böhler, W. Kowalsky, Multilayer Organic Light Emitting Diodes for Flat Panel Displays Archived 2009-02-26 at the Wayback Machine., Institut für Hochfrequenztechnik, TU Braunschweig, 1995. - "Organic Light-Emitting Diodes Based on Graded Heterojunction Architecture Has Greater Quantum Efficiency". University of Minnesota. Archived from the original on 24 March 2012. Retrieved 31 May 2011. - Holmes, Russell; Erickson, N.; Lüssem, Björn; Leo, Karl (27 August 2010). 
"Highly efficient, single-layer organic light-emitting devices based on a graded-composition emissive layer". Applied Physics Letters. 97: 083308. Bibcode:2010ApPhL..97a3308S. doi:10.1063/1.3460285. - Lin Ke, Peng; Ramadas, K.; Burden, A.; Soo-Jin, C. (June 2006). "INdium-Tin-Oxide-Free Organic Light-Emitting Devices". Transactions on Electron Devices. 53 (6): 1483–1486. doi:10.1109/ted.2006.874724. - Carter, S. A.; Angelopoulos, M.; Karg, S.; Brock, P. J.; Scott, J. C. (1997). "Polymeric anodes for improved polymer light-emitting diode performance". Applied Physics Letters. 70 (16): 2067. Bibcode:1997ApPhL..70.2067C. doi:10.1063/1.118953. - Friend, R. H.; Gymer, R. W.; Holmes, A. B.; Burroughes, J. H.; Marks, R. N.; Taliani, C.; Bradley, D. D. C.; Santos, D. A. Dos; Brdas, J. L.; Lgdlund, M.; Salaneck, W. R. (1999). "Electroluminescence in conjugated polymers". Nature. 397 (6715): 121–128. Bibcode:1999Natur.397..121F. doi:10.1038/16393. - "Spintronic OLEDs could be brighter and more efficient". Engineer (Online Edition): 1. 16 July 2012. - Davids, P. S.; Kogan, Sh. M.; Parker, I. D.; Smith, D. L. (1996). "Charge injection in organic light-emitting diodes: Tunneling into low mobility materials". Applied Physics Letters. 69 (15): 2270. Bibcode:1996ApPhL..69.2270D. doi:10.1063/1.117530. - Crone, B. K.; Campbell, I. H.; Davids, P. S.; Smith, D. L. (1998). "Charge injection and transport in single-layer organic light-emitting diodes". Applied Physics Letters. 73 (21): 3162. Bibcode:1998ApPhL..73.3162C. doi:10.1063/1.122706. - Crone, B. K.; Campbell, I. H.; Davids, P. S.; Smith, D. L.; Neef, C. J.; Ferraris, J. P. (1999). "Device physics of single layer organic light-emitting diodes". Journal of Applied Physics. 86 (10): 5767. Bibcode:1999JAP....86.5767C. doi:10.1063/1.371591. - Jin, Yi; Xu, Yanbin; Qiao, Zhi; Peng, Junbiao; Wang, Baozheng; Cao, Derong (2010). "Enhancement of Electroluminescence Properties of Red Diketopyrrolopyrrole-Doped Copolymers by Oxadiazole and Carbazole Units as Pendants". Polymer. 51 (24): 5726–5733. doi:10.1016/j.polymer.2010.09.046. - Bellmann, E.; Shaheen, S. E.; Thayumanavan, S.; Barlow, S.; Grubbs, R. H.; Marder, S. R.; Kippelen, B.; Peyghambarian, N. (1998). "New Triarylamine-Containing Polymers as Hole Transport Materials in Organic Light-Emitting Diodes: Effect of Polymer Structure and Cross-Linking on Device Characteristics". Chemistry of Materials. 10 (6): 1668–1676. doi:10.1021/cm980030p. - Sato, Y.; Ichinosawa, S.; Kanai, H. (1998). "Operation Characteristics and Degradation of Organic Electroluminescent Devices". IEEE Journal of Selected Topics in Quantum Electronics. 4: 40–48. doi:10.1109/2944.669464. - Duarte, FJ; Liao, LS; Vaeth, KM (2005). "Coherence characteristics of electrically excited tandem organic light-emitting diodes". Optics Letters. 30 (22): 3072–4. Bibcode:2005OptL...30.3072D. PMID 16315725. doi:10.1364/OL.30.003072. - Duarte, FJ (2007). "Coherent electrically excited organic semiconductors: visibility of interferograms and emission linewidth". Optics Letters. 32 (4): 412–4. Bibcode:2007OptL...32..412D. PMID 17356670. doi:10.1364/OL.32.000412. - Synopsis: A Single-Molecule Light-Emitting Diode Archived 2014-01-30 at the Wayback Machine., Physics, 28 January 2014 - Researchers Develop First Single-Molecule LED Archived 2014-02-21 at the Wayback Machine., Photonics Online, 31 January 2014 - Hebner, T. R.; Wu, C. C.; Marcy, D.; Lu, M. H.; Sturm, J. C. (1998). "Ink-jet printing of doped polymers for organic light emitting devices". 
Applied Physics Letters. 72 (5): 519. Bibcode:1998ApPhL..72..519H. doi:10.1063/1.120807. - Bharathan, Jayesh; Yang, Yang (1998). "Polymer electroluminescent devices processed by inkjet printing: I. Polymer light-emitting logo". Applied Physics Letters. 72 (21): 2660. Bibcode:1998ApPhL..72.2660B. doi:10.1063/1.121090. - Heeger, A. J. (1993) in W. R. Salaneck, I. Lundstrom, B. Ranby, Conjugated Polymers and Related Materials, Oxford, 27–62. ISBN 0-19-855729-9 - Kiebooms, R.; Menon, R.; Lee, K. (2001) in H. S. Nalwa, Handbook of Advanced Electronic and Photonic Materials and Devices Volume 8, Academic Press, 1–86. - Wagaman, Michael; Grubbs, Robert H. (1997). "Synthesis of PNV Homo- and Copolymers by a ROMP Precursor Route". Synthetic Metals. 84 (1–3): 327–328. doi:10.1016/S0379-6779(97)80767-9. - Wagaman, Michael; Grubbs, Robert H. (1997). "Synthesis of Organic and Water Soluble Poly(1,4-phenylenevinylenes) Containing Carboxyl Groups: Living Ring-Opening Metathesis Polymerization (ROMP) of 2,3-Dicarboxybarrelenes". Macromolecules. 30 (14): 3978–3985. Bibcode:1997MaMol..30.3978W. doi:10.1021/ma9701595. - Pu, Lin; Wagaman, Michael; Grubbs, Robert H. (1996). "Synthesis of Poly(1,4-naphthylenevinylenes): Metathesis Polymerization of Benzobarrelenes". Macromolecules. 29 (4): 1138–1143. Bibcode:1996MaMol..29.1138P. doi:10.1021/ma9500143. - Fallahi, Afsoon; Alahbakhshi, Masoud; Mohajerani, Ezeddin; Afshar Taromi, Faramarz; Mohebbi, Alireza; Shahinpoor, Mohsen (2015). "Cationic Water-Soluble Conjugated Polyelectrolytes/Graphene Oxide Nanocomposites as Efficient Green Hole Injection Layers in Organic Light Emitting Diodes". The Journal of Physical Chemistry C. 119 (23): 13144–13152. doi:10.1021/acs.jpcc.5b00863. - Yang, Xiaohui; Neher, Dieter; Hertel, Dirk; Daubler, Thomas (2004). "Highly Efficient Single-Layer Polymer Electrophosphorescent Devices". Advanced Materials. 16 (2): 161–166. doi:10.1002/adma.200305621. - Baldo, M. A.; O'Brien, D. F.; You, Y.; Shoustikov, A.; Sibley, S.; Thompson, M. E.; Forrest, S.R. (1998). "Highly Efficient phosphorescent emission from organic electroluminescent devices". Nature. 395 (6698): 151–154. Bibcode:1998Natur.395..151B. doi:10.1038/25954. - Baldo, M. A.; Lamansky, S.; Burrows, P. E.; Thompson, M. E.; Forrest, S. R. (1999). "Very high-efficiency green organic light-emitting devices based on electrophosphorescence". Applied Physics Letters. 75: 4. Bibcode:1999ApPhL..75....4B. doi:10.1063/1.124258. - Adachi, C.; Baldo, M. A.; Thompson, M. E.; Forrest, S. R. (2001). "Nearly 100% internal phosphorescence efficiency in an organic light-emitting device". Journal of Applied Physics. 90 (10): 5048. Bibcode:2001JAP....90.5048A. doi:10.1063/1.1409582. - Singh, Madhusudan; Chae, Hyun Sik; Froehlich, Jesse D.; Kondou, Takashi; Li, Sheng; Mochizuki, Amane; Jabbour, Ghassan E. (2009). "Electroluminescence from printed stellate polyhedral oligomeric silsesquioxanes". Soft Matter. 5 (16): 3002. Bibcode:2009SMat....5.3002S. doi:10.1039/b903531a. - Bardsley, J. N. (2004). "International OLED Technology Roadmap". IEEE Journal of Selected Topics in Quantum Electronics. 10: 3–4. doi:10.1109/JSTQE.2004.824077. - US 5986401, Mark E. Thompson, Stephen R. Forrest, Paul Burrows, "High contrast transparent organic light emitting device display", published 1999-11-16 - "Archived copy". Archived from the original on 2017-01-16. Retrieved 2017-03-01. - Chu, Ta-Ya; Chen, Jenn-Fang; Chen, Szu-Yi; Chen, Chao-Jung; Chen, Chin H. (2006). 
"Highly efficient and stable inverted bottom-emission organic light emitting devices". Applied Physics Letters. 89 (5): 053503. Bibcode:2006ApPhL..89e3503C. doi:10.1063/1.2268923. - Liu, Jie; Lewis, Larry N.; Duggal, Anil R. (2007). "Photoactivated and patternable charge transport materials and their use in organic light-emitting devices". Applied Physics Letters. 90 (23): 233503. Bibcode:2007ApPhL..90w3503L. doi:10.1063/1.2746404. - Boroson, Michael; Tutt, Lee; Nguyen, Kelvin; Preuss, Don; Culver, Myron; Phelan, Giana (2005). "16.5L: Late-News-Paper: Non-Contact OLED Color Patterning by Radiation-Induced Sublimation Transfer (RIST)". SID Symposium Digest of Technical Papers. 36: 972. doi:10.1889/1.2036612. - Grimaldi, I. A.; De Girolamo Del Mauro, A.; Nenna, G.; Loffredo, F.; Minarini, C.; Villani, F.; d’Amore, A.; Acierno, D.; Grassia, L. (2010). "Inkjet Etching of Polymer Surfaces to Manufacture Microstructures for OLED Applications". AIP Conference Proceedings: 104. doi:10.1063/1.3455544. - Sasaoka, Tatsuya; Sekiya, Mitsunobu; Yumoto, Akira; Yamada, Jiro; Hirano, Takashi; Iwase, Yuichi; Yamada, Takao; Ishibashi, Tadashi; Mori, Takao; Asano, Mitsuru; Tamura, Shinichiro; Urabe, Tetsuo (2001). "24.4L: Late-News Paper: A 13.0-inch AM-OLED Display with Top Emitting Structure and Adaptive Current Mode Programmed Pixel Circuit (TAC)". SID Symposium Digest of Technical Papers. 32: 384. doi:10.1889/1.1831876. - Tsujimura, T.; Kobayashi, Y.; Murayama, K.; Tanaka, A.; Morooka, M.; Fukumoto, E.; Fujimoto, H.; Sekine, J.; Kanoh, K.; Takeda, K.; Miwa, K.; Asano, M.; Ikeda, N.; Kohara, S.; Ono, S.; Chung, C. T.; Chen, R. M.; Chung, J. W.; Huang, C. W.; Guo, H. R.; Yang, C. C.; Hsu, C. C.; Huang, H. J.; Riess, W.; Riel, H.; Karg, S.; Beierlein, T.; Gundlach, D.; Alvarado, S.; et al. (2003). "4.1: A 20-inch OLED Display Driven by Super-Amorphous-Silicon Technology". SID Symposium Digest of Technical Papers. 34: 6. doi:10.1889/1.1832193. - Bower, C. A.; Menard, E.; Bonafede, S.; Hamer, J. W.; Cok, R. S. (2011). "Transfer-Printed Microscale Integrated Circuits for High Performance Display Backplanes". IEEE Transactions on Components, Packaging and Manufacturing Technology. 1 (12): 1916–1922. doi:10.1109/TCPMT.2011.2128324. - Pardo, Dino A.; Jabbour, G. E.; Peyghambarian, N. (2000). "Application of Screen Printing in the Fabrication of Organic Light-Emitting Devices". Advanced Materials. 12 (17): 1249–1252. doi:10.1002/1521-4095(200009)12:17<1249::AID-ADMA1249>3.0.CO;2-Y. - Gustafsson, G.; Cao, Y.; Treacy, G. M.; Klavetter, F.; Colaneri, N.; Heeger, A. J. (1992). "Flexible light-emitting diodes made from soluble conducting polymers". Nature. 357 (6378): 477–479. Bibcode:1992Natur.357..477G. doi:10.1038/357477a0. - "Comparison of OLED and LCD". Fraunhofer IAP: OLED Research. 2008-11-18. Archived from the original on February 4, 2010. Retrieved 2010-01-25. - Zhang, Mingxiao; Chen, Z.; Xiao, L.; Qu, B.; Gong, Q. (18 March 2013). "Optical design for improving optical properties of top-emitting organic light emitting diodes". Journal of Applied Physics. 113 (11): 113105. Bibcode:2013JAP...113k3105Z. doi:10.1063/1.4795584. - "LG 55EM9700". 2013-01-02. Archived from the original on 2015-01-15. Retrieved 2015-01-14. - "Why Do Some OLEDs Have Motion Blur?". Blur Busters Blog (based on Microsoft Research work). 2013-04-15. Archived from the original on 2013-04-03. Retrieved 2013-04-18. - "OLED TV estimated lifespan shorter then expected". HDTV Info Europe. Hdtvinfo.eu (2008-05-08). - HP Monitor manual. 
CCFL-Backlit LCD. Page 32. Webcitation.org. Retrieved 2011-10-04. - Viewsonic Monitor manual. LED-Backlit LCD. Webcitation.org. Retrieved 2011-10-04. - Kondakov, D; Lenhart, W.; Nochols, W. (2007). "Operational degradation of organic light-emitting diodes: Mechanism and identification of chemical products". Journal of Applied Physics. 101 (2): 1. Bibcode:2007JAP...101b4512K. doi:10.1063/1.2430922. - "OLED lifespan doubled?" HDTV Info Europe. Hdtvinfo.eu (2008-01-25). - Toshiba and Panasonic double lifespan of OLED, January 25, 2008, Toshiba and Panasonic double lifespan of OLED - Cambridge Display Technology, Cambridge Display Technology and Sumation Announce Strong Lifetime Improvements to P-OLED (Polymer OLED) Material; Blue P-OLED Materials Hit 10,000 Hour Lifetime Milestone at 1,000 cd/sq.m, March 26, 2007. Retrieved January 11, 2011. Archived December 26, 2010, at the Wayback Machine. - "Ageless OLED". Archived from the original on 2007-09-08. Retrieved 2009-11-16. - Fallahi, Afsoon; Afshar Taromi, Faramarz; Mohebbi, Alireza; D. Yuen, Jonathan; Shahinpoor, Mohsen (2014). "A novel ambipolar polymer: from organic thin-film transistors to enhanced air-stable blue light emitting diodes". Journal of Materials Chemistry C. 2: 6491. doi:10.1039/c4tc00684d. Archived from the original on 2015-10-17. - Shen, Jiun Yi; Lee, Chung Ying; Huang, Tai-Hsiang; Lin, Jiann T.; Tao, Yu-Tai; Chien, Chin-Hsiung; Tsai, Chiitang (2005). "High Tg blue emitting materials for electroluminescent devices". Journal of Materials Chemistry (free text). 15 (25): 2455. doi:10.1039/b501819f. - Kim, Seul Ong; Lee, Kum Hee; Kim, Gu Young; Seo, Ji Hoon; Kim, Young Kwan; Yoon, Seung Soo (2010). "A highly efficient deep blue fluorescent OLED based on diphenylaminofluorenylstyrene-containing emitting materials". Synthetic Metals. 160 (11–12): 1259–1265. doi:10.1016/j.synthmet.2010.03.020. - Jabbour, G. E.; Kawabe, Y.; Shaheen, S. E.; Wang, J. F.; Morrell, M. M.; Kippelen, B.; Peyghambarian, N. (1997). "Highly efficient and bright organic electroluminescent devices with an aluminum cathode". Applied Physics Letters. 71 (13): 1762. Bibcode:1997ApPhL..71.1762J. doi:10.1063/1.119392. - Mikami, Akiyoshi; Koshiyama, Tatsuya; Tsubokawa, Tetsuro (2005). "High-Efficiency Color and White Organic Light-Emitting Devices Prepared on Flexible Plastic Substrates". Japanese Journal of Applied Physics. 44: 608–612. Bibcode:2005JaJAP..44..608M. doi:10.1143/JJAP.44.608. - Mikami, Akiyoshi; Nishita, Yusuke; Iida, Yoichi (2006). "35-3: High Efficiency Phosphorescent Organic Light-Emitting Devices Coupled with Lateral Color-Conversion Layer". SID Symposium Digest of Technical Papers. 37: 1376. doi:10.1889/1.2433239. - "OLED Sealing Process Reduces Water Intrusion and Increases Lifetime". Georgia Tech Research News. 2008-04-23. - DisplayMate: the GS5 display is the best mobile display ever, outperforming all previous OLED and LCD panels | OLED-Info Archived 2014-04-03 at the Wayback Machine. - Stokes, Jon. (2009-08-11) This September, OLED no longer "three to five years away" Archived 2012-01-25 at the Wayback Machine.. Arstechnica.com. Retrieved 2011-10-04. - Michael Kanellos, "Start-up creates flexible sheets of light", CNet News.com, December 6, 2007. Retrieved 20 July 2008. - "Philips Lumiblades". Lumiblade.com. 2009-08-09. Retrieved 2009-08-17. - Session Border Controller Archived 2012-07-10 at the Wayback Machine.. Tmcnet.com (2011-09-13). Retrieved 2012-11-12. 
- Electronic News, OLEDs Replacing LCDs in Mobile Phones Archived 2016-10-11 at the Wayback Machine., April 7, 2005. Retrieved September 5, 2016. - "HTC ditches Samsung AMOLED display for Sony's Super LCDs". International Business Times. 2010-07-26. Archived from the original on October 1, 2011. Retrieved 2010-07-30. - "Google Nexus S to feature Super Clear LCD in Russia (and likely in other countries, too)". UnwiredView.com. 2010-12-07. Archived from the original on 2010-12-10. Retrieved 2010-12-08. - "ANWELL: Higher profit, higher margins going forward". nextinsight.com. 2007-08-15. Archived from the original on 2012-03-21. - "AUO". OLED-Info.com. 2012-02-21. Archived from the original on 2012-01-24. - "Chi Mei EL (CMEL)". OLED-Info.com. Archived from the original on 2016-01-05. - "LG OLEDs". OLED-Info.com. Archived from the original on 2016-01-31. - "OLED companies". OLED-info.com. Archived from the original on 2016-02-21. - Rawlings, John (2010-08-07). "OLED Shearwater Predator Dive Computer Review". AtlasOmega Media. Archived from the original on 2014-05-27. Retrieved 2013-04-10. - Tourish, Jeff. "Shearwater Predator CCR Computer". Advanced Diver Magazine. Archived from the original on 2015-10-17. Retrieved 2013-04-10. - "DuPont Creates 50" OLED in Under 2 Minutes". tomsguide.com. Archived from the original on 2010-05-20. Retrieved 2010-06-10. - "DuPont Delivers OLED Technology Scalable for Television". www2.dupont.com. 2010-05-12. Archived from the original on 2010-05-20. Retrieved 2010-05-12. - OLED-Info.com, Kodak Signs OLED Cross-License Agreement Archived 2007-07-07 at the Wayback Machine.. Retrieved March 14, 2008. - "Flexible OLED | OLED-Info". www.oled-info.com. Archived from the original on 2017-03-11. Retrieved 2017-03-25. - Bendable smartphones aren't coming anytime soon Archived 2016-09-22 at the Wayback Machine., The Sydney Morning Herald, Ian King, 16 December 2013 - "Samsung Galaxy X: the story of Samsung’s foldable phone so far". TechRadar. Archived from the original on 2017-01-30. Retrieved 2017-03-25. - Cherenack, Kunigunde; Van Os, K.; Pieterson, L. (April 2012). "Smart photonic textiles begin to weave their magic". Laser Focus World. 48 (4): 63. - "Samsung SDI – The world's largest OLED display maker". Oled-info.com. Archived from the original on 2009-06-22. Retrieved 2009-08-17. - "Samsung, LG in legal fight over brain drain". The Korea Times. 2010-07-17. Archived from the original on 2010-07-21. Retrieved 2010-07-30. - "Frost & Sullivan Recognizes Samsung SDI for Market Leadership in the OLED Display Market | Find Articles at BNET". Findarticles.com. 2008-07-17. Archived from the original on 2009-05-22. Retrieved 2009-08-17. - "World's Largest 21-inch OLED for TVs from Samsung". Physorg.com. 2005-01-04. Archived from the original on 2009-01-12. Retrieved 2009-08-17. - Robischon, Noah (2008-01-09). "Samsung's 31-Inch OLED Is Biggest, Thinnest Yet – AM-OLED". Gizmodo. Archived from the original on 2009-08-10. Retrieved 2009-08-17. - Ricker, Thomas (2008-05-16). "Samsung's 12.1-inch OLED laptop concept makes us swoon". Engadget. Archived from the original on 2009-10-07. Retrieved 2009-08-17. - "Samsung: OLED Notebooks In 2010". Laptop News. TrustedReviews. Archived from the original on 2009-04-16. Retrieved 2009-08-17. - Takuya Otani; Nikkei Electronics (2008-10-29). "[FPDI] Samsung Unveils 0.05mm 'Flapping' OLED Panel – Tech-On!". Techon.nikkeibp.co.jp. Archived from the original on 2008-11-27. Retrieved 2009-08-17. - "40-inch OLED panel from Samsung". 
Good Fish Guide
Bream, Black (also called porgy or seabream)
Method of production - Caught at sea
Capture method - Demersal otter trawl
Capture area - North East Atlantic (FAO 27)
Stock area - UK
Stock detail - All Areas
Fish type - White round fish
Black bream stocks currently appear to be in a healthy state; however, there is a lack of stock assessment and of appropriate management measures in force for the species. They are moderately vulnerable to fishing in terms of growth rate and fecundity, and the species' spawning behavioural traits make them especially vulnerable to bottom trawling. Trawling for black bream can destroy both nests and eggs. Black bream caught with rod and line or gillnet is a more sustainable option. The Cornwall, North Western and Sussex IFCA districts, as well as North Wales, have the best management for black bream and are currently the most sustainable areas to source from (Sussex has mesh regulations and closed areas for the spawning season, and Cornwall, Sussex, North Western and North Wales prohibit landing of seabream below 23 cm). Avoid eating immature black bream (below 23 cm) caught prior to and during their spawning season (April and May in UK inshore waters), thus allowing them a chance to spawn or reproduce. A member of the group of fish known as Sparidae, the black bream is one of two species commonly found in northern European seas. It is found off south-west Britain and eastern Ireland, in the English Channel and the Irish Sea. Spawning occurs in April and May in a number of inshore waters, such as the English Channel. Black bream are unusual in that they are sequential hermaphrodites (undergoing a sex change during their lives), maturing as females at a length of 23 cm and then as males at around 30 cm. All fish over 40 cm are males. The maximum reported age, length and weight are 15 years, 60 cm and 1.2 kg respectively. They are found over seagrass beds and rocky and sandy bottoms at depths of about 5 m to 300 m. Black bream lay eggs in a nest that the male has excavated on sand with its tail. The larger the female, the more fecund she is and the more eggs she lays; for example, a female of 18.5 cm will lay around 31,000 eggs, compared to a female of 33.5 cm, which lays around 554,000 eggs. Gregarious, sometimes in large schools. Omnivorous, feeding on seaweeds and small invertebrates, especially crustaceans. Likely predators on black bream eggs are clawed crustaceans. Adult black bream have few predators, although a few are likely to be taken by seabirds and marine mammals. An important food fish. The hermaphroditic nature of black bream may have important consequences for the sustained reproductive capacity of the stock. Between 1977 and 1979 the modal size of black bream decreased from 37-38 cm to 28-30 cm as the bream fishery expanded (the fishing practices used to catch black bream selectively target larger individuals); this has the potential to affect the sex ratio of the population and thus reproduction and repopulation. There is no formal stock assessment for black bream and, as the stock is not evaluated against precautionary limits, the precise state of the stocks is unknown. The species is moderately vulnerable, but there is currently no evidence that the fishery is experiencing overfishing. Although stocks in the English Channel were heavily fished in the 1970s and 1980s, they have recovered in recent years and appear to be in a healthy state. As water temperatures in the English Channel increase in the spring, the stock migrates eastwards into shallower water to feed prior to breeding.
Post spawning, they continue to feed inshore, migrating east to the southern North Sea. In November they begin their return migration west, arriving in the western Channel in January, then return offshore to deeper waters. There is a lack of adequate management measures for the conservation of black bream. Black bream may receive some protection from the EU fixed net technical measure, which requires a mesh size of at least 220 mm where catches comprise 70% or more seabream. In EU waters no Minimum Landing Size is specified for the species. However, in some Inshore Fisheries and Conservation Authority (IFCA) districts, e.g. Cornwall, Sussex, North Western and North Wales, local bylaws prohibit landing of seabream below 23 cm. But as a consequence of their changes in sex, an MLS of 23 cm does not protect 100% of the juveniles, and many will be caught before they have matured and spawned. Sussex IFCA also enforces mesh regulations and closed areas during the spawning season. As black seabream are sequential hermaphrodites (changing from female to male with age), a maximum landing size of 40 cm, as well as a minimum landing size, may also benefit the species, as this would protect mature males as well as mature females. The species' spawning behavioural traits make it especially vulnerable to bottom trawling. Its being a sequential hermaphrodite is also a cause for concern, as the stock requires a balanced age structure to reproduce successfully. The fact that males show significant paternal investment in creating and guarding eggs also emphasises the need for conservation of an appropriate sex ratio. Seasonal closures, such as those in Sussex, to protect spawning fish are required. Depending on the nature of the seabed, there is potential for damage by the heavy fishing gear used for trawling for bottom-dwelling species. Trawling is also associated with discarding of unwanted fish, i.e. undersized and/or non-quota and/or over-quota species. Furthermore, trawlers generally target black bream during their spawning period, which can destroy nests and eggs and interrupt the spawning process. Bass is a common and valuable bycatch when targeting black bream.
Until now we have seen x86 Chromium OS netbooks and ARM-powered Chromebook laptops, but soon we will see Chrome OS-powered MIPS Chromebook laptops. MIPS is currently owned by Imagination Technologies, and lately Coreboot, the open-source BIOS/firmware project, has been updated to support MIPS-based laptops. One of the primary benefits of a diverse CPU landscape is that viruses would be harder to write, since a virus writer would either have to target many architectures or target only one and accept that the field of potential victims is much smaller. Another advantage is that Intel would not hold an effective monopoly on CPUs. Manufacturers would have to compete against each other for market share, or they could specialize and target different niches. That has actually happened in a small way already: ARM is the CPU of choice for almost all smartphones, and MIPS is often used in routers and other low-power consumer devices. But neither use stretches the capabilities of the CPU architecture the way x86 has been stretched by the diversity of roles for which it has been used. So it will be great to see some MIPS Chromebooks around. MIPS placed hardware simplicity over all else, so I wouldn't doubt what the article said about performance per die area. MIPS cores are very simple. One of the most obvious examples is how the architecture handles virtual memory. (I'm not up on the latest revisions, but I don't think they would have changed it.) When an x86 or ARM processor needs to translate a virtual page address into a physical page address, hardware walks an in-memory tree (the page tables) to translate the virtual address into a physical one. In MIPS, the processor traps to the kernel (which runs in physical-address land) and lets the OS sort it out. In both cases the translation from virtual to physical addresses happens in chunks of page size, and there is a cache of the most recently used translations (the TLB). Letting the OS sort it out is neat, flexible, and saves tons of transistors, but it costs time on translations.
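To make that difference concrete, here is a minimal, self-contained C sketch of the software-managed TLB idea described above. It is not real MIPS kernel code: an actual refill handler runs in a special exception context (often hand-written assembly) and writes hardware TLB registers, whereas here the "trap" is just a C function call. The table sizes, structure names and the round-robin replacement policy are all invented for illustration.

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT   12          /* 4 KiB pages */
#define TLB_ENTRIES  16          /* tiny, purely illustrative TLB */
#define PT_ENTRIES   256         /* toy linear "page table" */

typedef struct { uint32_t vpn, pfn; int valid; } tlb_entry;

static tlb_entry tlb[TLB_ENTRIES];
static uint32_t  page_table[PT_ENTRIES];   /* vpn -> pfn, 0 = unmapped */
static unsigned  next_victim;              /* simple round-robin replacement */

/* The "hardware" part: check the cache of recent translations (the TLB). */
static int tlb_lookup(uint32_t vpn, uint32_t *pfn)
{
    for (int i = 0; i < TLB_ENTRIES; i++)
        if (tlb[i].valid && tlb[i].vpn == vpn) { *pfn = tlb[i].pfn; return 1; }
    return 0;                                /* miss */
}

/* The "kernel" part: sort out the translation and refill one TLB slot. */
static void tlb_refill(uint32_t vpn)
{
    uint32_t pfn = page_table[vpn % PT_ENTRIES];
    tlb[next_victim] = (tlb_entry){ .vpn = vpn, .pfn = pfn, .valid = 1 };
    next_victim = (next_victim + 1) % TLB_ENTRIES;
}

static uint32_t translate(uint32_t vaddr)
{
    uint32_t vpn = vaddr >> PAGE_SHIFT, pfn;
    if (!tlb_lookup(vpn, &pfn)) {            /* miss: "trap" to the OS ...   */
        tlb_refill(vpn);                     /* ... which fills the TLB ...  */
        tlb_lookup(vpn, &pfn);               /* ... and the access retries   */
    }
    return (pfn << PAGE_SHIFT) | (vaddr & ((1u << PAGE_SHIFT) - 1));
}

int main(void)
{
    uint32_t vaddr = (5u << PAGE_SHIFT) | 0x123;   /* an address in virtual page 5 */
    page_table[5] = 42;                            /* pretend page 5 maps to frame 42 */
    printf("0x%x -> 0x%x\n", (unsigned)vaddr, (unsigned)translate(vaddr));
    return 0;
}
```

On x86 or ARM the work inside tlb_refill() is done by a hardware page-table walker; in the MIPS style it is ordinary kernel code, which is where the flexibility (and the extra time per miss) comes from.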
Perhaps you have wondered exactly how the touchscreen on your mobile phone, tablet, TV or any other device really works? It is amazing to think that today we have this kind of control at the touch of a finger, and although the technology may seem new, it has actually existed since the 1960s. In fact, the ideas behind the touch screen go back to the 1940s, but it was only some two full decades later that it became possible to put them to use on any considerable scale. ATMs have made use of the technology ever since 1965, when E. A. Johnson invented the first finger-operated touchscreen; its capacitive touch mechanism is essentially the same one still used in mobile phones and other devices today. Although other types of touch-screen technology exist, for example resistive touch or multi-touch technology, capacitive touch technology is the one favoured for large volumes of consumer goods. How is a capacitive touchscreen actually made? These days, a capacitive touch screen works by using a touch-sensitive ITO (indium tin oxide) film that is attached to the screen. This film is a semiconductor, produced using semiconductor manufacturing procedures such as roll-to-roll processing, in which a roll-to-roll evaporator system deposits the material onto flexible plastic. A roll-to-roll evaporator system of the kind supplied by major equipment vendors can create the ITO touch film, which can then be used in many devices such as smartphones, LCD or LED displays, tablets and PC screens. Roll-to-roll technology is preferred over alternatives such as plate-to-plate processing because it offers a continuous process as well as higher throughput. The ITO touch film works together with software that allows us to give instructions to the device with our fingers on the screen. Semiconductor manufacturing processes such as roll-to-roll evaporation are also used for a variety of other products, such as solar cells, cameras and printers. So now you have a basic idea of where the technology originated and of the processes used to create the semiconductor film found in today's touchscreens. Without roll-to-roll evaporation, few of the many touchscreen devices available to us today would exist.
A Close Look at Tarsal Tunnel Syndrome
Tarsal tunnel syndrome is to the foot what carpal tunnel syndrome is to the wrist. Excessive pressure applied to a nerve in the tarsal tunnel of the foot causes tarsal tunnel syndrome. This is not a common occurrence, though, and thus it is quite difficult to identify and diagnose. It might take your doctor several tests to determine that the pain in the top of the foot is actually tarsal tunnel syndrome, as it is a very rare condition. It can be quite painful, and its most common symptom is numbness around the big toe area. Burning sensations are also among the symptoms of tarsal tunnel syndrome. It is classified under an ICD 10 code, about which more is given below.
ICD 10 Code for Tarsal Tunnel Syndrome
ICD stands for International Classification of Diseases and is the official list of diseases and related health problems provided by the World Health Organization. It codes every symptom, disease, injury, sign, complaint, finding and circumstance ever recorded. It has gone through a number of revisions, since each day brings the discovery of new diseases and conditions, not to mention the numerous abnormal complaints that accompany these diseases. ICD 10 is the tenth revision of the list, and its thirteenth chapter, which contains the blocks M00 to M99, contains the code for tarsal tunnel syndrome as part of the codes for diseases of the musculoskeletal system and connective tissue. ICD 10 has about 16,000 codes, describing just about every condition that has ever been reported in the world. The reason codes are used is that it is easy to identify conditions by referring to the codes, and they can be used universally.
Tarsal Tunnel Syndrome ICD 10 Code
Tarsal tunnel syndrome is coded in ICD 10, and the code covers the pain that occurs in the tarsal tunnel located in the foot. The tarsal region is identified as the area where the tibial nerve of the foot runs, right behind the ankle. The nerve goes on to the toes of the foot, and the region encompasses the tarsal bones that are located in the tarsal tunnel. Any pressure that is exerted on this nerve causes tarsal tunnel syndrome, which results in a lot of pain in the top of the foot. The pressure applied to the tibial nerve can result from many causes. It could be swelling of the foot that closes in on the nerve and presses it, or it could be an external injury to the foot that has put pressure on the nerve. Doctors usually cannot find the cause right away, as tarsal tunnel syndrome is not something that they see often.
ICD 10 Code for Tarsal Tunnel Syndrome and Health Insurance
While most health insurance policies do not cover tarsal tunnel syndrome, it is really important that you find out from your health insurance provider whether they cover it or not, especially if you do work that involves excessive use of your feet. Treating tarsal tunnel syndrome is not cheap if the effect is severe. For the average case, the pressure can be eased by wearing soft shoes with lots of padding, which allows the nerve to relax and function normally. For extreme cases, however, surgery might be required to relax the nerve, which cannot be done by orthopaedic means. This is usually done when there is so much foot pain that the patient is in an unbearable condition, thus requiring surgery.
In computer science, Artificial Ants stand for multi-agent methods inspired by the behavior of real ants. The pheromone-based communication of biological ants is often the predominant paradigm used. Combinations of Artificial Ants and local search algorithms have become a method of choice for numerous optimization tasks involving some sort of graph, e.g., vehicle routing and internet routing. The burgeoning activity in this field has led to conferences dedicated solely to Artificial Ants, and to numerous commercial applications by specialized companies such as AntOptima. As an example, ant colony optimization is a class of optimization algorithms modeled on the actions of an ant colony. Artificial 'ants' (e.g., simulation agents) locate optimal solutions by moving through a parameter space representing all possible solutions. Real ants lay down pheromones directing each other to resources while exploring their environment. The simulated 'ants' similarly record their positions and the quality of their solutions, so that in later simulation iterations more ants locate better solutions. One variation on this approach is the bees algorithm, which is more analogous to the foraging patterns of the honey bee, another social insect. For more details, see the page on the Ant Colony Optimization paradigm.
Ambient networks of intelligent objects
New concepts are required since "intelligence" is no longer centralized but can be found throughout all minuscule objects. Anthropocentric concepts have always led us to the production of IT systems in which data processing, control units and calculating forces are centralized. These centralized units have continually increased their performance and can be compared to the human brain. The model of the brain has become the ultimate vision of computers. Ambient networks of intelligent objects and, sooner or later, a new generation of information systems that are even more diffused and based on nanotechnology will profoundly change this concept. Small devices that can be compared to insects do not possess high intelligence on their own. Indeed, their intelligence can be classed as fairly limited. It is, for example, impossible to integrate a high-performance calculator with the power to solve any kind of mathematical problem into a biochip that is implanted into the human body or integrated into an intelligent tag designed to trace commercial articles. However, once those objects are interconnected, they dispose of a form of intelligence that can be compared to a colony of ants or bees. In the case of certain problems, this type of intelligence can be superior to the reasoning of a centralized system similar to the brain. Nature has given us several examples of how minuscule organisms, if they all follow the same basic rule, can create a form of collective intelligence on the macroscopic level. Colonies of social insects perfectly illustrate this model, which greatly differs from human societies. This model is based on the co-operation of independent units with simple and unpredictable behavior. They move through their surrounding area to carry out certain tasks and only possess a very limited amount of information to do so. A colony of ants, for example, represents numerous qualities that can also be applied to a network of ambient objects.
Colonies of ants have a very high capacity to adapt themselves to changes in the environment, as well as an enormous strength in dealing with situations where one individual fails to carry out a given task. This kind of flexibility would also be very useful for mobile networks of objects which are perpetually developing. Parcels of information that move from a computer to a digital object behave in the same way as ants would do. They move through the network and pass from one node to the next with the objective of arriving at their final destination as quickly as possible.
Artificial Pheromone System
Pheromone-based communication is one of the most effective forms of communication widely observed in nature. Pheromone is used by social insects such as bees, ants and termites, both for inter-agent and agent-swarm communications. Due to its feasibility, artificial pheromones have been adopted in multi-robot and swarm robotic systems. Pheromone-based communication has been implemented by different means, such as chemical or physical (RFID tags, light, sound) ones. However, those implementations were not able to replicate all the aspects of pheromones as seen in nature. One experimental setup used projected light to study pheromone-based communication with micro autonomous robots. Another study proposed a novel pheromone communication method, COSΦ, for a swarm robotic system, based on precise and fast visual localization. The system can simulate a virtually unlimited number of different pheromones and provides the result of their interaction as a gray-scale image on a horizontal LCD screen that the robots move on. In order to demonstrate the pheromone communication method, the Colias autonomous micro robot was deployed as the swarm robotic platform.
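As a rough illustration of the pheromone mechanism that both ant colony optimization and the virtual pheromone systems above rely on, the following minimal C sketch maintains a pheromone field on a grid: agents deposit pheromone where they travel, and the whole field evaporates a little at each step so that old or weak trails fade. The grid size, deposit amount and evaporation rate are arbitrary placeholder values, not parameters from any of the systems described.

```c
#include <stdio.h>

#define W 8
#define H 8

static double field[H][W];                   /* pheromone intensity per cell */

static void evaporate(double rho)            /* rho = evaporation rate, 0..1 */
{
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            field[y][x] *= (1.0 - rho);
}

static void deposit(int x, int y, double amount)
{
    if (x >= 0 && x < W && y >= 0 && y < H)
        field[y][x] += amount;               /* an "ant" marks this cell */
}

int main(void)
{
    /* one agent walks a short diagonal trail, laying pheromone as it goes */
    for (int step = 0; step < 5; step++) {
        deposit(step, step, 1.0);
        evaporate(0.1);                      /* the field decays every time step */
    }
    for (int y = 0; y < H; y++) {            /* print the resulting field        */
        for (int x = 0; x < W; x++)
            printf("%4.2f ", field[y][x]);
        printf("\n");
    }
    return 0;
}
```

In ant colony optimization the deposit amount is typically weighted by the quality of the solution an ant has found, and in a system like COSΦ the field would be rendered to the LCD screen instead of printed, but the deposit-and-evaporate cycle is the common core.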
"Investigation of cue-based aggregation in static and dynamic environments with a mobile robot swarm." Adaptive Behavior (2016): 1-17. - Farshad Arvin, et al. "Imitation of honeybee aggregation with collective behavior of swarm robots." International Journal of Computational Intelligence Systems 4.4 (2011): 739-748. - Schmickl, Thomas, et al. "Get in touch: cooperative decision making based on robot-to-robot collisions." Autonomous Agents and Multi-Agent Systems 18.1 (2009): 133-155. - Garnier, Simon, et al. "Do ants need to estimate the geometrical properties of trail bifurcations to find an efficient route? A swarm robotics test bed." PLoS Comput Biol 9.3 (2013): e1002903. - Arvin, Farshad, et al. "Cue-based aggregation with a mobile robot swarm: a novel fuzzy-based method." Adaptive Behavior 22.3 (2014): 189-206. - Garnier, Simon, et al. "Alice in pheromone land: An experimental setup for the study of ant-like robots." 2007 IEEE Swarm Intelligence Symposium. IEEE, 2007. - Farshad Arvin et al. "COSΦ: artificial pheromone system for robotic swarms research." IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2015. - Krajník, Tomáš, et al. "A practical multirobot localization system." Journal of Intelligent & Robotic Systems 76.3-4 (2014): 539-562. - Farshad Arvin, et al. "Colias: An autonomous micro robot for swarm robotic applications." International Journal of Advanced Robotic Systems 11 (2014).
1
5
<urn:uuid:2e6afa01-0e78-4e0c-ad3e-3e8c8ccfe03a>
Accepted for the 28th International Conference in Software Engineering & Knowledge Engineering [doi:10.18293/SEKE2016-007], ranked in top 15 papers, and expanded version published in International Journal of Software Engineering & Knowledge Engineering [doi:10.1142/S0218194016400131]. Lorini is an educational tool to help introduce object-oriented programming and software modelling for the high school or early university level. The student makes UML diagrams, which are automatically converted to Java code in real-time. Clicking on a line of code will highlight the corresponding UML diagram component, and clicking on a UML diagram component will highlight the corresponding line of code, to help understand the link. Dynamic feedback and a sample tutorial helps the user along the way. Once a full program is realised, it can be compiled and executed. The results (or compilation errors if any have arisen) are then displayed to the user. Full details published in the SEKE2016 paper: Eckert, C., Cham, B., Sun, J., & Dobbie, G. (2016). From Design to Code: An Educational Approach. Proceedings of the 28th International Conference on Software Engineering & Knowledge Engineering, 443-448, doi: 10.18293/SEKE2016-007. (if the link no longer works, feel free to contact me and I can send you a copy) How to obtain and install This plug-in is compatible with Windows, Mac OS X and Linux. - Download and install Astah from here. Any version will suffice. - Download Lorini here (73 KB). - Close any open Astah windows. - Open the installation folder in a file explorer. In Windows, this will be “C:\Program Files\astah-professional” or something similar. - Navigate to the “plugins” folder. - Copy Lorini-1.0.jar into this folder. - Launch Astah. How to use Click here for a PDF tutorial. Full evaluation results To evaluate Lorini, we compared its features to thirty-five other automatic code generation tools. In the associated paper, only a small sample of the full table is shown to conserve space. The full comparison table can be accessed here.
Cocoa is Apple's native object-oriented application programming interface (API) for the macOS operating system. For iOS, tvOS, and watchOS, a similar API exists, named Cocoa Touch, which includes gesture recognition, animation, and a different set of graphical control elements. It is used in applications for Apple devices such as iPhone, iPad, iPod Touch, Apple TV, and Apple Watch. Cocoa consists of the Foundation Kit, Application Kit, and Core Data frameworks, as included by the Cocoa.h header file, and the libraries and frameworks included by those, such as the C standard library and the Objective-C runtime. Cocoa applications are typically developed using the development tools provided by Apple, specifically Xcode (formerly Project Builder) and Interface Builder, using the languages Objective-C or Swift. However, the Cocoa programming environment can be accessed using other tools, such as Clozure CL, LispWorks, Object Pascal, Python, Perl, Ruby, and AppleScript, with the aid of bridge mechanisms such as PasCocoa, PyObjC, CamelBones, RubyCocoa, and a D/Objective-C Bridge. A Ruby language implementation named MacRuby, which removes the need for a bridge mechanism, was formerly developed by Apple, while Nu is a Lisp-like language that can be used with Cocoa with no bridge. It is also possible to write Objective-C Cocoa programs in a simple text editor and build them manually with GNU Compiler Collection (GCC) or clang from the command line or from a makefile. For end-users, Cocoa applications are those written using the Cocoa programming environment. Such applications usually have a distinctive feel, since the Cocoa programming environment automates many aspects of an application to comply with Apple's human interface guidelines. Cocoa continues the lineage of several software frameworks (mainly the App Kit and Foundation Kit) from the NeXTSTEP and OpenStep programming environments developed by NeXT in the 1980s and 1990s. Apple acquired NeXT in December 1996, and subsequently went to work on the Rhapsody operating system that was to be the direct successor of OpenStep. It was to have had an emulation base for classic Mac OS applications, named Blue Box. The OpenStep base of libraries and binary support was termed Yellow Box. Rhapsody evolved into Mac OS X, and the Yellow Box became Cocoa. Thus, Cocoa classes begin with the letters NS, such as NSString or NSArray. These stand for either the NeXT-Sun creation of OpenStep, or for the original proprietary term for the OpenStep framework, NeXTSTEP. Much of the work that went into developing OpenStep was applied to developing macOS, Cocoa being the most visible part. However, differences exist. For example, NeXTSTEP and OpenStep used Display PostScript for on-screen display of text and graphics, while Cocoa depends on Apple's Quartz (which uses the Portable Document Format (PDF) imaging model, but not its underlying technology). Cocoa also has a level of Internet support, including the NSURL and WebKit HTML classes, and others, while OpenStep had only rudimentary support for managed network connections via NSFileHandle classes and Berkeley sockets. The resulting software framework received the name Cocoa for the sake of expediency, because the name had already been trademarked by Apple. For many years before this present use of the name, Apple's Cocoa trademark had originated as the name of a multimedia project design application for children.
The application was originally developed at the Apple Advanced Technology Group under the name KidSim, and was then renamed and trademarked as "Cocoa". The name, coined by Peter Jensen who was hired to develop Cocoa for Apple, was intended to evoke "Java for kids", as it ran embedded in web pages. The trademark, and thus the name "Cocoa", was re-used to avoid the delay which would have occurred while registering a new trademark for this software framework. The original "Cocoa" program was discontinued at Apple in one of the rationalizations that followed Steve Jobs's return to Apple. It was then licensed to a third party and marketed as Stagecast Creator as of 2011 . One feature of the Cocoa environment is its facility for managing dynamically allocated memory. Cocoa's NSObject class, from which most classes, both vendor and user, are derived, implements a reference counting scheme for memory management. Objects that derive from the NSObject root class respond to a retain and a release message, and keep a retain count. A method titled retainCount exists, but contrary to its name, will usually not return the exact retain count of an object. It is mainly used for system-level purposes. Invoking it manually is not recommended by Apple. A newly allocated object created with copy has a retain count of one. Sending that object a retain message increments the retain count, while sending it a release message decrements the retain count. When an object's retain count reaches zero, it is deallocated by a procedure similar to a C++ destructor. dealloc is not guaranteed to be invoked. Starting with Objective-C 2.0, the Objective-C runtime implements an optional garbage collector. In this model, the runtime turns Cocoa reference counting operations such as "retain" and "release" into no-ops. The garbage collector does not exist on the iOS implementation of Objective-C 2.0. Garbage Collection in Objective-C runs on a low-priority background thread, and can halt on Cocoa's user events, with the intention of keeping the user experience responsive. In 2011, the LLVM compiler introduced ARC (Automatic Reference Counting), which replaces the conventional garbage collector by performing static analysis of Objective-C source code and inserting retain and release messages as necessary. Cocoa consists of three Objective-C object libraries called frameworks. Frameworks are functionally similar to shared libraries, a compiled object that can be dynamically loaded into a program's address space at runtime, but frameworks add associated resources, header files, and documentation. The Cocoa frameworks are implemented as a type of bundle, containing the aforementioned items in standard locations. A key part of the Cocoa architecture is its comprehensive views model. This is organized along conventional lines for an application framework, but is based on the Portable Document Format (PDF) drawing model provided by Quartz. This allows creating custom drawing content using PostScript-like drawing commands, which also allows automatic printer support and so forth. Since the Cocoa framework manages all the clipping, scrolling, scaling and other chores of drawing graphics, the programmer is freed from implementing basic infrastructure and can concentrate on the unique aspects of an application's content. The Smalltalk teams at Xerox PARC eventually settled on a design philosophy that led to easy development and high code reuse. 
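The retain/release bookkeeping described above can be illustrated with a small, language-neutral sketch in plain C. This is only the general reference-counting idea, not the Cocoa API itself; the Object type and function names below are invented for the example.

```c
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    int refcount;
    /* ... the object's payload would live here ... */
} Object;

/* Like a fresh allocation: the count starts at one. */
static Object *object_create(void)
{
    Object *o = calloc(1, sizeof *o);
    o->refcount = 1;
    return o;
}

/* "retain": another owner claims the object. */
static void object_retain(Object *o) { o->refcount++; }

/* "release": an owner lets go; at zero the object is destroyed. */
static void object_release(Object *o)
{
    if (--o->refcount == 0) {               /* analogous to dealloc running */
        printf("deallocating object\n");
        free(o);
    }
}

int main(void)
{
    Object *o = object_create();            /* count = 1                 */
    object_retain(o);                       /* count = 2 (second owner)  */
    object_release(o);                      /* count = 1                 */
    object_release(o);                      /* count = 0 -> freed        */
    return 0;
}
```

ARC, as described above, effectively automates the placement of the retain and release calls in this kind of scheme, while a tracing garbage collector removes the explicit counting altogether.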
Named model-view-controller (MVC), the concept breaks an application into three sets of interacting object classes. Cocoa's design is a strict application of MVC principles. Under OpenStep, most of the classes provided were either high-level View classes (in AppKit) or one of a number of relatively low-level model classes like NSString. Compared to similar MVC systems, OpenStep lacked a strong model layer. No stock class represented a "document," for instance. During the transition to Cocoa, the model layer was expanded greatly, introducing a number of pre-rolled classes to provide functionality common to desktop applications. In Mac OS X 10.3, Apple introduced the NSController family of classes, which provide predefined behavior for the controller layer. These classes are considered part of the Cocoa Bindings system, which also makes extensive use of protocols such as Key-Value Observing and Key-Value Binding. The term 'binding' refers to a relationship between two objects, often between a view and a controller. Bindings allow the developer to focus more on declarative relationships rather than orchestrating fine-grained behavior. With the arrival of Mac OS X 10.4, Apple extended this foundation further by introducing the Core Data framework, which standardizes change tracking and persistence in the model layer. In effect, the framework greatly simplifies the process of making changes to application data, undoing changes (if necessary), saving data to disk, and reading it back in. By providing framework support for all three MVC layers, Apple's goal is to reduce the amount of boilerplate or "glue" code that developers have to write, freeing up resources to spend time on application-specific features. In most object-oriented languages, calls to methods are represented physically by a pointer to the code in memory. This restricts the design of an application since specific command handling classes are needed, usually organized according to the chain-of-responsibility pattern. While Cocoa retains this approach for the most part, Objective-C's late binding opens up more flexibility. Under Objective-C, methods are represented by a selector, a string describing the method to call. When a message is sent, the selector is sent into the Objective-C runtime, matched against a list of available methods, and the method's implementation is called. Since the selector is text data, this lets it be saved to a file, transmitted over a network or between processes, or manipulated in other ways. The implementation of the method is looked up at runtime, not compile time. There is a small performance penalty for this, but late binding allows the same selector to reference different implementations. By a similar token, Cocoa provides a pervasive data manipulation method called key-value coding (KVC). This allows a piece of data or property of an object to be looked up or changed at runtime by name. The property name acts as a key to the value. In traditional languages, this late binding is impossible. KVC leads to great design flexibility. An object's type need not be known, yet any property of that object can be discovered using KVC. Also, by extending this system using something Cocoa terms key-value observing (KVO), automatic support for undo-redo is provided. Late static binding is a variant of binding somewhere between static and dynamic binding. The binding of names before the program is run is called static (early); bindings performed as the program runs are dynamic (late or virtual). 
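A crude way to picture selector-based message sending and the string-keyed lookup behind key-value coding is the following plain C sketch, in which the "method" to run is found by comparing a selector string against a table at run time. This is only an analogy for the mechanism described above, not how the Objective-C runtime is actually implemented; all of the names in the sketch are invented.

```c
#include <stdio.h>
#include <string.h>

typedef void (*Method)(void);

static void say_hello(void) { printf("hello\n"); }
static void say_bye(void)   { printf("bye\n");   }

/* A per-"class" method list: selector text paired with an implementation. */
static struct { const char *selector; Method imp; } method_table[] = {
    { "hello", say_hello },
    { "bye",   say_bye   },
};

/* "Send a message": match the selector string against the method list
 * at run time, so the selector could have come from a file or network. */
static void send_message(const char *selector)
{
    for (size_t i = 0; i < sizeof method_table / sizeof method_table[0]; i++)
        if (strcmp(method_table[i].selector, selector) == 0) {
            method_table[i].imp();
            return;
        }
    printf("unrecognized selector: %s\n", selector);
}

int main(void)
{
    send_message("hello");     /* resolved at run time, not compile time */
    send_message("bye");
    send_message("undo");      /* no implementation found               */
    return 0;
}
```

Because the lookup key is plain text, the same "selector" can be stored, transmitted, or bound to different implementations, which is the flexibility the article attributes to late binding and key-value coding.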
One of the most useful features of Cocoa is the powerful base objects the system supplies. As an example, consider the Foundation class NSAttributedString, which provides Unicode strings, and the NSText system in AppKit, which allows the programmer to place string objects in the GUI. NSText and its related classes are used to display and edit strings. The collection of objects involved permits an application to implement anything from a simple single-line text entry field to a complete multi-page, multi-column text layout schema, with full professional typography features such as kerning, ligatures, running text around arbitrary shapes, rotation, full Unicode support and anti-aliased glyph rendering. Paragraph layout can be controlled automatically or by the user, using a built-in "ruler" object that can be attached to any text view. Spell checking is automatic, using a single dictionary shared by all applications, and uses the squiggly underlining convention introduced by Microsoft (actually a dashed red underline in Cocoa). Unlimited undo-redo support is built in. Using only the built-in features, one can write a text editor application in as few as 10 lines of code. With new controller objects, this may fall to zero. This is in contrast to the TextEdit APIs found in the earlier Mac OS. When extensions are needed, Cocoa's use of Objective-C makes this a straightforward task. Objective-C includes the concept of "categories," which allow modifying an existing class "in place". Functionality can be accomplished in a category without any changes to the original classes in the framework, or even access to its source. Under more common frameworks, this same task requires making a new subclass supporting the added features, and then changing all instances of the classes to this new class. The Cocoa frameworks are written in Objective-C, and hence that is the preferred language for developing Cocoa applications. Java bindings for the Cocoa frameworks (termed the Java bridge) were also made available with the aim of replacing Objective-C with a more popular language, but these bindings were unpopular among Cocoa developers and Cocoa's message-passing semantics did not translate well to a statically typed language such as Java. Cocoa's need for runtime binding means many of Cocoa's key features are not available with Java. In 2005, Apple announced that the Java bridge was to be deprecated, meaning that features added to Cocoa in macOS versions later than 10.4 would not be added to the Cocoa-Java programming interface. Originally, AppleScript Studio could be used to develop simpler Cocoa applications. However, as of Snow Leopard, it has been deprecated. It was replaced with AppleScriptObjC, which allows programming in AppleScript while using Cocoa frameworks. Third-party bindings available for other languages include Clozure CL, Monobjc and NObjective (C#), Cocoa# (CLI), Cocodao and D/Objective-C Bridge, LispWorks, CamelBones (Perl), PyObjC (Python), FPC PasCocoa (Lazarus and Free Pascal), and RubyCocoa (Ruby). Nu uses the Objective-C object model directly, and thus can use the Cocoa frameworks without needing a binding. There are also open source implementations of major parts of the Cocoa framework, such as GNUstep and Cocotron, which allow cross-platform Cocoa application development to target other operating systems, such as Microsoft Windows and Linux. Cocoa is an important inheritance from NeXT, as indicated by
the "NS" prefix Because Java is a strongly typed language, it requires more information about the classes and interfaces it manipulates at compile time. Therefore, before using Objective-C classes as Java ones, a description of them has to be written and compiled. Manage research, learning and skills at defaultLogic. Create an account using LinkedIn or facebook to manage and organize your IT knowledge. defaultLogic works like a shopping cart for information -- helping you to save, discuss and share.
Rockhounding In Colorado: Where To Find Geodes The disease strikes when least expected. You’re out for a hike and something on the path catches your eye. You see a small fossil, a shiny stone or a glimmering crystal and, naturally, you pick it up. Soon, the ground becomes a treasure trove of things to be looked at, turned over, scrutinized and pocketed. Once this happens, there’s no turning back. Rock-hound fever has struck. For inveterate hikers and trekkers who explore Colorado’s mountains and forests regularly, these surprises add another dimension to the experience: That pretty, little green stone may be jade or amazonite; what looks like glass that washed out of a hillside might be a quartz crystal; the round, rough-hewn rock could be a geode. Colorado offers an abundance of minerals, gemstones, crystals and fossils to anyone willing to go out and look for them. For the casual collector or the obsessed amateur, there’s a little something for everyone, just about everywhere. It’s a matter of paying attention. Once the newbie hound is hooked, it helps to understand the terms rock and mineral, and how the two are related. First, there’s no such thing as “just a rock.” A rock is more than an aggregate of minerals. Rocks are the history books of ancient geologic times. They include limestone, granite, sandstone and shale, as well as gravel beds, clay and even — can you believe? — the permafrost of Siberia. In fact, any deposit that makes up the Earth’s outer crust. And rocks are the preservers of fossils that once lived when Colorado was a vast sea. But what about those other treasures hidden in rocks — the minerals that are at the heart of a collector’s expedition? A mineral is a naturally occurring, inorganic solid, and has its own characteristic crystalline structure. Gems, then, are the flowers of the minerals that have ornamental value because of their beauty, durability and, occasionally, some degree of rarity. Colorado ranks among the most strongly mineralized areas in the world, and it’s especially noted for alabaster, amethyst, lapis lazuli (blue sapphire), topaz and turquoise. Its most characteristic gem is amazonite, the bright green mineral that is often mistaken for jade. But, hey, you don’t know what’s going to turn up when you’re out in the woods. The Colorado landscape has changed dramatically in the last 10 years, with public lands becoming private, claims being staked and roads leading to closed gates. But the fun is still there if you know where there is public access. And one of the best resources is membership in the Colorado Mineral Society (CMS). CMS has access to areas not always available to the beginner going it alone, and can help guide newbies as to where to go and what to look for. After a few field trips with seasoned explorers, you will discover how to recognize crystal- and gem-bearing rocks, as well as learn about local geology and topography. CMS 2017 Summer Field Trip Schedule: The adventurous rock hound wanting to venture further in search of lovely stones and crystals should go to www.peaktopeak.com/colorado/index.php3, an excellent Web site where you can access locations and availability for any mineral. Next, get the most recently published guide to rock hounding in Colorado that describes the sites and includes directions and maps. For examples of rocks and minerals that occur in our region, it’s a good idea to visit a rock shop. Then pack a pick and shovel and head out to those beautiful, productive areas that are open to collecting. 
The beauty of rock hounding is its simplicity. Walking around with your nose to the ground reopens that childhood part of us that thinks finding a pretty stone is a wondrous event. As a CMS member said after finding a lode of geodes, “It gives me a funny feeling, knowing how long they’ve been here waiting for me to find them.” Did You Know? Colorado’s state mineral is rhodochrosite (well-known areas are near Alma). The state gem is aquamarine, found on the lofty reaches of Mount Antero (Chaffee County). The state rock is marble from Yule. The state fossil is the Stegosaurus. Apache Tears, properly known as black obsidian (Ruby Mountain in Chaffee County), were named for the Indian legend that they are the petrified tears of Apache women mourning the slaughter of their men in battle. Garo Park in Park County, still plentiful in blue agate and jasper, was the site of the last Ute Indian battle. Garden Park near Cañon City, rich in geodes, quartz and fossils, is world-famous for its Jurassic dinosaurs and the role the specimens played in the infamous bone wars of the late 1880s. The dinosaur sites now form the Garden Park Paleontological Resource Area overseen by the Bureau of Land Management. If you’re prospecting on Devil’s Head Mountain (Douglas County) in August, you’ll join millions of ladybugs for their annual gathering on the summit. If You Go Colorado Mineral Society, P.O. Box 280755, Lakewood, Colorado 80228; www.coloradomineralsociety.org A good guidebook: Rockhounding Colorado (2004), by William A. and Cora Kappele From the Editors: We spent a heap of time making sure this story was accurate when it was published, but of course, things can change. Please confirm the details before setting out in our great Centennial State.
Liquid crystals have been used for digital calculators and watches for years, but the size required had made them impractical for computer use. Recent improvements in LCD technology have reduced the size of the LCD pixel to a size comparable to that of a CRT pixel. Liquid crystal displays operate on the principle of scattering the light from an outside source to produce the desired pattern. The display from a liquid crystal is usually gray or black, but color can be achieved through the use of filters or dyes. They require low power and low voltage, making them ideal for laptop and notebook computers. In manufacturing LCDs, a clear, conductive material is deposited on the inside surfaces of two sheets of glass. This material acts as one electrode. The liquid crystal material is then deposited on the glass in the desired pattern. This pattern can be segmented (watches and calculators), dot matrix (graphic and computer screens), or a custom layout for special purposes. A terminal conductor is connected to an external terminal to control each liquid crystal. The two sheets of glass are then hermetically sealed at the edges.
Passive Matrix Liquid Crystal Displays
Passive matrix liquid crystal displays are used in most monochrome and color laptop computers today. The LCDs are arranged in a dot matrix pattern; a resolution of 640 columns by 480 rows is not uncommon. Characters are formed by addressing each row and column. Color passive matrix LCDs use three layers of crystals, each separated by a color filter. Color is achieved by energizing one, two, or all three LCDs for each pixel. Passive matrix LCDs have some distinct disadvantages. They have low contrast, and this lack of contrast has required the addition of a backlight to aid the user in viewing the screen. The response time to turn the pixels on and off is too slow for full-motion video and can produce a ghosting effect when changing full-screen displays. Color passive matrix LCDs are limited to displaying 16 colors simultaneously, even though the VGA adapter can have a palette of 262,144 colors.
Active Matrix Liquid Crystal Displays
Active matrix liquid crystal displays closely emulate the capabilities of the full-color CRT. The perfection of the thin film transistor (TFT) is largely responsible for the development of the active matrix LCD. Active matrix LCDs offer a brighter screen, provide response times fast enough to accommodate full-motion video, and can display 256 colors simultaneously. In manufacturing an active matrix display panel, each pixel consists of three crystals, one each for red, green, and blue. Three TFTs control each pixel, one for each color. TFT technology allows entire microprocessors to be deposited transparently on the glass plates, increasing the brightness, speed, and color quality of the display. The displays discussed in this chapter are output devices: they display information from the computer for the user. To allow the user to act on the information being displayed, some type of input device is required. The most common input device is the keyboard. Increasing in popularity are cursor-pointing devices such as the mouse or trackball. The keyboard is the basic input device for the computer. There are several styles of keyboards available, but the most common one today is the 101-key enhanced keyboard. The 101-key enhanced keyboard made several improvements over the 84-key keyboard. Two new function keys, F11 and F12, were added. The function keys were moved from the left side of the keyboard to the top of the keyboard. A group of dedicated cursor and screen control keys was also added.
Back in the day, when men were men and computers were programmed in hexadecimal, Texas Instruments produced a nifty little display chip called a TIL311. This chip took a 4-bit nibble as input, and produced a single hex digit on a built-in LED display. As if that weren’t cool enough, TI packaged it in translucent red; you could almost convince yourself that you saw the electrons moving through the traces as the display digits changed. This wasn’t a consumer-oriented BCD display for use in calculators and clock radios (though it would work nicely for that) — this was an actual hexadecimal display, capable of displaying numbers beyond zero-through-nine. To paraphrase This Is Spinal Tap, “these went to fifteen!” Unfortunately, although they’re still as useful as ever, TIL311s are extremely scarce today. The best price I could find online was $19.95 apiece. (For comparison, you could get several backlit 32-character LCD displays for the same cost.) For Drexel’s EET325 Microprocessors class this term, we wanted to capture the essence of programming a vintage 8-bit CPU like the Z80. A modern LCD display would be simpler to implement, but somehow it just wouldn’t be the same. Hex displays like the TIL311 worked beautifully: one hex digit exactly covers four bits. Two digits cover one byte (for the data bus) and four cover a 16-bit word (for the full address bus on the Z80). The most recent versions of the DrACo/Z80 are true 8-bit machines, though: the address bus is 8 bits wide (although expandable to 16). Because of this, only four hex digits are needed; two for the address and two for the data. Even so, four TIL311s per board would add $80 to the bill of materials — roughly doubling the cost. This, we figured, was an opportunity to put our LPKF PCB plotter to use creating some modern TIL311 replacements. A PIC16F1825 microcontroller, a common-anode seven-segment LED display, and seven resistors take the place of TI’s slick one-chip solution. It isn’t nearly as pretty, but it gets the job done for a bit less money. A few design revisions and squashed bugs later, the first of the new boards are up and running. The PIC runs in a continuous loop, monitoring the four data pins and updating the seven-segment output based on data in a lookup table. (The code is easily converted to a common-cathode design by inverting the display bits; the board would have to be modified to support this. It would have been nice to be able to do this in software by using an I/O pin as a switchable common, but unfortunately there were no spare pins left.) Here is the zipped PIC project to reproduce this display for yourself, along with the layout file used to create the board layout in FreePCB. The design is completely single-layer for ease of soldering, although if using double-sided copper-clad boards like we did, the holes on the top layer must be insulated. The PIC actually runs at its slowest self-clocked speed of about 31kHz; since the monitor-lookup-and-update loop only takes 36 cycles, the display can still be updated at several hundred Hz — fast enough to be a blur if the Z80 is running that fast. (The design should work at any speed, since no timing at all is used; the PIC is simply emulating combinatorial logic as fast as possible/necessary.) Some details on construction: All resistors are 470 ohm, although any similar value should work OK. The connector is a standard 0.1″ row connector, with the plastic insulator pressed all the way to one side. 
This way, the connector can be inserted through the top of the board, soldered to the bottom side like all of the other components, and have enough pin length left to comfortably fit into a solderless breadboard. Since these boards will be used in a classroom lab setting, I opted to socket the PIC. That way, if one accidentally gets its power reversed and/or is introduced to 12V instead of 5V, we can easily replace it. Since the LED display is driven by the PIC, it is less likely to be damaged due to incorrect connections; the PIC is unlikely to pass enough current to it to cause damage. The LED is keyed, and will only go in one way. (Yes, it’s supposed to be upside-down; these boards don’t use the decimal point, and the circuit layout worked much better that way.) I’m still working on refining the parameters for producing boards on the LPKF plotter (not to mention dealing with some persistent alignment issues) — but it’s definitely getting there. If you decide to make one of these for yourself and have questions, please email me (“Eric” at this domain); I’ll be happy to help.
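For readers who do not want to download the project, the monitor-lookup-and-update loop described above boils down to something like the following C sketch. This is not the actual firmware from the zipped PIC project: the read/write helpers are simulated stand-ins for the real port accesses, and the segment patterns use the conventional active-high encoding (a common-anode display like the one on this board would drive the complement, as noted in the post about converting to common-cathode).

```c
#include <stdint.h>
#include <stdio.h>

/* Segment patterns for 0-F, bit order g f e d c b a (active high). */
static const uint8_t hex_font[16] = {
    0x3F, 0x06, 0x5B, 0x4F, 0x66, 0x6D, 0x7D, 0x07,   /* 0 1 2 3 4 5 6 7 */
    0x7F, 0x6F, 0x77, 0x7C, 0x39, 0x5E, 0x79, 0x71    /* 8 9 A b C d E F */
};

/* Hypothetical stand-ins for the PIC's port accesses: on the real board the
 * input nibble comes from four GPIO pins and the pattern drives seven more. */
static uint8_t read_nibble(uint8_t simulated_input) { return simulated_input & 0x0F; }
static void    write_segments(uint8_t pattern)      { printf("segments = 0x%02X\n", pattern); }

int main(void)
{
    /* The real firmware does this in an endless loop, acting as combinatorial
     * logic in software; here the input is just stepped through 0x0..0xF once. */
    for (uint8_t value = 0; value < 16; value++)
        write_segments(hex_font[read_nibble(value)]);
    return 0;
}
```

Since there is no timing involved, the loop simply refreshes the digit as fast as the controller allows, which is why the display tracks the bus at any clock speed.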
-- The Evidence for -- Ancient Atomic Warfare Part 1 of 2 Religious texts and geological evidence suggest that several parts of the world have experienced destructive atomic blasts in ages past. Extracted from Nexus Magazine, Volume 7, Number 5 (August-September 2000) or September-October 2000 in the © 2000 by David Hatcher Childress When the first atomic bomb exploded in New Mexico, the desert sand turned to fused green glass. This fact, according to the magazine Free World, has given certain archaeologists a turn. They have been digging in the ancient Euphrates Valley and have uncovered a layer of agrarian culture 8,000 years old, and a layer of herdsman culture much older, and a still older caveman culture. Recently, they reached another layerÉof fused green glass. It is well known that atomic detonations on or above a sandy desert will melt the silicon in the sand and turn the surface of the Earth into a sheet of glass. But if sheets of ancient desert glass can be found in various parts of the world, does it mean that atomic wars were fought in the ancient past or, at the very least, that atomic testing occurred in the dim ages of history? This is a startling theory, but one that is not lacking in evidence, as such ancient sheets of desert glass are a geological fact. Lightning strikes can sometimes fuse sand, meteorologists contend, but this is always in a distinctive root-like pattern. These strange geological oddities are called fulgurites and manifest as branched tubular forms rather than as flat sheets of fused sand. Therefore, lightning is largely ruled out as the cause of such finds by geologists, who prefer to hold onto the theory of a meteor or comet strike as the cause. The problem with this theory is that there is usually no crater associated with these anomalous sheets of glass. Brad Steiger and Ron Calais report in their book, Mysteries of Time and Space,1 that Albion W. Hart, one of the first engineers to graduate from Massachusetts Institute of Technology, was assigned an engineering project in the interior of Africa. While he and his men were travelling to an almost inaccessible region, they first had to cross a great expanse of desert. "At the time he was puzzled and quite unable to explain a large expanse of greenish glass which covered the sands as far as he could see," writes Margarethe Casson in an article on Hart's life in the magazine Rocks and Minerals (no. 396, 1972). She then goes on to mention: "Later on, during his lifeÉhe passed by the White Sands area after the first atomic explosion there, and he recognized the same type of silica fusion which he had seen fifty years earlier in the African desert."2 Tektites: A Terrestrial Explanation? Large desert areas strewn with mysterious globules of "glass"--known as tektites--are occasionally discussed in geological literature. These blobs of "hardened glass" (glass is a liquid, in fact) are thought to come from meteorite impacts in most instances, but the evidence shows that in many cases there is no impact crater. Another explanation is that tektites have a terrestrial explanation--one that includes atomic war or high-tech weapons capable of melting sand. The tektite debate was summed up in an article entitled "The Tektite Problem", by John O'Keefe, published in the August 1978 edition of Scientific American. 
Said O'Keefe: If tektites are terrestrial, it means that some process exists by which soil or common rocks can be converted in an instant into homogeneous, water-free, bubble-free glass and be propelled thousands of miles above the atmosphere. If tektites come from the Moon, it seems to follow that there is at least one powerful volcano somewhere on the Moon that has erupted at least as recently as 750,000 years ago. Neither possibility is easy to accept. Yet one of them must be accepted, and I believe it is feasible to pick the more reasonable one by rejecting the more unlikely. The key to solving the tektite problem is an insistence on a physically reasonable hypothesis and a resolute refusal to be impressed by mere numerical coincidences such as the similarity of terrestrial sediments to tektite material. I believe that the lunar volcanism hypothesis is the only one physically possible, and that we have to accept it. If it leads to unexpected but not impossible conclusions, that is precisely its utility. To cite just one example of the utility, the lunar origin of tektites strongly supports the idea that the Moon was formed by fission of the Earth. Tektites are indeed much more like terrestrial rocks than one would expect of a chance assemblage. If tektites come from a lunar magma, then deep inside the Moon there must be material that is very much like the mantle of the Earth--more like the mantle than it is like the shallower parts of the Moon from which the lunar surface basalts have originated. If the Moon was formed by fission of the Earth, the object that became the Moon would have been heated intensely and from the outside, and would have lost most of its original mass and in particular the more volatile elements. The lavas constituting most of the Moon's present surface were erupted early in the Moon's history, when its heat was concentrated in the shallow depleted zone quite near the surface. During the recent periods represented by tektite falls, the sources of lunar volcanism have necessarily been much deeper, so that any volcanoes responsible for tektites have drawn on the lunar material that suffered least during the period of ablation and is therefore most like unaltered terrestrial mantle material. Ironically, that would explain why tektites are in some ways more like terrestrial rocks than they are like the rocks of the lunar surface. Mysterious Glass in the Egyptian Sahara One of the strangest mysteries of ancient Egypt is that of the great glass sheets that were only discovered in 1932. In December of that year, Patrick Clayton, a surveyor for the Egyptian Geological Survey, was driving among the dunes of the Great Sand Sea near the Saad Plateau in the virtually uninhabited area just north of the southwestern corner of Egypt, when he heard his tyres crunch on something that wasn't sand. It turned out to be large pieces of marvellously clear, yellow-green glass. In fact, this wasn't just any ordinary glass, but ultra-pure glass that was an astonishing 98 per cent silica. Clayton wasn't the first person to come across this field of glass, as various 'prehistoric' hunters and nomads had obviously also found the now-famous Libyan Desert Glass (LDG). The glass had been used in the past to make knives and sharp-edged tools as well as other objects. A carved scarab of LDG was even found in Tutankhamen's tomb, indicating that the glass was sometimes used for jewellery. 
An article by Giles Wright in the British science magazine New Scientist (July 10, 1999), entitled "The Riddle of the Sands", says that LDG is the purest natural silica glass ever found. Over a thousand tonnes of it are strewn across hundreds of kilometres of bleak desert. Some of the chunks weigh 26 kilograms, but most LDG exists in smaller, angular pieces--looking like shards left when a giant green bottle was smashed by colossal forces. According to the article, LDG, pure as it is, does contain tiny bubbles, white wisps and inky black swirls. The whitish inclusions consist of refractory minerals such as cristobalite. The ink-like swirls, though, are rich in iridium, which is diagnostic of an extraterrestrial impact such as a meteorite or comet, according to conventional wisdom. The general theory is that the glass was created by the searing, sand-melting impact of a cosmic projectile.

However, there are serious problems with this theory, says Wright, and many mysteries concerning this stretch of desert containing the pure glass. The main problem: Where did this immense amount of widely dispersed glass shards come from? There is no evidence of an impact crater of any kind; the surface of the Great Sand Sea shows no sign of a giant crater, and neither do microwave probes made deep into the sand by satellite radar. Furthermore, LDG seems to be too pure to be derived from a messy cosmic collision. Wright mentions that known impact craters, such as the one at Wabar in Saudi Arabia, are littered with bits of iron and other meteorite debris. This is not the case with the Libyan Desert Glass site. What is more, LDG is concentrated in two areas, rather than one. One area is oval-shaped; the other is a circular ring, six kilometres wide and 21 kilometres in diameter. The ring's wide centre is devoid of the glass.

One theory is that there was a soft projectile impact: a meteorite, perhaps 30 metres in diameter, may have detonated about 10 kilometres or so above the Great Sand Sea, the searing blast of hot air melting the sand beneath. Such a craterless impact is thought to have occurred in the 1908 Tunguska event in Siberia--at least as far as mainstream science is concerned. That event, like the pure desert glass, remains a mystery. Another theory has a meteorite glancing off the desert surface, leaving a glassy crust and a shallow crater that was soon filled in. But there are two known areas of LDG. Were there two cosmic projectiles in tandem? Alternatively, is it possible that the vitrified desert is the result of atomic war in the ancient past? Could a Tesla-type beam weapon have melted the desert, perhaps in a test?

An article entitled "Dating the Libyan Desert Silica-Glass" appeared in the British journal Nature (no. 170) in 1952. Said the author, Kenneth Oakley:3

Pieces of natural silica-glass up to 16 lb in weight occur scattered sparsely in an oval area, measuring 130 km north to south and 53 km from east to west, in the Sand Sea of the Libyan Desert. This remarkable material, which is almost pure (97 per cent silica), relatively light (sp. gr. 2.21), clear and yellowish-green in colour, has the qualities of a gemstone. It was discovered by the Egyptian Survey Expedition under Mr P.A. Clayton in 1932, and was thoroughly investigated by Dr L.J. Spencer, who joined a special expedition of the Survey for this purpose in 1934. The pieces are found in sand-free corridors between north-south dune ridges, about 100 m high and 2-5 km apart.
These corridors or "streets" have a rubbly surface, rather like that of a "speedway" track, formed by angular gravel and red loamy weathering debris overlying Nubian sandstone. The pieces of glass lie on this surface or partly embedded in it. Only a few small fragments were found below the surface, and none deeper than about one metre. All the pieces on the surface have been pitted or smoothed by sand-blast. The distribution of the glass is patchy...

While undoubtedly natural, the origin of the Libyan silica-glass is uncertain. In its constitution it resembles the tektites of supposed cosmic origin, but these are much smaller. Tektites are usually black, although one variety found in Bohemia and Moravia and known as moldavite is clear deep-green. The Libyan silica-glass has also been compared with the glass formed by the fusion of sand in the heat generated by the fall of a great meteorite; for example, at Wabar in Arabia and at Henbury in central Australia. Reporting the findings of his expedition, Dr Spencer said that he had not been able to trace the Libyan glass to any source; no fragments of meteorites or indications of meteorite craters could be found in the area of its distribution. He said: "It seemed easier to assume that it had simply fallen from the sky."

It would be of considerable interest if the time of origin or arrival of the silica-glass in the Sand Sea could be determined geologically or archaeologically. Its restriction to the surface or top layer of a superficial deposit suggests that it is not of great antiquity from the geological point of view. On the other hand, it has clearly been there since prehistoric times. Some of the flakes were submitted to Egyptologists in Cairo, who regarded them as "late Neolithic or pre-dynastic". In spite of a careful search by Dr Spencer and the late Mr A. Lucas, no objects of silica-glass could be found in the collections from Tut-Ankh-Amen's tomb or from any of the other dynastic tombs. No potsherds were encountered in the silica-glass area, but in the neighbourhood of the flakings some "crude spear-points of glass" were found; also some quartzite implements, "quernstones" and ostrich-shell fragments.

Oakley is apparently incorrect when he says that LDG was not found in Tutankhamen's tomb, as according to Wright a piece was found. At any rate, the vitrified areas of the Libyan Desert are yet to be explained. Are they evidence of an ancient war--a war that may have turned North Africa and Arabia into the desert that it is today?

The Vitrified Forts of Scotland

One of the great mysteries of classical archaeology is the existence of many vitrified forts in Scotland. Are they also evidence of some ancient atomic war? Maybe, but maybe not. There are said to be at least 60 such forts throughout Scotland. Among the most well-known are Tap o'Noth, Dunnideer, Craig Phadraig (near Inverness), Abernathy (near Perth), Dun Lagaidh (in Ross), Cromarty, Arka-Unskel, Eilean na Goar, and Bute-Dunagoil on the Sound of Bute off Arran Island. Another well-known vitrified fort is the Cauadale hill-fort in Argyll, West Scotland. One of the best examples of a vitrified fort is Tap o'Noth, which is near the village of Rhynie in northeastern Scotland. This massive fort from prehistory is on the summit of a mountain of the same name which, being 1,859 feet (560 metres) high, commands an impressive view of the Aberdeenshire countryside.
At first glance it seems that the walls are made of a rubble of stones, but on closer look it is apparent that they are made not of dry stones but of melted rocks! What were once individual stones are now black and cindery masses, fused together by heat that must have been so intense that molten rivers of rock once ran down the walls.

Reports on vitrified forts were made as far back as 1880 when Edward Hamilton wrote an article entitled "Vitrified Forts on the West Coast of Scotland" in the Archaeological Journal (no. 37, 1880, pp. 227-243). In his article, Hamilton describes several sites in detail, including Arka-Unskel:4

At the point where Loch na Nuagh begins to narrow, where the opposite shore is about one-and-a-half to two miles distant, is a small promontory connected with the mainland by a narrow strip of sand and grass, which evidently at one time was submerged by the rising tide. On the flat summit of this promontory are the ruins of a vitrified fort, the proper name for which is Arka-Unskel. The rocks on which this fort is placed are metamorphic gneiss, covered with grass and ferns, and rise on three sides almost perpendicular for about 110 feet from the sea level. The smooth surface on the top is divided by a slight depression into two portions. On the largest, with precipitous sides to the sea, the chief portion of the fort is situated, and occupies the whole of the flat surface. It is of somewhat oval form. The circumference is about 200 feet, and the vitrified walls can be traced in its entire length...

We dug under the vitrified mass, and there found what was extremely interesting, as throwing some light on the manner in which the fire was applied for the purpose of vitrification. The internal part of the upper or vitrified wall for about a foot or a foot-and-a-half was untouched by the fire, except that some of the flat stones were slightly agglutinated together, and that the stones, all feldspathic, were placed in layers one upon another. It was evident, therefore, that a rude foundation of boulder stones was first formed upon the original rock, and then a thick layer of loose, mostly flat stones of feldspathic sand, and of a different kind from those found in the immediate neighborhood, were placed on this foundation, and then vitrified by heat applied externally. This foundation of loose stones is found also in the vitrified fort of Dun Mac Snuichan, on Loch Etive.

Hamilton describes another vitrified fort that is much larger, situated on the island at the entrance of Loch Ailort. This island, locally termed Eilean na Goar, is the most eastern and is bounded on all sides by precipitous gneiss rocks; it is the abode and nesting place of numerous sea birds. The flat surface on the top is 120 feet from the sea level, and the remains of the vitrified fort are situated on this, oblong in form, with a continuous rampart of vitrified wall five feet thick, attached at the SW end to a large upright rock of gneiss. The space enclosed by this wall is 420 feet in circumference and 70 feet in width. The rampart is continuous and about five feet in thickness. At the eastern end is a great mass of wall in situ, vitrified on both sides. In the centre of the enclosed space is a deep depression in which are masses of the vitrified wall strewed about, evidently detached from their original site.

Hamilton naturally asks a few obvious questions about the forts. Were these structures built as a means of defence? Was the vitrification the result of design or accident?
How was the vitrification produced? In this vitrification process, huge blocks of stones have been fused with smaller rubble to form a hard, glassy mass. Explanations for the vitrification are few and far between, and none of them is universally accepted. One early theory was that these forts are located on ancient volcanoes (or the remains of them) and that the people used molten stone ejected from eruptions to build their settlements. This idea was replaced with the theory that the builders of the walls had designed the forts in such a way that the vitrification was purposeful in order to strengthen the walls. This theory postulated that fires had been lit and flammable material added to produce walls strong enough to resist the dampness of the local climate or the invading armies of the enemy. It is an interesting theory, but one that presents several problems. For starters, there is really no indication that such vitrification actually strengthens the walls of the fortress; rather, it seems to weaken them. In many cases, the walls of the forts seem to have collapsed because of the fires. Also, since the walls of many Scottish forts are only partially vitrified, this would hardly have proved an effective building method. Julius Caesar described a type of wood and stone fortress, known as a murus gallicus, in his account of the Gallic Wars. This was interesting to those seeking solutions to the vitrified fort mystery because these forts were made of a stone wall filled with rubble, with wooden logs inside for stability. It seemed logical to suggest that perhaps the burning of such a wood-filled wall might create the phenomenon of vitrification. Some researchers are sure that the builders of the forts caused the vitrification. Arthur C. Clarke quotes one team of chemists from the Natural History Museum in London who were studying the many forts:5 Considering the high temperatures which have to be produced, and the fact that possibly sixty or so vitrified forts are to be seen in a limited geographical area of Scotland, we do not believe that this type of structure is the result of accidental fires. Careful planning and construction were needed. However, one Scottish archaeologist, Helen Nisbet, believes that the vitrification was not done on purpose by the builders of the forts. In a thorough analysis of rock types used, she reveals that most of the forts were built of stone easily available at the chosen site and not chosen for their property of vitrification.6 The vitrification process itself, even if purposely set, is quite a mystery. A team of chemists on Arthur C. Clarke's Mysterious World subjected rock samples from 11 forts to rigorous chemical analysis, and stated that the temperatures needed to produce the vitrification were so intense--up to 1,100°C--that a simple burning of walls with wood interlaced with stone could not have achieved such temperatures.7 Nevertheless, experiments carried out in the 1930s by the famous archaeologist V. Gordon Childe and his colleague Wallace Thorneycroft showed that forts could be set on fire and generate enough heat to vitrify the stone.8 In 1934, these two designed a test wall that was 12 feet long, six feet wide and six feet high, which was built for them at Plean Colliery in Stirlingshire. They used old fireclay bricks for the faces and pit props as timber, and filled the cavity between the walls with small cubes of basalt rubble. 
They covered the top with turf and then piled about four tons of scrap timber and brushwood against the walls and set fire to them. Because of a snowstorm in progress, a strong wind fanned the blazing mixture of wood and stone so that the inner core did attain some vitrification of the rock. In June 1937, Childe and Thorneycroft duplicated their test vitrification at the ancient fort of Rahoy, in Argyllshire, using rocks found at the site. Their experiments did not resolve any of the questions surrounding vitrified forts, however, because they had only proven that it was theoretically possible to pile enough wood and brush on top of a mixture of wood and stone to vitrify the mass of stone.

One criticism of Childe is that he seems to have used a larger proportion of wood to stone than many historians believe made up the ancient wood and stone fortresses. An important part of Childe's theory was that it was invaders, not the builders, who were assaulting the forts and then setting fire to the walls with piles of brush and wood; however, it is hard to understand why people would have repeatedly built defences that invaders could destroy with fire, when great ramparts of solid stone would have survived unscathed. Critics of the assault theory point out that in order to generate enough heat by a natural fire, the walls would have to have been specially constructed to create the heat necessary. It seems unreasonable to suggest the builders would specifically create forts to be burned or that such a great effort would be made by invaders to create the kind of fire it would take to vitrify the walls--at least with traditional techniques.

One problem with all the many theories is their assumption of a primitive state of culture associated with ancient Scotland. It is astonishing to think of how large and well coordinated the population or army must have been that built and inhabited these ancient structures. Janet and Colin Bord in their book, Mysterious Britain,9 speak of Maiden Castle to give an idea of the vast extent of this marvel of prehistoric engineering.

It covers an area of 120 acres, with an average width of 1,500 feet and length of 3,000 feet. The inner circumference is about 1½ miles round, and it has been estimated...that it would require 250,000 men to defend it! It is hard, therefore, to believe that this construction was intended to be a defensive position. A great puzzle to archaeologists has always been the multiple and labyrinthine east and west entrances at each end of the enclosure. Originally they may have been built as a way for processional entry by people of the Neolithic era. Later, when warriors of the Iron Age were using the site as a fortress, they probably found them useful as a means of confusing the attacking force trying to gain entry. The fact that so many of these "hill-forts" have two entrances--one north of east and the other south of west--also suggests some form of Sun ceremonial.

With 250,000 men defending a fort, we are talking about a huge army in a very organised society. This is not a bunch of fur-wearing Picts with spears defending a fort from marauding bands of hunter-gatherers. The questions remain, though. What huge army might have occupied these cliffside forts by the sea or lake entrances? And what massive maritime power were these people unsuccessfully defending themselves against? The forts on the western coast of Scotland are reminiscent of the mysterious clifftop forts in the Aran Islands on the west coast of Ireland.
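The dispute over whether a timber-and-stone fire can reach vitrification temperatures is, at bottom, an energy-budget question, and a rough calculation helps frame it. The Python sketch below is not from Childress or from Childe's experiments; it uses assumed round figures (roughly 16 MJ/kg for the energy content of dry wood and about 0.9 kJ/kg·K for the specific heat of basalt, with an assumed 10 tonnes of rubble actually heated) to compare the chemical energy in four tonnes of timber with the heat needed to raise that stone to around 1,100°C.

    # Rough energy budget for vitrifying a stone wall with a wood fire.
    # All material constants and the stone mass are assumed round figures.

    WOOD_ENERGY_MJ_PER_KG = 16.0      # assumed energy content of dry timber
    BASALT_HEAT_KJ_PER_KG_K = 0.9     # assumed specific heat of basalt
    AMBIENT_C = 15.0
    TARGET_C = 1100.0                 # vitrification temperature cited in the article

    wood_kg = 4000.0                  # "about four tons of scrap timber and brushwood"
    stone_kg = 10000.0                # assumed mass of rubble actually heated

    energy_available_mj = wood_kg * WOOD_ENERGY_MJ_PER_KG
    energy_needed_mj = stone_kg * BASALT_HEAT_KJ_PER_KG_K * (TARGET_C - AMBIENT_C) / 1000.0

    print(f"Chemical energy in the timber: {energy_available_mj:,.0f} MJ")
    print(f"Heat to bring {stone_kg:,.0f} kg of basalt to {TARGET_C:.0f} C: {energy_needed_mj:,.0f} MJ")
    print(f"Ratio (available / needed): {energy_available_mj / energy_needed_mj:.1f}")

On these assumptions the timber holds several times the heat the stone needs, so the raw energy is available; the hard part, as the critics note, is concentrating and retaining that heat in the wall rather than losing most of it to the open air, which is why the experiments needed wind, a turf covering and a generous wood-to-stone ratio to vitrify only the inner core.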
In these coastal forts we truly have shades of the Atlantis story, with a powerful naval fleet attacking and conquering its neighbours in a terrible war. It has been theorised that the terrible battles of the Atlantis story took place in Wales, Scotland, Ireland and England--however, in the case of the Scottish vitrified forts it looks as if these were the losers of a war, not the victors. And defeat can be seen across the land: the war dykes in Sussex, the vitrified forts of Scotland, the utter collapse and disappearance of the civilisation that built these things. What long-ago Armageddon destroyed ancient Scotland?

In ancient times there was a substance known through writings as Greek fire. This was some sort of ancient napalm bomb that was hurled by catapult and could not be put out. Some forms of Greek fire were even said to burn under water and were therefore used in naval battles. (The actual composition of Greek fire is unknown, but it must have contained substances such as phosphorus, pitch or sulphur, or other flammable chemicals.) Could a form of Greek fire have been responsible for the vitrification? While ancient astronaut theorists may believe that extraterrestrials with their atomic weapons vitrified these walls, it seems more likely that they are the result of a man-made apocalypse of a chemical nature. With siege machines, battleships and Greek fire, did a vast flotilla storm the huge forts and eventually burn them down in a hellish blaze?

The evidence of the vitrified forts is clear: some hugely successful and organised civilisation was living in Scotland, England and Wales in prehistoric times, circa 1000 BC or more, and was building gigantic structures including forts. This apparently was a maritime civilisation that prepared itself for naval warfare as well as other forms of attack.

Vitrified Ruins in France, Turkey and the Middle East

Vitrified ruins can also be found in France, Turkey and some areas of the Middle East. Vitrified forts in France are discussed in the American Journal of Science (vol. 3, no. 22, 1881, pp. 150-151) in an article entitled "On the Substances Obtained from Some 'Forts Vitrifiés' in France", by M. Daubrée. The author mentions several forts in Brittany and northern France whose granite blocks have been vitrified. He cites the "partially fused granitic rocks from the forts of Château-vieux and of Puy de Gaudy (Creuse), also from the neighbourhood of Saint Brieuc (Côtes-du-Nord)".10 Daubrée, understandably, could not readily find an explanation for the vitrification.

Similarly, the ruins of Hattusas in central Turkey, an ancient Hittite city, are partially vitrified. The Hittites are said to be the inventors of the chariot, and horses were of great importance to them. It is on the ancient Hittite stelae that we first see a depiction of the chariot in use. However, it seems unlikely that horsemanship and wheeled chariots were invented by the Hittites; it is highly likely that chariots were in use in ancient China at the same time. The Hittites were also linked to the world of ancient India. Proto-Indic writing has been found at Hattusas, and scholars now admit that the civilisation of India, as the ancient Indian texts like the Ramayana have said, goes back many millennia. In his 1965 book, The Bible as History,11 German historian Werner Keller cites some of the mysteries concerning the Hittites.
According to Keller, the Hittites are first mentioned in the Bible (in Genesis 23) in connection with the biblical patriarch Abraham who acquired from the Hittites a burial place in Hebron for his wife Sarah. Conservative classical scholar Keller is confused by this, because the time period of Abraham was circa 2000-1800 BC, while the Hittites are traditionally said to have appeared in the 16th century BC. Even more confusing to Keller is the biblical statement (in Numbers 13:29-30) that the Hittites were the founders of Jerusalem. This is a fascinating statement, as it would mean that the Hittites also occupied Ba'albek, which lies between their realm and Jerusalem. The Temple Mount at Jerusalem is built on a foundation of huge ashlars, as is Ba'albek. The Hittites definitely used the gigantic megalithic construction known as cyclopean--huge, odd-shaped polygonal blocks, perfectly fitted together. The massive walls and gates of Hattusas are eerily similar in construction to those in the high Andes and other megalithic sites around the world. The difference at Hattusas is that parts of the city are vitrified, and the walls of rock have been partly melted.

If the Hittites were the builders of Jerusalem, it would mean that the ancient Hittite Empire existed for several thousand years and had frontiers with Egypt. Indeed, the Hittite hieroglyphic script is undeniably similar to Egyptian hieroglyphs, probably more so than any other language. Just as Egypt goes back many thousands of years BC and is ultimately connected to Atlantis, so does the ancient Hittite Empire. Like the Egyptians, the Hittites carved massive granite sphinxes, built on a cyclopean scale and worshipped the Sun. The Hittites also used the common motif of a winged disc for their Sun god, just as the Egyptians did. The Hittites were well known in the ancient world because they were the main manufacturers of iron and bronze goods. The Hittites were metallurgists and seafarers. Their winged discs may in fact have been representations of vimanas--flying machines.

Some of the ancient ziggurats of Iran and Iraq also contain vitrified material, sometimes thought by archaeologists to have been caused by Greek fire. For instance, the vitrified remains of the ziggurat at Birs Nimrod (Borsippa), south of Hillah, were once confused with the Tower of Babel. The ruins are crowned by a mass of vitrified brickwork--actual clay bricks fused together by intense heat. This may be due to the horrific ancient wars described in the Ramayana and Mahabharata, although early archaeologists attributed the effect to lightning.

Greek Fire, Plasma Guns and Atomic Warfare

If one were to believe the great Indian epic of the Mahabharata, fantastic battles were fought in the past with airships, particle beams, chemical warfare and presumably atomic weapons. Just as battles in the 20th century have been fought with incredibly devastating weapons, it may well be that battles in the latter days of Atlantis were fought with highly sophisticated, high-tech weapons. The mysterious Greek fire was a "chemical fireball". Incendiary mixtures go back at least to the 5th century BC, when Aineias the Tactician wrote a book called On the Defence of Fortified Positions. Said he:12

And fire itself, which is to be powerful and quite inextinguishable, is to be prepared as follows. Pitch, sulphur, tow, granulated frankincense, and pine sawdust in sacks you should ignite if you wish to set any of the enemy's works on fire.
L. Sprague de Camp mentions in his book, The Ancient Engineers,13 that at some point it was found that petroleum, which seeps out of the ground in Iraq and elsewhere, made an ideal base for incendiary mixtures because it could be squirted from syringes of the sort then used in fighting fires. Other substances were added to it, such as sulphur, olive oil, rosin, bitumen, salt and quicklime. Some of these additives may have helped--sulphur at least made a fine stench--but others did not, although it was thought that they did. Salt, for instance, may have been added because the sodium in it gave the flame a bright orange colour. The ancients, supposing that a brighter flame was necessarily a hotter flame, mistakenly believed that salt made the fire burn more fiercely. Such mixtures were put in thin wooden casks and thrown from catapults at hostile ships and at wooden siege engines and defence works.

According to de Camp, in AD 673 the architect Kallinikos fled ahead of Arab invaders from Heliopolis-Ba'albek to Constantinople. There he revealed to Emperor Constantine IV an improved formula for a liquid incendiary. This could not only be squirted at the foe but could also be used with great effect at sea, because it caught fire when it touched the water and floated, flaming on the waves. De Camp says that Byzantine galleys were armed with a flame-throwing apparatus in the bow, consisting of a tank of this mixture, a pump and a nozzle. With the help of this compound, the Byzantines broke the Arab sieges of AD 674-76 and AD 715-18, and also beat off the Russian attacks of AD 941 and 1043. The incendiary liquid wrought immense havoc; of 800 Arab ships which attacked Constantinople in AD 716, only a handful returned home. The formula for the wet version of Greek fire has never been discovered. Says de Camp:

By careful security precautions, the Byzantine Emperors succeeded in keeping the secret of this substance, called "wet fire" or "wild fire", so dark that it never did become generally known. When asked about it, they blandly replied that an angel had revealed the formula to the first Constantine. We can, therefore, only guess the nature of the mixture. According to one disputed theory, wet fire was petroleum with an admixture of calcium phosphide, which can be made from lime, bones and urine. Perhaps Kallinikos stumbled across this substance in the course of alchemical experiments.

Vitrification of brick, rock and sand may have been caused by any number of high-tech means. New Zealand author Robin Collyns suggests in his book, Ancient Astronauts: A Time Reversal?,14 that there are five methods by which the ancients or "ancient astronauts" might have waged war on various societies on planet Earth. He outlines how these methods are again on the rise in modern society. The five methods are: plasma guns, fusion torches, holes punched in the ozone layer, manipulation of weather processes and the release of immense energy, such as with an atomic blast. As Collyns's book was published in Britain in 1976, the mentions of holes in the ozone layer and weather warfare seem strangely prophetic.

Explaining the plasma gun, Collyns says:

The plasma gun has already been developed experimentally for peaceful purposes: Ukrainian scientists from the Geotechnical Mechanics Institute have experimentally drilled tunnels in iron ore mines by using a plasmatron, i.e., a plasma gas jet which delivers a temperature of 6,000°C. A plasma, in this case, is an electrified gas.
Electrified gases are also featured in the Vymaanika-Shaastra,15 the ancient book from India on vimanas, which cryptically talks of using for fuel the liquid metal mercury, which could be a plasma if electrified.

Collyns goes on to describe a fusion torch:

This is still another possible method of warfare used by spacemen, or ancient advanced civilisations on Earth. Perhaps the solar mirrors of antiquity really were fusion torches? The fusion torch is basically a further development of the plasma jet. In 1970 a theory to develop a fusion torch was presented at the New York aerospace science meeting by Drs Bernard J. Eastlund and William C. Gough. The basic idea is to generate a fantastic heat of at least fifty million degrees Celsius which could be contained and controlled. That is, the energy released could be used for many peaceful applications with zero radioactive waste products to avoid contaminating the environment, or zero production of radioactive elements which would be highly dangerous, such as plutonium which is the most deadly substance known to man. Thermonuclear fusion occurs naturally in stellar processes, and unnaturally in man-made H-bomb explosions. The fusion of a deuterium nucleus (a heavy hydrogen isotope which can be easily extracted from sea water) with another deuterium nucleus, or with tritium (another isotope of hydrogen) or with helium, could be used. The actual fusion torch would be an ionised plasma jet which would vaporise anything and everything that the jet was directed at--if...used for harmful purposes--while for peaceful applications, one use of the torch could be to reclaim basic elements from junk metals. University of Texas scientists announced in 1974 that they had actually developed the first experimental fusion torch which gave an incredible heat output of ninety-three million degrees Celsius. This is five times the previous hottest temperature for a contained gas and is twice the minimum heat needed for fusion, but it was held only for one fifty-millionth of a second instead of the one full second which would be required.

It is curious to note here that Dr Bernard Eastlund is the patent holder of another unusual device--one that is associated with the High-frequency Active Auroral Research Program (HAARP), based at Gakona, Alaska. HAARP is allegedly linked to weather manipulation--one of the ways in which Collyns thinks the ancients waged warfare. As far as holes in the ozone layer and weather manipulation go, Collyns says:

Soviet scientists have discussed and proposed at the United Nations a ban on developing new warfare ideas such as creating holes or "windows" in the ozone layer to bombard specific areas of the Earth with increased natural ultra-violet radiation, which would kill all life-forms and turn the land into barren desert. Other ideas discussed at the meeting were the use of "infrasound" to demolish ships by creating acoustic fields on the sea, and hurling a huge chunk of rock into the sea with a cheap atomic device. The resultant tidal wave could demolish the coastal fringe of a country. Other tidal waves could be created by detonating nuclear devices at the frozen poles. Controlled floods, hurricanes, earthquakes and droughts directed towards specific targets and cities are other possibilities. Finally, although not a new method of warfare, incendiary weapons are now being developed to the point where "chemical fireballs" will be produced which radiate thermal energy similar to that of an atomic bomb.
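For readers who want the numbers behind the fusion-torch quotation, the reactions Collyns alludes to are the standard deuterium fusion channels. The values below are well-established textbook nuclear physics added here for reference, not figures taken from his book:

    \begin{align*}
    \mathrm{D} + \mathrm{T} &\rightarrow {}^{4}\mathrm{He}\ (3.5\ \mathrm{MeV}) + n\ (14.1\ \mathrm{MeV}) && Q = 17.6\ \mathrm{MeV}\\
    \mathrm{D} + \mathrm{D} &\rightarrow {}^{3}\mathrm{He}\ (0.82\ \mathrm{MeV}) + n\ (2.45\ \mathrm{MeV}) && Q \approx 3.3\ \mathrm{MeV}\\
    \mathrm{D} + \mathrm{D} &\rightarrow \mathrm{T}\ (1.01\ \mathrm{MeV}) + p\ (3.02\ \mathrm{MeV}) && Q \approx 4.0\ \mathrm{MeV}\\
    \mathrm{D} + {}^{3}\mathrm{He} &\rightarrow {}^{4}\mathrm{He}\ (3.6\ \mathrm{MeV}) + p\ (14.7\ \mathrm{MeV}) && Q \approx 18.3\ \mathrm{MeV}
    \end{align*}

The "fifty million" and "ninety-three million" degree figures in the quotation correspond to plasma thermal energies of a few keV (1 keV is roughly 1.16 × 10^7 K), which is indeed the regime where the deuterium-tritium reaction rate becomes appreciable.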
Vitrified Ruins in California's Death Valley: Evidence of Atomic War?

In Secrets of the Lost Races,16 Rene Noorbergen discusses the evidence for a cataclysmic war in the remote past that included the use of airships and weapons that vitrified stone cities. The most numerous vitrified remains in the New World are located in the western United States. In 1850 the American explorer Captain Ives William Walker was the first to view some of these ruins, situated in Death Valley. He discovered a city about a mile long, with the lines of the streets and the positions of the buildings still visible. At the center he found a huge rock, between 20 and 30 feet high, with the remains of an enormous structure atop it. The southern side of both the rock and the building was melted and vitrified. Walker assumed that a volcano had been responsible for this phenomenon, but there is no volcano in the area. In addition, tectonic heat could not have caused such a liquefaction of the rock surface.

An associate of Captain Walker who followed up his initial exploration commented: "The whole region between the rivers Gila and San Juan is covered with remains. The ruins of cities are to be found there which must be most extensive, and they are burnt out and vitrified in part, full of fused stones and craters caused by fires which were hot enough to liquefy rock or metal. There are paving stones and houses torn with monstrous cracks... [as though they had] been attacked by a giant's fire-plough."

These vitrified ruins in Death Valley sound fascinating--but do they really exist? There certainly is evidence of ancient civilisations in the area. In Titus Canyon, petroglyphs and inscriptions have been scratched into the walls by unknown prehistoric hands. Some experts think the graffiti might have been made by people who lived here long before the Indians we know of, because extant Indians know nothing of the glyphs and, indeed, regard them with superstitious awe. Says Jim Brandon in Weird America:17

Piute legends tell of a city beneath Death Valley that they call Shin-au-av. Tom Wilson, an Indian guide in the 1920s, claimed that his grandfather had rediscovered the place by wandering into a miles-long labyrinth of caves beneath the valley floor. Eventually the Indian came to an underworld city where the people spoke an incomprehensible language and wore clothing made of leather. Wilson told this story after a prospector named White claimed he had fallen through the floor of an abandoned mine at Wingate Pass and into an unknown tunnel. White followed this into a series of rooms, where he found hundreds of leather-clad humanoid mummies. Gold bars were stacked like bricks and piled in bins. White claimed he had explored the caverns on three occasions. On one, his wife accompanied him; and on another, his partner, Fred Thomason. However, none of them [was] able to relocate the opening to the cavern when they tried to take a group of archaeologists on a tour of the place.

To be continued next issue...
CERN congratulates the two laureates of the 2015 Physics Nobel Prize: Takaaki Kajita, from the Super-Kamiokande Collaboration in Japan, and Arthur B. McDonald, from the Sudbury Neutrino Observatory (SNO) in Canada. They were awarded the prize for "the discovery of neutrino oscillations, which shows that neutrinos have mass". The two experiments independently demonstrated that neutrinos can change or "oscillate" from one type to another. This discovery at the turn of the millennium, more than 40 years after the prediction of the phenomenon by Italian physicist Bruno Pontecorvo, has had a profound impact on our understanding of the Universe.

Colloquium on the 2013 Nobel Prize in Physics Awarded to Francois Englert and Peter Higgs
Philip D. Mannheim

In 2013 the Nobel Prize in Physics was awarded to Francois Englert and Peter Higgs for their development in 1964 of the mass generation mechanism (the Higgs mechanism) in local gauge theories. This mechanism requires the existence of a massive scalar particle, the Higgs boson, and in 2012 the Higgs boson was finally observed at the Large Hadron Collider after an almost half a century search. In this talk we review the work of these Nobel recipients and discuss its implications. Read more at http://arxiv.org/pdf/1506.04120v1.pdf

A Nobel laureate and a blackboard at CERN are all you need to explain the fundamental physics of the universe. At least, that's what François Englert convinced us on his visit to CERN last week. Englert shared the 2013 Nobel prize in physics with Peter Higgs "for the theoretical discovery of a mechanism that contributes to our understanding of the origin of mass of subatomic particles". In the video below, he explains how he and Higgs manipulated equations containing mathematical constructs called scalar fields to predict the existence of the Brout-Englert-Higgs field. (Video caption: Nobel laureate François Englert explains the Brout-Englert-Higgs mechanism that gives particles mass, with the help of a blackboard. Video: CERN)

According to Englert, the equation describing this mechanism is built in two parts. One part consists of scalar fields; the other consists of constructs called gauge fields. Englert explains that a big problem in particle physics in the 1960s was to find a gauge field that had mass. Solving that problem – working out how a gauge field could have mass – would help to explain other problems in physics, such as how to mathematically describe short-range interactions inside the nuclei of atoms. But Englert says that you cannot easily just add mass to a gauge field "off-hand". He needed another theoretical approach. The key was to add a new part – a new scalar field – to the equation describing the mass mechanism. Part of this new scalar field could be mathematically simplified. What came out of the algebraic manipulation was a term that gave rise to "a condensate spread out all over the universe". Then, the interaction between the condensate and another part of the equation could be generalized, says Englert, to give mass to elementary particles. Easy peasy!

"I know this is extremely abstract," says a modest Englert of his explanation. "But if I have two minutes [to explain it], I can hardly do more!"
François Englert and Peter W. Higgs are jointly awarded the Nobel Prize in Physics 2013 "for the theoretical discovery of a mechanism that contributes to our understanding of the origin of mass of subatomic particles, and which recently was confirmed through the discovery of the predicted fundamental particle, by the ATLAS and CMS experiments at CERN's Large Hadron Collider".

François Englert and Peter W. Higgs are jointly awarded the Nobel Prize in Physics 2013 for the theory of how particles acquire mass. In 1964, they proposed the theory independently of each other (Englert together with his now deceased colleague Robert Brout). In 2012, their ideas were confirmed by the discovery of a so-called Higgs particle at the CERN laboratory outside Geneva in Switzerland. The awarded mechanism is a central part of the Standard Model of particle physics that describes how the world is constructed. According to the Standard Model, everything, from flowers and people to stars and planets, consists of just a few building blocks: matter particles. These particles are governed by forces mediated by force particles that make sure everything works as it should.

The entire Standard Model also rests on the existence of a special kind of particle: the Higgs particle. It is connected to an invisible field that fills up all space. Even when our universe seems empty, this field is there. Had it not been there, electrons and quarks would be massless just like photons, the light particles. And like photons they would, just as Einstein's theory predicts, rush through space at the speed of light, without any possibility to get caught in atoms or molecules. Nothing of what we know, not even we, would exist. Both François Englert and Peter Higgs were young scientists when they, in 1964, independently of each other put forward a theory that rescued the Standard Model from collapse. Almost half a century later, on Wednesday 4 July 2012, they were both in the audience at the European Laboratory for Particle Physics, CERN, outside Geneva, when the discovery of a Higgs particle that finally confirmed the theory was announced to the world.

The model that created order

The idea that the world can be explained in terms of just a few building blocks is old. Already in 400 BC, the philosopher Democritus postulated that everything consists of atoms — átomos is Greek for indivisible. Today we know that atoms are not indivisible. They consist of electrons that orbit an atomic nucleus made up of neutrons and protons. And neutrons and protons, in turn, consist of smaller particles called quarks. Actually, only electrons and quarks are indivisible according to the Standard Model. The atomic nucleus consists of two kinds of quarks, up quarks and down quarks. So in fact, three elementary particles are needed for all matter to exist: electrons, up quarks and down quarks. But during the 1950s and 1960s, new particles were unexpectedly observed in both cosmic radiation and at newly constructed accelerators, so the Standard Model had to include these new siblings of electrons and quarks.

Besides matter particles, there are also force particles for each of nature's four forces — gravitation, electromagnetism, the weak force and the strong force. Gravitation and electromagnetism are the most well-known; they attract or repel, and we can see their effects with our own eyes.
The strong force acts upon quarks and holds protons and neutrons together in the nucleus, whereas the weak force is responsible for radioactive decay, which is necessary, for instance, for nuclear processes inside the Sun. The Standard Model of particle physics unites the fundamental building blocks of nature and three of the four forces known to us (the fourth, gravitation, remains outside the model). For a long time it was an enigma how these forces actually work. For instance, how does the piece of metal that is attracted to the magnet know that the magnet is lying there, a bit further away? And how does the Moon feel the gravity of Earth?

Invisible fields fill space

The explanation offered by physics is that space is filled with many invisible fields. The gravitational field, the electromagnetic field, the quark field and all the other fields fill space, or rather, the four-dimensional space-time, an abstract space where the theory plays out. The Standard Model is a quantum field theory in which fields and particles are the essential building blocks of the universe. In quantum physics, everything is seen as a collection of vibrations in quantum fields. These vibrations are carried through the field in small packages, quanta, which appear to us as particles. Two kinds of fields exist: matter fields with matter particles, and force fields with force particles — the mediators of forces. The Higgs particle, too, is a vibration of its field — often referred to as the Higgs field. Without this field the Standard Model would collapse like a house of cards, because quantum field theory brings infinities that have to be reined in and symmetries that cannot be seen.

It was not until François Englert with Robert Brout, and Peter Higgs, and later on several others, showed that the Higgs field can break the symmetry of the Standard Model without destroying the theory that the model got accepted. This is because the Standard Model would only work if particles did not have mass. As for the electromagnetic force, with its massless photons as mediators, there was no problem. The weak force, however, is mediated by three massive particles: two electrically charged W particles and one Z particle. They did not sit well with the light-footed photon. How could the electroweak force, which unifies electromagnetic and weak forces, come about? The Standard Model was threatened. This is where Englert, Brout and Higgs entered the stage with the ingenious mechanism for particles to acquire mass that managed to rescue the Standard Model.

The ghost-like Higgs field

The Higgs field is not like other fields in physics. All other fields vary in strength and become zero at their lowest energy level. Not the Higgs field. Even if space were to be emptied completely, it would still be filled by a ghost-like field that refuses to shut down: the Higgs field. We do not notice it; the Higgs field is like air to us, like water to fish. But without it we would not exist, because particles acquire mass only in contact with the Higgs field. Particles that do not pay attention to the Higgs field do not acquire mass, those that interact weakly become light, and those that interact intensely become heavy. For example, electrons, which acquire mass from the field, play a crucial role in the creation and holding together of atoms and molecules. If the Higgs field suddenly disappeared, all matter would collapse as the suddenly massless electrons dispersed at the speed of light. So what makes the Higgs field so special?
It breaks the intrinsic symmetry of the world. In nature, symmetry abounds; faces are regularly shaped, flowers and snowflakes exhibit various kinds of geometric symmetries. Physics unveils other kinds of symmetries that describe our world, albeit on a deeper level. One such, relatively simple, symmetry stipulates that it does not matter for the results if a laboratory experiment is carried out in Stockholm or in Paris. Neither does it matter at what time the experiment is carried out. Einstein's special theory of relativity deals with symmetries in space and time, and has become a model for many other theories, such as the Standard Model of particle physics. The equations of the Standard Model are symmetric; in the same way that a ball looks the same from whatever angle you look at it, the equations of the Standard Model remain unchanged even if the perspective that defines them is changed.

The principles of symmetry also yield other, somewhat unexpected, results. Already in 1918, the German mathematician Emmy Noether could show that the conservation laws of physics, such as the laws of conservation of energy and conservation of electrical charge, also originate in symmetry. Symmetry, however, dictates certain requirements to be fulfilled. A ball has to be perfectly round; the tiniest hump will break the symmetry. For equations other criteria apply. And one of the symmetries of the Standard Model prohibits particles from having mass. Now, this is apparently not the case in our world, so the particles must have acquired their mass from somewhere. This is where the now-awarded mechanism provided a way for symmetry to both exist and simultaneously be hidden from view.

The symmetry is hidden but is still there

Our universe was probably born symmetrical. At the time of the Big Bang, all particles were massless and all forces were united in a single primordial force. This original order does not exist anymore — its symmetry has been hidden from us. Something happened just 10⁻¹¹ seconds after the Big Bang. The Higgs field lost its original equilibrium. How did that happen? It all began symmetrically. This state can be described as the position of a ball in the middle of a round bowl, in its lowest energy state. With a push the ball starts rolling, but after a while it returns down to the lowest point. However, if a hump arises at the centre of the bowl, which now looks more like a Mexican hat, the position at the middle will still be symmetrical but has also become unstable. The ball rolls downhill in any direction. The hat is still symmetrical, but once the ball has rolled down, its position away from the centre hides the symmetry.

In a similar manner the Higgs field broke its symmetry and found a stable energy level in vacuum away from the symmetrical zero position. This spontaneous symmetry breaking is also referred to as the Higgs field's phase transition; it is like when water freezes to ice. In order for the phase transition to occur, four particles were required but only one, the Higgs particle, survived. The other three were consumed by the weak force mediators, two electrically charged W particles and one Z particle, which thereby got their mass. In that way the symmetry of the electroweak force in the Standard Model was saved — the symmetry between the three heavy particles of the weak force and the massless photon of the electromagnetic force remains, only hidden from view.
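The ball-in-a-Mexican-hat picture has a compact mathematical form. The following is the standard textbook potential for the Higgs field, added here only as an illustration; it does not appear in the Nobel press text itself, and the numerical values are the usual Standard Model ones:

    V(\phi) = \mu^2\,\phi^\dagger\phi + \lambda\,(\phi^\dagger\phi)^2, \qquad \mu^2 < 0,\ \lambda > 0

    |\phi|_{\min} = \frac{v}{\sqrt{2}}, \qquad v = \sqrt{\frac{-\mu^2}{\lambda}} \approx 246\ \text{GeV}

    m_W = \tfrac{1}{2}\, g\, v, \qquad m_Z = \tfrac{1}{2}\sqrt{g^2 + g'^2}\; v, \qquad m_H = \sqrt{2\lambda}\, v \approx 125\ \text{GeV}

With μ² negative, the symmetric point φ = 0 sits on top of the hump, so the field settles at |φ| = v/√2. The W and Z masses then follow from the gauge couplings g and g′ multiplied by this vacuum value, which is the precise version of the three "consumed" degrees of freedom described above.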
Extreme machines for extreme physics

The Nobel Laureates probably did not imagine that they would get to see the theory confirmed in their lifetime. It took an enormous effort by physicists from all over the world. For a long time two laboratories, Fermilab outside Chicago, USA, and CERN on the Franco-Swiss border, competed in trying to discover the Higgs particle. But when Fermilab's Tevatron accelerator was closed down a couple of years ago, CERN became the only place in the world where the hunt for the Higgs particle would continue. CERN was established in 1954, in an attempt to reconstruct European research, as well as relations between European countries, after the Second World War. Its membership currently comprises twenty states, and about a hundred nations from all over the world collaborate on the projects.

CERN's grandest achievement, the particle collider LHC (Large Hadron Collider), is probably the largest and the most complex machine ever constructed by humans. Two research groups of some 3,000 scientists chase particles with huge detectors — ATLAS and CMS. The detectors are located 100 metres below ground and can observe 40 million particle collisions per second. This is how often the particles can collide when injected in opposite directions into the circular LHC tunnel, 27 kilometres long. Protons are injected into the LHC every ten hours, one beam in each direction. A hundred thousand billion protons are lumped together and compressed into an ultra-thin beam — no easy endeavour, since protons, with their positive electrical charge, tend to repel one another. They move at 99.99999 per cent of the speed of light and collide with an energy of approximately 4 TeV each and 8 TeV combined (one teraelectronvolt = a thousand billion electronvolts). One TeV may not be that much energy; it more or less equals that of a flying mosquito. But when the energy is packed into a single proton, and you get 500 trillion such protons rushing around the accelerator, the energy of the beam equals that of a train at full speed. In 2015 the energy in the LHC will be almost doubled.

A puzzle inside the puzzle

Particle experiments are sometimes compared to the act of smashing two Swiss watches together in order to examine how they are constructed. But it is actually much more difficult than that, because the particles scientists look for are entirely new — they are created from the energy released in the collision. According to Einstein's well-known formula E = mc², mass is a kind of energy. And it is the magic of this equation that makes it possible, even for massless particles, to create something new when they collide; like when two photons collide and create an electron and its antiparticle, the positron, or when a Higgs particle is created in the collision of two gluons, if the energy is high enough. The protons are like small bags filled with particles — quarks, antiquarks and gluons. The majority of them pass one another without much ado; on average, each time two particle swarms collide only twenty full frontal collisions occur. Less than one collision in a billion might be worth following through. This may not sound like much, but each such collision results in a sparkling explosion of about a thousand particles. At 125 GeV, the Higgs particle turned out to be over a hundred times heavier than a proton, and this is one of the reasons why it was so difficult to produce. However, the experiment is far from finished.
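The mosquito and train comparisons can be checked with a few lines of arithmetic. The Python sketch below uses the figures quoted above (4 TeV per proton, 500 trillion protons per beam); the mosquito mass and speed and the train mass and speed are assumed round numbers chosen only to reproduce the comparison, not values from the source.

    # Back-of-the-envelope check of the LHC energy comparisons quoted above.
    EV_TO_JOULE = 1.602e-19

    protons_per_beam = 500e12          # "500 trillion such protons"
    energy_per_proton_tev = 4.0        # "approximately 4 TeV each"
    energy_per_proton_j = energy_per_proton_tev * 1e12 * EV_TO_JOULE

    beam_energy_j = protons_per_beam * energy_per_proton_j
    print(f"One proton:  {energy_per_proton_j:.2e} J")     # about 6.4e-7 J
    print(f"Whole beam:  {beam_energy_j / 1e6:.0f} MJ")    # about 320 MJ

    # Assumed comparison objects (not from the source text):
    mosquito_j = 0.5 * 2.5e-6 * 0.35**2          # 2.5 mg mosquito flying at 0.35 m/s
    train_j = 0.5 * 400e3 * (140 / 3.6)**2       # 400-tonne train at 140 km/h
    print(f"Flying mosquito: {mosquito_j:.1e} J (roughly one TeV)")
    print(f"Train at speed:  {train_j / 1e6:.0f} MJ")

On these assumptions a single 4 TeV proton carries a bit more energy than the mosquito, and the full beam of 500 trillion protons carries a few hundred megajoules, which is indeed the kinetic energy of a heavy train at full speed.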
The scientists at CERN hope to bring further groundbreaking discoveries in the years to come. Even though it is a great achievement to have found the Higgs particle — the missing piece in the Standard Model puzzle — the Standard Model is not the final piece in the cosmic puzzle. One of the reasons for this is that the Standard Model treats certain particles, neutrinos, as being virtually massless, whereas recent studies show that they actually do have mass. Another reason is that the model only describes visible matter, which accounts for only one fifth of all matter in the universe. The rest is dark matter of an unknown kind. It is not immediately apparent to us, but can be observed by its gravitational pull that keeps galaxies together and prevents them from being torn apart. In all other respects, dark matter avoids getting involved with visible matter. Mind you, the Higgs particle is special; maybe it could manage to establish contact with the enigmatic darkness. Scientists hope to be able to catch at least a glimpse of dark matter as they continue the chase of unknown particles in the LHC in the coming decades.

By ADAM FRANK

This summer, physicists celebrated a triumph that many consider fundamental to our understanding of the physical world: the discovery, after a multibillion-dollar effort, of the Higgs boson. Given its importance, many of us in the physics community expected the event to earn this year's Nobel Prize in Physics. Instead, the award went to achievements in a field far less well known and vastly less expensive: quantum information. It may not catch as many headlines as the hunt for elusive particles, but the field of quantum information may soon answer questions even more fundamental — and upsetting — than the ones that drove the search for the Higgs. It could well usher in a radical new era of technology, one that makes today's fastest computers look like hand-cranked adding machines.

The basis for both the work behind the Higgs search and quantum information theory is quantum physics, the most accurate and powerful theory in all of science. With it we created remarkable technologies like the transistor and the laser, which, in time, were transformed into devices — computers and iPhones — that reshaped human culture. But the very usefulness of quantum physics masked a disturbing dissonance at its core. There are mysteries — summed up neatly in Werner Heisenberg's famous adage "atoms are not things" — lurking at the heart of quantum physics suggesting that our everyday assumptions about reality are no more than illusions. Take the "principle of superposition," which holds that things at the subatomic level can be literally two places at once. Worse, it means they can be two things at once. This superposition animates the famous parable of Schrödinger's cat, whereby a wee kitty is left both living and dead at the same time because its fate depends on a superposed quantum particle.

For decades such mysteries were debated but never pushed toward resolution, in part because no resolution seemed possible and, in part, because useful work could go on without resolving them (an attitude sometimes called "shut up and calculate"). Scientists could attract money and press with ever larger supercolliders while ignoring such pesky questions. But as this year's Nobel recognizes, that's starting to change. Increasingly clever experiments are exploiting advances in cheap, high-precision lasers and atomic-scale transistors.
Quantum information studies often require nothing more than some equipment on a table and a few graduate students. In this way, quantum information’s progress has come not by bludgeoning nature into submission but by subtly tricking it to step into the light. Take the superposition debate. One camp claims that a deeper level of reality lies hidden beneath all the quantum weirdness. Once the so-called hidden variables controlling reality are exposed, they say, the strangeness of superposition will evaporate. Another camp claims that superposition shows us that potential realities matter just as much as the single, fully manifested one we experience. But what collapses the potential electrons in their two locations into the one electron we actually see? According to this interpretation, it is the very act of looking; the measurement process collapses an ethereal world of potentials into the one real world we experience. And a third major camp argues that particles can be two places at once only because the universe itself splits into parallel realities at the moment of measurement, one universe for each particle location — and thus an infinite number of ever splitting parallel versions of the universe (and us) are all evolving alongside one another. These fundamental questions might have lived forever at the intersection of physics and philosophy. Then, in the 1980s, a steady advance of low-cost, high-precision lasers and other “quantum optical” technologies began to appear. With these new devices, researchers, including this year’s Nobel laureates, David J. Wineland and Serge Haroche, could trap and subtly manipulate individual atoms or light particles. Such exquisite control of the nano-world allowed them to design subtle experiments probing the meaning of quantum weirdness. Soon at least one interpretation, the most common sense version of hidden variables, was completely ruled out. At the same time new and even more exciting possibilities opened up as scientists began thinking of quantum physics in terms of information, rather than just matter — in other words, asking if physics fundamentally tells us more about our interaction with the world (i.e., our information) than the nature of the world by itself (i.e., matter). And so the field of quantum information theory was born, with very real new possibilities in the very real world of technology. What does this all mean in practice? Take one area where quantum information theory holds promise, that of quantum computing. Classical computers use “bits” of information that can be either 0 or 1. But quantum-information technologies let scientists consider “qubits,” quantum bits of information that are both 0 and 1 at the same time. Logic circuits, made of qubits directly harnessing the weirdness of superpositions, allow a quantum computer to calculate vastly faster than anything existing today. A quantum machine using no more than 300 qubits would be a million, trillion, trillion, trillion times faster than the most modern supercomputer. Going even further is the seemingly science-fiction possibility of “quantum teleportation.” Based on experiments going on today with simple quantum systems, it is at least a theoretical possibility that one day objects could be reconstituted — beamed — across a space without ever crossing the distance. When a revolution in science yields powerful new technologies, its effect on human culture is multiplied exponentially. 
Think of the relation between thermodynamics, steam engines and the onset of the industrial era. Quantum information could well be the thermodynamics of the next technological revolution. The discovery of the Higgs — the confirming stroke of a grand, overarching theory of matter — will, eventually, yield a Nobel Prize, and when it comes the award will be justly deserved. But the discovery’s impact on human society will most likely be dwarfed by the consequences of quantum information theory. The steady advances at its frontiers are turning us into safecrackers, nimbly manipulating the tumblers guarding the deepest secrets of nature and our own place within it. What we find when the locks snap open on the quantum world will surely be something far richer and far greater than our imaginations today can conceive. Read more: www.nytimes.com
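The 300-qubit comparison above is easy to make concrete. The following sketch is illustrative only: it is not from the article, and the helper function, library choice and qubit counts are assumptions of mine. It simulates a toy two-qubit register with NumPy, puts one qubit into a superposition of 0 and 1, and then prints how many classical amplitudes a 300-qubit register would require.

    # Illustrative sketch (not from the article): why qubit counts matter.
    # An n-qubit register is described by 2**n complex amplitudes, which is why
    # roughly 300 qubits already outrun any conceivable classical bookkeeping.
    import numpy as np

    def apply_hadamard(state, target, n_qubits):
        """Apply a Hadamard gate to one qubit of an n-qubit state vector."""
        h = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
        op = np.array([[1]], dtype=complex)
        for q in range(n_qubits):
            op = np.kron(op, h if q == target else np.eye(2, dtype=complex))
        return op @ state

    n = 2
    state = np.zeros(2 ** n, dtype=complex)
    state[0] = 1.0                          # start in |00>
    state = apply_hadamard(state, 0, n)     # first qubit becomes "0 and 1 at once"
    print(np.round(np.abs(state) ** 2, 3))  # -> [0.5 0.  0.5 0. ]

    # Classical storage grows as 2**n; for the 300 qubits mentioned above:
    print(2 ** 300)  # about 2 x 10**90 amplitudes, more than atoms in the observable universe

A real quantum computer never stores those amplitudes explicitly, which is exactly the point of the comparison.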
Health Indicator Report of Melanoma of the Skin Incidence

According to the American Cancer Society, melanoma is much less common than other skin cancers such as basal cell and squamous cell, but it is far more dangerous. Risk factors that can be controlled are exposure to sunlight and UV radiation during work and play. A history of sunburns early in life increases one's risk for melanoma. Risk for melanoma also increases with the severity of the sunburn or blisters. Lifetime sun exposure, even if sunburn does not occur, is another risk factor for melanoma. Another modifiable risk factor is location. People who live in certain areas of the U.S. experience higher rates of melanoma. These are areas with a high elevation, a warmer climate, and where sunlight can be reflected by sand, water, snow, and ice. Risk for melanoma is greatly increased by tanning, both outside with oils and by using sunlamps and tanning booths. Even people who tan well without burning are at risk for melanoma. Tan skin is evidence of skin damaged by UV radiation. Health care providers strongly encourage people, especially young people, to avoid tanning beds, booths, and sunlamps. The risk of melanoma is greatly increased by using these artificial sources of UV radiation before age 30.

Melanoma of the Skin Incidence by Local Health District, Utah, 2012-2014

Notes: ICD-O3 Site C440-C449 and Histology 8720-8790: Melanoma of the Skin, which corresponds to ICD-10 code C43. Rates are age-adjusted to the 2000 U.S. population. *Use caution in interpreting; the estimate has a relative standard error greater than 30% and does not meet UDOH standards for reliability. For more information, please go to http://ibis.health.utah.gov/pdf/resource/DataSuppression.pdf. Prior to 2015 San Juan County was part of the Southeast Local Health District. In 2015 the San Juan County Local Health District was formed. Data reported are for all years using the current boundaries.

- The cancer data were provided by the Utah Cancer Registry, which is funded by contract HHSN2612013000171 from the National Cancer Institute's SEER Program with additional support from the Utah Department of Health and the University of Utah.
- Population Estimates: National Center for Health Statistics (NCHS) through a collaborative agreement with the U.S. Census Bureau, IBIS Version 2015.

Definition: The rate of melanoma incidence in Utah per 100,000 population (ICD-O3 Site C440-C449 and Histology 8720-8790: Melanoma of the Skin, which corresponds to ICD-10 code C43).

Numerator: The number of new melanoma cases among Utahns for a specific time period (ICD-O3 Site C440-C449 and Histology 8720-8790: Melanoma of the Skin, which corresponds to ICD-10 code C43).

Denominator: The total population of Utah for a specific time period.

How Are We Doing? Utah's age-adjusted rate for melanoma incidence has been rising steadily, from 20.5 per 100,000 population in 2000 to 42.4 per 100,000 population in 2014. Among local health districts during 2012-2014, the highest age-adjusted melanoma of the skin incidence rate was in Summit County, with an age-adjusted rate of 76.8 per 100,000 population. Summit County health district was also the only health district to be significantly above the state rate during this time frame. From 2012-2014 the Southeast health district had the lowest age-adjusted melanoma incidence rate, 21.6 per 100,000 population. Utah males were significantly more likely than Utah females to be diagnosed with melanoma of the skin.
This difference between males and females increases with age; from 2012-2014, among those 65 years of age and over, the melanoma incidence rate for Utah males was 217.1 per 100,000 persons while the rate for females was 80.8 per 100,000 persons.

How Do We Compare With the U.S.? Utah has consistently ranked as the highest state nationally in terms of melanoma death and incidence. In 2013, the age-adjusted melanoma incidence rate was 37.4 per 100,000 in Utah and 20.7 per 100,000 in the U.S.

What Is Being Done? The Utah Department of Health initiated the Utah Cancer Action Network (UCAN), a statewide partnership whose goal is to reduce the burden of cancer. The mission of the UCAN is to lower cancer incidence and mortality in Utah through collaborative efforts directed toward cancer prevention and control. As a result of this planning process, objectives and strategies have been developed by community partners regarding the early detection of cervical, breast, and colorectal cancers as well as the promotion of physical activity, healthy eating habits, melanoma prevention, and cancer survivorship advocacy.

Page Content Updated On 06/06/2017, Published on 06/06/2017
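The age-adjustment used throughout this report can be made concrete with a small calculation. The sketch below is hypothetical: the age strata, case counts, populations, and standard-population weights are invented for illustration, and only the method (direct standardisation of age-specific rates to a standard population such as the 2000 U.S. population, expressed per 100,000) follows the report's definition of numerator, denominator, and rate.

    # Hypothetical example of a crude vs. directly age-adjusted incidence rate.
    # All numbers below are invented; only the arithmetic mirrors the report.

    # (age group, cases, population at risk, standard-population weight)
    strata = [
        ("0-39",   50, 1_800_000, 0.578),
        ("40-64", 120,   800_000, 0.310),
        ("65+",   130,   300_000, 0.112),
    ]

    total_cases = sum(cases for _, cases, _, _ in strata)
    total_pop = sum(pop for _, _, pop, _ in strata)
    crude_rate = 100_000 * total_cases / total_pop

    # Weight each age-specific rate by the standard population's age distribution.
    age_adjusted_rate = 100_000 * sum(
        weight * cases / pop for _, cases, pop, weight in strata
    )

    print(f"crude rate:        {crude_rate:.1f} per 100,000")
    print(f"age-adjusted rate: {age_adjusted_rate:.1f} per 100,000")

Because the weights come from the standard population rather than Utah's own age mix, rates computed this way can be compared across districts and years without being distorted by differences in age structure.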
The Battle of Glenn Máma, Dublin and the high-kingship of Ireland: a millennial commemoration, by Ailbhe MacShamhráin. - Published in Medieval Dublin, edited by Sean Duffy (pages 53-64).

The battle of Glenn Máma (coordinates 53°16′25″N 6°32′57″W) and the sack of Dublin which followed, taking place at the turn of the year 999-1000, have been accorded relatively little focus, in comparison with Clontarf, by historians of the later Viking Age in Ireland and Scandinavia. The importance of Clontarf has been appreciated at least since the twelfth century: that particular conflict is at once the highpoint and tragic finale of the Ua Briain propaganda work Cogadh Gaedhel re Gallaib (the War of the Irish with the Foreigners), which purports to chart the rise of the Dál Cais, the reign of Brian Boróma and his death in the struggle against the heathen. Clontarf is almost as prominent in Scandinavian saga. It forms an important part of Njal's Saga and presumably featured in the now lost Brian's Saga (Bugge 1908, 55, 59-66; Magnusson & Palsson 1960, 341; Ó Corráin 1998, 447-52). Quite aside from its survival in popular tradition, Clontarf attracted the attention of medievalists in the earlier decades of the twentieth century, including Goedheer (1938) and Ryan (1938); it continues to be used as a marker by present-day historians (for example Ó Cróinín 1995, 272), and justifiably so, if only because the removal of Brian opened up opportunities for other dynasties. However, scholars appear to have been slower to recognize the importance, in its own right, of the Battle of Glenn Máma, and it has attracted rather little discussion. In more recent decades, several commentators have indeed acknowledged that the battle (fought within a day's march of Dublin) represented a turning point in Brian's career. Ryan (1967, 361) and Ó Corráin (1972, 123) have both viewed the battle as a key stage in the conflict between Brian and Máel Sechnaill, a victory which gave him the confidence to challenge the Uí Néill king of Tara for supremacy in Ireland. Such attention, therefore, as has been accorded to Glenn Máma concerns its place in that contest for ascendancy which many, contemporary commentators and historians alike, have viewed as a struggle for the high-kingship of Ireland. While accepting the importance of its place in this broader picture, it seems appropriate here to view the engagement more in the context of its Dublin dimension. There are indications, as is argued below, that Brian had designs on Dublin for some years prior to Glenn Máma as, indeed, had his rival Máel Sechnaill. The question of Brian's responsibility for exacerbating the unrest which led to Glenn Máma – perhaps through misreading the complex political relationship between the Hiberno-Scandinavian kingdom and the north Leinster dynasty of Uí Dúnlainge – and the extent to which he may have appreciated the importance of Dublin in the context of Irish Sea politics, are among the other issues addressed. The career of Brian has been widely discussed, and the stages by which he rose to challenge for supremacy in Ireland are charted in several surveys (Ó Corráin 1972, 120-131; Duffy 1997, 31-6). For present purposes, a brief synopsis will suffice. Without doubt, he exercised considerable political power; ultimately, he did force submissions from each of the other provincial kings and from the Norse of Dublin, displacing (and arguably exceeding) the ascendancy long claimed by the Uí Néill kings of Tara and achieving, as has been claimed, a special status.
On the occasion of his visit to Armagh in 1005 when, as is recorded in the Annals of Ulster, he deposited the sum of twenty ounces of gold on St Patrick's altar, a note in the Book of Armagh (albeit entered by a partisan) styles him Imperator Scotorum or Emperor of the Irish (Gwynn 1978-9; Duffy 1997, 33-4). However, it seems unlikely that, from the commencement of his reign, he possessed a blueprint for achieving this supremacy. Nor was it the case that he had come from nowhere to challenge for political dominance in Ireland, despite the claims of Cogadh Gaedhel re Gallaib, whose authors wished to dramatise his rise to power. From his very accession to kingship, let alone by the time of Glenn Máma, his dynasty of Dál Cais was well placed at regional level. As shown more than thirty years ago by Kelleher (1967, 230-41), it is certainly the case that Brian's inheritance from his father and brothers – if it did not quite amount to mastery of Munster – certainly provided him with a firm basis from which to establish overkingship of the province. The possibility should perhaps not be discounted that Brian's father, Cennétig son of Lorcán, did benefit politically from a strategic marriage with the still powerful Clann Cholmáin dynasty of Uí Néill, as long as the importance of this connection is not overestimated (Kelleher 1967, 230-1; cf. Ó Corráin 1972, 114-5). It happens that Cennétig's daughter Orlaith was married, while apparently still in her teens, to the ageing king of Tara, Donnchad son of Flann. In 941, Donnchad captured the Eoganachta king of Cashel, Cellachán, which probably did help the Dál Cais cause, whether or not that was the intention. That same year, however, the year in which Brian was born according to a retrospective entry in the Annals of Ulster, the king of Tara had his young wife executed for adultery; his apparent willingness to do so might indeed suggest a marriage of convenience with a lesser dynasty, the continued support of which was not crucial for him. It may merely be coincidental, then, that the death of Donnchad in 944 was quickly followed by a reassertion of authority on the part of Cellachán of Cashel, involving a battle which cost the lives of two of Cennétig's sons. The question of Uí Néill support aside, however, other circumstances can be discerned which almost certainly facilitated Dál Cais expansion in the early to mid tenth century; not least among those is political fragmentation within the Eoganachta dynasties, and the potential which existed to exploit a conglomeration of minor ruling lineages in the lower Shannon basin. The fact remains that Cennétig, at his death in 951, is styled Rí Tuathmuman (king of Northern Munster) in the Annals of Innisfallen, and Rígdamna Caisil (one eligible for the kingship of Cashel) in the Annals of Ulster. Although his son and successor Lachtna, slain in an internal conflict in 953, is not accorded a similar accolade in his obit, he may nonetheless have advanced the cause of the Dál Cais. His short reign saw an attack against Clonmacnois which involved the Munstermen supported by foreigners, presumably Hiberno-Scandinavians. Later his brother Mathgamain defeated and subjugated the Norsemen of Limerick and managed to overawe some of the Eoganachta dynasties, but clearly not the Eoganachta of Raithlenn under Máel-muad son of Bran, whose base kingdom lay in Co Cork and who remained powerful in south Munster.
Although Mathgamain did not achieve effective control over the province, and was captured and put to death by Máel-muad in 976, it may be noted that the annals style him king of Cashel at his death. The immediate priority for Brian, nonetheless, was to re-establish the position of his dynasty in Munster, a task which took several years. The annals relate how he suppressed the Norsemen of Limerick, Uí Fidgenti (a kingdom in west Co Limerick), and the Eoganachta of Raithlenn – defeating and slaying Máel-muad at Belach Lechta. By 982, he was ready to move against the Nore Valley kingdom of Osraige, in effect a buffer-state between Munster and Leinster. This was apparently a cause of concern for the Clann Cholmáin king of Tara, Máel Sechnaill son of Domnall, who only two years earlier had severely defeated the Norse of Dublin at the important Battle of Tara. It required no great imagination to see a foray into Osraige by a Munster overking as a step towards asserting authority over Leth Moga (the southern half of Ireland); this at any rate would mean Munster lordship over Leinster – whether or not it was extended to include Dublin. Opting for a pre-emptive strike, Máel Sechnaill attacked the heartland of Dál Cais (in east Co Clare) and levelled the Tree of Mág n-Adair, which had once been held sacred by the ancestors of Dál Cais and had retained a special significance. Undeterred by such calculated insults, Brian, in 983, directed his forces up the Shannon against Connacht, over which Máel Sechnaill claimed sway. The same year, he again harried Osraige, taking hostages. This time he did, as Clann Cholmáin had feared, continue into Mág nAilbe (a plain in Co Carlow) and took hostages from the Leinster dynasty of Uí Dúnlainge. It seems that Brian, by this time, had designs on Dublin; he had witnessed the heavy defeat inflicted on the Norsemen by Máel Sechnaill, and perhaps anticipated that his rival might move to dominate the town. In 984, he formed an alliance with the sons of Harald, after they brought a fleet to Waterford. These were representatives of the Limerick dynasty, which his brother Mathgamain had earlier banished, and they, understandably, might have welcomed an opportunity to establish themselves in a new kingship. For Brian, the prospect of dominating a wealthy east-coast trading centre through compliant sub-kings was, presumably, attractive as an end in itself; however, it is possible that, mindful of the close Hebridean connections of his new allies, he already realised the potential of Dublin as a key to the Irish Sea. In any event, Dál Cais and the Limerick Norsemen planned a joint initiative. According to the Annals of Inisfallen, the two sides exchanged hostages and mutual guarantees, in preparation for an expedition against Dublin. A campaign was launched against Osraige and Leinster, and might have proceeded further but for a revolt of the Deisi, a sub-kingdom in eastern Munster (mainly in modern Co Waterford), which destabilised the province to a considerable degree. It took Brian at least two years to re-establish control. He harried the lands of the Deisi, bringing them to heel, but in 986 found it necessary to imprison his nephew, Aed son of Mathgamain, and the following year led a hosting across Desmumu (south Munster), taking hostages at Lismore, Cork and Emly. These campaigns occupied his attention during the interval in which his initiative against Leinster and Dublin had lost its momentum.
The late 980s and early 990s witnessed a renewal of the contest between Brian and Máel Sechnaill; the Munstermen campaigned up the Shannon waterway, striking not only at Connacht but now at the Westmeath lakelands. Their achievement, in military terms, could at best be described as a series of raids: on one front they struck at the Connachta, while on another they reached beyond Lough Ree and into the land of Breifne. Gradually, they wore down the resolve of Máel Sechnaill. The latter had certainly been active; in 989, he had subjugated Dublin, taking hostages and tribute, and then carried the war southwards. In 990 he defeated the Munstermen at Carn Fordroma and in 993, according to the Four Masters, he sacked Nenagh. The very fact that Brian appeared so resilient in the face of these setbacks may have caused Máel Sechnaill to become discouraged. The indications are that Brian was still determined to extend his lordship over Leth Moga and, conscious no doubt of his rival's assertion of lordship over Dublin, had not abandoned his ambitions in relation to the Hiberno-Norse kingdom. In the military sphere, Dál Cais proceeded slowly and with caution from 991 onwards. That year, Brian led a hosting against Leinster. In 995, he constructed extensive fortifications around his home province of Munster, before leading another expedition into Leinster in the following year. This time, he marched to Mág nAilbe (as he had done thirteen years earlier) in Co Carlow and took the hostages of the Uí Chennselaig dynasty and of the west of the province in general. In parallel with these developments, some diplomatic realignments on Brian's part may be viewed in the context of his ambitions in relation to Dublin. It seems that he divorced his wife Gormlaith, whose father, Murchad (slain 972) of the Uí Faeláin dynasty, had reigned as a nominal overking of Leinster. The separation of Brian and Gormlaith clearly had the potential to complicate political relationships, as her former husbands included Amlaib Cuarán (d. 981), king of Dublin, and Máel Sechnaill, king of Tara. Her offspring included Sitriuc Silkbeard, king of Dublin (Ó Cróinín 1995, 263-4). There was certainly a belief that she used her personal influence against Brian's interests after she returned to her own kindred. Episodes of Cogadh Gaedhel re Gallaib and Njal's Saga credit her with inciting her brother Máel-mórda, and her son Sitriuc Silkbeard, against Brian, prior to the Battle of Clontarf (Magnusson & Palsson 1960, 342, 344); no surviving evidence, however, indicates that she played any such role in the lead-up to Glenn Máma. Meanwhile, in the early 990s, Brian embarked on his third marriage, to Echrad, daughter of Carlus of Uí Aeda Odba, king of Gailenga Becca (Ryan 1967, 365). Given the relative unimportance of his new wife's lineage, the motive for such an alliance might at first sight appear obscure. The explanation, however, probably lies in the geography of politics. While branches of the Gailenga were scattered through counties Meath and Westmeath, Gailenga Becca lay in the south of the plain of Brega – well within the Dublin over-kingdom known as Fine Gall (Fingal). It is said that Glasnevin lay in this statelet of Gailenga Becca (Byrne 1968, 393). A difficulty remains, insofar as the ruling lineage of the Uí Aeda was associated with Odba, a site which the nineteenth-century scholar O'Donovan located near Navan; however, as the placename merely designates a knoll, it seems reasonable that there was more than one site of the same name.
Some references to Odba suggest a location in southern Brega, which certainly leaves it possible that the location lay within Fingal. For an ambitious ruler anxious to challenge for overlordship of Dublin, an opportunity to secure a foothold within the Hiberno-Scandinavian overkingdom would, no doubt, have been welcome. As the 990s came to a close, Máel Sechnaill, acknowledging his inability to subdue Brian, appears to have decided to 'cut his losses.' A rígdál (royal meeting) was arranged in 997 at Clonfert (more precisely at Port da Chaineoc, according to the Annals of Innisfallen), and agreement reached on the partition of Ireland into its 'traditional divisions' of Leth Cuinn (the northern half) and Leth Moga. In effect, therefore, suzerainty of the southern half of Ireland was being conceded to Brian by the Uí Néill king of Tara. In certain respects, such a meeting and such a concession were not entirely unprecedented. More than two-and-a-half centuries earlier, overkings of the Uí Néill and of Munster had met at Terryglass to agree spheres of influence along such lines (Ní Chon Uladh 1999, 190-6). On this occasion, however, Dublin was in the equation; Máel Sechnaill gave over to Brian the hostages of Leinster and of the Hiberno-Norse kingdom of Dublin, which he had taken in 989. This particular transfer of lordship would have serious import. In the event, it transpired that the 'Agreement of Clonfert' was short-lived. The two kings campaigned together in 998 and took further hostages from the 'Foreigners', meaning, it seems, the Norse of Dublin. Brian then took hostages from Connacht and presented them to Máel Sechnaill – a magnanimous gesture intended to convey that he was the superior ruler. At this point, serious unrest broke out in Leinster and Dublin which drew the attention of Brian. The immediate cause of this unrest may well lie in the relationship between Brian and his new tributary kings. It is probable that Dublin, in particular, having resented the overlordship of the Uí Néill, was even less willing to yield to Dál Cais (Ó Corráin 1972, 123). Furthermore, it appears that Brian, responding to the mounting crisis in 998, fanned the embers of discontent. Ultimately, however, the roots of the problem lay in the dynastic politics of Leinster and Dublin, the complexities of which Brian either failed to comprehend fully or chose to disregard for the sake of expedience. The north Leinster dynasty of Uí Dúnlainge had, some two centuries earlier, become divided into rival lineages, two of which – Uí Dunchada, based at Liamain (Newcastle-Lyons, on the Dublin-Kildare border), and Uí Faeláin, based in north Co Kildare – were, in the late tenth century, contesting a nominal overkingship of the Leinstermen. Since the mid 980s, this particular dignity had been held by a member of the Uí Dunchada, Donnchad son of Domnall, who by this time was subordinate to Brian. [Genealogy of the Uí Dúnlainge kings of Leinster, summarised by FJ Byrne.] For many years, Uí Dunchada, along with other Irish dynasties adjacent to Dublin, had been severely repressed by the Norse rulers, and remained in their shadow even after Máel Sechnaill had reduced the Norse kingdom's military might and placed it under tribute. When, in 993, a conflict between the Hiberno-Norse dynasties of Dublin and Waterford (Duffy 1992, 96) caused the temporary expulsion from Dublin of its king, Sitriuc Silkbeard son of Amlaib Cuarán, Uí Dunchada backed the Waterford Norsemen.
Presumably the intention was to secure greater support for their own cause from a new regime. Unfortunately for them, they had made the wrong choice and paid the penalty when, a short time later, the Uí Faeláin ruler, Máel-mórda son of Murchad, engineered the return of Sitriuc. These two were, of course, related, being uncle and nephew: Máel-mórda's sister Gormlaith, once the wife of Amlaib, was Sitriuc Silkbeard's mother. Acting as allies, Máel-mórda and Sitriuc pursued a vendetta against Uí Dunchada (MacShamhráin 1996, 88-9); before the end of 994, they killed Donnchad's cousin Gilla-céile son of Cerball. Two years later, they slew another of his cousins, Mathgamain, a brother of Gilla-céile. They also killed Ragnaill, a member of the Waterford dynasty. By 998, it seems that the situation had deteriorated further. Late in the year, Brian intervened directly and ravaged Leinster. It is not clear whether this action was intended merely to overawe Leinster and re-state Dál Cais supremacy, or had the more strategic purpose of supporting the Uí Dunchada rulers against their rivals. In any event, it does not appear to have helped matters. In 999, Donnchad son of Domnall was taken prisoner by Sitriuc Silkbeard of Dublin, again acting in collaboration with Máel-mórda, who assumed overkingship of the Leinstermen. Uncle and nephew then revolted against the overlordship of Brian. Indications are that a delay of a couple of months ensued as the latter, having duly considered his options, mustered his forces. That December, Brian (styled king of Cashel in the Annals of Ulster) led an army towards Dublin. Regarding the composition of this army, it may be noted that only the Clonmacnois set of annals, which may be seen as pro-Uí Néill, mention Máel Sechnaill in connection with the expedition. The Annals of Innisfallen and Cogadh Gaedhel re Gallaib, both of which may be accused of being (to varying degrees) pro-Dál Cais, are quite explicit in their claim that Brian's command consisted solely of the "choice troops of Munster." It may be significant, then, that the generally reliable Annals of Ulster, in its account of Glenn Máma and its sequel, refer only to Brian and indeed imply that he alone profited from the subsequent submission. None of the sources state that the army spent Christmas on the march, but it might be suggested that it did, for on Thursday the 30th of December it was intercepted by the combined forces of Norse Dublin and the Leinstermen at Glenn Máma, where a decisive engagement was fought. No agreement has been reached on the location of this important battle. The route taken by Brian's army marching from Munster, clearly a crucial factor in the equation, is of course unknown. The suggestion that he perhaps followed what later became the coach road from Limerick is plausible, but remains unprovable. Nineteenth-century scholars, including O'Donovan (1851, II, 739 n z) and Todd (1867, cxlii-iv), were tempted to locate the battle-site in the vicinity of Dunlavin, Co Wicklow. It was later realised that Glenn Máma featured in the itinerary of the tenth-century king of Ailech, Muirchertach 'na cCochall Croicinn' ('of the leather cloaks'), who allegedly made a circuit of Ireland in the winter of 941. Even if, as it now appears, much of the 'circuit' reflects the achievement of the later Muirchertach mac Lochlainn (Ó Corráin 2000, 238-50), the position of Glenn Máma in the sequence of place-names indicates a location not far to the west of Dublin.
On this basis Hogan (1910) considered that the site lay in the vicinity of Newcastle Lyons, Co Dublin, and proposed that it be identified with the Glen of Saggart. Given the propensity for battles to take place in border regions (Ó Riain 1974, 68), it seems reasonable to seek a location close to the perimeter of the Hiberno-Norse kingdom of Dublin. On that account, the suggestion of Lloyd (1914, 305ff), which places the battle at a gap now crossed by the Naas Road on the section between Kill and Rathcoole, is still worthy of consideration. In any event, the engagement took place within an easy day's march of Dublin, as Brian pressed on immediately afterwards to reach the town on the following day. The sources which offer comment on the scale of the battle indicate that it was no trivial encounter. According to the Annals of Innisfallen, which admittedly tends to represent a Munster perspective, formna Gall herend ('the best part of the foreigners of Ireland') fell therein. The more blatantly partisan Cogadh Gaedhel re Gallaib bursts into hyperbole (§67), claiming that 'since the Battle of Mág Roth to that time there had not taken place a greater slaughter'. It certainly seems that there was high mortality on both sides. The fallen included Harald son of Amlaib (a brother of Sitriuc Silkbeard) and 'other nobles of the foreigners', amongst whom was one Cuilén son of Eitigén, who apparently belonged to the Gailenga; he was perhaps a brother of Ruadacán son of Eitigén, king of Airther Gaileng, who died in 953 (Jaski 1997, 134). If so, it appears that Brian's efforts to secure the support of the Irish dynasties within Fingal had not been entirely successful. As for the casualties on Brian's side, even the Cogadh acknowledges that 'there fell many multitudes of the Dál Cais', but no details are provided. In the immediate aftermath of the battle, Brian's forces converged on Dublin, reaching the town on New Year's Eve 999. They entered its defences (apparently without any great resistance) and, as the Annals of Innisfallen attest, on New Year's Day (the Kalends of January) 1000, following an intensive sack, burned both the settlement itself and the nearby wood known as Caill Tomair (it apparently stood on the north side of the Liffey). The plunder of the town, for the second time in ten years, is described in considerable detail in the Cogadh (§68). Allowance must be made here for poetic licence but, even so, some picture can be obtained of the wealth of the trading centre that was Dublin (Smyth 1979, II, 209, 242). According to this account, Brian, having plundered the dún (fortress), entered the margadh (market area) and here seized the greatest wealth. Meanwhile, on the approach of the Munster forces, King Sitriuc had fled northward hoping to obtain asylum among the Ulstermen. His ally, Máel-mórda of Uí Faeláin, was captured, in ignominious circumstances according to the Cogadh (§71). At the conclusion of hostilities, the (nominal) overkingship of Leinster was bestowed upon the Uí Dunchada candidate, Donnchad son of Domnall, who retained this dignity until he was deposed in 1003. Brian, however, asserted his overlordship of the province by taking hostages from the Leinstermen. Before long, Sitriuc Silkbeard returned, having found no asylum in the north.
The annal accounts concur that he, too, yielded hostages to Brian, while the Annals of Innisfallen add that the latter, in a suitably magnanimous gesture, 'gave the fort (dún) to the Foreigners'; the implication here is that, from this time onwards, the Hiberno-Scandinavian ruler would hold his kingship from his Munster overlord. Apparently Brian, at this stage, aspired to an even tighter dominance of Dublin than that secured by his rival, Máel Sechnaill, ten years earlier. There seems to be little doubt that the longer-term beneficiary of Glenn Máma was Brian alone. With renewed confidence, he again moved against Máel Sechnaill, even if his initiatives of 1000-1001 resulted in setbacks: one expedition into Brega resulted in his advance cavalry being slaughtered by the Uí Néill, another foray was reversed in Míde (Co Westmeath), and the Dál Cais river-fleet was impeded by the king of Tara and his Connachta allies having constructed a barrier across the Shannon. Brian, however, found a way of circumventing it and, early in 1002, brought a large army through to Athlone and took the hostages of Connacht. At this point Máel Sechnaill, finding the support of the northern kings slipping away (Jaski 2000, 227), felt obliged to submit and a new political order was created. The capitulation of the king of Tara left Brian as the most powerful king in Ireland – the first non-Uí Néill king to achieve such prominence. However, it lies beyond the scope of this short essay to attempt any evaluation of his recognised success in breaking the Uí Néill hegemony and making political supremacy appear an achievable goal (Byrne 1973, 267; Charles-Edwards 2000, 570), much less to contemplate the realities underlying claims to high-kingship of Ireland. It remains to consider, though, however briefly, the importance of Brian's victory over the Máel-mórda/Sitriuc coalition as a prelude to renewing the challenge to Máel Sechnaill. Presumably, Glenn Máma gave him a psychological advantage over the king of Tara and increased his readiness to break the Agreement of Clonfert. As a result of the battle, he had achieved domination, in a meaningful sense, of Leinster and Dublin. Perhaps he had some appreciation of the potential of Dublin for accessing the Irish Sea area; certainly, in 1011, he brought a maritime fleet to Cenél Conaill (Co Donegal) – although it is not clear whether this included Dublin ships or only those of Limerick. At the very least, he could command the land forces of Sitriuc Silkbeard – and he made use of them in his hostings from 1000 onwards (Duffy 1992, 95). It appears, therefore, that through achieving effective dominance of Dublin, Brian acquired a military (aside from a psychological) advantage over Máel Sechnaill, which helped him in his endeavours to reach beyond the lordship of Leth Moga. His success in this regard was probably instrumental in tying Dublin into the sphere of Leth Moga for at least a century to follow. When the Hiberno-Scandinavian kingdom was eventually, in 1052, brought directly under Irish control, it was by Leinster and subsequently Munster rulers, a pattern which was maintained until 1118. By this time, however, the compilers of Cogadh Gaedhel re Gallaib were already piecing together the orthodox Dál Cais history of the dynasty's rise to prominence – and Glenn Máma was already part of that past.
- S Bugge 1908 Norsk sagafortælling og sagaskrivning i Irland, Kristiania
- FJ Byrne 1968 Historical note on Cnogba (Knowth), RIA Proc, 66C, 383-400
- FJ Byrne 1973 Irish Kings and High-Kings, London
- T Charles-Edwards 2000 Early Christian Ireland, Cambridge
- S Duffy 1992 Irishmen and Islesmen in the kingdoms of Dublin and Man, 1052-1171, Ériu xliii, 93-133
- S Duffy 1997 Ireland in the Middle Ages, London & Dublin
- AJ Goedheer 1938 Irish and Norse traditions about the Battle of Clontarf, Haarlem
- A Gwynn 1978-9 Brian in Armagh, Seanchas Ard Mhacha ix, 35-50
- E Hogan 1910 (ed) Onomasticon Goedelicum, Dublin
- B Jaski 1997 Additional notes on the Annals of Ulster, Ériu xlviii, 103-52
- B Jaski 2000 Early Irish Kingship and Succession, Dublin
- JV Kelleher 1967 The rise of the Dál Cais, in Rynne E (ed) North Munster studies: papers in honour of Michael Moloney, pp 230-41, Limerick
- J Lloyd 1914 The identification of the battlefield of Glenn Máma, Co Kildare Arch Society Journal vii, 305ff
- S Mac Airt 1951 (ed) The Annals of Inisfallen, Dublin
- S Mac Airt and G Mac Niocaill 1984 (eds) The Annals of Ulster (to AD 1131), Dublin
- A MacShamhráin 1996 Church and Polity in Pre-Norman Ireland, Maynooth
- M Magnusson & H Palsson 1960 (trans) Njal's Saga, London
- P Ní Chon Uladh 1999 The rígdál at Terryglass, 737 AD, Tipperary Historical Journal, 190-6
- Donncha Ó Corráin 1972 Ireland before the Normans, Dublin
- Donncha Ó Corráin 1998 Viking Ireland – afterthoughts, in Clarke HB, M Ní Mhaonaigh & R Ó Floinn (eds) Ireland and Scandinavia in the Early Viking Age, 421-52, Dublin
- Donncha Ó Corráin 2000 Muirchertach Mac Lochlainn and the Circuit of Ireland, in Smyth AP (ed) Seanchas: studies in early and medieval Irish archaeology, history and literature in honour of Francis J Byrne, 238-50, Dublin
- D Ó Cróinín 1995 Early Medieval Ireland 400-1200, London
- J O'Donovan 1851 (ed) Annals of the Kingdom of Ireland by the Four Masters, 7 vols, Dublin
- P Ó Riain 1974 Battle site and territorial extent in early Ireland, Zeitschrift für celtische Philologie xxxii, 67-80
- J Ryan 1938 The Battle of Clontarf, RSAI Jn lxviii, 1-50
- J Ryan 1967 Brian Boruma, King of Ireland, in Rynne E (ed) North Munster studies: papers in honour of Michael Moloney, pp 355-74, Limerick
- Alfred P Smyth 1975-9 Scandinavian York and Dublin, 2 vols, Totowa NJ & Dublin
Nocardia are aerobic and infectious (causing nocardiosis), producing pulmonary disease, skin infections, lymphocutaneous lesions and brain abscesses (Mari et al, 2002; Shook & Rapini, 2006; Bennett et al, 2007). The genus contains approximately 15 known species. The species identified in human pulmonary and systemic infections include N. asteroides, N. pseudobrasiliensis, N. otitidiscaviarum, N. abscessus, N. farcinica, N. nova and N. transvalensis (Georghiou and Blacklock, 1992; Groves, 1997; Yourke and Rouah, 2003; Bennett et al, 2007; Kennedy et al, 2007). N. cyriacigeorgica was recently identified as an emerging pathogen in the U.S. and probably worldwide (Schlaberg et al, 2008). Lymphocutaneous infections, subcutaneous mycetoma with sulfur granules and superficial skin infections also occur (Shook and Rapini, 2007). N. asteroides was identified with pneumonia and empyema thoracis in a healthy 40-day-old neonate after presumed inhalation exposure (Tantracheewathorn, 2004). Nocardia are gram-positive and partially acid-fast. Serological tests are not available. Predisposing factors include immunocompromise, pre-existing lung disease, corticosteroid therapy and diabetes mellitus (Georghiou and Blacklock, 1992; Mari et al, 2001; Bennett et al, 2007). As occurs with Streptomyces, the disease process can exhibit mimicry. Case reports in immunocompetent patients include brain abscesses (Chakrabarti et al, 2008; Kandasamy et al, 2008; Dias et al, 2008), spinal cord abscess (Samkoff et al, 2008), mimicry of metastatic brain tumor (Kawakami et al, 2008), ventriculitis/choroid plexitis (Mongkolrattanothai et al, 2008), lymphangitis (DiNubile, 2008), lung abscesses (Mari et al, 2001; Martinez et al, 2008; Tada et al, 2008), endophthalmitis (Ramakrishnan et al, 2008), and sternal osteomyelitis with mediastinal abscess (Baraboutis et al, 2008). It is recommended that 16S rDNA sequencing be used to identify infections with novel bacteria (Woo et al, 2008). Mycobacteria are common in moisture-damaged building materials (ceramic, wood and mineral insulation), and their occurrence increases with the degree of mold damage (Rautiala et al, 2004; Torvinen et al, 2006). They are environmental (soil, water, sewage), opportunistic gram-positive bacteria capable of causing hypersensitivity pneumonitis as well as cervical lymphadenitis in children. Mycobacteria have been isolated from water systems, spas, hot tubs, and humidifiers and are resistant to disinfection (Primm et al, 2004; Torvinen et al, 2007). The CDC has implicated Mycobacterium avium, M. terrae and M. immunogenum in outbreaks of hypersensitivity pneumonitis (Falkinham, 2003a, b). M. terrae isolated from the indoor air of a moisture-damaged building induced a biphasic inflammatory response after intratracheal instillation into mouse lungs. There was an initial increase in TNF-alpha and IL-6 at 6 hr to 3 days, followed by a second phase at 7 to 28 days (Jussila et al, 2002a, b). The genus Mycobacterium consists of approximately 117 species, of which 20 are potential human pathogens. They cause nontuberculous mycobacterial (NTM) lung disease (American Thoracic Society, 2007). Mycobacterium avium-intracellulare organisms are increasingly significant pathogens in North America, causing a pulmonary infection named MAC (M. avium complex). M. kansasii, M. chelonae and M. fortuitum are other important pathogens (Iseman et al, 1985; Fujita et al, 2002; Kuhlann and Woeltje, 2007; Fritz and Woeltje, 2007; Agrawal and Agrawal, 2007).
According to the American Thoracic Society (2007), "The minimum evaluation for NTM should include the following: 1) Chest radiograph or, in the absence of cavitation, chest high-resolution computed tomography (HRCT) scan; 2) Three or more sputum specimens stained for acid-fast bacilli (AFB) analysis; and 3) Exclusion of other disorders such as tuberculosis. Clinical, radiographic and microbiologic criteria are equally important and all must be present to make a diagnosis of NTM lung disease. The following criteria apply to symptomatic patients with radiographic opacities, nodular or cavitary, or an HRCT scan that shows multifocal bronchiectasis with multiple nodules. These criteria fit best with Mycobacterium avium complex (MAC), M. kansasii, and M. abscessus. There is not enough known about NTM of other species to be certain that these diagnostic criteria are universally applicable to all NTM respiratory pathogens. A microbiologic diagnosis includes one of the following: 1) Positive cultures from two separate expectorated samples; 2) Positive culture from at least one bronchial wash; 3) Transbronchial or other lung biopsy with mycobacterial histopathologic features. Patients suspected of having NTM lung disease but who do not meet the diagnostic criteria should be followed until the diagnosis is firmly established or excluded." NTM is on the rise worldwide. Mycobacteria have been isolated from water-damaged building materials from indoor environments. Finally, individuals treated with corticosteroids are at an increased risk. M. ulcerans is a significant human pathogen that causes Buruli ulcer (BU). Cases of BU have been reported worldwide, with the greatest burden of disease occurring in West and Central Africa. Its transmission source is not fully understood, but it may be waterborne. The disease is characterized by progressive, severe necrotizing skin lesions that do not respond to antimicrobial therapy and may require either surgical excision or amputation as treatment. M. ulcerans is an intracellular pathogen. It produces a polyketide-derived macrolide, mycolactone. Mycolactone is cytotoxic at 2 ng/ml and is the organism's virulence factor. Mycobacterium scrofulaceum, M. kansasii and other mycobacteria produce a less cytotoxic (33 to 1,000 µg/ml) lipid chemical when tested on fibroblasts in vitro (Daniel et al, 2004; Yip et al, 2007). The gram-positive toxic organisms identified in indoor environments also include Bacillus spp., Nocardia spp. and Streptomyces spp. (Peltola et al, 2001a, b). Mycobacteria have been isolated from damp indoor environments (Jussila et al, 2001, 2002a; Falkinham, 2003a, b). Examples of additional gram-positive bacteria are species of Arthrobacter, Bacillus, Cellulomonas, Gordona, and Paenibacillus (Andersson et al, 1997). Bacillus simplex and B. amyloliquefaciens isolated from moisture-damaged buildings produce surfactin (a lipopeptide) and peptides that adversely affect cell membranes and mitochondria (Mikkola et al, 2004, 2007). Finally, there were elevated concentrations of Staphylococci and Actinomycetes in a water-damaged home in which a 3-month-old infant died from a Reye's-like syndrome with mitochondrial damage resulting in decreased enzymatic activity of complexes I-IV. Mitochondrial DNA mutation testing of the infant resulted in negative findings for known mitochondrial diseases. This home also contained several species of Aspergillus, Penicillium and S. chartarum (Gray et al, in preparation).
Selenium is a chemical element with atomic number 34, chemical symbol Se, and an atomic mass of 78.96. It is a nonmetal whose properties are intermediate between those of the adjacent chalcogen elements sulfur and tellurium. It rarely occurs in its elemental state in nature, but instead is obtained as a side-product in the refining of other elements. Selenium is found in sulfide ores such as pyrite, where it partially replaces the sulfur. Minerals that are selenide or selenate compounds are also known, but are rare. The chief commercial uses for selenium today are in glassmaking and in pigments. Uses in electronics, once important, have been supplanted by silicon semiconductor devices. It is a semiconductor with the unusual property of conducting electricity better in the light than in the dark, and is used in photocells. Selenium is a trace mineral that is essential to good health but required only in small amounts. Selenium is incorporated into proteins to make selenoproteins, which are important antioxidant enzymes. The antioxidant properties of selenoproteins help prevent cellular damage from free radicals. Free radicals are natural by-products of oxygen metabolism that may contribute to the development of chronic diseases such as cancer and heart disease. Other selenoproteins help regulate thyroid function and play a role in the immune system. Selenium salts are toxic in large amounts, but trace amounts are necessary for cellular function in many organisms. It is a component of the enzymes glutathione peroxidase and thioredoxin reductase (which indirectly reduce certain oxidized molecules in animals and some plants). It is also found in three deiodinase enzymes, which convert one thyroid hormone to another. Selenium requirements in plants differ by species, with some plants, it seems, requiring none.

The most stable allotrope of selenium is a dense purplish-gray solid. In terms of structure, it adopts a helical polymeric chain. The Se-Se distance is 2.37 Å and the Se-Se-Se angle is 103°. It is a semiconductor with the unusual property of conducting electricity better in the light than in the dark, and is used in photocells. Gray selenium resists oxidation by air and is not attacked by non-oxidizing acids. With strong reducing agents, it forms polyselenides.

Red Se and related molecular allotropes

The second major allotrope of Se is red selenium. The solid consists of individual eight-membered ring molecules, like its lighter cousin sulfur. The Se-Se distance is 2.34 Å and the Se-Se-Se angle is 106°. Unlike sulfur, however, the red form converts to the gray polymeric allotrope with heat. Other rings with the formula Sen are also known. Elemental selenium produced in chemical reactions invariably appears as the amorphous red form: an insoluble, brick-red powder. When this form is rapidly melted, it forms the black, vitreous form, which is usually sold industrially as beads. The red allotrope crystallises in three habits. Selenium does not exhibit the unusual changes in viscosity that sulfur undergoes when gradually heated. Selenium has six naturally occurring isotopes, five of which are stable: 74Se, 76Se, 77Se, 78Se, and 80Se.
The last three also occur as fission products, along with 79Se, which has a half-life of 327,000 years. The final naturally occurring isotope, 82Se, has a very long half-life (~10^20 yr, decaying via double beta decay to 82Kr), which, for practical purposes, can be considered to be stable. Twenty-three other unstable isotopes have been characterized. See also Selenium-79 for more information on recent changes in the measured half-life of this long-lived fission product, important for the dose calculations performed in the frame of the geological disposal of long-lived radioactive waste. Native selenium is a rare mineral, which does not usually form good crystals, but, when it does, they are steep rhombohedrons or tiny acicular (hair-like) crystals. Isolation of selenium is often complicated by the presence of other compounds and elements. Selenium occurs naturally in a number of inorganic forms, including selenide-, selenate-, and selenite-containing minerals. In living systems, selenium is found in the amino acids selenomethionine, selenocysteine, and methylselenocysteine. In these compounds, selenium plays a role analogous to that of sulfur. Another naturally occurring organoselenium compound is dimethyl selenide. Certain solids are selenium-rich, and selenium can be bioconcentrated by certain plants. In soils, selenium most often occurs in soluble forms such as selenate (analogous to sulfate), which are leached into rivers very easily by runoff. Anthropogenic sources of selenium include coal burning and the mining and smelting of sulfide ores. Selenium is most commonly produced from selenide in many sulfide ores, such as those of copper, silver, or lead. It is obtained as a byproduct of the processing of these ores, e.g., from the anode mud of copper refineries and the mud from the lead chambers of sulfuric acid plants. These muds can be processed by a number of means to obtain selenium. Specifically, most elemental selenium comes as a byproduct of refining copper or producing sulfuric acid. Industrial production of selenium often involves the extraction of selenium dioxide from residues obtained during the purification of copper. Common production begins by oxidation with sodium carbonate to produce selenium dioxide. The selenium dioxide is then mixed with water and the solution is acidified to form selenous acid (oxidation step). Selenous acid is bubbled with sulfur dioxide (reduction step) to give elemental selenium.

Selenium forms two stable oxides: selenium dioxide (SeO2) and selenium trioxide (SeO3). Selenium dioxide is formed by the reaction of elemental selenium with oxygen:

Se8 + 8 O2 → 8 SeO2

It is a polymeric solid that forms monomeric SeO2 molecules in the gas phase. It dissolves in water to form selenous acid, H2SeO3. Selenous acid can also be made directly by oxidising elemental selenium with nitric acid:

3 Se + 4 HNO3 + H2O → 3 H2SeO3 + 4 NO

Unlike sulfur, which forms a stable trioxide, selenium trioxide is unstable and decomposes to the dioxide above 185 °C:

2 SeO3 → 2 SeO2 + O2 (ΔH = −54 kJ/mol)

Salts of selenous acid are called selenites. These include silver selenite (Ag2SeO3) and sodium selenite (Na2SeO3).
Hydrogen sulfide reacts with aqueous selenous acid to produce selenium disulfide:

H2SeO3 + 2 H2S → SeS2 + 3 H2O

Selenium disulfide consists of 8-membered rings with a nearly statistical distribution of sulfur and selenium atoms. It has an approximate composition of SeS2, with individual rings varying in composition, such as Se4S4 and Se2S6. Selenium disulfide has been used in shampoos as an anti-dandruff agent, an inhibitor in polymer chemistry, a glass dye, and a reducing agent in fireworks. Selenium trioxide may be synthesized by dehydrating selenic acid, H2SeO4, which is itself produced by the oxidation of selenium dioxide with hydrogen peroxide:

SeO2 + H2O2 → H2SeO4

Hot, concentrated selenic acid is capable of dissolving gold, forming gold(III) selenate. Iodides of selenium are not well known. The only stable chloride is Se2Cl2; the corresponding bromide is also known. These species are structurally analogous to the corresponding disulfur dichloride. Selenium dichloride is an important reagent in the preparation of selenium compounds (e.g. the preparation of Se7). It is prepared by treating selenium with SO2Cl2. Selenium reacts with fluorine to form selenium hexafluoride:

Se8 + 24 F2 → 8 SeF6

In comparison with its sulfur counterpart (sulfur hexafluoride), SeF6 is more reactive and is a toxic pulmonary irritant. Some of the selenium oxyhalides, such as SeOF2 and selenium oxychloride, have been used as specialty solvents. Analogous to the behavior of other chalcogens, selenium forms a dihydride, H2Se. It is a strongly odiferous, toxic, and colourless gas. It is more acidic than H2S. In solution it ionizes to HSe-. The selenide dianion Se2- forms a variety of compounds, including the minerals from which selenium is obtained commercially. Illustrative selenides include mercury selenide (HgSe), lead selenide (PbSe), zinc selenide (ZnSe), and copper indium gallium diselenide (Cu(Ga,In)Se2). These materials are semiconductors. With highly electropositive metals, such as aluminium, these selenides are prone to hydrolysis:

Al2Se3 + 3 H2O → Al2O3 + 3 H2Se

Alkali metal selenides react with selenium to form polyselenides, Sex2-, which exist as chains. Tetraselenium tetranitride, Se4N4, is an explosive orange compound analogous to S4N4. It can be synthesized by the reaction of SeCl4 with [((CH3)3Si)2N]2Se. Selenium reacts with cyanides to yield selenocyanates:

8 KCN + Se8 → 8 KSeCN

Selenium, especially in the II oxidation state, forms stable bonds to carbon. Typical compounds include selenols with the formula RSeH (e.g., benzeneselenol), selenides with the formula RSeR (e.g., diphenylselenide), and diselenides with the formula RSeSeR (e.g., diphenyldiselenide). Selenoxides, with the formula RSe(O)R, and selenyl chlorides, with the formula RSeCl, are useful intermediates in organic chemistry.

History and global demand

Selenium (Greek σελήνη selēnē, meaning "Moon") was discovered in 1817 by Jöns Jakob Berzelius, who found the element associated with tellurium (named for the Earth). It was discovered as a byproduct of sulfuric acid production. It came to medical notice later because of its toxicity to humans working in industry.
It was also recognized as an important veterinary toxin, seen in animals eating high-selenium plants. In 1954, the first hints of specific biological functions of selenium were discovered in microorganisms. Its essentiality for mammalian life was discovered in 1957. In the 1970s, it was shown to be present in two independent sets of enzymes. This was followed by the discovery of selenocysteine in proteins. During the 1980s, it was shown that selenocysteine is encoded by the codon TGA. The recoding mechanism was worked out first in bacteria and then in mammals (see SECIS element).

Health effects and nutrition

Although it is toxic in large doses, selenium is an essential micronutrient for animals. In plants, it occurs as a bystander mineral, sometimes in toxic proportions in forage (some plants may accumulate selenium as a defense against being eaten by animals, but other plants such as locoweed require selenium, and their growth indicates the presence of selenium in soil). See more on plant nutrition below. Selenium is a component of the unusual amino acids selenocysteine and selenomethionine. In humans, selenium is a trace element nutrient that functions as a cofactor for reduction of antioxidant enzymes, such as glutathione peroxidases and certain forms of thioredoxin reductase found in animals and some plants (this enzyme occurs in all living organisms, but not all forms of it in plants require selenium). The glutathione peroxidase family of enzymes (GSH-Px) catalyze certain reactions that remove reactive oxygen species such as hydrogen peroxide and organic hydroperoxides:

2 GSH + H2O2 → GSSG + 2 H2O (catalysed by GSH-Px)

Selenium also plays a role in the functioning of the thyroid gland and in every cell that uses thyroid hormone, by participating as a cofactor for the three known thyroid hormone deiodinases, which activate and then deactivate various thyroid hormones and their metabolites. It may inhibit Hashimoto's disease, in which the body's own thyroid cells are attacked as alien. A reduction of 21% in TPO antibodies was reported with the dietary intake of 0.2 mg of selenium.

Dietary sources of Se

Dietary selenium comes from nuts, cereals, meat, mushrooms, fish, and eggs. Brazil nuts are the richest ordinary dietary source (though this is soil-dependent, since the Brazil nut does not require high levels of the element for its own needs). In descending order of concentration, high levels are also found in kidney, tuna, crab, and lobster. The human body's content of selenium is believed to be in the 13–20 milligram range. Certain species of plants are considered indicators of high selenium content of the soil, since they require high levels of selenium to thrive. The main selenium indicator plants are Astragalus species (including some locoweeds), prince's plume (Stanleya sp.), woody asters (Xylorhiza sp.), and false goldenweed (Oonopsis sp.). Although selenium is an essential trace element, it is toxic if taken in excess. Exceeding the Tolerable Upper Intake Level of 400 micrograms per day can lead to selenosis.
This 400 microgram (µg) Tolerable Upper Intake Level is based primarily on a 1986 study of five Chinese patients who exhibited overt signs of selenosis and a follow-up study on the same five people in 1992. The 1992 study actually found the maximum safe dietary Se intake to be approximately 800 micrograms per day (15 micrograms per kilogram body weight), but suggested 400 micrograms per day not only to avoid toxicity, but also to avoid creating an imbalance of nutrients in the diet and to account for data from other countries. In China, people who ingested corn grown in extremely selenium-rich stony coal (carbonaceous shale) have suffered from selenium toxicity. This coal was shown to have a selenium content as high as 9.1%, the highest concentration in coal ever recorded in the literature. Symptoms of selenosis include a garlic odor on the breath, gastrointestinal disorders, hair loss, sloughing of nails, fatigue, irritability, and neurological damage. Extreme cases of selenosis can result in cirrhosis of the liver, pulmonary edema, and death. Elemental selenium and most metallic selenides have relatively low toxicities because of their low bioavailability. By contrast, selenates and selenites are very toxic, having an oxidant mode of action similar to that of arsenic trioxide. The chronic toxic dose of selenite for humans is about 2400 to 3000 micrograms of selenium per day for a long time. Hydrogen selenide is an extremely toxic, corrosive gas. Selenium also occurs in organic compounds, such as dimethyl selenide, selenomethionine, selenocysteine and methylselenocysteine, all of which have high bioavailability and are toxic in large doses. On 19 April 2009, twenty-one polo ponies began to die shortly before a match in the United States Polo Open. Three days later, a pharmacy released a statement explaining that the horses had received an incorrect dose of one of the ingredients used in a vitamin/mineral supplement compound, with which the horses had been injected. Such nutrient injections are common to promote recovery after a match, but this mixture had been prepared by a compounding pharmacy not familiar with it. Analysis of blood levels of inorganic compounds in the supplement indicated the selenium concentrations were ten to fifteen times higher than normal in the horses' blood samples, and 15 to 20 times higher than normal in their liver samples. It was later confirmed that selenium was the ingredient in question. Selenium is active in only tiny amounts, and has a history of causing accidental poisonings in supplements when a dose that is supposed to be in micrograms is given by mistake in milligrams (1,000 times as much). Selenium poisoning of water systems may result whenever new agricultural runoff courses through normally dry, undeveloped lands. This process leaches natural soluble selenium compounds (such as selenates) into the water, which may then be concentrated in new "wetlands" as the water evaporates. High selenium levels produced in this fashion have been found to have caused certain congenital disorders in wetland birds. Selenium deficiency is rare in healthy, well-nourished individuals.
It can occur in patients with severely compromised intestinal function, those undergoing total parenteral nutrition, and those of advanced age (over 90). Also, people dependent on food grown from selenium-deficient soil are at risk. Although New Zealand has low levels of selenium in its soil, adverse health effects have not been detected. Selenium deficiency may only occur when a low selenium status is linked with an additional stress, such as chemical exposure or increased oxidant stress due to vitamin E deficiency. There are interactions between selenium and other nutrients, such as iodine and vitamin E. The interaction is observed in the etiology of many deficiency diseases in animals, and pure selenium deficiency is rare. The effect of selenium deficiency on health remains uncertain, particularly in relation to Kashin-Beck disease.

Controversial health effects

Several studies have suggested a possible link between cancer and selenium deficiency. One study, known as the NPC, was conducted to test the effect of selenium supplementation on the recurrence of skin cancers in selenium-deficient men. It did not demonstrate a reduced rate of recurrence of skin cancers, but did show a reduced occurrence of total cancers, particularly for lung, colorectal and prostate cancers (Relative Risk 0.63). There was also a significant reduction in total cancer mortality (about 50%), although without a statistically significant change in overall mortality. The preventative effect observed in the NPC was greatest in those with the lowest baseline selenium levels. In 2009, the 5.5-year SELECT study reported that selenium and vitamin E supplementation, both alone and together, did not significantly reduce the incidence of prostate cancer in 35,000 men who "generally were replete in selenium at baseline". The SELECT trial reported vitamin E did not reduce prostate cancer as it had in the alpha-tocopherol, beta carotene (ATBC) study, but the ATBC had a large percentage of smokers, while the SELECT trial did not. There was a slight trend toward more prostate cancer in the SELECT trial, but only in the vitamin E-only arm of the trial, where no selenium was given. Dietary selenium prevents chemically induced carcinogenesis in many rodent studies. It has been proposed that selenium may help prevent cancer by acting as an antioxidant or by enhancing immune activity. Not all studies agree on the cancer-fighting effects of selenium. One study of naturally occurring levels of selenium in over 60,000 participants did not show a significant correlation between those levels and cancer.
The SU.VI.MAX study concluded that low-dose supplementation (with 120 mg of ascorbic acid, 30 mg of vitamin E, 6 mg of beta carotene, 100 µg of selenium, and 20 mg of zinc) resulted in a 30% reduction in the incidence of cancer and a 37% reduction in all-cause mortality in males, but did not produce a significant result for females. A Cochrane review of studies concluded that there is no convincing evidence that individuals, particularly those who are adequately nourished, will benefit from selenium supplementation with regard to their cancer risk. However, there is evidence selenium can help chemotherapy treatment by enhancing the efficacy of the treatment, reducing the toxicity of chemotherapeutic drugs, and preventing the body's resistance to the drugs. Studies of cancer cells in vitro showed that chemotherapeutic drugs, such as taxol and Adriamycin, were more toxic to strains of cancer cells when selenium was added. In March 2009, vitamin E (400 IU) and selenium (200 micrograms) supplements were reported to affect gene expression and to act as a tumor suppressor. Eric Klein, MD, from the Glickman Urological and Kidney Institute in Ohio, said the new study "lend[s] credence to the previous evidence that selenium and vitamin E might be active as cancer preventatives". In an attempt to rationalize the differences between epidemiological and in vitro studies and randomized trials like SELECT, Klein said randomized controlled trials "do not always validate what we believe biology indicates and that our model systems are imperfect measures of clinical outcomes in the real world". Some research has indicated a geographical link between regions of selenium-deficient soils and peak incidences of HIV/AIDS infection. For example, much of sub-Saharan Africa is low in selenium. However, Senegal is not, and it also has a significantly lower level of AIDS infection than the rest of the continent. AIDS appears to involve a slow and progressive decline in levels of selenium in the body. Whether this decline in selenium levels is a direct result of the replication of HIV or related more generally to the overall malabsorption of nutrients by AIDS patients remains debated. Low selenium levels in AIDS patients have been directly correlated with decreased immune cell count and increased disease progression and risk of death. Selenium normally acts as an antioxidant, so low levels of it may increase oxidative stress on the immune system, leading to its more rapid decline. Others have argued that T-cell-associated genes encode selenoproteins similar to human glutathione peroxidase. Depleted selenium levels in turn lead to a decline in CD4 helper T-cells, further weakening the immune system. Regardless of the cause of depleted selenium levels in AIDS patients, studies have shown selenium deficiency does strongly correlate with the progression of the disease and the risk of death. Some research has suggested that selenium supplementation, along with other nutrients, can help prevent the recurrence of tuberculosis. A well-controlled study showed selenium levels are positively correlated with the risk of having type 2 diabetes.
Because high serum selenium levels are positively associated with the prevalence of diabetes, and because selenium deficiency is rare, supplementation is not recommended in well-nourished populations such as the U.S. More recent studies, however, have indicated selenium may help inhibit the development of type 2 diabetes in men, though the mechanism for the possible preventative effect is not known.

The demand for Se was around 2,300 tonnes per year in the years 1989–1991. The largest commercial use of Se, accounting for about 50% of consumption, is the production of glass. Se compounds confer a red colour to glass. This colour cancels out the green or yellow tints that arise from the iron impurities typical of most glass. For this purpose various selenite and selenate salts are added. For other applications, the red colour is desirable, in which case mixtures of CdSe and CdS are added. Small amounts of organoselenium compounds are used to modify the vulcanization catalysts used in the production of rubber.

Uses in the laboratory

Selenium is a catalyst in some chemical reactions, but it is not widely used because of issues with toxicity. In X-ray crystallography, incorporation of one or more Se atoms helps with MAD and SAD phasing. Selenium is used with bismuth in brasses to replace more toxic lead. The demand for Se by the electronics industry is declining, despite a number of continuing applications. Because of its photovoltaic and photoconductive properties, selenium is used in photocopying, photocells, light meters and solar cells. Its use as a photoconductor in plain-paper copiers was once a leading application, but in the 1980s the photoconductor application declined (although it was still a large end-use) as more and more copiers switched to organic photoconductors. It was once widely used in selenium rectifiers. These uses have mostly been replaced by silicon-based devices or are in the process of being replaced. The most notable exception is in power DC surge protection, where the superior energy capabilities of selenium suppressors make them more desirable than metal oxide varistors. Sheets of amorphous selenium convert x-ray images to patterns of charge in xeroradiography and in solid-state, flat-panel x-ray cameras. Zinc selenide is used in blue and white LEDs. Selenium is used in the toning of photographic prints, and it is sold as a toner by numerous photographic manufacturers, including Kodak and Fotospeed. Its use intensifies and extends the tonal range of black-and-white photographic images as well as improving the permanence of prints. Early photographic light meters used selenium, but this application is now obsolete. The substance loosely called selenium sulfide (approximate formula SeS2) is the active ingredient in some anti-dandruff shampoos. The selenium compound kills the scalp fungus Malassezia, which causes shedding of dry skin fragments. The ingredient is also used in body lotions to treat Tinea versicolor due to infection by a different species of Malassezia fungus. Selenium is used widely in vitamin preparations and other dietary supplements, in small doses (typically 50 to 200 micrograms per day for adult humans).
Some livestock feeds are fortified with selenium as well.

Detection in biological fluids

Selenium may be measured in blood, plasma, serum or urine to monitor excessive environmental or occupational exposure, to confirm a diagnosis of poisoning in hospitalized victims, or to assist in a forensic investigation in a case of fatal overdosage. Some analytical techniques are capable of distinguishing organic from inorganic forms of the element. Both organic and inorganic forms of selenium are largely converted to monosaccharide conjugates (selenosugars) in the body prior to being eliminated in the urine. Cancer patients receiving daily oral doses of selenothionine may achieve very high plasma and urine selenium concentrations.

Evolution in biology

Over three billion years ago, blue-green algae were the most primitive oxygenic photosynthetic organisms and are ancestors of multicellular eukaryotic algae. Algae that contain the highest amounts of antioxidant selenium, iodide, and peroxidase enzymes were the first living cells to produce poisonous oxygen in the atmosphere. It has been suggested that algal cells required a protective antioxidant action, in which selenium and iodides, through peroxidase enzymes, played this specific role. Selenium, which acts synergistically with iodine, is a primitive mineral antioxidant, greatly present in the sea and in prokaryotic cells, where it is an essential component of the family of glutathione peroxidase (GSH-Px) antioxidant enzymes; seaweeds accumulate high quantities of selenium and iodine. In 2008, a study showed that iodide also scavenges reactive oxygen species (ROS) in algae, and that its biological role is that of an inorganic antioxidant, the first to be described in a living system, active also in an in vitro assay with the blood cells of today's humans. From about three billion years ago, prokaryotic selenoprotein families drove selenocysteine evolution. Selenium is incorporated into several prokaryotic selenoprotein families in bacteria, archaea and eukaryotes as selenocysteine, where selenoprotein peroxiredoxins protect bacterial and eukaryotic cells against oxidative damage. Selenoprotein families of GSH-Px and the deiodinases of eukaryotic cells seem to have a bacterial phylogenetic origin. The selenocysteine-containing form occurs in species as diverse as green algae, diatoms, sea urchin, fish and chicken. Selenium enzymes are involved in the utilization of the small reducing molecules glutathione and thioredoxin. One family of selenium-containing molecules (the glutathione peroxidases) destroys peroxide and repairs damaged peroxidized cell membranes, using glutathione. Another selenium-containing enzyme in some plants and in animals (thioredoxin reductase) generates reduced thioredoxin, a dithiol that serves as an electron source for peroxidases and also for the important reducing enzyme ribonucleotide reductase, which makes DNA precursors from RNA precursors. At about 500 Mya, when plants and animals began to transfer from the sea to rivers and land, the environmental deficiency of marine mineral antioxidants (such as selenium, iodine, etc.) was a challenge to the evolution of terrestrial life.
Trace elements involved in GSH-Px and superoxide dismutase enzyme activities, i.e. selenium, vanadium, magnesium, copper, and zinc, may have been lacking in some terrestrial mineral-deficient areas. Marine organisms retained and sometimes expanded their seleno-proteomes, whereas the seleno-proteomes of some terrestrial organisms were reduced or completely lost. These findings suggest that, with the exception of vertebrates, aquatic life supports selenium utilization, whereas terrestrial habitats lead to reduced use of this trace element. Marine fishes and vertebrate thyroid glands have the highest concentrations of selenium and iodine. From about 500 Mya, freshwater and terrestrial plants slowly optimized the production of "new" endogenous antioxidants such as ascorbic acid (vitamin C), polyphenols (including flavonoids), tocopherols, etc. A few of these appeared more recently, in the last 50–200 million years, in the fruits and flowers of angiosperm plants. In fact, the angiosperms (the dominant type of plant today) and most of their antioxidant pigments evolved during the late Jurassic period. The deiodinase isoenzymes constitute another family of eukaryotic selenoproteins with identified enzyme function. Deiodinases are able to extract electrons from iodides, and iodides from iodothyronines. They are thus involved in thyroid-hormone regulation, participating in the protection of thyrocytes from damage by the H2O2 produced for thyroid-hormone biosynthesis. About 200 Mya, new selenoproteins were developed as mammalian GSH-Px enzymes.

Bromine is a chemical element with the symbol Br, an atomic number of 35, and an atomic mass of 79.904. It is in the halogen element group. The element was isolated independently by two chemists, Carl Jacob Löwig and Antoine Jerome Balard, in 1825–1826. Elemental bromine is a fuming red-brown liquid at room temperature, corrosive and toxic, with properties between those of chlorine and iodine. Free bromine does not occur in nature, but occurs as colorless, soluble, crystalline mineral halide salts, analogous to table salt. Bromine is rarer than about three-quarters of the elements in the Earth's crust; however, the high solubility of the bromide ion has caused its accumulation in the oceans, and commercially the element is easily extracted from brine pools, mostly in the United States, Israel and China. About 556,000 tonnes were produced in 2007, an amount similar to that of the far more abundant element magnesium. At high temperatures, organobromine compounds are easily converted to free bromine atoms, a process which acts to terminate free radical chemical chain reactions. This makes such compounds useful fire retardants, and this is bromine's primary industrial use, consuming more than half of world production of the element. The same property allows volatile organobromine compounds, under the action of sunlight, to form free bromine atoms in the atmosphere which are highly effective in ozone depletion. This unwanted side-effect has caused many common volatile brominated organics like methyl bromide, a pesticide that was formerly a large industrial bromine consumer, to be abandoned.
Remaining uses of bromine compounds are in well-drilling fluids, as an intermediate in the manufacture of organic chemicals, and in film photography. Bromine has no essential function in mammals, though it is preferentially used over chloride by one antiparasitic enzyme in the human immune system. Organobromides are needed and produced enzymatically from bromide by some lower life forms in the sea, particularly algae, and the ash of seaweed was one source of bromine's discovery. As a pharmaceutical, the simple bromide ion, Br−, has inhibitory effects on the central nervous system, and bromide salts were once a major medical sedative, before being replaced by shorter-acting drugs. They retain niche uses as antiepileptics. Elemental bromine exists as a diatomic molecule, Br2. It is a dense, mobile, slightly transparent reddish-brown liquid that evaporates easily at standard temperature and pressure to give an orange vapor (its color resembles nitrogen dioxide) that has a strongly disagreeable odor resembling that of chlorine. It is one of only two elements on the periodic table that are liquids at room temperature (mercury is the other, although caesium, gallium, and rubidium melt just above room temperature). At a pressure of 55 GPa bromine converts to a metal. At 75 GPa it converts to a face-centered orthorhombic structure. At 100 GPa it converts to a body-centered orthorhombic monoatomic form. Being less reactive than chlorine but more reactive than iodine, bromine reacts vigorously with metals, especially in the presence of water, to give bromide salts. It is also reactive toward most organic compounds, especially upon illumination, conditions that favor the dissociation of the diatomic molecule into bromine radicals:

Br2 → 2 Br·

It bonds easily with many elements and has a strong bleaching action. Bromine is slightly soluble in water, but it is highly soluble in organic solvents such as carbon disulfide, carbon tetrachloride, aliphatic alcohols, and acetic acid. Bromine has two stable isotopes, 79Br (50.69%) and 81Br (49.31%). At least 23 other radioisotopes are known. Many of the bromine isotopes are fission products. Several of the heavier bromine isotopes from fission are delayed neutron emitters. All of the radioactive bromine isotopes are relatively short lived. The longest half-life is the neutron-deficient 77Br at 2.376 days. The longest half-life on the neutron-rich side is 82Br at 1.471 days. A number of the bromine isotopes exhibit metastable isomers. Stable 79Br exhibits a radioactive isomer, with a half-life of 4.86 seconds. It decays by isomeric transition to the stable ground state. Bromine was discovered independently by two chemists, Carl Jacob Löwig and Antoine Balard, in 1825 and 1826, respectively. Balard found bromide chemicals in the ash of seaweed from the salt marshes of Montpellier. The seaweed was used to produce iodine, but also contained bromine. Balard distilled the bromine from a solution of seaweed ash saturated with chlorine.
The properties of the resulting substance resembled those of an intermediate of chlorine and iodine; with those results he tried to prove that the substance was iodine monochloride (ICl), but after failing to do so he was sure that he had found a new element and named it muride, derived from the Latin word muria for brine. Löwig isolated bromine from a mineral water spring in his hometown of Bad Kreuznach in 1825. Löwig used a solution of the mineral salt saturated with chlorine and extracted the bromine with diethyl ether. After evaporation of the ether, a brown liquid remained. With this liquid as a sample for his work he applied for a position in the laboratory of Leopold Gmelin in Heidelberg. The publication of the results was delayed and Balard published his results first. After the French chemists Louis Nicolas Vauquelin, Louis Jacques Thénard, and Joseph-Louis Gay-Lussac approved the experiments of the young pharmacist Balard, the results were presented at a lecture of the Académie des Sciences and published in Annales de Chimie et Physique. In his publication, Balard states that he changed the name from muride to brôme on the proposal of M. Anglada. (Brôme (bromine) derives from the Greek βρῶμος, meaning stench.) Other sources claim that the French chemist and physicist Joseph-Louis Gay-Lussac suggested the name brôme for the characteristic smell of the vapors. Bromine was not produced in large quantities until 1860. The first commercial use, besides some minor medical applications, was the use of bromine for the daguerreotype. In 1840, it was discovered that bromine had some advantages over the previously used iodine vapor to create the light-sensitive silver halide layer used for daguerreotypy. Potassium bromide and sodium bromide were used as anticonvulsants and sedatives in the late 19th and early 20th centuries, until they were gradually superseded by chloral hydrate and then the barbiturates. The diatomic element Br2 does not occur naturally. Instead, bromine exists exclusively as bromide salts in diffuse amounts in crustal rock. Owing to leaching, bromide salts have accumulated in sea water at 65 parts per million (ppm), which is less than chloride. Bromine may be economically recovered from bromide-rich brine wells and from the Dead Sea waters (up to 50,000 ppm). It exists in the Earth's crust at an average concentration of 0.4 ppm, making it the 62nd most abundant element. The bromine concentration in soils normally varies between 5 and 40 ppm, but some volcanic soils can contain up to 500 ppm. The concentration of bromine in the atmosphere is extremely low, at only a few ppt. China's bromine reserves are located in Shandong Province, and Israel's bromine reserves are contained in the waters of the Dead Sea. The largest bromine reserve in the United States is located in Columbia County and Union County, Arkansas. Bromine production is rather dynamic and has increased sixfold since the 1960s.
Approximately 556,000 tonnes (worth around US $2.5 billion) were produced in 2007 worldwide, with the predominant contributions from the United States (226,000 t) and Israel (210,000 t). US production was excluded from the United States Geological Survey figures after 2007; of the 380,000 tonnes mined by other countries in 2010, 140,000 t were produced by China, 130,000 t by Israel and 80,000 t by Jordan. Bromide-rich brines are treated with chlorine gas, flushing through with air. In this treatment, bromide anions are oxidized to bromine by the chlorine gas:

2 Br− + Cl2 → 2 Cl− + Br2
HDD and SSD Explained

The traditional spinning hard drive (HDD) is the basic nonvolatile storage on a computer. That is, it doesn't "go away" like the data on the system memory when you turn the system off. Hard drives are essentially metal platters with a magnetic coating. That coating stores your data, whether that data consists of weather reports from the last century, a high-definition copy of the Star Wars trilogy, or your digital music collection. A read/write head on an arm accesses the data while the platters are spinning in a hard drive enclosure.

An SSD does much the same job functionally (saving your data while the system is off, booting your system, etc.) as an HDD, but instead of a magnetic coating on top of platters, the data is stored on interconnected flash memory chips that retain the data even when there's no power present. The chips can either be permanently installed on the system's motherboard (like on some small laptops and ultrabooks), on a PCI/PCIe card (in some high-end workstations), or in a box that's sized, shaped, and wired to slot in for a laptop or desktop's hard drive (common on everything else). These flash memory chips differ from the flash memory in USB thumb drives in the type and speed of the memory. That's the subject of a totally separate technical treatise, but suffice it to say that the flash memory in SSDs is faster and more reliable than the flash memory in USB thumb drives. SSDs are consequently more expensive than USB thumb drives for the same capacities.

A History of HDDs and SSDs

Hard drive technology is relatively ancient (in terms of computer history). There are well-known pictures of the infamous IBM 350 RAMAC hard drive from 1956 that used 50 24-inch-wide platters to hold a whopping 3.75MB of storage space. This, of course, is the size of an average 128Kbps MP3 file, in the physical space that could hold two commercial refrigerators. The IBM 350 was only used by government and industrial users, and was obsolete by 1969. Ain't progress wonderful? The PC hard drive form factor standardized in the early 1980s with the desktop-class 5.25-inch form factor, with 3.5-inch desktop and 2.5-inch notebook-class drives coming soon thereafter. The internal cable interface has changed from Serial to IDE to SCSI to SATA over the years, but it essentially does the same thing: connects the hard drive to the PC's motherboard so your data can be processed. Today's 2.5- and 3.5-inch drives use SATA interfaces almost exclusively (at least on most PCs and Macs). Capacities have grown from multiple megabytes to multiple terabytes, an increase of millions fold. Current 3.5-inch HDDs max out at 6TB, with 2.5-inch drives at 2TB max.

The SSD has a much more recent history. There was always an infatuation with non-moving storage from the beginning of personal computing, with technologies like bubble memory flashing (pun intended) and dying in the 1970s and '80s. Current flash memory is the logical extension of the same idea. The flash memory chips store your data and don't require constant power to retain that data. The first primary drives that we know as SSDs started during the rise of netbooks in the late 2000s. In 2007, the OLPC XO-1 used a 1GB SSD, and the Asus Eee PC 700 series used a 2GB SSD as primary storage. The SSD chips on low-end Eee PC units and the XO-1 were permanently soldered to the motherboard.
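As an aside to the platter-versus-flash distinction above (and not part of the original article), modern operating systems expose this difference directly. The minimal sketch below assumes a Linux machine, where the kernel publishes a "rotational" flag for every block device under /sys/block; device names such as sda or nvme0n1 are only examples, and virtual devices (loop devices, RAM disks) will also show up in the listing.

```python
#!/usr/bin/env python3
"""List block devices and report whether each looks like a spinning HDD or an SSD.

On Linux, /sys/block/<device>/queue/rotational holds "1" for rotational media
(spinning platters) and "0" for non-rotational media such as flash-based SSDs.
"""
import glob

def classify_block_devices():
    """Map device names (e.g. 'sda', 'nvme0n1') to a rough HDD/SSD label."""
    results = {}
    for path in glob.glob("/sys/block/*/queue/rotational"):
        device = path.split("/")[3]
        with open(path) as flag_file:
            rotational = flag_file.read().strip() == "1"
        results[device] = "HDD (rotational)" if rotational else "SSD or other non-rotational device"
    return results

if __name__ == "__main__":
    for device, kind in sorted(classify_block_devices().items()):
        print(f"{device}: {kind}")
```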
As netbooks, ultrabooks, and other ultraportables became more capable, SSD capacities rose, and eventually standardized on the 2.5-inch notebook form factor. This way, you could pop a 2.5-inch hard drive out of your laptop or desktop and replace it easily with an SSD. Other form factors emerged, like the mSATA miniPCI SSD card and the DIMM-like SSDs in the Apple MacBook Air, but today many SSDs are built into the 2.5-inch form factor. The 2.5-inch SSD capacity tops out at 1TB currently, but will undoubtedly grow as time goes by.

Advantages and Disadvantages

Both SSDs and HDDs do the same job: They boot your system, store your applications, and store your personal files. But each type of storage has its own unique feature set. The question is, what's the difference, and why would a user get one over the other? We break it down:

Price: To put it bluntly, SSDs are very expensive in terms of dollars per GB. For the same capacity and form factor 1TB internal 2.5-inch drive, you'll pay about $75 for an HDD, but as of this writing, an SSD is a whopping $600. That translates into eight cents per GB for the HDD and 60 cents per GB for the SSD. Other capacities are slightly more affordable (250 to 256GB: $150 SSD, $50 HDD), but you get the idea. Since HDDs are older, more established technologies, they will remain less expensive for the near future. Those extra hundreds may push your system price over budget.

Maximum and Common Capacity: As seen above, SSD units top out at 1TB, but those are still very rare and expensive. You're more likely to find 128GB to 500GB units as primary drives in systems. You'd be hard pressed to find a 128GB HDD in a PC these days, as 250 or even 500GB is considered a "base" system in 2014. Multimedia users will require even more, with 1TB to 4TB drives common in high-end systems. Basically, the more storage capacity, the more stuff (photos, music, videos, etc.) you can hold on your PC. While the (Internet) cloud may be a good place to share these files between your phone, tablet, and PC, local storage is less expensive, and you only have to buy it once.

Speed: This is where SSDs shine. An SSD-equipped PC will boot in seconds, certainly under a minute. A hard drive requires time to speed up to operating specs, and will continue to be slower than an SSD during normal operation. A PC or Mac with an SSD boots faster, launches apps faster, and has higher overall performance. Witness the higher PCMark scores on laptops and desktops with SSD drives, plus the much higher scores and transfer times for external SSDs vs. HDDs. Whether it's for fun, school, or business, the extra speed may be the difference between finishing on time or failing.

Fragmentation: Because of their rotary recording surfaces, HDDs work best with larger files that are laid down in contiguous blocks. That way, the drive head can start and end its read in one continuous motion. When hard drives start to fill up, large files can become scattered around the disk platter, which is otherwise known as fragmentation. While read/write algorithms have improved to the point where the effect is minimized, the fact of the matter is that HDDs can become fragmented, while SSDs don't care where the data is stored on their chips, since there's no physical read head. SSDs are inherently faster.

Durability: An SSD has no moving parts, so it is more likely to keep your data safe in the event that you drop your laptop bag or your system is shaken about by an earthquake while it's operating.
Most hard drives park their read/write heads when the system is off, but they are flying over the drive platter at hundreds of miles an hour when they are in operation. Besides, even parking brakes have limits. If you're rough on your equipment, an SSD is recommended.

Availability: Hard drives are simply more plentiful. Look at the product lists from Western Digital, Toshiba, Seagate, Samsung, and Hitachi, and you'll see many more HDD model numbers than SSDs. For PCs and Macs, HDDs won't be going away completely, at least for the next couple of years. You'll also see many more HDD choices than SSDs from different manufacturers for the same capacities. SSD model lines are growing in number, but HDDs are still the majority for storage devices in PCs.

Form Factors: Because HDDs rely on spinning platters, there is a limit to how small they can be manufactured. There was an initiative to make smaller 1.8-inch spinning hard drives, but that's stalled at about 320GB, since the MP3 player and smartphone manufacturers have settled on flash memory for their primary storage. SSDs have no such limitation, so they can continue to shrink as time goes on. SSDs are available in 2.5-inch laptop drive-sized boxes, but that's only for convenience, as stated above. As laptops become slimmer and tablets take over as primary Web surfing platforms, you'll start to see the adoption of SSDs skyrocket.

Noise: Even the quietest HDD will emit a bit of noise when it is in use from the drive spinning or the read arm moving back and forth, particularly if it's in a system that's been banged about or in an all-metal system where it's been shoddily installed. Faster hard drives will make more noise than slower ones. SSDs make virtually no noise at all, since they're non-mechanical.

Overall: HDDs win on price, capacity, and availability. SSDs work best if speed, ruggedness, form factor, noise, or fragmentation (technically part of speed) are important factors to you. If it weren't for the price and capacity issues, SSDs would be the winner hands down. As far as longevity goes, while it is true that SSDs wear out over time (each cell in a flash memory bank has a limited number of times it can be written and erased), thanks to TRIM technology built into SSDs that dynamically optimizes these read/write cycles, you're more likely to discard the system for obsolescence before you start running into read/write errors. The possible exceptions are high-end multimedia users like video editors who read and write data constantly, but those users will need the larger capacities of hard drives anyway. Hard drives will eventually wear out from constant use as well, since they use physical recording methods. Longevity is a wash when it's separated from travel and ruggedness concerns.

The Right Storage for You

So, does an SSD or HDD (or a hybrid of the two) fit your needs? Let's break it down:

• Multimedia Mavens and heavy downloaders: Video collectors need space, and you can only get to 4TB of space cheaply with hard drives.

• Budget buyers: Ditto. Plenty of space for cheap. SSDs are too expensive for $500 PC buyers.

• Graphics Arts: Video and photo editors wear out storage by overuse. Replacing a 1TB hard drive will be cheaper than replacing a 500GB SSD.

• General users: Unless you can justify a need for speed or ruggedness, most users won't need expensive SSDs in their system.

• Road Warriors: People who shove their laptops into their bags indiscriminately will want the extra security of an SSD.
That laptop may not be fully asleep when you violently shut it to catch your next flight. This also includes folks who work in the field, like utility workers and university researchers.

• Speed Demons: If you need things done now, spend the extra bucks for quick bootups and app launches. Supplement with a storage SSD or HDD if you need extra space (see below).

• Graphics Arts and Engineering: Yes, I know I said they need HDDs, but the speed of an SSD may make the difference between completing two proposals and completing five for your client. These users are prime candidates for dual-drive systems (see below).

• Audio guys: If you're recording music, you don't want the scratchy sound from a hard drive intruding. Go for the quieter choice.

Now, we're talking primarily about internal drives here, but the same applies to external hard drives. External drives come in both large desktop form factors and compact portable form factors. SSDs are becoming a larger part of the external market as well. The same sorts of affinities apply, i.e., road warriors will want an external SSD over an HDD if they're rough on their equipment.

Hybrid Drives and Dual-Drive Systems

Back in the mid 2000s, some of the hard drive manufacturers, like Samsung and Seagate, theorized that if you add a few GB of flash chips to a spinning HDD, you'd get a so-called "hybrid" drive that approaches the performance of an SSD, with only a slight price difference from an HDD. All of it will fit in the same space as a "regular" HDD, plus you'd get the HDD's overall storage capacity. The flash memory acts as a buffer for oft-used files (like apps or boot files), so your system has the potential for booting faster and launching apps faster. The flash memory isn't directly accessible by the end user, so they can't, for example, install Windows or Linux on the flash chips. In practice, drives like the Seagate Momentus XT work, but they are still more expensive and more complex than simple hard drives. They work best for people like road warriors who need large storage, but need fast boot times, too. Since they're an in-between product, they don't necessarily replace dedicated HDDs or SSDs.

In a dual-drive system, the system manufacturer will install a small SSD primary drive (C:) for the operating system and apps, while adding a large storage drive (D: or E:) for your files. While in theory this works well, in practice, manufacturers can go too small on the SSD. Windows itself takes up a lot of space on the primary hard drive, and some apps can't be installed on the D: or E: drive. Some capacities, like 20GB or 32GB, may be too small. For example, the Polywell Poly i2303 i5-2467M comes with a 20GB SSD as the boot drive, and we were unable to complete testing, let alone install usable apps, since there was no room left over once Windows 7 was installed on the C: drive. In our opinion, 80GB is a practical size for the C: drive, with 120GB being even better. Space concerns are the same as with any multi-drive system: You need physical space inside the PC chassis to hold two (or more) drives.

Last but not least, an SSD and an HDD can be combined (like Voltron) on systems with technologies like Intel's Smart Response Technology. SRT uses the SSD invisibly to help the system boot faster and launch apps faster. Like a hybrid drive, the SSD is not directly accessible by the end user; rather, it acts as a cache for files the system needs often (you'll only see one drive, not two).
Smart Response Technology requires true SSDs, like those in 2.5-inch form factors, but those drives can be as small as 8GB to 20GB and still provide performance boosts. Since the operating system isn’t being installed to the SSD directly, you avoid the drive space problems of the dual-drive configuration mentioned above. On the other hand, your PC will require space for two drives, a requirement that may exclude some small form factor desktops and laptops. You’ll also need the SSD and your system’s motherboard to support Intel SRT for this scenario to work. All in all it’s an interesting workaround. It’s unclear whether SSDs will totally replace traditional spinning hard drives, especially with shared cloud storage waiting in the wings. The price of SSDs is coming down, but still not enough to totally replace the TB of data that some users have in their PCs and Macs. Cloud storage isn’t free either: you’ll continue to pay as long as you want personal storage on the Internet. Home NAS drives and cloud storage on the Internet will alleviate some storage concerns, but local storage won’t go away until we have ubiquitous wireless Internet everywhere, including planes and out in the wilderness. Of course, by that time, there may be something better. I can’t wait.
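As a quick sanity check of the price comparison quoted earlier, the short sketch below (again, not part of the original article) re-derives the cost-per-gigabyte figures from the article's own 2014-era example prices; the labels and dollar amounts are just those examples, so substitute current prices to redo the comparison.

```python
"""Re-derive the article's rough cost-per-gigabyte comparison for HDDs and SSDs."""

def cost_per_gb_cents(price_usd, capacity_gb):
    """Return the cost of storage in US cents per gigabyte."""
    return price_usd / capacity_gb * 100

# Example prices taken from the article's 2014-era figures.
drives = [
    ("1TB 2.5-inch HDD, about $75", 75.0, 1000),
    ("1TB 2.5-inch SSD, about $600", 600.0, 1000),
    ("250GB HDD, about $50", 50.0, 250),
    ("256GB SSD, about $150", 150.0, 256),
]

for label, price, capacity in drives:
    print(f"{label}: {cost_per_gb_cents(price, capacity):.1f} cents/GB")

# Prints roughly 7.5, 60.0, 20.0 and 58.6 cents per GB, which matches the
# "about eight cents versus 60 cents per GB" comparison in the text.
```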
Editor’s note: The following is the second in a two-part series. The first part appeared in the Feb. 24-March 2, 2010, issue. By Susan Johnson This is the second half of the program at Stockholm Inn Saturday, Feb. 20, an all-day event hosted by the Northern Illinois Tea Party. The Tea Party is a national movement, loosely organized, with people of different beliefs, but all share a core belief that America is the land of the free, that government should play a limited role in our life, and has a constitutionally-specified role that it is presently exceeding with agendas such as the health care bill. According to their mission statement: “As an organization, we believe that adherence to the U.S. Constitution will result in a limited federal government, fiscal responsibility, and free markets. … The Northern Illinois Tea Party’s mission is to organize like-minded individuals, educate and inform others based on our core values, to secure public policy consistent with those values, and to positively affect the outcome of elections.” Lessons in economics, U.S. history Savannah Liston, a homeschooled student and member of Rockford Campaign for Liberty, asked, “Did capitalism cause the 2008 recession?” Her presentation included a PowerPoint video showing how economic trends influenced by political decisions dating back to 1913 have contributed to and exacerbated the decline of the dollar. Liston related how the incorrect principles of Keynesian economics—still being taught in many schools—are creating a misunderstanding of what really happened in the 1920s and 1930s, through the Great Depression and up to the current mortgage crises of Fannie Mae, Freddie Mac, and the “bailouts.” This mis-information leaves young people ill-prepared to deal with today’s economy as manipulated by the government, Liston suggested. As Liston said, “The U.S. economy has not had a free market since 1913,” when the income tax was adopted. Liston explained that according to free market principles, “what is money?—the general medium of exchange. The paper money was originally linked to gold and silver; the dollar was ‘silver certificate,’ but now the Federal Reserve just prints money out of nothing. Inflation means ‘to expand.’ “‘Marginal utility’ is defined as ‘the more units of a good you have, the less value it has,’” Liston continued. “Time preference—consume now or later? Time preference divides labor and the economy. There are ‘high-time preference’ goods and ‘low-time preference’ goods. ‘What is a recession?’ It is a constriction of the economy lasting more than six months. If the interest rate goes down, it signals to the producer that people want more consumer goods. But the false boom has to end. “Historic depressions: In the depression of 1920, the money supply doubled; war production stopped,” Liston said. “The unemployment rate was up to 21 percent. The GDP shrunk 17 percent. President Harding responded by cutting the federal budget in half, reducing the income tax and reducing the federal deficit by one-third. The Federal Reserve did nothing. By 1923, unemployment was down to 2.4 percent. The economy had readjusted and was stronger without government interference. “The Great Depression: What caused it? Deflation did not cause or prolong it,” Liston said. “After the stock market crash, the Fed pushed massive amounts of money into the economy, but it didn’t help. What really happened was, the stock market boomed in the ’20s. Stock market value is directly related to capital value. But it was a false boom. 
The high plateau crashed; unemployment was 2.4 percent; GDP under $2 billion; stocks down 90 percent. Hoover was just like [Franklin D.] Roosevelt. He started the home loan bank discount system; he started the Public Works Administration, and Roosevelt only made it worse. Roosevelt did this through the Agricultural Act. Pigs were slaughtered, and crops left to rot. “Regime uncertainty—People were afraid to make any new investments because they had a fear of the government,” Liston continued. “Economic recovery? War prosperity is like the prosperity that an earthquake or a plague brings. After the war, the economy returned to a more capitalistic structure, making it easier for businesses to grow. By the end of the war, many products had to be replaced. Skewed data led people to have greater optimism about the future. The economy grew strong again, not because of FDR and his New Deal, but in spite of it. “The 2008 recession: A false bubble in the housing industry caused the recession, affecting Fannie Mae, Freddie Mac and the National Mortgage Association,” Liston said. “But what caused the housing bubble? (1) Adjustable rate mortgages; (2) Fannie Mae & Freddie Mac’s lending requirements were eased for political purposes. Banks were forced to give out higher-risk loans. (3) The rating agencies were pressured into unwise actions. (4) Pro-ownership tax codes provided incentives for people to buy homes. (5) Banks could not loan out money they did not have. (6) The idea of a firm being ‘too big to fail.’ The government’s response: more bailouts for everyone. This is not capitalism but socialism. “The TARP bill was used for propping up consumer credit, Cash for Clunkers and housing rebates,” Liston said. “The government is trying to keep demand artificially high. Will there be recovery? Recessions aren’t bad; the false boom is. The power of the market should not be interfered with by government. Get rid of Fannie Mae and Freddie Mac and the Federal Reserve. We are selling Treasury notes to other countries, and we are going to be in more trouble if we don’t stop this.” Skip Skinner’s topic was “Basic Principles of Liberty.” He drew parallels with what happened in the late colonial period of America, when people finally decided to separate from English rule. “They started a revolution to get a representative democracy,” he said. “It is now happening again. We are part of the re-establishment of the liberty for which they fought. We will have to make many sacrifices, as they did. But our Founding Fathers had in view what they were handing down to their posterity. The freedoms we had are fast fading away; our children and grandchildren will have trillions of dollars in debt. We must be willing to enter the battle at hand.” He advised people to check out their sources of information by going to the primary source as much as possible. Support the Constitution Randy Stufflebeam, Constitution Party candidate for Illinois governor in the November election, challenged the audience with a fast-paced, energetic speech that included an oral quiz about the Illinois State Constitution. Those who answered questions correctly received a copy of the constitution, and everyone was urged to read both the U.S. and Illinois constitutions. A former Marine who is proud of his military service, Stufflebeam said he “took an oath to defend the U.S. Constitution against all enemies, foreign and domestic. It is not a ‘living document’ that can be interpreted any way people decide. 
We have had an unconstitutional army since 1932.” He noted that the Constitution states that “the Federal Government can raise armies, but no appropriation for that end shall be longer than two years.” Stufflebeam said 1913 was the year tyranny and evil got a foothold in this country, beginning with the 16th Amendment. Stufflebeam got the entire audience on their feet when he asked who would take an oath to “support and defend the United States Constitution.” His closing remarks echoed the title of his topic: “Remember Your Oath!” ‘Warming’ to the subject Dr. Tom Wentland, physician, spoke at different times about two topics: “The Global Warming Scam” and “Healthcare in the VA—Hints of the Coming ObamaCare.” On the first subject, Wentland referred to Al Gore’s documentary, An Inconvenient Truth, stating Gore “conveniently left out” research that did not agree with his theory. “Having a medical degree can sometimes come in handy for global warming,” Wentland observed. “In medical school, we were taught how to do research and how to evaluate research. It should be able to be repeated by other scientists. If it can’t be repeated, it should be looked at as either invalid or ‘junk science.’ Most of my facts came from Lord Christopher Monckton, a highly-regarded scientist in Great Britain, who has written about global warming. He sued Al Gore in court and won the right to present alternative facts that disproved some of his claims in the movie. The judge in the trial ruled that there were about 44 mistakes that needed to be corrected.” This information is available on YouTube. “There are three assumptions about global warming,” Wentland continued. “(1) That there is global warming; (2) that it is manmade; (3) that the warming will have disastrous results. … The IPCC (Intergovernmental Panel on Climate Change) of the U.N. is responsible for information. … The IPCC now has admitted that those e-mails [disputing the research] were theirs. … Professor Phil Jones resigned from the IPCC and this week admitted three important things: (1) The data for the hockey stick graph [in the movie] has somehow gone missing; (2) there has been no global warming since 1995; in the hockey stick graph, that was what Al Gore pointed to—there is a steep incline in the last 15 years; (3) the warming periods have happened before, like the Medieval Warming Period, and these warming periods did not result from manmade environmental factors.” Wentland added more information for which space does not allow here (see petitionproject.org), but he also stated unequivocally: “Cap and trade will destroy our economy. All jobs will go to a country that has no restrictions on carbon. Unemployment will go up to 40 percent.” Pete Malone spoke about the Fair Tax, under which income and payroll taxes would no longer be withheld from employees’ paychecks. The self-employed would not be subject to income and self-employment taxes. Foreign and domestically-produced goods would be taxed equally, instead of foreign-produced enjoying a tax advantage as under current law. For more information, go to www.fairtax.org. Veterans’ experience presages ObamaCare Dr. Wentland then spoke about the type of health care available to veterans now, which might be coming for all U.S. citizens. Wentland said: “Currently, some treatments are denied by insurance companies who refuse to accept some patients. This is rationing. The rationing will be worse with government-sponsored health care. You will only receive so much health care per year—$5,000 per person. 
If you need more, you will have to pay extra. A government committee will decide what benefits and treatments you receive. Health care will be provided to all non-U.S. citizens, illegal or otherwise. Government will have direct access to your bank account for funds. Unions and community organizations will be exempt from Cadillac government plans. No judicial review or lawsuits on health care decisions.” Rosanna Pulido of The Minuteman Project spoke about “The Cost of Unfettered Immigration.” “What is going on in the U.S. is directly related to our lack of security on the borders and us supporting 30 million illegal aliens,” Pulido suggested. “[State Rep.] Dave Winters sponsored driver’s certificates for illegal aliens. … Congress imports 125,000 foreign workers a month—competing for our jobs!” Pulido also criticized the temporary help agencies, which she said sometimes offer a “back-door” approach for illegal aliens, who are then referred to other employers. “Who’s minding the temp agencies to see that they are observing the law against hiring illegals?” she asked. Pulido urged people to question their legislators about taking action on the issue. The three sponsoring organizations for the Feb. 20 event were: Rockford Tea Party ([email protected]); Northern Illinois Tea Party ([email protected]); and Stephenson County Tea Party ([email protected]). From the Mar. 3-9, 2010 issue
In an article by Dale Killingbeck, Californian Ray Nelson explained how he invented the propellor beanie. It happened in 1947 when Nelson was a high school sophomore in Cadillac, Michigan. He invited some local science fiction fans over to his house, and they had fun taking futuristic hero/alien photos in the style of pulp covers. A hero costume had to be improvised. Nelson remembered the historic occasion: "I said, 'Wait a second,' and I dashed up to my room. In a frenzy, I stapled together a little cap made of strips of plastic and affixed a model airplane propeller to it on a wire, putting a few beads on the wire first so the propeller could spin freely." George Young, in the hero role, donned the revamped beanie, took it home and later wore it to a science-fiction convention. While visiting relatives in California, Nelson won a contest by designing a character wearing a beanie. Soon, anyone who owned a TV set was watching Bob Clampett's Time for Beany (with voices by Stan Freberg and Daws Butler). "I never bothered to patent it. I never made a dime off it," said Nelson. Later, Nelson drew numerous fanzine cartoons with the propellor beanie, as he recalled: "I and other amateur cartoonists began drawing cartoons in which the propeller beanie was the symbol of science fiction the way the yarmulke is the symbol of the Orthodox Jew... It used to be science fiction fans wanted to wear them, but now computer people want to wear them. They are very popular with people out here with Mac computers."

In addition to Ray Nelson, there was also physicist Sidney Richard Coleman, born in 1937, who wore a propellor beanie. "His mother had to put a beanie with a propeller on him, for when he walked across the street," cousin Rick Shanas said. "He wouldn't be paying attention, he'd just be so wrapped up in his thoughts, that she thought someone might hit him. He'd get noticed with the beanie."

Retracing this somewhat sketchy history, the clever writer-illustrator and art historian Dan Steffan probes to uncover more details, fanzine fandangos and mimeograph madness, plus puppetry puzzlements and cartoonery connections:

I've long been a fan of Ray Nelson's and was lucky enough to get him to draw for me back when I was publishing fanzines in the 1970s, 1980s and 1990s. He is a fannish treasure. I love his fanwriting and, of course, his cartoons are iconic. I think he's up in the big fannish three along with Arthur Thomson and Bill Rotsler. The three of them defined fannish cartooning. While I recall having read other accounts of how Ray "invented" the propeller beanie, I don't think I ever heard the part about the Time for Beany contest before, but I just may have forgotten it over time. That's pretty cool, if it's true. All the histories I've found mention the television show starting on February 28, 1949, which means the puppet was built by that time. Ray says that he thought it up in Michigan in 1947, so the time frame is correct. He would have had to visit his relatives in California sometime in 1948 to enter the contest. And since Time for Beany was initially shown on local LA television, his relatives had to have lived in the Los Angeles area. Also possible. I know he has always been acknowledged for having introduced the propeller beanie into fandom and made it the symbol of fannish fans. It was widely being drawn by several fan cartoonists by the late 1950s--like Bjo, LeeH Hoffman, Dave Rike and ATom--as well as Ray.
Dick Eney's Fancyclopedia 2 (1959) doesn't have an entry for the Propeller Beanie, but it does have an entry about "The Beanie Brigade," which credits Bob Bloch with giving that nickname to a group of young fanboys who were running around the 1949 worldcon, Cinvention (Cincinnati, Ohio, September 3-5, 1949). Bloch called them "an army of goons wearing beanies, false beards and Buck Rogers blasters." The entry intimates, however, that this "army" may have been nothing but two fans--Art Rapp, who wore a large fake beard to the con, and George Young, who wore a propeller beanie. Young is the fellow in the photograph that accompanies Ray's article about inventing the beanie. And I suspect that the photo was probably taken at the Cinvention, and that the beanie he's wearing is the same one Ray made and gave to him.

Neat account of the propellor beanie. Robin White made me a propellor beanie, in the 70s I think, and after a while I found a battery-powered spinning bowtie. I never could see how that would work - there's too much in the way where a bowtie goes - but it was easily converted to spin the propellor. It used a control box with one C-cell. I let my hair grow over my collar to hide the wire and wore it to a con. Ned Brooks
Much has been written about Fort William, Port Arthur and the current city of Thunder Bay. The Schreiber to Thunder Bay run over the Nipigon Subdivision was never as interesting to me as the "East End". My favourite trips were between Schreiber and White River through the wilderness of the Heron Bay Subdivision. On the highway signs of the time, Schreiber had a population of 2000 and White River had a population of 1200. With further streamlining of railway operations in the area, both of these towns are a little less crowded than they were in the late 1970s.

How did these little railway towns start? In the late 1800s, several surveys were run through the country north of Lake Superior, as politicians were musing about an all-Canadian railway route to unite 'the east' with the tiny colony of British Columbia. The proposed railway, and settlement of the land along its route, would help ensure that the traditional Hudson's Bay Company lands and the rest of 'the west' did not become part of the United States. Some very interesting books were written at this time about how to organize railway surveying expeditions into the wild. Pack horses, dried food (hunting was useless as a source of food), simple optical and analog instruments, and significant backwoods skills to survive and survey in the middle of nowhere were required. There were no warm clothes and tents made out of synthetic fibres, no cell phones, no GPS, no police, no hospitals, no doctors. There was only the occasional fur trading post to provide any needed help. Serious injury or starvation could change your plans for the rest of your life pretty quickly. The job was to locate a nearly flat railway line as economically as possible. This was difficult because there were only hundreds of miles of rounded granite mountains, lakes, and swamps to build through. The best steam locomotives of the era could run about 125 miles before they needed more fuel (coal) and fresh crews. So, along the north shore of Lake Superior, the railway builders needed to establish at least two townsites (called "division points") which met the following criteria:
- Flat land on which to build long railway yard tracks - to receive the trains and to switch their cars.
- Adequate flat space for stations, locomotive and car shops, coaling towers, worker housing, 'rest & exercise' stockyards, and community buildings.
- Abundant fresh water for locomotives and people.
These locations had to be at roughly 125 mile intervals from the established settlement of Fort William and other planned railway terminals farther to the east.

MAP: Lake Superior and the CPR. Fort William; Schreiber (centre of map); and White River (at right) became the 'railway towns' for the CPR north of Lake Superior.

Building along the shore of Lake Superior in 1883-1885 was more effective than building inland:
- Steam-powered ships could bring in rails, additional workers, food and supplies between winter freeze-ups.
- Construction was completed more quickly because shoreline building could be started at many landing points. Furthermore, building could proceed both east and west of each particular landing. (Contrast this to building west from the single 'railhead' on flat prairie land.)
- Given the difficulty of blasting through granite, inland construction done only from 'the end of track' would have probably run afoul of the company's guarantees to the government to complete the line on schedule.
- However, powerful Lake Superior storms quickly washed away the broken granite 'fills' for the roadbed along the shore if they weren't built carefully.
- However, powerful Lake Superior storms quickly washed away the broken granite 'fills' for the roadbed along the shore if they weren't built carefully.

White River: Home away from home

My eastern 'objective terminal' in 1977. Dr. W.G. Houston was White River's physician from 1933 until his death in 1966. In 1985, the centennial of White River and the railway line, his wife Mary (Whent) Houston put together a wonderful, comprehensive pictorial history of White River. Using old photographs shared by White River's citizens, it shows all aspects of community life in the little railway town from its very beginnings. Most of the following information on White River's development comes from her book. Mary Houston passed away in December 2005 at the age of 86, having lived all of her life in White River. There are at least two points she would want me to bring to your attention. First of all, CPR records show it was NEVER officially named 'Snowdrift' - always White River. In 1937, a record 13.1 feet of snow fell there. More importantly, White River was long advertised as 'the coldest spot in Canada' at -72 degrees Fahrenheit. This is not correct because that reading came from a shattered thermometer. The lowest temperature recorded was -61.2 degrees F on January 23, 1935 and this is proven in her book through the use of official weather records. I'd like to elaborate that a more accurate White River title would be: 'The coldest spot in Canada ... of those which provided daily telegraph reports of their temperatures ... back then ... which were regularly published in newspapers ' This doesn't fit on a souvenir T-shirt, though. |White River, Ontario, in its earliest days| The caption reads: "First CPR company houses and lodging house built on the river bank with view of rail yard and station. Later these houses were moved across the tracks to the east end of town which became known as 'Little England'. Photo: Courtesy M. Leadbeater" The photo may be from around 1890. In 1911, White River is shown as the headquarters for District No. 2 of the Lake Superior Division. District No. 2 included 516.6 miles of mainline. Then, it included the following Subdivisions: - Cartier to Chapleau - Chapleau Sub - 137.4 mi - Chapleau to White River - White River Sub - 131.8 mi - White River to Schreiber - Schreiber Sub - 118.9 mi - Schreiber to near Port Arthur - Nipigon Sub - 128.5 mi The Chief Train Dispatcher and six train dispatchers were at White River along with the Division Superintendent. There was an Assistant Superintendent at Chapleau and a Trainmaster at Schreiber. White River station enlarged - photographed in 1907. Division offices and the dispatchers were upstairs. The train order and register office was at the far end. District No. 1 of the Lake Superior Division ran from Chalk River to Cartier, both in Ontario, and included the line to Sault Ste Marie and the US. Schreiber received a fancy new station around 1924 and this was likely coincident with the change in the location of the railway's District headquarters. In 1977, White River was where we waited for our turn to work the next train back home to Schreiber. As you can tell, the land is quite flat ... and the meandering White River flows right by the yard and shops. In May 1936 and May 1979, the White River flooded the town. |Map of White River, Ontario| - Locomotive and car shops and a passenger station with business offices. - A coaling facility and water tank. - Worker housing and a CPR rooming house.
- A building for storage of block ice cut in the winter - which was used for freight car ice bunker refilling, and likely some local refrigeration during the summer. - Stock pens to feed and exercise entire trains of live cattle coming east for slaughter (before reliable refrigeration and the ability to freeze meat for transport). Other livestock, such as sheep and horses, were also sent across the country. In the late 1970s, loaded stock cars were still coming east, marshalled at the headend of some of 'my' freights. However, the stock pens were demolished in 1976 because the Winnipeg to Toronto journey could be made without cattle rest stops within the federally required 40-hour period. The pens were located where it says "First Ck" on the map, by Little Lake. When in White River, stay at ... White River once had a YMCA which served as a pleasant centre for townspeople to meet socially and a place for visitors to stay. Its main function was providing rest facilities for off duty train crews and junior railway personnel from out of town. It burned down in the 1950s. On the same location, CP built a bunkhouse where we slept and/or waited until our trains back home to Schreiber arrived. This bunkhouse is located at the bottom tip of the red shaded area of the map. White River CPR bunkhouse. The original CPR bunkhouse is the brown building, and we slept behind in the green "portables".

Before the 1960s (approximately) particular vans (cabooses) were assigned to particular conductors. When away from home and off duty (e.g. at White River) the conductor and the two brakemen slept on wooden benches - with mattresses and bedding - in the van. But the engine crew - the engineer and fireman - slept in the brown building above when off duty and away from home. These arrangements reflected the traditional practices of the railway and the separate collective agreements covering train and engine crews. Consider a single typical eastbound freight running from Winnipeg to Montreal ... Ten (5 + 5) crew members were required to move it from Schreiber to Chapleau. At White River, the eastbound's Schreiber crew rested and would return home to Schreiber on a later westbound train. At White River, a rested Chapleau crew would take over this particular eastbound freight, thus returning home to Chapleau. |Kwik Van Location| |Train crew during off-duty time. Inside a steam-era caboose on an American road with the train crew - washing dishes.| Later in history, when the van was left attached to the through freight train, and not assigned to the conductor living in Schreiber or Chapleau ... the conductor and brakemen bunked in with the engine crews at the enlarged bunkhouse. A big happy family. Getting back to the White River bunkhouse in 1977 ... Each room had its own desk, chair and bed. The place had showers, cooking facilities and satellite TV. On arrival, you simply wrote your name beside any room number on the chalkboard where the 'name space' was blank. When the crew callers came for us, they had four names (diesel era) for the ordered train. They looked for your name ... and came to your room if you weren't in the 'common room' watching the Peter Gzowski midnight talk show on CBC TV. At your room, the crew caller would: - Go to your room and bang on the door. - Immediately open the door. - Immediately turn on the light and walk right in. - Say something like "Gagnon, extra west at 0300". - Squint at you for a second to satisfy themselves you were past the point of going back to sleep.
- Leave, closing the door and leaving the light on. - You then got ready and reported to the station to take your westbound freight train back to Schreiber at 3 AM. |White River, Ontario in the 1970s| Here is White River in the 1970s. The Trans-Canada Highway runs across the top of the town on this postcard and you can see the souvenir shops and services which sprang up in the 1960s to sell those "coldest spot in Canada" T-shirts to tourists. You can also imagine that when The White River (lower left and bottom) flooded there would be wet basements all around. |CP Rail White River, Ontario, railway yard looking east in 1984| Looking back from the dome car on the westbound Canadian in 1984, you can see the station and offices to the left of our track, then the yard, and the water and fuel facilities to the right. At this point, White River was losing its ability to repair locomotives and cars to larger facilities such as Thunder Bay.

In the White River yard office

In researching this piece, I was reminded of one of the nicest experiences I had during my Lake Superior effort. For a short while I worked as an 'intern' at the White River yard office with the night shift machine operator Bob Mura. He taught me how to work on the old IBM teletypes - which produced a punched tape record of a train's cars (a train consist). These long tapes were wrapped in a figure-eight motion around your thumb and pinkie finger and hung up on pegs to be fed into the machines for later transmission to Montreal to update their car control computer. Today, you could easily record the data from rooms full of these primitive paper "storage media" on a single USB key. |Baudot five-unit automatic code punched in tape| (This 6 Hertz 'processor speed' sounds about right as I remember the machines) One cold dark winter night, the powerful White River yard office radio crackled with an engineer's call that their freight had just put fifty ("five nought") cars in the bush to the east of White River. "Man, that's railroading!!" said Bob Mura. Whenever there was a derailment, the trains would all cram into terminals like White River to wait for hours or days until the line could be reopened. Stranded train crews at the bunkhouse were sometimes called for duty just to refuel the diesel-powered refrigeration units of similarly stranded semi-trailers and shipping containers travelling on flatcars. |Eddie Doyon's retirement at White River station. Photo courtesy of Jason Cottom.| Edna & Ed Doyon, R.J. Mura (dark glasses), Bill Card, Charlie Linklater, Ernie Gionet, Tommy Hogan, Mac McLeod, Irvin Baziuk, Bob Roffey. I am very grateful to Bob Mura's grandson for reading my memories of White River, contacting me, and sending this photograph (April 2009). The event shown dates back over 30 years and I recognize many of the faces. I think the facial expressions here communicate a lot to you about my experiences in White River. Bob Mura taught me how to use the IBM teletypes during the night shifts. Assistant Superintendent Tees had suggested I could stay at the bunkhouse during the training. I was preparing for a summer relief vacancy at a station to the east. Bob was a good patient teacher - even when the Montreal computer repeatedly rejected the train consist tapes I had typed off-line and then submitted on a live machine. He explained to me that the running trades required a 'different kind of cat', and that life on the road could be very demanding, with significant stress and time away from home.
In our short time together, he certainly made me feel much better about some decisions I had made. Eddie's Retirement wasn't the only 'end of career event' which Bob attended. I departed White River (and CP Rail) for eastern Ontario on Train Number 2 in the middle of the night. It was clear and cold, and the patches of snow remaining on the hard asphalt platform squeaked underfoot. Bob came out with me in his shirt-sleeves to see me off and waited there with the station lights behind him as I boarded. I'll always remember his friendliness and support during those last few days. A scan of a 1958 CPR Spanner magazine. You can see the figure-8 wound tape consists - usually one for each train - on the varnished wood pegboard. And there's Bob at the centre of the photograph. After seeing this photo, Jason Cottom advised me that P.E. Linklater was R.J. Mura's father-in-law. Many railroaders have nicknames and Yardmaster Linklater was known as 'Hot Cakes'. If you are doing the math at home ... this is a photo of Jason's grandfather and great grandfather. |Clarence Cottom retirement from Mary Houston's history of White River| Photo: Courtesy J. Dillabough This is one of my favourite photos from Mary Houston's 1985 book. It shows Jason's other grandfather. Sometimes cakes or other mementos were decorated with the locomotive number of a retiree's last run. Here is a 1932 snapshot of another Angus Shops built engine of the same class as Clarence Cottom's 'last run'. |CPR locomotive 2222 at Toronto, 1932|

White River flooding 1936

|Flood - May, 1936. Clem Cowan and Mary Whent paddling on Winnipeg Street at White River| The tallest structure between them is the locomotive coaling tower. It must have been quite a project to pull together, safeguard, sort, select, caption, reproduce, and return a collection of photos to represent White River's 100 years of history in 1985. Mary Houston (shown above as a teenager) and the citizens of White River have made a great contribution to the understanding of 'the human experience' of railroading and living together in a small close-knit CPR community from its earliest days.

My 'home terminal' in 1977

In Schreiber, there is an Ontario historic plaque which reads: |Sir Collingwood Schreiber 1831-1918| James Isbester, CPR contractor Photo from Rolly Martin - Born: Orkney Islands and came to Canada as a youngster when his family settled near Woodstock. - As a young man, worked as a mechanical engineer on the Great Western Railway (Canada). - Began contracting experience in 1869, building a portion of the Rimouski Bridge under Alexander Macdonald. - Subcontractor on the Intercolonial Railway. - Built Section B of the CPR in partnership with Manning and Macdonald - along the north shore of Lake Superior. - Contracted with R.G. Reid (of the Newfoundland Railway) on the Cape Breton extension of the Intercolonial Railway. - Completed a contract on the Crow's Nest Pass in the spring of 1899. - Died of complications from diabetes during a trip to inspect a contract in the Rainy River district. - 'Ardent Conservative', Presbyterian, and 'warm personal friend' of Sir John A. Macdonald. From the text of an obituary supplied by Brian Westhouse, December 2005 |Little Pic River bridge| |CPR Schreiber station was initially located on the north side of the mainline.| Photo circa 1890-1900. Schreiber station on the north side of mainline. Schreiber before I went on brake. Where do we start with all the interesting details?
There is a brakeman on the roofwalk at the three livestock cars just ahead of the conductor's van. At the edge of civilization at the rear of the photo ... left to right ... shop (?) ... passenger station and freight shed ... food and boarding facilities (?) ... ice house (?). |Schreiber, Ontario showing railway livestock pens| |Schreiber, Ontario topographic map, 1977| During railway construction, nearby Rossport and Jackfish harbours would have been more important than Schreiber for heavy supply. As you can see from the map, Schreiber is isolated from Lake Superior, except for the trail down to Schreiber Beach - i.e. Isbester's Landing - for supply boats during early construction. The contour lines show that the town and its facilities are ringed by high hills and that the yard tracks had to be bent around these hills in order to be located on relatively flat land. The blue squares are 1000 metres across. Schreiber existed for the railway, but gradually a more balanced community developed. Here is a Schreiber news column from the Fort William Journal of July 18, 1894 - nine years after the line was completed. |Schreiber Scribblings 1894 newspaper column| "The Strike" referred to above was the 50,000 worker Pullman Strike in Chicago which had been called by Eugene V. Debs and the American Railway Union because sleeping car workers' wages had been cut 25% and their union representatives fired. An injunction was obtained by the U.S. attorney general (who also happened to be a director on two railroad boards), and U.S. President Cleveland sent in troops to enforce the injunction (34 strikers dead, hundreds of railway cars burned by strikers) on July 4, 1894. This violence occurred a week or so before the newspapers would have reached Schreiber if they had not all 'been disposed of' from CPR train Number 2 as recorded above. The railways were BIG business and you didn't fool around with them. CPR officials had no interest in keeping local workers in a company town up to date on strikes on other railways. Speaking of officials ... from the same edition : |Fort William journal July 18, 1894| Schreiber's Italian Community Immigrants from many countries and many other parts of Canada have come to call Schreiber home over the years. The most noted group all had an interesting common background. Around 1905, Cosimo Figliomeni arrived in Schreiber from Siderno, Italy, beginning a sequence of 'chain' migration of families from Siderno to Schreiber. Put simply, 'chain migration' is knowing someone who can help you find a place to live and help you get a job - then you help someone you know, etc. Back then, the railway was a very labour intensive business. It is hard to imagine how it could have functioned without the contribution of newcomers to Canada who often took on unpleasant, dangerous, lonely and demanding jobs to become established in this harsh and challenging country. All year, the track would need to be patrolled and maintained with heavy repairs being performed during the summer. This would employ hundreds of workers on the Division. In winter, with the roadbed frozen, shimming would be performed to correct minor track surfacing defects. Cold and brittle rails breaking under the pounding of trains would need to be replaced. Snowstorms would bring a great demand for switch cleaning to keep the yards and sidings functioning. Slides of rock, snow, and ice would need to be cleared. 
Inevitably, trains suddenly coming into contact with winter track defects would require labour to clear derailments and rebuild the track. |Cleaning switches in White River yard| White River yard switch cleaning in the 1930s - from 'Pictorial History of White River' To maintain safety, trackwalkers were used in a number of lonely areas on the line which were likely to be struck by rock and snow slides. The waves and ice of Lake Superior storms could also attack the right of way. These workers would be out in all hours, and in all weather, to inspect the track and signal trains that it was safe to proceed if all was well. This was particularly important before the passage of a passenger train. Another lonely job would be to maintain the water tanks used to refill steam locomotive tenders. All year, the water pumps to fill the tanks would need to be operated. During the winter, the fires in heaters at the base of the tanks would also need to be maintained to keep the tanks from freezing. Spring thaw, heavy rain and beaver dam flooding repairs; ditch and culvert maintenance; brush clearing; bridge and building maintenance; locomotive servicing and car repairs; inspecting journals and topping them up with oil; hauling blocks of cooling ice; maintaining kerosene markers and switchlamps ... there seems to be no end to the list of jobs the railway needed done. Today, descendants of the immigrants from Siderno are said to make up half of the population of Schreiber. To understand the history of Schreiber is to understand the contribution of those who worked under the most difficult and dangerous conditions to keep the trains rolling. |Winter scene, postcard, Schreiber, Ontario| The message mailed to Red Deer, Alberta reads: "This is pretty true to life. Gee I think all the snow in the world is here, and more to come and it's cold." Schreiber Grows and Changes Begun in the 1930s, the Trans-Canada Highway was not completed in the region as a through road until 1960. So the early development of both Schreiber and White River was centred around the railway station and yards, beginning with the CPR's completion in 1885. - Water transportation was minimal because neither had good harbours on Lake Superior. - There were few roads and no through roads in the early years. - There were few motor vehicles then. - The towns existed only for the railway as single industry towns. The railway provided Schreiber with: - Some company housing, particularly worker dormitories and houses for officials who were transferred to Schreiber. - Coal oil street lighting and eventually a diesel motor generated electrical system in 1936. - Telegraph service with the outside world with some telephones in the late 1930s. Social services were provided mainly by the churches. The YMCA was one place where some community events were held and it was the place where travellers and transferred or "bumped" railway workers could get a room. Retail stores, their goods transported in by the railway, also became established. My brother obtained the following photocopies of diagrams for me years ago. They show the plans for the proposed station at Schreiber, dated 1924. They're not presented because they are beautiful ... but because I think others should have and preserve the history of the CPR line north of Superior. It was this building which made Schreiber the headquarters for the division. With this came the salaried positions which - I was told in 1977 - made Schreiber 'the town with the highest per capita income in Ontario'. 
|Proposed CPR station, Schreiber, 1924.| |Proposed CPR station, Schreiber, 1924.| With the completion and opening of the Trans-Canada Highway from Sault Ste Marie to the lakehead in September 1960, the Schreiber I knew evolved. To me, the most memorable features of Schreiber were: - The friendliness of the people - everyone was very helpful to a teenager arriving in town to 'go on brake'. - The importance of the railway to Schreiber and vice versa (i.e. the division offices and the shops to repair rolling stock). - The silence of the town and the height of the snowbanks - as I walked to work in the middle of the night in a snowstorm. |Spadoni Brothers advertisement circa 1965| |Spadoni Brothers advertisement circa 1965| With the opening of the Trans-Canada, 'getting over the road' took on a new meaning. Back in these days professional salaries were probably around $1000 per year. Old cars are always interesting, but also take note of the local telephone exchange in this 1965 flyer. |Schreiber, Ontario and the Trans-Canada Highway| About 500 people were employed by CP Rail when I worked there in the late 1970s. About 120 more were employed by the expanding Kimberly-Clark mill in nearby Terrace Bay and there was a housing crunch in Schreiber as construction workers crowded in to every available lodging. Schreiber - a final trip back through time The dispatchers, locomotive and car shops, division offices, and half of the running trades employees (the trainmen) are no longer to be found in Schreiber. Today, the Schreiber station stands as a reminder of the thousands of railroaders who called Schreiber home over the years. |Schreiber, Ontario CP Rail station, 1980s| Travelling back, here is a postcard from sometime in the mid-1960s. The railway is prominent in the postcard photo by Harry R. Oakman of Peterborough. A long summertime Train Number 1 with three locomotives is changing crews at Schreiber station. The roundhouse is gone, but the yard is full of paper cars and the car shop is in business. |Schreiber, Ontario in the 1960s| Let's go back to the late 1930s. There is no Trans-Canada. There is no through road featured in the next postcard. |Schreiber, Ontario 1930s| The westbound mainline - the sole reliable link between eastern and western Canada for the first 30 years of the CPR - quickly disappears among the forested granite hills. Schreiber today is a thoroughly modern town. But in the photo above you can catch a glimpse of its origin. It has been over 125 years since the first of thousands of Schreiber running trades employees began performing the final rituals of departure: pulling themselves up into the hot cabs of waiting CPR locomotives; shouting 'board', giving the headend the 'high sign', and mounting the passenger car steps; or swinging themselves up onto the steps of passing CPR freight train vans. Today, these railroaders continue to provide a link to Schreiber's beginnings. |Schreiber (now Heron Bay) Subdivision 1911, Employee Timetable|
CAPITAL: Dublin (Baile Átha Cliath) FLAG: The national flag is a tricolor of green, white, and orange vertical stripes. ANTHEM: Amhrán na bhFiann (The Soldier's Song). MONETARY UNIT: The euro replaced the Irish punt as the official currency in 2002. The euro is divided into 100 cents. There are coins in denominations of 1, 2, 5, 10, 20, and 50 cents and 1 euro and 2 euros. There are notes of 5, 10, 20, 50, 100, 200, and 500 euros. €1 = $1.25475 (or $1 = €0.79697) as of 2005. WEIGHTS AND MEASURES: Since 1988, Ireland has largely converted from the British system of weights and measures to the metric system. HOLIDAYS: New Year's Day, 1 January; St. Patrick's Day, 17 March; Bank Holidays, 1st Monday in June, 1st Monday in August, and last Monday in October; Christmas Day, 25 December; St. Stephen's Day, 26 December. Movable religious holidays include Good Friday and Easter Monday. An island in the eastern part of the North Atlantic directly west of the United Kingdom, on the continental shelf of Europe, Ireland covers an area of 70,280 sq km (27,135 sq mi). Comparatively, the area occupied by Ireland is slightly larger than the state of West Virginia. The island's length is 486 km (302 mi) n–s, and its width is 275 km (171 mi) e–w. The Irish Republic is bounded on the n by the North Channel, which separates it from Scotland; on the ne by Northern Ireland; and on the e and se by the Irish Sea and St. George's Channel, which separate it from England and Wales. To the w, from north to south, the coast is washed by the Atlantic Ocean. Ireland's capital city, Dublin, is located on the Irish Sea coast. Ireland is a limestone plateau rimmed by coastal highlands of varying geological structure. The central plain area, characterized by many lakes, bogs, and scattered low ridges, averages about 90 m (300 ft) above sea level. Principal mountain ranges include the Wicklow Mountains in the east and Macgillycuddy's Reeks in the southwest. The highest peaks are Carrantuohill (1,041 m/3,414 ft) and Mt. Brandon (953 m/3,127 ft), near Killarney, and, 64 km (40 mi) south of Dublin, Lugnaquillia (926 m/3,039 ft). The coastline, 1,448 km (900 mi) long, is heavily indented along the south and west coasts where the ranges of Donegal, Mayo, and Munster end in bold headlands and rocky islands, forming long, narrow fjordlike inlets or wide-mouthed bays. On the southern coast, drowned river channels have created deep natural harbors. The east coast has few good harbors. Most important of the many rivers is the Shannon, which rises in the mountains along the Ulster border and drains the central plain as it flows 370 km (230 mi) to the Atlantic, into which it empties through a wide estuary nearly 110 km (70 mi) long. Other important rivers are the Boyne, Suir, Liffey, Slaney, Barrow, Blackwater, Lee, and Nore. Ireland has an equable climate, because the prevailing west and southwest winds have crossed long stretches of the North Atlantic Ocean, which is warmer in winter and cooler in summer than the continental land masses. The mean annual temperature is 10°c (50°f), and average monthly temperatures range from a mild 4°c (39°f) in January to 16°c (61°f) in July. Average yearly rainfall ranges from less than 76 cm (30 in) in places near Dublin to more than 254 cm (100 in) in some mountainous regions. 
The sunniest area is the extreme southeast, with an annual average of 1,700 hours of bright sunshine. Winds are strongest near the west coast, where the average speed is about 26 km/hr (16 mph). Since Ireland was completely covered by ice sheets during the most recent Ice Age, all existing native plant and animal life originated from the natural migration of species, chiefly from other parts of Europe and especially from Britain. Early sea inundation of the land bridge connecting Ireland and Britain prevented further migration after 6000 bc. Although many species have subsequently been introduced, Ireland has a much narrower range of flora and fauna than Britain. Forest is the natural dominant vegetation, but the total forest area is now only 9.6% of the total area, and most of that remains because of the state afforestation program. The natural forest cover was chiefly mixed sessile oak woodland with ash, wych elm, birch, and yew. Pine was dominant on poorer soils, with rowan and birch. Beech and lime are notable natural absentees that thrive when introduced. The fauna of Ireland is basically similar to that of Britain, but there are some notable gaps. Among those absent are weasel, polecat, wildcat, most shrews, moles, water voles, roe deer, snakes, and common toads. There are also fewer bird and insect species. Some introduced animals, such as the rabbit and brown rat, have been very successful. Ireland has some species not native to Britain, such as the spotted slug and certain species of wood lice. Ireland's isolation has made it notably free from plant and animal diseases. Among the common domestic animals, Ireland is particularly noted for its fine horses, dogs, and cattle. The Connemara pony, Irish wolfhound, Kerry blue terrier, and several types of cattle and sheep are recognized as distinct breeds. As of 2002, there were at least 25 species of mammals, 143 species of birds, and over 900 species of plants throughout the country. Ireland enjoys the benefits of a climate in which calms are rare and the winds are sufficiently strong to disperse atmospheric pollution. Nevertheless, industry is a significant source of pollution. In 1996, carbon dioxide emissions from industrial sources totaled 34.9 million metric tons. In 2002, the total of carbon dioxide emissions was at 42.2 million metric tons. Water pollution is also a problem, especially pollution of lakes from agricultural runoff. The nation has 49 cu km of renewable water resources. Principal responsibility for environmental protection is vested in the Department of the Environment. The Department of Fisheries and Forestry, the Department of Agriculture, and the Office of Public Works also deal with environmental affairs. Local authorities, acting under the supervision of the Department of the Environment, are responsible for water supply, sewage disposal, and other environmental matters. In 2003, about 1.7% of the total land area was protected, including 45 Ramsar wetland sites. According to a 2006 report issued by the International Union for Conservation of Nature and Natural Resources (IUCN), threatened species included four types of mammals, eight species of birds, six species of fish, one type of mollusk, two species of other invertebrates, and one species of plant. Threatened species include the Baltic sturgeon, Kerry slug, and Marsh snail. The great auk has become extinct. 
The population of Ireland in 2005 was estimated by the United Nations (UN) at 4,125,000, which placed it at number 122 in population among the 193 nations of the world. In 2005, approximately 11% of the population was over 65 years of age, with another 21% of the population under 15 years of age. There were 99 males for every 100 females in the country. According to the UN, the annual population rate of change for 2005–10 was expected to be 0.8%, a rate the government viewed as satisfactory. The projected population for the year 2025 was 4,530,000. The population density was 59 per sq km (152 per sq mi). The UN estimated that 60% of the population lived in urban areas in 2005, and that urban areas were growing at an annual rate of 1.37%. The capital city, Dublin (Baile Átha Cliath), had a population of 1,015,000 in that year. The other largest urban centers (and their estimated populations) were Cork (193,400), Limerick (84,900), Galway (65,832), and Waterford (44,594). The great famine in the late 1840s inaugurated the wave of Irish emigrants to the United States, Canada, Argentina, and other countries: 100,000 in 1846, 200,000 per year from 1847 to 1850, and 250,000 in 1851. Since then, emigration has been a traditional feature of Irish life, although it has been considerably reduced since World War II. The net emigration figure decreased from 212,000 for 1956–61 to 80,605 for 1961–66 and 53,906 for 1966–71. During 1971–81, Ireland recorded a net gain from immigration of 103,889. As of November 1995, more than 150,000 people had left Ireland in the previous 10 years, unemployment being the main reason. The top two destinations were the United Kingdom and the United States. During the 1990s there was a considerable rise in the number of asylum seekers, from 39 applications in 1992 to 4,630 in 1998. The main countries of origin were Nigeria, Romania, the Democratic Republic of the Congo, Libya, and Algeria. Also, during the Kosovo crisis in 1999, Ireland took in 1,033 Kosovar Albanians who were evacuated from Macedonia under the UNHCR/IOM Humanitarian Evacuation Programme. In 2004 Ireland had 7,201 refugees and 3,696 asylum seekers. Asylum seekers came primarily from Nigeria, the Democratic Republic of the Congo, and six other countries. In 2005, the net migration rate was estimated as 4.93 migrants per 1,000 population, up from -1.31 in 1999. Within historic times, Ireland has been inhabited by Celts, Norsemen, French Normans, and English. Through the centuries, the racial strains represented by these groups have been so intermingled that no purely ethnic divisions remain. The Travellers are a group of about 25,000 indigenous nomadic people who consider themselves to be a distinct ethnic minority. Two languages are spoken, English and Irish (Gaelic). During the long centuries of British control, Irish fell into disuse except in parts of western Ireland. Since the establishment of the Irish Free State in 1922, the government has sought to reestablish Irish as a spoken language throughout the country. It is taught as a compulsory subject in schools and all government publications, street signs, and post office notices are printed in both Irish and English. English, however, remains the language in common use. Only in a few areas (the Gaeltacht), mostly along the western seaboard, is Irish in everyday use. In 1995, a national survey found that only 5% of Irish people frequently used the Irish language and only 2% considered it their native tongue. 
About 30% of the population, however, claims some proficiency in Gaelic. According to the 2002 census, about 88.4% of the population were nominally Roman Catholic. The next largest organization was the Church of Ireland (Anglican), with a membership of about 2.9% of the population. About 0.52% of the population were Presbyterian, 0.25% were Methodist, 0.49% were Muslim, and less than 0.1% were Jewish. There are small communities of Jehovah's Witnesses. For ecclesiastical purposes, the Republic of Ireland and Northern Ireland (UK) constitute a single entity. Both the Roman Catholic and Episcopalian churches have administrative seats at Armagh in Northern Ireland. The Presbyterian Church has its headquarters in Belfast. The constitutional right to freedom of religion is generally respected in practice. The Irish Transport System (Córas Iompair Éireann-CIE), a state-sponsored entity, provides a nationwide coordinated road and rail system of public transport for goods and passengers. It is also responsible for maintaining the canals, although they are no longer used for commercial transport. Ireland's railroads, like those of many other European countries, have become increasingly unprofitable because of competition from road transport facilities. There were 3,312 km (2,056 mi) of track in 2004, all of it broad gauge. CIE receives an annual government subsidy. A network of good main roads extends throughout the country, and improved country roads lead to smaller towns and villages. Ninety-six percent of all inland passenger transport and 90% of inland freight are conveyed by road. Bus routes connect all the major population centers and numerous moderate-sized towns. In 2002, there were 95,736 km (59,548 mi) of roads, all of which were surfaced. In 2003 there were 1,520,000 passenger cars and 272,000 commercial vehicles in use. In 2005, Ireland's merchant fleet consisted of 39 vessels of 1,000 GRT or more. The state-supported shipping firm, the British and Irish Steam Packet Co. (the B and I Line), is largely engaged in cross-channel travel between Ireland and the United Kingdom, providing passenger and car ferry services as well as containerized freight services, both port to port and door to door. The Irish Continental Line operates services to France, linking Rosslare with Le Havre and Cherbourg; it also runs a summer service between Cork and Le Havre. Brittany Ferries operates a weekly service between Cork and Roscoff. Other shipping concerns operate regular passenger and freight services to the United Kingdom and freight services to the Continent. There are deepwater ports at Cork and Dublin and 10 secondary ports. Dublin is the main port. As of 2004, Ireland had 753 km (468 mi) of navigable inland waterways, which were accessible only by pleasure craft. In 2004 there were an estimated 36 airports, of which 15 had paved runways as of 2005. Aer Lingus (Irish International Airlines), the Irish national airline, operates services between Ireland, the United Kingdom, and continental Europe as well as transatlantic flights. Many foreign airlines operate scheduled transatlantic passenger and air freight services through the duty-free port at Shannon, and most transatlantic airlines make nonscheduled stops there; foreign airlines also operate services between Ireland, the United Kingdom, and continental Europe. The three state airports at Dublin, Shannon, and Cork are managed by Aer Rianta on behalf of the Ministry for Transport and Power. 
A domestic airline, Aer Arann Teo, connects Galway with the Aran Islands and Dublin. In 2003, about 28.864 million passengers were carried on scheduled domestic and international airline flights. The pre-Christian era in Ireland is known chiefly through legend, although there is archaeological evidence of habitation during the Stone and Bronze ages. In about the 4th century bc, the tall, red-haired Celts from Gaul or Galicia arrived, bringing with them the Iron Age. They subdued the Picts in the north and the Érainn tribe in the south, then settled down to establish a Gaelic civilization, absorbing many of the traditions of the previous inhabitants. By the 3rd century ad, the Gaels had established five permanent kingdoms—Ulster, Connacht, Leinster, Meath (North Leinster), and Munster—with a high king, whose title was often little more than honorary, at Tara. After St. Patrick's arrival in ad 432, Christian Ireland rapidly became a center of Latin and Gaelic learning. Irish monasteries drew not only the pious but also the intellectuals of the day, and sent out missionaries to many parts of Europe. Toward the end of the 8th century, the Vikings began their invasions, destroying monasteries and wreaking havoc on the land, but also intermarrying, adopting Irish customs, and establishing coastal settlements from which have grown Ireland's chief cities. Viking power was finally broken at the Battle of Clontarf in 1014. About 150 years later, the Anglo-Norman invasions began. Gradually, the invaders gained control of the whole country. Many of them intermarried, adopted the Irish language, customs, and traditions, and became more Irish than the Gaels. But the political attachment to the English crown instituted by the Norman invasion caused almost 800 years of strife, as successive English monarchs sought to subdue Gaels and Norman-Irish alike. Wholesale confiscations of land and large plantations of English colonists began under Mary I (Mary Tudor) and continued under Elizabeth I, Cromwell, and William III. Treatment of the Irish reached a brutal climax in the 18th century with the Penal Laws, which deprived Catholics and Dissenters (the majority of the population) of all legal rights. By the end of the 18th century, many of the English colonists had come to regard themselves as Irish and, like the English colonists in America, resented the domination of London and their own lack of power to rule themselves. In 1783, they forced the establishment of an independent Irish parliament, but it was abolished by the Act of Union (1800), which gave Ireland direct representation in Westminster. Catholic emancipation was finally achieved in 1829 through the efforts of Daniel O'Connell, but the great famine of the 1840s, when millions died or emigrated for lack of potatoes while landlords continued to export other crops to England, emphasized the tragic condition of the Irish peasant and the great need for land reform. A series of uprisings and the growth of various movements aimed at home rule or outright independence led gradually to many reforms, but the desire for complete independence continued to grow. After the bloodshed and political maneuvers that followed the Easter Uprising of 1916 and the proclamation of an Irish Republic by Irish members of Parliament in 1919, the Anglo-Irish Treaty was signed in 1921, establishing an Irish Free State with dominion status in the British Commonwealth. 
Violent opposition to dominion status and to a separate government in Protestant-dominated Northern Ireland precipitated a civil war lasting almost a year. The Free State was officially proclaimed and a new constitution adopted in 1922, but sentiment in favor of a reunified Irish Republic remained strong, represented at its extreme by the terrorist activities of the Irish Republican Army (IRA). Powerful at first, the IRA lost much of its popularity after Éamon de Valera, a disillusioned supporter, took over the government in 1932. During the civil violence that disrupted Northern Ireland from the late 1960s on, the Irish government attempted to curb the "provisional wing" of the IRA, a terrorist organization that used Ireland as a base for attacks in the north. Beginning in 1976, the government assumed emergency powers to cope with IRA activities, but the terrorist acts continued, most notably the assassination on 27 August 1979 of the British Earl Mountbatten. The Irish government continued to favor union with Northern Ireland, but only by peaceful means. In November 1985, with the aim of promoting peace in Northern Ireland, Ireland and the United Kingdom ratified a treaty enabling Ireland to play a role in various aspects of Northern Ireland's affairs. On 10 April 1998 the Irish Republic jointly signed a peace agreement with the United Kingdom to resolve the Northern Ireland crisis. Ireland pledged to amend articles 2 and 3 of the Irish Constitution, which lay claim to the territory of the North, in return for the United Kingdom promising to amend the Government of Ireland Act. On 22 May 1998, 94.4% of the electorate voted in a referendum to drop Ireland's claim to Northern Ireland. A year after the agreement, several key provisions of the Good Friday Agreement had been implemented. The peace process has since then witnessed long moments of gloom in spite of the ongoing involvement of the British and Irish prime ministers in trying to resolve the situation in Northern Ireland. One of the largest obstacles was the disarmament of the IRA and the reservations on the part of the Ulster Unionists about sharing power with Sinn Feìn, the political arm of the IRA. Finally, in May 2000, the IRA proposed that outside observers be shown the contents of arms dumps and reinspect them at regular intervals to ensure that weaponry had not been removed and put back into circulation. The Ulster Unionists agreed to power-sharing arrangements and to endorse devolution of Northern Ireland. Decommissioning of the IRA did not progress in early 2001, however, and David Trimble, the first minister of the power-sharing government, resigned in July 2001. Sinn Feìn's offices at Stormont, the Northern Ireland Assembly, were raided by the police in October 2002, due to spying allegations. On 14 October 2002, devolution was suspended and direct rule from London returned to Northern Ireland. Elections planned for the assembly in May 2003 were indefinitely postponed by British Prime Minister Tony Blair, due to a lack of evidence of peaceful intentions on the part of the IRA. Talks aimed at restoring devolved government in 2004 failed due to the continued IRA possession of illegal arms and its refusal to disband and pull out of illegal activities. Progress did not look imminent as of early 2005, when IRA members were implicated in a brutal murder and the organization appeared to be protecting those responsible from prosecution. 
The years since the proclamation of the Irish Free State have witnessed important changes in governmental structure and international relations. In 1937, under a new constitution, the governor-general was replaced by an elected president, and the name of the country was officially changed to Ireland (Éire in Irish). In 1948, Ireland voted itself out of the Commonwealth of Nations, and on 18 April 1949, it declared itself a republic. Ireland was admitted to the UN in 1955 and became a member of the EC in 1973. Ireland, unlike the United Kingdom, joined the European economic and monetary union in 1999 without problem, and adopted the euro as its currency. However, Irish voters in June 2001 rejected the Treaty of Nice, which allowed for the enlargement of the EU. The other 14 members of the EU all approved the treaty by parliamentary vote, but Ireland's adoption required amending the constitution, which stipulated a popular vote. Voter turnout was low (34.8%), and when the treaty was put to Irish voters once again in October 2002, the government conducted a massive education campaign to bring voters to the polls. This time, voter turnout was 48.5%, and 63% of voters in the October referendum approved the Nice Treaty. Ten new EU candidate countries joined the body on 1 May 2004. Ireland has also benefited from progressive leadership. Mary Robinson, an international lawyer, activist, and Catholic, was elected president in November 1990. She became the first woman to hold that office. In 1974, while serving in the Irish legislature, she shocked her fellow country people by calling for legal sale of contraceptives. Her victory came at a period in Irish history dominated by controversy over the major issues of the first half of the 1990s: unemployment, women's rights, abortion, divorce, and homosexuality. Robinson promoted legislation that enabled women to serve on juries and gave 18-year-olds the right to vote. In 1997, Mary McAleese, who had lived in Northern Ireland, became the first president of the Irish Republic born in Northern Ireland (and thus a British subject); her first term ran until 2004. In March 2002, Irish voters rejected a referendum proposal that would further restrict abortion laws. The vote was 50.4% against the proposal and 49.6% in favor. The vote was a setback to Prime Minister Bertie Ahern. However, Ahern's Fianna Fáil party overwhelmingly defeated the opposition Fine Gael party in the May 2002 elections. In June 2004, local and European elections were held. In October 2004, McAleese won a second seven-year term as president, largely because the opposing parties did not nominate alternative candidates. She will not be eligible to run again in the October 2011 elections. Senate elections were scheduled to occur in July 2007, and elections to the house of representatives were scheduled to be held earlier, in May 2007. Constitutionally, Ireland is a parliamentary democracy. Under the constitution of 1937, as amended, legislative power is vested in the Oireachtas (national parliament), which consists of the president and two houses—Dáil Éireann (house of representatives) and Seanad Éireann (senate)—and sits in Dublin, the capital city. The president is elected by popular vote for seven years. Members of the Dáil, who are also elected by popular suffrage, using the single transferable vote, represent constituencies determined by law and serve five-year terms. 
These constituencies, none of which may return fewer than three members, must be revised at least once every 12 years, and the ratio between the number of members to be elected for each constituency and its population as ascertained at the last census must be the same, as far as practicable, throughout the country. Since 1981, there have been 166 seats in the Dáil. The Seanad consists of 60 members: 49 elected from five panels of candidates representing (a) industry and commerce, (b) agricultural and allied interests and fisheries, (c) labor, (d) cultural and educational interests, and (e) public administration and social services; 6 elected by the universities; and 11 nominated by the taoiseach (prime minister). Elections for the Seanad must be held within 90 days of the dissolution of the Dáil; the electorate consists of members of the outgoing Seanad, members of the incoming Dáil, members of county councils, and county borough authorities. The taoiseach is assisted by a tánaiste (deputy prime minister) and at least six but not more than 14 other ministers. The constitution provides for popular referendums on certain bills of national importance passed by the Oireachtas. Suffrage is universal at age 18. The chief of state is the president, who is elected by universal suffrage to serve a seven-year term and may be reelected only once. The presidency is traditionally a figurehead role with limited powers. The president appoints a cabinet based upon a nomination from the prime minister and approval from the house of representatives. As of 2005, Mary McAleese held the presidential office. The head of government is the prime minister, who is nominated by the house of representatives and appointed by the president. As of 2005 Bertie Ahern was prime minister and had occupied the position since 26 June 1997. A number of amendments having to do with European integration, Northern Ireland, abortion, and divorce have been added to the 1937 constitution, which may only be altered by referendum. A recent referendum in 2004 ended in a 4-to-1 vote that native-born children could not be granted automatic citizenship. The major political parties are the Fianna Fáil, the Fine Gael, Labour, and the Progressive Democrats. Because the members of the Dáil are elected by a proportional representation system, smaller parties have also at times won representation in the Oireachtas. In 1986, Sinn Feìn, the political arm of the Provisional IRA, ended its 65-year boycott of the Dáil and registered as a political party winning one seat in the Dáil in the 6 June 1997 elections. Fianna Fáil, the Republican Party, was founded by Éamon de Valera. It is the largest party since 1932 and has participated in government during 55 of the past 73 years, as of 2004. When the Anglo-Irish Treaty of 1921 was signed, de Valera violently opposed the dominion status accepted by a close vote of the Dáil. Until 1927, when the government threatened to annul their election if they did not fulfill their mandates, de Valera and his followers boycotted the Dáil and refused to take an oath of allegiance to the English crown. In 1932, however, de Valera became prime minister, a position he held continuously until 1947 and intermittently until 1959, when he became president for the first of two terms. From 1932 to 1973, when it lost its majority to a Fine Gael–Labour coalition, Fianna Fáil was in power for all but six years. 
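The single transferable vote mentioned above is, at bottom, a counting algorithm, and it can be made concrete with a small sketch. The code below is a deliberately simplified illustration only (Droop quota, fractional transfer of surpluses, no tie-breaking rules); the official Irish count procedures are more elaborate, and the candidate names and ballots are invented for the example.

```python
from collections import defaultdict

def stv_count(ballots, seats):
    """Toy single-transferable-vote count: Droop quota, fractional surplus transfer."""
    quota = len(ballots) // (seats + 1) + 1            # Droop quota
    weighted = [(list(b), 1.0) for b in ballots]       # (preference list, ballot weight)
    hopeful = {c for b in ballots for c in b}          # candidates still in the running
    elected = []

    while len(elected) < seats and hopeful:
        # place each ballot on the pile of its highest-ranked hopeful candidate
        piles = defaultdict(list)
        for prefs, w in weighted:
            top = next((c for c in prefs if c in hopeful), None)
            if top is not None:
                piles[top].append((prefs, w))
        votes = {c: sum(w for _, w in piles[c]) for c in hopeful}

        over_quota = [c for c in hopeful if votes[c] >= quota]
        if over_quota:
            winner = max(over_quota, key=votes.get)
            elected.append(winner)
            hopeful.remove(winner)
            # the winner's ballots carry on at reduced value equal to the surplus share
            keep = (votes[winner] - quota) / votes[winner]
            weighted = [(p, w * keep) for p, w in piles[winner]]
            for c, pile in piles.items():
                if c != winner:
                    weighted.extend(pile)
        else:
            # nobody reaches the quota: eliminate the weakest, transfer at full value
            loser = min(hopeful, key=votes.get)
            hopeful.remove(loser)
    return elected

# Tiny invented three-seat example (24 ballots, so the quota is 24 // 4 + 1 = 7)
ballots = (
    [["Walsh", "Byrne", "Kelly"]] * 9
    + [["Byrne", "Walsh"]] * 7
    + [["Kelly", "Murphy"]] * 6
    + [["Murphy", "Kelly"]] * 2
)
print(stv_count(ballots, seats=3))   # -> ['Walsh', 'Byrne', 'Kelly']
```

In the toy count, Walsh's surplus over the quota of 7 transfers to Byrne at reduced value, Byrne is elected in turn, and Murphy's elimination sends those ballots on to Kelly, who takes the third seat.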
Fine Gael is the present name for the traditionally center-right party (of the Christian democratic type) and is the second-largest party in Ireland. It grew out of the policies of Arthur Griffith, first president of the Irish Free State, and Michael Collins, first minister for finance and commander-in-chief of the army. W. T. Cosgrave, their successor, accepted the conditions of the 1921 treaty as the best then obtainable and worked out the details of the partition boundary and dominion status. This party held power from the first general election of 1922 until 1932. Since 1948, as the principal opponent of Fianna Fáil, it has provided leadership for several coalition governments. The policies of Fine Gael traditionally have been far more moderate than those of Fianna Fáil, although it was an interparty coalition government dominated by Fine Gael and Labour that voted Ireland out of the Commonwealth in 1948. The Labour party incorporated the Democratic Left into its party in 1998, but still failed to increase its seats in the 2002 election (it is much smaller than Fine Gael). The party moved toward the center under the leadership of Pat Rabbitte. In 1985, a group of parliamentarians broke away from Fianna Fáil because of the autocratic leadership of Charles Haughey. They formed the Progressive Democrats (PDs) party, which supported liberal economic orthodoxy in the 1980s. It joined in a coalition with Fianna Fáil in 1997 and has been influential in economic policy making. In the 2002 elections, two smaller parties increased their seat holdings. Sinn Fein, the political wing of the IRA, added four seats to the one it had won in the Dáil in 1997. The Green Party increased its holdings from two to six seats. It opposed European integration and participation in European security structures. In the general elections of 24 November 1982 (the third general election to be held within a year and a half), Fianna Fáil won 75 seats, Fine Gael 70, and the Labour Party 16. Two members of the Workers' Party and three independents were also elected. Garret FitzGerald was elected taoiseach (1983-1987), heading a Fine Gael-Labour coalition. It was the second time in a year that he had replaced Charles J. Haughey of the Fianna Fáil in that office. In December 1979, Haughey had replaced Jack Lynch as head of his party and become prime minister. The 1987 elections saw Fianna Fáil raise its representation, despite a drop in its proportion of the vote compared to the 1982 elections. Fine Gael and Labour lost seats, while the Progressive Democrats and Workers' Party (which increased its representation from two to four seats) increased their seat holding. In a bitter contest, Charles Haughey was elected taoiseach (1987-1991) and formed a minority Fianna Fáil government. Albert Reynolds was taoiseach (prime minister) from 1991 to 1994. An early general election in 1992 saw the two largest parties—Fianna Fáil and Fine Gael—lose seats to the Labour Party. Albert Reynolds of Fianna Fáil was reelected taoiseach of the Fianna Fáil-Labour Coalition. From 1994 to 1997, John Bruton, of the Fine Gael-Labour-Democratic Left coalition, was prime minister. However, a center-right alliance led by Bertie Ahern of Fianna Fáil defeated Prime Minister Bruton's three-party left-of-center coalition in the 6 June 1997 general election. Although Bruton's own party, Fine Gael, increased its share of the vote, its coalition partners, the Labour Party and the Democratic Left, both lost seats. 
Fianna Fáil won 77 seats outright, 6 shy of the 83 required for a majority. Other parties winning seats were Labor (17), Democratic Left (4), Progressive Democrats (4), Greens (2), Sinn Fein (1), Socialists (1), and Independents (6). Fianna Fáil joined with the Progressive Democrats and Independents to form a new government with Bertie Ahern as taoiseach (prime minister). In 1999, the Labour Party and the Democratic Left merged and the new party is called the Labour Party. The electoral significance of this realignment of the left is not yet clear, but the merger provides the Irish electorate with a more viable social democratic alternative to the governing coalition. Bertie Ahern remained prime minister after Fianna Fáil won 41.5% of the vote on 16 May 2002, capturing 81 seats in the Dáil. Fine Gael won 22.5% of the vote and 31 seats, its worst defeat in 70 years. The Labour Party took 10.8% of the vote and 21 seats. Other parties winning seats were the Progressive Democrats (8), the Greens (6), Sinn Feìn (5), the Socialist Party (1), and Independents (13). The next presidential election was scheduled for October 2011 and the next legislative elections were scheduled for 2007. The provinces of Ulster, Munster, Leinster, and Connacht no longer serve as political divisions, but each is divided into a number of counties that do. Prior to the passage of the new Local Government Act of 2001 and its implementation in 2002, Ireland was divided into 29 county councils, 5 boroughs, 5 boroughs governed by municipal corporations, 49 urban district councils, and 26 boards of town commissioners. Under the new system, the county councils remain the same, but the corporations no longer exist. The cities of Dublin, Cork, Limerick, Waterford, and Galway are city councils, while Drogheda, Wexford, Kilkenny, Sligo, and Clonmel are the five borough councils. The urban district councils and town commissions are now one and the same and known as town councils, of which there are 75. Local authorities' principal functions include planning and development, housing, roads, and sanitary and environmental services. Health services, which were administered by local authorities up to 1971, are now administered by regional health boards, although the local authorities still continue to pay part of the cost. Expenditures are financed by a local tax on the occupation of property (rates), by grants and subsidies from the central government, and by charges made for certain services. Capital expenditure is financed mainly by borrowing from the Local Loans Fund, operated by the central government, and from banking and insurance institutions. Responsibility for law enforcement is in the hands of a commissioner, responsible to the Department of Justice, who controls an unarmed police force known as the civil guard (Garda Síochána). Justice is administered by a Supreme Court, a High Court with full original jurisdiction, eight circuit courts, and 23 district courts with local and limited jurisdiction. Judges are appointed by the president, on the advice of the prime minister and cabinet. Individual liberties are protected by the 1937 constitution and by Supreme Court decisions. The constitution provides for the creation of "special courts" to handle cases which cannot be adequately managed by the ordinary court system. The Offenses Against the State Act formally established a special court to hear cases involving political violence by terrorist groups. 
In such cases, in order to prevent intimidation, a panel of judges sits in place of a jury. The judiciary is independent and provides a fair, efficient judicial process based upon the English common law system. The power of judicial review makes the judiciary a vital check on the power of the executive in Ireland; it can declare laws unconstitutional both before and after they have been enacted. Typically, however, the relationship between the judiciary and the other two branches of government has been untroubled by conflict. The Supreme Court has affirmed that the inviolability of personal privacy and the home must be respected in law and practice, and this is fully respected by the government.

Revelations about corruption by leading politicians forced the government to set up an independent tribunal. It investigated payments to politicians, especially to the former prime minister Charles Haughey, who had received large sums of money from businessmen for his personal use. Supreme Court judge Hugh O'Flaherty was forced to resign over his handling of a dangerous driving case in 1999. His case provoked much public outrage after it was discovered that the government had quickly boosted his annual pension prior to his resignation.

The Irish army and its reserves, along with the country's air corps and navy, constitute a small but well-trained nucleus that can be enlarged in a time of emergency. In 2005, the active defense force numbered 10,460, with reserves numbering 14,875. The army had 8,500 active personnel equipped with 14 Scorpion light tanks, 33 reconnaissance vehicles, 42 armored personnel carriers, and 537 artillery pieces. Navy personnel totaled 1,100 in 2005, and major naval units included eight patrol/coastal vessels. The air corps consisted of 860 personnel, outfitted with two maritime patrol and three transport aircraft; it also operated two assault and 11 utility helicopters. Ireland provided support to UN, NATO, and European Union peacekeeping or military operations in 10 countries or regions. The defense budget in 2005 was $959 million.

Ireland, which became a member of the United Nations on 14 December 1955, belongs to ECE and several nonregional specialized agencies, such as the FAO, UNESCO, UNHCR, IFC, the World Bank, and WHO. On 1 January 1973, Ireland became a member of the European Community (now the European Union). The country is also a member of the WTO, the European Bank for Reconstruction and Development, the Paris Club, the Euro-Atlantic Partnership Council, and the OSCE. Ireland is a founding member of the OECD and the Council of Europe. The country also participates as an observer in the OAS and the Western European Union. Irish troops have served in UN operations and missions in the Congo (est. 1999), Cyprus (est. 1964), Kosovo (est. 1999), Lebanon (est. 1978), Liberia (est. 2003), and Côte d'Ivoire (est. 2004), among others. Ireland is a guest of the Nonaligned Movement. It is also a part of the Australia Group, the Zangger Committee, the Nuclear Suppliers Group (London Group), the Organization for the Prohibition of Chemical Weapons, and the Nuclear Energy Agency. In environmental cooperation, Ireland is part of the Basel Convention; the Conventions on Biological Diversity, Whaling, and Air Pollution; Ramsar; the London Convention; the International Tropical Timber Agreements; the Kyoto Protocol; the Montréal Protocol; MARPOL; the Nuclear Test Ban Treaty; and the UN Conventions on the Law of the Sea, Climate Change, and Desertification.
Until the 1950s, Ireland had a predominantly agricultural economy, with agriculture making the largest contribution to the GNP. However, liberal trade policies and the drive for industrialization stimulated economic expansion. In 1958, agriculture accounted for 21% of the GNP, industry 23.5%, and other sectors 55.5%. By 2002, however, agriculture accounted for only 5% of the total, industry 46%, and services 49%. Ireland's economy was initially slower in developing than the economies of other West European countries. The government carried on a comprehensive public investment program, particularly in housing, public welfare, communications, transportation, new industries, and electric power. Growth rose quickly in the 1960s and, since then, the government has tried to stimulate output, particularly of goods for the export market. Thus, manufactured exports grew from £78.4 million in 1967 to £11,510 million in 1992. In the 1970s Ireland began to approach the income of the rest of Western Europe until it lost fiscal control in the latter part of the 1970s due to the oil crisis. During the early 1980s, Ireland suffered considerably from the worldwide recession, experiencing double-digit inflation and high unemployment. The economy continued to lag through 1986, but the GNP grew 30% between 1987 and 1992, and continued at a yearly pace of about 7.5% until 1996 when it was expected to slow to about 5.25%. However, the Irish economy grew faster than any other in the European Union during the so-called "Celtic Tiger" years of the second half of the 1990s, when growth rates were in double digits. The good economic performance was mainly due to strong consumer and investor confidence and strong export opportunities. Ireland suffered from the global economic slowdown that began in 2001, however, and the average annual growth 2000–04 was 6.1%. Though Ireland started out the decade with a growth rate of 6.2%, it dropped to 4.4% in 2003 and had not regained even a percentage point as of 2005. Although substantially lower than in 1986 when it topped 18%, unemployment remained high until 1998, when it dropped to 7.7%. The estimated unemployment rate in 2005 was 4.2%. The inflation rate stood at 2.4% in 1998 and was 2% in 2003 and 3% in 2004. Inflation was steadily falling, from a rate of 4.9% in 2000 to 2.2% in 2004. Ireland has depended on substantial financial assistance from the European Union designed to raise the per capita gross national product to the EU average. Almost $11 billion was allocated for the period 1993–99 from the EU's Structural and Cohesion Funds. During the 1990s, living standards rose from 56% to 87% of the EU average. In the latter half of the 1990s, the economic situation greatly improved and Ireland recorded growth rates of 7% 1996–2000. Unemployment fell from 16% in 1993 to 5% in 2000. Due to the global economic downturn that began in 2001, however, even Ireland's booming economy slowed. Services, pharmaceuticals, and information technology are important sectors of the economy in the 21st century. The US Central Intelligence Agency (CIA) reports that in 2005 Ireland's gross domestic product (GDP) was estimated at $136.9 billion. The CIA defines GDP as the value of all final goods and services produced within a nation in a given year and computed on the basis of purchasing power parity (PPP) rather than value as measured on the basis of the rate of exchange based on current dollars. The per capita GDP was estimated at $34,100. 
The annual growth rate of GDP was estimated at 4.9%. The average inflation rate in 2005 was 2.7%. It was estimated that agriculture accounted for 5% of GDP, industry 46%, and services 49%. According to the World Bank, in 2003 remittances from citizens working abroad totaled $337 million or about $84 per capita and accounted for approximately 0.2% of GDP. The World Bank reports that in 2003 household consumption in Ireland totaled $54.84 billion or about $13,730 per capita based on a GDP of $153.7 billion, measured in current dollars rather than PPP. Household consumption includes expenditures of individuals, households, and nongovernmental organizations on goods and services, excluding purchases of dwellings. It was estimated that for the period 1990 to 2003 household consumption grew at an average annual rate of 5.6%. In 2001 it was estimated that approximately 21% of household consumption was spent on food, 10% on fuel, 4% on health care, and 7% on education. It was estimated that in 1997 about 10% of the population had incomes below the poverty line. In 2005, Ireland's workforce was estimated at 2.03 million. Of those employed in 2003, an estimated 6.4% were in agriculture, 27.8% in industry, and 65.4% in services. The estimated unemployment rate in 2005 was 4.2%. The right to join a union is protected by law, and as of 2002, about 31% of the labor force were union members. The Irish Congress of Trade Unions (ICTU) represents 64 unions and is independent of political parties and the government. The right to strike, except for police and military personnel, is exercised in both the public and private sectors. Employers are legally prohibited from discriminating against those who participate in union activity. Collective bargaining is used to determine wages and other conditions of employment. Children under age 16 are legally prohibited from engaging in regular, full-time work. Under certain restrictions, some part-time or educational work may be given to 14- and 15-year-olds. Violations of child labor laws are not common. The standard workweek is 39 hours, and the legal limit on industrial work is nine hours per day and 48 hours per week. A national minimum wage of $5.45 went into effect in 2001. About 1,184,000 hectares (2,926,000 acres), or 17.2% of the total area, were devoted to growing crops in 2003. About 6% of the agricultural acreage is used for growing cereals, 1.5% for growing root and green crops, and the balance for pasture and hay. Thus most of the farmland is used to support livestock, the leading source of Ireland's exports. Most farms are small, although there has been a trend toward consolidation. Agriculture accounts for about 10% of Irish employment. In 2003, there were 135,250 agricultural holdings, with a farm labor force of 104,540 full-time and 140,980 part-time workers. Principal crops (with their estimated 2004 production) include barley, 1,159,000 tons; sugar beets, 1,500,000 tons; wheat, 849,000 tons; potatoes, 500,000 tons; and oats, 134,000 tons. Over half of agricultural production, by value, is exported. The benefits of the EU's Common Agricultural Policy, which provides secure markets and improved prices for most major agricultural products, account in part for the increase of Ireland's agricultural income from £314 million in 1972 (before Ireland's accession) to £1,919.9 million in 1995. The estimated value of crop output was €1.3 billion in 2005. 
The government operates a comprehensive network of services within the framework of the Common Agricultural Policy, including educational and advisory services to farmers. Under a farm modernization scheme, capital assistance is provided to farmers for land development, improvement of farm buildings, and other projects, with part of the cost borne by the EU. In 1974, pursuant to an European Community directive, incentives were made available to farmers wishing to retire and make their lands available, by lease or sale, for the land reform program. With some 90% of Ireland's agricultural land devoted to pasture and hay, the main activity of the farming community is the production of grazing animals and other livestock, which account for about 53% of agricultural exports. In 2005, total livestock output was valued at €2.17 billion, with cattle and milk each accounting for around 40%. During 2002–04, livestock output was down 4.5% from 1999–2001. The estimated livestock population in 2005 was 7,000,000 head of cattle (including 1.1 million dairy cows), 1,757,000 pigs, and 12,700,000 poultry. In 2005, butter production was estimated at 142,000 tons, cheese 118,750 tons, and wool (greasy) 12,000 tons. Milk production in 2005 was 5,500,000 tons. Since livestock is a major element in the country's economy, the government is particularly concerned with improving methods of operation and increasing output. A campaign for eradication of bovine tuberculosis was completed in 1965, and programs are under way for eradication of bovine brucellosis, warble fly, and sheep scab. Salmon, eels, trout, pike, perch, and other freshwater fish are found in the rivers and lakes; sea angling is good along the entire coast; and deep-sea fishing is done from the south and west coasts. The fishing industry has made considerable progress as a result of government measures to improve credit facilities for the purchase of fishing boats and the development of harbors; establishment of training programs for fishermen; increased emphasis on market development and research; establishment of hatcheries; and promotion of sport fishing as an attraction for tourists. The Irish fishing fleet consisted of 1,376 vessels with a capacity of 77,888 gross tons in 2002. Leading varieties of saltwater fish are mackerel, herring, cod, whiting, plaice, ray, skate, and haddock. Lobsters, crawfish, and Dublin Bay prawns are also important. In 2003, the value of fish exports was $453.5 million, up 32% from 2000. Aquaculture accounted for 19% of the volume. The total fish production in 2003 was 364,861 tons. Mackerel, herring, and blue whiting accounted for 24% of the volume that year. Once well forested, Ireland was stripped of timber in the 17th and 18th centuries by absentee landlords, who made no attempt to reforest the denuded land, and later by the steady conversion of natural forest into farms and grazing lands. In an effort to restore part of the woodland areas, a state forestry program was inaugurated in 1903; since then, over 350,000 hectares (865,000 acres) have been planted. More than half the planting is carried out in the western counties. In 2000, about 9.6% of Ireland was forested; about 95% of the trees planted are coniferous. The aim of the forestry program is to eliminate a large part of timber imports—a major drain on the balance of payments—and to produce a surplus of natural and processed timber for export. Roundwood removals totaled 2.5 million cu m (88 million cu ft) in 2004. 
Ireland was a leading European Union (EU) producer of zinc in 2003, and an important producer of lead, alumina, and peat. Mineral production in 2003 included zinc, 419,014 metric tons, compared to 252,700 metric tons in 2002; mined lead, 50,339 metric tons, compared to 32,486 metric tons in 2002; and an estimated 1.2 million metric tons of alumina. Other commercially exploited minerals were silver, hydraulic cement, clays for cement production, fire clay, granite, slate, marble, rock sand, silica rock, gypsum, lime, limestone, sand and gravel, shales, dolomite, diatomite, building stone, and aggregate building materials. Zinc production centered on three zinc-lead mines, the Lisheen (a joint venture of Anglo American PLC and Ivernia West PLC), the Galmoy (Arcon International Resources PLC), and the Tara (Outokumpu Oyj), three of Europe's most modern mines. Outokumpu announced that, because of low zinc prices, it was closing the Tara Mine (at Navan, County Meath), the largest lead-zinc field in Europe, and putting it on care and maintenance; the Tara came into production in the late 1970s. The Galmoy Mine was producing 650,000 tons per year of ore at target grades of 11.3% zinc and 1% lead, and the Lisheen Mine, which mined its first ore in 1999 and began commercial production in 2001, initially planned to produce 160,000 tons per year of zinc concentrate, to be increased to 330,000 tons per year of zinc concentrate and 40,000 tons per year of lead in concentrate at full production; both were on the Rathdowney Trend mineralized belt, southwest of Dublin. Cambridge Mineral Resources PLC continued diamond and sapphire exploration work, identifying numerous diamond indicator minerals and recovering significant quantities of ruby and sapphire. Gold was discovered in County Mayo in 1989, with an estimated 498,000 tons of ore at 1.5 grams per ton of gold. There was a marked increase in mining exploration beginning in the early 1960s, resulting in Ireland becoming a significant source of base metals.

Ireland's energy and power sector is marked by a lack of any oil reserves, making the country totally dependent upon imports. However, the country has modest natural gas reserves and a small refining capacity. In 2002, Ireland's imports of crude and refined petroleum products averaged 211,230 barrels per day. Domestic refinery production for that year averaged 65,230 barrels per day. Demand for refined oil products averaged 180,440 barrels per day. Ireland's proven reserves of natural gas were estimated as of 1 January 2002 at 9.911 billion cu m. Output in 2001 was estimated at 815 million cu m, with demand and imports estimated at 4.199 billion cu m and 3.384 billion cu m, respectively, for that year.

Ireland's electric power generating sector is based primarily upon conventional fossil fuels. Total generating capacity in 2002 stood at 4.435 million kW, of which conventional thermal capacity accounted for 4.049 million kW, followed by hydropower at 0.236 million kW and geothermal/other at 0.150 million kW. Total power production in 2002 was 22.876 billion kWh, of which 94% was from fossil fuels, mostly thermal coal and oil stations, 3.9% from hydropower, and the rest from geothermal/other sources. Ireland's coal production consists of high-ash semibituminous coal from the Connaught Field and is used for electricity production. In 2002, Ireland imported 3,148,000 short tons of coal, of which 3,090,000 short tons consisted of hard coal and 58,000 short tons of lignite.
Since the establishment of the Irish Free State, successive governments encouraged industrialization by granting tariff protection and promoting diversification. Following the launching of the First Program for Economic Expansion by the government in 1958, considerable progress was made in developing this sector of the economy, in which foreign industrialists played a significant role. The Industrial Development Authority (IDA) administers a scheme of incentives to attract foreign investment. In addition, several government agencies offer facilities for consulting on research and development, marketing, exporting, and other management matters. Official policy favors private enterprise. Where private capital and interest were lacking, the state created firms to operate essential services and to stimulate further industrial development, notably in the fields of sugar, peat, electricity, steel, fertilizers, industrial alcohol, and transportation. Although efforts have been made to encourage decentralization, about half of all industrial establishments and personnel are concentrated in Dublin and Cork.

Industry grew at an average annual rate of more than 5% from 1968 to 1981, and growth peaked at 12% in 1984 before subsiding to an annual rate of about 4%. The greatest growth was in high-technology industries, such as electronics and pharmaceuticals, where labor productivity was also growing substantially, thus limiting increases in the number of jobs. The most important products of manufacturing, by gross output, are food, metal and engineering goods, chemicals and chemical products, beverages and tobacco, nonmetallic minerals, and paper and printing. The making of glass and crystal is also an important industry.

Industrial production continued to grow into the late 1990s, the "Celtic Tiger" years, posting 15.8% growth in 1998. Industry employed 28% of the labor force in 2000 and accounted for 36% of GDP in 2001. The value of industry output in 2000 was 12.3% higher than in 1999. Computer and pharmaceutical enterprises, largely owned by foreign companies, were responsible for high manufacturing output in 2000. Although there was no formal governmental privatization plan, as of 2002 the government planned to privatize the state-owned natural gas distributor (Bord Gáis), the state-owned airline (Aer Lingus), and the state-owned electricity distributor (ESB). Ireland was shifting attention away from industry and toward services: although activity had been quickened by preferential corporation tax rates for manufacturers, manufacturing was decreasing relative to services and agriculture. Yet in 2004 the industrial production growth rate was 7%.

The major organizations doing scientific research in Ireland are the Agricultural Institute (established in 1958) and the Institute for Industrial Research and Standards (1946). The Dublin Institute for Advanced Studies, established by the state in 1940, includes a School of Theoretical Physics and a School of Cosmic Physics. The Royal Irish Academy, founded in 1785 and headquartered in Dublin, promotes study in science and the humanities and is the principal vehicle for Ireland's participation in international scientific unions. It has sections for mathematical and physical sciences and for biology and the environment. The Royal Dublin Society (founded in 1731) promotes the advancement of agriculture, industry, science, and art. Ireland has 13 other specialized learned societies concerned with agriculture, medicine, science, and technology.
Major scientific facilities include the Dunsink Observatory (founded in 1785) and the National Botanic Gardens (founded in 1795), both in Dublin. Most scientific research is funded by the government; the government advisory and coordinating body on scientific matters is the National Board for Science and Technology. Medical research is supported by the Medical Research Council and the Medico-Social Research Board. Veterinary and cereals research is promoted by the Department of Agriculture. The Department of Fisheries and Forestry and the Department of Industry and Energy have developed their own research programs. The UNESCO prize in science was awarded in 1981 for the development of clofazimine, a leprosy drug produced by the Medical Research Council of Ireland with aid from the Development Cooperation Division of the Department of Foreign Affairs.

Research and development (R&D) expenditures in 2001 (the latest year for which data was available) totaled $1.427 billion, or 1.14% of GDP. Of that amount, 67.2% came from the business sector, with 25.2% coming from the government. Foreign sources accounted for 6%, while higher education provided 1.7%. As of 2002, there were some 2,471 researchers per one million people actively engaged in R&D. In that same year, high-tech exports were valued at $31.642 billion and accounted for 41% of manufactured exports. Ireland has 21 universities and colleges that offer courses in basic and applied science. In 1987–97, science and engineering students accounted for 31% of university enrollment. In 2002, a total of 29.3% of all bachelor's degrees awarded were in the sciences (natural, mathematics and computers, and engineering).

Dublin is the financial and commercial center, the distribution point for most imported goods, and the port through which most of the country's agricultural products are shipped to Britain and the Continent. Cork, the second-largest manufacturing city and close to the transatlantic port of Cobh, is also important, as is Limerick, with its proximity to Shannon International Airport. Other important local marketing centers are Galway, Drogheda, Dundalk, Sligo, and Waterford. The trend in retail establishments has been a shift from small shops owned and operated by individuals to larger department stores, outlets, and chain stores operated by management companies. As of 2002, there were about 52,000 retail and 2,500 wholesale outlets across the country, including about 9,000 retail food outlets. A 21% value-added tax applies to most goods and services.

Office business hours are usually 9 or 9:30 am to 5:30 pm. Shops are generally open from 9 am to 6 pm, although most supermarkets are open until 9 pm on Thursday and Friday. In general, banking hours are 10 am to 12:30 pm and 1:30 to 4 pm, Monday through Friday, and 3 to 5 pm on Thursday. Most offices are closed on Saturday, and shops close on either Wednesday or Saturday afternoon. Businesses may close for extended periods during the months of July and August.

Ireland began opening to free trade in the 1960s. It is now one of the most open economies and, on a per capita basis, one of the largest exporting markets. Growth was heavily encouraged by the export sectors in the 1990s, and average annual export volume growth was near 20% between 1996 and 2000. Computers and office products have become some of Ireland's most profitable export products (28% of exports).
The country also manufactures musical instruments (5.2% of exports), accounting for 12.7% of the world's exports of such goods. Other export items include chemicals such as nitrogen compounds (10.9%), electronic circuitry (5.2%), and medicines (4.9%). As of 2003, the United States absorbed 20.5% of Ireland's exports, the United Kingdom 18.1%, Belgium 12.6%, Germany 8.3%, France 6.1%, the Netherlands 5.1%, and Italy 4.6%. Import partners include the United Kingdom (34.8% of imports), the United States (15.6%), Germany (8.1%), and the Netherlands (4.1%). Imported commodities include data processing equipment, machinery and equipment, chemicals, petroleum and petroleum products, textiles, and clothing.

The volume of Irish exports increased dramatically during 1995–2000, registering average annual growth of 16.9%; the rate of import growth over the same period was only slightly lower at 16.6%. The year 2000 was the first since 1991 that the current account was not in surplus. The reduction of the balance of payments surplus in the early 2000s suggested that the level of Irish imports was increasing due to increased demand for luxury items and services, rather than from a decline in exports. The US Central Intelligence Agency (CIA) reported that in 2002 the purchasing power parity of Ireland's exports was $85.3 billion while imports totaled $48.3 billion, resulting in a trade surplus of $37 billion. The principal balance of payments components were as follows:

|Balance on goods|37,807.0|
|Balance on services|-14,306.0|
|Balance on income|-26,142.0|
|Direct investment abroad|-3,528.0|
|Direct investment in Ireland|26,599.0|
|Portfolio investment assets|-161,319.0|
|Portfolio investment liabilities|106,389.0|
|Other investment assets|-48,864.0|
|Other investment liabilities|84,028.0|
|Net Errors and Omissions|-1,178.0|
|Reserves and Related Items|1,890.0|

Irish export growth during those years, in fact, consistently surpassed EU growth. However, the slowdown in the global economy and the slower-than-predicted growth in the euro area were expected to negatively affect Irish exports.

In 1979, Ireland joined the European Monetary System, thus severing the 150-year-old tie with the British pound. The Central Bank of Ireland, established in 1942, is both the monetary authority and the bank of issue. Its role expanded considerably over time, particularly in monetary policy. Commercial deposits with the Central Bank have increased strongly since 1964, when legislation first permitted it to pay interest on deposits held for purposes other than settlement of clearing balances. Since July 1969, the Central Bank has accepted short-term deposits from various institutions, including commercial and merchant banks. With the advent of the European Monetary Union (EMU) in 1999, authority over monetary policy shifted to the European Central Bank.

The commercial banking sector is dominated by two main Irish-owned groups, the Bank of Ireland Group and the Allied Irish Banks Group. Successive governments have indicated that they would like to see a third banking force (possibly involving a strategic alliance with a foreign bank). Other major banks include the National Irish Bank, a member of the National Australia Bank, and Ulster Bank, a member of the National Westminster Bank Group. The International Monetary Fund reports that in 2001, currency and demand deposits—an aggregate commonly known as M1—were equal to $21.1 billion. In that same year, M2—an aggregate equal to M1 plus savings deposits, small time deposits, and money market mutual funds—was $94.1 billion.
The money market rate, the rate at which financial institutions lend to one another in the short term, was 3.31%. A number of other commercial, merchant, and industrial banks also operate. Additionally, Ireland's post office operates the Post Office Savings Bank and Trustee Savings Banks.

The Irish stock exchange has its trading floor in Dublin, and all stockbrokers in Ireland are members of this exchange. The Irish Stock Exchange is small by international standards, with a total of 76 domestic companies listed at the end of 2001. Total market capitalization of the government securities market at the end of 2001 was €21.8 billion, making it one of the EU's smallest stock markets, though a fast-growing one. The Stock Exchange Act came into effect on 4 December 1995 and separated the Dublin Stock Exchange from the London Stock Exchange. Since that date, the Dublin Stock Exchange has been regulated by the Central Bank of Ireland. As of 2004, there were a total of 53 companies listed on the Irish Stock Exchange, which had a market capitalization of $114.085 billion. In 2004, the ISEQ index rose 26% from the previous year to 6,197.8.

Insurance firms must be licensed by the Insurance Division of the Ministry of Industry, Trade, Commerce, and Tourism. The regulatory body is the Irish Brokers' Association. The Insurance Acts of 1936 and 1989 outline the monitoring of insurers, brokers, and agents. In Ireland, workers' compensation, third-party automobile, bodily injury, and property damage liability insurance are compulsory. In 1997, shareholders of Irish Life, Ireland's largest life assurance company, unanimously approved the company's £100 million ($163 million) takeover of an Illinois life assurance company, Guarantee Reserve. In 2003, the value of direct premiums written totaled $17.328 billion, of which life premiums accounted for $9.037 billion. Hibernian General in 2003 was Ireland's top nonlife insurer, with net written nonlife premiums (less reinsurance) of $992.2 million, while Irish Life was the nation's leading life insurer with gross written life premiums of $2.362 billion.

Ireland's fiscal year follows the calendar year. Expenditures of local authorities are principally for health, roads, housing, and social welfare. The US Central Intelligence Agency (CIA) estimated that in 2005 Ireland's central government took in revenues of approximately $70.4 billion and had expenditures of $69.4 billion. Revenues minus expenditures totaled approximately $1 billion. Public debt in 2005 amounted to 27.5% of GDP. Total external debt was $1.049 trillion. Government outlays by function were as follows: general public services, 21.9%; defense, 2.9%; economic affairs, 16.7%; housing and community amenities, 2.1%; health, 16.3%; recreation, culture, and religion, 0.7%; education, 13.6%; and social protection, 25.9%.

To stimulate economic expansion and encourage investment in Irish industry, particularly in the area of industrial exports, tax adjustments have been made to give relief to export profits, expenditures for mineral development, shipping, plant and machinery, new industrial buildings, and investments in Irish securities.
As of 1 January 2003, in agreement with the EU, the government had mostly completed the transition of the tax regime from an incentive-based regime to a low, single-rate regime, with 12.5% as the country's rate for most corporate profits. Passive income, including that from interest, royalties, and dividends, is taxed at 20%. Capital gains are also taxed at 20%. As of 2005, Ireland was party to double-taxation agreements with 42 countries, the terms of which provide for the reduction or elimination of many capital income tax rates and related withholding taxes. The incentive 10% corporation tax rate, applied to industrial manufacturing, to projects licensed to operate in the Shannon Airport area, and to various service operations, was still in effect in 2003 but, in an agreement with the European Commission, was scheduled to be phased out by 2010.

Ireland has a progressive personal income tax with a top rate of 42% on incomes above €29,400 for single taxpayers. Married taxpayers are subject to a higher income threshold. For those over 65 years old, tax exemptions amounted to €15,000 per person. Deductions were available for mortgage payments and pension contributions. Since 1969, the government has encouraged artists and writers to live in Ireland by exempting from income tax their earnings from their works of art. Royalties and other income from patent rights are also tax-exempt.

The gift and inheritance taxes are based upon the relationship of the beneficiary to the donor. Between a parent and child, the tax-free threshold in 2003 was €441,200; for any other lineal descendant, the tax-free threshold was one-tenth this amount, or €44,120; and for any other person, one-twentieth, or €22,060. Land taxes are assessed at variable rates by local governments, and there is a buildings transfer tax based on the price of the transfer.

The major indirect tax is Ireland's value-added tax (VAT), instituted on 1 January 1972 with a standard rate of 16.37% plus a number of reduced, intermediate, and increased rates. As of 1 March 2002, the standard rate was increased to 21% from 20%, and the reduced rate of 12.5% was increased to 13.5% as of 1 January 2003. The reduced rate applies to domestic fuel and power, newspapers, hotels, and new housing. Ireland also has an extensive list of goods and services to which a 0% VAT rate is applied, including books and pamphlets, gold for the Central Bank, basic foodstuffs and beverages, agricultural supplies, medicines and medical equipment, and, more unusually, children's clothing and footwear and wax candles. A 4.8% rate applies to sales of livestock by unregistered farmers. Excise duties are charged on tobacco products, alcohol, fuel, and motor vehicles. Per-unit and/or annual stamp taxes are assessed on checks, credit cards, ATM cards, and Laser cards.

From the time of the establishment of the Irish Free State, government policy was to encourage development of domestic industry by maintaining protective tariffs and quotas on commodities that would compete with Irish-made products. Following Ireland's admission to the European Community (now the European Union), the country's tariff schedule was greatly revised. The schedule vis-à-vis third countries, including the United States, was gradually aligned with EC tariffs, and customs duties between Ireland and the EC were phased down to zero by July 1977. Duty rates on manufactured goods from non-EU countries range from 5% to 8%, while most raw materials enter duty-free.
Certain goods still require import licenses, and tariffs are based on the Harmonized System. The Shannon Free Trade Zone, the oldest official free trade area in the world, is located at Shannon International Airport.

The Irish government has successfully attracted foreign direct investment (FDI) over the years with various policies and preferential tax rates. To stimulate economic expansion, the Industrial Development Authority encourages and facilitates investment by foreign interests, particularly in the development of industries with export potential. Special concessions include nonrepayable grants to help establish industries in underdeveloped areas and tax relief on export profits. Freedom to repatriate profits is unimpaired. Engineering goods, computers, electronic products, electrical equipment, pharmaceuticals and chemicals, textiles, foodstuffs, leisure products, and metal and plastic products are among the items produced. Much of the new investment occurred after Ireland became a member of the European Union.

Annual FDI inflows into Ireland increased steadily through the 1990s. In the period 1988 to 1990, Ireland's share of world FDI inflows was only 70% of its share of world GDP, but for the period 1998 to 2000, Ireland's share of FDI inflows was over five times its share of world GDP. In 1998, annual FDI inflow reached $11 billion, up from $2.7 billion in 1997, and then jumped to almost $15 billion in 1999. FDI inflows to Ireland peaked in 2000, at over $24 billion, mainly from high-tech computer and pharmaceutical companies. FDI inflow dropped sharply to $9.8 billion in 2001 with the global economic slowdown. Leading sources of foreign investment, in terms of the percentage of foreign companies invested in Ireland, have been the United States (43%), the United Kingdom (13%), Germany (13%), other European countries (22%), Japan (4%), and others (5%). As of 2000, the primary destinations of foreign investment were, in order, manufacturing, finance, and other services.

Government policies are premised on private enterprise as a predominant factor in the economy. Specific economic programs adopted in recent decades have attempted to increase efficiency in agriculture and industry, stimulate new export industries, create employment opportunities for labor leaving the agricultural sector, and reduce unemployment and net emigration. In pursuit of these objectives, the government provides aid to industry through the Industrial Development Authority (IDA), the Industrial Credit Co., and other agencies. Tax concessions, information, and advisory services are also provided. The IDA seeks to attract foreign investment by offering a 10% maximum corporation tax rate for manufacturing and certain service industries, generous tax-free grants for staff training, ready-built factories on modern industrial estates, accelerated depreciation, export-risk guarantee programs, and other financial inducements. The IDA also administers industrial estates at Waterford and Galway. The Shannon Free Airport Development Co., another government-sponsored entity, administers an industrial estate on the fringes of Shannon Airport, a location that benefits from proximity to the airport's duty-free facilities. A third entity, Údarás na Gaeltachta, promotes investment and development in western areas where Irish is the predominant language. As of 1986, there were some 900 foreign-owned plants in Ireland.
Price control legislation was introduced under the Prices Act of 1958, amended in 1965 and 1972. In general, manufacturers, service industries, and professions are required to obtain permission from the Ministry of Commerce and Trade for any increase. Price changes are monitored by a National Prices Commission, established in 1971. The economic plan for 1983–1987, called The Way Forward, aimed at improving the cost-competitiveness of the economy by cutting government expenditures and restraining the growth of public service pay, among other measures. The 1987–1990 Program for National Recovery is generally credited with creating the conditions to bring government spending and the national debt under control. The 1991–1993 Program for Economic and Social Progress was to further reduce the national debt and budget deficit and to establish a schedule of wage increases. A 1994–1999 national development plan called for investment of £20 billion and aimed to achieve an average annual GDP growth rate of 3.5%. The government hoped to create 200,000 jobs through this plan, with funding by the state, the EU, and the private sector. Half of the money was earmarked for industry, transport, training, and energy. At the end of the 1990s, Ireland boasted the fastest growing economy in the EU with a 9.5% GDP real growth rate in 1998. Total expenditures on imports and exports in 2000 were equivalent to 175% of GDP, far ahead of the EU average, which made Ireland's economy one of the most open in the world. Ireland became known as the "Celtic Tiger," to compare with the formerly fast-growing economies of East Asia prior to the Asian financial crisis of 1997. In 2000, the economy grew by 11.5%, the highest growth rate ever recorded in an OECD member country. Wage inequality grew, however, and spending on infrastructure failed to keep pace with social or industrial demands. Corporate taxes were as low as 12.5% in some circumstances in the early 2000s. Economic growth decelerated rapidly in 2001, to 6%. Inflation fell as did housing prices, but they rose again in 2002. Tax increases were expected in 2003 and 2004, and the government was facing pressures to cut spending. GDP growth was 4.4% in 2003 and 4.5% in 2004. A social insurance program exists for all employees and self-employed persons, and for all residents with limited means. The system is financed through employee contributions, employer contributions, and government subsidies. Benefits are available for old age, sickness, disability, survivorship, maternity, work injury, unemployment, and adoptive services. There are also funds available for those leaving the workforce to care for one in need of full time assistance. The system also provides bereavement and a widowed parent's grant. The universal medical care system provides medical services to all residents. The workmen's compensation act was first initiated in 1897. Parents with one or more children are entitled to a family allowance. The predominance of the Roman Catholic Church has had a significant impact on social legislation. Divorce was made legal only in 1995. Contraceptives, the sale of which had been entirely prohibited, became available to married couples by prescription in the early 1980s. In 1985, the need for a prescription was abolished, and the minimum age for marriage was raised from 14 to 18 for girls and from 16 to 18 for boys. Abortion remains illegal. Domestic abuse and spousal violence remain serious problems, although improvements were seen in 2004. 
The government funds victim support centers, and there are active women's rights groups to address these issues. The law prohibits gender discrimination in the workplace, but inequalities persist regarding promotion and pay. The government addresses the issue of child abuse, and funds systems to promote child welfare. The government attempts to curb discrimination against foreign workers and the ethnic community known as "Travellers." There have been reports of racially motivated incidents including violence and intimidation. In general, the government respects the human rights of its citizens. Health services are provided by regional boards under the administration and control of the Department of Health. A comprehensive health service, with free hospitalization, treatment, and medication, is provided for low-income groups. The middle-income population is entitled to free maternity, hospital, and specialist services, and a free diagnostic and preventive service is available to all persons suffering from specified infectious diseases. Insurance against hospital and certain other medical expenses is available under a voluntary plan introduced in 1957. Since World War II, many new regional and county hospitals and tuberculosis sanatoriums have been built. As of 2004, there were an estimated 237 physicians, 51 dentists, and 83 pharmacists per 100,000 population. In addition, there were more than 1662 nurses per 100,000 people, the third most per capita in the world. While deaths from cancer, particularly lung cancer, and heart disease are rising, those from many other causes have been decreasing rapidly. Infant mortality has been reduced from 50.3 per 1,000 live births in 1948 to 5.39 in 2005. Tuberculosis, long a major cause of adult deaths, declined from 3,700 cases in 1947 to only 15 per 100,000 in 2000. Average life expectancy at birth in 2005 was 77.56 years. The general mortality rate was an estimated 8 per 1,000 people as of 2002. The major causes of death were heart and circulatory disease, cancer, and ischemic heart disease. Heart disease rates were higher than average for highly industrialized countries. The HIV/AIDS prevalence was 0.10 per 100 adults in 2003. As of 2004, there were approximately 2,800 people living with HIV/AIDS in the country. There were an estimated 100 deaths from AIDS in 2003. The aim of public housing policy is to ensure, so far as possible, that every family can obtain decent housing at a price or rent it can afford. Government subsidies are given to encourage home ownership, and local authorities provide housing for those unable to house themselves adequately. Housing legislation has encouraged private construction through grants and loans. Projected and existing housing needs are assessed regularly by local authorities, and their reports are the basis for local building programs, which are integrated with national programs and reconciled with available public resources. According to the 2002 census, there were about 1,279,617 dwellings available in permanent housing units. Of these, about 74% were owner occupied. The number of households was listed as 1,287,958, with 43.7% of all households living in single-family detached homes. The average number of persons per household was 2.95. Ten years of education are compulsory. Primary school covers eight years of education, with most students entering at age four. This is followed by a three-year junior secondary school and a two-year senior secondary program. 
Some schools offer a transition year program between the junior and senior levels. This transition year is meant to be a time of independent study in which students focus on special interests, while still under the guidance of instructors, in order to make a decision concerning the direction of their future studies. At the senior level, students may choose to attend a vocational school instead of a general studies school. While private, religious-based secondary schools were once the norm, there are now many multi-denominational public schools available at all levels. Coeducational programs have also grown substantially in recent years. The academic year runs from September to June. The primary languages of instruction are Irish and English.

Primary school enrollment in 2003 was estimated at about 96% of age-eligible students. The same year, secondary school enrollment was about 83% of age-eligible students: 80% for boys and 87% for girls. It is estimated that nearly all students complete their primary education. The student-to-teacher ratio for primary school was about 19:1 in 2003.

Ireland has two main universities: the University of Dublin (Trinity College) and the National University of Ireland, which consists of three constituent colleges in Dublin, Galway, and Cork. St. Patrick's College, Maynooth, is a recognized college of the National University. Universities are self-governing, but each receives an annual state grant, as well as supplementary grants for capital outlays. There are also various colleges of education, home economics, technology, and the arts. In 2003, about 52% of the tertiary-age population were enrolled in some type of higher education program. The adult literacy rate has been estimated at about 98%. As of 2003, public expenditure on education was estimated at 4.3% of GDP, or 13.5% of total government expenditures.

Trinity College Library, which dates from 1591 and counts among its many treasures the Book of Kells and the Book of Durrow, two of the most beautiful illuminated manuscripts from the pre-Viking period, is the oldest and largest library in Ireland, with a stock of 4.1 million volumes. The Chester Beatty Library, noted for one of the world's finest collections of Oriental manuscripts and miniatures, is also in Dublin. The National Library of Ireland, which also serves as a lending library, was founded in 1877 and houses over one million books, with special collections including works on or by Jonathan Swift and W. B. Yeats. The National Photographic Archive of over 600,000 photographs is also housed in the National Library. The University College Dublin library has more than one million volumes. The Dublin City Public Library system has about 31 branches and service points and holdings of over 1.5 million items.

Dublin, the center of cultural life in Ireland, has several museums and a number of libraries. The National Museum contains collections on Irish antiquities, folk life, fine arts, natural history, zoology, and geology. The National Gallery houses valuable paintings representing the various European schools from the 13th century to the present. The National Portrait Gallery provides a visual survey of Irish historical personalities over the past three centuries. The Municipal Gallery of Modern Art has a fine collection of works by recent and contemporary artists. There is a Heraldic Museum in Dublin Castle; the National Botanic Gardens are at Glasnevin; and the Zoological Gardens are in Phoenix Park.
There is a James Joyce Museum in Dublin housing personal memorabilia of the great writer, including signed manuscripts. Yeats Tower in Gort displays memorabilia of W. B. Yeats. The Dublin Writers' Museum opened in 1991. Public libraries and small museums, devoted mostly to local historical exhibits, are found in Cork, Limerick, Waterford, Galway, and other cities.

In 2003, there were an estimated 491 mainline telephones for every 1,000 people. The same year, there were approximately 880 mobile phones in use for every 1,000 people. An autonomous public corporation, Radio Telefis Éireann (RTE), is the Irish national broadcasting organization. Ireland's second radio service, Raidió na Gaeltachta, an Irish-language broadcast, was launched by RTE in 1972; it broadcasts on VHF from County Galway. In 2004, there were an additional 49 independent radio stations. RTE operates three television networks, and there is one independent television station. In 2003, there were an estimated 695 radios and 694 television sets for every 1,000 people. About 134 of every 1,000 people were cable subscribers. Also in 2003, there were 420.8 personal computers for every 1,000 people, and 317 of every 1,000 people had access to the Internet. There were 1,245 secure Internet servers in the country in 2004.

In 2001, there were eight independent national newspapers, as well as many local newspapers. There were three major independent current affairs magazines along with hundreds of special interest magazines. Ireland's major newspapers, with political orientation and estimated 2002 circulation, are: Sunday Independent, Fine Gael, 310,500; Sunday World, independent, 229,000; Irish Independent, Fine Gael, 168,200; Irish Times, independent, 119,200; Irish Examiner (in Cork), 63,600; and Cork Evening Echo, Fine Gael, 28,800. Waterford, Limerick, Galway, and many other smaller cities and towns have their own newspapers, most of them weeklies.

The Censorship of Publications Board has the right to censor or ban publication of books and periodicals. In 2003, the Board censored nine magazines for containing pornographic materials. The constitution provides for free speech and a free press; however, government bodies may declare, without public hearing or justification, any material unfit for distribution on moral grounds. The Office of Film Censor, which rates films and videos before they can be distributed, can ban or require edits of movies that contain content considered to be "indecent, obscene, or blasphemous," or that express principles "contrary to public morality." In 2001, 26 videos were banned, primarily for violent or pornographic content. In 2004, one video was banned.

The Chambers of Commerce of Ireland in Dublin is the umbrella organization for regional chambers. The Irish Congress of Trade Unions is also based in Dublin. There are trade unions and professional associations representing a wide variety of occupations. The Consumers' Association of Ireland is active in advocating consumer information services. The oldest and best known of the learned societies are the Royal Dublin Society, founded in 1731, and the Royal Irish Academy, founded in 1785. The Royal Irish Academy of Music was added in 1856, the Irish Society of Arts and Commerce in 1911, the Irish Academy of Letters in 1932, and the Arts Council of Ireland in 1951. Many organizations exist for research and study in medicine and science, including the Royal Academy of Medicine in Ireland.
National youth organizations include the Church of Ireland Youth Council, Comhchairdeas (the Irish Workcamp Movement), Confederation of Peace Corps, Federation of Irish Scout Associations, Irish Girl Guides, Girls' Brigade Ireland, Junior Chamber, Student Christian Movement of Ireland, Voluntary Service International, Workers Party Youth, Young Fine Gael, and chapters of YMCA/YWCA. The Irish Sports Council serves as an umbrella organization for numerous athletic organizations both on amateur and professional levels. Civil rights organizations include the Irish Council for Civil Liberties and the National Women's Council of Ireland. Several organizations are available to represent those with disabilities. International organizations with chapters in Ireland include the Red Cross, Habitat for Humanity, and Amnesty International. Among Ireland's numerous ancient and prehistoric sights are a restored Bronze Age lake dwelling (crannog ) near Quin in County Clare, burial mounds at Newgrange and Knowth along the Boyne, and the palace at the Hill of Tara, the seat of government up to the Middle Ages. Numerous castles may be visited, including Blarney Castle in County Cork, where visitors kiss the famous Blarney Stone. Some, such as Bunratty Castle and Knappogue Castle, County Clare, and Dungaire Castle, County Galway, offer medieval-style banquets, and some rent rooms to tourists. Among Dublin's tourist attractions are the Trinity College Library, with its 8th-century illuminated Book of Kells; Phoenix Park, the largest enclosed park in Western Europe and home of the Dublin Zoo; and literary landmarks associated with such writers as William Butler Yeats, James Joyce, Jonathan Swift, and Oscar Wilde. Dublin has long been noted for its theaters, foremost among them the Abbey Theatre, Ireland's national theater, which was founded in 1904 by Yeats and Lady Gregory. Dublin was the European Community's Cultural Capital of Europe for 1991, during which time the National Gallery, Civic Museum, and Municipal Gallery were all refurbished and several new museums opened, including the Irish Museum of Modern Art. Traditional musical events are held frequently, one of the best known being the All-Ireland Fleadh at Ennis in County Clare. Numerous parades, concerts, and other festivities occur on and around the St. Patrick's Day holiday of 17 March. Ireland has numerous golf courses, some of worldwide reputation. Fishing, sailing, horseback riding, hunting, horse racing, and greyhound racing are other popular sports. The traditional sports of Gaelic football, hurling, and camogie (the women's version of hurling) were revived in the 19th century and have become increasingly popular. The All-Ireland Hurling Final and the All-Ireland Football Final are held in September. A passport is required of all visitors. Visas are not required for stays of up to 90 days, although an onward/return ticket may be needed. Income from tourism and travel contributes significantly to the economy. Approximately 6,774,000 tourists visited Ireland in 2003, about 61% of whom came from the United Kingdom. That same year tourism receipts totaled $5.2 billion. There were 62,807 hotel rooms in 2002, with a 59% occupancy rate. According to the US Department of State in 2005, the daily cost of staying in Dublin was $403; in Cork, $292. A list of famous Irish must begin with St. Patrick (c.385–461), who, though not born in Ireland, represents Ireland to the rest of the world. 
Among the "saints and scholars" of the 6th to the 8th centuries were St. Columba (521–97), missionary to Scotland; St. Columban (540?–616), who founded monasteries in France and Italy; and Johannes Scotus Erigena (810?–80), a major Neoplatonic philosopher. For the thousand years after the Viking invasions, the famous names belong to warriors and politicians: Brian Boru (962?–1014), who temporarily united the kings of Ireland and defeated the Vikings; Hugh O'Neill (1547?–1616), Owen Roe O'Neill (1590?–1649), and Patrick Sarsfield (d. 1693), national heroes of the 17th century; and Henry Grattan (1746–1820), Wolfe Tone (1763–98), Edward Fitzgerald (1763–98), Robert Emmet (1778–1803), Daniel O'Connell (1775–1847), Michael Davitt (1846–1906), Charles Stewart Parnell (1846–91), Arthur Griffith (1872–1922), Patrick Henry Pearse (1879–1916), and Éamon de Valera (b. US, 1882–1975), who, with many others, fought Ireland's political battles. The politician and statesman Seán MacBride (1904–88) won the Nobel Peace Prize in 1974.

Irishmen who have made outstanding contributions to science and scholarship include Robert Boyle (1627–91), the physicist who defined Boyle's law relating the pressure and volume of a gas; Sir William Rowan Hamilton (1805–65), astronomer and mathematician, who developed the theory of quaternions; George Berkeley (1685–1753), philosopher and clergyman; Edward Hincks (1792–1866), discoverer of the Sumerian language; and John Bagnell Bury (1861–1927), classical scholar. The nuclear physicist Ernest T. S. Walton (1903–95) won the Nobel Prize for physics in 1951.

Painters of note include Sir William Orpen (1878–1931), John Butler Yeats (1839–1922), his son Jack Butler Yeats (1871–1957), and Mainie Jellett (1897–1944). Irish musicians include the pianist and composer John Field (1782–1837), the opera composer Michael William Balfe (1808–70), the tenor John McCormack (1884–1945), and the flutist James Galway (b. Belfast, 1939).

After the Restoration, many brilliant satirists in English literature were born in Ireland, among them Jonathan Swift (1667–1745), dean of St. Patrick's Cathedral in Dublin and creator of Gulliver's Travels; Oliver Goldsmith (1730?–74); Richard Brinsley Sheridan (1751–1816); Oscar Fingal O'Flahertie Wills Wilde (1854–1900); and George Bernard Shaw (1856–1950). Thomas Moore (1779–1852) and James Clarence Mangan (1803–49) wrote patriotic airs, hymns, and love lyrics, while Maria Edgeworth (1767–1849) wrote novels on Irish themes. Half a century later, the great literary revival led by Nobel Prize-winning poet-dramatist William Butler Yeats (1865–1939), another son of John Butler Yeats, produced a succession of famous playwrights, poets, novelists, and short-story writers: the dramatists Lady Augusta (Persse) Gregory (1859?–1932), John Millington Synge (1871–1909), Sean O'Casey (1884–1964), and Lennox Robinson (1886–1958); the poets AE (George William Russell, 1867–1935), Oliver St. John Gogarty (1878–1957), Pádraic Colum (1881–1972), James Stephens (1882–1950), Austin Clarke (1890–1974), Thomas Kinsella (b. 1928), and Seamus Heaney (b. 1939), who won the 1995 Nobel Prize in literature; and the novelists and short-story writers George Moore (1852–1932), Edward John Moreton Drax Plunkett, 18th baron of Dunsany (1878–1957), Liam O'Flaherty (1896–1984), Seán O'Faoláin (1900–91), Frank O'Connor (Michael O'Donovan, 1903–66), and Flann O'Brien (Brian O'Nolan, 1911–66).
Two outstanding authors of novels and plays whose experimental styles have had worldwide influence are James Augustine Joyce (1882–1941), the author of Ulysses, and Samuel Beckett (1906–89), recipient of the 1969 Nobel Prize for literature. The Abbey Theatre, which was the backbone of the literary revival, also produced many outstanding dramatic performers, such as Dudley Digges (1879–1947), Sara Allgood (1883–1950), Arthur Sinclair (1883–1951), Maire O'Neill (Mrs. Arthur Sinclair, 1887–1952), Barry Fitzgerald (William Shields, 1888–1961), and Siobhan McKenna (1923–86). For many years Douglas Hyde (1860–1949), first president of Ireland (1938–45), spurred on the Irish-speaking theater as playwright, producer, and actor. In addition to the genres of Irish folk and dance music, contemporary Irish popular and rock music has gained international attention. Van Morrison (b.1945) is a singer and songwriter from Belfast whose career began in the 1960s and was still going strong in the 2000s. Enya (b.1961) is Ireland's best-selling solo musician. The Irish rock band U2 is led by Bono (b.1960), who has also spearheaded efforts to raise money for famine relief in Ethiopia, to fight world poverty, to campaign for third-world debt relief, and to raise world consciousness of the plight of Africa, including the spread of HIV/AIDS on the continent. Ireland has no territories or colonies.
Cashel, Cavan, Cóbh, Dún Laoghaire, Kilkenny, Killarney, Tralee, Wexford
This chapter was adapted from the Department of State Post Report 2000 for Ireland. Supplemental material has been added to increase coverage of minor cities, facts have been updated, and some material has been condensed. 
Readers are encouraged to visit the Department of State's web site at http://travel.state.gov/ for the most recent information available on travel to this country. It is said that Ireland, once visited, is never forgotten. The Irish landscape has a mythic resonance, due as much to the country's almost tangible history as to its claim to being the home of the fairies and the "little people." Sure, the weather may not always be clement, but the dampness ensures there are 50 shades of green to compensate, just one of the reasons Ireland is called the Emerald Isle. Scattered mountains and hills rim a central plain, where the River Shannon flows past green woodlands, pastures, and peat bogs. Ireland was the seat of learning and sent scholar-missionaries throughout Europe in the Dark Ages. Now it draws visitors with a composite charm shaped of lilting laughter, Irish eyes, and the Blarney Stone; of soils man-made from seaweed and sand in the harsh Aran Islands, of palms waving in warm Glengarriff, of Donegal's lava and Killarney's lakes; of voluble, tempestuous people with a remarkable roll of literary lights, such names as Swift, Yeats, Wilde, Shaw, Joyce, O'Casey, Synge. Eight centuries of strife with Britain brought formal establishment of the republic in 1949. Its name in Gaelic is Éire. Although English is the main language of Ireland, it's spoken with a mellifluous lilt and a peculiar way of structuring sentences, to be sure. There remain areas of western and southern Ireland, known as the Gaeltacht, where Irish is the native language; they include parts of Kerry, Galway, Mayo, the Aran Islands, and Donegal. Since independence in 1921, the Republic of Ireland has declared itself to be bilingual, and many documents and road signs are printed in both Irish and English. Jigging an evening away to Irish folk music is one of the joys of a trip to Ireland. Most traditional music is performed on fiddle, tin whistle, goatskin drum, and pipes. Almost every village seems to have a pub renowned for its music where you can show up and find a session in progress, even join in if you feel so inclined. Irish meals are usually based around meat, in particular beef, lamb, and pork chops. Traditional Irish breads and scones are also delicious, and other traditional dishes include bacon and cabbage, a cake-like bread called barm brack, and a filled pancake called a boxty. Though the nation's charms are fabled, it faces problems. The "troubles" are far from over in the North, but the recent referendum clearly signaled a willingness for peace, and a genuine solution may be in sight. The country is home to one of the most gregarious and welcoming peoples in Europe. Like most ancient cities, Dublin lies sprawled along a river. In fact, three visible and underground rivers converge and flow into the Irish Sea. The greatest of these is the Liffey, which has divided Dublin into north and south for more than 1,000 years, much as tracks divide the core of a railroad town. Today, nearly one-third of the Irish population lives in the greater Dublin area. It is the political, cultural, and economic heart of the nation. The great public buildings, the red brick Georgian rowhouses, and the fine parks that give the city its distinctive character originated in the 18th century. The Grand and Royal Canals encircle the Georgian core of the city. Quaint shop fronts and pubs of the 19th and early 20th centuries add to the flavor of downtown. 
Dublin has begun reclaiming some of the historic past, though many once-fine areas have decayed badly from years of poverty and neglect. New office developments have changed the city center's skyline. The outer rim is ringed by newly built housing tracts and industrial parks. The quays along the Liffey River are beginning to change the image of a rundown seaport. New business has started to develop as well as seafront apartment buildings. Small villages, until this century a short journey away, are now enclosed within the city's sprawl. Dublin, whose name in Irish (Gaelic) is Baile Átha Cliath, was a Norse stronghold in the ninth century. The forces of Brian Boru, high king of Ireland, took the site in a fierce battle at nearby Clontarf in 1014, forever ending Danish claim to the territory. In 1172, Richard Strongbow, the earl of Pembroke, captured the city for England; it was given a charter and made the center of the Pale, the indefinite limits around Dublin which were dominated by English rule (hence the saying, "beyond the Pale"). All of Ireland was besieged and colonized in the ensuing centuries, but Dublin enjoyed a period of prosperity in the late 1700s, during temporary respite from English authority. Intense nationalist efforts arose during the 19th century. On April 24, 1916, Dublin was the scene of the bloody and unsuccessful Easter Rebellion against British rule. It was not until 1922 that the Irish Free State was finally established. Single-phase, 200v-220v, 50-cycle, AC electricity is standard throughout Ireland. Outlets take British-type three-prong plugs. The wiring in many houses cannot take heavy loads. American 60-cycle clocks will not operate satisfactorily in Ireland. Most types of electrical equipment are available locally; however, they are more expensive. Food in Dublin is more expensive than in the U.S. Meats, poultry, and fish are sold year round. Greengrocers offer a wider range of imported fruits and vegetables, but prices are higher than at supermarkets. Fresh meats and produce in Ireland pose no special hygiene problems. Canned fruits and juices are available, and good-quality dairy and bakery products abound. Baby food in cans and jars can be found in any supermarket. Although most shopping needs can be met through diligent shopping, bring special spices and condiments to prepare favorite ethnic dishes. Because of the cool damp climate, woolens can be worn most of the year. Even in summer, light cotton clothing is rarely worn. Irish houses are frequently cold compared to those in the U.S. In selecting clothes, include sweaters, gloves, scarves, and sturdy weatherproof coats and footwear. Flannel pajamas and bed socks are desirable for overnight travel and even at home. Rainwear for adults and children can be purchased locally at reasonable prices. Ready-made clothing of all types is sold in Dublin. Good-quality articles, especially woolens and shoes, are expensive but on par with U.S. prices for similar quality. Narrow shoe sizes are hard to find. Men: Good-quality, ready-made, and tailor-fitted wool suits can be found at reasonable prices in Dublin. Nonetheless, bring several medium- or heavyweight wool suits, a topcoat, and a raincoat. Although dark suits are worn for most evening functions, a black dinner jacket (tuxedo) is occasionally required. Tuxedos and other formal wear can be rented or purchased locally. Women: Department stores and discount stores stock a wide choice of fashions for women, priced according to quality. 
Comfortable closed walking shoes are invaluable. Boots are preferred by many during the winter. Although you can easily find a wide choice from fashions to shoes and accessories, it is advisable to bring complete wardrobes. Children: Although quality is good, clothes can be very expensive for growing children. Bring complete children's wardrobes, anticipating larger sizes that will be needed. Good-quality sweaters and rain-wear can be bought locally at reasonable prices. School uniforms are required and most items must be purchased at specified stores. Supplies and Services Cosmetics, toiletries, cigarettes, home medicines, and drugs are sold locally in considerable variety at prices above those in the U.S. English, French, and a few American brands are sold. Bring special cosmetics and home medicines if preferred, including sufficient prescription drugs to last until arrangements can be made with a local pharmacy. Most essential conveniences commonly used for housekeeping, entertaining, and household repairs are obtainable locally. All basic community services, such as drycleaning, tailoring, beauty and barbershops, and shoe and auto repairs, are available in Dublin. A few dressmakers are also available. Mechanical services do not measure up to American standards. Delays are common, appointments are a must, and the quality of workmanship varies widely. Numerous religious denominations hold regular services in Dublin-Roman Catholic, Church of Ireland (Anglican), Presbyterian, Methodist, Baptist, Lutheran, Greek Orthodox, Christian Science, Congregational, Evangelical, Seventh-day Adventist, Moravian, Society of Friends, Mormon, and Unitarian churches, four Jewish congregations, and the Dublin Islamic Center. Private primary and secondary schools are good. Instruction is in English. Credits are usually accepted in the U.S. for schoolwork completed in Dublin. A typical curriculum in a Dublin secondary school includes English, Irish (foreign students are exempted on request), mathematics, geography, history, foreign languages, science, art, music, and physical training. Athletic activities include rugby, soccer, netball, track & field, cricket, hurling, field hockey, swimming, and tennis. Instruction in dancing, riding, music, and art is available at extra cost. Depending on the location, many parents cannot rely on public transportation and must drive their children to and from school. Most American children attend St. Andrew's College. Founded by the Presbyterians, St. Andrew's is now a nonsectarian, coeducational school with a curriculum comparable to those in the U.S., although sequence of coursework follows the Irish system. American secondary students may opt to follow either the Irish School Leaving or International Baccalaureate curriculum during their last 2 years. Credit is easily transferred to U.S. schools. With the aid of a State Department grant, the school has an American teacher of U.S. studies. The Irish grading system is more rigorous. Report cards are meant to be shared only by the student, parents, and teachers. American college applicants need special guidance in preparing applications that adequately explain the Irish system or their reported grades may often appear low. St. Andrew's College will prepare transcripts for U.S. colleges that explain Irish grades. St. Andrew's is accredited by the New England Association of Schools and Colleges, Ireland's Department of Education, and the European Council of International Schools. 
Irish ninth graders must take a rigorous examination called the Junior Certificate. The examination covers a 3-year cycle in mathematics, science, English, history, geography, Irish, and business studies. Although foreign students who have not made the entire cycle may be exempted from the exam, some may choose to take it as much of the ninth year is spent preparing for it. The 10th year is seen as a decompression year sandwiched between the high pressure Junior Certificate exam and the even more intense Leaving Certificate test held at the end of the senior (12th grade) year of high school. Although the Ministry of Education dictates the subjects covered during the 10th grade, methods of instruction differ from school to school. It is the only opportunity Irish students have to sample many different subjects without the pressure of external examination. The 11th and 12th grades are geared to passing the highly competitive Leaving Certificate, the key to admission to Irish universities. Although foreign students may be exempted from the Leaving Certificate, juniors and seniors should join their Irish classmates in preparing for it. Leaving Certificate studies provide good preparation for the American SAT examinations that are also given in Dublin. School uniforms are required for students. Our Junior School. The Junior School has its own principal and specially trained staff. The full range of elementary education subjects is taught: reading, writing, mathematics, environmental studies, art, music, nature study, hand-work, Irish, Latin, a basic introduction to continental languages, and computer studies. Project work, physical education, and sports are also an important part of the curriculum. The final year of the Junior School course is specially designed to prepare pupils for transition to the Senior School. This transition takes place at the age of 11-12. Saint Andrew's also receives a large influx of pupils from other elementary schools at this stage. Special Educational Opportunities Dublin has five universities-Trinity College, University College Dublin, Dublin City University, American College, Portobello College. Some technical, business, and professional (e.g., medicine, law) courses have higher fees. Ample opportunities exist for continuing education in Dublin through the universities, community and vocational schools, and foreign cultural institutes. A Guide to Evening Classes in Dublin is published each fall and also lists many daytime classes and activities for children. Purchase it at any bookstore or newsstand. In addition to such things as crafts, hobbies, business, and domestic skills, nearly all community and vocational schools offer lessons in Irish. Many schools offer classes on Irish culture, history, literature, and music and dance. Despite the changeable weather, the Irish are great sports enthusiasts. Many opportunities exist for the active sportsperson and spectator alike. The Irish Tourist Board, "Bord Failte," has detailed information on sports activities. All equipment and clothing for locally popular sports are sold in Dublin. Horse racing is a central feature of Irish sporting life. Irish horses have a fine record in events in England and other countries. Several leading courses are within easy reach of Dublin. The world-famous Irish Derby, the Irish St. Ledger, the Guinness Oaks, and other events are held at the Curragh in County Kildare, about an hour's drive from Dublin. The flat racing season is March to November. 
Steeplechase meetings take place throughout the year. Point-to-point meetings are held in the spring. Racecourses within easy reach of Dublin are: Leopardstown, Fairyhouse, Naas, the Curragh, Navan, and Punchestown. Greyhound racing is well established, with many tracks throughout Ireland. Clonmel, County Tipperary, is the home of the Irish Coursing Club. Many thousands of dogs are registered in the Irish studbook each year, and greyhounds are a major Irish export. Good riding stables are located near Dublin, and dozens more across the country offer both instruction and horses for hire. The Irish Horse Board, "Bord na gCapall," publishes a pamphlet called Where to Ride in Ireland. Fish are plentiful in the rivers, lakes, and coastal waters of Ireland. The most common are lake and sea trout, salmon, and coarse fish. Although the best salmon streams are privately owned and strictly controlled, you can arrange a lease for a specified period at a moderate price. In addition, salmon and trout fishing are free in many areas, subject only to the boat and boatman's hire fees. Those traveling to western Ireland for their angling can make all the arrangements, including any required permits, through their hotel or guesthouse. Sea fishing is good all around the Irish coast; the more popular areas are off the coasts of Cork, Mayo, Kerry, and Wexford. Hunting in Ireland usually means fox hunting, but there are also stag hunts and harriers. The season starts in October and ends in March. Club hunting takes place from September to November; these events are held early in the morning, and arrangements can be made through a riding stable or the Honorary Secretary of the Hunt. Shooting facilities in Ireland for sportsmen are limited and strictly controlled. Firearms certificates and hunting licenses are generally issued to visitors who have access to bona fide shooting arrangements or who have made advance booking with a recognized shoot; the number of certificates granted in respect to each shoot is controlled. Excellent shooting grounds can be found, especially in the west of Ireland. For queries on how to obtain a firearm certificate, you may call the Irish Department of Justice at 01-602-8202. Within 20 miles of Dublin, you can find more than 45 private and public golf courses in all, many situated in splendid surroundings. Visitors are welcome at any club. Membership is difficult to obtain, as some clubs have a 12-year waiting list, and it is very expensive, since temporary membership fees are nonrefundable. It is possible to play on these courses for modest greens fees. The most popular courses in Dublin are Carrickmines, Elm Park, Killiney, and Portmarnock. Dublin has many tennis, badminton, and squash clubs. Membership in these can also be expensive and difficult to arrange, and nonmembers are not permitted to use the courts. Public tennis courts are also available, but they can be crowded on weekends and evenings in summer. Camping, hill walking, and cycling are popular. Access to mountain and moorland trails is free. The Irish Tourist Board has information on campgrounds, national parks and forests, organized trails, and hostels. Strong winds and rough seas limit water activities. Swimming is popular among the Irish, who are not deterred by the cold water. Dublin also has scuba diving schools and clubs that offer introductory lessons. Yachting is popular for those who can afford it, with centers located in Dublin and Cork harbors. Rowing is more popular than yachting, and numerous rowing clubs abound. 
The rivers and canals are easily navigated and offer beautiful countryside. You can also hire cruise boats for a splendid holiday on the Shannon River. Irish hurling, a kind of field hockey, is one of the world's fastest field games. Hockey sticks and head injuries symbolize this rough-and-tumble sport. Camogue, a woman's game based on hurling, is played by many schoolgirls. Gaelic football is related to rugby and soccer. The annual all-Ireland finals of both hurling and Gaelic football command national attention. Both games are regulated by the Gaelic Athletic Association (GAA), founded in 1884 and a major force in the national revival movement in the late 19th and early 20th centuries. Handball, played with an extremely fast hard ball, is also a traditional game in Ireland. Many young people play rugby, cricket, and soccer at school and in athletic clubs. Touring and Outdoor Activities In and around Dublin are many places of interest to visit. In the oldest part of the city are the Church of Ireland Cathedrals of St. Patrick and Christ Church, and other interesting churches such as St. Michan's. You may visit Dublin Castle, parts of which date to the 13th century, which was the center of British rule in Ireland for centuries. Many fine 18th-century public buildings are open to the public, including the Bank of Ireland, formerly the Parliament House; Leinster House, seat of the Dail; Mansion House, residence of Dublin's Lord Mayor; the Custom House; Four Courts and King's Inn; the General Post Office; and the earlier Royal Hospital at Kilmainham. Trinity College, aside from its lovely squares and notable buildings, houses the nation's finest library. Among the famous manuscripts and early printed books is the Book of Kells, a masterpiece of Celtic illumination. Dublin also offers a small number of very interesting museums. The National Museum houses the finest collection of Irish antiquities and an assortment of decorative arts. The National Gallery of Ireland contains an important collection of European paintings, while the emphasis at the Hugh Lane Municipal Gallery is on changing exhibitions of contemporary work. The Chester Beatty Library and Gallery of Oriental Art is devoted to the arts of the book and offers changing selections from one of the world's great collections of Islamic and Asian manuscripts. Kilmainham Gaol Historical Museum is the prison that held generations of Irish patriots. Within its walls, the leaders of the 1916 uprising were executed. It reopened in 1966 as a historical museum and has conducted tours. Several beautiful parks can be found throughout Dublin. Phoenix Park, one of the world's largest urban parks, encloses the Zoological Gardens and the residences of the President of Ireland and the U.S. Ambassador. The National Botanic Gardens are located in Glasnevin in north Dublin. The fine Georgian squares of Dublin-St. Stephen's Green, Merrion Square, and Fitzwilliam Square-are also worth seeing. Well-preserved rows of Georgian houses surround Fitzwilliam and Merrion Squares. Within an hour's drive of Dublin are many historic sights. Beautifully situated in the Wicklow Mountains are the ruins of the medieval, monastic community of Glendalough. The Hill of Tara, the ancient religious, political, and cultural capital of Ireland, lies north of the city. In a better state of preservation are two great houses-Castletown House and Russborough House; a castle, Malahide Castle; and the magnificent gardens of Powerscourt. 
Rising just south of the city, the Wicklow Mountains offer grand scenery of green hills, bogs, forest, lakes, and waterfalls for those who like to hike, cycle, camp, or just go for a day's drive from the city. Ireland is a small country; you can reach almost any point within a 5-hour drive from Dublin. The roads are paved, but mostly narrow and winding. The Irish countryside offers a change of scenery. The western coastline attracts many tourists with its sea cliffs and low-lying but rugged mountains: the Ring of Kerry, the Cliffs of Moher, and further north, the wild countrysides of Connemara and Donegal. On the Aran Islands off Galway Bay, the everyday language is Irish, and many aspects of traditional life are preserved. Indeed, in the villages and farms, you may glimpse the slower, more traditional lifestyle of the Irish. Among the sights to explore are many ruined and restored castles such as Blarney, near Cork, with its fabled stone of eloquence; Bunratty, which holds nightly medieval banquets; and the well-preserved stronghold at Cahir. Medieval churches and monasteries include the great complex atop a rocky out-cropping at Cashel, the ancient monastic city of Clonmacnoise, the Romanesque church at Clonfert, and the Gothic abbeys of Jerpoint and Holycross. The country is littered with pre-Christian ring forts, stone circles, and tombs. One of the best is Newgrange, 30 miles north of Dublin. At the Craggaunowen Project near Limerick, a neolithic ring fort and island crannog (lake dwelling) have been completely reconstructed. Many great houses of the 18th and 19th centuries are open to the public, including Muck-ross House, overlooking the lakes of Killarney, Bantry House, and Westport. Downtown Dublin has a dozen movie theaters, several of them multiscreen cinemas, showing recent American and British films, usually within a few months of their release. The Abbey, Peacock, and Gate Theaters are among the best theaters in Dublin, and each presents a new play every month or two. The Gaiety and Olympia also present frequent changing shows ranging from serious dramas to musical reviews and rock concerts. Several small playhouses are active in Dublin and present first-rate theater. During the Dublin Theater Festival in the fall, dozens of foreign troupes perform. The Dublin Grand Opera Society and Dublin City Ballet are not world-class companies but do provide appealing entertainment. The RTE (Radio Telefis Eireann) Symphony Orchestra performs regularly at the National Concert Hall. Many visiting chamber groups and soloists keep the musical calendar full. For traditional Irish music, attend major concerts or simply frequent one of the "singing pubs," where informal sessions are regularly held. Dublin has several cabaret shows, mostly a combination of folk musicians, singers, dancers, and comedians. Choose from among several discos, nightclubs, and ice-skating rinks for an evening out. The most complete guide to regular and changing events is published in the biweekly magazine, In Dublin. A publication by the Dublin Tourism Board, The Events Guide in Dublin, is published biweekly and is also a good guide. Many music festivals are held during the year. Among the more interesting are the Wexford Opera Festival, the Kilkenny Arts Week, and the Festival of Music in Great Irish Houses. The Royal Dublin Society's Spring Show, similar to a U.S. county fair, and the Horse Show in August present trade, livestock, and flower displays and some of the finest horse and pony jumping in Europe. 
Dublin has many restaurants. Some are expensive, and the quality is generally excellent. Basic meals are wholesome and filling. Many pubs serve lunch and some have evening meals available. Numerous clubs and classes in Dublin are open for membership and include: hunting, swimming, horseback riding, boating, yachting, shooting, fishing, hurling, Gaelic football, handball, squash, tennis, rugby, soccer, athletic, tenpin bowling, lawn bowling, cricket, camping, hiking, cycling, dieting, automobile, social, and cultural. Americans living in Dublin include business representatives, students, spouses of Irish citizens, and many U.S. citizens of Irish background who reside in Ireland. American women can join the American Women's Club. In addition to regular meetings, the club offers diverse interest groups and courses on Irish cultural heritage and tours. The International Women's Club formed in 1982. The Club is composed of representatives from the various missions posted in Dublin, foreign women who have resided in Dublin a long time, and representatives from Ireland. The Irish people are noted for their hospitality and affability. Ties between Irish and American families can be a key feature of Irish American relationships. Social entertainment outside the home usually consists of restaurant dinners or receptions. Members of the Rotary Club and Masonic Lodges can also attend regular meetings. Cork, on the River Lee, is a principal port city with a long history of rebellion against English oppression. It is said to date from the seventh century, and was occupied and walled about two centuries later by the Danes. It established allegiance to England in 1172 but, during and after the Middle Ages, experienced much discontent and rebellion. Cork figured prominently in the 1920 fight for independence. Many of its beautiful public buildings were destroyed during the disturbances, and its lord mayor was assassinated. Cork, whose old meaning is "marsh," has a population of approximately 133,000. It is Ireland's second largest city and a major shipping and brewing center. On Great Island in Cork Harbor, is Cóbh (formerly Queenstown), the starting point for the hundreds of immigrant vessels sailing for the New World in the last century. Cork received its charter in 1185 from Henry II of England, and recently celebrated its 800th anniversary as a city with parades, festivals, regattas, and a full season of drama and music. Historical pageants revived ancient stories and traditions. The city of Cork offers many attractions, among them noted University College (formerly Queen's); a fine municipal school of art with renowned galleries; churches and cathedrals, including St. Finn Barre's, on whose site the original community was established; a fascinating open-air market; and a popular race course. The Royal Cork Yacht Club, the first of its kind in the world, was founded in 1720 at the seaside village of Crosshaven in Cork Harbor; it remains the site of international races and Irish championships today. A few miles from Cork is the mecca of Ireland's tourist attractions, Blarney Castle, whose famous Kissing Stone is reputed to bestow the gift of eloquence (or, more specifically, skillful flattery). The castle is in two sections—the narrow tower and battlements and, below, the fortress in whose wall the Kissing Stone is set. The small village of Blarney, now a craft center, was once a linen and wool hub. 
A number of market and seaport towns surround Cork, some in the spacious upland country to the northwest, others in the rolling farmlands and along the coast. Limerick, in the southwest of Ireland, is a familiar spot to the hundreds of thousands of travelers who use nearby Shannon Airport. It is a city replete with relics of Ireland's past, but also a bustling business, dairy, and agricultural center, and a hub for the salmon industry. Limerick is famous for the making of beautiful lace. The population here is about 56,200, but a drive through the narrow, crowded streets gives the impression of a much larger city. During rush hour, traffic often is at a standstill. Limerick was England's first stronghold after the Revolution of 1688, and became known as the City of the Violated Treaty, a reference to the oft-violated agreement of political and religious rights which was signed with England in 1691. The Treaty Stone is preserved as a monument to the breached covenant. Limerick was a Norse settlement in the ninth and 10th centuries, and was chartered in 1197. King John's Castle, built in the following century, is among the structures remaining from that era. St. Mary's Cathedral, even older, is another interesting historical spot here. Close to Limerick are Adare, Ireland's prize-winning village; and the national forest park of Currah-chase, once an estate belonging to the 19th-century poet, Sir Aubrey de Vere. Galway, the most Gaelic of the Irish cities, faces the Atlantic on the west coast of the republic. The Spanish influence of its early traders still is conspicuous in much of its architecture and in the colorful dress of its people. Galway and the surrounding area are known for unsurpassed salmon fishing (in the Corrib River) and for the many and extensive oyster beds. An annual international oyster-opening competition, the longest running of Ireland's festivals, is held at Clarenbridge in County Galway; until recent years, when the festival became so large that it could no longer be accommodated there, its site was the nearby village of Kilcolgan, on the Weir. The population of Galway proper is about 50,800. In the midst of the Great Famine of the last century, the town was a teeming way station for immigrants bound for the United States. In earlier times, it was known as the "City of the Tribes" because of the 14 families (or tribes) who settled and developed it. Galway became a flourishing center for trade with Spain and France. The city itself is the center of what is called the "haunting wilderness of the west." The surrounding area is Yeats country, and was described by writer Eilís Dillon during Galway's fifth centenary celebration in 1984 as a "land of soft mists and silences." In this part of the country, the Irish language (not generally called Gaelic) is heard often in the shops and pubs and on radio and television. Galway was a major seaport in medieval times but, according to Áras Fáilte (the Ireland West Board of Tourism), the town fell into decline during the next few centuries by backing the losing side in England's civil wars and other upheavals. The famine of 1846-47 produced such heavy setbacks that it was not until the beginning of this century that Galway began its regrowth toward prosperity and prominence. Among the city's many points of interest are St. 
Nicholas' Collegiate Church, built in 1320, and known by legend as the spot where Christopher Columbus attended mass before setting sail for America; University College, a constituent of the National University of Ireland; Lynch's Castle, built in 1600 and now housing a bank; the Claddagh, an ancient fishing village across the river; Galway City Museum at the Spanish Arch; and the new Cathedral of Our Lady and St. Nicholas, built in 1965 on the outskirts of the central city. Across Galway Bay, about 30 miles from the mainland, lie the Aran Islands (Arana Naomh) of Inishmore, Inishmaan, and Inisheer, communities of fishermen and subsistence farmers who live and work much as they did centuries ago. The men still fish in currachs, traditional canvas craft, and the women still spin and weave their wool and knit the famous Aran sweaters which withstand the brutal winds and waters of the Atlantic. Irish is spoken here more than English, and there is a primitive quality to the islands that creates much interest for tourists and native Irishmen alike. The prehistoric architectural remains are in extraordinary condition. Kilronan, on Inishmore, is the chief town. It is possible to reach the islands by boat or air ferry. Waterford, on the River Suir, is a port city in the southeast of Ireland. It has a population of approximately 39,500. Once a walled Danish settlement named Vradrefjord, it is now called Port Láirge in the Irish language. Waterford is probably known best throughout the world for the magnificent and much-coveted lead crystal which is manufactured here, but it also has other major industries, such as meat packing and dairy production. Waterford has many places of interest. The towers of the Franciscan and Dominican monasteries date to the 13th century, a time soon after the charter of Waterford was issued by King John. There are both Catholic and Protestant cathedrals in the city (episcopal sees are located here) and St. John's College, a Protestant theological seminary. Sections remain of the city walls, built at the time of the Danish invasion, as does a massive fortress erected in the early years of the 11th century. Each year, Waterford hosts the Festival of Light Opera, drawing visitors from throughout the British Isles and parts of Europe. Other major activities in the area include horse racing and golf at the nearby resort of Tramore. CASHEL, in County Tipperary, southern Ireland, is famed for its Rock of Cashel, on which are the ruins of an ancient cathedral and tower. Cashel was the seat of the kings of Munster. Legend has it that it was here St. Patrick explained the Trinity by using a three-leaf clover. The town itself is small, with a population of about 2,500, but tourist activity swells its numbers considerably during the summer months. CAVAN, the capital of County Cavan, is located in northeastern Ireland, about 60 miles north of Dublin. Cavan, situated in a rural county, produces bacon. The town developed around a Franciscan monastery during the 1300s; only the bell tower still stands. Cavan suffered damage in 1690 under repeated attacks by William III's English forces. The city has a modern Roman Catholic cathedral. Its population is around 3,300. Situated nine miles southeast of Cork, CÓBH is a city of 6,590 in southwestern Ireland. It was renamed Queenstown in 1849 to honor Queen Victoria's visit, but resumed its ancient name in 1922. 
An important port of call for mail steamers and ocean liners (the Titanic made her last port of call here), Cóbh has excellent facilities for docking. On the dock here is a memorial to the victims of the Lusitania, many of whom are buried in the old church cemetery. The ship was sunk off Kinsale in 1915 by a German submarine, an event that helped draw the United States into World War I. DÚN LAOGHAIRE (pronounced Dun Leary) lies six miles down the seacoast from Dublin. It is the main steamer terminus and mail port on the Irish Sea, and is a major sailing and regatta center. It also is the terminus for the car ferry from Holyhead (Wales). Its Martello Tower houses a James Joyce museum, and some of the author's original manuscripts are kept here. KILKENNY, home of the 16th-century College of St. John, is located in the southeastern part of the country. It has a noted castle and cathedral. Its modern Kilkenny Design Workshops, which encourage and promote the work of Irish designers, have created much interest both in and outside of Ireland. Retail stores connected with the workshops are here in the town, and also in central Dublin. Kilkenny, a parliamentary seat in the mid-14th century, has a population of approximately 10,000. KILLARNEY is a noted tourist spot in the center of the beautiful lake country. Traveling by car from the city, one can drive through the famous "Ring of Kerry," 110 miles of breathtaking beauty and enchantment, and one of the most spectacular drives in all of Europe. An unusual aspect of this journey deep into Ireland's southwest is the surprise of finding palm trees growing in a country thought to be cool and damp most of the year. The coastline temperatures here are warmed by the Gulf Stream, and subtropical vegetation becomes apparent in the farthest reaches of this corner of the nation. The town of Killarney, which is the urban district of County Kerry, has a population of around 8,000. TRALEE, 20 miles northwest of Killarney, is a seaport and the capital city of County Kerry. Its population is about 14,000. It was in this city that William Mulchinock wrote the popular ballad, The Rose of Tralee, during the mid-1800s. WEXFORD, in southeast Ireland, is a seaport city of approximately 12,000 residents. The town was long held by Anglo-Norman invaders, and some of its early fortifications remain. An international opera festival is sponsored here annually in late autumn.
Geography and Climate
The island of Ireland ("Éire" in the Irish language) is divided politically into two parts: Ireland and Northern Ireland. Ireland (informally referred to as the "Republic of Ireland") contains 26 of the island's 32 counties. Northern Ireland contains the six counties in the northeast and has been administered as a part of the U.K. since partition in 1922. The 26 counties cover 27,136 square miles, with the greatest length from north to south being 302 miles and the greatest width 171 miles. Ireland is separated from Britain by the Irish Sea, ranging 60-120 miles across. The central limestone lowland of the island is ringed by a series of coastal mountains. The central plain is primarily devoted to family farming and is also notable for its bogs and lakes. The highest peak is Carrantuohill in Kerry at 3,414 feet. Newcomers are immediately impressed with the beauty and charm of the countryside, which is dotted with historic landmarks and alternating rolling hills and pastures, mountain lake country, and stark sea cliffs. Dublin has a moderate climate. Temperatures range from 16°F to 75°F. 
The mean temperature during the winter is 40°F; in summer, 60°F. Annual rainfall is about 30 inches, distributed evenly throughout the year. The weather is noted for being "soft"; rarely do more than a few days go by without at least a shower. Temperatures occasionally drop below freezing during winter, and light snow sometimes falls. During December, there are about 7 hours of daylight and an average of 1½ hours of sunshine. During summer, the average daily sunshine is 6 hours. Mild winds and fog are common, and winds of gale proportion may occur, especially at night, from November to May. Humidity is fairly constant, averaging 78%. The climate is similar to that of Seattle, London, and The Hague. The population totals 3.62 million. About a million people are in the greater Dublin area, with approximately 480,000 in the city itself. The next largest city is Cork (180,000), followed by Limerick (79,000), Galway (57,000), and Waterford (44,000). A high birth rate and the end of net emigration for the first time since the mid-19th century have led to a remarkably young population, with roughly half under age 30. Although English and Irish (Gaelic) are the official languages, Irish is commonly spoken only in small enclaves, called the Gaeltacht, which are located in the south and west. The government is encouraging a revival of the Irish language, which about 55,000 natives speak. The population is predominantly Roman Catholic (about 92%). The second largest religious group (about 2.3%) belongs to the Church of Ireland, an independent Anglican Episcopal Church. After a prolonged struggle for home rule, Ireland received its independence from the U.K. as a free state within the British Commonwealth in 1921. The constitution was revised by referendum in 1937 and declared Ireland a sovereign, independent, democratic state. When the Republic of Ireland Act was passed in 1948, Ireland left the British Commonwealth. Ireland is a parliamentary democracy, governed by the "Oireachtas" (Parliament) of two houses, an elected "Uachtarán" (President), who is head of state, and a "Taoiseach" (Prime Minister), who is head of government and holds executive powers. The two houses of Parliament are the "Dáil Éireann" and the "Seanad Éireann." The 166 members of the Dáil, called "Teachtaí Dála" or, more commonly, T.D.s, are elected by vote of all Irish citizens over the age of 18 under a complex system of proportional representation. An election must be held at least every 5 years. The Dáil nominates the Taoiseach, who selects all other ministers from among the Dáil and the Seanad (but not more than two from the latter). The President, elected by direct popular vote for a 7-year term, formally appoints the Taoiseach. The Seanad has 60 members, 11 nominated by the Taoiseach, and the rest chosen by panels representing the universities and various vocational and cultural interests. Although the Dáil is the main legislative body, the Seanad may initiate bills and pass, amend, or delay, but not veto, the bills sent to it by the Dáil. Ministers exercise the executive power of the state and are responsible to the Dáil. The "Tánaiste" (Deputy Prime Minister) assumes executive responsibility in the absence of the Taoiseach. Under the constitution, the cabinet consists of 7 to 15 members. Junior ministers may also be appointed. The Taoiseach, Tánaiste, and Minister for Finance must be members of the Dáil. The Taoiseach resigns when his government ceases to retain majority support in the Dáil. 
The three major political parties are Fianna Fáil, Fine Gael, and Labour. Fianna Fáil is Ireland's largest political party and the one that has ruled Ireland more often than any other. Fianna Fáil is currently in a coalition government with the Progressive Democrats, under the leadership of Taoiseach Bertie Ahern, after winning a June 1997 election. The government must call the next election by the year 2002, but also may do so before that time. A merger between Labour and the small Democratic Left was approved by both parties in December 1998. Ireland considers itself militarily neutral and is not a member of NATO. Since 1973, Ireland has been a member of the European Community. Irish law is based on English common law, statute law, and the 1937 Constitution. All judges exercise their functions independently, subject only to the constitution and the law. Appointed by the President, they may be removed from office only for misbehavior or incapacity, and then only by a resolution of both houses of the Oireachtas. Ireland has a multitiered court system. The district and circuit courts have wide civil jurisdiction and, in addition, may try all serious offenses except murder and treason. Most civil and criminal trials take place before a judge and a jury of 12 citizens. The High Court has original jurisdiction over all matters civil and criminal, but normally handles only appeals from the lower courts and rules on questions of constitutionality in an appeal or a bill referred by the President. Its members also sit on the Central Criminal Court and the Court of Criminal Appeals. The Supreme Court is the Court of Final Appeal and is empowered to hear appeals from the High Court, the Court of Criminal Appeals, and the Circuit Court, and to decide on questions of constitutional law. Its president is the Chief Justice of Ireland.
Arts, Science, and Education
Traditionally, the Irish have excelled in the literary arts, from ancient Irish sagas and legends to the rich folklore which plays its part in country life. Anglo-Irish writers such as Jonathan Swift and Edmund Burke were active in the flowering of Irish arts in the 18th century, while the 20th century has produced many writers and poets of note: William Butler Yeats, Seamus Heaney, Frank O'Connor, Flann O'Brien, and the foremost chronicler of Dublin life, James Joyce. Irish dramatists have played an influential role in the development of English-language theater: from Oliver Goldsmith, Richard Sheridan, and Oscar Wilde, to the 20th-century works of George Bernard Shaw, J. M. Synge, Brendan Behan, Samuel Beckett, and, more recently, Frank McGuinness and Martin McDonagh. Each fall, Dublin hosts drama groups from around the world during the Dublin Theatre Festival. During the rest of the year, you may choose from among 6-10 plays each week in the city's large and small theaters. Music plays a central role in Irish culture. The national emblem is the harp, and Irish folk music continues as a lively tradition. Frequent concerts and recitals of classical music are held throughout the year. The National Concert Hall, which opened in 1981, is the venue for several concerts each week. Artists in Celtic and early Christian Ireland excelled in metalwork, stone carving, and manuscript painting. Among the finest examples are the Ardagh Chalice and the Book of Kells. The countryside abounds with the archeological and architectural remains of many periods, including megalithic tombs, ring forts of the Iron Age, medieval abbeys, and castles. 
Around the country, but especially in and around Dublin, are many great houses and public buildings from the 18th century, when architecture and other arts flourished in Ireland. Scientific research in Ireland is supported by several public and private institutions. The regional universities are active in many fields. The Dublin Institute for Advanced Studies specializes in theoretical and cosmic physics; the National Board for Science and Technology is a major source of funding; and the Agricultural Institute is the largest research organization in Ireland. Two private institutions provide significant support for the sciences. The Royal Dublin Society (RDS) was founded in 1713 to encourage the arts and sciences and to foster improved methods of agriculture and stock breeding. The RDS sponsors a Spring Show devoted to these methods and the famous Dublin Horse Show every August. The Royal Irish Academy, founded in 1785, promotes research in the natural sciences, mathematics, history, and literature. The Irish Department of Education provides free primary and secondary education. Most schools are state aided, yet remain private and managed by their individual boards. Almost all have religious affiliations; many are not coeducational. Ireland has two universities: the National University of Ireland (NUI) and Dublin University. NUI has four principal constituent universities: National University of Ireland, Dublin; National University of Ireland, Cork; National University of Ireland, Galway; and National University of Ireland, Maynooth, which is also a seminary and Pontifical University NUI also has two "recognized" colleges: Dublin City University and University of Limerick, which emphasizes applied sciences and business. Dublin University, founded in 1591, has one college, Trinity College, Dublin (TCD). Other third-level institutions include Dublin Institute of Technology, the Royal College of Surgeons in Ireland, a medical school; the Honourable Society of King's Inns, which trains barristers; and the National College of Art and Design. Commerce and Industry The 1990s have been a period of rapid economic development in Ireland. Dubbed Europe's "celtic tiger," the Irish economy in 1999 will likely enjoy the fastest growth of any industrialized nation in the world for a fifth consecutive year (average annual GDP growth has measured 9% since 1994). From being one of the EU's least developed countries in the 1980s, per capita incomes in Ireland have grown from just 69% of the EU average in 1991 to just under 90% of the average by 1998, and now measure an estimated $21,823. Most commentators attribute Ireland's "economic miracle" to the following factors: the decade-old "social consensus" on economic policy between employers, trade unions, and successive governments that has ensured modest wage growth and harmonious industrial relations; low corporate taxes and generous grant-aid for foreign investors; a high degree of macroeconomic stability with low inflation and interest rates; Ireland's membership in the single European market and its adoption of the single European currency, the euro, from 1999; and high levels of investment in education and training. The Irish economy is highly dependent on international trade, with Irish exports of goods and services equivalent to an estimated 93% of GDP in 1998 and imports equivalent to an estimated 81%. In 1998, Ireland had a surplus on the current account of the balance of payments of 2% of GDP. 
Ireland's industrial structure differs from most other developed countries. Much of Ireland's economic growth in the 1990s is the result of rapid expansion by export-oriented, foreign-owned high-tech manufacturing industries, particularly in pharmaceuticals, chemicals, and computer hardware and software (over two-thirds of Irish manufactured exports are produced by foreign-owned industry). Accordingly, at just under 40% of GDP, manufacturing industry accounts for a much higher proportion of total economic activity in Ireland than most other developed countries. In contrast, nongovernment services, which are dominated by retailing, tourism, and finance, are less developed than elsewhere in the OECD. Agriculture, forestry, and fishing, which account for around 6% of Irish GDP, has declined rapidly in importance over the last 30 years, although they are still important employers in rural and peripheral regions of the country. Although Ireland has a market economy, state-owned companies in transport, energy, communications, and finance still account for over 5% of Irish GDP. Total public expenditure as a proportion of total income, at an estimated 33% in 1999, is well below both the OECD and EU average. Although real incomes have improved markedly in recent years, the main benefit of rapid Irish economic growth has been a dramatic increase in new jobs. This has helped reduce unemployment, increase female participation in the labor force, and bring Irish workers living abroad back to Ireland. Unemployment fell to 6.7% in March 1999, down from an average of 15.6% in 1993. The main danger facing Ireland's fast-growing economy is overheating. Shortages of both skilled and unskilled labor contributed to growth in average hourly industrial wages of around 6% in 1998, up from an average growth of 3.6% in 1997. Other economic challenges facing Ireland include widening income disparities caused by rising wages for skilled workers in Ireland's high-tech industries, increasing infrastructure congestion (as evidenced by the traffic " gridlock " in Dublin's streets), fast growth in house prices, and the widening economic divide between the prosperous southern and eastern regions of the country and generally poorer regions along west coast and border areas of the country. Ireland's economic "golden age" has been accompanied by an intensification of U.S.-Irish economic relations, both in terms of trade and bilateral investment. In 1997, the U.S. overtook Germany to become Ireland's second largest trading partner, behind only the U.K. Total exports from Ireland to the U.S. in 1998 were valued at $8.7 billion, while total imports into Ireland from the U.S. were valued at $6.8 billion. U.S. companies operating in Ireland account for much of the fast growth in Irish exports to the U.S. According to the U.S. Department of Commerce, the stock of U.S. investment in Ireland in 1997 was valued at $14.5 billion, up from $8.4 billion in 1995. Furthermore, in 1997 Ireland was estimated to have received almost 25% of all greenfield investment by U.S. companies into the EU that year. Of the 1,500 foreign companies in Ireland in March 1998, the U.S. had 570. These U.S. operations employ almost 70,000 workers in Ireland, which represents a staggering 5% of total employment. In May 1998, Ireland, along with 10 other EU member states, was confirmed as meeting the requirements for EMU participation. 
Accordingly, on January 1, 1999, the Irish pound ceased to exist as Ireland's national currency, and the new single European currency, the Euro, became Ireland's official unit of exchange. Irish currency will continue to circulate until the introduction of Euro notes and coins in 2002. Although the Euro will not exist in physical form until 2002, from 1999 on, inter-bank, capital, and foreign exchange markets will be conducted in Euros. All government debt will be redenominated into Euros, and stock prices will also be quoted in Euros. Retail banks will also be obliged to offer private and corporate customers Euro bank accounts. The loss of national control over monetary and exchange rate policy presents a major challenge to Irish policymakers. Under EMU, changes in wages or employment levels, rather than adjustments to exchange and interest rates, are the primary mechanisms for the economy to react to external economic shocks. For the average Irish citizen, however, this first stage in progress toward EMU has had no concrete immediate effect. The Irish Congress of Trade Unions (ICTU) is the umbrella organization for most of Ireland's trade unions. Since 1987, collective bargaining has occurred in the context of national economic programs negotiated by representatives of government, trade unions, employers, farmers, and other "social partners." These 3-year programs establish minimum-wage increases and broad economic and social objectives, and have been credited with Ireland's strong economic performance and sustained period of peaceful industrial relations during the 1990s. Just less than half of the Irish workforce is unionized. Dublin boasts dealerships and service facilities for most European and Japanese vehicles. Many drivers prefer smaller vehicles for negotiating the narrow, winding roads. Traffic moves on the left in Ireland, and right-hand drive vehicles prevail, though they are not mandatory. If you import left-hand drive vehicles, you should be aware that not only will driving be more difficult, but also, liability insurance premiums will be higher by about 20%. Third-party liability insurance is mandatory and must be purchased from a local insurer. Insurers offer discounts for recent clean driving records, so bring a letter from your insurer indicating the length of claim-free driving. Currently, gasoline costs about $3 a gallon on the local market. Dublin city bus service is uneven and ceases after midnight. A commuter train line follows the coast north and south of the city. Buses and trains are usually crowded. Taxis are expensive and may be difficult to obtain. Many are radio-dispatched, however, and most are clean and well maintained. Outside of rush hours, taxis may be hailed on the street with varying degrees of success. All of the larger cities in Ireland can be reached from Dublin by private auto, rail, or intercity buses within 5 hours. Only intermittent stretches of four-lane highways exist in Ireland. Most roads outside the city are narrow, winding, and need repair. Ferryboats travel between Dublin and Holyhead (Wales); Rosslare and Fishguard (Wales); Rosslare and Pembroke (Wales); Rosslare and Le Havre (France); Rosslare and Cherbourg (France, March-October only); Cork and Le Havre; Cork and Roscoff (France); Cork and Swansea (Wales). London is 1 hour by air from Dublin, and flights to the Continent from Dublin are frequent. Delta Airlines, Continental, and Aer Lingus fly directly to Dublin from the U.S. 
Telephone and Telegraph
Modernization of the telecommunications network has been underway to bring an outdated system into line with the high technology employed in other countries. You can dial directly to about 180 destinations, including the U.S., and reach about 40 more via the operator. Improvements have progressed to such an extent that, except for the more remote areas and parts of Dublin, a telephone can be installed within 6-10 weeks of application. Airmail, air express, and surface mail between the U.S. and Ireland are reliable. International airmail between Dublin and New York takes about 8 days, and surface parcels take 4-6 weeks.

Radio and TV
An autonomous public corporation, Radio Telefis Eireann (RTE), operates the radio and TV services with revenue from license fees and advertising. RTE radio broadcasts nationwide on three VHF networks in stereo: Radio One, 2FM (a popular music channel), and Raidio na Gaeltachta/FM3 Music (Raidio na Gaeltachta is the Irish-language program, and FM3 Music is a quality/classical music station). Radio One and 2FM also broadcast on AM nationwide, and Raidio na Gaeltachta also broadcasts on AM in the Irish-speaking areas (the Gaeltacht). There are also many independent radio stations playing a variety of music. RTE TV is broadcast nationwide on two channels, RTE 1 and Network 2. An independent station, TV3, started broadcasting during 1998. The stations broadcast from early morning until approximately 4 a.m. or 5 a.m. on weekdays, with extended schedules on weekends. In addition, with a cable system (available in most parts of Dublin) you can receive two BBC channels, two British ITV (Independent Television) channels, and sports and movie channels. U.S. TVs will not receive local broadcasts without expensive modifications.

Newspapers, Magazines, and Technical Journals
Seven daily papers are published in Ireland, all in English. Most emphasize local and national news, but the Irish Times provides more international coverage than the others. The leading British dailies and the International Herald Tribune appear on Dublin newsstands on the day they are published. A few popular U.S. magazines are also promptly available at the newsstands, e.g., the overseas editions of Time, Newsweek, Scientific American, and Omni. British journals are freely available. Magazines ordered by U.S. subscription are much less expensive but arrive about 3 weeks late by pouch. Dublin has several good bookstores; some offer secondhand books at reasonable prices. The public libraries are an alternative.

Health and Medicine
Competent specialists in all fields of medicine and dentistry provide satisfactory services, but their equipment is not always as modern as in the U.S. Obtain any special medical or dental treatment before coming. Drugs and medical supplies of almost every variety are sold locally, although some drugs normally found in the U.S. and other countries are not available. Public hospitals and private nursing homes provide adequate treatment. Children under 12 are admitted only to children's hospitals. The sewage system is modern, and community sanitation is good, although below that of some U.S. cities. Water is potable and fluoridated. Food handling is sometimes below U.S. sanitary standards. Because of the cool climate, refrigeration is used to a lesser extent, and meats may be displayed in uncovered cases. Nevertheless, these practices do not appear to present a special health hazard. Among the general population, rheumatism and arthritis are common.
Young children are now vaccinated against measles, mumps, and rubella with the MMR vaccine at about 15 months. Respiratory diseases such as bronchitis and asthma, glandular infections, and head colds are prevalent. No serious epidemics have occurred in Ireland for several years. All children should have the triple vaccine (tetanus, diphtheria, and pertussis) and TOPV for polio. Immunizations of all kinds are available in Dublin.

NOTES FOR TRAVELERS

Passage, Customs and Duties
A passport is necessary, but a visa is not required for tourist or business stays of up to three months. For information concerning entry requirements for Ireland, travelers can contact the Embassy of Ireland at 2234 Massachusetts Avenue N.W., Washington, D.C. 20008; telephone: (202) 462-3939, fax: (202) 232-5993, or the nearest Irish consulate in Boston, Chicago, New York, or San Francisco. The Internet address of the Irish Embassy is http://www.irelandemb.org. Americans living in or visiting Ireland are encouraged to register with the Consular Section of the U.S. Embassy and obtain updated information on travel and security in Ireland. The U.S. Embassy in Dublin is located at 42 Elgin Road, Ballsbridge, tel. (353)(1)668-7122; after hours tel. (353)(1)668-9612/9464; fax (353)(1)668-9946.

Ireland has strict quarantine laws. Most pets entering the country must be placed in quarantine for 6 months at the owner's expense. There is only one quarantine facility in Ireland; reservations are necessary, and the process can cost as much as $4,000. An excellent selection of all breeds of pets, reasonably priced, may be found in Ireland. Importation of certain types of birds is prohibited.

Firearms and Ammunition
Certain types of nonautomatic firearms and ammunition may be imported into Ireland.

Currency, Banking, and Weights and Measures
As a member of the European Union, Ireland uses the Euro as its monetary unit; it is divided into 100 cents. Coins in circulation are 1, 2, 5, 10, 20, and 50 cent and 1 and 2 euro. Bank notes are 5, 10, 20, 50, 100, 200, and 500 euros. The exchange rate is approximately 1.15 euro to US$1. All banks in Dublin handle exchange transactions, and many offer Irish pound checking accounts. Banks will cash a personal dollar check but might delay payment. Dublin has branches of Citibank, Chase Manhattan Bank, Bank of America, and First National Bank of Chicago. The avoirdupois weight system and long measure are used. Liquid measure is based on the British imperial gallon. Ireland adopted the metric system in 1976 and is gradually eliminating nonmetric measures.

Holidays
Jan. 1…New Year's Day
Mar. 17…St. Patrick's Day
May (first Monday)…May Bank Holiday*
June (first Monday)…June Bank Holiday*
Aug. (first Monday)…August Bank Holiday*
Oct. (last Monday)…October Bank Holiday*
Dec. 25…Christmas Day
Dec. 26…St. Stephen's Day
Official Country Name: Ireland
Language(s): English, Irish (Gaelic)
Number of Primary Schools: 3,391
Compulsory Schooling: 9 years
Public Expenditure on Education: 6.0%
Foreign Students in National Universities: 5,975
Educational Enrollment: Primary: 358,830
Educational Enrollment Rate: Primary: 104%
Student-Teacher Ratio: Primary: 22:1
Female Enrollment Rate: Primary: 104%

History & Background
The Republic of Ireland covers 27,136 square miles of the second largest of the British Isles and is bordered to the northeast by Northern Ireland; in the past it was known as the Irish Free State (1922-1937) and Eire (1937-1949). Eire is still the name of choice for many people, which causes some confusion outside the country's borders. The capital city is Dublin, home to one-third of the Irish Republic's population. During the second half of the twentieth century, the presence of so many fine higher education institutions in Dublin led to the renovation or restoration of many neighborhoods that had been reduced to slums. The predominant religion is Catholicism. Ireland's 26 counties have been free of British rule since 1922, which has resulted in some educational changes, including great emphasis on the Irish language, literature, customs, and history.

Beginnings: Ireland's history began during the Mesolithic Era, when hunters from Britain and likely even southwest Europe first settled this island west of present-day Great Britain. The country began to show signs of civilized development in the Neolithic period, about 4000 to 2000 B.C. These pre-Celtic settlers were a communal people whose language has been lost.

Celtic & Roman Influences: Ireland's rugged beauty has always attracted settlers and conquerors. The best known of these were the Celts, likely hailing from the Iberian Peninsula (Spain and Portugal) and known for their skills as goldsmiths and artisans. Shortly before the birth of Christ, Celtic was the primary language of the country under the rule of Celtic chiefs. For hundreds of years, the Celts failed to develop a sophisticated form of writing beyond a means of documenting family names. In 55 and 54 B.C., Julius Caesar won some skirmishes with the natives he encountered in Britain. His documentary writing preserved his experiences, and schoolboys in England and America at one time translated them for practice. Caesar referred to Ireland as Hibernia, translated literally as the place of winter.

Catholic Church's Preservation of Scholarship: During the Middle Irish period, historians believe, poets and scholars were trained at church schools. The evidence comes from writings that survive as clues to the period. Irish tracts reveal that a mentor called a foster father tutored a pupil known as a felmac. Scholars were trained in Irish law, history, and literature, as well as in Latin. By the fourteenth century, these schools had changed: instead of religious scholars acting as tutors, non-clergy scholars taught subjects, such as verse writing, to their pupils. Students of medicine learned from Irish texts that had been translated from English medical books. After Caesar, the name most renowned and most closely associated with Ireland is St. Patrick (circa 385-461).
In addition to his many successes as a missionary, Patrick is said to have encouraged the preservation of the old warrior chants by having the words set down for posterity. Although the details of Patrick's life are blurred (partly because his own Latin writings show no mastery of the subject), he was a Briton whose father was a Roman bureaucrat. While young, he was captured and taken to Ireland, where he spent six years in slavery as a herder; he escaped and was schooled in Latin and theology, though precisely where is mere speculation. Patrick returned to Ireland in 432 and set out to convert to Catholicism the people whose nation he had come to love. One result of these conversions is that by the sixth century Ireland had several established monasteries that were havens for the preservation (and copying) of manuscripts, culture, and learning.

After an invasion by Norsemen in the eighth century, Ireland was under Viking influence until the Irish king Brian Boru fashioned an army that fought for independence. In the eighth century, the name Gael replaced Erainn as the term for the people of Ireland; the natives used Gael to distinguish themselves from the Viking conquerors. In time, the term Irish came to be applied to the people of this nation, even though it was derived from a Welsh word meaning "savage."

During the early Middle Ages, Ireland maintained a reverence for the teachings of the Church and for Church documents. In turn, the monasteries preserved the old Irish tales and accounts of heroes and everyday life. These clearly would not have survived had the monks not copied them into their manuscript books. Ironically, it was the Catholic nation's policy of putting no local ruler above the Pope that led to Ireland's longstanding domination by Great Britain. The only pope of English ancestry, Adrian IV, in a political agreement gave Henry II, the former Duke of Normandy (who gained control of England by invasion), permission to serve as overlord of Ireland. The Irish disputed this decision to turn Ireland into a fiefdom as an illegitimate transfer of power. Lands owned by the Irish were given to absentee landlords in Britain, creating a peasant class existing in woeful ignorance and poverty. In spite of Henry II's edicts maintaining separate spheres of church and state, in Ireland that line of separation has frequently dissolved, even into the twenty-first century.

Political, Social, & Cultural Bases: Just as religion influenced the daily life, social divisions, and political upheavals of Irish life for centuries, so too has it had a profound effect on education in the Emerald Isle. That very upheaval, and strong allegiances to the Church, interfered with the development of a unified system of education in Ireland. In the late 1500s, coinciding with the growth of Protestantism in the country, non-Catholics had decidedly better schools. While Protestant diocesan schools and "royal schools" set up by the Crown benefited the wealthier Protestant class, charity schools inadequately supplied the needs of the children of poor Protestants during the seventeenth and eighteenth centuries. The Catholic poor were largely ignored, their children termed urchins. One minister in 1712 said that only when all the needs of the poor Protestant children were met should the schools try to assist the Catholic children. The charity schools were run by the Church of Ireland and were similar to those in Britain.
Funding was supplied variously by parishes, landlords, clergy, and district governing boards. The Church of Ireland was declared the state church in 1537 and remained so until 1870. In 1539, the monasteries were declared dissolved, although it took some years for many to disappear. However, during much of the sixteenth century, nearly all areas of the country outside Dublin and areas of Northern Ireland were Catholic. The Crown brought Scottish settlers to Northern Ireland who were members of the Church of Ireland. During the closing years of the sixteenth century, the Church of Ireland made a conscious attempt to establish parishes in every county of the nation.

The royal schools were grammar schools started at the insistence of James I, the king of England, Scotland, and Ireland, who ascended the throne with the help of Elizabeth I. (Elizabeth, in 1587, executed his mother, Mary Stuart, Queen of Scots, with no protest from James, after she was found guilty of plotting Elizabeth's death.) James, who authorized a version of the Bible still used today, was an erratic man who believed in the divine right of kings. These royal schools were started in the 1600s by Church of Ireland bishops but, perhaps because they were founded under coercion, had many deficiencies and poor supervision.

Higher Education History & Background: Like other areas of Irish public life, higher education has always had its confrontations, although much less so in the late 1990s and early twenty-first century. In 1591 (or 1592, as some claim), the oldest continuously operating university in the country, the University of Dublin, was founded, with Trinity College as its only college. Throughout its history, the school's agenda and even its curriculum displayed a marked Protestant orientation, though the state had a loosely enforced policy of giving no money for denominational higher education. In spite of politics and occasional religious rancor, Trinity has been, since the 1700s, one of Europe's respected institutions, highly competitive and fiercely proud of the highest academic standards. Its senior fellows ran the school as a sort of personal fiefdom, and seniority among fellows, rather than scholarly accomplishment, was used to establish a pecking order. By 1792, the institution enrolled 933 students.

The Catholic Church in Ireland entered the realm of higher education in 1851, establishing the Catholic University with famed author and educator John Henry Newman as rector; Newman, a one-time Church of England minister who converted to Catholicism and became a Cardinal, was world famous for his book The Idea of a University and other writings. In 1883, it became University College, Dublin, operated under the control of the Jesuit Order (also known as the Society of Jesus). When all of Ireland was under British rule, Catholics in the nineteenth century were given first the Queen's University and then the Royal University of Ireland. But the government found it could not run a school catering to just one denomination, and the Royal University became open to anyone passing entrance requirements. Until 1970, when a long-standing Catholic boycott was lifted, Catholics tended to avoid enrollment at the Anglican-run Trinity College in Dublin, perhaps the best-known Irish university. Some Irish students of Presbyterian background also preferred to pursue their higher education in Scotland rather than accept the dominion of the established faith.
In truth, this religious atmosphere could not be escaped at Trinity, since many prospective religious leaders of the Church of Ireland took their degrees there. After 1970, the student population became more diverse.

Constitutional & Legal Foundations
The fundamental rights of citizens to an education are among the rights guaranteed in Article 42 of the 1937 constitution of Ireland. The constitution was largely prepared by New York-born Eamon de Valera (1882-1975), Ireland's most visible leader following the granting of independence from Britain and the country's two-time president. The constitution acknowledges the responsibility of the nation to work with parents to entitle children to receive an education without cost to the family.

There also have been a number of important statutes directly concerning education. For example, the Medical Act of 1886 was concerned with ensuring the quality education of doctors; the law stated that graduates had to be educated in surgery, medicine, and obstetrics. The education of girls was provided only sporadically until 1892, when a law mandating compulsory attendance was passed; at the time, it assured students of a primary school education and little more. In 1972, the compulsory education law was changed, raising the school-leaving age to 15. The Vocational Education Act of 1930 established Vocational Education Committees (VECs) throughout Ireland. Such committees oversaw what was then defined as "technical and continuation education." Today, about 10 percent of costs in this area of education are VEC-funded, while the Department of Education foots 90 percent. Also related to education are the provisions of the Dublin Institute of Technology Act, 1992, and section 9 of the Universities Bill, 1997, which formalized by statute the process for deciding whether a new school of higher learning should be granted a charter.

If there is a deficiency in Irish education, it has been the lack of a guiding educational philosophy. However, the new curriculum that became effective around the turn of the twenty-first century may be a step in that direction. Child-centered learning is the goal, along with developing skills in all subjects, particularly science and instructional technology, while also concentrating on training students in the traditional basic subjects.

Around 1800, the Anglican Church was responsible for supervising the education of boys and girls at both the primary and secondary levels. But many areas of the country that were heavily Catholic were resistant, and some rural Catholic areas either had no schools or offered little financial support for them. There were a few superior schools in Ireland, the education historian R.B. McDowell has written, such as the well-funded Royal Schools at Armagh, Enniskillen, and Burrowes, but these were the exception. Hence, in many pockets of the country, Ireland relied upon numerous private academies, taught by schoolmasters of varying skill and education, to educate students in cities and rural towns. Some of the schoolmasters were clergy. Others were women who limited their students to young ladies (in the parlance of the time). Some offered room and board, or meals only, for the young people. Standard subjects were elocution, arithmetic, bookkeeping, foreign languages, and geography. The girls' schools added "finishing school" classes to raise cultured pupils.
Almost as in the Middle Ages, when scholars traveled far and wide to recruit students and teach, in Ireland during the late 1700s and early 1800s poor, learned men traveled about to offer classes in barns and anywhere else a few students might be assembled. The schools were nicknamed "hedge" schools because they were as apt to be taught under the shade of a hedge as in a building, and they were of uneven quality, as likely to be taught by an itinerant, unqualified teacher as by a scholar. In time, however, even some of the secret, underground hedge schools became permanent fixtures in a community, and the classrooms sometimes were the equivalent of mainstream classrooms, with proper textbooks instead of merely a handy Bible or popular novels. Nonetheless, Catholics in particular considered them a better alternative to Protestant schools or to no schooling at all. Estimates during the 1820s were that as many as 400,000 pupils were in attendance at hedge schools, and there were 9,000 such schools in existence in 1824, according to The Oxford Companion to Irish History.

In sharp contrast to the hedge schools, a handful of day schools associated with the Church of Ireland opened in Ireland that were the equivalent of day schools for younger children in England. In 1811, impressed with those schools, some business leaders from Dublin (Quakers and members of other sects) resolved to try to improve educational opportunities for poverty-stricken youth. These reformers called their organization the Society for Promoting the Education of the Poor in Ireland, and their crusade resulted in the state granting funding. The Society also admired the pioneering work of the English educational reformer Joseph Lancaster, founder (in 1801) of a free elementary school that organized one-room schoolhouses for the poor. Teachers enlisted their better students and designated them as monitors to train younger or slower-learning peers. Following Lancaster's precepts, a monitorial system was installed at the Society's headquarters at Kildare Place in Dublin, and the hope was that superior teachers could be trained there. Each student monitor was given a bench of 10 students to teach. In contrast to the brutal methods of some schoolmasters, Kildare Place eschewed beatings in favor of shaming miscreants. But the daily practice of Bible reading infuriated Catholics in the country; they refused to accept the validity of the King James Bible and disagreed with the school's refusal to interpret the scripture reading for students. By 1831, funding for the school had dried up and gone to the national schools, where separation of church and state was followed in theory, though not in practice.

Enough students possessed sufficient literacy for the cities to support at least one newspaper and occasionally many papers. More sophisticated subscribers read Hibernian Magazine. Theatres did a brisk business entertaining a story-loving people. Dublin supported a lending library, and booksellers made a living off scholars and the well-to-do. But McDowell, the critic, said that the general state of Irish letters was poor then; the glory years of the great Irish playwrights at the Abbey Theatre and of poets such as Yeats were still a century away. McDowell stressed that Ireland failed to measure up to the intellectual accomplishments of Scotland, let alone Britain.
Perhaps the most significant step in the establishment of a countrywide, state-aided system of elementary schools came in 1831, championed by Lord Edward G.S.M. Stanley. Conflicts immediately arose over the matter of keeping religious influence out of schools, because the elementary schools were told that churches had the right to provide pupils with supplementary religious education. Even though, in theory, no aid was to be given to denominational primary schools and emerging secondary schools, in reality religious influences permeated all levels of the educational system, particularly the school boards, which were headed by priests or vicars, depending on the district's religious makeup. At first, Protestants were the main critics of the "godless" schools, while Catholic leaders, worried about high illiteracy rates among their people, generally supported the state-run educational system. Eventually, however, Catholics came to despise the system, saying students were exposed to pro-British and anti-Catholic influences.

Nonetheless, the formation of national schools was an important step forward in the history of education in Ireland. It was intended to give an equal education to all pupils without meddling from the churches. It gave Irish schools a semblance of structure, and it established a policy under which local districts picked up their fair share of the costs of teacher salaries, school sites and buildings, and schoolbooks. During the nineteenth century, as classes were taught in English, the Irish language was gradually downplayed as the native tongue. During the twentieth century, following a great surge of nationalism after Ireland gained its independence from Britain, there was a clamor to restore the teaching of Irish in schools at all levels. However, as native speakers age and die, some linguists predict that the "true Gaeltacht" dialect may disappear; others are dedicated to its preservation. With Catholicism further losing its influence in the twenty-first century, some nationalists feel it is important to preserve all forms of the Irish tongue as a way to unify the nation.

Literacy: The INTO teachers union in 1998 founded a committee for the study of literacy issues in Ireland. The union announced that it was looking into strategies for assisting children with literacy problems. The committee concluded that Irish children too often perform below the literacy levels of children in other European countries, particularly in reading. INTO concluded that teachers must be recruited who are specially trained in developmental studies and remedial education. In addition, areas of particular concern to INTO are adult literacy problems and the literacy deficiencies of people living in disadvantaged areas of the Irish Republic.

Special Needs Education: In 1998, Micheal Martin, Minister for Education and Science, announced that the government had made the needs of special education students a priority. In particular, the government has ensured that children with autism will have automatic access to special classes, with trained teachers available and the support and infrastructure to serve their needs. The pupil-teacher ratio for special needs youngsters is 6:1. The cost of the reforms in 1999 was estimated at nearly 4 million pounds.

Compulsory Education: In Ireland, compulsory education theoretically begins at the age of six.
However, given the increasing role women have played in the Irish labor force, the majority of children enroll by the age of four or five. In 2000, some government spokespersons advocated cutting off free primary education at age 18, but the proposal was met with indignation from parents and expressions of outrage in the media in favor of giving slow learners all the time they need to graduate.

Female Enrollment: As in other countries, the education of girls and women was slow to take hold as a concept in Ireland. During the Middle Ages, Ireland truly was a land living in the Dark Ages when it came to schooling females. There were some gains in the 1500s, but those were lost the following century. Not until the 1700s did some women from wealthier backgrounds show their aptitude for serious study, and a number of female poets, writers, and intellectuals contributed significantly to Irish letters. That something of a turnabout had been achieved by 1831 is seen in the creation of a national school system that provided the same curriculum for males and females, as well as access to scholarships for training as teachers. However, right up to the end of the 1870s, those schools that charged tuition put their emphasis on graduating ladies able to take their place in society. Finally, in the late 1870s and 1880s, attitudes in Ireland changed dramatically, and women earned the right to pursue rigorous studies at the university level, forcing schools at the lower levels to upgrade curriculum choices for women. At individual universities, administrators showed varying degrees of acceptance of female equality in education. In Belfast, Cork, and Galway, women who could afford the tuition took classes alongside males in the 1890s, but Dublin schools of higher education resisted compliance until 1910. With the worldwide spread of feminism in the last half of the twentieth century, many inequities in the education of females came under criticism. Slowly, the country moved to enable women from lower-income families to gain an education with the aid of public funding targeted for that purpose.

Academic Year: Many Irish schools are in session far fewer days than schools in other industrialized nations. The exceptionally short school calendar has been linked to the dismal scores of many Irish students in science and mathematics, according to educational experts interviewed by The Irish Times in 1995. Only 35 percent of Irish schools remain in session for more than 175 days (with a high of 200 days), while 90 percent of schools in Scotland and England do so. While 65 percent of Irish 13-year-olds go to school for only 151 to 175 days, in England and Scotland less than 3 percent of students are in school for fewer than 175 days. Irish 13-year-olds scored next to last in a ranking of competing countries in science and eighth out of 14 in mathematics. In 2001, as secondary teachers were involved in a salary dispute, commentators noted that if higher pay scales were granted, teachers might be asked to teach additional school days to equal the number scheduled by English and Scottish schools.

Preprimary & Primary Education
Irish children tend to start school at a younger age than children in other countries. Both junior and senior infant classes are the equivalent of preschool classes in most other countries.
Ninety-five percent of all five- to six-year-olds are in senior infant classes, and 59 percent of four- to five-year-olds are in junior infant classes. Provision in national schools for children aged four and five is an integral part of the regular school system. Children in infants' classes follow a prescribed curriculum that was introduced in September 1999. Teachers are trained national school teachers; however, parents and media critics are loud in their denunciation of the preprimary school program and what is perceived as less-than-strong interest on the part of the state in this area. Eleven major reports from 1980 to 2000 have criticized the preschool program. According to the latest figures (1998), slightly more than one percent of three-year-olds in Ireland were in school full-time. The Department of Education, in addition to regular classes offered mainly at private preprimary schools, also sponsors an Early Start Preschool pilot program, a program for children with disabilities, and the Breaking the Cycle pilot project for at-risk children.

Children are not legally mandated to attend school until their sixth birthday. Nonetheless, nearly 100 percent of five-year-olds and 52 percent of four-year-olds attend primary schools. Four-year-old girls are four to five percentage points more likely to be in primary school than are boys. Local communities pay for the school site and 15 percent of the capital costs of primary schools. The state pays 85 percent of capital costs, plus an additional 10 percent in areas designated as disadvantaged. The Department of Education pays teachers' salaries. Schools are given a grant covering a portion of expenses such as lighting, heating, cleaning, maintenance, and teaching materials. At this level, Ireland's educators have been asked to place increased emphasis on active learning and problem solving in their classrooms.

Parent satisfaction with primary schools has generally been high. However, the Irish National Teachers Organization in 1994 conducted a study of six comparable schools in Limerick and Derry, finding wide differences in school funding between the two jurisdictions. Primary schools in the Republic of Ireland were said to be "under-funded and under-resourced" compared to Northern Ireland schools. Schools in the Republic of Ireland also displayed higher pupil-teacher ratios than their counterparts in Northern Ireland. The findings created considerable concern in Ireland and led to cries for curriculum reform and additional government funding. Six years later, a curriculum reform committee and consultants had addressed most of the major weaknesses in the primary system.

A new primary curriculum, the first since 1971, was approved by the Minister of Education and introduced by the Department of Education in 1999-2000 to 3,000 primary schools, although some courses, such as a social, environmental, and science course, were delayed until 2002. Initial reaction to the curriculum was positive from both an important teachers union and the National Parents Council, both of which were involved in the curriculum discussions. More than 10 years in the writing, the new curriculum attempted to address low rankings in science among Irish students, which had earned schools the criticism of media writers and parents. The curriculum emphasizes child-centered learning with skills development. Math (with an emphasis on problem solving), history, and geography were also given emphasis, according to The Times Educational Supplement.
Science; educational drama; and social, personal, and health education were added to the new curriculum. The changes were implemented by 21,000 primary school teachers for their 460,000 pupils. The curriculum was broken into six main areas and then subdivided into 11 subjects. Other important aspects include a revised Irish curriculum "based on a communicative approach," a new English syllabus, and updated educational methods in language learning, reading, and writing. In the Republic of Ireland, the National Council for Curriculum and Assessment (NCCA), although not a statutory body, takes an advisory role to assist with the formation of a new curriculum. The NCCA consulted with course committees for each subject before sending a recommended curriculum to the Department of Education.

Textbooks: With the adoption of the new curriculum, educators and administrators have also discussed what they perceive has been an over-dependence on textbooks in the primary school curriculum. Educators say that too many teachers allow textbooks to drive their classes rather than using them in moderation as a resource.

A national system of education intended to be nondenominational was established in 1831, but struggles between Catholics and members of the Church of Ireland made that goal nearly impossible to accomplish. That principle was reaffirmed in 1878 when the government established the Intermediate Education Board. In the first half of the twentieth century, Catholic parochial schools included both minor seminaries and elementary and secondary schools. Facilities were generally aged and decaying. More emphasis was put on religion and the preservation of morals than on academic preparation. Textbooks were outdated. Some of the blame goes to shortsighted religious leaders, but some also goes to the exclusion of Catholics from Irish schools for so many years.

One of the major reforms in Irish education occurred in 1947 when the Education Act provided free secondary education in national schools. Then, in 1963, the minister of education carefully restructured postprimary schools into secondary and vocational programs. This coincided with increased secondary attendance owing to an increase in the birthrate following World War II. The government announced its commitment to education as crucial to the growth of industry and the professions, as well as to the nation's economic health and stability.

Because the Leaving Certificate, administered in the thirteenth year, is the primary entrance requirement for higher education, secondary teachers put considerable effort into getting their classes fully prepared. With only so many students accepted, there is pressure, since even students who graduate in Ireland do not automatically qualify for admission. Far more applicants send in their application papers than can be admitted. Acceptances are based on merit and on scores on the final secondary school-leaving examination. Places in medicine and veterinary studies are especially competitive.

Curriculum Requirements: Republic of Ireland schools have set Irish (Gaelic) as the primary language of instruction since 1922 (it became part of the mandatory curriculum in 1928), although English is so widely used that nearly all Republic of Ireland schools qualify as bilingual. In the Republic of Ireland, the main academic subjects in the curriculum are mathematics, history, geography, and a choice of other recognized subjects, usually science.
A revised curriculum, marked by an increased emphasis on science, is being implemented throughout Ireland. Students are asked to observe, perform experiments, and develop reasoning and inductive skills. Much of the push for increased science emphasis can be credited to Forfás, the national policy and advisory board for enterprise, trade, science, technology, and innovation. Forfás encourages and promotes the development of enterprise, science, and technology in Ireland, including support for education at all levels.

Educational System: Pupils who expect to apply to university take a minimum of six and up to nine subjects. After three years of secondary education, students complete the junior cycle and take the Junior Certificate examination. The certificate measures achievement, but it is not used by universities for admission purposes. At the end of the final year of secondary school, students take the Leaving Certificate. There are two levels of achievement, the ordinary level and the higher level; although both cover the same school material, the higher level requires more sophisticated responses.

Expenditure: The government pays 90 percent of secondary schools' total expenses for approved building and equipment costs. Teacher salaries and allowances, with minor exceptions, are paid by the Department of Education. Schools are expected to operate within the limits of a budget provided to administrators at the start of the school year. A capitation grant pays for ordinary overhead, library books, and part of computer expenses. Until late in the twentieth century, when educators placed increasing value on instructional technology, computers were considered a luxury. If additional funding is required for computers, schools must participate in fundraising activities to meet the costs. Musical instruments and school trips are also paid for with money raised through volunteer efforts. In 1994, critics of fundraising for free schools argued that the practice likely hurts the parents of schoolchildren in disadvantaged areas. Parents who are poor may feel obligated to make contributions and may suffer financially for their payments. Other critics say such parents have enough trouble putting money aside to eventually send their children to college, as the poor of Ireland have long been underrepresented at the higher-education level.

In 2000 and 2001, employers claimed that a shortage existed of workers trained to use computers, which resulted in governmental attention to the perceived oversight. A national project called Schools IT 2000 was set in place to correct the computer shortages in education, and the National Center for Technology in Education (NCTE) was established to administer and coordinate the program. An administrator and four staff members were hired to see that the directive would be carried out. The program is both exciting and extensive. Telecom Eireann gave each school a multimedia computer with an Internet connection, along with a telephone line, free rental of the line for two years, and five hours of free Internet access. Previously, in 1998, the NCTE, together with the Department of Education and Science, provided schools with 15 million pounds in funding to buy 15,000 new computers and equipment under the Technology Integration Initiative scheme. All schools in the free education system at primary and postprimary levels were given generous per-pupil grants.
Because the equipment is of little use without teacher training, another 1.4 million pounds were granted to buy hardware for Teacher Training Institutions, Education Centers, and the School Integration Project. There also were nationwide seminars for teachers, and the NCTE provided hardware specifications and supplier discounts to help schools make wise computer choices. Because teachers are expected to require computer support, the Schools Support Initiative developed a support network called ScoilNet to give advice and assistance. The Department of Education and other offices are also forming partnerships with corporations such as IBM. In 2001, arrangements were put in place for a National Policy Advisory and Development Committee (NPADC) to act as a support group for the Minister and the Department of Education and Science on the future implementation of computers and technology in the schools.

Foreign Influences on Educational System: Ireland continues to be an attractive destination for students pursuing an undergraduate, postgraduate, or professional education. Medical students find Ireland's prestigious programs, up-to-date facilities, and attractive setting especially appealing. The National University of Ireland (NUI), which offers a full-time undergraduate degree in Medicine plus specialist training at the postgraduate level, reports that two-thirds of its full-time student population is made up of international students. The Royal College of Surgeons in Ireland (RCSI) attracts both undergraduate and postgraduate students from more than 40 countries and from all five continents. More than 65 percent of the places offered to undergraduates each year are allocated to students from outside Ireland.

Dropouts: Since 1988, an educational program for those leaving school early has been operated with the cooperation of local education and labor training authorities. The Youthreach Program provides two years of education, training, and placement for those between 15 and 18 who fail to earn a formal diploma. In 1991, some 3,336 persons enrolled in Youthreach but, by 1995, that number had dropped to 1,630 boys and girls. The first, or "Foundation," year provides skills classes, on-site job training, general education, and counseling services. The second, or "Progression," year provides similar training, plus options such as training in specific skills, temporary employment, or additional education.

In addition to worrying about secondary school dropouts, vocational colleges in Ireland have become concerned about their own dropout rates, which many educators perceive to be rising at a troubling rate. Several colleges formed committees in 1998 to get a handle on the problem. Colleges were also asked to compile accurate records showing what percentage of the entering class leaves prior to the start of the second year.

University, non-university, and private colleges provide higher education in Ireland. The number of applicants for places in third-level colleges exceeds the openings for students, and the dropout rate of first-year students is a national concern, causing critics to question the quality of the nation's secondary schools. Perhaps the most important development in the behind-the-scenes running of Ireland's colleges was the establishment of a Higher Education Authority. This advisory board was an important adjunct to the minister for education, making recommendations on fiscal matters and on ways to upgrade colleges and universities.
The Higher Education Authority and the Department of Education work in cooperative fashion. Higher education in Ireland takes the form of universities, technology institutes, and colleges for teacher education. Additional institutions provide specialized training in art, design, medicine, theology, music, and law. Since the 1960s, industry in Ireland has reported a shortage of skilled workers, particularly, after 1995, of those with sophisticated computer skills. Since universities were unable or unwilling to address these needs, the government of Ireland set up the National Institute for Higher Education (NIHE) to upgrade and start technical colleges graded as third-level educational institutions.

Higher education in Ireland has changed considerably over the past two decades. The number of students enrolled has increased markedly with the establishment of teaching institutions with a technology emphasis, such as the Regional Technical Colleges (RTCs). Most institutions of higher education are state-supported, meaning they receive more than 90 percent of their income from the State. Since 1975, additional universities in Limerick and Dublin have been opened, and the Institutes of Technology have been expanded to take more enrollees. Disciplines gaining favor among students since 1965 are the arts, social sciences, technology, and business. Also since 1965, Ireland's universities have experienced a significant jump in enrollment, from 21,000 in 1965 to nearly 97,000 in 1997.

Since the passing of the Irish Universities Act in 1997, eight universities operate in Ireland. These are the University Colleges at Dublin, Cork, and Galway; the National University of Ireland (NUI); the University of Dublin (Trinity College); Dublin City University; the University of Limerick; and Maynooth University. Each of these offers courses as varied as social science, the arts, Celtic studies, law, medicine, dairy science, veterinary studies, architecture, and agriculture. In addition, there are a number of designated third-level institutions that interact with the Higher Education Authority: the Royal College of Surgeons in Ireland, the Royal Irish Academy, the National College of Art and Design, and the National Council for Educational Awards. Ireland also has a non-university higher education sector; in 2000 there were 14 such institutions located throughout the country, including the Regional Technical College at Tallaght, Co. Dublin, which opened in September 1992. They provide higher technical and technological education.

In 1995, the government published a document called "Charting our Education Future," which said the nation was striving "to ensure the highest standards of quality in all fields, in order to provide students with the best possible education." The government's "White Paper," as the report was called, said, "the restructured Higher Education Authority will be responsible for monitoring and evaluating the quality audit systems within individual institutions. The system will be based on cyclical evaluation of departments and faculties by national and international peers preceded by an internal evaluation; arrangements for the implementation and monitoring of evaluation findings; and the development of appropriate performance indicators." The Department of Education, university presidents, and the Higher Education Authority developed performance indicators for higher education institutions and their faculties that assess all activities, particularly teaching and research.
Admission Procedures: Universities and colleges of higher education set their own minimum entrance requirements. The office that coordinates applications is the Central Applications Office. Scores on the Leaving Certificate examination are used to allocate places to students on a points system. Applicants may be admitted to an Irish university if they have earned a Leaving Certificate or a diploma that signifies the successful completion of 13 years of schooling with a minimum overall average. (Previously, a student could also show evidence of passing the Matriculation Examination of the National University of Ireland; that exam was phased out in 1992.) Most higher education institutions use the Central Applications Office in Galway, established in 1976, to screen applications.

Enrollment: According to the Central Statistics Office, in the decade between 1988 and 1998, the number of Republic of Ireland students enrolled in full-time or part-time undergraduate courses increased by 72 percent. Over the same period, the number of postgraduate students more than doubled. Of the 89,500 students in higher education in 1994, approximately 52,000 attended at the university level.

Professional Education: An institute of higher education offering training in medicine began in Dublin during the seventeenth century, but it was run haphazardly until 1711, when a medical school opened at Trinity College, Dublin. Even then, very few doctors chose to earn their degrees there. Most preferred to study medicine at established, prestigious schools in Great Britain or other European countries. In the earliest days of medicine, surgeons were associates of barbers and belonged to the Barbers Surgeons Guild. A Royal College of Physicians of Ireland (RCPI) was established in 1654, and Charles II chartered a Fraternity of Physicians in 1667. In 1713, a Dublin physician named Sir Patrick Dunn died and bequeathed a chair of medicine to Trinity College. Even by 1747, only two additional distinguished professorial chairs had been added. In 1785 the school began a College of Surgeons, and in 1816 it was connected with a hospital and offered clinical studies, ensuring its reputation. Cadavers, as was the custom of the day and as recalled in the literature of Charles Dickens and Ambrose Bierce, were stolen from cemeteries at night by grave robbers.

The Royal College of Surgeons in Ireland (RCSI) was established in 1784 and is now associated with NUI. Ireland's most prestigious medical school, it is housed in an early nineteenth-century building on St. Stephen's Green in Dublin. The renovated building contains state-of-the-art computer laboratories; modern lecture theatres and seminar rooms; and laboratories. During the late eighteenth and early nineteenth centuries, other prominent physicians expanded their practices by opening medical schools. A number of physicians in other cities also began to run them, but these failed to outlive the men who started them. In 1855, the Catholic University also operated a hospital that eventually was taken over by University College, Dublin.

Members of the legal profession practiced law in Ireland well before the twelfth century. Formal schooling was required of attorneys during the sixteenth century. By 1628, prospective attorneys were required to study for five years at the Inns of Court in London, a professional school that had by then been in existence for two centuries.
Catholics were prevented from becoming attorneys by means of a loyalty oath to the Church of Ireland that they were unable to take, lest their own Church excommunicate them. Lawyers who successfully completed the London Inns of Court and took the oath were admitted to the professional company of judges and lawyers in a society named the King's Inns (after the building that long housed the society). Today the tradition continues, as the Honorable Society of King's Inns and the Incorporated Law Society provide academic preparation in law for prospective attorneys to qualify as barristers-at-law and solicitors, respectively.

Vocational Colleges: By way of example, students seeking a career in tourism find an internationally acclaimed institute in the Shannon College of Hotel Management. It was founded in 1951 by the educator Brendan O'Regan as a source of trained managers for the Irish hotel trade. Shannon College is a hands-on college that uses internships to enable students to acquire on-site hotel experience to complement management training. Those earning the diploma in International Hotel Management are expected to demonstrate business skills, managerial skills, and fluency in one or more foreign languages. The school's diploma is recognized by the National Council for Educational Awards, the National University of Ireland, and several prestigious industry associations such as the International Hotel Association.

Religious Institutions: Chief among religious institutions is the National University of Ireland (NUI), established in 1908. NUI is actually made up of three colleges: University College, Galway; University College, Cork; and University College, Dublin. The Royal College of Surgeons and St. Patrick's College, a training school for future priests, also are associated with NUI.

Private Colleges: In Ireland there are a small number of private colleges providing third-level and professional education. By way of example, four of the major institutions are:
- The National College of Ireland (NCI), located in Dublin, is an independent institution specializing in industrial relations, management, and related areas; it offers a National Diploma in Personnel Management (a 4-year evening course) and a B.A. in Industrial Relations (a 5-year evening course) conferred by the NCEA.
- The Shannon International Hotel School offers a four-year Diploma in Hotel Management. The final year includes a management internship in the United Kingdom or the United States.
- The National College of Art and Design (NCAD) offers sub-degree, primary degree, and graduate programs in its specialty areas.
- The American College offers degrees and diplomas in the humanities, business, international law, and psychology. Validation is from a university in the United States.

Degrees Offered: A bachelor's degree is obtained after a three- or four-year full-time course or a comparable period of part-time study. This degree is usually pursued in a particular subject or field of study. The Bachelor of Arts (B.A.) program requires three or four years of study, while bachelor's degrees in Medicine and Dentistry require six years.

Postgraduate Training: A Graduate Higher Diploma is generally obtained after one or two years of postgraduate study; a research thesis is generally required. A Master's degree requires course work, a research project, and an examination in a specific field of study. The normal duration of study is from one to three years following the bachelor's degree.
The Doctorate is the highest academic qualification awarded in Ireland. The Doctor of Philosophy (Ph.D.) degree, Doctor in Letters (D.Litt.), Doctor in Science (D.Sc.), and some others may be obtained only by research and are, in general, completed in one to three years after the Master's degree.

National & Government Educational Agencies: Higher education in Ireland is managed only at the national level; it is not administered by regional agencies. The government has entrusted its Department of Education to oversee and administer the country's system of higher education, known as the third level. The vocational schools, also known as technical institutes, get operating funds from the Department of Education; the universities and some colleges of education, however, apply for funding from the Higher Education Authority (HEA). Other third level institutions provide specialist education in areas such as the arts or the professions and business, but these, too, get the bulk of their budgetary funding through the state. The state has reacted to strong criticisms of its higher education facilities by taking a far-reaching role in educational matters. Most conspicuously, it founded the HEA in 1969 to maintain a master plan for such institutions and to exercise budgetary powers. In addition, an agency was formed in 1972 to monitor standards and curriculum matters: the National Council for Educational Awards (NCEA), which oversees both undergraduate and graduate matters under its jurisdiction. Another bureaucratic addition came about in 1976 to take over certain administrative duties, such as processing applications from persons applying for courses at the universities, some specialty colleges, and a number of private colleges as well. This agency is called the Central Applications Office (CAO).

Expenditures: Public moneys appropriated for pre-school, primary, and secondary schools fall short of those spent by many comparable European nations, but Ireland's spending on higher education compares favorably with rival countries, according to the Organization for Economic Cooperation and Development (OECD). In 1993, the Republic of Ireland spent 1.7 billion pounds (US$2.6 billion) on education. A study issued by the OECD in 1995 (though based on 1992 figures) found Ireland deficient compared to other European countries in per-pupil expenditures at the preschool, primary, and secondary levels.

The Department of Education: The Department of Education administers public education, including primary, postprimary, and special education. State subsidies for universities and third level colleges are given out through the Department. The three main levels of the education system are first, second, and third levels. The first and second levels are referred to generally as primary and post-primary, respectively. The mission statement of the Department of Education says its purpose is "to ensure the provision of a comprehensive, cost-effective, and accessible education system of the highest quality, as measured by international standards, which will enable individuals to develop to their full potential as persons, and to participate fully as citizens in society, and contribute to social and economic development."
Nonformal Education: Teachers in Ireland frequently find teaching aids and resources at one of 30 part-time and full-time Education Centers in the country. These centers offer various support services and resources to teachers and to other partners in education. Two of the best known are the Blackrock Educational Center and Dublin West Education Center. The centers also maintain an online presence with information on how to reach contact persons and access resources. In 2000 and 2001, many Irish children participated in a multi-center project called Write-a-Book. Meant to be a celebration of the writing and artistic abilities of Ireland's children rather than a contest, the project has the student authors chronicle their lives, cultures, and homelands. Each participant receives a certificate. A few outstanding books are selected on merit, and an Irish television star or media personality presents awards to the children.

Continuing Education: Students who do not enter a university or technical college but wish postsecondary school training frequently elect to take additional course-work in vocational schools. More than 30,000 part-time students were enrolled in vocational, community, and comprehensive schools in 1994-1995. More than 300 courses are open to such students. Vocational schools, like other Irish higher education institutions, improved much in the 1990s. With industry jobs going begging in the late 1990s, many additional students found newer institutions such as the Regional Technical Colleges (RTCs) a good fit for their needs. In 1996 the Minister of Education unveiled plans to also allow the Dublin Institute of Technology (DIT) to offer degree-granting programs for professional and managerial students.

Distance Education: Distance education, in which courses are taken via the Internet, television, video, and radio, can be pursued in addition to regular university courses or in place of them. Distance learning involves the same amount of work as a regular classroom course, but it is done at a time and place chosen by the student. There are no formal entry requirements for applicants aged 23 and older, making distance learning particularly attractive to adults and to students getting a second chance at a college degree after dropping out earlier in life. Students also have the option of taking courses through the established Open University and the developing Irish National Distance Education Center (NDEC) headquartered at Dublin City University. For students willing to give up the benefits of classroom instruction and close face-to-face interaction with professors and fellow students, distance education is an option worth taking to earn a B.S. or B.A. degree that could not be obtained by traditional means. Course offerings include selections from literature, philosophy, history, psychology, and sociology. Another option is a B.Sc. degree in information technology, with a course menu that includes management science, computing, and communications technology.

In 1834 a systematic teacher-training program began in Dublin at certain model schools for male and female students. There were about 25 model schools there by 1850; the training period lasted six months. For a time, both Protestant and Catholic students attended these schools, but in the mid-1860s Catholic authorities forbade students from attending, not wanting the Protestant influence on the children.
When teacher training became more formalized, the schools no longer were used to train teachers; nonetheless, many of them continued to exist into the twentieth century. Teacher training in Ireland differs for primary and second level schoolteachers. Second level teachers usually complete a primary degree at university and then follow up with a Higher Diploma in education at a university. Primary school teachers complete a three-year program, leading to a Bachelor of Education (B.Ed.) degree, at a teacher training college. St. Patrick's College, Church of Ireland College, St. Mary Marino, and Froebel College of Education are based in Dublin. Mary Immaculate College is based in Limerick. One criterion for primary school teacher training in Ireland is proficiency in Irish.

Student-Teacher Ratio: In 1997-1998 the ratio was 19 pupils per teacher in the Republic of Ireland, two more pupils per teacher than in Northern Ireland.

The Training of Agriculture Instructors: The government involved itself in national agricultural operations, such as the training of teachers in agriculture-related subjects, in 1899. Ireland that year created the Department of Agriculture and Technical Instruction (DATI), hoping that education and scientific farming methods could prevent a recurrence of the Great Famine that ravaged Ireland from 1845-1849. Ireland had been heavily dependent upon potatoes, a non-native crop brought to Europe from South America by the Spanish in the sixteenth century, and its potato crop was ruined by blight caused by a fungus possibly introduced with imported fertilizer. Up to one million people died from starvation and disease, and many more Irish emigrated to the United States and other countries. In addition to agriculture, the maintenance of fisheries, and the keeping of agricultural statistics, the Department of Agriculture involved itself in the training of teachers in such areas as health, science, plant breeding, and animal husbandry. Unfortunately, the department failed to establish a clear division of powers with the Congested Districts Board (CDB). The CDB, begun in 1891 as a board intended to improve agriculture in areas of extreme poverty, was given large amounts of money in its budget and the power to arrange training of agricultural instructors. As is true of other areas of politics in Ireland, the DATI and CDB never could resolve their differences. DATI ceased to exist in 1922 and a Department of Lands and Agriculture came into being. Although both groups were involved in strife, and the CDB was criticized for chronic mismanagement of funds, a number of good instructors were trained, and Irish farmers and poor townspeople learned the dangers of relying upon a single crop for sustenance. Students unwilling or unable to obtain a college degree may opt to attend classes and on-farm training to qualify for a Certificate in Farming. This three-year agricultural education and training program provides basic skills training in animal and crop husbandry, farm equipment and machinery, and environmental conservation. The Farm Apprenticeship program is carried out by the Farm Apprenticeship Board. An apprentice begins the program with one year of courses at a recognized agricultural college and then begins an apprenticeship with a sponsoring farmer.

Unions & Associations: Three unions represent Ireland's teachers: the Association of Secondary Teachers, Ireland; the Irish National Teachers' Organization; and the Teachers' Union of Ireland.
The Association of Secondary Teachers is the union representing secondary school teachers in Ireland. The Irish National Teachers' Organization was founded in 1868 and is the largest teachers' trade union in Ireland; it represents teachers at the primary level in the Republic of Ireland. Members of the Teachers' Union of Ireland teach and lecture in vocational schools, community and comprehensive schools, Institutes of Technology, and colleges of education. The reputation of the teachers' union was dealt a damaging blow in 2001, as media reporters, parents, and students condemned a pay dispute by secondary teachers who used their students as pawns in an effort to get the government to accede to their demands. The striking union, the Association of Secondary Teachers, Ireland, attempted to force the government's hand by threatening that its members might not correct the Leaving Certificate examination needed by students for entry into Irish universities. The teachers themselves were just as upset as parents and students: among the lowest paid in Europe, and envious of the better-resourced schools of Northern Ireland, they expressed anger and resentment over the nation's failure to reward their hard work with the competitive pay rate they felt they deserved. The government treated the teachers' demands as a bluff. By April 7, 2001, so many teachers had agreed to correct the Leaving Certificate, out of concern for their students or fear for their jobs, that the union clearly had been defeated. The other unions also decried low wages but agreed to an arbitration process called benchmarking, which was intended to bring teacher salaries on a par with wages paid to other employee groups in Ireland.

Since the 1960s, the Irish have been aware of serious deficiencies in the educational system. Reforms, however, have been incomplete and less than satisfactory, as several studies and self-studies note. In 1966, a research team headed by educator Patrick Lynch completed a thorough analysis of the primary and secondary systems and produced a scathing report called "Investment in Education." In 1967, a report completed by a special commission on higher education concluded that the third level was no less problematic. Changes were implemented immediately, although these were less successful than ministers of education, parents, and politicians hoped they would be. The primary level revamped its curriculum. Smaller secondary schools with aging facilities and other deficiencies were consolidated with stronger schools into institutions with a modern look and characteristics. Of utmost importance, the government made it possible for many of Ireland's sons and daughters to receive an education at state expense. The combination of free schooling and better facilities pleased parents immensely. In 1965-1966, there were 143,000 students enrolled in postprimary schools; fifteen years later, 301,000 were enrolled.

For the immediate future, Ireland's educational prospects continue to look promising, at the university level in particular. In 1995, the Steering Committee on the Future Development of Higher Education projected a total enrollment of 120,000 students in higher education by 2005. The predicted increase has been attributed to an economic boom, technological development, and greater opportunities for lower-income students. According to a new report released in 2001 by census officials, more than 25 percent of all births in the Republic of Ireland now occur outside marriage.
The information is contained in a new compendium publication, Ireland, North and South, a statistical profile jointly produced by the Northern Ireland Statistics and Research Agency (NISRA) and the Republic of Ireland's Central Statistics Office (CSO). The high number of children from one-parent homes is expected to have an effect on primary education in Ireland by 2005, and it eventually will affect secondary schools.

The Republic of Ireland

LOCATION AND SIZE. The Republic of Ireland constitutes 26 of the 32 counties that make up the island of Ireland, with 6 northern counties under the jurisdiction of the United Kingdom. Situated in Western Europe, it is separated from the United Kingdom on the east by the Irish Sea and bordered on the west by the North Atlantic Ocean. With a total area of 70,280 square kilometers (27,135 square miles) and a coastline measuring 1,448 kilometers (900 miles), the Republic of Ireland is slightly larger than the state of West Virginia. The capital city, Dublin, is located on the east coast. The population of Ireland was estimated to be 3,797,257 in 2000.
There has been a steady increase in the population since 1994 (3,586,000), marking a historic turnabout in demographic trends. This is attributed to growth in the economy, a decline in previously high levels of emigration, the return of former emigrants, and an increase in immigration to the point where net migration is inward. Despite having one of the lowest population densities in Europe, Ireland's population density has reached the highest sustained level since the foundation of the Republic in 1922. Emigration lowered the population to under 3 million in the early 1980s. Birth rates declined from a high of 17.6 per 1,000 in 1985 to a low of 13.4 in 1994, but this trend has slowly been reversed, reaching 15 per 1,000 in late 1998. If the population is to meet the demands of the labor market, further increases will be necessary. Government efforts to attract further immigration and to increase the population are hampered by housing shortages and service deficiencies. At the 1996 census, 40 percent of Ireland's population was under 25, and the Irish population is still relatively young, with only 11.33 percent over the age of 65. The people are largely concentrated in urban centers, with almost one-third of the total population living in the city of Dublin and its surrounding county. Population in the other major cities and their surrounding areas is on the increase. In the sparsely populated midlands and in the western and border counties, though, population is either stagnant or declining.

OVERVIEW OF ECONOMY

An economic policy that emphasized self-sufficiency and was characterized by huge tariffs on imports to encourage indigenous growth dominated in Ireland until the late 1950s. This ideology was then abandoned in favor of a more open economic policy. Ireland's first economic boom followed this change. The failure of domestic over-spending to induce growth, along with negative global influences such as the oil crises of the 1970s, made this boom relatively short lived. The 1980s brought fast-rising inflation (up to 21 percent), unemployment close to 20 percent, emigration at unprecedentedly high levels (50,000 per year), and a soaring national debt. Since the early 1990s, however, the Irish economy has produced high growth rates. It is integrated into the global trading system and, between 1994 and 1998, was the fastest growing economy in the countries of the Organization for Economic Cooperation and Development (OECD). The economy was forecast to continue expanding well in excess of any of its European Union (EU) partners during 2001 and 2002. Robust growth rates averaged 9 percent from 1995 to 1999, and some analysts predicted growth at 11 percent in 2001. Unemployment, which climbed to record levels beginning in the mid-to-late 1980s, reaching 14.8 percent, fell to just 3.8 percent in 2000. Unemployment was predicted to fall below 3 percent by 2002. Living standards, measured by gross domestic product (GDP) per capita, were estimated to have caught up with the European average by late 1998. This transformation can be credited to many forces, both domestic and global. Recent government policies have emphasized tight fiscal control alongside the creation of an environment highly attractive to enterprise, particularly international business. Policies based on "social consensus" and wage agreements negotiated by the government with business, farmers, trade unions, and other social partners have kept wages at moderate, business-friendly levels.
A corporation tax of 10 percent, alongside grants to attract foreign business, has further contributed to the pro-business environment, as has the existence of a highly educated workforce. EU regional policy has emphasized cash transfers to economically weaker and poorer member states. This is done to prepare these states to manage in a single market and currency. These transfers developed the Irish economy to a point where it could sustain growth. As an English-speaking country with access to the European market, Ireland is proving attractive as a base for international companies, particularly from the United States. The reason behind the current economic boom is the high-tech manufacturing industry sector, in particular the foreign-owned multinational companies in this sector. Agriculture, while still remaining an important indigenous activity, is in decline. The industrial sector has seen growth rates higher than most industrial economies and accounts for 39 percent of GDP and about 80 percent of exports. It employs approximately 28 percent of the labor force. This dominance can be seen in the gap between GDP and gross national product (GNP): GNP was 15 percent lower than GDP in 1998. Although the service sector is smaller than that of other industrialized countries, it is nonetheless dominant and growing, accounting for 54.1 percent of GDP in 1998. Government remains heavily involved in the provision of health and transport services and, together with the private service sector, employs 63 percent of the workforce. Successive Irish governments have maintained responsible fiscal policies over the last decade that have led to the reduction of national debt from 94.5 percent of GDP in 1993 to 56 percent of GDP in 1998. There have been concerns about the effects of current fiscal policy, with its emphasis on reducing income tax, on the high levels of inflation in the economy since late 1998. The government has argued that inflation is primarily due to external pressures such as the weak euro and high oil prices, which have caused increased consumer prices. Nonetheless, consumer price inflation peaked at 6.8 percent in the 12 months running up to June 2000, considerably higher than in any other EU country.

POLITICS, GOVERNMENT, AND TAXATION

The Republic of Ireland is governed by a parliamentary democracy. Parliament consists of a lower house, the Dáil (pronounced "doyl"), and an upper house, the Seanad (pronounced "shinad"), or Senate. Together, the 2 houses and the president form the Oireachtas (pronounced "irrocktos"), or government. The Irish president, although directly elected, has relatively few formal powers, and the government, elected by the Dáil from its membership, is led by the Taoiseach (pronounced "Teeshock"), or prime minister, who presides over a 15-member cabinet of ministers. Fianna Fáil (pronounced "foil"), a highly organized, center-right party, dominates the party system, with popular support of between 35 and 45 percent in 2001. It leads a minority center-right coalition government (with the Progressive Democrats) that depends on the support of a number of independent TDs (members of parliament) in the Dáil for the 1997-2002 term. Fine Gael (pronounced "feena gale"), the second largest political party, commanding between 20 and 30 percent of the popular vote, also occupies the political center-right, though it has shifted more to the center and has developed a social-democratic and liberal agenda over the last 3 decades.
Its support base is generally among the more affluent, but these class trends are not especially strong overall, and many wealthy people, particularly from the business sector, support Fianna Fáil. Fine Gael led the 1995-97 "Rainbow" coalition government, so called because of its inclusion of 3 parties and representation across the political spectrum. The Rainbow coalition included the Labor Party and the Democratic Left (a party further to the left), which has since merged with Labor. Unlike practically all other European party systems, the Irish party system exhibits no strong left-right division. The 2 largest parties have not traditionally defined themselves in terms of ideology, but grew out of differences over the nationalist agenda at the time of independence. The Labor party, weak in comparison with its European counterparts, has consistently been the third largest party, commanding between 10 and 15 (sometimes more) percent of support nationally, and has considerable power in a system dominated by coalitions. A number of tribunals have been in operation since 1997-98, investigating allegations of political corruption. The allegations involve unacceptable links between politicians and big business, corrupt practices in the planning process, and inept and negligent public service on sensitive health issues from the 1970s to the 1990s. The ensuing revelations are assumed to have adversely affected Fianna Fáil's popularity, but opinion polls have proved inconclusive in measuring the amount of support the party might have lost. A number of smaller political parties are also important in Ireland. Polls conducted in 2000-01 gave the Progressive Democrats 4 to 5 percent support, the Green Party 3 to 4 percent, and Sinn Féin (pronounced "shin fane"), an all-Ireland Republican party with links to the Irish Republican Army (IRA), between 2 and 6 percent. Sinn Féin's association with the provisional IRA, which is responsible for punishment beatings in Northern Ireland and vigilante activity in the Republic, could, with its increase in popular support, present larger parties with controversial questions over coalition formation. There is currently a broad consensus among the major political parties on how to run the economy, and it is unlikely that a new government coalition would significantly alter the current pro-business economic policy. The tax system incorporates standard elements of tax on income, goods and services, capital transfers, business profits, and property, and operates a system of social insurance contributions. Income tax has been reduced substantially, to 20 percent and 40 percent, with incomes over I£17,000 subject to the higher rate (2000 budget). A controversial individualization of income tax was introduced in the 2000 budget, with the object of encouraging more women to enter the labor force. Goods bought and sold are subject to value-added tax (VAT) at 20 percent, which is comparatively high, while luxury goods such as alcohol, tobacco, and petrol are subject to high government excise tax. Capital gains tax on profits has been reduced to 20 percent, and corporation tax, levied at between 10 percent and 28 percent, is to change to 12.5 percent across the board by 2003. Both employers and employees are subject to a social insurance tax, pay-related social insurance (PRSI); an unusual, business-unfriendly measure in the 2001 budget shifted the burden of these contributions toward business.
In terms of social spending, a means-tested (eligibility determined by financial status) system operates, resulting in about a third of the population receiving free medical and dental treatment. However, state medical-card holders suffer from long waiting lists for treatment, as opposed to the more than 50 percent of the population who have private medical insurance. In line with EU policy, recent governments stress the importance of competition. A competition authority with enhanced powers is responsible for investigating alleged breaches of competition law in all sectors. This affects overly regulated private service providers such as taxicab companies, and it is anticipated that the restrictive pub licensing laws will be tackled next. Government control over the economy is restricted by Ireland's membership in the EU and the euro zone, as well as by its own policy that has made Ireland one of the most open economies in the world. While the European Central Bank (ECB) controls monetary policy and largely controls interest rates, the government does retain control over fiscal policy.

INFRASTRUCTURE, POWER, AND COMMUNICATIONS

Though vastly improved during the 1990s by grants of I£6 billion in European structural funds, the Republic of Ireland's infrastructure is still struggling to cope with the country's unprecedented economic growth. Long traffic delays and below-average roads linking major business centers around the country are a potential threat to continued expansion. A late-1990s report commissioned by the Irish Business and Employers Association (IBEC) estimated that a further I£14 billion would have to be spent to raise the quality of the country's infrastructure to generally accepted European levels. Ireland's share of European structural funds for 2000 to 2006 has decreased to approximately I£3 billion, but increased government spending and planned joint public-private funding of projects should make up the shortfall. Ireland has the most car-dependent transportation system in the EU, with roads carrying 86 percent of freight traffic and 97 percent of passenger traffic. Yet full inter-city motorways are not in place, making the links between Dublin and other major cities subject to heavy traffic and delays. Economic growth and increased consumer spending have pushed up car ownership levels dramatically, which, together with increased commercial traffic on the roads, has offset the considerable improvements of the 1990s. The road network is estimated to total 87,043 kilometers (54,089 miles) of paved roads and 5,457 kilometers (3,391 miles) of unpaved roads (1999). Long rush hours and traffic gridlock occur in the major cities, and gridlock in Dublin is estimated to cost the national economy around I£1.2 billion every year. Policies aiming to attract more daily users to the public transport system might take effect over the next decade. Following much debate and deliberation, the current government has commenced the implementation of a light rail system (3 lines) to cover some important routes into the capital, most importantly a link to the airport. This will add to the "Dart," Dublin's existing, relatively efficient suburban rail service, which consists of 5 lines covering 257 kilometers (160 miles) and 56 stations.
The railway linking Dublin to 2 major cities on the island, Belfast (Northern Ireland) and Cork, has been vastly improved over the last few years, but recent reports by external consultants have highlighted the poor, even dangerous, state of much of the rest of Ireland's 1,947-kilometer (1,210-mile) railway infrastructure. Ireland has 3 international airports—at Dublin (east), Shannon (southwest), and Cork (south)—and 6 independent regional airports. Air traffic increased dramatically during the 1990s, with the number of passengers up from 6.8 million (1992) to 12.1 million (1997), while annual air freight traffic also doubled. Inevitably, these increases have led to congestion, especially at Dublin's airport, and a major capital investment program launched by the government is nearing completion, with similar projects to follow in Cork and Shannon. Cargo traffic is similar, with increases of up to 50 percent in cargo tonnage and passenger traffic passing through the main ports over the 1990s. The government recognizes that capacity must increase if major congestion is to be avoided. Liberalization in the telecommunications sector, completed in 1998, increased the number of providers from just 1 state-owned company to 29 fully licensed telecommunications companies, operating in the residential, corporate, and specialized data services sectors.

[Table omitted: communications indicators per 1,000 people (newspapers, radios, TV sets, cable subscribers, mobile phones, fax machines, personal computers) and per 10,000 people (Internet hosts, Internet users); sources: International Telecommunication Union, World Telecommunication Development Report 1999; Internet Software Consortium; World Bank, World Development Indicators 2000.]

The government hopes that liberalization and the resulting competition in the market will encourage private investment and improve the state's poorly developed telecommunications infrastructure. The mobile phone market has been dominated by competition between Eircell and Esat Digifone. Both have now been bought by the British giants, Vodafone and British Telecom (BT), respectively, while a third mobile phone company, Meteor, has recently entered the market. Energy consumption is, not surprisingly, on the increase. Total energy consumption rose from 8.5 million metric tons (9.35 million tons) in 1996 to 9.5 million metric tons (10.45 million tons) in 1997, with household use accounting for 3.6 million metric tons (3.96 million tons). Two-thirds of energy is supplied by imported coal and oil, with the remaining third supplied by indigenous peat (12 percent of the total) and natural gas. The distribution of gas, oil, peat, and electricity remains state dominated, though industrial users hope that recent liberalization of the gas and electricity markets will result in a lowering of prices.

Strong growth (55 percent from 1993 to 1999) has been the recent trend in the Irish economy, but it lacks consistency across all sectors. Agriculture (forestry and fishing), as a share of total GDP, has seen a steady decline, while the fastest growth has occurred in industry, particularly high-tech industry. The expanding service sector accounted for 56 percent of GDP in 1998. Ireland's economy has remained the fastest growing economy in the EU and compares favorably with developed economies worldwide in terms of growth, output, trade volume, and employment levels.
Ireland's mild temperatures, high rainfall, and fertile land offer ideal conditions for agriculture and, despite a pattern of decline over the past 2 decades, agricultural activity remains an important employer in rural and remote regions of the country. The drop in agricultural output from 16 percent of GDP in 1975 to just 5 percent in 1998 reflects only a relative decline when measured against the steady increase in GDP driven by other sectors. While the fall in prices of agricultural products has been sharp, the volume of output has seen only a small decrease. The industry suffers from over-capacity and falling incomes and is increasingly reliant on EU subsidies and fixed prices. The number of small farmers remains high for an industrialized country, and many small farmers take up other employment to subsidize their income. While average farm size (29.5 hectares or 73 acres) is slowly increasing, the Irish Farmers' Association asserts that farm size remains the single biggest obstacle to generating adequate income in the agricultural sector. Adjusting to EU measures to bring prices more in line with world agricultural prices seems unlikely to help the industry, while reducing high levels of pollution in the waterways to comply with EU regulations is also not expected to aid farming profitability. Average farming incomes fell by 6.2 percent in 1997, even though productivity per individual farmer increased significantly over the last decade. On 40 percent of all farms, the annual income was only I£5,000. On a further 25 percent of farms, it rose to between I£5,000 and I£10,000. Combined employment in agriculture, forestry, and fisheries fell from 175,000 at the beginning of the 1990s to 142,000 at the end of the decade. Figures that include related food-processing industries put employment at 176,000 in 1999, representing 12 percent of all workers in employment. Some figures estimate that agriculture generates so many service sector jobs that it indirectly accounts for 350,000 jobs (23 percent of the labor force).

BEEF AND OTHER LIVESTOCK. The most productive agricultural sector is the largely export-oriented beef and livestock industry, which accounted for 50 percent of output value in 1998. Cattle and sheep farming have, however, been hard hit by a number of crises. After EU agreements in 1999 to reduce beef prices, farmers were badly affected in February 2000, when BSE (Bovine Spongiform Encephalopathy), or "Mad Cow" disease, resulted in a 27 percent drop in beef consumption in the key European market. In February-March 2001, the unprecedentedly severe outbreak of foot and mouth disease in herds in Britain, with pockets in Northern Ireland and France also affected, brought another enormous challenge to the industry, threatening the export markets of Ireland and all the EU countries. Overall, output decreased during the 1990s, with the annual value of livestock falling from I£1,885 million in 1993 to I£1,761 million in 1997. This represents a decrease in the overall value of cattle livestock from I£1,349 million to I£1,097 million (1993 to 1997), with the value of output from pigs, sheep, and lambs showing small net increases. Livestock products, the most prominent of which is milk, also suffered a general, if undramatic, decline in output during the mid-1990s, from I£1,132 to I£1,113 million. Crops output, with cereals and root crops dominant, also decreased marginally—from I£3,431 to I£3,315 million—during this period.
Sugar beet, wheat, and barley yielded the highest commercial value (1997), with milk, eggs, and fresh vegetables also important products.

FORESTRY AND FISHING. Despite its reputation as a land of abundant greenery, Ireland has the lowest level of forest cover in Europe, with only 8 percent of the land under woodland, against a 25 percent average elsewhere. But this 8 percent is a considerable improvement from the 1 percent level of cover at the foundation of the state in 1922 and is the result of government reforestation programs. Current EU policy serves to encourage reforestation and the development of a timber-based agricultural sector. Reflecting this, timber output was expected (EIU estimate) to have reached 3 million cubic meters by 2000. This would provide for an increase in the domestic market's share of local timber, as the country previously imported 45 percent of its timber requirements. Given Ireland's geographical position, fishing has been a naturally important economic activity, particularly in rural coastal areas where there are few other industries. The fishing industry has evolved to incorporate more diverse forms of activity such as fish farming, and employment rates have increased by 40 percent since 1980. Full- and part-time workers together accounted for 16,000 jobs either directly or indirectly connected to the fishing industry in 1999. The value of exports increased from I£154 million in the early years of the 1990s to a peak of I£240 million in 1997. EU grants and government spending ensure that the industry will continue to expand.

The industrial sector has maintained its share in total economic activity at 39 percent of GDP throughout most of the 1980s and 1990s. This trend is unusual in developed countries and reflects strong growth. Although marking a slight slowdown from 1995 to 1998, growth in 1999 was high at 10.5 percent. Strong performance from both foreign-owned and indigenous Irish industry, primarily in the high-tech manufacturing sector, has driven the growth. Significant reserves of zinc and lead ores, natural gas, and peat are to be found, and the latter 2 supply a third of domestic energy demand. Zinc and lead ores sustain one of the biggest zinc and lead mines in Europe and approximately 4,000 jobs. Ireland is a small country with limited natural resources, and a well-developed, open, and globally integrated industrial economic policy is therefore essential to economic health. There are more than 1,000 foreign-owned companies operating in Ireland, mostly, though not all, in the high-tech manufacturing sector. Foreign-owned manufacturing accounts for more than half of the country's total manufacturing output. In 1998, foreign companies produced more than two-thirds of export goods and employed around 45 percent of the manufacturing sector's workforce, or 28 percent (468,800) of the total workforce. Most foreign-owned manufacturing is concentrated in high-tech sectors such as chemical production, metals, electrical engineering, and computer hardware. Between 1993 and 1997, output in metals and engineering increased by 96 percent and employment by 49 percent. Leading metal output is the manufacture of agricultural and transport machinery. In the chemicals sector, output increased by 116 percent and employment by 38 percent. Both sectors continue to enjoy high productivity. Performance in the indigenous high-tech sector has also been impressive.
The sector's volume of output increased by 37 percent from 1987 to 1995, contributing to a 113 percent increase overall (including foreign-owned). World-class manufacturing and management standards have developed, partly encouraged by the productive foreign-owned companies and by growing links between the foreign-owned and indigenous sectors. An increasing percentage of the inputs purchased by foreign-owned industry for production are supplied by indigenous Irish industry. Total expenditure of foreign companies in the Irish economy has reached I£6.9 billion, up from I£2.9 billion in 1990. By 1999, this economically healthy situation had brought an unprecedented 30,000-worker increase in employment by Irish-owned manufacturing firms since 1992. Also well represented in this high-tech sector are the Industrial Development Authority (IDA)-targeted sectors of the pharmaceutical and computer software industries. The IDA is a government body charged with the task of attracting foreign investment and is part of an umbrella organization called Enterprise Ireland. The concentration of high-tech industries they have encouraged has created a clustering effect that facilitates self-sustaining growth.

TEXTILES, CLOTHING, AND FOOTWEAR. Dominated by indigenous industry, the labor-intensive textile, clothing, and footwear sectors registered no significant growth from the 1990s into the 2000s. They have suffered as a result of competition from cheaper foreign imports. Textile production in Ireland remained stagnant during the late 1990s, and employment in the sector fell by approximately 20 percent. Clothing and footwear output fell by almost 20 percent between 1993 and 1997 and has remained at that level.

FOOD, DRINK, AND TOBACCO. Food, drink, and tobacco production recorded the strongest growth in the traditional indigenous manufacturing sector, with production output, which is aimed at both domestic and export markets, increasing by 6.1 percent in 1997. Providing the backbone for the food industry is the production of beef, milk, eggs, fresh vegetables, barley, sugar beets, and wheat.

A combination of increased business investment, infrastructure development, and an acute housing shortage resulted in an increase in the value of construction output from I£13.7 billion in 1993 to I£16.1 billion, or 14.2 percent of GDP, in 1996. In 1998, the bulk of construction was directed at residential buildings. Quarried stone exists as an important indigenous supply for the construction industry. Conditions ensured that this boom continued into 2001, but it is threatened by a shortage of labor and the accompanying effect of increasing wage demands. The open-market economic policies adopted by successive Irish governments since the late 1980s can, in large part, explain the rapid expansion of the industrial sector, particularly the high-tech industrial sector. Foreign direct investment has been attracted by a number of factors, including a carefully built, business-friendly environment, a relatively inexpensive but highly skilled labor force, access to the EU market, and a range of incentives offered by the Irish government. Economic policy is currently establishing new priorities aimed at attracting industry to the poorer regions of the country, strengthening the roots of foreign-owned industry, and encouraging research and development programs.

Services accounted for approximately 63 percent of employment and 54.1 percent of GDP in 1999.
Banking and finance, retailing, and tourism dominate the private services sector, with software engineering and business consulting services growing in importance. State-owned industries dominate the provision of education, health, distribution, transport, and communication services, accounting for 18 percent of GDP in 1997. Private service providers are slowly entering these markets. Branch banking is dominated by 4 main clearing banks—Bank of Ireland, Allied Irish Banks, Ulster Bank, and National Irish Bank. Since the early 1990s, banks and building societies have become increasingly involved in providing financial services, and total employment provided by these institutions increased from 25,200 in 1994 to just under 30,000 in 1998. A scheme introduced in 1987 created incentives to make Ireland an attractive base for foreign financial institutions. A particular incentive was the setting of corporation taxes at a low 10 percent. More than 300 banks, mostly North American and European, are established in the Irish Financial Services Center (IFSC) in Dublin, offering specialized services such as investment banking, fund management, capital markets, leasing, and re-insurance. The IFSC has created direct employment for between 5,000 and 7,000 people, as well as a considerable proportion of indirect employment connected with Dublin's concentration of banks. The country's famously green and beautiful landscape, its fine beaches, a culture of small, atmospheric, and sociable pubs, and the friendliness of its people attract many tourists. Recent tourist expansion has largely resulted from Dublin's elevation to a very popular weekend-break destination, coupled with the government tourist board's overseas promotion programs, which highlight the country's attractions for fishing, walking, and golfing enthusiasts. Total revenue from tourism reached I£2.8 billion, more than 5.7 percent of GDP, in 1997. This dropped slightly in 1999 (I£2.5 billion), but two-thirds of that year's revenue was generated by the arrival of more than 6 million overseas visitors. At the end of the 1990s, at least 120,000 jobs were estimated to depend on tourism. The biggest threat to the tourist industry is the poor quality of services, the result of a shortage of skilled labor as well as increasing industrial unrest that periodically causes transportation disruptions and brings traffic chaos. Workers in the tourist industry have tended to be worse off than those in other sectors, but the I£4.50 per hour minimum wage introduced in 2000 stood to eradicate the worst cases of underpayment. Economic expansion has facilitated increased diversification in the indigenous retailing industry. With consumer spending high, retail sales expanded by 53 percent in real value terms in 1997 and by 32 percent in volume terms. The surge in the growth of the retailing sector has attracted a large number of groups from the United Kingdom (UK), which have brought competition that has helped to control consumer price inflation. The volume of retail sales increased by 14 percent in the first quarter of 2000, with purchases of new cars in the first half of that year up 42.9 percent.

Ireland has achieved the highest trade surplus relative to GDP in the EU and is among the top 20 exporting countries in the world. In 1999, the country's exports reached I£44.8 billion, against imports of I£20.63 billion, a huge surplus.
The balance of trade between exports and imports continued its strong upward trend, from I£13.7 billion (25 percent of GDP) in 1998 to I£24.17 billion in 1999. Figures from the first half of 2000 indicated a further increase. However, despite a robust 24 percent growth in export rates in 2000, trends indicated that import growth rates, in response to high consumer demand, would exceed export growth rates in 2000-01, thus threatening the surplus in the long run. The EU (including the UK) remains Ireland's most important export market. In 1998, export revenues from the EU accounted for 67 percent (I£30.27 billion) of total exports, with the UK contributing almost I£10 billion, or 22 percent of the total. Germany (14.6 percent), France, Italy, and the Netherlands are the other key European destinations, while the United States accounted for I£6.14 billion (13.7 percent) in 1998. Given the weak euro and the presence of many U.S. multinationals in Ireland, there are indications that the United States is set to become Ireland's biggest export market. Exports to U.S. markets increased by 54 percent to I£6.8 billion in the first 6 months of 2000. Exports to the UK, a non-euro zone, also increased by 22 percent during this period, to I£6.9 billion. Ireland is a major center of computer manufacture, with U.S.-owned corporations such as Dell conducting operations there.

[Table omitted: Trade (expressed in billions of US$), Ireland; source: International Monetary Fund, International Financial Statistics Yearbook 1999.]

The high-tech sectors recorded Ireland's largest export increases in 2000, with computer equipment leading the field at I£8.1 billion. The export of organic chemicals was valued at I£7.3 billion, and electronic machinery at I£2.9 billion. Chemicals, transport equipment, and machinery (including computers) accounted for 80 percent of the increase in exports between 1993 and 1997. While foreign multinationals dominate these sectors, there are positive signs of increasing domestic production in high-tech manufacturing industries, such as the production of chemicals, software development, optical equipment, and electronic equipment. The production of electronic equipment and optical equipment supplied 9.2 percent of domestic exports in 1997. However, exports represented only 34 percent of domestic manufacture, while up to 90 percent of foreign-owned company output was exported. In 1997, food and livestock remained the fourth largest export commodities, with food, drink, and tobacco together accounting for an important, though declining, percentage of indigenous exports (53.9 percent, down from 61.9 percent in 1991). Fuel, lubricants, and crude materials also remain important. The value of imports has increased rapidly, from I£13.1 billion in 1998 to I£34.66 billion in 1999. Their value for the first 6 months of 2000 was I£20.7 billion, a 25 percent increase. Once again, the high-tech sector dominated, with imports of computer equipment increasing by 28 percent and manufacturing industry inputs by 26 percent. Imports of road vehicles also increased dramatically during this period. Despite the weak euro, the UK and the United States remain Ireland's largest sources of imports, both showing an increase of 20 percent in the volume of goods supplied in the first half of 2000. Machinery and transport equipment dominated the volume of imports and accounted for I£15.7 billion in 1998, with chemicals and miscellaneous manufacturing goods accounting for I£3.4 billion each.
Food and live animals accounted for the next largest share in total import value, at I£1.8 billion in 1998. Live animals are both imported and exported.

[Table omitted: Exchange rates, Ireland (Irish pounds per US$1); source: CIA World Factbook 2001 [ONLINE].]

A factor distinguishing Ireland from its 10 euro-zone partners is its relatively low volume of trade within the euro zone—20 percent of imports and 45 percent of exports in 1998. Current trends do not predict a rapid change in this pattern. Ireland severed its links with the British pound sterling in 1979 and relinquished control over its monetary policy to the European Central Bank (ECB) in 1999. Consequently, the government is no longer free to use exchange rates as part of economic and trade policy. The relationship of the Irish pound to sterling and the U.S. dollar is determined by their relationship to the euro, which itself has been consistently weak since its launch in January 1999. Higher interest rates have been introduced by the ECB to help the euro, but they would need to be considerably higher to curb Irish domestic spending and demand. A downturn in the U.S. economy could, perhaps, result in a strengthening of the euro. This would reduce the costs of imports and help curb inflation, but would at the same time decrease the value of exports. The Irish Stock Exchange (ISE) separated from the international stock exchange of the United Kingdom and the Republic of Ireland in 1995. Since then, in keeping with global trends, the ISE has grown rapidly, with market capitalization increasing from I£7.4 billion in 1992 to I£66.8 billion in 1998, and 81 companies listed in 2001. It appears, however, to be too small to attract significant levels of venture capital, and Irish technology companies tend to look to the NASDAQ or the EASDAQ (a proposed European equivalent) for this reason. With the growing coordination of stock exchanges across Europe, investor participation in Irish stocks may increase.

POVERTY AND WEALTH

Unprecedented growth in the Irish economy during the late 1990s saw living standards, in terms of per capita GDP, reach the EU average for the first time in 1998.

[Table omitted: GDP per capita (US$); source: United Nations, Human Development Report 2000, trends in human development and per capita income.]

However, rapid growth does not automatically translate into a better quality of life, and Ireland is by no means immune to the risk in all industrial societies: that of creating a society where the rich get richer and the poor stay poor. Inequality in Ireland falls generally into 2 categories. The first is essentially that of poverty traditionally created by unemployment. Despite almost full employment, pockets of deprivation characterized by long-term unemployment, high dropout rates from education, and a dependency culture prevail. These disadvantaged groups, frequently plagued by social ills such as the drug culture, suffer markedly from the considerable increase in the cost of living. To relieve deprivation of this nature requires a sustained effort at introducing more comprehensive social policies. In 2000, the Irish government spent only 16 percent of GDP on social welfare, compared to the EU average of 28 percent. The second category of poverty, arising from the disparity of income among the employed, affects a larger number of households. Comparative studies published in Brian Nolan, Chris Whelan, and P.J.
O'Connell's Bust to Boom reveal Ireland, along with the UK and Portugal, to have a high rate of relative income poverty compared to other EU member states.

[Table omitted: Distribution of income or consumption by percentage share, Ireland, survey year 1987 (income shares by percentiles of the population, ranked by per capita income); source: 2000 World Development Indicators [CD-ROM].]

[Table omitted: Household consumption in PPP terms (percentage of consumption spent on food, clothing and footwear, fuel and power, health care, education, transport and communications, and other); source: World Bank, World Development Indicators 2000.]

While there were improvements in the income earned by unskilled, skilled, highly skilled, and educated employees alike, the overall trend from 1987 to 1997 brought more opportunities and higher wage increases for the latter 2 groups. This trend is more acute in Ireland than in other European states. The ESRI (Economic and Social Research Institute) points out that while the fortunes of the wealthiest 10 percent of the employed population increased rapidly between 1987 and 1997, those of the top 5 percent rose even more rapidly. The only positive aspect of income distribution trends was that while the bottom, or poorest, 25 percent appeared to fall away from the average income, the bottom 10 percent did not, indicating that the very poor are not actually getting poorer. One further positive aspect is the increase in gender equality, with women moving to take advantage of increased employment opportunities. Women are establishing themselves as fundamental members of the labor force and improving their average take-home pay to 85 percent of that earned by their male counterparts. However, trends in general income disparity are worsened by crippling house prices. These either prevent many young people on average incomes from buying homes or leave them with huge mortgage payments. Rents have spiraled due to shortages in the housing market. Houses in exclusive Dublin locations have been sold for over I£6 million and, while this is not the norm, an adequate house with easy access to Dublin's city center costs between I£150,000 and I£500,000, having cost perhaps between I£30,000 and I£80,000 at the end of the 1980s. The government does provide safety nets for those in need, granting free medical and dental care on the basis of means testing. Social welfare payments are available to the unemployed, but only to those who can provide an address, and there is some government-provided social, or corporation, housing. This scheme involves making low-rent housing available to the less well off, along with a tenant's long-term option of buying the government out. However, the service has suffered from the housing shortages, which show no signs of letting up (2001), and waiting lists are up to 18 months long.

The falling unemployment of the 1990s has accelerated to the extent that the key issue in 2001 is a shortage of skilled and unskilled labor. The labor force increased from 1,650,100 in early 1999 to 1,745,600 in mid-2000, with 1,670,700 in employment (mid-2000). In 1999 and 2000, surveys carried out by the Small Firms Association indicated that 91 percent of surveyed members were experiencing difficulties recruiting staff, particularly at the unskilled level.
The labor force increased by 6.2 percent (96,000) in 1999, and the number of long-term unemployed decreased to just 1.7 percent of the workforce. There is a risk that this shrinkage in the volume of available labor will further fuel demands for wage increases. Social partnership agreements over the last decade have kept wages moderate and generally lower than in other EU states. There is an increasingly widespread consensus on the part of workers, particularly in the public sector, that the fruits of economic growth have not been distributed, let alone distributed evenly. It is feared that demands for increased pay may undermine growth by fuelling inflation, thus pushing up the cost of living for individuals and the cost of wages for business, both foreign and domestic owned. The input of trade unions into economic policy-making was formalized with the introduction of national wage agreements in 1989. The umbrella body, the Irish Congress of Trade Unions, incorporates 46 unions, with a total membership of 523,700 (2000). According to the largest union, the Services, Industrial, Professional and Technical Union (SIPTU), membership increased by 60,000 to more than 200,000 in 2000. However, many multinationals do not permit union membership. Despite overall improvements in wage and employment levels, the current industrial climate is at its worst this decade. Strikes are a more regular feature across the public sector, with nurses, the Garda (police), and teachers demanding increases of up to 40 percent. The most recent wage agreement—the Programme for Prosperity and Fairness (PPF)—has proved almost impossible to implement, since the agreed annual 5 percent pay increases are no longer considered sufficient by unions; they argue that the cost of living has increased by more and, with inflation having peaked at almost 7 percent in November 2000, they appear to have a case. Hourly rates of pay have increased significantly across all sectors. According to the government's Central Statistics Office, the average industrial wage of I£274.37 for a 40.5-hour week in 1996 rose to I£283.53 in 1997 and I£295.20 in 1998. In 1999, employees in private firms had higher average wage figures: skilled workers earned I£461.86 for a 45.6-hour week, and the unskilled and semi-skilled were paid I£346.55 for a 46.8-hour week. As indicated above, income differentials—the difference between income levels across all sectors from the highest to the lowest—are higher than in other EU countries. COUNTRY HISTORY AND ECONOMIC DEVELOPMENT 1800. British rule over Ireland, present since the 12th century, is extended to the entire country by the 17th and 18th centuries and further centralized with the Act of Union in 1800 (after which no parliament sat in Dublin). 1870s. A strong national movement emerges in Ireland. The national political movement in favor of "home rule" succeeds in incorporating both members of the Anglo-Irish aristocracy and peasant farmers who seek land reform. But resistance on the part of conservative British governments and the strong will of the Protestant population of the northern province—Ulster—to remain in the union delay home rule. 1914-18. A more radical stream of nationalism begins. 1919-21. A guerrilla-style war for independence ensues. The Unionist population of Northern Ireland remains adamant that no granting of either home rule or independence to the island should include them. 1922.
The Anglo-Irish treaty gives 26 of the 32 counties of Ireland independence from the United Kingdom with some symbolic restrictions, such as the retention of the crown as head of state. The remaining 6 counties in the north of the island remain part of the UK. 1923. Those for and against the treaty fight a civil war, partly over the spoils of government and partly over the retention of symbolic links with Britain; it ends in the capitulation of the anti-treaty forces, who then form the political party Fianna Fáil in 1926. 1925. Partition of the island into Eire and Northern Ireland is informally made permanent. 1938. More than a decade of politically provoked and disastrous "economic war" with Britain ends. 1940. Ireland declares itself neutral in World War II. 1949. Although informally a republic since 1937, Ireland is formally declared a republic. 1950s. Emigration increases rapidly, and rural poverty becomes widespread. 1960s. The inward-looking, tariff-centered economic policies are rejected in favor of an open policy, but the state still plays a huge role in the economy. 1970s. High government spending increases the national debt to unsustainable levels and sparks off high inflation. The oil crisis of 1979 also hits the country hard. 1973. Ireland joins the European Economic Community, along with Britain and Denmark. 1980s. High inflation and unemployment levels persist, alongside income tax rates that reach over 65 percent. 1987. Ireland endorses the Single European Act, which establishes the common European market. The first social partnership agreements of the 1980s negotiate a plan for national economic recovery. 1990s. Tighter fiscal policies, trade and enterprise-friendly economic policies, and social partnership agreements, alongside other factors such as the long-term benefits of EU transfers, facilitate a turnaround in the economic fortunes of the country. 1991. EU countries sign the Maastricht Treaty, which formalizes the plan for European Monetary Union and agrees on the ground rules for entry into EMU. 1994-98. Following the paramilitary cease-fire in Northern Ireland and long negotiations, a peace process results in political agreements between Britain, Ireland, and Northern Ireland. 1995-96. The economy shows strong growth and a significant increase in employment opportunities. 1998. Ireland endorses the Amsterdam Treaty, which extends EU co-ordination of social and security policy and provides for enlargement. 1999. EMU is introduced and the European Central Bank takes over monetary powers in Ireland. For most of the latter half of the 20th century, Irish policy makers focused on the challenge of how to instigate sustainable economic growth that would serve to reduce high unemployment and emigration levels and to increase standards of living to the European level. In the 21st century, the key challenge is to implement a policy mix that sustains the benefits of growth while dealing with the key interlinked threats posed by inflation and acute labor market shortages. In 2001, rising inflation has seen the cost of living increase considerably, and this, alongside more demand than supply in the labor market, puts strong upward pressure on wages. Dealing with inflation and labor market shortages is complicated by the extent to which external forces affect Ireland's economy, which is a regional, export-oriented economy within a monetary union. For example, the health of the euro and trends in global oil prices will either help or hinder the curbing of inflation.
Lower oil prices and a stronger euro would reduce the cost of imports and, thus, inflation. Another important external force is the slow-down in the U.S. economy (2001). This could decrease the United States' domestic demand for imports, at the same time decreasing multi-national companies' investment in the Irish market, thus putting trade volume, employment, and growth at risk. In turn, spiraling inflation could result as job losses cause people to struggle to pay mortgages and the high levels of credit that have been the trend throughout the 1990s and beyond. While there are differing opinions as to which policies are most effective to curb inflation and thus reduce the upward pressure on wages, most commentators agree that a flexible fiscal policy, in particular flexible wages (using wage agreements), is vital if both are to be avoided. Flexibility is necessary because of the dual and uncertain nature of external challenges to economic success. Different external factors call for different reactions. The immediate problem facing the government in 2001 is the threat to social partnership policy-making posed by the increasing demands of unions for higher wage agreements. Higher wages and a break in the partnership would threaten the competitiveness of the Irish labor market, which remains relatively cheap compared to the rest of Europe. But competitiveness is also at risk as a result of labor market shortages. It is likely that moderate wage increases to maintain social consensus (partnership agreements) are required alongside policies to encourage immigration (to increase the labor market supply) and policies to encourage savings (to reduce the threat of inflation). However, different policy responses would be required should the U.S. slowdown reach the point where foreign companies pull out, thus reducing employment. Attempts have been made to prepare for this scenario; the IDA has put more emphasis on health care and e-commerce companies and on research and development functions to deepen the roots of foreign investment, thus lessening the risk of an exodus. A healthy future economy largely depends on how government responds to uncertain threats, and it would appear that the adoption of a flexible approach is vital. This is in turn a prerequisite for improving the quality of life and diverting a percentage of expenditure to programs designed to narrow the disparities in individual prosperity. Ireland has no territories or colonies. Duffy, David, John Fitzgerald, Kieran Kennedy, and Diarmaid Smyth. ESRI Quarterly Economic Commentary. Dublin: Economic and Social Research Institute, December 1999. Economist Intelligence Unit. Country Profile: Ireland. London: Economist Intelligence Unit, 2001. Economist Intelligence Unit. Country Report: Ireland June 2000. London: EIU, 2000. Economist Intelligence Unit. Country Report: Ireland November 2000. London: EIU, 2000. Irish Business and Employers Association (IBEC). "Quarterly Economic Trends." Dublin: IBEC Statements, December 2000. Irish Business and Employers Association (IBEC). "Economy Not All Boom." Dublin: IBEC Statements, January 2001. Irish Farmer's Association. Structure and Competitiveness in Irish Agriculture. Dublin: IFA, July 1999. Nolan, Brian, P.J. O'Connell, and C. Whelan. Bust to Boom: The Irish Experience of Growth and Inequality. Dublin: IPA and ERSI, 2000. Nolan, Brian, and Bertrand Maitre. "Income Inequality." Bust to Boom: The Irish Experience of Growth and Inequality. Dublin: IPA and ERSI, 2000. O'Hagan. 
The Economy of Ireland. 6th edition. Dublin: IPA, 2000. Small Firms Association. End of Year Statement. Dublin: SFA, 2000. Small Firms Association. Results of Pay Survey. Dublin: SFA, January 2001. U.S. Central Intelligence Agency. CIA World Factbook 2000. <http://www.odci.gov/cia/publications/factbook/index.html>. Accessed September 2001. MONETARY UNIT: Irish Pound (I£). One Irish pound equals 100 pence (p). There are notes of 1, 2, 5, 10, 20, 50, and 100 pounds. There are 1, 2, 5, 10, 20, and 50 pence coins. Ireland is part of the European Monetary Union (EMU), implemented on paper in January 1999. From 1 January 2002, the pound will be phased out with the introduction of the euro. The euro has been fixed at 0.787564 Irish pounds, with I£1 equaling approximately 1.27 euros. There are 100 cents in the euro, which is denominated in notes of 5, 10, 20, 50, 100, 200, and 500 euros, and coins of 1 and 2 euros and 1, 2, 5, 10, 20, and 50 cents. CHIEF EXPORTS: Machinery and equipment, computers (hardware and software), chemicals, pharmaceuticals, live animals, animal products. CHIEF IMPORTS: Data processing equipment, other machinery and equipment, chemicals, petroleum and petroleum products, textiles, clothing. GROSS DOMESTIC PRODUCT: US$83.6 billion (2000 est.). BALANCE OF TRADE: Exports: US$73.8 billion (2000 est.). Imports: US$46.1 billion (2000 est.). [CIA World Factbook indicates exports to be US$66 billion (1999 est.) and imports to be US$44 billion (1999 est.).] "Ireland." Worldmark Encyclopedia of National Economies. Encyclopedia.com. Retrieved June 25, 2017, from http://www.encyclopedia.com/economics/encyclopedias-almanacs-transcripts-and-maps/ireland Country profile, Ireland: Official country name: Ireland. Region: Europe. Language(s): English, Irish (Gaelic). Area: 70,280 sq km. GDP: US$93,865 million. Daily newspapers: 6, with circulation of 191 per 1,000. Nondaily newspapers: 61, with circulation of 462 per 1,000. Newspaper consumption: 40 minutes per day. Total newspaper ad receipts: 307 million euros, or 46.80 percent of all ad expenditures. Television stations: 4. Television sets: 1,820,000 (473.9 per 1,000). Television consumption: 199 minutes per day. Cable subscribers: 672,220 (176.9 per 1,000). Satellite subscribers: 130,000 (33.8 per 1,000). Radio stations: 115. Radio receivers: 2,550,000 (663.9 per 1,000). Radio consumption: 305 minutes per day. Individuals with computers: 1,360,000 (354.1 per 1,000). Individuals with Internet access: 784,000 (204.1 per 1,000). Internet consumption: 23 minutes per day. Background & General Characteristics The Republic of Ireland, which occupies 5/6 of the island of Ireland, is roughly equal to the state of South Carolina in terms of size and population. Half the population is urban, with a third living in metropolitan Dublin. Ireland is 92 percent Roman Catholic and has a 98 percent literacy rate. Despite centuries of English rule that sought to obliterate Ireland's Celtic language, one-fifth of the population can speak Gaelic today.
Full political independence from Great Britain came in 1948 when the Republic of Ireland was established, but the United Kingdom maintains a strong economic presence in Ireland. The printing press came to Ireland in 1550. Early news sheets appeared a century later. The Irish Intelligencer began publication in 1662 as the first commercial newspaper, and the country's first penny newspaper, the Irish Times, began in 1859. The Limerick Chronicle, which was founded in 1766, is the second-oldest English-language newspaper still in existence (the oldest is the Belfast Newsletter ). Irish newspapers are typically divided into two categories: the national press, most of which is based in Dublin, and the regional press, which is dispersed throughout the country. The national press consists of four dailies, two evening newspapers, and five Sunday newspapers. There are approximately 60 regional newspapers, most of which are published on a weekly basis. Competition among newspapers in Dublin is spirited, but few other cities in Ireland have competing local newspapers. Roughly 460,000 national newspapers are sold in the Republic each day. The sales leader is the Irish Independent, a broadsheet especially popular among rural, conservative readers. The second best seller is the Irish Times, which is regularly read by highly educated urban professionals and managers. The Irish Times is probably Ireland's most influential paper. The third most popular national daily in Ireland is the tabloid The Star, an Irish edition of the British Daily Star. The Irish Examiner sells the least nationally, but it is the sales leader in Munster, Ireland's southwest quarter. Published in the city of Cork, the Irish Examiner is the only national daily issued outside of Dublin. Some 130,000 evening newspapers are sold every day in Ireland. The leader is the Evening Herald, a tabloid popular in Dublin and along the east coast. The other evening paper is the Evening Echo, a tabloid published in Cork, and like its morning sister paper the Irish Examiner, popular in Munster. Five Sunday newspapers are published in Ireland, with a total circulation of 800,000. The Sunday Independent, like the daily Irish Independent, has the largest circulation. Sunday World, a tabloid, comes in a close second. The other broadsheet, the Sunday Tribune, has a circulation of less than a third of Sunday Independent's. The fourth most popular Sunday paper is the Sunday Business Post, read by highly educated professionals and managers. The lowest circulating Sunday paper is the tabloid Ireland on Sunday, which is popular among young adult urban males. The other weekly national newspaper in Ireland is the tabloid Irish Farmer's Journal, which serves Ireland's agricultural sector. Newspaper penetration in Ireland is about the same as that of the United States: 59 percent of adults read a daily paper. The newspapers that are most-often read are Irish. Of the 59 percent of adults who read daily newspapers, 50 percent read an Irish title only, 5 percent read both Irish and UK dailies, and 4 percent read UK titles only. Newspaper reading patterns change on Sunday, when 76 percent of adults read a paper. Of this 76 percent, 51 percent read an Irish title only, 16 percent read both Irish and UK Sundays, and 9 percent read UK titles only. The total readership of UK Sunday newspapers is 25 percent, compared with 9 percent who read UK daily papers. 
Regional newspapers have a small readership, but one that is loyal, a fact that has turned regional papers into attractive properties for larger companies to buy. Examiner Publications, for example, has bought nine regional papers: Carlow Nationalist, Down Democrat, Kildare Nationalist, The Kingdom, Laois Nationalist, Newry Democrat, Sligo Weekender, Waterford News & Star, and Western People. Other major buyers of regional newspapers include Independent News & Media and Scottish Radio Holdings. Unlike their national counterparts, regional newspapers carry little national news and have traditionally been reluctant to advocate political positions. This pattern did not hold during the 2002 abortion referendum, however. The Longford Leader was reserved, complaining that various organizations had pressured "people to vote, on what is essentially a moral issue, in accordance with what they tell us instead of in accordance with our own consciences." The Limerick Leader, by contrast, urged its readers to vote for the referendum: "Essentially the current proposal protects the baby and the mother. Defeat would open up the possibility of increased dangers to both." There are 30 magazine publishers in Ireland publishing 156 consumer magazines and 7 trade magazines. Only five percent of magazines are sold through subscription in Ireland; most are sold at the retail counter. One-fourth of magazine revenues come from advertising; the remaining three-fourths comes from sales. By far the most popular magazine in Ireland is the weekly radio and television guide, RTÉ Guide, which far outsells the most popular titles for women (VIP and VIP Style ), general interest (Buy & Sell and Magill ), and sports (Breaking Ball and Gaelic Sport ). Like Ireland's newspapers, Ireland's indigenous magazine publishers face strong competition from UK titles. According to the Periodical Publishers Association of Ireland, four out of every ten magazines bought in Ireland are imported. Magazines suffered a financial blow in 2000 when the government banned tobacco advertising, which had been the second-largest source of magazine advertising revenue. Magazines receive a very small share of Ireland's advertising expenditure: in 2000, magazines received only 2 percent; newspapers received 55 percent; and radio and television received 33 percent. A recent survey of 46 Irish book publishers found a vital book publishing industry. Seventy percent of book publishing in Ireland is for primary, secondary, and post-secondary education. Most of the remaining 30 percent is non-fiction, but the market for Irish fiction and children's books is active. Irish book publishers sell most of their books domestically (89 percent), although export sales (11 percent) are notable. More than 800 new titles are published each year, and Irish publishers keep about 7,400 titles in print. During the 1990s, Ireland earned the nickname "Celtic tiger" because of its robust economic growth. No longer an agricultural economy in the bottom quarter of the European Union, Ireland rose to the top quarter through industry, which accounts for 38 percent of its GDP, 80 percent of its exports, and 28 percent of its labor force. Ireland became a country with significant immigration. The economic boom, which included a 50 percent jump in disposable income, also led to increased spending in the media as well as increased numbers of media operators in Ireland. The underside of these achievements is child poverty, real estate inflation, and traffic congestion. 
By far the largest Irish media company is Independent News and Media PLC, which sells 80 percent of the Irish newspapers sold in Ireland. Independent News publishes the Irish Independent, the national daily with the highest circulation in Ireland, the two leading national Sunday newspapers, the national Evening Herald, 11 regional newspapers, and the Irish edition of the British Daily Star. Yet these properties, along with a yellow page directory, contribute only 28 percent of the revenues of Independent News; most of the rest comes from its international media properties. Despite the dominance of Independent News in Ireland, the Irish Competition Authority has concluded that the Irish newspaper industry remains editorially diverse. The mind behind Independent News is Tony O'Reilly, whose 30 percent stake in the company is worth $430 million. O'Reilly founded Independent News in 1973 with a $2.4 million investment in the Irish Independent. Now the company includes the largest chains in South Africa and New Zealand, regional papers in Australia, and London's Independent. O'Reilly, who is the richest living Irishman with a personal fortune of $1.3 billion, has been a rugby star, CEO of the H. J. Heinz Company, and chairman of Waterford Wedgwood. He earned a Ph.D. in agricultural marketing from the University of Bradford, England, and was knighted in 2001. "I am a maximalist," O'Reilly says. "I want more of everything." There is some media cross-ownership in Ireland. A few local newspapers own shares in local commercial radio stations. Independent News has a 50 percent financial interest in Chorus, the second largest cable operator in Ireland. Scottish Radio Holdings owns the national commercial radio service Today FM as well as six regional newspapers. O'Reilly is the Chairman not only of Independent News, but also of the Valentia consortium, which owns Eircom, operator of one of the largest online services in Ireland, eircom.net. Most Irish journalists, both in print and in broadcast, belong to the National Union of Journalists, which serves as both a trade union and a professional organization. The NUJ is the world's largest union of journalists, with over 25,000 members in England, Scotland, Wales, and Ireland. Recently, the NUJ fought a newspaper publishers' proposal to give media companies the copyright to staff-generated material, a change that would have allowed them to syndicate material without paying royalties to journalists. Ireland's largest publishers are represented by National Newspapers of Ireland (NNI). Originally formed to promote newspaper advertising, it has expanded to lobby the government on major concerns of the newspaper industry. Although the Irish Constitution does not mention privacy per se, the Supreme Court has said, "The right to privacy is one of the fundamental personal rights of the citizen which flow from the Christian and democratic nature of the State." Legislative measures to protect privacy include the Data Protection Act of 1988, which regulates the collection, use, storage, and disclosure of personal information that is processed electronically. Individuals have a right to read and correct information that is held about them. Wiretapping and electronic surveillance are regulated under the Interception of Postal Packets and Telecommunications Messages (Regulation) Act, which was passed after the Supreme Court ruled in 1987 that the unwarranted wiretaps of two reporters violated the constitution.
Because there is no press council or ombudsman for the press in Ireland, the main way to deal with complaints about the Irish media is to go to court. Irish libel laws leave the media vulnerable to defamation lawsuits, which are common. Libel suits are hard to defend against, so the press often settles out of court rather than go through the expense of a trial and then pay the increasingly large judgments that juries award to plaintiffs. Defamation suits cost Ireland's newspapers and broadcasters tens of millions of euros every year. As a result, lawyers are kept on staff to advise on everything from story ideas to book manuscripts. "When in doubt, leave it out," has become editorial wisdom. The media keeps one eye on the courtroom and the other on distributors and stores, some of which have refused to carry publications for fear that they too could be sued for libel. Defamation has not chilled the Irish press entirely, but it has made investigative journalism difficult. Veronica Guerin's exposés about the Irish drug underworld in The Sunday Independent are a case in point. Naming persons as drug dealers is a sure-fire way to elicit a libel suit in Ireland, unless, that is, the criminals explicitly comment on allegations made against them. So despite being threatened, beaten, and even shot, Guerin persisted in confronting drug dealers to get them to say something that she could report in the newspaper. Mostly Guerin used nicknames to identify the criminals, making sure not to use details that would make them readily identifiable. This strategy averted libel suits, because in order to sue, the criminals would have to prove that they were the persons who were nicknamed. In 1995, Guerin received the International Press Freedom Award from the Committee to Protect Journalists. The following year she was shot to death. According to Irish law, defamation is the publication of a false statement about a person that lowers the individual in the estimation of right-thinking members of society. The Defamation Act of 1961 does not require the plaintiff to prove that the reporter was negligent or that the reporter failed to exercise reasonable care. The plaintiff does not even have to give evidence that he or she was harmed personally or professionally: the law assumes that false reports are harmful. The plaintiff merely has to show that the offensive words referred to him or her and were published by the defendant. It is up to the defendant to prove that the report is true. The truth of media reports can be hard to prove. In 2002, John Waters, a columnist for the Irish Times, sued The Sunday Times of London for an article written by gossip columnist Terry Keane. The article was about a talk that Waters had given before a performance of the Greek tragedy Medea. Keane called Waters' talk "a gender-based assault," and added that she felt sorry for Waters' daughter: "When she becomes a teenager and, I hope, believes in love, should she suffer from mood swings or any affliction of womanhood, she will be truly goosed. And better not ask Dad for tea or sympathy… or help." Waters said that the article tarnished his reputation as a father, and the jury agreed, awarding him 84,000 euros in damages plus court costs. As this case shows, even journalists are not reluctant to sue for libel in Ireland. But other groups sue more frequently. Business people and professionals, particularly lawyers, file libel suits most often; they are followed by politicians.
Indeed, Irish libel laws favor public officials and civil servants, who can sue for defamation at government expense. If they lose, they owe the government nothing, but if they win, they get to keep the award. Two strategies are under consideration to reform libel law in Ireland. The first is legislative. National Newspapers of Ireland (NNI) advocates changes in libel law in exchange for formal self-regulation. According to NNI, the Irish public would be better served by having the courts be the forum of last, rather than first, resort in defamation cases. NNI would like to see a strong code of ethics that would be enforced by the ombudsmen of individual media if possible and by a country-wide press council if necessary. But this system can be established only with libel reform, because as the law stands now, a newspaper that publishes an apology is in essence documenting evidence of its own legal liability. The second strategy to reform libel law in Ireland is judicial. Civil libertarians want an Irish media organization to challenge a libel judgment all the way to the European Court of Human Rights in Strasbourg, a court to which Ireland owes allegiance by treaty. If the Irish government loses its case in the Strasbourg court, it is obliged to change its laws to conform to the ruling. Many civil libertarians pin their hope for significant libel reform upon the Strasbourg court because it has ruled that excessive levels of defamation awards impinge free expression. Despite these strategies for libel reform, the Defamation Act of 1961 still stands because politicians tend to view the media with skepticism and have grave doubts about the sincerity and efficacy of media self-regulation. The Irish public, meanwhile, tends to believe that the media want libel reform more for reasons of self interest than for public service. Change, therefore, is slow. Political scientist Michael Foley fears that "the Irish media will remain a sort of lottery in which many of the players win. Freedom of the press will continue to be the big loser." The Irish Constitution simultaneously advocates freedom of expression and, by forbidding expression that is socially undesirable, permits censorship: "The education of public opinion being, however, a matter of such grave import to the common good, the State shall endeavour to ensure that organs of public opinion, such as the radio, the press, the cinema, while preserving their rightful liberty of expression, including criticism of Government policy, shall not be used to undermine public order or morality or the authority of the State." The Constitution also says, "The publication or utterance of blasphemous, seditious, or indecent matter is an offence which shall be punishable in accordance with law." Accordingly, the government has enacted and rigorously enforced several censorship laws including Censorship of Film Acts, Censorship of Publications Acts, Offenses Against the State Acts, and the Official Secrets Act. The history of censorship in Ireland is also a history of diminishing suppression. A case in point is the 1997 Freedom of Information Act, which changed the longstanding Official Secrets Act, under which all government documents were secret unless specified otherwise. Now most government documents, except for those pertaining to Irish law enforcement and other subjects of sensitive national interest, are made available upon request. 
The number of requests for information under the Freedom of Information Act has increased steadily since the law was passed. The Censorship of Publications Acts of 1929, 1946, and 1967 have governed the censorship of publications. The Acts set up a Censorship and Publications Board, which replaced a group called the Committee of Enquiry on Evil Literature, to examine books and periodicals about which any person has filed a complaint. The Board may prohibit the sale and distribution in Ireland of any publications that it judges to be indecent, defined as "suggestive of, or inciting to sexual immorality or unnatural vice or likely in any other similar way to corrupt or deprave," or that advocate "the unnatural prevention of conception or the procurement of abortion," or that provide titillating details of judicial proceedings, especially divorce. A prohibition order lasts up to twelve years, but decisions made by the Board are subject to judicial review. The first decades under censorship laws were a time of strong enforcement, thanks in large measure to the Catholic Truth Society, which was relentless in its petitions to the Censorship Board. Before the 1980s, hundreds of books and movies were banned every year in Ireland, titles including such notable works as Hemingway's A Farewell To Arms, Huxley's Brave New World, Mead's Coming Of Age In Samoa, and Steinbeck's The Grapes Of Wrath. Playboy magazine was not legally available in Ireland until 1995. The English writer Robert Graves described Ireland as having "the fiercest literary censorship this side of the Iron Curtain," and the Irish writer Frank O'Connor referred to the "great Gaelic heritage of intolerance." Although the flood of censorship has slowed to a trickle, recent cases serve as a reminder that censorship efforts still exist in Ireland today. A Dublin Public Library patron complained to the Censorship Board that Every Woman's Life Guide and Our Bodies, Ourselves contained references to abortion. The library removed the books because, according to a Dublin Public Library spokesperson, "We're not employed to put ourselves at legal risk." In 1994, the Oliver Stone film Natural Born Killers was banned. In 1999, the Censorship Board banned for six months the publication of In Dublin, a twice-monthly events guide, because the magazine published advertisements for massage parlors. The High Court chastised the board for excessiveness and lifted the ban on condition that the offending advertisements would appear no longer. The potential for censorship in Ireland is real, but circumscribed. Not only have recent government censors shown restraint in inverse proportion to the power they have on paper, but most of the censorship that they have exercised is over blatant pornography. Furthermore, Ireland is awash with foreign media, so parochial censorship is likely to be countered readily by information from the UK, Western Europe, and the United States. Like other western European countries, Ireland has an established free press tradition. The Irish Constitution guarantees "liberty of expression, including criticism of Government policy" but makes it unlawful to undermine "the authority of the State." Although not absolute, press freedom is fundamental to Irish society. The extent to which the Irish media exercise their right to criticize government policy is a matter of perspective. Many politicians and government officials believe that the press is critical to the point of being downright carnivorous. 
Garrett Fitzgerald, former prime minister of Ireland, agreed with Edward Pearce of New Statesman & Society who said that the media "devour our politicians, briefly exalting them before commencing a sort of car-crushing process." However, others disagree, complaining that the press acquiesces to the wishes of the government. "The relationship between the media, especially the broadcast media, and the political establishment is the aorta in the heart of any functioning democracy. Unfortunately, in Ireland this relationship has become so profoundly skewed that it threatens the health of the body politic," Liam Fay complained in the London Sunday Times. While RTÉ [Radio Telefís Éireann] will never become "the proverbial dog to the politicians' lamppost," said Fay, "the station has a responsibility to do more than simply provide a leafy green backdrop against which our leaders and would-be leaders can display their policies in full bloom. Yet this is precisely what much of RTÉ's political coverage now amounts to." Revelations in the 1980s that the government had tapped the phones of three journalists for long periods of time helped to spur further antagonism between news reporters and the government. Because the government had tapped the phones of Geraldine Kennedy of the Sunday Tribune and Bruce Arnold of the Irish Independent without proper authorization in an attempt to track down cabinet leaks, the High Court awarded £20,000 each to Kennedy and Arnold and an additional £10,000 to Arnold's wife. Vincent Brown of Magill magazine, whose phone was tapped when his research had put him in touch with members of the IRA, settled out of court in 1995 for £95,000. The government office established to deal with the media is Government Information Services (GIS), made up of the Government Press Secretary, the Deputy Government Press Secretary, and four government press officers. Charged with providing "a free flow of government-related information," GIS issues press releases and statements, arranges access to officials, and coordinates public information campaigns. Attitude toward Foreign Media The government of Ireland has a cooperative relationship with foreign media. The Department of Foreign Affairs keeps domestic and international media up to date on developments in Irish foreign policy by publishing a range of information on paper and electronically, providing press briefings, and arranging meetings between foreign correspondents and the agencies about which they want to report. Opposition to foreign media in Ireland thus comes not from the government but rather from Irish publishers, who complain of unfair competition from British media companies. One complaint involves below-cost selling. Irish publishers protest that their British competitors sell newspapers in Ireland at a cover price with which Irish newspaper companies cannot compete. Irish publishers also complain that the 12.5 percent value-added tax (VAT) on newspaper sales in Ireland causes an unfair burden. Huge British companies, which pay no VAT on newspaper sales at home, are able to absorb the Irish tax more easily than Irish publishers, who lack the cushion British publishers enjoy. Irish publishers claim that below-cost selling and the VAT have helped ensure that a significant portion of daily and Sunday newspapers sold in Ireland are British. Besides competing with imported media, Irish companies are increasingly finding themselves competing with foreign-owned companies at home.
Scottish Radio Holdings owns Today FM, the national newspaper Ireland on Sunday, and five regional papers. CanWest Global Communications has a 45 percent stake in TV3, as does Granada, the largest commercial television company in the UK. And Trinity Mirror, the biggest newspaper publisher in the UK and the second largest in Europe, owns Irish Daily Mirror, The Sunday Business Post,Donegal Democrat, and Donegal Peoples Press. Until recently, foreign ownership of Irish media has been limited. But the Irish media market is attractive and there is no legislation that prevents foreign ownership of Irish media, so the sale of Irish media properties to foreign (primarily British) companies is expected to continue. Because Ireland is a small country, there are no domestic Irish news agencies. Irish media use international news agencies and their own reporters for news gathering. Although some of the major international agencies have a bureau in Dublin—representatives include Dow Jones Newswires, ITAR-TASS, Reuters, and BBC— many do not, choosing instead to rely upon their London correspondents to report on Ireland as stories arise. Since the 1920s, broadcasting in Ireland has been dominated by RTÉ, a public service agency that is funded by license fees and the sale of advertising time. RTÉ runs four radio and three television channels. Radio 1 is RTÉ's flagship radio station. Begun in 1926, it broadcasts a mixture of news, information, music, and drama. RTÉ's popular music station 2 FM is known for its support of new and emerging Irish artists and musicians. Lyric FM is RTÉ's 24-hour classical music and arts station. The fourth RTÉ station is Radió na Gaeltachta, which was established in 1972 to provide full service broadcasting in Irish. RTÉ also operates RTÉ 1, a television station that emphasizes news and current affairs programming; Network 2, a sports and entertainment channel; and TG4, which televises Irish-language programs. RTÉ is currently in the process of launching four new digital television channels: a 24-hour news and sports channel, an education channel, a youth channel, and a legislative channel. The funding for RTÉ has been a source of contention among Ireland's commercial broadcasters, who complain that license fees contribute to unfair competition. RTÉ receives license fees to support public service broadcasting even though RTÉ's schedule is by no means exclusively noncommercial. Commercial broadcasters, by contrast, are required to program news and current affairs, but without any support from license fees. Meanwhile, because the government is loath to increase license fees, RTÉ is finding it must rely more upon advertising even as increasing competition among broadcasters is making advertising revenue more difficult to obtain. Besides RTÉ's public stations, there are many independent radio and television stations in Ireland. There are 43 licensed independent radio stations in Ireland. In addition to the independent national station, Today FM, there are 23 local commercial stations, 16 non-commercial community stations, and four hospital or college stations. Although many of the independent stations broadcast a rather stock set of music, advertising, disk jockey chatter, and current affairs programs, some serve their communities with unique discussion programs. Pirate radio stations have existed in Ireland as long as Ireland has had radio. Today about 50 pirate stations operate throughout Ireland. 
The Irish government tolerates these stations as long as they do not interfere with the signals of licensed broadcasters. The only independent indigenous television station in Ireland is TV3. Although licensed to broadcast in 1988, TV3 did not begin broadcasting until 1998. It took ten years to find financial backing, 90 percent of which finally came from the television giants Granada (UK) and CanWest Global Communications (Canada). TV3 produces few programs in house; most TV3 programs are sitcoms and soap operas imported from the United States, UK, and Australia. More than half of Irish households subscribe to cable TV. (Cable penetration in Dublin is an astounding 83 percent.) Those who subscribe to cable receive the three Irish television channels, four UK channels, and a dozen satellite stations. Two companies, NTL and Chorus, control most cable TV in Ireland. The US-owned NTL is the largest. Chorus Communication is owned by a partnership of Independent News and Media, the Irish conglomerate, and Liberty Media International, which is owned by AT&T. The only provider of digital satellite in Ireland is Sky Digital, operated by British Sky Broadcasting (BSkyB). Twenty percent of households in Ireland subscribe to Sky Digital. Sky offers more than 100 broadcast television channels plus audio music and pay-per-view channels. Beginning with two matches between middleweight boxers Steve Collins and Chris Eubank in 1995, and quickly followed by golf tournaments and even an Ireland-USA rugby game, Sky has bought exclusive rights to Irish sports events for broadcast on a pay-per-view basis. Sky's purchases have had the effect of making certain events exclusive that had customarily been broadcast freely. Irish viewers can no longer expect to see every domestic sports event without paying extra. All information transmitted electronically, from broadcast to cable to satellite and Internet, is under the authority of the Broadcasting Commission of Ireland (BCI), as set forth in the Broadcasting Act of 2001. BCI is responsible for the licensing and oversight of broadcasting as well as for writing and enforcing a code of broadcasting standards. Electronic News Media Ireland today is a center for the production and use of computers in Europe. One-third of all PCs sold in Europe are made in Ireland, and many software companies have plants there. Indigenous companies include the Internet security firm Baltimore Technologies and the software integration company Iona Technologies. The longest running Internet news service in the world is The Irish Emigrant, which Liam Ferrie began as an electronic newsletter in 1987 to keep his overseas colleagues at Digital Equipment Corporation informed of news from Ireland. Today, The Irish Emigrant reaches readers in over 130 countries. A hard copy version has appeared on green newsprint in Boston and New York since 1995. The Irish Internet Association gave Ferrie its first Net Visionary Award in 1999. Virtually all broadcasters and newspapers in Ireland have a web page. The Irish Times launched its website in 1994 and transformed it into the portal site, Ireland.com, four years later. This website attracts 1.7 million visits from 630,000 unique users each month, a rate in Ireland second only to the site of the discount airline Ryanair. Following the trend among content-driven web sites throughout the world, Ireland.com began to charge for access to certain sections of the site in 2002. 
Ireland's Internet penetration rate was 33 percent in 2002; the penetration rate in Dublin was 53 percent. Education & TRAINING Journalism education is becoming increasingly common in Ireland. At least three institutions of higher education offer degrees in journalism. Dublin City University's School of Communications offers several undergraduate and graduate degrees including specialties in journalism, multimedia, and political communication. Dublin Institute of Technology offers a B.A. degree in Journalism Studies and a Language, designed to educate journalists for international assignments or for dual-language careers at home. Griffith College Dublin offers a B.A. degree in Journalism & Media Communications as well as a one-year program to prepare students for a career in radio broadcasting. Irish students who need financial assistance in order to study journalism can apply for the Tom McPhail Journalism Bursary, a scholarship administered by the National Union of Journalists in honor of the Irish Press and Granada Television news editor who co-founded the short-lived Ireland International News Agency. Given its relatively small population of 3.8 million, the Republic of Ireland has a rich media environment. It is served by 12 national newspapers—four dailies, two evenings, five Sundays, and one weekly—and by more than 60 regional newspapers. There are more than 150 indigenous consumer magazines and nearly 50 indigenous book publishers. Ireland has four national television stations, five national radio stations, and dozens of regional radio stations. There is a growing Irish presence on the Internet as well as an increasing Internet penetration rate in Ireland. There are also a plethora of imported books, magazines, and newspapers, as well as radio and television channels available through cable and satellite. Ireland's media environment is both populous and diverse, essential qualities for any healthy democracy. Politically, the media in Ireland is as free from government interference as it has ever been. Before the 1990s, the Censorship Board banned hundreds of books and movies every year, a pattern that inhibited creativity at home and attempts at importing from abroad. Today the Censorship Board screens for pornography, but little more. Literature and film are free to circulate. The government has also granted the media far wider access to its records. Until recently government records in Ireland were presumed to be private and unavailable to the public. But with the Freedom of Information Act of 1997, the press—or any Irish citizen—can now make formal requests to see government records, and with very few exceptions, those requests will be granted. The Freedom of Information Act has had the effect of encouraging more investigative reporting. Libel, however, continues to be a problem for the press in Ireland. Libel suits are relatively easy to win in Ireland because the plaintiff has only to prove publication of defamatory statements, not their falsity, which in Ireland is the defendant's task. Furthermore, the more public the figure in Ireland, the greater the award for defamation that juries are likely to give. Such conditions make investigative reporting risky and, with the cost for lawyers on retainer, expensive. Nevertheless, the national Irish media continue to criticize government officials and discuss important social, political, economic, and religious issues. The chilling effect seems more potential than real at this point. 
The media in Ireland are also facing economic challenges. One is globalization. Irish media confront stiff competition from magazines, books, and newspapers, as well as radio and television programs, that pour into Ireland from transnational UK companies with such economies of scale that they can undersell indigenous Irish products. Increasingly, media companies from the UK are buying Irish media, and large Irish companies are doing the same, so that there are fewer and fewer owners of the media. This increasing concentration is likely to diminish diversity in media content. The public service tradition in Irish broadcasting is experiencing similar difficulties. RTÉ relies upon license fees supplemented with advertising revenue to fund its programming, which ranges from news and current affairs to entertainment and cultural programming both in English and in Gaelic. At the same time, RTÉ is facing increasing competition from commercial broadcasters that offer popular, lighter fare. Under these circumstances, RTÉ audiences will decline, making it more difficult both to generate advertising revenue and to justify increased license fees. Although RTÉ operates under a mandate to offer programs that serve viewers rather than merely satisfy them, the pressure on RTÉ is to compete with its commercial counterparts by shifting from a model of public service to a marketplace model. The trends toward concentration and commercialization of the media in Ireland are indeed powerful, but their effects are likely to be mitigated, at least in part, by other forces. One of these forces is technology. The Internet, with its small but growing presence in Ireland, offers the very real opportunity to contribute ideas to the public sphere that have little apparent commercial appeal. Businesses and established publishers and broadcasters dominate the Internet, but not exclusively, so the Internet will continue to be available as an avenue for dissent and other alternative expression. Furthermore, as long as the desire to preserve, promote, and explore Irish culture and language is strong, unique, and compelling, Irish communications will continue to circulate, sometimes commercially and sometimes as the result of government planning and investment. - 1997: Freedom of Information Act passed. - 1998: TV3, Ireland's first commercial television station, began broadcasting. - 2001: Broadcasting Act passed. Farrell, Brian. Communications and Community in Ireland. Dublin: Mercier, 1984. Horgan, John. Irish Media: A Critical History Since 1922. London: Routledge, 2001. Kelly, Mary J. and Barbara O'Connor, eds. Media Audiences in Ireland: Power and Cultural Identity. Dublin: University College Dublin Press, 1997. Kiberd, Damien, ed. Media in Ireland: The Search for Ethical Journalism. Dublin: Four Courts, 1999. ——. Media in Ireland: The Search for Diversity. Dublin: Four Courts, 1997. Oram, Hugh. The Newspaper Book: A History of Newspapers in Ireland, 1649-1983. Dublin: MO Books, 1983. Woodman, Kieran. Media Control in Ireland, 1923-1983. Carbondale: Southern Illinois University Press, 1985. John P. Ferré. "Ireland." World Press Encyclopedia. Encyclopedia.com. Retrieved June 25, 2017, from http://www.encyclopedia.com/media/encyclopedias-almanacs-transcripts-and-maps/ireland Recipes: Traditional Irish Stew, Soda Bread, Corned Beef with Cabbage, Colcannon, Barm Brack, Irish Christmas Cake, Dublin Coddle, Scones, Apple Cake.
1 GEOGRAPHIC SETTING AND ENVIRONMENT Ireland, or officially the Republic of Ireland, is an island nation in the North Atlantic Ocean. (The northernmost part of the island is Northern Ireland, which is part of the United Kingdom.) Almost 20 percent of the land is devoted to farming. Less than 10 percent of farmland is used to grow crops, and the majority is used as grazing land for livestock. 2 HISTORY AND FOOD The arrival of the Anglo-Normans in Ireland in 1169 affected both farming and diet in Ireland. (Anglo-Normans are the Normans who remained in England after the Norman Conquest. Led by William the Conqueror, the Normans came from the Normandy region of France in 1066.) Wheat, peas, and beans became staple foods and people began preparing more elaborate dishes. Food customs were also changing, as French and Italian cooking customs influenced the upper-class cuisine. The potato was introduced to Ireland by the late 1500s. Within 200 years it had replaced older staples, including oats and dairy products. The potato became the mainstay of the Irish diet. In the 1840s, the country's heavy reliance on potatoes led to the disaster known as the Irish Potato Famine. Most Irish farmers grew one particular variety of potato, which turned out to be highly sensitive to disease. A potato blight that had started in Belgium swept the country. It destroyed one-third of Ireland's potato crop in 1845 and triggered widespread famine. In the next two years, two-thirds of the crop was destroyed. More than one million people died as a result of the potato blight, and two million emigrated (moved away) to other countries. Even though they had suffered through the Irish Potato Famine (also called the Great Famine), Irish people continued to love potatoes. As soon as the spread of the disease stopped, the potato returned to its place as the staple food in the Irish diet. Farmers began to spray their crops with chemicals to protect them from disease. As of 2001, the Irish were consuming more potatoes than most other countries in the world. 3 FOODS OF THE IRISH Irish food is known for the quality and freshness of its ingredients. Most cooking is done without herbs or spices, except for salt and pepper. Foods are usually served without sauce or gravy. The staples of the Irish diet have traditionally been potatoes, grains (especially oats), and dairy products. Potatoes still appear at most Irish meals, with potato scones, similar to biscuits or muffins, a specialty in the north. The Irish have also been accomplished cheesemakers for centuries. Ireland makes about fifty types of homemade "farmhouse" cheeses, which are considered delicacies. Soups of all types, seafood, and meats also play important roles in the Irish diet. Irish soups are thick, hearty, and filling, with potatoes, seafood, and various meats being common ingredients.
Since their country is surrounded by water, the Irish enjoy many types of seafood, including salmon, scallops, lobster, mussels, and oysters. However, meat is eaten more frequently at Irish meals. The most common meats are beef, lamb, and pork. A typical Irish dinner consists of potatoes (cooked whole), cabbage, and meat. Irish stew has been recognized as the national dish for at least two centuries. A poem from the early 1800s praised Irish stew for satisfying the hunger of anyone who ate it: Then hurrah for an Irish Stew That will stick to your belly like glue. Bread is an important part of Irish culture. Fresh soda bread, a crusty brown bread made from whole-wheat flour and buttermilk, is a national dish of Ireland. Irish bakers don't stop with soda bread, however. They bake a wide variety of other hearty breads and cakes. The most common everyday beverage in Ireland is tea. Popular alcoholic beverages include whiskey, beer, and ale. Coffee mixed with whiskey and whipped cream is known throughout the world as "Irish coffee." Traditional Irish Stew - 4 potatoes, thinly sliced - 4 medium onions, thinly sliced - 6 carrots, sliced - 1 pound Canadian bacon, chopped - 3 pounds lamb chops, 1-inch thick, trimmed, and cut into small pieces - Salt and pepper to taste - 2½ cups water - 4 potatoes, halved - Fresh parsley, finely chopped - To make Irish stew, all the ingredients are assembled in layers in a large stew pot. - Begin with layers of sliced potatoes, onions, and carrots. - Top with a layer of Canadian bacon and lamb. - Sprinkle liberally with salt and pepper. - Repeat these steps until all the ingredients are used. - Add enough water to just cover the ingredients. - Arrange the halved potatoes on top of the stew, but not in contact with the water, so they can steam as the rest is cooking. - Simmer over a very low heat for about 2 hours. - Sprinkle liberally with the chopped parsley and serve in soup bowls. Makes 4 to 6 servings. Irish Soda Bread - 4 cups flour - 1 teaspoon baking soda - 1 teaspoon salt - ¾ cup raisins - 2 Tablespoons caraway seeds - 1 cup buttermilk - Preheat oven to 425°F. - Mix flour, baking soda, and salt in a bowl. Add raisins and caraway seeds. - Add buttermilk all at once and mix. - Knead the dough on a lightly floured board. (To knead, press the dough flat, fold it in half, turn the dough, and repeat.) Form into a round loaf on a well-greased baking sheet. - With a knife, carefully mark an X across the top of the loaf. Lay a piece of foil over the loaf. Bake for 5 minutes. - Lower heat to 250°F and bake 30 minutes more. Remove foil and bake another 10 minutes, until the loaf is slightly browned. - Cut into wedges and serve with butter. Serves 10 to 12. Corned Beef with Cabbage - 4 pounds corned brisket of beef - 3 large carrots, cut into large chunks - 6 to 8 small onions - 1 teaspoon dry mustard - ¼ teaspoon thyme - ¼ teaspoon parsley - 1 head of cabbage (remove two layers of outer leaves) - Salt and pepper - Boiled potatoes as accompaniment - Place brisket in a large pot. Top with carrots, onions, mustard, thyme, and parsley. - Cover with cold water, and heat until the water just begins to boil. - Cover the pot with the lid, lower the heat, and simmer the mixture for 2 hours. - Using a large knife, cut the cabbage into quarters, and add the cabbage wedges to the pot. - Cook for another 1 to 2 hours or until the meat and vegetables are soft and tender. - Remove the vegetables to a platter or bowl, cover with foil, and keep them warm. 
- Remove the brisket, place it on a cutting board, and slice it. - Serve the corned beef slices on a platter, surrounded by the vegetables. - Ladle a little of the cooking liquid over the meat and vegetables. Serves 12 to 16. This is one of the most widely eaten potato dishes in Ireland. - 6 to 8 baking potatoes, unpeeled - 1 bunch scallions - 1½ cups milk - 4 to 8 Tablespoons butter (to taste) - Salt and pepper - Scrub potatoes (do not peel), place them in a pot, and cover them with water. - Heat the water to boiling, and cook the potatoes until they can be pierced with a fork (about 25 minutes). - Finely chop the scallions (use both the white bulbs and the green stems) and put them in a small saucepan. - Cover the scallions with the milk and bring slowly just to a boil. - Simmer for about 3 to 4 minutes, stirring constantly with a wooden spoon. Turn off the heat and let the mixture stand. - Peel and mash the hot boiled potatoes in a saucepan. Add the milk and scallion mixture and beat well. - Beat in the butter. Season to taste with salt and pepper. - Serve in 1 large or 4 individual bowls with a pat of butter melting in the center of each serving. May be reheated. Serves 4 to 6. 4 FOOD FOR RELIGIOUS AND HOLIDAY CELEBRATIONS The most festive holiday meal of the year is Christmas dinner, followed by Easter Sunday dinner. During the 40 days of Lent, Irish Catholics choose certain foods to give up. At one time, all animal products, including milk, butter, and eggs, were not to be consumed during Lent. The poorer Catholics of Ireland were often left to eat only oatcakes for the 40-day period. On Good Friday, the Friday before Easter Sunday, the Irish eat hot cross buns, a light, bread-like pastry topped with a frosting cross that holds spiritual meaning. Another day on the Catholic calendar on which Irish Catholics do not eat meat is All Saints' Day (November 1). Each county has its own special meatless dishes for this occasion. Popular dishes include oatcakes, pancakes, potato pudding, apple cake, and blackberry pies. For Christmas, people throughout Ireland eat spiced beef, and a fancy Christmas cake full of dried and candied fruits for dessert. All Saints' Day Dinner - Nettle soup - Poached plaice fillets - Soda bread - Barm Brack - Carrot pudding Christmas Dinner - Kidney soup - Christmas goose (roasted) with chestnut stuffing and port sauce - Garden peas with fresh mint - Potato oat cakes - Christmas cake - Mince pies Colcannon This potato and cabbage dish is traditionally served on Halloween with a ring or lucky charm hidden in the center. - 1 pound kale (or green leafy cabbage) - 1 pound potatoes - 6 scallions (or small bunch of chives) - ⅔ cup milk (or half-and-half) - Salt and freshly ground black pepper - 4 to 8 Tablespoons butter, melted - Remove the tough stalk from the kale or cabbage and shred the leaves finely. - Put about 1 inch of water in a saucepan large enough to hold the kale, and add a teaspoon of salt. - Heat the salted water until it boils, and add the kale. Cook, covered, for 10 to 20 minutes until the kale is very tender. Drain well. - Scrub the potatoes and place them in a saucepan, unpeeled. Add water to cover. - Heat the water to boiling, and cook the potatoes until tender (about 25 minutes). - Drain, peel, and return to the pan over low heat to evaporate any moisture (this will take just a minute or so). - Mash the potatoes while warm until they are smooth. - Chop scallions and simmer in the milk or cream for about 5 minutes.
- Gradually add this liquid to the potatoes, beating well to give a soft, fluffy texture. - Beat in the kale or cabbage along with the salt and pepper. - Heat thoroughly over low heat and serve in bowls. Make an indentation in the center and pour in some melted butter. Barm Brack is the traditional cake bread eaten at Halloween. - 6 cups flour - ½ teaspoon allspice - 1 teaspoon salt - 1 envelope active dry yeast - 4 Tablespoons sugar - 1¼ cups warm milk - ⅔ cup warm water - 4 Tablespoons butter, softened - 4 Tablespoons currants - 5 Tablespoons orange or lemon peel, chopped - Milk or syrup, to glaze - Powdered sugar, to decorate - The night before baking, make a cup of tea, and put the currants and chopped peel into it to soak overnight. - Mix the flour, allspice, and salt together. Stir in the yeast and sugar. - Make a well in the center of the flour mixture, and pour in the milk and water, and mix into a dough. - Move dough to a floured board and knead for 5 or 6 minutes, adding flour as necessary, until smooth and no longer sticky. (To knead, flatten the dough slightly, fold it over, flatten again, turn.) - Place dough in a clean bowl, cover with plastic wrap, and leave in a warm place for 1 hour to rise (expand) to about double in size. - Turn the dough back out onto the floured board, and add the butter, currants, and chopped peel and knead into the dough. - Return the dough to the bowl and cover again with plastic wrap. Leave to rise for another 30 minutes. - Grease a 9-inch round cake pan. Fit the dough into pan, cover with plastic wrap, and leave until the dough rises to the edge of the tin (about 30 minutes). - Preheat oven to 400°F. - Brush the surface of the dough with milk and bake for 15 minutes. - Cover loosely with foil; reduce the heat to 350°F and bake for 45 minutes more. - Sprinkle with powdered sugar. Irish Christmas Cake The cake tastes best when baked 1–3 weeks ahead of time. This traditional cake is served at holiday festivities throughout December. It is traditionally decorated with marzipan (almond paste), white icing, and holly sprigs. - 2¼ cups dried currants - 2 cups golden raisins - 1 cup dark raisins - ¼ cup candied cherries - ¼ cup candied fruit peel - ⅔ cup almonds, chopped - 1 lemon (juice and grated rind of its peel) - 1½ teaspoons allspice - ½ teaspoon nutmeg, ground - 1 cup Irish whiskey (used in ½-cup amounts; may substitute ½-cup strong tea) - 2 sticks butter, room temperature - 1 cup firmly-packed light brown sugar - 5 eggs - 2 cups flour - Marzipan (almond paste) - White icing (purchased) - Holly sprigs (optional decoration) - The day before baking: Combine all the fruit, peel, rind and juice, spices, and nuts in a large bowl with ½ cup of the whiskey (or tea) and let soak overnight. - The day of baking: Preheat oven to 275°F and grease a 9-inch round cake pan, lining the bottom with cooking parchment paper. - In a large bowl, cream the butter and sugar together until light and fluffy. - Beat the eggs in one at a time, adding flour with each egg. - Mix in the remaining flour and soaked fruit. - Pour the mixture into the cake pan and bake until it is firm to the touch and a toothpick inserted into the center comes out clean, about 2 hours. - Let the cake cool in the pan for 30 minutes. If substituting tea for whiskey, skip this step: Prick the top in several places and pour the remaining ½ cup whiskey over the top. 
- Wrap in plastic wrap, then foil, and store in a cool, dark place for several weeks to allow the cake to mature (fully absorb the flavors). The cake can be unwrapped occasionally and more whiskey added, if desired. 5 MEALTIME CUSTOMS The Irish value hospitality, and generous portions of food are common at home and in restaurants. A large breakfast was traditionally eaten in rural Ireland. Common breakfast foods included soda bread, pancakes, porridge, eggs, and various meat products. A full old-fashioned country breakfast might include fresh fruit juice, porridge, a "mixed grill" of breakfast meats and black pudding, scones, and soda bread with butter and preserves, tea, and coffee with hot milk. Dinner, the main meal of the day, used to be eaten at lunchtime. A typical dish was "Dublin coddle," a bacon, sausage, potato, and onion soup. Today, however, many Irish people eat lighter meals in the morning and at midday. They have their main meal later in the day, when they come home from work or school. Lunch is often a bowl of hot soup that is served with freshly baked soda bread. However, many pubs (bars) still serve the traditional large midday dinner. "Supper" in Ireland means a late-night snack. A typical supper is a slice of bread with butter and a glass of milk. Dublin Coddle - 1 pound bacon, sliced - 2 pounds pork sausage links - 2 onions, peeled and sliced - 2 cloves garlic, whole - 4 large potatoes, thickly sliced - 2 carrots, thickly sliced - 1 bouquet garni (bay leaf, tarragon, whole cloves, whole peppercorns; see Procedure step 8) - Black pepper - Apple cider (about 4 cups) - Chopped parsley for garnish - Separate bacon into slices and place them side by side in a large frying pan. (The bacon may be cooked in batches.) Fry over low heat, turning once, until crisp. Drain bacon grease from pan before cooking another batch. - Drain the pan and wipe most of the bacon grease out with a paper towel. - Place sausages in the pan to brown (again, the sausage may be browned in batches). - Place bacon and sausages in a large pot. - Drain frying pan again, wipe it with a paper towel, and add the sliced onions and garlic cloves, cooking them over low heat until the onions are softened. - Add onions and garlic to the bacon and sausage in the pot. - Add the thick slices of potato and carrot. - Make a bouquet garni: In a 3-inch square of cheesecloth, place 1 bay leaf, ½ teaspoon tarragon, 2 whole cloves, and 2 whole peppercorns. Tie with twine, and place in pot. - Cover everything with apple cider (or apple juice). - Cover, and simmer 1½ hours over medium-low heat. The soup should not boil. - Serve, garnished with a sprinkling of parsley and black pepper. Serves 8 to 10. The Irish are known for their rich, dark beer, called stout. The most famous and widely known brand is called Guinness. Tea is another popular beverage. It is served with scones, probably the most popular snack in Ireland. "Fish and chips," or battered and fried fish served with French fries, is also very popular. Scones - 8 cups flour - Pinch of salt - ⅓ cup sugar - 4 teaspoons baking powder - 1½ sticks butter (¾ cup) - 3 eggs - 1¾ cups milk - Preheat oven to 475°F. - Combine flour, salt, sugar, and baking powder in a medium mixing bowl. - Cut butter into small cubes and add it to the flour mixture. With clean fingertips, rub the butter into the flour. - In a separate bowl, beat the eggs and milk together. Add to the flour-butter mixture to make a soft dough. - Place mixture on a floured board. Knead lightly for 3 or 4 minutes.
- Roll out with a rolling pin to a thickness of about one inch. - Cut dough into 3-inch circles, using a cookie or biscuit cutter. - Place dough circles onto a lightly greased cookie sheet. Bake 10 to 12 minutes until golden brown. - Cool on a wire rack. - Serve, split in half, with berry jam. Makes 18 to 20 scones. Apple Cake - 1 pound of apples (about 3 or 4 medium) - Juice and grated rind of 1 lemon - ¾ cup butter (1½ sticks) - 1 cup sugar - 3 eggs, beaten - 2 cups self-rising flour - ½ teaspoon baking powder - ½ teaspoon cinnamon, ground - 5 Tablespoons raisins - 2 Tablespoons hazelnuts, chopped - 4 Tablespoons powdered sugar - Preheat oven to 350°F and grease a 9-inch round cake pan. - Peel, core, and slice the cooking apples and place them in a bowl. - Sprinkle apples with the lemon juice and set aside. - In another bowl, beat together the butter, lemon rind, and all but 1 Tablespoon of the sugar until light and fluffy. - Gradually beat in the eggs. - Add the flour and baking powder to the butter mixture and mix well. - Spoon half of the mixture into the prepared cake tin. Arrange the apple slices on top. - Mix the remaining Tablespoon of sugar and the cinnamon together in a small bowl. Sprinkle evenly over the apples. - Scatter the raisins and hazelnuts on top. - Smooth the remaining cake mixture over the raisins and hazelnuts. - Bake for 1 hour. - Cool in the tin for 15 minutes. Remove, transfer to a serving platter, and sprinkle with powdered sugar. 6 POLITICS, ECONOMICS, AND NUTRITION Modern Ireland has few problems related to availability of food. In the early part of 2001, Irish cattle and sheep farmers, like other farmers in Europe, were fighting against an outbreak of hoof-and-mouth disease, a viral disease that is fatal to hoofed animals. By summer, the outbreak had been brought under control. Irish citizens generally receive adequate nutrition in their diets, and Irish children are considered healthy by international health care agencies. 7 FURTHER STUDY Albyn, Carole Lisa, and Lois Webb. The Multicultural Cookbook for Students. Phoenix: Oryx Press, 1993. Allen, Darina. The Complete Book of Irish Country Cooking: Traditional and Wholesome Recipes from Ireland. New York: Penguin, 1995. Connery, Clare. In An Irish Country Kitchen. New York: Simon and Schuster, 1992. Drennan, Matthew. Irish: The Taste of Ireland in Traditional Home Cooking. London: Lorenz Books, 1999. Halvorsen, Francine. Eating Around the World in Your Neighborhood. New York: John Wiley & Sons, 1998. Johnson, Margaret M. The Irish Heritage Cookbook. San Francisco: Chronicle Books, 1999. GoIreland.com. [Online] Available http://www.goireland.com/ireland/soda_bread.htm (accessed August 7, 2001). Ireland, The Food Island. [Online] Available http://www.foodisland.com (accessed July 9, 2001). "Ireland." Junior Worldmark Encyclopedia of Foods and Recipes of the World. Encyclopedia.com. http://www.encyclopedia.com/food/encyclopedias-almanacs-transcripts-and-maps/ireland-0 IRELAND. Ireland's history has been shaped by the inescapable facts of geography.
A small island at the western edge of Europe, barely within the mainstream of Continental experience, it lay beyond the reach of the Roman Empire (with all that that entailed for the development of law and modes of administration) yet would later become one of the great depositories of Christian art, spirituality, and learning. The European context is crucial to an understanding of Ireland's past, but the critical geographical fact is the island's proximity to Britain. On a clear day, the Mull of Kintyre in southwest Scotland is visible from the Antrim coast in northeast Ireland. Gaelic civilization, moreover, extended like an arc along the western and northern coasts of Ireland into the Scottish Highlands. Scottish Lowlanders and the English referred to Scots Gaelic as the "Irish language." From the importation by Gaelic lords of Highland mercenary soldiers—the gallowglass and the redshanks—to the role of Scots settlers in the Ulster plantation and the Scots army in the North in the 1640s, a strong Scottish dimension runs through early modern Irish history, though ultimately Ireland's troubled relationship with its larger neighbor, England, would have the greater impact. THE FALL OF THE HOUSE OF KILDARE In 1450 Ireland was a lordship, and the king of England its lord. The English crown's claim to sovereignty over the whole island had never been vindicated in practice, however, and during the later Middle Ages English power and jurisdiction were in retreat. Effectively, the king's writ and the common law were confined to the Pale, the area of English settlement around Dublin, capital city and seat of royal authority. Beyond the Pale and the towns, the great Anglo-Norman magnates negotiated the shifting frontiers of Gaeldom through "march law," a bastardized amalgam of common and Irish brehon (native) laws and customs. Even the levers of royal authority began to slip from the king's grasp. The crown in Ireland was represented either by a lord lieutenant, a lord deputy, or, in the absence of one or the other, by lords justices. Between 1447 and 1460, Richard of York's (1411–1460) political standing conferred stature upon the lord lieutenancy and, equally important, kept it within the orbit of the court. Then, between the 1470s and 1520, successive earls of Kildare virtually monopolized the office, using it as a source of patronage to extend their local power base and network of alliances. The local autonomy enjoyed by the "Kildare ascendancy" has struck some historians of the old nationalist school as part of a wider pattern of incipient Anglo-Irish separatism. But it is surely anachronistic to attribute proto-nationalist ambitions to a political community, the descendants of the original Anglo-Norman settlers, that had no concept of an Irish "nation" in the modern sense. It did, however, have a strong sense of English identity, albeit "English by blood" rather than by birth. Nevertheless, from Parliament's declaration that Ireland was "corporate of itself" (1460) to its declaration of legislative independence in 1782, Anglo-Irish constitutional relations provides a major framework for Irish political history. Subordination of Ireland to England (and, after 1707, Great Britain) and Irish resistance to subordination, though rarely rising to outright separatist aspirations, runs like a leitmotiv through these centuries. 
The ascendancy of the earls of Kildare entailed a sometimes spectacular loss of royal control over Irish affairs, most vividly in 1487 when the Yorkist eighth earl, Garrett Mor, crowned the pretender, Lambert Simnel (c. 1475–1535), king of England in Christ Church Cathedral, Dublin. Kildare's survival in office, despite his treason, underlines the weakness of the English crown in the fifteenth century. From a position of greater strength and internal stability, however, Henry VIII would not countenance such overmighty subjects anywhere within his realm. Thus, when the ninth earl was summoned to London under the shadow of the executioner in 1534, his son, Lord Offaly, "Silken Thomas," led his followers in the Geraldine League into rebellion. The Geraldine revolt, which lasted until 1540, opened a new, blood-drenched chapter in Irish history. The advent of a new era was signaled by the first ever use of artillery—against the Kildare stronghold of Maynooth—by the ruthless suppression of the rebellion, and by the first stirrings of anti-Reformation Catholicism among the rebels. The fall of the house of Kildare also inaugurated a prolonged phase of direct rule from London. That practice became the sine qua non of England's Irish policy, and several illustrious names among England's governing elite occupied Dublin Castle, namely the earls of Essex (1599), Strafford (1633–1640), and Chesterfield (1745–1747). There were notable exceptions to the rule: the Irish-born Protestant first duke of Ormond served as lord lieutenant under both Charles I and Charles II, while the Irish-born old English Catholic, the earl of Tyrconnell, held the office under James II in the 1680s. But after the first decade of the eighteenth century (when the second duke, Ormond's grandson, served) occupation of Dublin Castle was reserved for Englishmen. Until the very end of that century, and the appointments of John Fitzgibbon as lord chancellor and Viscount Castlereagh as chief secretary, Englishmen monopolized all senior executive posts, including the lord lieutenancy, chief secretaryship, lord chancellery, and the archbishopric of Armagh. On one level, official Ireland, especially its established church, functioned merely as a patronage outpost for a British political system oiled by the disbursement of places, preferments, pensions, promotions, titles, and favors. On another level, control of the executive rested on British security considerations. ENGLAND'S DIFFICULTY, IRELAND'S OPPORTUNITY Security underpinned England's Irish policy. In essence, the concern was strategic. As Thomas Waring put it in the wake of the Cromwellian reconquest of 1649–1650, "humane reason and policie dictate's that the hous cannot bee safe so long as the back door is open." Ireland served as England's "back door" as early as 1497, when another Yorkist pretender, Perkin Warbeck, landed at Cornwall with a retinue of Irish supporters. Then, as Reformation and Counter-Reformation Europe split into warring camps, the vulnerability of Protestant England's western seaboard (and the dangers of Spain's sponsorship of Irish Catholic rebels) concentrated the Tudor mind. Spain (and the papacy) twice intervened in Ireland, landing troops at Smerwick, County Kerry (1580), and, in greater force, at Kinsale, County Cork (1601). Strategic necessity lent urgency to the Tudor reconquest of the sixteenth century and galvanized English determination to hold onto Ireland thereafter. 
Enemies changed, geography did not: French soldiers fought in Ireland in 1690 and 1798. England's dominance depended, at bottom, on coercive force. Beyond that, Whitehall and Westminster exercised an array of political, legislative, and administrative controls. These included the retention in English hands of key public offices and the imposition of restrictive laws limiting the autonomy of the Irish Parliament and regulating Irish trade. A few legislative landmarks plot the troubled course of Anglo-Irish relations. First, "Poynings's Law" (1494), aimed originally at too-powerful lord deputies of the Kildare type, evolved into a procedure whereby all Irish parliamentary bills were subject to amendment—amounting to a veto—by the English Privy Council. The repeal of Poynings's Law constitutes the so-called revolution of 1782. Second, the Irish Parliament's subordinate status, institutionalized under Poynings, received confirmation in the Declaratory Act of 1720, a forthright assertion of Westminster's supremacy in the Kingdom of Ireland. Finally, Westminster used its claim of jurisdiction to impose laws prohibiting the import of Irish cattle to England (1667) and the export of Irish wool (1699). Both laws long caused bitter resentment in Ireland, the preliminary controversy surrounding the latter provoking the classic defense of Ireland's historic right to legislative independence, William Molyneux's The Case of Ireland Being Bound by Acts of Parliament in England, Stated (London, 1698). The roots of England's perennial "Irish problem" lay in the failures of England's Irish policies. By 1450, although the territory of the Pale had contracted, it still boasted the most densely populated, intensively cultivated, and economically diverse region of the country. Yet Gaeldom had also demonstrated its military and cultural vitality. And, as Sir John Davies recognized in his Discovery of the True Causes Why Ireland Was Never Entirely Subdued (1612), the Irish problem would remain intractable for so long as the Gael remained outside—and indeed resistant to—the boon of common law, civility, and, by Davies's time, Protestantism or "true religion." "All the world knows their barbarism," Cromwell remarked of his Irish enemies. Only the adoption of English customs, Reformed religion, language, and law—in a word, anglicization—could save them from their wretched condition. The Gaelic Irish saw matters differently, and while the story of English-Irish conflict supplies the historian with a ready, dramatic, and compelling narrative structure, it is vital that historians not view the past solely in terms of that conflict. Early modern Ireland, viewed from the Atlantic shores of Donegal, looks rather different from the anglophone Ireland mapped and preserved in the Public Record Office. For the historian, the question of perspective is precisely about rescuing the Gaelic-speaking O'Donnell retainer and MacSweeny swordsman from the enormous condescension of the state papers. Gaelic politics, economy, and society are more difficult to reconstruct than Anglo-Ireland because they never generated the sorts of records—tax rolls, bureaucratic memoranda, even paintings—upon which historians usually rely. The Gaelic world has thus either remained hidden, or, as recently as 1988, been caricatured on the basis of the naive or hostile reportage of outsiders. 
Fortunately, the dearth of conventional sources has been circumvented somewhat by the mining of a rich, if tricky, lode of nontraditional evidence: Irish-language poetry. Excavations (and cataloguing) are still in the heroic phase, but already the findings of scholars working with these hitherto underused sources have altered and enhanced our understanding of, for example, the depth and range of Irish Jacobite sentiment in the eighteenth century. English late medieval society, including the Irish Pale, was organized around legally binding principles of mutual obligation and services based on land tenures. In contrast, in Gaelic society land ownership and inheritance, obligation, and political succession were determined by kinship. A chief's power rested on his ability to enforce it, and under the system of "tanistry" his designated heir was as likely a brother or cousin as an eldest son. Kinship, alliances through marriage and fosterage and the receipt of tribute from lesser clans defined a great chief's status more than territory or even cattle—the staple of the Gaelic pastoral economy. Certain families, notably the O'Neills and O'Donnells in Ulster, the O'Connors in Connacht, and the MacCarthys and O'Briens in Munster, predominated. They inhabited a world of insistent, lowintensity warfare and comparative political instability. Exactions of tribute—in kind, or in military or labor services—lacked regulation, and by the early modern period were epitomized by the abuses of "coign and livery"—the billeting at free quarters by a chief of his dependants on his tenants. The crown and the Dublin administration were not prepared to leave the natives to their own ways for three reasons. First, the inevitable processes of intermarriage, cultural interaction, and linguistic borrowings (in both directions) of the Gaedhil (or Irish) and the Gaill (or foreigners)—which historians call gaelicization but which the English called degeneracy—could not be permitted to continue. Second, the English "common law mind" embraced legal uniformity and abhorred local particularism. Ireland, reported an early-sixteenth-century English observer, comprised a patchwork of over sixty "countries" ruled by captains, each of whom "maketh war and peace for himself, and holdeth by the sword, and hath imperial jurisdiction within his room, and obeyeth to no other person." Worse still, degenerate "captains of English noble family . . . folloeth the same Irish order." The gaelicized Anglo-Norman House of Desmond cast its shadow across the common law mind. Finally, particularistic march law and Gaelic custom rooted in local power bases challenged royal sovereignty as well as legal uniformity. CONQUEST AND "REFORM" Whereas conventional nationalist histories of sixteenth-century Ireland focused on reconquest, revisionist historians have recovered the Tudor commitment to reform, although conquest and, in Brendan Bradshaw's terminology, "the catastrophic dimension of Irish history" are now being reintroduced to a more complicated picture. The set pieces of reform are the Act of Kingly Title (1541), which upgraded Ireland from a lordship to a kingdom, and "surrender and regrant," under which Gaelic chieftains surrendered their titles to the crown and were regranted them in English law. Several leading figures were ennobled, for example "the O'Neill" now became Earl of Tyrone, and succession and inheritance were at least theoretically stabilized by the extension of primogeniture. 
In the longer run, however, the prospects for reform were dashed by the rise of confessional conflict. In Ireland, the Protestant Reformation assumed the character of an alien imposition. Decisively, the old English, as well as the native Irish, remained Catholic. Protestants were—and remained—a minority. When the Tudors completed the reconquest by the subjugation of Hugh O'Neill (1603), Gaelic Ireland had suffered military defeat but retained its cultural identity. Ethnic origin divided the Gael from his fellow Catholic old English almost as much as from the Protestant new English, yet shared adversity during the first decades of the seventeenth century conspired to forge a common Catholic identity. The defeat of O'Neill was followed by "the flight of the Earls" (1607), when O'Neill and others fled to Catholic Europe. The flight was interpreted as an act of rebellion; the fugitives' lands escheated to the crown and were redistributed to English and Scottish settlers in the plantation of Ulster. The last bastion of Gaelic civilization thereby became the beachhead of British Protestantism in Ireland. The Scottish communities, moreover, laid the seedbed for Presbyterianism. Stuart Ireland thus hosted four major ethno-religious groups: native Irish Catholics, old English Catholics, new English Protestants of the established church, and (before 1642, informally) Scots Presbyterians. Relations among these groups, already tense, strained to breaking point with the crisis of the Stuart monarchies in the late 1630s. Ireland, in fact, helped detonate the wars of the three kingdoms with the Ulster rebellion of 1641. Many Protestant planters were killed by insurgents, and lurid tales of massacre swept England, deepening the rage against popery and suspicion of the king, in whose defense the rebels claimed to act. Ireland, like England and Scotland, experienced the trauma of civil war in the 1640s. Alliances and allegiances shifted bewilderingly but, crucially, the old English were forced into military coalition with their Gaelic coreligionists. When Cromwell arrived in 1649 once more to subjugate the Irish and to avenge 1641, he made no ethnic distinctions among his papist enemies. The land confiscations begun in the Tudor era and continued by the Ulster plantation reached unprecedented levels with the Cromwellian settlement. In 1603 Catholics owned more than 60 percent of the land; by 1659 that figure had been reduced to about 9 percent. During the reign of Charles II, Catholic ownership climbed back to around 25 percent, thanks to successful pleas in the court of claims, but fell again to 14 percent by the end of the century as a result of the forfeitures that followed the second defeat of Catholic Ireland in 1691. This time there would be no court of claims, but rather a relentless chipping away, by the implementation of penal laws, at the remaining Catholic-owned land. By 1775 it stood at 5 percent. The political nation, like the landowning elite, of eighteenth-century Ireland was Protestant. But the Protestants were a minority, and if anything is inevitable in history, the Catholics could not be excluded from public life and political power forever. A rising Catholic mercantile class had already begun to articulate its grievances by the 1780s, but once more it was events outside the island that catalyzed Irish politics, including the "Catholic question." With the storming of the Bastille on 14 July 1789, a new epoch opened in European—and Irish—history.
See also Cromwell, Oliver; Dublin; England; Landholding; Law; Nationalism; Provincial Government; Revolutions, Age of. Brady, Ciaran, and Raymond Gillespie, eds. Natives and Newcomers: Essays on the Making of Irish Colonial Society, 1534–1641. Dublin, 1986. Connolly, Sean J. Law, Religion and Power: The Making of Protestant Ireland, 1660–1760. Oxford, 1992. Ellis, Steven G. Ireland in the Age of the Tudors, 1447–1603: English Expansion and the End of Gaelic Rule. London and New York, 1998. Moody, T. W., F. X. Martin, and F. J. Byrne, eds. A New History of Ireland III: Early Modern Ireland, 1534–1691. Oxford, 1976. "Ireland." Europe, 1450 to 1789: Encyclopedia of the Early Modern World. Encyclopedia.com. http://www.encyclopedia.com/history/encyclopedias-almanacs-transcripts-and-maps/ireland-0 Pagan and Christian Beliefs [For information regarding ancient Ireland, see Celts.] Although the early medieval Irish were nominally Christianized, there is little doubt that they retained many remnants of their former paganism, especially those with elements of magic. The writings of the Welsh historian Giraldus Cambrensis (ca. 1147–1220) point to this; his is the first known account of Irish manners and customs after the invasion of the country by the Anglo-Normans. His description, for example, of the Purgatory of St. Patrick in Lough Derg, County Donegal, suggests that the demonology of the Catholic Church had already fused with the animism of earlier Irish tradition. He states: "There is a lake in Ulster containing an island divided into two parts. In one of these stands a church of especial sanctity, and it is most agreeable and delightful, as well as beyond measure glorious for the visitations of angels and the multitude of the saints who visibly frequent it. The other part, being covered with rugged crags, is reported to be the resort of devils only, and to be almost always the theatre on which crowds of evil spirits visibly perform their rites. This part of the island contains nine pits, and should any one perchance venture to spend the night in one of them (which has been done, we know, at times, by some rash men), he is immediately seized by the malignant spirits, who so severely torture him during the whole night, inflicting on him such unutterable sufferings by fire and water, and other torments of various kinds, that when morning comes scarcely any spark of life is found left in his wretched body. It is said that any one who has once submitted to these torments as a penance imposed upon him, will not afterwards undergo the pains of hell, unless he commit some sin of a deeper dye. "This place is called by the natives the Purgatory of St. Patrick. For he, having to argue with a heathen race concerning the torments of hell, reserved for the reprobate, and the real nature and eternal duration of the future life, in order to impress on the rude minds of the unbelievers a mysterious faith in doctrines so new, so strange, so opposed to their prejudices, procured by the efficacy of his prayers an exemplification of both states even on earth, as a salutary lesson to the stubborn minds of the people." The ancient Irish believed in the possibility of the transformation of human beings into animals.
Giraldus, in another narrative of facts purporting to have come under his personal notice, shows that this belief had lost none of its significance with the Irish of the latter half of the twelfth century. The case is also interesting as being one of the first recorded examples of lycanthropy in the British Isles: "About three years before the arrival of Earl John in Ireland, it chanced that a priest, who was journeying from Ulster towards Meath, was benighted in a certain wood on the borders of Meath. While, in company with only a young lad, he was watching by a fire which he had kindled under the branches of a spreading tree, lo! a wolf came up to them, and immediately addressed them to this effect: 'Rest secure, and be not afraid, for there is no reason you should fear, where no fear is!' The travellers being struck with astonishment and alarm, the wolf added some orthodox words referring to God. The priest then implored him, and adjured him by Almighty God and faith in the Trinity, not to hurt them, but to inform them what creature it was in the shape of a beast uttered human words. The wolf, after giving catholic replies to all questions, added at last: 'There are two of us, a man and a woman, natives of Ossory, who, through the curse of Natalis, saint and abbot, are compelled every seven years to put off the human form, and depart from the dwellings of men. Quitting entirely the human form, we assume that of wolves. At the end of the seven years, if they chance to survive, two others being substituted in their places, they return to their country and their former shape. And now, she who is my partner in this visitation lies dangerously sick not far from hence, and, as she is at the point of death, I beseech you, inspired by divine charity, to give her the consolations of your priestly office.' "At this wood the priest followed the wolf trembling, as he led the way to a tree at no great distance, in the hollow of which he beheld a she-wolf, who under that shape was pouring forth human sighs and groans. On seeing the priest, having saluted him with human courtesy, she gave thanks to God, who in this extremity had vouchsafed to visit her with such consolation. She then received from the priest all the rites of the church duly performed, as far as the last communion. This also she importunately demanded, earnestly supplicating him to complete his good offices by giving her the viaticum. The priest stoutly asserting that he was not provided with it, the he-wolf, who had withdrawn to a short distance, came back and pointed out a small missal-book, containing some consecrated wafers, which the priest carried on his journey, suspended from his neck, under his garment, after the fashion of the country. He then intreated him not to deny them the gift of God, and the aid destined for them by Divine Providence; and, to remove all doubt, using his claw for a hand, he tore off the skin of the she-wolf, from the head down to the navel, folding it back. Thus she immediately presented the form of an old woman. The priest, seeing this, and compelled by his fear more than his reason, gave the communion; the recipient having earnestly implored it, and devoutly partaking of it. Immediately afterwards the he-wolf rolled back the skin and fitted it to its original form. "These rites having been duly, rather than rightly performed, the he-wolf gave them his company during the whole night at their little fire, behaving more like a man than a beast. 
When morning came, he led them out of the wood, and, leaving the priest to pursue his journey, pointed out to him the direct road for a long distance. At his departure, he also gave him many thanks for the benefit he had conferred, promising him still greater returns of gratitude, if the Lord should call him back from his present exile, two parts of which he had already completed. "In our own time we have seen persons who, by magical arts, turned any substance about them into fat pigs, as they appeared (but they were always red), and sold them in the markets. However, they disappeared as soon as they crossed any water, returning to their real nature; and with whatever care they were kept, their assumed form did not last beyond three days. It was also a frequent complaint, from old times as well as in the present, that certain hags in Wales, as well as in Ireland and Scotland, changed themselves into the shape of hares, that, sucking teats under this counterfeit form, they might stealthily rob other people's milk." Witchcraft in Ireland In Anglo-Norman times, sorcery, malevolent magic, was apparently widely practiced, but records are scarce. It is only by fugitive passages in the works of English writers who constantly comment on the superstitious nature and practices of the Irish that any information concerning the occult history of the country emerges. The great scandal of the accused witch Dame Alice Kyteler did shake the entire Anglo-Norman colony during several successive years in the first half of the fourteenth century. The party of the Bishop of Ossory, the relentless opponent of Dame Alice, boasted that by her prosecution they had rid Ireland of a nest of sorcerers; and yet there is reason to believe that Ireland could have furnished other similar instances of black magic had the actors in them been of royal status—that is, of sufficient importance in the eyes of chroniclers. In this connection St. John D. Seymour's Irish Witchcraft and Demonology (1913) is of striking interest. The author seems to take it for granted that witchcraft in Ireland is purely an alien system, imported into the island by the Anglo-Normans and Scottish immigrants to the north. This is a possibility where the districts of the Pale and of Ulster are concerned, even if it cannot be applied to the Celtic districts of Ireland. Early Irish works contain numerous references to sorcery, and practices are chronicled in them that bear a close resemblance to those of the shamans and medicine men of tribes around the world. The ancient Irish cycles frequently allude to animal transformation, one of the most common feats of the witch, and in Hibernian legend most heroes have a considerable working magic available to them. Wonder-working druids also abound. Seymour claimed that, "In Celtic Ireland dealings with the unseen were not regarded with such abhorrence, and indeed had the sanction of custom and antiquity." He added that "…the Celtic element had its own superstitious beliefs, but these never developed in this direction," by which he meant witchcraft. He lacked support for this observation. An absence of records of such a system is no proof that one never existed, and it is possible that a thorough examination of the subject would prove that a veritable system of witchcraft obtained in Celtic Ireland as elsewhere, although it may not have been of "Celtic" origin.
Seymour's book nonetheless is most informative on those Anglo-Norman and Scottish portions of Ireland where the belief in sorcery followed the lines of those in vogue in the mother-countries of the immigrant populations. He sketched the famous Kyteler case; touched on the circumstances connected with the Earl of Desmond; and, he noted the case of the Irish prophetess who insisted upon warning the ill-fated James I of Scotland on the night of his assassination at Perth. It is not stated by the ancient chronicler whom Seymour quotes where in Ireland the witch in question came from—and undoubtedly she was a witch because she possessed a familiar spirit, "Huthart," whom she alleged warned her of the coming catastrophe. This spirit is the Teutonic Hudekin or Hildekin, the wearer of the hood, sometimes also alluded to as Heckdekin, well known throughout Germany and Flanders as a species of house-spirit or brownie. Trithemius alludes to this spirit as a "spirit known to the Saxons who attached himself to the Bishop of Hildesheim" and it is cited here and there in occult history. From this circumstance it might be inferred that the witch in question came from some part of Ireland that had been settled by Teutonic immigrants, probably Ulster. Seymour continued his survey with a review of the witchcraft trials of the sixteenth century; the burning of Adam Dubh; the Leinster trial of O'Toole and College Green in 1327 for heresy; and, the important passing of the statute against witchcraft in Ireland in 1586. He noted the enchantments of the Earl of Desmond, who demonstrated to his young and beautiful wife the possibilities of animal transformation by changing himself into a bird, a hag, a vulture, and a gigantic serpent. One full chapter was devoted to Florence Newton, the witch of Youghal, who was one of the most absorbing in the history of witchcraft. Ghostly doings and apparitions, fairy possession, and dealings with fairies are also included in the volume, and Seymour did not confine himself to Ireland. He followed one of his countrywomen to the United States, where he demonstrated her influence on the "supernatural" speculations of Congregationalist minister Cotton Mather. Seymour completed his survey with seventeenth-century witchcraft notices from Antrim and Island Magee and the affairs of sorcery in Ireland from the year 1807 to the early twentieth century. The last notice is that of a trial for murder in 1911, when a woman was tried for killing another (an old-age pensioner) in a fit of insanity. A witness deposed that he met the accused on the road on the morning of the crime holding a statue or figure in her hand and repeating three times, "I have the old witch killed. I got power from the Blessed Virgin to kill her." It appears that the witch in question had threatened to plague the woman with rats and mice. A single rodent had evidently entered her home and was followed by the bright vision of a lady who told the accused that she was in danger, and further informed her that if she received the senior citizen's pension book without taking off her clothes and cleaning them and putting out her bed and cleaning up the house, she would "receive dirt for ever and rats and mice." During the late nineteenth and early twentieth centuries, Celtic mysticism and legends of ghosts and fairies received a new infusion from Hindu mysticism through the Dublin lodge of the Theosophical Society and the writings of poets William Butler Yeats and "AE" (pseudonym of George W. Russell ). 
Through the society, Russell was profoundly influenced by Hindu scriptures such as the Bhagavad-Gita and came to understand that mysticism should be interfused with one's everyday social responsibilities. Russell wrote mystical poems and painted pictures of nature spirits. Yeats became a noted member of the Hermetic Order of the Golden Dawn, a ritual magic society. Its teachings had a primary influence on the symbolism of his poems and on his own mystical vision. He was also impressed by Hindu mystical teachings, and collaborated with Shri Purohit Swami in the translation of Hindu religious works. After the death of Yeats and Russell, occultism did not make much headway in Irish life and literature. The occult and witchcraft boom of the 1950s and 1960s was largely ignored in Ireland. Janet and Stewart Farrar, both neo-pagan witches trained by Alexander Sanders, did take up residence in the Republic of Ireland. Stewart Farrar has written a number of books on witchcraft, including the early neo-pagan classic What Witches Do: The Modern Coven Revealed (1971). The Fellowship of Isis, headquartered at Huntingdon Castle, Clonegal, Enniscorthy, has become an international association of neo-pagans and witches. It is devoted to the deity in the form of the goddess, and publishes material concerning matriarchal religion and mysticism. Irish writer Desmond Leslie was coauthor with George Adamski of the influential book Flying Saucers Have Landed (1953), an important early work introducing the topic to the English-speaking public. The book was eventually translated into 16 languages. Psychical Research & Parapsychology Although Ireland is traditionally a land of ghosts, fairies, banshees, and haunted castles, there have been few systematic attempts to conduct psychical research there. The exceptions have been some interest in dowsing (water-divining) and the work of medium Kathleen Goligher. In 1914 the then 16-year-old Goligher came to the world's attention through the work of Dr. William Crawford in Belfast. Goligher was from a family of physical mediums, but was considered the best of them. The phenomena demonstrated consisted of raps that reportedly shook the room, and levitation of a ten-and-a-half-pound table, often for as long as five minutes. Crawford photographed the manifestations that supported the levitations: ectoplasmic structures that resembled rods. Harry Houdini saw the pictures that Crawford had intended to use in his book. He remained completely skeptical and decided that Crawford was insane. After Crawford's suicide in 1920, another photograph, showing plasma coming out of Goligher's body, was thought to be genuine. In 1922, after 20 sittings with her, Dr. E. E. Fournier d'Albe claimed she was a fraud. Following a ten-year period of retirement, it was reported in 1933 that Goligher produced cloth-like ectoplasm. Researchers did not investigate that claim, so no verification could be made. What remains of prime historical interest is that Crawford introduced technology in an attempt to verify the phenomena. Currently, the Belfast Spiritual Fellowship, a group subscribing to Spiritualist beliefs, can be contacted at 44 Barnsmore Drive, Belfast, Northern Ireland BT13 3FF. AE [George W. Russell]. The Candle of Vision. London: Macmillan, 1918. Reprint, New Hyde Park, NY: University Books, 1965. Berger, Arthur S., and Joyce Berger. The Encyclopedia of Parapsychology and Psychical Research. New York: Paragon House, 1991. Curtin, Jeremiah.
Tales of the Fairies and of the Ghost World, Collected from Oral Tradition in Southwest Munster. London: D. Nutt, 1895. Reprint, Dublin: Talbot Press, 1974. Dunne, John J. Haunted Ireland: Her Romantic and Mysterious Ghosts. Belfast: Appletree Press, 1977. Farrar, Stewart. What Witches Do: The Modern Coven Revealed. New York: Coward, McCann & Geoghegan; London: Peter Davies, 1971. Giraldus Cambrensis. The Historical Works of Giraldus Cambrensis, Containing the Topography of Ireland, and The History of the Conquest of Ireland. Translated by R. C. Hoare. London: Bohn's Antiquarian Library, 1847. Gregory, Lady. Visions and Beliefs in the West of Ireland. 2 vols. New York: George Putnam's Sons, 1920. Reprint, U.K.: Colin Smythe, 1970. Harper, George Mills. Yeats's Golden Dawn. London: Macmillan, 1974. McAnally, D. R., Jr. Irish Wonders: The Ghosts, Giants, Pookas, Demons, Leprechawns, Banshees, Fairies, Witches, Widows, Old Maids and Other Marvels of the Emerald Isle. Boston: Houghton Mifflin, 1888. Reprint, Detroit: Grand River Books, 1971. O'Donnell, Elliot. The Banshee. London: Sands, 1920. Seymour, St. John D. and Harry L. Neligan. True Irish Ghost Stories. London: Oxford University Press, 1915. Reprint, New York: Causeway Books, 1974. Spiritualist Webring. Available at: http://home.vicnet.net.au/~johnf/welcome.htm. Washington, Peter. Madame Blavatsky's Baboon, A History of the Mystics, Mediums, and Misfits Who Brought Spiritualism to America. New York: Schocken Books, 1993. White, Carolyn. A History of Irish Fairies. Cork, Ireland: Mercier Press, 1976. Yeats, W. B., ed. Fairy and Folk Tales of the Irish Peasantry. London: Walter Scott Publishing, 1888. Reprint, New York: Grosset & Dunlap, 1957. "Ireland." Encyclopedia of Occultism and Parapsychology. Encyclopedia.com. http://www.encyclopedia.com/science/encyclopedias-almanacs-transcripts-and-maps/ireland The most striking feature of the family in Ireland during the last decades of the twentieth century is the rapid rate at which it has changed. From around the late 1960s the Irish family, in response to a national program of economic development, changed from a traditional rural form typical of economies based on agriculture to a postmodern form typical of postindustrial societies. Although the changes that occurred are common to most Western European societies, the rate of change in Ireland was exceptional. In less than one generation, the Irish family was transformed. Since the time of the Great Famine in 1847, the population of Ireland steadily decreased until the time of economic expansion in the 1960s. The principal causes of this decline were high emigration and low marriage rates due to a stagnant economy and large-scale unemployment. Ireland did not experience the demographic transition typical of most Western European countries in the post-World War II period. It was not until much later that Ireland manifested the characteristics of this transition, giving rise in the 1970s to a baby boom. The effects of this baby boom have been a major influence on Irish families since then, with Ireland having the youngest population in the European Union. With an upsurge in the economy in the 1960s, birth rates increased.
By 1971 the birth rate had reached a high of 22.7 per 1000 of population, giving a total period fertility rate of four, which is almost twice the replacement level. Since then, birth rates have declined, and by the 1990s they were below replacement level. By 2000 the birth rate had fallen to 14.3 and the total period fertility rate to 1.89 (Vital Statistics 2001). However, due to an increase in net immigration, largely because of the return of Irish workers and their families to take up employment in Ireland's new booming economy of the 1990s, the population continued to increase. These changes were also accompanied by changes in marriage rates, age at time of marriage, age at the time of first maternity, family size, the number of out-of-wedlock births, marital separation, and cohabitation. By the end of the twentieth century Ireland had caught up with the demographic trends in most Western European countries and, apart from some differences, the overall pattern is much the same. The biggest difference is that while most of Europe experienced these changes over a period of two generations, Ireland went through them in one. Change over time is also evident in the internal structure and dynamics of the family. This is seen when comparing the findings of two classical anthropological studies of the rural Irish family. The first of these studies was carried out by Conrad Arensberg and Solon Kimball (1940) in the 1930s. This study showed that there was a single family type in rural Ireland that was characterized as having a dominant patriarchal authority system with a rigidly defined division of labor based on gender. In contrast, the second study carried out by Damian Hannan and Louise Katsiaouni (1977) in the 1970s, when the process of change had begun, found a wide variety among farm families, including the socialization experiences of spouses and family interaction patterns. They also found that families were more democratic in structure and that there was a move towards a division of labor based on competence rather than gender. The authors concluded that the family was going through a process of change from a traditional to a modern form, and they linked these changes to the changes taking place in the economic, social, and cultural environments in Ireland at the time. Changes in the family are also associated with the decline in the influence of the Catholic Church on Irish family life, especially in the area of sexual morality. The traditional family in Ireland has long been characterized as highly conservative, reflecting the dominant value system of the Catholic Church. Although religious practice continues to be high, evidence shows that the influence of Catholic teaching on family life has greatly diminished. This is seen, for example, with the widespread use of contraception and the extent of sexual activity outside marriage. These behavioral changes were also accompanied by the introduction of extensive new legislation on family matters in the 1980s and 1990s, including the passage of a referendum on divorce that led to the introduction of no-fault divorce. Much of this legislation challenged the traditional ideology of the Catholic Church that promoted the privatization of the family and was strongly opposed to state "interference" in family matters. Under Article 41 of the Irish Constitution, the state pledges to "guard with special care the institution of marriage, on which the family is founded." 
This position of marriage as the basis of the family was further reinforced in 1966 when the Supreme Court interpreted this Article to mean that the family as structurally defined is based on the institution of marriage. Although this Article in the Constitution reflects the ideology of Ireland in the 1930s and does not represent the reality of Irish family life today, marriage has remained relatively stable when compared to other European countries. Although the marriage rate has decreased from a high of 7.4 per 1,000 of population in the early 1970s, to a low of 4.3 by 1997, marital break-up has remained relatively low. For example, the divorce rate in the European Union for the year 1998 was 1.8 per 1,000 of population, while in Ireland it was 0.6 (Census 1996). However, divorce rates alone are misleading in Ireland because most couples who break up tend to separate rather than divorce. Trends seem to indicate a pattern of people using separation as an exit from marriage and divorce as an entry to a new relationship. In addition, divorce has only been available in Ireland since 1996. In the 1996 census 78,005 people reported themselves as separated, compared to fewer than 10,000 divorced. Nonetheless, even taking account of the numbers reported, marital break-up is comparatively low, and there has been a slight upward turn in the marriage rate, which in 2000 was 5.1 per 1,000 of population (Vital Statistics 2001). Attitude studies also show a strong commitment to marriage, with companionship more highly valued than personal freedom outside of marriage (MacGreil 1996). These attitudes are further reflected in a Eurobarometer study (1993) that showed that 97.1 percent of Irish respondents placed the family highest in a hierarchy of values. In addition, alternatives to marriage, such as cohabitation, are not a strong feature of Irish families, with only 2 percent of couples living in consensual unions. The typical family type is that of two parents and their children, but there has been an increase in single-parent families. In the year 2000 nonmarital births accounted for 31.8 percent of all births (Vital Statistics 2000). These births were to women in their twenties and older, not to teenagers. (Teenage births are not a significant proportion of non-marital births in Ireland.) The average age of non-married mothers is twenty-five. Nonmarital births reflect a diversity of family forms that includes cohabiting couples, reconstituted families following marital separation that have not been legally regulated, and single-parent families. It is not known to what extent these nonmarital births reflect a trend towards increased single-parent households or simply a prelude to marriage. In the year 2000 single-parent families represented 10 percent of all households, and the largest group of these consisted of widows and their children. The presence of children still continues to be an important part of Irish families, even though the birth rate is below replacement level. The traditional large family consisting of four or more children has been replaced by smaller families. In 1968, for example, 37.4 percent of births were to mothers with three or more children. By 1998 this had fallen to 12.7 percent (Health Statistics 1999, p. 28). The trend is for more women to have children, but to have fewer of them. Only 15 percent of couples live in households where there are no dependent children (Social Situation in the EU 2000). 
This strong positive attitude towards having children is supported by attitude surveys, which show that the Irish adult population places great value on having children for their own sake (MacGréil 1996). Although children are highly valued, they are still at risk of poverty; studies consistently show that single-parent families and families with three or more children are most at risk (Johnson 1999). In an attempt to combat this, successive governments in the 1990s introduced a range of measures, including significant increases in child benefit and employment incentives for unemployed parents. In an effort to protect children from poverty and abuse, the government launched a National Children's Strategy in 2000 and established an Ombudsman for Children.

Mothers and Employment

A relatively new feature of family life in Ireland is the increased participation of mothers in the active labor force outside the home. In 1987 only 32.7 percent of mothers with children under the age of fifteen years and at least one child under five were active in the paid labor force (Labour Force Survey 1987). Ten years later, in 1997, this had risen to 53.1 percent (Labour Force Survey 1997). Of particular significance is that the highest percentage of mothers in full-time employment are mothers of children under age two. In contrast, the highest percentage of mothers in part-time employment is of mothers with children over age ten. This trend poses difficulties in balancing work and family responsibilities. For example, a 1998 study (National Childcare Strategy 1999) found that, among mothers in full-time employment, 22 percent of those with children from infancy to four years of age, and 68 percent of those with children aged five to nine, did not use any form of paid childcare. The study assumed that the younger age group of children were cared for by their fathers and other nonpaid relatives, such as grandparents, but made no comment on who cares for the much larger group of children aged five to nine. These findings seem to support other studies that suggest that the provision of affordable quality childcare, and not attitudes towards the paid employment of mothers, is the crucial factor influencing mothers to take up paid employment. The increased participation of mothers in the paid labor force is not, however, matched by any significant increase in the amount of work undertaken by fathers in the home. The only major study on the division of household tasks in urban Irish families (Kiely 1995) found that, while more than 80 percent of mothers in the study thought that husbands should share housework equally, the reality was that mothers not only did most of the housework but also provided most of the care for the children. Fathers were generally inclined to participate in the more pleasurable aspects of childcare, such as playing with the children and going on outings with them, while the mothers did most of the less glamorous tasks like changing diapers and putting the children to bed. The study did, however, show that young, educated, middle-class fathers whose wives were also employed had higher rates of participation than other fathers, although this was still relatively low. A reflection of the position of the family in Irish life can be seen in the composition of households.
Although many factors influence household composition, the relatively low percentage of households consisting of one adult and no children (7 percent of all households), compared to households with children (66 percent of all households), shows the dominance of families composed of one or more adults with dependent children. Only 15 percent of households are composed of two adults without children. The remainder of households are composed of three or more adults without dependent children. When the number of persons living in family households is calculated as a percentage of people living in all private households, the dominance of family households is all the more striking, with almost 88 percent of the population living in such households (Census 1996). With rising house prices in the late 1990s, more young adults appear to remain in their parents' home for longer periods, including young mothers and their children. This probably accounts for the increase in households consisting of three or more generations. These households also include families where an adult child cares for a dependent parent. In both of these three-or-more-generation family types, the key caretakers are women in midlife, caring either for a parent or a grandchild. These are also the people who have the least attachment to the paid labor force. Family diversity is found not only in family composition, but also in its structure and functions. Thus, while studies show a movement from a traditional to a modern form of the family, this movement is in no way uniform. Some families, for example, are democratic in structure, while others are hierarchical. Also, some continue to fulfill a variety of functions, while others are more specialized. Again, some families place a higher value on relationships than on the family as an institution. These variations are consistent with the patterns found in most countries that have gone through a modernizing process and reflect a blend of traditional and modern value positions. Diversity, not uniformity, is the hallmark of modernity, and this is now also the hallmark of Irish families.

See also: War/Political Violence

Arensberg, C., and Kimball, S. (1940). Family and Community in Ireland. Cambridge, MA: Harvard University Press.
Central Statistics Office. (1996). Census 1996: Household Composition and Family Units. Dublin: Stationery Office.
Central Statistics Office. (1987). Labour Force Survey. Dublin: Stationery Office.
Cleary, A.; Nic Ghiolla Phádraig, M.; and Quin, S., eds. (2001). Understanding Children in Ireland. 2 vols. Dublin: Oak Tree Press.
Colgan McCarthy, I., ed. (1995). Irish Family Studies: Selected Papers. Dublin: Family Studies Centre, University College Dublin.
Commission on the Family. (1998). Strengthening Families for Life. Dublin: Stationery Office.
Department of Health and Children. (1999). Health Statistics 1999. Dublin: Stationery Office.
Department of Health and Children. (2000). Vital Statistics, 4th Quarter. Dublin: Stationery Office.
Eurostat. (2000). The Social Situation in the European Union: 2000. Brussels: European Commission.
Hannan, D., and Katsiaouni, L. (1977). Traditional Families? From Culturally Prescribed to Negotiated Roles in Farm Families. Dublin: Economic and Social Research Institute.
Johnson, H. (1999). "Poverty in Ireland." In Irish Social Policy in Context, ed. G. Kiely, A. O'Donnell, S. Kennedy, and S. Quin. Dublin: University College Dublin Press.
Kiely, G. (1995). "Fathers in Families." In Irish Family Studies: Selected Papers, ed. I. Colgan McCarthy. Dublin: Family Studies Centre, University College Dublin.
MacGréil, M. (1996). Prejudice in Ireland Revisited. Maynooth, Co. Kildare: Survey and Research Unit, St. Patrick's College.
McKeown, K.; Ferguson, H.; and Rooney, D. (1998). Changing Fathers? Fatherhood and Family Life in Modern Ireland. Cork: The Collins Press.
Malpas, N., and Lambert, P. (1993). Europeans and the Family (Eurobarometer 39). Brussels: Commission of the European Communities.
Partnership 2000 Expert Working Group on Childcare. (1999). National Childcare Strategy. Dublin: Stationery Office.

"Ireland." International Encyclopedia of Marriage and Family. Encyclopedia.com. http://www.encyclopedia.com/reference/encyclopedias-almanacs-transcripts-and-maps/ireland

The first settlers, the Mesolithic people, came to Ireland about 7000 b.c.e. and lived by hunting, fishing, and gathering. Neolithic colonists introduced domestic animals and crops about 4000 b.c.e. Cultivated cereals included emmer wheat (Triticum dicoccum), bread wheat (Triticum aestivum), and barley. Wild foods, such as hazelnuts and dried crabapples, were stored. Farming of crops and livestock continued during the Bronze Age (2000–700 b.c.e.) and the pre-Christian Iron Age (700 b.c.e.–500 c.e.), as is evident from faunal remains and from the range of quernstones for saddle, beehive, and disk querns used to process cereals for domestic use. In the historical period, literary data of various kinds supplement the archaeological record. Religious texts in Old Irish and Latin from the Early Christian period (500–1000 c.e.) describe monastic and penitential diets, and the Old Irish law tracts of the seventh and eighth centuries provide insight into food-production strategies, diets, and hospitality obligations. Prestige foods are correlated with social rank according to the general principle that everyone is to be fed according to his or her rank. Persons of higher social status enjoyed a greater variety and quality of food than those of lower rank. Milk and cereal products were the basis of the diet, and a distinction was sometimes made between winter and summer foods. The former apparently consisted of cereals and meat and the latter mainly of dairy produce.

Milk and Milk Products

Milk, "good when fresh, good when old, good when thick, good when thin," was considered the best food. Fresh milk was a high-status food of sufficient prestige to be served as a refreshment to guests in secular and monastic settings. Many milk products are mentioned in the early Irish legal tracts and in Aislinge Meic Con Glinne (Vision of Mac Con Glinne), an early medieval satirical text in the Irish language that is rich in food imagery and probably the most important source of information about food in medieval Ireland. Butter, curds, cheese, and whole-milk or skim-milk whey were common elements of the diet. Butter was often portrayed as a luxury food. It was part of the food rents a client was obliged to give to his or her lord and a festive food for monastic communities. Curds, formed naturally in milk or by using rennet, were a common summer food included in food rents and apparently a normal part of the monastic diet. Cheese, in soft and hard varieties, was of great importance in the early Irish and medieval diet.
Whey, the liquid product of the preparation of curds and cheese, was rather sour, but diluted with water it was prominent in the stricter monastic diets of the early Irish Church. Goat's milk whey, considered to have medicinal properties, was still in regular use in parts of Ireland in the early nineteenth century. These milk products held their ancient status in the diet to varying degrees until the threshold of the eighteenth century, when forces of commercialization and modernization altered levels of consumption and ultimately the dietary status of some milk products. Milk and butter remained basic foodstuffs, and their dietary and economic significance is reflected in the richness of the repertoire of beliefs, customs, and legends concerned with the protection of cows at the boundary festival of May, traditionally regarded as the commencement of summer and the milking seasons in Ireland, when the milch cows were transferred to the lush green pastures. Cheese making, which essentially died out in the eighteenth century, probably due to the substantial international butter and provisions trade from Ireland, made a significant comeback in the late twentieth century. Wheat products were consumed mainly as porridge and bread in early and medieval Ireland. Porridge was food for children especially, and a watery type figured prominently as penitential fare in monasteries. Wheaten bread was a high-status food. Climatic conditions favored barley and oat growing. Barley, used in ale production, was also a bread grain with monastic and penitential connotations. Oat, a low-status grain, was probably the chief cereal crop, most commonly used for oaten porridge and bread. Baking equipment mentioned in the early literature, iron griddles and bake stones, indicates flat bread production on an ovenless hearth. Thin, unleavened oaten bread, eaten mostly with butter, was universal in medieval Ireland and remained the everyday bread in parts of the north and west until the nineteenth century. Barley and rye breads or breads of mixed cereals were still eaten in parts of eastern Ireland in the early nineteenth century. Leavened wheaten bread baked in built-up ovens also has been eaten since medieval times, especially in strong Anglo-Norman areas in East Ireland, where commercial bakeries were established. English-style breads were available in cities in the early seventeenth century, and public or common bake houses are attested from this period in some urban areas. Built-up ovens might be found in larger inns and prosperous households, but general home production of leavened bread, baked in a pot oven on the open hearth, dates from the nineteenth century, when bicarbonate of soda, combined with sour milk or buttermilk, was used as a leaven. A refreshing drink called sowens was made from slightly fermented wheat husks. Used as a substitute for fresh milk in tea or for sour milk in bread making when milk was scarce, it replaced milk on Spy Wednesday (the Wednesday of Holy Week) and Good Friday as a form of penance. A jelly called flummery, procured from the liquid by boiling, was widely used.

Meat, Fowl, and Fish

Beef and mutton have been eaten in Ireland from prehistoric times, and meat was still considered a status food in the early twenty-first century. Pigs have been raised exclusively for their meat, and a variety of pork products have always been highly valued foodstuffs.
Domestic fowl have been a significant part of the diet since early times, and eggs have also figured prominently. Wild fowl have been hunted, and seafowl provide seasonal, supplementary variations in diet in some seacoast areas. Fish, including shellfish, have been a food of coastal communities since prehistoric times. Freshwater fish are mentioned prominently in early sources and in travelers' accounts throughout the medieval period. Fish were included in festive menus in the nineteenth century and were eaten fresh or cured in many ordinary households while the obligation of Friday abstinence from flesh meat remained in force. Milk and whey were the most popular drinks in early and medieval Ireland, but ale was a drink of great social importance. It was also regarded as a nutritional drink suitable for invalids and was featured in monastic diets at the celebration of Easter. Mead made by fermenting honey with water apparently was more prestigious than beer. Wine, an expensive import, was a festive drink in secular and monastic contexts. Whiskey distillation was known from the thirteenth century. Domestic ale and cider brewing declined drastically after the eighteenth century in the face of commercial breweries and distilleries. Nonalcoholic beverages, such as coffee, chocolate, and tea, were consumed initially by the upper sections of society, as the elegant silverware of the eighteenth and nineteenth centuries shows. But tea was consumed by all sections of society by the end of the nineteenth century.

Vegetables and Fruit

From early times the Irish cultivated a variety of plants for food. Garden peas and broad beans are mentioned in an eighth-century law text, and it appears that some members of the allium family (possibly onions), leeks, cabbages, chives, and some root vegetables were also grown. Pulses were significant in areas of strong Anglo-Norman settlement in medieval times but were disappearing as a field crop by 1800, when vegetable growing declined due to market forces. Cabbage remained the main vegetable of the poor. Apples and plums were cultivated in early Ireland, and orchards were especially prominent in English-settled areas. Exotic fruits were grown in the walled gardens of the gentry or were imported for the large urban markets. A range of wild vegetables and fruits, especially crabapples, bilberries, and blackberries, were exploited seasonally. Edible algae have traditionally been used as supplementary food products along the coast of Ireland. Duileasc (Palmaria palmata), anglicized as "dulse" or "dilisk" and frequently mentioned in the early Irish law texts, is one of the most popularly consumed seaweeds in Ireland. Rich in potassium and magnesium, it is eaten raw on its own or in salads, or it is stewed and served as a relish or a condiment for potatoes or bread. Sleabhach (Porphyra), anglicized as "sloke," is boiled, dressed with butter, and seasoned and eaten as an independent dish or with potatoes. Carraigin (Chondrus crispus), or carrageen moss, has traditionally been valued for its medicinal and nutritional qualities. Used earlier as a milk thickener and boiled in milk to make a blancmange, it has come to be regarded as a health food. Collecting shore foods, such as edible seaweeds and shellfish, was a common activity along the Atlantic Coast of Ireland on Good Friday, a day of strict abstinence. The foodstuffs collected were eaten as the main meal rather than as an accompaniment to potatoes.
Introduced in Ireland toward the end of the sixteenth century, the potato was widely consumed by all social classes, with varying degrees of emphasis, by the nineteenth century. Its widespread diffusion is evident in the broad context of the evolution in the Irish diet from the seventeenth century. In the wake of the English conquest of Ireland, the seventeenth and eighteenth centuries were a time of sustained transition in Irish economic, demographic, and social life. Demographic expansion beginning in 1600 led to a population in excess of 8 million by 1800. The food supply altered strikingly during that period. The diet of the affluent remained rich and varied, while commercialization gradually removed milk and butter from the diet of the poor and resulted in an increased emphasis on grain products. The commercialization of grain and the difficulty in accessing land during the eighteenth century forced the poorer sections of society to depend on the potato, which was the dietary staple par excellence of about 3 million Irish people by the early nineteenth century. Fungus-induced potato crop failures from 1845 to 1848 caused the great Irish famine, a major human disaster. Diets changed gradually in the postfamine years, and while the potato was but one of many staples by the end of the nineteenth century, it never lost its appeal. The ripening of the new potato crop in the autumn remained a matter for celebration. In many parts of the country the first meal of this crop consisted of mashed potatoes with scallions and seasoning. Colcannon, typically associated with Halloween, is made of mashed potatoes mixed with a little fresh milk, chopped kale or green cabbage, fresh onions, and seasoning, with a large knob of butter placed on the top. In some parts people originally ate it from a communal dish. Boiled potatoes are also the basic ingredient for potato cakes. The mashed potatoes are mixed with melted butter, seasoning, and sufficient flour to bind the dough. Cut into triangles, called farls, or individual small, round cakes, they are cooked on both sides on a hot, lightly floured griddle or in a hot pan with melted butter or bacon fat. Apple potato cake or "fadge" was popularly associated with Halloween in northeast Ireland. The potato cake mixture was divided in two, and layers of raw sliced apples were placed on the base, then the apples were covered with the remaining dough. The cake was baked in the pot oven until almost ready. At that point the upper crust was peeled back, and brown sugar was sprinkled on the apples. The cake was returned to the oven until the sugar melted. "Stampy" cakes or pancakes were made from raw grated potatoes, sieved and mixed with flour, baking powder, seasoning, a beaten egg, and fresh milk, and cooked on the griddle or pan. The menus of restaurants that offer "traditional Irish cuisine" include such popular foods, which also were commercially produced by the late twentieth century. But as Irish society becomes increasingly pluralistic, the so-called "international cuisine" and a wide range of ethnic restaurants characterize the public provision of food in major urban areas. In the private sphere, however, relatively plain, freshly cooked food for each meal is the norm. Milk, bread, butter, meat, vegetables, and potatoes, though the last are of declining importance, remain the basic elements of the Irish diet.

See also Potato; Sea Birds and Their Eggs.

Cullen, L. M. The Emergence of Modern Ireland, 1600–1900. London: Batsford, 1981.
Danaher, Kevin. The Year in Ireland. Cork, Ireland: Mercier, 1972.
Flanagan, Laurence. Ancient Ireland: Life before the Celts. Dublin, Ireland: Gill and Macmillan, 1998.
Jackson, Kenneth Hurlstone, ed. Aislinge Meic Con Glinne (Vision of Mac Con Glinne). Dublin: School of Celtic Studies, Dublin Institute of Advanced Studies, 1990.
Kelly, Fergus. Early Irish Farming: A Study Based Mainly on the Law-Texts of the 7th and 8th Centuries A.D. Dublin: School of Celtic Studies, Dublin Institute of Advanced Studies, 1997.
Lucas, A. T. "Irish Food before the Potato." Gwerin 3, no. 2 (1960): 8–40.
Lysaght, Patricia. "Bealtaine: Women, Milk, and Magic at the Boundary Festival of May." In Milk and Milk Products from Medieval to Modern Times, edited by Patricia Lysaght, pp. 208–229. Edinburgh: Canongate Academic, 1994.
Lysaght, Patricia. "Food-Provision Strategies on the Great Blasket Island: Sea-bird Fowling." In Food from Nature: Attitudes, Strategies, and Culinary Practices, edited by Patricia Lysaght, pp. 333–336. Uppsala: The Royal Gustavus Adolphus Academy for Swedish Folk Culture, 2000.
Lysaght, Patricia. "Innovation in Food—The Case of Tea in Ireland." Ulster Folklife 33 (1987): 44–71.
Ó Danachair, Caoimhín. "Bread in Ireland." In Food in Perspective, edited by Trefor M. Owen and Alexander Fenton, pp. 57–67. Edinburgh: John Donald, 1981.
O'Neill, Timothy P. "Food." In Life and Tradition in Rural Ireland, edited by Timothy P. O'Neill, pp. 56–67. London: Dent, 1977.
Ó Sé, Michael. "Old Irish Cheeses and Other Milk Products." Journal of the Cork Historical and Archaeological Society 53 (1948): 82–87.
Sexton, Regina. A Little History of Irish Food. Dublin, Ireland: Gill and Macmillan, 1998.

"Ireland." Encyclopedia of Food and Culture. Encyclopedia.com. http://www.encyclopedia.com/food/encyclopedias-almanacs-transcripts-and-maps/ireland

Ireland, Irish Eire (âr´ə) [to it are related the poetic Erin and perhaps the Latin Hibernia], island, 32,598 sq mi (84,429 sq km), second largest of the British Isles. The island is divided into two major political units—Northern Ireland (see Ireland, Northern), which is joined with Great Britain in the United Kingdom, and the Republic of Ireland (see Ireland, Republic of). Of the 32 counties of Ireland, 26 lie in the Republic, and of the four historic provinces, three and part of the fourth are in the Republic.

Geology and Geography

Ireland lies west of the island of Great Britain, from which it is separated by the narrow North Channel, the Irish Sea (which attains a width of 130 mi/209 km), and St. George's Channel. More than a third the size of Britain, the island averages 140 mi (225 km) in width and 225 mi (362 km) in length. A large central plain extending to the Irish Sea between the Mourne Mts. in the north and the mountains of Wicklow in the south is roughly enclosed by a highland rim. The highlands of the north, west, and south, which rise to more than 3,000 ft (914 m), are generally barren, but the central plain is extremely fertile and the climate is temperate and moist, warmed by southwesterly winds. The rains, which are heaviest in the west (some areas have more than 80 in./203 cm annually), are responsible for the brilliant green grass of the "emerald isle," and for the large stretches of peat bog, a source of valuable fuel. The coastline is irregular, affording many natural harbors.
Off the west coast are numerous small islands, including the Aran Islands, the Blasket Islands, Achill, and Clare Island. The interior is dotted with lakes (the most celebrated are the Lakes of Killarney) and wide stretches of river called loughs. The Shannon, the longest of Irish rivers, drains the western plain and widens into the beautiful loughs Allen, Ree, and Derg. The River Liffey empties into Dublin Bay, the Lee into Cork Harbour at Cobh, the Foyle into Lough Foyle near Derry, and the Lagan into Belfast Lough.

Ireland to the English Conquest

The earliest known people in Ireland belonged to the groups that inhabited all of the British Isles in prehistoric times. In the several centuries preceding the birth of Jesus a number of Celtic tribes invaded and conquered Ireland and established their distinctive culture (see Celt), although they do not seem to have come in great numbers. Ancient Irish legend tells of four successive peoples who invaded the country—the Firbolgs, the Fomors, the Tuatha De Danann, and the Milesians. Oddly enough, the Romans, who occupied Britain for 400 years, never came to Ireland, and the Anglo-Saxon invaders of Britain, who largely replaced the Celtic population there, did not greatly affect Ireland. Until the raids of the Norse in the late 8th cent., Ireland remained relatively untouched by foreign incursions and enjoyed the golden age of its culture. The people, Celtic and non-Celtic alike, were organized into clans, or tribes, which in the early period owed allegiance to one of five provincial kings—of Ulster, Munster, Connacht, Leinster, and Meath (now the northern part of Leinster). These kings nominally served the high king of all Ireland at Tara (in Meath). The clans fought constantly among themselves, but despite civil strife, literature and art were held in high respect. Each chief or king kept an official poet (Druid) who preserved the oral traditions of the people. The Gaelic language and culture were extended into Scotland by Irish emigrants in the 5th and 6th cent. Parts of Ireland had already been Christianized before the arrival of St. Patrick in the 5th cent., but pagan tradition continued to appeal to the imagination of Irish poets even after the complete conversion of the country. The Celtic Christianity of Ireland produced many scholars and missionaries who traveled to England and the Continent, and it attracted students to Irish monasteries, which until the 8th cent. were perhaps the most brilliant in Europe. St. Columba and St. Columban were among the most famous of Ireland's missionaries. All the arts flourished; Irish illuminated manuscripts were particularly noteworthy. The Book of Kells (see Ceanannus Mór) is especially famous. The country did not develop a strong central government, however, and it was not united to meet the invasions of the Norse, who settled on the shores of the island late in the 8th cent., establishing trading towns (including Dublin, Waterford, and Limerick) and creating new petty kingdoms. In 1014, at Clontarf, Brian Boru, who had become high king by conquest in 1002, broke the strength of the Norse invaders. There followed a period of 150 years during which Ireland was free from foreign interference but was torn by clan warfare.

Ireland and the English

In the 12th cent., Pope Adrian IV granted overlordship of Ireland to Henry II of England.
The English conquest of Ireland was begun by Richard de Clare, 2d earl of Pembroke, known as Strongbow, who intervened in behalf of a claimant to the throne of Leinster; in 1171, Henry himself went to Ireland, temporarily establishing his overlordship there. With this invasion commenced an Anglo-Irish struggle that continued for nearly 800 years. The English established themselves in Dublin. Roughly a century of warfare ensued as Ireland was divided into English shires ruled from Dublin, the domains of feudal magnates who acknowledged English sovereignty, and the independent Irish kingdoms. Many English intermarried with the Irish and were assimilated into Irish society. In the late 13th cent. the English introduced a parliament in Ireland. In 1315, Edward Bruce of Scotland invaded Ireland and was joined by many Irish kings. Although Bruce was killed in 1318, the English authority in Ireland was weakening, becoming limited to a small district around Dublin known as the Pale; the rest of the country fell into a struggle for power among the ruling Anglo-Irish families and Irish chieftains. English attention was diverted by the Hundred Years War with France (1337–1453) and the Wars of the Roses (1455–85). However, under Henry VII new interest in the island was aroused by Irish support for Lambert Simnel, a Yorkist pretender to the English throne. To crush this support, Henry sent to Ireland Sir Edward Poynings, who summoned an Irish Parliament at Drogheda and forced it to pass the legislation known as Poynings' Law (1495). These acts provided that future Irish Parliaments and legislation receive prior approval from the English Privy Council. A free Irish Parliament was thus rendered impossible. The English Reformation under Henry VIII gave rise in England to increased fears of foreign, Catholic invasion; control of Ireland thus became even more imperative. Henry VIII put down a rebellion (1534–37), abolished the monasteries, confiscated lands, and established a Protestant "Church of Ireland" (1537). But since the vast majority of Irish remained Roman Catholic, the seeds of bitter religious contention were added to the already rancorous Anglo-Irish relations. The Irish rebelled three times during the reign of Elizabeth I and were brutally suppressed. Under James I, Ulster was settled by Scottish and English Protestants, and many of the Catholic inhabitants were driven off their lands; thus two sharply antagonistic communities were established. Another Irish rebellion, begun in 1641 in reaction to the hated rule of Charles I's deputy, Thomas Wentworth, earl of Strafford, was crushed (1649–50) by Oliver Cromwell with the loss of hundreds of thousands of lives. More land was confiscated (and often given to absentee landlords), and more Protestants settled in Ireland. The intractable landlord-tenant problem that plagued Ireland in later centuries can be traced to the English confiscations of the 16th and 17th cent. Irish Catholics rallied to the cause of James II after his overthrow (1688) in England (see the Glorious Revolution), while the Protestants in Ulster enthusiastically supported William III. At the battle of the Boyne (1690) near Dublin, James and his French allies were defeated by William. The English-controlled Irish Parliament passed harsh Penal Laws designed to keep the Catholic Irish powerless; political equality was also denied to Presbyterians. 
At the same time English trade policy depressed the economy of Protestant Ireland, causing many so-called Scotch-Irish to emigrate to America. A newly flourishing woolen industry was destroyed when export from Ireland was forbidden. During the American Revolution, fear of a French invasion of Ireland led Irish Protestants to form (1778–82) the Protestant Volunteer Army. The Protestants, led by Henry Grattan, and even supported by some Catholics, used their military strength to extract concessions for Ireland from Britain. Trade concessions were granted in 1779, and, with the repeal of Poynings' Law (1782), the Irish Parliament had its independence restored. But the Parliament was still chosen undemocratically, and Catholics continued to be denied the right to hold political office. Another unsuccessful rebellion was staged in 1798 by Wolfe Tone, a Protestant who had formed the Society of United Irishmen and who accepted French aid in the uprising. The reliance on French assistance revived anti-Catholic feeling among the Irish Protestants, who remembered French support of the Jacobite restoration. The rebellion convinced the British prime minister, William Pitt, that the Irish problem could be solved by the adoption of three policies: abolition of the Irish Parliament, legislative union with Britain in a United Kingdom of Great Britain and Ireland, and Catholic Emancipation. The first two goals were achieved in 1800, but the opposition of George III and British Protestants prevented the enactment of the Catholic Emancipation Act until 1829, when it was accomplished largely through the efforts of the Irish leader Daniel O'Connell.

Ireland under the Union

After 1829 the Irish representatives in the British Parliament attempted to maintain the Irish question as a major issue in British politics. O'Connell worked to repeal the union with Britain, which was felt to operate to Ireland's disadvantage, and to reform the government in Ireland. Toward the middle of the century, the Irish Land Question grew increasingly urgent. But the Great Potato Famine (1845–49), one of the worst natural disasters in history, dwarfed political developments. During these years blight ruined the potato crop, destroying the staple food of the Irish population; hundreds of thousands perished from hunger and disease. Many thousands of others emigrated; between 1847 and 1854 about 1.6 million went to the United States. The population dropped from an estimated 8.5 million in 1845 to 6.55 million in 1851 (and continued to decline until the 1960s). Exacerbating the situation was the lack of attention given to it in England, whose press scarcely mentioned the famine and whose leaders did almost nothing to alleviate Ireland's suffering. Irish emigrants in America formed the secret Fenian movement, dedicated to Irish independence. In 1869 the British prime minister William Gladstone sponsored an act disestablishing the Protestant "Church of Ireland" and thereby removed one Irish grievance. In the 1870s, Irish politicians renewed efforts to achieve Home Rule within the union, while in Britain Gladstone and others attempted to solve the Irish problem through land legislation and Home Rule. Gladstone twice submitted Home Rule bills (1886 and 1893) that failed. The proposals alarmed Protestant Ulster, which began to organize against Home Rule. In 1905, Arthur Griffith founded Sinn Féin among Irish Catholics, but for the time being the dominant Irish nationalist group was the Home Rule party of John Redmond.
Home Rule was finally enacted in 1914, with the provision that Ulster could remain in the union for six more years, but the act was suspended for the duration of World War I and never went into effect. In both Ulster and Catholic Ireland militias were formed. The Irish Republican Brotherhood, a descendant of the Fenians, organized a rebellion on Easter Monday, 1916; although unsuccessful, the rising acquired great propaganda value when the British executed its leaders. Sinn Féin, linked in the Irish public's mind with the rising and aided by Britain's attempt to apply conscription to Ireland, scored a tremendous victory in the parliamentary elections of 1918. Its members refused to take their seats in Westminster, declared themselves the Dáil Éireann (Irish Assembly), and proclaimed an Irish Republic. The British outlawed both Sinn Féin and the Dáil, which went underground and engaged in guerrilla warfare (1919–21) against local Irish authorities representing the union. The British sent troops, the Black and Tans, who inflamed the situation further. A new Home Rule bill was enacted in 1920, establishing separate parliaments for Ulster and Catholic Ireland. This was accepted by Ulster, and Northern Ireland was created. The plan was rejected by the Dáil, but in autumn 1921, Prime Minister Lloyd George negotiated with Griffith and Michael Collins of the Dáil a treaty granting Dominion status within the British Empire to Catholic Ireland. The Irish Free State was established in January 1922. A new constitution was ratified in 1937 that terminated Great Britain's sovereignty. In 1948, all semblance of Commonwealth membership ended with the Republic of Ireland Act. See Ireland, Republic of and Ireland, Northern.

See N. Mansergh, The Irish Question, 1840–1921 (1965); J. C. Beckett, The Making of Modern Ireland, 1603–1921 (1966); K. S. Bottigheimer, Ireland and the Irish (1982); R. Munck, Ireland (1985); R. D. Crotty, Ireland in Crisis (1986); R. F. Foster, Modern Ireland, 1600–1972 (1989); J. Lee, Ireland, 1912–1985 (1989); T. Cahill, How the Irish Saved Civilization (1995); C. C. O'Brien, Religion and Nationalism in Ireland (1995); D. Kiberd, Inventing Ireland (1996); N. Davies, The Isles (2000); T. Bartlett, Ireland (2010); J. Crowley et al., Atlas of the Great Irish Famine (2012); J. Kelly, The Graves Are Walking: The Great Famine and the Saga of the Irish People (2012); R. F. Foster, Vivid Faces: The Revolutionary Generation in Ireland, 1890–1923 (2015).

"Ireland." The Columbia Encyclopedia, 6th ed. Encyclopedia.com. http://www.encyclopedia.com/reference/encyclopedias-almanacs-transcripts-and-maps/ireland-0

Official name: Ireland
Area: 70,280 square kilometers (27,135 square miles)
Highest point on mainland: Mount Carrantuohil (1,041 meters/3,416 feet)
Lowest point on land: Sea level
Hemispheres: Northern and Eastern
Time zone: Noon = noon GMT
Longest distances: 275 kilometers (171 miles) from east to west; 486 kilometers (302 miles) from north to south
Coastline: 1,448 kilometers (900 miles)
Territorial sea limits: 22 kilometers (12 nautical miles)

1 LOCATION AND SIZE

Ireland is located on an island in the eastern part of the North Atlantic Ocean. Situated on the European continental shelf, it lies at the westernmost edge of Europe, to the west of Great Britain.
The northeastern corner of the island is occupied by Northern Ireland, which belongs to Britain and is separated from the independent republic to its south by a winding border. Covering an area of 70,280 square kilometers (27,135 square miles), Ireland is slightly larger than the state of West Virginia.

2 TERRITORIES AND DEPENDENCIES

Ireland has no territories or dependencies.

3 CLIMATE

Ireland's proximity to the Atlantic Ocean gives it a mild maritime climate. Average temperatures range from 4°C to 7°C (39°F to 45°F) in January, and from 14°C to 16°C (57°F to 61°F) in July. Ireland's weather is humid and highly changeable. A common saying about Irish weather is "If you don't like it, wait a couple of minutes!" Average annual rainfall ranges from roughly 76 centimeters (30 inches) in the eastern part of the country to over 250 centimeters (100 inches) in the western highlands.

4 TOPOGRAPHIC REGIONS

Ireland's low, central limestone plateau rimmed by coastal highlands has been compared to a gigantic saucer. In spite of these coastal highlands, Ireland is generally a low country. Only about 20 percent of its terrain is higher than 150 meters (500 feet) above sea level, and even its mountains rarely exceed altitudes of 900 meters (3,000 feet).

5 OCEANS AND SEAS

Ireland is bounded on the east and southeast by the Irish Sea and St. George's Channel, and on the north and west by the Atlantic Ocean. The North Channel separates Northern Ireland from Scotland.

Seacoast and Undersea Features

There are deepwater coral reefs off the western coast of Ireland. Their presence is considered a possible indicator of underwater oil and gas reserves.

Sea Inlets and Straits

The western and northwestern parts of the Irish coast have numerous bays and inlets, of which the largest are Donegal Bay and Galway Bay, where the Aran Islands are located. The deepest coastal indentation is at the mouth of the Shannon River in the southwest. The southwestern corner of Ireland has deep, fjord-like indentations between a series of capes, where the mountains of Kerry and Cork jut out into the sea.

Islands and Archipelagos

Of the several small islands off the western coast, the best-known are the three Aran Islands situated at the mouth of Galway Bay. Ireland's eastern coast, which faces England and Wales, is smooth, while the coasts to the west and northwest are deeply indented. Much of the Irish coastline is rocky; however, there are also long stretches of sandy beach known as strands. Many are lined with dunes.

6 INLAND LAKES

Ireland's slow-moving rivers widen into loughs (lakes) at many points in the central lowlands before moving on to the sea. Among the largest loughs are Lough Corrib, Lough Mask, and Lough Conn, all in the western counties of Galway and Mayo.

7 RIVERS AND WATERFALLS

The rivers of Ireland are among the most attractive features of the landscape. The Shannon, which is the longest river, rises near Sligo Bay. Altogether, it drains over 10,360 square kilometers (4,000 square miles) of the central lowlands. Other rivers of the lowlands include the Boyne and the Barrow. The Clare and Moy Rivers flow through the west, the Finn flows in the north, and the Barrow, Suir, and Blackwater are among the southern rivers.

8 DESERTS

There are no deserts in Ireland.

9 FLAT AND ROLLING TERRAIN

The average elevation of the central lowlands is about 60 meters (200 feet), although various hills, ridges, and loughs break up this terrain in many places.
The Irish peat bogs, although rapidly diminishing in number, are still the country's most distinctive physical feature. Ireland also has both coastal and interior wetlands.

10 MOUNTAINS AND VOLCANOES

Ireland has a number of mountain systems. The highest rise to elevations of about 914 meters (3,000 feet), while the lower ranges have peak elevations between 610 and 914 meters (2,000 and 3,000 feet). Among the higher ranges are the Wicklow Mountains between Dublin and Wexford. The country's highest peak, Mount Carrantuohil (1,041 meters/3,416 feet), is found in Macgillycuddy's Reeks, in the southwest.

DID YOU KNOW? Lough Hyne, which lies below sea level, is one of Europe's only saltwater lakes (or inland seas).

11 CANYONS AND CAVES

Areas of limestone karst are widespread in Ireland, resulting in a large number of caves throughout the country. Major cave sites are found in the counties of Cork and Tipperary in the south, Clare and Kerry in the west, and Sligo and Cavan in the north. The Poulnagollum/Poll Elva cave, the longest in Ireland, is found in the Burren, located in County Clare.

12 PLATEAUS AND MONOLITHS

Distinctive areas of karst plateau are found in northwestern Ireland, in the counties of Leitrim, Cavan, Sligo, and Fermanagh. Among these areas is the plateau known as the Burren in County Clare.

13 MAN-MADE FEATURES

There are a number of bridges in the capital city of Dublin, which is divided into two parts by the River Liffey. Among these are the Grattan, O'Connell, Butt, Queen Maeve, Ha'Penny, and Heuston Bridges. The Grand Canal connects Dublin with Ireland's longest river, the Shannon.

14 FURTHER READING

De Breffny, Brian. In the Steps of St. Patrick. New York: Thames and Hudson, 1982.
Hawks, Tony. Round Ireland with a Fridge. New York: Thomas Dunne Books, 2000.
Wilson, David A. Ireland, a Bicycle and a Tin Whistle. Montreal: McGill-Queen's University Press, 1995.
GoIreland.com. http://www.goireland.com/ (accessed April 24, 2003).
Heritage Ireland. http://www.heritageireland.ie/ (accessed April 24, 2003).

"Ireland." Junior Worldmark Encyclopedia of Physical Geography. Encyclopedia.com. http://www.encyclopedia.com/education/encyclopedias-almanacs-transcripts-and-maps/ireland

Land and climate

The central area of Ireland is a lowland with a mild, wet climate. This area is covered with peat bogs (an important source of fuel) and sections of fertile limestone (the location of dairy farming). Most coastal regions are barren highlands. The interior of Ireland has many lakes and wide rivers (loughs). It boasts the longest river in the British Isles, the Shannon.

History

From c. 3rd century bc to the late 8th century, Ireland was divided into five kingdoms inhabited by Celtic and pre-Celtic tribes. In the 8th century ad, the Danes invaded, establishing trading towns, including Dublin, and creating new kingdoms. In 1014, Brian Boru defeated the Danes, and for the next 150 years Ireland was free from invasion but subject to clan warfare. In 1171, Henry II of England invaded Ireland and established English control. In the late 13th century, an Irish Parliament was formed. In 1315, English dominance was threatened by a Scottish invasion. In the late 15th century, Henry VII restored English hegemony and began the plantation of Ireland by English settlers.
Edward Poynings forced the Irish Parliament to pass Poynings' Law (1495), stating that future Irish legislation must be sanctioned by the English Privy Council. Under James I, the plantation of Ulster intensified. An Irish rebellion (1641–49) was eventually thwarted by Oliver Cromwell. During the Glorious Revolution, Irish Catholics supported James II, while Ulster Protestants backed William III. After James' defeat, the English-controlled Irish Parliament passed a series of punitive laws against Catholics. In 1782, Henry Grattan forced trade concessions and the repeal of Poynings' Law. William Pitt's government passed the Act of Union (1801), which abolished the Irish Assembly and created the United Kingdom of Great Britain and Ireland. In 1829, largely due to the efforts of Daniel O'Connell, the Act of Catholic Emancipation was passed, which secured Irish representation in the British Parliament. A blight ruined the Irish potato crop and caused the Irish Famine (1845–49). Nationalist demands intensified. Gladstone failed to secure Irish Home Rule amid mounting pressure from fearful Ulster Protestants. In 1905 Arthur Griffith founded Sinn Féin. In 1914 Home Rule was agreed, but implementation was suspended during World War I. In the Easter Rising (April 1916), Irish nationalists announced the creation of the Republic of Ireland. The British Army's brutal crushing of the rebellion was a propaganda victory for Sinn Féin and led to a landslide victory in Irish elections (1918). Between 1918 and 1921 the Irish Republican Army (IRA), founded by Michael Collins, fought a guerrilla war against British forces. In 1920, a new Home Rule Bill established separate parliaments for Ulster and Catholic Ireland. The Anglo-Irish Treaty (1921) led to the creation of an Irish Free State in January 1922 and de facto acceptance of partition. (For history post-1922, see Ireland, Republic of; Ireland, Northern.)

"Ireland." World Encyclopedia. Encyclopedia.com. http://www.encyclopedia.com/environment/encyclopedias-almanacs-transcripts-and-maps/ireland

The people of Ireland are called Irish. Throughout history, Ireland has been inhabited by Celts, Norsemen, French Normans, and English, and these groups have been so intermingled that no purely ethnic divisions remain.

"Ireland." Junior Worldmark Encyclopedia of World Cultures. Encyclopedia.com. http://www.encyclopedia.com/international/encyclopedias-almanacs-transcripts-and-maps/ireland

Na hÉireanneach; Na Gaeil

Identification. The Republic of Ireland (Poblacht na hÉireann in Irish, although commonly referred to as Éire, or Ireland) occupies five-sixths of the island of Ireland, the second largest island of the British Isles. Irish is the common term of reference for the country's citizens, its national culture, and its national language. While Irish national culture is relatively homogeneous when compared to multinational and multicultural states elsewhere, Irish people recognize both some minor and some significant cultural distinctions that are internal to the country and to the island.
In 1922 Ireland, which until then had been part of the United Kingdom of Great Britain and Ireland, was politically divided into the Irish Free State (later the Republic of Ireland) and Northern Ireland, which continued as part of the renamed United Kingdom of Great Britain and Northern Ireland. Northern Ireland occupies the remaining sixth of the island. Almost eighty years of separation have resulted in diverging patterns of national cultural development between these two neighbors, as seen in language and dialect, religion, government and politics, sport, music, and business culture. Nevertheless, the largest minority population in Northern Ireland (approximately 42 percent of the total population of 1.66 million) consider themselves to be nationally and ethnically Irish, and they point to the similarities between their national culture and that of the Republic as one reason why they, and Northern Ireland, should be reunited with the Republic, in what would then constitute an all-island nation-state. The majority population in Northern Ireland, who consider themselves to be nationally British, and who identify with the political communities of Unionism and Loyalism, do not seek unification with Ireland, but rather wish to maintain their traditional ties to Britain. Within the Republic, cultural distinctions are recognized between urban and rural areas (especially between the capital city Dublin and the rest of the country), and between regional cultures, which are most often discussed in terms of the West, the South, the Midlands, and the North, and which correspond roughly to the traditional Irish provinces of Connacht, Munster, Leinster, and Ulster, respectively. While the overwhelming majority of Irish people consider themselves to be ethnically Irish, some Irish nationals see themselves as Irish of British descent, a group sometimes referred to as the "Anglo-Irish" or "West Britons." Another important cultural minority are Irish "Travellers," who have historically been an itinerant ethnic group known for their roles in the informal economy as artisans, traders, and entertainers. There are also small religious minorities (such as Irish Jews), and ethnic minorities (such as Chinese, Indians, and Pakistanis), who have retained many aspects of cultural identification with their original national cultures. Location and Geography. Ireland is in the far west of Europe, in the North Atlantic Ocean, west of the island of Great Britain. The island is 302 miles (486 kilometers) long, north to south, and 174 miles (280 kilometers) at its widest point. The area of the island is 32,599 square miles (84,431 square kilometers), of which the Republic covers 27,136 square miles (70,280 square kilometers). The Republic has 223 miles (360 kilometers) of land border, all with the United Kingdom, and 898 miles (1,448 kilometers) of coastline. It is separated from its neighboring island of Great Britain to the east by the Irish Sea, the North Channel, and Saint George's Channel. The climate is temperate maritime, modified by the North Atlantic Current. Ireland has mild winters and cool summers. Because of the high precipitation, the climate is consistently humid. The Republic is marked by a low-lying fertile central plain surrounded by hills and uncultivated small mountains around the outer rim of the island. Its high point is 3,414 feet (1,041 meters). The largest river is the Shannon, which rises in the northern hills and flows south and west into the Atlantic.
The capital city, Dublin (Baile Átha Cliath in Irish), at the mouth of the River Liffey in central eastern Ireland, on the original site of a Viking settlement, is currently home to almost 40 percent of the Irish population; it served as the capital of Ireland before and during Ireland's integration within the United Kingdom. As a result, Dublin has long been noted as the center of the oldest Anglophone and British-oriented area of Ireland; the region around the city has been known as the "English Pale" since medieval times. Demography. The population of the Republic of Ireland was 3,626,087 in 1996, an increase of 100,368 since the 1991 census. The Irish population has increased slowly since the drop in population that occurred in the 1920s. This rise in population is expected to continue as the birthrate has steadily increased while the death rate has steadily decreased. Life expectancy for males and females born in 1991 was 72.3 and 77.9, respectively (these figures for 1926 were 57.4 and 57.9, respectively). The national population in 1996 was relatively young: 1,016,000 people were in the 25–44 age group, and 1,492,000 people were younger than 25. The greater Dublin area had 953,000 people in 1996, while Cork, the nation's second largest city, was home to 180,000. Although Ireland is known worldwide for its rural scenery and lifestyle, in 1996 1,611,000 of its people lived in its 21 most populated cities and towns, and 59 percent of the population lived in urban areas of one thousand people or more. The population density in 1996 was 135 per square mile (52 per square kilometer). Linguistic Affiliation. Irish (Gaelic) and English are the two official languages of Ireland. Irish is a Celtic (Indo-European) language, part of the Goidelic branch of insular Celtic (as are Scottish Gaelic and Manx). Irish evolved from the language brought to the island in the Celtic migrations between the sixth and the second century b.c.e. Despite hundreds of years of Norse and Anglo-Norman migration, by the sixteenth century Irish was the vernacular for almost all of the population of Ireland. The subsequent Tudor and Stuart conquests and plantations (1534–1610), the Cromwellian settlement (1654), the Williamite war (1689–1691), and the enactment of the Penal Laws (1695) began the long process of the subversion of the language. Nevertheless, in 1835 there were four million Irish speakers in Ireland, a number that was severely reduced in the Great Famine of the late 1840s. By 1891 there were only 680,000 Irish speakers, but the key role that the Irish language played in the development of Irish nationalism in the nineteenth century, as well as its symbolic importance in the new Irish state of the twentieth century, have not been enough to reverse the process of vernacular language shift from Irish to English. In the 1991 census, in those few areas where Irish remains the vernacular, and which are officially defined as the Gaeltacht, there were only 56,469 Irish-speakers. Most primary and secondary school students in Ireland study Irish, however, and it remains an important means of communication in governmental, educational, literary, sports, and cultural circles beyond the Gaeltacht. (In the 1991 census, almost 1.1 million Irish people claimed to be Irish-speaking, but this number does not distinguish levels of fluency and usage.) 
Irish is one of the preeminent symbols of the Irish state and nation, but by the start of the twentieth century English had supplanted Irish as the vernacular language, and all but a very few ethnic Irish are fluent in English. Hiberno-English (the English language spoken in Ireland) has been a strong influence in the evolution of British and Irish literature, poetry, theater, and education since the end of the nineteenth century. The language has also been an important symbol to the Irish national minority in Northern Ireland, where despite many social and political impediments its use has been slowly increasing since the return of armed conflict there in 1969. Symbolism. The flag of Ireland has three equal vertical bands of green (hoist side), white, and orange. This tricolor is also the symbol of the Irish nation in other countries, most notably in Northern Ireland among the Irish national minority. Other flags that are meaningful to the Irish include the golden harp on a green background and the Dublin workers' flag of "The Plough and the Stars." The harp is the principal symbol on the national coat of arms, and the badge of the Irish state is the shamrock. Many symbols of Irish national identity derive in part from their association with religion and church. The shamrock clover is associated with Ireland's patron Saint Patrick, and with the Holy Trinity of Christian belief. A Saint Brigid's cross is often found over the entrance to homes, as are representations of saints and other holy people, as well as portraits of the greatly admired, such as Pope John XXIII and John F. Kennedy. Green is the color associated worldwide with Irishness, but within Ireland, and especially in Northern Ireland, it is more closely associated with being both Irish and Roman Catholic, whereas orange is the color associated with Protestantism, and more especially with Northern Irish people who support Loyalism to the British crown and continued union with Great Britain. The colors of red, white, and blue, those of the British Union Jack, are often used to mark the territory of Loyalist communities in Northern Ireland, just as orange, white, and green mark Irish Nationalist territory there. Sports, especially the national ones organized by the Gaelic Athletic Association such as hurling, camogie, and Gaelic football, also serve as central symbols of the nation.

History and Ethnic Relations

Emergence of the Nation. The nation that evolved in Ireland was formed over two millennia, the result of diverse forces both internal and external to the island. While there were a number of groups of people living on the island in prehistory, the Celtic migrations of the first millennium b.c.e. brought the language and many aspects of Gaelic society that have figured so prominently in more recent nationalist revivals. Christianity was introduced in the fifth century c.e., and from its beginning Irish Christianity has been associated with monasticism. Irish monks did much to preserve European Christian heritage before and during the Middle Ages, and they ranged throughout the continent in their efforts to establish their holy orders and serve their God and church. From the early ninth century Norsemen raided Ireland's monasteries and settlements, and by the next century they had established their own coastal communities and trading centers.
The traditional Irish political system, based on five provinces (Meath, Connacht, Munster, Leinster, and Ulster), assimilated many Norse people, as well as many of the Norman invaders from England after 1169. Over the next four centuries, although the Anglo-Normans succeeded in controlling most of the island, thereby establishing feudalism and their structures of parliament, law, and administration, they also adopted the Irish language and customs, and intermarriage between Norman and Irish elites had become common. By the end of the fifteenth century, the Gaelicization of the Normans had resulted in only the Pale, around Dublin, being controlled by English lords. In the sixteenth century, the Tudors sought to reestablish English control over much of the island. The efforts of Henry VIII to disestablish the Catholic Church in Ireland began the long association between Irish Catholicism and Irish nationalism. His daughter, Elizabeth I, accomplished the English conquest of the island. In the early seventeenth century the English government began a policy of colonization by importing English and Scottish immigrants, a policy that often necessitated the forcible removal of the native Irish. Today's nationalist conflict in Northern Ireland has its historical roots in this period, when New English Protestants and Scottish Presbyterians moved into Ulster. William of Orange's victory over the Stuarts at the end of the seventeenth century led to the period of the Protestant Ascendancy, in which the civil and human rights of the native Irish, the vast majority of whom were Catholics, were repressed. By the end of the eighteenth century the cultural roots of the nation were strong, having grown through a mixture of Irish, Norse, Norman, and English language and customs, and were a product of English conquest, the forced introduction of colonists with different national backgrounds and religions, and the development of an Irish identity that was all but inseparable from Catholicism.

National Identity. The long history of modern Irish revolutions began in 1798, when Catholic and Presbyterian leaders, influenced by the American and French Revolutions and desirous of the introduction of some measure of Irish national self-government, joined together to use force to attempt to break the link between Ireland and England. This, and subsequent rebellions in 1803, 1848, and 1867, failed. Ireland was made part of the United Kingdom in the Act of Union of 1801, which lasted until the end of World War I (1914–1918), when the Irish War of Independence led to a compromise agreement between the Irish belligerents, the British government, and Northern Irish Protestants who wanted Ulster to remain part of the United Kingdom. This compromise established the Irish Free State, which was composed of twenty-six of Ireland's thirty-two counties. The remainder became Northern Ireland, the only part of Ireland to stay in the United Kingdom, and wherein the majority population were Protestant and Unionist. The cultural nationalism that succeeded in gaining Ireland's independence had its origin in the Catholic emancipation movement of the early nineteenth century, but it was galvanized by Anglo-Irish and other leaders who sought to use the revitalization of Irish language, sport, literature, drama, and poetry to demonstrate the cultural and historical bases of the Irish nation.
This Gaelic Revival stimulated great popular support for both the idea of the Irish nation, and for diverse groups who sought various ways of expressing this modern nationalism. The intellectual life of Ireland began to have a great impact throughout the British Isles and beyond, most notably among the Irish Diaspora who had been forced to flee the disease, starvation, and death of the Great Famine of 1846–1849, when a blight destroyed the potato crop, upon which the Irish peasantry depended for food. Estimates vary, but this famine period resulted in approximately one million dead and two million emigrants. By the end of the nineteenth century many Irish at home and abroad were committed to the peaceful attainment of "Home Rule" with a separate Irish parliament within the United Kingdom while many others were committed to the violent severing of Irish and British ties. Secret societies, forerunners of the Irish Republican Army (IRA), joined with public groups, such as trade union organizations, to plan another rebellion, which took place on Easter Monday, 24 April 1916. The ruthlessness that the British government displayed in putting down this insurrection led to the wide-scale disenchantment of the Irish people with Britain. The Irish War of Independence (1919–1921), followed by the Irish Civil War (1921–1923), ended with the creation of an independent state.

Ethnic Relations. Many countries in the world have sizable Irish ethnic minorities, including the United States, Canada, the United Kingdom, Australia, and Argentina. While many of these people descend from emigrants of the mid- to late nineteenth century, many others are descendants of more recent Irish emigrants, while still others were born in Ireland. These ethnic communities identify in varying degrees with Irish culture, and they are distinguished by their religion, dance, music, dress, food, and secular and religious celebrations (the most famous of which are the Saint Patrick's Day parades that are held in Irish communities around the world on 17 March). While Irish immigrants often suffered from religious, ethnic, and racial bigotry in the nineteenth century, their communities today are characterized by both the resilience of their ethnic identities and the degree to which they have assimilated to host national cultures. Ties to the "old country" remain strong. Many people of Irish descent worldwide have been active in seeking a solution to the national conflict in Northern Ireland, known as the "Troubles." Ethnic relations in the Republic of Ireland are relatively peaceful, given the homogeneity of national culture, but Irish Travellers have often been the victims of prejudice. In Northern Ireland the level of ethnic conflict, which is inextricably linked to the province's bifurcation of religion, nationalism, and ethnic identity, is high, and has been since the outbreak of political violence in 1969. Since 1994 there has been a shaky and intermittent cease-fire among the paramilitary groups in Northern Ireland. The 1998 Good Friday agreement is the most recent accord.

Urbanism, Architecture, and the Use of Space

The public architecture of Ireland reflects the country's past role in the British Empire, as most Irish cities and towns were either designed or remodeled as Ireland evolved with Britain. Since independence, much of the architectural iconography and symbolism, in terms of statues, monuments, museums, and landscaping, has reflected the sacrifices of those who fought for Irish freedom.
Residential and business architecture is similar to that found elsewhere in the British Isles and Northern Europe. The Irish put great emphasis on nuclear families establishing residences independent of the residences of the families from which the husband and wife hail, with the intention of owning these residences; Ireland has a very high percentage of owner-occupiers. As a result, the suburbanization of Dublin is resulting in a number of social, economic, transportation, architectural, and legal problems that Ireland must solve in the near future.

The informality of Irish culture, which is one thing that Irish people believe sets them apart from British people, facilitates an open and fluid approach between people in public and private spaces. Personal space is small and negotiable; while it is not common for Irish people to touch each other when walking or talking, there is no prohibition on public displays of emotion, affection, or attachment. Humor, literacy, and verbal acuity are valued; sarcasm and humor are the preferred sanctions if a person transgresses the few rules that govern public social interaction.

Food and Economy

Food in Daily Life. The Irish diet is similar to that of other Northern European nations. There is an emphasis on the consumption of meat, cereals, bread, and potatoes at most meals. Vegetables such as cabbage, turnips, carrots, and broccoli are also popular as accompaniments to the meat and potatoes. Traditional Irish daily eating habits, influenced by a farming ethos, involved four meals: breakfast, dinner (the midday meal and the main one of the day), tea (in early evening, and distinct from "high tea" which is normally served at 4:00 p.m. and is associated with British customs), and supper (a light repast before retiring). Roasts and stews, of lamb, beef, chicken, ham, pork, and turkey, are the centerpieces of traditional meals. Fish, especially salmon, and seafood, especially prawns, are also popular meals. Until recently, most shops closed at the dinner hour (between 1:00 and 2:00 p.m.) to allow staff to return home for their meal. These patterns, however, are changing, because of the growing importance of new lifestyles, professions, and patterns of work, as well as the increased consumption of frozen, ethnic, take-out, and processed foods. Nevertheless, some foods (such as wheaten breads, sausages, and bacon rashers) and some drinks (such as the national beer, Guinness, and Irish whiskey) maintain their important gustatory and symbolic roles in Irish meals and socializing. Regional dishes, consisting of variants on stews, potato casseroles, and breads, also exist. The public house is an essential meeting place for all Irish communities, but these establishments traditionally seldom served dinner. In the past pubs had two separate sections, that of the bar, reserved for males, and the lounge, open to men and women. This distinction is eroding, as are expectations of gender preference in the consumption of alcohol.

Food Customs at Ceremonial Occasions. There are few ceremonial food customs. Large family gatherings often sit down to a main meal of roast chicken and ham, and turkey is becoming the preferred dish for Christmas (followed by Christmas cake or plum pudding). Drinking behavior in pubs is ordered informally, in what is perceived by some to be a ritualistic manner of buying drinks in rounds.

Basic Economy. Agriculture is no longer the principal economic activity.
Industry accounts for 38 percent of gross domestic product (GDP) and 80 percent of exports, and employs 27 percent of the workforce. During the 1990s Ireland enjoyed annual trade surpluses, falling inflation, and increases in construction, consumer spending, and business and consumer investment. Unemployment was down (from 12 percent in 1995 to around 7 percent in 1999) and emigration declined. As of 1998, the labor force consisted of 1.54 million people; as of 1996, 62 percent of the labor force was in services, 27 percent in manufacturing and construction, and 10 percent in agriculture, forestry, and fishing. In 1999 Ireland had the fastest growing economy in the European Union. In the five years to 1999 GDP per capita rose by 60 percent, to approximately $22,000 (U.S.). Despite its industrialization, Ireland still is an agricultural country, which is important to its self-image and its image for tourists. As of 1993, only 13 percent of its land was arable, while 68 percent was devoted to permanent pastures. While all Irish food producers consume a modest amount of their product, agriculture and fishing are modern, mechanized, and commercial enterprises, with the vast bulk of production going to the national and international markets. Although the image of the small-holding subsistence farmer persists in art, literary, and academic circles, Irish farming and farmers are as advanced in technology and technique as most of their European neighbors. Poverty persists, however, among farmers with small holdings, on poor land, particularly in many parts of the west and south. These farmers, who to survive must rely more on subsistence crops and mixed farming than do their more commercial neighbors, involve all family members in a variety of economic strategies. These activities include off-farm wage labor and the acquisition of state pensions and unemployment benefits ("the dole").

Land Tenure and Property. Ireland was one of the first countries in Europe in which peasants could purchase their landholdings. Today all but a very few farms are family-owned, although some mountain pasture and bog lands are held in common. Cooperatives are principally production and marketing enterprises. An annually changing proportion of pasture and arable land is leased out each year, usually for an eleven-month period, in a traditional system known as conacre.

Major Industries. The main industries are food products, brewing, textiles, clothing, and pharmaceuticals, and Ireland is fast becoming known for its roles in the development and design of information technologies and financial support services. In agriculture the main products are meat and dairy, potatoes, sugar beets, barley, wheat, and turnips. The fishing industry concentrates on cod, haddock, herring, mackerel, and shellfish (crab and lobster). Tourism increases its share of the economy annually; in 1998 total tourism and travel earnings were $3.1 billion (U.S.).

Trade. Ireland had a consistent trade surplus at the end of the 1990s. In 1997 this surplus amounted to $13 billion (U.S.). Ireland's main trading partners are the United Kingdom, the rest of the European Union, and the United States.

Division of Labor. In farming, daily and seasonal tasks are divided according to age and gender. Most public activities that deal with farm production are handled by adult males, although some agricultural production associated with the domestic household, such as eggs and honey, is marketed by adult females.
Neighbors often help each other with their labor or equipment when seasonal production demands, and this network of local support is sustained through ties of marriage, religion and church, education, political party, and sports. While in the past most blue-collar and wage-labor jobs were held by males, women have increasingly entered the workforce over the last generation, especially in tourism, sales, and information and financial services. Wages and salaries are consistently lower for women, and employment in the tourism industry is often seasonal or temporary. There are very few legal age or gender restrictions to entering professions, but here too men dominate in numbers if not also in influence and control. Irish economic policy has encouraged foreign-owned businesses, as one way to inject capital into underdeveloped parts of the country. The United States and the United Kingdom top the list of foreign investors in Ireland.

Classes and Castes. The Irish often perceive that their culture is set off from their neighbors by its egalitarianism, reciprocity, and informality, wherein strangers do not wait for introductions to converse, the first name is quickly adopted in business and professional discourse, and the sharing of food, tools, and other valuables is commonplace. These leveling mechanisms alleviate many pressures engendered by class relations, and often belie rather strong divisions of status, prestige, class, and national identity. While the rigid class structure for which the English are renowned is largely absent, social and economic class distinctions exist, and are often reproduced through educational and religious institutions, and the professions. The old British and Anglo-Irish aristocracy are small in number and relatively powerless. They have been replaced at the apex of Irish society by the wealthy, many of whom have made their fortunes in business and professions, and by celebrities from the arts and sports worlds. Social classes are discussed in terms of working class, middle class, and gentry, with certain occupations, such as farmers, often categorized according to their wealth, such as large and small farmers, grouped according to the size of their landholding and capital. The social boundaries between these groups are often indistinct and permeable, but their basic dimensions are clearly discernible to locals through dress, language, conspicuous consumption, leisure activities, social networks, and occupation and profession. Relative wealth and social class also influence life choices, perhaps the most important being that of primary and secondary school, and university, which in turn affects one's class mobility. Some minority groups, such as Travellers, are often portrayed in popular culture as being outside or beneath the accepted social class system, making escape from the underclass as difficult for them as for the long-term unemployed of the inner cities.

Symbols of Social Stratification. Use of language, especially dialect, is a clear indicator of class and other social standing. Dress codes have relaxed over the last generation, but the conspicuous consumption of important symbols of wealth and success, such as designer clothing, good food, travel, and expensive cars and houses, provides important strategies for class mobility and social advancement.

Government. The Republic of Ireland is a parliamentary democracy.
The National Parliament (Oireachtas) consists of the president (directly elected by the people), and two houses: Dáil Éireann (House of Representatives) and Seanad Éireann (Senate). Their powers and functions derive from the constitution (enacted 1 July 1937). Representatives to Dáil Éireann, who are called Teachta Dála, or TDs, are elected through proportional representation with a single transferable vote. While legislative power is vested in the Oireachtas, all laws are subject to the obligations of European Community membership, which Ireland joined in 1973. The executive power of the state is vested in the government, composed of the Taoiseach (prime minister) and the cabinet. While a number of political parties are represented in the Oireachtas, governments since the 1930s have been led by either the Fianna Fáil or the Fine Gael party, both of which are center-right parties. County Councils are the principal form of local government, but they have few powers in what is one of the most centralized states in Europe.

Leadership and Political Officials. Irish political culture is marked by its postcolonialism, conservatism, localism, and familism, all of which were influenced by the Irish Catholic Church, British institutions and politics, and Gaelic culture. Irish political leaders must rely on their local political support—which depends more on their roles in local society, and their real or imagined roles in networks of patrons and clients—than it does on their roles as legislators or political administrators. As a result there is no set career path to political prominence, but over the years sports heroes, family members of past politicians, publicans, and military people have had great success in being elected to the Oireachtas. Pervasive in Irish politics is admiration and political support for politicians who can provide pork barrel government services and supplies to their constituents (very few Irish women reach the higher levels of politics, industry, and academia). While there has always been a vocal left in Irish politics, especially in the cities, since the 1920s these parties have seldom been strong, with the occasional success of the Labour Party being the most notable exception. Most Irish political parties do not provide clear and distinct policy differences, and few espouse the political ideologies that characterize other European nations. The major political division is that between Fianna Fáil and Fine Gael, the two largest parties, whose support still derives from the descendants of the two opposing sides in the Civil War, which was fought over whether to accept the compromise treaty that divided the island into the Irish Free State and Northern Ireland. As a result, the electorate does not vote for candidates because of their policy initiatives, but because of a candidate's personal skill in achieving material gain for constituents, and because the voter's family has traditionally supported the candidate's party. This voting pattern depends on local knowledge of the politician, and the informality of local culture, which encourages people to believe that they have direct access to their politicians. Most national and local politicians have regular open office hours where constituents can discuss their problems and concerns without having to make an appointment.

Social Problems and Control. The legal system is based on common law, modified by subsequent legislation and the constitution of 1937.
Judicial review of legislation is made by the Supreme Court, which is appointed by the president of Ireland on the advice of the government. Ireland has a long history of political violence, which is still an important aspect of life in Northern Ireland, where paramilitary groups such as the IRA have enjoyed some support from people in the Republic. Under emergency powers acts, certain legal rights and protections can be suspended by the state in the pursuit of terrorists. Crimes of nonpolitical violence are rare, though some, such as spousal and child abuse, may go unreported. Most major crimes, and the crimes most important in popular culture, are those of burglary, theft, larceny, and corruption. Crime rates are higher in urban areas, which in some views results from the poverty endemic to some inner cities. There is a general respect for the law and its agents, but other social controls also exist to sustain moral order. Such institutions as the Catholic Church and the state education system are partly responsible for the overall adherence to rules and respect for authority, but there is an anarchic quality to Irish culture that sets it off from its neighboring British cultures. Interpersonal forms of informal social control include a heightened sense of humor and sarcasm, supported by the general Irish values of reciprocity, irony, and skepticism regarding social hierarchies.

Military Activity. The Irish Defence Forces have army, naval service and air corps branches. The total membership of the permanent forces is approximately 11,800, with 15,000 serving in the reserves. While the military is principally trained to defend Ireland, Irish soldiers have served in most United Nations peacekeeping missions, in part because of Ireland's policy of neutrality. The Defence Forces play an important security role on the border with Northern Ireland. The Irish National Police, An Garda Siochána, is an unarmed force of approximately 10,500 members.

Social Welfare and Change Programs

The national social welfare system mixes social insurance and social assistance programs to provide financial support to the ill, the aged, and the unemployed, benefitting roughly 1.3 million people. State spending on social welfare comprises 25 percent of government expenditures, and about 6 percent of GDP. Other relief agencies, many of which are connected to the churches, also provide valuable financial assistance and social relief programs for the amelioration of the conditions of poverty and inequity.

Nongovernmental Organizations and Other Associations

Civil society is well-developed, and nongovernmental organizations serve all classes, professions, regions, occupations, ethnic groups, and charitable causes. Some are very powerful, such as the Irish Farmers Association, while others, such as the international charitable support organization, Trócaire, a Catholic agency for world development, command widespread financial and moral support. Ireland is one of the highest per capita contributors to private international aid in the world. Since the creation of the Irish state a number of development agencies and utilities have been organized in partly state-owned bodies, such as the Industrial Development Agency, but these are slowly being privatized.

Gender Roles and Statuses

While gender equality in the workplace is guaranteed by law, remarkable inequities exist between the genders in such areas as pay, access to professional achievement, and parity of esteem in the workplace.
Certain jobs and professions are still considered by large segments of the population to be gender linked. Some critics charge that gender biases continue to be established and reinforced in the nation's major institutions of government, education, and religion. Feminism is a growing movement in rural and urban areas, but it still faces many obstacles among traditionalists.

Marriage, Family, and Kinship

Marriage. Marriages are seldom arranged in modern Ireland. Monogamous marriages are the norm, as supported and sanctioned by the state and the Christian churches. Divorce has been legal since 1995. Most spouses are selected through the expected means of individual trial and error that have become the norm in Western European society. The demands of farm society and economy still place great pressure on rural men and women to marry, especially in some relatively poor rural districts where there is a high migration rate among women, who go to the cities or emigrate in search of employment and social standing commensurate with their education and social expectations. Marriage festivals for farm men and women, the most famous of which takes place in the early autumn in Lisdoonvarna, have served as one way to bring people together for possible marriage matches, but the increased criticism of such practices in Irish society may endanger their future. The estimated marriage rate per thousand people in 1998 was 4.5. While the average ages of partners at marriage continue to be higher than in other Western societies, the ages have dropped over the last generation.

Domestic Unit. The nuclear family household is the principal domestic unit, as well as the basic unit of production, consumption, and inheritance in Irish society.

Inheritance. Past rural practices of leaving the patrimony to one son, thereby forcing his siblings into wage labor, the church, the army, or emigration, have been modified by changes in Irish law, gender roles, and the size and structure of families. All children have legal rights to inheritance, although a preference still lingers for farmers' sons to inherit the land, and for a farm to be passed on without division. Similar patterns exist in urban areas, where gender and class are important determinants of the inheritance of property and capital.

Kin Groups. The main kin group is the nuclear family, but extended families and kindreds continue to play important roles in Irish life. Descent is from both parents' families. Children in general adopt their father's surnames. Christian (first) names are often selected to honor an ancestor (most commonly, a grandparent), and in the Catholic tradition most first names are those of saints. Many families continue to use the Irish form of their names (some "Christian" names are in fact pre-Christian and untranslatable into English). Children in the national primary school system are taught to know and use the Irish language equivalent of their names, and it is legal to use one's name in either of the two official languages.

Child Rearing and Education. Socialization takes place in the domestic unit, in schools, at church, through the electronic and print media, and in voluntary youth organizations. Particular emphasis is placed on education and literacy; 98 percent of the population aged fifteen and over can read and write. The majority of four-year-olds attend nursery school, and all five-year-olds are in primary school. More than three thousand primary schools serve 500,000 children.
Most primary schools are linked to the Catholic Church, and receive capital funding from the state, which also pays most teachers' salaries. Post-primary education involves 370,000 students, in secondary, vocational, community, and comprehensive schools.

Higher Education. Third-level education includes universities, technological colleges, and education colleges. All are self-governing, but are principally funded by the state. About 50 percent of youth attend some form of third-level education, half of whom pursue degrees. Ireland is world famous for its universities, which are the University of Dublin (Trinity College), the National University of Ireland, the University of Limerick, and Dublin City University.

General rules of social etiquette apply across ethnic, class, and religious barriers. Loud, boisterous, and boastful behavior is discouraged. Unacquainted people look directly at each other in public spaces, and often say "hello" in greeting. Outside of formal introductions greetings are often vocal and are not accompanied by a handshake or kiss. Individuals maintain a public personal space around themselves; public touching is rare. Generosity and reciprocity are key values in social exchange, especially in the ritualized forms of group drinking in pubs.

Religious Beliefs. The Irish Constitution guarantees freedom of conscience and the free profession and practice of religion. There is no official state religion, but critics point to the special consideration given to the Catholic Church and its agents since the inception of the state. In the 1991 census 92 percent of the population were Roman Catholic, 2.4 percent belonged to the Church of Ireland (Anglican), 0.4 percent were Presbyterians, and 0.1 percent were Methodists. The Jewish community comprised 0.04 percent of the total, while approximately 3 percent belonged to other religious groups. No information on religion was returned for 2.4 percent of the population. Christian revivalism is changing many of the ways in which the people relate to each other and to their formal church institutions. Folk cultural beliefs also survive, as evidenced in the many holy and healing places, such as the holy wells that dot the landscape.

Religious Practitioners. The Catholic Church has four ecclesiastical provinces, which encompass the whole island, thus crossing the boundary with Northern Ireland. The Archbishop of Armagh in Northern Ireland is the Primate of All Ireland. The diocesan structure, in which thirteen hundred parishes are served by four thousand priests, dates to the twelfth century and does not coincide with political boundaries. There are approximately twenty thousand people serving in various Catholic religious orders, out of a combined Ireland and Northern Ireland Catholic population of 3.9 million. The Church of Ireland, which has twelve dioceses, is an autonomous church within the worldwide Anglican Communion. Its Primate of All Ireland is the Archbishop of Armagh, and its total membership is 380,000, 75 percent of whom are in Northern Ireland. There are 312,000 Presbyterians on the island (95 percent of whom are in Northern Ireland), grouped into 562 congregations and twenty-one presbyteries.

Rituals and Holy Places. In this predominantly Catholic country there are a number of Church-recognized shrines and holy places, most notably that of Knock, in County Mayo, the site of a reported apparition of the Blessed Mother.
Traditional holy places, such as holy wells, attract local people at all times of the year, although many are associated with particular days, saints, rituals, and feasts. Internal pilgrimages to such places as Knock and Croagh Patrick (a mountain in County Mayo associated with Saint Patrick) are important aspects of Catholic belief, which often reflect the integration of formal and traditional religious practices. The holy days of the official Irish Catholic Church calendar are observed as national holidays.

Death and the Afterlife. Funerary customs are inextricably linked to various Catholic Church religious rituals. While wakes continue to be held in homes, the practice of utilizing funeral directors and parlors is gaining in popularity.

Medicine and Health Care

Medical services are provided free of charge by the state to approximately a third of the population. All others pay minimal charges at public health facilities. There are roughly 128 doctors for every 100,000 people. Various forms of folk and alternative medicines exist throughout the island; most rural communities have locally known healers or healing places. Religious sites, such as the pilgrimage destination of Knock, and rituals are also known for their healing powers.

The national holidays are linked to national and religious history, such as Saint Patrick's Day, Christmas, and Easter, or are seasonal bank and public holidays which occur on Mondays, allowing for long weekends.

The Arts and Humanities

Literature. The literary renaissance of the late nineteenth century integrated the hundreds-year-old traditions of writing in Irish with those of English, in what has come to be known as Anglo-Irish literature. Some of the greatest writers in English over the last century were Irish: W. B. Yeats, George Bernard Shaw, James Joyce, Samuel Beckett, Frank O'Connor, Seán O'Faoláin, Seán O'Casey, Flann O'Brien, and Seamus Heaney. They and many others have constituted an unsurpassable record of a national experience that has universal appeal.

Graphic Arts. High, popular, and folk arts are highly valued aspects of local life throughout Ireland. Graphic and visual arts are strongly supported by the government through its Arts Council and the 1997-formed Department of Arts, Heritage, Gaeltacht, and the Islands. All major international art movements have their Irish representatives, who are often equally inspired by native or traditional motifs. Among the most important artists of the century are Jack B. Yeats and Paul Henry.

Performance Arts. Performers and artists are especially valued members of the Irish nation, which is renowned internationally for the quality of its music, acting, singing, dancing, composing, and writing. U2 and Van Morrison in rock, Daniel O'Donnell in country, James Galway in classical, and the Chieftains in Irish traditional music are but a sampling of the artists who have been important influences on the development of international music. Irish traditional music and dance have also spawned the global phenomenon of Riverdance. Irish cinema celebrated its centenary in 1996. Ireland has been the site and the inspiration for the production of feature films since 1910. Major directors (such as Neil Jordan and Jim Sheridan) and actors (such as Liam Neeson and Stephen Rea) are part of a national interest in the representation of contemporary Ireland, as symbolized in the state-sponsored Film Institute of Ireland.
The State of the Physical and Social Sciences

The government is the principal source of financial support for academic research in the physical and social sciences, which are broadly and strongly represented in the nation's universities and in government-sponsored bodies, such as the Economic and Social Research Institute in Dublin. Institutions of higher learning draw relatively high numbers of international students at both undergraduate and postgraduate levels, and Irish researchers are to be found in all areas of academic and applied research throughout the world.

—Thomas M. Wilson
Cells 2014, 3(2), 258–287; doi:10.3390/cells3020258

Abstract: Transient receptor potential (TRP) channels constitute an ancient family of cation channels that have been found in many eukaryotic organisms from yeast to human. TRP channels exert a multitude of physiological functions ranging from Ca2+ homeostasis in the kidney to pain reception and vision. These channels are activated by a wide range of stimuli and undergo covalent post-translational modifications that affect and modulate their subcellular targeting, their biophysical properties, or channel gating. These modifications include N-linked glycosylation, protein phosphorylation, and covalent attachment of chemicals that reversibly bind to specific cysteine residues. The latter modification represents an unusual activation mechanism of ligand-gated ion channels that is in contrast to the lock-and-key paradigm of receptor activation by its agonists. In this review, we summarize the post-translational modifications identified on TRP channels and, when available, explain their physiological role.

1. Transient Receptor Potential Channels

The first transient receptor potential (TRP) channel was uncovered in the compound eye of Drosophila melanogaster. A Drosophila mutant was isolated that showed a transient electrical light response in electroretinographic recordings upon application of a prolonged light stimulus. The name transient receptor potential for this mutant was coined in 1975 by Minke and colleagues. The corresponding TRP gene was cloned by Montell and colleagues, and the TRP protein was first suggested to be a Ca2+-permeable ion channel by Hardie and Minke. Later, it became obvious that TRP channels constitute a large family of cation channels that have been found in many eukaryotic organisms from yeast to human. TRP channels serve a multitude of functions ranging from sensory functions such as pain reception and vision to Ca2+ homeostasis. TRP channels exhibit considerable sequence homology and share six predicted transmembrane regions and intracellular N- and C-termini. The Drosophila TRP channel belongs to the subfamily of TRPC (canonical) channels. The TRPV (vanilloid) and TRPM (melastatin) subfamilies display the strongest similarity to TRPC channels. Other subfamilies encompass the TRPN (NOMPC-like), TRPA (ankyrin transmembrane proteins), TRPML (mucolipin), and TRPP (polycystin) channels. The channel pore is mainly formed by a pore-forming loop between the fifth and sixth transmembrane domain upon tetramerization of TRP subunits. Recently, the structure of TRPV1 has been resolved to 3.4 Å resolution by single-particle cryo-electron microscopy [5,6]. These studies revealed a channel pore with a dual gating mechanism composed of a selectivity filter formed by the S5–S6 pore loop, which is located near the outer surface of the channel, and a second, lower gate formed by parts of the S6 helix. Both gates are allosterically coupled. Agonists like the spider toxin DkTx bind close to and activate the upper gate, while the hydrophobic agonists capsaicin and resiniferatoxin bind to and activate the lower gate deeper within the membrane. In general, activation of TRP channels can either occur via a receptor and a signaling cascade that finally culminates in the opening of the channel, which is the canonical activation mechanism for TRPC channels, or the channel itself is a receptor, as exemplified by TRPV1.
2. The Various Types of Post-Translational Modifications

Post-translational modification of proteins is defined as the processing of a protein during or after biosynthesis. Protein processing comprises regulated proteolysis of the polypeptide chain, attachment of coenzymes such as heme groups, and covalent modifications of amino acid residues. The latter will be the focus of this review. Post-translational modifications largely enhance the flexibility and variability of an organism's proteome; that is, they allow the generation of a huge number of different proteins from a relatively limited pool of genes. In addition, many post-translational modifications are reversible and have a regulatory role. This includes regulation of the subcellular localization of proteins and control of protein-protein interactions. Post-translational modifications may also affect protein stability or regulate the activity of enzymes and ion channels. The same protein can undergo various post-translational modifications that may have opposite effects on protein function (see for example phosphorylation of TRPV1, Section 6). The post-translational modifications of TRP channels that are discussed in this review are summarized in Table 1.

Table 1. Post-translational modifications of TRP channels discussed in this review.

| Channel | Function | Modification | Modified Site | Location of Modified Site | Modification Regulates | Reference |
|---|---|---|---|---|---|---|
| TRPA1 (see Figure 1A) | thermo-sensation (noxious cold), chemo-sensation, nociception, O2 sensing | covalent modification by electrophiles | C621 C641 C665 | N-terminus | channel gating | |
| | | covalent modification by electrophiles | C414 C421 C621 | | | |
| | | covalent modification by inflammatory mediators | C421 C621 | | | |
| | | | C856 | 2nd intracellular loop | | |
| TRPC3 | BDNF signaling in the brain | N-linked glycosylation | N418 | 1st extracellular loop | channel activity (by surface expression?) | |
| | | phosphorylation by PKC | T646 S712 | 2nd intracellular loop | channel gating | [13,14] |
| | | phosphorylation by PKG | T11 S263 | N-terminus | | |
| | | phosphorylation by Src kinase | | | | |
| TRPC5 | brain development | S-nitrosylation | C553 C558 | adjacent to pore-forming loop | channel gating | |
| | | phosphorylation by PKC | T970 | | | |
| TRPC6 | signaling in smooth muscle | N-linked glycosylation | N473 N561 | 1st and 2nd extracellular loop | channel activity (by surface expression?) | |
| | | phosphorylation by PKC | | | channel inhibition | |
| | | phosphorylation by Src family kinase Fyn | T970 | C-terminus | carbachol-mediated desensitization | |
| | | phosphorylation by CaMKII | | | channel activation | |
| TRPM4b | regulation of Ca2+ entry into the cell | N-linked glycosylation | N988 | adjacent to pore-forming loop | surface expression | |
| TRPM7 | Mg2+ homeostasis and reabsorption in kidney and intestine, cell migration | autophosphorylation | several sites | C-terminus | substrate recognition | |
| TRPM8 | thermo-sensation (cold), sperm motility, acrosome reaction | N-linked glycosylation | N934 | adjacent to pore-forming loop | response to cold and menthol | [24,25,26] |
| | | polyester modification | several sites | N-terminus and S3-S4 linker | channel function | |
| TRPV1 (see Figure 1B) | thermo-sensation (heat), nociception | N-linked glycosylation | N604 | adjacent to pore-forming loop | ligand binding or gating properties | [28,29] |
| | | cysteine modification | C158 | N-terminus | activation by cysteine-modifying compounds | |
| | | cysteine modification | C158 C387 C391 | | sensitization by oxidative stress | |
| | | phosphorylation by PKC | S502 | 1st intracellular loop | potentiation | |
| | | phosphorylation by PKA | S117 | N-terminus | prevention of desensitization | |
| | | | S502 | 1st intracellular loop | | |
| | | phosphorylation by c-Src | Y200 | N-terminus | surface expression | |
| | | phosphorylation by CaMKII | S502 | 1st intracellular loop | channel activity | |
| TRPV2 | thermo-sensation (noxious heat), nociception | N-linked glycosylation | N570 (alignment) | adjacent to pore-forming loop | | [37,38] |
| TRPV4 | tonicity sensing | N-linked glycosylation | N651 | adjacent to pore-forming loop | channel activity (through surface expression?) | |
| | | phosphorylation by Src | Y253 | N-terminus | channel activity | [39,40] |
| TRPV5 | Ca2+ reabsorption in kidney | N-linked glycosylation | N358 | 1st extracellular loop | surface expression | |
| TRPV6 | Ca2+ reabsorption in intestine | N-linked glycosylation | N357 | 1st extracellular loop | surface expression | |
| dTRP (see Figure 2A) | generation of the photoreceptor potential | phosphorylation | S15 | N-terminus | | [42,43] |
| dTRPL (see Figure 2B) | generation of the photoreceptor potential | phosphorylation | S20 | N-terminus | | |
| | | | S730 S927 | C-terminus | channel stability | |

3. N-Linked Glycosylation of TRP Channels

Glycosylation is the enzymatically catalyzed covalent addition of sugars to lipids or proteins. Proteins can be glycosylated at the hydroxyl group of Ser and Thr residues (O-linked glycosylation) or at the amino group of Asn residues (N-linked glycosylation) that are part of an Asn-X-Ser/Thr consensus sequence, whereby X can be any amino acid except Pro. N-linked glycosylation is the prevalent covalent modification of eukaryotic proteins and serves many functions. It is involved in subcellular targeting of proteins and the protection of proteins against denaturation and proteolysis, affects protein turnover, influences the charge and the isoelectric point of proteins, promotes rigidity of proteins, and helps membrane proteins to take up the proper orientation within the bilayer [45,46]. For N-linked glycosylation, a generic oligosaccharide consisting of 14 sugars (two N-acetylglucosamines, nine mannoses, and three glucoses) is synthesized at the ER membrane. The monosaccharide building blocks are successively added to dolichyl pyrophosphate, a lipid carrier residing within the ER membrane. Upon completion of this oligosaccharide, it is transferred en bloc to a nascent target protein by oligosaccharyl transferase, which recognizes the Asn-X-Ser/Thr amino acid motif. Subsequently, the oligosaccharide is trimmed, leaving a core oligosaccharide composed of two N-acetylglucosamines and three mannose residues. To achieve the vast variety of glycosylation patterns, the core oligosaccharides are altered by glycosyltransferases and glycosidases in the endoplasmic reticulum and in the Golgi complex. While glycosyltransferases catalyze the addition of a sugar to a specific oligosaccharide, glycosidases promote the hydrolyzation of certain sugars from a specific oligosaccharide. The addition or removal of a certain monosaccharide generates the substrate for the next enzyme that specifically recognizes its substrate to catalyze the addition or removal of another monosaccharide. Mammalian cells resort to nine different monosaccharide building blocks.
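The consensus-site rule just described lends itself to a quick computational scan when candidate N-glycosylation sites are sought in a channel sequence, which is essentially the first step taken in several of the studies discussed below before sites were tested by mutagenesis. The sketch below is illustrative only: the sequence fragment is an invented placeholder rather than a real TRP channel sequence, and a match merely flags a candidate sequon, since actual glycosylation also requires the Asn to face the ER lumen or extracellular side.

```python
import re

def find_sequons(seq: str):
    """Return 1-based positions of Asn residues in N-X-S/T sequons (X != Pro)."""
    hits = []
    # Zero-width lookahead so overlapping sequons are all reported.
    for m in re.finditer(r"(?=(N[^P][ST]))", seq.upper()):
        hits.append(m.start() + 1)  # position of the Asn, 1-based
    return hits

# Invented fragment for illustration only (not TRPV5, TRPV6, or any real channel).
fragment = "MAGNQSLTRINPTWNETGAPNVSHR"
print(find_sequons(fragment))  # -> [4, 15, 21]; N11 is skipped because Pro follows
```

Such a scan is only a first filter; the topology mapping and site-directed mutagenesis experiments described in the following paragraphs are what assign a functional glycosylation site with confidence.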
An example of a TRP channel that is modified by N-linked glycosylation is the vanilloid receptor subtype 5 (TRPV5). TRPV5 is expressed in the distal convoluted tubules of the kidney, where it facilitates Ca2+ uptake from the glomerular filtrate on the apical extracellular side into the cell. Interestingly, TRPV5 is among the TRP channels with the highest Ca2+ selectivity. Within the cell, Ca2+ binds to calbindin-D28K and by diffusion reaches the basolateral membrane, where it is released into blood vessels. Consistently, TRPV5 knockout mice display impaired renal Ca2+ reabsorption and reduced bone thickness. The TRPV5 knockout mice were used by Chang and co-workers in an attempt to identify proteins that are involved in Ca2+ homeostasis. In TRPV5 knockout mice, the expression of a gene product called Klotho was diminished. In the wild type, Klotho was abundantly expressed in the kidney. At that time, it was already known that an insertional mutation of the klotho gene led to a compilation of phenotypes reminiscent of aging, including a short lifespan, arteriosclerosis, osteoporosis, and infertility, observed in the mouse. When Chang and colleagues co-expressed mouse TRPV5 and KLOTHO in human embryonic kidney (HEK 293) cells, they observed an increase in cellular Ca2+ uptake. Since KLOTHO had been detected in extracellular liquids such as urine, serum, and cerebrospinal fluid, they reasoned that KLOTHO might regulate TRPV5 activity from the extracellular side of the cell. Therefore, they treated TRPV5-expressing cells with KLOTHO-containing supernatant from KLOTHO-expressing cells. Indeed, application of extracellular KLOTHO also increased TRPV5 channel activity. KLOTHO had been shown to exhibit β-glucuronidase activity. Consistently, β-glucuronidase treatment of TRPV5-expressing cells mimicked the effect of incubation with KLOTHO-containing supernatant. Abolition of a predicted N-glycosylation site, Asn358, by exchange to Gln prevented the KLOTHO-induced increase of TRPV5 channel activity. Finally, the KLOTHO-mediated rise in channel activity could be attributed to an increased expression of TRPV5 at the plasma membrane. Taken together, the authors concluded that the β-glucuronidase KLOTHO hydrolyzes the extracellular glycan attached to TRPV5 and thereby traps the ion channel in the apical membrane in the distal nephron. Imbalances of this process have deleterious effects, as exemplified by the klotho mutant mouse. Notably, TRPV6, which is believed to facilitate intestinal Ca2+ resorption, is highly homologous to TRPV5. Presence of KLOTHO also increased the activity of TRPV6, and mutation of a predicted N-glycosylation site (exchange of Asn357 to Gln357) within the first extracellular loop abolished the KLOTHO-mediated increase in activity. Together with the fact that KLOTHO is present in extracellular fluids, it is likely that KLOTHO regulates the activity of other channels by cleavage of extracellular glycans. Recently, it has been shown that tissue transglutaminase cross-links the N-glycosylated fraction of TRPV5, leading to its inactivation, probably by structural changes that reduce the pore diameter. This finding points to a complex regulation of TRPV5.

Vannier and colleagues set out to map the membrane topology of human TRPC3 by exploiting the fact that N-linked glycosylation occurs exclusively at sites that are exposed to the extracellular side. En passant, they identified Asn418, residing within the first extracellular loop, as an endogenous TRPC3 glycosylation site. Through introduction of glycosylation sites by site-directed mutagenesis and assessment of the glycosylation patterns of the resulting mutant TRPC3, the authors were able to generate a map of the transmembrane topology of TRPC3.
As a result, intracellular localization of the N- and C-termini and the existence of six transmembrane domains could be verified, but the first of seven predicted regions with the potential to span the membrane was shown to be located within the cytosol. Besides glycosylation of the first extracellular loop, TRPC6 is additionally glycosylated at the second extracellular loop. While TRPC3 exhibits constitutive channel activity, TRPC6 activity is tightly regulated by diacylglycerol. However, by removal of the second glycosylation site, TRPC6 could be converted to a constitutively active channel. Conversely, introduction of a second N-glycosylation site into the second extracellular loop of TRPC3 led to a reduction of TRPC3 basal activity. These results suggest that glycosylation of TRPC3 and TRPC6 can profoundly affect channel activity.

Besides glycosylation of the first and second extracellular loop, N-linked glycosylation at the pore-forming third extracellular loop between the fifth and sixth transmembrane helix has been reported for TRPV1, TRPV4, TRPM8, and TRPM4b [22,24,28,29,37,38,39,40]. TRPV1 is expressed in a subset of nociceptive neurons. Activation of TRPV1 by noxious heat, by spider toxins, or by capsaicin, the pungent ingredient of chili peppers, ultimately leads to the perception of pain in mammals. Therefore, TRPV1 is a polymodal detector of pain-inducing chemical and physical stimuli. By identification of possible N-linked glycosylation sites and subsequent topology prediction, Jahnel and colleagues identified Asn604 as a putative N-linked glycosylation site of the rat TRPV1 channel. Mutation of Asn604 resulted in absence of TRPV1 glycosylation, demonstrating that this is the only glycosylation site. When Wirkner and coworkers expressed TRPV1 in which the Asn604 glycosylation site was ablated by exchange to Thr, they observed a decreased sensitivity towards capsaicin and thus concluded that glycosylation might regulate ligand binding or gating properties of TRPV1. However, since the three-dimensional structure of TRPV1 revealed a capsaicin binding site within the membrane [5,6], distant to Asn604, regulation of ligand binding by extracellular glycosylation of Asn604 seems unlikely. Interestingly, Asn604 is absent from chicken TRPV1, which is insensitive towards activation by capsaicin. This work group also observed N-linked glycosylation of overexpressed TRPV2, which is gated by heat, akin to TRPV1, but not by capsaicin. Although the glycosylation site of TRPV2 has not been mapped experimentally, Cohen conducted a sequence alignment of TRP segments that harbor known glycosylation sites and identified a conserved N-linked glycosylation motif near the pore-forming loop of TRPV2. Glycosylation adjacent to the pore-forming loop has also been shown for TRPV4 [39,40]. TRPV4 is strongly expressed in the distal convoluted tubules of the kidney and is gated by hypotonicity [53,54]. When Xu and co-workers treated TRPV4 from stably transfected HEK293 cells with endoglycosidase F, an enzyme that removes N-linked oligosaccharides from proteins, they observed increased electrophoretic mobility of TRPV4 and thus inferred that TRPV4 is glycosylated. In a follow-up publication, the authors identified Asn651 as the only possible N-linked glycosylation site that is situated on the extracellular side according to membrane topology prediction. When Asn651 was mutated to Gln, they observed increased channel activity upon hypotonic stress.
These results may suggest that glycosylation of extracellular loops affects channel gating. However, in the case of TRPV4, a larger portion of the mutated channel was located in the plasma membrane as compared to the wild-type channel, which may account for the observed increase in channel activity as assessed by fura-2 ratiometry. In contrast to TRPV1, the TRPM8 ion channel is activated by low temperatures and cooling compounds such as menthol and icilin [55,56,57,58]. The TRPM8 glycosylation site was mapped to Asn934 within the pore region [25,26]. Ablation of this glycosylation site by exchange to Gln did not affect plasma membrane localization or multimerization [25,26] but resulted in a decrease of the response to cold and menthol. Treatment of trigeminal sensory neurons with tunicamycin, a drug that inhibits N-glycosylation of proteins, mimicked the effect of ablation of the glycosylation site. Thus, the effect of the N-linked glycosylation on channel activity was shown both for heterologously expressed as well as for native TRPM8 channels. TRPM4b was shown to undergo N-glycosylation at Asn988 (Asn992 in human TRPM4b) in the pore-forming loop. Disruption of this N-glycosylation site resulted in faster disappearance of TRPM4b from the membrane, pointing to a role of Asn988 glycosylation in stabilization of surface expression.

4. Covalent Modification of TRP Cysteine Residues

For decades, it has been assumed that a signaling molecule interacts with its receptors in a non-covalent manner by fitting into a receptor binding pocket because of its three-dimensional structure. This is known as the lock-and-key model of receptor activation. In the recent past, this model has been challenged by the finding that odorant receptors recognize odorants by their molecular vibrations rather than by their shape [59,60]. This new model has become known as the swipe card model of odorant recognition. Yet another mechanism of receptor activation involves reversible covalent modification of the receptor by its ligands, which has been described for several TRP channels.

4.1. Covalent Modification of TRPC5

One such covalent modification is protein S-nitrosylation, which conveys redox-based signals of the cell. The signaling molecule, nitric oxide (NO), is covalently attached to the thiol group of a cysteine residue within the target protein. Since cellular Ca2+ and NO signals are widely used and coordinated with each other, Yoshida and colleagues sought to identify an abundantly expressed Ca2+ channel that is regulated by NO. By heterologous expression of a variety of TRPC and TRPM channels in HEK cells, they identified TRPC5 as the TRP channel exhibiting the strongest activation by NO donors such as S-nitroso-N-acetyl-DL-penicillamine (SNAP) and (E)-4-ethyl-2-[(E)-hydroxyimino]-5-nitro-3-hexenamide. When the membrane-impermeable agent 5,5'-dithiobis(2-nitrobenzoic acid) (DTNB) was administered, no rise of intracellular Ca2+ levels could be observed. However, providing DTNB from the intracellular side resulted in increased intracellular Ca2+ levels. Consistently, extracellular application of the membrane-permeable methanethiosulfonate derivative 2-aminoethyl methanethiosulfonate hydrobromide (MTSEA), but not of the membrane-impermeable [2-(trimethylammonium)ethyl] methanethiosulfonate bromide (MTSET), resulted in TRPC5 activation. The authors concluded that the cysteine residues that are modified by TRPC5 agonists are located at the intracellular side.
To identify these residues, every single cysteine residue was exchanged for serine, and the resulting mutant TRPC5 channels were tested for agonist sensitivity. As a result, mutation of Cys553 and Cys558 led to a drastic reduction of TRPC5 activity upon stimulation by agonists. These two residues are located N-terminal of the pore-forming loop and are conserved in other TRP channels of the TRPC and TRPV subfamilies. Interestingly, sensitivity towards protons and temperature was increased in TRPV1 upon application of SNAP. To investigate the physiological role of NO-mediated TRPC5 activation in a native system, Yoshida and co-workers used bovine aortic endothelial cells that had been reported to express TRPC5 [62,63]. Native Ca2+ influx induced by NO donors was suppressed by expression of a dominant-negative TRPC5 or by downregulation of TRPC5 via an siRNA approach, showing that TRPC5 considerably contributes to Ca2+ influx in this system. Since ATP had been shown to activate endothelial NO synthase (eNOS) via G protein-coupled receptor stimulation, the authors tried to reproduce this pathway in the endothelial cells. Application of ATP led to an increase in intracellular Ca2+ levels that was blocked by eNOS-targeted siRNA as well as by suppressors of S-nitrosylation. Taken together, these results provide compelling evidence that S-nitrosylation is a post-translational modification regulating the activity of TRP channels in the context of NO signaling.

4.2. Covalent Modification of TRPA1

TRPA1 is expressed in the termini of nociceptive sensory neurons [64,65,66] and is activated by noxious exogenous compounds and endogenous proinflammatory mediators [10,67,68,69,70,71], resulting in cation influx and depolarization of the neuron that is ultimately perceived as pain by mammals. Plants produce such pungent substances to protect themselves from herbivores. Mustard, for example, produces the pungent mustard oil allyl isothiocyanate (AITC), and cinnamon produces cinnamaldehyde, to name a few. As noticed in parallel by Hinman and colleagues as well as Macpherson and colleagues, most of these substances readily react with thiols and primary amines, which led both groups to the assumption that these compounds covalently modify their receptor rather than activate it by specifically fitting into binding pockets in a lock-and-key manner [8,9]. To test their hypothesis, Hinman and colleagues made use of the synthetic electrophilic cysteine-modifying agent N-methylmaleimide (NMM) because this compound irreversibly reacts with sulfhydryl side chains under physiological conditions. Indeed, whereas oocytes expressing human TRPA1 perfused with AITC showed a transient electrical response, application of NMM led to a persistent response. The NMM-induced response could be terminated by application of ruthenium red, a channel blocker, demonstrating channel specificity. By mutagenesis of candidate cysteine residues, the authors identified three cysteine residues, Cys619, Cys639, and Cys663 (Cys621, Cys641, and Cys665 in human TRPA1), residing within the predicted cytosolic N-terminus, that confer sensitivity of TRPA1 towards electrophiles. The respective mutant TRPA1 exhibited strongly reduced sensitivity towards NMM and AITC, but sensitivity towards Δ9-tetrahydrocannabinol, a non-electrophile agonist of TRPA1, was unaffected.
This shows that the three identified cysteine residues specifically confer sensitivity towards electrophilic substances and that non-electrophilic substances gate this channel by another mechanism. Using click chemistry, Macpherson and co-workers showed that mustard oil and cinnamaldehyde derivatives covalently bound to murine TRPA1. The authors also demonstrated that the cysteine-modifying agents iodoacetamide (IAA) and MTSEA activate mouse TRPA1 expressed in HEK 293 cells. Mutation of Cys415, Cys422, and Cys622 (Cys414, Cys421, and Cys621 in human TRPA1) resulted in absence of TRPA1 stimulation by cinnamaldehyde and cold stimuli.

Takahashi and co-workers reported that the human TRPA1 channel is activated by a variety of inflammatory mediators such as nitric oxide (NO), 15-deoxy-Δ12,14-prostaglandin J2 (15d-PGJ2), hydrogen peroxide (H2O2), and protons (H+). Site-directed mutagenesis of cytoplasmic N-terminal cysteine residues revealed that Cys421 and Cys621 mediate 15d-PGJ2 susceptibility of TRPA1. Wang and colleagues identified four different disulfide bonds that are formed between five different cysteine residues in vivo. These different constellations of disulfide bonds may result in different conformational modalities of TRPA1 that might provide different binding pockets for effector molecules as well as altered accessibility of cysteine residues for covalent modification.

Takahashi and colleagues uncovered a role of TRPA1 as an O2 sensor. When the authors expressed a panel of redox-sensitive TRP channels in HEK 293 cells, they observed a rise in intracellular Ca2+ levels upon hyperoxia and hypoxia in cells that expressed TRPA1. TRPA1 exhibited hydroxylation at Pro394 that was removed upon hypoxia, which led to channel activation. Current responses to hyperoxia were drastically reduced when Cys633 or Cys856 residues of TRPA1 were mutated. The authors proposed that under normoxia, TRPA1 is hydroxylated at Pro394 within the N-terminal ankyrin repeat by prolyl hydroxylases (PHDs), inactivating the channel. Under hypoxia, decreased O2 concentrations lower the activity of PHDs, leading to elevated levels of active TRPA1 that is not modified at Pro394. Under hyperoxia, O2 directly oxidizes Cys633 or Cys856 or both. Oxidation overrides the inactivation by hydroxylation at Pro394 and activates the channel.

In Drosophila, TRPA1 is expressed in gustatory chemosensors to prevent the fly from taking up toxic electrophiles. The discovery of Drosophila TRPA1 and its role as a receptor of toxic electrophilic compounds points to a common ancestral TRPA1 chemosensor for pungent substances. Thus, the detection of toxic electrophiles via covalent cysteine modification of TRPA1, which has the important biological function of prevention of intoxication, may be conserved from insects to man. Collectively, current data point to a complex regulation of the TRPA1 channel by a large panel of substances, some of which covalently bind to different cysteine residues. Formation of intramolecular disulfide bonds seems to influence TRPA1 conformation and might regulate the accessibility of other cysteine residues. In some cases, activation of TRPA1 depends on the chemical nature of agonists rather than on their three-dimensional structure.

4.3. Covalent Modification of TRPV1

Besides TRPC5 and TRPA1, TRPV1 has been reported to undergo covalent cysteine modifications mediated by pungent compounds and by oxidative stress [30,31].
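Because residue numbering differs between orthologs (for example, Cys619 in rodent TRPA1 corresponds to Cys621 in the human channel, and similar conversions recur throughout this section and the next), it can be convenient to translate positions through a pairwise alignment. The sketch below is a hypothetical helper that assumes the two sequences have already been aligned with any standard tool; the toy alignment shown is invented and is not real TRPA1 or TRPV1 sequence.

```python
# Hedged sketch: convert a residue number between two orthologous channel
# sequences using a precomputed pairwise alignment (gaps written as '-').
def map_position(aln_a, aln_b, pos_a):
    """Map a 1-based residue position in sequence A to the matching 1-based
    position in sequence B, or return None if it aligns to a gap."""
    count_a = count_b = 0
    for res_a, res_b in zip(aln_a, aln_b):
        if res_a != "-":
            count_a += 1
        if res_b != "-":
            count_b += 1
        if res_a != "-" and count_a == pos_a:
            return count_b if res_b != "-" else None
    return None  # position lies beyond the aligned region

if __name__ == "__main__":
    aligned_rodent = "MK-CDEFG"   # placeholder alignment with a gap in the rodent sequence
    aligned_human  = "MKACDEFG"
    print(map_position(aligned_rodent, aligned_human, 3))  # residue 3 maps to position 4
```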
When the reports describing the activation of TRPA1 by pungent chemicals through covalent modification were published, the activation mechanism of TRPV1 by such substances was still controversial. Therefore, Salazar and co-workers set out to investigate TRPV1 activation by electrophiles. To identify cells expressing TRPV1, they treated dissociated mouse dorsal root ganglia (DRG) neurons with capsaicin, the best-established agonist of TRPV1. In these cells, besides capsaicin, extracts from onion as well as garlic elicited considerable current increases. Allicin, a component of both onion and garlic, was identified as the active compound, since allicin application elicited electrical responses of DRG neurons. Treatment with ruthenium red resulted in abolition of allicin-induced currents, showing that these currents were mediated by a TRP channel. Application of the reducing agent dithiothreitol (DTT) also resulted in robust suppression of these currents. Removal of allicin did not result in spontaneous reversal of the currents. The latter two observations led the authors to assume that activation of TRPV1 by allicin might be mediated by covalent modification, akin to TRPA1 activation.

To delineate the contribution of TRPA1 and TRPV1 to allicin-elicited responses, the authors used dissociated DRG neurons from Trpv1−/− and Trpa1−/− mice. Application of allicin induced action potentials and activated currents in DRG neurons from both Trpv1−/− mice expressing TRPA1 and Trpa1−/− mice expressing TRPV1. In a behavioral assay, the authors injected allicin into the paws of the Trpv1−/− and Trpa1−/− mice and their wild type littermates and determined the time the animals spent licking the injected paw. As a result, both Trpv1−/− and Trpa1−/− mice injected with allicin spent drastically more time licking their paws than control-injected animals, indicating that TRPV1 or TRPA1 alone is able to elicit this behavior. Interestingly, the allicin-injected wild type littermates spent more time licking their paws than the single knockout mutants, pointing to a synergistic effect of TRPV1 and TRPA1 in this scenario.

In transfected HEK 293 cells expressing rat TRPV1, extracts from onion and garlic as well as allicin also induced currents that were reversed by DTT. Currents provoked by capsaicin, which does not modify cysteine residues, were not reversed by DTT. Pretreatment with allicin led to elevated capsaicin-induced currents. The cysteine-reactive compound MTSEA elicited electrical responses in DRG neurons as well as in TRPV1-expressing HEK 293 cells. Pretreatment with MTSEA as well as MTSET led to elevated capsaicin-induced currents. Moreover, MTSEA resulted in an increased open probability of TRPV1, indicating that cysteine modification alone is sufficient to activate the channel. To identify cysteine residues that mediate sensitivity to cysteine-reactive compounds, Salazar and co-workers individually mutated the 18 cysteine residues of the rat TRPV1 primary sequence. Mutation of a single residue, Cys157 of rat TRPV1 (Cys158 in human TRPV1), residing in the predicted intracellular N-terminal region, resulted in insensitivity of the channel to MTSEA and garlic and onion extracts. Response to capsaicin, however, was not impaired. To provide further evidence that Cys157 is the only residue that is needed to confer TRPV1 sensitivity to cysteine-reactive compounds, the authors generated a cysteineless TRPV1 mutant as well as a cysteineless TRPV1 in which only Cys157 was present.
The cysteineless TRPV1 channel was functional and could still be activated by capsaicin but not by MTSEA or by onion and garlic extracts. In contrast, the cysteineless TRPV1 harboring Cys157 recovered activation by MTSEA and onion and garlic extracts, which could be reversed by DTT. These experiments provide strong evidence for Cys157 as the only residue that confers sensitivity of TRPV1 to cysteine-reactive compounds.

Treating human TRPV1-expressing HEK293 cells with hydrogen peroxide (H2O2), which mimics oxidative injuries, Chuang and Lin observed a slow but robust increase of the currents elicited by capsaicin. This sensitization of the channel could be reversed by strong reducing agents such as 2,3-dimercaptopropanol (BAL) or dithiothreitol (DTT), suggesting that H2O2-mediated oxidation of cysteine residues confers TRPV1 sensitization. To investigate the sidedness of cysteines involved in this process, the authors used membrane-impermeable dithio-bis-nitrobenzoic acid (DTNB) and membrane-permeable phenylarsine oxide (PAO), both of which link adjacent thiol groups. Extracellular application of PAO but not DTNB elicited a TRPV1 response, indicating that the modulation-sensitive cysteines reside within the cell. These cysteine residues were then identified in a reversion mutagenesis approach using chicken TRPV1. First, all cysteine residues were mutated; then single residues were reverted to Cys and the resulting TRPV1 channels were tested. As a result, Cys772 and Cys783 single revertants were identified as being amenable to PAO sensitization. Since these single revertants harbor only one Cys residue, the authors concluded that these Cys residues form disulfide bonds between different TRPV1 subunits. To identify Cys residues that form intra-subunit disulfide bonds, sets of Cys residues were reverted. As a result, a double revertant harboring Cys393 and Cys397 exhibited PAO sensitization. Co-expression of Cys393 and Cys397 single revertants did not restore PAO sensitivity, suggesting intra-subunit formation of a disulfide bond. Notably, two Cys residues near the N-terminus and one Cys residue in the vicinity of the C-terminus mediated PAO-induced suppression of basal activity of chicken TRPV1. These residues are absent from mammalian TRPV1. Three of the four Cys residues that were found to mediate oxidative sensitization in chicken TRPV1 are conserved in mammalian TRPV1 (Cys387, Cys391, and Cys767 in human TRPV1). Since Cys158 (Cys157 in rat TRPV1) had been reported to be covalently modified by allicin, Chuang and Lin generated a quadruple mutant in which all four sites (Cys158, Cys387, Cys391, and Cys767) were mutated. The resulting channel was insensitive to H2O2 and PAO treatment. Triple mutants harboring mutations in Cys158, Cys387, and Cys767 or in Cys158, Cys391, and Cys767 were resistant to PAO. The authors concluded that upon oxidative treatment, Cys387 and Cys391 form an intra-subunit disulfide bond while Cys158 or Cys767 form disulfide bonds between different channel subunits.

Upon prolonged or repeated exposure to capsaicin, sensory neurons become unresponsive to capsaicin due to a reduction of the Ca2+ influx, a phenomenon referred to as desensitization. Therefore, Chuang and Lin tested how oxidative treatment regulates desensitization. Upon oxidative treatment of TRPV1 that was desensitized by prolonged exposure to capsaicin, the authors observed strong sensitization.
Conversely, pretreatment of TRPV1-expressing cells with PAO led to reduced desensitization by prolonged exposure to capsaicin. Finally, H2O2 treatment showed drastic synergism when applied in conjunction with phorbol dibutyrate, which promotes phosphorylation of TRPV1, and with low pH. Taken together, these results demonstrate that oxidative stress is a major signal modulating TRPV1 channel activity that can override desensitization and augment previous stimuli.

5. Polyester Modification of TRPM8

A less common post-translational protein modification is the attachment of polyesters that are covalently bound to hydroxyl groups of amino acid side chains and, in addition, form multiple hydrophobic interactions [76,77,78]. Recently, Cao and colleagues reported the identification of a poly-hydroxybutyrate (PHB) modification of mammalian TRPM8 by a mass spectrometry approach. The authors found a large number of putative PHB modification sites within the N-terminus and in the S3-S4 linker region of TRPM8. Enzymatic removal of PHB and mutation of PHB-modified serine residues, as well as of adjacent hydrophobic amino acids that might interact with PHB methyl groups, led to TRPM8 channel inhibition. The authors concluded that PHB modification of TRPM8 is a prerequisite for its normal function.

6. Phosphorylation of Mammalian TRP Channels

Phosphorylation is an abundant reversible post-translational modification of proteins that is involved in regulation of a multitude of cellular processes. Protein phosphorylation is mediated by kinases that catalyze the addition of a phosphoryl group to a hydroxyl group of a serine, threonine, or tyrosine residue. All eukaryotic protein kinases share a common structure of the catalytic core domain and belong to the same superfamily. According to structural and functional properties, eukaryotic protein kinases are grouped into eight families. Protein dephosphorylation is mediated by phosphatases that catalyze the hydrolysis of phosphoryl groups. Exhibiting different catalytic mechanisms and structures, protein phosphatases are more heterogeneous than protein kinases. Protein phosphatases are divided into two groups, the serine/threonine protein phosphatases (STPs) and the protein tyrosine phosphatases (PTPs). Whereas protein kinases preferentially phosphorylate their target proteins at certain residues that are flanked by a consensus sequence, protein phosphatases are believed to operate in a less specific manner. The phosphorylation level of a certain residue of a certain protein at a given time and physiological condition is the result of the activities of the kinases and phosphatases that phosphorylate or dephosphorylate this residue. Thus, kinases and phosphatases have to be tightly regulated in order to generate certain phosphorylation patterns.

6.1. Phosphorylation of TRPC Channels

The members of the canonical TRP receptor subfamily TRPC3, TRPC6, and TRPC7 are activated by the membrane-permeant diacylglycerol (DAG) analogue 1-oleoyl-2-acetyl-sn-glycerol (OAG), and this activation is reversed by the PKC activator phorbol 12-myristate 13-acetate (PMA) [19,79,80,81]. Trebak and colleagues showed that application of PMA indeed resulted in increased phosphorylation of TRPC3 in vivo. By comparing TRPC3, TRPC6, and TRPC7 amino acid sequences, they identified conserved candidate PKC phosphorylation sites. Mutation of Ser712 to Ala resulted in abolition of PKC-mediated inhibition of TRPC3. The moonwalker mouse mutant exhibits cerebellar ataxia and abnormal Purkinje cell development.
This phenotype is caused by a point mutation of Thr635 (Thr646 in human TRPC3) that results in reduced PKC-mediated phosphorylation of TRPC3 in moonwalker mice. The mutated TRPC3 channel displays abnormal gating that leads to the death of Purkinje cells. Kwan and colleagues observed cGMP-mediated inhibition of TRPC3. Mutation of two putative protein kinase G phosphorylation sites, Thr11 and Ser263, reduced cGMP-mediated channel inhibition. In contrast to the inhibitory effect of phosphorylation of TRPC3 by PKC and PKG, phosphorylation of TRPC3 by Src kinase is required for its activation by diacylglycerol. Zhu and co-workers observed activation of mouse TRPC5 by carbachol, which activates muscarinic receptors. However, application of carbachol resulted in rapid desensitization of TRPC5. This desensitization of TRPC5 was blocked by inhibitors of PKC. Mutation of candidate PKC phosphorylation sites led to identification of Thr972 (Thr970 in human TRPC5). Exchange of Thr972 to Ala resulted in a marked decrease of carbachol-mediated desensitization of TRPC5. Akin to TRPC3 and TRPC5, TRPC6 is inhibited by PKC-mediated phosphorylation, but phosphorylation by the Src family protein kinase Fyn increases its activity. TRPC6 phosphorylation by Ca2+/calmodulin-dependent kinase II (CaMKII) is a prerequisite for channel activation because inhibition of CaMKII prevented channel activation by carbachol.

6.2. Phosphorylation of TRPV1

Besides N-glycosylation and covalent modifications of cysteine residues, TRPV1 undergoes phosphorylation. Cesare and McNaughton observed that the heat-activated current in small cultured dorsal root ganglion neurons was sensitized by application of bradykinin. The sensitization by bradykinin was mimicked by activators of protein kinase C and prolonged by phosphatase inhibitors. Although five PKC isoforms were present in sensory neurons, only PKCε translocated to the plasma membrane upon application of bradykinin. Constitutively active PKCε sensitized the heat response, whereas bradykinin-induced sensitization was suppressed by a specific inhibitor of PKCε. To delineate PKC-dependent TRPV1 phosphorylation sites that mediate sensitization, Numazaki and colleagues expressed rat TRPV1 in HEK293 cells and mutated predicted intracellular Ser and Thr residues to Ala. Potentiation of capsaicin-induced currents by PMA and ATP was reduced when Ser502 or Ser800 (Ser502 and Ser801 in human TRPV1) were mutated. Ser502 resides in the first intracellular loop connecting transmembrane regions two and three, and Ser800 is located in the intracellular C-terminal region. Consistently, phosphorylation of TRPV1 fragments harboring Ser502Ala or Ser800Ala was drastically reduced as compared to wild type fragments in an in vitro kinase assay using PKCε. Because sensitization of TRPV1 by PKA had been proposed, Rathee and co-workers investigated the effects of PKA on heat-induced currents through TRPV1 and found that activation of the cAMP/PKA cascade by forskolin potentiated the heat-induced current in rat DRG neurons. Mutation of three putative PKA phosphorylation sites, Thr144, Thr370, and Ser502 of the rat TRPV1 channel (Thr145, Thr371, and Ser502 in human TRPV1), and expression in HEK 293 cells resulted in strongly decreased forskolin potentiation of heat-induced currents. Bhave and colleagues showed that activation of PKA led to phosphorylation of TRPV1 and prevented TRPV1 desensitization.
By mutagenesis of candidate phosphorylation sites, they identified Ser116 of rat TRPV1 (Ser117 in human TRPV1) as the site that mediates PKA-dependent blockage of desensitization. Collectively, these results suggest that PKA phosphorylates TRPV1 at Thr145, Thr371, and Ser502 in human TRPV1 and thereby sensitizes the channel. Phosphorylation of Ser116 (rat numbering) by PKA prevents desensitization of TRPV1.

Since Ca2+ influx into the nerve cell is the major outcome of TRPV1 stimulation, Docherty and co-workers tested the possibility of a Ca2+-mediated desensitization of TRPV1. Using rat DRG neurons, the authors found that desensitization of capsaicin-induced responses was reduced when extracellular Ca2+ concentrations were lowered. Moreover, desensitization was inhibited by a Ca2+ chelator and by a specific inhibitor of the Ca2+/calmodulin-dependent protein phosphatase 2B (calcineurin). Therefore, they proposed that the capsaicin-induced rise of intracellular Ca2+ levels activates phosphatase 2B, which in turn dephosphorylates TRPV1 to promote desensitization. Testing a panel of kinases, Jung and colleagues found that only Ca2+/calmodulin-dependent kinase II (CaMKII) was able to render TRPV1 susceptible to activation by capsaicin after previous desensitization. The authors concluded that phosphorylation of TRPV1 by CaMKII is a prerequisite for channel activation and mutated candidate CaMKII phosphorylation sites in rat TRPV1. They found that channel activity was abolished when two candidate CaMKII phosphorylation sites, Ser502 and Thr704 (Ser502 and Thr705 in human TRPV1), were ablated simultaneously. Mutation of putative PKA and PKC consensus motifs failed to disrupt capsaicin-induced channel activity. Activation of TRPV1 by acid was not impaired in the double mutant and did not exhibit desensitization in the wild type.

Jin and co-workers examined the role of the cellular tyrosine kinase c-Src in the modulation of rat TRPV1. They observed that capsaicin-induced currents through rat TRPV1 were abolished by the c-Src inhibitor PP2 and reduced when dominant-negative c-Src was co-expressed. Conversely, capsaicin-induced currents through rat TRPV1 were elevated by sodium orthovanadate, a tyrosine phosphatase inhibitor. Additionally, Tyr-phosphorylated TRPV1 and Src kinase were shown by co-immunoprecipitation to interact. Zhang and colleagues showed that nerve growth factor (NGF) signaling ultimately led to activation of Src kinase. Src kinase phosphorylated TRPV1 at Tyr200, resulting in increased surface expression of TRPV1. The authors proposed that sensitization of TRPV1 by NGF can largely be explained by subcellular trafficking of TRPV1 to the plasma membrane that is induced by Src kinase-dependent phosphorylation of TRPV1.

In general, phosphorylation sensitizes TRPV1 to activating stimuli, whereas dephosphorylation renders it less susceptible to activation. Multiple kinases that are activated through different signaling pathways phosphorylate TRPV1 at different sites that exert different functions. Therefore, TRPV1 is not only activated by diverse ligands and heat but is also regulated through inputs from diverse signaling pathways. It is therefore believed that TRPV1 serves as a signal integrator combining different input signals that ultimately result in a certain Ca2+ level in the nociceptive cell.

6.3. Phosphorylation of TRPV4

The TRPV4 channel is activated by hypotonicity and is believed to function as an osmosensor in vertebrates [53,54].
Because tyrosine phosphorylation had been associated with hypotonic stress [87,88,89,90], Xu and colleagues used an anti-phosphotyrosine antibody to investigate tyrosine phosphorylation in immunoprecipitates from HEK293 cells stably transfected with V5-tagged TRPV4. Upon induction of hypotonic stress, they observed a transient increase of TRPV4 tyrosine phosphorylation. To confirm that TRPV4 tyrosine phosphorylation also occurred in a native system, endogenous TRPV4 was precipitated from a murine distal convoluted tubule cell line using an anti-peptide antibody that was raised against the C-terminus of murine TRPV4. Using the anti-phosphotyrosine antibody, the authors confirmed that Tyr phosphorylation of native TRPV4 was up-regulated by hypotonic stress. Treatment of TRPV4-overexpressing cells with PP1, a specific inhibitor of the Src family of kinases, led to diminished Tyr phosphorylation under hypotonic conditions in a dose-dependent manner. In contrast, treatment with genistein, a general tyrosine kinase inhibitor, and with piceatannol, an inhibitor of the Syk protein-tyrosine kinase, did not significantly impair Tyr phosphorylation. By immunoprecipitation, TRPV4 was shown to physically interact with a panel of Src family tyrosine kinases. However, Lyn was the only Src family tyrosine kinase that displayed a hypotonicity- and time-dependent interaction with TRPV4. In support of this result, Lyn and V5-tagged TRPV4 colocalized in stably transfected HEK293 cells, as shown by confocal microscopy using anti-Lyn and anti-V5 antibodies. Overexpression of wild type Lyn by transient transfection of HEK293 cells stably transfected with TRPV4 resulted in increased Tyr phosphorylation of TRPV4, as assessed with a phosphotyrosine antibody after immunoprecipitation. Overexpression of dominant-negative Lyn led to a modest reduction of TRPV4 Tyr phosphorylation. Next, the authors ablated several predicted TRPV4 phosphorylation sites by exchange to Phe and expressed the mutated V5-tagged mouse TRPV4 proteins in COS7 cells. Tyr phosphorylation was assessed using an anti-phosphotyrosine antibody. As a result, TRPV4 in which Tyr253 was changed to Phe displayed strongly reduced levels of Tyr phosphorylation. HEK293 cells expressing the Tyr253 mutant of TRPV4 did not exhibit the hypotonicity-induced Ca2+ transients that were observed in HEK293 cells expressing wild type TRPV4. Taken together, the authors identified a single phosphorylation site of TRPV4 that is phosphorylated by a Src family tyrosine kinase under hypotonicity, which is a prerequisite for gating of the channel.

6.4. Phosphorylation of TRPM7

TRPM7 is ubiquitously expressed and, interestingly, in addition to its function as a cation channel, displays serine/threonine kinase activity and autophosphorylation [23,91]. TRPM7 is activated by PLC-coupled receptor agonists such as thrombin, lysophosphatidic acid, and bradykinin and regulates actomyosin contractility by phosphorylating the C-terminus of the myosin IIA heavy chain. Actomyosin contractility has been associated with various cellular processes such as cell migration, shape, adhesion, and cytokinesis [94,95,96]. Phosphorylation of myosin IIA is promoted by massive autophosphorylation of the TRPM7 C-terminus at 46 autophosphorylation sites that are located N-terminal of the kinase domain. Abrogation of this highly phosphorylated region suppressed substrate phosphorylation but not kinase activity per se.
These results suggest that autophosphorylation of the TRPM7 C-terminus promotes substrate recognition rather than kinase activity of TRPM7.

7. Phosphorylation of Drosophila TRP and TRPL

7.1. Drosophila Phototransduction

The Drosophila phototransduction cascade is located within a specialized compartment of the photoreceptor cells in the compound eye, the rhabdomere. The rhabdomere is built by finger-shaped evaginations of the apical membrane, known as microvilli. Proteins of the phototransduction cascade are located within or at the cytoplasmic surface of the rhabdomeric membrane. A subset of these proteins is tethered by the inactivation no afterpotential D (INAD) scaffolding protein. Interaction between INAD and its binding partners is mediated by PDZ domains (named after the first three proteins that were identified to harbor these domains: postsynaptic density protein, Drosophila disc large tumor suppressor, and zonula occludens-1 protein). INAD harbors five of these domains, each consisting of approximately 90 amino acids. Through interaction of binding partners with two or more INAD molecules and through INAD-INAD interactions, a supramolecular network, called the INAD signaling complex, is formed. Binding partners of INAD comprise phospholipase Cβ (PLCβ), eye-specific protein kinase C (ePKC), and the TRP ion channel [97,99,100,101,102]. Interestingly, in Drosophila, the TRP channel serves as an anchor that attaches INAD and the other INAD binding partners to the rhabdomeric membrane. In addition to TRP, a second light-activated channel, TRP-like (TRPL), is also present in the rhabdomeric membrane, but is probably not attached to the INAD signaling complex.

The phototransduction cascade is initiated by the absorption of a photon by the visual chromophore of rhodopsin, 11-cis-3-hydroxyretinal, which is thereby isomerized to all-trans-3-hydroxyretinal. This isomerization triggers a conformational change in the opsin apoprotein, resulting in the activation of a heterotrimeric Gq protein. In the Gqα subunit, GDP is exchanged for GTP, and the Gqα subunit detaches from the Gβγ subunits and activates PLCβ. PLCβ, in turn, cleaves phosphatidylinositol-4,5-bisphosphate (PIP2), a building block of the rhabdomeric membrane, to yield soluble inositol-1,4,5-trisphosphate, diacylglycerol (DAG), which stays in the membrane, and protons. It has been shown that the decrease of PIP2 and the increase of protons are necessary for finally activating the ion channels TRP and TRPL. Recent work also shows that the cleavage of PIP2 results in a considerable change of the curvature of the rhabdomeric membrane, as manifested by a light-triggered contraction of the entire rhabdomere, which may open TRP and TRPL channels mechanically. However, the exact mechanism of TRP and TRPL activation is still under debate. Activation of TRP and TRPL results in an influx of cations that depolarizes the photoreceptor cell.

7.2. Phosphorylation of TRP

Since ePKC is a member of the INAD signaling complex, it was speculated that this protein kinase might phosphorylate other members of this complex. Indeed, by in vitro kinase assays using immunoprecipitated signaling complexes and radioactively labeled ATP, ePKC was shown to phosphorylate INAD as well as TRP [106,107,108]. However, in vitro assays may not reproduce all aspects of the physiological conditions in the cell.
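Candidate kinase target residues of the kind probed in such assays are typically shortlisted from consensus motifs before being tested biochemically. The sketch below scans a sequence for one commonly cited minimal PKC-type motif (Ser/Thr followed by a basic residue within two positions); this simplified motif definition and the toy sequence are assumptions made for illustration and are not the criteria used in the cited studies.

```python
# Illustrative sketch (not the authors' method): list Ser/Thr residues followed
# by an Arg or Lys within two positions, a crude stand-in for a PKC-type motif.
# Real kinase-substrate prediction is considerably more involved.
import re

def candidate_pkc_sites(sequence: str):
    """Yield (1-based position, residue) for Ser/Thr with R/K at +1 or +2."""
    sequence = sequence.upper()
    for m in re.finditer(r"[ST](?=.{0,1}[RK])", sequence):
        yield m.start() + 1, m.group()

if __name__ == "__main__":
    toy_seq = "GASARLTDKPSWRQ"  # placeholder, not the Drosophila TRP sequence
    print(list(candidate_pkc_sites(toy_seq)))  # -> [(3, 'S'), (7, 'T'), (11, 'S')]
```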
The first identification of an in vivo phosphorylation site of the Drosophila TRP ion channel was reported by Popescu and colleagues, who unambiguously identified Ser982 as a TRP phosphorylation site in a mass spectrometry approach. In an epkc null mutant fly, the respective phosphopeptides could not be observed. The authors concluded that Ser982 is a phosphorylation site that is phosphorylated by ePKC in vivo. A transgenic fly that expressed a modified TRP channel, in which the Ser982 phosphorylation site was ablated by exchange to Ala, displayed a prolonged deactivation of the photoresponse. However, this phenotype only became evident upon application of a very bright light stimulus. Notably, the epkc null mutant inaCP209 displayed a prolonged deactivation of the photoresponse as well, although it did so under every light intensity tested.

To identify TRP phosphorylation sites with a possible role in the regulation of vision, we analyzed TRP from light- and dark-adapted flies [42,43]. Using quantitative mass spectrometry, we were able to identify 28 phosphorylation sites, 27 of which resided in the predicted intracellular C-terminal region, while a single site resided near the N-terminus (Figure 2A). Fifteen of the C-terminal phosphorylation sites exhibited enhanced phosphorylation in the light, whereas a single site, Ser936, exhibited enhanced phosphorylation in the dark (Figure 2A). To further investigate TRP phosphorylation at light-dependent phosphorylation sites, we generated phosphospecific antibodies that specifically detected TRP phosphorylation at Thr849 and Thr864, which become phosphorylated in the light, and at Ser936, which becomes dephosphorylated in the light. We found TRP phosphorylated at Thr849, Thr864, and Ser936 to be located within the rhabdomeres, indicating that phosphorylation of these sites is not a trigger for removal of TRP from the rhabdomeres. This finding further suggests that phosphorylation or dephosphorylation of TRP takes place at the rhabdomeres.

To delineate the stage of the phototransduction cascade that is necessary to trigger dephosphorylation of Ser936 or phosphorylation of Thr849 and Thr864, we exploited available Drosophila mutants of the phototransduction cascade. Flies were light- or dark-adapted and fly heads were subjected to Western blot analyses using the phosphospecific antibodies. We observed strong phosphorylation of Ser936 in dark-adapted wild type flies but no phosphorylation in light-adapted wild type flies. Conversely, we found weak phosphorylation of Thr849 and Thr864 in dark-adapted wild type flies and strong phosphorylation in light-adapted wild type flies. These results were in accordance with our data obtained by LC-MS/MS. Additionally, we observed strong phosphorylation of Ser936 and weak phosphorylation of Thr849 and Thr864, regardless of the light conditions, in mutants of the phototransduction cascade that exhibit impaired vision. In contrast, a mutant expressing a constitutively active TRP channel exhibited weak phosphorylation of Ser936 and strong phosphorylation of Thr849 and Thr864. These data indicate that in vivo, TRP dephosphorylation at Ser936 and phosphorylation at Thr849 and Thr864 depend on the phototransduction cascade, but activation of the TRP channel is sufficient to trigger this process. To identify kinases and phosphatases of Thr849 and Thr864, we conducted a candidate screen using available mutants of kinases and phosphatases that are expressed in the eye.
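The light/dark comparison underlying these experiments can be illustrated with a toy relative quantification. The intensity values, threshold, and site names below are invented placeholders rather than the published measurements; the sketch only shows the kind of ratio-based classification described above, which the candidate screen then revisits in kinase and phosphatase mutants.

```python
# Toy sketch: classify phosphorylation sites by the ratio of phosphopeptide
# intensities measured in light- vs dark-adapted samples. All numbers invented.
def light_dark_ratios(intensities: dict[str, tuple[float, float]], threshold: float = 2.0):
    """intensities maps site -> (light, dark); returns site -> classification."""
    result = {}
    for site, (light, dark) in intensities.items():
        ratio = light / dark if dark else float("inf")
        if ratio >= threshold:
            result[site] = "enhanced in light"
        elif ratio <= 1.0 / threshold:
            result[site] = "enhanced in dark"
        else:
            result[site] = "light-independent"
    return result

if __name__ == "__main__":
    toy = {"Thr849": (8.0, 1.0), "Ser936": (0.5, 4.0), "Ser982": (2.0, 1.8)}
    print(light_dark_ratios(toy))
```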
We found that Thr849 phosphorylation was compromised in light-adapted epkc null mutants. Interestingly, we also found diminished phosphorylation in light-adapted pkc53e mutants, suggesting that these two protein kinases C synergistically phosphorylate TRP at Thr849. Light-adapted rolled and snf1a mutants displayed significantly elevated phosphorylation levels of Thr849. Rolled is a mitogen-activated protein kinase that has been implicated in eye development [110,111]. The snf1a gene encodes an AMP-activated protein kinase. AMP-activated protein kinases have been associated with the cellular energy pathway. Thr864 exhibited diminished phosphorylation in ck1a, licorne, tao-1, and mppe mutant flies. None of these mutations resulted in complete dephosphorylation of Thr864, showing that none of these enzymes exclusively controls TRP phosphorylation at Thr864. Notably, MPPE is a metallophosphoesterase that mediates the activation of α-Man II, which deglycosylates rhodopsin 1 in the process of its maturation.

7.3. Phosphorylation of TRPL

While the Drosophila TRP channel permanently resides within the rhabdomeric membrane, TRPL undergoes a light-dependent translocation from the rhabdomeric membrane to a storage compartment of yet elusive nature within the cell body. Light-dependent translocation has been demonstrated for several phototransduction proteins such as Gqα and arrestin in Drosophila and is believed to function in long-term light adaptation. Using mass spectrometry, we identified nine phosphorylated serine and threonine residues of the TRPL channel (Figure 2B). Eight of these phosphorylation sites resided within the predicted cytosolic C-terminal region, and a single site, Ser20, was located close to the TRPL N-terminus. Relative quantification revealed that Ser20 and Thr989 exhibited enhanced phosphorylation in the light, whereas Ser927, Ser1000, Ser1114, Thr1115, and Ser1116 exhibited enhanced phosphorylation in the dark. Phosphorylation of Ser730 and Ser931 was not light-dependent. To further investigate the function of the eight C-terminal phosphorylation sites, these serine and threonine residues were mutated either to alanine, eliminating phosphorylation (TRPL8x), or to aspartate, mimicking phosphorylation (TRPL8xD). The mutated TRPL channels were transgenically expressed in R1-6 photoreceptor cells of flies as trpl-eGFP fusion constructs. The mutated channels formed multimers with wild type TRPL and, when expressed in a trpl;trp double mutant background, produced electrophysiological responses indistinguishable from those produced by wild-type TRPL. These findings indicated that TRPL channels devoid of their C-terminal phosphorylation sites form fully functional channels, and they argue against a role of TRPL phosphorylation in channel gating or in the regulation of its biophysical properties.

Since TRPL undergoes light-dependent translocation, we analyzed the subcellular localization of the phosphorylation-deficient as well as the phosphomimetic TRPL-eGFP fusion proteins by water immersion microscopy (see Figure 3 for TRPL8x-eGFP). After initial dark adaptation, wild type TRPL-eGFP was located in the rhabdomeres. After 16 h of light adaptation, TRPL-eGFP was translocated to the cell body and successively returned to the rhabdomeres within 24 h of dark adaptation. eGFP fluorescence obtained from TRPL8x-eGFP displayed marked differences from the wild type.
After initial dark adaptation, a faint eGFP signal was observed in the cell body, but no eGFP signal was present in the rhabdomeres. After 16 h of light adaptation, a strong eGFP signal was observed in the cell body, akin to that observed in the wild type. This indicated that the TRPL8x-eGFP fusion construct was newly synthesized during light adaptation. After four hours of dark adaptation, TRPL8x-eGFP was present in the rhabdomere, but 20 h later, only faint eGFP fluorescence was observable in the cell bodies and none in the rhabdomeres. Using immunocytochemistry, the eGFP signal of mutated TRPL in the dark was observed in restricted regions outside of the rhabdomere and differed from the diffuse eGFP signal observed in light-adapted flies expressing either the mutated or the native TRPL-eGFP channel. This result suggests that, in dark-adapted TRPL8x-eGFP mutants, TRPL is localized in a different subcellular compartment that may be involved in degradation of TRPL. Unexpectedly, mutation of the phosphorylation sites to Asp resulted in a phenotype similar to that of mutation to Ala, comprising defects in TRPL localization and stabilization in the dark. This finding may be explained either by assuming that, in this context, Asp did not mimic phosphate groups, or by assuming that a specific pattern of TRPL phosphorylation, which changes upon exposure of the flies to light or darkness, is required for correct localization and stability of the channel. Taken together, our findings suggest that phosphorylation of TRPL at C-terminal residues is required for retaining TRPL at its physiological site of action, thereby preventing its degradation.

8. Concluding Remarks

The body of data about post-translational modifications of TRP channels is growing rapidly. However, some important questions remain unresolved. For many modifications, it has not been thoroughly evaluated what fraction of the total amount of channels is modified at a given site. In the case of TRP phosphorylation, this fraction may range from a few percent to almost complete phosphorylation of all channel molecules. In this regard, it would be important to find out whether a certain post-translational modification triggers a physiological response when present at a single subunit of a channel tetramer or only when this post-translational modification occurs at all four subunits at once. TRP regulation by modification of cysteine residues seems to be complex, in some cases involving intra- and inter-subunit disulfide bonds that might influence the accessibility of cysteines to cysteine-modifying channel activators. In future work, the conditions under which certain disulfide bonds are formed have to be determined in greater detail, and the physiological consequences have to be examined carefully. The physiological role of the massive C-terminal phosphorylation of the Drosophila TRP channel is still elusive. Generation of transgenic flies expressing TRP channels in which the phosphorylation sites are mutated will help to shed light on this question. Furthermore, even if the physiological function of a certain post-translational modification is known, it is often not clear how that physiological function is achieved mechanistically. The mechanistic understanding of how post-translational modifications exert their physiological functions will be a major challenge in this research field.

Acknowledgments

Work of the authors is supported by grants of the Deutsche Forschungsgemeinschaft (VO1741/1-1 and HU839/2-6).
Conflicts of Interest

The authors declare no conflict of interest.

References

- Cosens, D.J.; Manning, A. Abnormal electroretinogram from a Drosophila mutant. Nature 1969, 224, 285–287.
- Minke, B.; Wu, C.; Pak, W.L. Induction of photoreceptor voltage noise in the dark in Drosophila mutant. Nature 1975, 258, 84–87.
- Montell, C.; Rubin, G.M. Molecular characterization of the Drosophila trp locus: A putative integral membrane protein required for phototransduction. Neuron 1989, 2, 1313–1323.
- Hardie, R.C.; Minke, B. The trp gene is essential for a light-activated Ca2+ channel in Drosophila photoreceptors. Neuron 1992, 8, 643–651.
- Liao, M.; Cao, E.; Julius, D.; Cheng, Y. Structure of the TRPV1 ion channel determined by electron cryo-microscopy. Nature 2013, 504, 107–112.
- Cao, E.; Liao, M.; Cheng, Y.; Julius, D. TRPV1 structures in distinct conformations reveal activation mechanisms. Nature 2013, 504, 113–118.
- Nilius, B.; Owsianik, G. The transient receptor potential family of ion channels. Genome Biol. 2011, 12, 218.
- Hinman, A.; Chuang, H.-H.; Bautista, D.M.; Julius, D. TRP channel activation by reversible covalent modification. Proc. Natl. Acad. Sci. USA 2006, 103, 19564–19568.
- Macpherson, L.J.; Dubin, A.E.; Evans, M.J.; Marr, F.; Schultz, P.G.; Cravatt, B.F.; Patapoutian, A. Noxious compounds activate TRPA1 ion channels through covalent modification of cysteines. Nature 2007, 445, 541–545.
- Takahashi, N.; Mizuno, Y.; Kozai, D.; Yamamoto, S.; Kiyonaka, S.; Shibata, T.; Uchida, K.; Mori, Y. Molecular characterization of TRPA1 channel activation by cysteine-reactive inflammatory mediators. Channels (Austin) 2008, 2, 287–298.
- Takahashi, N.; Kuwaki, T.; Kiyonaka, S.; Numata, T.; Kozai, D.; Mizuno, Y.; Yamamoto, S.; Naito, S.; Knevels, E.; Carmeliet, P.; et al. TRPA1 underlies a sensing mechanism for O2. Nat. Chem. Biol. 2011, 7, 701–711.
- Dietrich, A.; Mederos y Schnitzler, M.; Emmel, J.; Kalwa, H.; Hofmann, T.; Gudermann, T. N-linked protein glycosylation is a major determinant for basal TRPC3 and TRPC6 channel activity. J. Biol. Chem. 2003, 278, 47842–47852.
- Trebak, M.; Hempel, N.; Wedel, B.J.; Smyth, J.T.; Bird, G.S.J.; Putney, J.W., Jr. Negative regulation of TRPC3 channels by protein kinase C-mediated phosphorylation of serine 712. Mol. Pharmacol. 2005, 67, 558–563.
- Becker, E.B.; Oliver, P.L.; Glitsch, M.D.; Banks, G.T.; Achilli, F.; Hardy, A.; Nolan, P.M.; Fisher, E.M.; Davies, K.E. A point mutation in TRPC3 causes abnormal Purkinje cell development and cerebellar ataxia in moonwalker mice. Proc. Natl. Acad. Sci. USA 2009, 106, 6706–6711.
- Kwan, H.Y.; Huang, Y.; Yao, X. Regulation of canonical transient receptor potential isoform 3 (TRPC3) channel by protein kinase G. Proc. Natl. Acad. Sci. USA 2004, 101, 2625–2630.
- Vazquez, G.; Wedel, B.J.; Kawasaki, B.T.; Bird, G.S.; Putney, J.W., Jr. Obligatory role of Src kinase in the signaling mechanism for TRPC3 cation channels. J. Biol. Chem. 2004, 279, 40521–40528.
- Yoshida, T.; Inoue, R.; Morii, T.; Takahashi, N.; Yamamoto, S.; Hara, Y.; Tominaga, M.; Shimizu, S.; Sato, Y.; Mori, Y.
Nitric oxide activates TRP channels by cysteine S-nitrosylation. Nat. Chem. Biol. 2006, 2, 596–607.
- Zhu, M.H.; Chae, M.; Kim, H.J.; Lee, Y.M.; Kim, M.J.; Jin, N.G.; Yang, D.K.; So, I.; Kim, K.W. Desensitization of canonical transient receptor potential channel 5 by protein kinase C. Am. J. Physiol. Cell Physiol. 2005, 289, C591–C600.
- Zhang, L.; Saffen, D. Muscarinic acetylcholine receptor regulation of TRP6 Ca2+ channel isoforms. Molecular structures and functional characterization. J. Biol. Chem. 2001, 276, 13331–13339.
- Hisatsune, C.; Kuroda, Y.; Nakamura, K.; Inoue, T.; Nakamura, T.; Michikawa, T.; Mizutani, A.; Mikoshiba, K. Regulation of TRPC6 channel activity by tyrosine phosphorylation. J. Biol. Chem. 2004, 279, 18887–18894.
- Shi, J.; Mori, E.; Mori, Y.; Mori, M.; Li, J.; Ito, Y.; Inoue, R. Multiple regulation by calcium of murine homologues of transient receptor potential proteins TRPC6 and TRPC7 expressed in HEK293 cells. J. Physiol. (Lond.) 2004, 561, 415–432.
- Woo, S.K.; Kwon, M.S.; Ivanov, A.; Geng, Z.; Gerzanich, V.; Simard, J.M. Complex N-glycosylation stabilizes surface expression of transient receptor potential melastatin 4b. J. Biol. Chem. 2013, 288, 36409–36417.
- Clark, K.; Middelbeek, J.; Morrice, N.A.; Figdor, C.G.; Lasonder, E.; van Leeuwen, F.N. Massive autophosphorylation of the Ser/Thr-rich domain controls protein kinase activity of TRPM6 and TRPM7. PLoS One 2008, 3, e1876.
- Pertusa, M.; Madrid, R.; Morenilla-Palao, C.; Belmonte, C.; Viana, F. N-glycosylation of TRPM8 ion channels modulates temperature sensitivity of cold thermoreceptor neurons. J. Biol. Chem. 2012, 287, 18218–18229.
- Dragoni, I.; Guida, E.; McIntyre, P. The cold and menthol receptor TRPM8 contains a functionally important double cysteine motif. J. Biol. Chem. 2006, 281, 37353–37360.
- Erler, I.; Al-Ansary, D.M.M.; Wissenbach, U.; Wagner, T.F.J.; Flockerzi, V.; Niemeyer, B.A. Trafficking and assembly of the cold-sensitive TRPM8 channel. J. Biol. Chem. 2006, 281, 38396–38404.
- Cao, C.; Yudin, Y.; Bikard, Y.; Chen, W.; Liu, T.; Li, H.; Jendrossek, D.; Cohen, A.; Pavlov, E.; Rohacs, T.; et al. Polyester modification of the mammalian TRPM8 channel protein: Implications for structure and function. Cell Rep. 2013, 4, 302–315.
- Jahnel, R.; Dreger, M.; Gillen, C.; Bender, O.; Kurreck, J.; Hucho, F. Biochemical characterization of the vanilloid receptor 1 expressed in a dorsal root ganglia derived cell line. Eur. J. Biochem. 2001, 268, 5489–5496.
- Wirkner, K.; Hognestad, H.; Jahnel, R.; Hucho, F.; Illes, P. Characterization of rat transient receptor potential vanilloid 1 receptors lacking the N-glycosylation site N604. Neuroreport 2005, 16, 997–1001.
- Salazar, H.; Llorente, I.; Jara-Oseguera, A.; García-Villegas, R.; Munari, M.; Gordon, S.E.; Islas, L.D.; Rosenbaum, T. A single N-terminal cysteine in TRPV1 determines activation by pungent compounds from onion and garlic. Nat. Neurosci. 2008, 11, 255–261.
- Chuang, H.-H.; Lin, S. Oxidative challenges sensitize the capsaicin receptor by covalent cysteine modification. Proc. Natl. Acad. Sci. USA 2009, 106, 20097–20102.
- Numazaki, M.; Tominaga, T.; Toyooka, H.; Tominaga, M. Direct phosphorylation of capsaicin receptor VR1 by protein kinase C epsilon and identification of two target serine residues. J. Biol. Chem. 2002, 277, 13375–13378.
- Bhave, G.; Zhu, W.; Wang, H.; Brasier, D.J.; Oxford, G.S.; Gereau, R.W., 4th. cAMP-dependent protein kinase regulates desensitization of the capsaicin receptor (VR1) by direct phosphorylation. Neuron 2002, 35, 721–731.
- Rathee, P.K.; Distler, C.; Obreja, O.; Neuhuber, W.; Wang, G.K.; Wang, S.Y.; Nau, C.; Kress, M. PKA/AKAP/VR-1 module: A common link of Gs-mediated signaling to thermal hyperalgesia. J. Neurosci. 2002, 22, 4740–4745.
- Zhang, X.; Huang, J.; McNaughton, P.A. NGF rapidly increases membrane expression of TRPV1 heat-gated ion channels. EMBO J. 2005, 24, 4211–4223.
- Jung, J.; Shin, J.S.; Lee, S.Y.; Hwang, S.W.; Koo, J.; Cho, H.; Oh, U. Phosphorylation of vanilloid receptor 1 by Ca2+/calmodulin-dependent kinase II regulates its vanilloid binding. J. Biol. Chem. 2004, 279, 7048–7054.
- Jahnel, R.; Bender, O.; Munter, L.M.; Dreger, M.; Gillen, C.; Hucho, F. Dual expression of mouse and rat VRL-1 in the dorsal root ganglion derived cell line F-11 and biochemical analysis of VRL-1 after heterologous expression. Eur. J. Biochem. 2003, 270, 4264–4271.
- Cohen, D.M. Regulation of TRP channels by N-linked glycosylation. Semin. Cell Dev. Biol. 2006, 17, 630–637.
- Xu, H.; Fu, Y.; Tian, W.; Cohen, D.M. Glycosylation of the osmoresponsive transient receptor potential channel TRPV4 on Asn-651 influences membrane trafficking. Am. J. Physiol. Renal Physiol. 2006, 290, 1103–1109.
- Xu, H.; Zhao, H.; Tian, W.; Yoshida, K.; Roullet, J.-B.; Cohen, D.M. Regulation of a transient receptor potential (TRP) channel by tyrosine phosphorylation. SRC family kinase-dependent tyrosine phosphorylation of TRPV4 on TYR-253 mediates its response to hypotonic stress. J. Biol. Chem. 2003, 278, 11520–11527.
- Chang, Q.; Hoefs, S.; van der Kemp, A.W.; Topala, C.N.; Bindels, R.J.; Hoenderop, J.G. The beta-glucuronidase klotho hydrolyzes and activates the TRPV5 channel. Science 2005, 310, 490–493.
- Voolstra, O.; Beck, K.; Oberegelsbacher, C.; Pfannstiel, J.; Huber, A. Light-dependent phosphorylation of the Drosophila transient receptor potential (TRP) ion channel. J. Biol. Chem. 2010, 285, 14275–14284.
- Voolstra, O.; Bartels, J.-P.; Oberegelsbacher, C.; Pfannstiel, J.; Huber, A. Phosphorylation of the Drosophila transient receptor potential ion channel is regulated by the phototransduction cascade and involves several protein kinases and phosphatases. PLoS One 2013, 8, e73787.
- Cerny, A.C.; Oberacker, T.; Pfannstiel, J.; Weigold, S.; Will, C.; Huber, A. Mutation of light-dependent phosphorylation sites of the Drosophila transient receptor potential-like (TRPL) ion channel affects its subcellular localization and stability. J. Biol. Chem. 2013, 288, 15600–15613.
- Ohtsubo, K.; Marth, J.D. Glycosylation in cellular mechanisms of health and disease. Cell 2006, 126, 855–867.
- Helenius, A.; Aebi, M. Roles of N-linked glycans in the endoplasmic reticulum. Annu. Rev. Biochem. 2004, 73, 1019–1049.
- Hoenderop, J.G.J.; van Leeuwen, J.P.T.M.; van der Eerden, B.C.J.; Kersten, F.F.J.; van der Kemp, A.W.C.M.; Mérillat, A.-M.; Waarsing, J.H.; Rossier, B.C.; Vallon, V.; Hummler, E.; et al. Renal Ca2+ wasting, hyperabsorption, and reduced bone thickness in mice lacking TRPV5. J. Clin. Invest. 2003, 112, 1906–1914.
- Kuro-o, M.; Matsumura, Y.; Aizawa, H.; Kawaguchi, H.; Suga, T.; Utsugi, T.; Ohyama, Y.; Kurabayashi, M.; Kaname, T.; Kume, E.; et al. Mutation of the mouse klotho gene leads to a syndrome resembling ageing. Nature 1997, 390, 45–51.
- Imura, A.; Iwano, A.; Tohyama, O.; Tsuji, Y.; Nozaki, K.; Hashimoto, N.; Fujimori, T.; Nabeshima, Y.-I. Secreted Klotho protein in sera and CSF: Implication for post-translational cleavage in release of Klotho protein from cell membrane. FEBS Lett. 2004, 565, 143–147.
- Tohyama, O.; Imura, A.; Iwano, A.; Freund, J.-N.; Henrissat, B.; Fujimori, T.; Nabeshima, Y.-I. Klotho is a novel beta-glucuronidase capable of hydrolyzing steroid beta-glucuronides. J. Biol. Chem. 2004, 279, 9777–9784.
- Boros, S.; Xi, Q.; Dimke, H.; van der Kemp, A.W.; Tudpor, K.; Verkaart, S.; Lee, K.P.; Bindels, R.J.; Hoenderop, J.G. Tissue transglutaminase inhibits the TRPV5-dependent calcium transport in an N-glycosylation-dependent manner. Cell. Mol. Life Sci. 2012, 69, 981–992.
- Vannier, B.; Zhu, X.; Brown, D.; Birnbaumer, L. The membrane topology of human transient receptor potential 3 as inferred from glycosylation-scanning mutagenesis and epitope immunocytochemistry. J. Biol. Chem. 1998, 273, 8675–8679.
- Strotmann, R.; Harteneck, C.; Nunnenmacher, K.; Schultz, G.; Plant, T.D. OTRPC4, a nonselective cation channel that confers sensitivity to extracellular osmolarity. Nat. Cell Biol. 2000, 2, 695–702.
- Liedtke, W.; Choe, Y.; Marti-Renom, M.A.; Bell, A.M.; Denis, C.S.; Sali, A.; Hudspeth, A.J.; Friedman, J.M.; Heller, S. Vanilloid receptor-related osmotically activated channel (VR-OAC), a candidate vertebrate osmoreceptor. Cell 2000, 103, 525–535.
- McKemy, D.D.; Neuhausser, W.M.; Julius, D. Identification of a cold receptor reveals a general role for TRP channels in thermosensation. Nature 2002, 416, 52–58.
- Peier, A.M.; Moqrich, A.; Hergarden, A.C.; Reeve, A.J.; Andersson, D.A.; Story, G.M.; Earley, T.J.; Dragoni, I.; McIntyre, P.; Bevan, S.; et al. A TRP channel that senses cold stimuli and menthol. Cell 2002, 108, 705–715.
- Chuang, H.-H.; Neuhausser, W.M.; Julius, D. The super-cooling agent icilin reveals a mechanism of coincidence detection by a temperature-sensitive TRP channel. Neuron 2004, 43, 859–869.
- Bödding, M.; Wissenbach, U.; Flockerzi, V. Characterisation of TRPM8 as a pharmacophore receptor. Cell Calcium 2007, 42, 618–628.
- Franco, M.I.; Turin, L.; Mershin, A.; Skoulakis, E.M.C. Molecular vibration-sensing component in Drosophila melanogaster olfaction. Proc. Natl. Acad. Sci. USA 2011, 108, 3797–3802.
- Gane, S.; Georganakis, D.; Maniati, K.; Vamvakias, M.; Ragoussis, N.; Skoulakis, E.M.C.; Turin, L. Molecular vibration-sensing component in human olfaction. PLoS One 2013, 8, e55780.
- Brookes, J.C.; Horsfield, A.P.; Stoneham, A.M.
The swipe card model of odorant recognition. Sensors (Basel) 2012, 12, 15709–15749.
- Yao, X.; Garland, C.J. Recent developments in vascular endothelial cell transient receptor potential channels. Circ. Res. 2005, 97, 853–863.
- Chang, A.S.; Chang, S.M.; Garcia, R.L.; Schilling, W.P. Concomitant and hormonally regulated expression of trp genes in bovine aortic endothelial cells. FEBS Lett. 1997, 415, 335–340.
- Dhaka, A.; Viswanath, V.; Patapoutian, A. Trp ion channels and temperature sensation. Annu. Rev. Neurosci. 2006, 29, 135–161.
- Montell, C. The TRP superfamily of cation channels. Sci. STKE 2005, 2005, re3.
- Clapham, D.E. TRP channels as cellular sensors. Nature 2003, 426, 517–524.
- Jordt, S.-E.; Bautista, D.M.; Chuang, H.-H.; McKemy, D.D.; Zygmunt, P.M.; Hogestatt, E.D.; Meng, I.D.; Julius, D. Mustard oils and cannabinoids excite sensory nerve fibres through the TRP channel ANKTM1. Nature 2004, 427, 260–265.
- Bandell, M.; Story, G.M.; Hwang, S.W.; Viswanath, V.; Eid, S.R.; Petrus, M.J.; Earley, T.J.; Patapoutian, A. Noxious cold ion channel TRPA1 is activated by pungent compounds and bradykinin. Neuron 2004, 41, 849–857.
- Bautista, D.M.; Jordt, S.-E.; Nikai, T.; Tsuruda, P.R.; Read, A.J.; Poblete, J.; Yamoah, E.N.; Basbaum, A.I.; Julius, D. TRPA1 mediates the inflammatory actions of environmental irritants and proalgesic agents. Cell 2006, 124, 1269–1282.
- Bautista, D.M.; Movahed, P.; Hinman, A.; Axelsson, H.E.; Sterner, O.; Hogestatt, E.D.; Julius, D.; Jordt, S.-E.; Zygmunt, P.M. Pungent products from garlic activate the sensory ion channel TRPA1. Proc. Natl. Acad. Sci. USA 2005, 102, 12248–12252.
- Andersson, D.A.; Gentry, C.; Moss, S.; Bevan, S. Transient receptor potential A1 is a sensory receptor for multiple products of oxidative stress. J. Neurosci. 2008, 28, 2485–2494.
- Nilius, B.; Owsianik, G.; Voets, T.; Peters, J.A. Transient receptor potential cation channels in disease. Physiol. Rev. 2007, 87, 165–217.
- Wang, L.; Cvetkov, T.L.; Chance, M.R.; Moiseenkova-Bell, V.Y. Identification of in vivo disulfide conformation of TRPA1 ion channel. J. Biol. Chem. 2012, 287, 6169–6176.
- Kang, K.; Pulver, S.R.; Panzano, V.C.; Chang, E.C.; Griffith, L.C.; Theobald, D.L.; Garrity, P.A. Analysis of Drosophila TRPA1 reveals an ancient origin for human chemical nociception. Nature 2010, 464, 597–600.
- Caterina, M.J.; Schumacher, M.A.; Tominaga, M.; Rosen, T.A.; Levine, J.D.; Julius, D. The capsaicin receptor: A heat-activated ion channel in the pain pathway. Nature 1997, 389, 816–824.
- Reusch, R.N. Poly-beta-hydroxybutyrate/calcium polyphosphate complexes in eukaryotic membranes. Proc. Soc. Exp. Biol. Med. 1989, 191, 377–381.
- Reusch, R.N. Streptomyces lividans potassium channel contains poly-(R)-3-hydroxybutyrate and inorganic polyphosphate. Biochemistry 1999, 38, 15666–15672.
- Seebach, D.; Brunner, A.; Bürger, H.M.; Schneider, J.; Reusch, R.N. Isolation and 1H-NMR spectroscopic identification of poly(3-hydroxybutanoate) from prokaryotic and eukaryotic organisms.
Determination of the absolute configuration (R) of the monomeric unit 3-hydroxybutanoic acid from Escherichia coli and spinach. Eur. J. Biochem. 1994, 224, 317–328. [Google Scholar] [CrossRef] - Okada, T.; Inoue, R.; Yamazaki, K.; Maeda, A.; Kurosaki, T.; Yamakuni, T.; Tanaka, I.; Shimizu, S.; Ikenaka, K.; Imoto, K.; et al. Molecular and functional characterization of a novel mouse transient receptor potential protein homologue TRP7. Ca2+-permeable cation channel that is constitutively activated and enhanced by stimulation of G protein-coupled receptor. J. Biol. Chem. 1999, 274, 27359–27370. [Google Scholar] [CrossRef] - Trebak, M.; St J Bird, G.; McKay, R.R.; Birnbaumer, L.; Putney, J.W. Signaling mechanism for receptor-activated canonical transient receptor potential 3 (TRPC3) channels. J. Biol. Chem. 2003, 278, 16244–16252. [Google Scholar] - Venkatachalam, K.; Zheng, F.; Gill, D.L. Regulation of canonical transient receptor potential (TRPC) channel function by diacylglycerol and protein kinase C. J. Biol. Chem. 2003, 278, 29031–29040. [Google Scholar] [CrossRef] - Cesare, P.; McNaughton, P. A novel heat-activated current in nociceptive neurons and its sensitization by bradykinin. Proc. Natl. Acad. Sci. USA 1996, 93, 15435–15439. [Google Scholar] [CrossRef] - Cesare, P.; Dekker, L.V.; Sardini, A.; Parker, P.J.; McNaughton, P.A. Specific involvement of PKC-epsilon in sensitization of the neuronal response to painful heat. Neuron 1999, 23, 617–624. [Google Scholar] [CrossRef] - Lopshire, J.C.; Nicol, G.D. The cAMP transduction cascade mediates the prostaglandin E2 enhancement of the capsaicin-elicited current in rat sensory neurons: whole-cell and single-channel studies. J. Neurosci. 1998, 18, 6081–6092. [Google Scholar] - Docherty, R.J.; Yeats, J.C.; Bevan, S.; Boddeke, H.W. Inhibition of calcineurin inhibits the desensitization of capsaicin-evoked currents in cultured dorsal root ganglion neurones from adult rats. Pflugers Arch. 1996, 431, 828–837. [Google Scholar] [CrossRef] - Jin, X.; Morsy, N.; Winston, J.; Pasricha, P.J.; Garrett, K.; Akbarali, H.I. Modulation of TRPV1 by nonreceptor tyrosine kinase, c-Src kinase. Am. J. Physiol. Cell Physiol. 2004, 287, 558–563. [Google Scholar] [CrossRef] - Tilly, B.C.; van den Berghe, N; Tertoolen, L.G.; Edixhoven, M.J.; de Jonge, H.R. Protein tyrosine phosphorylation is involved in osmoregulation of ionic conductances. J. Biol. Chem. 1993, 268, 19919–19922. [Google Scholar] - Sadoshima, J.; Qiu, Z.; Morgan, J.P.; Izumo, S. Tyrosine kinase activation is an immediate and essential step in hypotonic cell swelling-induced ERK activation and c-fos gene expression in cardiac myocytes. EMBO J. 1996, 15, 5535–5546. [Google Scholar] - Zhang, Z.; Cohen, D.M. Hypotonicity increases transcription, expression, and action of Egr-1 in murine renal medullary mIMCD3 cells. Am. J. Physiol. 1997, 273, F837–F842. [Google Scholar] - Zhang, Z.; Yang, X.Y.; Cohen, D.M. Hypotonicity activates transcription through ERK-dependent and -independent pathways in renal cells. Am. J. Physiol. 1998, 275, 1104–1112. [Google Scholar] - Runnels, L.W.; Yue, L.; Clapham, D.E. TRP-PLIK, a bifunctional protein with kinase and ion channel activities. Science 2001, 291, 1043–1047. [Google Scholar] [CrossRef] - Langeslag, M.; Clark, K.; Moolenaar, W.H.; van Leeuwen, F.N.; Jalink, K. Activation of TRPM7 channels by phospholipase C-coupled receptor agonists. J. Biol. Chem. 2007, 282, 232–239. 
[Google Scholar] - Clark, K.; Langeslag, M.; van Leeuwen, B.; Ran, L.; Ryazanov, A.G.; Figdor, C.G.; Moolenaar, W.H.; Jalink, K.; van Leeuwen, F.N. TRPM7, a novel regulator of actomyosin contractility and cell adhesion. EMBO J. 2006, 25, 290–301. [Google Scholar] [CrossRef] - De la Roche, M.A.; Smith, J.L.; Betapudi, V.; Egelhoff, T.T.; Côté, G.P. Signaling pathways regulating Dictyostelium myosin II. J. Muscle Res. Cell. Motil. 2002, 23, 703–718. [Google Scholar] [CrossRef] - Geiger, B.; Bershadsky, A. Exploring the neighborhood: adhesion-coupled cell mechanosensors. Cell 2002, 110, 139–142. [Google Scholar] [CrossRef] - Burridge, K.; Wennerberg, K. Rho and Rac take center stage. Cell 2004, 116, 167–179. [Google Scholar] [CrossRef] - Tsunoda, S.; Sierralta, J.; Sun, Y.; Bodner, R.; Suzuki, E.; Becker, A.; Socolich, M.; Zuker, C.S. A multivalent PDZ-domain protein assembles signalling complexes in a G- protein-coupled cascade. Nature 1997, 388, 243–249. [Google Scholar] [CrossRef] - Huber, A. Scaffolding proteins organize multimolecular protein complexes for sensory signal transduction. Eur. J. Neurosci. 2001, 14, 769–776. [Google Scholar] [CrossRef] - Adamski, F.M.; Zhu, M.Y.; Bahiraei, F.; Shieh, B.H. Interaction of eye protein kinase C and INAD in Drosophila. Localization of binding domains and electrophysiological characterization of a loss of association in transgenic flies. J. Biol. Chem. 1998, 273, 17713–17719. [Google Scholar] - Chevesich, J.; Kreuz, A.J.; Montell, C. Requirement for the PDZ domain protein, INAD, for localization of the TRP store-operated channel to a signaling complex. Neuron 1997, 18, 95–105. [Google Scholar] [CrossRef] - Huber, A.; Sander, P.; Gobert, A.; Bahner, M.; Hermann, R.; Paulsen, R. The transient receptor potential protein (Trp), a putative store- operated Ca2+ channel essential for phosphoinositide-mediated photoreception, forms a signaling complex with NorpA, InaC and InaD. EMBO J. 1996, 15, 7036–7045. [Google Scholar] - Kimple, M.E.; Siderovski, D.P.; Sondek, J. Functional relevance of the disulfide-linked complex of the N-terminal PDZ domain of InaD with NorpA. EMBO J. 2001, 20, 4414–4422. [Google Scholar] [CrossRef] - Vogt, K.; Kirschfeld, K. Chemical identity of the chromophores of fly visual pigment. Naturwissenschaften 1984, 71, 211–213. [Google Scholar] [CrossRef] - Huang, J.; Liu, C.-S.; Hughes, S.A.; Postma, M.; Schwiening, C.J.; Hardie, R.C. Activation of TRP Channels by Protons and Phosphoinositide Depletion in Drosophila Photoreceptors. Curr. Biol. 2010, 20, 189–197. [Google Scholar] [CrossRef] - Hardie, R.C.; Franze, K. Photomechanical responses in Drosophila photoreceptors. Science 2012, 338, 260–263. [Google Scholar] [CrossRef] - Huber, A.; Sander, P.; Paulsen, R. Phosphorylation of the InaD gene product, a photoreceptor membrane protein required for recovery of visual excitation. J. Biol. Chem. 1996, 271, 11710–11717. [Google Scholar] [CrossRef] - Huber, A.; Sander, P.; Bahner, M.; Paulsen, R. The TRP Ca2+ channel assembled in a signaling complex by the PDZ domain protein INAD is phosphorylated through the interaction with protein kinase C (ePKC). FEBS Lett. 1998, 425, 317–322. [Google Scholar] [CrossRef] - Liu, M.; Parker, L.L.; Wadzinski, B.E.; Shieh, B.H. Reversible phosphorylation of the signal transduction complex in Drosophila photoreceptors. J. Biol. Chem. 2000, 275, 12194–12199. [Google Scholar] [CrossRef] - Popescu, D.C.; Ham, A.J.; Shieh, B.H. 
Scaffolding protein INAD regulates deactivation of vision by promoting phosphorylation of transient receptor potential by eye protein kinase C in Drosophila. J. Neurosci. 2006, 26, 8570–8577. [Google Scholar] [CrossRef] - Biggs, W.H., III; Zavitz, K.H.; Dickson, B.; van der, S.A.; Brunner, D.; Hafen, E.; Zipursky, S.L. The Drosophila rolled locus encodes a MAP kinase required in the sevenless signal transduction pathway. EMBO J. 1994, 13, 1628–1635. [Google Scholar] - Brunner, D.; Oellers, N.; Szabad, J.; Biggs, W.H., III; Zipursky, S.L.; Hafen, E. A gain-of-function mutation in Drosophila MAP kinase activates multiple receptor tyrosine kinase signaling pathways. Cell 1994, 76, 875–888. [Google Scholar] - Kahn, B.B.; Alquier, T.; Carling, D.; Hardie, D.G. AMP-activated protein kinase: ancient energy gauge provides clues to modern understanding of metabolism. Cell Metab 2005, 1, 15–25. [Google Scholar] [CrossRef] - Cao, J.; Li, Y.; Xia, W.; Reddig, K.; Hu, W.; Xie, W.; Li, H.S.; Han, J. A Drosophila metallophosphoesterase mediates deglycosylation of rhodopsin. EMBO J. 2011, 30, 3701–3713. [Google Scholar] [CrossRef] - Bähner, M.; Frechter, S.; Da Silva, N.; Minke, B.; Paulsen, R.; Huber, A. Light-regulated subcellular translocation of Drosophila TRPL channels induces long-term adaptation and modifies the light-induced current. Neuron 2002, 34, 83–93. [Google Scholar] [CrossRef] © 2014 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).
1
98
<urn:uuid:8a63675c-19ba-499e-ab31-b901a33043db>
Department of Molecular and Systems Biology, Geisel School of Medicine at Dartmouth, One Medical Center Drive, Lebanon, NH 03756, USA
† The authors contributed equally to this work.
Received: March 14, 2017 | Accepted: March 17, 2017 | Published: March 23, 2017
OBM Neurobiology 2017, Volume 1, Issue 1, doi:10.21926/obm.neurobiol.1701002
Academic Editor: Bart Ellenbroek
Recommended citation: Delatour LC, Yeh HH. FASD and Brain Development: Perspectives on Where We are and Where We Need to Go. OBM Neurobiology 2017;1(1):002; doi:10.21926/obm.neurobiol.1701002.
© 2017 by the authors. This is an open access article distributed under the conditions of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium or format, provided the original work is correctly cited.

The developmental disabilities and birth defects associated with FASD are entirely preventable if women abstain from consuming ethanol during pregnancy. While this statement appears to be intuitively clear, it is in practice more theoretical than realistic. In actuality, despite intense public outreach, awareness, and preventive efforts by local and national organizations, an alarming percentage of women drink during pregnancy; many even binge-drink to risky levels. The prevalence of drinking among women of childbearing age ranges from 27.6% to 69.3% across the United States, with the prevalence of binge drinking within the same cohort ranging from 10.1% to 26.1%. With approximately 50% of pregnancies in the United States unplanned, a significant number of pregnant women may be unknowingly putting their children at risk. Furthermore, 1 pregnant woman in 10 in the United States reports alcohol use, and 1 in 33 reports binge drinking in the past 30 days. Binge drinking is particularly dangerous due to exceedingly high blood alcohol concentrations, prolonged exposure to ethanol due to a static rate of ethanol metabolism, and periods of withdrawal. Binge drinking during critical time periods in fetal brain development, which may occur even before a woman is aware that she is pregnant, can lead to microcephaly, disruptions of the corpus callosum, and neuronal loss (for reviews see: [4,5]). Children exposed to ethanol during prenatal development displayed characteristics such as learning difficulties, hyperactivity, impulsivity, and disruptive behaviors in school [4,5,6,7,8,9,10]. The prevalence of FASD was found to be in the range of 2% to 5% in a representative population of children in the United States, with the prevalence of Fetal Alcohol Syndrome (FAS), the most serious of the diagnoses falling under FASD, between 0.3 and 1.5 infants per 1,000 live births in certain areas of the United States. Furthermore, this problem is not unique to the United States. Globally, about 10% of pregnant women consume alcohol, and the prevalence of FAS is about 15 per 10,000 people. Preclinical reports underscore that the developing brain is particularly vulnerable in FASD. Despite the staggeringly high prevalence of FASD, the mechanisms for the deleterious effects of prenatal exposure to ethanol on fetal brain development are still not fully understood.

FASD and Deficits in Sensory Processing

The clinical features of FASD suggest broad cortical involvement, but many of these deficits converge on the proper functioning of the somatosensory cortex.
The somatosensory cortex is important for the integration of sensory information and communication with other areas of the brain. When this system is functioning properly, the individual can focus his/her attention on important stimuli. Sensory integration is the process by which an individual gathers and organizes information detected by one’s senses, and then formulates the appropriate response. Sensory modulation disorders are characterized by an inability to respond appropriately to sensory information from one’s surroundings and include sensory over-responsivity, under-responsivity, and sensory seeking/craving subtypes. Sensory processing dysfunction is associated with deficits in learning, attention, and motor function in addition to problems with language, hyperactivity, and behavior and emotional regulation [15,17]. These are reflected in characteristics commonly reported in children with FASD such as hyperactivity, attention deficits, restlessness, deficits in executive functions, impulsivity, behavioral problems persisting through adulthood, visual-motor integration, fine-motor coordination, and learning deficits [10,17,18] (for review see [5,8,9]). A representative group of school-age children diagnosed with FASD scored significantly lower on the Short Sensory Profile when compared to age- and gender-matched controls, illustrating significant deficits in the categories of tactile sensitivity, auditory filtering, visual/auditory sensitivity, and underresponsive/seeks sensation. These deficits are also correlated with problems with behavior, rule breaking, emotional regulation, and daily living skills [20,21], which are all examples of adaptive behaviors and highlight the connection between sensory dysfunction and such responses [15,17,19,22]. Adaptive behaviors are age-appropriate behaviors that require an ability to process an array of sensory experiences in one’s environment, and then respond purposefully and appropriately [15,22]. These include following rules at school and work, self-care, safety, and social skills [22,23]. Broadly, in children with prenatal exposure to ethanol, problems were found in communication, social skills, and in performing tasks necessary for daily living [20,23], reflecting deficits in age-appropriate adaptive behaviors. While common to FASD, these features are not specific and are also seen in other sensory modulation disorders such as Attention Deficit Hyperactivity Disorder (ADHD) [24,25], Autism [24,26,27,28], Asperger Syndrome, and Fragile X Syndrome [27,30]. Sensory processing dysfunction affects every aspect of daily living, from academics to caring for one’s self, and thus further research into the developmental causes for these deficits could help inform current work focused on treatment options for the affected individuals.

FASD and Deficits in Motor Skills

Disruptions in primary motor and premotor cortex are also likely, given the clinical finding of impaired fine-motor and gross-motor skills in individuals with FASD (for reviews see [5,8]). Reports of balance issues, tremor, and low muscle tone suggest that the effects are more serious than just a developmental delay in gross-motor skills [32,33]. Ataxia and an increased incidence of cerebral palsy have also been reported in individuals with FASD. Specific deficits in measures of higher order cognitive-motor skills, including hand-eye coordination, have also been noted [31,34].
In an extensive literature review, Bay et al. (2011) found that while prenatal exposure to ethanol does have a detrimental effect on both fine- and gross-motor function, the specific effects of binge drinking are largely understudied. Nonetheless, disruptions in sensory and motor function are undoubtedly linked, and the mechanisms behind these dysfunctions remain to be fully elucidated.

FASD and Deficits in Executive Function

In addition to sensorimotor deficits, children exposed to ethanol during prenatal development show deficits in executive functions as evidenced by tasks requiring, for example, planning, selective inhibition, reasoning, cognitive flexibility, and working memory [36,37]. These all can influence judgment and decision making, leading to increased rule-breaking and impulsive behaviors. The prefrontal cortex mediates executive functions including working memory, selective attention, abstract reasoning, language, sensorimotor integration, and sequencing of activity, in addition to behavioral inhibition [38,39], thus implicating its probable role in FASD. Clearly, the effect of in utero exposure to ethanol is multifaceted, disrupting many different areas of the cortex, and ultimately resulting in a constellation of behavioral, cognitive, and sensory deficits. The use of rodent models has significantly advanced the field of prenatal ethanol research. This is in part due to the diversity of models, which can vary in parameters of ethanol exposure such as timing, method of ethanol delivery, and pattern of ethanol exposure. Furthermore, the propensity to voluntarily consume alcohol varies between different mouse strains [40,41]. Each model has its own strengths and weaknesses but, in aggregate, the ability to customize a paradigm to address a specific physiologically relevant question, in addition to an extensive array of transgenic mouse lines, has facilitated progress in this field. Because brain development occurs at different stages throughout gestation, the timing of ethanol exposure will differentially affect brain regions and specific processes. Thus, studies are often designed with a certain central nervous system dysfunction in mind. Furthermore, the period of most rapid growth, known as the “brain growth spurt”, occurs at different times in development in different species. In humans, it begins mid-gestation, while in rodents, it does not occur until right around birth. Therefore, investigations focused on third-trimester development in humans are conducted postnatally in rodents. However, the entire process of brain development is dynamic, with the formation of the cortical plate beginning by the 7th week of gestation in humans, which is approximately embryonic days 11 and 13 in mice and rats, respectively. In this light, there is justification for investigating the effects of prenatal exposure to ethanol during earlier time points. Through targeting different stages of development by altering the period of prenatal exposure to ethanol, the mechanism behind specific dysfunctions in the central nervous system can be elucidated.

Route of Ethanol Administration

The various routes of ethanol administration increase the feasibility of studying the effects of prenatal exposure to ethanol in rodents. Ingestion of ethanol by pregnant dams is the most common method. Ethanol can be added to a standard liquid food diet, which is then the sole source of food for the rodent during the experimental time window [46,47].
Ethanol can also be administered through drinking water. In a model first described by Rhodes et al. (2005), in which ethanol-containing water is offered for only a short period during the dark cycle (now known as the Drinking in the Dark (DID) paradigm), pregnant dams reached physiologically significant blood alcohol levels (BAL) over time more consistently than in a simple two-bottle choice paradigm, where both ethanol and water are available for extended periods of time [50,51]. While the delivery of ethanol in either liquid food or water is the simplest and most cost-effective method of administration, one significant drawback is that the exact amount of ethanol consumed cannot be precisely regulated. The administration of ethanol by oral intubation (gavage) is one way to circumvent this drawback. However, this invasive procedure is decidedly more stressful for the pregnant dam, which may confound the results of prenatal ethanol studies. Ethanol can also be delivered directly through subcutaneous or intraperitoneal injection. While this allows for both precise regulation of dosage and timing of exposure and high BALs, it does not mimic the natural mode of ethanol consumption and thus may not accurately reflect the ramifications of drinking. Ethanol can also be administered as a vapor. This method is relatively simple to perform and less stressful than gavage for the animal while still leading to significant BALs, but like injection, inhalation does not resemble the physiological route of ethanol intake. Overall, there is arguably no perfect rodent model for mimicking human ethanol consumption. Nonetheless, the strategies described above have all contributed important insight into our understanding of the outcomes of prenatal exposure to ethanol.

Pattern of Ethanol Consumption

The pattern of ethanol exposure is a key consideration in choosing a method for delivery. An ingestion method or the vapor method is more commonly used for modeling chronic consumption. However, both the liquid food diet and the DID paradigm [51,53,54] have also been used in binge-type exposure models. While a chronic paradigm is necessary for analyzing the effects of prolonged ethanol exposure on brain development, the National Birth Defects Prevention Study reported that the most common pattern of drinking was ethanol consumption during the first month of pregnancy, followed by abstinence. For many women, this period of drinking is very likely to occur before pregnancy recognition. Thus, there is also a need for short-term ethanol exposure models to study the effects of prenatal exposure limited to the very early stages of development. Taking this view, binge-type exposure models are particularly important, given the prevalence of this pattern of drinking among pregnant women, and the risk of exposing the fetus to high BALs for prolonged periods of time.

The influence of a teratogen, such as ethanol, on cortical development can have lasting effects on the health and wellbeing of the unborn child. Cortical development is an intricately regulated process that is marked by precisely timed waves of migrating cells in the dorsal telencephalon. There are two principal modes of migration: the radial migration of primordial glutamatergic pyramidal neurons and the tangential migration of GABAergic inhibitory interneurons (Figure 1A) [56,57].

Figure 1. Embryonic Cortical Development in Mouse.
(A) Primordial pyramidal neurons migrate radially from the proliferative ventricular zones lining the dorsal telencephalic vesicles (purple) along radial glial fibers. The majority of GABAergic interneurons migrate tangentially into the cortical plate from the medial (MGE; green), lateral (LGE; blue) and caudal (CGE) ganglionic eminences in the ventral telencephalon (CGE not illustrated). (B) Schematic of the embryonic cortical zones at embryonic day 15.5 in the mouse depicting the radial migration of primordial pyramidal neurons along radial glial fibers into the cortical plate (the anlage of the six-layered cortex) and streams of migrating GABAergic interneurons, including a superficial route in the marginal zone and deeper routes through the intermediate and subventricular zones. VZ = Ventricular Zone; SVZ = Subventricular Zone; IZ = Intermediate Zone; SP = Subplate; CP = Cortical Plate; MZ = Marginal Zone.

There are numerous regulatory signals, including the interaction between migrating glutamatergic neurons and GABAergic interneurons, which help to direct corticogenesis. The developing cortex begins as a single layer of neuroepithelial cells, and it is through a series of tightly regulated and extremely coordinated processes that a laminar structure with highly complex functional capacities is produced. Therefore, a teratogen, such as ethanol, that disrupts even just one part of this process can have enduring consequences in the central nervous system [52,59,60]. With glutamatergic pyramidal neurons comprising 70–80% of cortical neurons and providing the sole cortical outflow to subcortical regions and the contralateral cortex, the process of radial migration must be tightly regulated to ensure proper cortical development. A snapshot of radial migration at embryonic day 15.5 in the mouse is depicted in Figure 1B. Radial glial cells, functioning as neural stem cells, can divide symmetrically, producing additional radial glial cells, or asymmetrically, resulting in a post-mitotic neuron and an intermediate progenitor cell [62,63]. Intermediate progenitor cells subsequently divide to generate migrating neurons as well. Radial glial and intermediate progenitor cell bodies are found in proliferative zones, delineated as the ventricular zone and subventricular zone, respectively [62,63]. Using the fibers of the radial glial cells as a scaffold, primordial glutamatergic pyramidal neurons migrate radially from the proliferative zones toward the cortical plate, the primitive anlage of the layered cortex, with distinct phases of translocation and locomotion [57,64,65,66]. The first migrating wave of neurons forms the preplate, which is subsequently split into a superficial marginal zone and deeper subplate region by the formation of the cortical plate [56,57,66,67]. The migrating pyramidal neurons stack the cortical plate in an inside-out fashion, such that the earlier-born neurons form layers V and VI, and those born later traverse them to form the more superficial layers [57,66,68,69]. An intermediate zone, located between the subventricular zone and cortical plate, contains thalamo-cortical fibers and will become the white matter of the developed cortex. The radial migration of excitatory primordial glutamatergic pyramidal neurons is paired with the tangential migration of inhibitory GABAergic interneurons to establish the intricate balance of excitatory and inhibitory transmission in the developing cortex.
Miller (1988) examined the effects of prenatal exposure to ethanol on regions across the cerebral cortex, including sites in motor, somatosensory, visual, and auditory cortex. He found that the period of neuron generation was shifted, with a large increase in proliferation later in gestation in ethanol-exposed rats compared to control rats. Furthermore, these later-born neurons, to a large extent, did not assume their appropriate positions in the superficial cortex. There were changes in the size and shape of neurons as well, with earlier-born neurons in the ethanol-exposed rats being smaller and less eccentric, but more polar, than those in age-matched controls, thus resembling immature and migrating neurons. A thinner ventricular zone and thicker subventricular zone in ethanol-exposed rats compared to control rats also reflect this change in proliferation. Therefore, while there is evidence that in utero exposure to ethanol disrupts corticogenesis in the cerebral cortex, the precise mechanism is still unknown and, once elucidated, would reveal critical insight into the detrimental effects of ethanol. In contrast to the radial migration of glutamatergic pyramidal neurons, GABAergic inhibitory interneurons migrate tangentially from the ventral telencephalon, primarily from the medial ganglionic eminence (MGE), to the cortex (for reviews see [57,66,74,75]). Once in the cortex, interneurons follow either a superficial or a deep migratory stream, through the cortical plate/marginal zone or the subventricular zone/intermediate zone, respectively [57,73]. Towards the end of migration, they adopt a radial migratory pattern as they move into the cortical plate in the developing cortex [56,57,58]. GABA, via activation of GABAA receptors, promotes the tangential migration of primordial GABAergic cortical interneurons, an effect that is exacerbated following ethanol exposure in utero. In utero exposure to ethanol affects this process not only through accelerating the migration of GABAergic interneurons but also by significantly increasing the number of GABAergic interneurons in the embryonic neocortex compared to control animals [52,77]. An increase in ambient GABA levels and an upregulation of the expression of GABAA receptors represent prenatal-ethanol-induced extrinsic and intrinsic changes, respectively, that could be influencing abnormal development of GABAergic interneurons. The mechanism behind these changes has been studied using a binge-ethanol exposure paradigm that targets the peak of tangential migration in mice. In utero exposure to ethanol enhanced the migration of GABAergic interneurons from the MGE into the medial prefrontal cortex (mPFC), resulting in a significant increase in parvalbumin GABAergic interneurons both in the mPFC at embryonic day 16.5 and in layer V of the mPFC in postnatal day 70 mice. With the persistent increase in GABAergic interneurons in the young adult mPFC, a commensurate shift in the inhibitory-excitatory balance in layer V pyramidal neurons at postnatal day 70 was characterized by an increase in spontaneous inhibitory postsynaptic current (IPSC) frequency and a reduction in spontaneous excitatory postsynaptic current (EPSC) frequency. An increase in the frequency of miniature IPSCs suggested that an increase in the number of inhibitory synapses on mPFC layer V pyramidal neurons could be a possible explanation for the shift in the inhibitory-excitatory balance.
The behavioral consequences of these changes at postnatal day 70 were evidenced by deficits in reversal learning and behavioral flexibility, functions specific to the mPFC. In summary, these experiments provide significant evidence for both the short- and long-term disruptive effects of binge-type prenatal exposure to ethanol on GABAergic-mediated transmission in the mPFC. Prenatal exposure to chronic ethanol also led to an increase in GABAergic interneurons in the adult mPFC, cortical inhibitory/excitatory imbalance, and hyperactivity, resembling the FASD phenotype in mice. The aberrant, prenatal-ethanol-induced tangential migration demonstrated by Cuzon et al. (2008) and Skorput et al. (2015), and the previously described ethanol-induced changes in neuronal morphology and time course of neurogenesis described by Miller (1988), are significant because they illustrate that the deleterious effects of prenatal exposure to ethanol are not isolated to just one aspect of cortical development. Rather, they span different brain regions and include both radial and tangential migration. In addition to radial and tangential migration, other modes of migration contribute to shaping the immature brain. Aberrant migration is a common theme across many neurodevelopmental disorders. These include schizophrenia, X-linked lissencephaly with abnormal genitalia, autism, and several genetic disorders, among others. Disruptions in the cytoarchitecture and development of layers II, III, and V are evident in the entorhinal cortex of postmortem brains from individuals with schizophrenia. It was found that layer II was almost completely devoid of neurons and that there was an ectopic band of neurons at the top of layer III. These ectopic neurons were smaller and resembled immature migrating neurons, suggesting a defect in migration. In addition, an altered distribution of neurons was found in the dorsolateral prefrontal cortex in the superior frontal gyrus of postmortem brains from individuals with schizophrenia. Specifically, there was an increase in NADPH-d neurons in the deep subcortical white matter, in conjunction with a decrease in NADPH-d neurons in the gray matter and superficial cortical white matter. NADPH-d neurons are a major component of the subplate during development, so their aberrant distribution could reflect a disruption of migration into the subplate or of the programmed cell death of these cells during development. These data are consistent with disrupted migration as a possible underlying factor in the pathology of schizophrenia. Kato and Dobyns (2005) coined the term “developmental interneuronopathy” to describe disorders such as X-linked lissencephaly with abnormal genitalia, which result, in part, from abnormal tangential migration during embryonic development. Patients with this disorder suffer from intractable seizures from the day of birth, developmental delays, chronic diarrhea, and hypothermia. The pathophysiology behind the seizure disorder could be a deficiency in GABAergic interneurons, as seen in both postmortem studies and in a genetic mouse model. This pointed to abnormal tangential migration during embryonic development as an underlying mechanism. Furthermore, one possible cause for the collection of symptoms associated with autism could be defects in migration and a disruption in neurogenesis. Evidence of dysplasia and heterotopias in the neocortex, dentate gyrus, and cerebellum suggests that altered migration is not specific to one area of the brain.
In these areas, distorted neuronal morphology, suggesting poor differentiation, and abnormal laminar distribution were found. Further studies noted disordered lamination, specifically in the anterior cingulate cortex, in postmortem brains from autistic individuals, suggesting aberrant migration within the cortical plate. This led Simms et al. (2009) to postulate that the origin of this defect is early in fetal development. There are also several genetic disorders that are linked to dysfunctions in neuronal migration. Periventricular heterotopia is an X-linked dominant condition that is only seen in affected females, as males do not survive. In females who are heterozygous for the mutation, random X inactivation leads to one population of cells that migrates correctly into the cortical plate, and another that remains arrested in the ventricular zone. The affected individuals suffer from epilepsy. One of the clinical manifestations of another genetic disorder, lissencephaly type I, is also epilepsy, in addition to severe intellectual deficits [84,85]. It is caused by a mutation in the LIS1 autosomal gene on chromosome 17 that results in absent or malformed gyri and a four-layered cortex due to defects in migration. X-linked lissencephaly is caused by a mutation in doublecortin and results in a phenotype in males similar to that of lissencephaly type I, and in a double cortex (subcortical band heterotopia) malformation in heterozygous females [84,85,86,87]. The double cortex phenotype includes a typical six-layered cortex; however, this is in addition to an ectopic population of neurons in the subcortical white matter. In contrast, lissencephaly type II is due to neuronal over-migration, in which neurons migrate past the marginal zone and into the leptomeninges. This results in a “cobblestone” cortex, a thickened gray matter with interrupting bands of white matter, and potentially hydrocephalus. Autosomal recessive Fukuyama Congenital Muscular Dystrophy and Walker-Warburg Syndrome are examples of lissencephaly type II [84,85]. These, in addition to several other known genetic disorders associated with aberrant migration, highlight the significance of proper cortical development for normal brain development and function. Given the known association of other neurodevelopmental disorders with disruptions in cortical development, it is important to focus on such deficits in light of prenatal exposure to ethanol to better understand the underlying etiology of FASD. In fact, many of the hallmark features of FASD, including intellectual and attention deficits, difficulty with word comprehension, sensory processing dysfunction, and impaired visual-motor integration, nonverbal learning, fine-motor skills, and coordination [5,8,17,18,19,22,34], demonstrate that cortical dysfunction is a significant component of FASD. It is likely that a combination of aberrant migratory patterns throughout the brain results in the complexity of symptoms seen in many neurodevelopmental disorders. There is no cure for FASD, nor any specific treatment. However, early intervention therapies, particularly in the areas of behavior and education, have been shown to be helpful in allowing affected individuals to reach their greatest potential. These interventions may include occupational, speech and language, social skills, and physical therapy.
Medical treatment for some of the symptoms, such as hyperactivity and depression/anxiety, may also be a part of the overall treatment plan; however, such treatments are not specific to FASD. While these interventions are all beneficial, if they could be coupled with or complemented by specific FASD-targeted pharmacological therapies, then the interventions already in place could potentially be even more effective. This requires a better understanding of the mechanisms behind the detrimental effects of prenatal exposure to ethanol. Early in development, GABA is paradoxically depolarizing [89,90]. Therefore, GABAergic interneurons can form excitatory synapses on neighboring cells. This is largely due to a difference in the intracellular chloride concentration. NKCC1 is a Na+-K+-2Cl- co-transporter that functions to pump sodium, potassium, and chloride ions into cells. In contrast, KCC2 is a K+-Cl- cotransporter that pumps both ions out of cells. Early in brain development, the balance between NKCC1 and KCC2 expression is tipped in favor of NKCC1, but with maturity, KCC2 expression in the brain ultimately predominates [90,91,92]. However, before this developmental switch, while NKCC1 expression is higher, the intracellular chloride concentration is elevated, such that when a ligand binds to and opens the GABAA receptor ionophore, chloride flows out of the cell down its concentration gradient and thus depolarizes the cell [90,91] (a brief worked example of this reversal-potential shift follows this passage). In light of the fact that binge-type in utero exposure to ethanol results in an increase in GABAergic interneurons evident at both embryonic day 16.5 and in young adult mice, the depolarizing, and therefore potentially excitatory, actions of these cells could be part of the underlying cause of prenatal ethanol-induced cortical dysfunction. In testing this mechanistic hypothesis, when the loop diuretic bumetanide, an NKCC1 inhibitor, was co-administered with ethanol to pregnant dams, the number of GABAergic interneurons in the mPFC of their offspring was no different from that of the offspring of control dams (Skorput and Yeh, unpublished). This suggests a role for NKCC1 in the inhibitory/excitatory imbalance described by Skorput et al. (2015), and an avenue for future research to further elucidate the mechanism. Another mode of intervention under investigation is choline supplementation. Choline is not only an essential nutrient, but also a precursor to the neurotransmitter acetylcholine and to components of the cell membrane, including phosphatidylcholine and sphingomyelin. It can also serve as a methyl donor, highlighting its potential role in epigenetics. Choline administration has proven successful in abating some of the behavioral effects of prenatal exposure to ethanol. When ethanol and choline were co-administered from gestational days 5–20 in rats, the offspring exhibited significant improvements in reflex maturation, exploratory behavior, and spatial working memory compared to offspring exposed to ethanol alone. Through altering the timing of choline administration relative to ethanol exposure, further insight regarding its potential therapeutic effects has been garnered. In a paradigm with ethanol exposure from postnatal days 4–9 and choline supplementation from postnatal days 4–30, mice exposed to ethanol alone exhibited hyperactivity and an increase in perseverative errors on a spatial reversal learning task, both of which were mitigated by choline administration.
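To make the chloride-gradient argument above concrete, the following short sketch evaluates the Nernst (reversal) potential for Cl- under an immature, NKCC1-dominated condition and under a mature, KCC2-dominated condition. It is an illustrative calculation only; the chloride concentrations are assumed, textbook-range values and are not taken from the studies cited here.

import math

def chloride_reversal_potential_mV(cl_in_mM, cl_out_mM, temp_C=37.0):
    """Nernst potential for Cl- (valence -1), returned in millivolts."""
    R, F = 8.314, 96485.0              # J/(mol*K), C/mol
    T = temp_C + 273.15                # kelvin
    # For an anion with z = -1: E = -(RT/F) * ln([Cl-]out / [Cl-]in)
    return -1000.0 * (R * T / F) * math.log(cl_out_mM / cl_in_mM)

# Assumed illustrative values: high intracellular Cl- while NKCC1 dominates,
# low intracellular Cl- once KCC2 predominates; extracellular Cl- ~130 mM.
immature = chloride_reversal_potential_mV(cl_in_mM=25, cl_out_mM=130)   # ~ -44 mV
mature = chloride_reversal_potential_mV(cl_in_mM=7, cl_out_mM=130)      # ~ -78 mV
print(round(immature), round(mature))

With the immature value, the Cl- reversal potential (about -44 mV) lies above a typical resting potential of -60 to -70 mV, so opening GABAA receptor channels drives chloride efflux and depolarizes the cell; with the mature value (about -78 mV), the same conductance is hyperpolarizing. This is the quantitative sense in which the NKCC1-to-KCC2 switch converts GABAergic signaling from depolarizing to inhibitory.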
In rats, following ethanol exposure from postnatal days 4–9, choline administration from either postnatal days 11–20 or postnatal days 21–30 mitigated the ethanol-induced deficits in spatial memory. The finding that choline has beneficial effects even when administered after exposure to ethanol suggests the exciting prospect of dietary interventions in treating FASD. The mechanism behind the effects of choline supplementation is yet to be determined, although it is thought that choline may specifically affect the functioning of the hippocampus and/or prefrontal cortex. In rats exposed to ethanol from postnatal days 4–9, the density of M2/M4 receptors in the hippocampus was significantly increased compared to controls. However, when choline was administered from postnatal days 4–30 in conjunction with ethanol exposure from postnatal days 4–9, there was no longer a significant difference. This is likely just one of many effects of choline supplementation, and further investigations in this field will improve our understanding of its interaction with prenatal exposure to ethanol. With preclinical studies in rodents providing strong evidence for the benefits of choline supplementation in mitigating some of the adverse effects of prenatal exposure to ethanol, a randomized, double-blind, placebo-controlled clinical trial was conducted to determine its effectiveness in humans (https://clinicaltrials.gov/ct2/show/NCT01911299). Children ages five through ten with a history of prenatal exposure to ethanol were included in the study and were randomly assigned to a six-week treatment regimen of 625 mg of choline per day or a placebo. Despite the promising preclinical results, no significant improvements were observed in learning and memory, executive function, or sustained attention in the choline-treated group compared to the placebo group. Investigations in rodents have suggested that the target therapeutic window for choline supplementation may be earlier than the time range used in this study, and thus this is one possible explanation for the unanticipated results. Furthermore, it is possible that the sample size was too small or that the treatment duration was too short to detect any significant effects of choline supplementation. Regardless, future studies are necessary to further elucidate the mechanism behind the beneficial effects of choline supplementation in preclinical studies in order to better understand its potential effectiveness in humans. Peroxisome proliferator-activated receptor (PPAR)-γ is involved in glucose metabolism and adipogenesis, but its role in mediating the inflammatory response in the central nervous system has spurred investigations into its possible therapeutic efficacy in FASD. In both in vitro and in vivo studies, PPAR-γ agonists have been shown to mitigate the effects of ethanol-induced toxicity on microglia and to reduce neuron loss. Specifically, in a mouse model of FASD, administration of both endogenous and synthetic PPAR-γ agonists one day prior to and during perinatal ethanol exposure significantly decreased the effects of ethanol-induced toxicity in the cerebellum. This included attenuating the loss of microglia and Purkinje cells. Furthermore, when a PPAR-γ agonist was administered one hour prior to ethanol administration to mice during the perinatal period, it mitigated the ethanol-induced increase in the production of mRNA for proinflammatory cytokines and chemokines in the hippocampus, cerebellum, and cerebral cortex.
PPAR-γ agonist administration also prevented the activation-associated change in microglial morphology that was seen following perinatal exposure to ethanol. The mechanism by which PPAR-γ agonists protect microglia and neurons from the damaging effects of ethanol during development is still unknown. FASD is a complex condition that, while completely preventable, is a leading cause of intellectual disability in the United States. There is evidence that, as in other neurodevelopmental disorders, aberrant cortical development is a significant underlying cause of the deficits in sensory processing, cognition, and behavior. Cell proliferation, migration, lamination of the cortex, and circuit formation are all intricately regulated processes that are integral to the proper establishment of the cortical inhibitory/excitatory balance and of communication within the cortex and to subcortical areas. When a teratogen is present, it can disrupt the time course and regulation of these events during an extremely formative time for the unborn child, leading to lasting consequences. Therefore, as ongoing FASD research continues to explore fields such as the pathophysiology of, and treatment options for, the affected individual, an emphasis on embryonic development is crucial to fully understanding and appreciating the complexity of FASD and to the design of targeted therapeutic interventions during pregnancy. We thank the following colleagues for comments and critical reading of the manuscript: Bryan W. Luikart, Ph.D., Paul D. Manganiello, M.D., Donald Bartlett Jr., M.D., and Pamela W.L. Yeh. LCD and HHY planned the content of the manuscript. LCD wrote the first draft of the manuscript. This work was supported in part by PHS NIH grants R01 AA023410 (HHY), R21 AA024036 (HHY), F30 AA025534 (LCD). The authors have declared that no competing interests exist.
Neuronal axons use specific mechanisms to mediate extension, maintain integrity, and induce degeneration. An appropriate balance of these events is required to shape functional neuronal circuits. The protocol described here explains how to use cell culture inserts bearing a porous membrane (filter) to obtain large amounts of pure axonal preparations suitable for examination by conventional biochemical or immunocytochemical techniques. The functionality of these filter inserts will be demonstrated with models of developmental pruning and Wallerian degeneration, using explants of embryonic dorsal root ganglion. Axonal integrity and function are compromised in a wide variety of neurodegenerative pathologies. Indeed, it is now clear that axonal dysfunction appears much earlier in the course of the disease than neuronal soma loss in several neurodegenerative diseases, indicating that axonal-specific processes are primarily targeted in these disorders. By obtaining pure axonal samples for analysis by molecular and biochemical techniques, this technique has the potential to shed new light on mechanisms regulating the physiology and pathophysiology of axons.

Related JoVE Articles:

Automated Sholl Analysis of Digitized Neuronal Morphology at Multiple Scales Institutions: Rutgers University. Neuronal morphology plays a significant role in determining how neurons function and communicate1-3. Specifically, it affects the ability of neurons to receive inputs from other cells2 and contributes to the propagation of action potentials4,5. The morphology of the neurites also affects how information is processed. The diversity of dendrite morphologies facilitates local and long range signaling and allows individual neurons or groups of neurons to carry out specialized functions within the neuronal network6,7. Alterations in dendrite morphology, including fragmentation of dendrites and changes in branching patterns, have been observed in a number of disease states, including Alzheimer's disease8, and mental retardation11. The ability to both understand the factors that shape dendrite morphologies and to identify changes in dendrite morphologies is essential in the understanding of nervous system function and dysfunction. Neurite morphology is often analyzed by Sholl analysis and by counting the number of neurites and the number of branch tips. This analysis is generally applied to dendrites, but it can also be applied to axons. Performing this analysis by hand is both time consuming and inevitably introduces variability due to experimenter bias and inconsistency. The Bonfire program is a semi-automated approach to the analysis of dendrite and axon morphology that builds upon available open-source morphological analysis tools. Our program enables the detection of local changes in dendrite and axon branching behaviors by performing Sholl analysis on subregions of the neuritic arbor. For example, Sholl analysis is performed on both the neuron as a whole as well as on each subset of processes (primary, secondary, terminal, root, etc.). Dendrite and axon patterning is influenced by a number of intracellular and extracellular factors, many acting locally.
Thus, the resulting arbor morphology is a result of specific processes acting on specific neurites, making it necessary to perform morphological analysis on a smaller scale in order to observe these local variations12. The Bonfire program requires the use of two open-source analysis tools, the NeuronJ plugin to ImageJ and NeuronStudio. Neurons are traced in ImageJ, and NeuronStudio is used to define the connectivity between neurites. Bonfire contains a number of custom scripts written in MATLAB (MathWorks) that are used to convert the data into the appropriate format for further analysis, check for user errors, and ultimately perform Sholl analysis. Finally, data are exported into Excel for statistical analysis. A flow chart of the Bonfire program is shown in Figure 1. (A minimal illustrative sketch of the core Sholl intersection count appears at the end of this article listing.) Neuroscience, Issue 45, Sholl Analysis, Neurite, Morphology, Computer-assisted, Tracing Neural Explant Cultures from Xenopus laevis Institutions: Harvard Medical School. The complex process of axon guidance is largely driven by the growth cone, which is the dynamic motile structure at the tip of the growing axon. During axon outgrowth, the growth cone must integrate multiple sources of guidance cue information to modulate its cytoskeleton in order to propel the growth cone forward and accurately navigate to find its specific targets1. How this integration occurs at the cytoskeletal level is still emerging, and examination of cytoskeletal protein and effector dynamics within the growth cone can allow the elucidation of these mechanisms. Xenopus laevis growth cones are large enough (10-30 microns in diameter) to perform high-resolution live imaging of cytoskeletal dynamics (e.g.2-4) and are easy to isolate and manipulate in a lab setting compared to other vertebrates. The frog is a classic model system for developmental neurobiology studies, and important early insights into growth cone microtubule dynamics were initially found using this system5-7. In this method8, eggs are collected and fertilized in vitro, injected with RNA encoding fluorescently tagged cytoskeletal fusion proteins or other constructs to manipulate gene expression, and then allowed to develop to the neural tube stage. Neural tubes are isolated by dissection and then are cultured, and growth cones on outgrowing neurites are imaged. In this article, we describe how to perform this method, the goal of which is to culture Xenopus laevis growth cones for subsequent high-resolution image analysis. While we provide the example of +TIP fusion protein EB1-GFP, this method can be applied to any number of proteins to elucidate their behaviors within the growth cone. Neuroscience, Issue 68, Cellular Biology, Anatomy, Physiology, Growth cone, neural explant, Xenopus laevis, live cell imaging, cytoskeletal dynamics, cell culture A Functional Motor Unit in the Culture Dish: Co-culture of Spinal Cord Explants and Muscle Cells Institutions: University of Basel. Human primary muscle cells cultured aneurally in monolayer rarely contract spontaneously because, in the absence of a nerve component, cell differentiation is limited and motor neuron stimulation is missing1. These limitations hamper the in vitro study of many neuromuscular diseases in cultured muscle cells. Importantly, the experimental constraints of monolayered, cultured muscle cells can be overcome by functional innervation of myofibers with spinal cord explants in co-cultures.
Here, we show the different steps required to achieve an efficient, proper innervation of human primary muscle cells, leading to complete differentiation and fiber contraction according to the method developed by Askanas2 . To do so, muscle cells are co-cultured with spinal cord explants of rat embryos at ED 13.5, with the dorsal root ganglia still attached to the spinal cord slices. After a few days, the muscle fibers start to contract and eventually become cross-striated through innervation by functional neurites projecting from the spinal cord explants that connecting to the muscle cells. This structure can be maintained for many months, simply by regular exchange of the culture medium. The applications of this invaluable tool are numerous, as it represents a functional model for multidisciplinary analyses of human muscle development and innervation. In fact, a complete de novo neuromuscular junction installation occurs in a culture dish, allowing an easy measurement of many parameters at each step, in a fundamental and physiological context. Just to cite a few examples, genomic and/or proteomic studies can be performed directly on the co-cultures. Furthermore, pre- and post-synaptic effects can be specifically and separately assessed at the neuromuscular junction, because both components come from different species, rat and human, respectively. The nerve-muscle co-culture can also be performed with human muscle cells isolated from patients suffering from muscle or neuromuscular diseases3 , and thus can be used as a screening tool for candidate drugs. Finally, no special equipment but a regular BSL2 facility is needed to reproduce a functional motor unit in a culture dish. This method thus is valuable for both the muscle as well as the neuromuscular research communities for physiological and mechanistic studies of neuromuscular function, in a normal and disease context. Neuroscience, Issue 62, Human primary muscle cells, embryonic spinal cord explants, neurites, innervation, contraction, cell culture Dissection and Culture of Chick Statoacoustic Ganglion and Spinal Cord Explants in Collagen Gels for Neurite Outgrowth Assays Institutions: Purdue University. The sensory organs of the chicken inner ear are innervated by the peripheral processes of statoacoustic ganglion (SAG) neurons. Sensory organ innervation depends on a combination of axon guidance cues1 and survival factors2 located along the trajectory of growing axons and/or within their sensory organ targets. For example, functional interference with a classic axon guidance signaling pathway, semaphorin-neuropilin, generated misrouting of otic axons3 . Also, several growth factors expressed in the sensory targets of the inner ear, including Neurotrophin-3 (NT-3) and Brain Derived Neurotrophic Factor (BDNF), have been manipulated in transgenic animals, again leading to misrouting of SAG axons4 . These same molecules promote both survival and neurite outgrowth of chick SAG neurons in vitro5,6 Here, we describe and demonstrate the in vitro method we are currently using to test the responsiveness of chick SAG neurites to soluble proteins, including known morphogens such as the Wnts, as well as growth factors that are important for promoting SAG neurite outgrowth and neuron survival. Using this model system, we hope to draw conclusions about the effects that secreted ligands can exert on SAG neuron survival and neurite outgrowth. 
SAG explants are dissected on embryonic day 4 (E4) and cultured in three-dimensional collagen gels under serum-free conditions for 24 hours. First, neurite responsiveness is tested by culturing explants with protein-supplemented medium. Then, to ask whether point sources of secreted ligands can have directional effects on neurite outgrowth, explants are co-cultured with protein-coated beads and assayed for the ability of the bead to locally promote or inhibit outgrowth. We also include a demonstration of the dissection (modified protocol7 ) and culture of E6 spinal cord explants. We routinely use spinal cord explants to confirm bioactivity of the proteins and protein-soaked beads, and to verify species cross-reactivity with chick tissue, under the same culture conditions as SAG explants. These in vitro assays are convenient for quickly screening for molecules that exert trophic (survival) or tropic (directional) effects on SAG neurons, especially before performing studies in vivo . Moreover, this method permits the testing of individual molecules under serum-free conditions, with high neuron survival8 Neuroscience, Issue 58, chicken, dissection, morphogen, NT-3, neurite outgrowth, spinal cord, statoacoustic ganglion, Wnt5a Live Imaging of Dorsal Root Axons after Rhizotomy Institutions: Shriners Hospitals Pediatric Research Center and Department of Anatomy and Cell Biology, Department of Veterans Affairs Hospital, Drexel University College of Medicine, Temple University School of Medicine. The primary sensory axons injured by spinal root injuries fail to regenerate into the spinal cord, leading to chronic pain and permanent sensory loss. Regeneration of dorsal root (DR) axons into spinal cord is prevented at the dorsal root entry zone (DREZ), the interface between the CNS and PNS. Our understanding of the molecular and cellular events that prevent regeneration at DREZ is incomplete, in part because complex changes associated with nerve injury have been deduced from postmortem analyses. Dynamic cellular processes, such as axon regeneration, are best studied with techniques that capture real-time events with multiple observations of each living animal. Our ability to monitor neurons serially in vivo has increased dramatically owing to revolutionary innovations in optics and mouse transgenics. Several lines of thy1-GFP transgenic mice, in which subsets of neurons are genetically labeled in distinct fluorescent colors, permit individual neurons to be imaged in vivo1 . These mice have been used extensively for in vivo imaging of muscle2-4 , and have provided novel insights into physiological mechanisms that static analyses could not have resolved. Imaging studies of neurons in living spinal cord have only recently begun. Lichtman and his colleagues first demonstrated their feasibility by tracking injured dorsal column (DC) axons with wide-field microscopy8,9 . Multi-photon in vivo imaging of deeply positioned DC axons, microglia and blood vessels has also been accomplished10 . Over the last few years, we have pioneered in applying in vivo imaging to monitor regeneration of DR axons using wide-field microscopy and H line of thy1-YFP mice. These studies have led us to a novel hypothesis about why DR axons are prevented from regenerating within the spinal cord11 In H line of thy1-YFP mice, distinct YFP+ axons are superficially positioned, which allows several axons to be monitored simultaneously. 
We have learned that DR axons arriving at the DREZ are better imaged in lumbar than in cervical spinal cord. In the present report we describe several strategies that we have found useful to ensure successful long-term and repeated imaging of regenerating DR axons. These include methods that eliminate repeated intubation and respiratory interruption, minimize surgery-associated stress and scar formation, and acquire stable images at high resolution without phototoxicity.

Neuroscience, Issue 55, in vivo imaging, dorsal root injury, wide field fluorescence microscope, laminectomy, spinal cord, Green fluorescence protein, transgenic mice, dorsal root ganglion, spinal root injury

Dissection and Culture of Mouse Dopaminergic and Striatal Explants in Three-Dimensional Collagen Matrix Assays
Institutions: University Medical Center Utrecht.

Midbrain dopamine (mdDA) neurons project via the medial forebrain bundle towards several areas in the telencephalon, including the striatum [1]. Reciprocally, medium spiny neurons in the striatum that give rise to the striatonigral (direct) pathway innervate the substantia nigra [2]. The development of these axon tracts is dependent upon the combinatorial actions of a plethora of axon growth and guidance cues, including molecules that are released by neurites or by (intermediate) target regions [3,4]. These soluble factors can be studied in vitro by culturing mdDA and/or striatal explants in a collagen matrix, which provides a three-dimensional substrate for the axons mimicking the extracellular environment. In addition, the collagen matrix allows for the formation of relatively stable gradients of proteins released by other explants or cells placed in the vicinity (e.g. see references 5 and 6). Here we describe methods for the purification of rat tail collagen, microdissection of dopaminergic and striatal explants, their culture in collagen gels, and subsequent immunohistochemical and quantitative analysis. First, the brains of E14.5 mouse embryos are isolated and dopaminergic and striatal explants are microdissected. These explants are then (co)cultured in collagen gels on coverslips for 48 to 72 hours in vitro. Subsequently, axonal projections are visualized using neuronal markers (e.g. tyrosine hydroxylase, DARPP32, or βIII tubulin) and axon growth and attractive or repulsive axon responses are quantified. This neuronal preparation is a useful tool for in vitro studies of the cellular and molecular mechanisms of mesostriatal and striatonigral axon growth and guidance during development. Using this assay, it is also possible to assess other (intermediate) targets for dopaminergic and striatal axons or to test specific molecular cues.

Neuroscience, Issue 61, Axon guidance, collagen matrix, development, dissection, dopamine, medium spiny neuron, rat tail collagen, striatum, striatonigral, mesostriatal

Setting Limits on Supersymmetry Using Simplified Models
Institutions: University College London, CERN, Lawrence Berkeley National Laboratories.

Experimental limits on supersymmetry and similar theories are difficult to set because of the enormous available parameter space and difficult to generalize because of the complexity of single points. Therefore, more phenomenological, simplified models are becoming popular for setting experimental limits, as they have clearer physical interpretations. The use of these simplified model limits to set a real limit on a concrete theory has not, however, been demonstrated.
This paper recasts simplified model limits into limits on a specific and complete supersymmetry model, minimal supergravity. Limits obtained under various physical assumptions are comparable to those produced by directed searches. A prescription is provided for calculating conservative and aggressive limits on additional theories. Using acceptance and efficiency tables along with the expected and observed numbers of events in various signal regions, LHC experimental results can be recast in this manner into almost any theoretical framework, including nonsupersymmetric theories with supersymmetry-like signatures (a minimal numerical sketch of this recasting step is given below).

Physics, Issue 81, high energy physics, particle physics, Supersymmetry, LHC, ATLAS, CMS, New Physics Limits, Simplified Models

An Engulfment Assay: A Protocol to Assess Interactions Between CNS Phagocytes and Neurons
Institutions: Boston Children's Hospital, Harvard Medical School.

Phagocytosis is a process in which a cell engulfs material (an entire cell, parts of a cell, debris, etc.) in its surrounding extracellular environment and subsequently digests this material, commonly through lysosomal degradation. Microglia are the resident immune cells of the central nervous system (CNS) whose phagocytic function has been described in a broad range of conditions, from neurodegenerative disease (e.g., beta-amyloid clearance in Alzheimer's disease) to development of the healthy brain (e.g., synaptic pruning). The following protocol is an engulfment assay developed to visualize and quantify microglia-mediated engulfment of presynaptic inputs in the developing mouse retinogeniculate system [7]. While this assay was used to assess microglia function in this particular context, a similar approach may be used to assess other phagocytes throughout the brain (e.g., astrocytes) and the rest of the body (e.g., peripheral macrophages), as well as other contexts in which synaptic remodeling occurs.

Neuroscience, Issue 88, Central Nervous System (CNS), Engulfment, Phagocytosis, Microglia, Synapse, Anterograde Tracing, Presynaptic Input, Retinogeniculate System

Simulating Pancreatic Neuroplasticity: In Vitro Dual-neuron Plasticity Assay
Institutions: Technische Universität München, University of Applied Sciences Kaiserslautern/Zweibrücken.

Neuroplasticity is an inherent feature of the enteric nervous system and gastrointestinal (GI) innervation under pathological conditions. However, the pathophysiological role of neuroplasticity in GI disorders remains unknown. Novel experimental models which allow simulation and modulation of GI neuroplasticity may enable enhanced appreciation of the contribution of neuroplasticity to particular GI diseases such as pancreatic cancer (PCa) and chronic pancreatitis (CP). Here, we present a protocol for simulation of pancreatic neuroplasticity under in vitro conditions using newborn rat dorsal root ganglia (DRG) and myenteric plexus (MP) neurons. This dual-neuron approach not only permits monitoring of both organ-intrinsic and -extrinsic neuroplasticity, but also represents a valuable tool to assess neuronal and glial morphology and electrophysiology. Moreover, it allows functional modulation of supplied microenvironmental contents for studying their impact on neuroplasticity. Once established, the present neuroplasticity assay bears the potential of being applicable to the study of neuroplasticity in any GI organ.
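The "Setting Limits on Supersymmetry Using Simplified Models" summary above describes recasting counting-experiment results using acceptance and efficiency tables. The sketch below illustrates only the arithmetic of that recasting step; the signal regions, acceptances, efficiencies, cross section, luminosity, and event limits are hypothetical placeholders, not values from the paper.

```python
# Minimal recasting sketch: compare a model point's predicted yield in each
# signal region with a model-independent 95% CL upper limit on signal events.
# All inputs below are illustrative assumptions.

def expected_signal_yield(cross_section_pb, luminosity_invfb, acceptance, efficiency):
    """N_sig = sigma [pb] * 1000 * L [fb^-1] * A * eps  (1 pb = 1000 fb)."""
    return cross_section_pb * 1000.0 * luminosity_invfb * acceptance * efficiency

signal_regions = {
    "SR-A": {"acceptance": 0.12, "efficiency": 0.80, "upper_limit_events": 25.0},
    "SR-B": {"acceptance": 0.05, "efficiency": 0.85, "upper_limit_events": 8.0},
}
cross_section_pb = 0.03   # hypothetical production cross section for one model point
luminosity_invfb = 20.0   # hypothetical integrated luminosity

for name, sr in signal_regions.items():
    n_sig = expected_signal_yield(cross_section_pb, luminosity_invfb,
                                  sr["acceptance"], sr["efficiency"])
    status = "excluded" if n_sig > sr["upper_limit_events"] else "allowed"
    print(f"{name}: expected signal = {n_sig:.1f} events -> {status}")
```

A conservative combination would exclude a model point only on the basis of the region with the best expected sensitivity; the loop above simply reports each region separately.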
Medicine, Issue 86, Autonomic Nervous System Diseases, Digestive System Neoplasms, Gastrointestinal Diseases, Pancreatic Diseases, Pancreatic Neoplasms, Pancreatitis, Pancreatic neuroplasticity, dorsal root ganglia, myenteric plexus, Morphometry, neurite density, neurite branching, perikaryonal hypertrophy, neuronal plasticity

Direct Imaging of ER Calcium with Targeted-Esterase Induced Dye Loading (TED)
Institutions: University of Wuerzburg, Max Planck Institute of Neurobiology, Martinsried, Ludwig-Maximilians University of Munich.

Visualization of calcium dynamics is important to understand the role of calcium in cell physiology. To examine calcium dynamics, synthetic fluorescent Ca2+ indicators have become popular. Here we demonstrate TED (targeted-esterase induced dye loading), a method to improve the release of Ca2+ indicator dyes in the ER lumen of different cell types. To date, TED has been used in cell lines, glial cells, and neurons in vitro. TED is based on efficient, recombinant targeting of high carboxylesterase activity to the ER lumen using vector constructs that express carboxylesterases (CES). The latest TED vectors contain a core element of CES2 fused to a red fluorescent protein, thus enabling simultaneous two-color imaging. The dynamics of free calcium in the ER are imaged in one color, while the corresponding ER structure appears in red. At the beginning of the procedure, cells are transduced with a lentivirus. Subsequently, the infected cells are seeded on coverslips to finally enable live cell imaging. Then, living cells are incubated with the acetoxymethyl ester (AM-ester) form of low-affinity Ca2+ indicators, for instance Fluo5N-AM, Mag-Fluo4-AM, or Mag-Fura2-AM. The esterase activity in the ER cleaves off the hydrophobic side chains from the AM form of the Ca2+ indicator, and a hydrophilic fluorescent dye/Ca2+ complex is formed and trapped in the ER lumen. After dye loading, the cells are analyzed at an inverted confocal laser scanning microscope. Cells are continuously perfused with Ringer-like solutions and the ER calcium dynamics are directly visualized by time-lapse imaging. Calcium release from the ER is identified by a decrease in fluorescence intensity in regions of interest, whereas the refilling of the ER calcium store produces an increase in fluorescence intensity. Finally, the change in fluorescence intensity over time is determined by calculating ΔF/F0 (a minimal computation sketch is given below).

Cellular Biology, Issue 75, Neurobiology, Neuroscience, Molecular Biology, Biochemistry, Biomedical Engineering, Bioengineering, Virology, Medicine, Anatomy, Physiology, Surgery, Endoplasmic Reticulum, ER, Calcium Signaling, calcium store, calcium imaging, calcium indicator, metabotropic signaling, Ca2+, neurons, cells, mouse, animal model, cell culture, targeted esterase induced dye loading, imaging

Isolation and Culture of Dissociated Sensory Neurons From Chick Embryos
Institutions: Assumption College.

Neurons are multifaceted cells that carry information essential for a variety of functions including sensation, motor movement, learning, and memory. Studying neurons in vivo can be challenging due to their complexity, their varied and dynamic environments, and technical limitations. For these reasons, studying neurons in vitro can prove beneficial to unravel the complex mysteries of neurons. The well-defined nature of cell culture models provides detailed control over environmental conditions and variables. Here we describe how to isolate, dissociate, and culture primary neurons from chick embryos.
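The TED protocol above quantifies ER calcium dynamics as ΔF/F0 within regions of interest. The following sketch shows one common way to compute that normalization from a single-ROI intensity trace; the baseline window length and the toy numbers are assumptions for illustration, not parameters from the protocol.

```python
import numpy as np

def delta_f_over_f0(trace, baseline_frames=20):
    """Return (F(t) - F0) / F0 for a 1-D fluorescence intensity trace,
    where F0 is the mean of the first `baseline_frames` frames."""
    trace = np.asarray(trace, dtype=float)
    f0 = trace[:baseline_frames].mean()
    return (trace - f0) / f0

# Toy trace: ER store release appears as a drop below baseline,
# store refilling as a recovery toward baseline.
toy_trace = np.concatenate([
    np.full(20, 100.0),         # baseline fluorescence
    np.linspace(100, 60, 30),   # Ca2+ release from the ER (intensity decreases)
    np.linspace(60, 100, 50),   # refilling of the store (intensity recovers)
])
dff = delta_f_over_f0(toy_trace, baseline_frames=20)
print(f"peak store depletion: {dff.min():.2f} dF/F0")
```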
This technique is rapid, inexpensive, and generates robustly growing sensory neurons. The procedure consistently produces cultures that are highly enriched for neurons and contain very few non-neuronal cells (less than 5%). Primary neurons do not adhere well to untreated glass or tissue culture plastic; therefore, detailed procedures to create two distinct, well-defined laminin-containing substrata for neuronal plating are described. Cultured neurons are highly amenable to multiple cellular and molecular techniques, including co-immunoprecipitation, live cell imaging, RNAi, and immunocytochemistry. Procedures for double immunocytochemistry on these cultured neurons have been optimized and are described here.

Neuroscience, Issue 91, dorsal root ganglia, DRG, chicken, in vitro, avian, laminin-1, embryonic, primary

Preparation of Primary Neurons for Visualizing Neurites in a Frozen-hydrated State Using Cryo-Electron Tomography
Institutions: Baylor College of Medicine, University of California at San Diego.

Neurites, both dendrites and axons, are neuronal cellular processes that enable the conduction of electrical impulses between neurons. Defining the structure of neurites is critical to understanding how these processes move materials and signals that support synaptic communication. Electron microscopy (EM) has traditionally been used to assess the ultrastructural features within neurites; however, the exposure to organic solvent during dehydration and resin embedding can distort structures. An important unmet goal is the formulation of procedures that allow for structural evaluations not impacted by such artifacts. Here, we have established a detailed and reproducible protocol for growing and flash-freezing whole neurites of different primary neurons on electron microscopy grids, followed by their examination with cryo-electron tomography (cryo-ET). This technique allows for 3-D visualization of frozen, hydrated neurites at nanometer resolution, facilitating assessment of their morphological differences. Our protocol yields an unprecedented view of dorsal root ganglion (DRG) neurites, and a visualization of hippocampal neurites in their near-native state. As such, these methods create a foundation for future studies on neurites of both normal neurons and those impacted by neurological disorders.

Neuroscience, Issue 84, Neurons, Cryo-electron Microscopy, Electron Microscope Tomography, Brain, rat, primary neuron culture, morphological assay

Inhibitory Synapse Formation in a Co-culture Model Incorporating GABAergic Medium Spiny Neurons and HEK293 Cells Stably Expressing GABAA Receptors
Institutions: University College London.

Inhibitory neurons act in the central nervous system to regulate the dynamics and spatio-temporal co-ordination of neuronal networks. GABA (γ-aminobutyric acid) is the predominant inhibitory neurotransmitter in the brain. It is released from the presynaptic terminals of inhibitory neurons within highly specialized intercellular junctions known as synapses, where it binds to GABAA receptors (GABAARs) present at the plasma membrane of the synapse-receiving, postsynaptic neurons. Activation of these GABA-gated ion channels leads to influx of chloride, resulting in postsynaptic potential changes that decrease the probability that these neurons will generate action potentials.
During development, diverse types of inhibitory neurons with distinct morphological, electrophysiological and neurochemical characteristics have the ability to recognize their target neurons and form synapses that incorporate specific GABAAR subtypes. This principle of selective innervation of neuronal targets raises the question as to how the appropriate synaptic partners identify each other. To elucidate the underlying molecular mechanisms, a novel in vitro co-culture model system was established, in which medium spiny GABAergic neurons, a highly homogenous population of neurons isolated from the embryonic striatum, were cultured with stably transfected HEK293 cell lines that express different GABAAR subtypes. Synapses form rapidly, efficiently and selectively in this system, and are easily accessible for quantification. Our results indicate that various GABAAR subtypes differ in their ability to promote synapse formation, suggesting that this reduced in vitro model system can be used to reproduce, at least in part, the in vivo conditions required for the recognition of the appropriate synaptic partners and formation of specific synapses. Here the protocols for culturing the medium spiny neurons and generating HEK293 cell lines expressing GABAARs are first described, followed by detailed instructions on how to combine these two cell types in co-culture and analyze the formation of synaptic contacts.

Neuroscience, Issue 93, Developmental neuroscience, synaptogenesis, synaptic inhibition, co-culture, stable cell lines, GABAergic, medium spiny neurons, HEK 293 cell line

High Efficiency Differentiation of Human Pluripotent Stem Cells to Cardiomyocytes and Characterization by Flow Cytometry
Institutions: Medical College of Wisconsin, Stanford University School of Medicine, Hong Kong University, Johns Hopkins University School of Medicine.

There is an urgent need to develop approaches for repairing the damaged heart, discovering new therapeutic drugs that do not have toxic effects on the heart, and improving strategies to accurately model heart disease. The potential of exploiting human induced pluripotent stem cell (hiPSC) technology to generate cardiac muscle "in a dish" for these applications continues to generate high enthusiasm. In recent years, the ability to efficiently generate cardiomyogenic cells from human pluripotent stem cells (hPSCs) has greatly improved, offering us new opportunities to model very early stages of human cardiac development not otherwise accessible. In contrast to many previous methods, the cardiomyocyte differentiation protocol described here does not require cell aggregation or the addition of Activin A or BMP4, and robustly generates cultures of cells that are highly positive for cardiac troponin I and T (TNNI3, TNNT2), iroquois-class homeodomain protein IRX-4 (IRX4), myosin regulatory light chain 2, ventricular/cardiac muscle isoform (MLC2v), and myosin regulatory light chain 2, atrial isoform (MLC2a) by day 10 across all human embryonic stem cell (hESC) and hiPSC lines tested to date. Cells can be passaged and maintained for more than 90 days in culture. The strategy is technically simple to implement and cost-effective. Characterization of cardiomyocytes derived from pluripotent cells often includes the analysis of reference markers, both at the mRNA and protein level.
For protein analysis, flow cytometry is a powerful analytical tool for assessing the quality of cells in culture and determining subpopulation homogeneity. However, technical variation in sample preparation can significantly affect the quality of flow cytometry data. Thus, standardization of staining protocols should facilitate comparisons among various differentiation strategies. Accordingly, optimized staining protocols for the analysis of IRX4, MLC2v, MLC2a, TNNI3, and TNNT2 by flow cytometry are described (a minimal gating sketch is given below).

Cellular Biology, Issue 91, human induced pluripotent stem cell, flow cytometry, directed differentiation, cardiomyocyte, IRX4, TNNI3, TNNT2, MLC2v, MLC2a

Analysis of Nephron Composition and Function in the Adult Zebrafish Kidney
Institutions: University of Notre Dame.

The zebrafish model has emerged as a relevant system to study kidney development, regeneration and disease. Both the embryonic and adult zebrafish kidneys are composed of functional units known as nephrons, which are highly conserved with other vertebrates, including mammals. Research in zebrafish has recently demonstrated that two distinctive phenomena transpire after adult nephrons incur damage: first, there is robust regeneration within existing nephrons that replaces the destroyed tubule epithelial cells; second, entirely new nephrons are produced from renal progenitors in a process known as neonephrogenesis. In contrast, humans and other mammals seem to have only a limited ability for nephron epithelial regeneration. To date, the mechanisms responsible for these kidney regeneration phenomena remain poorly understood. Since adult zebrafish kidneys undergo both nephron epithelial regeneration and neonephrogenesis, they provide an outstanding experimental paradigm to study these events. Further, there is a wide range of genetic and pharmacological tools available in the zebrafish model that can be used to delineate the cellular and molecular mechanisms that regulate renal regeneration. One essential aspect of such research is the evaluation of nephron structure and function. This protocol describes a set of labeling techniques that can be used to gauge renal composition and test nephron functionality in the adult zebrafish kidney. Thus, these methods are widely applicable to the future phenotypic characterization of adult zebrafish kidney injury paradigms, which include, but are not limited to, nephrotoxicant exposure regimes or genetic methods of targeted cell death such as the nitroreductase-mediated cell ablation technique. Further, these methods could be used to study genetic perturbations in adult kidney formation and could also be applied to assess renal status during chronic disease modeling.

Cellular Biology, Issue 90, zebrafish; kidney; nephron; nephrology; renal; regeneration; proximal tubule; distal tubule; segment; mesonephros; physiology; acute kidney injury (AKI)

Ex vivo Culturing of Whole, Developing Drosophila Brains
Institutions: National Institute of Neurological Disorders and Stroke, National Institutes of Health, Bethesda, MD.

We describe a method for ex vivo culturing of whole Drosophila brains. This can be used as a counterpoint to chronic genetic manipulations for investigating the cell biology and development of central brain structures by allowing acute pharmacological interventions and live imaging of cellular processes.
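The flow cytometry characterization described above reports the percentage of cells positive for each cardiac marker. One simple, commonly used convention is to set the positivity gate from a control sample (for example, a secondary-antibody-only or isotype control) and count stained cells above that gate; the sketch below illustrates this idea on synthetic intensities. The percentile cutoff and all numbers are assumptions for illustration, not part of the published protocol.

```python
import numpy as np

def percent_positive(stained, control, percentile=99.0):
    """Percentage of stained cells whose intensity exceeds a gate placed at the
    given percentile of the control sample's intensities."""
    gate = np.percentile(control, percentile)
    return 100.0 * np.mean(np.asarray(stained) > gate)

rng = np.random.default_rng(0)
control = rng.lognormal(mean=2.0, sigma=0.5, size=10_000)   # control staining
stained = np.concatenate([
    rng.lognormal(mean=2.0, sigma=0.5, size=3_000),          # marker-negative cells
    rng.lognormal(mean=4.0, sigma=0.5, size=7_000),          # marker-positive cells (e.g. TNNT2+)
])
print(f"{percent_positive(stained, control):.1f}% positive")
```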
As an example of the technique, prior work from our lab [1] has shown that a previously unrecognized subcellular compartment lies between the axonal and somatodendritic compartments of neurons of the Drosophila central brain. The development of this compartment, referred to as the axon initial segment (AIS) [2], was shown genetically to depend on the neuron-specific cyclin-dependent kinase, Cdk5. We show here that ex vivo treatment of wild-type Drosophila larval brains with the Cdk5-specific pharmacological inhibitors roscovitine and olomoucine [3] causes acute changes in actin organization, and in localization of the cell-surface protein Fasciclin 2, that mimic the changes seen in mutants that lack Cdk5 activity genetically. A second example of the ex vivo culture technique is provided for remodeling of the connections of embryonic mushroom body (MB) gamma neurons during metamorphosis from larva to adult. The mushroom body is the center of olfactory learning and memory in the fly [4], and these gamma neurons prune their axonal and dendritic branches during pupal development and then re-extend branches at a later timepoint to establish the adult innervation pattern [5]. Pruning of these neurons of the MB has been shown to occur via local degeneration of neurite branches [6], by a mechanism that is triggered by ecdysone, a steroid hormone, acting at the ecdysone receptor B1 [7], and that is dependent on the activity of the ubiquitin-proteasome system [6]. Our method of ex vivo culturing can be used to interrogate further the mechanism of developmental remodeling. We found that in the ex vivo culture setting, gamma neurons of the MB recapitulated the process of developmental pruning with a time course similar to that in vivo. It was essential, however, to wait until 1.5 hours after puparium formation before explanting the tissue in order for the cells to commit irreversibly to metamorphosis; dissection of animals at the onset of pupariation led to little or no metamorphosis in culture. Thus, with appropriate modification, the ex vivo culture approach can be applied to study dynamic as well as steady state aspects of central brain biology.

Neuroscience, Issue 65, Developmental Biology, Physiology, Drosophila, mushroom body, ex vivo, organ culture, pruning, pharmacology

DiI-Labeling of DRG Neurons to Study Axonal Branching in a Whole Mount Preparation of Mouse Embryonic Spinal Cord
Institutions: Max Delbrück Center for Molecular Medicine.

Here we present a technique to label the trajectories of small groups of DRG neurons into the embryonic spinal cord by diffusive staining using the lipophilic tracer 1,1'-dioctadecyl-3,3,3',3'-tetramethylindocarbocyanine perchlorate (DiI) [1]. The comparison of axonal pathways of wild-type mice with those of mouse lines in which genes are mutated allows testing for a functional role of candidate proteins in the control of axonal branching, which is an essential mechanism in the wiring of the nervous system. Axonal branching enables an individual neuron to connect with multiple targets, thereby providing the physical basis for the parallel processing of information. Ramifications at intermediate target regions of axonal growth may be distinguished from terminal arborization.
Furthermore, different modes of axonal branch formation may be classified depending on whether branching results from the activities of the growth cone (splitting or delayed branching) or from the budding of collaterals from the axon shaft in a process called interstitial branching [2]. The central projections of neurons from the DRG offer a useful experimental system to study both types of axonal branching: when their afferent axons reach the dorsal root entry zone (DREZ) of the spinal cord between embryonic days 10 and 13 (E10-E13), they display a stereotyped pattern of T- or Y-shaped bifurcation. The two resulting daughter axons then proceed in rostral or caudal directions, respectively, at the dorsolateral margin of the cord, and only after a waiting period do collaterals sprout from these stem axons to penetrate the gray matter (interstitial branching) and project to relay neurons in specific laminae of the spinal cord, where they further arborize (terminal branching) [3]. DiI tracings have revealed growth cones at the dorsal root entry zone of the spinal cord that appeared to be in the process of splitting, suggesting that bifurcation is caused by splitting of the growth cone itself [4]; however, other options have been discussed as well [5]. This video first demonstrates how to dissect the spinal cord of E12.5 mice, leaving the DRG attached. Following fixation of the specimen, tiny amounts of DiI are applied to the DRG using glass needles pulled from capillary tubes. After an incubation step, the labeled spinal cord is mounted as an inverted open-book preparation to analyze individual axons using fluorescence microscopy.

Neuroscience, Issue 58, neurons, axonal branching, DRG, Spinal cord, DiI labeling, cGMP signaling

Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Institutions: Princeton University.

The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and complexes for increased binding affinity. To disseminate these methods for broader use we present Protein WISDOM (https://www.proteinwisdom.org), a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is an optimization-based sequence selection stage that aims to improve stability through minimization of potential energy in sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage.
A rank-ordered list of the sequences for each step of the process, along with relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.

Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing

Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study
Institutions: RWTH Aachen University, Fraunhofer Gesellschaft.

Plants provide multiple benefits for the production of biopharmaceuticals including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches, thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors, such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations and step-wise design augmentation. Therefore, the methodology is not only useful for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems.

Bioengineering, Issue 83, design of experiments (DoE), transient protein expression, plant-derived biopharmaceuticals, promoter, 5'UTR, fluorescent reporter protein, model building, incubation conditions, monoclonal antibody

From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Institutions: Lawrence Berkeley National Laboratory.

Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization.
Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation. The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: resin-embedded stained electron tomography, and focused ion beam- and serial block face-scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.

Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding

Barnes Maze Testing Strategies with Small and Large Rodent Models
Institutions: University of Missouri, Food and Drug Administration.

Spatial learning and memory of laboratory rodents is often assessed via navigational ability in mazes, the most popular of which are the water and dry-land (Barnes) mazes. Improved performance over sessions or trials is thought to reflect learning and memory of the escape cage/platform location. Considered less stressful than water mazes, the Barnes maze is a relatively simple design of a circular platform top with several holes equally spaced around the perimeter edge. All but one of the holes are false-bottomed or blind-ending, while one leads to an escape cage. Mildly aversive stimuli (e.g. bright overhead lights) provide motivation to locate the escape cage. Latency to locate the escape cage can be measured during the session; however, additional endpoints typically require video recording. From those video recordings, use of automated tracking software can generate a variety of endpoints that are similar to those produced in water mazes (e.g. distance traveled, velocity/speed, time spent in the correct quadrant, time spent moving/resting, and confirmation of latency). Type of search strategy (i.e. random, serial, or direct) can be categorized as well. Barnes maze construction and testing methodologies can differ for small rodents, such as mice, and large rodents, such as rats. For example, while extra-maze cues are effective for rats, smaller wild rodents may require intra-maze cues with a visual barrier around the maze. Appropriate stimuli must be identified which motivate the rodent to locate the escape cage. Both Barnes and water mazes can be time consuming, as 4-7 test trials are typically required to detect improved learning and memory performance
(e.g., shorter latencies or path lengths to locate the escape platform or cage) and/or differences between experimental groups. Even so, the Barnes maze is a widely employed behavioral assessment measuring spatial navigational abilities and their potential disruption by genetic or neurobehavioral manipulations, or by drug/toxicant exposure.

Behavior, Issue 84, spatial navigation, rats, Peromyscus, mice, intra- and extra-maze cues, learning, memory, latency, search strategy, escape motivation
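The Barnes maze section above lists endpoints that tracking software derives from the animal's x,y trajectory. The sketch below computes three of them (distance traveled, mean speed, and latency to first reach the escape hole) from raw coordinates; the frame rate, the 5 cm "reached" radius, and the toy trajectory are assumptions for illustration rather than values from the protocol.

```python
import numpy as np

def barnes_maze_endpoints(x, y, escape_xy, fps=30.0, reach_radius_cm=5.0):
    """Distance traveled (cm), mean speed (cm/s), and latency (s) to first come
    within `reach_radius_cm` of the escape hole, from tracked coordinates in cm."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    step_lengths = np.hypot(np.diff(x), np.diff(y))
    distance = step_lengths.sum()
    duration = (len(x) - 1) / fps
    speed = distance / duration if duration > 0 else 0.0
    dist_to_escape = np.hypot(x - escape_xy[0], y - escape_xy[1])
    reached = np.flatnonzero(dist_to_escape <= reach_radius_cm)
    latency = reached[0] / fps if reached.size else float("nan")  # NaN if never reached
    return {"distance_cm": distance, "speed_cm_per_s": speed, "latency_s": latency}

# Toy trajectory: 10 seconds of frames moving straight toward a hole at (50, 0) cm.
frames = 301
x = np.linspace(0.0, 50.0, frames)
y = np.zeros(frames)
print(barnes_maze_endpoints(x, y, escape_xy=(50.0, 0.0)))
```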
Bone marrow stromal cells maintain the adult skeleton by forming osteoblasts throughout life that regenerate bone and repair fractures. We discovered that subsets of these stromal cells, osteoblasts, osteocytes, and hypertrophic chondrocytes secrete a C-type lectin domain protein, Clec11a, which promotes osteogenesis. Clec11a-deficient mice appeared developmentally normal and had normal hematopoiesis but reduced limb and vertebral bone. Clec11a-deficient mice exhibited accelerated bone loss during aging, reduced bone strength, and delayed fracture healing. Bone marrow stromal cells from Clec11a-deficient mice showed impaired osteogenic differentiation, but normal adipogenic and chondrogenic differentiation. Recombinant Clec11a promoted osteogenesis by stromal cells in culture and increased bone mass in osteoporotic mice in vivo. Recombinant human Clec11a promoted osteogenesis by human bone marrow stromal cells in culture and in vivo. Clec11a thus maintains the adult skeleton by promoting the differentiation of mesenchymal progenitors into mature osteoblasts. In light of this, we propose to call this factor Osteolectin.https://doi.org/10.7554/eLife.18782.001 Fate mapping studies in vivo show there are multiple distinct waves of mesenchymal progenitors that form skeletal tissues during development and then maintain the skeleton throughout adulthood (Liu et al., 2013; Maes et al., 2010; Mizoguchi et al., 2014; Park et al., 2012; Takashima et al., 2007; Worthley et al., 2015; Zhou et al., 2014a). These include Osterix+ cells that give rise to osteoblasts, osteocytes, and stromal cells in developing bones (Liu et al., 2013; Maes et al., 2010; Mizoguchi et al., 2014), Nestin-CreER-expressing cells that transiently form osteoblasts and bone marrow stromal cells in the early postnatal period (Méndez-Ferrer et al., 2010; Ono et al., 2014a; Takashima et al., 2007), Grem1-expressing cells that form osteoblasts, chondrocytes, and stromal cells postnatally (Worthley et al., 2015) and Leptin Receptor (LepR)-expressing stromal cells that are the major source of bone and adipocytes in adult mouse bone marrow (Mizoguchi et al., 2014; Zhou et al., 2014a). Osterix+ osteogenic progenitors also persist periosteally, on the outer surface of adult bones, where they help to repair bone injuries (Maes et al., 2010). Bone marrow stromal cells include skeletal stem cells (SSCs) as well as multiple other populations of mesenchymal progenitors (Chan et al., 2015). SSCs are multipotent progenitors that form fibroblast colonies in culture (CFU-F) with the potential to differentiate into osteoblasts, chondrocytes, and adipocytes (Bianco and Robey, 2015; Friedenstein et al., 1970). Bone marrow CFU-F can be identified based on the expression of CD146, CD271, VCAM-1, and Thy-1 in humans, or LepR, PDGFRα, CD51, and/or CD105 in mice, as well as the lack of expression of hematopoietic and endothelial markers (Chan et al., 2009, 2015; James et al., 2015; Mabuchi et al., 2013; Morikawa et al., 2009; Omatsu et al., 2010; Park et al., 2012; Sacchetti et al., 2007; Zhou et al., 2014a). CFU-F are enriched among bone marrow stromal cells that express high levels of the hematopoietic growth factors Scf (Zhou et al., 2014a) and Cxcl12 (Ding and Morrison, 2013; Omatsu et al., 2014; Sugiyama et al., 2006). 
Multiple growth factor families promote osteogenesis including Wnts (Cui et al., 2011; Krishnan et al., 2006), Bone Morphogenetic Proteins (BMPs) (Nakamura et al., 2007; Rahman et al., 2015), and Insulin-like Growth Factors (Yakar and Rosen, 2003). However, these factors have broad effects on many tissues, precluding their systemic administration to promote osteogenesis. Sclerostin, a Wnt signaling inhibitor that is locally produced by osteocytes, negatively regulates bone formation (Li et al., 2005). Sclerostin inhibitors can be administered systemically to promote bone formation (McClung et al., 2014). Factors secreted by bone marrow stromal cells promote osteogenesis (Chan et al., 2015), though the full repertoire of such factors remains to be identified.

Osteoporosis is a progressive bone disease characterized by decreased bone mass and increased fracture risk (Harada and Rodan, 2003). Aging, estrogen insufficiency, long-term glucocorticoid use, and mechanical unloading all contribute to the development of osteoporosis (Harada and Rodan, 2003). Most existing osteoporosis therapies involve antiresorptive agents, such as bisphosphonates (Black et al., 1996; Liberman et al., 1995) and estrogens (Michaelsson et al., 1998), which reduce the rate of bone loss but do not promote new bone formation. Teriparatide, a small peptide derived from human parathyroid hormone (PTH; amino acids 1–34), is used clinically to promote the formation of new bone (Neer et al., 2001). Nonetheless, some patients cannot take Teriparatide (Kraenzlin and Meier, 2011) and its use is limited to two years because of a potential risk of osteosarcoma (Neer et al., 2001).

Clec11a (C-type lectin domain family 11, member A) is a secreted sulfated glycoprotein that is expressed in the bone marrow and can promote colony formation by human hematopoietic progenitors in culture (Bannwarth et al., 1998, 1999; Hiraoka et al., 1997; Mio et al., 1998). The plasma level of human Clec11a correlates with hemoglobin level (Keller et al., 2009; Ouma et al., 2010) and increases in patients after bone marrow transplantation (Ito et al., 2003). As a result, Clec11a has been considered a hematopoietic growth factor. However, Clec11a is also expressed in skeletal tissues (Hiraoka et al., 2001) and the physiological function of Clec11a in vivo has not yet been tested.

Reanalysis of our published microarray data (Ding et al., 2012) revealed that among enzymatically dissociated bone marrow cells, Clec11a was significantly more highly expressed by Scf-GFP+CD45-Ter119-CD31- stromal cells (more than 90% of which are also LepR+ [Zhou et al., 2014a]) and Col2.3-GFP+CD45-Ter119-CD31- osteoblasts as compared to VE-cadherin+ endothelial cells and unfractionated cells (Figure 1A). By RNA sequencing, Clec11a transcripts were at least 100-fold more abundant in PDGFRα+CD45-Ter119-CD31- bone marrow stromal cells (more than 90% of which are LepR+ [Zhou et al., 2014a]) as compared to unfractionated bone marrow cells (Figure 1B). A systematic analysis of Clec11a expression in bone marrow cells by quantitative reverse transcription PCR (qRT-PCR) showed that Clec11a was highly expressed by LepR+CD45-Ter119-CD31- stromal cells and Col2.3-GFP+CD45-Ter119-CD31- osteoblasts but not by hematopoietic cells (Figure 1C). Clec11a was also expressed at a very low level by B cell progenitors in the bone marrow and T cells in the spleen (Figure 1C). Published gene expression profile data were consistent with our results.
RNA-seq analysis showed 80-fold higher levels of Clec11a in bone fragments that contain osteoblasts and osteocytes as compared to whole bone marrow or skeletal muscle cells (Ayturk et al., 2013). Clec11a expression is 9-fold higher in Grem1+ SSCs as compared to Grem1- stromal cells (see GSE57729 from [Worthley et al., 2015]). LepR+ stromal cells are enriched for Grem1 expression (see GSE33158 from [Ding et al., 2012]). To assess the Clec11a protein expression, we stained femur sections from eight week-old Prrx1-Cre; tdTomato reporter mice with a commercial polyclonal antibody against Clec11a. Prrx1-Cre recombines in limb bone marrow mesenchymal cells including SSCs, other bone marrow stromal cells, osteoblasts, osteocytes, and chondrocytes (Ding and Morrison, 2013; Greenbaum et al., 2013; Logan et al., 2002; Yue et al., 2016). In two month-old mice, most of the Clec11a staining was observed in and around the trabecular bone in the femur metaphysis (Figure 1F) as well as in cortical bone of the proximal femur (Figure 1H). This staining pattern was specific for Clec11a as we did not observe any staining in bone marrow sections from Clec11a deficient mice (Figure 1—figure supplement 1A and B; see below) or in sections stained with an isotype control in place of the anti-Clec11a antibody (Figure 1E,G, and I). Within the distal femur metaphysis of Prrx1-Cre; tdTomato mice we observed Clec11a staining adjacent to Tomato+ stromal cells in the bone marrow near trabecular bone (Figure 1Fi), Tomato+ osteoblasts lining trabecular bone surfaces (Figure 1Fii), and Aggrecan+ hypertrophic chondrocytes (Figure 1Fiii; Figure 1—figure supplement 1D). In the cortical bone matrix, we also observed Clec11a staining amongst osteocytes (Figure 1H). We did not detect Clec11a expression by bone marrow stromal cells, osteoblasts, or osteocytes in much of the diaphysis. Nonetheless, given the tendency of secreted growth factors to concentrate in regions of extracellular matrix, especially within the bone matrix (Hauschka et al., 1986, 1988; Mohan and Baylink, 1991), Clec11a may be more broadly expressed by bone marrow stromal cells than is evident from the antibody staining pattern. The position and morphology of the metaphyseal bone marrow stromal cells that were associated with Clec11a staining (Figure 1Fi) suggested that these cells included LepR+ cells. To test this, we stained femur sections from eight week-old Lepr-Cre; tdTomato mice with anti-Clec11a antibody. We observed Clec11a staining adjacent to a subset of Tomato+ stromal cells near trabecular bone in the metaphysis (Figure 1J) but not by Tomato+ stromal cells throughout most of the diaphysis. The Clec11a staining in LepR+ cells in the metaphysis (Figure 1J) was clearly above background (Figure 1I) but was dimmer than observed in and around bone matrix (Figure 1F and H). It is unclear whether this reflects the lower Clec11a expression by the LepR+ cells or whether Clec11a is bound and concentrated by bone matrix. We observed a similar Clec11a expression pattern in vertebrae, with anti-Clec11a antibody staining in and around the vertebral trabecular bone near the growth plate as well as in cortical bone (Figure 1—figure supplement 1C). Our data thus indicate that Clec11a is expressed by subsets of LepR+ bone marrow stromal cells, osteoblasts, and hypertrophic chondrocytes in the metaphysis as well as by osteocytes in certain regions of cortical bone. 
To test the physiological function of Clec11a we used CRISPR-Cas9 to generate a Clec11a mutant allele (Clec11a-/-) by deleting the second exon of Clec11a (Figure 1—figure supplement 1E). This was predicted to be a strong loss of function as exon 2 deletion introduced a frame shift that created a premature stop codon in exon 3 (Figure 1—figure supplement 1F). The predicted mutant protein did not contain any of the domains that are thought to be functionally important in Clec11a, including the polyglutamic acid sequence, the alpha-helical leucine zipper, or the C-type lectin domain (Figure 1—figure supplement 1F). Germline transmission of the mutant allele was confirmed by PCR and sequencing of genomic DNA (Figure 1—figure supplement 1G). Clec11a deficiency was also confirmed by qPCR analysis of bone marrow LepR+CD45-Ter119-CD31- cells from Clec11a-/- mice (Figure 1—figure supplement 1H), by Clec11a cDNA sequencing to confirm exon 2 deletion (Figure 1—figure supplement 1I), and by the loss of Clec11a from the plasma of Clec11a-/- mice (Figure 1—figure supplement 1J). Immunofluorescence analysis of femur sections with an anti-Clec11a polyclonal antibody suggested a complete loss of Clec11a protein from Clec11a-/- mice (Figure 1—figure supplement 1A and B).

Clec11a-/- mice were born at the expected Mendelian frequency (Figure 1—figure supplement 1K) and appeared grossly normal (Figure 1K), with normal body mass at 2 and 10 months of age (Figure 1L). White blood cell, red blood cell, and platelet counts were normal in 2, 10, and 16 month-old Clec11a-/- mice (Figure 1—figure supplement 1L–N). Two and 10-month old Clec11a-/- mice also had normal bone marrow and spleen cellularity (Figure 1M), as well as normal frequencies of Mac1+Gr1+ myeloid cells, Ter119+CD71+ erythroid progenitors, CD3+ T cells, and B220+ B cells in the bone marrow and spleen (Figure 1N–Q). Clec11a-/- mice had normal frequencies of CD150+CD48-Lineage-Sca-1+c-kit+ HSCs (Kiel et al., 2005) and CD150-CD48-Lin-Sca-1+c-kit+ multipotent progenitors (MPPs) (Kiel et al., 2008; Oguro et al., 2013) in the bone marrow and spleen (Figure 1R and S), as well as normal frequencies of CD34+FcγR+Lin-Sca-1-c-kit+ granulocyte-macrophage progenitors (GMPs), CD34-FcγR-Lin-Sca-1-c-kit+ megakaryocyte-erythrocyte progenitors (MEPs), CD34+FcγR-Lin-Sca-1-c-kit+ common myeloid progenitors (CMPs) (Akashi et al., 2000) and Flt3+IL7Rα+Lin-Sca-1low c-kitlow common lymphoid progenitors (CLPs) (Kondo et al., 1997) in the bone marrow (Figure 1T). Bone marrow from two month-old Clec11a-/- mice gave long-term multilineage reconstitution upon transplantation into irradiated mice with normal levels of donor cell reconstitution (Figure 1—figure supplement 1O–R). Clec11a is therefore not required for normal hematopoiesis in adult mice.

Human recombinant Clec11a increases erythroid (BFU-E) and myeloid (CFU-G/M/GM) colony formation by human bone marrow cells when added to culture along with EPO or GM-CSF, respectively (Hiraoka et al., 1997, 2001). In cultures of mouse bone marrow cells, recombinant mouse Clec11a did not significantly increase BFU-E colony formation when added along with EPO and only slightly increased CFU-G/M/GM colony formation when added along with GM-CSF (Figure 1—figure supplement 1S and T). To test whether Clec11a regulates osteogenesis we performed micro-CT analysis of the distal femur from sex-matched littermates.
In no case did we observe any significant difference between Clec11a+/+ and Clec11a+/- mice (data not shown), so samples from these mice were combined as controls. We always compared sex-matched littermates within individual experiments, using paired statistical tests to assess the significance of differences across multiple independent experiments (a schematic illustration of this paired comparison is given at the end of this section). Trabecular bone volume was significantly reduced (by 24 ± 18%) in two month-old Clec11a-/- mice as compared to littermate controls (Figure 2A and D). The Clec11a-/- mice had significantly reduced trabecular bone thickness, increased trabecular spacing, and decreased connectivity density and bone mineral density (Figure 2D–I). With the exception of the reduction in bone mineral density, these defects seemed to worsen with age, as 10 and 16 month-old Clec11a-/- mice exhibited more profound reductions in trabecular bone volume (62 ± 27% and 64 ± 11%, respectively), trabecular number, trabecular thickness and connectivity density, as well as increased trabecular spacing (Figure 2B–H).

MicroCT analysis of cortical bone parameters in the femur diaphysis from sex-matched littermates did not show significant differences between Clec11a-/- and control mice at 2 or 10 months of age (Figure 2—figure supplement 1A and B). However, 16 month-old Clec11a-/- mice exhibited significantly reduced cortical bone area, cortical area/total area ratio, and cortical thickness as compared to controls (Figure 2—figure supplement 1C–H). The femur length was slightly but significantly reduced in 2, 10 and 16 month-old Clec11a-/- mice as compared to littermate controls (Figure 2—figure supplement 1I). When we tested the mechanical strength of bones using a three-point bending test, we found significantly reduced peak load and fracture energy in the femur diaphysis of 2, 10, and 16 month-old Clec11a-/- as compared to sex-matched littermate control mice (Figure 2—figure supplement 1J–L).

Micro-CT analysis of L3 vertebrae as a whole, including both cortical and trabecular bone, showed a trend toward reduced bone volume in Clec11a-/- as compared to sex-matched littermate controls, though the difference was not statistically significant (Figure 2—figure supplement 1M–R). Trabecular bone volume was significantly reduced in the L3 vertebral body of Clec11a-/- as compared to controls at 2, 10, and 16 months of age (Figure 2J–M). We also observed significantly reduced trabecular number and significantly increased trabecular spacing in L3 vertebrae from 2, 10 and 16 month-old Clec11a-/- as compared to littermate controls (Figure 2N–P). Clec11a is therefore required to maintain limb and vertebral bone.

Alizarin red/alcian blue double staining at postnatal day 3 did not reveal any significant differences between Clec11a-/- and littermate control mice (Figure 2—figure supplement 1S). This suggests that Clec11a is not required for fetal skeletal development. We performed calcein double labeling to assess the rate of trabecular bone formation (Figure 2S). The trabecular bone mineral apposition and trabecular bone formation rates were both significantly decreased in the femur metaphysis of 2 and 10 month-old Clec11a-/- as compared to sex-matched littermate control mice (Figure 2T and U; 16 month-old mice were not assessed in these experiments). In contrast, the urinary bone resorption marker deoxypyridinoline did not significantly differ between Clec11a-/- and littermate controls (Figure 2V).
This suggests that the difference in trabecular bone volume between Clec11a-/- and littermate control mice reflected reduced bone formation, not a change in bone resorption.

To assess the mechanism by which Clec11a promotes osteogenesis in vivo we used multiple approaches to test whether it regulates the maintenance, proliferation, or differentiation of mesenchymal progenitors in the bone marrow. The frequency of LepR+CD45-Ter119-Tie2- stromal cells did not significantly differ between the bone marrow of Clec11a-/- and littermate control mice (Figure 3—figure supplement 1B). We also did not detect any difference in the rate of BrdU incorporation by these cells in vivo (Figure 3—figure supplement 1H) or the percentage of these cells that stained positively for activated caspase 3/7 (data not shown). We also examined a series of mesenchymal stem and progenitor cell populations that had been identified in a prior study of postnatal day 3 bone marrow based on expression of CD51 and other markers (Chan et al., 2015). In 10 month-old bone marrow we were able to identify 5 of the 8 cell populations that had been identified in the neonatal bone marrow (Figure 3—figure supplement 1A). All of these cell populations were uniformly or nearly uniformly positive for LepR expression, as nearly all CD51+ bone marrow stromal cells were LepR+ and vice versa (Figure 3—figure supplement 1A). Consistent with the data on LepR+ cells above, we did not detect any effect of Clec11a deficiency on the frequency of these cell populations (Figure 3—figure supplement 1C–G), their rate of BrdU incorporation (Figure 3—figure supplement 1I–M), or the percentage of cells that stained positively for activated caspase 3/7 (data not shown). Clec11a is therefore not required in vivo for the maintenance, survival, or proliferation of bone marrow mesenchymal progenitors.

To functionally assess this conclusion, we cultured enzymatically dissociated femur bone marrow cells from 2 and 10 month-old Clec11a-/- and sex-matched littermate control mice at clonal density. We observed no difference in the frequency of cells that formed CFU-F colonies or in the number of cells per colony (Figure 2—figure supplement 1T and U). Clec11a is therefore not required for the maintenance of CFU-F in vivo or for their proliferation in culture.

To test whether Clec11a is required for the differentiation of mesenchymal progenitors, we cultured CFU-F from Clec11a-/- and littermate control bone marrow at clonal density, then replated equal numbers of Clec11a-/- or control cells into osteogenic, adipogenic, or chondrogenic culture conditions. Consistent with the decreased osteogenesis in vivo, bone marrow stromal cells from Clec11a-/- mice gave rise to significantly fewer cells with alkaline phosphatase (a marker of mature osteoblasts and pre-adipocytes) or alizarin red (a marker of mineralization by mature osteoblasts) staining as compared to control cells under osteogenic culture conditions (Figure 3A–D). qRT-PCR analysis of cells in these cultures showed that Clec11a deficiency did not significantly affect the expression of Sp7 (Osterix), Runx2, or Col1a1 but did significantly reduce the expression of Integrin binding sialoprotein (Ibsp) and Dentin matrix protein 1 (Dmp1) (Figure 3—figure supplement 1N). Sp7, Runx2, and Col1a1 are broadly expressed by immature osteogenic progenitors and osteoblasts (Jikko et al., 1999; Kulterer et al., 2007; Nakashima et al., 2002) whereas Ibsp and Dmp1 mark mature osteoblasts (Kalajzic et al., 2005).
Clec11a was thus required for the differentiation of bone marrow mesenchymal progenitors into mature osteoblasts. Under adipogenic (Figure 3E and F) and chondrogenic (Figure 3G and H) culture conditions we did not detect any difference between Clec11a-/- and control cells in terms of Oil Red O or Toluidine blue staining. Consistent with this, the number of Perilipin+ adipocytes (Figure 3I–K) and Safranin O+ chondrocytes (Figure 3L–N) in femur sections from two month-old mice did not differ between Clec11a-/- and sex-matched littermate control mice. However, we did observe significantly more adipocytes in femur sections from 10 month-old Clec11a-/- mice as compared to littermate controls (Figure 3—figure supplement 2A–C). The number of chondrocytes did not significantly differ between Clec11a-/- and littermate controls at 10 months of age (Figure 3—figure supplement 2D–F).

To test whether Clec11a is necessary for the differentiation of mesenchymal progenitors into mature osteoblasts in vivo, we cultured CFU-F from adult bone marrow at clonal density, then seeded equal numbers of Clec11a-/- or control cells into collagen sponges and transplanted them subcutaneously into immunocompromised NSG mice (Figure 3—figure supplement 1O). Eight weeks after transplantation, ossicles generated from Clec11a-/- cells contained significantly less bone as compared to the ossicles generated from control cells (Figure 3—figure supplement 1P). In contrast, the number of adipocytes per unit area did not differ between ossicles containing Clec11a-/- as compared to control cells (Figure 3—figure supplement 1Q).

We performed mid-diaphyseal femur fractures in two month-old Clec11a-/- and sex-matched littermate controls. Two weeks later, Clec11a-/- mice had significantly less callus bone around the fracture site (Figure 4A) and significantly more callus cartilage (Figure 4B) as compared to controls, suggesting delayed endochondral ossification. MicroCT analysis of the callus at the fracture site two weeks after the fracture revealed significantly reduced trabecular bone volume, trabecular number, trabecular thickness, and trabecular connectivity density (Figure 4C–H) and significantly increased trabecular spacing (Figure 4I) in Clec11a-/- bones. The bone mineral density in the callus did not significantly differ between Clec11a-/- and control mice (Figure 4J). The callus volume, diameter, and polar moment of inertia were significantly increased in Clec11a-/- mice as compared to littermate controls (Figure 4K–M), further suggesting that fracture healing was compromised in Clec11a-/- mice (O'Neill et al., 2012).

To test whether Clec11a is sufficient to promote osteogenesis we constructed a HEK293 cell line that stably expressed mouse Clec11a with a C-terminal Flag tag. We affinity purified recombinant Clec11a (rClec11a) from the culture medium using anti-Flag M2 beads. Wild-type bone marrow cells were cultured to form CFU-F, then replated and grown under osteogenic culture conditions. Addition of rClec11a to these cultures significantly increased alizarin red staining, suggesting increased mineralization (Figure 5A and B). Addition of rClec11a also rescued osteogenesis by Clec11a-/- bone marrow stromal cells at 7 and 14 days after induction of differentiation (Figure 5—figure supplement 1J–M). Transient expression of mouse Clec11a cDNA in the MC3T3-E1 mouse pre-osteoblast cell line (Wang et al., 1999) increased osteogenic differentiation by these cells in culture (Figure 5—figure supplement 1H and I).
To test whether Clec11a is sufficient to promote the differentiation of bone marrow mesenchymal progenitors into mature osteoblasts, we sorted LepR+CD105+CD45-Ter119-CD31- cells from wild-type mice into recombinant Clec11a (rClec11a)-containing and control cultures at clonal density. Addition of rClec11a to these cultures did not significantly affect the percentage of cells that formed CFU-F or the number of cells per colony (Figure 5—figure supplement 1A and B). Upon induction of differentiation, rClec11a significantly increased the number of alkaline phosphatase-positive osteoblasts per colony (Figure 5—figure supplement 1D). This appeared to reflect a promotion of differentiation, as rClec11a did not significantly affect the frequencies of dividing osteoblasts (Figure 5—figure supplement 1E) or osteoblasts undergoing apoptosis within the colonies (Figure 5—figure supplement 1F). Consistent with the experiments above, addition of rClec11a did not significantly affect Sp7, Runx2, or Col1a1 expression by cells within these cultures, but it did significantly increase the expression of Ibsp and Dmp1 (Figure 5—figure supplement 1G). Clec11a is thus sufficient to promote the differentiation of bone marrow mesenchymal progenitors into mature osteoblasts.

To test whether rClec11a promotes osteogenesis in vivo, we administered daily subcutaneous injections of rClec11a to two month-old wild-type mice for 28 days. Consistent with the in vitro data, rClec11a dose-dependently increased trabecular bone volume in the distal femur metaphysis (Figure 5C and D). The higher doses of rClec11a also significantly increased trabecular number and reduced trabecular spacing (Figure 5E–G). The increased osteogenesis was associated with a significantly increased mineralized bone surface (Figure 5H) and bone formation rate (Figure 5I), but did not affect bone resorption (Figure 5J). Cortical bone parameters in the femur diaphysis were not affected by rClec11a in these experiments (Figure 5—figure supplement 1N–S). rClec11a thus promotes osteogenesis in wild-type mice in vivo. To test whether administration of rClec11a can rescue the bone loss phenotype in Clec11a-/- mice, we administered daily subcutaneous injections of 50 µg/kg rClec11a to six month-old Clec11a-/- mice for 28 days. This restored plasma Clec11a to control levels (data not shown). Consistent with this, rClec11a-treated Clec11a-/- mice exhibited significantly increased trabecular bone volume (Figure 5K and L), trabecular number, trabecular thickness, connectivity density, and decreased trabecular spacing (Figure 5M–Q). After rClec11a administration to Clec11a-/- mice, trabecular bone parameters were similar to those in normal control mice.

Ovariectomy in adult mice induces osteoporosis by increasing bone resorption (Rodan and Martin, 2000). We ovariectomized mice at two months of age, then administered daily subcutaneous injections of recombinant human parathyroid hormone (PTH) fragment 1–34, rClec11a, or vehicle for 28 days before analysis by microCT. MicroCT analysis showed that trabecular bone volume and number were significantly reduced in ovariectomized mice (Figure 6A–C) while trabecular spacing was significantly increased (Figure 6E). Daily administration of PTH to ovariectomized mice significantly increased trabecular bone volume (Figure 6B) and trabecular number (Figure 6C), while reducing trabecular spacing (Figure 6E). Ovariectomy did not significantly affect the plasma Clec11a level (data not shown). 
Daily administration of rClec11a to ovariectomized mice significantly increased trabecular bone volume (Figure 6B) and trabecular number (Figure 6C), while reducing trabecular spacing (Figure 6E). Cortical bone parameters were not significantly changed by PTH or Clec11a administration in these experiments (Figure 6—figure supplement 1A–F). rClec11a can therefore prevent the loss of trabecular bone in ovariectomized mice. Consistent with the fact that ovariectomy increases bone resorption (Harada and Rodan, 2003), the urinary bone resorption marker deoxypyridinoline was significantly increased in ovariectomized mice as compared to sham operated controls (Figure 6F). Administration of rClec11a or PTH did not significantly affect deoxypyridinoline levels (Figure 6F) or numbers of osteoclasts (Figure 6—figure supplement 1G) in ovariectomized mice. However, based on calcein double labeling and histomorphometry analysis, the trabecular bone formation rate (Figure 6G) and the number of osteoblasts associated with trabecular bones (Figure 6—figure supplement 1H) were significantly increased by rClec11a or PTH administration. rClec11a thus prevented the loss of trabecular bone in ovariectomized mice by promoting bone formation. We also assessed the effect of rClec11a on a model of secondary osteoporosis in which bone loss was induced by dexamethasone injection, mimicking glucocorticoid-induced osteoporosis in humans (McLaughlin et al., 2002; Weinstein et al., 1998). Daily intraperitoneal administration of 20 mg/kg dexamethasone for four weeks in mice significantly reduced lymphocyte numbers in the blood without significantly affecting neutrophil or monocyte counts (Figure 6—figure supplement 2A–D). MicroCT analysis of the distal femur metaphysis showed significantly reduced trabecular bone volume and thickness in the dexamethasone-treated as compared to vehicle-treated mice (Figure 6H–L). Treatment of dexamethasone-treated mice with PTH significantly increased trabecular bone volume, trabecular number, and trabecular thickness while significantly reducing trabecular spacing (Figure 6H–L). Dexamethasone treatment did not significantly affect plasma Clec11a levels (Figure 6—figure supplement 2K), but administration of rClec11a to dexamethasone-treated mice significantly increased trabecular bone volume and trabecular number while significantly reducing trabecular spacing (Figure 6H–L). Dexamethasone treatment also significantly reduced cortical thickness but neither PTH nor rClec11a rescued this effect in these experiments (Figure 6—figure supplement 2E–J). Consistent with the fact that dexamethasone reduces bone formation (Harada and Rodan, 2003), the rate of trabecular bone formation based on calcein double labeling (Figure 6M) and the numbers of osteoblasts in trabecular bone (Figure 6—figure supplement 2L) were significantly reduced in dexamethasone-treated as compared to vehicle-treated mice. Administration of PTH or rClec11a significantly increased the trabecular bone formation rate (Figure 6M) and the number of osteoblasts (Figure 6—figure supplement 2L) in dexamethasone-treated mice. As expected, dexamethasone treatment, or administration of PTH or rClec11a, did not significantly affect deoxypyridinoline levels (Figure 6N) or osteoclast numbers (Figure 6—figure supplement 2M). rClec11a thus prevented the loss of trabecular bone in dexamethasone-treated mice by promoting bone formation. 
To test whether rClec11a could reverse bone loss after the onset of osteoporosis, we ovariectomized two month-old wild-type mice and waited for four weeks before administering PTH or rClec11a daily for another four weeks. MicroCT analysis showed that trabecular and cortical bone volumes as well as trabecular number were significantly reduced in ovariectomized mice (Figure 7A–C, and Figure 7—figure supplement 1C). Trabecular spacing was significantly increased in ovariectomized mice (Figure 7E). Daily administration of PTH to ovariectomized mice increased trabecular bone volume (Figure 7B) and significantly reduced trabecular spacing relative to untreated ovariectomized mice (Figure 7E). PTH treatment also significantly increased cortical area (Figure 7—figure supplement 1C) and cortical thickness (Figure 7—figure supplement 1E). Daily administration of rClec11a to ovariectomized mice significantly increased trabecular bone volume (Figure 7B) and trabecular number (Figure 7C), while reducing trabecular spacing (Figure 7E) relative to untreated ovariectomized mice. rClec11a did not significantly affect cortical area (Figure 7—figure supplement 1C) or cortical thickness (Figure 7—figure supplement 1E) in ovariectomized mice. rClec11a can thus reverse trabecular bone loss after the onset of ovariectomy-induced osteoporosis.

To test whether human Clec11a also promotes osteogenesis, we constructed a HEK293 cell line that stably expressed human Clec11a with a C-terminal Flag tag, then affinity purified recombinant human Clec11a (rhClec11a) from the culture medium. Addition of rhClec11a to human bone marrow stromal cells cultured under osteogenic culture conditions significantly increased osteoblast differentiation based on alkaline phosphatase (Figure 7H and I) and alizarin red staining (Figure 7J and K). To test whether rhClec11a promotes osteogenesis by human bone marrow stromal cells in vivo, we subcutaneously transplanted a suspension of human bone marrow stromal cells, hydroxyapatite/tricalcium phosphate particles, and fibrin gel into immunocompromised NSG mice (Bianco et al., 2006) and administered daily subcutaneous injections of rhClec11a or vehicle for 4 or 8 weeks. rhClec11a significantly accelerated bone formation in the ossicles after four weeks and significantly increased bone formation in the ossicles after eight weeks (Figure 7L–O). rhClec11a thus promotes osteogenesis by human bone marrow stromal cells in vivo.

We have identified a new osteogenic factor, Clec11a, which maintains the adult skeleton by promoting osteogenesis. Clec11a was necessary and sufficient to promote osteogenesis in culture and in vivo. Clec11a deficiency significantly reduced bone volume in both limb bones and vertebrae of adult mice (Figure 2). Clec11a was expressed by subsets of bone marrow stromal cells, osteoblasts, osteocytes, and hypertrophic chondrocytes, particularly in the metaphysis, but also in portions of cortical bone (Figure 1). In light of the unanticipated osteogenic activity of Clec11a, and its role in the maintenance of the adult skeleton, we propose to call this growth factor Osteolectin, a name that is more descriptive of both its biological function and its protein structure. Clec11a/Osteolectin appears to promote bone formation by promoting the differentiation of mesenchymal progenitors into mature osteoblasts. 
Clec11a/Osteolectin-deficient bone marrow stromal cells formed significantly fewer osteoblasts and less mineralized bone matrix in culture (Figure 3A–D), with significantly lower expression of the mature osteoblast markers Ibsp and Dmp1 (Figure 3—figure supplement 1N). Addition of recombinant Clec11a/Osteolectin to cultures of wild-type bone marrow stromal cells significantly increased the formation of osteoblasts (Figure 5—figure supplement 1D) as well as Ibsp and Dmp1 expression (Figure 5—figure supplement 1G). Clec11a/Osteolectin-deficient bone marrow stromal cells also formed less bone in the ossicles in vivo (Figure 3—figure supplement 1O–Q). Recombinant Clec11a/Osteolectin promoted bone formation in the ossicles in vivo (Figure 7L–O). In contrast to its effects on osteogenic differentiation, Clec11a/Osteolectin deficiency did not affect the frequency, proliferation, or survival of bone marrow mesenchymal progenitors in vivo (Figure 3—figure supplement 1B–M). Recombinant Clec11a/Osteolectin also did not have any effect on the frequency of CFU-F that formed colonies in culture (Figure 5—figure supplement 1A) or the number of cells per colony (Figure 5—figure supplement 1B). We conclude that Clec11a/Osteolectin promotes the osteogenic differentiation of mesenchymal progenitors but not their proliferation or survival.

Hypertrophic chondrocytes can also transdifferentiate into osteoblasts and osteocytes during endochondral ossification (Ono et al., 2014b; Yang et al., 2014; Zhou et al., 2014b). Thus, in addition to promoting the differentiation of mesenchymal progenitors into mature osteoblasts, Clec11a/Osteolectin might also promote the transdifferentiation of hypertrophic chondrocytes into osteoblasts. Lineage tracing studies will be required in the future to test this. The receptor for Clec11a/Osteolectin has not yet been identified, limiting our ability to study the signaling mechanisms by which it promotes osteogenesis. We do observe binding of Flag-tagged Clec11a/Osteolectin to the surface of osteogenic cell lines (data not shown). We hypothesize that Clec11a/Osteolectin promotes the osteogenic differentiation of mesenchymal progenitors by binding to a signaling receptor on the surface of these cells. Identification of this receptor will require significant additional work beyond the scope of the current study.

Phylogenetic analysis showed that Clec11a/Osteolectin is most closely related to Clec3b/Tetranectin. Tetranectin expression increases during mineralization by osteogenic progenitors in culture and overexpression of Tetranectin in PC12 cells increases the bone content of tumors formed by these cells (Wewer et al., 1994). Tetranectin-deficient mice exhibit kyphosis as a result of asymmetric growth plate development in vertebrae (Iba et al., 2001); however, it is unknown whether Tetranectin is required for osteogenesis in vivo. Tetranectin is found in both cartilaginous fish and bony fish but Clec11a/Osteolectin is only found in bony fish and higher vertebrate species. This suggests that Clec11a/Osteolectin evolved in bony species to promote osteogenic differentiation and mineralization. Among mammals, Clec11a/Osteolectin is highly conserved: human and mouse Clec11a/Osteolectin proteins are 85% identical and 90% similar. Consistent with this, recombinant human Clec11a/Osteolectin promoted osteogenesis by human bone marrow stromal cells in culture and in vivo (Figure 7H–O). 
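The 85% identity and 90% similarity figures above are standard pairwise-alignment statistics. As a rough illustration of how such numbers are derived, here is a minimal Python sketch over a pre-aligned pair of short placeholder sequences; the real mouse and human Clec11a/Osteolectin alignment and the exact scoring scheme behind the published figures are not reproduced here, and the "similar residue" groups below are only a common heuristic.

```python
# Percent identity and percent similarity between two ALREADY-ALIGNED
# protein sequences (gaps written as '-'). Placeholder sequences only.

# Conservative substitution groups: a common heuristic, assumed here,
# not the exact scoring used in the paper.
SIMILAR_GROUPS = [set("ILVM"), set("FWY"), set("KRH"), set("DE"),
                  set("ST"), set("NQ"), set("AG"), set("C"), set("P")]

def percent_identity_similarity(aln_a: str, aln_b: str):
    assert len(aln_a) == len(aln_b), "sequences must be pre-aligned"
    aligned = identical = similar = 0
    for a, b in zip(aln_a, aln_b):
        if a == "-" or b == "-":
            continue                      # skip gapped columns
        aligned += 1
        if a == b:
            identical += 1
            similar += 1
        elif any(a in g and b in g for g in SIMILAR_GROUPS):
            similar += 1
    return 100 * identical / aligned, 100 * similar / aligned

# Toy example with made-up sequences:
ident, sim = percent_identity_similarity("MKTLLVACD-E", "MKSLLVACDGE")
print(f"identity {ident:.0f}%, similarity {sim:.0f}%")
```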
To generate Clec11a-/- mice, Cas9 mRNA and sgRNAs were transcribed using mMESSAGE mMACHINE T7 Ultra Kit and MEGAshortscript Kit (Ambion), purified by MEGAclear Kit (Ambion), and microinjected into C57BL/6 zygotes by the Transgenic Core Facility of the University of Texas Southwestern Medical Center (UTSW). Chimeric mice were genotyped by restriction fragment length polymorphism (RFLP) analysis and backcrossed onto a C57BL/Ka background to obtain germline transmission. Mutant mice were backcrossed onto a C57BL/Ka background for 3 to 6 generations prior to analysis. Wild-type C57BL/Ka mice were used for rClec11a injection, ovariectomy, and dexamethasone injection experiments. All procedures were approved by the UTSW Institutional Animal Care and Use Committee (Animal protocol number: 2016–101334-G). The cell lines used in this study included HEK293 and MC3T3-E1 (Subclone 4), which were obtained from ATCC and authenticated by STR profiling. They were shown to be free of mycoplasma contamination. Antibodies used to analyze hematopoietic stem cells (HSCs) and multipotent hematopoietic progenitors (MPPs) included anti-CD150-PE-Cy5 (BioLegend, clone TC15-12F12.2, 1:200), anti-CD48-FITC (eBioscience, clone HM48-1, 1:200), anti-Sca-1-PEcy7 (eBioscience, E13-161.7, 1:200), anti-c-Kit-APC-eFluor780 (eBioscience, clone 2B8, 1:200) and the following antibodies against lineage markers: anti-Ter119-PE (eBioscience, clone TER-119, 1:200), anti-B220-PE (BioLegend, clone 6B2, 1:400), anti-Gr-1-PE (BioLegend, clone 8C5, 1:800), anti-CD2-PE (eBioscience, clone RM2-5, 1:200), anti-CD3-PE (BioLegend, clone 17A2, 1:200), anti-CD5-PE (BioLegend, clone 53–7.3, 1:400) and anti-CD8-PE (eBioscience, clone 53–6.7, 1:400). The following antibodies were used to identify restricted hematopoietic progenitors: anti-CD34-FITC (eBioscience, clone RAM34, 1:100); anti-CD16/32 Alexa Fluor 700 (eBioscience, clone 93, 1:200); anti-CD135-PEcy5 (eBioscience, clone A2F10, 1:100); anti-CD127-Biotin (BioLegend, clone A7R34, 1:200) + Streptavidin-PE-CF592 (BD Biosciences, 1:500); anti-cKit-APC-eFluor780 (eBioscience, clone 2B8, 1:200); anti-ScaI-PEcy7 (eBioscience, clone E13-161.7, 1:200) and lineage markers listed above. The following antibodies were used to identify differentiated cells: anti-CD71-FITC (BD Biosciences, clone C2, 1:200); anti-Ter119-APC (eBioscience, clone TER-119, 1:200); anti-CD3-PE (BioLegend, clone 17A2, 1:200); anti-B220-PEcy5 (eBioscience, clone RA3-6B2, 1:400); anti-Mac-1-APC-eFluor780 (eBioscience, M1/70, 1:200) and anti-Gr-1-PEcy7 (BioLegend, clone RB6-8C5, 1:400). Anti-CD45.2-FITC (BioLegend, clone 104, 1:200) and anti-CD45.1-APC-eFluor-78 (eBioscience, clone A20, 1:100) were used to distinguish donor from recipient cells in competitive reconstitution assays. The following antibodies were used to distinguish subpopulations of bone marrow stromal cells: anti-CD45-APC (eBioscience, clone 30-F11, 1:200), anti-Ter119-APC (eBioscience, clone TER-119, 1:200), anti-CD31-APC (Biolegend, clone MEC13.3, 1:200), anti-Tie2-APC (BioLegend, clone TEK4, 1:200), anti-Thy1.1-FITC (eBioscience, clone HIS51, 1:200), anti-Thy1.2-FITC (eBioscience, clone 30-H12, 1:200), anti-CD51-biotin (BioLegend, clone RMV-7, 1:100), anti-CD51-PE (BioLegend, clone RMV-7, 1:100), anti-Ly-51-PEcy7 (BioLegend, clone 6C3, 1:200), anti-CD200-PE (BioLegend, clone OX-90, 1:200), anti-CD105-Pacific Blue (BioLegend, clone MJ7/18), anti-LepR-biotin antibody (R and D systems, BAF497, 1:100) and anti-PDGFRα-biotin (eBioscience, clone APA5, 1:200). 
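The dilution factors listed above, together with the 200 µl per-sample staining volume described in the next paragraph, translate directly into pipetting volumes. A minimal sketch, assuming a small example panel and a 10% overage (both the three-antibody panel and the overage are assumptions for illustration, not from the paper):

```python
# Antibody volumes for a flow cytometry staining master mix.
# 200 ul per-sample staining volume and the dilution factors come from the
# protocol text; the reduced panel and 10% overage are assumptions.

STAIN_VOLUME_UL = 200          # per sample
panel = {                      # antibody -> dilution factor (1:x)
    "anti-CD150-PE-Cy5": 200,
    "anti-CD48-FITC": 200,
    "anti-LepR-biotin": 100,
}

def master_mix(n_samples: int, overage: float = 0.10):
    """Volume (ul) of each antibody and of staining medium for n samples."""
    n_eff = n_samples * (1 + overage)
    mix = {ab: STAIN_VOLUME_UL / dil * n_eff for ab, dil in panel.items()}
    total_ab = sum(mix.values())
    mix["staining medium"] = STAIN_VOLUME_UL * n_eff - total_ab
    return {k: round(v, 1) for k, v in mix.items()}

print(master_mix(12))
# e.g. a 1:200 antibody in 200 ul is 1 ul per sample,
# so 13.2 ul covers 12 samples plus 10% overage.
```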
Cells were stained with antibodies in 200 µl of staining medium (HBSS + 2% fetal bovine serum) on ice for 1 hr, and then washed by adding 2 ml of staining buffer followed by centrifugation. Biotin-conjugated antibodies were incubated with streptavidin-PE or streptavidin-Brilliant Violet 421 (Biolegend, 1:500) for another 20 min. Cells were resuspended in staining medium with 1 µg/ml DAPI (Invitrogen) and analyzed with a FACSCanto flow cytometer (BD Biosciences) or sorted using a FACSAria flow cytometer (BD Biosciences) with a 130 µm nozzle.

To assess proliferation in vivo, mice were given a single intraperitoneal injection of BrdU (100 mg/kg body mass) and maintained on 0.5 mg/ml BrdU in the drinking water for 14 days. The frequency of BrdU+ SSCs was then analyzed by flow cytometry using the APC BrdU Flow Kit (BD Biosciences).

Enzymatic digestion of bone marrow cells and CFU-F cultures were performed as described previously (Suire et al., 2012). Briefly, intact marrow plugs were flushed from the long bones and subjected to two rounds of enzymatic digestion at 37°C for 15 min each. The digestion buffer contained 3 mg/ml type I collagenase (Worthington), 4 mg/ml dispase (Roche Diagnostic) and 1 U/ml DNase I (Sigma) in HBSS with calcium and magnesium. The cells were resuspended in staining medium (HBSS + 2% fetal bovine serum) with 2 mM EDTA to stop the digestion. To form CFU-F colonies, freshly dissociated single-cell suspensions were plated at clonal density in 6-well plates (5 × 10⁵ cells/well) or 10 cm plates (5 × 10⁶ cells/dish) with DMEM (Gibco) plus 20% fetal bovine serum (Sigma F2442, lot 14M255, selected to support CFU-F growth), 10 µM ROCK inhibitor (Y-27632, TOCRIS), and 1% penicillin/streptomycin (Invitrogen). Cultures were maintained at 37°C in gas-tight chambers (Billups-Rothenberg, Del Mar, CA) that were flushed daily for 30 s with 5% O2 and 5% CO2 (balance nitrogen) to maintain a low oxygen environment that promoted survival and proliferation (Morrison et al., 2000). The CFU-F culture medium was changed on the second day after plating to wash out contaminating macrophages, then changed every 3–4 days after that. CFU-F colonies were counted eight days after plating by staining with 0.1% Toluidine blue in 4% formalin solution.

Osteogenic and adipogenic differentiation were assessed by replating primary CFU-F cells into 48-well plates (25,000 cells/cm²). On the second day of culture, the medium was replaced with adipogenic (four days) or osteogenic (seven days or 14 days) medium (StemPro MSC differentiation kits; Life Technologies). Equal numbers of cells from wild-type and Clec11a-/- cultures were replated so there was no difference in the density of cells of different genotypes. In some experiments, clonal differentiation potential was assessed by sorting 500 bone marrow LepR+CD105+CD45-Ter119-CD31- cells into each well of a 6-well plate to form CFU-F colonies at clonal density over an eight day period. Then the culture medium was replaced with osteogenic differentiation medium. Seven to 14 days later, the percentage of colonies that contained osteoblasts and the numbers of osteoblasts per colony were quantified using the StemTAG Alkaline Phosphatase Staining and Activity Assay Kit (Cell Biolabs) and alizarin red staining (Sigma). 
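The 25,000 cells/cm² replating density above implies a fixed cell number per well once the growth area of the plate is known. A minimal sketch, assuming nominal well areas for common plasticware and an arbitrary working suspension concentration (both assumptions, not stated in the paper):

```python
# Converting a plating density (cells per cm^2) into cells and volume per well.
# The 25,000 cells/cm^2 figure comes from the protocol; the well areas and the
# 1e5 cells/ml suspension concentration are assumed nominal values.

PLATING_DENSITY = 25_000                                   # cells per cm^2
WELL_AREA_CM2 = {"6-well": 9.6, "48-well": 0.95, "96-well": 0.32}  # assumed

def cells_and_volume(plate: str, suspension_conc_per_ml: float = 1e5):
    """Cells needed per well and the volume of suspension to dispense."""
    cells = PLATING_DENSITY * WELL_AREA_CM2[plate]
    volume_ml = cells / suspension_conc_per_ml
    return round(cells), round(volume_ml, 2)

for plate in ("48-well", "6-well"):
    cells, vol = cells_and_volume(plate)
    print(f"{plate}: ~{cells} cells per well "
          f"(~{vol} ml of a 1e5 cells/ml suspension)")
```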
Chondrogenic potential was assessed by centrifuging 2 × 10⁵ CFU-F cells to form cell pellets, which were then cultured in chondrogenic medium for 21 days (StemPro chondrogenesis differentiation kit; Life Technologies), changing the culture medium every 2–3 days. Chondrocyte formation within the cell pellets was assessed by cryosectioning and Toluidine blue staining (Robey et al., 2014). The osteogenic differentiation of human bone marrow stromal cells and MC3T3-E1 cells was tested using the StemPro osteogenesis differentiation kit (Life Technologies). Images were acquired using an Olympus IX81 microscope.

Femurs and lumbar vertebrae were dissected, fixed overnight in 4% paraformaldehyde and stored in 70% ethanol at 4°C. Femurs and lumbar vertebrae were scanned at an isotropic voxel size of 3.5 µm and 7 µm, respectively, with peak tube voltage of 55 kV and current of 0.145 mA (µCT 35; Scanco Medical AG, Bassersdorf, Switzerland). A three-dimensional Gaussian filter (σ = 0.8) with a limited, finite filter support of one was used to suppress noise in the images, and a threshold of 263–1000 was used to segment mineralized bone from air and soft tissues. Trabecular bone parameters were measured in the distal metaphysis of the femurs. The region of interest was selected from below the distal growth plate where the epiphyseal cap structure completely disappeared and continued for 100 slices toward the proximal end of the femur. Contours were drawn manually a few voxels away from the endocortical surface to define trabecular bones in the metaphysis (Bouxsein et al., 2010). Cortical bone parameters were measured by analyzing 100 slices in mid-diaphysis femurs. Vertebral bone parameters were measured by analyzing 200 slices in the middle of L3 lumbar vertebrae. The fracture callus was scanned at an isotropic voxel size of 6 µm with the same settings as described above. The region of interest was selected from 200 slices above the fracture site to 200 slices below the fracture site (400 slices in total), including the entire callus. A segmentation threshold of 263–500 was used to analyze the trabecular parameters in the fracture callus.

Two month-old adult recipient mice were irradiated with an XRAD 320 irradiator (Precision X-Ray Inc.), giving two doses of 550 rad, delivered at least 2 hr apart. C57BL/Ka-Thy-1.1 (CD45.2) donor mice and C57BL/Ka-Thy-1.2 (CD45.1) recipient mice were used in transplant experiments. 300,000 donor whole bone marrow cells from Clec11a-/- or littermate control mice (CD45.2) were transplanted along with 300,000 recipient whole bone marrow cells (CD45.1) into lethally irradiated recipient mice (C57BL/Ka-Thy-1.1 x C57BL/Ka-Thy-1.2 (CD45.1/CD45.2) heterozygotes). Peripheral blood was obtained from the tail veins of recipient mice at 4 to 16 weeks after transplantation. Blood was subjected to ammonium-chloride lysis of the red blood cells and leukocytes were stained with antibodies against CD45.1, CD45.2, B220, Mac-1, CD3 and Gr-1 to assess hematopoietic chimerism by donor and recipient cells by flow cytometry.

Dissected bones were fixed in 4% paraformaldehyde overnight, decalcified in 10% EDTA for four days, and dehydrated in 30% sucrose for two days. Bones were sectioned (10 µm) using the CryoJane tape-transfer system (Leica). 
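Returning to the microCT analysis described above, the published processing chain is a 3D Gaussian filter (σ = 0.8) followed by a fixed global threshold (263–1000 in scanner units) and measurement of trabecular parameters within a drawn region of interest. A minimal Python sketch of that idea, assuming the reconstructed scan is available as a NumPy array in the same units; this code is illustrative only and was not used for the published analysis.

```python
# Gaussian smoothing + fixed thresholding of a CT volume, then BV/TV
# (bone volume / total volume) inside a region-of-interest mask.
# sigma = 0.8 and the 263-1000 threshold come from the text; the synthetic
# volume and the all-true ROI below are placeholders.
import numpy as np
from scipy.ndimage import gaussian_filter

def bone_volume_fraction(volume: np.ndarray, roi_mask: np.ndarray,
                         sigma: float = 0.8,
                         lower: float = 263, upper: float = 1000) -> float:
    """Fraction of ROI voxels segmented as mineralized bone."""
    smoothed = gaussian_filter(volume.astype(float), sigma=sigma)
    bone = (smoothed >= lower) & (smoothed <= upper) & roi_mask
    return bone.sum() / roi_mask.sum()

# Synthetic demonstration: noisy background with a denser central block
# standing in for trabecular bone.
rng = np.random.default_rng(0)
vol = rng.normal(100, 30, size=(100, 100, 100))
vol[30:70, 30:70, 30:70] += 300
roi = np.ones(vol.shape, dtype=bool)
print(f"BV/TV = {bone_volume_fraction(vol, roi):.3f}")
```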
Sections were blocked in PBS with 10% horse serum for 30 min and then stained overnight at 4°C with goat IgG control (R&D Systems, 1:500), goat anti-Clec11a antibody (R&D Systems, 1:500), rabbit anti-Aggrecan antibody (Chemicon, 1:500), rabbit anti-Perilipin antibody (Sigma, 1:2000) or goat anti-Osteopontin antibody (R&D Systems, 1:500). Donkey anti-goat Alexa Fluor 488, donkey anti-goat Alexa Fluor 647 and donkey anti-rabbit Alexa Fluor 555 were used as secondary antibodies (Invitrogen, 1:500). Slides were mounted with ProLong Gold anti-fade reagent with DAPI (Invitrogen). Images were acquired using a Zeiss LSM780 confocal microscope or Olympus IX81 microscope.

On day 0 and day 7, mice were injected intraperitoneally with 10 mg/kg body mass calcein dissolved in calcein buffer (0.15 M NaCl plus 2% NaHCO3 in water) and sacrificed on day 9. The tibias were fixed overnight in 4% paraformaldehyde at 4°C, dehydrated in 30% sucrose for two days and sectioned without decalcification (7 µm sections). Mineral apposition and bone formation rates were determined as previously described (Egan et al., 2012). For the quantification of osteoblast number/bone surface and osteoclast number/bone surface, decalcified 10 µm femur sections were stained histochemically for alkaline phosphatase (Roche) or tartrate-resistant acid phosphatase (Sigma) activity. Growth plate chondrocytes were identified based on staining with Safranin O/fast green (American MasterTech) and quantified using ImageJ. Alizarin red/alcian blue double staining was performed as described previously (Ovchinnikov, 2009).

A stainless steel wire was inserted into the intramedullary canal of the femur through the knee after anesthesia, and a bone fracture was introduced in the femur mid-diaphysis by 3-point bending. Buprenorphine was injected every 12 hr up to 72 hr after the surgery. Bone resorption rate was determined by measuring urinary levels of deoxypyridinoline (DPD) using a MicroVue DPD ELISA Kit (Quidel). The DPD values were normalized to urinary creatinine levels using the MicroVue Creatinine Assay Kit (Quidel).

Mouse Clec11a cDNA was cloned into pcDNA3 vector (Invitrogen) containing a C-terminal 1XFlag-tag, which was then transfected into HEK293 cells with Lipofectamine 2000 (Invitrogen) and subjected to stable cell line selection using 1 mg/ml G418 (Sigma). Stable clones with high Clec11a expression were cultured in DMEM plus 10% FBS (Sigma), and 1% penicillin/streptomycin (Invitrogen). Culture medium was collected every two days, centrifuged to eliminate cellular debris, and stored with 1 mM phenylmethylsulfonyl fluoride at 4°C to inhibit protease activity. One liter of culture medium was filtered through a 0.2 µm membrane to eliminate cellular debris (Nalgene) before being loaded onto a chromatography column containing 2 ml Anti-FLAG M2 Affinity Gel (Sigma), with a flow rate of 1 ml/min. The column was sequentially washed using 20 ml of high salt buffer (20 mM Tris-HCl, 300 mM KCl, 10% glycerol, 0.2 mM EDTA) followed by 20 ml of low salt buffer (20 mM Tris-HCl, 150 mM KCl, 10% glycerol, 0.2 mM EDTA) and finally 20 ml of PBS. The FLAG-tagged Clec11a was then eluted from the column using 10 ml of 3X FLAG peptide (100 μg/ml) in PBS or protein storage buffer (50 mM HEPES, 150 mM NaCl and 10% glycerol, pH = 7.5). Eluted protein was concentrated using Amicon Ultra-15 Centrifugal Filter Units (Ultracel-10K, Millipore), then quantitated by SDS-PAGE and colloidal blue staining (Invitrogen) and stored at −80°C. 
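The wash buffers above are simple dilutions of standard laboratory stocks, so the required volumes follow from C1·V1 = C2·V2. A minimal sketch, assuming 1 M Tris-HCl, 3 M KCl and 0.5 M EDTA stocks (assumptions; the stock concentrations are not stated in the paper):

```python
# Volumes of assumed stock solutions needed to prepare one wash buffer,
# using simple C1*V1 = C2*V2 arithmetic.

STOCKS_mM = {"Tris-HCl": 1000, "KCl": 3000, "EDTA": 500}   # assumed stocks
GLYCEROL_STOCK_PCT = 100

def buffer_recipe(total_ml: float, tris_mM: float, kcl_mM: float,
                  edta_mM: float, glycerol_pct: float):
    """Return ml of each stock plus water for one buffer of the given size."""
    vols = {
        "1 M Tris-HCl": total_ml * tris_mM / STOCKS_mM["Tris-HCl"],
        "3 M KCl": total_ml * kcl_mM / STOCKS_mM["KCl"],
        "0.5 M EDTA": total_ml * edta_mM / STOCKS_mM["EDTA"],
        "glycerol": total_ml * glycerol_pct / GLYCEROL_STOCK_PCT,
    }
    vols["water"] = total_ml - sum(vols.values())
    return {k: round(v, 2) for k, v in vols.items()}

# High-salt wash buffer from the text:
# 20 mM Tris-HCl, 300 mM KCl, 10% glycerol, 0.2 mM EDTA (20 ml batch assumed)
print(buffer_recipe(20, tris_mM=20, kcl_mM=300, edta_mM=0.2, glycerol_pct=10))
```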
The recombinant human Clec11a was generated and purified in the same way.

For ovariectomy-induced osteoporosis, 8 week-old virgin female mice were anesthetized using isoflurane, shaved, and disinfected with Betadine. A dorsal midline incision was made and the periovarian fat pad was gently grasped to exteriorize the ovary. The fallopian tube was then clamped off and the ovary was removed by cutting above the clamped area. The uterine horn was returned into the abdomen and the same process was repeated on the other side. After surgery, buprenorphine was given for analgesia, and mice were closely monitored until they resumed full activity. Vehicle, 40 µg/kg PTH (1–34) or 50 µg/kg rClec11a were subcutaneously injected daily starting one day or four weeks after the surgery and continuing for 28 days, after which the mice were analyzed. For dexamethasone-induced osteoporosis, PBS or 20 mg/kg dexamethasone was injected intraperitoneally into eight week-old virgin female mice daily for 28 days. Vehicle, 40 µg/kg PTH (1–34) or 50 µg/kg rClec11a were subcutaneously injected at the same time.

Mouse and human bone marrow stromal cell ossicle formation in vivo was assessed as described previously (Bianco et al., 2006). Briefly, 2 × 10⁶ mouse primary CFU-F cells were seeded into collagen sponges (Gelfoam, Pfizer), incubated at 37°C for 90 min, with or without 10 ng/ml rClec11a, and then transplanted subcutaneously into NSG mice. The ossicles formed by these cells were analyzed eight weeks after transplantation by cryosectioning and immunostaining with antibodies against perilipin and osteopontin. For human bone marrow stromal cells, 2 × 10⁶ cells were incubated with 40 mg of hydroxyapatite (HA)/tricalcium phosphate (TCP) particles (65%/35%, Zimmer Dental, Warsaw, IN), with or without 10 ng/ml rhClec11a, and rotated for 2 hr at 37°C. The cell/carrier slurry was centrifuged at 135 × g for 5 min and embedded in a fibrin gel by adding 15 µl of human fibrinogen (3.2 mg/ml in sterile PBS) with 15 µl of human thrombin (25 U/ml in sterile 2% CaCl2 in PBS). The gels were left at room temperature for 10 min to clot, then transplanted subcutaneously into NSG mice. The ossicles formed by these cells were analyzed 4 or 8 weeks after transplantation by cryosectioning and H&E staining. Some of the mice were treated with daily subcutaneous injections of 50 µg/kg human recombinant Clec11a.

For quantitative reverse transcription PCR (qPCR), 6000 PDGFRα+CD45-Ter119-CD31- cells were flow cytometrically sorted from enzymatically dissociated bone marrow into Trizol (Invitrogen). RNA was extracted and reverse transcribed into cDNA using SuperScript III (Invitrogen). qPCR was performed using a Roche LightCycler 480. The primers used for qPCR analysis included Clec11a (NM_009131.3): 5’-AGG TCC TGG GAG GGA GTG-3’ and 5’-GGG CCT CCT GGA GAT TCT T-3’; Runx2 (NM_001146038.2): 5’-TTA CCT ACA CCC CGC CAG TC-3’ and 5’-TGC TGG TCT GGA AGG GTC C-3’; Sp7 (NM_130458.3): 5’-ATG GCG TCC TCT CTG CTT GA-3’ and 5’-GAA GGG TGG GTA GTC ATT TG-3’; Ibsp (NM_008318.3): 5’-AGT TAG CGG CAC TCC AAC TG-3’ and 5’-TCG CTT TCC TTC ACT TTT GG-3’; Dmp1 (NM_016779.2): 5’-TGG GAG CCA GAG AGG GTA G-3’ and 5’-TTG TGG TAT CTG GCA ACT GG-3’; Actb (NM_007393.5): 5’-GCT CTT TTC CAG CCT TCC TT-3’ (forward) and 5’-CTT CTG CAT CCT GTC AGC AA-3’ (reverse).

EdU was added into osteogenic differentiation medium at day 0 (10 μM final concentration) and maintained for the duration of the differentiation phase of the culture (8 days). 
The cultures were fixed by adding 1% paraformaldehyde on ice for 5 min, then stained with alkaline phosphatase substrates (NBT/BCIP, Roche). Cells were then incubated with PBS supplemented with 3% FCS and 0.1% saponin for 5 min at room temperature, followed by incubation for 30 min with Click-iT Plus reaction cocktail (Life Technologies) containing 5 μM Alexa Fluor 555-azide. Cells were washed twice with PBS supplemented with 3% FCS and 0.1% saponin and quantified using an Olympus IX81 microscope. Caspase-3/7 enzymatic activity within individual cells growing adherently in culture plates was measured by adding CellEvent Caspase-3/7 Green Detection Reagent (a substrate for activated caspase-3/7, 2 μM final concentration; Life Technologies) to the differentiation medium at the end of the experiment and incubating for 30 min before fixation, alkaline phosphatase staining, and quantification.

Peripheral blood was collected from the tail vein using Microvette CB 300 K2E tubes (Sarstedt) and counted using a HEMAVET HV950 cell counter (Drew Scientific). Hematopoietic colony formation was assessed by seeding 20,000 unfractionated mouse femur bone marrow cells into MethoCult M3334 or MethoCult M3234 supplemented with 10 ng/ml GM-CSF (STEMCELL Technologies). The cultures were incubated at 37°C for 10 days and then colonies were counted under the microscope.

Mouse plasma was diluted 1:1 using 2x PBS buffer and 100 μl of diluted plasma was added to each well of the 96-well ELISA plate (COSTAR 96-WELL EIA/RIA STRIPWELL PLATE) at 4°C for 16 hr. The plate was then washed three times with washing buffer (PBS with 0.1% Tween-20), blocked with 300 μl ELISA Blocker Blocking Buffer (Thermo, N502) for 2 hr at room temperature, and washed three times with washing buffer. Anti-Clec11a antibody (1 μg/ml diluted in 100 μl of PBS buffer with 0.1% Tween-20 per well) was then added and incubated at room temperature for 2 hr, washed three times with washing buffer, followed by HRP-conjugated donkey anti-goat IgG secondary antibody (0.8 μg/ml diluted in 100 μl of PBS buffer with 0.1% Tween-20 per well) incubation at room temperature for 1 hr. After washing three times, 100 μl of SureBlue TMB Microwell Peroxidase Substrate was added to each well and incubated at room temperature in the dark for 15 min. Finally, 100 μl of the TMB stop solution was added into each well and the optical density was measured at 450 nm.

To assess biomechanical properties, the right femurs were harvested from mice, wrapped in saline-soaked gauze, and stored at –20°C. The femurs were rehydrated in PBS for at least 3 hr before testing, and then kept in a humidified chamber and preconditioned with 20 cycles of bending displacement (0.1 mm). Without immersion, the femurs were loaded to fracture with three-point bending (each holding point was 4 mm from the middle break point) under displacement control (3 mm/min) using a material testing system (Instron model # 5565, Norwood, MA).

To genotype Clec11a+/+, Clec11a+/-, and Clec11a-/- mice, the following primers were used: 5’-TTT GGG TGC TGG GAA GCC C-3’ and 5’-TTG CAC TGA GTC GCG GGT G-3’ (Clec11a+/+: 910 bp; Clec11a+/- or Clec11a-/-: 538 bp). To distinguish between Clec11a+/- and Clec11a-/- mice, the following primers were used: 5’-GAG GAA GAG GAA ATC ACC ACA GC-3’ and 5’-TTG CAC TGA GTC GCG GGT G-3’ (Clec11a+/-: 482 bp; Clec11a-/-: no amplification product). The statistical significance of differences between the two treatments was assessed using two-tailed Student's t tests. 
The statistical significance of differences among more than two groups was assessed using one-way ANOVAs with Tukey's multiple comparison tests. The statistical significance of differences in long-term competitive reconstitution assays was assessed using two-way ANOVAs with Sidak's multiple comparison tests. All data represent mean ± SD. *p<0.05, **p<0.01, ***p<0.001.

References
An RNA-seq protocol to identify mRNA expression changes in mouse diaphyseal bone: applications in mice with bone property-altering Lrp5 mutations. Journal of Bone and Mineral Research 28:2081–2093. https://doi.org/10.1002/jbmr.1946
Molecular cloning of a new secreted sulfated mucin-like protein with a C-type lectin domain that is expressed in lymphoblastic cells. Journal of Biological Chemistry 273:1911–1916. https://doi.org/10.1074/jbc.273.4.1911
Guidelines for assessment of bone microstructure in rodents using micro-computed tomography. Journal of Bone and Mineral Research 25:1468–1486. https://doi.org/10.1002/jbmr.141
Stem cell growth factor: in situ hybridization analysis on the gene expression, molecular characterization and in vitro proliferative activity of a recombinant preparation on primitive hematopoietic progenitor cells. The Hematology Journal 2:307–315. https://doi.org/10.1038/sj.thj.6200118
Mice with a targeted deletion of the tetranectin gene exhibit a spinal deformity. Molecular and Cellular Biology 21:7817–7825. https://doi.org/10.1128/MCB.21.22.7817-7825.2001
Serum stem cell growth factor for monitoring hematopoietic recovery following stem cell transplantation. Bone Marrow Transplantation 32:391–398. https://doi.org/10.1038/sj.bmt.1704152
Collagen integrin receptors regulate early osteoblast differentiation induced by BMP-2. Journal of Bone and Mineral Research 14:1075–1083. https://doi.org/10.1359/jbmr.1922.214.171.1245
Expression profile of osteoblast lineage at defined stages of differentiation. Journal of Biological Chemistry 280:24618–24626. https://doi.org/10.1074/jbc.M413834200
Suppression of a novel hematopoietic mediator in children with severe malarial anemia. Infection and Immunity 77:3864–3871. https://doi.org/10.1128/IAI.00342-09
Parathyroid hormone analogues in the treatment of osteoporosis. Nature Reviews Endocrinology 7:647–656. https://doi.org/10.1038/nrendo.2011.108
Sclerostin binds to LRP5/6 and antagonizes canonical Wnt signaling. Journal of Biological Chemistry 280:19883–19887. https://doi.org/10.1074/jbc.M413274200
Romosozumab in postmenopausal women with low bone mineral density. New England Journal of Medicine 370:412–420. https://doi.org/10.1056/NEJMoa1305224
Isolation and characterization of a cDNA for human, mouse, and rat full-length stem cell growth factor, a new member of C-type lectin superfamily. Biochemical and Biophysical Research Communications 249:124–130. https://doi.org/10.1006/bbrc.1998.9073
Prospective identification, isolation, and systemic transplantation of multipotent mesenchymal stem cells in murine bone marrow. The Journal of Experimental Medicine 206:2483–2496. https://doi.org/10.1084/jem.20091046
The CCN family member Wisp3, mutant in progressive pseudorheumatoid dysplasia, modulates BMP and Wnt signaling. Journal of Clinical Investigation 117:3075–3086. https://doi.org/10.1172/JCI32001
Effect of parathyroid hormone (1-34) on fractures and bone mineral density in postmenopausal women with osteoporosis. New England Journal of Medicine 344:1434–1441. https://doi.org/10.1056/NEJM200105103441904
A subset of chondrogenic cells provides early mesenchymal progenitors in growing bones. Nature Cell Biology 16:1157–1167. https://doi.org/10.1038/ncb3067
Alcian blue/alizarin red staining of cartilage and bone in mouse. Cold Spring Harbor Protocols 2009:pdb.prot5170. https://doi.org/10.1101/pdb.prot5170
Bone marrow stromal cell assays: in vitro and in vivo. Methods in Molecular Biology 1130:279–293. https://doi.org/10.1007/978-1-62703-989-5_21
Isolation and characterization of MC3T3-E1 preosteoblast subclones with distinct in vitro and in vivo differentiation/mineralization potential. Journal of Bone and Mineral Research 14:893–903. https://doi.org/10.1359/jbmr.19126.96.36.1993
A potential role for tetranectin in mineralization during osteogenesis. The Journal of Cell Biology 127:1767–1775. https://doi.org/10.1083/jcb.127.6.1767

Janet Rossant, Reviewing Editor; University of Toronto, Canada

In the interests of transparency, eLife includes the editorial decision letter and accompanying author responses. A lightly edited version of the letter sent to the authors after peer review is shown, indicating the most substantive concerns; minor comments are not usually included. Thank you for submitting your article "Clec11a is an osteogenic factor that promotes the maintenance of the adult skeleton" for consideration by eLife. Your article has been reviewed by three peer reviewers, including Ophir D Klein (Reviewer #1), and the evaluation has been overseen by Janet Rossant as the Senior Editor and Reviewing Editor. The reviewers have discussed the reviews with one another and the Reviewing Editor has drafted this decision to help you prepare a revised submission.

In this elegant and well-executed manuscript, the Morrison lab reports on their exciting discovery that a C-type lectin domain protein, Clec11a, promotes osteogenesis. Clec11a maintains the adult skeleton by promoting osteogenic differentiation of mesenchymal progenitors. The authors first determined that Clec11a is highly expressed in skeletal lineage cells by gene and protein expression analyses. They then employed the CRISPR-Cas9 system to generate a Clec11a knockout mouse to test if Clec11a is necessary for hematopoiesis and osteogenesis in vivo. They found that Clec11a deficiency led to weaker bones under conditions of normal homeostasis and also after fracture healing. The authors also tested whether administration of recombinant Clec11a is sufficient to prevent and reverse osteoporosis using micro-CT and histomorphometric analyses. The identification of Clec11a as a novel agonist of bone maintenance and fracture healing is of high interest and the data presented are very compelling. While the study does not define a clear molecular mechanism of action for Clec11a, multiple models of osteoporosis were used to demonstrate the efficacy of rClec11a, greatly strengthening the clinical relevance of the study.

Major revisions required: Although there was overall support for the findings, there were concerns from the reviewers about the robustness of the data on the bone phenotypes, specifically about the rigor of the µCT and histomorphometry data. Reviewer 1 noted that there are standard protocols for assessing and reporting mouse bone microstructure (for example: Bouxsein et al., 2010 JBMR. Guidelines for assessment of bone microstructure in rodents using micro-computed tomography). As presented here, the explanations for the exact location of the volume of interest and the method used to define trabecular and cortical bone regions are vague. 
If indeed the same number of slices were chosen for every timepoint, then it is likely that different anatomical regions within the growing femur were assessed. Details regarding image processing, including the algorithm used for image filtration and the approach used for calibration and image segmentation, should be added. Many of the units used by the authors to describe the various bone features differ from those recommended in standard protocols. In addition, there was concern as to whether the sample size of mice analyzed was sufficient to support the conclusions in all cases. Typically, unless the effect size is substantial, labs need to study 8 to 10 mice/genotype/sex/treatment group for most bone parameters. Because you present "representative" images in this paper and give no clear indication of the number or sex of animals included in each group, the reviewers were not convinced that all the reported in vivo effects represent true positive results. It will be important for you to address these concerns as far as is possible within the two month time frame, or focus your results on those experiments where you have a fully powered cohort. Other concerns include explaining why mice with similar ages and genotypes have very different bone measures (e.g., BV/TV) in different figures and a need for a better description of the recombinant protein you produced and administered. Downstream experiments (beyond the scope of the present paper) could then determine if the effect on bone cells is direct or indirect, why fracture healing is delayed (osteoblast recruitment or transdifferentiation failure), and whether recombinant CLEC11A can be used as a therapeutic for common skeletal diseases such as fractures and osteoporosis. Overall there was enthusiasm for the finding of a potentially novel osteogenic factor. https://doi.org/10.7554/eLife.18782.024

Sean J Morrison. The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication. SJM is a Howard Hughes Medical Institute (HHMI) Investigator, the Mary McDermott Cook Chair in Pediatric Genetics, the Kathryn and Gene Bishop Distinguished Chair in Pediatric Research, the director of the Hamon Laboratory for Stem Cells and Cancer and a Cancer Prevention and Research Institute of Texas Scholar. RY was supported by a Damon Runyon Cancer Research Foundation fellowship.

We thank Nicolas Loof and the Moody Foundation Flow Cytometry Facility, Kristen Correll for mouse colony management, Christopher Chen for biomechanical analysis of bones, Malea Murphy for assistance with confocal imaging, and Ying Liu, Jingya Wang and Jerry Q Feng at the Texas A&M University Baylor College of Dentistry for assistance with microCT experiments. This work was supported by the Cancer Prevention and Research Institute of Texas.

Animal experimentation: This study was performed in strict accordance with the recommendations in the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health. All procedures were approved by the UT Southwestern Medical Center Institutional Animal Care and Use Committee. 
Animal protocol number: 2016-101334-G
Janet Rossant, Reviewing Editor, University of Toronto, Canada
Received: June 14, 2016
Accepted: November 1, 2016
Version of Record published: December 13, 2016 (version 1)
© 2016, Yue et al. This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.
Miracle gas or not? There are many published discussions and reports on various aspects of SF6 gas (sulphur hexafluoride) usage in electrical equipment. Most of them are based on facts and research, but some are not. Let's try to answer 34 questions and break the myths about this 'miracle' gas. Note that most of the answers are based on CAPIEL (Coordinating Committee for the Associations of Manufacturers of Industrial Electrical Switchgear and Controlgear in the European Union) research as well as the related IEC standards. 1. Where is SF6 used? The following applications are known. You have most probably not heard of some of these. - For sound insulation in windows, - In vehicle tyres, - For magnesium casting in the automotive industry, - As insulating and arc extinguishing medium in electric power equipment, - For manufacturing of semi-conductors, - In tandem-particle accelerators, - In electron microscopes, - As tracer-gas in mining, - In x-ray material examination equipment, - As purification and protection gas for aluminium and magnesium casting, - In sport shoes, - Medical examinations, - In military aircraft radar systems and other military applications. 2. Is SF6 a health hazard? Pure SF6 is physiologically completely harmless for humans and animals. It is even used in medical diagnostics. Due to its weight, it might displace the oxygen in the air if large quantities accumulate in low-lying, non-ventilated places. 3. Is SF6 harmful for the environment? It has no ecotoxic potential and it does not deplete ozone. Due to its high global warming potential of 22,200 (*) it may contribute to the man-made greenhouse effect if it is released into the atmosphere. However, in electrical switchgear the SF6 gas is always used in gas-tight compartments, greatly minimising leakage. This makes the real impact on the greenhouse effect negligible. (*) According to the 3rd Assessment Report of UNFCCC. The previously accepted value was 23,900. 4. What is the overall contribution of SF6 used in electrical equipment to the greenhouse effect? (*) ECOFYS, Sina Wartmann, Dr. Jochen Harnisch, June 2005, "Reductions of SF6 Emissions from High and Medium Voltage Equipment in Europe" 5. How wide is the use of SF6 in transmission and distribution switchgear applications? SF6 insulated switchgear is currently used world-wide. It is estimated that an average of about 80 % of HV equipment manufactured now has an SF6 content. 6. Why is SF6 used in electric power equipment? Because of its outstanding electrical, physical and chemical properties enabling significant benefits for the electricity supply network: - It insulates 2.5 times better than air (N2), - Over 100 times better arc quenching capability than air (N2), and - Better heat dissipation than air; In addition to this, LCA studies have proven that the use of SF6 technology in the electrical distribution switchgear equipment results in lower overall direct and indirect environmental impacts compared to air-insulated switchyards (*) (*) Solvay Germany, 1999: Urban power supply using SF6 technology, Life cycle assessment on behalf of ABB, PreussenElektra Netz, RWE Energie, Siemens, Solvay Fluor und Derivate 7. What are the benefits of high and medium voltage SF6-switchgear? There are a significant number of benefits, as follows: Local operator safety - SF6-insulated switchgear makes a substantial contribution to reducing the accident risk. 
- The total enclosure of all live parts in earthed metal enclosures provides immanent protection against electric shock and minimises the risks associated with human errors - The high-grade switchgear remains hermetically sealed for its whole service life. Very high operational reliability - It offers a great operational reliability because inside the enclosed gas compartments the primary conductors have complete protection against all external effects. - The minimal use of synthetics reduces the fire load. - The SF6 insulation ensures complete freedom from oxidation for the contacts and screwed joints, which means that there is no gradual reduction in the current carrying capacity of the equipment as it ages. - There is no reduction in insulation capacity due to external factors. Important contribution to the security of supply Total enclosure also means that the equipment is almost completely independent from the environment. SF6 insulated switchgear can also be used under difficult climatic conditions, for example: - In humid areas with frequent condensations from temperature changes, and even in places with flooding potential. - Where the reliability of the insulation might otherwise suffer from contamination, e.g. dust from industry or agriculture or saline deposits in coastal areas. Gas- insulated switchgear completely eliminates this possibility throughout the whole service life of an installation - In contrast to air insulation, whose insulating capacity reduces with increasing altitude, SF6-insulated switchgear retains its full insulating capacity regardless of height above sea level. So larger and more costly special designs, or equipment with higher insulation ratings – and therefore more costly – are avoided Small space requirement - Due to the high dielectric strength of the gas, the switchgear is compact with space requirements minimised. - The excellent safety and low space requirement of SF6 switchgear allows it to be sited directly in conurbations and close to load centres, such as city centres, industrial manufacturing plants and commercial areas. - Therefore, this fulfils one of the basic essentials of power distribution, namely that substations should be placed as close as possible to load centres in order to keep transmission losses to a minimum, to conserve resources and to minimise costs. - Major savings in building, land and transport costs can be achieved throughout the whole process chain. - In several cases SF6 switchgear is the only possible solution: for wind power plants (offshore), in caverns, for large generator circuit-breakers, and for extensions of existing installations. - This often allows existing buildings use to be extended where switchgear replacement or extension to meet load growth is needed. Excellent economical and ecological features - Distinct economic benefits come from: - The long service life - Minimal maintenance expenditure thanks to maintenance-free, gas-tight enclosures - Reduced costs for land, buildings, transport and commissioning - Maximum operational reliability as a prerequisite for the remote control and automation of power networks - Ecological and economic benefits arise from: - Minimum transmission losses as a result of placing equipment close to load centres. - Reduced primary energy consumption and emissions contribute to economically optimised power supply systems. - And the long service life of SF6 switchgear also contributes to the conservation of resources. 
- Aesthetic and ecological benefits for rural and city landscapes: - Because SF6 installations are compact, need minimum maintenance, have extraordinarily high availability and are independent of climatic impacts, they offer not only major ecological and economic advantages but can also be integrated seamlessly in any landscape or architecture of towns, cities or countryside. - Reclamation of areas previously taken up by conventional substations. 8. Is there any alternative to SF6 in switchgear for high and medium voltage? From the LCA point of view no technically and economically viable alternative exists with an equivalent set of properties described above and the same degree of safety and reliability. "A combination of extraordinary electrical, physical, chemical and thermal properties makes SF6 a unique and indispensable material in electric power equipment for which there is no functionally equivalent substitute." (Quotation from a CIGRE Report) (*) (*) CIGRE: International Council on Large Electric Systems 9. What are the different applications in electrical power equipment using SF6? These are the most common applications where SF6 is used: - GIS (Gas Insulated Switchgear for medium and high voltage), - CBs (Circuit Breaker), - Power transformers, - VT (Voltage Transformer), - CT (Current transformer), - RMU (Ring Main Unit), - Assemblies of HV devices and GIL (Gas insulated lines), - Capacitors etc. 10. What is the difference between high-voltage (HV) and medium-voltage (MV) GIS regarding SF6? Basically there is no difference, as both applications use SF6 in gas-tight compartments with negligible leakage rates. In general, MV equipment (up to 52 kV) uses pressures close to atmospheric pressure in sealed pressure systems. Low pressure and small size result in small gas quantities of only a few kg. The leakage rate is extremely low, less than 0.1 % per year. 11. What are the main commitments of the voluntary actions/agreements of manufacturers and users concerning SF6 handling? Both switchgear manufacturers and users are committed to a continuous improvement in reduction of emission rates as well as monitoring and annual reporting. 12. How is the effectiveness of voluntary actions verified? The production processes of MV and HV switchgear in Western Europe have been improved so as to reduce the specific emission rates by about 2/3 from 1995 to 2003. Ecofys (*) determined for the same period an emission reduction of 40 %. (*) ECOFYS, Sina Wartmann, Dr. Jochen Harnisch, June 2005, "Reductions of SF6 Emissions from High and Medium Voltage Equipment in Europe" 13. What are the user's obligations for monitoring SF6 data of medium voltage switchgear? As far as sealed pressure systems (sealed for life) are concerned, the users do not normally need to either monitor or report emissions. Therefore they only have to ensure that disposal at the end of life is carried out by a qualified entity, in accordance with available national rules. 14. How is the proper end-of-life treatment of SF6-switchgear ensured? Following internationally acknowledged instructions (i.e. according to IEC 61634, CIGRE 2003 SF6 Recycling Guide). 15. What are the user's obligations when taking SF6-switchgear out of service? To make sure that the SF6 is handled by a qualified entity or by qualified personnel according to IEC 61634 subclause 4.3.1. and according to IEC 60480 subclause 10.3.1. 16. How is used SF6-gas treated or disposed of? 
It is normally re-used after proper filtering. In some special cases disposal of the gas is necessary. 17. In some European countries bans on SF6-switchgear have been proposed. Where are legal bans implemented? There are no legal bans implemented. In political discussions reduced use of SF6 in some applications was proposed which are not related to the electrical industry. In the past some proposals of this kind concerning electrical switchgear came up due to insufficient knowledge on how the electrical industry is using SF6. Once this was clarified and the benefits given by this technology were explained, the proposals were withdrawn. 18. Can we use vacuum as insulation medium? Vacuum technology is already in use for switching purposes in the MV range. In the case of a small volume, a vacuum can be relatively easy maintained, which is essential to assure the performance of the switching device. 19. How can the user supervise the SF6 quality? The sealed for life MV equipment does not require SF6 quality checks. For other HV equipment Annex B of IEC 60480 describes different methods of analysis applicable for closed pressure systems (on-site and in laboratory). 20. What about ageing process of SF6 gas? Is replenishment of gas needed after approximately 20 years? It is generally not necessary because the gas quality than is in line with the values given in IEC 60480 Table 2 “Maximum acceptable impurity levels” (applicable for closed pressure systems). For MV sealed for life equipment no replenishment is necessary, because of the unique qualities of SF6 under normal operating conditions no degradation occurs. 21. How much SF6 (quantified in kg) can escape due to “normal” leakage? This depends on the filling quantity, which depends on the rating and design of the equipment (volume and pressure). For HV switchgear the emission factor ranges from about 0.1% per year to 0,5% (0,5% per year is the maximum acceptable leakage rate according to IEC 62271-203) For sealed for life MV equipment a range below 0,1 % per year is common. For example, a 3 kg filling quantity (RMU) results in a calculated loss of 3 g per year. 22. How high is the MAC (Maximum allowable working environment concentration) for pure SF6 in the substation and how hazardous is pure SF6? It is generally recommended that the maximum concentration of SF6 in the working environment should be kept lower than 1000 μl/l (*). This is the value accepted for a full time (8 h/day, 5 day/week) work schedule. This value is not related to toxicity, but an established limit for all non-toxic gases which are not normally present in the atmosphere. (*) TRGS 900, Technische Regeln für Gefahrstoffe 23. What decomposition products are created in the case of internal arc faults, and in what quantities? Gaseous and dusty by-products will be generated. See IEC 60480, Table 1 and/or CIGRE Report Electra 1991 (“Handling of SF6 and its decomposition products in GIS”, Table 2 “Rough characterisation of the major decomposition products resulting from different sources”). The decomposition products depend on the type of equipment and its service history; the quantities depend on energy (voltage, current, time) and the type of the equipment. 24. How hazardous are the decomposition products? See IEC 61634, Annex C: “Release of SF6 from switchgear and control gear – potential effects on health”. Calculations show that, in practice, only in case of an internal arc with a massive emission of heavily arced gas a real hazard is created. 
Evacuation and ventilation are therefore compulsory in such an event. 25. What has to be done after an arc fault in the switchgear? In such accidental cases caution must be taken. If the encapsulation has been damaged, some compounds with toxic characteristics may be present, generated not only from the decomposition of SF6 but also from other sources (e.g. burnt paint, copper vapours, etc.). Therefore, in all cases, evacuation of the switchgear room is the first measure to be taken, irrespective of whether the switchgear contains SF6 or not. See IEC 61634, sub-clause 5.3: "Abnormal release due to internal fault". 26. Does a (passive or active) ventilation system have to be installed in the switchgear room or cable basement? Buildings containing SF6-filled indoor equipment should be provided with ventilation; natural ventilation would normally be adequate to prevent the accumulation of SF6 released due to leakage (see IEC 61634, sub-clause 3.4: "Safety of personnel", and IEC 61936-1). The type and extent of the required measures depend upon the location of the room, its accessibility, and the ratio of gas to room volume. 27. What do I have to do when a GIS installation is damaged (for example, a hole drilled in the encapsulation, or transport damage such as a dropped panel with broken cast resin) and SF6 escapes, or when my GIS develops an abnormal leak? Appropriate corrective action should be taken to deal with the leakage. If the equipment is in service and the leakage is high, it must be de-energised, in accordance with the organisation's operational procedures. Loss of gas must be minimised by following the organisation's procedures and using the services/recommendations of the manufacturer or a qualified service organisation as appropriate. The technical integrity of the equipment will need to be verified after such an occurrence and appropriate corrective actions taken by authorised personnel before refilling the equipment or placing it in service. 28. Under which conditions does the SF6 gas in gas-insulated switchgear need to be replaced? Which parameters have to be checked (e.g. concentration, dew point, decomposition products) and what are the related acceptable limits? Normally the gas remains until disassembly. During a maintenance operation requiring the evacuation of the gas, it should be analysed. Guidance on how to proceed is given in IEC 60480. 29. How do I evacuate and fill the system? See IEC 61634 and the CIGRE Report 2004 ("Practical SF6 handling instructions"). Please refer also to the instruction manual of your equipment. 30. How much SF6 gas is in my switchgear? Where do I find this information? On the nameplate or in the operating manual. For older equipment please ask your manufacturer. 31. Does SF6 have to be disposed of when moist? No, it is possible to dry the gas; moisture can be reduced to acceptable levels by adsorption; materials such as alumina, soda lime, molecular sieves or mixtures thereof are suitable for this purpose (see also IEC 61634, Annex B.3: "Measures for the removal of SF6 decomposition products"). Maximum tolerable moisture levels for re-use can be taken from IEC 60480, Annex A. 32. What do I have to do when I come into contact with decomposed SF6? See IEC 61634, Annex E: "General safety recommendations, equipment for personal protection and first aid". Normally only trained and qualified personnel should deal with this and should hence be aware of the necessary precautions and actions.
IEC 61634, Annex E: "General safety recommendations, equipment for personal protection and first aid". For medium-voltage switchgear and controlgear using sealed pressure systems, the contents of this annex are applicable only during end-of-life treatment or in the very unlikely event of an abnormal release. For other types of equipment, the information in this annex is provided for use in situations where workers have to make contact with SF6 decomposition products. Such situations include: - Maintenance or any other activity involving opening the SF6-filled enclosures of equipment which has been in service; - Restorative activity after an internal fault or external fire provoking opening of the enclosure. Experience over more than 25 years in working environments where contaminated gas is handled regularly has shown that personnel are unlikely to suffer adverse effects to their health, as long as they are suitably trained and equipped as indicated in this report and as recommended in the manufacturers' instructions. 33. What environmental and safety-at-work aspects have to be taken into account? See IEC 61634, clause 4: "Handling of used SF6". The need to handle used SF6 arises where: - Topping up of the SF6 in closed pressure systems is carried out; - The gas has to be removed from an enclosure to allow maintenance, repair or extension to be carried out; - The gas has been wholly or partially expelled due to an abnormal release; - The gas has to be removed at the end of the life of an item of equipment; - Samples of the gas must be obtained or the gas pressure measured through temporary connection of measuring apparatus. The first two situations arise mainly with respect to high-voltage equipment and may arise with medium-voltage GIS equipment, in particular if it is required to add further equipment to an existing switchboard. They do not arise with equipment using sealed pressure systems. 34. What has to be observed for cleaning of the switchgear room after an internal fault with emission of decomposed gas? See IEC 61634, sub-clauses 5.3/5.3.3: "Abnormal release due to internal fault (indoor installations)", and national requirements. An internal fault occurs when abnormal arcing is initiated inside a switchgear and controlgear enclosure. In certain types of equipment, particularly metal-enclosed medium-voltage switchboards, air insulation is used for the busbars between cubicles and around cable connections, and SF6 is present only within the switching chambers. In this case an internal fault could occur within the switchboard but outside the switching chamber, so that no SF6 is released. An internal fault is a very rare occurrence but cannot be completely disregarded. Possible causes include: - A defect in the insulation system; - A mechanical defect leading to a disturbance of the electric field distribution inside the equipment; - The mal-operation of part of a switching device due to faulty assembly or components, or malfunction or misuse of interlocks. An internal fault will cause an increase of pressure inside the enclosure, the effects of which will depend upon circumstances. The pressure rise is caused by the transfer of the electrical energy from the arc into the gas. The increase in pressure will depend upon the value of the arc current, the arc voltage, the arc duration and the volume of the enclosure in which the arc has developed.
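The dependence just described (arc current, arc voltage, arc duration and enclosure volume) can be illustrated with a rough constant-volume, ideal-gas estimate. The sketch below is not taken from the FAQ or from IEC 61634; the energy-transfer fraction, the SF6 heat-capacity value and the example ratings are round, illustrative assumptions, so it indicates orders of magnitude only.

```python
# Rough order-of-magnitude estimate of the pressure rise from an internal arc.
# Assumptions (illustrative only): a fixed fraction of the arc energy heats the
# gas, SF6 behaves as an ideal gas at constant volume, and cv ~ 0.6 kJ/(kg*K).

def arc_pressure_rise(arc_current_a, arc_voltage_v, duration_s,
                      gas_mass_kg, initial_pressure_pa, initial_temp_k=293.0,
                      energy_to_gas_fraction=0.5, cv_j_per_kg_k=600.0):
    """Return (arc energy in J, estimated pressure rise in Pa)."""
    energy_j = arc_current_a * arc_voltage_v * duration_s
    delta_t = energy_to_gas_fraction * energy_j / (gas_mass_kg * cv_j_per_kg_k)
    # Constant volume: p/T is constant, so delta_p = p0 * delta_T / T0.
    delta_p = initial_pressure_pa * delta_t / initial_temp_k
    return energy_j, delta_p

# Hypothetical example: 20 kA arc, 500 V arc voltage, 0.1 s, 3 kg of SF6 at 130 kPa.
energy, dp = arc_pressure_rise(20_000, 500, 0.1, 3.0, 130_000)
print(f"Arc energy ~ {energy/1e6:.1f} MJ, pressure rise ~ {dp/1e5:.1f} bar")
```

With these assumed numbers the estimate comes out at roughly 1 MJ of arc energy and a pressure rise on the order of one bar, which is only meant to show how strongly the result scales with current, voltage, duration and gas quantity.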
Following an internal fault leading to pressure relief or enclosure burn-through, the SF6 and much of any solid decomposition products (powders) will have been expelled from the SF6 enclosure.
References:
- CAPIEL (Coordinating Committee for the Associations of Manufacturers of Industrial Electrical Switchgear and Controlgear in the European Union) – Frequently Asked Questions (FAQ) and Answers on SF6
- IEC 61634, High-voltage switchgear and controlgear – Use and handling of sulphur hexafluoride (SF6) in high-voltage switchgear and controlgear
- IEC 60480, Guidelines for the checking and treatment of sulphur hexafluoride (SF6) taken from electrical equipment and specification for its re-use
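As a worked example of the leakage arithmetic in question 21 above, the short sketch below (plain Python) computes the calculated annual loss from a filling quantity and a relative leakage rate. Only the 3 kg RMU case comes from the FAQ; the function name and the 10 kg HV filling are illustrative assumptions.

```python
def annual_sf6_loss_g(filling_kg: float, leakage_rate_per_year: float) -> float:
    """Calculated SF6 loss per year in grams, for a given filling quantity
    and relative leakage rate (e.g. 0.001 for 0.1 % per year)."""
    return filling_kg * 1000.0 * leakage_rate_per_year

# Example from the FAQ: a 3 kg RMU filling at 0.1 % per year loses about 3 g per year.
print(annual_sf6_loss_g(3.0, 0.001))    # 3.0 g
# Upper bound for HV switchgear per IEC 62271-203 (0.5 % per year), hypothetical 10 kg filling:
print(annual_sf6_loss_g(10.0, 0.005))   # 50.0 g
```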
It is relevant to the SST community as our project helps to map out the areas in the school with stronger Wi-Fi signals, as well as those areas with weaker Wi-Fi signals. As such, a map can be drawn up to help students know where to go if they require a strong Wi-Fi signal to submit an urgent task. It can also help people when designing buildings: since most buildings nowadays have Wi-Fi systems in operation, our research can help designers make sure the necessary areas in a building are effectively covered by the Wi-Fi signal. Areas for further study: We initially wanted to test how certain materials block Wi-Fi waves. As this is a school project, we had limited resources and time to conduct such a large-scale project; to make the results more reliable, we would have had to use special RF/Wi-Fi isolation materials. In the future, should this be attempted, the experimenter must consider the nature of Wi-Fi and RF waves and how to isolate them to get the best and most accurate readings.
An Experimental Performance Comparison of 3G and Wi-Fi — Richard Gass (Intel Labs) and Christophe Diot (Thomson).
Abstract. Mobile Internet users have two options for connectivity: pay premium fees to utilize 3G or wander around looking for open Wi-Fi access points. We perform an experimental evaluation of the amount of data that can be pushed to and pulled from the Internet on 3G and open Wi-Fi access points while on the move. This side-by-side comparison is carried out at both driving and walking speeds in an urban area using standard devices. We show that significant amounts of data can be transferred opportunistically without the need of always being connected to the network. We also show that Wi-Fi mostly suffers from not being able to exploit short contacts with access points but performs comparably well against 3G when downloading and even significantly better while uploading data.
1 Introduction
Wireless communication is an important part of everyday life. It allows people to stay connected with their jobs, family, and friends from anywhere there is connectivity. The two dominant wireless technologies are Wi-Fi and third generation cellular (3G) networks. IEEE 802.11, commonly known as Wi-Fi, refers to a set of standards which operate in the unregulated ISM band. They are very well known for providing wireless connectivity in homes, offices, and hot-spots. They provide throughput of up to 600 Mbits/s with a coverage area in the hundreds of meters. Wi-Fi is easy and inexpensive to deploy, and is ubiquitous in urban areas. Despite access controls being deployed and newer access points (APs) being configured with security enabled by default, many Wi-Fi APs remain open. In addition, the growing popularity of community networks such as FON (www.fon.com) and the growing list of large cities providing free wireless makes opportunistic communication a realistic scenario in urban areas. Due to the sparse and non-coordinated deployment of APs, Wi-Fi is not an "always connected" technology. It is designed primarily for the mobile user that accesses the network while relatively stationary. It provides high data rates between locally connected clients but is limited by the capacity of the link between the AP and the Internet. 3G is based on technology that has evolved to fill the growing need for data in wireless voice networks.
3G provides seamless connectivity across large coverage areas with advertised data rates of 2 to 14 Mbits/s, shared among all users connected to any given base station. 3G network operators charge either based on consumption or have flat rate monthly plans. These networks are expensive to deploy and the performance experienced by users is sensitive to the number of users in a cell due to the large coverage areas. For data applications, one could argue that persistent connectivity may not be necessary. Instead, being connected “frequently enough” should be acceptable if applications and communications protocols could take advantage of short, but high bandwidth contact opportunities. We present results of a side by side, Wi-Fi vs 3G face-off. We show that with default access point selection (greatest signal strength), unmodified network setup methods (scan, associate, request an IP address with DHCP), and off the shelf equipment with no modifications or external antennae, opportunistic Wi-Fi performance is comparable to 3G. Despite only connecting to open or community access points in a typical urban residential area, Wi-Fi throughput surpasses 3G at walking and driving speed while uploading data and is nearly equivalent to 3G while downloading. The remainder of this paper is organized as follows: We first explain how the experiments were conducted and describe the equipment and software setup in Section 2. Next, in Section 3, we show the results of the experimental runs with the comparisons of 3G vs Wi-Fi under driving and walking conditions as well as look at the effects related to the uploading or downloading of data. Finally, we discuss related work in Section 4 and conclude the paper in Section 5. 2 Experiment description The experiments consist of two mobile clients and a server that is always connected to the Internet. One mobile client uses its Wi-Fi interface to transmit and receive data to/from the server and the other uses 3G. Experiments are performed both on foot and in a car following the same route. Wi-Fi and 3G tests are run simultaneously for a true side-by-side comparison. While downloading, the data originates at the servers and is streamed down to the mobile clients. Conversely, when uploading, the data originates on the mobile clients and is streamed to the servers. We investigated the potential of using the 3G device for collecting both 3G and Wi-Fi data but discovered that stationary Wi-Fi transfers in the uplink direction were capped around 6 Mbits/s, well below the advertised rates of an 802.11G enabled interface. We also saw variations in the Wi-Fi throughput while running simultaneous 3G and Wi-Fi experiments on the same mobile device. Due to these limitations, we chose to use a separate platform for each technology. 2.1 Server setup The servers run the Ubuntu distribution of Linux (version 8.04.1 with a 2.6.24- 19-server kernel) and are publicly accessible machines on the Internet that are the source or sink for the clients. The servers are virtual machines running on the Open Cirrus cluster hosted at Intel Labs Pittsburgh (ILP). The dedicated Internet connection to ILP is a 45Mbit/s fractional T3 and did not pose any restrictions in these experiments. We ran extensive tests of the code on the virtual machines and saw no performance related issues with the system or the network. The 3G server runs the apache web server and hosts large, randomly generated data files that can be downloaded by the client. 
The Wi-Fi server runs a simple socket program that generates data with /dev/random and streams it down to the Wi-Fi client. When data is being uploaded from the client, both the 3G and Wi-Fi servers run our socket program that receives the data and sends it to /dev/null. The network interfaces for both servers are monitored with tcpdump and the resulting data traces are stored for off-line analysis.
2.2 Wi-Fi client
The Wi-Fi client setup consists of an IBM T30 laptop with a default install of the Ubuntu distribution of Linux (version 8.04 with a 2.6.24-21-server kernel). The internal wireless device is the Intel 2915ABG network card using the unmodified Intel open source Pro/Wireless 2200/2915 Network Driver (version 1.2.2kmprq with 3.0 firmware). No external antenna is connected to the laptop for the experiments. The laptop attempts to connect to the Internet by first scanning the area for available open or community APs (excluding those with encryption enabled and those we have marked as unusable; an example entry is CMU's public Wi-Fi, which is open but only allows registered MAC addresses to use the network) and chooses the one with the strongest signal strength. Once the AP is selected, it begins the association process followed by IP acquisition via DHCP. If the AP allocates an IP address to the client, it attempts to ping a known server to confirm connection to the Internet. Once Internet connectivity is verified, the Wi-Fi client begins either downloading or uploading data from/to the server via our simple socket program. After the client travels out of range of the AP, it detects the severed connection by monitoring the amount of data traversing the network interface. Once the client stops seeing packets for more than a configured time threshold, the current AP is abandoned and the search for another available AP begins. We choose 5 seconds in our experiments to allow ample time to make sure we do not attempt to reconnect to an AP that is at the trailing edge of the wireless range (a pseudocode sketch of this loop is given after Section 2.3). All experimental runs utilize a USB global positioning system (GPS) receiver that is plugged into the laptop, capturing speed, location, and time once per second. The GPS device is also used to synchronize the time on the laptop. The laptop captures all data that is transmitted or received over the wireless interface with tcpdump.
2.3 3G client
The 3G experiments employ an out-of-the-box Apple iPhone 3G with no modifications to the hardware. The iPhone connects via the AT&T 3G network, uses a jail-broken version of the firmware (2.2, 5G77), and its modem baseband firmware is at version 02.11.07. The 3G client begins by first synchronizing its clock with NTP. Once the clock has been synchronized, it launches tcpdump to monitor the 3G wireless interface. After the monitoring has started, the client begins either downloading or uploading data. To download data, we use an open source command line tool for transferring files called curl. The curl program downloads a large file from the server and writes the output to /dev/null to avoid unnecessary CPU and battery consumption on the mobile device. This also allows us to isolate only network related effects. If the client is uploading data, the dd command continuously reads data out of /dev/zero. The output is piped into netcat and the data is streamed to the server.
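The Wi-Fi client loop described in Section 2.2 (scan, choose the strongest open AP, associate, obtain an address via DHCP, verify reachability, then transfer until no packets are seen for 5 seconds) can be summarised as follows. This is a minimal Python sketch of that control flow only; the scan/associate/DHCP/transfer helpers are hypothetical placeholders, not the authors' actual tooling.

```python
import time

NO_TRAFFIC_TIMEOUT_S = 5  # threshold used in the paper before abandoning an AP

def opportunistic_loop(scan, associate, get_dhcp_lease, ping_server,
                       transfer_chunk, bytes_seen, blacklist=frozenset()):
    """Skeleton of the client loop from Section 2.2; every callable passed in
    is a hypothetical placeholder for the real wireless tooling."""
    while True:
        # Scan for open or community APs, skip known-unusable ones,
        # and pick the one with the strongest signal.
        candidates = [ap for ap in scan() if ap.open and ap.ssid not in blacklist]
        if not candidates:
            continue
        ap = max(candidates, key=lambda a: a.signal)
        if not (associate(ap) and get_dhcp_lease() and ping_server()):
            continue
        # Transfer data until no packets traverse the interface for the timeout.
        last_activity = time.time()
        while time.time() - last_activity < NO_TRAFFIC_TIMEOUT_S:
            before = bytes_seen()
            transfer_chunk()
            if bytes_seen() > before:
                last_activity = time.time()
        # No traffic for 5 s: abandon this AP and go back to scanning.
```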
Fig. 1. Maps of an area in Pittsburgh showing (a) all available open access points and (b) the route followed for the experiments.
2.4 The experiment route
The experiments are performed in a residential area of Pittsburgh, Pennsylvania near the campus of Carnegie Mellon University (CMU). Figure 1(a) is a map of the area where we focused our measurement collection. This area lies between the CMU campus and a nearby business district where many students frequently travel. Each red tag in the figure represents an open Wi-Fi AP found from our wireless scans (our scan logs reveal 511 APs in the area, with 82 that appear open). The area is also covered by 3G service, allowing us to compare the two access technologies. We believe this area to be representative of typical Wi-Fi densities found in most European or US urban areas (see http://wigle.net). Figure 1(b) shows the route selected in this area for our experiments. The experiment starts at the leftmost tag at the bottom right hand corner of the figure and follows the indicated route until the destination (same as start position) is reached. The total distance of the route is about 3.7 miles. For the walking experiments, we maintain a constant speed (2.4 MPH) throughout the course of the route. While driving, we obeyed all traffic laws and signs and remained as close to the speed limit (25 MPH) as possible.
3 Results
Table 1 summarizes the results of the experiments, which are based on 16 runs from different days performed in the afternoon and late evening.
Table 1. 3G vs Opportunistic Wi-Fi
Radio | Speed   | Data-flow | Usable contact time | Throughput    | Total transfer
3G    | driving | download  | 760 seconds         | 579.4 kbits/s | 55 MB
Wi-Fi | driving | download  | 223 seconds         | 1220 kbits/s  | 34 MB
3G    | walking | download  | 3385 seconds        | 673 kbits/s   | 285 MB
Wi-Fi | walking | download  | 1353 seconds        | 1243 kbits/s  | 210 MB
3G    | driving | upload    | 866 seconds         | 130 kbits/s   | 14 MB
Wi-Fi | driving | upload    | 118 seconds         | 1345 kbits/s  | 20 MB
3G    | walking | upload    | 3164 seconds        | 129 kbits/s   | 51 MB
Wi-Fi | walking | upload    | 860 seconds         | 1523 kbits/s  | 164 MB
3.1 3G vs Wi-Fi downloads
Figure 2 shows the instantaneous throughput achieved for a single, representative experiment for 3G and Wi-Fi at driving speeds of up to 30 MPH. The 3G device is able to transfer around 55 MB of data for the 760 seconds of the experiment duration. During this time, the Wi-Fi client connects opportunistically to APs along the route and manages to spend 223 seconds connected, transferring 34 MB. These "in the wild" results clearly show the potential of this untapped resource of open Wi-Fi connectivity and have a similar behavior to the isolated and controlled experiments in [5, 6, 14].
Fig. 2. Instantaneous throughput (Mbits/s) for 3G vs Wi-Fi downloads at driving speeds and total data transferred (MB).
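As a quick sanity check on these numbers, the throughput column in Table 1 is simply the total transfer divided by the usable contact time. The snippet below (assuming decimal megabytes, 1 MB = 10^6 bytes, which is what makes the reported figures line up) reproduces two of the rows.

```python
def throughput_kbits_per_s(total_mb, contact_seconds):
    """Average throughput in kbit/s, with 1 MB = 10^6 bytes and 1 kbit = 1000 bits."""
    return total_mb * 8_000 / contact_seconds

# Wi-Fi, driving, download: 34 MB over 223 s -> ~1220 kbit/s (matches Table 1)
print(round(throughput_kbits_per_s(34, 223)))   # 1220
# 3G, driving, download: 55 MB over 760 s -> ~579 kbit/s (Table 1 reports 579.4)
print(round(throughput_kbits_per_s(55, 760)))   # 579
```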
Deeper investigation of our logs shows that the majority of contacts were initiated while the client was either stopped, slowing down, or accelerating after a stop. This meant that the client stayed within the range of a single AP for longer durations and allowed more time to perform the steps needed to set up a connection and begin a data transfer. Since our AP selection algorithm was to always select the AP with the strongest signal, this was generally not the optimum choice while moving. When the client approaches a potential AP, it would be best to select the AP that is just coming into range, to maximize the usable connection duration. We found that many connection attempts succeeded, but when the data transfer was about to begin, the connection was severed. This does not mean that opportunistic contacts cannot happen at speed, but instead brings to light the need for faster AP association and setup techniques similar to QuickWiFi and better AP selection algorithms for mobile clients. Both of these would allow better exploitation of opportunistic transactions for in-motion scenarios. Also plotted in Figure 2 is the total amount of data transferred for each access technology. The 3G connection is always connected throughout the entire experiment and shows a linear increase in the total bytes received. Wi-Fi is represented by a step function which highlights how each connection opportunity benefits the overall amount of data received. Each point on the "Wi-Fi total" line represents a successful contact with an AP. Even though Wi-Fi contacts show large variability due to the intermittent nature of the contact opportunities, there is still a significant amount of data transferred because of the higher data rates of the technology. This is more apparent in the walking experiments, where the speeds are much slower and the use of the sidewalks brings the client physically closer to the APs, allowing the client to remain connected for longer durations. Figure 3 shows throughput results of a single, representative experiment for 3G and Wi-Fi at walking speeds. The walking experiments last around 3385 seconds and 3G is able to transfer around 285 MB of data. Wi-Fi, on the other hand, is only connected for 1353 seconds of the experiment and downloads 210 MB. Again, each Wi-Fi contact is able to exploit the opportunity and take advantage of very short, high throughput contacts, transferring significant amounts of data.
Fig. 3. Instantaneous throughput (Mbits/s) for 3G vs Wi-Fi downloads at walking speeds and total data transferred (MB).
3.2 3G vs Wi-Fi uploads
Figure 4 shows the instantaneous throughput of 3G and Wi-Fi uploads at walking speeds. It has a similar behavior to that of the downloads described previously, but this time the total data transferred for Wi-Fi exceeds 3G by 2.6 times. This is due to the poor upload performance of 3G on the mobile device. The instantaneous 3G traffic pattern shows transitioning between idle states and periods of data transfer that result in throughput much less than that of the downloads (averaging 130 kbits/s). In order to understand this phenomenon, we performed additional experiments with a stationary laptop (Lenovo T500 using the iPhone SIM card) and the iPhone with updated software and baseband firmware, 3.0 (7A341) and 04.26.08 respectively. We found that this periodic pattern is no longer evident. The new traces exhibit more consistent, albeit lower, throughput throughout the entire duration of an upload. The total amount of data transferred for a similar experiment did not change.
We conjecture that these are due to improvements in the iPhone baseband software which allow more efficient buffering of data, eliminating the burstiness of the traffic egressing the device.
Fig. 4. Instantaneous throughput (Mbits/s) for 3G vs Wi-Fi uploads at walking speeds and total data transferred (MB).
Further upload experiments with the laptop show that it is able to transfer data at twice the rate of the iPhone. These results suggest hardware limitations on the iPhone and/or an artificial software limitation placed on the device (http://www.networkperformancedaily.com/2008/06/ 3g iphone shows bandwidth limi.html). One of the side observations from our experiments that impacts the mobile client throughput is that residential Internet service rates are much higher than shown in . Upon further investigation, we discovered that Verizon FIOS (http://www22.verizon.com) has recently become available in this area and our experiments show that some homes have upgraded to this higher level of service. This is hopeful for utilizing opportunistic communications since more data can be transferred during these very short contact opportunities. It is also important to note that during these experiments, the full potential of the Wi-Fi AP was not reached; it was instead limited to the rate of the back-haul link the AP was connected to. Even though higher throughput links are dropping in price for residential service plans, affordable service provider rates are still well below the available wireless rates of 802.11. This will always place the bottleneck for this type of communication at the back-haul link to the Internet (http://www.dslreports.com/shownews/Average-Global-Download-Speed-15Mbps- 101594).
4 Related work
This work compares two dominant access technologies, namely 3G and Wi-Fi, in the wild. Despite many works related to the performance of 3G and Wi-Fi networks, this is the first work to publish a side-by-side comparison while in motion. This work highlights the potential of Wi-Fi as a contender for high throughput in-motion communication. The performance of communicating with stationary access points has been studied in a variety of different scenarios. There have been experiments on a high speed Autobahn, in the Californian desert, and on an infrequently travelled road in Canada where the environment and test parameters were carefully controlled. These works showed that a significant amount of data can be transferred while moving past access points along the road. The authors of took this idea into the wild and reported on 290 drive-hours in urban environments and found the median connection duration to be 13 seconds. This finding is very promising for in-motion communications. This could potentially allow large amounts of data to be transferred over currently under-utilized links without the use of expensive 3G connections. Previous work investigating the performance of HSDPA (High-Speed Downlink Packet Access) and CDMA 1x EV-DO (Code Division Multiple Access, Evolution-Data Optimized) networks shows similar findings with variability in these data networks [10, 12, 8]. We also see this behavior in our experiments run on an HSDPA network.
5 Conclusion
In this paper, we perform a comparison of two popular wireless access technologies, namely 3G and Wi-Fi.
3G provides continuous connectivity with low data rates and relatively high cost while Wi-Fi is intermittent with high bursts of data and comes for free when they are open. We experimentally show that with default AP selection techniques, off-the-shelf equipment, and no external antennae, we are able to opportunistically connect to open or community Wi-Fi APs (incurring no cost to the user) in an urban area and transfer significant amounts of data at walking and driving speeds. Intermittent Wi-Fi connectivity in an urban area can yield equivalent or greater throughput than what can be achieved using an “always-connected” 3G network. Wi-Fi could be easily modified to increase the number of successful opportunities. (1) Reduce connection setup time with APs, especially with community networks like FON that have a lengthy authentication process. (2) Clients could take advantage of Wi-Fi maps and real time location updates in order to choose which APs will provide the most benefit to the in-motion user. Finally, WiFi is bottlenecked by the ISP link and (3) caching data on the AP (both for upload and download) would eliminate the Internet back-haul link bottleneck. We are currently testing an improved in-motion Wi-Fi architecture that exhibits significantly higher transfer rates than 3G at all speeds. References 1. IEEE Standard 802.11: 1999(E), Wireless LAN Medium Access Control (MAC) and Physical Layer Specifications, August 1999. 2. IEEE 802.11n-2009, Wireless LAN Medium Access Control (MAC) and Physical Layer Specifications Enhancements for Higher Throughput, June 2009. 3. Bychkovsky, V., Hull, B., Miu, A., Balakrishnan, H., and Madden, S. A measurement study of vehicular internet access using in situ wi-fi networks. In MobiCom ’06: Proceedings of the 12th annual international conference on Mobile computing and networking (Los Angeles, CA, USA, 2006), ACM, pp. 50–61. 4. Eriksson, J., Balakrishnan, H., and Madden, S. Cabernet: Vehicular content delivery using wifi. MobiCom ’08: Proceedings of the 14th ACM international conference on Mobile computing and networking (2008), 199–210. 5. Gass, R., Scott, J., and Diot, C. Measurements of in-motion 802.11 networking. In WMCSA ’06: (HotMobile) Proceedings of the Seventh IEEE Workshop on Mobile Computing Systems & Applications (Semiahmoo Resort, Washington, USA, 2006), IEEE Computer Society, pp. 69–74. 6. Hadaller, D., Keshav, S., Brecht, T., and Agarwal, S. Vehicular opportunistic communication under the microscope. In MobiSys ’07: Proceedings of the 5th international conference on Mobile systems, applications and services (San Juan, Puerto Rico, 2007), ACM, pp. 206–219. 7. Han, D., Agarwala, A., Andersen, D. G., Kaminsky, M., Papagiannaki, K., and Seshan, S. Mark-and-sweep: Getting the “inside” scoop on neighborhood networks. In IMC ’08: Proceedings of the 8th ACM SIGCOMM conference on Internet measurement (Vouliagmeni, Greece, 2008), ACM. 8. Jang, K., Han, M., Cho, S., Ryu, H.-K., Lee, J., Lee, Y., and Moon, S. 3G and 3.5G wireless network performance measured from moving cars and highspeed trains. In ACM Workshop on Mobile Internet through Cellular Networks: Operations, Challenges, and Solutions (MICNET) (Beijing, China, October 2009). 9. Jones, K., and Liu, L. What where wi: An analysis of millions of wi-fi access points. In in Proceedings of 2007 IEEE Portable: International Conference on Portable Information Devices (May 2007), pp. 25–29. 10. Jurvansuu, M., Prokkola, J., Hanski, M., and Peral¨ a, P. H. J. ¨ HSDPA performance in live networks. 
In ICC (2007), pp. 467–471. 11. Kozuch, M., Ryan, M., Gass, R., Scholsser, S., O’Hallaron, D., Cipar, J., Stroucken, M., Lopez, J., and Ganger, G. Tashi: Location-Aware Cluster Management. In First Workshop on Automated Control for Datacenters and Clouds (ACDC09) (Barcelona, Spain, June 2009). 12. Liu, X., Sridharan, A., Machiraju, S., Seshadri, M., and Zang, H. Experiences in a 3G network: Interplay between the wireless channel and applications. In ACM MOBICOM (San Francisco, CA, September 2008). 13. Nicholson, A. J., Chawathe, Y., Chen, M. Y., Noble, B. D., and Wetherall, D. Improved access point selection. In MobiSys ’06: Proceedings of the 4th international conference on Mobile systems, applications and services (New York, NY, USA, 2006), ACM, pp. 233–245. 14. Ott, J., and Kutscher, D. Drive-thru internet: IEEE 802.11b for automobile users. In INFOCOM 2004. Twenty-third AnnualJoint Conference of the IEEE Computer and Communications Societies (Hong Kong, March 2004), vol. 1, IEEE, p. 373. (www.pam2010.ethz.ch/papers/full-length/8.pdf (R Gass))
Learn more about the MMR titer from experts. Thanks for visiting our MMR titer test resource page. We hope that you will get a better understanding of what an MMR titer is and options for getting this blood test after reading this information. An MMR titer is a blood test that checks your antibody titer levels to MMR. It is performed by a laboratory. An MMR titer test is not an injection, shot, or vaccination. MMR is an abbreviation for 3 different infections - Measles, Mumps and Rubella. If you have MMR antibodies in your blood, it means that your immune system has "seen" an MMR infection before and has the capability to respond to an MMR infection if you are exposed again. Generally speaking, one is thought to be immune to MMR if they have a certain threshhold of antibodies circulating in their blood at the time they are measured using a blood test. Yes. One way to get MMR antibodies in your blood is through something thought of as natural exposure to MMR. More specifically, if you were infected by Measles, Mumps, or Rubella when you were younger, you probably had symptoms and felt sick for a while. In most cases, after you recover, your immune system forms a "memory" to the MMR-related infections that infected you. The next time you are exposed to the same infection, your immune system will remember what it's like to be infected and can easily produce an immune response using antibodies in your blood to help you fight off the infection. Another way to get MMR antibodies is through getting an MMR vaccination. The MMR vaccine is a special vaccine that contains weakened live versions of Measles, Mumps and Rubella. When you get immunized with the MMR vaccine, you are voluntarily exposing yourself to a mild form of the MMR infection so that your immune system can see it and form a memory of it through the process of fighting it. Said another way, an MMR vaccine is designed to simulate a natural MMR infection. The immune system responds to the MMR vaccine-induced infection by producing antibodies to MMR to protect you against future infections. Some people have a regular doctor who can order an MMR titer test for them and sometimes even draw their blood. In this age of high-deductibles and less insurance coverage, however, the costs of doing that way can be quite high at times. In contrast, it is now possible to order an MMR titer online and pay an affordable and, more importantly, guaranteed price for your testing. For a good option to get an MMR titer in this way, check out Accesa Labs. The MMR titer is a laboratory test. First, you need to get an order from your doctor or medical provider to get your blood drawn for the MMR titer. Then, you go to a laboratory or a medical office and get your blood drawn and analyzed. By law, the results of your MMR titer test will be sent to the medical provider who ordered your test. The medical provider will then release them to you. Once your blood is drawn, the results for most MMR titers should be back in 3-4 business days. MMR titer test results can be reported as both qualitative or quantitative. "Qualitative" means that your results will be either Positive or Negative, without a numerical result. "Quantitative" means that your results will be reported as a number. The number can be compared against the Reference Interval on your lab report. An MMR titer test report will show a result that represents your titer level for Measles, Mumps and Rubella. 
Accompanying your titer result will be a reference range (sometimes called a value or index) that shows titer ranges with different interpretations. If your results are negative or equivocal, it typically means that you are not immune. If your results are positive, it means that you are considered immune by generally accepted standards. If your titer results are negative or equivocal, it means that you have to undergo the MMR vaccine series again to try to rebuild your immune system's memory of MMR if you wish to be immune (or if your school / employer requires it). Unfortunately, individual vaccines for Measles, Mumps and Rubella are no longer made. To be considered immune after a negative titer, one needs to receive the entire MMR vaccine again. MMR titers are used as a measure of immunity to Measles, Mumps and Rubella. People who work in certain settings, such as hospitals or medical clinics, are at an increased risk of being exposed to certain diseases. As a result, many institutions want proof of a certain antibody level or vaccination for diseases such as Measles, Mumps and Rubella prior to stepping into those settings. How long MMR titer results remain valid really depends on who is asking for the titer result. For personal reasons, MMR titer results may be good indefinitely, assuming that one doesn't plan on being exposed to MMR. For school or employment reasons, an MMR titer may be required every year or every time one starts a new position. A Varicella titer is a different test that looks for immunity to Varicella, or chickenpox. The MMR titer and Varicella titer are frequently ordered together. Whether the test is covered depends on the type of health insurance that you have. Some insurance plans will cover it but many will not. Additionally, in order to use your health insurance, you will have to find a medical provider who accepts your health insurance and can order the tests for you. An antibody is a component of the immune system. Sometimes, it is known as an immunoglobulin. The immune system uses antibodies as the initial defenders against biological invaders such as bacteria and viruses. Specific antibodies create a lock-and-key fit with components (also called antigens) on the bacteria and viruses that they protect against. CPT codes that have been used for the MMR titer previously include 86765, 86735 and 86762. In the US, the MMR vaccine is made by Merck. This resource has more information about titer testing in general and some other information about specific types of titers.
The user interface (UI), in the industrial design field of human–computer interaction, is the space where interactions between humans and machines occur. The goal of this interaction is to allow effective operation and control of the machine from the human end, whilst the machine simultaneously feeds back information that aids the operators' decision-making process. Examples of this broad concept of user interfaces include the interactive aspects of computer operating systems, hand tools, heavy machinery operator controls, and process controls. The design considerations applicable when creating user interfaces are related to or involve such disciplines as ergonomics and psychology. Generally, the goal of user interface design is to produce a user interface which makes it easy (self-explanatory), efficient, and enjoyable (user-friendly) to operate a machine in the way which produces the desired result. This generally means that the operator needs to provide minimal input to achieve the desired output, and also that the machine minimizes undesired outputs to the human. With the increased use of personal computers and the relative decline in societal awareness of heavy machinery, the term user interface is generally assumed to mean the graphical user interface, while industrial control panel and machinery control design discussions more commonly refer to human-machine interfaces. Other terms for user interface are man–machine interface (MMI) and when the machine in question is a computer human–computer interface. The user interface or human–machine interface is the part of the machine that handles the human–machine interaction. Membrane switches, rubber keypads and touchscreens are examples of the physical part of the Human Machine Interface which we can see and touch. In complex systems, the human–machine interface is typically computerized. The term human–computer interface refers to this kind of system. In the context of computing the term typically extends as well to the software dedicated to control the physical elements used for human-computer interaction. The engineering of the human–machine interfaces is enhanced by considering ergonomics (human factors). The corresponding disciplines are human factors engineering (HFE) and usability engineering (UE), which is part of systems engineering. Tools used for incorporating human factors in the interface design are developed based on knowledge of computer science, such as computer graphics, operating systems, programming languages. Nowadays, we use the expression graphical user interface for human–machine interface on computers, as nearly all of them are now using graphics. There is a difference between a user interface and an operator interface or a human–machine interface (HMI). - The term "user interface" is often used in the context of (personal) computer systems and electronic devices - Where a network of equipment or computers are interlinked through an MES (Manufacturing Execution System)-or Host to display information. - A human-machine interface (HMI) is typically local to one machine or piece of equipment, and is the interface method between the human and the equipment/machine. An operator interface is the interface method by which multiple equipment that are linked by a host control system is accessed or controlled. - The system may expose several user interfaces to serve different kinds of users. 
For example, a computerized library database might provide two user interfaces, one for library patrons (limited set of functions, optimized for ease of use) and the other for library personnel (wide set of functions, optimized for efficiency). - The user interface of a mechanical system, a vehicle or an industrial installation is sometimes referred to as the human–machine interface (HMI). HMI is a modification of the original term MMI (man-machine interface). In practice, the abbreviation MMI is still frequently used although some may claim that MMI stands for something different now. Another abbreviation is HCI, but is more commonly used for human–computer interaction. Other terms used are operator interface console (OIC) and operator interface terminal (OIT). However it is abbreviated, the terms refer to the 'layer' that separates a human that is operating a machine from the machine itself. Without a clean and usable interface, humans would not be able to interact with information systems. In science fiction, HMI is sometimes used to refer to what is better described as direct neural interface. However, this latter usage is seeing increasing application in the real-life use of (medical) prostheses—the artificial extension that replaces a missing body part (e.g., cochlear implants). In some circumstances, computers might observe the user and react according to their actions without specific commands. A means of tracking parts of the body is required, and sensors noting the position of the head, direction of gaze and so on have been used experimentally. This is particularly relevant to immersive interfaces. The history of user interfaces can be divided into the following phases according to the dominant type of user interface: 1945–1968: Batch interface In the batch era, computing power was extremely scarce and expensive. User interfaces were rudimentary. Users had to accommodate computers rather than the other way around; user interfaces were considered overhead, and software was designed to keep the processor at maximum utilization with as little overhead as possible. The input side of the user interfaces for batch machines were mainly punched cards or equivalent media like paper tape. The output side added line printers to these media. With the limited exception of the system operator's console, human beings did not interact with batch machines in real time at all. Submitting a job to a batch machine involved, first, preparing a deck of punched cards describing a program and a dataset. Punching the program cards wasn't done on the computer itself, but on keypunches, specialized typewriter-like machines that were notoriously balky, unforgiving, and prone to mechanical failure. The software interface was similarly unforgiving, with very strict syntaxes meant to be parsed by the smallest possible compilers and interpreters. Holes are punched in the card according to a prearranged code transferring the facts from the census questionnaire into statistics Once the cards were punched, one would drop them in a job queue and wait. Eventually. operators would feed the deck to the computer, perhaps mounting magnetic tapes to supply another dataset or helper software. The job would generate a printout, containing final results or (all too often) an abort notice with an attached error log. Successful runs might also write a result on magnetic tape or generate some data cards to be used in later computation. The turnaround time for a single job often spanned entire days. 
If one were very lucky, it might be hours; real-time response was unheard of. But there were worse fates than the card queue; some computers actually required an even more tedious and error-prone process of toggling in programs in binary code using console switches. The very earliest machines actually had to be partly rewired to incorporate program logic into themselves, using devices known as plugboards. Early batch systems gave the currently running job the entire computer; program decks and tapes had to include what we would now think of as operating system code to talk to I/O devices and do whatever other housekeeping was needed. Midway through the batch period, after 1957, various groups began to experiment with so-called “load-and-go” systems. These used a monitor program which was always resident on the computer. Programs could call the monitor for services. Another function of the monitor was to do better error checking on submitted jobs, catching errors earlier and more intelligently and generating more useful feedback to the users. Thus, monitors represented a first step towards both operating systems and explicitly designed user interfaces. 1969–present: Command-line user interface Command-line interfaces (CLIs) evolved from batch monitors connected to the system console. Their interaction model was a series of request-response transactions, with requests expressed as textual commands in a specialized vocabulary. Latency was far lower than for batch systems, dropping from days or hours to seconds. Accordingly, command-line systems allowed the user to change his or her mind about later stages of the transaction in response to real-time or near-real-time feedback on earlier results. Software could be exploratory and interactive in ways not possible before. But these interfaces still placed a relatively heavy mnemonic load on the user, requiring a serious investment of effort and learning time to master. The earliest command-line systems combined teleprinters with computers, adapting a mature technology that had proven effective for mediating the transfer of information over wires between human beings. Teleprinters had originally been invented as devices for automatic telegraph transmission and reception; they had a history going back to 1902 and had already become well-established in newsrooms and elsewhere by 1920. In reusing them, economy was certainly a consideration, but psychology and the Rule of Least Surprise mattered as well; teleprinters provided a point of interface with the system that was familiar to many engineers and users. The widespread adoption of video-display terminals (VDTs) in the mid-1970s ushered in the second phase of command-line systems. These cut latency further, because characters could be thrown on the phosphor dots of a screen more quickly than a printer head or carriage can move. They helped quell conservative resistance to interactive programming by cutting ink and paper consumables out of the cost picture, and were to the first TV generation of the late 1950s and 60s even more iconic and comfortable than teleprinters had been to the computer pioneers of the 1940s. Just as importantly, the existence of an accessible screen — a two-dimensional display of text that could be rapidly and reversibly modified — made it economical for software designers to deploy interfaces that could be described as visual rather than textual. 
The pioneering applications of this kind were computer games and text editors; close descendants of some of the earliest specimens, such as rogue(6), and vi(1), are still a live part of Unix tradition. 1985: SAA User Interface or Text-Based User Interface In 1985, with the beginning of Microsoft Windows and other graphical user interfaces, IBM created what is called the Systems Application Architecture (SAA) standard which include the Common User Access (CUA) derivative. CUA successfully created what we know and use today in Windows, and most of the more recent DOS or Windows Console Applications will use that standard as well. This defined that a pulldown menu system should be at the top of the screen, status bar at the bottom, shortcut keys should stay the same for all common functionality (F2 to Open for example would work in all applications that followed the SAA standard). This greatly helped the speed at which users could learn an application so it caught on quick and became an industry standard. 1968–present: Graphical User Interface AMX Desk made a basic WIMP Linotype WYSIWYG 2000, 1989 - 1968 – Douglas Engelbart demonstrated NLS, a system which uses a mouse, pointers, hypertext, and multiple windows. - 1970 – Researchers at Xerox Palo Alto Research Center (many from SRI) develop WIMP paradigm (Windows, Icons, Menus, Pointers) - 1973 – Xerox Alto: commercial failure due to expense, poor user interface, and lack of programs - 1979 – Steve Jobs and other Apple engineers visit Xerox. Pirates of Silicon Valley dramatizes the events, but Apple had already been working on the GUI before the visit - 1981 – Xerox Star: focus on WYSIWYG. Commercial failure (25K sold) due to cost ($16K each), performance (minutes to save a file, couple of hours to recover from crash), and poor marketing - 1984 – Apple Macintosh popularizes the GUI. Super Bowl commercial shown once, most expensive ever made at that time - 1984 – MIT's X Window System: hardware-independent platform and networking protocol for developing GUIs on UNIX-like systems - 1985 – Windows 1.0 – provided GUI interface to MS-DOS. No overlapping windows (tiled instead). - 1985 – Microsoft and IBM start work on OS/2 meant to eventually replace MS-DOS and Windows - 1986 – Apple threatens to sue Digital Research because their GUI desktop looked too much like Apple's Mac. - 1987 – Windows 2.0 – Overlapping and resizable windows, keyboard and mouse enhancements - 1987 – Macintosh II: first full-color Mac - 1988 – OS/2 1.10 Standard Edition (SE) has GUI written by Microsoft, looks a lot like Windows 2 Primary methods used in the interface design include prototyping and simulation. Typical human–machine interface design consists of the following stages: interaction specification, interface software specification and prototyping: All great interfaces share eight qualities or characteristics: - Clarity The interface avoids ambiguity by making everything clear through language, flow, hierarchy and metaphors for visual elements. - Concision It's easy to make the interface clear by over-clarifying and labeling everything, but this leads to interface bloat, where there is just too much stuff on the screen at the same time. If too many things are on the screen, finding what you're looking for is difficult, and so the interface becomes tedious to use. The real challenge in making a great interface is to make it concise and clear at the same time. - Familiarity Even if someone uses an interface for the first time, certain elements can still be familiar. 
Real-life metaphors can be used to communicate meaning. - Responsiveness A good interface should not feel sluggish. This means that the interface should provide good feedback to the user about what's happening and whether the user's input is being successfully processed. - Consistency Keeping your interface consistent across your application is important because it allows users to recognize usage patterns. - Aesthetics While you don't need to make an interface attractive for it to do its job, making something look good will make the time your users spend using your application more enjoyable; and happier users can only be a good thing. - Efficiency Time is money, and a great interface should make the user more productive through shortcuts and good design. - Forgiveness A good interface should not punish users for their mistakes but should instead provide the means to remedy them. Principle of least astonishment The principle of least astonishment (POLA) is a general principle in the design of all kinds of interfaces. It is based on the idea that human beings can only pay full attention to one thing at one time, leading to the conclusion that novelty should be minimized. If an interface is used persistently, the user will unavoidably develop habits for using the interface. The designer's role can thus be characterized as ensuring the user forms good habits. If the designer is experienced with other interfaces, they will similarly develop habits, and often make unconscious assumptions regarding how the user will interact with the interface. HP Series 100 HP-150 Touchscreen - Direct manipulation interface is the name of a general class of user interfaces that allow users to manipulate objects presented to them, using actions that correspond at least loosely to the physical world. - Graphical user interfaces (GUI) accept input via devices such as a computer keyboard and mouse and provide articulated graphical output on the computer monitor. There are at least two different principles widely used in GUI design: Object-oriented user interfaces (OOUIs) and application oriented interfaces. - Touchscreens are displays that accept input by touch of fingers or a stylus. Used in a growing amount of mobile devices and many types of point of sale, industrial processes and machines, self-service machines etc. - Command line interfaces, where the user provides the input by typing a command string with the computer keyboard and the system provides output by printing text on the computer monitor. Used by programmers and system administrators, in engineering and scientific environments, and by technically advanced personal computer users. - Touch user interface are graphical user interfaces using a touchpad or touchscreen display as a combined input and output device. They supplement or replace other forms of output with haptic feedback methods. Used in computerized simulators etc. - Hardware interfaces are the physical, spatial interfaces found on products in the real world from toasters, to car dashboards, to airplane cockpits. They are generally a mixture of knobs, buttons, sliders, switches, and touchscreens. - Attentive user interfaces manage the user attention deciding when to interrupt the user, the kind of warnings, and the level of detail of the messages presented to the user. - Batch interfaces are non-interactive user interfaces, where the user specifies all the details of the batch job in advance to batch processing, and receives the output when all the processing is done. 
The computer does not prompt for further input after the processing has started. - Conversational interfaces enable users to command the computer with plain text English (e.g., via text messages, or chatbots) or voice commands, instead of graphic elements. These interfaces often emulate human-to-human conversations. - Conversational interface agents attempt to personify the computer interface in the form of an animated person, robot, or other character (such as Microsoft's Clippy the paperclip), and present interactions in a conversational form. - Crossing-based interfaces are graphical user interfaces in which the primary task consists in crossing boundaries instead of pointing. - Gesture interfaces are graphical user interfaces which accept input in a form of hand gestures, or mouse gestures sketched with a computer mouse or a stylus. - Holographic user interfaces provide input to electronic or electro-mechanical devices by passing a finger through reproduced holographic images of what would otherwise be tactile controls of those devices, floating freely in the air, detected by a wave source and without tactile interaction. - Intelligent user interfaces are human-machine interfaces that aim to improve the efficiency, effectiveness, and naturalness of human-machine interaction by representing, reasoning, and acting on models of the user, domain, task, discourse, and media (e.g., graphics, natural language, gesture). - Motion tracking interfaces monitor the user's body motions and translate them into commands, currently being developed by Apple. - Multi-screen interfaces, employ multiple displays to provide a more flexible interaction. This is often employed in computer game interaction in both the commercial arcades and more recently the handheld markets. - Non-command user interfaces, which observe the user to infer his / her needs and intentions, without requiring that he / she formulate explicit commands. - Object-oriented user interfaces (OOUI) are based on object-oriented programming metaphors, allowing users to manipulate simulated objects and their properties. - Reflexive user interfaces where the users control and redefine the entire system via the user interface alone, for instance to change its command verbs. Typically this is only possible with very rich graphic user interfaces. - Search interface is how the search box of a site is displayed, as well as the visual representation of the search results. - Tangible user interfaces, which place a greater emphasis on touch and physical environment or its element. - Task-focused interfaces are user interfaces which address the information overload problem of the desktop metaphor by making tasks, not files, the primary unit of interaction. - Text-based user interfaces are user interfaces which output a text. TUIs can either contain a command-line interface or a text-based WIMP environment. - Voice user interfaces, which accept input and provide output by generating voice prompts. The user input is made by pressing keys or buttons, or responding verbally to the interface. - Natural-language interfaces – Used for search engines and on webpages. User types in a question and waits for a response. - Zero-input interfaces get inputs from a set of sensors instead of querying the user with input dialogs. - Zooming user interfaces are graphical user interfaces in which information objects are represented at different levels of scale and detail, and where the user can change the scale of the viewed area in order to show more detail. 
References
- Griffin, Ben; Baston, Laurel. "Interfaces" (Presentation): 5. Retrieved 7 June 2014. "The user interface of a mechanical system, a vehicle or an industrial installation is sometimes referred to as the human-machine interface (HMI)."
- "User Interface Design and Ergonomics" (PDF). Course CIT 811. National Open University of Nigeria, School of Science and Technology: 19. Retrieved 7 June 2014. "In practice, the abbreviation MMI is still frequently used although some may claim that MMI stands for something different now."
- "Introduction Section". Recent Advances in Business Administration. [S.l.]: WSEAS. 2010. p. 190. ISBN 978-960-474-161-8. "Other terms used are operator interface console (OIC) and operator interface terminal (OIT)."
- Cipriani, Christian; Segil, Jacob; Birdwell, Jay; Weir, Richard. "Dexterous control of a prosthetic hand using fine-wire intramuscular electrodes in targeted extrinsic muscles". IEEE Transactions on Neural Systems and Rehabilitation Engineering: 1–1. ISSN 1534-4320. doi:10.1109/TNSRE.2014.2301234. "Neural co-activations are present that in turn generate significant EMG levels and hence unintended movements in the case of the present human machine interface (HMI)."
- Citi, Luca (2009). "Development of a neural interface for the control of a robotic hand" (PDF). Scuola Superiore Sant'Anna, Pisa, Italy: IMT Institute for Advanced Studies Lucca: 5. Retrieved 7 June 2014.
- Jordan, Joel. "Gaze Direction Analysis for the Investigation of Presence in Immersive Virtual Environments" (PhD thesis). University of London, Department of Computer Science: 5. Retrieved 7 June 2014. "The aim of this thesis is to investigate the idea that the direction of gaze may be used as a device to detect a sense-of-presence in Immersive Virtual Environments (IVE) in some contexts."
- Ravi (August 2009). "Introduction of HMI". Retrieved 7 June 2014. "In some circumstance computers might observe the user, and react according to their actions without specific commands. A means of tracking parts of the body is required, and sensors noting the position of the head, direction of gaze and so on have been used experimentally. This is particularly relevant to immersive interfaces."
- "HMI Guide".
- Richard, Stéphane. "Text User Interface Development Series Part One - T.U.I. Basics". Retrieved 13 June 2014.
- McCown, Frank. "History of the Graphical User Interface (GUI)". Harding University.
- Raymond, Eric Steven (2003). "Chapter 11". The Art of Unix Programming. Thyrsus Enterprises. Retrieved 13 June 2014.
- C. A. D'H Gough; R. Green; M. Billinghurst. "Accounting for User Familiarity in User Interfaces" (PDF). Retrieved 13 June 2014.
- Sweet, David (October 2001). "9 - Constructing A Responsive User Interface". KDE 2.0 Development. Sams Publishing. Retrieved 13 June 2014.
- Satzinger, John W.; Olfman, Lorne (March 1998). "User interface consistency across end-user applications: the effects on mental models". Journal of Management Information Systems: Managing virtual workplaces and teleworking with information technology. Armonk, NY. 14 (4): 167–193.
- Raskin, Jef (2000). The Humane Interface: New Directions for Designing Interactive Systems. Reading, Mass.: Addison Wesley. ISBN 0-201-37937-6.
- Udell, John (9 May 2003). "Interfaces are habit-forming". InfoWorld. Retrieved 3 April 2017.
- Lamb, Gordana (2001). "Improve Your UI Design Process with Object-Oriented Techniques". Visual Basic Developer magazine. "Table 1. Differences between the traditional application-oriented and object-oriented approaches to UI design."
- Errett, Joshua. "As app fatigue sets in, Toronto engineers move on to chatbots". CBC. CBC/Radio-Canada. Retrieved July 4, 2016.
- appleinsider.com
AINSLEY f & m Scottish, English (Modern) From a surname which was from a place name: either Annesley in Nottinghamshire or Ansley in Warwickshire. The place names themselves derive from Old English anne "alone, solitary" or ansetl "hermitage" and leah ALAN m English, Scottish, Breton, French The meaning of this name is not known for certain. It was used in Brittany at least as early as the 6th century, and it possibly means either "little rock" or "handsome" in Breton. Alternatively, it may derive from the tribal name of the Alans, an Iranian people who migrated into Europe in the 4th and 5th centuries.... [more] ALLAN m English, Scottish Variant of ALAN . The American author Edgar Allan Poe (1809-1849) got his middle name from the surname of the parents who adopted him. ALLEN m English, Scottish Variant of ALAN . A famous bearer of this name was Allen Ginsberg (1926-1997), an American beat poet. Another is the American film director and actor Woody Allen (1935-), who took the stage name Allen from his real first name. ALPIN m Scottish Anglicized form of the Gaelic name Ailpein , possibly derived from a Pictish word meaning "white". This was the name of two kings of Dál Riata and two kings of the Picts in the 8th and 9th centuries. AODH m Irish, Scottish, Irish Mythology From the old Irish name Áed , which meant "fire". This was a very popular name in early Ireland, being borne by numerous figures in Irish mythology and several high kings. It has been traditionally Anglicized as Hugh AODHÁN m Irish, Scottish, Irish Mythology From the old Irish name Áedán , a diminutive of Áed ). This was the name of an Irish monk and saint of the 7th century. It was also borne by several characters in Irish mythology. AONGHUS m Irish, Scottish, Irish Mythology Possibly meaning "one strength" derived from Irish óen "one" and gus "force, strength, energy". Aonghus (sometimes surnamed Mac Og meaning "young son") was the Irish god of love and youth. The name was also borne by an 8th-century Pictish king and several Irish kings. ARCHIBALD m Scottish, English Derived from the Germanic elements ercan "genuine" and bald "bold". The first element was altered due to the influence of Greek names beginning with the element αρχος (archos) meaning "master". The Normans brought this name to England. It first became common in Scotland in the Middle Ages. ARRAN m Scottish From the name of an island off the west coast of Scotland in the Firth of Clyde. ATHOL m & f Scottish From the name of a district in Scotland which was derived from Gaelic ath Fodhla BARCLAY m Scottish, English (Rare) From a Scottish surname which was likely derived from the English place name Berkeley , meaning "birch wood" in Old English. BLAIR m & f Scottish, English From a Scottish surname which is derived from Gaelic blár meaning "plain, field, battlefield". BOYD m Scottish, English From a Scottish surname which was possibly derived from the name of the island of Bute. BRUCE m Scottish, English From a Scottish surname, of Norman origin, which probably originally referred to the town of Brix in France. The surname was borne by Robert the Bruce, a Scottish hero of the 14th century who achieved independence from England and became the king of Scotland. It has been in use as a given name in the English-speaking world since the 19th century. A notable bearer is the American musician Bruce Springsteen (1949-). 
CAMERON m & f Scottish, English From a Scottish surname meaning "crooked nose" from Gaelic cam "crooked" and sròn CAMPBELL m Scottish From a Scottish surname meaning "crooked mouth" from Gaelic cam "crooked" and béul CARSON m & f Scottish, English From a Scottish surname of uncertain meaning. A famous bearer of the surname was the American scout Kit Carson (1809-1868). CINÁED m Scottish, Irish Means "born of fire" in Gaelic. This was the name of the first king of the Scots and Picts (9th century). It is often Anglicized as Kenneth CONALL m Irish, Scottish, Irish Mythology Means "strong wolf" in Gaelic. This is the name of several characters in Irish legend including the hero Conall Cernach ("Conall of the victories"), a member of the Red Branch of Ulster, who avenged Cúchulainn 's death by killing Lugaid. CRAIG m Scottish, English From a Scottish surname which was derived from Gaelic creag meaning "crag" or "rocks", originally indicating a person who lived near a crag. DAVID m English, Hebrew, French, Scottish, Spanish, Portuguese, German, Swedish, Norwegian, Danish, Dutch, Czech, Slovene, Russian, Croatian, Serbian, Macedonian, Romanian, Biblical, Biblical Latin From the Hebrew name דָּוִד (Dawid) , which was probably derived from Hebrew דוד (dwd) meaning "beloved". David was the second and greatest of the kings of Israel, ruling in the 10th century BC. Several stories about him are told in the Old Testament, including his defeat of Goliath , a giant Philistine. According to the New Testament, Jesus was descended from him.... [more] DONALD m Scottish, English From the Gaelic name Domhnall which means "ruler of the world", composed of the old Celtic elements dumno "world" and val "rule". This was the name of two 9th-century kings of the Scots and Picts. It has traditionally been very popular in Scotland, and during the 20th century it became common in the rest of the English-speaking world. This is the name of one of Walt Disney's most popular cartoon characters, Donald Duck. It was also borne by Australian cricket player Donald Bradman (1908-2001). DOUGAL m Scottish, Irish Anglicized form of the Gaelic name Dubhghall , which meant "dark stranger" from dubh "dark" and gall DOUGLAS m Scottish, English Anglicized form of the Scottish surname Dubhghlas , meaning "dark river" from Gaelic dubh "dark" and glais "water, river". Douglas was originally a river name, which then became a Scottish clan name (belonging to a powerful line of Scottish earls). It has been used as a given name since the 16th century. DUNCAN m Scottish, English Anglicized form of the Gaelic name Donnchadh meaning "brown warrior", derived from Gaelic donn "brown" and cath "warrior". This was the name of two kings of Scotland, including the one who was featured in Shakespeare's play 'Macbeth' (1606). EOGHAN m Irish, Scottish, Irish Mythology Possibly means "born from the yew tree" in Irish, though it is possibly derived from EUGENE . It was borne by several legendary or semi-legendary Irish figures, including a son of Niall of the Nine Hostages. ERSKINE m Scottish, Irish, English (Rare) From a surname which was originally derived from the name of a Scottish town meaning "projecting height" in Gaelic. A famous bearer of the name was the Irish novelist and nationalist Erskine Childers (1870-1922). FEARGHAS m Irish, Scottish, Irish Mythology Means "man of vigour", derived from the Gaelic elements fear "man" and gus "vigour". This was the name of several characters in Irish legend including the Ulster hero Fearghas mac Róich. 
FIFE m Scottish From a Scottish place name which was formerly the name of a kingdom in Scotland. It is said to be named for the legendary Pictish hero Fib. FINGAL m Scottish From Scottish Gaelic Fionnghall meaning "white stranger", derived from fionn "white, fair" and gall "stranger". This was the name of the hero in James Macpherson's epic poem 'Fingal' (1762), which he claimed to have based on early Gaelic legends about Fionn FORBES m Scottish From a surname which was originally taken from a Scottish place name meaning "field" in Gaelic. FRASER m Scottish, English (Rare) From a Scottish surname which is of unknown meaning. A famous bearer of the surname was Simon Fraser (1776-1862), a Canadian explorer. GAVIN m English, Scottish Medieval form of GAWAIN . Though it died out in England, it was reintroduced from Scotland in the 20th century. GILCHRIST m Scottish Derived from the Gaelic phrase giolla Chríost meaning "servant of Christ". GILLESPIE m Scottish Anglicized form of Scottish Gille Easbaig or Irish Giolla Easpuig both meaning "servant of the bishop". GILROY m Irish, Scottish From an Irish surname, either Mac Giolla Ruaidh , which means "son of the red-haired servant", or Mac Giolla Rí , which means "son of the king's servant". GLENN m Scottish, English From a Scottish surname which was derived from Gaelic gleann "valley". A famous bearer of the surname is American astronaut John Glenn (1921-). GORDON m Scottish, English From a Scottish surname which was originally derived from a place name meaning "spacious fort". It was originally used in honour of Charles George Gordon (1833-1885), a British general who died defending the city of Khartoum in Sudan. GRAHAM m Scottish, English From a Scottish surname, originally derived from the English place name Grantham , which probably meant "gravelly homestead" in Old English. The surname was first taken to Scotland in the 12th century by the Norman baron William de Graham. A famous bearer was Alexander Graham Bell (1847-1922), the Scottish-Canadian-American inventor who devised the telephone. GRANT m English, Scottish From an English and Scottish surname which was derived from Norman French grand meaning "great, large". A famous bearer of the surname was Ulysses Grant (1822-1885), the commander of the Union forces during the American Civil War who later served as president. In America the name has often been given in his honour. GREGOR m German, Scottish, Slovak, Slovene German, Scottish, Slovak and Slovene form of GREGORY . A famous bearer was Gregor Mendel (1822-1884), a Czech monk and scientist who did experiments in genetics. IRVING m English, Scottish, Jewish From a Scottish surname which was in turn derived from a Scottish place name meaning "green water". Historically this name has been relatively common among Jews, who have used it as an American-sounding form of Hebrew names beginning with I such as Isaac . A famous bearer was the Russian-American songwriter and lyricist Irving Berlin (1888-1989), whose birth name was Israel Beilin. ISLAY m Scottish From the name of the island of Islay, which lies off of the west coast of Scotland. IVOR m Irish, Scottish, Welsh, English (British) From the Old Norse name Ívarr , which was derived from the elements yr "yew, bow" and arr "warrior". During the Middle Ages it was brought to Britain by Scandinavian settlers and invaders, and it was adopted in Ireland, Scotland and Wales. JAMIE m & f Scottish, English Originally a Lowland Scots diminutive of JAMES . 
Since the late 19th century it has also been used as a feminine form. KEITH m English, Scottish From a Scottish surname which was originally derived from a place name, itself probably derived from the Brythonic element cet meaning "wood". This was the surname of a long line of Scottish nobles. It has been used as a given name since the 19th century. KENNETH m Scottish, English, Swedish, Norwegian, Danish Anglicized form of both COINNEACH and CINÁED. This name was borne by the Scottish king Kenneth (Cináed) mac Alpin, who united the Scots and Picts in the 9th century. It was popularized outside of Scotland by Sir Walter Scott, who used it for the hero in his novel 'The Talisman' (1825). A famous bearer was the British novelist Kenneth Grahame (1859-1932), who wrote 'The Wind in the Willows'. KENTIGERN m Scottish Possibly means "chief lord" in Gaelic. This was the name of a 6th-century saint from Glasgow. KERR m Scottish, English (Rare) From a Scottish surname which was derived from a place name meaning "rough wet ground" in Old Norse. LACHLAN m Scottish, English (Australian) Originally a Scottish nickname for a person who was from Norway. In Scotland, Norway was known as the "land of the lochs", or Lochlann. LENNON m Scottish, English (Rare) Anglicized form of the Irish surname Ó Leannáin, which means "descendant of Leannán". The name Leannán means "lover" in Gaelic. This surname was borne by musician John Lennon (1940-1980), a member of the Beatles. LENNOX m Scottish, English (Rare) From a Scottish surname which was derived from the name of a district in Scotland. The district, called Leamhnachd in Gaelic, possibly means "place of elms". LINDSAY f & m English, Scottish From an English and Scottish surname which was originally derived from the name of the region Lindsey, which means "LINCOLN island" in Old English. As a given name it was typically masculine until the 1960s (in Britain) and 1970s (in America) when it became popular for girls, probably due to its similarity to Linda and because of American actress Lindsay Wagner (1949-). LOGAN m & f Scottish, English From a surname which was originally derived from a Scottish place name meaning "little hollow" in Scottish Gaelic. MALCOLM m Scottish, English From Scottish Máel Coluim which means "disciple of Saint COLUMBA". This was the name of four kings of Scotland starting in the 10th century, including Malcolm III, who became king after killing Macbeth, the usurper who had murdered his father. The character Malcolm in Shakespeare's tragedy 'Macbeth' (1606) is based on him. Another famous bearer was Malcolm X (1925-1965), an American civil rights leader. MONROE m Scottish, English From a Scottish surname meaning "from the mouth of the Roe". The Roe is a river in Ireland. Two famous bearers of the surname were American president James Monroe (1758-1831) and American actress Marilyn Monroe (1926-1962). MUIR m Scottish From a surname which was originally taken from a Scottish place name meaning "moor, fen". It also means "sea" in Scottish Gaelic. MUNGO m Scottish Possibly derived from Welsh mwyn "gentle, kind". This was a nickname of the 6th-century Saint Kentigern. MURRAY m Scottish, English From a Scottish surname which was derived from the region in Scotland called Moray, meaning "seaboard settlement". NAOISE m Irish, Scottish, Irish Mythology Meaning unknown, presumably of Gaelic origin. In Irish legend he was the young man who eloped with Deirdre, the beloved of Conchobhar the king of Ulster.
Conchobhar eventually succeeded in having Naoise murdered, which caused Deirdre to die of grief. NAOMHÁN m Irish, Scottish Means "little saint", derived from Irish naomh "saint" combined with a diminutive suffix. NEIL m Irish, Scottish, English From the Gaelic name Niall , which is of disputed origin, possibly meaning "champion" or "cloud". This was the name of a semi-legendary 4th-century Irish king, Niall of the Nine Hostages.... [more] NINIAN m Scottish, Irish, Ancient Celtic Meaning unknown. It appears in a Latinized form Niniavus , which could be from the Welsh name NYNNIAW . This was the name of a 5th-century British saint who was apparently responsible for many miracles and cures. He is known as the Apostle to the Picts. RANULF m Scottish Scottish form of the Old Norse name Randúlfr , a cognate of RANDOLF . Scandinavian settlers and invaders introduced this name to Scotland in the Middle Ages. RODERICK m English, Scottish, Welsh Means "famous power" from the Germanic elements hrod "fame" and ric "power". This name was in use among the Visigoths; it was borne by their last king (also known as Rodrigo), who died fighting the Muslim invaders of Spain in the 8th century. It also had cognates in Old Norse and West Germanic, and Scandinavian settlers and Normans introduced it to England, though it died out after the Middle Ages. It was revived in the English-speaking world by Sir Walter Scott's poem 'The Vision of Don Roderick' (1811). RONALD m Scottish, English Scottish form of RAGNVALDR , a name introduced to Scotland by Scandinavian settlers and invaders. It became popular outside Scotland during the 20th century. A famous bearer was American actor and president Ronald Reagan (1911-2004). ROSS m Scottish, English From a Scottish and English surname which originally indicated a person from a place called Ross (such as the region of Ross in northern Scotland), derived from Gaelic ros meaning "promontory, headland". A famous bearer of the surname was Sir James Clark Ross (1800-1862), an Antarctic explorer. ROY m Scottish, English, Dutch Anglicized form of RUADH . A notable bearer was the Scottish outlaw and folk hero Rob Roy (1671-1734). It is often associated with French roi RUADH m Irish, Scottish Gaelic byname meaning "red", often a nickname for one with red hair. This was the nickname of the Scottish outlaw Raibeart Ruadh MacGregor (1671-1734), known as Rob Roy in English. SCOTT m English, Scottish From an English and Scottish surname which referred to a person from Scotland or a person who spoke Scottish Gaelic. It is derived from Latin Scoti meaning "Gaelic speaker", with the ultimate origin uncertain. SOMERLED m Scottish Anglicized form of the Old Norse name Somarliðr meaning "summer traveller". This was the name of a 12th-century Scottish warlord who created a kingdom on the Scottish islands. STUART m English, Scottish From an occupational surname originally belonging to a person who was a steward. It is ultimately derived from Old English stig "house" and weard "guard". As a given name, it arose in 19th-century Scotland in honour of the Stuart royal family, which produced several kings and queens of Scotland and Britain between the 14th and 18th centuries. TADHG m Irish, Scottish Means "poet" in Irish. This was the name of an 11th-century king of Connacht. TAVISH m Scottish Anglicized form of Thàmhais , vocative case of TÀMHAS . Alternatively it could be taken from the Scottish surname MacTavish , Anglicized form of Mac Tàmhais , meaning "son of Thomas". 
WALLACE m English, Scottish From a Scottish and English surname which originally meant "Welsh" or "foreigner" in Norman French. It was first used as given name in honour of Sir William Wallace, the Scottish hero who led a rebellion to expel the English invaders from Scotland in the 13th century.
People were visiting the North Wildwood area long before the city itself actually existed. The first of these visitors, as was the case with many other areas along the South Jersey Shore, were the Lenni-Lenape Indians. This Algonquian-speaking tribe likely visited Five Mile Beach during the summer, and forged a number of trails in and around the island. One trail at the north end of Five Mile Beach was a continuation of a mainland trail, while the other intersected the first in the middle of the island. The place where these two trails intersected would be the future site of the Rio Grande Bridge. Robert Juet is credited as being the first European to visit and write about Five Mile Beach, in 1609. Sailing with Henry Hudson, an English navigator, Juet arrived in Delaware Bay on August 28th. The two explorers were searching for the Northwest Passage, the oft-sought-after but never-found trading route that would provide direct access to eastern Asia. Juet and Hudson stopped briefly in the Bay before turning around and continuing their journey. As they sailed north up the East Coast, Juet looked to the shore and saw Five Mile Beach for the first time. He described the island in his journal as "a pleasant land to see." For the next 250 years, before permanent settlements were established, farmers used the Wildwood area as a grazing point for their livestock. These farmers would ferry their animals across from the mainland on flatboats, and leave them roaming free to graze. The Hereford Inlet, where North Wildwood currently stands, was visited very frequently throughout the 17th and 18th centuries. Whalers were the first mariners to visit the Inlet on a regular basis. They would often drag their catches into the Inlet to butcher them, while taking refuge from the ocean. Fishermen were also frequent visitors to the Inlet, which was very attractive because it lay approximately midway between Delaware Bay in the south and Great Egg Harbour in the north. The fishermen, like the whalers who had come before, took refuge in the Inlet from the unforgiving Atlantic Ocean. These fishermen also established the first permanent settlement on Five Mile Beach: the small fishing village of Anglesea. The early residents of Anglesea would follow the ancient Indian trails to the edge of the mainland, and would then travel to Anglesea by boat. This small community by the sea would later be renamed North Wildwood. An early problem for the small village was that the waters around the Hereford Inlet were extremely hazardous for ships seeking to visit the village, owing to shifting sandbars and very strong currents. A navigation station was established on the southern bank of the inlet in 1849, as the United States Lifesaving Service responded to a number of shipwrecks and groundings around the Inlet. The first navigation station was insufficient to protect the ever-increasing shipping traffic, however, and a larger station was built in 1871. This station was also unable to stop the groundings and shipwrecks, so later in 1871 the United States Congress legislated the construction of a full lighthouse at the Hereford Inlet. The land for the lighthouse was purchased in 1873, and construction began later that year. By 1874 the lighthouse was complete, and it was lit for the first time. The tower stood 50 feet tall, and had a light that could be seen for 14 nautical miles from the shore.
With the navigation of the waters around the Hereford Inlet now much safer, more settlers began to arrive in Anglesea. John Marche was the Hereford Inlet Lighthouse's first keeper. Unfortunately, a boating accident killed him three months after he took up his post in 1874. John Nickerson took over as keeper after the accident, until a permanent keeper could arrive. Captain Freeling H. Hewitt, a veteran of the Civil War, became the permanent lighthouse keeper in 1878, and would tend the lighthouse until 1918. Hewitt also conducted a Baptist ceremony that was the first formal religious service ever held on Five Mile Beach Island. Many families from Anglesea attended, and the lighthouse would remain a place of worship on the island until the first church was built. 1884 saw the construction of a rail line running from Cape May Court House to Anglesea, making the small community much more accessible from the mainland. That year, a bridge was constructed on the old Indian trail where Rio Grande Avenue currently sits. This bridge would be replaced twice: the first time after it was destroyed by fire in 1885, and the second when it was demolished in favor of a bridge that could bear the weight of automobiles. Anglesea officially became a borough on June 2nd, 1885. In 1906, a vote was held to determine whether the borough of Anglesea should be renamed. The vote was "yes", and Anglesea officially became "North Wildwood." It was hoped that the borough's new name would capitalize on the success of the City of Wildwood to the south, which was experiencing amazing growth and prosperity. In the early years of North Wildwood, the firehouse, City Hall, and the police station all shared the same building at 3rd and Central Avenue. The firemen occupied the first floor, the government the second, with the police station on top. 1914 saw the establishment of the North Wildwood Beach Patrol. It was created by Mayor Harry Hoffman in July of 1914, and is still in operation today. By 1917, only eleven years after taking its new name, North Wildwood had grown large enough to officially become the City of North Wildwood. In the mid-1920s the City created its first paid firefighting division to better protect its population and buildings. These paid firemen would work alongside part-paid drivers. Woodrow Hall became the first Fire Chief in 1925. The fire department was expanded to operate out of two buildings: the three-story structure that it shared with City Hall and the police, and another building at 18th and New Jersey Ave. There was always at least one firefighter on duty at each of these buildings, and some firefighters lived permanently in the firehouses. Ironically, a fire would later destroy the top floor of the 3rd and Central Ave building, making it a two-story building used exclusively by the fire department. A new station was built at 15th and Central Ave in 1927. North Wildwood, including the Hereford Inlet Lighthouse, was darkened during the Second World War to protect the coast from potential attacks by enemy submarines. After the war, North Wildwood, along with its sister city to the south, experienced a large increase in tourism due to the optimism that pervaded America in the post-war years. The public wanted to hop in their cars and explore. They were fascinated with exotic, faraway countries, and had more disposable income than ever before. From this optimistic attitude sprang Wildwood's Doo Wop.
"Doo Wop" was a term coined in 1990 by the MAC to describe the culture and architecture of the Wildwoods in the 1950s and 1960s. Doo Wop architecture consisted of flashy, space-age building designs and huge, bright neon signs. As Doo Wop began to fall out of style, North Wildwood's tourism declined as well. The colourful and flashy hotels and restaurants fell into disrepair. There were still enough travellers visiting North Wildwood to keep the hotels open, but not enough to justify renovating or restoring them. The Hereford Inlet Lighthouse was decommissioned in 1964 after 90 years of service. It was replaced by an automated marine beacon, and sat abandoned and in disrepair for the next 18 years. 1982 saw the City of North Wildwood sign a lease to gain stewardship of the Hereford Inlet Lighthouse, thanks to the tireless efforts of North Wildwood's Mayor Anthony Catanoso and his wife. The restoration of the lighthouse began immediately, with many local volunteers chipping in to help. Within 10 months, the Hereford Inlet Lighthouse had been restored to the point where it could receive tourists. However, North Wildwood's tourism industry remained in a depressed state until 1997, when the Doo Wop Preservation League was formed with the intention of restoring the Doo Wop buildings to their former glory and using the Doo Wop history of the Wildwoods to promote the area. The League has been very successful, and today North Wildwood is seeing a resurgence in tourism. Many of the city's historic Doo Wop hotels and restaurants have been renovated and restored. In addition, the city has opened new, upscale restaurants and has succeeded, along with Wildwood City, in creating a bustling entertainment district. Millions of people visit North Wildwood and the rest of the Greater Wildwoods each year to walk along the boardwalk, enjoy the rides, or take in the 1950s ambiance created by the fantastic Doo Wop architecture.
Gene Nomenclature System for Rice
McCouch, S.R. & CGSNL (Committee on Gene Symbolization, Nomenclature and Linkage, Rice Genetics Cooperative). Rice (2008) 1: 72. doi:10.1007/s12284-008-9004-9
The Committee on Gene Symbolization, Nomenclature and Linkage (CGSNL) of the Rice Genetics Cooperative has revised the gene nomenclature system for rice (Oryza) to take advantage of the completion of the rice genome sequence and the emergence of new methods for detecting, characterizing, and describing genes in the biological community. This paper outlines a set of standard procedures for describing genes based on DNA, RNA, and protein sequence information that have been annotated and mapped on the sequenced genome assemblies, as well as those determined by biochemical characterization and/or phenotype characterization by way of forward genetics. With these revisions, we enhance the potential for structural, functional, and evolutionary comparisons across organisms and seek to harmonize the rice gene nomenclature system with that of other model organisms. Newly identified rice genes can now be registered on-line at http://shigen.lab.nig.ac.jp/rice/oryzabase_submission/gene_nomenclature/.
Keywords: Oryza sativa, genome sequencing, gene symbolization
The biological community is moving towards a universal system for the naming of genes. Emerging gene nomenclature systems have been described for a number of plants such as Arabidopsis thaliana, tomato, maize, and Medicago, as well as for Saccharomyces cerevisiae and for metazoans such as mouse and humans. The adoption of a common genetic language across diverse organisms is a great advantage for scientific communication and facilitates structural, functional, and evolutionary comparisons of genes and genetic variation among living things. With increasing emphasis on the molecular and biochemical nature of genes and gene products, it is important that the gene nomenclature system for rice (Oryza) reflect knowledge about the biochemical features of a specific gene, gene model, or gene family as well as about the phenotypic consequences of a particular allele in a given genetic background. The current rules for gene names and gene symbols in rice are based on recommendations from the Committee on Gene Symbolization, Nomenclature and Linkage (CGSNL) of the Rice Genetics Cooperative. Most of the early gene names and symbols are descriptive of visible phenotypes that provided the earliest evidence for the existence of a gene, and these names and symbols are widely used by the rice research community. With the completion of the rice genome sequence and the emergence of new methods for detecting, characterizing, and describing genes, an expanded nomenclature system is needed that outlines a set of standard procedures for describing genes based on biochemical characterization and on DNA, RNA, and protein sequence analysis, in addition to the rules previously outlined for naming genes associated with phenotypic variants. The focus of this publication is to summarize the rules for gene nomenclature in rice and, so far as possible, to harmonize the rice gene nomenclature system with that of other model organisms.
We describe a set of rules for naming chromosomes and identifying loci, genes, and alleles based on biological function, mutant phenotype, and sequence identity, and suggest ways of dealing with aliases (synonyms), sequence variants, and loci identified by multiple annotations of the genome assemblies available from various sources. The nomenclature rules are based on the previous rice gene nomenclature system , but they have been expanded to accommodate sequence information based on the recommendations by members of the International Rice Genome Sequencing Project (IRGSP) as summarized at two Rice Annotation Project (RAP) meetings, namely RAP-1, held in Tsukuba, Japan in December 2004 and RAP-2, held in Manila, Philippines in December 2005. These rules have also been approved by the Sub-committee on CGSNL of the Rice Genetics Cooperative (http://www.shigen.nig.ac.jp/rice/oryzabase/rgn/office.jsp). Though studies on rice genetics have been documented for over a century, the recent advances in large-scale mutagenesis experiments and sequencing of expressed sequence tags (ESTs), full-length cDNAs, and both the Oryza sativa ssp. japonica and O. sativa ssp. indica genomes of rice (O. sativa) have significantly added to our understanding of gene networks, gene function, and allelic and sequence diversity. Therefore, the nomenclature practice summarized in this report is designed to outline the rules for naming genes and alleles based on biological function and to facilitate the cross-referencing of gene annotations provided by multiple sequencing and annotation projects, namely, the IRGSP , RAP , The Institute of Genomic Research (TIGR) , Munich Information Center for Protein Sequences (MIPS) , National Center for Biotechnology Information (NCBI) , Syngenta , and Beijing Genomics Institute (BGI) and to provide coherence for annotation of gene variants coming from the sequencing of different germplasm accessions [1, 15]. Genome assemblies and systematic locus identifier (systematic_locus_ID) A single rice species may support multiple genetic, physical, and sequence maps, gene annotations, and genome assemblies. Currently, the O. sativa genome is represented by the genome sequence of the O. sativa ssp. japonica cultivar, cv. Nipponbare, which was sequenced by the IRGSP (International Rice Genome Sequencing Project) , and by the O. sativa ssp. indica cultivar, cv. 93-11, which was sequenced by the BGI . The Nipponbare sequence has been annotated by several groups, including RAP [8, 20], TIGR , NCBI-GenBank , MIPS , and Syngenta , while annotation of the O. sativa ssp. indica sequence, cv. 93-11, has been provided almost exclusively by the BGI . In the case of Nipponbare, the same raw sequence generated by the IRGSP has been independently assembled and annotated by both RAP and TIGR, and thus, the rice community currently manages three independent genome assemblies (two for cv. Nipponbare and one for cv. 93-11) for the species O. sativa. Each of these assemblies has an independently annotated set of loci representing gene models/transcription units anchored along pseudomolecules that differ in subtle ways from each other. A locus is defined as a position on the genome, and because each annotation group independently assigns locus identifiers (locus IDs) to all genes, transcripts, and proteins based on their position on the pseudomolecules, the same gene may have a different systematic_locus_ID, depending on the genome, the assembly, and the software used for annotation. 
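Because the same gene can carry a different systematic_locus_ID in each assembly and annotation build, curators often keep an explicit cross-reference record for every gene. The short Python sketch below shows one way such a record might be organized, using the SD1 identifiers cited later in this article; the dictionary layout and the helper name locus_id_for are illustrative assumptions rather than part of the CGSNL specification.

```python
# Illustrative cross-reference record linking one gene to the systematic
# locus IDs assigned by different annotation projects (values follow the
# SD1 example in this article; the structure itself is hypothetical).
SD1_XREF = {
    "gene_symbol": "SD1",
    "RAP": "Os01g0883800",      # O. sativa ssp. japonica cv. Nipponbare (RAP build)
    "TIGR": "LOC_Os01g66100",   # same genome, independent assembly/annotation
    "BGI": "OsIBCD004089",      # O. sativa ssp. indica cv. 93-11
}

def locus_id_for(xref: dict, annotation_project: str) -> str:
    """Return the systematic locus ID recorded for a given annotation project."""
    if annotation_project not in xref:
        raise ValueError(f"no locus ID recorded for {annotation_project!r}")
    return xref[annotation_project]

print(locus_id_for(SD1_XREF, "RAP"))  # Os01g0883800
```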
Specifications of the rules used by each annotation group to assign systematic_locus_IDs for nuclear genes/transcripts/proteins, organellar genes/transcripts/proteins, and transposable elements are available on the RAP database, the TIGR Osa1 database, and the BGI-RIS. Suggestions for assigning systematic_locus_IDs, citing examples from the RAP database, are provided towards the end of this article.
Rules for Classifying Sequenced Genes as Suggested by the CGSNL (Table 1):
- Category I, identical to a rice protein with known function: identity >= 98% and length coverage = 100% to a known rice protein [blastx]; receives the same, original gene name.
- Category II, similar to a known protein: identity >= 50% to a known protein [blastx]; receives "original gene name, putative".
- Category III, InterPro domain-containing protein: not in category I or II, but contains an InterPro domain; receives "InterPro name domain-containing protein".
- Category IV, conserved hypothetical protein: identity >= 50% and length coverage >= 50% to a hypothetical protein [blastx]; receives "conserved hypothetical protein".
- Category V, hypothetical protein: not in categories I to IV; receives "hypothetical protein".
Example of the SD1 Gene and Its Associations (Table 2):
- Synonyms: dee-geo-woo-gen dwarf, d49, d47, green revolution gene, C20OX2, GA C20oxidase2, GA20 oxidase, Gibberellin-20 oxidase
- RAPdb (build #4): Os01g0883800 (O. sativa ssp. japonica cv. Nipponbare)
- TIGR_osa1 (build #4): LOC_Os01g66100 (O. sativa ssp. japonica cv. Nipponbare)
- BGI: OsIBCD004089 (O. sativa ssp. indica cv. 93-11)
- Genetic maps: JRGP RFLP map: sd1, linkage group-1, 149.1–151 cM; rice morphological map: sd1, linkage group-1, 73 cM
- Literature: PMID 12077303, 11961544, 11939564, etc.
- GenBank accession numbers: AB077025, AF465255, AF465256, AY114310, U50333
- UniProt accession numbers: Q8RVF5, Q8S492, Q0JH50, Q2Z294
Accurate cross-referencing among rice pseudomolecules requires careful manual curation. Close paralogues, particularly when tandemly arrayed, and subtle differences in the structure of gene models across multiple assemblies of the same genome sequence present significant challenges. Researchers familiar with the particular characteristics of a new gene will be in the best position to provide accurate information about the gene and to ensure that the different rice genome annotations are progressively improved and updated.
Rules for chromosome names and gene symbolization in rice
The 12 nuclear chromosomes are assigned Arabic numerals based on the convention outlined by Khush and Kinoshita, and linkage groups have been assigned to chromosomes and named accordingly. For database purposes, the chromosomes will each be assigned a two-digit number starting with 01 up to 12, but single digits for chromosomes 1–9 are generally used in publications. Short and long arms are symbolized by "S" and "L", respectively (example: 1S, 1L), and it is acceptable to abbreviate them as chr. 1S and chr. 1L or Chr. 2S and Chr. 2L. While there are recognized inconsistencies in the current chromosome- and chromosome arm-naming conventions due to inaccuracies in the techniques previously used to estimate chromosome size and arm ratios, no revisions to the existing rice chromosome nomenclature have been suggested at this time. The circular chromosomes are assigned the English characters "Pt" for plastid or chloroplast, and "Mt" for mitochondria, respectively, instead of the Arabic numerals used for nuclear chromosomes. These chromosomes do not have centromeres, and thus, they will not be designated with short or long arms. It is acceptable to abbreviate them as chr. Pt or chr. Mt.
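As a rough illustration of the chromosome naming conventions just described (two-digit numbers 01 to 12 in database records, plain numerals in publications, "S"/"L" arm suffixes, and "Pt"/"Mt" for the organellar genomes), here is a minimal Python sketch; the function names are invented for the example.

```python
# Minimal sketch of the rice chromosome labelling conventions described above.
ORGANELLES = {"Pt", "Mt"}  # circular genomes: no arms, no numeric label

def database_chromosome_label(chromosome) -> str:
    """Two-digit label for database use, e.g. 1 -> '01'; 'Pt'/'Mt' pass through."""
    if chromosome in ORGANELLES:
        return chromosome
    number = int(chromosome)
    if 1 <= number <= 12:
        return f"{number:02d}"
    raise ValueError(f"unknown rice chromosome: {chromosome!r}")

def arm_label(chromosome: int, arm: str) -> str:
    """Short/long arm label used in publications, e.g. arm_label(1, 'L') -> '1L'."""
    if arm not in ("S", "L"):
        raise ValueError("arm must be 'S' (short) or 'L' (long)")
    return f"{chromosome}{arm}"

print(database_chromosome_label(1))     # 01
print(database_chromosome_label("Mt"))  # Mt
print(arm_label(1, "L"))                # 1L
```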
Gene full name The full name of a gene consists of a name and a number referred to as the locus designator. Gene full names are written in all capital, italicized letters, with a space between the name and the locus number (i.e., SHATTERING 1). The name should briefly describe the salient characteristics associated with a biochemical function of the gene product or the phenotype rendered due to mutant or allelic forms of this gene. The locus designator consists of one to three digits and differentiates a gene at a particular locus from genes at other loci that confer a similar function or phenotype. The number used as the locus designator indicates the order in which a particular gene or gene family member was identified and should not be confused with the systematic_locus_ID or the chromosome/linkage group on which it is found. By default, any gene name that does not have a locus designator is presumed to be the first such gene identified and will be assigned the locus designator, “1”, e.g., PURPLE NODE will be designated PURPLE NODE 1. This format of writing the gene full name in all capital letters is different from the previous rule where the gene full name was written in all lowercase, italicized letters, with a capital first letter indicating dominant behavior and a lowercase first letter indicating recessive behavior of the first allele identified. Please refer to the section “Dominant/recessive relationships” for further discussion of this point. In cases where a phenotype is mapped to a complex locus consisting of a tandem array of gene family members (for example, XANTHOMONAS ORYZAE PV. ORYZAE RESISTANCE 21, XA21, or SUBMERGENCE 1, SUB1), each gene in the array will be given an independent locus identifier (i.e., SUB1, SUB2, SUB3, etc.). If a gene is newly identified based on sequence information and that gene is later proven to be the same as a gene originally identified based on phenotype (such as those listed by ), the precedence rule applies and the gene full name will be that based on phenotype, with the other name used as a synonym. If there is redundancy, overlap, or confusion caused by use of the same name for different genes or different names for the same gene, the first published gene name will generally be retained and the CSGNL will work with the authors of publications to identify a new gene name and gene symbol for the subsequently reported gene(s) or loci. Genes identified in the plastid genome will be assigned names and symbols as described by the Uniprot , and genes identified in the mitochondrial genome will be assigned names and symbols as recommended by . Gene names are assigned based on experimental evidence about gene function or impact on phenotype. Experimental evidence may indicate a molecular function, a role in a biological process, or interaction with another gene or a phenotype associated with that gene (Fig. 1). Gene names based on computationally determined sequence similarity to a previously described homologue, orthologue, or paralogue, or based on the presence of a consensus feature such as an Interpro domain can only be assigned if there is substantial experimental evidence confirming the gene’s function. Participants at the Rice Annotation Project-1 (RAP-1) meeting held at Tsukuba, Japan, in December 2004 agreed that database curators would use a standard system of ‘evidence categories’ to indicate the type of evidence or published experimental support for the nuclear gene annotation that they provide. 
A description of these categories is summarized in Table 1. As determined by CGSNL, if the evidence is considered insufficient to substantiate assigning a gene function, the gene name field remains empty and the description/definition field will be utilized to describe what is known about the characteristics of the gene (Table 1). The gene symbol is an abbreviation of the gene full name and the gene symbol is written in italics. A gene symbol consists of two parts, namely, a gene class symbol consisting of two to five letters, and the corresponding locus designator consisting of one to three digits. The gene symbol should be derived from the full name of the gene discussed previously, and it is followed by the same locus designator assigned to the full gene name. Both parts of the gene symbol should be written together with no space, hyphen, or any other symbol between them (e.g., SH1, GLH2). Together, the gene class symbol and locus designator form a gene symbol that must be unique to the locus and the genome. Every effort should be made to assign gene symbols that are easily recognizable as corresponding to a gene full name. Where possible, existing symbols should be retained even if they do not fully conform to this rule, for example: C (CHROMOGEN FOR ANTHOCYANIN), A (ANTHOCYANIN ACTIVATOR), and WX (GLUTINOUS ENDOSPERM). For any gene symbol that does not have a locus designator, it is presumed that the first such gene identified has the locus designator, “1”, e.g., the previously identified gene, GLUTINOUS ENDOSPERM (WX) should be designated GLUTINOUS ENDOSPERM 1 (WX1). All new genes with similar characteristics will be assigned a new number as the new locus designator by the CGSNL, in order of discovery. The CSGNL will also make sure that previously identified gene symbols and newly identified genes that were not previously registered are assigned a unique gene symbol, thus avoiding conflicting names and symbols. The use of the suffix “(t)” and “*” to indicate a ‘tentative’ locus designation (when the allelic relationship between a newly described gene and a previously reported gene is not clear ) will be suspended and new genes will be assigned a new locus designation, under the assumption that they are new loci. If the new gene is later demonstrated to be allelic to a previously reported locus, the records of the two should be merged and the original gene symbol will be adopted by the precedence rule. The other symbol(s) will be cited as synonym(s). No previously assigned gene symbols will be deleted, thus avoiding confusion resulting from re-usage of the same symbol. Assigning a symbol to a gene should be consistent with that of the full gene name as described above. Authors who refer to specific rice genes of known function in their publications must cite the approved gene full name and symbol, if available, a ‘systematic locus ID’ from one of the genome annotation centers and, if possible, a GenBank accession number. Where complete information is not yet available, either the systematic locus_ID or the gene symbol can be used as a placeholder until additional experimental evidence is provided (Fig. 1). Gene names must not be assigned unless approved by the CGSNL. Use of species name in gene name and symbol The use of organism-specific prefixes such as “Os” (O. 
sativa) in the gene name and/or gene symbol may be useful in publications but will not be included in the official gene name because it is redundant with species information that is already associated with submitted/registered genes. Furthermore, it leads to a proliferation of gene names of the form Oryza sativa-X. The relationship between the gene and the organism will be clearly maintained in all genome and sequence databases. However, authors may append the organism-specific prefixes for clarity in publications to avoid repetition of the species name whenever a gene is referenced. In any case, the species symbol should not become part of the adopted gene symbol or gene full name. Note, however, that the symbol "Os" is allowed for use in the systematic locus ID, e.g., Os05g0000530, LOC_Os03g01590, and OsIBCD000082, which are assigned based on the systems adopted by RAP (http://rapdb.lab.nig.ac.jp/index.html), TIGR (http://www.tigr.org/tdb/e2k1/osa1/tigr_gene_nomenclature.shtml), and BGI-RIS (http://rise.genomics.org.cn/rice/index2.jsp), respectively.
Different alleles of the same gene are distinguished by adding a numerical suffix (or previously a letter), separated by a dash or hyphen, to the gene full name or the gene symbol, e.g., SHATTERING 1-1 (SH1-1); PGI1-1, PGI1-2. Historically, there are a few cases where a letter (t) or asterisk (*), rather than a number, was used to indicate an allele, and because these letter or symbol descriptions of allelic variants have become widely used and accepted in the rice genetics community, they will be retained as exceptions in publications and will be noted as synonyms in the database.
Example of a Gene Full Name and Symbol for Use in Publications (Table 3):
- Gene full name: NARROW LEAF 1
- Allele 1: narrow leaf 1-1
- Allele 2: Narrow leaf 1-2
- Sequence variant 1: NARROW LEAF 1-s1
- Sequence variant 2: NARROW LEAF 1-s2
Given that a gene is a DNA segment that has a known or predicted function/phenotype, once a gene has been named and located on a sequence map via a systematic locus_ID, it can also be represented by the group of alleles and sequence variants that consistently map to the same genetic locus. Molecular variants of genes identified by sequence alone in diverse plant material will be given a name, symbol, and accession identifier, and information about the sequence variant will be cross-referenced to specific information about the germplasm source (including the corresponding germplasm accession ID) from which the DNA/RNA material was isolated. However, sequence variants will not be considered "alleles" by the CGSNL until a molecular function or phenotype has been described for them and an allelism test has been performed. "Sequence variants" whose specific function is unknown will be distinguished from "alleles" by adding a suffix '-sX' to an allele name, where "s" means "sequenced" and "X" is a number that serves to identify a particular sequence variant. The name and symbol of a molecular variant will carry the name and symbol of the corresponding gene, similar to the convention for an allele, except that it carries the suffix described above and is written in all caps because no allelic behavior can be assigned to these sequenced variants (Table 3).
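The allele and sequence-variant suffix rules above lend themselves to simple string helpers. The Python sketch below assumes exactly the conventions described in this section, a hyphenated numeric suffix for alleles and an "-sX" suffix for sequence variants; the function names are invented for illustration.

```python
# Sketch of the allele and sequence-variant suffix conventions described above.

def allele_name(gene_full_name: str, allele_number: int) -> str:
    """e.g. allele_name('SHATTERING 1', 1) -> 'SHATTERING 1-1'"""
    return f"{gene_full_name}-{allele_number}"

def sequence_variant_name(gene_full_name: str, variant_number: int) -> str:
    """e.g. sequence_variant_name('NARROW LEAF 1', 1) -> 'NARROW LEAF 1-s1'"""
    return f"{gene_full_name}-s{variant_number}"

print(allele_name("SHATTERING 1", 1))             # SHATTERING 1-1
print(sequence_variant_name("NARROW LEAF 1", 2))  # NARROW LEAF 1-s2
```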
If a sequenced variant is later demonstrated to confer a specific novel phenotype or function, it will be assigned a new allele identifier or, alternatively, if a sequenced variant is demonstrated to be equivalent to a previously named allele corresponding to a known gene, it will be assigned an existing allele identifier, based on the precedence rule, with the other identifier retained as a synonym. An example of recommended designation of gene locus, full name, and allele is shown in Table 3. The germplasm name and its accession information, in which sequence variants are identified, are not recorded in the official name/symbol. This information should be recorded separately in the database so that it can be readily cross-referenced by the genetics community. Authors submitting information about sequence variants will be responsible for finding out if the newly sequenced form is the same as any previously reported sequence variant or allele. In publications, authors may choose to concatenate the allele name, sequence variant suffix, and the germplasm source to avoid undue repetition for the readers. Protein name and symbol The name of a protein encoded by a particular gene should be consistent with the gene full name in cases where the gene name is based on phenotype or molecular function (refer to the "Gene full name" section), except that the protein name is written using all upper case characters without italics. If, at a later stage, a gene and its corresponding protein product are determined to have a biochemically characterized molecular function, such as an enzyme or a structural component (subunit) of a macromolecular complex, the protein should be assigned a synonym consistent with the enzyme nomenclature recommended by the IUPAC Enzyme Commission or the macromolecule name adopted by the IUBMB. Because there may be several functional assignments for a given protein (i.e., based on a phenotypic assay, a biochemical assay, or a molecular function), there may be several synonyms for the protein name (and similarly, for the gene full name). The protein symbol should always be consistent with the adopted gene symbol, with the exception that protein symbols are written using all upper case characters without italics, followed by a space and the numeric locus designator. For example, the GLUTINOUS ENDOSPERM 1 (WX1) gene encodes the granule-bound starch synthase enzyme (EC 2.4.1.21). The protein name is GLUTINOUS ENDOSPERM 1 and the symbol is 'WX1'. The protein name(s) 'WAXY', 'WAXY 1', and GRANULE-BOUND STARCH SYNTHASE (GBSS) will be recorded as synonyms. If a name cannot be assigned based on phenotype, known biochemistry, or other experimental evidence supporting its function, a systematic locus identifier (described above) and a name consistent with the description in Table 1 must be used to describe the gene until its function can be confirmed. In cases where a post-translational modification, such as protein splicing, leads to formation of two or more protein molecules with different activities or functions, the spliced protein molecules will carry a protein name and symbol consistent with their molecular function or associated phenotype, and will carry the name and symbol from the primary molecule as synonyms. Molecular technology has identified sequences that bear striking similarity to structural gene sequences but are not transcribed. These sequences are termed pseudogenes.
In order to show the relatedness of pseudogenes to functional genes, pseudogenes will be identified with the gene symbol of the structural/functional gene, in italics, followed by a “.P” (symbol “period” and capital letter “P”) for pseudogene. This will replace the conventionally used Greek symbol for “psi” for pseudogene; an example is RPS14.P instead of RPS14.psi for pseudoribosomal protein S14. The same is suggested for pseudogenes identified in mitochondrial and plastid (chloroplast) genomes and examples are ACTB.P1 (ACTIN BETA PSEUDOGENE 1), ACTB.P2 (ACTIN BETA PSEUDOGENE 2), etc. Pseudogenes may be on different chromosomes or closely linked to the functional gene from which they derive their name and may occur in varying numbers. For nomenclature purposes, a pseudogene is a gene that has no function . If a pseudogene were later proven to transcribe and regulate the expression of another gene or for instance the transcribed mRNA were shown to have a function, the gene would have to be reclassified to another gene category such as fnRNA or potogene as described by . Due to the genetic variability inherent within a species, it is possible that a gene sequenced from one germplasm accession may not be mapped in either of the two fully sequenced genomes from O. sativa, due to insertion/deletion polymorphism and gene family expansion/contraction. Similarly, a gene identified by phenotype in a segregating population may not be present in one of the parental genomes. In such situations, even without the mapping information, a gene name and symbol can still be assigned to these allelic variants. When assigning a gene name to such unmapped loci, it is essential to confirm that there is valid experimental evidence supporting the existence and function of the gene. If a second instance of a similar unmapped sequenced gene occurs, the best reciprocal match approach should be applied to rigorously confirm whether it is, in fact, the same as the gene previously identified. In cases where a second instance of a phenotypically defined gene occurs, an allelism or complementation test will be considered essential evidence. If any of these evidences are missing, such a gene should be assigned a new gene name and symbol. In the mean time, the unique identifier assigned for a gene that is registered by the CGSNL, and if available the GenBank accession number, will serve as a placeholder. Quantitative trait loci (QTL) QTLs serve as placeholders for genes and contribute to the functional characterization of the genome. A QTL is defined as a region of the genome that is statistically associated with a measurable phenotype, generally with a quantitatively inherited trait. QTLs are identified by genetic mapping using association panels of segregating populations, and each QTL is defined by at least two, closely linked, mapped genetic markers that delimit a specific chromosomal region. Rice QTL nomenclature rules indicate that each QTL name should be italicized and start with a lower case letter “q” to indicate that it is a QTL, followed by a two to five letter standardized “trait name” (e.g. SW for Seed Width), a number designating the rice chromosome on which it occurs (1–12), a period (“.”), and a unique identifier to differentiate individual QTLs for the same trait that reside on the same chromosome (e.g. qSW5.1). When QTLs are entered into a genome database such as Gramene , they may be further assigned a standardized trait term from the Trait Ontology (TO; ; e.g. 
seed width, Accession #TO: 0000140) to facilitate querying, and may be assigned a new, unique identifier to avoid confusion between studies. In any case, this database assignment will be reflected as a synonym within the QTL record, and the original, published QTL name will be retained for search purposes. When the gene(s) actually responsible for the phenotypic variation associated with a QTL are identified for the first time based on their correspondence to the QTL, the gene full name may reflect the QTL designation (except for the elimination of the prefix 'q' and the use of italics (e.g., SW5)); however, if the gene underlying the QTL corresponds to a previously characterized and named gene, the precedence rule applies and the original gene name must be retained. Nonetheless, it is recommended that the relationship between genes and QTLs be noted in the list of synonyms associated with gene names.

Systematic locus ID assignment: a RAP database example

Systematic locus ID for nuclear genes

Systematic locus identifiers will be assigned to genes identified along the rice (O. sativa ssp. japonica, cv. Nipponbare) pseudomolecules (assembled chromosome contigs of the sequenced genome of O. sativa) based on automated gene prediction programs, orthologue alignments, and/or alignment of ESTs and full-length cDNAs, following the recommendations adopted for yeast S. cerevisiae and A. thaliana. Systematic identifiers are assigned to protein-coding genes (ORFs), RNA-coding genes (snoRNA, snRNA, rRNA, tRNAs, and microRNAs), and pseudogenes. A nuclear gene locus ID will consist of: (a) an uppercase letter "O" and lowercase letter "s" to indicate the rice species O. sativa; (b) a two-digit number to indicate a specific rice chromosome (01, 02, 03, ...12); (c) a letter "g" indicating that the locus ID is for a gene; (d) a seven-digit number (assuming there will be fewer than 10,000 genes per chromosome) indicating the sequential order of a gene along a chromosome in ascending order from the telomere of the short arm (north side) to the telomere of the long arm (south side). The numbers indicating gene order are independent of the polarity of the strand (+/– or Watson/Crick) and should be initially assigned in increments of 100, thus leaving room for expansion as new genes are discovered. For example, the third and fourth genes on rice chromosome 5 would be indicated as Os05g0000300 and Os05g0000400. If, during the course of the sequencing or based on new experimental evidence, a new gene is detected between the two already annotated genes, the new gene will be assigned a number between the two previously annotated genes, using the tenth number space. For example, a gene discovered between Os05g0000300 and Os05g0000400 would be assigned Os05g0000350, again leaving room for expansion. Despite the obvious benefits of this strategy, it is true that in some cases gene order within a particular chromosomal segment may not follow the ascending/descending order rule based on precedence of gene discovery; however, this shortcoming does not negate the value of the system as a whole. Systematic locus IDs will be assigned to all genes, including those that are known to have been introduced into the nuclear genome via an insertion of a portion of an organellar genome (plastid and/or mitochondria), recognizing that such genes will often turn out to be non-functional or pseudogenes.
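Because the nuclear locus ID is a fixed-width, rule-based string, it can be generated or validated mechanically. The following is a minimal illustrative sketch (the function names and regular expression are our own, not part of the CGSNL recommendations):

```python
import re

# Pattern for a nuclear systematic locus ID, e.g. Os05g0000300:
# "Os" + two-digit chromosome (01-12) + "g" + seven-digit ordinal.
LOCUS_RE = re.compile(r"^Os(0[1-9]|1[0-2])g(\d{7})$")

def make_locus_id(chromosome: int, ordinal: int) -> str:
    """Build a systematic locus ID for a gene slot on a nuclear chromosome.

    Ordinals are assigned in increments of 100, so gene 3 on chromosome 5
    occupies slot 300 and becomes Os05g0000300.
    """
    if not 1 <= chromosome <= 12:
        raise ValueError("rice nuclear chromosomes are numbered 1-12")
    return f"Os{chromosome:02d}g{ordinal:07d}"

def parse_locus_id(locus_id: str) -> tuple[int, int]:
    """Return (chromosome, ordinal), or raise ValueError if the ID is malformed."""
    m = LOCUS_RE.match(locus_id)
    if m is None:
        raise ValueError(f"not a nuclear systematic locus ID: {locus_id}")
    return int(m.group(1)), int(m.group(2))

# The third gene on chromosome 5, and a gene later found between slots 300 and 400:
assert make_locus_id(5, 300) == "Os05g0000300"
assert make_locus_id(5, 350) == "Os05g0000350"
assert parse_locus_id("Os05g0000400") == (5, 400)
```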
For regions where the genome sequence of rice is incomplete, such as the gaps in the telomeric and centromeric regions or the smaller interstitial gaps, it is suggested that a locus ID space be reserved. The locus ID space would accommodate 1,000 genes per gap in the telomere and centromere regions, and one gene per 2 kb interstitial gap. Note that the loci identified in the genomes of cultivars, subspecies, or species accessions of the genus Oryza other than O. sativa ssp. japonica cv. Nipponbare must be named in consultation with the CGSNL. Database curators and individual researchers must assign names and symbols only after registration with and approval by the CGSNL.

Systematic locus ID for organellar genes

The main mitochondrial and chloroplast chromosomes are circular (also called master circles) and do not have arms. Locus IDs for genes found on organellar chromosomes will use the symbols 'Mt' for mitochondrion and 'Pt' for plastid (chloroplast), respectively, instead of the chromosome number designations used for nuclear genes. These letters will be followed by a letter "g" indicating that the locus corresponds to a gene, followed by a seven-digit number (assuming there will be fewer than 10,000 genes per chromosome) indicating the sequential order of genes along an organellar chromosome, independent of the polarity of the strand, in ascending order from the first base pair of the completely sequenced molecule to the last base pair in the linearized molecule (as submitted by the author of the sequence to any of the reference sequence databases, namely NCBI-GenBank, DDBJ, or EMBL). For example, OsPtg0000100 indicates the first gene on the rice plastid genome. Looking at the GenBank entries for plastid genomes sequenced from O. sativa cv. Nipponbare, this would refer to the gene PSBA (82–1,143 bp), as referenced by GenBank entry NC_001320. In addition to the system for identifying loci found on master circles, there are genes found on plasmids, both linear and circular (also referred to as subgenomic circles), in the mitochondria, and these will be indicated by using a lower case letter, a–z (in the order of precedence by submission to GenBank), immediately following the organellar symbol, Mt or Pt. For example, OsMtag0000200 indicates gene 2 on the 2,135-bp mitochondrial plasmid B1 (GenBank accession NC_001751). The number series for genes on plasmids will start from the first base pair of the fully assembled, sequenced plasmid or subgenomic circle, as determined by the sequence submitted to GenBank, DDBJ, or EMBL.

Every known or predicted form of transcript of a gene will be assigned a systematic identifier that will be the same as the locus identifier except that the letter 'g' for gene will be replaced by the letter 't' for transcript, added as a suffix following the two-digit chromosome identifier. This naming convention will ensure consistency between the gene's locus ID and its transcript ID. For example, the transcript Os05t0000300 is transcribed by the locus Os05g0000300 representing gene 3 on chromosome 5. Sometimes the nascent transcript undergoes alternative splicing. In order to clearly identify the alternatively spliced forms of the transcripts, a two-digit suffix, separated by a dash, will be added to the systematic transcript ID of the gene (e.g., -01, -02, -03, ... -99, in order of discovery). By default, the transcript ID of the very first transcript (or the only transcript identified) will always have the number "-01" suffixed to the transcript ID.
For example, the transcript ID of the locus Os05g0000300, for which there are no known splice variants, will be Os05t0000300-01. If a later report suggests that the transcript from this locus undergoes alternative splicing, such that three alternative forms are created, and any one of the three forms matches the original transcript, it would retain the original transcript ID and two additional IDs would be generated, Os05t0000300-02 and Os05t0000300-03. Assigning the number series to the splice variants will depend on the precedence of identification, the submission to GenBank, or possibly the size of the cDNA. Any additional alternative forms are numbered sequentially. All the peptides deduced experimentally or computationally from a gene sequence/transcript will be assigned a systematic identifier that is the same as the transcript identifier, except that the letter 't' for transcript will be replaced by the letter 'p' for protein, thus assuring consistency with the gene's locus ID and its transcript ID. For instance, the protein Os05p0000300-01 is translated from transcript Os05t0000300-01, which is transcribed from locus Os05g0000300, which represents gene 3 on chromosome 5. In order to avoid conflicts with proteins deduced from alternatively spliced forms of the transcripts from a single locus, the protein ID must reflect the corresponding transcript from which it is deduced, except for the letter 't'.

Genes present on unanchored sequenced clones

For genes identified in unanchored BAC/PAC clones, continued use of the nomenclature system whereby the gene is sequentially designated by a numerical suffix following the BAC/PAC clone name assigned by the sequencing center (e.g., F23H14.13) is acceptable. The systematic locus ID nomenclature system outlined above will supersede the clone-based name once the sequence in the region is fully assembled and completed. In such cases, the earlier clone-based locus identifiers must become either the alternate ID or the gene synonym.

Adding, deleting, editing, merging, and splitting of loci

Editing a locus

Consistent use of a given locus identifier, full gene name, and gene symbol is suggested. Consistency can be maintained as long as there are no major changes in the gene model or function, particularly no changes that would lead to a change in the start position of the locus. For example, consistency of nomenclature is possible in cases where the gene encodes an ORF and the modifications in annotation change only the intron–exon boundaries or the strand identity, require the addition or deletion of exon(s) or intron(s), or change or modify the function or associated phenotype assigned to the locus. Similarly, in cases where updated annotation changes the definition of the ORF, the gene's full name, symbol, and the definition line of the GenBank/DDBJ/EMBL records should reflect the change in the molecule's structure or function, but in all of the above cases the locus ID remains the same.

Deleting a locus

Genes identified by computational methods alone may prove to be false positives when confirmed by experimental evidence, thus making it necessary to retire the locus. In such cases, all the records and corresponding identifiers should be preserved with a flag OBSOLETE and never DELETED from data repositories. The flag OBSOLETE ensures that the same identifiers are not used again for a new locus, thus avoiding a situation that would lead to confusion and, if required, makes it possible for an obsolete gene to still be referenced.
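The locus → transcript → protein relationships above are purely string transformations, which makes them easy to derive programmatically. A small illustrative sketch (the helper names are our own, not part of the standard):

```python
def transcript_id(locus_id: str, variant: int = 1) -> str:
    """Derive a transcript ID from a nuclear locus ID, e.g. Os05g0000300 -> Os05t0000300-01.

    The 'g' designator becomes 't', and a two-digit splice-variant suffix
    (-01 by default) is appended.
    """
    chrom, sep, ordinal = locus_id.partition("g")
    if not sep:
        raise ValueError(f"no gene designator 'g' in {locus_id!r}")
    return f"{chrom}t{ordinal}-{variant:02d}"

def protein_id(tx_id: str) -> str:
    """Derive a protein ID from a transcript ID by swapping the 't' designator for 'p'.

    This simple version assumes a nuclear ID (the first 't' is the designator);
    organellar IDs containing 'Mt'/'Pt' would need a stricter parse.
    """
    return tx_id.replace("t", "p", 1)

assert transcript_id("Os05g0000300") == "Os05t0000300-01"
assert transcript_id("Os05g0000300", variant=3) == "Os05t0000300-03"
assert protein_id("Os05t0000300-01") == "Os05p0000300-01"
```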
Splitting a locus

When it is determined that a locus identifier actually refers to more than one gene (e.g., two genes mistakenly identified as one by an automated prediction method), the locus closest to the locus start position will retain the original locus identifier, gene name, and gene symbol, and the gene farther from the locus start position will be considered a newly identified locus and will receive a new locus identifier, gene name, and gene symbol, following the recommendations mentioned above. The modification of the gene name and gene symbol should accommodate the new function, if applicable.

In cases where there is experimental evidence (such as a full-length cDNA sequence) indicating that two previously identified genes are actually one gene or part of the same locus, the two loci must be merged into one. The new locus must retain the locus identifier, gene name, and gene symbol from the locus closest to the start position of the new, merged locus. For the second gene, the locus identifier becomes a secondary locus ID (associated with the first one), and the second gene's name and symbol will become synonyms of the first one.

Transposable element locus ID

IDs assigned to loci containing a transposable element (TE) will be similar to those for gene loci except that the 'g' in the gene locus ID will be replaced by 'te', e.g., Os05te0000300. Since the majority of the current TE annotations are based on in silico prediction and computational analyses, it was decided at the RAP1 meeting that this system be implemented at a later stage. It was also suggested that experts be consulted before a nomenclature system for TEs is put in place. However, if a TE is proven to contain a functional gene, it will be assigned a gene locus identifier, as described above.

Registration of gene names and symbols

A web-based gene registration and nomenclature website has been established to support the registration process and can be accessed at http://shigen.lab.nig.ac.jp/rice/oryzabase_submission/gene_nomenclature/. Registration requests will be handled by subcommittees within the CGSNL, depending on whether the gene was identified by sequence or by phenotype. Rice researchers are encouraged to use this website to register genes and alleles of interest. The CGSNL will give priority to functionally characterized genes and may request experimental evidence in order to process a new request. The approved gene names and symbols will be released immediately upon approval. Although this nomenclature system will catalogue the genes from O. sativa, every effort will be made by the CGSNL to manage gene nomenclature in non-O. sativa rice species, and the rice community is encouraged to use the same gene registration site for registering rice genes from species other than O. sativa.

Each registration request should include:

- Descriptive information about the characteristics of the gene, including but not limited to information about its molecular function, its role in a biological process, its location in a subcellular component, its expression in a particular plant tissue and growth stage, and its effects on phenotype
- Inheritance and allelism data
- Source germplasm (genus, species, stock/strain/Accession_ID/germplasm repository). If from a hybrid accession, provide information on germplasm resources of the parents
- Chromosomal and map location
- Sequence data and gene model (intron/exon structure, promoter, etc.)
- GenBank accession number and/or locus_ID from at least one of the rice genome annotation projects (if available)
- Protein/gene family relationship
- Supporting documents including a photograph of the mutant phenotype, RNA and/or protein expression data, enzymatic assays, sequence alignments, etc.

The submitted registration entries will be sent to the convener of the CGSNL via an electronic submission form provided via the OryzaBase database that will host the gene registration site (http://shigen.lab.nig.ac.jp/rice/oryzabase_submission/gene_nomenclature/). After examining the submitted information to determine if a gene is new and to consider naming conventions, the convener will notify the submitting author to verify the new gene's full name and symbol. Upon approval, the registered gene will be assigned an appropriate gene full name and gene symbol. This must be reported in the annotation databases and in publications. The gene registration database will also provide an online and downloadable list of registered genes that will include information on the approved gene name, symbol, synonyms, mapped systematic_locus_IDs from annotation databases, and the associated GenBank accessions, if available (Table 2). The convener must also communicate with the appropriate databases and RGC members so that the new gene name and symbol are included in the list of genes/alleles published in the OryzaBase (http://www.shigen.nig.ac.jp/rice/oryzabase/top/top.jsp), Gramene (http://www.gramene.org), IRIS (http://www.iris.irri.org), RAP (http://rapdb.lab.nig.ac.jp/), TIGR (http://www.tigr.org/tdb/e2k1/osa1/), and other relevant databases and websites. A research note describing all newly accepted genes/alleles will be published biannually in the Rice Genetics Newsletter (http://www.shigen.nig.ac.jp/rice/oryzabase/rgn/newsletter.jsp). Suggestions for amendments of these rules can be submitted to the CGSNL using an online "Suggestions" form available on the OryzaBase website http://shigen.lab.nig.ac.jp/rice/oryzabase_submission/gene_nomenclature/. Amendments will be announced in the journal RICE, in the Rice Genetics Newsletter, and via the OryzaBase, Gramene, and IRIS databases and the rice-e-net e-mail list (http://chanko.lab.nig.ac.jp/list-touroku/rice-e-net-touroku.html). For contact, users are encouraged to send e-mail to [email protected].

By curating genes whose function has been experimentally determined ('genes of known function') independently of genes predicted by sequence analysis alone (gene models), the rice community has established a flexible yet robust system for bridging these two different approaches to gene structure/function analysis. The long-term goal is to provide a functional description for every gene in rice, at which time every gene model (locus_ID) should be associated with a gene name. However, with the rapidly diminishing cost of sequencing and the rapidly expanding number of sequenced rice genomes in the public domain, our understanding of the gene repertoire in rice is no longer limited by the availability of a single O. sativa ssp. japonica and a single O. sativa ssp. indica genome sequence. Thus, the rice gene nomenclature system has adopted protocols for establishing one-to-many associations between genes of known function and computationally determined gene models, where multiple types of evidence are curated in support of the functional description of each Oryza gene.
A gene may code for a protein product (CDS) or it may code for one of many kinds of non-coding RNA molecules, including snoRNA, snRNA, tRNA, rRNA, microRNA, siRNA, and fnRNA (functional RNA). If new classes of genes are identified in the future, we will amend our classification system accordingly. In the naming of genes, the use of English is preferred, and gene symbols should consist of Latin letters and Arabic numerals. The name of a gene should briefly describe the phenotype and/or convey some meaning as to the function of the gene product, if known. All new gene names should be approved by and registered with the CGSNL to avoid confusion and duplication. The rice community gives priority to the first published name for a gene, but it is recognized that names change over time to reflect new knowledge. While we do not propose the adoption of a rigid or restrictive gene nomenclature system at this time, we agree to adopt a system of synonyms that permits the establishment of correspondences between sequence-based gene identifiers and names based on experimentally confirmed biochemical function or phenotypic variation. This approach allows for continued evolution of the gene nomenclature system for rice as new technologies are developed and new knowledge is accumulated.

We kindly acknowledge the following researchers: Pankaj Jaiswal, Junjian Ni, and Immanuel Yap from the Gramene database (http://www.gramene.org) and the Department of Plant Breeding and Genetics at Cornell University, Ithaca, NY, USA; Toshiro Kinoshita from Kita 6 Jo, Nishi 18 Chome, Sapporo 060-0006, Japan; David Mackill and Richard Bruskiewich from the International Rice Research Institute, DAPO 7777, Metro Manila, Philippines; C. Robin Buell from the Department of Plant Biology, Michigan State University, East Lansing, MI 48824-1312, USA; Masahiro Yano, Takeshi Itoh, and Takuji Sasaki from the Department of Molecular Genetics, National Institute of Agrobiological Resources, Tsukuba, Ibaraki 305-8602, Japan; and Qifa Zhang from the National Key Laboratory of Crop Genetic Improvement, Huazhong Agricultural University, Wuhan 430070, People's Republic of China, for their help in preparing this manuscript, and numerous other experts for their help and useful suggestions on improving the rice gene nomenclature. We also thank all the members of the Rice Genetics Cooperative (RGC: http://www.shigen.nig.ac.jp/rice/oryzabase/rgn/office.jsp) for their support. Financial support was provided by NSF Grant DBI 0703908 (Cold Spring Harbor Subcontract 22930113 to Cornell University). This article is distributed under the terms of the Creative Commons Attribution Noncommercial License, which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.
PAG - Programmable Audio Generator

What is it? Why would you use it? How do you use it? Below is an actual file, let's take a look at it:

#format WAVE 44100 16 1 dtmf.wav
1 0.2 697 50 50 1209 50 50
2 0.2 697 50 50 1336 50 50
3 0.2 697 50 50 1477 50 50
4 0.2 770 50 50 1209 50 50
5 0.2 770 50 50 1336 50 50
6 0.2 770 50 50 1477 50 50
7 0.2 852 50 50 1209 50 50
8 0.2 852 50 50 1336 50 50
9 0.2 852 50 50 1477 50 50
* 0.2 941 50 50 1209 50 50
0 0.2 941 50 50 1336 50 50
# 0.2 941 50 50 1477 50 50
A 0.2 697 50 50 1633 50 50
B 0.2 770 50 50 1633 50 50
C 0.2 852 50 50 1633 50 50
D 0.2 941 50 50 1633 50 50
s 0.2 1 1 1 1 1 1
#macros
h 8675309
#start
0s1s2s3s4s5s6s7s8s9s0sAsBsCsDs*s#s
h

The first line defines the format of the file to produce:

#format WAVE 44100 16 1 dtmf.wav

The format of this line is:

#format fileType sampling bits channels fileName

fileType can be one of: WAVE, AIFF, AU
sampling is the sampling rate in Hz, such as 44100, 11025, 8000, etc
bits is the sample size, only 8 and 16 are allowed
channels is 1 or 2, for mono or stereo
fileName is, as expected, the name of the file to produce

Next come the definitions for the tones. Each tone is assigned to a character; whenever that character appears in the fourth section of the file (tones to send) it is produced. A tone can actually be defined to produce more than one frequency at a time, which is necessary in cases like DTMF where two tones must be played at the same time for each button on the phone. For example, the definition for the 1 button:

1 0.2 697 50 50 1209 50 50

The definition of this line is:

character length freq1 ampL1 ampR1 ... freqN ampLN ampRN

The first letter in the line is the character assigned to that tone. Any printable or non printable character may be used, except for the tilde ~, which is actually mapped to the space character, since it is more likely you will want to define the space than the tilde. That is, you use the tilde here to define the tones to send when the space character is encountered. Hopefully that isn't too confusing. The second value is the length (in seconds) that tone should be played for. Next, we take values in blocks of three, such as 697 50 50 in that line. The 697 means to play 697 Hz, and the 50 50 means to set the amplitude to 50% for both the left and right channels. The next set of three values is 1209 50 50, which means we want to play a second tone at the same time, of 1209 Hz, also at 50% amplitude in each channel. You can have different amplitudes for each tone and channel of course, but for a given channel, the sum of all the amplitudes for all the frequencies should not exceed 100%, or distortion will result. You can define up to ten frequencies for each character. You must specify all three values for each, including both amplitudes, even if the sound file is monophonic.

After all of the tones are defined, you define the macros, preceded by the line:

#macros

You must include this line, even if no macros follow. Macros are easy to define; in our example, we have one macro definition:

h 8675309

This means that whenever the character h is encountered, the characters 8675309 are substituted instead. Don't define a macro using a letter already assigned to a tone.

After the macros have been defined, there should be the following line:

#start

And following this, the text to be converted into tones.
In our example:

0s1s2s3s4s5s6s7s8s9s0sAsBsCsDs*s#s
h

In this case we are sending each button's tone pair, with an s between them; as s was defined to be a 1 Hz tone of 1% amplitude, it is effectively inaudible. Then we have the letter h, which is a macro, which means that the tones for buttons 8675309 are sent. We do have returns in our data portion, but as we have not defined them to map to any tones, they cause no additional output.

That's really about it. There is a text editor in the program, or you can use your own, like BBEdit. TextEdit that comes with Mac OS X doesn't always save in a plain text format. Windows users can use any editor, like Notepad, if they'd prefer.

After you have your text file opened in PAG, select Process File from the File menu. It may take a few seconds (or more!) depending on the length of the sound file being produced. After it is done, it displays a second window with some debugging results (which are also contained in three text files it produces). You can send this to me (along with the input file!) if you're having difficulties; it may help determine the problem.

PAG is only $19.99! Before buying, you should download a copy and make sure it works on your particular configuration.

Last modified December 4, 2005
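To make the tone-definition format concrete, here is a small, unofficial sketch (not the actual PAG implementation; the names and structure are our own) that parses one definition line as described above and renders it to floating-point samples:

```python
import math

def parse_tone_def(line):
    """Parse 'char length f1 ampL1 ampR1 ... fN ampLN ampRN' into a dict."""
    fields = line.split()
    char, length = fields[0], float(fields[1])
    triples = fields[2:]
    if len(triples) % 3 != 0:
        raise ValueError("each frequency needs a freq, left amp, and right amp")
    tones = [(float(triples[i]), float(triples[i + 1]), float(triples[i + 2]))
             for i in range(0, len(triples), 3)]
    return {"char": char, "length": length, "tones": tones}

def render_mono(tone_def, rate=44100):
    """Sum the component sine waves (using the left-channel amplitudes) into samples."""
    n = int(tone_def["length"] * rate)
    samples = []
    for i in range(n):
        t = i / rate
        s = sum((amp_l / 100.0) * math.sin(2 * math.pi * freq * t)
                for freq, amp_l, _amp_r in tone_def["tones"])
        samples.append(s)  # stays within [-1, 1] if the amplitudes sum to <= 100%
    return samples

d = parse_tone_def("1 0.2 697 50 50 1209 50 50")   # DTMF digit 1
pcm = render_mono(d)                                # 0.2 s at 44.1 kHz -> 8820 samples
```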
All cancers in Australia

The following material has been sourced from the Australian Institute of Health and Welfare.

Cancer is a diverse group of several hundred diseases in which some of the body's cells become abnormal and begin to multiply out of control. The abnormal cells can invade and damage the tissue around them, and spread to other parts of the body, causing further damage and eventually death. All cancers combined incorporates ICD-10 cancer codes C00–C97 (Malignant neoplasms of specific sites), D45 (Polycythaemia), D46 (Myelodysplastic syndromes), and D47.1, D47.3, D47.4 and D47.5 (Myeloproliferative diseases); but excludes basal cell carcinoma (BCC) and squamous cell carcinoma (SCC) of the skin. BCC and SCC, the most common skin cancers, are not notifiable diseases in Australia and are not reported in the Australian Cancer Database.

Estimated number of new cancer cases diagnosed in 2017: 134,174 = 72,169 males + 62,005 females
Estimated number of deaths from cancer in 2017: 47,753 = 27,076 males + 20,677 females
Chance of surviving at least 5 years (2009–2013): 68%
People living with cancer at the end of 2012 (diagnosed in the 5-year period 2008 to 2012): 410,530

New cases of cancer in Australia

In 2013, there were 124,465 new cases of cancer diagnosed in Australia (68,936 males and 55,529 females). In 2017, it is estimated that 134,174 new cases of cancer will be diagnosed in Australia (72,169 males and 62,005 females). In 2013, the age-standardised incidence rate was 483 new cases per 100,000 persons (562 for males and 416 for females). In 2017, it is estimated that the age-standardised incidence rate will be 470 cases per 100,000 persons (526 for males and 423 for females). The incidence rate of all cancers combined generally increases with age for both males and females (Figure 1). In 2017, it is estimated that the risk of an individual being diagnosed with cancer by their 85th birthday will be 1 in 2 (1 in 2 males and 1 in 2 females).

The number of new cases of cancer diagnosed increased from 47,440 (25,420 males and 22,020 females) in 1982 to 124,465 in 2013. Over the same period, the age-standardised incidence rate increased from 383 new cases per 100,000 persons (472 for males and 328 for females) in 1982 to 483 cases per 100,000 persons in 2013 (Figure 2).

|Cancer type||New cases 2017||% of all new cancers 2017 (within sex)|
|Breast (among females)||17,586||28.4|
|Prostate (among males)||16,665||23.1|

Deaths from cancer

In 2014, there were 44,171 deaths from cancer in Australia (24,718 males and 19,453 females). In 2017, it is estimated that this will increase to 47,753 deaths (27,076 males and 20,677 females). In 2014, the age-standardised mortality rate was 162 deaths per 100,000 persons (200 for males and 132 for females). In 2017, it is estimated that the age-standardised mortality rate will be 161 deaths per 100,000 persons (200 for males and 129 for females). The mortality rate for all cancers combined generally increases with age for both males and females (Figure 1). In 2017, the risk of an individual dying from cancer by their 85th birthday will be 1 in 5 (1 in 4 males and 1 in 6 females).

The number of deaths from cancer increased from 17,032 (9,541 males and 7,491 females) in 1968 to 44,171 in 2014. Over the same period, the age-standardised mortality rate decreased from 199 deaths per 100,000 persons in 1968 (258 for males and 159 for females) to 162 deaths per 100,000 persons in 2014 (Figure 2).
|Cancer type||Number of deaths 2017||% of all cancer deaths 2017 (within sex)|
|Prostate (among males)||3,452||12.7|
|Breast (among females)||3,087||14.9|

Figure 1: Estimated age-specific incidence and mortality rates for all cancers combined, by sex, 2017
Source: AIHW.

Figure 2: Age-standardised incidence rates for all cancers combined 1982–2013 and age-standardised mortality rates for all cancers combined 1968–2014, by sex
Source: AIHW.

Survival from cancer

In 2009–2013, individuals diagnosed with cancer had a 68% chance (68% for males and 69% for females) of surviving for 5 years compared to their counterparts in the general Australian population. Between 1984–1988 and 2009–2013, 5-year relative survival from cancer improved from 48% to 68%.

Figure 3: 5-year relative survival from all cancers combined, by sex, 1984–1988 to 2009–2013
Source: AIHW.

The survivorship population is measured using prevalence data. Prevalence refers to the number of people alive who have previously been diagnosed with cancer. The prevalence figures for 1, 5 and 31 years given below are the numbers of people living with cancer at the end of 2012 who had been diagnosed in the preceding 1, 5 and 31 years respectively. At the end of 2012, there were 106,340 people living who had been diagnosed with cancer that year, 410,530 people who had been diagnosed with cancer in the previous 5 years (from 2008 to 2012) and 994,605 people living who had been diagnosed with cancer in the previous 31 years (from 1982 to 2012).

International Statistical Classification of Diseases and Related Health Problems Version 10 (ICD–10)

Cancer is classified by the International Statistical Classification of Diseases and Related Health Problems Version 10 (ICD–10). This is a statistical classification, published by the World Health Organization, in which each morbid condition is assigned a unique code according to established criteria.

Future estimations for incidence and mortality are a mathematical extrapolation of past trends. They assume that the most recent trends will continue into the future, and are intended to illustrate future changes that might reasonably be expected to occur if the stated assumptions continue to apply over the estimated period. Actual future cancer incidence and mortality rates may vary from these estimations. For instance, new screening programs may increase the detection of new cancer cases; new vaccination programs may decrease the risk of developing cancer; and improvements in treatment options may decrease mortality rates.

Cancer incidence indicates the number of new cancers diagnosed during a specified time period (usually one year). The 2013 national incidence counts include estimates for NSW because the actual data were not available. Note that actual data for the Australian Capital Territory do not include cases identified from death certificates. The 2017 estimates are based on 2004–2013 incidence data. Due to rounding of these estimates, male and female incidence may not sum to person incidence.

Cancer mortality refers to the number of deaths occurring during a specified time period (usually one year) for which the underlying cause of death is cancer. The 2017 estimates are based on mortality data up to 2013. Joinpoint analysis was used on the longest time series of age-standardised rates available to determine the starting year of the most recent trend.

Prevalence of cancer refers to the number of people alive with a prior diagnosis of cancer at a given time.
It is distinct from incidence, which is the number of new cancers diagnosed within a given period of time. The longest period for which it is possible to calculate prevalence using the available national data (from 1982 to 2012) is currently 31 years, so this is used to provide an estimate of the 'total' prevalence of cancer as at the end of 2012, noting that people diagnosed with cancer before 1982 aren't included.

Age standardised rates

Incidence and mortality rates expressed per 100,000 population are age-standardised to the Australian population as at 30 June 2001.

- Australian Institute of Health and Welfare (AIHW) 2017. Australian Cancer Incidence and Mortality (ACIM) books: All cancers combined. Canberra: AIHW. [Accessed February 2017].
- AIHW 2017. Cancer in Australia 2017. Cancer series no. 101. Cat. no. CAN 100. Canberra: AIHW.
In welcoming the roundtable participants to the Laboratory, Al Ramponi, associate deputy director for Science and Technology, remarked that rare earth research brought him to the Lab and the laser program in the 1980s. Looking forward, he noted that "high tech and green tech are large and growing users of rare earth elements." The rare earths comprise 17 elements in the periodic table -- scandium, yttrium and the 15 lanthanides (lanthanum, cerium, praseodymium, neodymium, promethium, samarium, europium, gadolinium, terbium, dysprosium, holmium, erbium, thulium, ytterbium and lutetium). Despite their name, the rare earths (with the exception of promethium) are not all that rare, but are actually found in relatively high concentrations across the globe. However, because of their geochemical properties, they seldom occur in easily exploitable deposits. Rare earth elements are of strategic concern because they are used in many devices important to a high-tech economy and national security, including computer components, electronic polishing compounds, refining catalysts, superconductors, permanent magnets, hybrid/electric vehicle batteries and magnets, fiber optic communications systems, LCD screens, night vision goggles, tunable microwave resonators -- and, at the Laboratory, NIF's neodymium-glass laser amplifiers. The two DOE PI representatives, Kay Thompson and Diana Bauer, described DOE's interest in rare earths. Thompson observed that rare earth elements are central to DOE's mission because they are essential to many of the clean energy technologies that are needed to achieve a low-carbon society. She also noted that the roundtable is one of the first efforts supporting the new U.S.-Japan Clean Energy Policy Dialogue, announced earlier in November by President Obama and Japanese Prime Minister Kan at the Asia-Pacific Economic Cooperation (APEC) Leaders Meeting. Led by DOE and Japan's Ministry of Economy, Trade and Industry, the dialogue will focus on policies to promote the development and deployment of clean energy technologies, including electric vehicles, transformative energy, peaceful nuclear energy, and rare earth elements. (The initiative builds on the Clean Energy Action Plan agreed to by the U.S. and Japan in November 2009.) Bauer described the goals of the DOE Critical Materials Strategy. "The global ramp-up in clean energy technologies is changing the supply-and-demand dynamic of rare earth elements. Therefore, we're looking at ways to diversify the global supply chain, at potential substitutes, and at efficient use, from mining and extraction through manufacturing to reuse and recycling." She noted that DOE is focusing on the use of rare earths in four energy-related technology areas in particular -- magnets, batteries, photovoltaic thin films, and lighting. China currently produces about 97 percent of the world's supply of rare earths, although it has not always been the leader in rare earth production. Until 1948, most rare earths came from placer sand deposits in India and Brazil. In the 1950s, South Africa was the principal source of rare earths, extracted from large veins of monazite ore. In the 1960s through the 1980s, the Mountain Pass mine in California was the leading rare earth producer. Chinese rare earth production took off in the 1990s, when China undercut U.S. prices on rare earths as a way to obtain hard currency. 
The two days of talks and discussions covered all stages of rare earth production and use, from geological availability, to recovery, extraction and separation from mineral ores, to manufacturing and use, to alternatives and substitutes. Special sessions also provided an overview of Japan's New Energy and Industrial Technology Development Organization and ARPA-E's perspective on the issues surrounding rare earth elements. The roundtable was chaired by LLNL's Ed Jones of the Atmospheric, Earth and Energy Division within the Physical and Life Sciences Directorate. Japanese participants hailed from the Agency for Natural Resources and Energy, National Institute of Advanced Industrial Science and Technology (AIST), Japan Oil, Gas and Metals National Corporation (JOGMEC), New Energy and Industrial Technology Development Organization (NEDO), Kansai University and Tohoku University. Participating U.S. organizations included DOE's Advanced Research Projects Agency-Energy (ARPA-E) and Office of Policy and International Affairs (DOE-PI); Colorado School of Mines, Golden, Colo.; Naval Postgraduate School; Molycorp Minerals LLC, Greenwood Village, Colo.; NSTec, Las Vegas, Nev.; National Energy Technology Laboratory; U.S. Geological Survey; and Ames, Argonne, Idaho, Pacific Northwest, Lawrence Berkeley, Lawrence Livermore, and Sandia national laboratories.
A photo of a cat with the compression rate decreasing, and hence quality increasing, from left to right.
|Internet media type||image/jpeg|
|Uniform Type Identifier (UTI)||public.jpeg|
|Magic number||ff d8 ff|
|Developed by||Joint Photographic Experts Group|
|Type of format||lossy image format|
|Standard||ISO/IEC 10918, ITU-T T.81, ITU-T T.83, ITU-T T.84, ITU-T T.86|

JPEG (pronounced "JAY-peg") is a commonly used method of lossy compression for digital images, particularly for those images produced by digital photography. The degree of compression can be adjusted, allowing a selectable tradeoff between storage size and image quality. JPEG typically achieves 10:1 compression with little perceptible loss in image quality.

JPEG compression is used in a number of image file formats. JPEG/Exif is the most common image format used by digital cameras and other photographic image capture devices; along with JPEG/JFIF, it is the most common format for storing and transmitting photographic images on the World Wide Web. These format variations are often not distinguished, and are simply called JPEG.

The term "JPEG" is an abbreviation for the Joint Photographic Experts Group, which created the standard. The MIME media type for JPEG is image/jpeg, except in older Internet Explorer versions, which provide a MIME type of image/pjpeg when uploading JPEG images. JPEG files usually have a filename extension of .jpg or .jpeg. JPEG/JFIF supports a maximum image size of 65,535×65,535 pixels, hence up to 4 gigapixels (for an aspect ratio of 1:1).

- 1 The JPEG standard
- 2 Typical usage
- 3 JPEG compression
- 4 JPEG files
- 5 Syntax and structure
- 6 JPEG codec example
- 7 Effects of JPEG compression
- 8 Lossless further compression
- 9 Derived formats for stereoscopic 3D
- 10 Patent issues
- 11 Standards
- 12 Implementations
- 13 See also
- 14 References
- 15 External links

The JPEG standard

"JPEG" stands for Joint Photographic Experts Group, the name of the committee that created the JPEG standard and also other still picture coding standards. The "Joint" stood for ISO TC97 WG8 and CCITT SGVIII. In 1987 ISO TC 97 became ISO/IEC JTC1 and in 1992 CCITT became ITU-T. Currently, on the JTC1 side, JPEG is one of two sub-groups of ISO/IEC Joint Technical Committee 1, Subcommittee 29, Working Group 1 (ISO/IEC JTC 1/SC 29/WG 1) – titled as Coding of still pictures. On the ITU-T side, ITU-T SG16 is the respective body. The original JPEG group was organized in 1986, issuing the first JPEG standard in 1992, which was approved in September 1992 as ITU-T Recommendation T.81 and in 1994 as ISO/IEC 10918-1.

The JPEG standard specifies the codec, which defines how an image is compressed into a stream of bytes and decompressed back into an image, but not the file format used to contain that stream. The Exif and JFIF standards define the commonly used file formats for interchange of JPEG-compressed images.

JPEG standards are formally named as Information technology – Digital compression and coding of continuous-tone still images.
ISO/IEC 10918 consists of the following parts: |Part||ISO/IEC standard||ITU-T Rec.||First public release date||Latest amendment||Title||Description| |Part 1||ISO/IEC 10918-1:1994||T.81 (09/92)||Sep 18, 1992||Requirements and guidelines| |Part 2||ISO/IEC 10918-2:1995||T.83 (11/94)||Nov 11, 1994||Compliance testing||rules and checks for software conformance (to Part 1)| |Part 3||ISO/IEC 10918-3:1997||T.84 (07/96)||Jul 3, 1996||Apr 1, 1999||Extensions||set of extensions to improve the Part 1, including the SPIFF file format| |Part 4||ISO/IEC 10918-4:1999||T.86 (06/98)||Jun 18, 1998||Jun 29, 2012||Registration of JPEG profiles, SPIFF profiles, SPIFF tags, SPIFF colour spaces, APPn markers, SPIFF compression types and Registration Authorities (REGAUT)||methods for registering some of the parameters used to extend JPEG| |Part 5||ISO/IEC 10918-5:2013||T.871 (05/11)||May 14, 2011||JPEG File Interchange Format (JFIF)||A popular format which has been the de facto file format for images encoded by the JPEG standard. In 2009, the JPEG Committee formally established an Ad Hoc Group to standardize JFIF as JPEG Part 5.| |Part 6||ISO/IEC 10918-6:2013||T.872 (06/12)||Jun 2012||Application to printing systems||Specifies a subset of features and application tools for the interchange of images encoded according to the ISO/IEC 10918-1 for printing.| The JPEG compression algorithm is at its best on photographs and paintings of realistic scenes with smooth variations of tone and color. For web usage, where the amount of data used for an image is important, JPEG is very popular. JPEG/Exif is also the most common format saved by digital cameras. On the other hand, JPEG may not be as well suited for line drawings and other textual or iconic graphics, where the sharp contrasts between adjacent pixels can cause noticeable artifacts. Such images may be better saved in a lossless graphics format such as TIFF, GIF, PNG, or a raw image format. The JPEG standard actually includes a lossless coding mode, but that mode is not supported in most products. As the typical use of JPEG is a lossy compression method, which somewhat reduces the image fidelity, it should not be used in scenarios where the exact reproduction of the data is required (such as some scientific and medical imaging applications and certain technical image processing work). JPEG is also not well suited to files that will undergo multiple edits, as some image quality will usually be lost each time the image is decompressed and recompressed, particularly if the image is cropped or shifted, or if encoding parameters are changed – see digital generation loss for details. To avoid this, an image that is being modified or may be modified in the future can be saved in a lossless format, with a copy exported as JPEG for distribution. JPEG uses a lossy form of compression based on the discrete cosine transform (DCT). This mathematical operation converts each frame/field of the video source from the spatial (2D) domain into the frequency domain (a.k.a. transform domain.) A perceptual model based loosely on the human psychovisual system discards high-frequency information, i.e. sharp transitions in intensity, and color hue. In the transform domain, the process of reducing information is called quantization. 
In simpler terms, quantization is a method for optimally reducing a large number scale (with different occurrences of each number) into a smaller one, and the transform-domain is a convenient representation of the image because the high-frequency coefficients, which contribute less to the overall picture than other coefficients, are characteristically small-values with high compressibility. The quantized coefficients are then sequenced and losslessly packed into the output bitstream. Nearly all software implementations of JPEG permit user control over the compression-ratio (as well as other optional parameters), allowing the user to trade off picture-quality for smaller file size. In embedded applications (such as miniDV, which uses a similar DCT-compression scheme), the parameters are pre-selected and fixed for the application. The compression method is usually lossy, meaning that some original image information is lost and cannot be restored, possibly affecting image quality. There is an optional lossless mode defined in the JPEG standard. However, this mode is not widely supported in products. There is also an interlaced progressive JPEG format, in which data is compressed in multiple passes of progressively higher detail. This is ideal for large images that will be displayed while downloading over a slow connection, allowing a reasonable preview after receiving only a portion of the data. However, support for progressive JPEGs is not universal. When progressive JPEGs are received by programs that do not support them (such as versions of Internet Explorer before Windows 7) the software displays the image only after it has been completely downloaded. There are also many medical imaging and traffic systems that create and process 12-bit JPEG images, normally grayscale images. The 12-bit JPEG format has been part of the JPEG specification for some time, but this format is not as widely supported. A number of alterations to a JPEG image can be performed losslessly (that is, without recompression and the associated quality loss) as long as the image size is a multiple of 1 MCU block (Minimum Coded Unit) (usually 16 pixels in both directions, for 4:2:0 chroma subsampling). Utilities that implement this include jpegtran, with user interface Jpegcrop, and the JPG_TRANSFORM plugin to IrfanView. Blocks can be rotated in 90-degree increments, flipped in the horizontal, vertical and diagonal axes and moved about in the image. Not all blocks from the original image need to be used in the modified one. The top and left edge of a JPEG image must lie on an 8 × 8 pixel block boundary, but the bottom and right edge need not do so. This limits the possible lossless crop operations, and also prevents flips and rotations of an image whose bottom or right edge does not lie on a block boundary for all channels (because the edge would end up on top or left, where – as aforementioned – a block boundary is obligatory). When using lossless cropping, if the bottom or right side of the crop region is not on a block boundary then the rest of the data from the partially used blocks will still be present in the cropped file and can be recovered. It is also possible to transform between baseline and progressive formats without any loss of quality, since the only difference is the order in which the coefficients are placed in the file. Furthermore, several JPEG images can be losslessly joined together, as long as they were saved with the same quality and the edges coincide with block boundaries. 
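Whether a crop can be applied without recompression therefore reduces to a simple alignment check against the MCU grid. A small illustrative helper (our own, not part of any JPEG library) for the common 4:2:0 case:

```python
def lossless_crop_ok(x, y, width, height, img_w, img_h, mcu=16):
    """Return True when a crop needs no recompression and leaves no partial-block residue.

    For 4:2:0 chroma subsampling the MCU is 16x16 pixels, so the crop's top-left
    corner must sit on an MCU boundary. The bottom/right edges either coincide
    with the image's own bottom/right edge or also fall on an MCU boundary;
    otherwise the crop is still possible, but data from the partially used
    blocks remains in the file.
    """
    if x % mcu or y % mcu:
        return False                       # top/left must be MCU-aligned
    right, bottom = x + width, y + height
    right_ok = (right == img_w) or (right % mcu == 0)
    bottom_ok = (bottom == img_h) or (bottom % mcu == 0)
    return right_ok and bottom_ok

# A 1024x768 4:2:0 image: cropping 512x384 from (16, 32) is clean,
# but cropping from (8, 8) is not, because the top-left corner is misaligned.
assert lossless_crop_ok(16, 32, 512, 384, 1024, 768) is True
assert lossless_crop_ok(8, 8, 512, 384, 1024, 768) is False
```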
The file format known as "JPEG Interchange Format" (JIF) is specified in Annex B of the standard. However, this "pure" file format is rarely used, primarily because of the difficulty of programming encoders and decoders that fully implement all aspects of the standard and because of certain shortcomings of the standard: - Color space definition - Component sub-sampling registration - Pixel aspect ratio definition. Several additional standards have evolved to address these issues. The first of these, released in 1992, was JPEG File Interchange Format (or JFIF), followed in recent years by Exchangeable image file format (Exif) and ICC color profiles. Both of these formats use the actual JIF byte layout, consisting of different markers, but in addition employ one of the JIF standard's extension points, namely the application markers: JFIF uses APP0, while Exif uses APP1. Within these segments of the file, that were left for future use in the JIF standard and aren't read by it, these standards add specific metadata. Thus, in some ways JFIF is a cutdown version of the JIF standard in that it specifies certain constraints (such as not allowing all the different encoding modes), while in other ways it is an extension of JIF due to the added metadata. The documentation for the original JFIF standard states: - JPEG File Interchange Format is a minimal file format which enables JPEG bitstreams to be exchanged between a wide variety of platforms and applications. This minimal format does not include any of the advanced features found in the TIFF JPEG specification or any application specific file format. Nor should it, for the only purpose of this simplified format is to allow the exchange of JPEG compressed images. Image files that employ JPEG compression are commonly called "JPEG files", and are stored in variants of the JIF image format. Most image capture devices (such as digital cameras) that output JPEG are actually creating files in the Exif format, the format that the camera industry has standardized on for metadata interchange. On the other hand, since the Exif standard does not allow color profiles, most image editing software stores JPEG in JFIF format, and also include the APP1 segment from the Exif file to include the metadata in an almost-compliant way; the JFIF standard is interpreted somewhat flexibly. Strictly speaking, the JFIF and Exif standards are incompatible because each specifies that its marker segment (APP0 or APP1, respectively) appear first. In practice, most JPEG files contain a JFIF marker segment that precedes the Exif header. This allows older readers to correctly handle the older format JFIF segment, while newer readers also decode the following Exif segment, being less strict about requiring it to appear first. JPEG filename extensions The most common filename extensions for files employing JPEG compression are .jpg and .jpeg, though .jpe, .jfif and .jif are also used. It is also possible for JPEG data to be embedded in other file types – TIFF encoded files often embed a JPEG image as a thumbnail of the main image; and MP3 files can contain a JPEG of cover art, in the ID3v2 tag. Many JPEG files embed an ICC color profile (color space). Commonly used color profiles include sRGB and Adobe RGB. Because these color spaces use a non-linear transformation, the dynamic range of an 8-bit JPEG file is about 11 stops; see gamma curve. 
Syntax and structure A JPEG image consists of a sequence of segments, each beginning with a marker, each of which begins with a 0xFF byte followed by a byte indicating what kind of marker it is. Some markers consist of just those two bytes; others are followed by two bytes (high then low) indicating the length of marker-specific payload data that follows. (The length includes the two bytes for the length, but not the two bytes for the marker.) Some markers are followed by entropy-coded data; the length of such a marker does not include the entropy-coded data. Note that consecutive 0xFF bytes are used as fill bytes for padding purposes, although this fill byte padding should only ever take place for markers immediately following entropy-coded scan data (see JPEG specification section B.1.1.2 and E.1.2 for details; specifically "In all cases where markers are appended after the compressed data, optional 0xFF fill bytes may precede the marker"). Within the entropy-coded data, after any 0xFF byte, a 0x00 byte is inserted by the encoder before the next byte, so that there does not appear to be a marker where none is intended, preventing framing errors. Decoders must skip this 0x00 byte. This technique, called byte stuffing (see JPEG specification section F.1.2.3), is only applied to the entropy-coded data, not to marker payload data. Note however that entropy-coded data has a few markers of its own; specifically the Reset markers (0xD0 through 0xD7), which are used to isolate independent chunks of entropy-coded data to allow parallel decoding, and encoders are free to insert these Reset markers at regular intervals (although not all encoders do this). |SOI||0xFF, 0xD8||none||Start Of Image| |SOF0||0xFF, 0xC0||variable size||Start Of Frame (baseline DCT)||Indicates that this is a baseline DCT-based JPEG, and specifies the width, height, number of components, and component subsampling (e.g., 4:2:0).| |SOF2||0xFF, 0xC2||variable size||Start Of Frame (progressive DCT)||Indicates that this is a progressive DCT-based JPEG, and specifies the width, height, number of components, and component subsampling (e.g., 4:2:0).| |DHT||0xFF, 0xC4||variable size||Define Huffman Table(s)||Specifies one or more Huffman tables.| |DQT||0xFF, 0xDB||variable size||Define Quantization Table(s)||Specifies one or more quantization tables.| |DRI||0xFF, 0xDD||4 bytes||Define Restart Interval||Specifies the interval between RSTn markers, in macroblocks. This marker is followed by two bytes indicating the fixed size so it can be treated like any other variable size segment.| |SOS||0xFF, 0xDA||variable size||Start Of Scan||Begins a top-to-bottom scan of the image. In baseline DCT JPEG images, there is generally a single scan. Progressive DCT JPEG images usually contain multiple scans. This marker specifies which slice of data it will contain, and is immediately followed by entropy-coded data.| |RSTn||0xFF, 0xDn (n=0..7)||none||Restart||Inserted every r macroblocks, where r is the restart interval set by a DRI marker. Not used if there was no DRI marker. The low three bits of the marker code cycle in value from 0 to 7.| |APPn||0xFF, 0xEn||variable size||Application-specific||For example, an Exif JPEG file uses an APP1 marker to store metadata, laid out in a structure based closely on TIFF.| |COM||0xFF, 0xFE||variable size||Comment||Contains a text comment.| |EOI||0xFF, 0xD9||none||End Of Image| There are other Start Of Frame markers that introduce other kinds of JPEG encodings. 
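The marker layout above can be walked with only a few lines of code. A minimal, illustrative parser follows (names and the sample filename are our own; it only lists marker segments up to the start-of-scan and does not decode the entropy-coded data, and error handling is kept to a minimum):

```python
import struct

# Markers that carry no length field (per the table above): SOI, EOI, RST0-RST7.
STANDALONE = {0xD8, 0xD9} | set(range(0xD0, 0xD8))

def list_segments(data: bytes):
    """Yield (marker, offset, payload_length) for segments before the scan data."""
    i = 0
    while i < len(data) - 1:
        if data[i] != 0xFF:
            raise ValueError(f"expected a marker at offset {i}")
        marker = data[i + 1]
        if marker == 0xFF:            # 0xFF fill-byte padding, skip it
            i += 1
            continue
        if marker in STANDALONE:
            yield marker, i, 0
            i += 2
            continue
        (length,) = struct.unpack(">H", data[i + 2:i + 4])
        yield marker, i, length - 2   # the length field includes its own two bytes
        if marker == 0xDA:            # SOS: entropy-coded data follows, stop here
            return
        i += 2 + length

with open("photo.jpg", "rb") as f:    # hypothetical input file
    for marker, offset, size in list_segments(f.read()):
        print(f"0xFF{marker:02X} at offset {offset}, payload {size} bytes")
```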
Since several vendors might use the same APPn marker type, application-specific markers often begin with a standard or vendor name (e.g., "Exif" or "Adobe") or some other identifying string. At a restart marker, block-to-block predictor variables are reset, and the bitstream is synchronized to a byte boundary. Restart markers provide means for recovery after bitstream error, such as transmission over an unreliable network or file corruption. Since the runs of macroblocks between restart markers may be independently decoded, these runs may be decoded in parallel. JPEG codec example Although a JPEG file can be encoded in various ways, most commonly it is done with JFIF encoding. The encoding process consists of several steps: - The representation of the colors in the image is converted from RGB to Y′CBCR, consisting of one luma component (Y'), representing brightness, and two chroma components, (CB and CR), representing color. This step is sometimes skipped. - The resolution of the chroma data is reduced, usually by a factor of 2 or 3. This reflects the fact that the eye is less sensitive to fine color details than to fine brightness details. - The image is split into blocks of 8×8 pixels, and for each block, each of the Y, CB, and CR data undergoes the discrete cosine transform (DCT), which was developed in 1974 by N. Ahmed, T. Natarajan and K. R. Rao; see Citation 1 in discrete cosine transform. A DCT is similar to a Fourier transform in the sense that it produces a kind of spatial frequency spectrum. - The amplitudes of the frequency components are quantized. Human vision is much more sensitive to small variations in color or brightness over large areas than to the strength of high-frequency brightness variations. Therefore, the magnitudes of the high-frequency components are stored with a lower accuracy than the low-frequency components. The quality setting of the encoder (for example 50 or 95 on a scale of 0–100 in the Independent JPEG Group's library) affects to what extent the resolution of each frequency component is reduced. If an excessively low quality setting is used, the high-frequency components are discarded altogether. - The resulting data for all 8×8 blocks is further compressed with a lossless algorithm, a variant of Huffman encoding. The decoding process reverses these steps, except the quantization because it is irreversible. In the remainder of this section, the encoding and decoding processes are described in more detail. Many of the options in the JPEG standard are not commonly used, and as mentioned above, most image software uses the simpler JFIF format when creating a JPEG file, which among other things specifies the encoding method. Here is a brief description of one of the more common methods of encoding when applied to an input that has 24 bits per pixel (eight each of red, green, and blue). This particular option is a lossy data compression method. Color space transformation First, the image should be converted from RGB into a different color space called Y′CBCR (or, informally, YCbCr). It has three components Y', CB and CR: the Y' component represents the brightness of a pixel, and the CB and CR components represent the chrominance (split into blue and red components). This is basically the same color space as used by digital color television as well as digital video including video DVDs, and is similar to the way color is represented in analog PAL video and MAC (but not by analog NTSC, which uses the YIQ color space). 
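As a concrete illustration of this first encoding step, the full-range RGB to Y′CbCr conversion specified by JFIF can be written as follows (a sketch only; real encoders operate on whole arrays rather than one pixel at a time):

```python
def rgb_to_ycbcr(r: float, g: float, b: float) -> tuple[float, float, float]:
    """JFIF full-range RGB -> Y'CbCr for 8-bit samples (all values in 0..255)."""
    y  =       0.299    * r + 0.587    * g + 0.114    * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5      * b
    cr = 128 + 0.5      * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

# A pure mid-gray pixel carries no chroma: Cb and Cr sit at the neutral value 128.
y, cb, cr = rgb_to_ycbcr(128, 128, 128)
assert (round(y), round(cb), round(cr)) == (128, 128, 128)
```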
The Y′CBCR color space conversion allows greater compression without a significant effect on perceptual image quality (or greater perceptual image quality for the same compression). The compression is more efficient because the brightness information, which is more important to the eventual perceptual quality of the image, is confined to a single channel. This more closely corresponds to the perception of color in the human visual system. The color transformation also improves compression by statistical decorrelation.

A particular conversion to Y′CBCR is specified in the JFIF standard, and should be performed for the resulting JPEG file to have maximum compatibility. However, some JPEG implementations in "highest quality" mode do not apply this step and instead keep the color information in the RGB color model, where the image is stored in separate channels for red, green and blue brightness components. This results in less efficient compression, and would not likely be used when file size is especially important.

Due to the densities of color- and brightness-sensitive receptors in the human eye, humans can see considerably more fine detail in the brightness of an image (the Y' component) than in the hue and color saturation of an image (the Cb and Cr components). Using this knowledge, encoders can be designed to compress images more efficiently. The transformation into the Y′CBCR color model enables the next usual step, which is to reduce the spatial resolution of the Cb and Cr components (called "downsampling" or "chroma subsampling"). The ratios at which the downsampling is ordinarily done for JPEG images are 4:4:4 (no downsampling), 4:2:2 (reduction by a factor of 2 in the horizontal direction), or (most commonly) 4:2:0 (reduction by a factor of 2 in both the horizontal and vertical directions). For the rest of the compression process, Y', Cb and Cr are processed separately and in a very similar manner.

After subsampling, each channel must be split into 8×8 blocks. Depending on chroma subsampling, this yields MCU (Minimum Coded Unit) blocks of size 8×8 (4:4:4 – no subsampling), 16×8 (4:2:2), or most commonly 16×16 (4:2:0). In video compression, MCUs are called macroblocks. If the data for a channel does not represent an integer number of blocks, then the encoder must fill the remaining area of the incomplete blocks with some form of dummy data. Filling the edges with a fixed color (for example, black) can create ringing artifacts along the visible part of the border; repeating the edge pixels is a common technique that reduces (but does not necessarily completely eliminate) such artifacts, and more sophisticated border filling techniques can also be applied.

Discrete cosine transform

Next, each 8×8 block of each component (Y, Cb, Cr) is converted to a frequency-domain representation, using a normalized, two-dimensional type-II discrete cosine transform (DCT), which was introduced by N. Ahmed, T. Natarajan and K. R. Rao in 1974; see Citation 1 in discrete cosine transform. The DCT is sometimes referred to as "type-II DCT" in the context of a family of transforms as in discrete cosine transform, and the corresponding inverse (IDCT) is denoted as "type-III DCT". As an example, one such 8×8 8-bit subimage might be:

Before computing the DCT of the 8×8 block, its values are shifted from a positive range to one centered around zero. For an 8-bit image, each entry in the original block falls in the range [0, 255].
The midpoint of the range (in this case, the value 128) is subtracted from each entry to produce a data range that is centered around zero, so that the modified range is [−128, 127]. This step reduces the dynamic range requirements in the DCT processing stage that follows. (Aside from the difference in dynamic range within the DCT stage, this step is mathematically equivalent to subtracting 1024 from the DC coefficient after performing the transform – which may be a better way to perform the operation on some architectures since it involves performing only one subtraction rather than 64 of them.) This step results in the following values:

The next step is to take the two-dimensional DCT, which is given by:

\[ G_{u,v} = \frac{1}{4}\,\alpha(u)\,\alpha(v) \sum_{x=0}^{7} \sum_{y=0}^{7} g_{x,y} \cos\!\left[\frac{(2x+1)u\pi}{16}\right] \cos\!\left[\frac{(2y+1)v\pi}{16}\right] \]

where

- u is the horizontal spatial frequency, for the integers 0 ≤ u < 8.
- v is the vertical spatial frequency, for the integers 0 ≤ v < 8.
- α(u) = 1/√2 if u = 0, otherwise 1, is a normalizing scale factor to make the transformation orthonormal.
- g_{x,y} is the pixel value at coordinates (x, y).
- G_{u,v} is the DCT coefficient at coordinates (u, v).

If we perform this transformation on our matrix above, we get the following (rounded to the nearest two digits beyond the decimal point):

Note the top-left corner entry with the rather large magnitude. This is the DC coefficient. It will define the basic hue for the whole block. The DC coefficient may also be called the constant component. The remaining 63 coefficients are called the AC coefficients, where AC may stand for alternating components. The advantage of the DCT is its tendency to aggregate most of the signal in one corner of the result, as may be seen above. The quantization step to follow accentuates this effect while simultaneously reducing the overall size of the DCT coefficients, resulting in a signal that is easy to compress efficiently in the entropy stage.

The DCT temporarily increases the bit-depth of the data, since the DCT coefficients of an 8-bit/component image take up to 11 or more bits (depending on fidelity of the DCT calculation) to store. This may force the codec to temporarily use 16-bit bins to hold these coefficients, doubling the size of the image representation at this point; they are typically reduced back to 8-bit values by the quantization step. The temporary increase in size at this stage is not a performance concern for most JPEG implementations, because typically only a very small part of the image is stored in full DCT form at any given time during the image encoding or decoding process.

The human eye is good at seeing small differences in brightness over a relatively large area, but not so good at distinguishing the exact strength of a high-frequency brightness variation. This allows one to greatly reduce the amount of information in the high-frequency components. This is done by simply dividing each component in the frequency domain by a constant for that component, and then rounding to the nearest integer. This rounding operation is the only lossy operation in the whole process (other than chroma subsampling) if the DCT computation is performed with sufficiently high precision. As a result, it is typically the case that many of the higher-frequency components are rounded to zero, and many of the rest become small positive or negative numbers, which take many fewer bits to represent. The elements in the quantization matrix control the compression ratio, with larger values producing greater compression.
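The level shift and forward DCT above can be reproduced with an off-the-shelf DCT routine. The sketch below uses SciPy's orthonormal type-II DCT along both axes, which matches the G(u,v) formula given above, including the 1/4·α(u)·α(v) scaling; the function name and the SciPy/NumPy dependencies are assumptions for illustration.

```python
import numpy as np
from scipy.fftpack import dct

def forward_dct_block(block):
    """Level-shift an 8x8 block of 8-bit samples and apply the 2-D type-II DCT."""
    shifted = block.astype(np.float64) - 128.0          # center values around zero
    return dct(dct(shifted, axis=0, norm="ortho"), axis=1, norm="ortho")

# Example with a flat mid-gray block: every coefficient except the DC term is ~0.
flat = np.full((8, 8), 128, dtype=np.uint8)
coeffs = forward_dct_block(flat)
print(round(coeffs[0, 0], 2))   # 0.0 for this flat block
```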
A typical quantization matrix (for a quality of 50% as specified in the original JPEG Standard) is as follows:

The quantized DCT coefficients are computed with

\[ B_{j,k} = \operatorname{round}\!\left(\frac{G_{j,k}}{Q_{j,k}}\right) \quad \text{for } j, k = 0, 1, \ldots, 7 \]

where G is the matrix of unquantized DCT coefficients, Q is the quantization matrix above, and B is the matrix of quantized DCT coefficients. Using this quantization matrix with the DCT coefficient matrix from above results in:

For example, using −415 (the DC coefficient) and rounding to the nearest integer: round(−415/16) = round(−25.94) = −26.

Entropy coding is a special form of lossless data compression. It involves arranging the image components in a "zigzag" order, employing a run-length encoding (RLE) algorithm that groups similar frequencies together, inserting length-coding zeros, and then using Huffman coding on what is left.

The JPEG standard also allows, but does not require, decoders to support the use of arithmetic coding, which is mathematically superior to Huffman coding. However, this feature has rarely been used, as it was historically covered by patents requiring royalty-bearing licenses, and because it is slower to encode and decode compared to Huffman coding. Arithmetic coding typically makes files about 5–7% smaller.

The previous quantized DC coefficient is used to predict the current quantized DC coefficient. The difference between the two is encoded rather than the actual value. The encoding of the 63 quantized AC coefficients does not use such prediction differencing.

The zigzag sequence for the above quantized coefficients is shown below. (The format shown is just for ease of understanding/viewing.)

If the i-th block is represented by B_i and positions within each block are represented by (p, q), where p = 0, 1, ..., 7 and q = 0, 1, ..., 7, then any coefficient in the DCT image can be represented as B_i(p, q). Thus, in the above scheme, the order of encoding pixels (for the i-th block) is B_i(0,0), B_i(0,1), B_i(1,0), B_i(2,0), B_i(1,1), B_i(0,2), B_i(0,3), B_i(1,2), and so on.

This encoding mode is called baseline sequential encoding. Baseline JPEG also supports progressive encoding. While sequential encoding encodes coefficients of a single block at a time (in a zigzag manner), progressive encoding encodes similar-positioned coefficients of all blocks in one go, followed by the next positioned coefficients of all blocks, and so on. So, if the image is divided into N 8×8 blocks, then progressive encoding encodes B_i(0,0) for all blocks, i.e., for all i = 1, 2, ..., N. This is followed by encoding the B_i(0,1) coefficient of all blocks, followed by the B_i(1,0) coefficient of all blocks, then the B_i(2,0) coefficient of all blocks, and so on. It should be noted here that once all similar-positioned coefficients have been encoded, the next position to be encoded is the one occurring next in the zigzag traversal as indicated in the figure above. It has been found that baseline progressive JPEG encoding usually gives better compression as compared to baseline sequential JPEG due to the ability to use different Huffman tables (see below) tailored for different frequencies on each "scan" or "pass" (which includes similar-positioned coefficients), though the difference is not too large.

In the rest of the article, it is assumed that the coefficient pattern generated is due to sequential mode. In order to encode the above generated coefficient pattern, JPEG uses Huffman encoding. The JPEG standard provides general-purpose Huffman tables; encoders may also choose to generate Huffman tables optimized for the actual frequency distributions in images being encoded.

The process of encoding the zig-zag quantized data begins with a run-length encoding explained below, where:

- x is the non-zero, quantized AC coefficient.
- RUNLENGTH is the number of zeroes that came before this non-zero AC coefficient.
- SIZE is the number of bits required to represent x.
- AMPLITUDE is the bit-representation of x.

The run-length encoding works by examining each non-zero AC coefficient and determining how many zeroes came before it (since the previous non-zero AC coefficient). With this information, two symbols are created:

| Symbol 1 | Symbol 2 |
|---|---|
| (RUNLENGTH, SIZE) | (AMPLITUDE) |

Both RUNLENGTH and SIZE rest on the same byte, meaning that each only contains four bits of information. The higher bits deal with the number of zeroes, while the lower bits denote the number of bits necessary to encode the value of x. This has the immediate implication of Symbol 1 only being able to store information regarding the first 15 zeroes preceding the non-zero AC coefficient. However, JPEG defines two special Huffman code words. One is for ending the sequence prematurely when the remaining coefficients are zero (called "End-of-Block" or "EOB"), and another for when the run of zeroes goes beyond 15 before reaching a non-zero AC coefficient. In such a case, where 16 zeroes are encountered before a given non-zero AC coefficient, Symbol 1 is encoded "specially" as: (15, 0)(0). The overall process continues until "EOB" – denoted by (0, 0) – is reached. (A rough sketch of this zigzag scan and symbol generation appears at the end of this section.)

With this in mind, the sequence from earlier becomes:

(0, 2)(-3); (1, 2)(-3); (0, 2)(-2); (0, 3)(-6); (0, 2)(2); (0, 3)(-4); (0, 1)(1); (0, 2)(-3); (0, 1)(1); (0, 1)(1); (0, 3)(5); (0, 1)(1); (0, 2)(2); (0, 1)(-1); (0, 1)(1); (0, 1)(-1); (0, 2)(2); (5, 1)(-1); (0, 1)(-1); (0, 0).

(The first value in the matrix, -26, is the DC coefficient; it is not encoded the same way. See above.)

From here, frequency calculations are made based on occurrences of the coefficients. In our example block, most of the quantized coefficients are small numbers that are not preceded immediately by a zero coefficient. These more-frequent cases will be represented by shorter code words.

Compression ratio and artifacts

The resulting compression ratio can be varied according to need by being more or less aggressive in the divisors used in the quantization phase. Ten-to-one compression usually results in an image that cannot be distinguished by eye from the original. A compression ratio of 100:1 is usually possible, but the result will look distinctly artifacted compared to the original. The appropriate level of compression depends on the use to which the image will be put.

[Figure: illustration of edge busyness]

Those who use the World Wide Web may be familiar with the irregularities known as compression artifacts that appear in JPEG images, which may take the form of noise around contrasting edges (especially curves and corners), or 'blocky' images. These are due to the quantization step of the JPEG algorithm. They are especially noticeable around sharp corners between contrasting colors (text is a good example, as it contains many such corners). The analogous artifacts in MPEG video are referred to as mosquito noise, as the resulting "edge busyness" and spurious dots, which change over time, resemble mosquitoes swarming around the object.

These artifacts can be reduced by choosing a lower level of compression; they may be eliminated by saving an image using a lossless file format, though for photographic images this will usually result in a larger file size. Images created with ray-tracing programs can show noticeable blocky shapes on the terrain.
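Returning for a moment to the entropy-coding stage, here is the promised sketch of the zigzag scan and run-length symbol generation described above. It is illustrative only: the helper names are mine, the AMPLITUDE bit-representation of negative values and the final Huffman coding are omitted, and the block is assumed to be an 8×8 list-of-lists of quantized integers.

```python
def zigzag_order(n=8):
    """Return (row, col) index pairs of an n x n block in zigzag scan order."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

def rle_symbols(block):
    """Generate (RUNLENGTH, SIZE)(AMPLITUDE) symbols for the 63 AC coefficients."""
    ac = [block[r][c] for r, c in zigzag_order()][1:]   # skip the DC coefficient
    symbols, run = [], 0
    for x in ac:
        if x == 0:
            run += 1
            continue
        while run > 15:                   # runs longer than 15 use the (15, 0) code
            symbols.append(((15, 0), 0))
            run -= 16
        size = abs(x).bit_length()        # bits needed to represent the magnitude
        symbols.append(((run, size), x))
        run = 0
    symbols.append(((0, 0), None))        # End-of-Block
    return symbols
```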
Certain low-intensity compression artifacts might be acceptable when simply viewing the images, but can be emphasized if the image is subsequently processed, usually resulting in unacceptable quality. Consider the example below, demonstrating the effect of lossy compression on an edge detection processing step.

[Table: the original image and its Canny edge detector output, compared under lossless and lossy compression]

Some programs allow the user to vary the amount by which individual blocks are compressed. Stronger compression is applied to areas of the image that show fewer artifacts. This way it is possible to manually reduce JPEG file size with less loss of quality.

Since the quantization stage always results in a loss of information, the JPEG standard is always a lossy compression codec. (Information is lost both in quantizing and in rounding of the floating-point numbers.) Even if the quantization matrix is a matrix of ones, information will still be lost in the rounding step.

Decoding to display the image consists of doing all the above in reverse. Taking the DCT coefficient matrix (after adding the difference of the DC coefficient back in) and taking the entry-for-entry product with the quantization matrix from above results in a matrix which closely resembles the original DCT coefficient matrix for the top-left portion.

The next step is to take the two-dimensional inverse DCT (a 2D type-III DCT), which is given by:

\[ f_{x,y} = \frac{1}{4} \sum_{u=0}^{7} \sum_{v=0}^{7} \alpha(u)\,\alpha(v)\, F_{u,v} \cos\!\left[\frac{(2x+1)u\pi}{16}\right] \cos\!\left[\frac{(2y+1)v\pi}{16}\right] \]

where

- x is the pixel row, for the integers 0 ≤ x < 8.
- y is the pixel column, for the integers 0 ≤ y < 8.
- α(u) is defined as above, for the integers 0 ≤ u < 8.
- F_{u,v} is the reconstructed approximate coefficient at coordinates (u, v).
- f_{x,y} is the reconstructed pixel value at coordinates (x, y).

Rounding the output to integer values (since the original had integer values) results in an image with values still shifted down by 128; adding 128 to each entry then yields the decompressed subimage. In general, the decompression process may produce values outside the original input range of [0, 255]. If this occurs, the decoder needs to clip the output values to keep them within that range, to prevent overflow when storing the decompressed image with the original bit depth.

The decompressed subimage can be compared to the original subimage (also see images to the right): taking the difference (original − uncompressed) gives the error values, with an average absolute error of about 5 values per pixel. The error is most noticeable in the bottom-left corner, where the bottom-left pixel becomes darker than the pixel to its immediate right.

The encoding description in the JPEG standard does not fix the precision needed for the output compressed image. However, the JPEG standard (and the similar MPEG standards) includes some precision requirements for the decoding, including all parts of the decoding process (variable length decoding, inverse DCT, dequantization, renormalization of outputs); the output from the reference algorithm must not exceed:

- a maximum of one bit of difference for each pixel component
- low mean square error over each 8×8-pixel block
- very low mean error over each 8×8-pixel block
- very low mean square error over the whole image
- extremely low mean error over the whole image

These assertions are tested on a large set of randomized input images, to handle the worst cases. The former IEEE 1180–1990 standard contained some similar precision requirements.
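The per-block decoding path just described (dequantize, inverse DCT, undo the level shift, clip) can be sketched as below. This assumes SciPy and NumPy, uses double precision throughout (comfortably more than the 14-bit precision mentioned next), and takes Q to be whatever quantization table the file's DQT segment defined; it is an illustration, not a conforming decoder.

```python
import numpy as np
from scipy.fftpack import idct

def decode_block(quantized, Q):
    """Dequantize an 8x8 block, apply the 2-D inverse DCT, undo the level shift, and clip."""
    coeffs = quantized.astype(np.float64) * Q            # entry-for-entry product with Q
    spatial = idct(idct(coeffs, axis=0, norm="ortho"), axis=1, norm="ortho")
    return np.clip(np.round(spatial) + 128, 0, 255).astype(np.uint8)
```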
The precision has a consequence on the implementation of decoders, and it is critical because some encoding processes (notably used for encoding sequences of images like MPEG) need to be able to construct, on the encoder side, a reference decoded image. In order to support 8-bit precision per pixel component output, dequantization and inverse DCT transforms are typically implemented with at least 14-bit precision in optimized decoders.

Effects of JPEG compression

JPEG compression artifacts blend well into photographs with detailed non-uniform textures, allowing higher compression ratios. Notice how a higher compression ratio first affects the high-frequency textures in the upper-left corner of the image, and how the contrasting lines become more fuzzy. The very high compression ratio severely affects the quality of the image, although the overall colors and image form are still recognizable. However, the precision of colors suffers less (to a human eye) than the precision of contours (based on luminance). This justifies the fact that images should first be transformed into a color model separating the luminance from the chromatic information, before subsampling the chromatic planes (which may also use lower-quality quantization), in order to preserve the precision of the luminance plane with more information bits.

For information, the uncompressed 24-bit RGB bitmap image below (73,242 pixels) would require 219,726 bytes (excluding all other information headers). The file sizes indicated below include the internal JPEG information headers and some metadata. For highest quality images (Q=100), about 8.25 bits per color pixel is required. On grayscale images, a minimum of 6.5 bits per pixel is enough (comparable Q=100 quality color information requires about 25% more encoded bits). The highest quality image below (Q=100) is encoded at nine bits per color pixel; the medium quality image (Q=25) uses one bit per color pixel. For most applications, the quality factor should not go below 0.75 bit per pixel (Q=12.5), as demonstrated by the low quality image. The image at lowest quality uses only 0.13 bit per pixel and displays very poor color. This is useful when the image will be displayed in a significantly scaled-down size.

Note: The above images are not IEEE / CCIR / EBU test images, and the encoder settings are not specified or available.

| Image quality | Size (bytes) | Compression ratio | Comment |
|---|---|---|---|
| Highest quality (Q = 100) | 83,261 | 2.6:1 | Extremely minor artifacts |
| High quality (Q = 50) | 15,138 | 15:1 | Initial signs of subimage artifacts |
| Medium quality (Q = 25) | 9,553 | 23:1 | Stronger artifacts; loss of high frequency information |
| Low quality (Q = 10) | 4,787 | 46:1 | Severe high frequency loss; artifacts on subimage boundaries ("macroblocking") are obvious |
| Lowest quality (Q = 1) | 1,523 | 144:1 | Extreme loss of color and detail; the leaves are nearly unrecognizable |

The medium quality photo uses only 4.3% of the storage space required for the uncompressed image, but has little noticeable loss of detail or visible artifacts. However, once a certain threshold of compression is passed, compressed images show increasingly visible defects. See the article on rate–distortion theory for a mathematical explanation of this threshold effect. A particular limitation of JPEG in this regard is its non-overlapped 8×8 block transform structure.
More modern designs such as JPEG 2000 and JPEG XR exhibit a more graceful degradation of quality as the bit usage decreases – by using transforms with a larger spatial extent for the lower frequency coefficients and by using overlapping transform basis functions. Lossless further compression From 2004 to 2008 new research emerged on ways to further compress the data contained in JPEG images without modifying the represented image. This has applications in scenarios where the original image is only available in JPEG format, and its size needs to be reduced for archiving or transmission. Standard general-purpose compression tools cannot significantly compress JPEG files. Typically, such schemes take advantage of improvements to the naive scheme for coding DCT coefficients, which fails to take into account: - Correlations between magnitudes of adjacent coefficients in the same block; - Correlations between magnitudes of the same coefficient in adjacent blocks; - Correlations between magnitudes of the same coefficient/block in different channels; - The DC coefficients when taken together resemble a downscale version of the original image multiplied by a scaling factor. Well-known schemes for lossless coding of continuous-tone images can be applied, achieving somewhat better compression than the Huffman coded DPCM used in JPEG. Some standard but rarely used options already exist in JPEG to improve the efficiency of coding DCT coefficients: the arithmetic coding option, and the progressive coding option (which produces lower bitrates because values for each coefficient are coded independently, and each coefficient has a significantly different distribution). Modern methods have improved on these techniques by reordering coefficients to group coefficients of larger magnitude together; using adjacent coefficients and blocks to predict new coefficient values; dividing blocks or coefficients up among a small number of independently coded models based on their statistics and adjacent values; and most recently, by decoding blocks, predicting subsequent blocks in the spatial domain, and then encoding these to generate predictions for DCT coefficients. A freely available tool called packJPG is based on the 2007 paper "Improved Redundancy Reduction for JPEG Files." Derived formats for stereoscopic 3D JPEG Stereoscopic (JPS, extension .jps) is a JPEG-based format for stereoscopic images. It has a range of configurations stored in the JPEG APP3 marker field, but usually contains one image of double width, representing two images of identical size in cross-eyed (i.e. left frame on the right half of the image and vice versa) side-by-side arrangement. This file format can be viewed as a JPEG without any special software, or can be processed for rendering in other modes. JPEG Multi-Picture Format JPEG Multi-Picture Format (MPO, extension .mpo) is a JPEG-based format for multi-view images. It contains two or more JPEG files concatenated together. There are also special EXIF fields describing its purpose. This is used by the Fujifilm FinePix Real 3D W1 camera, Panasonic Lumix DMC-TZ20, DMC-TZ30, DMC-TZ60& DMC-TS4 (FT4), Sony DSC-HX7V, HTC Evo 3D, the JVC GY-HMZ1U AVCHD/MVC extension camcorder and by the Nintendo 3DS for its 3D Camera. In the last few years, due to the growing use of stereoscopic images, much effort has been spent by the scientific community to develop algorithms for stereoscopic image compression. 
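Since an MPO file is simply two or more JPEG files concatenated together, a naive split can be sketched in a few lines. This is only an illustration of the layout described above: the file name is hypothetical, and embedded thumbnails inside APPn payloads (which also contain SOI/EOI pairs) are not handled, so a real MPO parser would instead use the MPF index stored in the APP2 segment.

```python
def split_mpo(path):
    """Naively split a .mpo file into its concatenated JPEG images."""
    with open(path, "rb") as f:
        data = f.read()
    starts = [0]
    for i in range(2, len(data) - 1):
        # A new image starts at an SOI (FF D8) that immediately follows an EOI (FF D9).
        if data[i] == 0xFF and data[i + 1] == 0xD8 and data[i - 2] == 0xFF and data[i - 1] == 0xD9:
            starts.append(i)
    return [data[s:e] for s, e in zip(starts, starts[1:] + [len(data)])]

# images = split_mpo("stereo.mpo")   # hypothetical file name
# print(len(images), "JPEG images found")
```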
In 2002, Forgent Networks asserted that it owned and would enforce patent rights on the JPEG technology, arising from a patent that had been filed on October 27, 1986, and granted on October 6, 1987 (U.S. Patent 4,698,672). The announcement created a furor reminiscent of Unisys' attempts to assert its rights over the GIF image compression standard. The JPEG committee investigated the patent claims in 2002 and were of the opinion that they were invalidated by prior art. Others also concluded that Forgent did not have a patent that covered JPEG. Nevertheless, between 2002 and 2004 Forgent was able to obtain about US$105 million by licensing their patent to some 30 companies. In April 2004, Forgent sued 31 other companies to enforce further license payments. In July of the same year, a consortium of 21 large computer companies filed a countersuit, with the goal of invalidating the patent. In addition, Microsoft launched a separate lawsuit against Forgent in April 2005. In February 2006, the United States Patent and Trademark Office agreed to re-examine Forgent's JPEG patent at the request of the Public Patent Foundation. On May 26, 2006 the USPTO found the patent invalid based on prior art. The USPTO also found that Forgent knew about the prior art, and did not tell the Patent Office, making any appeal to reinstate the patent highly unlikely to succeed. Forgent also possesses a similar patent granted by the European Patent Office in 1994, though it is unclear how enforceable it is. As of October 27, 2006, the U.S. patent's 20-year term appears to have expired, and in November 2006, Forgent agreed to abandon enforcement of patent claims against use of the JPEG standard. The JPEG committee has as one of its explicit goals that their standards (in particular their baseline methods) be implementable without payment of license fees, and they have secured appropriate license rights for their JPEG 2000 standard from over 20 large organizations. Beginning in August 2007, another company, Global Patent Holdings, LLC claimed that its patent (U.S. Patent 5,253,341) issued in 1993, is infringed by the downloading of JPEG images on either a website or through e-mail. If not invalidated, this patent could apply to any website that displays JPEG images. The patent emerged[clarification needed] in July 2007 following a seven-year reexamination by the U.S. Patent and Trademark Office in which all of the original claims of the patent were revoked, but an additional claim (claim 17) was confirmed. In its first two lawsuits following the reexamination, both filed in Chicago, Illinois, Global Patent Holdings sued the Green Bay Packers, CDW, Motorola, Apple, Orbitz, Officemax, Caterpillar, Kraft and Peapod as defendants. A third lawsuit was filed on December 5, 2007 in South Florida against ADT Security Services, AutoNation, Florida Crystals Corp., HearUSA, MovieTickets.com, Ocwen Financial Corp. and Tire Kingdom, and a fourth lawsuit on January 8, 2008 in South Florida against the Boca Raton Resort & Club. A fifth lawsuit was filed against Global Patent Holdings in Nevada. That lawsuit was filed by Zappos.com, Inc., which was allegedly threatened by Global Patent Holdings, and seeks a judicial declaration that the '341 patent is invalid and not infringed. Global Patent Holdings had also used the '341 patent to sue or threaten outspoken critics of broad software patents, including Gregory Aharonian and the anonymous operator of a website blog known as the "Patent Troll Tracker." 
On December 21, 2007, patent lawyer Vernon Francissen of Chicago asked the U.S. Patent and Trademark Office to reexamine the sole remaining claim of the '341 patent on the basis of new prior art. On March 5, 2008, the U.S. Patent and Trademark Office agreed to reexamine the '341 patent, finding that the new prior art raised substantial new questions regarding the patent's validity. In light of the reexamination, the accused infringers in four of the five pending lawsuits have filed motions to suspend (stay) their cases until completion of the U.S. Patent and Trademark Office's review of the '341 patent. On April 23, 2008, a judge presiding over the two lawsuits in Chicago, Illinois granted the motions in those cases. On July 22, 2008, the Patent Office issued the first "Office Action" of the second reexamination, finding the claim invalid based on nineteen separate grounds. On Nov. 24, 2009, a Reexamination Certificate was issued cancelling all claims. Beginning in 2011 and continuing as of early 2013, an entity known as Princeton Digital Image Corporation, based in Eastern Texas, began suing large numbers of companies for alleged infringement of U.S. Patent No. 4,813,056 (U.S. Patent 4,813,056). Princeton claims that the JPEG image compression standard infringes the '056 patent and has sued large numbers of websites, retailers, camera and device manufacturers and resellers. The patent was originally owned and assigned to General Electric. The patent expired in December 2007, but Princeton has sued large numbers of companies for "past infringement" of this patent. (Under U.S. patent laws, a patent owner can sue for "past infringement" up to six years before the filing of a lawsuit, so Princeton could theoretically have continued suing companies until December 2013.) As of March 2013, Princeton had suits pending in New York and Delaware against more than 55 companies. General Electric's involvement in the suit is unknown, although court records indicate that it assigned the patent to Princeton in 2009 and retains certain rights in the patent. Here are some examples of standards created by ISO/IEC JTC1 SC29 Working Group 1 (WG 1), which includes the Joint Photographic Experts Group and Joint Bi-level Image experts Group: - JPEG (lossy and lossless): ITU-T T.81, ISO/IEC 10918-1 - JPEG extensions: ITU-T T.84 - JPEG-LS (lossless, improved): ITU-T T.87, ISO/IEC 14495-1 - JBIG (lossless, bi-level pictures, fax): ITU-T T.82, ISO/IEC 11544 - JBIG2 (bi-level pictures): ITU-T T.88, ISO/IEC 14492 - JPEG 2000: ITU-T T.800, ISO/IEC 15444-1 - JPEG 2000 extensions: ITU-T T.801 - JPEG XR (formerly called HD Photo prior to standardization) : ITU-T T.832, ISO/IEC 29199-2 A very important implementation of a JPEG codec is the free programming library libjpeg of the Independent JPEG Group. It was first published in 1991 and was key for the success of the standard. This library or a direct derivative of it is used in countless applications. 
|Wikimedia Commons has media related to JPEG compression.| - Better Portable Graphics new format based on intra-frame encoding of the HEVC - High Efficiency Image File Format, image container format for HEVC and other image coding formats - C-Cube an early implementer of JPEG in chip form - Comparison of graphics file formats - Comparison of layout engines (graphics) - Deblocking filter (video), the similar deblocking methods could be applied to JPEG - Design rule for Camera File system (DCF) - File extensions - Graphics editing program - Image compression - Image file formats - Lenna, the traditional standard image used to test image processing algorithms - Lossless Image Codec FELICS - Motion JPEG - "Definition of "JPEG"". Collins English Dictionary. Retrieved 23 May 2013. - MIME Type Detection in Internet Explorer: Uploaded MIME Types (msdn.microsoft.com) - JPEG File Layout and Format - ISO/IEC JTC 1/SC 29 (2009-05-07). "ISO/IEC JTC 1/SC 29/WG 1 – Coding of Still Pictures (SC 29/WG 1 Structure)". Retrieved 2009-11-11. - ISO/IEC JTC 1/SC 29. "Programme of Work, (Allocated to SC 29/WG 1)". Retrieved 2009-11-07. - ISO. "JTC 1/SC 29 – Coding of audio, picture, multimedia and hypermedia information". Retrieved 2009-11-11. - JPEG. "Joint Photographic Experts Group, JPEG Homepage". Retrieved 2009-11-08. - "T.81 : Information technology – Digital compression and coding of continuous-tone still images – Requirements and guidelines". Retrieved 2009-11-07. - William B. Pennebaker and Joan L. Mitchell (1993). JPEG still image data compression standard (3rd ed.). Springer. p. 291. ISBN 978-0-442-01272-4. - ISO. "JTC 1/SC 29 – Coding of audio, picture, multimedia and hypermedia information". Retrieved 2009-11-07. - JPEG (2009-04-24). "Press Release – 48th WG1 meeting, Maui, USA – JPEG XR enters FDIS status, JPEG File Interchange Format (JFIF) to be standardized as JPEG Part 5". Retrieved 2009-11-09. - "JPEG File Interchange Format (JFIF)". ECMA TR/98 1st ed. Ecma International. 2009. Retrieved 2011-08-01. - "Progressive Decoding Overview". Microsoft Developer Network. Microsoft. Retrieved 2012-03-23. - "JFIF File Format as PDF" (PDF). - Tom Lane (1999-03-29). "JPEG image compression FAQ". Retrieved 2007-09-11. (q. 14: "Why all the argument about file formats?") - "ISO/IEC 10918-1 : 1993(E) p.36". - Thomas G. Lane. "Advanced Features: Compression parameter selection". Using the IJG JPEG Library. - Phuc-Tue Le Dinh and Jacques Patry. Video compression artifacts and MPEG noise reduction. Video Imaging DesignLine. February 24, 2006. Retrieved May 28, 2009. - "3.9 mosquito noise: Form of edge busyness distortion sometimes associated with movement, characterized by moving artifacts and/or blotchy noise patterns superimposed over the objects (resembling a mosquito flying around a person's head and shoulders)." ITU-T Rec. P.930 (08/96) Principles of a reference impairment system for video - I. Bauermann and E. Steinbacj. Further Lossless Compression of JPEG Images. Proc. of Picture Coding Symposium (PCS 2004), San Francisco, USA, December 15–17, 2004. - N. Ponomarenko, K. Egiazarian, V. Lukin and J. Astola. Additional Lossless Compression of JPEG Images, Proc. of the 4th Intl. Symposium on Image and Signal Processing and Analysis (ISPA 2005), Zagreb, Croatia, pp.117–120, September 15–17, 2005. - M. Stirner and G. Seelmann. Improved Redundancy Reduction for JPEG Files. Proc. 
of Picture Coding Symposium (PCS 2007), Lisbon, Portugal, November 7–9, 2007 - Ichiro Matsuda, Yukio Nomoto, Kei Wakabayashi and Susumu Itoh. Lossless Re-encoding of JPEG images using block-adaptive intra prediction. Proceedings of the 16th European Signal Processing Conference (EUSIPCO 2008). - "Latest Binary Releases of packJPG: V2.3a". January 3, 2008. - J. Siragusa, D. C. Swift, "General Purpose Stereoscopic Data Descriptor", VRex, Inc., Elmsford, New York, USA, 1997. - Tim Kemp, JPS files - "Multi-Picture Format" (PDF). 2009. Retrieved 2015-12-30. - cybereality, MPO2Stereo: Convert Fujifilm MPO files to JPEG stereo pairs, mtbs3d, retrieved 12 January 2010 - Alessandro Ortis, Sebastiano Battiato, A new fast matching method for adaptive compression of stereoscopic images, SPIE - Three-Dimensional Image Processing, Measurement (3DIPM), and Applications 2015, retrieved 30 April 2015 - Alessandro Ortis, Francesco Rundo, Giuseppe Di Giore, Sebastiano Battiato, Adaptive Compression of Stereoscopic Images, International Conference on Image Analysis and Processing (ICIAP) 2013, retrieved 30 April 2015 - "Concerning recent patent claims". Jpeg.org. 2002-07-19. Retrieved 2011-05-29. - JPEG and JPEG2000 – Between Patent Quarrel and Change of Technology at the Wayback Machine (archived August 17, 2004) - Kawamoto, Dawn (April 22, 2005). "Graphics patent suit fires back at Microsoft". CNET News. Retrieved 2009-01-28. - "Trademark Office Re-examines Forgent JPEG Patent". Publish.com. February 3, 2006. Retrieved 2009-01-28. - "USPTO: Broadest Claims Forgent Asserts Against JPEG Standard Invalid". Groklaw.net. May 26, 2006. Retrieved 2007-07-21. - "Coding System for Reducing Redundancy". Gauss.ffii.org. Retrieved 2011-05-29. - "JPEG Patent Claim Surrendered". Public Patent Foundation. November 2, 2006. Retrieved 2006-11-03. - Ex Parte Reexamination Certificate for U.S. Patent No. 5,253,341 Archived June 2, 2008 at the Wayback Machine - Workgroup. "Rozmanith: Using Software Patents to Silence Critics". Eupat.ffii.org. Retrieved 2011-05-29. - "A Bounty of $5,000 to Name Troll Tracker: Ray Niro Wants To Know Who Is saying All Those Nasty Things About Him". Law.com. Retrieved 2011-05-29. - Reimer, Jeremy (2008-02-05). "Hunting trolls: USPTO asked to reexamine broad image patent". Arstechnica.com. Retrieved 2011-05-29. - U.S. Patent Office – Granting Reexamination on 5,253,341 C1 - "Judge Puts JPEG Patent On Ice". Techdirt.com. 2008-04-30. Retrieved 2011-05-29. - "JPEG Patent's Single Claim Rejected (And Smacked Down For Good Measure)". Techdirt.com. 2008-08-01. Retrieved 2011-05-29. - Workgroup. "Princeton Digital Image Corporation Home Page". Retrieved 2013-05-01. - Workgroup. "Article on Princeton Court Ruling Regarding GE License Agreement". Retrieved 2013-05-01. - JPEG Standard (JPEG ISO/IEC 10918-1 ITU-T Recommendation T.81) at W3.org - Official Joint Photographic Experts Group site - JFIF File Format at W3.org - JPEG viewer in 250 lines of easy to understand python code - Wotsit.org's entry on the JPEG format - Example images over the full range of quantization levels from 1 to 100 at visengi.com - Public domain JPEG compressor in a single C++ source file, along with a matching decompressor at code.google.com - Example of .JPG file decoding - Jpeg Decoder Open Source Code , Copyright (C) 1995–1997, Thomas G. Lane. - JPEG compression and decompression on GPU.
Serial Terminal Basics

Serial Terminal Overview

COM ports. Baud rate. Flow control. Tx. Rx. These are all words that get thrown around a lot when working with electronics, especially microcontrollers. For someone who isn't familiar with these terms and the context in which they are used, they can be confusing at times. This tutorial is here to help you understand what these terms mean and how they form the larger picture that is serial communication over a terminal.

In short, serial terminal programs make working with microcontrollers that much simpler. They allow you to see data sent to and from your microcontroller, and that data can be used for a number of reasons including troubleshooting/debugging, communication testing, calibrating sensors, configuring modules, and data monitoring. Once you have learned the ins and outs of a terminal application, it can be a very powerful tool in your electronics and programming arsenal.

Covered in this Tutorial

There are lots of different terminal programs out there, and they all have their pros and cons. In this tutorial we will discuss what a terminal is, which terminal programs are best suited for certain situations and operating systems, and how to configure and use each program. You should be familiar with these topics before diving into this tutorial. If you need a refresher, feel free to pop on over to these links. We'll be right here waiting.

What is a Terminal?

Terminal emulators go by many names, and, due to the varied use of the word terminal, there can often be some confusion about what someone means when they say terminal. Let's clear that up.

To understand the use of the word terminal, we must visit the not so distant past. Back when computers were big, bulky, and took up entire rooms, there were only a handful of ways to interface with them. Punch cards and paper tape reels were two such interfaces, but there was also what was known as a terminal that was used for entering and retrieving data. These terminals came in many form factors, but they soon began to resemble what would become their personal computer descendants. Many consisted of a keyboard and a screen. Terminals that could display text only were referred to as text terminals, and later came graphical terminals. When discussing terminal emulators, it's these terminals of days past that are being referenced.

An OG terminal

Today, terminal programs are "emulating" the experience that was working on one of these terminals. They are known as emulators, applications, programs, terms, TTYs, and so on. For the purposes of this tutorial, just the word terminal will be used. Many terminals used to emulate specific types of computer terminals, but today, most terminals are more generic in their interface.

When working on a modern operating system, the phrase terminal window will often be used to describe working within one of these applications. And, often, when reading other tutorials and hookup guides, you will be requested to open a terminal window. Just know that means to open whichever one of these terminal programs strikes your fancy.

It is also worth noting that many terminal programs are capable of much more than just serial communication. Many have network communication capabilities such as telnet and SSH. However, this tutorial will not cover these features.

Terminal vs Command Line

A terminal is not a command prompt, though the two are somewhat similar. In Mac OS, the command prompt is even called Terminal. Hence the confusion when using that word.
Regardless, you can perform some of the same tasks in a command prompt that you could also perform within a terminal window, but it doesn’t work the other way around; you cannot issue command line statements within a terminal window. We will go over how to create a serial terminal connection within a command line interface later in this tutorial. For now, just know how to distinguish between the two. Here are some terms you should be familiar with when working within a serial terminal window. Many of these terms are covered in a lot more detail in our Serial Communication tutorial. It highly recommended that you read that page as well to get the full picture. ASCII - Short for the American Standard Code for Information Interchange’s character encoding scheme, ASCII encodes special characters from our keyboards and converts them to 7-bit binary integers that can be recognized by a number of programs and devices. ASCII charts are very helpful when working with serial terminals. Baud Rate - In short, baud rate is how fast your data is being transmitted and received. 9600 is the standard rate, but other speeds are typical amongst certain devices. Just remember that all the links in your chain of communication have to be “speaking” at the same speed, otherwise data will be misinterpreted on one end or the other. Transmit (TX) - Also known as Data Out or TXO. The TX line on any device is there to transmit data. This should be hooked up to the RX line of the device with which you would like to communicate. Receive (RX) - Also known as Data In or RXI. The RX line on any device is there to receive data. This should be hooked up to the TX line of the device with which you would like to communicate. COM Port (Serial Port) - Each device you connect to your computer will be assigned a specific port number. This helps to identify each device connected. Once a device has a port assigned to it, that port will be used every time that device is plugged into the computer. Your device will show up on your computer as either COM# (if you’re on a Windows machine) or /dev/tty.usbserial-######## (if you’re on a Mac/Linux computer), where the #’s are unique numbers or alphabetic characters. TTY - TTY stands for teletypewriter or teletype. Much like terminal is synonymous with the terminals of old, so too is teletype. These were the electromechanical typewriters used to enter information to the terminal and, thus, to the mainframe. When working with terminals on Mac and Linux, you will often see tty used to represent a communication port rather than ‘COM port’. Data, Stop, and Parity Bits - Each packet of data sent to and from the terminal has a specific format. These formats can vary, and the settings of your terminal can be adjusted accordingly to work with different packet configurations. One of the most common configurations you’ll see is 8-N-1, which translates to 8 data bits, no parity bit, and one stop bit. Flow Control - Flow control is controlling the rate at which data is sent between devices to ensure that the sender is not sending data faster than the receiver can receive the data. In most applications used throughout these tutorials, you will not need to use flow control. The flow control may also be present in the shorthand notation: 8-N-1-None, which stands for no flow control. Carriage Return & Line Feed - Carriage return and line feed are the ASCII characters sent when you press the enter key on your keyboard. These terms have roots from the days of typewriters. 
Carriage return meant the carriage holding the paper would return to the starting point of that particular line. Line feed (aka new line) meant the carriage should move to the next line to prevent typing over the previous line. When typing on a modern keyboard, these terms still apply. Every time you press enter (or return) you are telling your cursor to move down to the next line and move to the beginning of that new line. Consulting our handy-dandy ASCII table, we can see that the character for line feed is 10 (0x0A in hex) and carriage return is 13 (0x0D in hex). The importance of these two characters cannot be stressed enough. When working in a terminal window you’ll often need to be aware of which of these two characters, if not both, are being used to emulate the enter key. Some devices only need one character or the other to know that a command has been sent. More importantly, when working with microcontrollers, be aware of how you are sending data. If a string of 5 characters needs to be sent to the micro, you may need a string that can actually hold 7 characters on account of the 10 and 13 sent after every command. Local Echo - Local echo is a setting that can be changed in either the serial terminal or the device to which you are talking, and sometimes both. This setting simply tells the terminal to print everything you type. The benefit from this is being able to see if you are in fact typing the correct commands should you encounter errors. Be aware, though, that sometimes local echo can come back to bite you. Some devices will interpret local echo as double type. For example, if you type hello with local echo on, the receiving device might see hheelllloo, which is likely not the correct command. Most devices can handle commands with or without local echo. Just be aware that this can be an issue. Serial Port Profile (SPP) - The Serial Port Profile is a Bluetooth profile that allows for serial communication between a Bluetooth device and a host/slave device. With this profile enabled, you can connect to a Bluetooth module through a serial terminal. This can be used for configuration purposes or for communication purposes. While not exactly pertinent to this tutorial, it’s still good to know about this profile if you want to use Bluetooth in a project. Connecting to Your Device Now that you know what a terminal is and the lingo that comes with the territory, it’s time to hook up a device and communicate with it. This page will show you how to connect a device, how to discover which port it has been assigned, and how to communicate over that port. What You’ll Need For this example you’ll need - An FTDI Basic - 5V or 3.3V will work fine. You can also use an FTDI Cable if that’s all you have. - A USB Mini-B Cable - (Not necessary if you have an FTDI Cable.) - A jumper wire - Most FTDI products have female headers, so a male-to-male jumper cable should suffice. Or, you could just use a piece of wire that is stripped on both ends. Discovering Your Device Once you have all your supplies ready, attach the FTDI Basic to the USB cable, and attach the cable to your computer. If this is the first time you’ve plugged in a device of this nature into your computer, you may need to install the drivers. If this is the case, visit our FTDI Driver Installation Guide. If the drivers are all up to date, carry on. Depending on which operating system you’re using, there are a few different ways to discover which port your device has been assigned. 
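If you would rather have a script do the looking, the ports can also be listed programmatically. The sketch below assumes the third-party pyserial package (not part of this tutorial's supplies) and works the same way on Windows, Mac, and Linux; the OS-specific methods follow next.

```python
# Requires the third-party pyserial package: pip install pyserial
from serial.tools import list_ports

for port in list_ports.comports():
    # port.device is the name you would open, e.g. COM4 or /dev/tty.usbserial-XXXX
    print(port.device, "-", port.description)
```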
No matter which version of Windows you have, you have a program called Device Manager. To open Device Manager, open the Start menu, and type "device manager" into the search bar. Press enter, and it'll open right up. Or, you can right-click on My Computer, select Properties, and open Device Manager from there (Windows 7). If you intend on using your computer to communicate with several serial devices, it may be worth creating a desktop shortcut to Device Manager.

Once you've got Device Manager open, expand the Ports tab. Here is where the information we need lives. In this image, we have just a few COM ports showing up. The first thing to know is that COM1 is ALWAYS reserved for the true Serial Port, not USB. You know those grey, bulky cables, which have a DB9 connection on each end. Yeah, that serial port. Many computers (especially laptops) no longer have serial ports, and they are becoming obsolete in exchange for more USB ports. Nevertheless, the OS still reserves COM1 for that port for people who still have a true serial port on their computer. Another port that is likely to show up on most computers is LPT1. This is reserved for the parallel port. Parallel ports and cables are becoming even more obsolete than serial cables, but, again, many computers still have these ports (they're often used to connect to printers) and have to accommodate for that in the OS. With those out of the way, we can focus on the ports that we do need to use.

Now with your FTDI plugged in, you should see a new COM port get added to the list. Typically, your computer will enumerate your devices. For instance, if this is the first serial communication device you've plugged into your computer, it should enumerate as COM2. On my computer this is not the first device I've plugged in, but rather the eighth, so it has enumerated as COM9 (don't forget about COM1).

What's important to know is that once a device has been associated with your computer and has had a port assigned to it, the computer will remember that device every time it's attached. So, if you have an Arduino board that has been assigned COM4 for example, it is not necessary to open Device Manager and check which COM port it is on every time, because that device will now always be on COM4. This can be good and bad. Most people will never plug more than a couple dozen serial devices into their computers. However, some people will plug in lots of devices, and your computer can only assign so many ports (256 if I remember correctly). Thus, it may be necessary to delete some COM ports. We will discuss that in the tips and tricks section.

If you do have multiple devices and are not sure which device is the one you just plugged in, unplug it, watch for whichever COM port disappears, and then plug it back in. The COM port should reappear, letting you know that's the device you're looking for.

One last thing to mention is that all serial devices, even if they require different drivers, will show up as COM ports in Windows. For example, an Arduino Uno and the FTDI Basic both have different drivers and are technically two different types of devices. Windows doesn't discriminate. It will treat both devices the same, and all you have to worry about is which COM port it's associated with. Mac OS and Linux treat this slightly differently. Read on to find out.

Command Line (Mac, Linux)

Similar to Windows, Mac OS and Linux assign a specific port to every device attached to the computer.
However, unlike Windows, there is no specific program you can open up to view all the devices currently attached. Have no fear. There is still a simple solution to find your device.

The default command line interface for Mac OS X is Terminal. To open it, go to your Utilities folder. There you should see the icon for Terminal. I'm going to assume that if you're using Linux, you already know how to open a command line window. Once open, you should see the typical terminal screen. To see a list of all the available serial ports on both Mac and Linux, type this command: ls /dev/tty.*

You should now see a list of all serial ports on your computer. You'll notice a few Bluetooth ports on there. I have several Bluetooth devices paired with my computer, so you may have more or fewer devices that show up depending on what devices have been paired with your computer. (Notice the SPP portion of these names. That indicates the Bluetooth device can talk to the serial terminal as well.)

The important devices to note are the tty.usbserial and the tty.usbmodem. For this example I have both an FTDI Basic and an Arduino Uno plugged into my computer. This is just to show you the key difference between the two. As mentioned earlier, some devices are treated differently depending on how they communicate with the computer. The FT232 IC on the FTDI Basic is a true serial device, and, thus, it shows up as usbserial. The Uno, on the other hand, is an HID device and shows up as a usbmodem device. The HID (Human Interface Device) profile is used for keyboards, mice, joysticks, etc., and, as an HID device, the computer treats it slightly differently, despite the fact that it can still send serial data. In either case, these tty.usb______ ports are what we're after when connecting to a serial terminal.

With that out of the way, it's time to actually communicate with the FTDI. The specifics of each terminal program will be discussed in the following sections. This example will be shown in CoolTerm, but be aware that this can be done with any terminal. Open up a terminal with the correct settings: 9600, 8-N-1-None. Make sure local echo is turned off for this test. Take your jumper wire and connect it to the TX and RX lines of the FTDI Basic. Everything you type should be displayed in the terminal window. It's nothing fancy, but you are now communicating with the terminal. Data is being sent from your keyboard, to the computer, through the USB cable to the FTDI, out the FTDI's TX pin, into the RX pin, back through the USB cable, into the computer, and is finally displayed in the terminal window. Don't believe me? Unplug the jumper and type some more. Provided you did turn local echo off, you should not see anything being typed. This is the echo test.

If you have two FTDI boards or other similar serial devices, try hooking up both of them. Connect the TX line of one to the RX line of the other and vice versa. Then, open two serial terminal windows (yes, you can have multiple terminal windows open at once), each connected to a different device. Make sure they are both set to the same baud rate and settings. Then connect, and start typing. What you type in one terminal should show up in the opposite terminal and vice versa. You've just created a very simplistic chat client!

Now let's explore the different terminal programs.

Arduino Serial Monitor (Windows, Mac, Linux)

The Arduino Integrated Development Environment (IDE) is the software side of the Arduino platform.
And, because using a terminal is such a big part of working with Arduinos and other microcontrollers, they decided to include a serial terminal with the software. Within the Arduino environment, this is called the Serial Monitor.

Making a Connection

Serial Monitor comes with any and all versions of the Arduino IDE. To open it, simply click the Serial Monitor icon. The icon is located to the right of the other icons in Arduino 0023 and below. The icon is located to the far right in Arduino 1.0 and beyond. Selecting which port to open in the Serial Monitor is the same as selecting a port for uploading Arduino code. Go to Tools -> Serial Port, and select the correct port. Once open, you should see something like this:

The Serial Monitor has limited settings, but enough to handle most of your serial communication needs. The first setting you can alter is the baud rate. Click on the baud rate drop-down menu to select the correct baud rate. You can also change the enter key emulation to carriage return, line feed, both, or neither. Last, you can set the terminal to autoscroll or not by checking the box in the bottom left corner.

- The Serial Monitor is a great quick and easy way to establish a serial connection with your Arduino. If you're already working in the Arduino IDE, there's really no need to open up a separate terminal to display data.
- The lack of settings leaves much to be desired in the Serial Monitor, and, for advanced serial communications, it may not do the trick.

HyperTerminal is the de facto terminal program for any Windows OS up to XP – Windows Vista, 7, and 8 don't include it. If you're on Windows Vista, 7, or 8, and really just have to have HyperTerminal, a little scouring of the Internet should turn up some workarounds. Better alternatives are more easily available, however – we'll get to those shortly. If you're on a pre-Vista machine, and only have HyperTerminal to work with, here are some tips and tricks for using it:

Initiating a Connection

When initially opening up HyperTerminal, it will present you with a "Connection Description" dialog. Enter any name you please, and, if you really want to get fancy, select your favorite icon. Then hit "OK". (If this window didn't pop up, go to File > New Connection to open it.) None of the settings in this first window have any effect on the serial communication.

On the next window, ignore the first three text boxes – we're not working with a dial-up modem here. Do select your COM port next to the "Connect using" box. Then hit "OK". The settings on the next box should look pretty familiar. Make sure the "Bits per second" dropdown is set to the correct baud rate. And verify that all of the other settings are correct. Hit "OK" once everything looks correct there. It doesn't look like much, but you now have an open terminal! Type in the blank white area to send data, and anything that is received by the terminal will show up there as well.

There are some limited adjustments we can make to the HyperTerminal UI. To find them, go to File > Properties. Under the "Settings" tab you'll see most of the options. If you want to see what you're typing in the terminal, you can turn on local echo. To flip this switch, hit the "ASCII Setup" button, then check "Echo typed characters locally". The other settings are very specific to formatting how characters are sent or received. For most cases they can be left as they are.

Those who have used HyperTerminal have either come to accept it for what it is, or sought out some other – any other(!)
– terminal program. It’s not great for serial communication, but it does work. Let’s explore some of the better alternatives! Tera Term (Windows) Tera Term is one of the more popular Windows terminal programs. It’s been around for years, it’s open source, and it’s simple to use. For Windows users, it’s one of the best options out there. You can download a copy from here. Once you have Tera Term installed, open up it up, and let’s poke around. Making a Connection You should initially be presented with a “TeraTerm: New connection” pop-up within the program. Here, you can select which serial port you’d like to open up. Select the “Serial” radio button. Then select your port from the drop-down menu. (If this window doesn’t open when you start TeraTerm, you can get here by going to **File > New connection…“.) That’ll open up the port. TeraTerm defaults to setting the baud rate at 9600 bps (8-N-1). If you need to adjust the serial settings, go up to Setup > Serial Port. You’ll see a window pop up with a lot of familiar looking serial port settings. Adjust what you need to and hit “OK”. The title of your TeraTerm window should change to something like “COM##:9600baud” – good sign. That’s about all there is to it. The blank window with the blinking cursor is where data is both sent (by typing it in) and received. TeraTerm Tips and Tricks It can be weird to type stuff in the window and not see it show up in the terminal. It’s undoubtedly still flowing through the serial terminal to your device, but it can be difficult to type when you don’t have any visual feedback for exactly what you’re typing. You can turn on local echo by going to the Setup menu and selecting Terminal. Check the Local echo box if you’d like to turn the feature on. There are other settings to be made in this window as well. You can adjust the size of the terminal (the values are in terms of characters per row/column), or adjust how new-lines are displayed (either a carriage return, line feed, or both). Clear Buffer and Clear Screen If you want to clear your terminal screen you can use either the “Clear buffer” or “Clear screen” commands. Both are located under the Edit menu. Clear screen will do just that, blank out the terminal screen, but any data received will still be preserved in the buffer. Scroll up in the window to have another look at it. Clear buffer deletes the entire buffer of received data – no more data to scroll up to. Menus are a pain! If you want to get really fast with TeraTerm, remember some of these shortcuts: - ALT+N: Connects to a new serial port. - ALT+I: Disconnects from the current port. - ALT+V: Pastes text from clipboard to the serial port (not CTRL+V). - ALT+C: Copy selected text into clipboard (not CTRL+C). - CTRL+TAB: Switch between two TeraTerm windows. TeraTerm is awesome for simple ASCII-only serial terminal stuff, but what if you need to send a string of binary values ranging from 0-255? For that, we like to use RealTerm. RealTerm is designed specifically for sending binary and other difficult-to-type streams of data. RealTerm is available to download on their SourceForge page. Setting Up the Serial Port When you open up RealTerm, you’ll be presented with a blank window like below. The top half is where you’ll type data to send, and it’ll also display data received. The bottom half is split into a number of tabs where we adjust all of the settings. Let’s get connected! To begin, navigate to the “Port” tab. On the “Port” dropdown here, select the number of your COM port. 
Then, make sure the baud rate and other settings are correct. You can select the baud rate from the dropdown, or type it in manually. With all of those settings adjusted, you’ll have to click “Open” twice to close and re-open the port (clicking “Change” doesn’t work until after you’ve established a connection on a COM port). That’s all there is to that! Type stuff in the black ether above to send data, and anything received by the terminal will pop up there too. Sending Sequences of Values The ability to send long sequences of binary, hexadecimal, or decimal values is what really sets RealTerm apart from the other terminal programs we’ve discussed. To access this function, head over to the “Send” tab. Then click into either of the two text boxes next to “Send Numbers”. This is where you enter your number sequence, each value separated by a space. The numbers can be a decimal value from 0 to 255, or a hexadecimal value, which are prefixed with either a “0x” or a ‘$’. Once you have your string typed out, hit “Send Numbers” and away they go! Why would you need this you ask? Well, let’s say you had a Serial Seven Segment Display hooked up to an FTDI Basic, which is connected to your computer. This is a pretty cool setup – you can control a 7-segment display by just typing in your terminal. But what if you wanted to dim the display? You’d need to send two sequential bytes of value 123 and 0. How would you do that with the handful of keys on a keyboard? Consulting an ASCII table to match binary values to characters, you’d have to press DEL for 127 and CTRL+SHIFT+2 (^@) for 0…or just use the “Send” tab in RealTerm! Adjusting the Display Just as you can use RealTerm to send literal binary values, you can also use it to display them. On the “Display” tab, under the “Display As” section are a wide array of terminal display choices. You can have data coming in displayed as standard ASCII characters, or you can have them show up as hex values, or any number of other display types. Incoming bytes are displayed as hexadecimal values. Can you decode the secret message?! RealTerm is preferred for more advanced terminal usage. We’ll use it when we need to send specific bytes, but for more basic terminal applications, TeraTerm is our go-to emulator. YAT - Yet Another Terminal (Windows) YAT is a user-friendly and feature-rich serial terminal. It features text as well as binary communication, predefined commands, a multiple-document user interface and lots of extras. YAT is available to download at SourceForge. YAT features a multiple-document user interface (MDI) that consists of a single workspace with one or more terminals. Each terminal can be configured according to the device it shall be communicating with. These extra features make a terminal especially easy to use: - Text command console - File command list - Unlimited number of predefined commands - Drop-down of recent commands Each terminal has its own monitor to display outgoing and incoming data. The view can be configured as desired: - Time stamp - Line number - End-of-line sequence - Line length - Line and bytes transmission rate Most of these features can be enabled and configured, or hidden for a cleaner and simpler user interface. 
- Text or binary communication
- Communication port type:
  - Serial Port (COM)
  - TCP/IP Client, Server or AutoSocket
  - UDP/IP Socket
  - USB serial HID
- Specific settings depending on port type

Serial COM Port Settings / TCP and UDP Settings / USB Serial HID Settings

Text Terminal Settings
- Full support of any known ASCII and Unicode encoding
- End-of-line configuration
- Predefined and free-text sequences
- Possibility to define separate EOL for Tx and Rx
- Send and receive timing options
- Character substitution
- Comment exclusion

Binary Terminal Settings
- Configuration of protocol and line representation
- Possibility to define separate settings for Tx and Rx

Other features
- Various display options
- Various advanced communication options
- Specialized communication options for serial ports (COM)
- Escapes for bin/oct/dec/hex values
- Escapes for ASCII controls like <CR><LF> as well as C-style escapes
- Special commands
- Versatile monitoring and logging of sent and received data
- Formatting options for excellent readability
- Powerful keyboard operation including shortcuts for the most important features
- Versatile shell/PowerShell command line
- x86 (32-bit) and x64 (64-bit) distribution

Change Management and Support

YAT is fully hosted on SourceForge. Feature requests and bug reports can be entered into the corresponding tracker. Both trackers can be filtered and sorted, either using the predefined searches or the list view. Support is provided by a few simple helps integrated into the application, some screenshots on the SourceForge page, and the project's email if none of the above can help. YAT is implemented in C#.NET using Windows.Forms. The source code is implemented in a very modular way. Utilities and I/O sub-systems can also be used independently of YAT, e.g. for any other .NET based application that needs serial communication, command line handling or just a couple of convenient utilities. Testing is done using an NUnit based test suite. Project documentation is done in OpenOffice. For more details and contributions to YAT, refer to Help > About.

CoolTerm (Windows, Mac, Linux)

CoolTerm is useful no matter which operating system you're using. However, it is especially useful in Mac OS, where there aren't as many terminal options as there are in Windows. You can download the latest version of CoolTerm here.

Making a Connection

Download and open a CoolTerm window. To change the settings, click the Options icon with the little gear and wrench. You'll be presented with this menu: Here, you can select your port, baud rate, bit options, and flow control. Now click on the Terminal tab on the left. Here, you can change the enter key emulation (carriage return/line feed), turn local echo off or on, and switch between line mode and raw mode. Line mode doesn't send data until enter has been pressed. Raw mode sends characters directly to the screen. Once all your settings are correct, the Connect and Disconnect buttons will open and close the connection. The settings and status of your connection will be displayed in the bottom left corner. If you need to clear the data in the terminal screen, click the Clear Data icon with the large red X on it. If you're getting annoyed with not being able to use the backspace, turn on 'Handle Backspace Character' under the Terminal tab under Options. One awesome feature of CoolTerm is Hex View. If you want to see the actual hex values of the data you are sending rather than the ASCII values, Hex View is a tremendous help.
Click the View Hex icon. The terminal's appearance will change slightly. Now whatever you type will show up as hex and ASCII. The first column is just keeping track of line numbers. The second column is the hex values, and the last column is the actual ASCII characters you type. Here I've typed hello and <enter>. Notice the 0D and 0A that appear for carriage return and line feed. To get back to ASCII mode, click the View ASCII icon. You can also use the Send String option to send entire strings of text. In the connection menu, select Send String. You should now have a dialog box with which to send your string in hex or ASCII mode.

ZTerm (Mac)

You can download the latest version of ZTerm here. ZTerm is another terminal option for Mac users. Compared to CoolTerm, it seems a lot less user friendly; however, once you find your way around, it's just as useful.

Making a Connection

When you first open ZTerm, you'll be greeted with this prompt: Choose the correct port, and click OK. You should now have a blank terminal window. *Note: Once you've made a connection, ZTerm will open the most recent connection every time you run it. This can be annoying if you have multiple connections available. To get around this auto connect, hold down the SHIFT key as you start ZTerm. This will bypass the auto connect and ask you to which port you'd like to connect. Once you're connected, you can change the terminal settings by going to Settings -> Connection. Here you can change the baud rate (data rate); parity, data, and stop bits; flow control; and turn local echo on or off. If you need to change your port after establishing a connection, go to Settings -> Modem Preferences. Choose the correct port under the Serial Port dropdown menu. ZTerm has lots of other uses for network communication, but that is beyond the scope of this tutorial. One nice feature is macros. Go to Macros -> Edit Macros. Here you can create macros that send whatever strings/commands you'd like. Have a command that you're typing constantly? Make a macro for it!

Command Line (Windows, Mac, Linux)

As mentioned earlier, you can use command line interfaces to create serial connections. The major limiting factor is the lack of connection options. Most of the programs we've discussed so far have a slew of options that you can tweak for your specific connection, whereas the command line method is more of a quick and dirty way of connecting to your device in a pinch. Here's how to accomplish this on the three major operating systems.

Terminal and Screen (Mac, Linux)

Open Terminal. See the Connecting to Your Device section for directions. Type ls /dev/tty.* to see all available ports. You can now use the screen command to establish a simple serial connection: screen <port_name> <baud_rate> creates the connection. The terminal will go blank with just a cursor. You are now connected to that port! To disconnect, type control-a followed by control-\. The screen will then ask if you are sure you want to disconnect. There are other options you can control from screen; however, it is recommended that you only use this method if you are comfortable with the command line. Type man screen for a full list of options and commands. The screen command can also be used in Linux. There are only a few variations from the Mac instructions. If you do not have screen installed, get it with sudo apt-get install screen. Making a connection is the same as on Mac, and you can disconnect with the same control-a key sequence. That's all there is to it.
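If you like the command-line approach but want a little more scripting power (for example, firing off raw byte values the way RealTerm's "Send Numbers" box does), a short Python script can stand in for screen. This is only an illustrative sketch: it assumes the third-party pyserial package is installed, and the port name and byte values shown are placeholders.

```python
# Quick-and-dirty command-line serial session in Python (assumes pyserial).
import serial
from serial.tools import list_ports

# Roughly equivalent to `ls /dev/tty.*`: list the ports pyserial can see.
for p in list_ports.comports():
    print(p.device)

PORT = "/dev/tty.usbserial-A6008hIZ"   # placeholder: substitute your own port
ser = serial.Serial(PORT, 9600, timeout=1)

ser.write(b"hello\r\n")        # send ordinary ASCII text
ser.write(bytes([123, 0]))     # send arbitrary raw byte values, RealTerm-style
print(ser.read(64))            # dump whatever the device sends back

ser.close()
```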
MS-DOS Prompt (Windows)

The fastest way to get to the command line in Windows is to click on the start menu, type cmd into the search field, and press Enter. This will open up a blank MS-DOS command line prompt. To be able to issue serial commands, you must first enter PowerShell. Type powershell to get into PowerShell command mode. To see a list of all the available COM ports, type [System.IO.Ports.SerialPort]::GetPortNames(). You should now see something like this... Now create an instance of the port you want with this command: $port = new-Object System.IO.Ports.SerialPort COM#,Baudrate,None,8,one With that, you can now connect to and send data to or from that COM port: $port.open() $port.WriteLine("some string") $port.ReadLine() $port.Close() Again, this method of serial communication is only recommended for advanced command line users.

Tips and Tricks

Changing/Deleting COM Ports (Windows)

There may come a time when you need a device to be on a specific COM port. For example, in older versions of TeraTerm you could only connect to COM ports 16 and below; if your device was on COM 17, you'd have to change it before you could connect to it. This problem has been addressed in newer versions of TeraTerm, but there are many other programs out there that only allow a certain number of COM ports. To get around this, we'll have to dive into Device Manager. Open Device Manager, and expand the Ports tab. Now right-click on the port you want to alter and select Properties. In Properties, go to Port Settings, and select Advanced. Here, you'll see a drop down menu with all the available COM ports in it. Some of them will have (in use) next to them. These are the ports that have been assigned to a serial device. Notice that COM 9 doesn't have an (in use) next to it because that is the port we are currently working with. If we wanted to change COM 9 to COM 3, we simply select COM 3 in this menu, and click OK. The (in use) next to COM 3 should go away. Whatever was connected to COM 9 is now associated with COM 3, and whatever was associated with COM 3 has now been overwritten. If you need to clear out some old COM ports, you can follow the steps above for numerous COM ports. WARNING: Do not select COM 1 when cleaning up old ports. This trick is only for when you really need it and shouldn't be performed very often, for sanity's sake.

TTY vs CU (Mac, Linux)

In Unix and Linux environments, each serial communication port has two parts to it, a tty.* and a cu.*. When you look at your ports in, say, the Arduino IDE, you'll see both for one port. The difference between the two is that a TTY device is used to call into a device/system, and the CU device (call-up) is used to call out of a device/system. This allows for two-way communication at the same time (full-duplex). This is more important to know if you are doing network communications through a terminal or other program, but it is still a question that comes up frequently. Just know that, for the purposes of this tutorial, always use the tty option for serial communication.

Cannot Connect to That Port!

You can only have one connection to a particular port open at any given time (but you can have multiple terminal windows connected to different ports open at the same time). Thus, if you have an Arduino Serial Monitor window open and try to connect to that same port on a different terminal program, it will yell at you and say it could not establish a connection with that port or some such jazz. If you are ever having trouble connecting to a port, make sure it's not open somewhere else.
If you don’t have another connection open and still can’t connect, make sure all your settings (baud rate, etc.) are correct. Connected, But Can’t See Any Data If you are connected to the correct port but don’t see any data, there are two possible culprits. First check your baud rate. I know I sound like a broken record, but baud rate is the most important setting to match up. Check that baud! The other culprit could be that the TX and RX lines are reversed. Make sure you have TX->RX and RX->TX. Programming Arduino and Serial Communication The Arduino has one dedicated UART, which is just the fancy name for the serial TX and RX lines. It is over these two lines that the Arduino gets programmed. Thus, when working with the Arduino (or other microcontrollers) it’s best to avoid using these lines to communicate with other serial devices, especially if you are developing your code and need to upload frequently. What happens is, if you have another device hooked up to the UART, the data from your computer might not get interpreted correctly by the Arduino leading to code not working the way it’s supposed to or not getting uploaded at all. The same rule applies to serial terminals. If you have a terminal open on the same port that you are trying to program, it won’t work. Arduino will throw some errors about not being able to communicate with that port. If this happens, close your connection, and try again. One simple way around this is to use the Software Serial Library built into Arduino to create a separate UART for outside serial communication. That way, your Arduino can communicate on one port while still leaving the default UART open for programming. Resources and Going Further That was a lot of information! At the very least, you should walk away from this knowing what a terminal window is, how to use it, which terminal program is best suited for you and your operating system, and how to navigate that program’s interface. Again, terminal programs are a very powerful tool when working with serial devices and microcontrollers. Now go collect some data! If you’d like to know more about different types of communication, visit these tutorials: To see some products that require the use of a serial terminal, check out these hook-up guides: Your favorite terminal didn’t make the list? Tell us which terminal emulator is your favorite and why in the discussion section.
Although history celebrates James Watt as the mechanical genius whose steam engines launched the Industrial Revolution, Watt’s most enduring innovation reflects an even greater penchant for marketing. He invented horsepower — the metric and meme that effectively defined his industry. Most important, Watt’s neologism has outlived every engine he designed or built. The term horsepower represented clever rhetorical engineering by Watt and partner Matthew Boulton, whose business had prospered by charging mine owners only one-third of the cost savings achieved by replacing less-efficient Newcomen steam engines with their own. Seeking to broaden their market, the collaborators thought brewmasters might find value in this new production technology. But 18th-century British breweries used horses — not steam — to power the turning of their mills’ grindstones. So it behooved Boulton and Watt to recalculate their steam engines’ appeal accordingly. After a period of equine observation, Watt determined that the typical coal-mine pony could pull 22,000 foot-pounds per minute. To extrapolate this finding to a large horse, Watt increased these test results by 50 percent — i.e., 33,000 foot-pounds of work per minute — and called it horsepower. Some historians believe that Watt overstated the amount of power that a horse can deliver over a sustained period of time. Nonetheless, his comparison of steam engine output to a team of horses working together proved to be a remarkably persuasive marketing metric for prospective purchasers, whether brewers, millers, or mine owners. Horsepower became a global standard that helped build the Boulton & Watt brand and business. This notion of using innovative metrics — measures that gauge the unique value inherent in an innovation as a means of marketing it — goes well beyond the traditional approach of adding new “features” and “functionality” to attract consumers to products and services. By creating fresh language for the way people calibrate the worth and efficacy of a particular idea, innovative metrics have the potential to be so intrinsically compelling — or at least so creatively marketed — that they become, like horsepower, the overriding identity of a product or brand. Which means, in turn, that these metrics should be crafted with the same singular sensibility as the inventions themselves. Though that may seem a high bar to reach, devising innovative metrics can be a remarkably low-tech endeavor. For example, in the 1880s, Harley Procter, a son of the cofounder of Procter & Gamble, merely examined a laboratory analysis to add up the ingredients in Ivory Soap that didn’t fall into the category of pure soap; he learned that taken together these ingredients equaled 56/100 of 1 percent of Ivory. He subtracted that number from 100 and wrote the slogan “99−44/100% Pure.” That metric — rooted, like horsepower, in a simple empirical calculation — became an integral ingredient of the soap’s brand equity and in many ways spurred Procter & Gamble’s future success. By contrast, Willis Carrier — the entrepreneur most responsible for the commercialization of air-conditioning — chose a far more sophisticated path to an innovative metric. He conducted elaborate studies examining the complex relationships between moisture in the air and ambient temperature and studied the effectiveness of various types of cooling technology on them. 
Armed with extensive charts and scores of formulas, Carrier presented his work on the performance of air-conditioning methods in a 1911 paper, “Rational Psychrometric Formulae.” This one paper established air-conditioning as a new engineering discipline. Carrier’s technologies were particularly eye-opening because they not only managed humidity levels but did so with the accuracy of a thermostat controlling temperature. Based on Carrier’s calculations and his new equipment born of this research, prospects were persuaded to pay large sums not only to cool overheated factories but to, for the first time, “condition” — remove unwanted moisture from — their fetid air in hopes of improving production quality. The creative challenge posed by formulating innovative metrics shouldn’t be confused with “unique selling propositions” that proclaim a product’s unparalleled characteristics to convince a customer to switch brands. The purpose of innovative metrics is not to “sell” the innovation but instead to empower customers to calculate for themselves whether the innovation represents good value — along dimensions the innovator has defined. Ideally, these dimensions reflect the special competences of the innovator. Sometimes an innovative metric is a fungible concept that can evolve as rapidly as the technology it seeks to market. For example, in the early 1970s, semiconductor pioneer Intel declared MIPS — millions of instructions per second — the high-performance standard. Within a decade, Intel had upped the ante and embraced a new, faster innovative metric: “clock speed” — the rates, usually in megahertz and gigahertz, at which processors execute instructions. But Intel’s emphasis on clock speed led to a problem that the company was slow to recognize: The chips got too hot too fast and consumed an exorbitant amount of energy. Finally giving in to customer complaints — and after having lost some customers to rivals who had more energy-efficient chips — Intel recently changed the innovative metric again, this time to “performance per unit of energy.” And upon doing that, Intel positioned its “multicore” architecture, which essentially stacks two or more processors on the same single integrated circuit, as providing the best balance of computational performance and energy use. In other words, chips, previously defined by speed and performance, are now measured by performance and power — and, as important, by energy efficiency. When innovative metrics prove unreliable, they may end up discouraging the very innovations they sought to promote. That could indeed be the situation that financial-services firms now face. Many lenders adopted innovative metrics such as “value at risk” and “extreme value theory” — ostensibly to better manage their multibillion-dollar portfolios of innovative financial products such as collateralized debt and subprime mortgages. These metrics would also provide institutional investors and traders a variety of ways to assess their exposure to risk. Yet as the subprime mortgage financial meltdown stunningly affirmed, these models created more risk than value. With the enormous losses sustained by so many “innovative” lenders, investors are likely to think twice before they trust the metrics offered by some financial-services firms to sell their novel products and services. Emerging trends typically invite innovative metrics. 
For example, a large number of companies worldwide are currently seeking innovative metrics that let them assess — and communicate — the environmental impact of new products or services that they bring to market. Consequently, innovative metrics concerning recyclability and reuse seem destined to become contemporary counterparts to Procter & Gamble’s late-19th-century “99−44/100% Pure” innovation branding. An intriguing example of a possible “greenovation” metric is the nascent shift from “miles per gallon” to “gallons per 100 miles.” Duke University researchers Richard Larrick and Jack Soll argue that “miles per gallon” metrics make it too easy for consumers to miscalculate comparisons between automobile mileage performance. They found most people surveyed ranked an improvement from 34 to 50 mpg as using less gas over 10,000 miles than an improvement from 18 to 28 mpg over 10,000 miles — even though the latter saves twice as much fuel. (Going from 34 to 50 mpg saves 94 gallons; but going from 18 to 28 mpg saves 198 gallons.) These mistaken impressions were corrected when fuel efficiency was expressed more directly in gallons used per 100 miles. Viewed that way, 18 mpg becomes 5.5 gallons per 100 miles and 28 mpg translates to 3.6 gallons. As the late mathematician Richard Hamming tartly observed, “The purpose of computing is insight, not numbers.” That’s also the goal of innovative metrics. The measure of the success of innovative metrics is how clearly they convey the value — and risks — of the innovation. Watt’s steam engines, P&G’s soap, and Intel’s microprocessors might well have dominated their markets without novel metrics. But for these businesses and many others, innovative metrics made selling their products to a large number of customers a much less difficult prospect. Indeed, as many innovators are learning, oftentimes the best way to take the measure of a new market is to create a new measure for the market. Michael Schrage, a contributing editor of strategy+business, holds appointments at MIT’s Sloan School of Management and London’s Imperial College. He was previously a Washington Post reporter and a columnist for Fortune and the Los Angeles Times.
What for?

In order to be able to boot Linux on the Palm, we need a loadlin-like program that will wipe the Palm OS out of RAM and then hand control over to the Linux kernel image we provide it with.

Why is it so specific?

As explained, the bootloader has to be a Palm OS program. Unlike Linux, which is cross-platform software, our bootloader is platform-specific, since it has to be written for the Tungsten E platform and for the Palm OS (version 5).

Coding for the Palm OS

In order to be able to compile software that will run on the Palm OS, we need an SDK. There is a free Palm OS SDK called prc-tools which uses the GNU Compiler Collection. However, while newer Palm OS devices sport different kinds of processors (X-Scale, OMAP, etc.), earlier ones ran a 68000 processor. In order to keep binary compatibility, newer devices (among which is the Tungsten E) run a 68k emulator. Even so, they are still faster than their predecessors because their CPUs are much faster (and also because the Palm OS itself doesn't use emulation but is coded natively for the host CPU). However, Palm OS applications can be compiled natively for ARM: you'll gain speed, but you'll lose backward compatibility as a tradeoff.

What had already been done?

A bootloader for the Palm OS had already been written, but for older Motorola 68k-powered Palm devices. This loader is distributed with the uClinux embedded Linux distribution, but it's also available here as a tarball. A few parts of this bootloader helped us, but most of it wasn't usable, since 68ks are too different from ARMs.

What have we done?

Here are the problems we ran into, and how we tried to solve them:
- In order to boot it, we obviously need to send a kernel image to the Palm OS. However, Palm OS databases cannot be bigger than 64 Kbytes. This problem can be quite easily solved by splitting the kernel into 64 Kb chunks and re-joining them later on.
- We needed to get out of the 68k emulator, because any application is started as if it were a 68k app. This is not so hard, because the Palm OS has provision for it: we just used the PceNativeCall function.
- We had to set up the machine properly (i.e. turn off interrupts, turn off the MMU, set up registers according to Linux's expectations). That was not a very hard job for two reasons: it's fully documented, and most of it can be copied from u-boot, which fully supports OMAPs.
- Last, but not least: we had to actually launch the kernel. That may sound dumb, but it was not as easy as it might have seemed, for one very simple reason: we had turned off the MMU, so the kernel image address we had was no longer valid. Here is how we solved it: first of all, we disassembled the Palm OS itself so as to get an idea of the virtual-to-physical memory mapping. Thanks to a disassembly of the DAL.prc file (we cannot distribute this one because it's copyrighted), we managed to find where the mapping was stored. By altering the memory mapping, we created a linearly mapped memory area that wasn't used by the Palm OS in normal operation, but that was still large enough to contain a kernel image plus a small piece of code. Before disabling the MMU we copied our kernel image to this location, along with the end of the bootloader. We then jumped to the end of the bootloader and disabled the MMU. Indeed, disabling the MMU must be done from a linearly mapped memory address, otherwise the PC is no longer valid once the MMU is stopped.

A few words about Garux

The bootloader we made is named Garux. This stands for...
err... nothing :-) To get the latest version of Garux, just use our CVS repository. Here is a list of key facts about Garux:
- Garux is functional. We have had some output from a running Linux kernel on the LCD display, so we are positive about its functionality. However, it still has a few limitations (for example, I'm not sure it will correctly boot images bigger than a megabyte (i.e. 8 Mbit), because of the Palm OS heap size limit).
- Because of the peculiar file structure of the Palm OS it's not possible (or at least pretty awkward) to dissociate the bootloader from the kernel image: that's why Garux needs the kernel image at compile time (in order to split it into 32,000-byte chunks and join them in the .prc file).
- Garux has a built-in machine detection algorithm that will prevent it from running the kernel if it is run on another Palm OS device. This wasn't meant at all to prevent our code from being reused, but since running Garux on a machine that is not a Tungsten|E would obviously lead to a crash, we thought it was safer. If you wish to re-use some code from Garux for another Palm, you're obviously welcome.
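To make the chunking step concrete, here is a minimal sketch of how a kernel image could be split into 32,000-byte pieces before being packed into the .prc, and re-joined afterwards. This is not Garux's actual build code; the file name and the output handling are assumptions for illustration only.

```python
# Illustrative only: split a kernel image into fixed-size chunks so each piece
# fits under the Palm OS record size limit. Garux's real build scripts may
# differ; "zImage" is a placeholder file name.
CHUNK_SIZE = 32000  # bytes per chunk, as described above

def split_kernel(path="zImage"):
    chunks = []
    with open(path, "rb") as f:
        while True:
            piece = f.read(CHUNK_SIZE)
            if not piece:
                break
            chunks.append(piece)
    return chunks

def rejoin(chunks):
    # The bootloader's job in reverse: concatenating the chunks must give back
    # a byte-for-byte identical kernel image.
    return b"".join(chunks)

if __name__ == "__main__":
    parts = split_kernel()
    print(len(parts), "chunks")
    assert rejoin(parts) == open("zImage", "rb").read()
```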
Intragenomic conflict in populations infected by Parthenogenesis Inducing Wolbachia ends with irreversible loss of sexual reproduction © Stouthamer et al; licensee BioMed Central Ltd. 2010 Received: 19 February 2010 Accepted: 28 July 2010 Published: 28 July 2010 The maternally inherited, bacterial symbiont, parthenogenesis inducing (PI) Wolbachia, causes females in some haplodiploid insects to produce daughters from both fertilized and unfertilized eggs. The symbionts, with their maternal inheritance, benefit from inducing the production of exclusively daughters, however the optimal sex ratio for the nuclear genome is more male-biased. Here we examine through models how an infection with PI-Wolbachia in a previously uninfected population leads to a genomic conflict between PI-Wolbachia and the nuclear genome. In most natural populations infected with PI-Wolbachia the infection has gone to fixation and sexual reproduction is impossible, specifically because the females have lost their ability to fertilize eggs, even when mated with functional males. The PI Wolbachia infection by itself does not interfere with the fertilization process in infected eggs, fertilized infected eggs develop into biparental infected females. Because of the increasingly female-biased sex ratio in the population during a spreading PI-Wolbachia infection, sex allocation alleles in the host that cause the production of more sons are rapidly selected. In haplodiploid species a reduced fertilization rate leads to the production of more sons. Selection for the reduced fertilization rate leads to a spread of these alleles through both the infected and uninfected population, eventually resulting in the population becoming fixed for both the PI-Wolbachia infection and the reduced fertilization rate. Fertilization rate alleles that completely interfere with fertilization ("virginity alleles") will be selected over alleles that still allow for some fertilization. This drives the final resolution of the conflict: the irreversible loss of sexual reproduction and the complete dependence of the host on its symbiont. This study shows that dependence among organisms can evolve rapidly due to the resolution of the conflicts between cytoplasmic and nuclear genes, and without requiring a mutualism between the partners. Intragenomic conflicts are a fundamental driving force in evolution [1–3]. Sex and later on anisogamy are thought to have evolved as a consequence of conflicts between selfish genetic elements and their host genome . In turn, sex and anisogamy have promoted other conflicts between cytoplasmic and nuclear genes based on their different modes of inheritance . Cytoplasmic genes are only transmitted through females, rendering males a dead end for them. Therefore, selection at the cytoplasmic level favors any manipulation of the sex ratio increasing female production. In response, nuclear genes are selected to re-establish a balanced sex ratio by suppressing or counteracting the action of cytoplasmic elements. These nucleo-cytoplasmic conflicts can have dramatic consequences on sex-allocation and sex-determination systems . One of the most striking examples of such conflicts is cytoplasmic male sterility (CMS) in hermaphroditic plants where mitochondria promote the production of females by sterilizing male gametes, but are counteracted by nuclear suppressor alleles. It has been proposed that CMS has played a major role in the evolution of dioecy in some plant taxa . 
Other cytoplasmically inherited genetic elements include endosymbiotic bacteria that are common in many arthropods. In many cases such symbionts cause female-biased offspring sex ratio by several means: either transforming genetic males into females (feminization ), killing male offspring or by inducing parthenogenesis . For example; in different strains of Wolbachia all of these phenotypes have evolved [9, 10], different strains of Cardinium have been shown to induce feminization, parthenogenesis and cytoplasmic incompatibility [11–13], while in Rickettsia both male killing strains and parthenogenesis inducing strains are known . In the case of feminization, suppressor alleles have evolved and the intragenomic conflict has profoundly affected the sex-determination system in the woodlouse Armadillidium vulgare . Male-killers have also been shown to affect sexual selection by reversing the roles of males and females in courtship behaviors . Despite extreme sex ratio biases caused by symbionts, nuclear suppressor alleles do not always evolve rapidly. For instance, the male-killing Wolbachia in the butterfly Hypolimnas bolina on Independent Samoa has reached extremely high frequencies, only 1 in 100 individuals is male, but nuclear suppressor alleles have not evolved in over 400 generations . However, resistance to male-killing in this system has been shown and a rapid invasion of populations infected with the male killer by these suppressor genes was recently observed . Here we investigate the consequences of nucleo-cytoplasmic conflict when parthenogenesis induction by Wolbachia occurs. PI-Wolbachia are found in many species of parasitoid wasps and allow infected females to produce daughters from unfertilized eggs . In Hymenoptera the normal, sexual mode of reproduction is such that unfertilized (haploid) eggs become males while fertilized (diploid) eggs develop into females. The PI-Wolbachia does not appear to influence meiosis. Instead infected unfertilized eggs become diploid by a Wolbachia-induced modification of the first [22, 23] or the second mitotic division [24, 25]. In all cases studied the outcome is: two identical sets of chromosomes that fail to separate, resulting in egg nuclei that are diploid. These eggs develop into completely homozygous infected females. In Trichogramma species the anaphase of the first mitotic division aborts and the resulting diploid individual develops into a female. The ability to develop from an unfertilized egg does not preclude the possibility of fertilization. In fertilized infected eggs, Wolbachia does not interfere with normal fertilization. In Trichogramma populations, where both infected and uninfected individuals co-exist, infected females mate and produce two types of infected daughters, heterozygous daughters that have a father, and completely homozygous daughters that are parthenogenetically produced . Genetic conflicts have been demonstrated in Trichogramma kaykai populations from the Mojave Desert (California, USA). In these populations, the PI-Wolbachia is found in all studied field populations at a relatively low frequency of about 10% of females. The infection does not reach higher frequencies because it is countered by the presence of a PSR (paternal sex ratio) chromosome . The PSR chromosome is a B-chromosome that is exclusively transmitted through males. It causes eggs fertilized with sperm from a PSR male to develop into males, instead of females [27, 28]. 
The PSR chromosome accomplishes this by the destruction of the paternal set of chromosomes (excluding itself) in the fertilized egg, therefore the fertilized egg does not develop into a diploid female but becomes a haploid male, again a carrier of the PSR chromosome. The presence of the PSR chromosome in the T. kaykai population keeps the Wolbachia infection from reaching fixation . In some wasp species the PI-Wolbachia infection has gone to fixation in all studied populations . In other species both completely infected ('fixed') and uninfected populations exist geographically isolated from each other [29–32]. 'Mixed' populations, in which infected and uninfected individuals coexists, are only known from several Trichogramma species . In practically all 'fixed' populations sexual reproduction appears no longer possible. Males can be derived from such populations by antibiotic treatment and paired with antibiotic treated females, and still no sexual reproduction takes place. In those species where both sexual and infected populations occur in geographically distinct areas, it is possible to investigate which of the two sexes is responsible for the lack of fertilization. In Apoanagyrus lopezi , Telenomus nawaii [30, 31], Leptopilina clavipes , L. japonica and Trichogramma pretiosum (Peru) males derived from 'fixed' populations by antibiotic treatment are able to inseminate females from sexual populations, and these females use the sperm to successfully fertilize their eggs. However, females from infected populations exposed to males from the sexual population do not fertilize their eggs. Therefore in the 'fixed' infected populations, sexual functionality has been lost in females, but not in males. Similarly, in Aphytis lignanensis, A. diaspidis and Eretmocerus mundus males derived from 'fixed' infected populations produce sperm and mate with infected females, but the females do not then use the sperm to fertilize their eggs. Furthermore, in Telenomus nawaii, repeated introgression of nuclear genes from a 'fixed' infected population (using males were derived by antibiotic treatment) into females from an uninfected population, resulted after two generations in the inability of some of the introgressed females to produce fertilized eggs (). This shows that the non-fertilization trait is inherited as one or more nuclear genes. The loss of female versus male sexual function in 'fixed' infected populations can be explained by selection against female sexual function. Several hypotheses have been proposed to explain this asymmetry in the loss of sexual function. Stouthamer et al posed the "neutral mutation accumulation" hypothesis: once a PI-Wolbachia infection had reached fixation there would be no more selection to maintain alleles involved in sexual reproduction and over time both male and female sexual function would erode. Huigens and Stouthamer subsequently suggested that female sexual function would erode faster if more loci were involved in coding for the female behavior than for the male behavior. Alternatively, Pijls et al hypothesized that once a population had reached fixation for the PI-Wolbachia infection and sexual reproduction has ceased, those mutations that would disable costly female traits involved with sexual reproduction would be selected. Examples of costly female traits could be pheromone production, maintenance of spermathecal glands etc. This "costly female trait" hypothesis would explain the rapid decline in sexual behavior of the females relative to that of the males. 
More recently, Huigens and Stouthamer and Jeong and Stouthamer hypothesized that the female-biased sex ratio in populations with a spreading PI-Wolbachia infection selects for alleles that increase the production of males, which in haplo-diploids is accomplished by a reduced egg fertilization rate. They used the term "functional virginity mutations" to describe these mutations because the phenotypic result is females that no longer fertilize their eggs. "Functional virginity mutations" could disable any trait required for successful sexual reproduction in females. These same traits could be the target of "costly female trait" mutations. However, in contrast to the "functional virginity" hypothesis, the "costly female trait" hypothesis requires that disabling the trait also results in a positive physiological fitness effect. Using models, we explore these hypotheses and show that the spread of a PI-Wolbachia infection, results in selection favoring mutations in the nuclear genes reducing the female fertilization rate. Our models show that the genetic conflict between PI-Wolbachia and the nuclear genome strongly influences offspring sex-allocation and that the final resolution of the conflict is the irreversible loss of sexual reproduction ending in the complete reproductive dependence of the host on its symbiotic counterpart. This is consistent with the irreversible loss of sexual reproduction found in many PI-Wolbachia infected species . We model the spread of a PI-Wolbachia infection in an initially uninfected population. We present these models with increasing complexity. First, we model the case where the Wolbachia transmission from mother to offspring is perfect, but where the cost of being infected varies. Next, we introduce imperfect transmission of the PI-Wolbachia (in these cases not all the offspring of the infected mothers are infected). We provide analytic solutions for these cases. Finally, we allow the fertilization rate of the females in the population to vary. For this last model we used iteration of numerical examples to provide some exact calculations, and support these results with an approximate general analytic solution. 1. Perfect transmission of the Wolbachia Proportions of different types of offspring produced by Wolbachia infected or uninfected females Infected female (Fecundity = ω) Uninfected mother (Fecundity = 1) Fertilized eggs (x) Unfertilized eggs (1-x) Fertilized eggs (x) Unfertilized eggs (1-x) Under such circumstances the infection will rapidly go to fixation in the population, the speed of the spread being dependent on the fertilization rate. The higher the fertilization rate the longer it will take before the infection reaches fixation. Infection with the PI-Wolbachia may have some physiological cost to the host, resulting in reduced fitness of infected females. In several species, infection with PI-Wolbachia reduced the offspring production of infected females relative to uninfected females [36, 42–44]. The dynamics of this system are determined not by the total offspring production but by the number of daughters that the females produce, and only two outcomes are realistic: 1) if an infected female produces fewer daughters than an uninfected female, the infection will not spread (if ω <x), and 2) if an infected female produces more daughters than an uninfected female the infection goes to fixation (if ω > x). The time before the infection reaches fixation depends on how much larger ω is than x. 
It is clear that the number of generations it takes to reach a particular infection frequency increases with a higher fertilization frequency of the eggs, and with a lower relative fitness of the infected females. 2. Imperfect transmission of the PI-Wolbachia The equilibrium is maintained because mated infected females also produce some uninfected daughters (Table 1), and this "sponsoring" of the uninfected part of the population by the infected females increases with the infection frequency in the population (dashed curve Figure 1A). The model integrating the infection cost and, especially, the imperfect transmission is more representative of natural situations. An equilibrium infection frequency can be maintained under these circumstances (equation 6); allowing the long-lasting co-existence of infected and uninfected females and the evolution of conflict between the nuclear and cytoplasmic genes. 3. Evolution of fertilization rate of females When the population sex ratio becomes more female-biased because of a spreading PI-Wolbachia infection, males will have increased fitness due to their high mating rates. Thus alleles that reduce the egg fertilization rate will be selected, since all unfertilized eggs of uninfected females and a fraction (1-α) of eggs of infected females become males (see Table 1). Genetic variation for offspring sex ratio (i.e. fertilization rate) has been found in several uninfected parasitoid wasp species [47–49]. We simulated the spread of a PI-Wolbachia infection in a population where two fertilization rate variants were present: 1) the wild type fertilization rate (x, females expressing this allele fertilize 50% of their eggs) and 2) a recessive mutant fertilization rate allele (n, females expressing this allele fertilize a lower percentage of their eggs) using the recurrence relationships defined in additional file 1. The simulations were initiated with a wild type population into which we introduced a PI-Wolbachia infection at a 1% frequency among the females (frequency of infected (I) wildtype (++) females I++ = 0.01, frequency of uninfected (U) wildtype (++) females U++ = 0.99, with all other female genotypes at zero; see additional file 1) and a recessive fertilization mutant n at a frequency of 1% in the males (Mn = 0.01, M+ = 0.99). To confirm the generality of the simulations, we derived an analytic solution for the initial spread or decline of a rare recessive fertilization mutation (additional file 2). where r is the prevailing proportion of males in the population. Note that the only necessary condition for the spread is a female biased sex ratio (r < 0.5). Interestingly in the simulations, the limitation of the male mating capacity resulted in two effects: 1.) the infection frequency went to a much higher level within a few generations, this is caused by a lower production of uninfected females by a.) uninfected mothers- a fraction of them remained unmated and produced only sons, and b.) a fraction of the infected females remained unmated and consequently produced fewer uninfected daughters (reduced sponsoring effect); 2.) in populations with a reduced mating capability it takes longer to reach fixation for the infection and the fertilization mutation because the spread of the mutant allele mainly takes place through the mating of males produced by infected females. And they will only be able to mate with a limited number of females (See legend Figure 5). 
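Before following the ratchet that finally fixes the mutant allele, it may help to see the simpler two-class model of section 2 written out. The sketch below iterates the infection frequency using the offspring proportions of Table 1 under the assumption that every female mates; the parameter values are arbitrary, and the recursion is our paraphrase of the model (the stable frequency it converges to corresponds to the equilibrium discussed around equation 6, not a reproduction of that formula).

```python
# Imperfect transmission: a fraction alpha of an infected female's eggs carry
# the symbiont. Daughter bookkeeping follows Table 1; parameter values are
# arbitrary examples, not the paper's.
def iterate_infection(x=0.5, alpha=0.95, omega=0.95, p0=0.01, gens=500):
    p = p0  # frequency of infected females among females
    for _ in range(gens):
        inf = p * omega * alpha                              # infected daughters
        uninf = p * omega * (1 - alpha) * x + (1 - p) * x    # uninfected daughters
        p = inf / (inf + uninf)
    return p

# With alpha < 1 the infection stops short of fixation: infected mothers keep
# "sponsoring" the uninfected class with some uninfected daughters.
print(round(iterate_infection(), 3))
```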
The mutant allele and the infection finally go to fixation because of a ratcheting effect: all infected females that are homozygous for the fertilization mutation will no longer mate and their female offspring will remain homozygous and infected, yet they will produce male offspring that carries the mutation. This results in most of the males present the population to be carriers of the mutation. Only those females that are not yet homozygous for the mutation will mate and part of their offspring will become homozygous for the mutation. Consequently the class of females that is homozygous for the mutant allele and infected will grow relative to the class of females that is not yet homozygous and infected. Over time this ratcheting mechanism leads to all females being homozygous for the mutation and infected, all the males that are produced in these populations will then also be carriers of the mutation. Whether the fertilization mutation is dominant or recessive has only little influence on the rate at which the mutation reaches fixation in the population, for most transmission efficiencies the recessive mutant will go to fixation faster than a dominant mutant. Only at very low Wolbachia transmission efficiencies does the dominant mutant go to fixation faster (data not shown). We show that the female-biased sex ratio caused by a spreading PI-Wolbachia infection creates a conflict that gives a large selective advantage to females producing male offspring. The ultimate result of this process is the fixation of mutant alleles reducing the fertilization rate both in the infected and in the uninfected segments of a population. Although our model was based on a single locus, the same principles apply if there are many different loci involved with egg fertilization, thus selection should eventually lead to the fixation of mutant alleles at many loci or to fixation of a mutant allele with a major effect at a single locus, in either case resulting in no egg fertilization at all. Fixation of these mutant alleles occurs even though above a certain frequency there will no longer be an advantage to producing males (since only a small proportion of the females would still be fertilizing their eggs). The spread continues because by that time most of the males in the population will carry the mutation and the some of the daughters of females still willing to mate- i.e. those females not homozygous for the mutation- will become homozygous for the mutation (Figure 4). The ultimate outcome is fixation of both PI-Wolbachia infection and the mutations interfering with sexual reproduction (egg fertilization). If males are produced in these 'fixed' populations, they are in principle still capable of mating and producing sperm, but it is to be expected that females will no longer fertilize their eggs, and as a consequence, sexual reproduction cannot be regained in these populations even if the PI-Wolbachia infection is cured. Thus the conflict is resolved by a complete and irreversible loss of sexual reproduction and therefore complete dependence of the host on its symbiont counterpart for reproduction. While the "costly female trait" hypothesis results in the same outcome as the "functional virginity mutation" hypothesis, their mechanisms differ. The difference between the 'functional virginity mutation" and the "costly female trait" hypotheses is the cost/benefit that is thought to be derived from the mutation. 
Under the "costly female trait" hypothesis the mutant spreads because females homozygous for the mutation are assumed to be fitter than non mutant females, while no cost assumptions are required under the "functional virginity mutation" hypothesis. Indeed, simulations and an analytical solution (additional file 2) show that even a substantial negative fitness effect does not deter the spread of functional virginity mutations (Figure 6). In addition, because Hymenoptera are known to have mechanisms to adjust their sex ratio, and that some variation exists in natural populations, selection can act readily on fertilization efficiency as soon as Wolbachia invades, and without requiring new mutations to arise. Consequently, we expect the "functional virginity" mutations to be the common cause of the loss of sexual function in females from populations fixed for PI-Wolbachia infections. Once Wolbachia infection has reached fixation we expect the processes suggested by the other two other hypotheses also to take place. Initially we expect the "costly female traits" mutations to spread through their selective advantage, while other genes involved with sexual behavior in females and all male-specific traits should accumulate mutations at a neutral rate ("neutral accumulation hypothesis"). The longer the population has been fixed for infection the less functional the males are expected to be. Several changes have been found in the female reproductive organs and behavior in PI-Wolbachia fixed species. In most species only the lack of fertilization has been noted and no additional studies have been done to determine if there are morphological or physiological changes in the females or males. One exception to this is the species Muscidifurax uniraptor where females lack a spermathecal muscle. In several species where the PI infection has gone to fixation the males have been studied in detail, for instance in M. uniraptor males no longer produce sperm. In the species L. clavipes the males derived from the infected lines are less fertile than males from an arrhenotokous line . Another effect of this irreversible loss of sexual function is that the fate of the Wolbachia and its host are inextricably linked. As a result, selection on the interaction between the Wolbachia and its host for increased production of infected offspring by infected females (αω) should also become stronger. Such clonal selection can start as soon as infected females have become homozygous for the virginity allele(s). Increased production of infected offspring can be attained by reducing the negative effect of the infection on the offspring production (1-ω) and by increasing the transmission efficiency (α) of the Wolbachia. After the infection and the virginity mutation(s) have reached fixation, we expect that all females will be identical in those genes associated with the virginity mutations. However, the rest of the genome may differ between different clonal lines. The next selective step in the evolution of PI-Wolbachia infected species is a reduction in the number of clones due to selective sweeps favoring reductions in costly female traits and other beneficial mutations, as well as mutations in Wolbachia that affect the fitness of infected females. This scenario should result in a large reduction of clonal types in the field, depending on the size of the population and the frequency at which beneficial and costly female trait mutations take place. 
In addition, we would expect the population to regain some level of clonal diversity in neutral markers between sweeps. Several PI-Wolbachia-fixed populations have been studied for clonal variation. In Diplolepis spinosissimae and L. clavipes, both fixed infected and uninfected populations were studied for genetic variation, and only a small number of clones were found in infected populations. The number of clones per infected population varied from 1 to 3 in D. spinosissimae, as inferred from three different microsatellite loci. The total genetic diversity in the infected populations was also much lower than in the sexual population. In Diplolepis rosae all studied populations were parthenogenetic and most likely infected with Wolbachia. Over the whole range (from Sweden to Greece) only 8 different clones were recognized using 9 different allozymes. The situation is less clear in L. clavipes, where two main clonal types were identified using AFLP markers, but within each clonal type there was considerable genetic variation. Furthermore, the genetic variation in the sexual population appeared to be similar to that in the clonal populations. Once a PI-Wolbachia infection enters a population, several outcomes are possible. If the PI-Wolbachia immediately has perfect transmission, the infection will go rapidly to fixation, at a rate depending on the relative offspring production of the infected females. Neither male nor female sexual function is expected to change during this rapid spread of the infection. It is, however, unlikely that new associations of host and Wolbachia will immediately result in perfect transmission. Many artificial inoculations of Wolbachia into novel hosts are unsuccessful or have poor transmission [55–59]. If a PI-Wolbachia with imperfect transmission enters a population but is still able to spread, several outcomes are possible. Initially such a spread will lead to the coexistence of infected and uninfected individuals in the population. This prolonged coexistence allows time for several traits to evolve (either by selection on existing variation or on newly arising mutations). Here, we have shown that the most likely outcome is selection for low-fertilization mutants in the population, eventually leading to fixation of the infection and the irreversible loss of sexual reproduction. This appears to be the most common outcome in PI-Wolbachia-infected Hymenoptera and highlights that dependence among organisms can evolve rapidly through the resolution of conflicts emerging between cytoplasmic and nuclear genes, without requiring mutualism between the partners. This is clearly an alternative to the classical scenarios of the evolution of mutualism, in which dependence is thought to evolve after a long co-evolutionary history between partners. The recursive relationships relating the frequencies of the infection and the different fertilization genotypes in the present generation to those in the next are given in Additional file 1. The model assumes non-overlapping generations and unlimited population growth. All females, independent of genotype or infection status, have an equal chance of mating. These relationships were modeled over the generations using an Excel spreadsheet (Additional file 4) that is explained in Additional file 3.
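The exact recursions are given in Additional file 1 and are not reproduced here. Purely to illustrate the ratchet described above, the following is a deliberately simplified, individual-based sketch; the parameter values (transmission efficiency, fecundity cost, baseline fertilization rate, brood size, starting frequencies) and the mating scheme (every female mates once with a randomly drawn male) are assumptions made for this example, not the model used in the paper.

```python
# Illustrative individual-based sketch of the 'functional virginity' ratchet.
# All parameter values below are assumed for illustration only.
import random

ALPHA = 0.9       # assumed Wolbachia transmission efficiency (alpha)
OMEGA = 0.9       # assumed relative fecundity of infected females (1 - cost)
F_WILD = 0.5      # assumed fertilization rate of mated, non-mutant females
EGGS = 10         # assumed brood size (scaled by OMEGA when infected)
GENS = 60
N_FEMALES = 2000  # cap on the number of females kept each generation


def offspring(mother, father_allele):
    """One female's brood. A female is (infected, n_mutant_alleles);
    a male is just the allele he carries (0 = wild type, 1 = mutant)."""
    infected, n_mut = mother
    daughters, sons = [], []
    # Recessive virginity mutation: homozygous females fertilize no eggs.
    fert_rate = 0.0 if n_mut == 2 else F_WILD
    n_eggs = round(EGGS * (OMEGA if infected else 1.0))
    for _ in range(n_eggs):
        maternal_allele = 1 if random.random() < n_mut / 2 else 0
        transmitted = infected and random.random() < ALPHA
        if father_allele is not None and random.random() < fert_rate:
            # Fertilized egg -> biparental diploid daughter.
            daughters.append((transmitted, maternal_allele + father_allele))
        elif transmitted:
            # Unfertilized infected egg -> gamete duplication ->
            # completely homozygous diploid daughter.
            daughters.append((True, 2 * maternal_allele))
        else:
            # Unfertilized uninfected egg -> haploid son.
            sons.append(maternal_allele)
    return daughters, sons


# Start: a mostly uninfected population with low (assumed) frequencies of the
# infection and of the heterozygous virginity allele.
females = [(random.random() < 0.1, 1 if random.random() < 0.1 else 0)
           for _ in range(N_FEMALES)]
males = [0] * N_FEMALES

for gen in range(GENS):
    next_females, next_males = [], []
    for mother in females:
        father = random.choice(males) if males else None
        d, s = offspring(mother, father)
        next_females.extend(d)
        next_males.extend(s)
    random.shuffle(next_females)
    females = next_females[:N_FEMALES] or females
    males = next_males
    infected = sum(1 for inf, _ in females if inf) / len(females)
    virgin = sum(1 for _, n in females if n == 2) / len(females)
    if gen % 10 == 0 or gen == GENS - 1:
        print(f"gen {gen:3d}: infected {infected:.2f}, "
              f"homozygous mutant {virgin:.2f}")
```

Tracking the infected and homozygous-mutant fractions of females per generation gives a rough feel for how the infection and the virginity allele can rise together once infected, homozygous females start producing only infected, homozygous daughters plus mutant-carrying sons.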
We would like to thank Paul Rugman-Jones for the review of earlier versions of this manuscript. This work was supported in part by a National Science Foundation Award EF-0328363 to RS.
- Cosmides LM, Tooby J: Cytoplasmic inheritance and intragenomic conflict. J Theor Biol. 1981, 89: 83-129.
- Hurst LD: Intragenomic conflict as an evolutionary force. Proc R Soc Lond B. 1992, 248: 135-140. 10.1098/rspb.1992.0053.
- Eberhard WG: Evolutionary consequences of intracellular organelle competition. Q Rev Biol. 1980, 55: 231-249.
- Hoekstra RF: The evolution of sexes. The Evolution of Sex and Its Consequences. Experientia Supplementum, Vol 55. Edited by: Stearns SC. 1987, Basel: Birkhäuser, 59-92.
- Maurice S, Belhassen E, Couvet D, Gouyon PH: Evolution of dioecy: Can nuclear-cytoplasmic interactions select for maleness?. Heredity. 1994, 73 (4): 346-354. 10.1038/hdy.1994.181.
- Rigaud T: Inherited microorganisms and sex determination of arthropod hosts. Influential passengers: inherited microorganisms and arthropod reproduction. Edited by: O'Neill SL, Hoffmann AA, Werren JH. 1997, Oxford, UK: Oxford University Press, 81-101.
- Hurst GDD, Hurst LD, Majerus MEN: Cytoplasmic sex ratio distorters. Influential passengers: inherited microorganisms and arthropod reproduction. Edited by: O'Neill SL, Hoffmann AA, Werren JH. 1997, Oxford, UK: Oxford University Press, 125-154.
- Stouthamer R: Wolbachia-induced parthenogenesis. Influential passengers: inherited microorganisms and arthropod reproduction. Edited by: O'Neill SL, Hoffmann AA, Werren JH. 1997, Oxford, UK: Oxford University Press, 102-124.
- Werren JH: Biology of Wolbachia. Annu Rev Entomol. 1997, 42: 587-609. 10.1146/annurev.ento.42.1.587.
- Stouthamer R, Breeuwer JAJ, Hurst GDD: Wolbachia pipientis: Microbial manipulator of arthropod reproduction. Annu Rev Microbiol. 1999, 53: 71-102. 10.1146/annurev.micro.53.1.71.
- Weeks AR, Marec F, Breeuwer JAJ: A mite species that consists entirely of haploid females. Science. 2001, 292 (5526): 2479-2482. 10.1126/science.1060411.
- Zchori-Fein E, Perlman SJ, Kelly SE, Katzir N, Hunter MS: Characterization of a 'Bacteroidetes' symbiont in Encarsia wasps (Hymenoptera: Aphelinidae): proposal of 'Candidatus Cardinium hertigii'. Int J Syst Evol Microbiol. 2004, 54 (Pt 3): 961-968. 10.1099/ijs.0.02957-0.
- Hunter MS, Perlman SJ, Kelly SE: A bacterial symbiont in the Bacteroidetes induces cytoplasmic incompatibility in the parasitoid wasp Encarsia pergandiella. Proc R Soc Lond B. 2003, 270 (1529): 2185-2190. 10.1098/rspb.2003.2475.
- Hurst GDD, Hammarton TC, Bandi C, Majerus TMO, Bertrand D, Majerus MEN: The diversity of inherited parasites of insects: The male-killing agent of the ladybird beetle Coleomegilla maculata is a member of the Flavobacteria. Genet Res. 1997, 70 (1): 1-6. 10.1017/S0016672397002838.
- Hagimori T, Abe Y, Date S, Miura K: The first finding of a Rickettsia bacterium associated with parthenogenesis induction among insects. Curr Microbiol. 2006, 52 (2): 97-101. 10.1007/s00284-005-0092-0.
- Rigaud T, Juchault P, Mocquard JP: The evolution of sex determination in isopod crustaceans. Bioessays. 1997, 19 (5): 409-416. 10.1002/bies.950190508.
- Jiggins FM: Widespread 'hilltopping' in Acraea butterflies and the origin of sex-role-reversed swarming in Acraea encedon and A. encedana. Afr J Ecol. 2002, 40 (3): 228-231. 10.1046/j.1365-2028.2002.00359.x.
- Dyson EA, Hurst GDD: Persistence of an extreme sex-ratio bias in a natural population. Proc Natl Acad Sci USA. 2004, 101 (17): 6520-6523. 10.1073/pnas.0304068101.
- Hornett EA, Charlat S, Duplouy AMR, Davies N, Roderick GK, Wedell N, Hurst GDD: Evolution of male-killer suppression in a natural population. PLoS Biol. 2006, 4 (9): e283. 10.1371/journal.pbio.0040283.
- Charlat S, Hornett EA, Fullard JH, Davies N, Roderick GK, Wedell N, Hurst GDD: Extraordinary flux in sex ratio. Science. 2007, 317 (5835): 214. 10.1126/science.1143369.
- Huigens ME, Stouthamer R: Parthenogenesis associated with Wolbachia. Insect symbiosis. Edited by: Bourtzis K, Miller TA. 2003, Boca Raton: CRC Press, 247-266.
- Stouthamer R, Kazmer DJ: Cytogenetics of microbe-associated parthenogenesis and its consequences for gene flow in Trichogramma wasps. Heredity. 1994, 73 (3): 317-327. 10.1038/hdy.1994.139.
- Pannebakker BA, Pijnacker LP, Zwaan BJ, Beukeboom LW: Cytology of Wolbachia-induced parthenogenesis in Leptopilina clavipes (Hymenoptera: Figitidae). Genome. 2004b, 47 (2): 299-303. 10.1139/g03-137.
- Stille B, Davring L: Meiosis and reproductive strategy in the parthenogenetic gall wasp Diplolepis rosae. Hereditas. 1980, 92: 353-362. 10.1111/j.1601-5223.1980.tb01720.x.
- Gottlieb Y, Zchori-Fein E, Werren JH, Karr TL: Diploidy restoration in Wolbachia-infected Muscidifurax uniraptor (Hymenoptera: Pteromalidae). J Inv Pathol. 2002, 81 (3): 166-174. 10.1016/S0022-2011(02)00149-0.
- Stouthamer R, van Tilborg M, de Jong JH, Nunney L, Luck RF: Selfish element maintains sex in natural populations of a parasitoid wasp. Proc R Soc Lond B. 2001, 268 (1467): 617-622. 10.1098/rspb.2000.1404.
- Reed KM: Cytogenetic analysis of the paternal sex-ratio chromosome of Nasonia vitripennis. Genome. 1993, 36 (1): 157-161. 10.1139/g93-020.
- van Vugt JFA, Salverda M, de Jong JH, Stouthamer R: The paternal sex ratio chromosome in the parasitic wasp Trichogramma kaykai condenses the paternal chromosomes into a dense chromatin mass. Genome. 2003, 46 (4): 580-587. 10.1139/g03-044.
- Pijls JWAM, Van Steenbergen HJ, Van Alphen JJM: Asexuality cured: The relations and differences between sexual and asexual Apoanagyrus diversicornis. Heredity. 1996, 76 (5): 506-513. 10.1038/hdy.1996.73.
- Arakaki N, Noda H, Yamagishi K: Wolbachia-induced parthenogenesis in the egg parasitoid Telenomus nawai. Entomol Exp Appl. 2000, 96 (2): 177-184. 10.1023/A:1004007127942.
- Jeong G, Stouthamer R: Genetics of female functional virginity in the Parthenogenesis-Wolbachia infected parasitoid wasp Telenomus nawai (Hymenoptera: Scelionidae). Heredity. 2005, 94 (4): 402-407. 10.1038/sj.hdy.6800617.
- Pannebakker BA, Zwaan BJ, Beukeboom LW, Van Alphen JJM: Genetic diversity and Wolbachia infection of the Drosophila parasitoid Leptopilina clavipes in western Europe. Mol Ecol. 2004c, 13 (5): 1119-1128. 10.1111/j.1365-294X.2004.02147.x.
- Kremer N, Charif D, Henri H, Bataille M, Prevost G, Kraaijeveld K, Vavre F: A new case of Wolbachia dependence in the genus Asobara: evidence for parthenogenesis induction in Asobara japonica. Heredity. 2009, 103: 248-256. 10.1038/hdy.2009.63.
- Russell JE, Stouthamer R: The genetics and evolution of obligate reproductive parasitism in Trichogramma pretiosum infected with parthenogenesis-inducing Wolbachia. Heredity. 2010.
- Zchori-Fein E, Faktor O, Zeidan M, Gottlieb Y, Czosnek H, Rosen D: Parthenogenesis-inducing microorganisms in Aphytis (Hymenoptera: Aphelinidae). Insect Mol Biol. 1995, 4 (3): 173-178. 10.1111/j.1365-2583.1995.tb00023.x.
- De Barro PJ, Hart PJ: Antibiotic curing of parthenogenesis in Eretmocerus mundus (Australian parthenogenic form). Entomol Exp Appl. 2001, 99 (2): 225-230. 10.1023/A:1018927905287.
- Stouthamer R, Luck RF, Hamilton WD: Antibiotics cause parthenogenetic Trichogramma to revert to sex. Proc Natl Acad Sci USA. 1990, 87: 2424-2427. 10.1073/pnas.87.7.2424.
- Horjus M, Stouthamer R: Does infection with thelytoky-causing Wolbachia in the pre-adult and adult life stages influence the adult fecundity of Trichogramma deion and Muscidifurax uniraptor?. Proc Sect Exper Appl Entomol Netherlands Entomol Soc. 1995, 6: 35-40.
- Stouthamer R, Mak F: Influence of antibiotics on the offspring production of the Wolbachia-infected parthenogenetic parasitoid Encarsia formosa. J Inv Pathol. 2002, 80 (1): 41-45. 10.1016/S0022-2011(02)00034-4.
- Grenier S, Gomes SM, Pintureau B, Lassabliere F, Bolland P: Use of tetracycline in larval diet to study the effect of Wolbachia on host fecundity and clarify taxonomic status of Trichogramma species in cured bisexual lines. J Inv Pathol. 2002, 80 (1): 13-21. 10.1016/S0022-2011(02)00039-3.
- Silva ISSM: Identification and evaluation of Trichogramma parasitoids for biological pest control. 1999, PhD Thesis, Wageningen University, The Netherlands.
- Tagami Y, Miura K, Stouthamer R: How does infection with parthenogenesis-inducing Wolbachia reduce the fitness of Trichogramma?. J Inv Pathol. 2001, 78 (4): 267-271. 10.1006/jipa.2002.5080.
- Huigens ME, Luck RF, Klaassen RHG, Maas MFPM, Timmermans MJTN, Stouthamer R: Infectious parthenogenesis. Nature. 2000, 405 (6783): 178-179. 10.1038/35012066.
- Stouthamer R, Luck RF: Influence of microbe-associated parthenogenesis on the fecundity of Trichogramma deion and Trichogramma pretiosum. Entomol Exp Appl. 1993, 67 (2): 183-192. 10.1007/BF02386524.
- Legner EF: Natural and induced sex ratio changes in populations of thelytokous Muscidifurax uniraptor (Hymenoptera: Pteromalidae). Ann Entomol Soc Am. 1985, 78 (3): 398-402.
- Schilthuizen M, Stouthamer R: Horizontal transmission of parthenogenesis-inducing microbes in Trichogramma wasps. Proc R Soc Lond B. 1997, 264 (1380): 361-366. 10.1098/rspb.1997.0052.
- Orzack SH, Parker ED: Genetic variation for sex ratio traits within a natural population of a parasitic wasp, Nasonia vitripennis. Genetics. 1990, 124: 373-384.
- Orzack SH, Gladstone J: Quantitative genetics of sex ratio traits in the parasitic wasp, Nasonia vitripennis. Genetics. 1994, 137 (1): 211-220.
- Henter HJ: Constrained sex allocation in a parasitoid due to variation in male quality. J Evol Biol. 2004, 17 (4): 886-896. 10.1111/j.1420-9101.2004.00746.x.
- Gottlieb Y, Zchori-Fein E: Irreversible thelytokous reproduction in Muscidifurax uniraptor. Entomol Exp Appl. 2001, 100 (3): 271-278. 10.1023/A:1019298825049.
- Pannebakker BA, Beukeboom LW, van Alphen JJM, Brakefield PM, Zwaan BJ: The genetic basis of male fertility in relation to haplodiploid reproduction in Leptopilina clavipes (Hymenoptera: Figitidae). Genetics. 2004a, 168 (1): 341-349. 10.1534/genetics.104.027680.
- Plantard O, Rasplus JY, Mondor G, Le Clainche I, Solignac M: Wolbachia-induced thelytoky in the rose gallwasp Diplolepis spinosissimae (Giraud) (Hymenoptera: Cynipidae), and its consequences on the genetic structure of its host. Proc R Soc Lond B. 1998, 265 (1401): 1075-1080. 10.1098/rspb.1998.0401.
- Schilthuizen M, Stouthamer R: Distribution of Wolbachia among the guild associated with the parthenogenetic gall wasp Diplolepis rosae. Heredity. 1998, 81 (3): 270-274. 10.1046/j.1365-2540.1998.00385.x.
- Stille B: Population genetics of the parthenogenetic gall wasp Diplolepis rosae. Genetica. 1985, 67: 145-151. 10.1007/BF02424421.
- Huigens ME, de Almeida RP, Boons PAH, Luck RF, Stouthamer R: Natural interspecific and intraspecific horizontal transfer of parthenogenesis-inducing Wolbachia in Trichogramma wasps. Proc R Soc Lond B. 2004, 271 (1538): 509-515. 10.1098/rspb.2003.2640.
- Van Meer MMM, Stouthamer R: Cross-order transfer of Wolbachia from Muscidifurax uniraptor (Hymenoptera: Pteromalidae) to Drosophila simulans (Diptera: Drosophilidae). Heredity. 1999, 82: 163-169. 10.1038/sj.hdy.6884610.
- Kang L, Ma X, Cai L, Liao S, Sun L, Zhu H, Chen X, Shen D, Zhao S, Li C: Superinfection of Laodelphax striatellus with Wolbachia from Drosophila simulans. Heredity. 2003, 90 (1): 71-76. 10.1038/sj.hdy.6800180.
- Riegler M, Charlat S, Stauffer C, Mercot H: Wolbachia transfer from Rhagoletis cerasi to Drosophila simulans: Investigating the outcomes of host-symbiont coevolution. Appl Environ Microbiol. 2004, 70 (1): 273-279. 10.1128/AEM.70.1.273-279.2004.
- Grenier S, Pintureau B, Heddi A, Lassabliere F, Jager C, Louis C, Khatchadourian C: Successful horizontal transfer of Wolbachia symbionts between Trichogramma wasps. Proc R Soc Lond B. 1998, 265 (1404): 1441-1445. 10.1098/rspb.1998.0455.

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
The most striking feature of China’s behavior in its maritime disputes this year has been its effort to redefine the status quo. In its disputes with the Philippines and Japan, China has used the presence of its civilian maritime law enforcement agencies to create new facts on the water to strengthen China’s sovereignty claims. Before April 2012, neither China nor the Philippines maintained a permanent presence at Scarborough Shoal. Fishermen from the Philippines, Vietnam, Taiwan and China operated in and around the large reef. At times in the past, especially in the late 1990s and early 2000s, the Philippine navy had arrested Chinese fishermen who were inside the shoal. Since then, Chinese patrols have sailed by the shoal, but no effort had been made to exercise effective control over the shoal or its surrounding waters. The situation changed following the standoff over sovereignty of Scarborough Shoal. The standoff began in April 2012 when the Philippine navy prepared to arrest Chinese fishermen who were operating in the shoal's lagoon. After receiving a distress call, two China Marine Surveillance (CMS) vessels arrived on the scene, blocking the entrance to the lagoon and preventing the arrest of the Chinese fishermen. After the fishing boats left the shoal, however, government ships from both sides remained to defend claims to sovereignty over the shoal. By the end of May, China had deployed as many as seven CMS and Bureau of Fisheries Administration ships. In early June, the Philippines announced that an agreement had been reached with China for a mutual withdrawal of ships. Although China never publicly confirmed the existence of such an agreement, ships from both sides left in mid-June as a typhoon approached the area. Later, however, Chinese ships returned and appear to have maintained a permanent presence in the waters around the shoal since then. In mid-July 2012, for example, an intrepid news crew from Al Jazeera videotaped an attempt to visit the shoal, only to be turned away by a combination of CMS and fisheries administration vessels. China has also roped off the sole entrance to the lagoon inside the shoal to control access to it. Before the standoff, China had no permanent presence at Scarborough Shoal. Three months later, China had effective control of the shoal and the surrounding waters, thereby altering the status quo in this dispute in its favor. As an editorial in the Global Times noted, China has "directly consolidated control" of the shoal. A similar dynamic is underway in the East China Sea over the Senkaku/Diaoyu Islands. Before the Japanese government's purchase of three of the islets from a private citizen in September 2012, Chinese government ships had generally avoided entering the 12 nautical mile limit of Japan's territorial waters around the islands. As I wrote several years ago, China and Japan appeared to have a tacit agreement from the mid-2000s to limit the presence of ships and citizens near the islands in an effort to manage the potential for escalation. In September 2010, the detention of a Chinese fishing captain whose boat had breached the 12 nautical mile limit and then rammed a Japanese Coast Guard ship sparked a crisis in China-Japan relations. Part of China's response included increasing the number of patrols by marine surveillance and fisheries vessels near the islands. Most of the time, these boats remained beyond Japan's 12 nautical mile territorial waters around the Senkakus or crossed this line only briefly.
In practical terms, China continued to accept Japanese de facto control of the islands and their associated territorial waters (over which a state enjoys sovereignty rights under the UN Convention on the Law of the Sea). After the purchase of the islands last month, however, China has abandoned this approach. China first issued baselines to claim its own territorial waters around the islands and then began to conduct almost daily patrols within its newly claimed waters – directly challenging the Japanese control that it had largely accepted before. The purpose of the patrols is twofold: to demonstrate that the purchase of the islands will not affect China’s sovereignty claims and to challenge Japan’s position that there is no dispute over the sovereignty of the islands. Although China does not control the waters around the Senkakus (unlike the situation at Scarborough), it no longer accepts de facto Japanese control. On October 31, a Ministry of Foreign Affairs spokesman asserted that a new status quo had been created. After describing China’s new patrols as "routine," Hong Lei stated that "the Japanese side should face squarely the reality that a fundamental change has already occurred in the Diaoyu Islands." In both cases, China responded to challenges to its claims with an enhanced physical presence to bolster its position and deter any further challenges. These responses suggest an even greater willingness on China’s part to pursue unilateral actions to advance its claims. In neither case is a return to the status quo ante likely. [This originally appeared in The Diplomat.]
Mammography is a method of taking x-ray images of the breasts to identify tumors or abnormalities in the tissues that may indicate breast cancer. Screening Mammography: Screenings are performed on otherwise healthy individuals to look for cancer or precursors to cancer of the breasts. Early detection of breast cancer through screening may allow for better forms of treatment and possible prevention of metastasis and mortality. Diagnostic Mammography: Diagnostic mammography includes additional x-ray views of each breast, taken from different angles; if performed digitally, the images may be manipulated, enlarged, or enhanced for better visualization of the abnormality found during screening mammography. Analog or conventional mammography is when the image is captured and printed on film for the radiologist to review on a light box. Digital mammography is when images are taken and saved to a computer, where they can be enhanced, magnified, and manipulated as needed to aid in more accurate diagnosis of early-stage breast cancers or of patients with very dense breast tissue. CAD: Computer-Aided Detection (CAD) is a computer-based process that is used in conjunction with digital mammography to analyze mammographic images and identify suspicious areas by marking them and bringing them to the radiologist's attention. While the ACR (American College of Radiology) feels that CAD, when used with screening and diagnostic mammography, is a valuable tool that aids in the detection of early breast cancer, other professional organizations, after studying the results of mammograms performed with CAD, believe that it may instead make readings less accurate. Screening mammography is recommended every one to two years for women age 40 and older, and for women younger than 40 when the patient has increased risk factors for breast cancer. In general, screening mammograms are not recommended for women under 40 years of age, in part because breast tissue tends to be more dense in younger women, making mammograms less effective as a screening tool. Because the risk of developing breast cancer in younger women is so low, experts do not believe it is justifiable to expose them to low levels of radiation or the cost of mammograms unless they have high risk factors. Insurance companies follow the above recommendations as well and set guidelines that allow payment at 100% of the allowable fee schedule for a screening mammogram in women 40 years and older, every 1-2 years, and in women younger than 40 years of age when their medical history indicates they are high risk. If coded correctly, payment should be at 100% of the allowable fee schedule for preventive services. CPT Coding for Screening Mammography: Screening mammography is considered bilateral, so do not report the code with modifier 50 or RT/LT. Proper reporting of ICD-9-CM codes informs the insurance company that the service was for screening mammography; if incorrectly billed, the claim may be processed and paid at a lesser value. There are two ICD-9-CM diagnosis codes used to report a screening mammogram. Report code V76.11 (Screening for malignant neoplasms, screening mammogram for high risk patient) when any one of the qualifying high-risk criteria is documented in the report. Report code V76.12 (Screening for malignant neoplasms, other screening mammogram) for all other screening mammography. If the patient has a personal history of breast cancer, has completed active treatment, and is back to annual mammographic screening, report V76.11. No additional personal history code is required, as V76.11 inherently covers this diagnosis; however, you may report a personal history of breast cancer (V10.3) as a secondary code if you like.
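The code-selection logic above can be summarized in a short sketch. This is only an illustration of the decision rules described in this article (V76.11 versus V76.12, G0202 with 77052 for a digital screening mammogram with CAD, and modifier -52 when only one breast is imaged after mastectomy); the function name and the boolean flags are hypothetical, the codes reflect the ICD-9-CM/HCPCS code sets in effect when this article was written, and real claims must always follow payer-specific rules.

```python
# Hypothetical helper illustrating the screening-mammogram coding rules
# described above (digital screening mammogram with CAD). Not billing advice;
# the flag names and the function itself are invented for illustration.
def code_screening_mammogram(high_risk: bool, post_mastectomy: bool):
    """Return (procedure codes, diagnosis codes) for a digital screening
    mammogram with CAD, per the rules summarized in this article."""
    # High-risk patients (e.g., personal or strong family history) -> V76.11;
    # all other screening mammograms -> V76.12.
    dx = ["V76.11"] if high_risk else ["V76.12"]
    # G0202 = digital screening mammogram; 77052 = CAD used with screening.
    mammogram = "G0202"
    if post_mastectomy:
        # Screening is inherently bilateral, so append modifier -52
        # (reduced services) when only the remaining breast is imaged.
        mammogram += "-52"
    return [mammogram, "77052"], dx


# Matches Example 2 below: history of breast cancer, right mastectomy.
print(code_screening_mammogram(high_risk=True, post_mastectomy=True))
# -> (['G0202-52', '77052'], ['V76.11'])
```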
How do breast implants affect screening mammography? Patients with breast implants should still undergo screening mammograms; however, the implants can make it more difficult to see the breast tissue clearly. Technologists should be trained in a technique called 'implant displacement views,' which allows them to better visualize the breast tissue surrounding the implants. Patients with implants after mastectomy should have orders that clarify whether the physician wants the reconstructed breast to be screened as well. ICD-9-CM and CPT/HCPCS coding is reported in the same manner as for other screening mammograms.

How do you code a screening mammogram for a patient with a mastectomy? Patients with a mastectomy due to a fully treated breast cancer no longer have breast cancer in that breast (as it has been surgically removed). Once active treatment is completed, the patient may return to annual screening mammograms. As a screening mammogram is inherently bilateral in nature, report modifier -52 when a screening mammogram is performed on a patient with a history of mastectomy and only one breast is imaged.

Screening Mammograms Performed Earlier Than Recommended: Screening mammograms can be performed every year, as long as it has been a full 11 months or one year since the last screening mammogram. It is recommended that providers obtain a signed ABN (advance beneficiary notice) from the patient prior to the procedure, as the insurance most likely will not cover the service and the patient will be left responsible for the cost. Lack of a signed ABN on file will result in the provider being forced to write off the service and being unable to bill the patient for it.

The following are some examples of how to code screening mammograms:

Example 1: The patient is a 76-year-old female presenting for an annual screening mammogram today (December 30, 2009). Her last mammogram was one year ago (December 15, 2008). She has a sister who was diagnosed with breast cancer at age 56. A digital screening mammogram with CAD was performed. Findings: Negative. CPT/HCPCS codes: G0202, 77052. ICD-9-CM codes: V76.11.

Example 2: The patient is a 52-year-old female with a personal history of breast cancer, fully resolved, status post right breast mastectomy in 1992. She presents for an annual digital screening mammogram with CAD. CPT/HCPCS codes: G0202-52, 77052. ICD-9-CM codes: V76.11, V10.3.

Example 3: History: A 42-year-old female, annual exam. Comparison: Mammogram one year prior. Findings: Bilateral digital implant screening mammogram, standard and displaced views were obtained. CAD utilized. Bilateral subglandular breast implants are noted. Implants appear stable and mammographically intact. CPT/HCPCS codes: G0202, 77052. ICD-9-CM codes: V76.12.

Aimee Wilcox, MA, CST, CCS-P is a Certified Coding Guru (CCG) for Find-A-Code. For more information about ICD-10-CM, ICD-10-PCS, and medical coding and billing please visit FindACode.com where you will find the ICD-10 code sets and the current ICD-9-CM, CPT, and HCPCS code sets plus a wealth of additional information related to medical billing and coding. This article is available for publishing on websites, blogs, and newsletters; it must be published in its entirety with all links active, and if you would like to publish it, please contact us and let us know where it will appear.
Prehypertension and Hypertension in Community-Based Pediatric Practice

OBJECTIVE: To examine the prevalence of prehypertension and hypertension among children receiving well-child care in community-based practices. METHODS: Children aged 3 to 17 years with measurements of height, weight, and blood pressure (BP) obtained at an initial (index) well-child visit between July 2007 and December 2009 were included in this retrospective cohort study across 3 large, integrated health care delivery systems. Index BP classification was based on the Fourth Report on the Diagnosis, Evaluation, and Treatment of High Blood Pressure in Children and Adolescents: normal BP, <90th percentile; prehypertension, 90th to 94th percentile; hypertension, 3 BP measurements ≥95th percentile (index and 2 subsequent consecutive visits). RESULTS: The cohort included 199 513 children (24.3% aged 3–5 years, 34.5% aged 6–11 years, and 41.2% aged 12–17 years) with substantial racial/ethnic diversity (35.9% white, 7.8% black, 17.6% Hispanic, 11.7% Asian/Pacific Islander, and 27.0% other/unknown race). At the index visit, 81.9% of participants were normotensive, 12.7% had prehypertension, and 5.4% had a BP in the hypertension range (≥95th percentile). Of the 10 848 children with an index hypertensive BP level, 3.8% of those with a follow-up BP measurement had confirmed hypertension (estimated 0.3% prevalence). Increasing age and BMI were significantly associated with prehypertension and confirmed hypertension (P < .001 for trend). Among racial/ethnic groups, blacks and Asians had the highest prevalence of hypertension. CONCLUSIONS: The prevalence of hypertension in this community-based study is lower than previously reported from school-based studies. With the size and diversity of this cohort, these results suggest the prevalence of hypertension in children may actually be lower than previously reported.

Keywords: blood pressure; health information technology; electronic health records. Abbreviations: BP, blood pressure; HPMG, HealthPartners Medical Group; KPNC, Kaiser Permanente Northern California; KPCO, Kaiser Permanente Colorado.

What’s Known on This Subject: Prevalence of hypertension in children increased significantly over the past few decades, tracks into adulthood, and is a major risk factor for cardiovascular disease. However, current prevalence estimates in children have largely been based on studies conducted in school environments. What This Study Adds: The current study reports the prevalence of childhood hypertension in community pediatric practice, which provides a typical pediatric examination environment, unlike blood pressure measured in school. The results show a significantly lower prevalence than what has previously been reported.
In the past 2 decades there has been increased recognition of the importance of blood pressure (BP) measurement in the pediatric population,1,2 particularly in relation to the rising prevalence of childhood obesity.3 However, the importance of BP goes beyond its relation to obesity, because longitudinal studies reveal a relation between childhood BP and future cardiovascular risk factors in young adults, independent of BMI.4 Data from pediatric BP screening programs and the NHANES support early detection and management of hypertension in pediatric practice, particularly given its association with excess weight and other cardiovascular risk factors1,5–9 and the increasing awareness of childhood origins of adult disease.10 Epidemiologic BP screening studies conducted in large school systems with the use of carefully controlled measurement protocols show that many children with an initially elevated BP have normal BP on repeated measurements over relatively short periods of time.5,7,11,12 This significant reduction in hypertension prevalence after repeated measurement emphasizes that a single elevated BP is insufficient for clinical diagnosis in children.12 The recent availability of automated electronic medical records in large health plans has enabled the examination of BP in pediatric populations across community-based clinical settings13 and presents a unique opportunity to compare these data with published epidemiologic studies of pediatric hypertension. In the current study, data were obtained from automated clinic records in a multiethnic population of children receiving care within 3 large, community-based US health plans. The results present a contemporary assessment of what pediatricians in similar general practices can expect in terms of BP measurement when seeing a child for the first time and provide useful estimates of the prevalence of hypertension across age, gender, and racial/ethnic subgroups among children receiving pediatric well-child care. Kaiser Permanente Northern California (KPNC), Kaiser Permanente Colorado (KPCO), and HealthPartners Medical Group (HPMG) are integrated health care delivery systems providing care to >4 million members living primarily in urban and suburban communities in Northern California, Colorado, and Minnesota. Their memberships are racially and ethnically diverse, and children younger than age 18 years constitute 20% to 25% of the total membership. All 3 health systems have used similar integrated electronic medical records: KPCO and HPMG for >6 years and KPNC for up to 4 years (in the 3 subregions used for this study). The current study includes all children aged 3 to 17 years with an initial (index) measurement of BP, height, and weight obtained at a well-child visit between July 1, 2007, and December 31, 2009. Follow-up BP measurements from outpatient non–urgent care visit settings were used for hypertension classification through December 31, 2010, with a total observation/follow-up period of 3.5 years. Membership in the health plan for 6 months before the index visit and pharmacy benefits were required to ascertain treatment with BP-lowering medication before the index visit. At each site, weight was measured on a calibrated scale and height by stadiometer. The specific methods for BP assessment varied by health plan. At KPNC, BP was measured by trained medical assistants with the use of oscillometric devices that were calibrated periodically.
At KPCO and HPMG, BP was measured by trained staff predominantly by using aneroid sphygmomanometers recalibrated as needed by bioengineering services. All measurements were conducted with children in the seated position with selection of cuff size appropriate to arm size. BP standards from the Fourth Report on the Diagnosis, Evaluation, and Treatment of High Blood Pressure in Children and Adolescents were used to classify BP according to gender, age, and height.10 Children with systolic and diastolic BP <90th percentile for height at the index visit were classified as having normal BP. Children with systolic or diastolic BP between the 90th and 95th percentile (or ≥120/80 mm Hg for adolescents) were classified as prehypertensive. Hypertension was defined by BP ≥95th percentile at the index visit and at 2 subsequent consecutive visits within ≤3.5 years of follow-up. For subsequent pediatric BP without height measurement, the closest height within 6 months was used to calculate the BP percentile; for subsequent BP obtained at age ≥18 years (0.3% of children), a criterion of SBP ≥140 and/or DBP ≥90 mm Hg was used. Information pertaining to age, gender, race/ethnicity, height, weight, and systolic and diastolic BP measurements was obtained from the electronic medical record for the index and subsequent outpatient visits. BMI percentiles representing conventional pediatric categories (<85th percentile = normal, 85th–94th percentile = overweight, ≥95th percentile = obese, and ≥99th percentile = severely obese)14,15 were based on the 2000 Centers for Disease Control and Prevention growth charts.16 Treatment with antihypertensive medications was assessed by using pharmacy dispensing records, and individuals receiving an antihypertensive medication prescription in the 6 months before the index visit were excluded from the cohort (n = 785). The majority of these excluded children (63.2%) received guanfacine and clonidine, drugs typically used for developmental and behavioral indications and rarely used to treat hypertension in the pediatric population. Of the remaining 289 children, 183 did not have hypertension diagnoses. Only 106 had diagnoses of hypertension (n = 55) or elevated BP (n = 4) or had received hypertension screening (n = 47). Because it was not possible to confirm the hypertension diagnosis and because the purpose of this study was to assess prevalence on the basis of an index BP measurement, these children were excluded. The Institutional Review Board at HealthPartners Institute for Education and Research approved the study, with ceding of oversight authority by the KPNC and KPCO Institutional Review Boards. A waiver of informed consent was obtained due to the nature of the study. Differences across subgroups were compared by using the χ2 test. The Cochran-Armitage test was used to examine trends across age and BMI percentile strata. Point estimates and 95% confidence intervals were calculated for the prevalence of elevated BP. All analyses were conducted with the use of SAS, version 9.1 (SAS Institute, Cary, NC). A 2-tailed α of <0.05 was chosen as the criterion for significance.
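As a compact restatement of the classification rules above, the sketch below encodes them directly. The study's analyses were done in SAS; this Python sketch and its names are illustrative assumptions, and the percentile values are assumed to have already been looked up from the Fourth Report tables (by gender, age, and height).

```python
# Illustrative restatement of the BP classification rules described above.
# Function and parameter names are assumptions for illustration only.
def classify_index_bp(sys_pct, dia_pct, sys_mmhg, dia_mmhg, adolescent):
    """Classify a single (index) BP reading."""
    if sys_pct >= 95 or dia_pct >= 95:
        return "hypertensive range"          # >=95th percentile
    if (sys_pct >= 90 or dia_pct >= 90 or
            (adolescent and (sys_mmhg >= 120 or dia_mmhg >= 80))):
        return "prehypertension"             # 90th-94th pct, or >=120/80
    return "normal"                          # <90th percentile


def confirmed_hypertension(index_reading, followup_readings):
    """Hypertension per the study definition: BP >=95th percentile at the
    index visit and at 2 subsequent consecutive visits."""
    readings = [index_reading] + list(followup_readings[:2])
    return (len(readings) == 3 and
            all(r == "hypertensive range" for r in readings))


# Example: an adolescent with 118/78 mm Hg at the 92nd/85th percentiles.
print(classify_index_bp(92, 85, 118, 78, adolescent=True))  # prehypertension
```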
The initial source population included 342 323 children and adolescents aged 3 to 17 years with ≥1 outpatient visits between July 1, 2007, and December 31, 2009. After excluding children without a well-child visit with measurements of height, weight, and systolic and diastolic BP and those not meeting membership criteria, the final cohort consisted of 199 513 children and adolescents, half of whom were female (Table 1). Of these, 24.3% were aged 3 to 5 years, 34.5% aged 6 to 11 years, 21.8% aged 12 to 14 years, and 19.4% aged 15 to 17 years. The cohort was racially and ethnically diverse: 35.9% were non-Hispanic white, 7.8% were non-Hispanic black, 17.6% were Hispanic, 11.7% were Asian/Pacific Islander (PI), and 27.0% were of other or unknown race. A total of 14.3% were obese, defined by a BMI ≥95th percentile according to age and gender. Compared with children in the source population, those in the study cohort were more likely to be younger (80.6% vs 70.2% younger than age 15 years) and slightly more likely to be white (35.9% vs 32.4%), with no difference in gender. Mean BP, height, and weight across gender, age, race/ethnicity, and BMI subgroups are shown in Table 1. As expected, systolic and diastolic BP increased by age group and BMI percentile, and anthropometric measures (height and weight) increased by age and varied by gender and race/ethnicity. The mean values at each age are provided in Appendixes 1 and 2. At the index visit, 163 295 (81.9%) of the cohort were normotensive, 25 370 (12.7%) had BP in the prehypertension range (90th–94th percentile), and 10 848 (5.4%) had BP ≥95th percentile (Table 2). Among the 10 848 with an elevated index BP ≥95th percentile, 6739 (62.1%) had 1 or 2 subsequent ambulatory visits allowing final BP assignment, including 6482 with a second or third consecutive BP <95th percentile. Only 257 had 2 subsequent consecutive BP measurements that met the Task Force Report criteria for hypertension,10 representing 3.8% of those with a BP ≥95th percentile and a subsequent BP measurement. However, an additional 29 children began taking antihypertensive medication before the second or third measurement. Including these 29 children with the 257 children translates to an overall prevalence of hypertension of 0.14% (95% confidence interval: 0.12%, 0.16%). The prevalence of hypertension was low at each of the 3 sites, ranging from 0.04% to 0.17%. Among the 4109 children with an index BP ≥95th percentile who did not have a second (n = 3571) or third (n = 538) BP percentile available during the longitudinal follow-up period, 65% were younger than age 12 years. Compared with the children with follow-up measurements, a slightly greater proportion of those without follow-up BP were overweight or obese (52.3% vs 48.5%, P < .01) and of nonwhite or unknown race (77.4% vs 68.8%, P < .01), but they did not differ by gender. If the assumption is made that a similar percentage (eg, 3.8%) of the 4109 children without follow-up would have met criteria for hypertension had they been seen for repeated BP measurements,5 the estimated number of hypertensive children would increase from 257 to 413, or 0.2% of the overall cohort. If these analyses were restricted to the 85 780 children with an index visit and 2 subsequent visits with BP measurements, the prevalence of normotensive BP would be 83.4%, prehypertension would be 12.0%, and confirmed hypertension would increase to 0.3%.
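The headline prevalence and its confidence interval can be reproduced, at least approximately, from the counts reported above. The exact interval method the authors used is not stated, so the normal-approximation (Wald) interval below is an assumption; it lands close to the reported 0.12% to 0.16%.

```python
# Rough check of the reported prevalence of confirmed hypertension.
# The interval method is assumed (Wald / normal approximation); the paper
# does not state which method was actually used.
from math import sqrt

cohort = 199_513     # children in the final cohort
cases = 257 + 29     # confirmed cases plus children started on treatment

p = cases / cohort
se = sqrt(p * (1 - p) / cohort)
lo, hi = p - 1.96 * se, p + 1.96 * se
print(f"prevalence {p:.2%}, 95% CI {lo:.2%} to {hi:.2%}")
# prevalence 0.14%, 95% CI roughly 0.13% to 0.16% (paper reports 0.12%-0.16%)
```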
Table 2 shows BP classification by age group, gender, race/ethnicity, and BMI. The percentage of boys with normal BP was significantly lower than that of girls, and a greater percentage of boys had prehypertension, but the percentage with hypertension was similar between genders. Older age was associated with a lower percentage with normal BP and a higher percentage with prehypertension (both P < .0001 for trend). The youngest age group (3–5 years) had the highest percentage with an index BP in the hypertensive range (≥95th percentile), but in those with 3 consecutive BP measurements, hypertension was directly related to older age (P for trend < .0001). Index BP classification by each age and gender is provided in Appendixes 3 and 4. As BMI percentile increased from normal to obese, the likelihood of a normal index BP decreased whereas the likelihood of an index BP in the prehypertensive and hypertensive range increased (all P < .0001 for trend, Table 2). The highest proportion with confirmed hypertension was seen in children with a BMI ≥99th percentile (1.0% overall and 2.6% among those with 3 consecutive visits). Similarly, as BP percentile increased, the proportion in the normal BMI percentile decreased and the likelihood of overweight and obesity increased (Fig 1). However, nearly half of those classified as having hypertension had BMIs in the non-obese range. There were significant differences in BP by race/ethnicity (Table 2). In particular, a greater percentage of nonwhites had prehypertension than did whites (all P < .01), with blacks having the lowest percentage with normal BP and the greatest percentage with prehypertension compared with Asians and Hispanics. Blacks and Asians had the highest percentage with hypertension, which was significantly greater than that for whites (both P ≤ .01). Asian/PI children (P < .01) but not black children (P = .15) also had a greater percentage with hypertension than did Hispanics. Whites and Hispanics did not differ with respect to prevalence of hypertension (P = .25). This collaborative report is one of the first from major community-based pediatric practices that describes the prevalence of hypertension using the criteria published in the Fourth Task Force report.10 Because of the large cohort size of nearly 200 000, it was also possible to make comparisons among children across the entire pediatric age range (3–17 years), BMI category (normal to obese), gender, and race/ethnicity. The results reveal that the great majority of children seen in routine well-child care in pediatric practices have normal BP; 12.7% have BP in the prehypertension range and 5.4% have an initial systolic or diastolic BP ≥95th percentile for gender, age, and height, but the prevalence of hypertension, as defined by 3 consecutive hypertensive BP ≥95th percentile, was <1% overall and within each health plan. The percentage of children in this study with an initially hypertensive BP is slightly lower compared with the only other previous report (7.2%) from a community-based health plan that included data on initial BP measurements.13 However, it is within the range reported from previous prevalence studies conducted primarily in junior high– or high school–aged children, ranging from 2.7% in Minnesota to 19.4% in Texas,7,11,17,18 and from estimates using NHANES data from 2003 to 2006 in which the prevalence of an initial hypertensive BP range was 2.6% among 13- to 17-year-olds.19 The prevalence of prehypertension was similarly related to both age and BMI.
Although the significant reduction in prevalence of confirmed hypertension from the initial (index) measurement to the third consecutive measurement was expected and followed previously described patterns, the rather low estimated prevalence of confirmed hypertension (0.3% overall) in this study was surprising and substantially lower than the prevalence of 0.8% to 4.5% reported in previous studies.7,11,17,18,20 Although 38% of children with an initially hypertensive BP did not return for additional BP measurements, it seems unlikely that this had a major effect on prevalence, because they were not substantially different from those children who returned. The frequency of 3 consecutive elevated BPs would have to be >10 times higher in those without follow-up for an overall prevalence of 1%. This is not likely given that the majority were younger than age 12 years, an age group in which the prevalence of confirmed hypertension is low. Several factors may explain the lower prevalence of hypertension in this cohort compared with findings from other studies. Previous studies may have less diverse patient populations and be less generalizable than the current study, which did not focus on a single geographic region. The lower prevalence in this study may reflect the inclusion of preschool and grade school children, as opposed to studying only junior high– and high school–aged children,7,11,17,18 the lower number of non-Hispanic blacks, or the lower number of overweight/obese children. However, even in the oldest age group (15–17 years), and among black and obese children, the prevalence of hypertension, although higher than in the other groups, remained low. Even if we had included the 106 children receiving previous antihypertensive medication, the prevalence would have increased only by 0.05%. A previous report from another cohort with a similar age distribution (3–18 years) reported a hypertension prevalence of 3.6%.20 That study was conducted over 7 years and appears to have used 3 nonconsecutive hypertensive BP levels, rather than 3 consecutive hypertensive levels.20 Those results and others suggest that protocols using longer periods of observation, >3 measurements, nonconsecutive hypertensive levels, or the average of many measurements rather than only the hypertensive levels should be compared with the currently recommended approach of using 3 consecutive hypertensive levels to diagnose hypertension in children. It is also possible that, despite the large size of this cohort, it was not truly representative of a natural distribution of children; the children were largely from families with health insurance and may have had higher socioeconomic status or healthier family lifestyles. The prevalence of obesity in our cohort was 14.3%, which is somewhat lower than the national estimate of 16.9% reported from the NHANES conducted in 2009–2010.21 We also focused our study on children receiving well-child care, which may preferentially reflect a healthier population of children and families accessing preventive care services who perhaps are less likely to have hypertension. Finally, because BP was measured in clinic settings that were familiar to the children, as opposed to the specific atypical activity of having BP measured in school, the accommodation effect of repeated BP assessment might have been enhanced. 
BP is known to be directly related to BMI across all age groups,10,13,19 as is evident in this study, in which both prehypertension and index hypertensive measurements were directly and strongly related to higher BMI percentile. The prevalence of confirmed hypertension also increased substantially with increasing BMI, although it remained relatively low even among obese children. It is known that levels of other cardiovascular risk factors are often higher in overweight and obese children.22,23 Whereas assessment of BP is important to childhood and adolescent care, these data suggest that even in overweight and obese children, a single elevated BP measure requires careful confirmation before diagnosing hypertension. Compared with other race/ethnic subgroups, there was a lower percentage of black children with normal BP and a higher percentage with prehypertension and hypertension. Children of Asian/PI heritage also showed a higher prevalence of hypertension, but there were no significant differences in the proportion with hypertension between the other ethnic groups. There were some limitations associated with this study. First, this study examined BP measurements obtained at preventive care visits with predominantly aneroid manometers at KPCO and HPMG and with oscillometric devices at KPNC. Oscillometric systolic BP readings are known to be slightly higher than those obtained by auscultatory measurement.10 However, these methodologic differences likely had little effect on the results, given the low prevalence of hypertension overall. Also, because this was a retrospective study with clinical data collected from a large number of clinics, technique could not be monitored at each site; although each clinic measured BP in the seated position with appropriate cuff size, there was no standardized training for measuring height, weight, or BP and no standardized time interval for a return to clinic after an elevated BP. Second, most subjects were insured, and the low observed prevalence of hypertension could be related to underrepresentation of socially disadvantaged youth in this cohort. Third, a number of subjects with index hypertensive BP were missing a second or third BP measurement required for the diagnosis of hypertension (38%); however, assuming the percentage with hypertension in this group was similar to that in the group who returned for repeat measurements (ie, 3.8%), the prevalence of hypertension would increase only from 0.13% to 0.2%. Finally, although this study used consecutive BP measurements ≥95th percentile to diagnose hypertension, as recommended by the Task Force report, the period of observation was over 3.5 years. Repeating the measurements over a shorter period of time, similar to previous epidemiologic BP studies in children, may lead to an increased prevalence of hypertension. Nonetheless, it seems reasonable to suggest that a true diagnosis of hypertension should be sustainable over longer periods of observation. In summary, this study describes the prevalence of hypertension in 3 large community-based, geographically diverse pediatric practices from predominantly urban or suburban communities.
The results from data in nearly 200 000 children suggest that in community-based practices in settings similar to those in this study, the prevalence of pediatric hypertension and prehypertension may be substantially lower across a wide range of age, race/ethnicity, and adiposity status than suggested in previous studies. We acknowledge Heather Tavel, Nicole Trower, Maureen Peterson, Joel Gonzalez, and Gabriela Sanchez for their support with data acquisition and manuscript preparation.
- Accepted October 1, 2012.
- Address correspondence to Joan C. Lo, MD, Division of Research, Kaiser Permanente Northern California, 2000 Broadway, Oakland, CA 94612. E-mail:
Dr Lo supervised the team, conceptualized and designed the study, contributed substantively to the analysis and interpretation of data, drafted the initial manuscript, and approved the final manuscript as submitted; Dr Sinaiko provided substantive contribution to the conception and design of the study and analysis and interpretation of data, drafted portions of the manuscript, revised the manuscript for important intellectual content, and approved the final manuscript as submitted; Ms Chandra provided substantive contribution to the design of the study, collected the data, conducted all data analyses, revised the manuscript for important intellectual content, and approved the final manuscript as submitted; Drs Daley, Greenspan, Kharbanda, Margolis, Adams, and Prineas provided substantive contribution to the analysis and interpretation of data, revised the manuscript for important intellectual content, and approved the final manuscript as submitted; Dr Parker led the statistical analyses, contributed to the interpretation of data, revised the manuscript for important intellectual content, and approved the final manuscript as submitted; Dr Magid supervised the team and provided substantive contribution to the design of the study, analysis and interpretation of data, revision of the manuscript for important intellectual content, and approved the final manuscript as submitted; and Dr O'Connor obtained funding, supervised the team, provided substantive contribution to the conception and design of the study and analysis and interpretation of data, drafted portions of the manuscript, revised the manuscript for important intellectual content, and approved the final manuscript as submitted.
FINANCIAL DISCLOSURE: The authors have indicated they have no financial relationships relevant to this article to disclose.
FUNDING: This study was supported by the National Heart, Lung, and Blood Institute at the National Institutes of Health (1RO1HL093345 to HealthPartners Research Foundation; Patrick O'Connor, Principal Investigator) and conducted within the Cardiovascular Research Network, a consortium of research organizations affiliated with the HMO Research Network and sponsored by the National Heart, Lung, and Blood Institute (U19 HL91179-01). Funded by the National Institutes of Health (NIH).
- Din-Dzietham R, Liu Y, Bielo MV, Shamsa F
- Jago R, Harrell JS, McMurray RG, Edelstein S, El Ghormli L, Bassin S
- Sorof JM, Lai D, Turner J, Poffenbarger T, Portman RJ
- Boyd GS, Koenigsberg J, Falkner B, Gidding S, Hassink S
- Sorof JM, Turner J, Martin DS, et al
- National High Blood Pressure Education Program Working Group on High Blood Pressure in Children and Adolescents. The fourth report on the diagnosis, evaluation, and treatment of high blood pressure in children and adolescents. Pediatrics. 2004;114(2 suppl):555–576
- McNiece KL, Poffenbarger TS, Turner JL, Franco KD, Sorof JM, Portman RJ. Prevalence of hypertension and pre-hypertension among adolescents. J Pediatr. 2007;150(6):640–644
- Moore WE, Eichner JE, Cohn EM, Thompson DM, Kobza CE, Abbott KE
- Ogden CL, Flegal KM. Changes in terminology for childhood overweight and obesity. Nat Health Stat Rep. 2010;25:1–5
- Barlow SE
- Falkner B, Gidding SS, Portman R, Rosner B
- Ostchega Y, Carroll M, Prineas RJ, McDowell MA, Louis T, Tilert T
- Sinaiko AR, Steinberger J, Moran A, et al
- Freedman DS, Mei Z, Srinivasan SR, Berenson GS, Dietz WH. Cardiovascular risk factors and excess adiposity among overweight children and adolescents: the Bogalusa Heart Study. J Pediatr. 2007;150(1):12–17
- Copyright © 2013 by the American Academy of Pediatrics
Traditionally, leg lymphedema has been thought of as a primary and/or hereditary lymphedema condition. However, with cancer treatment becoming much more effective, and with so many more cancer patients not only surviving cancer but achieving a cure, more and more cases of leg lymphedema are coming to the forefront. We have long understood breast cancer as a leading cause of secondary lymphedema, but the frightening truth is that, as statistics are kept, we are finding similar ratios among cancer survivors of all types. In conjunction with this, the medical community is slowly coming to a clearer understanding of other conditions that could trigger lymphedema as well.

I hope this page will provide leg “lymphers” with information that is both helpful and that can enable them to have a better quality of life and lifestyle. Our page Your Emotions and Self Image with Lymphedema gives helpful tips and insights on facing the emotional challenge of lymphedema. For information on leg lymphedema in children, see our page on lymphedema in children.

Risk factors for lymphedema include:

1.) Lymph node removal for biopsies
2.) Deep invasive wounds that might tear, cut or damage the lymphatics
3.) Radiation treatments, especially ones that are focused in areas that might contain “clusters” of lymph nodes
4.) Serious burns, even intense sunburn
5.) Infection by the microscopic parasite, filarial larvae, though this is more common in tropical countries
6.) For primary lymphedema, any person who has a family history of unknown swelling of a limb
7.) Radiation and chemotherapy for cancer
8.) Insect bites
9.) Bone fractures and breaks

If you ask most people who are familiar with lymphedema the question, “Are you aware of secondary lymphedema?” most would reply that “yes, it is where the arm swells after the lymph system has been damaged by breast cancer biopsy and treatment.” This is called arm lymphedema. Even if they are aware that such a condition as secondary leg lymphedema exists, their response might well be that it affects only a small group of men who have prostate cancer. This shows how little awareness there is about this particular form of lymphedema. Even in the lymphedema world it is a poor stepchild.

However, if the membership of Lymphedema People and the posts in the online lymphedema support groups are an indication, this condition is increasing dramatically. The reasons for this increase are multiple. They include:

1. increased survival rates of cancer
2. improved treatment of trauma injuries that previously would have been terminal
3. increased use of antibiotics for infections and treatment for other conditions that previously might have resulted in death.

It is also important to note that secondary leg lymphedema does not necessarily start immediately after the injury or trauma. It may not start for years.

What is secondary leg lymphedema?

Secondary lymphedema is a condition where the lymphatic system has been damaged. The main job of this system is to move excess fluid through and out of our bodies. When it becomes damaged or impaired, it is no longer able to accomplish this function, and these fluids (lymph fluids) collect in the interstitial tissues of our legs. This causes leg swelling.

What causes secondary leg lymphedema?

Secondary leg lymphedema (also referred to as acquired lymphedema) is caused by, or can develop as a result of:

1.) Surgeries involving the abdomen or legs where the lymph system has been damaged. This includes any invasive surgery.
2.) Removal of lymph nodes for cancer biopsy. This applies to many types of cancer, not only breast cancer.
3.) Some types of chemotherapy. For example, tamoxifen has been linked to secondary lymphedema and blood clots.
4.) Trauma injuries, such as those experienced in an automobile accident that severely injures the leg and the lymph system.
5.) Burns - this even includes severe sunburn. We have a member who acquired secondary leg lymphedema from this.
6.) Bone breaks and fractures.
7.) Morbid obesity - the lymphatics are eventually crushed by the excessive weight. When that occurs, the damage is permanent and chronic secondary leg lymphedema begins.

What are some of the symptoms of secondary leg lymphedema?

These symptoms may include:

1.) Unexplained swelling of either part of or the entire leg. In early stage lymphedema, this swelling will actually go down during the night and/or periods of rest, causing the patient to think it is just a passing thing and to ignore it.
2.) A feeling of heaviness or tightness in the leg.
3.) Increasing restriction on the range of motion of the leg.
4.) Unusual or unexplained aching or discomfort in the leg.

There are three basic active stages of lymphedema. The earlier lymphedema is recognized and diagnosed, the easier it is to successfully treat it and to avoid many of the complications. It is important as well to be aware that when you have lymphedema, even in one limb, there is always the possibility of another limb being affected at some later time. This “inactive” period is referred to as the latency stage. It is associated with hereditary forms of lymphedema.

Latency stage:
- Lymphatic transport capacity is reduced
- No visible/palpable edema
- Subjective complaints are possible

(Reversible Lymphedema)
- Accumulation of protein-rich edema fluid
- Pitting edema
- Reduces with elevation (no fibrosis)

(Spontaneously Irreversible Lymphedema)
- Accumulation of protein-rich edema fluid
- Pitting becomes progressively more difficult
- Connective tissue proliferation (fibrosis)

Complications of leg lymphedema

1. Infections such as cellulitis, lymphangitis and erysipelas. This is due not only to the large accumulation of fluid, but also because it is well documented that lymphedematous limbs are locally immunodeficient, and the protein-rich fluid provides an excellent nurturing environment for bacteria.
2. Draining wounds that leak lymphorrhea, which is very caustic to surrounding skin tissue and acts as a port of entry for infections.
3. Loss of function due to the swelling and limb changes.
4. Depression - psychological coping as a result of the disfigurement and debilitating effect of lymphedema.
5. Deep venous thrombosis, again as a result of the pressure of the swelling and fibrosis against the vascular system. It can also happen as a result of cellulitis, lymphangitis and infections. See also Thrombophlebitis.
6. Sepsis and gangrene are possibilities as a result of the infections.
7. Possible amputation of the limb.
8. Chronic localized inflammations.
9. Pain, ranging from mild in early lymphedema to severe in late stage lymphedema.
10. Lymphatic cancers, which can include angiosarcoma; lymphoma; Kaposi's sarcoma; lymphangiosarcoma (Stewart-Treves syndrome); cutaneous T-cell lymphoma; cutaneous B-cell lymphoma; and pseudolymphomatous cutaneous angiosarcoma. See also: Primary Lymphedema and Cancer for a discussion, and Lymphatic Cancers Secondary to Lymphedema. Note: these cancers are rare and are usually associated with long-term, untreated or improperly treated lymphedema, typically occurring in stage three or four; they are quite rare in stage two.
11. Skin complications, possible in stages 3 and 4, include papillomatosis; plaques, including “cobblestone”-appearing plaque; dermatofibroma; skin tags; warts and verrucas; mycetoma skin fungus; and dermatitis. Many lymphedema patients also report increased problems with psoriasis, eczema and shingles. I would suspect this may be due, again, to the immunocompromised condition of the arm or leg afflicted with lymphedema.
12. Debilitating joint problems. This is caused by a combination of the excess fluid weight and the constant inflammatory process that accompanies lymphedema. As we have gotten older, many lymphedema patients are having total knee replacement, total hip replacement, or total shoulder replacement, while others are experiencing carpal tunnel syndrome and are having carpal tunnel surgery, or are experiencing shoulder problems associated with lymphedema and must have rotator cuff surgery.

Treatment of Leg Lymphedema

The treatment for leg lymphedema is much the same as treatment for arm lymphedema. The preferred treatment is decongestive therapy. See also manual lymphatic drainage (MLD) and complex decongestive therapy (CDT). However, there is an important exception: pneumatic compression pumps should not be used in leg lymphedema. During the course of treatment, the leg will be wrapped in compression bandages after the treatment session. Upon completion of the treatment, compression sleeves and leg garments will be prescribed.

There is one final and critical area pertaining to the treatment, control and management of lymphedema, and that is exercise. Not only is it vital for our overall health, it helps in weight control and is important for the movement of lymph fluid through our body. No matter the stage of lymphedema, underlying medical conditions or age, every one of us should have a plan for exercises for lymphedema.

Sometimes, too, the process we must go through to get our treatment covered is maddening, to say the least. You may need to learn how to file a health insurance appeal to reverse a coverage or treatment denial, and you may even have to learn how to file a complaint against your insurance company with your state insurance commissioner.

All the lymphatic drainage strokes are based on one principal motion. Research has found that the initial lymphatics open up and the lymphangions are stimulated by a straight stretch, but even more so with a little lateral motion. After these two motions, we need to release completely to allow the initial lymphatics to close and the lymph to be sucked down the channels. In this zero-pressure phase, don’t completely disconnect from the skin; just return your pressure to nothing. Also, don’t pull the skin back with you as you return; let it spring back by itself. This basic motion may resemble a circle, and is called stationary circles. All motions are based on this principle.
In orienting this motion, we always want to push the lymph towards the correct nodes, so the last, lateral stretch motion should be going towards the nodes. Think about moving water. Visualize those initial lymphatics just in the skin; stretch, opening them up, then release and wait for the lymphangions to pump the lymph down the vessel. Remember how superficial this is. If you are feeling muscle, or other tissue under the skin, you are pushing too hard.

Here are four points to remember when performing lymphatic massage:

1. Correct pressure is deep enough so that you do not slide over the skin, but light enough so that you don’t feel anything below the skin. This is about 1-4 ounces. It is very common for massage therapists trained in Swedish or deep tissue massage to apply too much pressure with lymphatic drainage massage. Sometimes it is hard to believe that something so light could be effective. Always remember: you are working on skin. How much pressure does it take to deform the skin? Almost nothing. Remember, if you push too hard you collapse the initial lymphatic.
2. Direction of your stroke is of great importance, because we always want to push the lymph towards the correct nodes. If you push the lymph the wrong way, your work will not be effective.
3. Rhythm is very important because, with the correct rhythm and speed, the initial lymphatics are opened, then allowed to shut, and then a little time is given for that lymph to get sucked down along the vessel. An appropriate rhythm will also stimulate the parasympathetic nervous system, causing the client to relax.
4. Sequence means the order of the strokes. When we want to drain an area, we always start near the node that we are draining to. Always push the lymph toward the node. Then, as we work, we move further and further away from the node, but always push the fluid back in the direction of the node. In this way we clear a path for the lymph to move, as well as create a suctioning effect that draws the lymph to the node.

Rules for MLD:

- The strokes should be made with arcing motions or half-circle motions.
- Do not slide over your skin; rather, keep your fingers in contact with your skin and stretch it gently over the underlying tissues.
- You should have NO PAIN.
- Each stroke should be done 10-15 times SLOWLY, taking about 2 seconds for each stroke.
- If redness occurs, you are pressing too hard.
- For lymphedema of BOTH legs, perform all moves on both sides.
- The best position to be in for this is seated reclined, or lying down and propped up slightly.
- Make sure you can make skin-to-skin contact for all of these strokes. They won't work when done over clothing.

1. Neck: Place the flats of your fingers on your opposite shoulder, in the triangular part just above the collarbone and next to your neck. Move your hand in an arcing motion, stretching the skin forward and down towards your chest. Repeat this on the other side.
2. Armpit: Raise your arm (on the same side as the leg in which you have lymphedema), bend your elbow, and place the hand behind your head. Place the flat of your opposite hand in your armpit. Stretch the skin in an arcing motion up towards the neck.
3. Above the waist: Place the flat of your opposite hand on the side of your body (on the side on which you have lymphedema) below the breast, but above the waist. Move your hand upwards in an arcing motion in the direction of your armpit, stretching your skin.
4. Below the waist: Place the flat of your opposite hand on the side of your body (on the side on which you have lymphedema) on or just below the waist, but above your hip. Move your hand upwards in an arcing motion in the direction of your armpit, stretching your skin.
5. Deep (diaphragmatic) breathing: Place both open palms on top of each other below the belly button. Take a slow breath in and feel your belly rise up into your hands as it expands to take in the air. Then breathe out and feel your belly sink in as the breath leaves you. As you get better at this, you can use your hands to resist your stomach slightly as you breathe in, and press in slightly with your hands as you breathe out. Don’t get dizzy. Start with only 2 or 3 breaths and work up to 10 as you get stronger.
6. Groin: Place the flat of your hand on the front of your groin, right where your underwear falls. Make a scooping motion in the groin, rolling your hand from the thumb to the little finger. Imagine that your hands are the bottom of a water wheel.
7. Back of knee: Place the flat fingers of both hands behind your knee. Perform a scooping motion up towards the body.
8. Repeat steps 3, 4 and 6 (waist and groin areas).

A very special thanks to Katy from Lymphedema Therapists.

One of the best posts on how to wrap a leg…from [email protected]

Since you have the swelling in the feet (and toes), it is probably lymphedema, perhaps compounded with lipedema. The traditional bandaging technique is with a stockinet, then some Artiflex (cotton padding), and lastly, the bandages. I bandage directly over the skin. The padding is supposed to even things out if you should constrict some part of the bandaging, causing the lymph not to flow, but the bandages are really not like rubber bands – properly spaced and overlapped, they will not cause constriction – and the Artiflex is a pain. The stockinet is just another thing to wash and dry. I went to www.bandages.com and found that they have new bandages that are thick enough to be used without layering (e.g. the stockinet and padding). Perhaps this is the way to go, or perhaps you want to bother with stockinets and padding. If you were seeing a therapist, they would also use foam instead of Artiflex (just cotton padding). Some pictures of bandaging look absolutely monstrous. My so-called therapist used some foam, etc., but I soon discovered that the leg went down more without it. The pad is supposed to “spread” the compression so there is no binding – but what really happens is all the elasticity of the bandages goes to compressing the FOAM – not compressing your leg. A little compression trickles down to the actual leg, but my experience was that the swelling went down better without the extra stuff. However, since this is against tradition, you should at least be aware if any part of your leg feels too tight, and, if so, redo the bandages (which is at least an hour for two legs – and bandages that were OK while you were up and around can suddenly become too tight in the middle of the night – which means you have to get up and do it again).

Anyway, with or without stockinet and padding, here is one technique for bandaging.

Materials for 1 large leg not grossly larger than normal (I am 5'9”, the calf measures 21”, and I have wide, swollen feet; if you are substantially larger, you may need more):

- optional: stockinet, Artiflex, foam
- 1 roll 1” professional strength masking tape
- 1 ea 3” strip of heavy padding around the ankles
- 1 ea 1” x 5 m medi-rip
- 2 ea 8 cm x 5 m short stretch bandages
- 1 ea 10 cm x 10 m short stretch bandage
- 1 ea 6 cm x 5 m short stretch bandage

Double this for 2 legs; if you are very much larger than me, add another 1 ea 10 cm x 5 m short stretch bandage for each leg.

I sit on my bed and have a low table I can rest my foot on, but two chairs will work also (one to sit on and one to put your foot on). Wrap the 3” strip of heavy padding (or chock pads) around the ankles. The figure 8's you are making around your foot and from the foot onto the leg will tend to bind right at the intersection of the foot and leg (where the 90-degree turn is made). This is the only place padding is essential. Secure it with masking tape. Secure all the bandages after they have been wrapped with masking tape. Cut a lot of 5” strips of masking tape and have them ready. Stick them on the edge of the table, or a windowsill, or something. First, hold all the bandages so that you are drawing from the bottom of the bandage cylinder (the bandages rolled up are a cylinder), not the top. A little experimentation will show you that this is much easier.

Step 1: Start with the 1” medi-rip (it is a self-cohesive bandage, but loses some of the self-cohesion with laundering). Use this tiny bandage to bandage along the toe line. That is, make the same arc that the joints of the toes to the feet make. Do not bind the toes. If you can, wrap each toe with it, but I find that this binds the toes and hurts, so I leave my toes unwrapped, even though they swell. If you start instead with the larger short stretch bandages, there will be a half moon of foot that swells even more, since if you make a straight circle from just below the little toe to just below the big toe, this will leave some area of foot not bandaged, the lymph will be pushed into this area, and it will be worse than before. The little 1” medi-rip can be wrapped in a curved path that covers all of the foot. Overlap this 1” medi-rip by 1/2 and continue winding it around your foot until you get to the end of the arch, then take it up diagonally over the top of the foot, and you will still have enough bandage to wrap again just under the toe line for a few wraps. The medi-rip has strands of elastic in an otherwise cotton strip, so pull the medi-rip tight (that is, the elastic is extended, but not to the point of discomfort). When you wrap the bandages, pull a bit at the end of each circle, but do not stretch them too hard, or with constant tension as far as they will stretch. You want them to exert a little spring, but don't strangle your legs. If you get them too tight, it will hurt, and you must undo your wrapping and redo it (a big pain). If you don't stretch them a little, they won't have much compression. Of course, it's always the bottom bandages on the feet that hurt, so you have to unwrap the whole deal to get to them.

Next, step 2: take an 8 cm x 5 m short stretch bandage and start at the tip of the foot, but do not bind any toes; since you already have the medi-rip, allow a little breathing space to make sure you don't bind the toes.
Then wind around your foot, overlapping the bandages by about 1/2 to 2/3 (I probably overlap 2/3), until you have gotten almost to the leg (your foot should be at a 90 degree angle to the leg, and for me this is 2 or 3 wraps). Then go around the heel itself and, as you come off the other side of the heel, take the bandage diagonally up on the top of the foot to just below the top of the first wrap (just under the bottom of the big toe), go around the bottom of the foot, and then bring the bandage back around the ankle just above the heel, then around the ankle, and back up diagonally across the top of the foot just like before, overlapping 1/2 to 2/3 of the previous path. This will make large figure 8s. Continue with the figure 8's, each layer a little higher around the ankle, until you again are wrapping just in front of the leg (no more space to do another figure 8), and use the rest of the bandage going in straight circles (not figure 8's) around the ankles.

Next, step 3: take the second 8 cm x 5 m short stretch bandage and start at the base of the leg (around the ankles). Go around once or twice to anchor the bandage, then on the next turn go down around the bottom of the foot close to the heel, and then around the bottom of the foot and then over and up around the leg, then continue making figure 8's up the leg, overlapping by about 2/3. To make a figure 8 around the leg: on one side of the front of the leg, the bandage is going uphill (or towards your knee); then it goes more or less straight around the back of the leg at the high end of the 8; then it goes downhill (or towards the foot) as you come across the front of the leg again; then more or less straight across the back of the leg at the low end of the 8; and then up again for the next figure 8. On me, this bandage is finished just about at the beginning of the calf (a little above the bottom of the muscle – it would be ideal if this bandage ended just before the muscle begins, but it will be a bit different for everyone depending on how much they overlap and how large their leg is).

Next, step 4: do figure 8's with the 10 cm x 10 m bandage. Begin at the bottom of the leg with the beginning of the bandage facing upward, so the first direction is a downward direction (the end pointing up), coming around and then going up again. The 10 cm x 10 m bandage should take you up to just below the knee, but if the legs are very large, you may need another 10 cm bandage. Each course of the figure 8 should overlap a little less than, or evenly with, but not more than, the previous course. The more you overlap, the greater the compression, and you must always have less compression proximally (towards your heart) than distally (towards your toes); the short pressure sketch after these instructions illustrates why.

Finally, step 5: take the last 6 cm x 5 m short stretch bandage and start at about mid-calf or a little higher, and wind in straight circles until just below and as close as possible to the knee. This last bandage gives compression over the tops of the top 8's, where there is not as much overlap, and sort of holds it all up, as the circumference of the leg is actually smaller at the knee than at the mid-calf (it doesn't slide down because a smaller circle would have to slide over a larger circumference of the leg).

I have been complimented on my ability to wrap, but it is hard to know if a novice can make much sense of my directions – but I tried.
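Why the wrap has to be tighter at the foot than at the calf comes down to simple geometry: the same bandage pull spread over a bigger circumference gives less pressure. The short Python sketch below illustrates this with the modified Laplace formula that is often quoted for sub-bandage pressure; the constant 4630, the tension, the layer count and the leg measurements here are illustrative assumptions, not figures taken from the wrapping instructions above, so treat the output as a rough guide only.

# Illustrative sketch only: sub-bandage pressure from the modified Laplace formula,
# P (mmHg) = tension (kgf) x layers x 4630 / (circumference (cm) x bandage width (cm)).
# The constant and all sample numbers below are assumptions for this example.

def sub_bandage_pressure(tension_kgf, layers, circumference_cm, width_cm):
    """Rough estimate of sub-bandage pressure in mmHg at one level of the limb."""
    return tension_kgf * layers * 4630 / (circumference_cm * width_cm)

# Same bandage, same pull, same number of layers at three levels of a hypothetical leg:
for site, circumference_cm in [("ankle", 23.0), ("lower calf", 30.0), ("upper calf", 37.0)]:
    pressure = sub_bandage_pressure(tension_kgf=0.5, layers=4,
                                    circumference_cm=circumference_cm, width_cm=10.0)
    print(f"{site:>10}: about {pressure:.0f} mmHg")

Run as written, the estimated pressure falls from roughly 40 mmHg at the ankle to the mid-20s higher up the leg, which is exactly the graduated, distal-to-proximal profile the instructions aim for, without changing how hard you pull.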
Look at some photographs of the bandaging while you are at www.bandages.com. You don't see too many photographs of the figure 8's, but they give more compression, stay up better, and bind less. You will get the general idea of winding up the leg, and of overlap, by looking at the photographs, however. It may seem complicated to follow my directions (I tried to be clear), but the real technique is not very hard at all.

The new thick bandages that do not need padding (padding is included) are KomprimED. They are located on the bandagesplus web site under bandages, then under two-way stretch bandages. I think you should start with these, as the padding may be more important for someone who is just beginning bandages. This is much simpler than all those stupid layers.

*Soft and comfortable directly on patient's skin
*Thicker texture avoids application of foam and padding in many cases
*Suitable for lymphedema and venous ulcers
*Patient-friendly application requires fewer layers
*All bandages are short-stretch/low-stretch
KomprimED 4 cm x 5 m

Otherwise, the standard short stretch bandages are Rosidal or Comprilan. I use Rosidal. The medi-rip is under the section cohesive bandages, on page 2, under the more general category bandages.

By Linda Fisher

The obstruction of the flow of lymph from a given area results in the accumulation of abnormally large amounts of tissue fluid in that area. Such an accumulation is called lymphedema. Lymphedema is not only uncomfortable; it may cause such problems as pain, infection and recurrent infection, difficulty in movement, clothing restrictions, and air travel restrictions.

Remembering that the lymph moves upward in the body toward the heart, from the fingertips in toward the heart, and from the top of the head down toward the heart, we can see that the fluid moving furthest in the body is from the lower extremities. Some causes of lymphedema of the lower extremities are congestive heart failure, trauma to the back or lower abdominal area, blockage in the groin (inguinal nodes), or blockage behind the knee (popliteal nodes).

I often use the analogy of a traffic accident on the freeway to explain movement of lymph. At the point of the accident, all traffic either stops or slows to a near halt, until the accident is cleared away, thus allowing the traffic to again flow naturally. Anatomically, at the point of blockage, everything slows down and begins to accumulate backward along the path of flow. If the feet and ankles are swollen, it generally means that there is a blockage “up ahead.”

Even in slender young people, we sometimes see signs of lymphedema in the legs. This appears as “heavy ankles” or as a little pouch of fat on the inside curve of the knee area. When present in this portion of the population, we usually find that the individual is not getting the right exercise and is eating largely the wrong foods, or just the opposite. Many joggers, tennis players, and aerobic exercise enthusiasts exercise and eat properly, but they get this problem because repeated hard impact will slow lymph movement.

In the middle-aged and senior group, we may see a different, but very common, problem - shuffling the feet instead of walking comfortably. When you cannot lift your feet to step properly, you may just accept that you probably have an “arthritic problem.” Many times, you may have a large mass of lymph fluid behind your knee that has pooled and then hardened. Imagine the pain this would cause. It would be like strapping a tennis ball behind your knee and then attempting to walk!
There is more than one cause of lymphedema in the lower extremities. The ones mentioned above are just some of the more common ones.

Tips to Avoid Blockage: Do not wear tight jeans or tight undergarments. Do not cross the knees when sitting; cross feet at the ankles instead. For the exercise enthusiast, integrate some form of slow, rhythmic exercise - yoga and Pilates are excellent, as is walking. Bouncing on a trampoline is excellent - no need to jump. Bend your knees and get a gentle bounce going for a minimum of 12-15 minutes a day. If balance is a concern, hold onto a stationary item or purchase a balance bar that attaches to your trampoline. Also, if wheelchair-bound, place your feet on the trampoline and have someone else bounce it for you - you will receive a positive benefit from this. Lie on a slant board. And, as always, drink plenty of clean water, practice deep belly-breathing, and eat plenty of fresh, unprocessed foods. Caution: In the case of congestive heart failure, be absolutely sure to check with your health care practitioner before attempting any form of exercise and, of course, no slant-boarding!

“Creating free lymphatic movement through the body is a vital part of any healing process.” Linda Fisher owns the Lymphatic Wellness Center in Santa Maria.

What can cause lymphedema of the leg? Can lymphedema of the leg become worse?

Lymphedemas are classified on the basis of their origins. Two forms of lymphedema of the leg that occur frequently are described below.

A) PRIMARY LYMPHEDEMA OF THE LEG

The cause is a congenital malformation of the lymphatic system, which results in lymphedema of the leg that often begins with peripheral edema. There is swelling of the foot and lower leg. If this goes untreated, the entire leg may become edematous. Since the patient discovers the condition only after the foot begins to swell, it is difficult to take preventive measures. Primary lymphedema can be present at birth, but it may also develop later on. The swelling usually starts during puberty. Diagnosing congenital lymphatic vessel malformation without the presence of lymphedema is very difficult.

B) SECONDARY LYMPHEDEMA OF THE LEG

- surgical severing of lymphatic vessels
- removal of lymph nodes in the groin and/or in the true pelvis
- accidental trauma to the lymph passages of the legs, e.g. when a bone is broken as the result of a strong blow to the upper thigh, etc.
- radiotherapy of the groin area, the lower abdomen, or the lower lumbar vertebrae

The result is lymphedema of the leg, which frequently begins centrally. Lymphedema then spreads relatively rapidly to the entire leg. If there is no actual edema and “only” the preconditions for lymphedema of the leg are present, the condition is termed “predisposition to edema.” At this stage it is important to take preventive measures. Although lymphedema of the leg and/or the trunk after an abdominal operation does not constitute a threat to the life of the patient, it can, according to Stillwell, “… often be the source of considerable physical and mental suffering and occasionally even cause disability.” Untreated, lymphedema will get progressively worse, and a case of mild edema can degenerate, with hardening of the tissues as a result of fibrosis or sclerosis. Moreover, long-term untreated lymphedema may lead to a form of cancer.
Just as lymphedema of the upper extremities can become a complication after surgical treatment of breast cancer, lymphedema of the lower extremities can be a debilitating condition with several cancers. Prostate, lung, liver, lymphoma, ovarian, and abdominal cancers can cause swelling of the legs. The swelling can come from any compression or surgical removal of the lymph nodes in the lower body. It can also come secondarily to production of fluid into the abdomen (ascites), which spreads into the legs. When under treatment for any cancer, if your protein levels fall to low levels, fluid will leak into your whole body, including the legs.

When you first notice swelling in your legs, you need to act to reverse it. Once you let the legs blow up to a large size, it is harder to reverse the process. This must be discussed with your doctor. The use of elastic stockings with at least 30 mmHg pressure is the first step. If the edema is only at the ankles and feet, then you only need stockings to the knee. Any medical supply store can help fit the stockings. You should read the package and measure your ankle, your calf, and the length from the knee to the heel so that you are sure that they fit you correctly. These measurements are usually listed on the box. If the edema goes up to the knee or past it, you will need thigh-high stockings. You must keep pulling these up, as the stockings fall down with wear during the day. The stockings are all hard to apply. You need someone with strong hands. Sometimes it helps to wear rubber gloves to get a better grip on the stockings. There are also leotards for edema that goes above the thigh. When you apply these stockings, they should be perfectly smooth. If you leave wrinkles, it will become painful underneath, or you can cut the circulation in that spot. The stockings should be worn through the day from when you first get up. You do not sleep with them on. At night you remove the elastic hose and elevate your legs on pillows in the bed. Try to get them above your heart.

You can wrap legs with elastic wraps. This is difficult to do correctly. The wraps should be on a diagonal. If you go in straight circles, you could end up with a tourniquet-like constriction of the leg and make the edema worse. If you develop numbness or coldness in your toes, that means that you have wrapped it too tightly. You should totally remove it and apply it again. For men, often the edema will go up into the scrotum. You should also elevate your penis at night to try to empty the water back into the abdomen. Wearing a jock strap helps support the heavy and often painful scrotal sac when you are up and about.

When the edema is not responding, you can use the external pump devices if so desired. These devices can be rented from a medical supply house. They are usually covered by insurance. After pumping you must then wear the elastic stockings until bedtime. You pump daily for 2-3 weeks to get the severe edema under control. You can also go to outpatient physical therapy or edema clinics for treatments.

When you are sitting, you need to elevate your legs during the day, or lie down at intervals with the legs elevated on pillows. Do not wear tight shoes, as any kind of constriction only adds to the edema above or below the constriction. You must also be very careful not to cut yourself or open the skin. You must immediately see an MD if you have a weeping sore. It will take careful treatment to heal it without infection developing. Sometimes antibiotics are necessary.
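Because ready-made stockings are sized from exactly those measurements (ankle and calf circumference, and knee-to-heel length), it can help to think of the chart on the box as a lookup table. The small Python sketch below shows the idea; the size chart in it is completely made up for illustration and is not any manufacturer's chart, so always size from the chart supplied with the actual product.

# Illustrative sketch only: choosing a knee-high compression stocking size from
# ankle and calf circumference in cm. The chart below is hypothetical; real
# manufacturers print their own ranges on the packaging.

HYPOTHETICAL_SIZE_CHART = {
    # size: ((ankle min, ankle max), (calf min, calf max)) in cm
    "S":  ((18, 21), (28, 34)),
    "M":  ((21, 24), (32, 38)),
    "L":  ((24, 27), (36, 43)),
    "XL": ((27, 31), (41, 48)),
}

def pick_size(ankle_cm, calf_cm):
    """Return the first size whose ankle and calf ranges both fit, else None."""
    for size, ((a_lo, a_hi), (c_lo, c_hi)) in HYPOTHETICAL_SIZE_CHART.items():
        if a_lo <= ankle_cm <= a_hi and c_lo <= calf_cm <= c_hi:
            return size
    return None  # between sizes or off the chart: ask the fitter about custom garments

print(pick_size(23.5, 37.0))  # -> "M" on this made-up chart
print(pick_size(33.0, 52.0))  # -> None

The only point is that both measurements have to land inside the same size's ranges for the stocking to deliver the pressure it promises; if they do not, a professional fitting or a custom garment is the safer route.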
Exercises like pumping your feet up and down, leg kicks, going up and down on your toes in standing will help decrease edema. A regular exercise program of walking, exercise with light weights or any kind of movement activity is also helpful. In some instances, decreasing your salt intake becomes necessary. Other precautions are to be careful with heat or ice on severely swollen legs. That includes your shower or bath water. Bathe legs with regular soaps and rinse well. If you develop athlete's foot, be sure to treat it with one of the common sprays or powders. Be careful cutting your toenail. Get treatment for ingrown toe nails. The problems are more complex when severe edema is involved. Cancer Supportive Care For the patient who is at risk of developing Lymphedema, and for the patient who has developed Lymphedema. WHO IS AT RISK? At risk is anyone who has had gynecological, melanoma, prostate or kidney cancer in combination with inguinal node dissection and/or radiation therapy. Lymphedema can occur immediately postoperatively, within a few months, a couple of years, or 20 years or more after cancer therapy. With proper education and care, Lymphedema can be avoided or, if it develops, kept under control. (For information regarding other causes of lower extremity Lymphedema, see What is Lymphedema?) The following instructions should be reviewed carefully pre-operatively and discussed with your physician or therapist. 1. Absolutely do not ignore any slight increase of swelling in the toes, foot, ankle, leg, abdomen, genitals (consult with your doctor immediately). 2. Never allow an injection or a blood drawing in the affected leg(s). Wear a LYMPHEDEMA ALERT Necklace. 3. Keep the edemic or at-risk leg spotlessly clean. Use lotion (Eucerin, Lymphoderm, Curel, whatever works best for you) after bathing. When drying it, be gentle, but thorough. Make sure it is dry in any creases and between the toes. 4. Avoid vigorous, repetitive movements against resistance with the affected legs. 5. Do not wear socks, stockings or undergarments with tight elastic bands. 6. Avoid extreme temperature changes when bathing or sunbathing (no saunas or hottubs). Keep the leg(s) protected from the sun. 7. Try to avoid any type of trauma, such as bruising, cuts, sunburn or other burns, sports injuries, insect bites, cat scratches. (Watch for subsequent signs of infection.) 8. When manicuring your toenails, avoid cutting your cuticles (inform your pedicurist). 9. Exercise is important, but consult with your therapist. Do not overtire a leg at risk; if it starts to ache, lie down and elevate it. Recommended exercises: walking, swimming, light aerobics, bike riding, and yoga. 10. When travelling by air, patients with Lymphedema and those at-risk should wear a well-fitted compression stocking. For those with Lymphedema, additional bandages may be required to maintain compression on a long flight. Increase fluid intake while in the air. 11. Use an electric razor to remove hair from legs. Maintain electric razor, properly replacing heads as needed. 12. Patients who have Lymphedema should wear a well-fitted compression stocking during all waking hours. At least every 4-6 months, see your therapist for follow-up. If the stocking is too loose, most likely the leg circumference has reduced or the stocking is worn. 13. Warning: If you notice a rash, itching, redness, pain, increase of temperature or fever, see your physician immediately. 
An inflammation or infection in the affected leg could be the beginning or a worsening of Lymphedema. 14. Maintain your ideal weight through a well-balanced, low-sodium, high-fiber diet. Avoid smoking and alcohol. Lymphedema is a high-protein edema, but eating too little protein will not reduce the protein element in the lymph fluid; rather, this may weaken the connective tissue and worsen the condition. The diet should contain easily digested protein such as chicken, fish or tofu. 15. Always wear closed shoes (high tops or well-fitted boots are highly recommended). No sandals, slippers or going barefoot. Dry feet carefully after swimming. 16. See a podiatrist once a year as prophylaxis (to check for and treat fungi, ingrown toenails, calluses, pressure areas, athlete's foot). 17. Wear clean socks & hosiery at all times. 18. Use talcum powder on feet, especially if you perspire a great deal; talcum will make it easier to pull on compression stockings. Be sure to wear rubber gloves, as well, when pulling on stockings. Powder behind the knee often helps, preventing rubbing and irritation.

Unfortunately, prevention is not a cure. But, as a cancer and/or Lymphedema patient, you are in control of your ongoing cancer checkups and the continued maintenance of your Lymphedema. Revised © January 2001 National Lymphedema Network. Permission to print out and duplicate this page in its entirety for educational purposes only, not for sale. All other rights reserved. For more information, contact the NLN: 1-800-541-3259.

Foot Care for Lower Extremity Lymphedema

The National Lymphedema Network (NLN) has been flooded with questions regarding foot and ankle care for patients with lower extremity lymphedema. Dr. Joseph Hewitson, a San Francisco podiatrist who has worked with many lymphedema patients, provided the NLN with a list of guidelines and suggestions for proper foot care for people suffering from lower extremity lymphedema. These guidelines are excerpted from the July NLN newsletter.

Be sure to trim your toenails, but not necessarily straight across. If the corners have grown into the skin, trim the offending border. If you get an infection, you should remove that side of the nail to resolve the infection. Antibiotics often will not work because an abscess (walled-off infection) has occurred. Soaking may only provide temporary relief. A lymphedema patient should never undergo a chemical matrixectomy (destroying the root growth matrix with a chemical to permanently remove the nail). Fungal nails are common in lymphedema patients and should be soaked in a 1:1 vinegar/water solution for 20 minutes, with an antifungal solution applied afterwards. Have routine foot care every three months, with a podiatrist if possible or with your physician. Meticulous nail care decreases the chance of inflammation and infection.

Taking Care of Your Toes

The inner spaces between your toes need to be kept clean and dry. Soaking in a 1:1 vinegar/water solution for 20 minutes at least once a week and running a piece of gauze between your toes to remove any debris will help keep your web spaces clean. Using a drying agent/antifungal solution like Castellani's Paint decreases the chances of irritation and infection. Applying lamb's wool (see your pharmacist) between the toes allows the web greater breathability. Open-toed compression garments will also allow greater breathability, as will breathable footwear that is fitted correctly. Dr. Hewitson says that proper footwear is very important.
He says always buy your shoes at the time of day when your foot is most swollen (usually the end of the day). If you wear a compression garment, make sure you fit your shoes to accommodate this. Good athletic shoes are excellent to wear because they are more supportive and more breathable. For very large feet, a Velcro-strap shoe is usually more accommodating. If you have painful corns and calluses, they should be routinely trimmed by a podiatrist or practitioner. Never use any callus removal pads, because they can cause burns and infections. Dr. Hewitson also says to always work with reputable practitioners who are willing to further educate themselves on lymphedema. He adds, you may be their best and only teacher.

Related article with more foot care tips: Skin Care from Tri-State Lymphedema Clinic

The skin is the body's first line of defense. It protects the body from trauma and infection and aids in temperature regulation. Therefore it is essential to keep the skin healthy. Individuals who have had any impairment of the lymphatic system are especially at risk of developing an infection. Any small cut or abrasion can allow bacteria to enter the skin, and the stagnant lymphatic fluid is a perfect milieu in which bacteria can grow.

Simple measures which will promote healthy skin:
1. Inspect the skin daily for any cracks, cuts or dry areas. Check carefully areas with reduced sensation or where there are skin folds.
2. Clean skin daily with non-perfumed soap.
3. Dry skin completely, especially the area between the toes.
4. Keep skin supple. Use a low-pH lotion such as Eucerin to keep the skin moist and pliable.
5. Check fingernails and toenails for any signs of infection, cracks, fungus, or hangnails. Do not cut nails or cuticles. Use an emery board.
6. Call your doctor at the first signs of any infection, redness or high temperature.

Foot Care for the At-Risk Patient

People who have lymphedema, diabetes or vascular disease are at risk for infections.
1. To care for corns and calluses, do not use over-the-counter medications such as Dr. Scholl's corn pads, as they contain acid. After the bath or shower, when the skin is softened, buff the skin to remove the dead skin and soften calluses.
2. Corns can develop between the 4th and 5th toes as the foot swells. Fungus can also develop, which can lead to infections. Changing to larger or wider shoes may alleviate the development of corns. Use lamb's wool in between the toes to reduce friction.
3. When you trim your toenails, round the edges to prevent ingrown toenails. Boil clippers for one minute and let them cool for one hour before using.
4. Dry your feet very well after bathing, especially between the toes. Do not use alcohol on your feet. Use a low-pH lotion.
5. If you are unable to cut your toenails, see a podiatrist regularly.

For information on nail care, see our page How To Have Healthy Nails. See also our page on Foot Care for Lymphedema.

Ann Plast Surg. 2009 Aug; Chen HC, Gharb BB, Salgado CJ, Rampazzo A, Xu E, di Spilimbergo SS, Su S. Department of Plastic Surgery, E-Da Hospital/I-Shou University, Taiwan.

Entry lesions at the toe interdigital spaces, in the setting of chronic lymphedema, are strongly associated with repetitive infective episodes which cause significant morbidity. A prospective study was designed to evaluate the outcome in 2 groups of patients affected by end-stage III lymphedema of the lower extremity, treated with the Charles procedure with or without simultaneous amputation of the toes.
At a mean 3 years of follow-up, 20% of the patients receiving elective toes amputation experienced recurrence of the infection and none required more proximal amputations. Among the patients not desiring elective toes amputation; 83% suffered multiples attacks of cellulitis and in 88% the toes were eventually amputated. The difference in the number of infective episodes between the 2 groups was highly significant. No cases of recurrent lymphedema were registered. Elective toes amputation in combination with the Charles procedure reduces recurrent cellulitis and long-term morbidity in stage III lymphedema of the lower leg. *Please note this abstract included for information purposes only. Posting it does NOT constitute an endorsement of the procedure or the rationale for the procedure.* Lower extremity glandography (LEG): a new concept to identify and enhance lymphatic preservation. Int J Gynecol Cancer. 2011 Apr Burnett AF, Stone PJ, Klimberg SV, Gregory JL, Roman JR. Division of Gynecologic Oncology, Department of Obstetrics and Gynecology, University of Arkansas for Medical Sciences, Little Rock, AR 72205, USA. [email protected] Abstract BACKGROUND: Lower extremity edema remains a major postoperative complication after inguinal lymphadenectomy for vulvar cancer. This study documents the lymphatic drainage of the vulva versus the lymphatic drainage of the lower extremity coming through the femoral triangle. METHODS: Seven patients underwent either unilateral or bilateral inguinal lymphadenectomy in conjunction with a radical vulvar resection. Preoperatively, patients had technetium-99 injected into the vulvar cancer. Isosulfan blue was injected into the medioanterior thigh 10 cm below the inguinal ligament. The femoral triangle was opened, and a neoprobe was used to locate the “hot” node bearing the technetium-99. Gentle dissection located the blue lymphatic channel and any blue lymph nodes. The blue and hot nodes were resected and submitted separately. The patients then underwent a complete inguinal lymphadenectomy. RESULTS: A total of 11 groin dissections were performed. In 9 of the 11 groins, the hot node was identified, and in 8 of the 11 groins, blue node or lymphatic channel was identified. The hot nodes were uniformly located on the superior medial aspect of the femoral triangle. The blue nodes were uniformly located on the lateral aspect of the femoral triangle just anterior to the femoral artery or vein. Three patients had hot lymph nodes containing cancer. Of those 3 patients, one had an additional node positive. None of the blue lymph nodes contained cancer. CONCLUSIONS: This procedure demonstrates the alternative lymphatic drainage of the leg versus the vulva. Larger studies are necessary to document the exclusivity of these 2 drainage systems. Preservation of the lymphatic drainage of the leg may result in decreased lymphedema. Venous dynamics in leg lymphedema. Kim DI, Huh S, Hwang JH, Kim YI, Lee BB. Division of Vascular Surgery, Samsung Medical Center, College of Medicine, Sung kyun kwan University, Seoul, Korea. To determine whether there is anatomical and/or functional impairment to venous return in patients with lymphedema, we examined venous dynamics in 41 patients with unilateral leg lymphedema. 
A Volometer was used for computer analysis of leg volume, a color Duplex Doppler scanner was used to determine deep vein patency and skin thickness, and Air-plethysmography was used to assess ambulatory venous pressure, venous volume, venous filling index and the ejection fraction. In the lymphedematous leg, volume and skin thickness were uniformly increased (126.4 +/- 21.3% and 156.9 +/- 44.5%) (mean +/- S.D.), respectively. The ambulatory venous pressure was also increased (134 +/- 60.7%) as was the venous volume (124.5 +/- 37.5%), and the venous filling index (134.5 +/- 50.5%). The ejection fraction was decreased (94.9 +/- 26.1%). Greater leg volume correlated with increased venous volume and venous filling index (values = 0.327, 0.241, respectively) and decreased ejection fraction (r = -0.133). Increased subcutaneous thickness correlated with increased venous filling index and venous volume (r = 0.307, 0.126, respectively) and decreased ejection fraction (r = -0.202). These findings suggest that soft tissue edema from lymphatic stasis gradually impedes venous return which in turn aggravates the underlying lymphedema. Limb Positioning and Movement For Lymphedema Patients Careful positioning of an affected limb when resting or sitting can help to prevent further swelling. You can also use gravity to help drain away excess fluid. Avoid standing or sitting with your legs down if you can, as this allows fluid to pool around your feet and calves. Movement of your muscles helps to push fluid around the body, so regular gentle movement can help to prevent fluid accumulating. These guidelines will help you to position your affected limb correctly Don't cross your legs when you are sitting. Don't sit with your legs down for long periods – either lie with your legs up on a pillow, or have them fully supported on a footstool. Try not to stand still for long periods of time. If standing is unavoidable, do the following exercises to stimulate the pump action of your muscles: raise yourself up on to your toes frequently to tense and relax your calf muscles; shift your weight from one leg to the other and transfer your weight from heels to toes, as if walking on the spot. Swollen leg and primary lymphoedema. Wright NB, Carty HM. Department of Radiology, Royal Liverpool Children's NHS Trust. Children who present with unilateral or bilateral swelling of the legs are often suspected of having a deep venous thrombosis. The incidence of deep venous thrombosis in children is low and lymphoedema may be a more appropriate diagnosis. Lymphoedema can be primary or secondary. In childhood, primary lymphoedema is more common and may be seen associated with other congenital abnormalities, such as cardiac anomalies or gonadal dysgenesis. Primary hypoplastic lymphoedema is the most often encountered type. It is more common in girls, especially around puberty, and is typically painless. Atypical presentations produce diagnostic confusion and may require imaging to confirm the presence, extent, and precise anatomical nature of the lymphatic dysplasia. This article describes four patients presenting with limb pain and reviews the clinical features and imaging options in children with suspected lymphoedema. Publication Types: · Case Reports PMID: 8067792 [PubMed - indexed for MEDLINE] Primary lymphedema of the leg: relationship between subcutaneous tissue pressure, intramuscular pressure and venous function. Christenson JT, Hamad MM, Shawa NJ. 
In eight patients with unilateral primary lymphedema, subcutaneous tissue and intramuscular pressure were measured in both legs using the slit-catheter technique. Venous function was assessed by venography, or Doppler or photoplethysmography. Both at rest and during exercise, subcutaneous tissue pressure was elevated in the lymphedematous leg (17.9 +/- 7.6 and 33.0 +/- 10.8 mmHg respectively) compared to healthy contralateral leg (0.4 +/- 2.6 and -0.6 +/- 3.6 mmHg; p less than 0.001). The intramuscular pressure in the anterior tibial compartment was also increased at rest and during exercise in the edematous leg (24.9 +/- 4.4 mmHg and 43.6 +/- 11.2 mmHg respectively) compared to control leg (9.6 +/- 5.6 and 25.8 +/- 10.00 mmHg). These findings suggest that derangements in both the superficial and deep lymphatic systems as well as venous dysfunction contribute to the clinical appearance of “primary lymphedema.” PMID: 4033199 [PubMed - indexed for MEDLINE] Effect of venous and lymphatic congestion on lymph capillary pressure of the skin in healthy volunteers and patients with lymph edema. Gretener SB, Lauchli S, Leu AJ, Koppensteiner R, Franzeck UK. Division of Vascular Medicine (Angiology), Department of Medicine, University Hospital, Zurich, Switzerland. The aim of the present study was to assess the influence of venous and lymphatic congestion on lymph capillary pressure (LCP) in the skin of the foot dorsum of healthy volunteers and of patients with lymph edema. LCP was measured at the foot dorsum of 12 patients with lymph edema and 18 healthy volunteers using the servo-nulling technique. Glass micropipettes (7-9 microm) were inserted under microscopic control into lymphatic microvessels visualized by fluorescence microlymphography before and during venous congestion. Venous and lymphatic congestion was attained by cuff compression (50 mm Hg) at the thigh level. Simultaneously, the capillary filtration rate was measured using strain gauge plethysmography. The mean LCP in patients with lymph edema increased significantly (p < 0.05) during congestion (15.7 +/- 8.8 mm Hg) compared to the control value (12.2 +/- 8.9 mm Hg). The corresponding values of LCP in healthy volunteers were 4.3 +/- 2.6 mm Hg during congestion and 2.6 +/- 2.8 mm Hg during control conditions (p < 0.01). The mean increase in LCP in patients with lymph edema was 3.4 +/- 4.1 mm Hg, and 1.7 +/- 2.0 mm Hg in healthy volunteers (NS). The maximum spread of the lymph capillary network in patients increased from 13.9 +/- 6.8 mm before congestion to 18.8 +/- 8.2 mm during thigh compression (p < 0.05). No increase could be observed in healthy subjects. In summary, venous and lymphatic congestion by cuff compression at the thigh level results in a significant increase in LCP in healthy volunteers as well as in patients with lymph edema. The increased spread of the contrast medium in the superficial microlymphatics in lymph edema patients indicates a compensatory mechanism for lymphatic drainage during congestion of the veins and lymph collectors of the leg. Copyright 2000 S. Karger AG, Basel Publication Types: · Clinical Trial PMID: 10720887 [PubMed - indexed for MEDLINE] Effect of sequential intermittent pneumatic compression on both leg lymphedema volume and on lymph transport as semi-quantitatively evaluated by lymphoscintigraphy. Miranda F Jr, Perez MC, Castiglioni ML, Juliano Y, Amorim JE, Nakano LC, de Barros N Jr, Lustre WG, Burihan E. 
Vascular Surgery Division, Federal University of Sao Paulo, Paulista School of Medicine, SP, Brazil. [email protected] Sequential Intermittent Pneumatic Compression (SIPC) is an accepted method for treatment of peripheral lymphedema. This prospective study evaluated the effect in 11 patients of a single session of SIPC on both lymphedema volume of the leg and isotope lymphography (99Tc dextran) before SIPC (control) and 48 hours later after a 3 hour session of SIPC. Qualitative analysis of the 2 lymphoscintigrams (LS) was done by image interpretation by 3 physicians on a blind study protocol. The LS protocol attributed an index score based on the following variables: appearance, density and number of lymphatics, dermal backflow and collateral lymphatics in leg and thigh, visualization and intensity of popliteal and inguinal lymph nodes. Volume of the leg edema was evaluated by measuring limb circumference before and after SIPC at 6 designated sites. Whereas there was a significant reduction of circumference in the leg after SIPC (p<0.05), there was no significant difference in the index scores of the LS before and after treatment. This acute or single session SIPC suggests that compression increased transport of lymph fluid (i.e., water) without comparable transport of macromolecules (i.e., protein). Alternatively, SIPC reduced lymphedema by decreasing blood capillary filtration (lymph formation) rather than by accelerating lymph return thereby restoring the balance in lymph kinetics responsible for edema in the first place. PMID: 11549125 [PubMed - indexed for MEDLINE] Long-term follow-up after lymphaticovenular anastomosis for lymphedema in the leg. Koshima I, Nanba Y, Tsutsui T, Takahashi Y, Itoh S. J Reconstr Microsurg. 2003 May;19(4):209-15. Department of Plastic and Reconstructive Surgery, Graduate School of Medicine and Dentistry, Okayama University, Japan. Over the last 9 years, the authors analyzed lymphedema of the lower extremity in a total of 25 patients, comparing the use of supermicrosurgical lymphaticovenular anastomosis and/or conservative treatment. The most common cause of edema was hysterectomy, with or without subsequent radiation therapy for uterine cancer. Among 12 cases that underwent only conservative treatment, only one case showed a decrease of over 4 cm in the circumference of the lower leg. The average period for conservative treatment was 1.5 years, and the average decreased circumference was 0.6 cm (8 percent of the preoperative excess). Thirteen patients were followed after lymphaticovenular anastomoses, as well as pre- and postoperative conservative treatment. The average follow-up after surgery was 3.3 years, and eight patients showed a reduction of over 4 cm in the circumference of the lower leg. The average decrease in the circumference, excluding edema in the bilateral leg, was 4.7 cm (55.6 percent of the preoperative excess). These results indicate that supermicrosurgical lymphaticovenular anastomosis has a valuable place in the treatment of lymphedema. Publication Types: · Case Reports PMID: 12858242 [PubMed - indexed for MEDLINE] Minimal Invasive Lymphaticovenular Anastomosis Under Local Anesthesia for Leg Lymphedema: Is It Effective for Stage III and IV? Annals of Plastic Surgery. 53(3):261-266, September 2004. 
Koshima, Isao MD; Nanba, Yuzaburo MD; Tsutsui, Tetsuya MD; Takahashi, Yoshio MD; Itoh, Seiko MD; Fujitsu, Misako MD Abstract: This is the first report on the effectiveness of minimally invasive lymphaticovenular anastomosis under local anesthesia for leg lymphedema. Fifty-two patients (age: 15 to 78 years old; 8 males, 44 females) were treated with lymphaticovenular anastomoses under local anesthesia and by postoperative compression using elastic stockings. The average duration of edema of these patients before treatment was 5.3 +/- 5.0 years. The average number of anastomoses in each patient was 2.1 +/- 1.2 (1-5). The patients were followed for an average of 14.5 +/- 10.2 months, and the results were considered effective (82.5%) even for the patients with stage III (progressive edema with acute lymphangitis) and stage IV (fibrolymphedema), but the other patients showed no improvement. Among these cases, 17 patients showed a reduction of over 4 cm in the circumference of the lower leg. The average decrease in the circumference, excluding edema in bilateral legs, was 41.8 +/- 31.2% of the preoperative excess length. These results indicate that minimally invasive lymphaticovenular anastomosis under local anesthesia is a valuable alternative to general anesthesia. Severe lower limb cellulitis is best diagnosed by dermatologists and managed with shared care between primary and secondary care. Levell NJ, Wingfield CG, Garioch JJ. Dermatology Department, Norfolk and Norwich University Hospital, Norwich NR4 7UY, U.K. Abstract Background: Cellulitis is responsible for over 400 000 bed days per year in the English National Health Service (NHS) at a cost of £96 million. Objectives: An audit following transfer of care of lower limb cellulitis managed in secondary care from general physicians to dermatologists. Methods: Review of patient details and work diaries from the first 40 months of implementation of the new model of care. Results: Of 635 patients referred with lower limb cellulitis, 33% had other diagnoses which did not require admission. Four hundred and seven of 425 patients with cellulitis were managed entirely as outpatients, many at home. Twenty-eight per cent of patients with cellulitis had an underlying skin disease identified and treated, which is likely to have reduced the risk of recurrent cellulitis, leg ulceration and lymphoedema. Only 18 of 635 patients referred with lower limb cellulitis required hospital admission for conventional treatment. Conclusions: This new way of managing suspected lower limb cellulitis offered substantial savings for the NHS, and the benefits of early and accurate diagnosis with correct home treatment for patients. Growth factor therapy and autologous lymph node transfer in lymphedema. 2011 Feb 15 Lähteenvuo M, Honkonen K, Tervala T, Tammela T, Suominen E, Lähteenvuo J, Kholová I, Alitalo K, Ylä-Herttuala S, Saaristo A. Plastic Surgery, Turku University Central Hospital, Finland. BACKGROUND: Lymphedema after surgery, infection, or radiation therapy is a common and often incurable problem. Application of lymphangiogenic growth factors has been shown to induce lymphangiogenesis and to reduce tissue edema. The therapeutic effect of autologous lymph node transfer combined with adenoviral growth factor expression was evaluated in a newly established porcine model of limb lymphedema. METHODS AND RESULTS: The lymphatic vasculature was destroyed within a 3-cm radius around an inguinal lymph node.
Lymph node grafts and adenovirally (Ad) delivered vascular endothelial growth factor (VEGF)-C (n=5) or VEGF-D (n=9) were used to reconstruct the lymphatic network in the inguinal area; AdLacZ (β-galactosidase; n=5) served as a control. Both growth factors induced robust growth of new lymphatic vessels in the defect area, and postoperative lymphatic drainage was significantly improved in the VEGF-C/D-treated pigs compared with controls. The structure of the transferred lymph nodes was best preserved in the VEGF-C-treated pigs. Interestingly, VEGF-D transiently increased accumulation of seroma fluid in the operated inguinal region postoperatively, whereas VEGF-C did not have this side effect. CONCLUSIONS: These results show that growth factor gene therapy coupled with lymph node transfer can be used to repair damaged lymphatic networks in a large animal model and provide a basis for future clinical trials of the treatment of lymphedema.
Intensive decongestive treatment restores ability to work in patients with advanced forms of primary and secondary lower extremity lymphoedema. Dec 2011
Lymphatic dysfunction in the apparently clinically normal contralateral limbs of patients with unilateral lower limb swelling. Jan 2012
These are garments that are designed to help control the swelling and should be used after you have undergone treatment to reduce the size of your arm. They are also used in conjunction with compression bandage wrapping.
Covered ICD-9-CM Edema or Lymphedema Codes
125.0-125.9 Filarial lymphedema
457.0 Post-mastectomy lymphedema syndrome
457.1 Other lymphedema (praecox, secondary, acquired/chronic, elephantiasis)
457.2 Lymphangitis
457.8 Other noninfectious disorders of lymphatic channels (chylous disorders)
624.8 Vulvar lymphedema
729.81 Swelling of limb
757.0 Congenital lymphedema (of legs), chronic hereditary, idiopathic hereditary
782.3 Edema of legs - acute traumatic edema
HCPCS Procedure Codes
Procedure: a manipulation of the body to give a treatment or perform a test; more broadly, any distinct service a doctor renders to a patient. All distinct physician services have 'procedure codes' in various payment schemes.
97001 or 97003 Initial evaluation by a physical or an occupational therapist, or an Evaluation and Management CPT code for physicians
97002 or 97004 Re-evaluation by a physical or an occupational therapist, or an Evaluation and Management CPT code for physicians
97110 Therapeutic exercises
97016 Vasopneumatic pump
97124 Massage therapy for edema of an extremity
97140 Manual therapy, manual lymphatic drainage (15 minute units)
97150 Group therapy
97504 Orthotic training/fitting
97530 Therapeutic activities, restoration of impaired function
97535 Self-care home management training, instruction on bandaging, exercises, and self-care
97703 Checkout for orthotic or prosthetic use
The items and supplies listed below are considered "incident to" a physician service and are not separately reimbursable. However, if these supplies are given to a patient as a take-home supply, the claim should be submitted to the DMERC.
A4454 Tape
A4460 Elastic bandage (e.g. compression bandage). Use this code to report compression bandages associated with lymphatic drainage (CIM 60-9, MCM 2133, ASC)
A4465 Non-elastic binder for extremity. Use for Reid, CircAid, ArmAssist, and other manually-adjustable sleeves and leggings.
Medicare jurisdiction DME regional carrier (CIM 60-9, MCM 2133, ASC)
A4490-4510 Surgical Stockings
A4490 Surgical Stockings, above knee length (each)
A4495 Surgical Stockings, thigh length (each)
A4500 Surgical Stockings, below knee length (each)
A4510 Surgical Stockings, full length (each)
A4649 Miscellaneous Surgical Supplies, compression bandaging kit
E0650-0652 Pneumatic Compressor and Appliances
E0650 Pneumatic Compressor, non-segmental home model
E0651 Pneumatic Compressor, segmental home model, without calibrated gradient pressure
E0652 Pneumatic Compressor, segmental home model, with calibrated gradient pressure
E0655-0673 Arm and Leg Appliances used with Pneumatic Compressor
L0100-L4398 Orthotics
L2999 Lower Limb Orthosis, not otherwise specified
L3999 Upper Limb Orthosis, not otherwise specified
L4205 Repair of orthotic device, labor, per 15 minutes
L4210 Repair of orthotic device, repair or replace minor parts
L5000-L5999 Lower Limb
L6000-L7499 Upper Limb
L8000-8490 Prosthetics
L8010 Mastectomy Sleeve, Ready-Made
L8100-L8239 Elastic supports
L8100-8195 Elastic Supports, elastic stockings, various lengths and weights
L8210 Gradient compression stocking, custom made
L8220 Gradient compression stocking/sleeve, Lymphedema, Custom
L8239 Gradient stocking, not otherwise specified. Carrier discretion.
Leg lymphedema with extreme inflammation. Note how slight the lymphedema is, but it still had this terrible inflammation and infection. From: Texas Wound Center
Lentis/Rare Earth Metals
What are Rare Earth Elements and Why Do We Care?
Rare Earth Elements, or REEs, form the "largest chemically coherent group in the Periodic table," and "are essential for many hundreds of applications." Used both in smaller technological devices such as cell phones, laptops and headphones, as well as in larger-scale innovations such as LCD-screen televisions, hybrid cars, and electricity-generating wind turbines, rare earth elements have become an integral component of the modern technological world. Despite what the name may imply, "rare earth elements are neither rare, nor earth," states Steven Castor, a recently retired research geologist with the Nevada Bureau of Mines and Geology. However, they occur in low concentrations and are extremely difficult to process, because significant water, acid, and electricity reserves are required and production creates radioactive and chemical waste as a by-product. Hybrid cars such as the Toyota Prius would not exist without REEs: approximately 20 to 30 pounds of these elements are used within various components of these vehicles, including the battery, motor, and generator. Additionally, there is no known substitute for europium, which is used as the red phosphor in many computer monitors and televisions, so this element must be used regardless of its difficulty in mining and processing.
Social Impacts and Relations
America's Dependency on China for REEs
American trade with China is extremely complex, and is arguably one of the most important trade relations of the 21st century. Currently, China produces approximately 97% of the world's REEs and holds almost half of the entire globe's reserves of these metals. Therefore, America's current technological standard of living is extremely dependent on China's exports of rare earth metals, not only for commercial use but also for American military superiority, since radar, smart bombs, and other high-end technologies all depend on these elements. And since China is the leading producer of REEs, it is in the favorable position of dictating distribution not only to America but to the rest of the world as well. Following the Cold War, Deng Xiaoping, China's then Communist Party leader, observed that while the Middle East had oil, China had rare earth metals--something of equal importance for the technological future. It is for this reason that Deng Xiaoping is credited as the starting point of what America calls "China's economic war of control." American analysts often tend to view China as an economic piranha as opposed to a free market. Chinese dominance, however, occurred because America's reliance on market forces failed to take into account the economic power strategy behind the manufacturing of resources. During the arms embargo of the 1990s, China began to gain a foothold in the REE industry by lowering prices. Then in the 2000s it began to take more aggressive measures towards controlling supply. Over the past several years China has imposed tariffs and quotas on its REE exports, and has recently closed its largest state-linked mining enterprise, Baotou Steel Rare Earth. Both acts have served the purpose of creating artificial shortages to drive up prices to what the Chinese government deems realistic for the mining requirements, and of further centralizing mining in China to address administrative and environmental concerns. This has alarmed the U.S.
and further intensified the trade and currency dispute between the two nations. By restricting the amount produced for export and increasing the cost, China controls the market. Its monopoly has allowed it to create a political bargaining chip for issues of power and dominance. Chinese political maneuverings have upset the West by removing it from the position of control, and from the American perspective have endangered the global economy. Chinese actions, and Western reactions, have re-established mistrust and paved the way for technological and social changes to meet the demand for rare earth metals. America is quick to point fingers at China as solely to blame for the situation. However, America was content to let China provide the raw materials and to ignore the environmental ramifications of the mining, as long as raw materials could be obtained more cheaply from Chinese state-run mining operations not held to the same stringent environmental guidelines. Arguably, America's short-term perspective on development, a result of being the predominant world power after two world wars and the Cold War, played a role in this decision to let China make decisions based on its current socialist philosophy regarding the environment: "China's leaders used to believe that humans can and should conquer Nature, that environmental damage was a problem affecting only capitalist societies, and that socialist societies were immune to it. Now, facing overwhelming signs of China's own severe environmental problems, they know better". These actions have produced long-term negatives for both America and China. American opinion toward these actions, in light of world demand expected to nearly double in the next 5 years, is part of the case against China's financial gain and its desire to stockpile raw material for its own industry. In fact, accusations have prompted the World Trade Organization to investigate this as an illegal maneuver to force more production businesses to Chinese shores, so that China does more than just produce raw goods and finance other nations. China's response is that the closure of production facilities is meant to gain more control over the nation's manufacturing industry in order to fix corruption and pollution. Whatever reasoning is applied to the actions taken, such analyses will not produce a solution. If anything, they will only continue to strain an international relationship that is key to world stability and to America's financial status as a result of prior decisions. Instead of armies, China is vying for power through resource control. While it is easy to blame China for the shortage of REEs, China feels entitled to call the shots and dictate how much is exported globally because it feels it is taking risks that no one else wants to take. It is important to note that China's production dominance in REEs is not a result of exclusive access to more minerals than other countries--REE deposits exist all over the world. Instead, it is a result of environmental standards and lower wages in China. Mining and refining these metal ores is not environmentally friendly, primarily because it is dirty, toxic and often radioactive. As illegal mining is alluring due to the high market value of REEs, mines outside state influence cause additional pollution problems across China.
Therefore, "as part of the [Chinese] government's measures to tighten its grip on the industry and steer it to a sustainable and healthy development track," the Chinese government has forced to crack down on illegal mining by introducing specialized invoices for designated rare earths producers that will make it harder for illegal miners to sell their products. As a result, reduction in production was necessary to gain control for safer production for workers and the environment. China also feels justified to restrict its REE exports because it believes that its resources have been exploited by other countries over the past two decades and fears its resources may be depleted in 20 or 30 years if exports continue unrestricted. In the early 1990s, China had 85% of the world reserves of these metals; it now only controls about a third of reserves. Thus these new restrictions are not simply "all about moving Chinese manufacturers up the supply chain to sell valuable finished goods to the world rather than lowly raw materials" as The Economist claims, but more so to ensure domestic technological stability via stable access to key resources. This move also allows China a chance to boost its economy by enacting policies which make it more economically favorable for companies to move their factories to China, but in no way forces it for survival of those companies. Chinese economic and environmental survival is paramount in these decisions. Make no mistake: it has been a series of choices, not geophysical or economic constraints, that led to Chinese dominance of REE supply. Many other countries, including the U.S. have deposits of REE they could mine themselves. However, it has simply been easier for these other countries to allow China to do all the "dirty work." Until now. Leaving China's ability to dictate market availability of REE resources should not be mistaken as Chinese declaration of an "economic" war;" China is willing to have a cooperation with America as an equal. But in order for this to happen, China desires the American respect of Chinese socialist culture, unification, and financial assistance in lieu of continued creditor behavior. . Impacts on FutureEdit China has increasingly frustrated America by undervaluing currency and trade operations, so much so that President Barack Obama publicly declared "Enough's enough," at the Asia-Pacific Economic Cooperation summit. Regarding the current perceived economical warfare between America and it's Chinese competitors, the most feasible solution is world independence from Chinese rare earth metal exports. This may be reached through diplomacy, reopening and establishing mines, and redesigning technology to be less dependent. However, the current American-Sino relationship is tense and a diplomatic solution of mutual benefit is unlikely. America wants to maintain it's power and control while China simultaneously desires more influence and reparations for sustained environmental damage through its actions. Therefore, referring to ethology, this case is an example of the power struggle to become and maintain the status of "alpha male." Western mining also presents a unique obstacle in that the restrictions that allowed American dependence on Chinese exports will need to be weakened, and American Corporations will need aid from the government to be able to produce results in a timely manner along with stockpiling of resources to counteract China. Permanent magnets made from neodymium are at the forefront of the American Government's concern. 
Unlike most rare earth metals, neodymium is genuinely rare and mainly controlled by China. Referencing past technology, American inventors are making use of Nikola Tesla's induction motor patent to redesign technologies so that they do not depend on rare earth metals. Future technology can continue to proceed in this direction to minimize and eventually remove the dependence. In conclusion, Chinese and American conflicts of interest over rare earth metals have caused global tension, a shift in policy, and a potential shift in technological exploration. By controlling exports, China is consolidating its industry and working to correct environmental problems. America is struggling to approach a solution by attempting to remove its need for rare earth metals and by revisiting policy and development. Future movements will consist of a technological shift, reminiscent of moving from fossil fuels to alternative energy, and of policy changes. Diplomatic solutions and repercussions from the continued disagreements on the world stage are uncertain at this time. Therefore, the underlying moral is that market dominance and power do not come about by pure good luck or fortune, but rather by strategy, policy, and decision making.
- Diamond, J. (2005). Collapse: How Societies Choose to Fail or Succeed. New York: Penguin Books.
The Canaan Dog is one of the oldest breeds of dogs, dating back to biblical times, when they were used for herding. Over time, this breed has proved to be very versatile due to its high intelligence as well as its obedience. He is an excellent watchdog and remains alert and wary of strangers. They are very frequently used as mine detectors and tracker dogs. They make very good companion dogs for families but are a very rare breed, with only about 1,600 of their kind left in the world.
History and Health:
- History – This breed of dogs originated in the region of Canaan, which is present-day Israel. These are among the earliest known dogs and possibly one of the first dogs to be domesticated and used for herding. Proof of the existence of these dogs has been found in rock carvings as well as excavated skeletons dating as far back as the 4th century BC. In the 1930s and 1940s, much of this dog's population was killed by the government of Israel to fight against rabies. Since then they have become a rare breed.
- Health – The Canaan Dog is an exceptionally healthy breed. However, some health problems common to all breeds can sometimes occur in these dogs too, like hip dysplasia, epilepsy and progressive retinal atrophy.
Temperament and Personality:
- Personality – The Canaan Dog is very intelligent and strong-headed. He has very strong territorial instincts, which make him wary of strangers and an excellent watchdog. He has a loud, deep bark which he uses to alert the owners if someone is trespassing. Sometimes, he can also tend to bark at completely harmless things. He is usually a great companion for kids, as he is protective towards them.
- Activity Requirements – These dogs have moderate exercise requirements. A long daily walk or jog is necessary to keep them healthy. Apart from that, these dogs love to have a space to run around in freely and play various dog games. They also love a challenge and would easily spend hours trying to perfect an activity or task that their trainer expects of them.
- Trainability – It is easy to train these dogs since they are intelligent creatures and quick learners. However, they are also very independent and therefore require a trainer with a strong personality who can establish himself as the leader.
- Behavioural Traits – Since the Canaan Dog is such an ancient breed, it is used to the 'pack culture' and shows several traits of being part of a pack even when he is not. He will always be gentle towards small kids, as he will consider them part of his pack, and he would not tolerate interference from another dog or even kids from other families. He is also a very vocal dog and, if not trained properly, he can bark his head off at completely harmless things.
Appearance and Grooming:
- Appearance – The Canaan Dog does not really have a unique appearance but is rather the basic, primitive dog, and looks like one. In fact, he is said to have some physical features of sheepdogs, dingoes, border collies as well as greyhounds. They have a glossy double coat which is generally white or light brown, golden or golden-red, but can also be a solid black colour.
- Size and Weight – It is a typical medium-sized dog with a square body. The body weight varies between 18-25 kgs.
- Coat and Colour – The Canaan Dog is native to the desert region of the Middle East and therefore has a double coat to protect it from the extremely hot temperatures of the region. The undercoat is short and soft while the outer coat is harsh but short.
The colour of the coat can be black, varying shades of brown, golden, golden-red, cream or pure white. If it is a light-coloured coat, it will generally have markings of a different colour.
- Grooming – The coat needs to be brushed on a weekly basis to keep it smooth and free of debris. However, during shedding season, the coat should be brushed twice a day to free it of loose dead hair. Other general grooming activities, like trimming the nails, also have to be performed.
- Body Type – The Canaan Dog has a strongly built body which can endure extreme temperatures. The body is well muscled, as the breed is basically a working dog participating in many activities like herding, guarding, and search and rescue. Its neck is gracefully arched and the legs are straight.
- He is very intelligent and quick to learn things.
- He is good with small kids as he is gentle and docile.
- He is an excellent watchdog.
- He is friendly with other animals if properly socialized.
- He is wary of strangers and is generally aloof towards them.
- He has very high barking tendencies.
- He requires a lot of physical as well as mental stimulation.
- They are below-average shedders but will shed heavily twice a year.
- His trainer needs to be a strong and confident leader.
- He can be aggressive towards other dogs if his 'pack leader' traits kick in.
Tasty Tidbits: This dog loves to dig, especially when he is bored. If he is let loose in a garden or yard without supervision, he may dig up the whole place in a single day.
To keep the dog healthy, it is necessary to provide him ample opportunity to exercise as well as space to play in. Also, for the Canaan Dog, it is necessary to socialize him at a young age by making him meet new people and new dogs, so that he does not become aggressive or shy towards strangers at a later stage. A healthy diet of dry food of 1.5-2.5 cups of kibble, divided into two meals a day, is recommended for the Canaan Dog.
Information and Facts about the Canaan Dog:
- The full name of this dog is Canaan Dog.
- It is also known as Kelev K'naani.
- Its country of origin is Israel.
- It is a medium-sized breed of dog.
- It belongs to the Southern Pariah or Herding group of dogs.
- The life span of this dog is 12-15 years.
- The height of a male dog is 50-60 cms while that of a female dog is 45-55 cms.
- The average weight of the Canaan Dog is 18-25 kgs, with the female dogs weighing slightly less than the male dogs.
- The colour of this dog varies from solid black to golden to golden-red, brown and cream.
- The price of a Canaan Dog puppy is approximately $700-$1200.
- The Canaan Dog has high activity requirements and needs both mental and physical stimulation.
- It is a very versatile dog and can adapt to any kind of work activity quickly, including herding, hunting, dog sports and even daily household chores.
Type to Learn Jr. New Keys for Kids
REPORT CARD
Overall Rating: 5 Stars
Ease of Use: A
Company: Sunburst Technology, 1550 Executive Drive, Elgin, IL 60123; Phone: 888/492-8817; Internet: http://store.sunburst.com/.
Price: $29.95—home version; $69.95—school edition, single copy; $799.95—network or unlimited site license. Additional licensing options are available.
Audience: K-2nd grade students, in school or at home.
Format: Mac/Win CD-ROM or online download; animation and sound.
Minimum System Requirements: Windows systems: PC with a 486 processor, Windows 95, 8 MB RAM, sound card, and CD-ROM drive. Macintosh systems: 68040 processor, Mac OS 8.6, 8 MB RAM, sound card, and CD-ROM drive. OS X compatible.
Description: Type to Learn Jr. New Keys for Kids is intended to introduce children to keyboarding by teaching them some of the basics, including correct posture, home row keys, numbers, and so on. The interactive sessions feature pleasant character animations and sound. The program contains a record-keeping component for teachers.
Installation: Installation of a downloaded version of the program was easy, with clear directions. Installation Rating: A
Content/Features: Keyboarding is an essential computer literacy skill that research indicates can be taught formally with some success to 3rd grade and older students. However, many useful tasks for students younger than grade 3 require keyboard familiarity, which can take some time to develop. Some knowledge of letters, numbers, symbols, and special keys can make simple word processing tasks much easier for young students. Increasingly, early elementary teachers in our district want their students to type simple sentences and paragraphs. Some of them have unrealistic expectations of the students, thinking they should be able to keyboard much faster than most can, use clip art, or type a perfectly spelled paragraph, etc. Some students do learn to keyboard fairly rapidly by 2nd grade but, for most, the task is painfully slow and adds a level of difficulty to any writing task. Teachers who display paragraphs written by students on computers often end up doing more of the work than the students, or end up doing it with the help of an aide. Basically, the less time students take looking for a letter key or trying to figure out how to make a capital letter, the faster they can do age-appropriate tasks like typing a few sentences in 1st grade, or a short paragraph in 2nd grade. Type to Learn Jr. New Keys for Kids offers well-chosen skills for student practice. The activities help students learn correct posture, the home row keys, and how to type words with capital letters and short sentences with punctuation. Students learn to locate letters of the alphabet and numbers on the keyboard. They learn to use the Shift key to type upper-case letters; they learn to type short words and sentences. The words and sentences have been selected with the small size of many K-2 students in mind. The students also learn to type simple punctuation and use the space bar and the Return/Enter key. Learning to keyboard involves a lot of repetition, which can be boring. New Keys for Kids helps keep students on task with pleasant animated characters (Sunbuddies) and sounds, as well as very clear instructions and demonstrations, including color-coded keyboards. Basically, skills are taught through three on-screen activities located in three different buildings on a cartoon street.
The activities begin with a very brief explanation of proper posture and hand placement when keyboarding. Afterward, students can choose an activity and begin with some preliminary practice or go to the actual activity. In Tiny's Multiplex, students practice home row keys by typing the missing letters of matinee movie titles on the theatre's marquee. Here, students practice using the Shift key to make capital letters. This brings me to one of the few criticisms I have of the program. The explanation of what the Shift key is and how to use it is brief enough that nearly half of the 100 first graders I had try this program missed it. Once shown the information by an instructor, however, their practice was excellent. In Cassie's Grocery Store, students practice typing number keys. They help Cassie at her checkout register by supplying the numbers in grocery item prices. The third activity is located in the Sunbuddy Cyber Cafe. Here, students type Internet and e-mail addresses, becoming familiar with some of the symbols. They also practice typing short e-mail messages. New Keys for Kids has teacher management tools, including an automatic record-keeping feature that tracks student progress. Teachers can create and modify classroom lists or print student progress reports. The keyboard display style can be modified; teachers can select practice or play modes for students. Among several other features is the ability to pre-select student practice time and activity sets. These days, a variety of equipment—new and older—can be found in our schools. New Keys for Kids is very robust and works well on cranky older computers, as well as on newer ones. A demo version of Type to Learn Jr. New Keys for Kids is available for online download. Content/Features Rating: A Ease of Use: This program is extremely easy to use. It is intuitive and has excellent instructions and directions. Even my 1st graders experienced very few problems using it. Ease of Use Rating: A Product Support: A small number of technical support questions can be searched on the company's Web site. Support is also available using a toll-free telephone number. The program is so easy and trouble-free to use that I doubt many users will need assistance. Product Support Rating: A Recommendation: Type to Learn Jr. New Keys for Kids is an excellent program. I highly recommend it for anyone, for school or home use, who wants to help K-2 students learn to keyboard a bit faster until they are ready to begin more formal keyboard training. Highly recommended. Reviewer: Charles Doe, Media Specialist, Hastings Area Schools, Central Elementary School, Hastings, MI; charliegd[at]iserv.net.
In category theory, a branch of mathematics, a subobject is, roughly speaking, an object that sits inside another object in the same category. The notion is a generalization of concepts such as subsets from set theory, subgroups from group theory, and subspaces from topology. Since the detailed structure of objects is immaterial in category theory, the definition of subobject relies on a morphism that describes how one object sits inside another, rather than relying on the use of elements.
In detail, let A be an object of some category. Given two monomorphisms
- u: S → A and
- v: T → A
with codomain A, say that u ≤ v if u factors through v — that is, if there exists w: S → T such that u = v ∘ w. The binary relation ≡ defined by
- u ≡ v if and only if u ≤ v and v ≤ u
is an equivalence relation on the monomorphisms with codomain A, and the corresponding equivalence classes of these monomorphisms are the subobjects of A. If two monomorphisms represent the same subobject of A, then their domains are isomorphic. The collection of monomorphisms with codomain A under the relation ≤ forms a preorder, but the definition of a subobject ensures that the collection of subobjects of A is a partial order. (The collection of subobjects of an object may in fact be a proper class; this means that the discussion given is somewhat loose. If the subobject-collection of every object is a set, the category is well-powered.)
To get the dual concept of quotient object, replace monomorphism by epimorphism above and reverse arrows.
In the category Set, a subobject of A corresponds to a subset B of A, or rather the collection of all maps from sets equipotent to B with image exactly B. The subobject partial order of a set in Set is just its subset lattice. Similar results hold in Grp, and some other categories.
Given a partially ordered class P, we can form a category with P's elements as objects and a single arrow going from one object (element) to another if the first is less than or equal to the second. If P has a greatest element, the subobject partial order of this greatest element will be P itself. This is in part because all arrows in such a category will be monomorphisms.
- Mac Lane, p. 126
- Mac Lane, Saunders (1998), Categories for the Working Mathematician, Graduate Texts in Mathematics, 5 (2nd ed.), New York, NY: Springer-Verlag, ISBN 0-387-98403-8, Zbl 0906.18001
- Pedicchio, Maria Cristina; Tholen, Walter, eds. (2004). Categorical foundations. Special topics in order, topology, algebra, and sheaf theory. Encyclopedia of Mathematics and Its Applications. 97. Cambridge: Cambridge University Press. ISBN 0-521-83414-7. Zbl 1034.18001.
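A short worked argument for the claim in the text that two monomorphisms representing the same subobject have isomorphic domains, using only the definitions given above. Suppose u ≤ v and v ≤ u, witnessed by morphisms w and w′ with
$$u = v \circ w, \qquad v = u \circ w'.$$
Substituting the second equation into the first gives
$$u = u \circ (w' \circ w) = u \circ \mathrm{id}_S.$$
Since u is a monomorphism it can be cancelled on the left, so $w' \circ w = \mathrm{id}_S$; by the symmetric argument $w \circ w' = \mathrm{id}_T$. Hence $w : S \to T$ is an isomorphism, which is exactly why equivalent monomorphisms into A have isomorphic domains.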
AutoFS is an automounter of storage devices for Linux and UNIX operating systems. An automounter mounts a directory only when it is needed, i.e. when something tries to access it. After some time of inactivity, the filesystem is unmounted. The automounter's main file is /etc/auto.master (sometimes auto_master, mainly on Solaris):

/misc /etc/auto.misc
/net -hosts
+auto.master

The interesting parts are the first two entries. The first part of an entry specifies the root directory which autofs will use for the mount points. The second part specifies which file contains the mount point information. For instance, look at the second entry: the /net directory will be used as the root directory and /etc/hosts will be used as the file which contains the mount point information. That means autofs will make available, under /net, all the NFS exports of all the hosts specified in /etc/hosts.
Let's say now that you want to mount, on demand, specific NFS exports from your file server, and that you want to mount them under /mount/nfs. First, you'll need to create the file that will contain the mount point information. The file could be /etc/nfstab or whatever you like. You can specify the entries in the following, easily understandable format:

music -rw 192.168.1.10:/exports/music
photos -ro 192.168.1.10:/exports/photos
apps -rw,nosuid 192.168.1.10:/exports/apps

You can specify no options at all, or as many as you need; the options that apply to 'mount' apply to AutoFS as well. Once you have created the map file, you need to add it to /etc/auto.master, which should then look like this:

/misc /etc/auto.misc
/net -hosts
/mount/nfs /etc/nfstab
+auto.master

The next step is to restart the autofs daemon. Having done so, you should be able to access the three shares. Note that they may not be displayed under the directory unless you try to access them.
You can also use the automounter to mount non-network filesystems. A look in /etc/auto.misc gives an example:

cd -fstype=iso9660,ro,nosuid,nodev :/dev/cdrom

This will mount the CD/DVD drive under /misc when a user, or a service, tries to access it.
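To sanity-check a new map like the one above, the usual sequence is to restart the automounter and then touch one of the configured paths, since the entries only appear once they are accessed. A minimal check, assuming the /mount/nfs map shown above and a Red Hat-style init system, might look like this:

service autofs restart
ls /mount/nfs/music
mount | grep /mount/nfs

The first access (the ls) is what triggers the NFS mount; the last command simply confirms that the share now appears among the mounted filesystems.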
We have a Dell PowerEdge R900 server with two 300GB SAS disks forming a RAID 1. There was a need for around 2TB of space, and for that reason we bought 2 x 1TB SAS disks and attached them to the machine. I expected that Linux (Scientific Linux 5.2) would automatically see the disks and display them in the "fdisk -l" output. Unfortunately, that didn't work. Checking with "dmesg", the disks had been detected (along with the PERC controller, of course):

megasas: FW now in Ready state
scsi1 : LSI SAS based MegaRAID driver
Vendor: SEAGATE Model: ST3300555SS Rev: T211 Type: Direct-Access ANSI SCSI revision: 05
Vendor: SEAGATE Model: ST3300555SS Rev: T211 Type: Direct-Access ANSI SCSI revision: 05
Vendor: SEAGATE Model: ST31000640SS Rev: MS04 Type: Direct-Access ANSI SCSI revision: 05
Vendor: SEAGATE Model: ST31000640SS Rev: MS04 Type: Direct-Access ANSI SCSI revision: 05
usb 1-7: new high speed USB device using ehci_hcd and address 3
Vendor: DP Model: BACKPLANE Rev: 1.06 Type: Enclosure ANSI SCSI revision: 05
Vendor: DELL Model: PERC 6/i Rev: 1.11 Type: Direct-Access ANSI SCSI revision: 05

I was a bit puzzled as to why the system didn't show the disks even though they had been detected. Checking under /proc/scsi/scsi just confirmed that the disks weren't available to the system at all:

cat /proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 00 Id: 32 Lun: 00 Vendor: DP Model: BACKPLANE Rev: 1.06 Type: Enclosure ANSI SCSI revision: 05
Host: scsi0 Channel: 02 Id: 00 Lun: 00 Vendor: DELL Model: PERC 6/i Rev: 1.11 Type: Direct-Access ANSI SCSI revision: 05

After a bit of Googling I came across a forum post saying that the disks have to be built into an array in order to be available to the system. That actually means that the disks have to be online and configured on the PERC controller, and then the controller will make them available to the system. The next step was to reboot the machine and run the disk configuration utility. While in the utility, the steps for creating a RAID 0 array concatenating the disks were:
– Select the right PERC controller
– Check the disks' status on the Physical Disk Management page (the status indication for the new disks was "OFFLINE", which explained why the disks weren't accessible)
– Return to the Virtual Disk Management page
– Select Controller 1 from the top
– New Virtual Disk
– Select the two available hard drives
– Check available space
– Specify a name
– Select the stripe option
– Return to the VD Management page
– Exit the utility
– Reboot the machine
– Job done 🙂
Then "fdisk -l" would display the new /dev/sdb device with 2TB of free space. Just to confirm that everything was there:

cat /proc/scsi/scsi
Attached devices:
Host: scsi1 Channel: 00 Id: 32 Lun: 00 Vendor: DP Model: BACKPLANE Rev: 1.06 Type: Enclosure ANSI SCSI revision: 05
Host: scsi1 Channel: 02 Id: 00 Lun: 00 Vendor: DELL Model: PERC 6/i Rev: 1.11 Type: Direct-Access ANSI SCSI revision: 05
Host: scsi1 Channel: 02 Id: 01 Lun: 00 Vendor: DELL Model: PERC 6/i Rev: 1.11 Type: Direct-Access ANSI SCSI revision: 05

And the last thing was to create a huge filesystem and mount it on the system 🙂
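That last step isn't shown in the post; on Scientific Linux 5 it would typically have looked something like the following (an illustrative sketch rather than the exact commands used -- it assumes the whole 2TB virtual disk becomes a single ext3 partition mounted under a hypothetical /data directory):

fdisk /dev/sdb                   # create one primary partition spanning the whole disk
mkfs.ext3 /dev/sdb1              # build the filesystem
mkdir /data
mount /dev/sdb1 /data
echo "/dev/sdb1  /data  ext3  defaults  1 2" >> /etc/fstab   # make the mount survive reboots

After that, the new space shows up in "df -h" like any other mounted filesystem.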
LCFG is a system aimed at automating the installation and configuration of large numbers of UNIX systems. It is particularly suitable for rapidly changing environments with many different configurations. The initials that make up its name come from Local ConFiGuration. Its development started in 1993 in the Computer Science department of the University of Edinburgh, by Paul Anderson. That first version worked only on Solaris. Over the following years, Alistair Scobie created a port of LCFG for Linux with a completely new installation system using RPM packages. LCFG was thus used to set up machines running Red Hat Enterprise Linux at first, and Fedora Core later. In the years that followed, more and more people contributed to LCFG, mostly from within the University. In the last year, LCFG was ported to the Scientific Linux 5 environment, which is essentially Red Hat Enterprise Linux 5 recompiled.
Each machine controlled by LCFG has a profile on a central server. Each profile has the machine's hostname as its filename and includes a number of headers, in the same way as is done in C. Each header describes aspects of the system's configuration, for example the machine's model, the fact that it is a web server, and so on. Using LCFG components, each machine can have its own separate settings, which are controlled by LCFG.
A daemon on the central server generates XML files from each profile and then publishes them on a web server, from which each machine fetches its profile. Every change to a profile will produce a new XML file, thereby changing the corresponding settings on the corresponding machine. Each LCFG component also comes with a number of scripts which are installed on the client according to the subsystem that will be installed (e.g. MySQL server, web server, DNS server). Each component is notified when a resource related to its functions changes, and it then updates the system it runs on accordingly. Conversely, if changes are made to the system without being declared through the profile, the corresponding component will restore the settings as they exist in the profile. One LCFG component is responsible for which packages are installed on the system. It checks which packages are installed against a list containing the packages that should normally be installed. If one has been removed without this being declared in the profile, it will automatically be installed again. Exactly the opposite will happen if a new package is installed without having been declared in the profile. New machines can be installed automatically using a boot CD or via PXE. For the installation to take place, a profile must exist for the new machine. Thus, at the end of the installation, the system can be ready and configured to provide a series of services that would take hours to set up if it were done manually.
What systems are supported
Today, LCFG supports Fedora Core 6, Scientific Linux 5, Mac OS X and Solaris 9. Most of the work and support is for Scientific Linux.
Before the profiles are published, they are compiled by a C preprocessor, and if no errors are found they are published. The use of the C preprocessor makes the syntax of the profiles easy to understand. For example, say I want to declare that a new machine is a Dell PowerEdge 2950 server. If we assume that a header already exists with the necessary settings for the server's hardware, all that remains to be declared in the new profile is:

#include <poweredge2950.h>

For configuring the eth0 interface or adding a new group to /etc/group, the "network" and "auth" components are used respectively; these are two of the LCFG components, followed by the resources to which we want to give values.
In 2006, the Google Summer of Code sponsored the ZFS-FUSE project, which aims to bring Sun's ZFS to Linux. Because of the incompatibility between Sun's CDDL license, under which ZFS is distributed, and the GPL license of the Linux kernel, ZFS can't be ported at the kernel level. The workaround is to run the filesystem in userspace with FUSE. A month ago, version 0.5.0 of zfs-fuse was released. So, how can you get zfs-fuse running on your Linux? In order to compile and use zfs-fuse, you need the following:
– Linux kernel 2.6.x
– FUSE 2.5.x or greater
– libaio and libaio-devel
– zlib and zlib-devel
– glibc version 2.3.3 or newer with NPTL enabled
For compiling the code:

cd zfs-fuse-0.5.0
cd src
scons

If all goes fine, then proceed with the installation:

scons install install_dir=/installation/target

If you don't define a specific directory to install into, the binaries will be placed in /usr/local/sbin.
If you install the binaries in a different directory, don't forget to add that directory to your $PATH. Once the installation is finished you can start the zfs-fuse daemon. However, before starting the daemon you'll need to create the /etc/zfs directory:

mkdir /etc/zfs

That directory used to be created automatically in previous releases, but apparently not in 0.5.0. The directory is used to store the zpool.cache file, which contains the information about your pools. Having created the directory, what is left is to start the zfs-fuse daemon by simply running the command 'zfs-fuse'. You can now create your first pool by issuing the command:

zpool create zfsRoot /dev/sdb

You can replace "zfsRoot" with whatever name you like for your pool; it will then be created under /. The device at the end of the command is the disk you want to start using for your ZFS pool. If you want to add /dev/sdc as another disk to the pool, run:

zpool add zfsRoot /dev/sdc

With the pool set up, new filesystems can be created. Creating a filesystem with ZFS is as easy as this:

zfs create zfsRoot/filesystem1

Data can now be written to the new ZFS filesystem. You have probably noticed that no capacity has been defined for the filesystem. Left as it is, it will use as much space as it needs and as much as the pool offers. If you want to avoid that, you can set a quota:

zfs set quota=10G zfsRoot/filesystem1

This will set the maximum capacity of the filesystem to 10 GB. So far so good. What happens if the machine gets rebooted? You'll need to do some manual work. Having created the /etc/zfs directory, everything should be alright. The process should be the following:
– Start the zfs-fuse daemon
– Import the existing pools
– Mount the existing filesystems, e.g.:

zfs mount zfsRoot/filesystem1

Then everything should be back in place. The problem is that doing this manually every time can cause problems and definitely doesn't save any time. I have written a script which starts and stops the zfs-fuse daemon and mounts any existing filesystems. In order to mount the filesystems, the script reads them from /etc/zfstab, which specifies the filesystems in the normal ZFS format 'zfsRoot/filesystem1'. Depending on the host operating system, the script could be configured as a startup script so you won't have to run it manually at all. The script is setZFS.
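The setZFS script body itself is not included in the post; a minimal sketch of what such a helper might look like is below. It is a hypothetical illustration rather than the original script, assuming the /etc/zfstab layout described above with one pool/filesystem name per line:

#!/bin/sh
# setZFS-style helper (illustrative sketch, not the original script):
# starts/stops the zfs-fuse daemon, imports pools and mounts the
# filesystems listed in /etc/zfstab (one "pool/filesystem" per line).
ZFSTAB=/etc/zfstab

start() {
    zfs-fuse                     # start the userspace daemon
    sleep 2                      # give it a moment to come up
    zpool import -a              # re-import any pools found on the disks
    while read fs; do
        [ -n "$fs" ] && zfs mount "$fs"
    done < "$ZFSTAB"
}

stop() {
    zfs unmount -a               # unmount all ZFS filesystems
    pkill zfs-fuse               # stop the daemon
}

case "$1" in
    start) start ;;
    stop)  stop ;;
    *)     echo "Usage: $0 {start|stop}"; exit 1 ;;
esac

Dropped into /etc/init.d/ (or wherever the host OS expects init scripts), something like this removes the manual start/import/mount cycle after every reboot.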
One of our servers, a Dell PowerEdge 1850 which was running CentOS 4.5 with Xen hosting a couple of virtual machines, started flashing one of its front panel lights, next to the hard disks, and beeping as well. The machine was under a three-year warranty, so I thought it best to contact Dell UK Server Support. The e-mail conversation follows below, with some comments from my side between the messages. My initial message:
Our PE1850 server has an amber flashing light next to the second hard disk, on the right. There's also an amber flashing light on the back of the server. The machine is in service so I can't perform any of the Dell diagnostic tests. If there's no other way to gather information about the problem, then I'll arrange to do so.
The first response from Dell:
With regards to the drive failure on the system in question, in order to determine the root cause of the drive failure on the system we will need to obtain a log from the RAID controller. Attached to this email is a program that allows us to grab the hardware log from the RAID controller. This can be done within the operating system so there will be no need to down the system in order to do this. If you could reply with the output logfile we will examine it and determine the next action.
From that reply, what I understood is that the technician was indicating a hard disk failure. Because I hadn't mentioned anything like that, I made it clear in my next e-mail. Also, the file attached was called Creating and Using the LSI Controller Log (TTY Log).oft -- not an executable to my eyes:
I have to mention that there's no indication whether it's a hard disk failure or something else. The disks seem to be working fine so far. We have a few VMs on one of them and none of them have reported any problems. How do I run this binary file in Linux? Making it executable and trying to run it doesn't work.
After a phone call from Dell's support, we solved the misunderstanding and they sent me the right tool to get the required logs off the server and send them to Dell:
As per our telephone conversation please reply to this mailbox with the controller log file.
During the phone call, I asked the friendly Dell support guy to send an engineer to have a look and replace the faulty disk. So:
As discussed you will not be able to receive an engineer till next week. As a result please reply to this email closer to the date you want the service call to take place.
An hour and a half after I received that e-mail, I received another one from another Dell support guy in response to the server's logs:
This is not a hard disk problem, this is referring to another problem. I read through your logs, and the good news is that your array is in good health. The normal cause would be not having both power leads attached. If this isn't the case, we will need to obtain the logs from the onboard management chip. This will enable us to see exactly why the light is flashing. Have you installed server administrator on the machine?
I'm glad to hear that there's no disk array failure, and I guess it's good that we didn't close the ticket and send a technician... I'm not aware if server administrator is installed, but as far as I can see (by open ports) it's not running. Is there any quick way to figure it out? If it's not installed, is there a way to gather information without running the diagnostics?
I had to ask those silly questions as I wasn't really familiar with Dell's server administrator tool. In response to that e-mail, there's a third guy replying, bringing some light with his e-mail:
I went through the controller log file you sent to us previously one more time and although it indeed lists the 2 HDDs in an online state, that was true when the controller first started - around the 14th of July. Since then there is a message stating that disk ID0 failed on the 2nd of November. There were no errors logged prior to the disk failure so it is not clear if the disk itself is faulty or not. In either case we must first check if the disk needs to be replaced by running some diagnostics on it. If all diags pass then we can rebuild the disk back into the array. If it turns out that the disk is faulty, may I note that according to our records the server was originally shipped with 2x73GB Seagate HDDs whereas presently there are 2x300GB Fujitsu drives in the system. If the drives were purchased from Dell then we will need a Dell order number for these 2 drives before we can replace them. If you had the drives purchased from a 3rd party then you will have to replace them yourself. Please follow the procedure below:
1. First, what we need to do is make sure the drive is seated properly in its slot. I suggest you remove the drive for 1-2 minutes and then reinsert it back into the system. Doing that alone may force the drive to start rebuilding. Monitor the LEDs on the drive and see if there is any activity on it (blinking green LED) after inserting. If so, the drive is rebuilding and you should check the status of it in 2-3 hours. If after several hours the LED on the drive turns green and the LED on the back of the server turns blue, then the drive has successfully rebuilt back into the array. You can leave it at that or proceed with the diagnostics below in case you want to be certain the drive is OK.
2. In order to run diagnostics on the disk after reseating it, you can either run your Dell 32-bit diags from a bootable CD: (...) or try running Dell PEDiags from within Linux: (...) Run the extended diagnostics on disk ID0 (or both) and let us know if any errors occur. If any of the disks fail, please make sure you have your Dell order number when you contact us again so that we can book you a replacement drive.
Before following his advice and removing the hard disk in order to force the array to rebuild, I migrated all of the running virtual machines to another Xen server. I then installed Dell's server administrator tool and ran the diagnostics:
I have installed pediags and run the diagnostics on all the devices except the NICs, as they would be disconnected. There were no errors reported at all and I was wondering if I should proceed with the rebuilding of the array? All the hardware we have in Dell machines is ordered directly from Dell; the same goes for those hard disk drives.
Then there's a fourth guy replying:
As long as the diags passed I would suggest proceeding with the rebuild.
And so I did, following their previous instructions. The results were sent with my next mail:
I forced the server to rebuild as you instructed me. I took out one disk, kept it out for 1-2 minutes and then re-inserted it. The only lights were, and still are, flashing amber (on the disk itself), and the status LED is still flashing red, the same as the flashing light at the back of the server.
And then guess what? There's a fifth guy replying to my last e-mail:
It looks as though this drive is going to need to be replaced. In the CTRL-R we would not be able to verify the drives. Reseating while up should have caused the drive to try and rebuild. In order to get a replacement drive out to you we are going to need a few details. Can you please supply us with two contact people onsite and their phone numbers, as well as the complete physical address of the server including the post code. Will you also let us know if you are happy to fit the new drive on your own or would you prefer an engineer onsite to replace the drive? Thank you for running through these tests with us.
Starting to get pissed off:
How will you replace a hard drive when you don't know whether it's faulty or not? Both HDDs had green lights but the system's LED was flashing amber. When I ran the diagnostics I didn't receive any errors from the array or any other part of the hardware. Based on the system's logs, another Dell support person suggested rebuilding and I did so by taking out one of the hard disks.
Then I had the third guy calling me back and then replying:
As discussed on the phone, there was a misunderstanding in the first mail sent, and this has propagated itself throughout this mail thread.
There was a small error several months ago on one of the disks, but this can be ignored as the reason for your current situation. You have proven this by successfully rebuilding the array. The onboard management LED is flashing to indicate an error. We can pull the onboard management logs using server administrator, and the Dell diagnostics should also access this. You are going to check that both power leads are attached and, if so, run diagnostics to pull the onboard logs. You will then email the response back to this address.
For me, the misunderstanding was still going on:
Unfortunately the misunderstanding still continues... Maybe my fault. I have followed the process that was described to me in order to rebuild the array. That was: take out the hard disk for 1-2 minutes, re-insert it, and that should force the rebuilding. I just checked the machine and the OS was frozen. I rebooted and there was a message saying that "Logical Drive(s) failed". I had two options from this point: (1) run the configuration utility or (2) continue. I first chose the 2nd option, to continue, but the OS wasn't loading; the system then tried to perform a network boot. I rebooted and then chose the first option and got into the configuration utility. Both disks in the configuration menu were marked as "fail". I tried to clear the configuration, erase the existing logical volume and then create a new one. The existing one was erased but I wasn't able to create a new one. Could you please send a technician next week in order to rebuild the array?
I'd guess that was because I followed the earlier instructions for "rebuilding the disk array". But still, a technician would not come. I didn't mind, as long as I could get the disk array easily rebuilt:
Just tried calling you, I left a message. Getting an array created should take a few seconds on the phone, but what you have done will likely have erased what was on the drives. I'll try to get someone to contact you in the morning, as I am off from tonight until Tuesday morning.
So I got a call the next morning and was given instructions on how to rebuild the disk array. To be honest, it was straightforward configuration if you know where to look. Finally, the host started up again with no flashing lights or beeping. The OS and the virtual machines were still there (against all odds). However, to destroy the array and rebuild it took 10 working days and five Dell employees! I must say that was the only bad experience I had with Dell (server) support. Apart from that, most of the other requests were handled in the right manner within two days.
Figure: A section of mouse liver showing an apoptotic cell, indicated by an arrow.

Apoptosis is a form of programmed cell death in multicellular organisms. It is one of the main types of programmed cell death (PCD) and involves a series of biochemical events leading to a characteristic cell morphology and death; more specifically, these events produce a variety of morphological changes, including blebbing, changes to the cell membrane such as loss of membrane asymmetry and attachment, cell shrinkage, nuclear fragmentation, chromatin condensation, and chromosomal DNA fragmentation (1-4). Apoptosis is further distinguished from necrosis in that the resulting cellular debris is disposed of without damaging the organism. In contrast to necrosis, which is a form of traumatic cell death that results from acute cellular injury, apoptosis, in general, confers advantages during an organism's life cycle. For example, the differentiation of fingers and toes in a developing human embryo occurs because cells between the fingers apoptose; the result is that the digits are separate. Between 50 billion and 70 billion cells die each day due to apoptosis in the average human adult. For an average child between the ages of 8 and 14, approximately 20 billion to 30 billion cells die a day. In a year, this amounts to the proliferation and subsequent destruction of a mass of cells equal to an individual's body weight. Research on apoptosis has increased substantially since the early 1990s. In addition to its importance as a biological phenomenon, defective apoptotic processes have been implicated in an extensive variety of diseases. Excessive apoptosis causes hypotrophy, such as in ischemic damage, whereas an insufficient amount results in uncontrolled cell proliferation, such as cancer.

Discovery and etymology

That cell death is a completely normal process in living organisms was discovered by scientists more than 100 years ago. The German scientist Carl Vogt was the first to describe the principle of apoptosis, in 1842. In 1885, the anatomist Walther Flemming delivered a more precise description of the process of programmed cell death. However, it was not until 1965 that the topic was resurrected. Apoptosis (Greek: apo - from, ptosis - falling; thus the etymologically correct pronunciation is /æpɒˈtəʊsɪs/) was distinguished from traumatic cell death by John Foxton Ross Kerr while he was studying tissues using electron microscopy at the University of Queensland Pathology Department in Brisbane. Following publication of this paper, Kerr was invited to join Professor Alastair R. Currie and Andrew Wyllie, Currie's PhD student at the time, at the University of Aberdeen to continue his research. In 1972, the trio published a seminal article in the British Journal of Cancer. Kerr had originally used the term "programmed cell necrosis" to describe the phenomenon, but in the 1972 article this process of natural cell death was called apoptosis. Kerr, Wyllie and Currie credited Professor James Cormack (Department of Greek, University of Aberdeen) with suggesting the term apoptosis. In Greek, apoptosis means the "dropping off" of petals or leaves from plants or trees. Cormack reintroduced the term for medical use because it had carried a medical meaning for the Greeks over two thousand years before.
Hippocrates used the term to mean "the falling off of the bones". Galen extended its meaning to "the dropping of the scabs". Cormack was no doubt aware of this usage when he suggested the name. Debate continues over the correct pronunciation, with opinion divided between a pronunciation with a silent p (pronounced /æpɒˈtəʊsɪs/) and the p spelt out (pronounced /æpɒpˈtəʊsɪs/), as in the original Greek. In English, the p of the Greek -pt- consonant cluster is typically silent at the beginning of a word (e.g. pterodactyl), but articulated when used in combining forms preceded by a vowel, as in helicopter or the orders of insects: diptera, lepidoptera, etc. John Foxton Ross Kerr, Emeritus Professor of Pathology at the University of Queensland, received the Paul Ehrlich and Ludwig Darmstaedter Prize on March 14 2000, for his description of apoptosis. He shared the prize with Boston biologist Robert Horvitz. Apoptosis can occur when a cell is damaged beyond repair, infected with a virus, or undergoing stress conditions such as starvation. DNA damage from ionizing radiation or toxic chemicals can also induce apoptosis via the actions of the tumour-suppressing gene p53. The "decision" for apoptosis can come from the cell itself, from the surrounding tissue, or from a cell that is part of the immune system. In these cases apoptosis functions to remove the damaged cell, preventing it from sapping further nutrients from the organism, or to prevent the spread of viral infection. Apoptosis also plays a role in preventing cancer; if a cell is unable to undergo apoptosis, due to mutation or biochemical inhibition, it can continue dividing and develop into a tumour. For example, infection by papillomaviruses causes a viral gene to interfere with the cell's p53 protein, an important member of the apoptotic pathway. This interference in the apoptotic capability of the cell plays a critical role in the development of cervical cancer. In the adult organism, the number of cells is kept relatively constant through cell death and division. Cells must be replaced when they become diseased or malfunctioning; but proliferation must be compensated by cell death. This balancing process is part of the homeostasis required by living organisms to maintain their internal states within certain limits. Some scientists have suggested homeodynamics as a more accurate term. The related term allostasis reflects a balance of a more complex nature by the body. Homeostasis is achieved when the rate of mitosis (cell division) in the tissue is balanced by cell death. If this equilibrium is disturbed, one of two potentially fatal disorders occurs: - The cells are dividing faster than they die, effectively developing a tumor. - The cells are dividing slower than they die, which results in a disorder of cell loss. The organism must orchestrate a complex series of controls to keep homeostasis tightly controlled, a process that is ongoing for the life of the organism and involves many different types of cell signaling. Impairment of any one of these controls can lead to a diseased state; for example, dysregulation of signaling pathway has been implicated in several forms of cancer. The pathway, which conveys an anti-apoptotic signal, has been found to be activated in pancreatic adenocarcinoma tissues. Programmed cell death is an integral part of both plant and animal tissue development. 
Development of an organ or tissue is often preceded by the extensive division and differentiation of a particular cell, the resultant mass is then "pruned" into the correct form by apoptosis. Unlike cellular death caused by injury, apoptosis results in cell shrinkage and fragmentation. This allows the cells to be efficiently phagocytosed and their components reused without releasing potentially harmful intracellular substances (such as hydrolytic enzymes, for example) into the surrounding tissue. Research on chick embryos has suggested how selective cell proliferation, combined with selective apoptosis, sculpts developing tissues in vertebrates. During vertebrate embryo development, structures called the notochord and the floor plate secrete a gradient of the signaling molecule (Shh), and it is this gradient that directs cells to form patterns in the embryonic neural tube: cells that receive Shh in a receptor in their membranes called Patched1 (Ptc1) survive and proliferate; but, in the absence of Shh, one of the ends of this same Ptc1 receptor (the carboxyl-terminal, inside the membrane) is cleaved by caspase-3, an action that exposes an apoptosis-producing domain. During development, apoptosis is tightly regulated and different tissues use different signals for inducing apoptosis. In birds, bone morphogenetic proteins (BMP) signaling is used to induce apoptosis in the interdigital tissue. In Drosophila flies, steroid hormones regulate cell death. Developmental cues can also induce apoptosis, such as the sex-specific cell death of hermaphrodite specific neurons in C. elegans males through low TRA-1 transcription factor activity (TRA-1 helps prevent cell death). The development of B lymphocytes and the development of T lymphocytes in the human body is a complex process that effectively creates a large pool of diverse cells to begin with, then weeds out those potentially damaging to the body. Apoptosis is the mechanism by which the body removes both the ineffective and the potentially-damaging immature cells, and in T-cells is initiated by the withdrawal of survival signals. Cytotoxic T-cells are able to directly induce apoptosis in cells by opening up pores in the target's membrane and releasing chemicals that bypass the normal apoptotic pathway. The pores are created by the action of secreted perforin, and the granules contain granzyme B, a serine protease that activates a variety of caspases by cleaving aspartate residues. The process of apoptosis is controlled by a diverse range of cell signals, which may originate either extracellularly (extrinsic inducers) or intracellularly (intrinsic inducers). Extracellular signals may include hormones, growth factors, nitric oxide or cytokines, and therefore must either cross the plasma membrane or transduce to effect a response. These signals may positively or negatively induce apoptosis; in this context the binding and subsequent initiation of apoptosis by a molecule is termed positive, whereas the active repression of apoptosis by a molecule is termed negative. Intracellular apoptotic signalling is a response initiated by a cell in response to stress, and may ultimately result in cell suicide. The binding of nuclear receptors by glucocorticoids, heat, radiation, nutrient deprivation, viral infection, and hypoxia are all factors that can lead to the release of intracellular apoptotic signals by a damaged cell. A number of cellular components, such as poly ADP ribose polymerase, may also help regulate apoptosis. 
Before the actual process of cell death is carried out by enzymes, apoptotic signals must be connected to the actual death pathway by way of regulatory proteins. This step allows apoptotic signals either to culminate in cell death or to be aborted should the cell no longer need to die. Several proteins are involved, but two main methods of achieving regulation have been identified: targeting mitochondrial functionality, or directly transducing the signal via adapter proteins to the apoptotic mechanisms. The whole preparation process requires energy and functioning cell machinery.

The mitochondria are essential to multicellular life. Without them, a cell ceases to respire aerobically and quickly dies - a fact exploited by some apoptotic pathways. Apoptotic proteins that target mitochondria affect them in different ways; they may cause mitochondrial swelling through the formation of membrane pores, or they may increase the permeability of the mitochondrial membrane and cause apoptotic effectors to leak out. There is also a growing body of evidence indicating that nitric oxide (NO) is able to induce apoptosis by helping to dissipate the membrane potential of mitochondria and therefore make it more permeable. Mitochondrial proteins known as SMACs (second mitochondria-derived activator of caspases) are released into the cytosol following an increase in permeability. SMAC binds to inhibitor of apoptosis proteins (IAPs) and deactivates them, preventing the IAPs from arresting the apoptotic process and therefore allowing apoptosis to proceed. IAPs also normally suppress the activity of a group of cysteine proteases called caspases, which carry out the degradation of the cell, so the actual degradation enzymes can be seen to be indirectly regulated by mitochondrial permeability. Cytochrome c is also released from mitochondria due to the formation of a channel, MAC, in the outer mitochondrial membrane, and serves a regulatory function as it precedes the morphological changes associated with apoptosis. Once cytochrome c is released, it binds with Apaf-1 and ATP, which then bind to pro-caspase-9 to create a protein complex known as an apoptosome. The apoptosome cleaves the pro-caspase to its active form of caspase-9, which in turn activates the effector caspase-3. MAC is itself subject to regulation by various proteins, such as those encoded by the mammalian Bcl-2 family of anti-apoptotic genes, the homologs of the ced-9 gene found in C. elegans. Bcl-2 proteins are able to promote or inhibit apoptosis either by direct action on MAC or indirectly through other proteins. It is important to note that the actions of some Bcl-2 proteins are able to halt apoptosis even if cytochrome c has been released by the mitochondria.

Direct signal transduction

Figures: TNF signalling and Fas signalling pathways.

Two important examples of the direct initiation of apoptotic mechanisms in mammals are the TNF-induced (tumour necrosis factor) model and the Fas-Fas ligand-mediated model, both involving receptors of the TNF receptor (TNFR) family coupled to extrinsic signals. TNF is a cytokine produced mainly by activated macrophages, and is the major extrinsic mediator of apoptosis. Most cells in the human body have two receptors for TNF: TNF-R1 and TNF-R2. The binding of TNF to TNF-R1 has been shown to initiate the pathway that leads to caspase activation via the intermediate membrane proteins TNF receptor-associated death domain (TRADD) and Fas-associated death domain protein (FADD).
Binding of this receptor can also indirectly lead to the activation of transcription factors involved in cell survival and inflammatory responses. The link between TNF and apoptosis shows why an abnormal production of TNF plays a fundamental role in several human diseases, especially in autoimmune diseases. The Fas receptor (also known as Apo-1 or CD95) binds the Fas ligand (FasL), a transmembrane protein part of the TNF family. The interaction between Fas and FasL results in the formation of the death-inducing signaling complex (DISC), which contains the FADD, caspase-8 and caspase-10. In some types of cells (type I), processed caspase-8 directly activates other members of the caspase family, and triggers the execution of apoptosis. In other types of cells (type II), the Fas-DISC starts a feedback loop that spirals into increasing release of pro-apoptotic factors from mitochondria and the amplified activation of caspase-8. Following TNF-R1 and Fas activation in mammalian cells a balance between pro-apoptotic (BAX, BID, BAK, or BAD) and anti-apoptotic (Bcl-Xl and Bcl-2) members of the Bcl-2 family is established. This balance is the proportion of pro-apoptotic homodimers that form in the outer-membrane of the mitochondrion. The pro-apoptotic homodimers are required to make the mitochondrial membrane permeable for the release of caspase activators such as cytochrome c and SMAC. Control of pro-apoptotic proteins under normal cell conditions of non-apoptotic cells is incompletely understood, but it has been found that a mitochondrial outer-membrane protein, VDAC2, interacts with BAK to keep this potentially-lethal apoptotic effector under control. When the death signal is received, products of the activation cascade displace VDAC2 and BAK is able to be activated. There also exists a caspase-independent apoptotic pathway that is mediated by AIF (apoptosis-inducing factor). For more information, see the article of the author Susin in Nature of 1999 and also reference 21 mentioned below. Although many pathways and signals lead to apoptosis, there is only one mechanism that actually causes the death of the cell in this process; after the appropriate stimulus has been received by the cell and the necessary controls exerted, a cell will undergo the organized degradation of cellular organelles by activated proteolytic caspases. A cell undergoing apoptosis shows a characteristic morphology that can be observed with a microscope: - Cell shrinkage and rounding due to the breakdown of the proteinaceous cytoskeleton by caspases. - The cytoplasm appears dense, and the organelles appear tightly packed. - Chromatin undergoes condensation into compact patches against the nuclear envelope in a process known as pyknosis, a hallmark of apoptosis. - The nuclear envelope becomes discontinuous and the DNA inside it is fragmented in a process referred to as karyorrhexis. The nucleus breaks into several discrete chromatin bodies or nucleosomal units due to the degradation of DNA. - The cell membrane shows irregular buds known as blebs. - The cell breaks apart into several vesicles called apoptotic bodies, which are then phagocytosed. Apoptosis progresses quickly and its products are quickly removed, making it difficult to detect or visualize. During karyorrhexis, endonuclease activation leaves short DNA fragments, regularly spaced in size. These give a characteristic "laddered" appearance on agar gel after electrophoresis. Tests for DNA laddering differentiate apoptosis from ischemic or toxic cell death. 
Removal of dead cells Dying cells that undergo the final stages of apoptosis display phagocytotic molecules, such as phosphatidylserine, on their cell surface. Phosphatidylserine is normally found on the cytosolic surface of the plasma membrane, but is redistributed during apoptosis to the extracellular surface by a hypothetical protein known as scramblase. These molecules mark the cell for phagocytosis by cells possessing the appropriate receptors, such as macrophages. Upon recognition, the phagocyte reorganizes its cytoskeleton for engulfment of the cell. The removal of dying cells by phagocytes occurs in an orderly manner without eliciting an inflammatory response. Implication in disease Defective apoptotic pathways The many different types of apoptotic pathways contain a multitude of different biochemical components, many of them not yet understood. As a pathway is more or less sequential in nature it is a victim of causality; removing or modifying one component leads to an effect in another. In a living organism this can have disastrous effects, often in the form of disease or disorder. A discussion of every disease caused by modification of the various apoptotic pathways would be impractical, but the concept overlying each one is the same: the normal functioning of the pathway has been disrupted in such a way as to impair the ability of the cell to undergo normal apoptosis. This results in a cell that lives past its "use-by-date" and is able to replicate and pass on any faulty machinery to its progeny, increasing the likelihood of the cell becoming cancerous or diseased. A recently-described example of this concept in action can be seen in the development of a lung cancer called NCI-H460. The X-linked inhibitor of apoptosis protein (XIAP) is overexpressed in cells of the H460 cell line. XIAPs bind to the processed form of caspase-9, and suppress the activity of apoptotic activator cytochrome c, therefore overexpression leads to a decrease in the amount of pro-apoptotic agonists. As a consequence, the balance of anti-apoptotic and pro-apoptotic effectors is upset in favour of the former, and the damaged cells continue to replicate despite being directed to die. The tumor-suppressor protein p53 accumulates when DNA is damaged due to a chain of biochemical reactions. Part of this pathway includes interferon-alpha and interferon-beta, which induce transcription of the p53 gene and result in the increase of p53 protein level and enhancement of cancer cell-apoptosis. p53 prevents the cell from replicating by stopping the cell cycle at G1, or interphase, to give the cell time to repair, however it will induce apoptosis if damage is extensive and repair efforts fail. Any disruption to the regulation of the p53 or interferon genes will result in impaired apoptosis and the possible formation of tumors. The progression of the human immunodeficiency virus (HIV) to AIDS is primarily due to the depletion of CD4+ T-helper lymphocytes, which leads to a compromised immune system. One of the mechanisms by which T-helper cells are depleted is apoptosis, which can be the end-product of multiple biochemical pathways: - HIV enzymes inactivate anti-apoptotic Bcl-2 and simultaneously activate pro-apoptotic procaspase-8. This does not directly cause cell death but primes the cell for apoptosis should the appropriate signal be received. - HIV products may increase levels of cellular proteins which have a promotive effect on Fas-mediated apoptosis. 
- HIV proteins decrease the amount of CD4 glycoprotein marker present on the cell membrane. - Released viral particles and proteins present in extracellular fluid are able to induce apoptosis in nearby "bystander" T-helper cells. - HIV decreases the production of molecules involved in marking the cell for apoptosis, giving the virus time to replicate and continue releasing apoptotic agents and virions into the surrounding tissue. - The infected CD4+ cell may also receive the death signal from a cytotoxic T cell, leading to apoptosis. In addition to apoptosis, infected cells may also die as a direct consequence of the viral infection. Viruses can trigger apoptosis of infected cells via a range of mechanisms including: - Receptor binding. - Activation of protein kinase R (PKR). - Interaction with p53. - Expression of viral proteins coupled to MHC proteins on the surface of the infected cell, allowing recognition by cells of the immune system (such as Natural Killer and cytotoxic T cells) that then induce the infected cell to undergo apoptosis. Most viruses encode proteins that can inhibit apoptosis. Several viruses encode viral homologs of Bcl-2. These homologs can inhibit pro-apoptotic proteins such as BAX and BAK, which are essential for the activation of apoptosis. Examples of viral Bcl-2 proteins include the Epstein-Barr virus BHRF1 protein and the adenovirus E1B 19K protein. Some viruses express caspase inhibitors that inhibit caspase activity and an example is the CrmA protein of cowpox viruses. Whilst a number of viruses can block the effects of TNF and Fas. For example the M-T2 protein of myxoma viruses can bind TNF preventing it from binding the TNF receptor and inducing a response. Furthermore, many viruses express p53 inhibitors that can bind p53 and inhibit its transcriptional transactivation activity. Consequently p53 cannot induce apoptosis since it cannot induce the expression of pro-apoptotic proteins. The adenovirus E1B-55K protein and the hepatitis B virus HBx protein are examples of viral proteins that can perform such a function. Interestingly, viruses can remain intact from apoptosis particularly in the latter stages of infection. They can be exported in the apoptotic bodies that pinch off from the surface of the dying cell and the fact that they are engulfed by phagocytes prevents the initiation of a host response. This favours the spread of the virus. - Autophagy network - Apoptosis DNA Fragmentation - Webster.com dictionary entry - Kerr, JF. (1965). "A histochemical study of hypertrophy and ischaemic injury of rat liver with special reference to changes in lysosomes.". Journal of Pathology and Bacteriology (90): 419–435. - Agency for Science, Technology and Research. "Prof Andrew H. Wyllie - Lecture Abstract". Retrieved 2007-03-30. - Kerr, JF; Wyllie AH, Currie AR (1972). "Apoptosis: a basic biological phenomenon with wide-ranging implications in tissue kinetics". British Journal of Cancer (26): 239–257. Cite uses deprecated parameter - Apoptosis Interest Group (1999). "About apoptosis". Retrieved 2006-12-15. - Webster.com dictionary entry - John Kerr and apoptosis The Medical Journal of Australia, 2000; 173: 616-617 - Thompson, CB (1995). "Apoptosis in the pathogenesis and treatment of disease". Science. 267 (5203): 1456–62. - Damasio, Antonio; (1999). The Feeling of What Happens. New York: Harcourt Brace & Co. Cite uses deprecated parameter - Guerrero I, Ruiz i Altaba A. (2003). "Development. Longing for ligand: patched, and cell death". Science. 
301 (5634): 774–776. - Thibert C, Teillet MA, Lapointe F, Mazelin L, Le Douarin NM, Mehlen P. (2003). "Inhibition of neuroepithelial patched-induced apoptosis.". Science. 301 (5634): 774–776. - Werlen G; et al. (2003). "Signaling life and death in the thymus: timing is everything". Science. 299 (5614): 1859–1863. - Cotran; Kumar, Collins. Robbins Pathologic Basis of Disease. Philadelphia: W.B Saunders Company. 0-7216-7335-X. Cite uses deprecated parameter - Bernhard Brüne (2003). "Nitric oxide: NO apoptosis or turning it ON?". Nature. 10 (8): 864–869. doi:10.1038/sj.cdd.4401261. - Chiarugi A, Moskowitz MA (2002). "PARP-1—a perpetrator of apoptotic cell death?". Science. 297 (5579): 259–263. - Fesik SW, Shi Y. (2001). "Controlling the caspases". Science. 294 (5546): 1477–1478. - Laurent M. Dejean, Sonia Martinez-Caballero, Kathleen W. Kinnally (2006). "Is MAC the knife that cuts cytochrome c from mitochondria during apoptosis?". Cell Death and Differentiation. 13: 1387–1395. doi:10.1038/sj.cdd.4401949. - Laurent M. Dejean, Sonia Martinez-Caballero, Stephen Manon, Kathleen W. Kinnally (2006). "Regulation of the mitochondrial apoptosis-induced channel, MAC, by BCL-2 family proteins.". Biochim Biophys Acta. 1762 (2): 191-201. - Lodish, Harvey; et al. (2004). Molecular Cell Biology. New York: W.H. Freedman and Company. 0-7167-4366-3. Cite uses deprecated parameter - Wajant H (2002). "The Fas signaling pathway: more than a paradigm". Science. 296 (5573): 1635–1636. - Chen G, Goeddel DV (2002). "TNF-R1 signaling: a beautiful pathway". Science. 296 (5573): 1634–1635. - Goeddel, DV; et al. "Connection Map for Tumor Necrosis Factor Pathway". Science. doi:10.1126/stke.3822007tw132]. - Wajant, H. "Connection Map for Fas Signaling Pathway". Science. doi:10.1126/stke.3802007tr1]. - Murphy, KM; et al. (2000). "Bcl-2 inhibits Bax translocation from cytosol to mitochondria during drug-induced apoptosis of human tumor cells". Cell Death and Differentiation. 7 (1). - Cheng EH (2003). "VDAC2 inhibits BAK activation and mitochondrial apoptosis". Science. 301 (5632): 513–517. - Santos A. Susin; et al. (2000). "Two Distinct Pathways Leading to Nuclear Apoptosis". Journal of Experimental Medicine. 192 (4): 571–580. doi:10.1073/pnas.191208598v1. - Madeleine Kihlmark; et al. (2001). "Sequential degradation of proteins from the nuclear envelope during apoptosis". Journal of Cell Science (114): 3643–3653. - Nagata S (2000). "Apoptotic DNA fragmentation". Experimental Cell Research. 256 (1): 12-8. - M Iwata, D Myerson, B Torok-Storb and RA Zager (1996). "An evaluation of renal tubular DNA laddering in response to oxygen deprivation and oxidant injury". Unknown parameter |accessdaymonth=ignored (help); Unknown parameter - Li MO; et al. (2003). "Phosphatidylserine receptor is required for clearance of apoptotic cells". Science. 302 (5650): 1560–1563. - Wang X; et al. (2003). "Cell corpse engulfment mediated by C. elegans phosphatidylserine receptor through CED-5 and CED-12". Science. 302 (5650): 1563–1566. - Savill J, Gregory C, Haslett C. (2003). "Eat me or die". Science. 302 (5650): 1516–1517. - Yang, L.; et al. (2003). "Predominant suppression of apoptosome by inhibitor of apoptosis protein in non-small cell lung cancer H460 cells: therapeutic effect of a novel polyarginine-conjugated Smac peptide.". Cancer Research. 63 (4): 831–837. - Takaoka A; et al. (2003). "Integration of interferon-alpha/beta signalling to p53 responses in tumour suppression and antiviral defence.". Nature. 424 (6948): 516–523. - Judie B. 
Alimonti, T. Blake Ball, Keith R. Fowke (2003). "Mechanisms of CD4+ T lymphocyte cell death in human immunodeficiency virus infection and AIDS". J Gen Virology (84): 1649–1661. doi:10.1099/vir.0.19110-0.
- Everett, H. and McFadden, G. (1999). "Apoptosis: an innate immune response to virus infection". Trends Microbiol. 7 (4): 160–165. PMID 10217831.
- Teodoro, J.G. and Branton, P.E. (1997). "Regulation of apoptosis by viral gene products". J Virol. 71 (3): 1739–1746. PMID 9032302.
- Polster, B.M., Pevsner, J. and Hardwick, J.M. (2004). "Viral Bcl-2 homologs and their role in virus replication and associated diseases". Biochim Biophys Acta. 1644 (2–3): 211–227. PMID 14996505.
- Hay, S. and Kannourakis, G. (2002). "A time to kill: viral manipulation of the cell death program". J Gen Virol. 83: 1547–1564. PMID 12075073.
- Wang, X.W., Gibson, M.K., Vermeulen, W., Yeh, H., Forrester, K., Sturzbecher, H.W., Hoeijmakers, J.H. and Harris, C.C. (1995). "Abrogation of p53-induced Apoptosis by the Hepatitis B Virus X Gene". Cancer Res. 55 (24): 6012–6016. PMID 8521383.
- Alberts, Bruce; et al. Molecular Biology of the Cell. Garland Publishing.
- Bast, Robert C. Jr; et al. (2000). Cancer Medicine, 5th edn. B.C. Decker.
- Alfons Lawen (2003). "Apoptosis - an introduction". BioEssays. 25 (9): 888–896.
- A. Afantitis, G. Melagraki, H. Sarimveis, P.A. Koutentis, J. Markopoulos and O. Igglessi-Markopoulou (2006). "A Novel QSAR Model for Modeling and Predicting Induction of Apoptosis by 4-Aryl-4H-chromenes". Bioorganic and Medicinal Chemistry. 14: 6686–6694.
- Apoptosis (Programmed Cell Death) - The Virtual Library of Biochemistry and Cell Biology
- Apoptosis Research Portal
- Apoptosis Info: apoptosis protocols, articles, news, and recent publications.
- Database of proteins involved in apoptosis
- Apoptosis Video
- The Mechanisms of Apoptosis, Kimball's Biology Pages. Simple explanation of the mechanisms of apoptosis triggered by internal signals (bcl-2), along the caspase-9, caspase-3 and caspase-7 pathway; and by external signals (FAS and TNF), along the caspase-8 pathway. Accessed 25 March 2007.
- WikiPathways - Apoptosis pathway
- Finding Cancer's Self-Destruct Button, CR magazine (Spring 2007). Article on apoptosis and cancer.
Soviet Disinformation During Periods Of Relaxed East-West Tension, 1959-1979 [Note: This document, although nearly 30 years old, is posted as a contribution to the current debate over "fake news," disinformation, and Russian influence in foreign governments. For "Soviet," please substitute mentally "Russian imperialism," since the goals of the Kremlin are unchanged. Editing conforms to CIP style.] The flag of the Crimean Tatars, oppressed by Russian imperialist occupation The purpose of this study is to examine whether Soviet disinformation activities historically have ceased during periods of relaxation in tension between the Soviet Union and the United States. A review of the record shows that one of the most remarkable aspects of the long-term relationship between the Soviet bloc, on the one hand, and the United States and its allies, on the other, consists in the continuation, during such periods, of Soviet disinformation and related activities. They have been aimed at undermining the influence of western governments, and particularly the United States, and at sowing dissension between them. The present study briefly examines some outstanding examples of this phenomenon, beginning with the 1959-1960 period, characterized by the "spirit of Camp David," and concluding with the 1971-1979 period known as an era of detente. This study concentrates on forgeries, the most obvious and damaging example of disinformation activities. The sources utilized include official U.S. government publications, information provided by Soviet-bloc defectors, and official Soviet publications. The Soviet Union has been identified as the origin either because Soviet agents adopted and used the forgery in question or because Central Intelligence Agency analysis of the forgery concluded that Moscow was the originator. THE SPIRIT OF CAMP DAVID, 1959-60 The phase of East-West cordiality traditionally placed by historians under the rubric of Camp David begins in September 1959 with the visit of Soviet Communist Party chief Nikita Khrushchev to the U.S. The mood in the U.S. was hopeful; the common term for the relationship between the two superpowers was dialogue, and both public and governmental opinion in the U.S. genuinely sought evidence that the Soviets had changed since the death of Josef Stalin six years before. In propaganda terms, Party leader Khrushchev was portrayed as a human figure, strongly contrasted with the Stalin who had in reality been Khrushchev's mentor. The sources of this benevolent image included Khrushchev's attack on Stalin in the "secret speech" at the 20th Congress of the Soviet Communist Party in 1956 as well as his somewhat bumptious, rural cultural mannerisms, which were found endearing by American media and the public. By contrast, U.S. president Dwight David Eisenhower was perceived as something of a naive bumbler, politically unsophisticated, and highly dependent on his advisers. But above all, the accent in the U.S., in Europe, and elsewhere was on hope, on the belief that the spirit of Camp David would herald a new era in U.S.-Soviet understanding. However, it soon became clear that in one significant area, at least, little had changed in the conduct of the Soviet Union toward the West: that is, the area of disinformation and forgeries. It was during the Camp David. period that two of the most significant applications of this political weapon, were detected. These were the fraudulent Sam Sary-Kellogg letter. on Cambodia and the forged Rockefeller document. 
Both of these items illustrate central elements in the overall pattern of Soviet disinformation. 1. The Sam Sary/Kellogg Letter, January 16, 1960 This fake document appeared in the Indian newsweekly BLITZ on January 16, 1960. BLITZ is a leftist periodical with a long history as a source for Soviet propaganda attacks on the U.S., and the manner of its functioning was described in the late 1950s by a defected Soviet intelligence officer, Aleksandar Kaznačejev, as follows: "(one) of the (newspapers) most notorious by (its) close ties with the Soviet intelligence, as I. learned during my service with the KGB." The false document was a purported letter from Cambodian political figure Sam Sary to a U.S. Embassy official named Kellogg, appeared in BLITZ under the headline "US Uses Traitor Sam Sary's Bid to Suck Cambodia Into SEATO." The document itself was presented as a handwritten letter from Mr. Sary to Mr. Kellogg, in English. The gist of the document was that Mr. Sary sought intervention by the U.S., Thailand, and then pro-Western South Vietnam in a conspiracy to overthrow neutralist Prince Norodom Sihanouk. Kaznačejev noted in his book INSIDE A SOVIET EMBASSY that Mr. Kellogg had left Cambodia three months before the date on which, as stated in the letter, he supposedly met with members of the anti-Sihanouk opposition. The apparent goal of this forgery was to excite suspicion within Cambodia as to the intentions of the country's neighbors, Thailand and South Vietnam, in the context of their commitments to SEATO and the alliance with the U.S. 2. The Forged Rockefeller Letter, 1957-1960 On February 15, 1957, the East German daily NEUES DEUTSCHLAND, official organ of the ruling Socialist Unity Party (SED), published, under the headline "Secret Rockefeller Document," an extensive forgery presented as a private letter from Nelson A. Rockefeller to President Eisenhower. This forgery circulated throughout the world during the "Camp David" period, appearing in such media as Radio Moscow, the Soviet party organ PRAVDA and USSR news agency TASS, Radio Hanoi, the Czechoslovak domestic press, and Radio Beijing as well as the official news agency of the People's Republic of China. In the "letter" then-Governor Rockefeller was portrayed as the advocate of a "bolder program of aid to under-developed countries" as a cover for what the East Germany press referred to as "supercolonialism" (superkolonialismus). The "letter" was printed in its "original" typewritten form. Its obvious aim was to discredit the U.S. commitment to the removal of the old colonial powers from their involvements in Asia and Africa. A cursory examination of the "letter" by any reasonably-literate American clearly showed its faked character. To begin with, its opening paragraph is couched in language completely inappropriate for a Republican politician addressing a President of the same party, referring to a "tiresome discussion" supposedly held between the two men at Camp David. Secondly, the "letter" displays spelling and other usage characteristic of a writer whose native language is not American English. The words "emphasizing" and "economize" with an "s" rather than a "z," and the word "favour"¨ with a "u", reflect British rather than American style. Further, selected sentences in the "letter" suggest its composition by an individual not wholly familiar with the rigorous canons of English grammar to be expected from someone of Rockefeller's education. 
Finally, the blatant tone of the document's "superkolonial" recommendations certainly indicate a spurious origin. The Camp David period ended in May 1960 when Khrushchev refused participation in summit talks with President Eisenhower because of the U-2 overflight of Soviet territory. THE TEST-BAN TREATY PERIOD, 1963 In the aftermath of the Cuban missile crisis of October 1962, a second period of East-West rapprochement emerged in summer 1963, with the signing of the nuclear test-ban treaty by President John F. Kennedy. The beginning of this period of optimism is often identified with a speech on U.S.-Soviet relations by President Kennedy at American University on June 10, 1963. During the Kennedy administration, a theme that has remained a major one came to the fore in disinformation: that of the criminality of the Central Intelligence Agency and, especially, its director. As stated in a pamphlet titled SPY NO. 1, issued by Gospolitizdat, the State Publishing House for Political Literature in Moscow, in June 1963, then-CIA Director John McCone was described as "the organizer of dirty political intrigues and criminal conspiracies." With the assassination of President Kennedy in November 1963, and notwithstanding the hopeful attitudes expressed by both sides following the test-ban treaty signing, this theme increased in stridency, featuring the additional suggestion that the President's murderer, Lee Oswald, was a CIA operative. Simultaneous with the death of President Kennedy, two faked issues of NEWSWEEK magazine appeared in Paris, France, laden with anti-American propaganda regarding the progress of the civil rights movement in the U.S. Aside from their covers the NEWSWEEK forgeries were obviously inauthentic -- their contents were limited to photographs and captions in a style wholly unlike that of the actual newsmagazine. The manifest goal pursued in this instance was to cast doubt on the reality of the U.S. administration's support for civil rights. In a report to the House of Representatives on September 28, 1965, Rep. Melvin Price (R-IL) noted that "14 new forgeries" had appeared by the end of July 1965. THE SPIRIT OF GLASSBORO, 1967-1968 By the time of the brief "spirit of Glassboro," Soviet party First Secretary Khrushchev had been replaced by the team of Aleksei Kosygin and Leonid Brezhnev. Kosygin met with President Lyndon B. Johnson at Glassboro, New Jersey in the wake of the June 1967 Mideast war. It was during this period that one of the most long-lived forgeries appeared in Western Europe, namely, the "Top Secret Documents on U.S. Forces in Europe." This melange of real and false U.S. contingency plans for war in Europe, subtitled in published form "Holocaust Again for Europe". claims to demonstrate that "U.S. thinking is still dominated by preparation for war." Its first appearance came in the Norwegian periodical ORIENTERING in 1967. It reappeared in London in June 1980 and subsequently. The most recent era of relaxation in U.S.-Soviet relations began with the visit of President Richard M. Nixon to Moscow in 1972, and included, inter alia, the signing of both the SALT I and SALT II agreements, the latter at a meeting between Leonid Brezhnev and President Jimmy Carter during a summit in Vienna in 1979. Like the Camp David period, that of detente, as this one came to be known, was characterized by great initial hope that the Soviets would demonstrate a change in their basic attitudes toward the West. 
Between 1972 and 1976 a possible hiatus in forgery activities was noted by the Central Intelligence Agency. It may be that the appearance of such a diminution of active measures was a product of Western failure in data collection rather than an absence of Soviet actions. Analyzing the possibility of a hiatus in terms of Soviet intentions, Professor Ladislav Bittman, a defector from Czechoslovak intelligence with an extensive knowledge of disinformation operations, has concluded that overall disinformation activities increase during periods of East-West relaxation, when Soviet operatives act to exploit the fact that the West's guard is down. Bittman has noted that the period of continuing detente following the possible hiatus saw a "forgery offensive" and a campaign of general active measures against the U.S. and its allies nearly unparalleled in the data assembled by Western observers. The temporary fall in demand for forgeries did not lead to the shutdown of the disinformation industry. While forgeries, for example, may have declined in frequency, other disinformation practices increased. The following are thirteen instances in the four years ending with the 1979 invasion of Afghanistan -- the official closing event of detente -- that delimit the forgery offensive. Items 1 through 3 are taken as of Soviet origin because of their adoption and use by official Soviet media, or by the testimony of Soviet defectors. Items 4 through 15 were provided to the U.S. Congress by the Central Intelligence Agency as examples of Soviet-originated forgeries. 1.The Fake Zhou Enlai Will -- January 1976. In January 1976, four years after the U.S. opening to the People's Republic of China, an invented last will and testament supposedly from the hand of recently-deceased PRC premier Zhou Enlai appeared in the Japanese daily SANKEI SHIMBUN. SANKEI is one of the most respected newspapers in Japan, and is quite conservative. The fake will was highly convincing in that it put forward a view that was known to be characteristic of Premier Zhou, namely, a repudiation of the post-1966 "great proletarian cultural revolution". However, it also purported to show sympathies on Zhou's part toward reconciliation between China and the Soviet Union. The clear aims of this forgery were varied: to encourage political turmoil in China, including stimulation of anti-U.S., pro-Soviet elements, by exploiting Zhou's great popularity, and to heighten suspicion toward China on the part of Japanese public opinion, which tends to view Soviet intentions in the region -- and particularly the possibility of a renewed Soviet-Chinese association -- with great concern. The official Soviet news agency, TASS, redistributed the article throughout the world, citing SANKEI, with its conservative reputation, as its source. KGB defector Stanislav Levčenko, who served in Japan before coming over to the West, has stated that this fraud was perpetrated by Service A of the KGB First Chief Directorate. Further, according to Levčenko the KGB-recruited editor of SANKEI responsible for placing the forgery was later promoted to a position as managing editor of the paper. 2. Field Manual 30-31B -- September 1976. Perhaps the outstanding incident in the forgery offensive, in terms of its sophistication, its impact, and its continued use, as well as its utility for Western analysts, is the case of "Field Manual 30-31B," a concoction purporting to be a U.S. military-issue manual for support to leftist terrorism. 
30-31B is claimed to be a secret supplement to two authentic military procedural guides, FM 30-31, and 30-31A. The title of the fake is "Stability Operations-Intelligence." In form, it is a poor photocopy of a typewritten document, accompanied by a spurious letter over the signature of former Army chief of staff Gen. William C. Westmoreland. Its unarguable intention is to attribute leftist terrorism around the world to U.S. intelligence operations, with the probable dual aim of undermining U.S. prestige with foreign governments and of diverting attention from Soviet and bloc involvements with such terrorists. This point in the Soviet disinformation agenda has proven to be a major one. FM 30-31B first surfaced in Thailand in September 1976, on a bulletin board at the embassy of the Philippines, along with a cover letter addressed to Philippine president Ferdinand Marcos. At this time, the leftist movement inside the Philippines was clearly dominated by pro-Chinese elements hostile to Soviet interests. The Soviets had embarked on a process of flirtation with Marcos that years later would culminate, just before the fall of his regime, in Soviet support to his electoral campaign. However, it seems the forgery did not reach Marcos, and its application to Southeast Asia was negligible. Its main use has come in Europe, and most importantly in a nation beset by a non-ethnic, purely ideological, far-leftist terrorism: Italy. Its impact in such an environment can be impressive. Marked "TOP SECRET," FM 30-31B. purports to demonstrate that the U.S. military sanctions "the use of extreme leftist organization to safeguard the interests of the United States in friendly nations where communists appear close to entering the government," according to the fake letter attributed to General Westmoreland In Italy in 1978, soon after FM 30-31B was surfaced on the European continent, the Red Brigades had reached the zenith of their armed offensive against the Italian state, by kidnapping and murdering Aldo Moro. Moro, a Christian Democrat politician, was the architect of an understanding with the Italian Communist Party that many believed might lead to a grand coalition of the Christian Democrats and Communists. The medium for the appearance of FM 30-31B in Europe, on September 18, 1976, was the eminent Spanish daily, EL PAIS which, although considered Spain's most prestigious media organ is characterized by a strongly critical line on U.S. foreign policy that frequently allows its use as a Soviet propaganda platform, although it seldom provides such access to domestic Spanish leftists. On September 23, 1976, an extended version of the forgery appeared in another Spanish publication, the newsweekly TRIUNFO, which had emerged in the twilight of the Franco era as a tolerated voice for the left. Fernando Gonzalez, a Spanish Communist party (PCE) member who presented the forgery in TRIUNFO, attributed its ostensible discovery to a Turkish newspaper that supposedly had been closed down for revealing its existence. Turkey, even more than Italy, was then beset by leftist terrorism in the form of Dev Yol and Dev Genc, two armed mass groupings that acquired a considerable following in the youth of the country, and fought a kind of mini-civil war against the Turkish right, producing many hundreds of deaths. The Turkish crisis resulted in the fall of democratic rule and a period of military dictatorship. Similar preoccupations were widespread in Spain, where it was feared that the post-Franco transition to democracy. 
would be destroyed because of Basque and Maoist terrorism. The forgery then appeared in the Paris daily LE MONDE, and in the Netherlands, followed by Italy, Greece, and Portugal. As noted, it is in these countries that its effect was most pronounced: Portugal, like Italy, Turkey, and Spain, had undergone a serious terrorist experience, and Greece had long been a fertile ground for anti-American propaganda. In addition, and most importantly, most of the five Mediterranean republics -- Portugal, Spain, Italy, Greece, and Turkey -- at one time or another have suffered vulnerabilities in the structure of institutionalized democracy. (FM 30-31B has also surfaced in the U.S., through the gadfly publication COVERT ACTION INFORMATION BULLETIN, and has been officially sanctioned for Soviet public use in two publications as recent as 1983: INTERNATIONAL TERRORISM AND THE CIA: DOCUMENTS, EYEWITNESS REPORTS, FACTS, and Yu. Pankov, editor, POLITICAL TERRORISM: AN INDICTMENT OF IMPERIALISM. Both of these publications appear under the imprint of Progress Publishers, Moscow, in English and other non-Russian languages. In these publications, it is linked to a conspiracy against constitutional government in Italy, identified with the ultra-rightist pseudo-Masonic P-2 lodges organized under the control of one Licio Gelli. In addition, in 1985 a forgery was detected in Italy in which the theme of U.S. involvement in the Red Brigades was reintroduced.) 3. Airgram A-8950 -- November 1976. On November 7, 1976, THE TIMES of London revealed the existence of a forged State department document, "Airgram A-8950," which consists of a largely authentic document altered to support the claim that U.S. officials were pursuing a campaign of bribery and undercutting of the economic effectiveness of America's foreign trade competitors. The bogus airgram was then adopted by the official Soviet news agency, TASS, without mention of the TIMES's clear description of its spurious nature. 4. The Edwin Yeo speech -- December 1976. December 1976 saw the appearance of a series of forgeries aimed at disrupting the improvement of U.S. relations with the Arab Republic of Egypt. The first of these was a fake copy of the journal AMERICAN ECONOMICS, which is published by the United States Information Agency, in Athens, Greece. The content of this item was an invented speech by U.S. Treasury Undersecretary Edwin Yeo, replete with insulting remarks about Egyptian president Anwar al-Sadat. 5. Egyptian Forgeries -- Second Round -- 1977. A second round of forgeries aimed at poisoning U.S.-Egyptian relations surfaced in 1977. The first was a film negative of a spurious letter from U.S. Amb. H.F. Eilts, to the Saudi Arabian Ambassador to Cairo, ostensibly recommending a joint intervention policy by the U.S. and Saudi Arabia in the Sudan, Egypt's southern neighbor. The second was a forged collection of notes supposedly by U.S. Secretary of State Cyrus Vance, containing critical statements about Sadat, King Hussein of Jordan, President Hafez al-Assad of Syria, and the leaders of Saudi Arabia and Kuwait. The target area for this particular document, which was noted in April 1977, was a broader range of Arab states. The third such product, surfacing in June 1977, was another fake letter from Amb. Eilts, this time addressed to the State Department in Washington, and attacking the leadership of President Sadat. 
The first Eilts forgery and the bogus Vance notes were brought to public attention through delivery to, respectively, the Sudanese embassy in Beirut and the Egyptian embassy in Rome. The second Eilts fraud was sent to Egyptian newspapers. 6. Teheran Dispatch -- August 1977. In August 1977, the Egyptian embassy in Belgrade, Yugoslavia received by mail a fake dispatch supposedly originating in the U.S. embassy in Teheran, Iran. This item purported to demonstrate that the Shah of Iran and Saudi Arabia were plotting to overthrow Sadat, with Israeli support and American acquiescence, in line with an overall U.S. strategy to install conservative regimes in the Middle East. It should be noted that while items 2 and 3 in the forgery offensive, namely the fraudulent field manual and airgram, reflect a high quality of editorial work in English, and although the items under 4 and 5 -- the Egyptian forgeries -- show a fairly low incidence of obvious errors, the Teheran dispatch was clearly the work of a non-English native speaker. 7. The Fake Carter Speech -- December 1977. In December 1977, a fraudulent speech by President Jimmy Carter was mailed to a number of Greek newspapers, and published in the leading Athens daily TO VIMA as well as in the daily organ of the pro-Moscow "Exterior" faction of the Greek Communist Party, RIZOSPASTIS. The forgery contained very negative remarks about Greece along with statements calling on the Greek government to strengthen its commitment to the NATO alliance. Like item 6, the Teheran dispatch, the faults of English usage in the Carter speech clearly indicated its fabrication by a non-English speaker. 8. State Department Telegram -- March 1978. On March 17, 1978 then-member of the Greek parliament Andreas Papandreou, a representative of the Panhellenic Socialist Movement who is at the time of this writing prime minister of Greece, had tabled a copy of what he claimed was a September 1976 telegram from the State Department titled "Greek Turkish Dispute in the Aegean," in which fraudulent evidence was presented for a U.S. policy in favor of Turkish claims in the area. It is not known how Papandreou came into possession of this item. 9. Defense Intelligence Agency Collection Document -- 1978. Toward the beginning of 1978 the Greek daily TO VIMA obtained a copy of a fabricated Defense Intelligence Agency document calling for political surveillance of 43 Greek leftwing political groupings. TO VIMA perceived the falsity of the concoction and declined to publish it. Although procedural instructions included in the document were obsolete at the time of its supposed preparation, its language was close to American English standards for such documents. In analyzing items 7 through 9, the Greek forgeries, it is interesting to note the range of Greek concerns targeted by the authors of the fraudulent materials -- Greek involvement with NATO, which remains a sensitive subject in that country, Greek fear of Turkey and concern over possible internal political interference. 10. The Luns Letter -- June 1978. In the first week of June 1978 a spurious letter from NATO Secretary General Joseph M.A. Luns, addressed to U.S. NATO Amb. W. Tapley Bennett, was received by Belgian newspapers. The intention of the forgery, as in item 9, the fake DIA collection document, was to sow suspicion regarding possible U.S. interference in the internal affairs of the target country, in this instance through the suggestion that U.S. 
authorities sought harassment of journalists "showing a negative attitude to the neutron bomb." This action coincided with a worldwide Soviet-coordinated campaign against the so-called.horror bomb. The Luns fake was published in at least two leading Dutch-language dailies, without noting its bogus character. 11. Mondale Interview -- July 1978. A fraudulent U.S. Embassy press release outlining a fabricated interview between Vice President Walter Mondale and a nonexistent "Karl Douglas" was received by Paris newspaper and news service correspondents in July 1978. The content of the fake included disrespectful remarks about Israeli prime minister Menachem Begin and Egyptian president Anwar al-Sadat. The poor English usage in the document indicated its obviously phony character. 12. Heard Letter -- 1978-79. Late in 1978 and through the beginning of the next year a spurious letter from U.S. Air Force Col. Allen P. Heard, Chief of the Foreign Liaison Division in the U.S. Department of the Air Force, addressed to Col. Armand Troquet, Belgian defense attache in Washington, was received by members of the Belgian cabinet. Errors in English usage betrayed its corrupt origin. The disinformation content of the letter purported to demonstrate the existence of a U.S. agreement with the People's Republic of China for interference in Zaire, the former Belgian Congo. 13. Mitchell Report - January 1979. AL-DAWA, a Cairo magazine published by the fundamentalist Muslim Brotherhood, published in January 1979 what it asserted was a highly confidential document originating with the CIA. The content of this forgery ostensibly indicated that U.S. authorities sought to combat Muslim Brotherhood opposition to Israel-Egypt peace negotiations through bribery and intrigue. This fraud was reprinted in May-June 1979 by THE MUSLIM STANDARD, a periodical in Port of Spain, Trinidad. 14. Green Letter -- April 1979. In April 1979 a fake letter from U.S. defense attache in Rome CPT William C. Green, USN, was received by newspapers in Naples, Italy, clearly intended to associate infant deaths in the Naples region and destruction of oyster beds with storage and spillage of chemical and bacteriological war materiel at a U.S. military facility. 15. Third Eilts Fake -- October 1979. The Syrian newspaper AL-BA'TH, published by the ruling party in Damascus, on October 1, 1979, presented a third bogus letter supposedly by U.S. Amb. to Egypt H.F. Eilts. In this instance, the text consisted purportedly of a private letter to Director of Central Intelligence Stansfield Turner. The document ostensibly showed a U.S. intention to "repudiate and get rid of" Egyptian leader Sadat should he prove recalcitrant in accepting U.S. demands. The fraudulent letter also sought to demonstrate negative U.S. intentions toward the Palestine Liberation Organization (PLO). In summary, because Marxist-Leninist ideology presumes and indeed mandates irreconcilable differences between the two social systems, periods of warmth have not brought about a cessation of hostile disinformation activities by the Soviet Union. The persistent Soviet search for advantages in Europe, Latin America, Africa, Asia, and the Pacific has made continued disinformation efforts an irresistible temptation, particularly in those eras when the climate of opinion in the West is most receptive to the view that Kremlin leaders are bent on reforming Soviet domestic and foreign policies. 
To use a favorite Soviet expression, it is no accident that it is precisely in these regions that the bulk of disinformation and forgery enterprises are detected.

Ladislav Bittman, telephone interview, January 4, 1988.
"Communist Forgeries," Testimony of Richard Helms, Senate Judiciary Committee, June 2, 1961, pp. 80, 9-13.
INTERNATIONAL TERRORISM AND THE CIA: DOCUMENTS, EYEWITNESS REPORTS, FACTS, Moscow, Progress Publishers, 1983 (in English), pp. 221-227.
Alexander Kaznacheev, INSIDE A SOVIET EMBASSY, Philadelphia and New York, Lippincott, 1962, pp. 177 and ff.
Yu. Pankov, ed., POLITICAL TERRORISM -- AN INDICTMENT OF IMPERIALISM, Moscow, Progress Publishers, 1983 (in English), pp. 213, 262.
"Soviet Active Measures," Hearings Before the House Permanent Select Committee on Intelligence, July 13, 14, 1982, pp. 74-88.
"The Soviet and Communist Bloc Defamation Campaign," Remarks of Rep. Melvin Price (R-IL), CONGRESSIONAL RECORD, September 28, 1965, pp. 24478-24479.
"Soviet Covert Action (The Forgery Offensive)," Hearings Before the Subcommittee on Oversight of the House Permanent Select Committee on Intelligence, February 6, 1980, pp. 190-246.
Richard H. Shultz and Roy Godson, DEZINFORMATSIA: ACTIVE MEASURES IN SOVIET STRATEGY. Washington, Pergamon-Brassey's, 1984.
Adam Ulam, THE RIVALS: AMERICA AND RUSSIA SINCE WORLD WAR II. New York, Viking, 1971.

U.S. Government Publications Consulted

CIA Report on Soviet Propaganda Operations, Report to House Permanent Select Committee on Intelligence, April 20, 1978.
Recommendations to prevent the spread of vancomycin resistance have been in place since 1995 and include guidelines for inpatient pediatric use of vancomycin. The emergence of large databases allows us to describe variation in pediatric vancomycin use across hospitals. We analyzed the Premier hospital 2008 database, consisting of records for 877,201 hospitalizations of children under 18 at 421 hospitals. Stratified analyses and logistic mixed effects models were used to calculate the probability of vancomycin use while considering random effects of hospital variation, hospital fixed effects and patient effects, and the hierarchical structure of the data.

Most hospitals (221) had fewer than 10 hospitalizations with vancomycin use in the study period, and 47 hospitals reported no vancomycin use in 17,271 pediatric hospitalizations. At the other end of the continuum, 21 hospitals (5.6% of hospitals) each had over 200 hospitalizations with vancomycin use and together accounted for more than 50% of the pediatric hospitalizations with vancomycin use. The mixed effects modeling showed hospital variation in the probability of vancomycin use that was statistically significant after controlling for teaching status, urban or rural location, size, region of the country, patient ethnic group, payor status, and APR mortality and severity codes. The number and percentage of pediatric hospitalizations with vancomycin use varied greatly across hospitals and was not explained by hospital or patient characteristics in our logistic models. Public health efforts to reduce vancomycin use should be intensified at hospitals with the highest use.

Citation: Lasky T, Greenspan J, Ernst FR, Gonzalez L (2012) Pediatric Vancomycin Use in 421 Hospitals in the United States, 2008. PLoS ONE 7(8): e43258. https://doi.org/10.1371/journal.pone.0043258

Editor: Adam J. Ratner, Columbia University, United States of America

Received: May 8, 2012; Accepted: July 18, 2012; Published: August 16, 2012

Copyright: © Lasky et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Funding: Funded by AHRQ grant R03 HS017998 “Measuring Pediatric Inpatient Use” to Tamar Lasky, Principal Investigator, http://www.ahrq.gov. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Competing interests: The authors have the following interests. Frank Ernst is employed by Premier Research Services, the company that produced the database used in the analysis. Tamar Lasky is President and owner of MIE Resources. There are no further patents, products in development or marketed products to declare. This does not alter the authors’ adherence to all the PLoS ONE policies on sharing data and materials as detailed online in the guide for authors.

Vancomycin is indicated for the treatment of serious or severe infections caused by susceptible strains of methicillin-resistant (beta-lactam-resistant) staphylococci. It is indicated for patients who cannot receive or who have failed to respond to other drugs, including the penicillins or cephalosporins, and for infections caused by vancomycin-susceptible organisms that are resistant to other antimicrobial drugs.
Because of concerns about the development of drug-resistant bacteria, vancomycin should be used only to treat or prevent infections that are proven or strongly suspected to be caused by susceptible bacteria. Recommendations to prevent the spread of vancomycin resistance have been in place since 1995 and include guidelines for inpatient pediatric use of vancomycin.

In 1999 Shah and colleagues described vancomycin use in one hospital’s pediatric neurosurgery unit and noted vancomycin was used primarily for prophylaxis and was inconsistent with the Hospital Infection Control Practices Advisory Committee recommendations. Hopkins and colleagues evaluated use in one hospital’s pediatric hematology-oncology unit, and concluded that 100% of the use was not consistent with CDC recommendations. Keyserling and colleagues (2003) studied 22 hospitals belonging to the Pediatric Prevention Network (PPN). They described a series of 25 patients receiving vancomycin at each hospital, and surveyed the physicians who prescribed the vancomycin. They did not categorize adherence to guidelines, but noted general patterns of use such as the low percentage (7%) with laboratory-confirmed β-lactam-resistant organisms isolated at the time vancomycin was prescribed, or the association of vancomycin use with presence of indwelling vascular catheters. Bolon and colleagues evaluated vancomycin use in children older than 1 year at a pediatric tertiary care medical center in 2000 and 2001. They developed algorithms to evaluate whether use was appropriate, and concluded that 35% of the initial courses were inappropriate. Patel and colleagues studied the medical records of 200 neonates born after September 2005 at 4 tertiary care neonatal intensive care units (NICUs) and concluded that 32% of the days of vancomycin use were inappropriate and non-adherent to the Centers for Disease Control and Prevention 12 Step Campaign to Prevent Antimicrobial Resistance. They noted non-adherence to steps 4 (Target the pathogen) and 8 (Treat infection, not contamination or colonization). Other studies have described use, but have not assessed guideline adherence.

The emergence of large multi-hospital databases in the past decade offers new opportunities to study patterns of vancomycin use across hospitals. In this study, we analyzed a large database and describe variation in pediatric vancomycin use in all children under 18 hospitalized at 421 hospitals in 2008.

We analyzed vancomycin use occurring in pediatric hospitalizations in 2008 using the Premier hospital database, a large US hospital-based, service-level, all-payer database, containing information from primarily non-profit, non-governmental, community and teaching hospitals and health systems. Detailed service-level information was available for each hospital day and included medication information and central supplies. Patient information collected included, but was not limited to, patient demographics (age, gender, race/ethnicity), principal and secondary diagnoses, principal and secondary procedures, payor, length of stay, cost of care, drug utilization, department cost and charge detail, day-of-stay data, and physician specialty. In addition to the service-level data recorded in most standard hospital discharge files, the database provided a daily log of all billed items, including procedures, medications, laboratory tests and diagnostic and therapeutic services, at the individual patient level.
Hospitals were self-selected, choosing to provide their own data to Premier as part of agreements by which they received access to analytic tools developed and offered by Premier. In the study year, 2008, 423 hospitals participated (99.1% with pediatric hospitalizations). Data were entered into a hospital’s core information system, fed into their decision support system (DSS), then sent to Premier on a monthly or quarterly basis via a secure FTP site. Upon receiving data from participating hospitals, Premier undertook an extensive seven-phase data validation and correction process that included more than 95 quality assurance checks. Deidentified data were extracted and used for statistical analysis. The study proposal was approved by the University of Rhode Island Institutional Review Board and informed consent requirements were waived, as permitted under 45 CFR 46.116(c) or (d).

We calculated the number of unique hospitalizations with any use of vancomycin and estimated the prevalence of vancomycin use (number of hospitalizations with documentation of any vancomycin use per 100 hospitalizations). We also estimated the percentages of vancomycin use in children over and under 1 year of age, and the percentages of hospitalizations with use longer than 3 days duration. Patient characteristics were: age in years, gender, race (White, African-American, other), and type of insurance (private, government or none). Hospital characteristics were size (small, medium, large), teaching or non-teaching, urban or rural, and region of the country (Northeast, Midwest, South and West, as defined by the US Census).

Logistic mixed effects modeling with two levels of hierarchy (hospital and patient) was used to calculate the probability of vancomycin use while considering the random effects of hospital variation, as well as hospital fixed effects and patient effects. In our models we included the 3M™ All Patient Refined Diagnosis Related Groups (APR DRGs) severity and mortality (as 4-level, ordered categories), and International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM) Group Codes. The 3M™ APR DRGs expand the basic DRG structure and address patient differences relating to severity of illness and risk of mortality. Severity of illness was defined as the extent of physiologic decompensation or organ system loss of function. Risk of mortality was defined as the likelihood of dying. The SAS procedure GLIMMIX was run using the events/trials syntax to maintain the hierarchical structure of the data. Models were run in subsets of hospitalizations, for children under 1 year of age, and further separated by ICD-9 Group Codes. Plots of odds ratios were produced using SAS Graphics. All statistical analyses were conducted using SAS 9.2.

The dataset contained records for 877,201 hospitalizations of children under 18 at time of admission, with 50,879 (5.8%) being repeat admissions. The study population was 50.9% male and 49.1% female, 48.5% white, 16.1% African-American, 12.2% Hispanic, 3.6% Asian/Pacific Islander, 18.7% other and less than 1% American Indian. The average length of stay was 3.7 days (median 2 days, interquartile range 2–3 days). Private insurance paid for 46.2% of the hospital stays, government paid for 45.9% and self-pay, no charge or other sources accounted for 7.9% of the hospital stays. Most of the hospitalizations took place in urban areas (89.2%) compared to rural areas (10.8%).
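As a rough illustration of the events/trials specification described in the Methods above, the following is a minimal, hypothetical sketch of a PROC GLIMMIX call with a random hospital intercept. The dataset name, variable names and covariate list are illustrative assumptions only; they are not the authors' actual code.

    /* Hypothetical input: one record per hospital-by-covariate stratum, where  */
    /* vanco_hosp = hospitalizations with any vancomycin use in the stratum,    */
    /* and n_hosp = total hospitalizations in the stratum.                      */
    proc glimmix data=vanco_summary method=laplace;
      class hospital_id teaching urban region bedsize race payor
            apr_severity apr_mortality;
      /* Events/trials syntax keeps the grouped, hierarchical structure explicit. */
      model vanco_hosp / n_hosp = teaching urban region bedsize race payor
                                  apr_severity apr_mortality
            / dist=binomial link=logit solution oddsratio;
      /* A random hospital intercept absorbs hospital-to-hospital variation. */
      random intercept / subject=hospital_id;
      /* Likelihood-based test of whether the hospital variance component is zero. */
      covtest zerog;
    run;

Under this kind of specification, a hospital intercept variance component that is clearly bounded away from zero corresponds to the hospital-level variation reported in the Results, over and above the hospital and patient fixed effects.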
We validated the Premier sample of hospitalizations by comparing characteristics of the sample to the HCUP KID sample of pediatric hospitalizations for 2006; details of the validation are available in a previous publication (Lasky et al., 2011). The Premier sample included a greater proportion of infants born in the hospital, from Southern hospitals, from non-teaching hospitals, and from large size hospitals compared to the HCUP KID sample. The two samples were similar with regard to proportions male, routine discharge status, APR-DRG severity and proportions urban. We did not compare the proportions of different racial and ethnic groups in Premier to the KID because of well-documented limitations in racial and ethnic data within the KID, namely high rates of missing data resulting from state differences in the collection and reporting of race and ethnicity.

Vancomycin was administered in 19,775 hospitalizations, or 2.3% of 877,201 pediatric hospitalizations in the database in 2008. In 98% of cases, vancomycin was administered parenterally or “other” (this includes ophthalmic solutions, intraocular/intravitreal injections, catheter flushes, inhalation formulations, rectal formulations and topical gels compounded by pharmacy) and in less than 2% of cases, vancomycin was administered orally. Half (10,033 or 50.7%) of the courses of vancomycin were less than 3 days in duration. Males had higher prevalence of use than did females (2.5%, 95% CI 2.5–2.6 compared to 2.0%, 95% CI 2.0–2.0), African-Americans had higher use than did whites or other groups (3.1%, 95% CI 3.0–3.2, 2.3%, 95% CI 2.3–2.4 and 1.8%, 95% CI 1.8–1.9 respectively), and children in the age groups 2–4 and 5–11 had higher prevalence compared to children under 2, or children age 12–17 (6.8%, 95% CI 6.5–7.1, 6.6%, 95% CI 6.4–6.8, 1.5%, 95% CI 1.5–1.5, and 4.3%, 95% CI 4.2–4.4). The greatest number of hospitalizations with vancomycin use occurred in children under 2 (10,282 or 52% of hospitalizations); however, the highest prevalence of use occurred in children age 2–4 and 5–11 (Figure 1). Children age 2–4 were 4.5 times more likely, children 5–11 were 4.4 times more likely, and children 12–17 were 2.9 times more likely to receive vancomycin compared to children under 2.

In hospitalizations of children under one year with vancomycin use, the four most frequent ICD-9 group diagnoses were: “Liveborn infants according to type of birth” (ICD-9-CM V30–39) (51.83%), “Other conditions originating in the perinatal period” (ICD-9-CM 760–779) (14.29%), “Infections of skin and subcutaneous tissue” (ICD-9-CM 680–709) (6.07%), and “Congenital anomalies” (ICD-9-CM 740–759) (5.91%). In children age 1 or over, the four most frequent diagnoses were: “Infections of skin and subcutaneous tissue” (23.32%), “Pneumonia and influenza” (ICD-9-CM 480–488) (9.41%), “Complications of surgical and medical care, not elsewhere classifiable” (ICD-9-CM 996–999) (7.58%), and “Other bacterial diseases” (ICD-9-CM 030–041) (4.41%).

Vancomycin was administered to children at 374 hospitals in the Premier hospital database; another 47 hospitals with 17,271 pediatric hospitalizations (13,233 under age 2) reported no vancomycin use during 2008. The number of hospitalizations with vancomycin use ranged from 0 to 1225 at individual hospitals, and the percentage of hospitalizations with vancomycin use ranged from 0.0% to 33.3%. Twenty-one hospitals (5.6%) had more than 200 hospitalizations with vancomycin use, accounting for 9,979 (50%) of the pediatric hospitalizations with vancomycin use.
Because of the skewness in the distribution of vancomycin use, we stratified hospitals by number of hospitalizations with vancomycin use. Low volume was defined as 0 to 10, medium volume as 11–100, and high volume as over 100. Most hospitals were categorized as low volume (221 hospitals), 155 hospitals were categorized as medium volume, and 45 hospitals were categorized as high volume. Within high volume hospitals, the percentage of hospitalizations with vancomycin use ranged from 1.3 to 12.9 (mean percentage was 4.6, 95% CI = 3.9–5.4). Within medium volume hospitals, the percentage of hospitalizations with vancomycin use ranged from 0.3 to 9.5 (mean percentage was 1.7, 95% CI = 1.5–1.9). The hospitals with high volume of vancomycin use were predominantly large (73.3%), teaching (68.9%), and urban (97.8%) compared to the hospitals with low volume of vancomycin use, which were 45.7% large, 19.91% teaching, and 74.21% urban, and 95% Confidence Intervals of estimates generally did not overlap. In 2008, 47 hospitals, or 11.16% of the hospitals in the database, reported no vancomycin use in the entire year.

The logistic mixed effects modeling showed hospital variation in vancomycin use. The estimated variances of the random effects hospital intercepts for models run for children under 1, and for the four most frequent ICD-9 Code Groups, are summarized in Table 1. The intercept estimates and the 95% Confidence Bounds for each model do not include zero, indicating hospital variation that is statistically significant after controlling for the other variables in the model. The lower limits of the 95 percent confidence intervals are above zero, indicating statistically significant variability in the use of vancomycin depending on the hospital in which a patient was treated.

The Odds Ratios for hospital and patient fixed effects for two of the models are plotted in Figures 2 and 3. Variables associated with vancomycin use were different in each of the age and ICD groups. For example, in children under 1 year with ICD-9 group “Liveborn infants according to type of birth” increased APR-DRG severity of illness was associated with over 6 times the use of vancomycin, but in children under 1 year with ICD-9 group “Infections of skin and subcutaneous tissue” use of vancomycin was almost 2 times as frequent in children with increased APR-DRG severity (although the 95% CI slightly overlapped 1). Both models showed an association between increased APR-DRG severity of illness and vancomycin use, but the magnitude of the effect differed greatly in the two patient groups. Another example is the effect of rural vs. urban status of the hospital. Rural or urban status of the hospital was statistically significant in predicting vancomycin use in children under 1 year with ICD-9 group “Liveborn infants according to type of birth” and with ICD-9 group “Infections of skin and subcutaneous tissue”, but the direction of the effect was different in each group of patients. For children under 1 year with ICD-9 group “Liveborn infants according to type of birth” rural hospitals had lower vancomycin use than did urban hospitals, but for children under 1 year with ICD-9 group “Infections of skin and subcutaneous tissue” rural hospitals had higher vancomycin use than did urban hospitals. Another example can be seen with patients’ insurance coverage.
In children under 1 year with ICD-9 group “Liveborn infants according to type of birth” those with government insurance had slightly higher vancomycin use than did those with private insurance, and those with no insurance had less vancomycin use than those with private insurance. In children under 1 year with ICD-9 group “Infections of skin and subcutaneous tissue” insurance coverage was not associated with vancomycin use.

Most hospitals (221) had fewer than 10 hospitalizations with vancomycin use in the study period, and 47 hospitals reported no vancomycin use in 17,271 pediatric hospitalizations. At the other end of the continuum, 21 hospitals (5.6% of hospitals) each had over 200 hospitalizations with vancomycin use, and together accounted for more than 50% of the pediatric hospitalizations with vancomycin use. The percentage of hospitalizations with vancomycin use ranged up to 33.3% when hospitals with few pediatric hospitalizations were kept in the sample, and for this reason percentage, by itself, may not be a useful indicator in small hospitals. In hospitals with more than 100 hospitalizations with vancomycin use, the percentage with vancomycin use ranged from 1.26 to 12.90, a 10-fold range in the probability of vancomycin use. Without knowing how percentage correlates with inappropriate use, one might begin by evaluating vancomycin use in hospitals with greater than 100 hospitalizations with vancomycin use and greater than 4 or 5% of hospitalizations having vancomycin. The two measures, absolute number of hospitalizations and percentage of hospitalizations with vancomycin use, can be used to identify and target hospitals for evaluation and potential interventions.

The limitations of the study include those inherent in secondary analysis of large databases. In this database, the ICD-9 code referred to the entire hospitalization rather than the indication for medication use, and we were unable to analyze the relationship between Methicillin-resistant Staphylococcus aureus (MRSA) occurrence and vancomycin use. The Premier database codes for MRSA changed during 2008 and future analyses will be able to describe MRSA variation. Although we could not analyze the data from our study year, 2008, a preliminary analysis of 2011 data showed MRSA codes in only 9.6% of hospitalizations with vancomycin use. Future analyses will examine whether this is explained by undercoding of MRSA, inappropriate use of vancomycin, or some other difference.

The definition of vancomycin use as any use during the hospital stay is both a weakness and a strength. It does not permit analysis of dose or duration, which will be of interest in future studies, but it does permit estimates of percentages of hospitalized children exposed to vancomycin use. This is of interest when planning clinical studies, comparative effectiveness research, policy and labeling priorities, and other issues. The percentage (or prevalence) may provide a different perspective than that developed as a result of using numerator data only. For example, Keyserling and colleagues (2003) tabulated vancomycin courses and identified the largest number as occurring in neonatology services, and then suggested that neonatal intensive care units (NICUs) improve their vancomycin use. We also found the highest number of uses in children under 2 years of age, but the lowest probability of use in children under 2 years of age (1.5%) compared to other age groups (4.3–6.8%).
This may reflect the large number of healthy newborns in databases, and programs to measure use of vancomycin in neonates will need to define a denominator, in addition to measuring the numerator, as is currently done.

The stratified analysis and the logistic modeling consistently document variation in vancomycin use by individual hospital, after considering independent effects of hospital and patient characteristics. The mixed models allowed us to estimate variation in the use of vancomycin by hospital in children with the same ICD-9 group codes. The intercepts measure the differences between hospitals, controlling for other effects in the model such as hospital and patient characteristics. For every ICD-9 Group modeled, hospital-to-hospital variation and the 95% Confidence Bounds of the intercept excluded zero, after controlling for hospital and patient effects.

Until recently, few studies have compared antibiotic use across large numbers of hospitals or geographic areas; however, the establishment of the National Healthcare Safety Network by the Centers for Disease Control and Prevention will allow regular comparisons across hospitals. Geographic variation in use of antibiotics has also recently been documented in the United States. The analyses presented here are also a step forward in studying pediatric hospital variation; previous researchers have used hierarchical models to consider hospital variation in adults, for example by studying maternity length of stay or mortality in patients undergoing coronary artery bypass surgery (CABG). Hospital variation in care of adults has been studied for several decades, much of it made possible by large Medicare claims databases. Only recently have aggregated data for pediatric hospitalizations been available. Bowman and colleagues (2005) recently demonstrated variation in management of pediatric splenic injuries, and Weiss et al. (2009) have demonstrated variation in use of corticosteroids, opioids and nonsteroidal anti-inflammatory drugs, diagnostic imaging, and renal biopsy in children with Henoch Schönlein purpura.

The public health implications of these data are that efforts to control vancomycin use may be channeled to hospitals with high numbers and prevalence of vancomycin use. While it seems intuitive that high prevalence will lead to high volume, and vice versa, some high volume hospitals maintain a prevalence below 3 percent, while others range above 7 percent, almost to 13 percent. Presently, the message to reduce vancomycin use is broadcast to the entire healthcare community. General principles of judicious antibiotic use are applicable to all hospitals and providers for all antibiotics, and these data may be used to channel intensive stewardship activities and intervention research to hospitals with the highest volume and prevalence of vancomycin use.

While the Premier hospital database contains information about presence or absence of laboratory testing, it does not include information about test results. Future research relating vancomycin use to laboratory test results may need to budget for and obtain access to hospital charts or to laboratory-based infection surveillance data. We restricted the current analysis to one year, 2008; access to other years of data would permit assessment of trends over the last several years.
Finally, further study may prepare the way for comparative effectiveness research, by identifying and comparing children with similar conditions treated with and without vancomycin, and relating the treatment patterns to outcomes.

Our key findings include the skewness of the distribution of vancomycin use throughout hospitals, the importance of denominator data in assessing vancomycin use, and hospital variation in vancomycin use not explained by hospital or patient characteristics, including bed size, teaching status, region of the country, rural or urban geography, and patient sex, race, APR-DRG risk of mortality and APR-DRG severity of illness.

We would like to thank Cecilia Di Pentima, MD, MPH, FAAP, for her comments and careful reading of the manuscript. At the time when she reviewed the manuscript, she was Associate Professor, Department of Pediatrics, Infectious Disease Division, Alfred I. duPont Hospital for Children, Jefferson Medical College, Thomas Jefferson University.

Conceived and designed the experiments: TL JG FE LG. Performed the experiments: TL JG FE LG. Analyzed the data: TL LG. Wrote the paper: TL JG FE LG.

- 1. Goldmann DA, Weinstein RA, Wenzel RP, Tablan OC, Duma RJ, et al. (1996) Strategies to Prevent and Control the Emergence and Spread of Antimicrobial-Resistant Microorganisms in Hospitals. A challenge to hospital leadership. JAMA 275: 234–240.
- 2. Di Pentima M, Chan S (2010) Impact of Antimicrobial Stewardship Program on Vancomycin Use in a Pediatric Teaching Hospital. Pediatr Infect Dis J.
- 3. Centers for Disease Control and Prevention (2002) 12-Step Program to Prevent Antimicrobial Resistance in Health Care Settings. Department of Health and Human Services.
- 4. Centers for Disease Control and Prevention (1995) Recommendations for preventing the spread of vancomycin resistance: recommendations of the Hospital Infection Control Practices Advisory Committee (HICPAC). Morbidity and Mortality Weekly Reports 44: 1–12.
- 5. Shlaes DM, Gerding DN, John JF Jr, Craig WA, Bornstein DL, et al. (1997) Society for Healthcare Epidemiology of America and Infectious Diseases Society of America Joint Committee on the Prevention of Antimicrobial Resistance: guidelines for the prevention of antimicrobial resistance in hospitals. Infect Control Hosp Epidemiol 18: 275–291.
- 6. American Academy of Pediatrics Committee on Infectious Diseases (1997) Therapy for children with invasive pneumococcal infections. Pediatrics 99: 289–299.
- 7. Rybak MJ, Lomaestro BM, Rotschafer JC, Moellering RC Jr, Craig WA, et al. (2009) Therapeutic monitoring of vancomycin in adults: summary of consensus recommendations from the American Society of Health-System Pharmacists, the Infectious Diseases Society of America, and the Society of Infectious Diseases Pharmacists. Pharmacotherapy 29: 1275–1279.
- 8. Shah SS, Sinkowitz-Cochran RL, Keyserling HL, Jarvis WR (1999) Vancomycin use in pediatric neurosurgery patients. Am J Infect Control 27: 482–487.
- 9. Hopkins HA, Sinkowitz-Cochran RL, Rudin BA, Keyserling HL, Jarvis WR (2000) Vancomycin use in pediatric hematology-oncology patients. Infect Control Hosp Epidemiol 21: 48–50.
- 10. Keyserling HL, Sinkowitz-Cochran RL, Harris JM 2nd, Levine GL, Siegel JD, et al. (2003) Vancomycin use in hospitalized pediatric patients. Pediatrics 112: e104–111.
- 11. Bolon MK, Arnold AD, Feldman HA, Rehkopf DH, Strong EF, et al. (2005) Evaluating vancomycin use at a pediatric hospital: new approaches and insights. Infect Control Hosp Epidemiol 26: 47–55.
- 12.
Patel SJ, Oshodi A, Prasad P, Delamora P, Larson E, et al. (2009) Antibiotic use in neonatal intensive care units and adherence with Centers for Disease Control and Prevention 12 Step Campaign to Prevent Antimicrobial Resistance. Pediatr Infect Dis J 28: 1047–1051.
- 13. Grohskopf LA, Huskins WC, Sinkowitz-Cochran RL, Levine GL, Goldmann DA, et al. (2005) Use of antimicrobial agents in United States neonatal and pediatric intensive care patients. Pediatr Infect Dis J 24: 766–773.
- 14. Pakyz AL, Gurgle HE, Ibrahim OM, Oinonen MJ, Polk RE (2009) Trends in antibacterial use in hospitalized pediatric patients in United States academic health centers. Infect Control Hosp Epidemiol 30: 600–603.
- 15. (2008) Premier Perspective Database. Charlotte, North Carolina: Premier Research Services, Premier, Inc.
- 16. Hughes J (2008) Development of the 3M™ All Patient Refined Diagnosis Related Groups (APR DRGs). Mortality Measurement Meeting. Cambridge, Massachusetts: Department of Health and Human Services.
- 17. Schabenberger O (2005) Introducing the GLIMMIX Procedure for General Linear Mixed Models. Cary, North Carolina: SAS Institute Inc.
- 18. Dai J, Li Z, Rocke D (2006) Hierarchical Logistic Regression Modeling with SAS GLIMMIX. Proceedings of the Thirty-first Annual SAS Users Group International Conference. Cary, North Carolina: SAS Institute Inc.
- 19. Li J, Alterman T, Deddens JA (2006) Analysis of Large Hierarchical Data with Multilevel Logistic Modeling Using PROC GLIMMIX. SAS Conference Proceedings: Western Users of SAS Software 2006. Irvine, California.
- 20. SAS Institute Inc. (2009) SAS/GRAPH 9.2: Statistical Graphics Procedures Guide. Cary: SAS Institute Inc.
- 21. SAS Institute Inc. (2009) Base SAS 9.2 Procedures Guide: Statistical Procedures, Second Edition. Cary: SAS Institute Inc.
- 22. Kuehn BM (2011) Antibiotic Use Tracking. JAMA 306: 2661.
- 23. Titus A (2011) National Antibiotic Use at a Glance: How Does Your State Compare? Extending the Cure. Washington, DC: Center for Disease Dynamics, Economics & Policy.
- 24. Leung K-M, Elashoff RM, Rees KS, Hasan MM, Legorreta AP (1998) Hospital- and Patient-Related Characteristics Determining Maternity Length of Stay: A Hierarchical Linear Model Approach. American Journal of Public Health 88: 377–381.
- 25. Wennberg JE, Fisher ES, Stukel TA, Sharp SM (2004) Use of Medicare claims data to monitor provider-specific performance among patients with severe chronic illness. Health Aff (Millwood) Suppl Web Exclusives: VAR5–18.
- 26. Bowman SM, Zimmerman FJ, Christakis DA, Sharar SR, Martin DP (2005) Hospital characteristics associated with the management of pediatric splenic injuries. JAMA 294: 2611–2617.
- 27. Weiss PF, Klink AJ, Hexem K, Burnham JM, Leonard MB, et al. (2009) Variation in inpatient therapy and diagnostic evaluation of children with Henoch Schonlein purpura. J Pediatr 155: 812–818 e811.
- 28. Hersh AL, Beekmann SE, Polgreen PM, Zaoutis TE, Newland JG (2009) Antimicrobial stewardship programs in pediatrics. Infect Control Hosp Epidemiol 30: 1211–1217.
The new paper Myalgic Encephalomyelitis is not fatigue, or 'CFS' explains why M.E. is not defined by mere 'chronic fatigue' and why M.E. and 'CFS' are not synonymous terms, as well as why a diagnosis of CFS based on any of the definitions of CFS can only ever be a misdiagnosis. (Members of the M.E. community are also recommended to read: Fatigue Schmatigue. This paper explains how the fraudulent 'fatigue' construct came into being and how the M.E. community can and MUST play an active part in debunking this myth.) See the Downloads section below to download these papers in Word or PDF format.

Copyright © Jodi Bassett June 2007. This version updated March 2009. From www.hfme.org

Myalgic Encephalomyelitis (M.E.) is not synonymous with being tired all the time. If a person is very fatigued for an extended period of time this does not mean they are having a ‘bout’ of M.E. To suggest such a thing is no less absurd than to say that prolonged fatigue means a person is having a ‘bout’ of multiple sclerosis, Parkinson’s disease or Lupus. If a person is constantly fatigued this should not be taken to mean that they have M.E. no matter how severe or prolonged their fatigue is. Fatigue is a symptom of many different illnesses as well as a feature of normal everyday life – but it is not a defining symptom of M.E., nor even an essential symptom of M.E.

There are a number of post-viral fatigue states or fatigue syndromes which may follow common infections such as mononucleosis/glandular fever, hepatitis, Q fever, Ross river virus and so on. M.E. is an entirely different condition to these self-limiting fatigue syndromes, however (and is not caused by the Epstein-Barr virus or any of the herpes or hepatitis viruses); the science is very clear on this point. People suffering with any of these post-viral fatigue states or fatigue syndromes do not have M.E. M.E. is also not the same condition as Lyme disease, athletes' over-training syndrome, burnout, depression, somatisation disorder, candida, multiple chemical sensitivity syndrome or Fibromyalgia, or indeed any other illness. M.E. is a distinct neurological illness with a distinct onset, symptoms, aetiology, pathology, response to treatment, long- and short-term prognosis – and World Health Organization classification (G.93.3) (Hyde 2006, 2007, [Online]) (Hooper 2006, [Online]) (Hooper & Marshall 2005, [Online]) (Hyde 2003, [Online]) (Dowsett 2001, [Online]) (Hooper et al. 2001, [Online]) (Dowsett 2000, [Online]) (Dowsett 1999a, 1999b, [Online]) (Dowsett 1996, p. 167) (Dowsett et al. 1990, pp. 285-291) (Dowsett n.d., [Online]).

M.E. is also not defined by ‘fatigue following exertion which can last up to 24 hours’ as the bogus definitions of ‘CFS’ describe. Fatigue following activity (or post-exertional fatigue or malaise) is a common symptom of a large number of different illnesses – but what is happening in M.E. is quite different. Overexertion does not cause fatigue in M.E. but instead a worsening of the severity of the illness generally and of various neurological, cognitive, cardiac, cardiovascular, immunological, muscular and gastrointestinal (and other) symptoms. The severity of these symptoms can range from mild to severe to life-threatening. The effects of overexertion can last for hours, days, weeks or even many months in M.E., or can even be permanent.
The onset of these post-exertional effects is very often significantly delayed, so that the worsening of the illness caused by overexertion very often has not even begun within 24 hours in M.E., let alone been completely resolved in that time.

The reaction people with M.E. have to physical and mental activity, sensory input and orthostatic stress not only has nothing to do with mere fatigue (or ‘malaise’) but is in fact unique to M.E. in a number of ways. This reaction is so abnormal in fact that exercise testing is one of the series of tests which can be used to help confirm an M.E. diagnosis, as are various tests which measure the abnormal responses to orthostatic stress seen in M.E. This is simply not the case in post-viral fatigue syndromes, Lyme disease, Fibromyalgia and so on. These patient groups do not exhibit the same measurable pathological abnormalities as M.E. patients in these (and other) tests. Recent research has also shown that postural stress exacerbates cardiac insufficiency in M.E. and that this cardiac insufficiency is the cause of many of the symptoms and much of the disability of M.E. This pathology is also not seen in any of those illnesses causing fatigue after exertion which are commonly misdiagnosed as ‘CFS.’ The way people with M.E. respond to physical and mental activity, sensory input and orthostatic stress is profoundly different from that in these other illnesses; it is an entirely different problem, of a much greater magnitude (Cheney 2006, [video recording]) (Hooper & Marshall 2005, [Online]) (Hyde 2003, [Online]) (Dowsett 2001, [Online]) (Hooper et al. 2001, [Online]) (Dowsett 2000, [Online]) (Dowsett 1999a, 1999b, [Online]) (Dowsett et al. 1990, pp. 285-291) (Ramsay 1986, [Online]).

What does define Myalgic Encephalomyelitis?

What defines M.E. is not ‘chronic fatigue’ but a specific type of acquired damage to the brain. Myalgic encephalomyelitis is an acutely acquired illness initiated by a virus infection with multi-system involvement which is characterised by post-encephalitic damage to the brain stem, a nerve centre through which many spinal nerve tracts connect with higher centres in the brain in order to control all vital bodily functions – this is always damaged in M.E. (Hence the name Myalgic Encephalomyelitis.)

Central nervous system (CNS) dysfunction, and in particular, inconsistent CNS dysfunction is undoubtedly both the chief cause of disability in M.E. and the most critical in the definition of the entire disease process. Myalgic Encephalomyelitis is a loss of the ability of the CNS (the brain) to adequately receive, interpret, store and recover information which enables it to control vital body functions (cognitive, hormonal, cardiovascular, autonomic and sensory nerve communication, digestive, visual, auditory, balance etc.). It is a loss of normal internal homeostasis. The individual can no longer function systemically within normal limits. This dysfunction also results in the inability of the CNS to consistently programme and achieve normal smooth end organ response. There is also multi-system involvement of cardiac and skeletal muscle, liver, lymphoid and endocrine organs. Some individuals also have damage to skeletal and heart muscle.

This diffuse brain injury is initiated by a virus infection which targets the brain; M.E. represents a major attack on the central nervous system (CNS) by the chronic effects of a viral infection. M.E. is an infectious and primarily neurological disease process which occurs in epidemic and sporadic forms.
There is a history of recorded outbreaks of M.E. going back to 1934, when an epidemic of what seemed at first to be poliomyelitis was reported in Los Angeles. A review of M.E. outbreaks found that clinical symptoms were consistent in over sixty recorded epidemics of M.E. spread all over the world. M.E. has been linked to Poliomyelitis (Polio) since 1934 and for a number of years M.E. was referred to as ‘atypical Polio.’ There is ample evidence that M.E. is caused by the same type of virus that causes polio: an enterovirus. The evidence which exists to support this theory is compelling, for example: M.E. epidemics very often followed Polio epidemics, M.E. resembles Polio at onset, serological studies have shown that communities affected by an outbreak of M.E. were effectively blocked (or immune) from the effects of a subsequent polio outbreak, evidence of enteroviral infection has been found in the brain tissue of M.E. patients at autopsy, and so on. (See: The outbreaks (and infectious nature) of M.E. for more information.)

M.E. is primarily neurological, but because the brain controls all vital bodily functions virtually every bodily system can be affected by M.E. Again, although M.E. is primarily neurological it is also known that the vascular and cardiac dysfunctions seen in M.E. are also the cause of many of the symptoms and much of the disability associated with M.E. – and that the well-documented mitochondrial abnormalities present in M.E. significantly contribute to both of these pathologies. There is also multi-system involvement of cardiac and skeletal muscle, liver, lymphoid and endocrine organs in M.E. Some individuals also have damage to skeletal and heart muscle. Thus Myalgic Encephalomyelitis symptoms are manifested by virtually all bodily systems, including cognitive, cardiac, cardiovascular, immunological, endocrinological, respiratory, hormonal, gastrointestinal and musculo-skeletal dysfunctions and damage. M.E. is an infectious neurological disease and represents a major attack on the central nervous system (CNS) – and an associated injury of the immune system – by the chronic effects of a viral infection. There is also transient and/or permanent damage to many other organs and bodily systems (and so on) in M.E. M.E. affects the body systemically.

Even minor levels of physical and cognitive activity, sensory input and orthostatic stress beyond an M.E. patient’s individual post-illness limits cause a worsening of the severity of the illness (and of symptoms) which can persist for days, weeks or months or longer. In addition to the risk of relapse, repeated or severe overexertion can also cause permanent damage (eg. to the heart), disease progression and/or death in M.E. M.E. is not stable from one hour, day, week or month to the next. It is the combination of the chronicity, the dysfunctions, and the instability, the lack of dependability of these functions, that creates the high level of disability in M.E. It is also worth noting that of the CNS dysfunctions, cognitive dysfunction is one of the most disabling characteristics of M.E.

M.E. is a distinct, recognisable disease entity which, contrary to popular belief, is not difficult to diagnose and can in fact be diagnosed relatively early in the course of the disease (within just a few weeks) – providing that the physician has some experience with the illness. Although there is (as yet) no single test which can be used to diagnose M.E., there is a series of tests which can confirm a suspected M.E. diagnosis.
If all tests are normal, if specific abnormalities are not seen on certain of these tests (eg. brain scans), then a diagnosis of M.E. cannot be correct (Hyde 2006, 2007, [Online]) (Hooper 2006, [Online]) (Hooper & Marshall 2005, [Online]) (Hyde 2003, [Online]) (Dowsett 2001, [Online]) (Hooper et al. 2001, [Online]) (Dowsett 2000, [Online]) (Dowsett 1999a, 1999b, [Online]) (Hyde 1992 p. xi) (Hyde & Jain 1992 pp. 38 - 43) (Hyde et al. 1992, pp. 25-37) (Dowsett et al. 1990, pp. 285-291) (Ramsay 1986, [Online]) (Dowsett n.d., [Online]) (Dowsett & Ramsay n.d., pp. 81-84) (Richardson n.d., pp. 85-92).

What are some of the real hallmark symptoms and characteristics of Myalgic Encephalomyelitis?

What characterises M.E. every bit as much as the individual neurological, cognitive, cardiac, cardiovascular, immunological, endocrinological, respiratory, hormonal, muscular, gastrointestinal and other symptoms is the way in which people with M.E. respond to physical and cognitive activity, sensory input and orthostatic stress, and so on. In other words, the pattern of symptom exacerbations, relapses and of disease progression. The way the bodies of people with M.E. react to these activities/stimuli post-illness is unique in a number of ways. Along with a specific type of damage to the brain (the central nervous system) this characteristic is one of the defining features of the illness which must be present for a correct diagnosis of M.E. to be made. The main characteristics of the pattern of symptom exacerbations, relapses and disease progression (and so on) in Myalgic Encephalomyelitis include the following.

When a person with M.E. is active beyond their individual post-illness limits, the result is not tiredness, fatigue or even exhaustion – nor is ‘malaise’ an accurate word to describe what occurs. There simply is no one symptom caused by overexertion in M.E. What does happen is that there is a worsening of all sorts of different symptoms and of the severity of the illness generally with overexertion. (Repeated or severe overexertion can also cause disease progression, permanent damage, or death in M.E.) It is an entirely different problem of a much greater magnitude. Overexertion causes an exacerbation of all sorts of combinations of neurological, cognitive, cardiac, cardiovascular, immunological, endocrinological, respiratory, hormonal, muscular, gastrointestinal and other symptoms which can be mild, moderate, severe, or even life-threatening (eg. seizures and cardiac events). Many of the symptoms involved are present at a lower level at rest, but overexertion causes them to worsen. (Although some patients may also have some symptoms that only appear after overexertion.)

Anywhere from one symptom to a large cluster of symptoms can be made worse, or produced, by overexertion. The cluster of symptoms made worse by excessive exertion or stimulus is often very similar from patient to patient, as generally it is a worsening of the most common symptoms of the illness. M.E.
patients commonly experience a combination of the following: Profound cognitive dysfunctions (and various other neurological disturbances), muscle weakness (or paralysis), burning eye pain or burning skin, subnormal temperature or low-grade fever, sore throat or painful lymph nodes (and/or other signs of inappropriate immune system activation), faintness, weakness or vertigo, loss of co-ordination, dyspnoea, an explosion of sensory phenomena (low level seizure activity), cardiac and/or blood pressure disturbances, facial pallor and/or a slack facial expression, widespread severe pain, nausea or feeling as if ‘poisoned,’ feeling cold and shivering one minute and hot and sweating the next, anxiety or even terror (as an organic part of the attack itself rather than as a reaction to it) and hypoglycaemia. Often the patient will feel an urgent need to retreat from all homeostatic pressures. The types of symptoms triggered vary widely from patient to patient, but some combination of these is common. There may also be an accompanying exacerbation of other symptoms. These symptoms often combine to create an indescribable and overwhelming experience of terrible illness that is unique to M.E., and can be profoundly incapacitating. At its most severe, the patient feels as if they are about to die.

Each of the symptoms caused or exacerbated by overexertion can be clearly articulated without difficulty, whether they be seizures, cardiac events, labile blood pressure, tachycardia, shortness of breath, muscle pain, muscle weakness or muscle paralysis, facial paralysis, blackouts, flu-like symptoms, nausea, inability to speak or to understand speech, problems with memory, and so on. It makes no scientific or logical sense to subsume these very specific symptoms, and very specific and varied combinations of symptoms, under a vague and inaccurate label of mere ‘fatigue.’ To say that all of these very different and very specific – and in some cases very serious – symptoms can be accurately summarised as being a problem of mere ‘fatigue,’ ‘malaise’ or ‘exhaustion’ is absurd. Repeated or severe overexertion can also cause disease progression, permanent damage (eg. to the heart), or death in M.E. patients. Again, to suggest that these very serious and long-term effects – including fatalities – could be accurately summarised as being a problem of mere ‘fatigue’ is clearly absurd (Hyde 2006, 2007, [Online]) (Hooper 2006, [Online]) (Cheney 2006, [video recording]) (Hooper & Marshall 2005, [Online]) (Hyde 2003, [Online]) (Dowsett 2001, [Online]) (Dowsett 2000, [Online]) (Dowsett 1999a, 1999b, [Online]) (Dowsett 1996, p. 167) (Hyde 1992 p. xi) (Hyde & Jain 1992 pp. 38 - 43) (Hyde et al. 1992, pp. 25-37) (Dowsett et al. 1990, pp. 285-291) (Ramsay 1986, [Online]) (Dowsett n.d., [Online]).

What is ‘Chronic Fatigue Syndrome’?

CFS was created in response to an outbreak of what was unmistakably M.E., but this new name and definition did not describe the known signs, symptoms, history and pathology of M.E. It described a disease process that did not, and could not, exist. All that each of these flawed CFS definitions ‘defines’ is a heterogeneous (mixed) population of people with various misdiagnosed psychiatric and miscellaneous non-psychiatric states which have little in common but the symptom of fatigue (a symptom seen in many illnesses but not a defining feature of M.E. nor even an essential symptom of M.E.). The disease category ‘CFS’ has undoubtedly been used to impose a false psychiatric paradigm of M.E.
by allying it with various unrelated psychiatric fatigue states and post-viral fatigue syndromes (etc) for the benefit of various (proven) financial and political interests (Hyde 2006, [Online]) (Hooper 2006, [Online]) (Hyde 2003, [Online]) (Hooper 2003, [Online]) (Dowsett 2001, [Online]) (Hooper et al. 2001, [Online]) (Dowsett 2000, [Online]) (Dowsett 1999a, 1999b, [Online]).

The fact that a person qualifies for a diagnosis of CFS (a) does not mean that the patient has Myalgic Encephalomyelitis, and (b) does not mean that the patient has any other distinct and specific illness named ‘CFS.’ A diagnosis of CFS – based on any of the CFS definitions – can only ever be a wastebasket diagnosis, in other words, a misdiagnosis. Despite popular opinion, M.E. and CFS are not synonymous terms. All a diagnosis of ‘CFS’ actually means is that the patient has a gradual onset fatigue syndrome which is usually due to a missed major disease. As Dr Byron Hyde explains, the patient has:

a. Missed cardiac disease
b. Missed malignancy
c. Missed vascular disease
d. Missed brain lesion either of a vascular or space occupying lesion
e. Missed test positive rheumatologic disease
f. Missed test negative rheumatologic disease
g. Missed endocrine disease
h. Missed physiological disease
i. Missed genetic disease
j. Missed chronic infectious disease
k. Missed pharmacological or immunization induced disease
l. Missed social disease
m. Missed drug use disease or habituation
n. Missed dietary dysfunction diseases
o. Missed psychiatric disease (2006, [Online]).

The only groups which gain from this ‘CFS’ confusion are insurance companies and other organisations and corporations which have a vested financial interest in how these patients are treated, including the government. Under the cover of ‘CFS’ these vested interest groups have assiduously attempted to obliterate recorded medical history of Myalgic Encephalomyelitis, even though the existing evidence has been published in prestigious peer-reviewed journals around the world and spans over 70 years. This is clearly unscientific and unethical. The only way forward for M.E. patients and all of the diverse patient groups commonly misdiagnosed with ‘CFS’ (both of which are denied appropriate support, diagnosis and treatment) is that the bogus disease category of ‘CFS’ must be abandoned. (See: Who benefits from 'CFS' and 'ME/CFS'?)

People with M.E. must be diagnosed with Myalgic Encephalomyelitis, and treated for M.E. based on information gained solely from studies involving authentic M.E. patients. People with depression must be diagnosed and treated for depression. People with cancer must be diagnosed with cancer and then treated as appropriate for the type of cancer they have, and so on. Physicians who diagnose ‘CFS’ in any patient experiencing fatigue without looking and testing for the true cause of the symptoms (and who choose not to familiarise themselves with the scientific facts about Myalgic Encephalomyelitis) do their patients – and themselves – a great disservice. Some of the conditions commonly misdiagnosed as CFS are very well defined and well-known illnesses and very treatable – but only once they have been correctly diagnosed. Some conditions can become very serious or can even be fatal if not correctly diagnosed and managed, including Myalgic Encephalomyelitis.
Every patient deserves the best possible opportunity for appropriate treatment for their illness and for recovery, and this process must begin with a correct diagnosis if at all possible. A correct diagnosis is half the battle won (Hyde 2006, 2007, [Online]) (Hooper 2006, [Online]) (Hyde 2003, [Online]) (Hooper 2003, [Online]) (Dowsett 2001, [Online]) (Dowsett 2000, [Online]) (Dowsett 1999a, 1999b, [Online]) (Dowsett n.d., [Online]).

All of this is not simply theory, but is based upon an enormous body of mutually supportive clinical information which has been published in prestigious peer-reviewed journals all over the world and spans over 60 years. Confirmation of this hypothesis is supported by electrical tests of muscle and of brain function (including the subsequent development of PET and SPECT scans) and by biochemical and hormonal assays. Newer scientific evidence is increasingly strengthening this hypothesis. M.E. is neither ‘mysterious’ nor ‘medically unexplained.’ Many aspects of the pathophysiology of the disease have, indeed, been medically explained in volumes of research articles. These are well-documented, scientifically sound explanations for why patients are bedridden, profoundly intellectually impaired, unable to maintain an upright posture and so on.

Despite popular opinion, there simply is no legitimate scientifically motivated debate about whether or not M.E. is a ‘real’ illness, or whether or not it is ‘behavioural’ or has a biological basis. The psychological or behavioural ‘theories’ of M.E. are no more scientifically viable than are the theories of a ‘flat earth.’ They are pure fiction. The reality is that anyone, whether medically qualified or not, who looks at the worldwide published medical evidence on M.E. could not fail to recognise that the psychological or psychiatric theories could not possibly explain the many different and profound physical abnormalities seen in M.E. (nor the many other characteristics of the disease which are not consistent with psychological or behavioural illness). There are only two ways that a person could reach a different conclusion:

Myalgic Encephalomyelitis is a debilitating autoimmune disease which has been recognised by the World Health Organisation (WHO) since 1969 as an organic neurological disorder. M.E. is similar in a number of significant ways to illnesses such as multiple sclerosis, Lupus and Polio. M.E. affects all races and socio-economic groups and has been diagnosed all over the world with a similar strike rate to multiple sclerosis. Children as young as five can get M.E., as well as adults of all ages. M.E. can be extremely disabling, and is not a self-limiting or short-term illness. 25% of M.E. sufferers are severely affected and housebound and bedbound. In some cases Myalgic Encephalomyelitis can also be progressive or fatal. Governments around the world are currently spending $0 a year on M.E. research.

The name and authentic definition of Myalgic Encephalomyelitis must be fully restored (to the exclusion of all others) and the WHO classification of M.E. must be accepted and adhered to in all official documentation and government policy. People with M.E. must be diagnosed with M.E. and treated for M.E. again, finally.
There were sound medical reasons for the creation of the name in 1956, and for the classification of the illness as a distinct organic neurological disorder by the WHO in 1969, neither of which has changed in the interim (Hyde 2006, 2007, [Online]) (Hooper 2006, [Online]) (Cheney 2006, [video recording]) (Hyde 2003, [Online]) (Dowsett 2001, [Online]) (Hooper et al. 2001, [Online]) (Dowsett 2000, [Online]) (Dowsett 1999a, 1999b, [Online]) (Hyde 1992 p. xi) (Hyde & Jain 1992 pp. 38 - 43).

As Professor Malcolm Hooper explains: “The term myalgic encephalomyelitis (means muscle pain, my-algic, with inflammation of the brain and spinal cord, encephalo-myel-itis, brain spinal cord inflammation) was first coined by Ramsay and Richardson and has been included by the World Health Organisation (WHO) in their International Classification of Diseases (ICD), since 1969. The current version, ICD-10, lists ME under G.93.3 – neurological conditions. It cannot be emphasised too strongly that this recognition emerged from meticulous clinical observation and examination” (2006, [Online]).

Myalgic Encephalomyelitis is a distinct infectious neurological disease of extraordinarily incapacitating dimensions that affects virtually every bodily system – not a problem of medically unexplained ‘fatigue.’ The basic facts are that fatigue, ‘CFS’ and M.E. are not at all the same thing. There is also no such disease as ‘ME/CFS’ or ‘CFS/ME’ or CFIDS and so on. The unadulterated scientific facts about M.E. are mind-blowing and utterly compelling and credible, but the ‘CFS’ and ‘ME/CFS’ propaganda isn’t. For more information see: Who benefits from 'CFS' and 'ME/CFS'?, What is Myalgic Encephalomyelitis? A historical, medical and political overview and The Terminology Explained.

A note on the high quality of the references used to compile this paper:

This paper has been compiled using the highest quality resources available. Not everyone was taken in by the ‘CFS’ insurance scam, thankfully! A small but dedicated group of M.E. experts have made many remarkable discoveries about the pathology of M.E. – as well as confirmed many times over what was already known about M.E. prior to 1988, before M.E. research became tainted by ‘fatigue’ and ‘CFS.’ Legitimate unbiased M.E. experts and researchers do exist, and their numbers continue to grow – albeit far more slowly than is needed, unfortunately.

A final additional note on ‘fatigue’:

Just as some M.E. sufferers will experience other minor and non-essential symptoms such as vomiting or night sweats some of the time while others will not, the same is true of fatigue. The diagnosis of M.E. is determined by the presence of certain neurological, cognitive, cardiac, cardiovascular, immunological, endocrinological, respiratory, hormonal, muscular, gastrointestinal and other symptoms (and so on) – the presence or absence of mere ‘fatigue’ is irrelevant. In addition to these other (far more serious) symptoms, some M.E. sufferers may also suffer with mild, moderate or severe fatigue some of the time, while others will not. Thus the symptom of fatigue is not an essential symptom of M.E. and does not define M.E. (Although the symptom of fatigue is essential to qualify for a misdiagnosis of ‘CFS’.) The point to be most aware of is not that M.E. is ‘more than fatigue’ – but that M.E. ISN’T FATIGUE AT ALL.
All of the information concerning Myalgic Encephalomyelitis on this website is fully referenced and has been compiled using the highest quality resources available, produced by the world's leading M.E. experts. More experienced and more knowledgeable M.E. experts than these – Dr Byron Hyde and Dr. Elizabeth Dowsett in particular – do not exist. Between Dr Byron Hyde and Dr. Elizabeth Dowsett, and their mentors the late Dr John Richardson and Dr Melvin Ramsay (respectively), these four doctors have been involved with M.E. research and M.E. patients for well over 100 years collectively, from the 1950s to the present day. Between them they have examined more than 15 000 individual (sporadic and epidemic) M.E. patients, as well as each authoring numerous studies and articles on M.E., and books (or chapters in books) on M.E. Again, more experienced, more knowledgeable and more credible M.E. experts than these simply do not exist. This paper is merely intended to provide a brief summary of some of the most important facts of M.E. It has been created for the benefit of those people without the time, inclination or ability to read each of these far more detailed and lengthy references created by the world’s leading M.E. experts. The original documents used to create this paper are essential additional reading however for any physician (or anyone else) with a real interest in Myalgic Encephalomyelitis. For more information see the References page. Before reading this research/advocacy information, please be aware of the following facts: 1. Myalgic Encephalomyelitis and ‘Chronic Fatigue Syndrome’ are not synonymous terms. The overwhelming majority of research on ‘CFS’ or ‘CFIDS’ or ‘ME/CFS’ or ‘CFS/ME’ or ‘ICD-CFS’ does not involve M.E. patients and is not relevant in any way to M.E. patients. If the M.E. community were to reject all ‘CFS’ labelled research as ‘only relating to ‘CFS’ patients’ (including research which describes those abnormalities/characteristics unique to M.E. patients), however, this would seem to support the myth that ‘CFS’ is just a ‘watered down’ definition of M.E. and that M.E. and ‘CFS’ are virtually the same thing and share many characteristics. A very small number of ‘CFS’ studies/articles and books refer in part to people with M.E. but it may not always be clear which parts refer to M.E. The A warning on ‘CFS’ and ‘ME/CFS’ research and advocacy paper is recommended reading and includes a checklist to help readers assess the relevance of individual ‘CFS’ studies (etc.) to M.E. (if any) and explains some of the problems with this heterogeneous and skewed research. In future, it is essential that M.E. research again be conducted using only M.E. defined patients and using only the term M.E. The bogus, financially-motivated disease category of ‘CFS’ must be abandoned. 2. The research referred to on this website varies considerably in quality. Some is of a high scientific standard and relates wholly to M.E. and uses the correct terminology. Other studies are included which may only have partial or minor possible relevance to M.E., use unscientific terms/concepts such as ‘CFS,’ ‘ME/CFS,’ ‘CFS/ME,’ ‘CFIDS’ or Myalgic ‘Encephalopathy’ and also include a significant amount of misinformation. Before reading this research it is also essential that the reader be aware of the most commonly used ‘CFS’ propaganda, as explained in A warning on ‘CFS’ and ‘ME/CFS’ research and advocacy and in more detail in Putting Research and Articles on Myalgic Encephalomyelitis into Context. 
“People in positions of power are misusing that power against sick people and are using it to further their own vested interests. No-one in authority is listening, at least not until they themselves or their own family join the ranks of the persecuted, when they too come up against a wall of utter indifference.’ Professor Hooper 2003 ‘Do not for one minute believe that CFS is simply another name for Myalgic Encephalomyelitis (M.E.). It is not. The CDC definition is not a disease process. It is (a) a partial mix of infectious mononucleosis /glandular fever, (b) a mix of some of the least important aspects of M.E. and (c) what amounts to a possibly unintended psychiatric slant to an epidemic and endemic disease process of major importance’ Dr Byron Hyde 2006 ‘Thirty years ago when a patient presented to a hospital clinic with unexplained fatigue, any medical school physician would search for an occult malignancy, cardiac or other organ disease, or chronic infection. The concept that there is an entity called chronic fatigue syndrome has totally altered that essential medical guideline. Patients are now being diagnosed with CFS as though it were a disease. It is not. It is a patchwork of symptoms that could mean anything’ Dr Byron Hyde 2003 The vested interests of the Insurance companies and their advisers must be totally removed from all aspects of benefit assessments. There must be a proper recognition that these subverted processes have worked greatly to the disadvantage of people suffering from a major organic illness that requires essential support of which the easiest to provide is financial. The poverty and isolation to which many people have been reduced by ME is a scandal and obscenity. Professor Malcolm Hooper 2006 To the very few physicians still practicing today who began seeing patients with this illness some 40 years ago and who have continued to record and publish their clinical findings throughout, the current enthusiasm for renaming and reassigning this serious disability to subgroups of putative and vague "fatigue" entities, must appear more of a marketing exercise than a rational basis for essential international research. It was not always so unnecessarily complicated! Dr Elizabeth Dowsett M.E. is a systemic disease (initiated by a virus infection) with multi system involvement characterised by central nervous system dysfunction which causes a breakdown in bodily homoeostasis (The brain can no longer receive, store or act upon information which enables it to control vital body functions, cognitive, hormonal, cardiovascular, autonomic and sensory nerve communication, digestive, visual auditory balance, appreciation of space, shape etc). It has an UNIQUE Neuro-hormonal profile. .Dr Elizabeth Dowsett M.E. appears to be in this same family of diseases as paralytic polio and MS. M.E. is less fulminant than MS but more generalized. M.E. is less fulminant but more generalized than poliomyelitis. This relationship of M.E.-like illness to poliomyelitis is not new and is of course the reason that Alexander Gilliam, in his analysis of the Los Angeles County General Hospital M.E. epidemic in 1934, called M.E. atypical poliomyelitis. Dr Byron Hyde 2006 Disclaimer: The HFME does not dispense medical advice or recommend treatment, and assumes no responsibility for treatments undertaken by visitors to the site. It is a resource providing information for education, research and advocacy only. 
Please consult your own health-care provider regarding any medical issues relating to the diagnosis or treatment of any medical condition. Copyright © Jodi Bassett, January 2009. This version updated May 2009. From www.hfme.org For more information, and to read a fully-referenced version of this text compiled using information from the world’s leading M.E. experts, please see: What is M.E.? Extra extended version. Permission is given for this unedited document to be freely redistributed. Please redistribute this text widely. To download other papers from this site, see the Document Downloads page. Permission is given for this document to be freely redistributed by e-mail or in print for any not-for-profit purpose provided that the entire text (including this notice and the author’s attribution) is reproduced in full and without alteration. Please redistribute this text widely.
Why aren't we warning the public about the risks of blood contact of cancer patients? If it's actually true that secondary tumors are caused by those rogue cancer cells that are hitchhiking through the blood stream, or lymph system, then why do cancer cells of the primary tumor hardly ever travel to adjacent tissues? metastases is not a theory, it's a fact. Are you saying from one person to another? Cancer is not well understood in many ways. Many cancers metastasize to specific tissue types (breast cancer usually metastasizes to the lungs, liver,and bones while colon cancer usually invades the peritoneum, liver, and lungs). It would appear the most common sites of metastasis involve the liver, lungs,and bones for most types of cancer. According to http://www.cancer.gov/cancertopics/fact … etastatic, "The ability of a cancer cell to metastasize successfully depends on its individual properties; the properties of the noncancerous cells, including immune system cells, present at the original location; and the properties of the cells it encounters in the lymphatic system or the bloodstream and at the final destination in another part of the body. Not all cancer cells, by themselves, have the ability to metastasize. In addition, the noncancerous cells at the original location may be able to block cancer cell metastasis. Furthermore, successfully reaching another location in the body does not guarantee that a metastatic tumor will form. Metastatic cancer cells can lie dormant (not grow) at a distant site for many years before they begin to grow again, if at all. " But that still doesn't explain it. If the theory is true, about the rogue cells, then any blood-on-blood action could cause cancer. It would also mean that blood is the key and if that were true, then the brain would have filtered it out from the other parts of the body. So why wouldn't the brain perform it's functions? They have tested the filter, it was unharmed and in perfect order, yet the cancer spreads from the brain to other places... why? The immune system, and other cells and the possibility for growth would make the chances of the cancer growing slim. Even slimmer as the cells travel through the body... so why are the cells not attacking the closest place? It is all ready weakened and perfect breeding ground for more growth. That is proven by the tumors growth across the brain. I think there is a lot that we DON'T know about cancer. There are studies showing impairment of the blood-brain barrier with cancer patients, however - the barrier is compromised with metastases to the brain (see this study on breast cancer patients: http://www.hindawi.com/journals/pri/2011/920509/) From the aforementioned article, it would appear that the genes specific to the tumor cell determine which tissue it will grow in: "Searches for genetic determinants of metastasis have led to identification of gene signatures that selectively mediate breast cancer cell metastasis to bones, the lungs, and the brain [13–15]. Based on previous work on genomic analysis of breast cancer metastasis to bone and lung, the Massagué group identified three tumor metastasis genes that mediate extravasation through the BBB and cancer cell colonization in the brain" I agree with you. Some of my countrymen even consider cancer as a curse. We should help n spreading cancer awareness. I know it can't be treated but at least we can apply some methods towards prevention. Awareness and continuing research. 
Some forms of cancer have been conquered - Gleevec, for example, is a relatively new drug that cures Chronic Myeloid Leukemia. Molecular therapy (like Gleevec) is a much better way to attack cancer, because there are few side effects and the drug is extremely effective (you can read about this specific drug here: http://www.cancer.gov/newscenter/qa/2001/gleevecqa. Some forms of cancer are still incredibly deadly, unfortunately. I don't think metastasis is a "theory"; it is an observed phenomenon that *some* cancers, at *certain levels of severity*, spread throughout the body - often resulting in a very severe or terminal condition. In these cases tumors will arise in non-adjacent positions. This by no means implies that cancer is transmissible via blood. That would require a lot of other factors than just malignant cells in the blood or lymph. If I had to take a guess, from a scientist's perspective (and keep in mind this is NOT my primary field of study) - it's most likely because cells from another person are not compatible and would be recognized as "foreign" and therefore be destroyed by your immune system. It is similar to organ donations: organ donors have to be pretty compatible for the recipient not to reject and destroy the donor's organ. Viruses and bacteria, on the other hand, are not human cells and are totally different. Many have mutated to avoid being destroyed by the host's immune system. I gather from the discussions already posted that cancer is considered as falling under the germ theory of disease. That is, cancer is caused by a germ, like a virus, which would make it communicable. To my knowledge cancer is not caused by germs but by free radicals - for example, superoxide, which is a by-product of the metabolism of glucose. It consists of two atoms of oxygen with one unpaired electron; to stabilize itself it grabs another electron from a nearby molecule that belongs to a tissue. This grabbing results in injury that triggers a mutation in DNA or the cell membrane that graduates into a tumor or cancer. A tumor or cancer starts with one cell only, whose growth is uncontrolled. Tumor cells that breach the matrix that confines them, usually at the bottom, escape to the blood or to the lymph fluid and then invade cells. That's when it becomes metastatic. Bromelain, a protease also found in papaya and pineapple, usually dissolves the cell matrix. Melanoma, a skin cancer, can colonize brain cells of the same person, not of another person. It is not communicable, which is why conventional medicine calls cancer a non-communicable disease. Superoxide is neutralized by superoxide dismutase, an enzyme that converts it to hydrogen peroxide, which is a reactive oxygen species that acts like a free radical in that it grabs another electron, resulting in injury. Hydrogen peroxide is dismantled by glutathione peroxidase into safe water. 
The cancer cells only at times carry proteins that are entirely different and usually the proteins are similar to other normal proteins, hence cannot differentiate between normal and abnormal. There are a couple of known cases where a virus can eventually lead to cancer. The human papillomavirus is one example. A woman infected has a higher risk of cervical cancer - it doesn't necessarily mean she'll get it. But most cervical cancers are caused by HPV infections. And HPV is an sexually transmitted disease. It's important to note that it's the virus that can spread from human to human, it's not cancer cells spreading between humans as this original forum question seems to be asking. It is more likely virus does not transmit cancer. Macrophages, components of the immune system, mediated by the inducible nitric oxide synthase, produces nitric oxide and use it as bullets to shoot virus and inflamed cells. NO causes mutation in stricken cells including healthy cells in the vicinity. Mutation results in tumor or cancer. Macrophage uses a free radical, as bullet (Cranton, E. and A. Brecher. Bypassing Bypass. 1984). So, the free radical causes cancer. That is correct. Viruses cannot transmit "cancer" in the sense that a communicable disease is transmitted but the presence of the virus and what the virus eventally does within the host that causes the eventual onset of cancer by damaging DNA via free radicals. Countering tumor or cancer takes a different tack. A free radical like nitric oxide (NO) inflicts disease differently from that how a virus does. Nitric oxide is a product of reactions (on L-arginine) catalyzed by three enzymes: endothelium nitric oxide synthase(eNOS) produces one; neuron nitric oxide (nNOS) produces another; inducible nitric oxide synthase (iNOS) produces still another. Each has specific functions. NO of eNOS is a messenger that signals the endothelium to dilate and allow the flow of more blood (the same as done by nitroglycerin given during an episode of angina pectoris). NO from eNOS is beneficial. NO mediated by the macrophage, a component of the immune system, is used by the macrophage to shoot virus (Cranton, E. MD and A. Brecher. Bypassing Bypass. 1984). It kills the virus or bacteria and also strikes healthy cells in their vicinity inflicting damage to their membranes or DNA. That can result in inflammation or mutation or scar or stenosis as happens in the initiation of rheumatic heart (triggered by the bacteria Streptococcus pyogenes). So, a free radical could serve as a messenger and as a grabber of electrons. Inflammation is evidence of damage done by free radicals. NO produced by iNOS is countered by antioxidants to mitigate the preponderance of free radicals. A free radical has a charge that can be neutralized by a charge from the electron of hydrogen atom that comes from antioxidants. That's how an antioxidant does battle against a free radical. The rogue in cancer is a free radical. Now we cannot escape from free radicals. It is a matter of balance between free radicals and antioxidants. Our body has built-in antioxidants like superoxide dismutase, glutathione peroxidase, and catalase. These can be supplemented by vitamin B complex, A, C, D and coenzyme coQ10. Infusion chelation therapy also counters free radicals. EDTA (ethylene diamine tetra acetate) has a negative charge that can neutralize free radicals, and remove and bind with minerals like iron and copper in whose presence during a reaction produce hydroxyl radical and alkoxy radical. 
NO produced by eNOS can also be caught by antioxidants, which is why angina occurs; or NO is not produced at all, because an injured artery with a plaque does not produce NO, which is normally released by the inner wall of a healthy artery. The best approach against tumor and cancer is prevention, which can be done with antioxidants. Cancer is also amenable to treatment during early stages (likely to be effective in stages I and II). Heritable colon cancer can be prevented, and halted. The gene involved is adenomatosus polyposis colitis (APC). The first mutation happens during meiosis. The next mutations occur in the mitotic cells of the sibling. One who inherited APC has thousands of polyps in the intestine. But 5 to 7 more mutations must occur in one mitotic cell before a full-blown colon cancer develops. Countermeasures can be mounted before each mutation. The last mutation is the mutation of both alleles of the p53 gene, which serves as a switch in mitosis. Once both alleles have been mutated, the development of colon cancer cannot be stopped. p53 blocks the mitosis (uncontrolled growth) of the one cell that has sustained the 5 mutations in genes k-ras, DCA, etc. Colonoscopy can catch a mutating polyp early on, while it is still in the adenoma stage. A polyp that sustains a mutation in the k-ras gene turns into an adenoma, a name applied to the stages up until the p53 genes have been mutated, when colon cancer arises. Immunotherapy works against cancer; it kills cancer cells only. Chemotherapy produces a lot of free radicals that destroy both cancer cells and healthy cells in the vicinity, which is why it can create another disease. It destroys the bone marrow, which is why bone marrow transplant is part of a protocol involving adriamycin. Before administration, part of the bone marrow is taken, preserved and transplanted back to the same patient once the chemotherapy sessions have been completed. I will edit my recent post: for APC I meant adenomatosis polyposis coli. Adenoma is "an intermediate tumor with fingerlike projections." The mutations that develop into colon cancer are: APC (one allele mutated); k-ras proto-oncogene (one allele mutated); gene DCC (two alleles mutated); gene DC4 (two alleles mutated); gene DPCA (two alleles mutated); and V18-1 (two alleles mutated). Where did you get your information? I've been looking through medical books for a while and none of them have been that specific. I read this thread with great interest and I'm grateful for the info here. Just recently it seems that everyone I know has cancer or has a family member or friend that has. At least this thread has given me a little comfort. 
There are two basic types of term life insurance policies—level term and decreasing term.
- Level term means that the death benefit stays the same throughout the duration of the policy.
- Decreasing term means that the death benefit drops, usually in one-year increments, over the course of the policy's term.
In 2003, virtually all (97 percent) of the term life insurance bought was level term. Whole life or permanent insurance pays a death benefit whenever you die—even if you live to 100! There are three major types of whole life or permanent life insurance—traditional whole life, universal life, and variable universal life, and there are variations within each type. In the case of traditional whole life, both the death benefit and the premium are designed to stay the same (level) throughout the life of the policy. The cost per $1,000 of benefit increases as the insured person ages, and it obviously gets very high when the insured lives to 80 and beyond. The insurance company could charge a premium that increases each year, but that would make it very hard for most people to afford life insurance at advanced ages. So the company keeps the premium level by charging a premium that, in the early years, is higher than what's needed to pay claims, investing that money, and then using it to supplement the level premium to help pay the cost of life insurance for older people. By law, when these "overpayments" reach a certain amount, they must be available to the policyholder as a cash value if he or she decides not to continue with the original plan. The cash value is an alternative, not an additional, benefit under the policy. In the 1970s and 1980s, life insurance companies introduced two variations on the traditional whole life product—universal life insurance and variable universal life insurance.
What are the different types of permanent policies?
- Whole or ordinary life: This is the most common type of permanent insurance policy. It offers a death benefit along with a savings account. If you pick this type of life insurance policy, you are agreeing to pay a certain amount in premiums on a regular basis for a specific death benefit. The savings element would grow based on dividends the company pays to you.
- Universal or adjustable life: This type of policy offers you more flexibility than whole life insurance. You may be able to increase the death benefit, if you pass a medical examination. The savings vehicle (called a cash value account) generally earns a money market rate of interest. After money has accumulated in your account, you will also have the option of altering your premium payments – providing there is enough money in your account to cover the costs. This can be a useful feature if your economic situation has suddenly changed. However, you would need to keep in mind that if you stop or reduce your premiums and the saving accumulation gets used up, the policy might lapse and your life insurance coverage will end. You should check with your agent before deciding not to make premium payments for extended periods because you might not have enough cash value to pay the monthly charges to prevent a policy lapse.
- Variable life: This policy combines death protection with a savings account that you can invest in stocks, bonds and money market mutual funds. The value of your policy may grow more quickly, but you also have more risk. If your investments do not perform well, your cash value and death benefit may decrease. 
Some policies, however, guarantee that your death benefit will not fall below a minimum level. - Variable-universal life If you purchase this type of policy, you get the features of variable and universal life policies. You have the investment risks and rewards characteristic of variable life insurance, coupled with the ability to adjust your premiums and death benefit that is characteristic of universal life insurance. What are my health insurance choices? There are essentially two types of health insurance plans: indemnity plans (fee-for services) or managed care plans. The differences include the choice of providers, out-of-pocket costs for covered services and how bills are paid. There is no one "best" plan for everyone. Some plans are better than others for you or your family's health care needs, but no one plan will pay for all the costs associated with your medical care. A. Indemnity Plans Spending Plans are employer-sponsored plans that allow the employee to design his or her own employee benefit package, choosing between one or more employee benefits and cash. Several types of Flexible Benefits or Cafeteria Plans are used by employers, including a pre-tax conversion plan, multiple option pre-tax conversion plan, medical plans plus flexible spending accounts, and employer credit cafeteria plans. For more information about these choices, contact your employee benefits department. Plans allow you to choose your health care providers. You can go to any doctor, hospital or other provider for a set monthly premium. The plan reimburses you or your health care provider on the basis of services rendered. You may be required to meet a deductible and pay a percentage of each bill. However, there is also often an annual limit on out-of-pocket expenses, so that once an individual or family reaches the limit, the insurance covers the remaining eligible medical expenses in full. Indemnity plans sometimes impose restrictions on covered services and may require prior authorization for hospital care or other expensive services. "Basic and Essential" Health Plans provide limited health insurance benefits at a considerably lower cost. When buying such a plan, it is extremely important to read the policy description carefully because these plans don't cover some basic treatments, such as chemotherapy, certain prescriptions and maternity care. Furthermore, rates vary considerably because, unlike indemnity plans or a managed care option, premiums are community rated and are based on age, gender, health status, occupation or geographic location. Health Savings Accounts (HSA) are a recent alternative to traditional health insurance plans. HSAs are basically a savings product designed to offer individuals a different way to pay for their health care. HSAs enable you to pay for current health expenses and save for future qualified medical and retiree health expenses on a tax-free basis. Instead of paying a premium, you establish a tax-free savings account that covers your out-of-pocket medical expenses. This means that you own and control the money in your HSA. You make all decisions about how to spend the money without relying on a third party or a health insurer. You also decide what types of investments to make with the money in the account in order to make it grow. However, if you sign up for an HSA, you are generally required to buy a High Deductible Health Plan as well. High-Deductible Health Plans (HDHP) are sometimes referred to as catastrophic health insurance coverage. 
An HDHP is an inexpensive health insurance plan that kicks in only after a high deductible of at least $1,000 for an individual or $2,000 for a family has been met.
B. Managed Care Options
Health Maintenance Organizations (HMOs) offer access to an extensive network of participating physicians, hospitals and other health care professionals and facilities. You choose a primary care doctor from a list provided by the HMO and this doctor coordinates your health care. You must contact your primary care doctor to be referred to a specialist. Generally, you pay fewer out-of-pocket expenses with an HMO, but you are often charged a fee or co-payment for services such as doctor visits or prescriptions.
Point-of-Service (POS) plans are an indemnity-type option in which the primary care doctors in the POS plan usually make referrals to other providers within the plan. If a doctor makes a referral out of the plan, the plan pays all or most of the bill. However, if you refer yourself to an outside provider, the service is covered by the plan, but you will be required to pay co-insurance.
Preferred Provider Organizations (PPO) charge on a fee-for-service basis. The participating doctors, hospitals and health care providers are paid by the insurer on a negotiated, discounted fee schedule. Costs are lower if you use in-network healthcare services, but you have the option of going out-of-network. If you choose an out-of-network provider, you are generally required to pay the difference between what the provider charges and what the plan pays.
If you would like information about other types of life insurance, please call our agency. We will be glad to answer any questions you may have.
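As a rough illustration of the cost-sharing arithmetic described above for indemnity and PPO-style plans (pay the full bill until a deductible is met, then a co-insurance percentage of each bill, with nothing further once an annual out-of-pocket limit is reached), the short sketch below works through one plan year of claims. It is a minimal sketch only; the deductible, co-insurance rate, out-of-pocket maximum and claim amounts are hypothetical examples, not the terms of any real policy.

```python
def patient_share(bills, deductible=1000.0, coinsurance=0.20, oop_max=5000.0):
    """Rough sketch of deductible / co-insurance / out-of-pocket-limit logic.

    The patient pays covered charges in full until the deductible is met,
    then the co-insurance percentage of each bill, and nothing further once
    the annual out-of-pocket maximum has been reached. All figures here are
    illustrative, not real policy terms.
    """
    paid_by_patient = 0.0
    for bill in bills:
        remaining_deductible = max(0.0, deductible - paid_by_patient)
        share = min(bill, remaining_deductible)                   # deductible portion
        share += (bill - min(bill, remaining_deductible)) * coinsurance  # co-insurance on the rest
        share = min(share, max(0.0, oop_max - paid_by_patient))   # cap at the out-of-pocket max
        paid_by_patient += share
    return paid_by_patient


# Hypothetical example: three claims in one plan year
print(patient_share([800.0, 2500.0, 12000.0]))   # patient pays 3860.0 under these assumed terms
```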
There should be something for everybody. It may take a moment or two to An optical image distortion conditional on the varying refraction of light rays of different wavelengths on a lens. Thus light rays of shorter wavelengths have longer focal distances than light rays of longer wavelengths. An optical image distortion conditional on the varying distance of paraxial light rays of the same wavelength from the optic axis. Light rays that travel through outer lens zones have shorter focal distances than rays that travel through the lens center (optic axis). The corrector lens on the front of a Schmidt-Cassegrain, for example, is an aspheric lens which corrects the aberration from the spherical primary mirror. - ABG - Anti-Blooming Gate - An electronic drain structure on a CCD chip which assures that electrons/voltage exceeding the full-well capacity of a pixel do not spill over to adjacent pixels. - Absolute Zero - The lowest possible temperature, attained when a system is at its minimum possible energy. The Kelvin temperature scale sets its zero point at absolute zero (-273.15 on the Celsius scale, and -459.67 on the Fahrenheit scale). The idea of a true minimum temperature has been confirmed by many experiments. Given the concept of temperature as molecular energy it follows that there must be a point at which no further energy can be extracted from a system. Although it is possible to approach ever closer to absolute zero, the "third law" of thermodynamics holds that it is impossible to attain absolute zero in a system. The present temperature of the cosmic background radiation is about 2.7 degrees above absolute zero. If the universe expands forever, this temperature will asymptotically approach absolute zero. - AC - Alternating Current - Electrical current that reverses (or alternates) at regular intervals. - Achromat (achromatic objective) - Describes a correction class for objectives. The chromatic aberration for two wavelengths is corrected for objectives of this type. Usually an objective of this type is corrected to a wavelength below 500nm and above 600nm. Furthermore, the sine condition for one wavelength is met. The image curvature aberration is not corrected. - Accretion Disk - A disk of gas which accumulates around a center of gravitational attraction, such as a white dwarf, neutron star, or black hole. As the gas spirals in, it becomes hot and emits light or even X-radiation. - Age of the Universe, The - An expanding universe must have been smaller in the past, and in fact the distance between any two points approaches zero roughly 13 billion years ago. This moment of ultra-high density is called the Big Bang, and marks the birth of the universe (at least to all intents and purposes). The age of the universe is therefore about 13 billion years. For about half a million years after the Big Bang, the universe was opaque to electromagnetic radiation. This sets a maximum distance that we can see: radiation emitted just as the universe became transparent (the cosmic microwave background), that is reaching us now, has traveled 13 billion light years. - AGN - Active Galactic Nucleus [Plural: Active Galactic Nuclei, (also - To try to summarise what we know about AGN is to step into a minefield. There is a "standard model" which everyone agrees is at least partly wrong, but every expert has his or her own proposal for either fixing it up, or replacing it with something completely different. 
To avoid getting too bogged down in these controversies, the following description is deliberately vague in many places. AGN are exclusively found in the centres of large galaxies; the galaxy containing an AGN is said to be its host. The central light-year or so of an AGN contains an enormous mass, equivalent to at least a million suns, and sometimes ranging up to a few billion suns. This region also contains something that shines brightly in the part of the electromagnetic spectrum from the ultraviolet through to X-rays. There is pretty good (but disputed) evidence that all this is actually concentrated on a much smaller scale, sometimes less than a few light-days across. Certainly the X-rays originate in an astronomically tiny volume, smaller than our own solar system. To put this into perspective, within a light-year of us there is just one sun (ours), and the host galaxies are typically 100 thousand light-years across or more. The thing (if it is one thing) at the heart of an AGN is often called the "monster". If general relativity is correct it seems almost inevitable that nearly all the mass of the monster is contained in a spinning black hole. Black holes don't radiate, by definition, so the radiation from an AGN is believed to come from gas clouds falling into the hole. In a way, the monster works as a sort of engine, fuelled by matter falling in (accreting, in the jargon). To be more precise, the gravitational potential energy of the accreted stuff is ultimately converted to radiation, and to kinetic energy in the form of jets. We don't know just how this happens, although there are dozens of competing theories. At larger distances from the center swirl clouds of gas and dust which are lit up and heated by the central heart, producing the characteristic emission lines and infra-red radiation. The outermost regions can be imaged from Earth in some of the nearest AGN, but mostly AGN appear just as points of light. - ADC - Analog-Digital-Converter - An electronic device, often an integrated circuit, that converts an analog voltage to a digital value. All digital instruments use an A/D converter to convert the input signal into digital information. The output signal does only change at special times and can only take special values - quantization. Compare with DAC - ADU - Analog-to-digital Unit - ADUs are employed as a measurement of pixel value or brightness. Pixel voltages (numbers of electrons) stored during CCD integrations are converted to ADU integers representing the measured voltage compared to maximum (full pixel) voltages in terms of the full Base 2 dynamic range of the CCD system (12 bit = 2^12th, 16 bit = 2^16th, etc.). - A photographic method that uses an eyepiece in the telescope focused normally (you look through it) and a camera with its lens focused at infinity. You then just point the camera into the eyepiece. The camera can be on a separate tripod or attached to the telescope with a bracket or attached with a threaded adapter or even hand held. It is a great method to use for photographing the moon and planets. - Abbreviation for "ampere-hour". Designates an amount of electric charge. Used for accumulators to denominate their capacity. Because the voltage of an accumulator is nearly constant, you can calculate the stored energy from the given Ah-rating, e.g. 12Vx100Ah=1.2kWh. - Airy Disc - The Airy disc refers to the inner, light circle (surrounded by alternating dark and light diffraction rings) of the diffraction pattern of a point light source. 
The diffraction discs of two adjacent object points overlap some or completely, thus limiting the spatial resolution capacity. - In CCD imaging, an algorithm usually refers to a software procedure, often for image processing. An algorithm is the mathematical function which tells the computer what to do with an image. - An image distortion caused by a sampling frequency that is too low in relation to the - Where the sampling rate is less than twice the input signal's highest - Alt-Azimuth (also called Alt-Az) - Short for Altitude-Azimuth . Telescopes which are mounted so that they move up-down and left-right (as opposed to equatorially) are called alt-az. This is a convenient mounting configuration for visual observing as the eyepiece is always in a convenient position. However, an equatorial wedge must be used for photography or CCD imaging. - A device that uses an active component to increase the voltage or power of a signal without distorting its waveshape. - A continuous, non-digital representation of phenomena. An analog voltage, for example, may take any value. Opposite to "digital". - AND Gate - A gate whose output is ON only if all input signals are ON. - Angle of Incidence - Angle between the incident ray of light and a normal drawn to the point of reflection. I.e. The angle between the optical axis of the light incident on the surface of a filter and the axis normal to this surface. - Angle of Reflection - Angle between the ray of light and the normal drawn to the point of refraction. - Angle of Refraction - Angle between the refracted ray of light and the normal drawn to the point of reflection. - Angle of view - The amount of a scene that can be recorded by a particular lens; determined by the focal length of the lens; also field of view FOV. - Angular deviation - A shift in the direction of light beam from the true optical axis of the system, measured in units of angle such as arcminutes (1/60 of a degree) or arcseconds (1/60 of an - Angstrom (Ċ) - A unit of length. 1/10,000 of a micrometer (10-4µm). - This feature is added to a CCD chip to prevent pixel blooming. This feature generally reduces sensitivity, well-depth, and linear response. For these reasons, non-anti-blooming chips are popular, and there is even software available to remove blooming streaks from CCD images - Particles with certain properties opposite to those of matter. Each matter particle has a corresponding antiparticle. The antiparticle has exactly the same mass and the opposite electric charge as its partner. An example is the electron (negative charge) and its antimatter version the positron (positive charge). When a particle and its antiparticle collide, both are annihilated and converted into photons. Similarly two photons with sufficient energy can combine to form a particle-antiparticle pair. The universe is made almost entirely out of matter. This means that in the big bang there was an excess of matter over antimatter so that when matter and antimatter combined and annihilated, some matter was left over. - Apparent Field of View - A characteristic of eyepieces. The apparent field of view is the angle through which your eyeball rotates when you look through an eyepiece and transfer your gaze from one edge of the field to the other. - The lens opening formed by the iris diaphragm inside the lens. The size of the hole can be made larger or smaller by the auto focus system or a manual control. The size is indicated as a 'f-number' or 'f-stop' i.e. 
f/4, f/5.6, - Aperture diaphragm - An adjustable diaphragm located in the illumination optics, which controls the numerical aperture of the illuminating beam and affects the brightness of the beam. - Aperture, maximum - The largest size of the hole though which light enters the camera. - Aperture, numerical - The aperture is the sine of the angle under which light enters into the front lens of a microscope objective; its symbol is NA. The aperture influences both the light gathering capacity and the resolution capacity of an objective. Since various media can be present between specimen and objective lens (such as the embedding medium for the specimen), the numerical aperture (NA = n * sin a) is usually applied as the unit of measurement for the light gathering capacity and the resolution capacity. - Aperture Synthesis - The technique of combining the signals from a collection of individual antennas or telescopes to provide an image with a resolution equivalent to a single telescope with a size roughly equal to the maximum distance between the individual antennas. This may be quite large, e.g. 217 km for MERLIN, and up to the size of the Earth for VLBI. - Apochromat (apochromatic objective) - Describes a correction class for objectives. The chromatic aberration for three wavelengths is corrected for objectives of this type (usually 450nm, 550nm and 650nm) and the sine condition for at least two colors is met. The image curvature aberration is not corrected. - Erroneous pixels created during the capture phase of imaging, caused by electrical interference or physical barriers such as dust. - Aspheric Surface - A lens or mirror surface that is altered slightly from spherical to reduce - Aspect ratio - The relationship of the X and Y scales of a 2-dimensional grid. Non-square CCD pixels are represented as square by video monitors and other output devices, yielding an aspect ratio not in accord with true sky coordinates unless the images are resampled to an aspect ratio that, in effect, squares the pixels. - I.e. The ratio between the width and height of an image or image - The science that studies the natural world beyond the earth. - In hardware, it is an event that occurs independent of other events; it is not synchronized with a clock signal. In software, it refers to a function that begins an operation and returns to the calling program prior to the completion or termination of the operation. - The smallest component of matter which retains its chemical properties. An atom consists of a nucleus composed of at least one proton, some number of neutrons, and at least one The atomic number of an atom corresponds to the number of protons present in the nucleus of an atom. This determines its elemental identity. The number of neutrons determines the isotope of the atom. - Attenuation Level (also Blocking level) - A measure of the out-of-band attenuation of an optical filter, over an extended range of the spectrum. The attenuation level is often defined in units of optical density - Spectacular array of light in the night sky, caused by charged particles from the Sun hitting the Earth's upper atmosphere. The aurora borealis is seen in the north of the Northern hemisphere; the aurora australis in the south of the Southern. - Autoguiding / Guiding - Telescope tracking controlled by feedback from real-time sensing of star movements within the field of view (FOV). 
Movement may be sensed by an electro-optical device, such as a CCD chip, or by the human eye comparing star movement to a eyepiece reticle intersection or a reticle grid. Autoguiding refers to automatic feedback to telescope drives provided by electronic devices, while manual guiding is accomplished by human feedback intervention using slow-motion controls on telescope drives. - Automatic exposure - A mode of camera operation in which the camera automatically adjusts the aperture, shutter speed, or both for proper exposure. - A brand name for Meade's hand-held computerized controller. - Average transmission - The average calculated over the useful transmission region of a filter, rather than over the entire spectrum. For a bandpass filter, this region spans the FVMM of the transmission band. - Averted Vision - When you look squarely at something, you are using a part of the retina of your eye that is not as sensitive to low light levels as the parts that are off to the side. Thus to see faint objects, don't look straight at them. Center them in the field of view of your telescope, but fix your stare part way out to the edge of the field. - a) Directional bearing around the horizon, measured in degrees from north (0°). - b) Angular distance from the north point eastward to the intersection of the celestial horizon with the vertical circle passing through the object and the zenith. - An optical filter that has a well-defined short wavelength cut-on and long wavelength cut-off. Bandpass filters are denoted by their center wavelength and bandwidth. - Also FWHM. For optical bandpass filters, typically the separation between the cut-on and cut-off wavelengths at 50% of peak transmission. Sometimes a bandwidth at, for example, 10% of peak transmission is specified. I.e. The highest frequency signal component that can pass through input amplifiers and/or filters without being attenuated. - An extra lens you can add to an eyepiece to amplify the magnification. Usually a 2 times multiplier. - The control portion of a bipolar transistor. In an NPN transistor, the P-type material forms the base. - Bayer pattern - A pattern of red, green, and blue filters on the image There are twice as many green filters as the other colors because the human eye is more sensitive to green and therefore green color accuracy is more - Bias Signal - The electrons and subsequent ADU generated by the voltage maintained over the CCD array during integration. - Big Bang - The state of extremely high (classically, infinite) density and temperature from which the universe began expanding. The beginning point of time and space for the universe. - Big Crunch - One hypothesized future for the universe in which the current expansion stops, reverses, and results in all space and all matter collapsing together; a reversal of the - A system of numbers using 2 as a base, in contrast to the decimal system which uses 10 as a base. The binary system requires only two symbols: 0 and - Binary Star - A system of two stars orbiting around a common center of gravity. Visual binaries are those whose components can be resolved telescopically (i.e., angular separation > 0'.5) and which have detectable orbital motion. Astrometric binaries are those whose dual nature can be deduced from their variable proper motion; spectroscopic binaries, those whose dual nature can be deduced from their variable radial velocity. 
At least half of the stars in the solar neighborhood are members of binary (or multiple) - Binning involves combining pixels on a CCD chip to create larger pixels. For example, taking a 2x2 square of pixels and creating one pixel that is twice the width and four times the area of the original pixel. This is done to increase sensitivity or to match a long focal length telescope to a CCD camera with small pixels. - Binocular Viewer - A set of prisms that allows you to use two eyepieces on a telescope. Using both eyes is particularly good for the moon and planets. - 1. An analog signal range that includes both positive and negative values. 2. An electronic device whose operation depends on the transport of both holes and electrons. - Bipolar Transistor - BJT (bipolar junction transistor). Most important foundation for integrated circuits. You may think of it being an electrically controlled valve or current amplifier. Modern bipolar transistors work up to the three-digit GHz-range. - Bipolar Switch - An electronic switch which is able to control bipolar signals. The switch consist of two identical in/outputs and a control line which does open and close the switch. There are usually in 2 or 4 switches packed in one IC e.g. the CMOS IC 4016 contains four. - A binary digit. A bit is the smallest unit of storage in a digital computer, and is used to represent one of the two states in the binary - Bit Depth - A measurement of the number of bits used to create a single pixel in a digital image. A 24-bit RG8 image is created from a palette of 16.7 million colors. - Images formed from pixels with each pixel a shade of gray or color. Using 24-bit color, each pixel can be set to any one of 16 million colors. - Black Hole - An object so dense that its escape velocity exceeds the speed of light. According to general relativity, such an object must collapse to an infinitely dense point, a singularity. The singularity is surrounded by a surface called the event horizon, within which objects and information can only move inwards, quickly reaching the singularity (and being crushed to a point in the process, of course). Therefore nothing can escape from a black hole. A technical exception is Hawking radiation, a quantum mechanical process first described by Steven Hawking, but this is unimaginably weak for the massive black holes of interest to astronomers. - Blocking range - The range of wavelengths over which an optical filter maintains a specified attenuation level. - Each photosite of a CCD chip can contain a certain amount of electric charge. This amount is determined by the well depth of the CCD. When the well-depth is exceeded, electric charge "bleeds" out of the photosite appearing in an image as a bright streak extending vertically from a bright source in the image (usually a star). This effect can be minimized or eliminated by using a CCD with an - A shift in the frequency of a photon toward higher energy and shorter wavelength. Blueshifts can be produced by relative motion of the emitter toward the observer (doppler blueshift), light falling in a gravitational field from the emitter to the observer (gravitational blueshift), or in a contracting universe (cosmological blueshift). For further details, see Redshift. - Boolean Algebra - A logical calculus named for mathematician George Boole, using alphabetic symbols to stand for logical variables, and 0 and 1 to represent states. AND, OR, and NOT are the three basic logic operations in this algebra. 
and NOR are each combinations of two of the three operations. - The business of observational astronomy boils down to measuring the brightness of celestial objects. Unfortunately, the English word "brightness" covers three quite different concepts, each of which covers several subtle variations. that is: Luminosity, Flux Density and Intensity (or Surface Brightness) - Brown Dwarf - A low-mass substellar object that is near the minimum mass for nuclear fusion reactions to occur in its core. Brown dwarf objects are a possible source of baryonic dark matter. Brown dwarfs are possible dark matter halo objects. - A group of eight bits (in the telecommunication field also octet). Multipliers are Kilobyte (1024 bytes), megabyte (1024x1024 bytes) etc. - 1. Generally, normalizing a system to a set of standards or constants. - 2. Specific to CCD imaging, to eliminate unwanted signal and reduce noise components by subtracting a dark frame and dividing by a flat-field frame. - 1. The capability of storing electrical charge. Unit of measure is the 2. In a capacitor or system of conductors and dielectrics, the property that permits the storage of electrically separated charges when potential differences exist between the conductors. Capacitance is related to charge and voltage as follows: C = Q/V, where C is the capacitance in farads, Q is the charge in coulombs, and V is the voltage in volts. - Telescope devised by Cassegrain in which an auxiliary convex mirror reflects the magnified image, upside down, through a hole in the center of the main objective mirror - i.e., through the end of the telescope itself. - Any of a number of compromise telescope designs, using both a lens and mirrors. Examples are the Schmidt-Cassegrain and Maksutov-Cassegrain. Because the light path is folded twice, the telescope is very compact. - CCD - Charge Coupled Device - A semiconductor device used for signal filters or as sensor elements. Electronic cameras get their picture from a CCD-sensor. The inner consists of a matrix of light sensitive elements, converting light into current. Because of the huge amount of elements, a direct wiring is not suitable. Therefore the charge packets generated during light exposure are passed by electric fields until they reach a connection point. I.e. "charge coupled". It operates by storing charge on capacitors and selectively moving that charge through the device by manipulating voltages on its - Usually, CCDs are micromanufactured into two-dimensional grids of rows and columns, each intersection comprising a pixel several microns in both - CCD raw format - The uninterpolated data collected directly from the image sensor before - Cepheid Variable - A type of luminous giant star whose luminosity varies in a periodic fashion. Cepheids are characterized by a rapid rise in luminosity followed by a slow decline. The period of the cycle is related to the luminosity of the Cepheid by the Period-Luminosity relationship. The more luminous the Cepheid, the longer the period. This property makes Cepheids useful for obtaining distances. One determines the pulsation period and uses the relationship to get the luminosity. The apparent brightness of the star then gives you the distance. Cepheids come in two types, Type I which are metal rich and Type II which are metal poor. Type I Cepheids are more luminous than Type II. - Circuit Layout - The physical arrangement of all the circuit elements on the surface of the - The color parameters of an image. Usually represented by hue and saturation. 
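The Calibration and Binning entries in this glossary both describe simple array arithmetic on CCD frames: subtract a dark frame and divide by a flat-field frame to calibrate, and sum blocks of adjacent pixels to bin. The sketch below shows one possible way to express both operations; it is illustrative only, not a full reduction pipeline. It assumes NumPy is available, and the stand-in frames, the unit-mean flat normalisation and the 2x2 binning factor are all hypothetical choices.

```python
import numpy as np

def calibrate(raw, dark, flat):
    """Basic CCD calibration sketch: subtract the dark frame, then divide
    by a flat-field frame normalised to unit mean."""
    flat_norm = flat / np.mean(flat)      # normalise the flat field
    return (raw - dark) / flat_norm

def bin2x2(image):
    """Combine each 2x2 block of pixels into one larger pixel by summing,
    as described under 'Binning'. Assumes even image dimensions."""
    h, w = image.shape
    return image.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

# Hypothetical usage with stand-in arrays (in practice these would be
# loaded from camera files, e.g. FITS images)
raw = np.random.poisson(1000.0, (512, 512)).astype(float)   # stand-in light frame
dark = np.full((512, 512), 100.0)                            # stand-in dark frame
flat = np.ones((512, 512))                                   # stand-in flat field
reduced = bin2x2(calibrate(raw, dark, flat))
print(reduced.shape)   # (256, 256)
```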
- Clear Aperture - The surface area of an optical filter which is free of any defects or obstructions. On interference filters the clear aperture is often delimited by an annulus of metal or opaque material. - A style of GEM made by Celestron. - CMM - Color Management Modules - This is software, such as Apple's Color Sync and Kodak's Color Management System, that attempts to regulate the display of color. - CMOS - Complementary Metallic Oxide Semiconductor - Circuit technique initiating the high integration level of today's integrated circuits. Nearly all modern microprocessors are produced in CMOS-technology. Conceptual not the fastest design technique, but allowing for complex circuits and automatic layout. - CMOS image sensor - An image sensor created using CMOS technology. It requires more light but it is cheaper than an CCD sensor. - CMY - Cyan, Magenta, Yellow. - Named the subtractive colors of the human visual spectrum, since cyan = white - red, magenta = white - green, and yellow = white - blue. - CMYK- Cyan, Magenta, Yellow, Key - Cyan, Magenta, Yellow, and Black (historically called the key color, hence K) are the standard inks used in the lithographic printing industry to reproduce color images. - The region of a bipolar transistor that "collects" the emitted electrons and then passes them on through a conductor, completing the - Collimated Light - Light in which the rays are parallel. - This refers to how correctly the optics are pointing towards each other. If a telescope is out of collimation, you will not get as clear an image as you should. Refractors generally have fixed optics, so you don't have to collimate them. Reflectors and catadioptrics usually have screws that you turn to collimate. - Color balance - The overall accuracy with which the colors in a photograph match or are capable of matching those in the original scene. - Color depth - The number of bits assigned to each pixel in the image and the number of colors that can be created from those bits. True Color uses 24 bits per pixel to render 16 million colors. - Color Space - Describes the mode used to represent color such as RGB, CMYK, or Lab. Each space has its own unique limitations. - Color Temperature - A stellar temperature determined by comparison of the spectral distribution of the star's radiation with that of a blackbody. - This refers to the blurring of objects at the edge of the field of view, most common in short focal ratio Newtonian telescopes (at f/10 and longer, Newtonians are very well corrected for coma). - One of three competing memory card formats used in digital cameras. Holds between 2Mb and 128Mb of picture data. - The process of reducing the size of a file. Compression techniques are distinguished by whether they remove detail and color from the image. 'Lossless' techniques compress image data without removing detail; 'lossy' techniques compress images by removing detail. - Compression, lossless - A file compression scheme that makes a file smaller without degrading the image. This method is generally less effective than lossy methods in terms of resulting file size, but retains the entire original image. - Compression, lossy - A file compression scheme that reduces the size of a file but degrades it in the process so it can't be restored to its original quality. Once deleted the data cannot be recovered. The higher the compression the more noticeable the artifacts of the compression. - Conductor, Electrical - A material capable of carrying (conducting) electricity. 
Silver is the best electrical conductor. Copper, gold, and aluminum are also popular conductors. Aluminum is the conductor most commonly used in IC fabrication. - Confocal (Confocality) - While the optical design of conventional microscopes results in the detection of both focused and unfocussed image components, the confocal principle suppresses the structures outside of the focal plane of the microscope objective. To achieve this pinholes are implemented in optically conjugated locations in the optical path. They function as point light source (excitation pinhole) and point detector (detection pinhole). The diameter of the detection pinhole, along with the wavelength and numerical aperture of the objective being used, determines the axial extension of an optical section. - Precisely defined area of the celestial sphere, associated with a grouping of stars, that the International Astronomical Union has designated as a constellation. - Continuum Emission - Any type of electromagnetic emission which produces radiation over a relatively wide range of frequencies. c.f. - Quantities which provide references for locations in space and time. A typical coordinate system consists of a point of reference (the origin), a set of directions (axes) that span space, and a set of labels that indicate how points are related to the origin. Coordinates in and of themselves are user defined and arbitrary, although certain simple, regular coordinate systems (e.g. Cartesian coordinates) are widely used. A Coordinate Singularity is a location at which a particular coordinate system fails, such as the Schwarzschild metric coordinates at the Schwarzschild radius of a black hole, or lines of longitude at the North pole. This failure doesn't indicate a breakdown in the underlying geometry. It is merely a failure of the coordinate system to give a unique well-defined label to a point in that geometry. - Corrector Plate - Thin lens-like optical piece which removes certain optical aberrations. - An unwanted electrochemical process that affects device (e.g., semiconductor, telescope-mounts etc.) - Cosmic Background Radiation (CBR) - The Cosmic Background Radiation (CBR) consists of relic photons left over from the very hot, early phase of the Big Bang. It now peaks in the microwave band, corresponding to blackbody radiation with a temperature of about 2.7 degrees Kelvin. The CBR is also sometimes called the Microwave Background, or the Cosmic Microwave Background (CMB). - Cosmic Rays - Cosmic rays are really charged particles such as protons, alpha-particles (i.e. helium nuclei) and electrons, traveling at almost the speed of light, c. From the theory of special relativity, cosmic rays carry a very high energy (tending to infinity as the speed tends towards c). This energy is much larger than their rest-mass energy (mc ²), and so they are also known as high-energy or relativistic particles. - Cosmological Constant, the - A physical constant that appears in the theory of general relativity. It corresponds to a force between particles which increases with separation, and only has an effect over cosmological distances, hence the name. Although the constant fits well into the equations, they are more elegant when the constant is dropped (set to zero). Einstein originally put the constant in because without it he found that general relativity predicted an expanding or contracting universe, and he assumed that the universe must be static. 
By doing this he missed the chance to claim the expansion of the universe as a prediction of GR. To rub salt into this wound, it later turned out that Einstein's model of a static universe held balanced by the cosmological constant could never have worked: the balance is precarious and the slightest disturbance would send the universe into accelerating contraction or expansion. Not surprisingly, Einstein called the cosmological constant `my greatest blunder'. - Cosmological Distance Ladder, The - Distances to galaxies are found by a long chain of arguments called the cosmological distance ladder. 1. A scale model of the solar system can be constructed from observations of the motions of the planets in the sky. All distances are known in terms of the radius of the Earth's orbit, the Astronomical Unit, (AU). Copernicus made the first roughly accurate solar system model, using data taken in ancient times, in his famous De Revolutionibus (1543). Modern models are exquisitely accurate. 2. The actual distances to most of the planets can be measured by radar, and since we also know the distance to them in AU, the length of the AU can be found (to nine significant figures!). 3. Distances to nearby stars can be found by various geometrical methods. The simples is via their annual parallax, i.e. their apparent change of position in the sky caused by the motion of the observer on Earth around the sun. The best precision is now about three significant figures in a handful of cases. More usefully, the final results of ESA's Hipparcos satellite (released in April 1997) give the parallaxes of around 10,000 stars to within few percent. Before Hipparcos, annual parallaxes were not as important as some more subtle geometric methods, but the new data will change the situation completely. 4. Stars of similar type have similar luminosities. Thus if we know a star's type (from its colour and/or spectrum) we can find its distance by comparing its apparent with its absolute magnitude; the latter derived from geometric parallaxes to nearby stars. Unfortunately nearby stars are not very bright in absolute terms, so we cannot see distant versions very far away (certainly not in other galaxies). 5. Distances to the super-bright stars that can be seen in other galaxies (especially Cepheid variables and RR Lyrae stars) are found by searching for distant star clusters in our Galaxy that contain both a Cepheid (say) and some fainter stars whose absolute magnitudes are known directly. 6. Distances to the nearest galaxies are found using Cepheids, RR Lyraes, etc. The Hubble Space Telescope is now finding Cepheids in galaxies about ten times more distant than was possible from the ground. 7. For more distant galaxies, we need objects even brighter than Cepheids. Examples are supernova explosions, planetary nebulae, and globular star clusters. The absolute magnitudes for such things can't be easily found in our own Galaxy, so they are measured in nearby galaxies (or clusters of galaxies) with Cepheid or similar distances. 8. At the furthest limits, only whole galaxies are detectable. Galaxies come in a very wide range of luminosities, so we need a way to find their luminosity before we can get their distance. Various methods exist. For instance galaxy luminosity is related to the speed of internal motions; most radio galaxies seem to have similar luminosities; the range of brightnesses in clusters of galaxies do not vary much from one cluster to another. 
At every step of the distance ladder, errors and uncertainties creep in. Each step inherits all the problems of the ones below, and also the errors intrinsic to each step tend to get larger for the more distant objects; thus the spectacular precision at the base of the ladder degenerates into an uncertainty of a factor of several at the very top. To find Hubble's Constant, the ratio of the cosmological recession speed to the distance, we need to go up to Step 7 of the ladder. This is because we can only measure the sum of the recession speed and the random motion of a galaxy, and so we need to go far enough away that the random motions are small compared to the recession speed. More details of individual methods are given in Ned Wright's The ABC's of Distances. A good book describing the Distance Ladder in detail is Rowan-Robinson (1985) (although a lot has happened since it was written). (A summary table in the original pairs distance ranges with typical indicators for successive rungs: the Sun and Solar System; Cepheids and main-sequence fitting; Cepheids, supernovae and OB stars; HST Cepheids, OB stars and supernovae; and, for the most distant objects, the brightest galaxies and the Tully-Fisher relation.)
- Counter Weights - Many telescope motors are not very powerful; if you don't balance the system (telescope & camera), the motors cannot drive accurately.
- CPU - Central Processing Unit - see Processor
- Crayford - A style of eyepiece focuser that does not use gears; very smooth motion.
- Critical Angle - Angle of incidence for which the angle of refraction is 90° when light goes from one medium of high index of refraction into one of lower index.
- CRT - Cathode Ray Tube - The CRT is a vacuum tube used as a display screen in a monitor or television set. The inner surface of the CRT is coated with phosphors, which glow and produce light when hit by an electron beam.
- Curvature Constant (k) - A constant (k) appearing in the Robertson-Walker metric which determines the curvature of the spatial geometry of the universe. The three standard Friedmann models have k = +1 for positive curvature (spherical geometry), k = -1 for negative curvature (hyperbolic geometry), and k = 0 for zero curvature (flat geometry).
- Curvature of image field - The curved surface to which a microscopic image is to be clearly and distinctly mapped is described as image curvature aberration. It is conditional on the convex shape of the lens and makes itself apparent as an error due to the short focal distances of microscope objectives. Here the object image is not in focus both in the center and at the periphery at the same time. Objectives that are corrected for image curvature aberration are called plane objectives (plane = flat image field).
- DAC - Digital-Analog-Converter - Opposite of the ADC. An electronic device, often an integrated circuit, that converts a digital value to an analog voltage. D/A converters are used in many instruments to convert digital reading information into an analog signal for analog output. A time- and value-discrete signal is converted into a time-continuous, value-discrete signal.
- Dark Current - The electronic signal generated by the thermal characteristics of the CCD even in the absence of impinging light.
- Dark Frame - An image of the dark current and camera readout and bias signal made by integrating an image while keeping the CCD array in total darkness.
- Dark Matter - Term used to describe any astronomical mass that does not produce significant light and hence is hard to observe. Examples of dark matter include planets, black holes, white dwarfs (because they are low luminosity) and more exotic things like weakly interacting particles.
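The Critical Angle entry above reduces to Snell's law with the refraction angle set to 90°, so sin(theta_c) = n_low / n_high. A quick sketch (the example refractive indices are illustrative, not taken from the glossary):

```python
import math

def critical_angle_deg(n_high, n_low):
    """Angle of incidence (in the denser medium) beyond which light is
    totally internally reflected: sin(theta_c) = n_low / n_high."""
    if n_low >= n_high:
        raise ValueError("Total internal reflection needs n_high > n_low")
    return math.degrees(math.asin(n_low / n_high))

# Crown glass (n ~ 1.52) to air (n ~ 1.00): about 41 degrees.
print(round(critical_angle_deg(1.52, 1.00), 1))
```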
- Dark Subtract - This is the process of removing noise (generated by dark current) from a CCD image. A dark frame is digitally subtracted from an image to eliminate dark noise. - Darlington Amplifier - An amplifier in which the collectors are tied together, and the the first directs current to the base of the second. - DC - Direct Current - The flow of electrons only goes in one direction. - DDP - Digital Development Processing. - This an image processing routine. DDP processing allows both bright and dim parts of an astronomical object to be displayed at the same time. DDP essentially compresses washed-out regions of an object into a range that the computer can display. This process is especially useful on galaxies which have bright cores and faint spiral arms. - An iterative image processing filter that uses Fourier transform mathematics to restore a blurred image as nearly as possible to an unblurred state. - The density of an object is equal to the mass of that object divided by its volume. Substances (like lead, water, iron, granite) have a certain density under normal pressures. In such cases the density of a substance can also be used to determine how much mass will be present given a certain volume of the substance. For example, water has a density of 1 gram per cubic centimeter (gm/cm3) so a cube of water 10 centimeters on a side weighs 1000 gm (1 kilogram). Some substances (like gases) are compressible and have different densities depending on how much pressure is exerted upon them. The Sun is composed of compressible (and hot!) gases and is much denser at its center than near its surface. - Depth of field - The distance between the nearest and farthest points that appear in acceptably sharp focus in a photograph. Depth of field varies with lens aperture, focal length, and camera-to-subject distance. - Depth of focus - The focal length of a lens system to maintain a precise image size. - Dew Heater - An electric strap that wraps around the front of a telescope and heats up when plugged in to a power supply. Keeping the front lens warm prevents dew condensing. - Dew Shield - A cylinder extending out from the front of refractors, SCTs, and Maks to prevent stay light from entering the telescope and to capture air in front of the lens to delay the formation of dew. - Dichroic filters are interference filters at an angle of incidence of light of 45°. The transmissivity and reflectivity of dichroites depend on a specific wavelength of light. For an RSP 510 filter (reflection short pass), for example, the excitation light below 510 nm is reflected and the excitation light above this value is transmitted. The transmission values are generally between 80% and 90% and the reflection values between 90% and 95%. - An insulating layer. A material that has high resistance. This term is usually used when the insulating layer separates the plates of a capacitor. - Spreading or bending of a wave upon passing around an obstacle or through a narrow opening. - A method of representing information in an electrical circuit by switching the current ON or OFF. Only two output voltages are possible, usually represented by "0" and "1." In clocked circuits a digital signal may change its logic state once during a clock cycle. Opposite to analog. - Digital Circuit - A circuit that operates like a switch and can perform logical functions. Used in computers or similar logic-based equipment. - DIL - Dual In-Line - See DIP below - Simple semiconductor element, comparable to a valve. 
The ideal diode is blocking electric current in one direction and conducting it in the other. The real diode causes a voltage drop and energy loss in the forward direction, therefore power electronics sometimes use active diodes. - DIP - Dual In-line Package also DIL - The most common type of IC package; circuit leads or pins extend symmetrically outward and downward from the long sides of the rectangular package body. Usual package form for ICs in the past, where the pins lie in two rows with 2.54mm (1/10") distance. The pins are put through holes in the circuit board and are soldered from the back side. Nowadays in series fabrication the SMD-package is used. - Discrete Device - A semiconductor containing only one active element, such as a or a diode. - a) The separation of a beam of light into the individual wavelengths of which it is composed by means of refraction or - b) Resolution of white light into its component wavelengths, either by refraction or by diffraction. - DMM - Digital Multi Meter - An electronic instrument that measures voltage, current, resistance, or other electrical parameters by converting the analog signal to digital information and display. The typical five-function DMM measures DC volts, DC amps, AC volts, AC amps, and resistance. - Dobsonian (Dob) - A very simple and stable mount usually used with reflectors; especially very large (greater than 10-inches) telescopes. There are usually no motors or tripods. A simple rotating base and a tilting tube make for an easy to push or pull viewing session. - Doppler Effect - The change in frequency of a wave (light, sound, etc.) due to the relative motion of source and receiver. Things moving toward you have their wavelengths shortened (blueshift). Things moving away have their emitted wavelengths lengthened (redshift). - Doppler Shift - Change in the apparent wavelength of radiation (e.g., light or sound) emitted by a moving body. A star moving away from the observer will appear to be radiating light at a lower frequency than if at rest; consequently, lines in the star's spectrum will be shifted toward the red (lower frequency) end of the spectrum. The existence of a direct relationship between the redshift of light from galaxies and their distances is the fundamental evidence for the expansion of the universe. - Dot pitch - Describes the distance between the perforations on the monitor's shadow mask- The better displays usually have a dot pitch under 0.28mm. As with fine halftone screens, the smaller the dots, the sharper the image. - Double Star - A "system" of two stars that appear - because of coincidental alignment when viewed from Earth - to be close together; it is, however, an optical effect only, and therefore not the same as a binary star system (although until the twentieth century there were few means of distinguishing double and - Two simple lenses used in combination, placed close together or in contact. If they are cemented together, they constitute a "cemented doublet". If they are merely closely adjacent, they are a "separated doublet". - DRAGN (Double Radiosource Associated with - i.e. extragalactic radio sources - are clouds of radio-emitting plasma which have been shot out of active galactic nuclei (AGN) via narrow jets. - The very best example is Cygnus A, the brightest DRAGN in the sky. - DRAM - Dynamic Random Access Memory - A semiconductor read/write memory chip, in which the presence or absence of a capacitive charge represents the state of a binary storage element (zero or one). 
The charge must be periodically refreshed. - Program operating as an interface between an application and a special hardware. A graphics card driver has to insure, that every program can display itself independently from the actual graphics board by means of a special command set. - DSP - Digital Signal Processor - A versatile all purpose chip, used in cameras to handle basic contrast and brightness adjustment and image compression. - Tiny grains of stuff, e.g., carbon grains (soot) and silicate grains (sand) that are about 0.1-1.0 micron in size. Dust grains are a major component of the interstellar medium. Dust blocks visible light causing interstellar extinction. Dust scatters incident starlight, particularly the blue wavelengths of light (blue light has a wavelength comparable to the dust grain's size) causing interstellar reddening. The dust itself is cold, and cools even further by giving off infrared emission. - Dynamic Range - The ratio of a CCD pixel's full-well capacity to the readout noise. Useful in determining the appropriate number of digitization levels that the analog-to-digital conversion system should use. - Third planet from the Sun. First forms of life appeared about 3.2 to 3.5 × 109 years ago (Homo sapiens appeared as a species about 105 years ago). - The obscuration of a celestial body caused by its passage through the shadow cast by another body. - EEPROM OR EPROM. Electrically-Erasable Programmable Read-Only - Similar to PROM, but with the capability of selective erasure of information through special electrical stimulus. Information stored in EEPROM chips is retained when the power is turned off. - Electromagnetic Spectrum - The distribution of light separated in order of some varying characteristic such as frequency. The "electromagnetic spectrum" refers to the full range of possible frequencies and wavelengths of light. If we "take a spectrum" of a star we analyze its light according to wavelength or frequency by, say, passing the light through a prism. A "spectral line" refers to emission or absorption at a particular wavelength of light. - A negatively charged particle revolving round the nucleus of an atom. - An elementary particle (of the type known as a lepton) with a negative charge. One of the components of atoms, the electrons orbit around the nucleus, and the distribution and number of electrons determine the chemical properties of an element. - Electronic Shutter - Electronic shuttering is the process of controlling the exposure period of the CCD by electronic methods (compared to conventional shuttering, which involves an electro- mechanical shutter system that opens, allowing light to fall momentarily on the CCD, and then closes). In most interline transfer CCDs the process of electronic shuttering involves "dumping" accumulated charge from the imaging photosites to the substrate for a predetermined amount of time and then stopping the "dumping" process for the actual exposure period. This is followed by the normal charge transfer during vertical blanking and then readout during the next field. - A particular type of atom, with specific atomic number and chemical properties. The smallest unit into which matter may be broken by chemical means. - EMF - Electromagnetic Force - The force between charged particles, which accounts for electricity and magnetism. One of the four fundamental forces of nature, it is carried by photons, and is responsible for all observed macroscopic forces, except for gravity. - Emission Filter - Also Barrier filter, Emitter. 
A color filter that attenuates all of the light transmitted by the excitation filter and very efficiently transmits any fluorescence emitted by the specimen.
- Emission Spectrum - The bright lines seen against a darker background, created when a hot gas emits photons characteristic of the elements of which the gas is composed.
- Emitter - The region of a bipolar transistor that serves as a source or input end for carriers. N-type for NPN, P-type for PNP.
- Encke's Division - A region of decreased brightness in the outermost ring of Saturn.
- Energy - Energy is usually defined as "the capacity to do work", but just what does that mean? Work is defined in physics as the exertion of a force over some distance, e.g., lifting a rock up against the gravity of the Earth. You probably have a pretty good colloquial grasp of the idea of "work" as something that takes effort. Energy is also something that is conserved within a closed system. This means that it is neither created nor destroyed but simply moved about (possibly changing from one form of energy to another). Light is basically a form of energy, one that radiates through space. So the Sun can release nuclear energy, creating light which travels through space to the Earth, where it can be absorbed by, say, a photocell, which in turn permits a motor to run, propelling a solar-powered car forward.
- Entrance Pupil - The apparent size of the limiting aperture of a lens or lens system (properly that of the diaphragm), as seen from the object plane. This can shift and become a complex matter in some circumstances.
- Entropy - A quantitative measure of the disorder of a system. The greater the disorder, the higher the entropy.
- Equatorial (Eq) - a) A special kind of telescope mount that has its axes tilted up to match the latitude of your observing site and is pointed at the north celestial pole (or the south celestial pole, below the equator). b) The classic type of telescope mount with one axis parallel to the Earth's polar axis (i.e. pointing at the celestial pole) and the other at right angles. Once the object is located, only the polar axis need be driven by a motor to counteract the Earth's rotation. See also GEM.
- Equilibrium - A balance in the rates of opposing processes, such as emission and absorption of photons, creation and destruction of matter, etc., so that there is no net change.
- Escape Velocity - The outward velocity required to leave the surface of a body of mass M and radius R and escape to infinity (not fall back). The formula for the escape velocity is √(2GM/R).
- Euclidean Geometry - Flat geometry based upon the geometric axioms of Euclid.
- Event - A point in four-dimensional spacetime; a location in both space and time.
- Event Horizon - An event horizon is a lightlike surface in spacetime which divides spacetime into two regions: that which can be observed, and that which cannot. In the case of a black hole, the event horizon is that surface surrounding the region out of which light itself cannot escape. No signal or information from within the event horizon can reach the outside universe. For a nonrotating black hole, the horizon is located at the Schwarzschild radius, corresponding to Rs = 2GM/c².
- Excitation Filter - Also Exciter. A color filter that transmits only those wavelengths of the illumination light that efficiently excite a specific dye. See Emission Filter.
- Exit Pupil - The exit pupil of a lens system is an image of the entrance pupil (hence conjugate to it) and normally should be the image of the limiting diaphragm.
In both the microscope and the telescope it is the eyepoint where the beam has its smallest cross-section. It is also called the Ramsden circle (q.v.) or eyepoint. - A controlled trial for the purpose of collecting data about a specific phenomenon. - A chemical that causes a sudden, almost instantaneous release of pressure, gas, and heat when subjected to sudden shock, pressure or high temperature. - 1. The act of allowing light to strike a light-sensitive surface. 2. The amount of light reaching the image sensor, controlled by the combination of aperture and shutter speed. In photographic terms this is the product of the intensity of light and the time the light is allowed to act on the sensor, or the film. In practical terms, the aperture controls the effective diameter of the hole that allows light through, and shutter speed controls the length of time the shutter is open. - As light from a star travels through interstellar space it encounters some amount of dust. This dust scatters some of the light, causing the total intensity of the light to diminish. The more dust, the dimmer the star will appear. It is important to take this effect into account when measuring the apparent brightness of stars. The dark bands running across portions of the milky way in the sky are due to extinction by copious amounts of dust in the plane of our - The lens system used in an optical instrument for magnification of the image formed by the objective. - Eyepieces come in various types. Every eyepiece has a focal length. The magnification that results when a given eyepiece is used with a given telescope, is equal to the focal length of the telescope divided by the focal length of the eyepiece. Thus if the telescope has a focal length of 1000 mm and the eyepiece has a focal length of 25 mm, the magnification will be 1000 / 25, or 40. - There are several types of eyepiece designs. The most popular are: - The Ramsden is a very old design, with a rather narrow apparent field of view -- perhaps as little as 30 degrees. Ramsdens do not work well at focal ratios shorter than about f/9, but good ones make surprisingly nice eyepieces for Lunar, planetary, and double-star observation, at longer focal ratios. Ramsdens often have prominent ghosts. The simplest form of Ramsden consists of two identical simple lenses, each flat on one side and convex on the other, facing each other with convex sides inward, spaced apart by a distance equal to or slightly less than their focal length. - In essence, the Kellner is an achromatized Ramsden. It has a slightly larger apparent field of view than the Ramsden, and works at slightly faster focal ratios. Kellners tend to have rather prominent ghosts. Kellner eyepieces consist of a small achromat -- a cemented doublet -- near your eye, and a simple lens at the far end of the eyepiece. - Orthoscopics have moderate apparent fields of view -- 40 or 45 degrees -- and work well at fast focal ratios. Many consider them the best eyepieces for Lunar, planetary, and double-star work. There are actually several designs called "orthoscopic". The most common kind has a simple lens nearest your eye, and a cemented triplet further away. Another kind resembles a Plossl. - The Erfle has a rather wide apparent field of view -- perhaps 68 degrees or more. The image quality at the edges of the field, at small focal ratios, is not as good as for more modern wide-field eyepieces. Erfles are generally composed of five or six simple lenses, grouped into two doublets and a singlet, or three doublets. 
- Plossls have moderate apparent fields of view -- 50 degrees is typical -- and work well at fast focal ratios. Plossls consist of four simple lenses, grouped as two cemented doublets. - Ultra Wide - Ultra Wide Angle is a "house brand" of Meade. These eyepieces are well corrected, with very large apparent fields of view, of 84 degrees. Ultra Wide Angle eyepieces are reported to be generally similar in design to - Koenigs have a rather wide apparent field of view -- perhaps as much as 70 degrees. The various eyepieces commonly labeled "Koenig" contain anywhere from four to seven simple lenses, grouped into various combinations of cemented doublets and singlets. - Noted for a very wide apparent field of view -- almost 80 degrees -- and for excellent correction at fast focal ratios. Speers-Waller eyepieces are reported to be similar to Naglers. - Nagler is a "house brand" of Tele Vue. These eyepieces are noted for a very wide apparent field of view -- 82 degrees -- and for excellent correction at fast focal ratios. Naglers are big, heavy, and expensive, and consist of seven or eight simple lenses grouped together into four singlets or doublets. - Eye Relief - The distance from the surface of the rearmost lens of the eyepiece, to the exit pupil. When the eyepiece is in use, that distance should be the distance from the rearmost lens of the eyepiece to the iris of the observer's eye. The remaining distance is the space between the observer's eye and the - It is the clearance available for moving the observer's head without bumping the telescope, and is also the place where the observer's spectacles must fit, if they are worn while observing. - Fastar is Celestron's high-speed CCD imaging system. Fastar involves removing a Schmidt-Cassegrain telescope's secondary mirror and placing the CCD camera at the front of the telescope. This provides a wide field of view and a vary fast imaging system. A telescope which is said to be "Fastar compatible" has a removable secondary mirror. The actual accessories needed for Fastar imaging are sold separately. - FET - Field Effect Transistor - A solid-state device in which current is controlled between source and drain terminals by voltage applied to a gate terminal, which is insulated from the semiconductor substrate. - Fermi Mechanism - A process hypothesised by Enrico Fermi whereby a small fraction of the charged particles in space can be boosted up to the high energies of cosmic rays. The mechanism works on particles which are already distinctly more energetic than average. Fermi imagined the high-energy particles scattered from dense clouds in the interstellar gas, or more correctly from the magnetic field embedded in the clouds. In these collisions the particle gains energy if the cloud is moving towards it, and loses if the cloud is moving away, but Fermi realised that the first case happens slightly more often than the second, so the particles slowly gain energy. In a strange coincidence, in 1977 four independent groups of researchers realised that in shock waves the high-energy particles find that every scattering is nearly head-on, giving much more efficient acceleration (called "first-order Fermi acceleration"). According to simplified calculations, the particles end up with a "power-law" distribution of energies, very like that of the electrons which actually produce cosmic synchrotron radiation. 
It therefore seems quite likely that the first-order Fermi mechanism is responsible, although things don't agree so nicely when the calculations are done more carefully. - FFT - Fast Fourier Transform. - FFT filters are image processing algorithms applied to CCD images, usually to sharpen or smooth and image. FFT filters have the same function as kernel filters, but are applied differently so the processing is faster. Applying an FFT to an image changes the image so that applying a kernel filter is easier. The FFT function is then reversed and a filtered image results. - A mathematical representation of a quantity describing its variations in space and/or time. - Field Rotation - Rotation of the FOV over time. With an Alt-Az system that does not control for field rotation, if tracking is otherwise perfect, the stars and other objects in the FOV will pivot around the center of the FOV during a CCD or film exposure or during a visual observing session. Equatorially mounted and driven systems will suffer from field rotation if polar alignment is imperfect. - FIFO - First In, First Out - Electronic pendant of the "stand in a queue". Used to synchronize data sources with different speed or unsynchronized data - Filter Wheel - Black and white CCD cameras use color filters to separate the three primary colors (red, green, and blue) in order to produce full color images. Usually an automated mechanical wheel holds the filters in front of the camera and rotates the appropriate filter into place before an exposure. - 1. Using color-dyed or interference-layered glass inserted into the optical path to restrict the passage of full-spectrum - 2. Applying a mathematical function to the pixels in an image array that modifies each pixel's value according to the values of an assigned set of neighboring pixels. Primarily used to blur or sharpen localized aspects of an image. - Apple's name for the communication standard IEEE - FITS - Flexible Image Transport System. - The standard data file format for astronomical CCD images. FITS images use file extension names of FTS, FIT, or FITS. - FLASH MEMORY - It is a non volatile memory technique with fast access times; rewriteable many times and uses a block erase technique as opposed to EEPROM, which erases one bit at a time. - Flat-Field Frame - A CCD image of the irregularities in the optical system and the CCD chip. The image is integrated while the optical/CCD system is pointed at a wide-field, evenly illuminated source, such as that provided by a specially manufactured light box, the inside wall of an observatory dome, a large poster board positioned in front of the telescope, or a twilight sky. - An electrical circuit having two stable states: on and off. A basic logic - Fluorite Objectives - Describes a correction class for objectives. Fluorite lenses are semi-apochromatic, meaning their degree of correction lies between the achromatic and apochromatic. - A flux is the rate at which something is transferred through a surface, like 10 flies per minute through the 1 square inch hole in the busted screen door. In astronomy flux is used to express the amount of energy radiated per second across an area like a square centimeter. - Flux Density (see also - When we describe Sirius as the brightest star in the sky, we are talking about flux density. This is a measure of how bright things seem to us from our platform on Earth. It depends both on the luminosity and the distance, since distant objects appear fainter. 
Radio astronomers measure this in units of Jansky, optical astronomers use apparent magnitude. - This is a property of a wave, and it is the number of wave crests that pass a given point per second. Frequency is is measured in units of inverse time (e.g., "cycles per second''). A cycle per second is the unit of frequency and it is known as a "Hertz.'' Since light moves at the constant speed of light, the frequency of a light wave is related to the wavelength: the frequency is given by the number of wavelengths that go by per second at the speed of light, hence frequency is wavelength (distance) divided by speed (c). The higher the frequency of light the greater its energy. - It designates the incidence of a change in signal or state. Given in Hz, that is number per second. FM-radio works in the MHz-region (M=Mega=106), fast integrated circuits in the GHz-region (G=Giga=109), as does the mircowave - Focal Length - The distance from the optical center of the lens to the image sensor when the lens is focused on infinity. The focal length is usually expressed in millimeters (mm) and determines the angle of view (how much of the scene can be included in the picture) and the size of objects in the image. The longer the focal length, the narrower the angle of view and the more that objects - Focal Ratio - The effective focal length of an optical system divided by the diameter of the primary optical component. I.e. The ratio between the focal length of a telescope and the aperture. - A telescope with an 8" aperture and 80" (2000mm) focal length has a focal ratio of f/10. Smaller focal ratios equate to shorter exposure times. An f/4 system is faster than an f/6 system, for example. - Focal Reducer - An optical component or system for changing the image scale of a telescope to achieve a better match between the seeing disk and the pixel size. - The process of bringing one plane of the scene into sharp focus on the - Fork Mount - A fork mount is a type of mount where the telescope is held by two arms, and swings between them. A fork mount can be either alt-azimuth or equatorial (through the use of a wedge). Fork mounts are most commonly used with Schmidt-Cassegrain telescopes, and are almost always equatorial. - FOV - Field of View - The size of an imaged or visual scene in terms of sky dimensions, usually stated in degrees, arcminutes, or arcseconds. The approximate width of the FOV in arcseconds can be calculated by dividing the width of the CCD chip in microns by the focal length of the optical system in millimeters and multiplying the result by 206. - Frame Grabber - A device that lets you capture individual frames out of a video camera or off a video tape. - Frame Rate - The number of pictures that can be taken in a given period of time. - A numerical designation (f/2, f 2.8, etc.) indicating the size of the aperture (lens opening). A small f-number ( large aperture diameter) gives a small depth of field, and a large f-number gives a large depth of field. - Full Well Capacity - Each photosite on a CCD chip can contain a certain number of electrons. This number is called the Full Well Capacity. Within a certain chip full well capacity is a function of binning, so that binning 2 times, for example, quadruples the full well capacity. Of course, 2x binning also makes the CCD effectively 4 times more sensitive and so the wells will fill in the same amount of time. 
Chips with anti-blooming generally have lower full well capacity, but will not bloom when the well is filled - Those events which could be influenced by a given event. All events located within the future light cone. - FWHM - Full Width Half Maximum - A measurement of the size of a point source image, such as a star, in terms of the width of the 50% peak value circumference. Usually stated in units of arcseconds or pixels. - GaAs - Gallium-Arsenide - A compound semiconductor material in which active devices are fabricated. GaAs has a higher carrier mobility than silicon, thus it has the capability of producing higher speed devices. - The factor by which an incoming signal is multiplied. - Galactic Equator - The primary circle defined by the central plane of the Galaxy. - Galaxies are often described as huge collections of stars. This is about as accurate as describing a city as a huge collection of street-lights: stars are certainly the most obvious feature of galaxies when see far away in the dark, but they represent a fairly small fraction of the total mass, 90% or so of which consists of the famous dark matter, whose nature is one of astronomy's biggest mysteries. As well as stars and dark matter, galaxies contain gas and dust in the inter-stellar medium between the stars. The may also contain an active galactic nucleus at their Galaxies come in a wide range of sizes (meaning diameter, brightness, and mass, all of which roughly go together), ranging from tiny dwarfs with only a few million stars, through normal galaxies like our own Milky Way, with a hundred billion stars, to the trillion-star cD galaxies that sit at the centers of the great clusters of galaxies. As is often the case in nature, the smallest are the most common; on the other hand the size of the larger galaxies more than makes up for their rarity, so that a typical star is likely to be in a galaxy with a size approaching that of the Milky Way or larger, and large galaxies are also the easiest to find in the sky, due to Malmquist bias. Normal galaxies come in two basic types: spiral and elliptical. - The logarithmic brightness value assigned to video monitors to allow replication of the logarithmic visual range of the eye. - 1. A signal which in the active state enables an operation to occur; and when in the inactive state inhibits an operation from occurring. 2. The basic digital logic element - where the binary value of the output depends on the values of the inputs. 3. The primary control terminal of a field effect transistor. - GEM - German Equatorial Mount - A telescope mount that can easily counteract the Earth's rotation and track the stars. - General Relativity (GR) - Einstein's theory of gravity. Einstein describes a gravitational field in terms of the "curvature" of space. J. H. Wheeler boiled the theory down to the slogan Space tells matter how to move; matter tells space how to curve. Because of the curvature of space, the various ways we use to define length end up giving different answers when general relativistic effects are important, for instance in cosmology, or near a black hole. When Einstein proposed the theory in 1916, there was very little evidence in its favor. The situation is very different today: the predictions of General Relativity have been tested with high precision in many ways, in laboratories, in the Solar System, and by observations of distant pulsars. The results agree very well with General Relativity, showing that it must be, at the least, very close to the right answer. ... 
A full appreciation of general relativity involves mastering some difficult mathematics (unlike special relativity). To be honest, even professional astronomers often never get around to learning GR properly!
- Geocentric - Taking the Earth to be the center, e.g., of the solar system, or of the universe.
- Geodesic - In geometry, that path between two points/events which is an extremum in length. In some geometries, such as Euclidean, the geodesics are the shortest paths, whereas in others, such as in the spacetime geometries appropriate to general relativity, the geodesics are the longest paths.
- Giant Molecular Cloud - A region of dense interstellar medium that is sufficiently cold that molecules can form. These clouds are very cold (10-20 K) with relatively high densities (a trillion particles per cubic meter), and huge. Even though the temperatures are very cold, the molecules in these clouds emit radio radiation which can be detected on Earth. These regions are believed to be where new stars form.
- GIF - Graphics Interchange Format - An image file format heavily used on the Web (256 colors; the LZW compression it uses was patented by Unisys).
- Globular Cluster - A tightly packed, symmetrical group of thousands of very old (pure Population II) stars. The stellar density is so great in the center that the nucleus is usually unresolved. The stars within a globular cluster orbit each other because of their mutual gravity.
- GoTo - A motorized & computerized telescope.
- Gravitational Lens - A massive object which causes light to bend and focus. This occurs because light falls in a gravitational field.
- Graviton - A hypothetical massless carrier boson which is the carrier of the gravitational force.
- Gravity - The weakest of the four fundamental forces; that force which creates the mutual attraction of masses.
- Gray scale - A series of shades of gray ranging from pure white to pure black.
- Grayscale - The linear array of brightness values assigned to a monochrome image represented in black and white, where 0 = black and the maximum array value = white. For example, in an 8-bit dynamic range, 0 = black, 255 = white, and medium gray is about 128.
- Ground - A common reference point for an electrical system.
- Guiding - No telescope mount can track perfectly, yet for CCD imaging it is necessary to track the object being imaged very accurately. This is done by guiding on a star to make small corrections to the mount so that it accurately follows the star. This makes up for any errors in the telescope's drive system. Guiding can be done manually by watching a star through a crosshair eyepiece, or, more commonly, by using an autoguider to guide automatically. See also Unguided Exposure, Autoguider, Self-Guiding, and Track & Accumulate.
- H-alpha Solar Filter - A very special filter that allows only a very small portion of the sun's total spectrum through the telescope; the bandpass is on the order of 0.7 to 1.7 Angstroms. It allows you to view solar prominences.
- Hadron - A class of particles which participate in the strong interaction. Hadrons consist of those particles (baryons, mesons) which are composed of quarks.
- Halftone - A method of printing a continuous-tone image, such as a photograph, using a screen of tiny dots. In inkjet and lithographic printers only the four CMYK process colors are used. Once printed, and when viewed from a distance, the human eye perceives these tiny dots as a continuous tone.
- Hardware - Generic term for electronic devices, that is, things you can touch. Contrary to software. (My definition: throw it out of the window; if it is broken it was hardware....)
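Tying together the Gamma and Gray scale entries above: in an 8-bit monochrome image the pixel values run from 0 (black) to 255 (white), and a gamma curve remaps them non-linearly. A small illustrative sketch (the gamma value of 2.2 is a common display figure, not something taken from this glossary):

```python
import numpy as np

def apply_gamma(pixels_8bit, gamma=2.2):
    """Remap 8-bit grayscale values (0 = black, 255 = white) through a
    simple power-law 'gamma' curve and return 8-bit values again."""
    normalised = pixels_8bit.astype(np.float64) / 255.0   # 0.0 .. 1.0
    corrected = normalised ** (1.0 / gamma)               # brighten mid-tones
    return np.clip(np.round(corrected * 255.0), 0, 255).astype(np.uint8)

levels = np.array([0, 64, 128, 192, 255], dtype=np.uint8)
print(apply_gamma(levels))   # 0 and 255 stay fixed; medium gray moves up
```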
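Several of the optical entries earlier in this glossary (Eyepiece, Exit Pupil, Focal Ratio, FOV) boil down to simple arithmetic. The sketch below collects the formulas quoted there; the exit-pupil relation (aperture divided by magnification) is standard but is not spelled out in the entries, and the example telescope and chip values are hypothetical.

```python
def magnification(scope_focal_mm, eyepiece_focal_mm):
    # Telescope focal length divided by eyepiece focal length (Eyepiece entry).
    return scope_focal_mm / eyepiece_focal_mm

def exit_pupil_mm(aperture_mm, mag):
    # Diameter of the beam leaving the eyepiece (assumed standard relation).
    return aperture_mm / mag

def focal_ratio(scope_focal_mm, aperture_mm):
    # Focal length divided by aperture: the f/number (Focal Ratio entry).
    return scope_focal_mm / aperture_mm

def fov_arcsec(chip_width_microns, scope_focal_mm):
    # Rule of thumb from the FOV entry: width_um / focal_mm * 206.
    return chip_width_microns / scope_focal_mm * 206

# A hypothetical 200 mm f/10 telescope, 25 mm eyepiece, 6900-micron-wide chip.
scope_focal, aperture = 2000.0, 200.0
mag = magnification(scope_focal, 25.0)
print(mag)                                   # 80x
print(exit_pupil_mm(aperture, mag))          # 2.5 mm
print(focal_ratio(scope_focal, aperture))    # f/10
print(round(fov_arcsec(6900.0, scope_focal)))  # ~711 arcsec, about 12 arcmin
```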
- Hawking Radiation - Emission of particles, mostly photons, near the event horizon of black holes due to the quantum creation of particles from the gravitational energy of the black hole. - Heat Sink - An assembly that serves to dissipate, carry away, or radiate into the surrounding atmosphere heat that is generated by an active electronic - Taking the Sun to be the center, e.g., of the solar system. - A histogram is a graph of number of pixels versus pixel value. - Pixel values run from lowest (displayed as black) to highest (displayed as white). A bar is plotted for each pixel value showing the number of pixels in the image with that value. An astronomical image typically has more bars toward the lower (darker) end of the histogram since most astroimages contain are large amount of dark sky around a brighter (but small) object. - HSL - Hue, Saturation, Luminance - A particular conformation of color theory. - HST - Hubble Space Telescope - Hubble Constant - The constant of proportionality (designated H) between recession velocity and distance in the Hubble law. It is a constant of proportionality but not a constant in time, because it can change over the history of the universe. The Hubble constant is defined as the rate of change of the scale factor with respect to time divided by the scale factor. Measuring the Hubble constant is difficult and remains and important task for astronomers. Present best values lie between approximately 50 km/sec/Mpc and 100 km/sec/Mpc, with a value around 70 km/sec/Mpc favored. - Hubble's law - In 1929 Edwin Hubble showed that the further a galaxy is away from us, the faster it is moving away; this is now called Hubble's law, and it means that the universe is expanding. What Hubble actually measured was the redshift, usually written z , of the spectrum of the galaxies: he found that the spectrum was `stretched out' so that a feature that should have been at a certain wavelength was actually detected at a wavelength longer by a factor (1 + z). The usual cause of such a redshift is the Doppler effect, and by assuming this it is possible to calculate the recession velocity of the galaxy. In the cosmological case, it turns out that the Doppler formula is only an approximation; the deeper meaning of the redshift is that the factor (1 + z) gives the ratio of the size of the universe at the time the light was emitted, to the size it is today. This expansion is quite a subtle concept. It does not mean that everything (galaxies, planets, people, ants, atoms) is getting bigger; if that were the case, we would never know, because our tape measures and rulers would be growing at the same rate. The idea is clearest for a toy universe consisting of `dust' spread evenly through space, with individual particles not interacting at all with each other. Expansion means that the distance between any two particles is getting larger. Now think of a row of particles: since each is moving away from its immediate neighbours (at the same rate, from homogeneity), the relative motion must be proportionately larger for particles separated by larger distances: Hubble's law. Of course the real universe is lumpy on `small' scales; instead of being a smooth soup, matter is condensed into galaxies and stars and all the rest (just as well for us!). Within these objects the original expansion has been overcome by gravity. This is a runaway process. Start with a small region in the early universe with a slightly higher-than-average density. 
Its gravity will slow down both its own expansion and that of surrounding matter, so both the size of the region and its relative over-density increase. Eventually the expansion near the centre is stopped altogether. The size of the region affected continues to grows as outlying particles have time to move in. At present the largest regions in which the expansion has been stopped are the clusters of galaxies a few million light years across. For points with larger separations, the motions induced by local density peaks are small compared to the relative motion from the overall expansion. - Distinct color or shade. - Hybrid Circuit - Mounting technique, where several elements are placed on a substrate (Al2O3-ceramics) by special pastes by silk-screen print, followed by a burning process. Semiconductors are bonded "naked" (without package) or soldered in SMD packages. Complete modules are often founded in - Hyperbolic Geometry - A geometry which has negative constant curvature. Hyperbolic geometries cannot be fully visualized, because a two-dimensional hyperbolic geometry cannot be embedded in the three-dimensional Euclidean space. However, the lowest point of a saddle, that point at which curvature goes both "uphill'' and "downhill,'' provides a local representation. - A proposed explanation for an observed phenomenon. In science, a valid hypothesis must be based upon data and must be subject to testing. - IC - Integrated Circuit - consists of several (up to some million) transistors, capacitors (sometimes planar inductors, too). These are built in a numerous process steps on a ultra-clean silicon disk (the wafer). These processes include the fabrication of "masks", used for structuring a light sensitive varnish. After etching the enlightened parts of the varnish, so called dotants are brought in by diffusion of ion implantation. This modifies the electrical behavior of the silicon. - ICC profiles - International Color Consortium - A profile is a list of characteristics that describes a particular color space Pixel such as the space of an Apple 13-inch monitor. ICC profiles are interpreted by Color Management Modules, when your image is viewed on a monitor in a different workstation. - IEEE - Institute of Electrical and Electronics Engineers - Abbreviation for Institute of Electrical and Electronics Engineers. Also a - IEEE 1394 - A port on the computer capable of transferring large amounts of data very - Sony's name for IEEE 1394. - Image sensor - A solid-state device containing a photosite for each pixel in the image. Each photosite records the brightness of the light that strikes it during an - One inch (1") is 25.4mm. The inch is the commonly used measure in electronics, mostly 1/10" or 1/20" of distance is found for pin distances of components. - Index of Refraction - Ratio of the speed of light in a vacuum to the speed of light in a particular substance; ratio of the sine of the angle of incidence to the sine of the angle of refraction. - A material that is a poor conductor of electricity - used to separate conductors from one another or to protect personnel from electricity. - Intensity (or Surface Brightness) (see also Brightness) - This is what we mean when we say that the center of a galaxy is brighter than the outer regions; an image is essentially a map of intensity. It measures the flux density we receive, not from the object as a whole, but from each unit area of the sky (technically, solid angle). 
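Following on from the Hubble Constant and Hubble's law entries above: the recession velocity is simply v = H0 × d, and the measured redshift gives the stretch factor (1 + z) = observed wavelength / emitted wavelength. A rough sketch, using the "favored" value H0 = 70 km/s/Mpc quoted above (the example distance and observed wavelength are arbitrary; 656.3 nm is the rest wavelength of H-alpha):

```python
H0 = 70.0              # km/s per megaparsec, the value favored in the entry above
C_KM_S = 299_792.458   # speed of light in km/s

def recession_velocity(distance_mpc, h0=H0):
    """Hubble's law: velocity (km/s) = H0 * distance (Mpc)."""
    return h0 * distance_mpc

def stretch_factor(observed_wavelength, emitted_wavelength):
    """(1 + z): ratio by which the universe has expanded since emission."""
    return observed_wavelength / emitted_wavelength

d = 100.0  # Mpc, arbitrary example
v = recession_velocity(d)
print(v, "km/s")                     # 7000 km/s
print(round(v / C_KM_S, 3))          # ~0.023, i.e. z ~ 0.023 for small z
print(stretch_factor(672.8, 656.3))  # H-alpha observed at 672.8 nm -> 1 + z ~ 1.025
```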
The flux density of an object is thus the product of its intensity times its solid angle. - Acquisition of electronic data from a CCD array. Synonymous with image acquisition or exposure. Analogous to exposure in film terminology. - A wave is something that moves along and has high points (crests) and low points (troughs). If two (or more) different wave trains pass over one another the crests and troughs can add together to make bigger crests and troughs, and a crest and a trough can add together to produce zero. So if light is a wave phenomenon, then two light sources produce waves that in some places produce large amplitudes and other places produce zero. When to point sources of light are projected onto a screen this wave interference effect produces alternating light and dark spots. This demonstrates the wavelike nature of light. - In an image interpolation adds extra pixels. It's done with some zoom A process that uses software to add new pixels to the mosaic-like bitmap of an image or part of an image. The color of the new pixels is derived from the original adjacent pixels. Interpolation appears to increase the original resolution and quality of an image. - Interstellar Medium - The name given to the stuff that floats in space between the stars. It consists of gas (mostly hydrogen) and dust. Even at its densest the interstellar medium is emptier than the best vacuum humanity can create in the laboratory, but because space is so vast, the interstellar medium still adds up to a huge amount of mass. - An atom which has gained or lost an electron and thereby acquired an electric charge. (Charged molecules are called radicals, not ions.) - I/O - Input/Output - The process of transferring data to and from a computer-controlled system using its communication channels, operator interface devices, data acquisition devices, or control interfaces. The computer may exchange information in several ways, that is, it has to accept data or to provide data. At lowest level, the circuit level, the computer has a set of so called port addresses and memory addresses. A 8-bit port holds 8 binary signals, for example. - This is what an atom becomes when an electron is separated from the atom, leaving it with a net positive charge or, if an electron is added, leaving it with a net negative charge. - IRQ - Interrupt ReQuest - With the help of interrupts a device can get the attention of the processor, e.g. because an error appeared or new data is ready. This way the processor doesn't have to look at all available ports all the time, which is called polling. IRQs provide flexibility and can enhance performance. - ISO - International Standards Organization - ISO is taken from the Greek word isos, meaning equal. The speed or light sensitivity of conventional photographic film and of the CCDs found in digital cameras, is measured in ISO values. Lower ISO numbers indicate slower films, or lower CCD ratings which require less light to be correctly exposed. - One of the forms in which an element occurs. One isotope differs from another by having different numbers of neutrons in its nucleus. The number of protons determines the elemental identity of an atom, but the total number of nucleons affects properties such as radioactivity or stability, the types of nuclear reactions, if any, in which the isotope will participate, and so forth. 
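The Interpolation entry above describes adding new pixels whose values are derived from the original adjacent pixels. A minimal sketch of separable (bilinear-style) upscaling for a grayscale array, using only NumPy; real software would use an optimised resampling routine, and the tiny test array is invented:

```python
import numpy as np

def upscale_bilinear(image, factor):
    """Enlarge a 2-D grayscale array by linear interpolation:
    each new pixel value is derived from its original neighbours."""
    rows, cols = image.shape
    new_rows, new_cols = rows * factor, cols * factor

    # Coordinates of the new pixel grid expressed in old-pixel units.
    row_pos = np.linspace(0, rows - 1, new_rows)
    col_pos = np.linspace(0, cols - 1, new_cols)

    # Interpolate along columns first, then along rows.
    temp = np.empty((rows, new_cols))
    for r in range(rows):
        temp[r] = np.interp(col_pos, np.arange(cols), image[r])
    result = np.empty((new_rows, new_cols))
    for c in range(new_cols):
        result[:, c] = np.interp(row_pos, np.arange(rows), temp[:, c])
    return result

tiny = np.array([[0.0, 10.0],
                 [20.0, 30.0]])
print(upscale_bilinear(tiny, 2))
```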
- Jansky (Jy) - Unit of flux density, named after Karl Jansky, who first discovered extra-terrestrial radio waves (from the Milky Way) at Bell Telephone Laboratories in Holmdel, New Jersey, in 1932. 1 Jy = 10⁻²⁶ W Hz⁻¹ m⁻².
- Jets - Perhaps the weirdest feature of AGN is that they can produce narrow jets of material streaming outwards from the centre. The jets are usually produced in pairs, pointing in opposite directions; occasionally there only seems to be one jet. These jets can sometimes be "seen" directly by radio telescopes, but more often we deduce their existence because far outside the galactic nucleus they produce a DRAGN. In one sense, we should not be too surprised to find jets, since in every other case where accretion is thought to occur in astronomy, jets are also common. These include young stars forming out of gas clouds, and binary stars where matter is falling from one of the pair onto the other. Unfortunately, we don't know how jets are produced in any of these circumstances!
- JPEG - Joint Photographic Experts Group - A very popular digital camera file format that uses lossy compression to reduce file sizes. Developed by the Joint Photographic Experts Group. JPEG compression provides the best results with continuous-tone images, such as photographs, when the size of the file is an important factor. JPEG images are stored with a file extension name of JPG or JPEG.
- Jupiter - Fifth planet from the Sun. It is more massive than all other planets and satellites combined; if it were about 80 times more massive, it would become self-luminous due to fusion reactions. The heat flux from the center to the surface is mainly convective. For both Jupiter and Saturn it is necessary to invoke a substantial source of internal heating (presumably gravitational contraction) to account for the surface temperature (Jupiter radiates about 2 1/2 times as much heat as it receives from the Sun). Jupiter's surface shows pronounced horizontal striations: the light layers (zones) are at a slightly higher altitude and about 15° cooler than the dark layers (belts). It is surrounded by a partial torus of atomic H in the orbit of Io. Thirteen satellites, the four outermost of which have retrograde motion, high eccentricity, and high inclination. (Jupiter XIII, discovered in 1974, has a period of 239 days; i = 26°.7, e = 0.147.)
- Kelvin Scale - This is the temperature scale which uses the same size of degree as the Celsius or Centigrade system, but which begins at absolute zero, the coldest temperature possible, corresponding to the lowest possible energy state of a system. Temperature in kelvins gives a measure of the average energy of a system.
- Kepler's Laws - The three laws of planetary motion discovered by Johannes Kepler. 1. Planets orbit on ellipses with the Sun at one of the foci of the ellipse. 2. Equal area is swept out by the planet's motion as it moves around the ellipse (a planet moves fastest when it is nearest the Sun). 3. The square of the period of the orbit is equal to the cube of the semimajor axis (half the long axis of the ellipse). This is written as P² = a³ (with P in years and a in astronomical units), where P stands for the period of the orbit (the sidereal period) and a is the size of the semimajor axis of the ellipse.
- Kernel Filters - Kernel filters are image processing algorithms applied to a CCD image, generally to smooth or sharpen an image. A kernel is a small grid which tells the computer how to change the value of a certain pixel based on the values of neighboring pixels.
The most common types of kernel filters are low-pass ("smoothing" and "blurring" filters) and high-pass ("sharpening" filters). - Kinetic Energy - The energy associated with macroscopic motion. In nonrelativistic physics the kinetic energy is equal to one half the mass times the velocity squared, i.e., (1/2)mv^2. - Lab color mode - Lab mode describes a theoretical color space where color and brightness are split into three different channels: two for color and one for brightness. - LED - Light Emitting Diode - Semiconductor emitting light of a defined wavelength when current flows. Because of their high efficiency, robustness and fast reaction, LEDs have replaced bulbs for displays. - A member of a class of particles which do not participate in the strong interaction (the force that binds atomic nuclei together). The best-known lepton is the electron. Another example is the neutrino. - Any of several oscillations in the apparent aspect of the Moon as seen from Earth, which, when combined, enable Earth-based observers over a period of time to see altogether about 59 percent of the Moon's surface. Physical librations are angular motions about the center of mass due to gravitational torques on the Moon. Optical librations are the apparent rotations of the Moon, caused by our observing it from somewhat different directions at different times. - Electromagnetic radiation that may or may not be visible to the human eye. - Light Box - An internally illuminated container that uses indirectly placed light sources to evenly illuminate a translucent diffuser screen on one side of the container. The screen is sized to cover at least the entire aperture of a telescope so that accurate flat-field frames can be made. - Line Emission - An electromagnetic emission process which produces radiation at a number of specific frequencies, c.f. continuum emission. It is so-called because when the light is analyzed with a spectrograph, line emission shows up as bright lines crossing the spectrum. Line emission is generally produced by fluorescence, in which atoms or molecules are "excited" to a high energy state by absorbing ultraviolet radiation, and then "decay" to their "rest state" via a series of quantum jumps. In each quantum jump, a photon is emitted with a specific frequency which is determined by the quantum properties of the atom. The frequency of the photon when received on Earth is affected by the Doppler effect because the emitting atom is moving relative to us. Of course we actually receive the accumulated radiation from large numbers of atoms moving with different speeds, so emission lines have a finite width, which is usually specified in terms of the equivalent range of speeds. - Linear Circuit - A circuit whose output is an amplified version of its input or whose output is a predetermined variation of its input. See also OpAmp. - Linear Response - A CCD camera with a "linear response" has sensitivity such that doubling the exposure time of an object of a certain brightness will result in an image twice as bright. There is a linear relationship between exposure time and brightness. This is especially useful for making magnitude measurements of variable stars, comets, asteroids, or supernovae. A camera with a nonlinear response is not suitable for making magnitude estimates since a star which appears twice as bright in an image is not necessarily twice as bright in actuality. CCD cameras equipped with an anti-blooming feature are generally nonlinear.
For taking pretty pictures, linear response makes no difference. - Lithium ion battery - Rechargeable battery type with a high energy density, widely used in digital cameras and notebooks. - Long-focal-length lens (telephoto lens) - A lens that provides a narrow angle of view of a scene, including less of a scene than a lens of normal focal length and therefore magnifying objects in the image. - Images taken with a black and white CCD camera through red, green, and blue filters are combined to make RGB images. An interesting effect of human vision is that we get most of the spatial information about an image from the brightness, or luminance, portion of the image and not from the color, or hue, portion. This means it is possible to take a low resolution color image and combine it with a high resolution black and white image to make an LRGB image (the L standing for luminance). This is a definite advantage for CCD imaging; since placing a filter in front of a camera decreases its sensitivity, color images can be binned to gain higher sensitivity at the expense of resolution. As long as the low-resolution color image is combined with a high-res luminance image (taken unfiltered and thus at the camera's maximum sensitivity) the full resolution is maintained and the total exposure time is decreased. - Schottky-clamp TTL logic typically using one-third the power of TTL, but maintaining TTL speeds. - Lunar Eclipse - When the Earth's shadow covers the Moon. - Lumen (lm) - Represents luminous flux (cd·sr). - The intensity of a source of light. Synonymous with brightness. - Luminosity (see also Brightness) - When we describe Eta Carina as the brightest star in the Galaxy, we are talking about luminosity. This is a measure of how bright things are intrinsically. It can be measured in Watts, or Watts per Hertz to describe the luminosity at a specific frequency. For historical reasons in optical astronomy absolute magnitude is used, and radio astronomers quote the power, which is the luminosity per steradian (multiply by 4 pi to get back to luminosity). - Physics: Total amount of energy radiated per second. It has units of energy per second (e.g. ergs per second). Since many astronomical objects radiate away energy this is an important characteristic. We compare the luminosity of an object to the solar luminosity, the total energy given off per second by the Sun. One solar luminosity is 4 × 10^33 ergs per second. Luminosity has the same units as Power, e.g. energy per second. The Watt is the familiar unit of power. For comparison, a 400 Watt light bulb is 10^-24 solar luminosities. - LZW - Lempel-Ziv-Welch - A compression scheme used to reduce the size of image files (used e.g. in TIFF, PDF, GIF, and PostScript language file formats). - Mach Number - The ratio of the speed of a fluid flow (e.g. a jet) to the speed of sound, so that supersonic flows have Mach numbers greater than one. In jets there are two Mach numbers, depending on whether we use the speed of sound in the outside medium (the external Mach number) or the speed of sound in the jet fluid (the internal Mach number). - a) Ratio of size of optical image to size of the object. - b) Determined by dividing the focal length of the telescope by the focal length of the eyepiece and multiplying by any amplifiers (Barlows). - In the second century B.C., the Greek astronomer Hipparchus made a catalogue of stars, and divided them into five "magnitudes", with stars of the first magnitude the brightest, and the fifth magnitude the weakest. In the 19th century it became possible to measure accurately the relative brightness of stars.
Tragically, rather than starting anew with a sensible system, astronomers followed a proposal by Pogson which re-defined magnitudes in such a way that most of the traditional magnitudes of stars stayed roughly the same. The awful formula that resulted has become one of the initiation rituals of modern astronomy, and is therefore not described here. Suffice it to say that we are stuck with a logarithmic scale on which the numbers get smaller as stars get brighter, so that the brightest stars even have negative magnitudes. These are actually measures of flux density, known as apparent magnitude in this context. The equivalent measure of luminosity, absolute magnitude, is the apparent magnitude an object would have if seen from a distance of 10 parsecs. - Mak - Maksutov-Cassegrain - A telescope that uses mirrors and lenses to "fold" the light into a smaller tube. A Mak almost never needs collimation. - Fourth major planet out from the Sun. Its tiny satellites are locked in synchronous rotation with Mars. - The measure of how much "stuff" something has, mass determines the inertia of an object (its resistance to being accelerated by a force) and how much gravitational force it exerts on another object. In pre-Einsteinian physics mass was conserved, neither created nor destroyed. Einstein discovered that mass can be converted into energy and vice versa. The conservation of mass is still a good approximation since mass-energy conversions generally involve relatively small amounts of mass. The mass of astronomical objects is often measured in terms of the Sun's mass. The solar mass is 2 × 10^33 grams. - Maximum Entropy Deconvolution - This is an image processing routine. It is used to enhance detail in CCD images that are slightly blurred due to atmospheric effects, tracking errors, or imperfect optics. The routine works by attempting to match an ideal image (essentially a perfect star image) to the actual blurred image. The image is adjusted through a series of iterations to a closer approximation of an ideal image. - Literally the middle value in a sequence of values arranged in increasing size order. A useful mathematical estimator of the true value from a set of values when one of these values is contaminated, i.e. known to be much larger than the average. - Median Combine - When combining images there are two methods employed. Most often images are summed to get as much information as possible. However, noisy images can be smoothed to some extent by taking the median value of all the pixels in the images, i.e. the middle value when all values are ordered. - Medium [Plural: media.] - Generic name for the stuff that fills space, a very dilute gas composed mainly of hydrogen and helium. When the temperature exceeds a few thousand Kelvin, the gas is ionised and is therefore technically a plasma. Important examples of cosmic media are: Inter-stellar medium (ISM) The medium between the stars within a galaxy. In spirals the ISM is a complex soufflé of hot (million Kelvin) and cool (3 to 100 Kelvin) gas, where the cooler regions have higher density so that the pressure is roughly constant. In ellipticals the ISM consists almost entirely of "hot" gas (roughly ten million Kelvin) which extends seamlessly into a gaseous halo surrounding the galaxy. Intra-cluster medium (ICM) The hot gas (several tens of millions of Kelvin) filling groups and clusters of galaxies. In big clusters, the ICM can contain more material than all the galaxies put together. Inter-galactic medium (IGM).
The gas filling the regions between clusters of galaxies. We don't know whether there really is a proper IGM, or whether the gas just gets thinner and thinner as you go away from one cluster, until you reach the far outskirts of the next. - Innermost planet of the Solar System. Transits of the Sun occur either 7 or 13 years apart - last transit 1973 November 10. - Mercury Vapor Lamp - Discharge lamp, producing light of defined wavelengths by ionization of mercury atoms with accelerated electrons. - Messier Catalog - a) One of the earliest catalogues of nebulous-appearing astronomical objects, compiled in 1781 by the French astronomer Charles Messier. Messier's catalogue included many objects that were later realized to be galaxies. - b) List of the locations in the sky of more than 100 galaxies and nebulae, compiled by Charles Messier between 1760 and 1784. Some designations he originated are still used in identification; M1 is the Crab Nebula (in Taurus). - One-millionth (10^-6) of a meter. About 40 millionths of an inch. Synonymous with micron. Symbol is µm. - Electromagnetic waves in the GHz range. Related to radio waves or light. Used for mobile phones, RADAR, microwave ovens etc. - One-thousandth of an inch (10^-3 inch). Equal to 25.4 micrometers. - Milky Way, The - The common name for our own Galaxy (which is just Greek for Milky Way!). Astronomers usually just call it the Galaxy, with a capital G to distinguish it from other galaxies in the universe. The Milky Way is a large galaxy of spiral type, and our Sun is in the disk, roughly 25,000 light-years (8 kpc) from the center. The disk is visible as the faint band of milky light that circles the night sky, which is made up of myriads of distant stars. - The systematic study of shape or structure. Often used by astronomers as a pompous synonym for "structure". - Natural satellite of Earth. Studies of lunar rocks have shown that melting and separation must have begun at least 4.5 × 10^9 years ago, so the crust of the Moon was beginning to form a very short time after the solar system itself. It would have taken only 10^7 years to slow the Moon's rotation into its present lock with its orbital period. The Moon's orbit is always concave toward the Sun. - MOS - Metal Oxide Semiconductor (also Metal Oxide Silicon) - Semiconductor with an isolated control terminal. By collection of electric charge on the gate a conducting channel between the source and the drain contact is built. The MOS semiconductor is slower than a bipolar one but is easy to produce and therefore cheap. The field of highly integrated circuits is based on MOS. - The assembly that supports a telescope and allows for smooth, steady movement to track the stars. The most important part in astrophotography. - MPEG - Motion Pictures Expert Group - A digital video format developed by the Motion Pictures Expert Group. - The ability of an operating system to run several programs simultaneously and independently of one another. If a computer only has one CPU, this is achieved by distributing the computing power by a so-called scheduler, running each program for a small amount of time, calculated by special criteria. UNIX and, to some extent, Windows offer good multitasking. See multithreading. - NAND Gate (Not-AND gate) - An AND gate followed by an inverter. The output of the AND gate is inverted to the opposite value (a short code sketch follows at the end of this group of entries). - Nanometer (nm) - A unit of length commonly used for measuring the wavelength of light: 1 nm = 10 angstroms (Å) = 10^-9 meters; and 1000 nm = 1 micron (µ) = 10^-6 meters.
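A minimal sketch in Python (an illustration only, not part of the original glossary) of the NAND gate definition above: an AND gate followed by an inverter. The NOR and NOT gates defined later in this glossary follow the same pattern.

def and_gate(a, b):
    return a and b

def not_gate(a):
    return not a

def nand_gate(a, b):
    # An AND gate followed by an inverter, as in the definition above.
    return not_gate(and_gate(a, b))

for a in (False, True):
    for b in (False, True):
        print(a, b, "->", nand_gate(a, b))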
- ND - Neutral-density Filter - An optical filter that attenuates light by an amount independent of the wavelength within the useful range of the filter. Metal-coated filters typically have a wider neutral range than glass filters and are more - The interstellar medium manifests itself to the astronomer in various phenomena. Many of these are amorphous, diffuse structures known as nebulae. The name comes from the Latin word for "cloud"; the singular is nebula. A beautiful example of a nebula is an emission nebula. Emission occurs when a cloud of gas is warmed up by some source of continuum radiation, say from a nearby star. The various atomic energy levels are excited by this radiation, and as the electrons jump back down to their lower energy states they emit photons at distinct spectral wavelengths. Since hydrogen gas is the most common form of gas, hydrogen emission lines are most often observed. In particular, a specific transition in the hydrogen atom corresponds to red light and color pictures of these emission regions appear red. This is the so-called H-alpha emission line of hydrogen. - Eighth major planet out from the Sun. Discovered by following predictions calculated by Urbain Le Verrier. Similar predictions had been made earlier by John Couch Adams but were not followed up. - Neutral Filter - Neutral filters are semi-reflective glass plates. They are used to distribute the light path independent of wavelength. The incident light is partially reflected and partially transmitted. Neutral filters are usually placed at a 45° angle in the path of the beam. The ratings of a neutral filter are based on its reflectivity-to-transmissivity ratio. A neutral filter RT 30/70, for example, reflects 30% of the excitation light and transmits 70%. - Any of three species of very weakly-interacting lepton with an extremely small, possibly zero, mass. Electron neutrinos are generated in the interior of the Sun (and other stars) in nuclear reactions. Generally such neutrinos do not interact with matter and stream out through the Sun. A very few of these many neutrinos can be detected in sophisticated detectors here on Earth, giving us a "window" into the interior of the Sun. In 1987 neutrinos from a Supernova in the Large Magellanic Cloud were detected in terrestrial neutrino experiments. If neutrinos have a small but nonzero mass, they would constitute an important type of dark matter in the universe. - A charge-neutral particle (of the hadron type) which is one of the two particles that make up the nuclei of atoms. Neutrons are unstable outside the nucleus, but stable within it. The number of protons in the nucleus determines what element that nucleus is. Different isotopes of a given element have different numbers of neutrons in the nucleus. The total number of neutrons and protons affects properties such as radioactivity or stability, the types of nuclear reactions, if any, in which the isotope will participate, and so forth. - Neutron Star - A dead "star" supported by neutron degeneracy pressure. A neutron star is the core remnant left over after a supernova explosion. - NGC - New General Catalog - An object assigned a number in the New General Catalog of non-stellar objects. - NIR - Near infrared - The region of the electromagnetic spectrum ranging in wavelength from approximately 750 to 2500 nanometers. - Nickel cadmium battery. - Nickel metal hydride battery. Ecologically safe and very efficient. - 1.
An undesirable electrical signal from an external source such as an AC power line, motors, generators, transformers, fluorescent lights, CRT displays, computers, radio transmitters, and others. 2. Pixels on an image which are caused by the electron movement in the sensor. The amount of movement depends on the temperature of the semiconductor and will increase with higher temperatures. 3. The degree of randomness or unpredictability in a signal. Noise components are a primary determinant of the quality of an image. - NOR Gate (Not-OR) - An OR gate followed by an inverter. The output of the OR gate is inverted to the opposite value. - Normal incidence - An angle of incidence of zero degrees. - NOT Gate - The output is just the opposite from the single input. - Slight but recurrent oscillation of the axis of the Earth, caused by the Moon's minutely greater gravitational effect on the Earth's equatorial "bulge". - A star that experiences an abrupt increase in brightness by a factor of a million (in contrast to the much brighter supernova). A nova is produced in a semidetached binary system where hydrogen-rich matter is being transferred onto a white dwarf. As more and more hydrogen builds up on the surface the temperature rises. The material is degenerate, so when the temperature becomes high enough for nuclear burning to take place it does so explosively, producing a nova. - Numerical Aperture (NA) - A design function of a lens system that relates the refractive index of the lens material and the cone angle of the lens such that the NA is equal to the refractive index of the lens times the sine of the cone angle of the lens (NA = n sin θ). - Occam's Razor Principle - Any hypothesis should be shorn of all unnecessary assumptions; if two hypotheses fit the observations equally well, the one that makes the fewest assumptions should be chosen. - The cutoff of the light from a celestial body caused by its passage behind another object. - OP Amp - OPerational amplifier - A general-purpose IC used as a basic building block for implementation of linear functions. An operational amplifier has an inverting and a non-inverting input. The output voltage is the strongly amplified difference between the inputs. By external circuitry nearly any function may be implemented. - Open Cluster - A small, loose cluster of stars that typically contains several hundred members. The best examples are the Hyades and the Pleiades, both in the constellation Taurus. Open clusters line the Galactic plane, in contrast with globular clusters, which are members of the Galaxy's halo or thick disk. - Optical Center - Point in a lens through which light rays pass without refraction. - Optical Density (OD) - Property of an optical material that determines the speed with which light rays travel through it. - A logarithmic unit of transmission: OD = -log (T), where T is the transmission (0 ≤ T ≤ 1). - Solid state devices capable of emitting a specific wavelength, sensing a specific wavelength of energy, or interacting together in a common package to perform an electronic function. - OR Gate - The output is yes if at least one input is yes. - Eyepieces with common focal planes so that they are interchangeable without refocusing. - Generally speaking, parallax is the apparent shift in the direction to an object as seen from two different locations. This shift can be used to determine distances (through "triangulation"). Stellar parallax occurs as the Earth orbits the Sun and our line of sight to a nearby star varies.
The effect is to make the star appear to shift position over the course of the year. In reality, stellar distances are so great that parallax shifts are less than an arcsecond, completely unobservable to the unaided eye. - Parallel port - A port on the computer where 8 data lines work in parallel (therefore the name) and that is faster than a serial port but slower than SCSI or IEEE 1394 ports. Often used by printers and flash card readers. - PCB - Printed Circuit Board - A substrate on which a predetermined pattern of printed wiring and printed elements has been formed. Also called a printed wiring board (PWB). - PCI - Peripheral Component Interconnect - A standard for a local bus in a PC. - PCMCIA - Personal Computer Memory Card International Association - A standards association which gives its name to the credit-card-sized devices used in notebooks and handheld PCs for memory storage, modems and other peripherals. - Experiments have shown that light of a given energy (frequency) is not something that can be broken up indefinitely. Rather, for a given frequency it comes in discrete bundles with energy hf, where h is Planck's constant and f is the frequency. These discrete bundles of light are known as photons. It is often useful to think of light as a bunch of particle-like photons. Other times it is useful to think of light as a wave. Astronomers do both as needed. The idea of the photon was first put forward by Albert Einstein in his theory of the photoelectric effect. - A small area on the surface of an image sensor that captures the brightness for a single pixel in the image. There is one photosite for every pixel in the image. - An effect seen when you enlarge a digital image too much and the individual pixels become visible as blocks. - Short for picture element. Each cell that constitutes an intersection of the grid of rows and columns on a CCD array is a pixel. A pixel is a single measurable entity for charge storage, release, and ADU conversion on a CCD. - Pixel Value - A measurement (usually in terms of ADUs or DNs) of the electronic charge obtained in a pixel during a CCD integration. After image processing, a measurement of the processed brightness of a pixel. - Plan Objectives (Flat Field Objectives) - Describes a correction class for objectives. The image curvature aberration is corrected for objectives of this type. Correcting this error requires lenses with stronger concave surfaces and thicker middles. Three types of plan objectives, planachromate, planapochromate and plan-fluorite, are based on the type of additional correction for chromatic aberration. - Planck Constant - A fundamental constant of physics, h, which sets the scale of quantum-mechanical effects. h = 6.625 × 10^-27 erg-sec. - Sometimes called the fourth state of matter (after solid, liquid and gas), a plasma is a gas in which some of the atoms or molecules are ionised, i.e. their electrons have been stripped off, and are floating around freely. Strictly speaking, almost all gas in space is a plasma, although only a tiny fraction of the atoms are ionised when the temperature is below about 1000 Kelvin. The gas in and around DRAGNs is much hotter than this, and so is a fully ionised plasma. The very low densities in space allow the electrons to travel without much obstruction, so paradoxically space is an almost perfect electrical conductor. Although charges can move around freely, averaged over even small volumes (say a million km across) cosmic plasmas are always neutral (i.e.
they contain equal numbers of positive and negative charges), because the electrical forces which arise when charges are noticeably separated are enormously strong. Plasmas in space are permeated by magnetic fields. A good way to think about cosmic magnetic fields is in terms of field lines. These behave like rubber bands embedded in the plasma, so that as the plasma flows the field lines are pulled and stretched along with it. When they are stretched enough they can pull back on the plasma. Individual electrons and ions in the plasma feel a magnetic force which makes them travel in a helical path around the field lines, so that they can only travel long distances in the direction along the field. This binds the plasma together so that it behaves like a continuous medium even when (as in the plasma in DRAGNs) the individual electrons and ions almost never collide. - A type of eyepiece. Very common nowadays. Good field of view and easy to use. - The most distant known planet from the Sun. Its orbit has the highest eccentricity and highest inclination to the ecliptic of any planet and some astronomers suggest that it may be an escaped satellite of Neptune. In the mid-1970s Pluto crossed Neptune's orbit on its way in, and for the rest of this century Pluto will be closer to the Sun than Neptune (Pluto and Neptune, however, are never less than 2.6 AU apart). Its mass and radius have not been determined with any great certainty. - PNG - Portable Network Graphics - A compressed but lossless image format which is free to use and supported by most web browsers - can fully substitute GIF and at the cost of size also - An electrical connection on the computer into which a cable can be plugged so the computer can communicate with another device such as a printer or other peripheral. Also the "door to the outside" of a computer: e.g. an 8-bit port, owning a special address, provides 8 digital lines in parallel for data interchange. - Position Angle (PA) - The direction of an "arrow" on the sky, for instance the long axis of a DRAGN, is given by its Position Angle (PA), which is zero if the direction is due north and increases as the arrow rotates to the east, i.e. to the left if north is at the top of the image, since you are looking up. - Primary Mirror - The first mirror encountered by incident light in a telescope system. - Prime Focus (PF) - a) The focal point of the large primary reflecting mirror in astronomical telescopes when the light source is extremely distant. This focus actually falls at a point just within the upper structure of the telescope itself and is therefore accessible to CCD cameras and other instruments; it provides a large field of view. - b) Attaching a camera to a telescope to make it a telephoto lens is sometimes called PF. - c) Placing the photosensor in the first focus point of the telescope. - Not all telescopes can be set up this way. - Processor or CPU (central processing unit) - The heart of the computer. The CPU is the device executing the commands of a program and calculating results. In detail, we have to distinguish between the CPU for handling integer arithmetic and the FPU (floating point unit) for non-integer mathematics. - A particle of the hadron family which is one of the two particles that make up atomic nuclei. The proton has a positive electrical charge. - In contrast to truecolor, where every point of a picture has an individual color, a pseudocolor picture has an index to a color table for every point. The length of this table is limited (e.g.
to 256 entries), so that only a finite number of colors is available to a picture. During translation of a truecolor picture into pseudocolor, the "most important" colors have to be calculated and put in the table. - PSF - Point Spread Function - The visible and measurable smear of photons from a point source. Optical diffraction and atmospheric distortion are primary factors causing light from point sources such as distant stars to be smeared into a PSF much larger than the geometrically calculable size of the point source. - A rotating magnetized neutron star that produces regular pulses of radiation when observed from a distance. A pulse is produced every time the rotation brings the magnetic pole region of the neutron star into view. In this way the pulsar acts much as a lighthouse does, sweeping a beam of radiation through space. - QCUIAG - QuickCam and Unconventional Imaging Astronomy Group - An international group of astronomers producing astronomical images with unconventional electronic imaging devices such as modified webcams, video surveillance cameras and digital cameras. - A process where the continuous range of values of an input signal is divided into non-overlapping sub-ranges and, to each sub-range, a discrete value of the output is uniquely assigned. - Quantum Efficiency - The ratio of effectiveness with which a CCD converts received photons (quanta of light) to measurable electrons. Amateur CCDs operate from about 15% up to about 75%, depending on the wavelength of light. High-end CCDs built for professional use frequently run 80% or higher. - From quasi-stellar object, a star-like (i.e. unresolved) object that has a very large luminosity and is located at very large distances from us (as indicated by their high cosmological redshifts). Although technically the term quasar refers to objects that are highly luminous in the radio band, the term tends to be used for both radio-loud and radio-quiet high-redshift unresolved objects. Quasars are believed to be powered by supermassive black holes in the centers of galaxies in the process of formation early in the history of the universe. - A unit of angle equal to about 57 degrees. The length along the arc of a circle covered by one radian is equal to the radius of the circle. The complete angle around the circle (360 degrees) is equal to 2 pi radians. The radian is particularly useful because if you know the distance to some object and you measure its apparent size as the angle it subtends in your field of view in radians, then the actual size is just that number of radians times the distance to the object. For example, a meter stick held up at a distance of 100 meters makes an apparent angle in your field of view of 0.01 radians. - The emission of particles or energy. Also the particle or energy so emitted. - Radio Galaxy - A galaxy that is emitting most of its energy in the form of radio waves rather than light in or near the visible bands where stars emit most of their radiation. This means that radio galaxies are dominated by some non-stellar process. - RAM - Random Access Memory - Stores digital information temporarily and can be changed as required. It constitutes the basic (read/write) storage element in a computer. - Readout Noise - ADU randomness introduced by the collection, amplification, and analog-to-digital conversion of signal data. - Readout Register - The part of a CCD image sensor that reads the charges built up during an exposure. - When free electrons are captured by ions to form atoms.
The occurrence in the early universe when the temperature became sufficiently low that free electrons could no longer overcome the electrostatic attraction of the nuclei, and were captured to form atoms. - Recycle Time - The time it takes to process and store a captured image. - Red Dwarf - A small, dim, low-mass Main Sequence star. Red dwarf stars are hard to detect because they are so dim. In principle, they could constitute a major mass constituent of the universe, if their production is heavily favored in the star formation process. In that case they could constitute a significant source of dark matter. - Red Giant - A star with low surface temperature (thus red) and large size (giant). These stars are found on the upper-right hand side of the HR diagram (high luminosity, low temperature). The red giant phase in a star's life occurs after it has left the main sequence. The Sun will become a red giant in about 5 billion years. - A redshift is a shift in the frequency of a photon toward lower energy, or longer wavelength. The redshift is defined as the change in the wavelength of the light divided by the rest wavelength of the light: z = (observed wavelength - rest wavelength)/(rest wavelength). Note that positive values of z correspond to increased wavelengths (redshifts). Different types of redshifts have different causes. The Doppler Redshift results from the relative motion of the light emitting object and the observer. If the source of light is moving away from you then the wavelength of the light is stretched out, i.e., the light is shifted towards the red. These effects, individually called the blueshift and the redshift, are together known as Doppler shifts. The shift in the wavelength is given by a simple formula, (observed wavelength - rest wavelength)/(rest wavelength) = v/c, so long as the velocity v is much less than the speed of light. A relativistic Doppler formula is required when the velocity is comparable to the speed of light. The Cosmological Redshift is a redshift caused by the expansion of space. The wavelength of light increases as it traverses the expanding universe between its point of emission and its point of detection by the same amount that space has expanded during the crossing time. The Gravitational Redshift is a shift in the frequency of a photon to lower energy as it climbs out of a gravitational field. - Reflecting Telescope (Reflector) - Telescope that uses mirrors to magnify and focus an image onto an eyepiece. - Reflection short pass Filter - Interference filters that transmit short-wave light while reflecting long-wave light. An optical reflection short pass filter is characterized by the reading of the wavelength edge at which the filter changes from transmission to reflection (50% threshold). - Reflection long pass Filter - Interference filters that reflect short-wave light but are transparent for long-wave light. An optical reflection long pass filter is characterized by the reading of the wavelength edge at which the filter changes from reflection to transmission (50% threshold). - Refracting Telescope (Refractor) - Telescope that uses lenses to magnify and focus an image onto an eyepiece. - The deflection from a straight path undergone by a light ray or energy wave in passing obliquely from one medium to another. The refraction of a light ray or energy wave is governed by Snell's Law. - Refractive Index - The factor by which the light velocity in an optical medium is less than in a vacuum.
- The index of refraction is the ratio of the constant velocity of light in vacuum (c) to the variable velocity of light in a transparent medium (v): n = c / v. - This property is also known as optical density and expressed as nD20. The number 20 represents the Celsius temperature of the sample while D represents the monochromatic D line of the sodium spectrum (wavelength: 589.3 nm). - The consequence of this differing velocity in different materials is a varying amount of bending of light in different substances. Snell's law states the well-known relationship between the angle of bending and the refractive index: n1 sin(θ1) = n2 sin(θ2). The refractive index of a substance changes if the temperature changes or if the color of the light used changes. Refractive index measurements can be used to determine the concentration of a solution or ascertain purity and identify a substance. The nD value decreases as the temperature rises. To compare experimental results with those listed in standard tables, set at 20 degrees Celsius, the correction is as follows: nD20 = nDT + (T - 20)(0.00045), where the temperature T is in degrees Celsius (a short code sketch of these formulas follows at the end of this group of entries). - The rules relating observations in one inertial frame of reference to the observations of the same phenomenon in another inertial frame of reference. Casually applied only to the Einsteinian Special Theory of Relativity, but a more general term. - The ability of a device to perform within the desired range over a measured period of time. - Increase or decrease of the number of pixels in one or both axes of an image using a method that correlates adjacent pixel values to create a smooth pixel-to-pixel brightness transition. - The difficulty in moving electrical current through a conductor to which voltage is applied. Expressed in ohms. - A discrete device which implements a resistance for electronic circuits. Available in different types, shapes, and makes. Higher values are expressed in kilo- or mega-ohms. A resistor marked 1k5 means 1,500 ohms or 1.5 kOhm. Resistors may also wear colored rings which indicate value and tolerance. - All images are blurred to some extent; otherwise they would contain infinitely fine detail and hence an infinite amount of information. The amount of blurring is technically called the resolution of the image, with high resolution meaning little blurring. 1. The smallest value of input (or output) signal, other than zero, that can be measured (or sourced) and displayed. Also called sensitivity or minimum resolvable quantity. 2. An indication of the sharpness of images on a printout or the display screen. It is based on the number and density of the pixels used. The more pixels used in an image, the more detail can be seen and the higher the resolution. 3. Quantified by defining the Point spread function, which is the pattern produced by the spread-out light from a single point. The maximum resolution available from a telescope is set by the phenomenon of diffraction, and gives a FWHM approximately equal to the ratio of wavelength to the diameter of the telescope. - A process that enlarges an image by adding extra pixels without actually capturing light from those pixels in the initial exposure. - The true resolution of an image based on the number of photosites on the surface of the image sensor. - A system of cross-hairs in the eyepiece of a telescope.
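A minimal sketch in Python (an illustration only; the function names are ours, not the glossary's) of the two formulas quoted in the refractive index entry above: Snell's law and the temperature correction of a reading to the 20 degree Celsius value.

import math

def snell_refracted_angle(n1, n2, theta1_deg):
    """Angle of refraction (degrees) when light passes from medium 1 into medium 2."""
    return math.degrees(math.asin(n1 * math.sin(math.radians(theta1_deg)) / n2))

def nD20(nDT, temperature_c):
    """Correct a refractive index reading taken at temperature_c to the 20 degree C value."""
    return nDT + (temperature_c - 20.0) * 0.00045

print(snell_refracted_angle(1.0, 1.5, 30.0))   # air into glass: about 19.5 degrees
print(nD20(1.3320, 25.0))                      # a reading taken at 25 degrees C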
RGB - Red, Green, Blue - The color system used in most digital cameras where red, green, and blue light is captured separately and then combined to create a full color image. Named the additive colors of the human visual spectrum, since red + green + blue = white. - Richardson-Lucy Method (RL) - Image reconstruction algorithm. - ROM - Read Only Memory - Permanently stores information that is used repeatedly, such as tables of data, characters of electronic displays, etc. Unlike RAM, ROM cannot be altered. - Sample Rate - The rate at which a continuous-time (analog) signal is sampled. It is frequently expressed as samples/sec (S/s), kilosamples/sec (kS/s), or megasamples/sec (MS/s). - The taking of readings from a single data source. In basic CCD imaging theory, for delivered optical system information not to be lost, the resolution of imaging CCD pixels must be at least twice as precise as the delivered resolution of the optical system (e.g., for appropriate sampling, a telescope delivering a PSF with an FWHM of 2 arcseconds calls for a CCD with pixels having a one-arcsecond FOV). Less than this level of precision is termed undersampling. Significantly more is termed oversampling. - Sixth major planet out from the Sun. The most spectacular planet of the Solar System, it is circled by a series of concentric rings. All the satellites of Saturn are locked in synchronous rotation. - Reaching or exceeding the full-well capacity of a pixel. Also, in color theory, the degree of purity of a hue in terms of mixture with white light. - Schottky Diode - A junction or barrier formed by the direct contact of semiconductor materials with a metal. This type of contact rectifies signals and may also exhibit some resistance. - Schottky TTL - A form of TTL logic in which Schottky diodes are used to clamp the transistors out of saturation, effectively eliminating the storage of charge within the transistor - allowing increased switching speeds. - Schwarzschild Radius - The radius of the event horizon surrounding a nonrotating black hole. Its size is given by Rs = 2GM / c^2. For a one solar mass star this is about 3 kilometers. - SCSI port - Small Computer System Interface - A port that's faster than the serial and parallel ports, but harder to configure than the newer USB port. Mostly used for fast disk connections. - SCT - Schmidt-Cassegrain Telescope - A telescope that uses mirrors and lenses to "fold" the light into a smaller tube. - A telescope with a spherical primary mirror and a thin refractive corrector plate with a complex, non-spherical shape. - An SCT occasionally needs collimation. - Secondary Mirror - The second reflecting surface encountered by the light in a telescope. The secondary is usually suspended in the beam and therefore obstructs part of the incoming light. - Self-Guiding is a feature of certain CCD cameras. In these cameras there are two CCD chips: one for imaging, and a smaller one for guiding the telescope. During long exposures, errors in a telescope's drive prevent perfect star images. Guiding eliminates this problem, but for most CCD cameras a second separate CCD must be used to guide. Self-Guiding CCDs simply use their built-in guiding chips to eliminate the need for a second camera. Also, a guidescope or off-axis guider does not need to be used, avoiding some of the other problems inherent in guiding a telescope using these methods. - An element such as silicon or germanium or a compound like GaAs that has an intermediate band gap.
Unlike metals that freely conduct and insulators that do not conduct charge, semiconductors selectively conduct charge through the movement of holes and electrons. - Serial Cable - If it applies to your cam - this is the cable that connects your camera to your PC and allows you to export the images you have taken. Nowadays mostly replaced with a USB connection. - Serial port - A port on the computer (RS-232) where the wires carry the information in a serial manner (speed is usually low, but it is a kind of lowest common denominator in computer communication). Many digital cameras came equipped with a cable to download images through this port, but it's slow! Both parallel and USB ports are faster connections. - Sharpening is a software routine used to enhance detail in a CCD image. There are numerous methods employed for sharpening images--there are even whole programs dedicated only to sharpening. Some are simple sharpening routines, while others involve more complex algorithms (such as the unsharp mask technique), and some employ iterative routines to produce successively sharper images while attempting to minimize the noise inherently generated in sharpening (the Lucy-Richardson and similar techniques). - A metal enclosure for the circuit being measured or a metal sleeve surrounding wire conductors (coax or triax cable) to lessen interference, interaction, or current leakage. The shield is usually grounded. - The device in the camera that opens and closes to let light from the scene strike the image sensor and expose the image. - Shutter Speed - The length of time the (mechanical) shutter is open and light strikes the image sensor. It also applies to systems with an electronic shutter, but here the control is just a digital signal. - Sidereal Year - A period of time based on the revolution of the Earth around the Sun, where a year is defined as the mean period of revolution with respect to the background stars. - The measured value recorded by a pixel during a CCD integration. The signal in a CCD image usually comprises input from sky, thermal, and electronic sources. The natural degree of randomness in the pixel value is the noise component of the signal. - Of a scientific hypothesis, the principle that a proposed explanation must not be unnecessarily complicated. - In classical general relativity, a location at which physical quantities such as density become infinite. Another definition is a point in spacetime where timelike worldlines end (or begin). Singularities can be initial singularities (such as the big bang itself) or ending singularities, such as at the center of a black hole, or the big crunch. - SN-Ratio - Signal to Noise Ratio (SNR or S/N) - The ratio of the maximum signal that can be measured to the level detected with no signal present (noise level). - SLR - Single-Lens Reflex - Cameras that have a mirror that allows the optical viewfinder to look through the main lens. With some digital cameras this means that the LCD panel cannot be used for previewing. This kind of camera offers professional quality, but at a price far above that of simpler models. There are many attachments that increase the versatility of the digital camera. Taking off the camera lens and getting an adapter to attach some telescopes right to the camera makes it a giant lens. This technique is called prime focus photography (see Prime Focus). - The second memory card format used in digital cameras. Holds between 2 MB and 128 MB of picture data. (Third is Sony's MemoryStick.) - SMD - Surface Mounted Device - The modern packaging system for integrated circuits.
In contrast to DIL packages, no drill holes are necessary for mounting, and the pin distance is much smaller (typically 1.27 mm (1/20") down to 0.8 mm). - General term for things you cannot hold in your hands, normally programs or data. A subdivision is firmware, which is built into a device and, in contrast to other software, normally cannot be changed. - Solar Eclipse - An eclipse in which the Earth passes through the shadow cast by the Moon. It may be total (observer in the Moon's umbra), partial (observer in the Moon's penumbra), or annular. - Solar Filter - A special filter added to the FRONT of the telescope so you can safely view the Sun. It cuts the amount of light by a factor of 100,000 (ND5). Small solar filters added to the eyepiece are dangerous and should be thrown away. - Solar Flare - Sudden and short-lived brightening of a region of the solar chromosphere. - Solar Mass, Luminosity - The Sun is a fairly average star, and its mass and luminosity serve as good standards for comparison with other objects. The luminosity and mass of stars and galaxies is generally given in terms of the solar mass and solar luminosity. The solar mass is 2 × 10^33 grams. One solar luminosity is 4 × 10^33 ergs per second. Luminosity has the same units as Power, e.g. energy per second. The Watt is the familiar unit of power. For comparison, a 400 Watt light bulb is 10^-24 solar luminosities. - Special Relativity - Einstein's theory of motion. The theory is "special" because it describes the special case where gravity can be neglected. For motions much slower than the speed of light, represented by the symbol c, special relativity gives answers that are almost identical to those provided by "classical mechanics", i.e. the theory of Newton (which is much simpler to use). When objects move at speeds near c, classical mechanics gives the wrong answers and we have to use special relativity. The most important result is that it takes an infinite amount of energy to accelerate anything with finite mass to c, and therefore c acts as a universal speed limit. When the special features of Einstein's theory come into play, we describe motion (and the moving objects) as relativistic. For our purposes, motion faster than about 0.1c counts as mildly relativistic. When the motion is highly relativistic, it is not convenient to talk about speed because everything is moving at almost exactly c. Instead it is useful to use the Lorentz factor, defined as γ = 1 / √(1 - v^2/c^2).
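A minimal sketch in Python (an illustration only, not part of the original glossary) of the Lorentz factor given above; it grows without bound as the speed approaches c, which is why c acts as a universal speed limit.

import math

def lorentz_factor(v_over_c):
    """Lorentz factor gamma = 1 / sqrt(1 - (v/c)^2) for a speed given as a fraction of c."""
    return 1.0 / math.sqrt(1.0 - v_over_c ** 2)

for beta in (0.1, 0.5, 0.9, 0.99):
    print("v =", beta, "c  ->  gamma =", round(lorentz_factor(beta), 3))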
- Adding or averaging multiple CCD integrations of the same FOV to emulate the improved SNR of a single longer - Standard Candle - Any astronomical object of known luminosity that can thus be used to obtain a distance. Cepheid variables, Main sequence stars, and type I supernovae have all been used as standard candles. - A self-luminous object held together by its own self-gravity. Often refers to those objects which generate energy from nuclear reactions occurring at their cores, but may also be applied to stellar remnants such as - Star Alignment - Telescope mounts that have computers attached to them can be aligned with a couple known stars. The computer finds a pointing/position factor that it can apply to any of the objects in its database to calculate where the telescope needs to point to find an object. Also called - Step-Up-switching regulator - Switching regulator, whose output voltage exceeds its input voltage. Necessary in many battery powered systems, often found in solar power - 1. An aperture setting that indicates the size of the lens opening. 2. A change in exposure by a factor of two. Changing the aperture from one setting to the next doubles or halves the amount of light reaching the image sensor. Changing the shutter speed from one setting to the next does the same thing. Either changes the exposure one stop. - Stop down - To decrease the size of the lens aperture. The opposite of open up. - Applying a mathematical transfer function to the pixel values in an array so that they are changed to better portray the image in terms of relative brightness levels. Often used synonymously with scaling. - Subexposure / Subintegration - One of a set of stacked CCD integrations. - Star that Earth orbits. Central body of solar system. It takes about 1-10 million years for photons to diffuse from the Sun's interior to its surface. About 3% of the energy radiated is in the form of neutrinos. Every second about 655 million tons of H are being converted into 650 million tons of He. A grazing light ray is deflected 1".7 by the Sun. If the total angular momentum of the solar system were concentrated in the Sun, its equatorial rotation speed would be about 100 km - The explosion of a star. Supernovae come in two types: - Type I is caused by sudden nuclear burning in a white dwarf star. - Type II is caused by the collapse of the core of a supermassive star at the end of its nuclear-burning life. In either case, the star is destroyed and the light given off in its explosion briefly rivals the total light given off by a whole galaxy. A Supernova Remnant is the material blown off during a supernova, now seen as a great glowing cloud expanding into space. - Switching Regulators - regulate a value not by variation of a controllable resistor or similar methods with energy loss, but switch a source periodically on and off. The generated ripple is filtered by high quality inductors and capacitors, therefore the efficiency is very high. The regulation is similar to the pulse width modulation (PWM). - The property under which some quantity does not change when certain attributes, such as spatial location, time, rotation, and so forth, vary. - In hardware, it is an event that occurs in a fixed time relationship to another event. In software, it refers to a function that begins an operation and returns to the calling program only when the operation is complete. - System Gain - The number of electrons represented by each ADU. Synonymous with conversion factor. - 1. 
The name given to a speculative particle that follows a space-like trajectory through spacetime. That is, the particle must travel faster than the speed of light. 2. A gluon that has not yet dried. - Reduction of the effective focal ratio of an optical system via the introduction of a positive lens system into the optical path between the basic optical elements and the focal plane. - A measure of the average energy of random motion of the constituents (e.g., molecules, atoms, photons) of a system. - In scientific usage, a hypothesis or related group of hypotheses which have become well established by the fundamental criteria of the scientific method. - Thermal Radiation - Radiation emitted by any object with a temperature greater than absolute zero. A thermal spectrum occurs because some of the heat energy of the object is converted into photons. In general, a thermal spectrum depends not only upon the temperature, but also upon the composition of the object, its shape, its heat capacity, and so forth. - Compare blackbody radiation which is thermal radiation from an ideal emitter. - The theory of heat and temperature, encompassing both the macroscopic behavior of bodies, and the statistical description of the submicroscopic world of particle motions. Laws of Thermodynamics: First Law: A statement of the conservation of energy: The change of the energy of a system is equal to the heat added plus the work done on the system. Second Law: The Entropy of an isolated system can only increase with time. Also can be stated: heat never flows spontaneously from a cooler to a hotter body. (Perpetual motion is impossible.) Third Law: Absolute Zero can never be attained, only approached arbitrarily closely. - Thought Experiments - Experiments which could be performed in principle, but might be very difficult in practice, and whose outcome can be predicted by pure logic. Often used to develop the consequences of a theory, so that more practical phenomena can be predicted and put to actual experimental tests. - TIFF - Tagged Image File Format - A popular lossless image format used in digital photography. TIFF images are stored with a file extension name of TIF. - Time-lapse photography - Taking a series of pictures at preset intervals to show such things as flower blossoms opening. - This ability allows a telescope mount to follow the apparent motion of the stars. Actually it is counter acting the rotational motion of the Earth. This feature is very useful for extended and high power viewing. It is required for astrophotos longer than a few seconds. - A semiconductor device in which a small control signal is used to control a larger current flow. Major families are bipolar transistors and - Contrary to pseudocolor. With truecolor, every point of a picture has an - TTL - Transistor-Transistor-Logic - A popular logic circuit family that uses multiple-emitter transistors. A low signal state is defined as a signal 0.8V and below. A high signal state is defined as a signal +2.0V and above. - TTL - Thru-The-Lens - A camera design that let's you compose an image while looking at the scene through the lens that will take the picture. - Uncertainty Principle - The principle of quantum mechanics which states that the values of both members of a certain pairs of variables, such as position and momentum or energy and time interval, cannot be determined simultaneously to arbitrary precision. For example, the more precisely the momentum of a particle is measured, the less determined is its position. 
The uncertainty in the values of energy and time interval permits the quantum creation of virtual particles from the vacuum. - Uniform Motion - Motion in which no forces are acting: unaccelerated motion. Motion at a constant velocity. The state of rest is a special case of uniform motion. - Exposing the film / image sensor to less light than is needed to render the scene as the eye sees it. Results in a too dark image. - An analog signal range that is always positive (through zero). - Units of Angle - You can think of an astronomical image as a map of the radiation received from some small region of the sky. The coordinates of the map therefore correspond to angles on the celestial sphere. This means that the directly-measurable "sizes" and "areas" of astronomical objects are actually angles and solid angles; to convert to real lengths we have to find the distance, which is usually not known very accurately. Astronomers still use the units of angles invented by the ancient Babylonians, in which the 360 degrees of the circle are each divided into 60 minutes of arc, or arcmin, and each minute of arc is divided into 60 seconds of arc, or arcsec. Since the Babylonians stopped here, fractions of an arcsec are quoted using decimal notation, and even S.I. prefixes, hence milliarcsec, microarcsec. Radians rarely makes an appearance in astronomy. However, solid angles are often quoted in steradians; square arcsec is a popular alternative. - Units of Length - Since meters are such a pathetically small unit of length compared to the distances between stars and galaxies, astronomers have invented other units of length: 9.46×1015 meters. - The distance light travels in a year. The preferred unit of astronomers, apparently adopted in the late 19th century because light-years were becoming too widely used by the general public! Roughly the distance to the nearest star. 1 pc = 3.26 light years = 3.086×1016 meters. A thousand parsecs. A useful unit when describing galaxies, which can be anything from a few kpc to 100 kpc in size. A million parsecs. Typically used to give distances to galaxies. - That which contains and subsumes all the laws of nature, and everything subject to those laws; the sum of all that exists physically, including matter, energy, physical laws, space, and time. Also, a cosmological model of the universe. - Unguided Exposure - There are small tracking errors inherent in every telescope system. To avoid these errors ruining the exposure there are two methods that are used. One is to guide the telescope using either a second CCD camera or by using a Self-Guiding CCD. The other way to do it is to take an unguided exposure, short enough to keep the tracking errors from being seen. It is possible to combine multiple unguided exposures to create the effect of a single guided exposure. Often high-quality telescope mounts give tracking accuracy specifications in terms of how long an unguided exposure they are capable of taking. - Unsharp Mask (USM) - Unsharp mask is an image-processing technique used to sharpen an image and enhance detail. True unsharp masking is done by subtracting a blurred version of an image from the original. Unsharp mask routines in software simulate this effect and make it easier to have control over the final image. - Seventh planet from the Sun. Has retrograde rotation. - USB port - Universal Serial Bus - A high-speed serial port which connects peripherals, different standards exists for slower and faster communication. 
- Vector - A mathematical entity which has direction as well as magnitude. Important physical quantities represented by vectors include velocity, acceleration, and force. A vector changes whenever either its direction or its magnitude changes.
- Velocity - The rate of change of position with time, or, more precisely, the first derivative of position with respect to time, dx/dt. Velocity is a vector quantity, incorporating both the speed of motion (the amplitude) along with the direction of motion.
- Venus - Second planet from the Sun. Has retrograde rotation. Mariner 10 established that the cloud tops rotate every 4 days, retrograde. Radar experiments have established that the surface is somewhat smoother than the Moon, but there are mountains and there is extensive cratering. The last transit of the Sun was in 1882; the next one will be in 2004. Venus's rotation period is in approximate synchronism with Earth - that is, at inferior conjunction nearly the same side is always toward the Earth.
- VGA - Video Graphics Array - 1. IBM's implementation of 'high resolution color graphics' on early PCs. 2. Nowadays used to indicate a resolution of 640 x 480.
- V-Drive chip - The V-drive (or vertical driver) chip is used to convert the TTL-level vertical timing signals (V1, V2, V3, V4) to the bi-level or tri-level signals that are required by the CCD. Most V-drive chips also level-shift the Vsub (or substrate drive) signal to the appropriate level that is required for normal substrate biasing, as well as to a higher level for electronic shuttering.
- Vignetting - Restriction of an FOV such that all parts of a desired imaging surface at the focal plane (such as a CCD chip, a piece of film, or the eye's pupil) are not illuminated by the entire primary optical element. Usually caused by an undersized or improperly placed secondary, or by narrow apertures in the focuser/camera assembly.
- Voxel - An acronym based on the words volume and pixel. A voxel represents the smallest, indivisible volume element in a three-dimensional system. In this documentation, both the volume elements of the specimen as well as the 3D picture points are referred to as voxels.
- Wave - A propagating disturbance which transmits energy from one point to another without physically transporting the oscillating quantity. A wave is characterized by a wavelength and a frequency.
- Wavelength - A wave is an oscillation in space, with peaks and troughs. The distance from one peak to the next is the wavelength. It is measured in units of distance. The wavelength of ocean water waves will be on the order of meters. The wavelengths of visible light correspond to hundreds of nanometers (billionths of a meter). Wavelength is an important way to characterize a wave. For light, the shorter the wavelength, the higher the energy of the light wave.
- Wedge - An equatorial wedge is used for imaging with a fork-mounted telescope. Most computerized telescopes have alt-azimuth fork mounts where the forks point straight up and down. This configuration is easiest to use because the eyepiece is always in a convenient position and the telescope is less bulky. However, field rotation results if the telescope is not polar aligned. An equatorial wedge mounts between the telescope and tripod to allow the forks to be aimed toward Polaris so the telescope can track in just one axis, eliminating field rotation.
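The claim in the Wavelength entry above, that shorter wavelength means higher energy for light, follows from the standard relations between wave speed, frequency, wavelength and photon energy. These are standard physics formulas, not text from the glossary:

% wave speed, frequency and wavelength; photon energy via Planck's constant
c = \lambda \nu, \qquad E = h\nu = \frac{hc}{\lambda}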
- Well Depth - Well depth is a measure of how much charge an individual photosite on a CCD chip can contain. Well depth is generally measured in electrons. For example, a Kodak KAF-1300 CCD chip has a well depth of 150,000 electrons. When more charge than this fills the photosite, blooming occurs unless the CCD chip has built-in anti-blooming protection. In general, anti-blooming chips have much lower well depths, which is one of the reasons non-anti-blooming CCDs are popular (they are much more sensitive).
- White balance - An automatic or manual control that adjusts the brightest part of the scene so it looks white.
- White Dwarf - The remnant of a star, at the end of its life, consisting of a carbon and oxygen core supported by electron degeneracy pressure. The surface has a very high temperature and radiates mainly in the ultraviolet (hence white, as in white hot), but it is only about the size of the Earth (hence dwarf). The maximum mass that can be supported by electron degeneracy pressure, and hence the maximum possible mass of a white dwarf, is known as the Chandrasekhar mass and is equal to 1.4 solar masses.
- Word - The standard number of bits that a processor manipulates at one time. Microprocessors typically use 16- or 32-bit words (or 2 bytes and 4 bytes, respectively).
- Work - In physics, a compound of the force exerted with the displacement produced (more specifically, the scalar, or dot, product of force with displacement). Work is not instantaneous, but is defined over the interval over which the force is applied. It has the same units as energy.
- Working Distance - The distance from the front lens of an objective to the focal point. The free working distance is defined as the distance between the front lens of the objective and the cover slip or uncovered sample. Usually objectives with large working distances have low numerical apertures, while high-aperture objectives have small working distances. If a high-aperture objective with a large working distance is desired, the diameter of the objective lens has to be made correspondingly large. These, however, are usually low-correction optic systems, because maintaining extreme process accuracy through a large lens diameter can only be achieved with great effort.
- X-rays - High-energy electromagnetic radiation (light), with short wavelength (less than, roughly, 10 nanometers) and high frequency (greater than about 10¹⁶ Hertz). X-rays would be produced by blackbody radiation at temperatures in excess of a million degrees. Sources of astrophysical X-rays include accretion disks, gas impacting on neutron stars, X-ray bursters, and hot gas located in the centers of galaxy clusters.
- YUV color space - Y stands for the luminance (lightness) information and is compatible with black & white (and gray) signals. U and V are the so-called color difference signals B−Y and R−Y, and carry the additional color information (additive color space). The YUV representation of video information is also oriented on the human perception of visual information, whereas the RGB representation is based more on the technical reproduction of color information. The human eye senses luminance and color with different receptors. There are fewer color receptors, and they have significantly less spatial resolution. (A short conversion sketch follows this block.)
- Zenith - The sky directly overhead. An object "transits" when it crosses the observer's meridian, the north-south line through the zenith.
Most entries are collected from other sources, but no full copy of any existing page was made. Feel free to link this page - if you cannot resist you may also copy it ;-) As of: February 2004
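As an illustration of the YUV color space entry above, here is a minimal Python sketch computing the luminance and the two colour-difference signals for one pixel. The weighting coefficients are the commonly quoted BT.601 (analog PAL-style) values; they are an assumption, since the glossary gives no numbers, and other standards use slightly different weights.

def rgb_to_yuv(r, g, b):
    """Convert one RGB pixel (0-255) to Y (luminance) and U, V (colour differences)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b  # luminance: weighted sum roughly matching eye sensitivity
    u = 0.492 * (b - y)                    # U is a scaled B-Y difference
    v = 0.877 * (r - y)                    # V is a scaled R-Y difference
    return y, u, v

# Example: a pure mid-grey pixel has zero colour difference
print(rgb_to_yuv(128, 128, 128))  # -> (128.0, 0.0, 0.0)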
The time to defibrillation is a key factor that influences survival. For every minute of delay there is a 10% reduction in the survival chance of a person in cardiac arrest from Ventricular Fibrillation, says the Australian Resuscitation Council.

Automated External Defibrillators in Basic Life Support (BLS)
This guideline is applicable to adults and children. The importance of defibrillation has been well established as part of overall resuscitation, along with effective CPR. AEDs must only be used for victims who are unresponsive and not breathing normally. CPR must be continued until the AED is turned on and the pads are attached. The rescuer should then follow the AED prompts.
The time to defibrillation is a key factor that influences survival. For every minute defibrillation is delayed, there is approximately a 10% reduction in survival if the victim is in cardiac arrest due to VF.1 The development of Automated External Defibrillators (AEDs) has made defibrillation part of basic life support. AEDs can accurately identify the cardiac rhythm as "shockable" or "non-shockable".

Use of AEDs
AED use should not be restricted to trained personnel. Allowing the use of AEDs by individuals without prior formal training can be beneficial and may be life-saving. Since even brief training improves performance (e.g., speed of use, correct pad placement), it is recommended that training in the use of AEDs (as a part of BLS) be provided.2 [Class A; LOE II, III-1, III-2, IV, extrapolated evidence]
The use of AEDs by trained lay and professional responders is recommended to increase survival rates in victims with cardiac arrest.2 Implementation of AED programs in public settings should be based on evidence of effectiveness in similar settings. Because population (e.g., rates of witnessed arrest) and program (e.g., response time) characteristics affect survival, when implementing an AED program, community and program leaders should consider factors such as location, development of a team with responsibility for monitoring and maintaining the devices, training and retraining programs for those who are likely to use the AED, coordination with the local Emergency Services, and identification of a group of paid or volunteer individuals who are committed to using the AED for victims of arrest.2 [Class A; LOE I, II, III-1, III-2, III-3, IV]
Deployment of home AEDs for high-risk individuals who do not have an ICD (implantable cardioverter-defibrillator) is safe and feasible and may be considered on an individual basis, but has not been shown to change overall survival rates.2 [Class A; LOE 2, 3]
Use of AEDs in public settings (airports, casinos, sports facilities, etc.) where witnessed cardiac arrest is likely to occur can be useful if an effective response plan is in place.2 An AED can and should be used on pregnant victims. Use of AEDs is reasonable to facilitate early defibrillation in hospitals.2 Studies to date have shown AEDs are effective in decreasing the time to first defibrillation during in-hospital cardiac arrest.2

Pad Placement - Adults:
Place pads on the exposed chest in an anterior-lateral position. Acceptable alternative positions are the anterior-posterior and apex-posterior. In large-breasted individuals it is reasonable to place the left electrode pad lateral to or underneath the left breast, avoiding breast tissue. All pads have a diagram on the outer covering demonstrating the area suitable for pad placement.1 [Class A; LOE extrapolated evidence]
Pad-to-skin contact is important for successful defibrillation. There may be a need to remove moisture or excessive chest hair prior to the application of pads, but the emphasis must be on minimizing delays in shock delivery.1 [Class A; LOE expert consensus opinion]
Avoid placing pads over implantable devices. If there is an implantable medical device, the defibrillator pad should be placed at least 8 cm from the device.1 Do not place AED electrode pads directly on top of a medication patch, because the patch may block delivery of energy from the electrode pad to the heart and may cause small burns to the skin. Remove medication patches and wipe the area before attaching the electrode pad.1

Pad Placement - Children:
Standard adult AEDs and pads are suitable for use in children older than 8 years. Ideally, for children between 1 and 8 years, paediatric pads and an AED with a paediatric capability should be used. These pads are placed as for adults, and they come with a diagram of where on the chest they should be placed.3 If the AED does not have a paediatric mode or paediatric pads, then the standard adult AED and pads can be used.2 Ensure the pads do not touch each other on the child's chest.3 Apply the pads firmly to the bare chest in the anterior-lateral position, as described above for adults. If the pads are too large and there is a danger of charge arcing, use the front-back (antero-posterior) position: one pad placed on the upper back (between the shoulder blades) and the other pad on the front of the chest, if possible slightly to the left.4
Rescuers should follow the prompts: care should be taken not to touch the victim during shock delivery. There are no reports of harm to rescuers from attempting defibrillation in wet environments.2 [Class A; LOE extrapolated evidence] In the presence of oxygen, there are no case reports of fires caused by sparking when shocks were delivered using adhesive pads.2 [Class A; LOE extrapolated evidence]

References
1. Sunde K, Jacobs I, Deakin CD, Hazinski MF, Kerber RE, Koster RW, Morrison LJ, Nolan JP, Sayre MR. Part 6: Defibrillation: 2010 International Consensus on Cardiopulmonary Resuscitation and Emergency Cardiovascular Care Science with Treatment Recommendations. Resuscitation 2010; 81: e71-e85.
2. Soar J, Mancini ME, Bhanji F, Billi JE, Dennett J, Finn J, Ma MHM, Perkins GD, Rodgers DL, Hazinski MF, Jacobs I, Morley PT, on behalf of the Education, Implementation and Teams Chapter Collaborators. Part 12: Education, implementation, and teams: 2010 International Consensus on Cardiopulmonary Resuscitation and Emergency Cardiovascular Care Science with Treatment Recommendations. Resuscitation 2010; 81: e288-e330.
3. de Caen AR, Kleinman ME, Chameides L, Atkins DL, Berg RA, Berg MD, Bhanji F, Biarent D, Bingham R, Coovadia AH, Hazinski MF, Hickey RW, Nadkarni VM, Reis AG, Rodriguez-Nunez A, Tibballs J, Zaritsky AL, Zideman D, Nolan J. Part 10: Paediatric basic and advanced life support: 2010 International Consensus on Cardiopulmonary Resuscitation and Emergency Cardiovascular Care Science with Treatment Recommendations. Resuscitation 2010; 81: e213-e259.
4. Biarent D, Bingham R, Eich C, López-Herce J, Maconochie I, Rodríguez-Núñez A, Rajka T, Zideman D. European Resuscitation Council Guidelines for Resuscitation 2010 Section 6. Paediatric life support. Resuscitation 2010; 81: 1364-1388.
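As a small numerical aside on the "10% reduction in survival per minute of delay" figure quoted at the top of this guideline: the statement can be read either as an absolute drop of 10 percentage points per minute or as a 10% relative drop per minute. The sketch below shows both readings. It is purely illustrative arithmetic with a made-up baseline survival figure, not a clinical model and not part of the guideline.

def survival_estimates(baseline=0.70, minutes=5, rate=0.10):
    """Two naive readings of '10% reduction per minute'; illustration only, not a clinical model."""
    linear = max(baseline - rate * minutes, 0.0)   # absolute: lose 10 percentage points each minute
    relative = baseline * (1 - rate) ** minutes    # relative: keep 90% of the remaining chance each minute
    return linear, relative

# Example: a 5-minute delay from a hypothetical 70% baseline
print(survival_estimates())  # roughly (0.20, 0.41)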
Shallow submarine volcanoes have been newly discovered near the Tokara Islands, which are situated at the volcanic front of the northern Ryukyu Arc in southern Japan. Here, we report for the first time the volatile geochemistry of shallow hydrothermal plumes, which were sampled using a CTD-RMS system after analyzing water column images collected by multi-beam echo sounder surveys. These surveys were performed during the research cruise KS-14-10 of the R/V Shinsei Maru in a region stretching from the Wakamiko Crater to the Tokara Islands. The 3He flux and methane flux in the investigated area are estimated to be (0.99–2.6) × 104 atoms/cm2/sec and 6–60 t/yr, respectively. The methane in the region of the Tokara Islands is a mix between abiotic methane similar to that found in the East Pacific Rise and thermogenic one. Methane at the Wakamiko Crater is of abiotic origin but affected by isotopic fractionation through rapid microbial oxidation. The helium isotopes suggest the presence of subduction-type mantle helium at the Wakamiko Crater, while a larger crustal component is found close to the Tokara Islands. This suggests that the Tokara Islands submarine volcanoes are a key feature of the transition zone between the volcanic front and the spreading back-arc basin. Hydrothermal activity provides geochemical information from the interior of the Earth (e.g., the mantle) at its surface (e.g., the atmosphere or the ocean)1. Shallow hydrothermal fluids emitted from seafloor vents and/or volcanoes generally form plumes in which this information is preserved. Therefore, submarine hydrothermal plumes represent a means to observe and understand regional tectonic settings and fluid circulation. In addition, shallow volcanic eruptions pose a major threat to nearby ships and boats, as a zone of lower water density causes a loss of buoyancy and increases the risk of sinking. In 1944, during the eruption of the Kick ‘em Jenny volcano offshore of the northern coast of Grenada, a passenger vessel sank while crossing the degassing region, resulting in the deaths of 60 people2. The Myojinsho volcano erupted in September 1952 and caused damage to a research vessel, which killed all 31 passengers2,3 in Japan. These dramatic events highlight the necessity and importance of studying submarine volcanic activity in the form of time series. Helium is a chemically inert noble gas, and this characteristic is an advantage in deciphering the gas source. The isotope ratio 3He/4He shows a relationship with global geotectonic settings. Helium-3, in particular, is considered the most important volatile tracer of mantle-derived materials4,5. In addition, methane plays an important role in the greenhouse effect in the atmosphere and in the chemistry of ozone reduction. Major emissions of methane into the atmosphere originate from the biosphere (e.g., wetlands, rice paddies and animals), the geosphere (e.g., hydrocarbon basins and geothermal areas) and anthropogenic activity (e.g., natural gas production and distribution and coal mining)6,7. Methane is one of the most common and abundant gas species in submarine hydrothermal fluids. The concentrations of methane in hydrothermal fluids are often 104–107 times higher than those in ambient ocean water8,9. The stable carbon isotopic composition of methane can be used to understand the global carbon cycle and to characterize geological and microbial consumption processes10,11. 
The ratios of the elemental abundance of carbon and helium (C/3He) together with carbon isotope ratios (δ13C) may provide key information on the origin of the carbon12. Their fluxes from the solid Earth to the atmosphere may provide useful information on the geochemical cycle and the evolution of the atmosphere and ocean13. The Ryukyu Arc, a typical trench-arc system, is formed by the Philippine Sea plate subducting northwestward beneath the Eurasian plate, with variable convergent rates in the range of 4 to 7 cm/yr14. The Ryukyu Arc extends approximately 1,200 km from Kyushu Island (Japan) to Taiwan and can be classified into three segments, namely north, central and south Ryukyu Arc. The separations between these segments are marked by the Tokara Strait and the Kerama Gap. A new chain of submarine volcanoes has been identified by the detailed topography and petrology survey conducted during the cruise KS-14-10 of the R/V Shinsei Maru. This chain of submarine volcanoes has been classified as part of the Tokara Islands and as a part of the volcanic front of the Ryukyu Arc15 (Fig. 1a). To elucidate the regional tectonic setting and the formation of submarine volcanoes, we integrate geomorphological, geophysical and geochemical proxies from a field cruise survey during which we collected water samples from various hydrothermal plumes. We also report the first water column images acquired using multi-beam echo sounder techniques in the region of the Tokara Islands (Daiichi-Amami Knoll and Kotakara Shima; see Fig. 1 and Supplementary Video). These images suggest that degassing in form of bubbles occurs from the investigated shallow hydrothermal systems. Furthermore, we estimate the helium and methane fluxes and the origins of the respective gas species based on the elemental and isotopic analyses of the acquired samples. Our results may be useful for future research related to the carbon cycle and for the risk management of submarine volcanic eruptions in the investigated areas. Bathymetric mapping and dredge sampling were carried out at the Daiichi-Amami Knoll (Fig. 1b). In this region, rhyolite lava and cold seep mussels were recovered by dredging (Fig. 1f), implying that the hydrothermal system is an environment rich in methane and hydrogen sulfide. We collected hydrothermal plume samples via CTD-RMS hydrocasts immediately after the water column image analysis (Fig. 1c). The 3He/4He ratios, CH4 concentrations and δ13CCH4 values of the seawater samples acquired at Daiichi-Amami Knoll fall in the ranges of 1.01 to 1.57 Ra (where Ra is the atmospheric 3He/4He ratio of 1.382 × 10−6)16, 3 to 6738 nM and −48.0 to −27.9‰, respectively. Strong hydrothermal activity was identified at a water depth range between 275 to 300 m based on high turbidity, low pH values, high 3He/4He ratios and significant CH4 concentration anomalies corresponding to more positive δ13CCH4 values (Fig. 2), although no apparent temperature anomaly was detected. Both the bathymetric map and water column imaging (Fig. 1d,e) indicate the presence of shallow submarine volcanoes at a water depth of approximately 130 m in the region of Kotakara Shima. Gas geochemistry profiles of the shallow hydrothermal and/or seepage plume are shown in Fig. 2. The 3He/4He ratios varied from 1.00 to 1.30 Ra. The methane concentrations and carbon isotope ratios fall in the range of 4 to 199 nM and −43.5 to −24.8‰, respectively. 
Basic seawater physico-chemical parameters (turbidity, pH, and temperature) and gas geochemistry parameters (helium isotope ratios, methane concentrations and δ¹³C-CH₄ values) showed anomalies at the same water depth of approximately 130 m and indicate the presence of active fluid emissions at the seafloor. The observed negative anomaly in the seawater temperature profile suggests the occurrence of cold seepage enriched in mantle helium. The ³He/⁴He ratios increased with increasing water depth from 1.04 to 2.49 Ra, which agrees with previous studies17,18. The methane concentration profile, following a similar pattern as the helium isotope ratio profile, increased with depth from 22 to 4478 nM. The stable carbon isotopes of methane showed large variations, covering a range of values from −29.6 to 6.3‰. Hydrothermal activity was identified at a depth of approximately 150 m based on low pH values and high ³He/⁴He ratios measured in the seawater samples. A significant methane anomaly was observed at a depth of approximately 200 m. A positive correlation exists between the ³He/⁴He ratios and the methane contents, similar to the Daiichi-Amami Knoll and Kotakara Shima samples.

A positive relationship between the δ³He value (where δ³He = (R − 1) × 100, R = ³He/⁴He) and the excess ⁴He/²⁰Ne ratio relative to air-saturated seawater values suggests two-component mixing between the atmospheric and volcanic sources (Fig. 3). The end member for the Wakamiko Crater samples exhibits a subduction-type mantle helium signature of approximately 7 Ra, which is consistent with gases from volcanoes and hot springs in the Circum-Pacific belt with high ³He/⁴He ratios (up to 7.86 Ra)19. The Tokara Islands helium isotopic composition is affected by the addition of a larger amount of crustal He (which has a typical R/Ra of 0.02–0.03), lowering the mantle value down to 4 Ra. The helium isotope ratios of hydrothermal fluids from different geologic settings are listed in Table 1. The helium at the frontal arc of a subduction zone exhibits a crustal signature characterized by low ³He/⁴He ratios and biogenic methane, while the volcanic arc region exhibits mantle helium with high ³He/⁴He ratios and thermogenic methane. The Tokara samples show an intermediate signature, and the tectonic implications are described later.

Helium and methane fluxes provide further geochemical information. It is possible to calculate the ³He flux based on the observed ³He concentration gradients and a simple steady-state diffusion model. Assuming that the concentration of neon dissolved in seawater is at atmospheric equilibrium (i.e., 1.86 × 10⁻⁷ cm³ STP/g H₂O), it is possible to calculate the ³He gradient at each depth using the measured ³He/⁴He and ⁴He/²⁰Ne ratios. Assuming a vertical eddy diffusivity of 100 cm²/s20, the shallow hydrothermal and/or low-temperature seepage ³He fluxes are estimated to be (1.9 ± 0.2) × 10⁴, (9.9 ± 4.5) × 10³ and (2.6 ± 0.3) × 10⁴ atoms/cm²/s at Daiichi-Amami Knoll, Kotakara Shima and the Wakamiko Crater, respectively. The ³He fluxes at the Wakamiko Crater and the Tokara Islands contribute 0.01% and 0.07% of the total ³He emissions (8.97 × 10²⁴ atoms/yr) along the Japan arc4. The ³He flux of the Tokara Islands is four orders of magnitude higher than that of the seafloor in the southern Okinawa Trough (1.6 ³He atoms/cm²/s)21. The ³He flux of the Tokara Islands is derived from the venting of the submarine volcanoes and could be regarded as direct emissions from the crater.
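To make the steady-state diffusion estimate described above concrete, here is a minimal Python sketch of that kind of calculation: the dissolved ³He concentration is reconstructed from the measured ⁴He/²⁰Ne and ³He/⁴He ratios under the assumption that neon is at atmospheric equilibrium, and the flux is the vertical eddy diffusivity times the concentration gradient. The sample values, the seawater density factor and the STP conversion are illustrative assumptions, not the cruise data or the authors' exact code.

# Minimal sketch of a steady-state diffusive 3He flux estimate (illustrative numbers only).
NE_EQ = 1.86e-7             # cm^3 STP of Ne per g of seawater at atmospheric equilibrium (from the text)
CM3STP_TO_ATOMS = 2.687e19  # approx. atoms per cm^3 at STP (Loschmidt constant), converts cm^3 STP to atoms
KZ = 100.0                  # vertical eddy diffusivity in cm^2/s (value assumed in the text)

def he3_concentration(he4_ne20, he3_he4):
    """3He atoms per gram of seawater, assuming Ne is at atmospheric equilibrium."""
    he4 = he4_ne20 * NE_EQ       # cm^3 STP of 4He per g
    he3 = he3_he4 * he4          # cm^3 STP of 3He per g
    return he3 * CM3STP_TO_ATOMS # atoms of 3He per g

def he3_flux(sample_shallow, sample_deep, dz_cm, density=1.025):
    """Diffusive flux F = Kz * d[3He]/dz in atoms/cm^2/s; samples are (4He/20Ne, 3He/4He) pairs."""
    c1 = he3_concentration(*sample_shallow) * density  # atoms per cm^3 (per-gram value times g/cm^3)
    c2 = he3_concentration(*sample_deep) * density
    gradient = (c2 - c1) / dz_cm
    return KZ * gradient

# Hypothetical example: two samples 50 m (5000 cm) apart; yields a flux of order 10^4 atoms/cm^2/s
print(he3_flux((0.30, 1.40e-6), (0.55, 2.10e-6), dz_cm=5000.0))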
However, the ³He flux of the southern Okinawa Trough represents a diffusive flux from a volcanic edifice21. These phenomena are similar to observations at terrestrial volcanoes, where the diffuse degassing fluxes through the volcanic edifice and the plume degassing from summit craters may be complementary22. The ³He flux at the Wakamiko Crater is similar to a previously reported flux estimate18, implying that the volcanic activity was stable between 2010 and 2014. There is no significant difference in ³He flux between the Tokara Islands and the Wakamiko Crater. However, the estimated end members representing the sources of terrigenic helium imply a larger crustal contribution, i.e., a larger share of ⁴He, at the Tokara Islands. As both regions are located at the volcanic front of the Ryukyu Arc and its extension, the tectonic settings likely play an important role in determining the different terrigenic He isotope contributions at the respective locations. The slab bending beneath the north-central Ryukyu Arc, based on the information provided by the hypocentral earthquake distribution, is characterized by different dip angles to the north and south of the Tokara Strait, varying from sharp (e.g., 70° dips down to 80 km depth) in the north to gentle (40–50° dips) in the south23,24,25. Based on variations in geochemical parameters (e.g., Sr, Nd, and Pb isotopes) within the north Ryukyu Arc, various lavas from south Kyushu and the north-central Ryukyu arc were derived from mantle material of Indian Ocean-type and mantle wedge material of Pacific-type, respectively25. In subduction zones, the relationship between helium isotope ratios and geotectonic setting is well documented (e.g., Japan19 and southern Italy26): the fore-arc region is characterized by crustal helium, while the volcanic arc and back-arc regions show mantle helium. In light of these geophysical and geochemical insights, we propose that the Tokara Islands represent a transition zone from island-arc volcanism to back-arc basin spreading volcanism with a south-north orientation; thus, the helium isotope ratios are somewhat low.

Submarine hydrothermal systems contain many gases. Methane is a useful species as well as an effective chemical tracer and can be used to identify the source of carbon. The positive correlation between ³He and methane concentrations suggests that mantle helium may be accompanied by methane. A least-squares fitting of ³He to methane concentrations allows the determination of end-member CH₄/³He ratios of (3.5 ± 0.2) × 10⁹, (7.1 ± 0.7) × 10⁸ and (9.3 ± 6.6) × 10⁸ in the regions of Daiichi-Amami Knoll, Kotakara Shima and the Wakamiko Crater, respectively. The CH₄/³He ratio at Daiichi-Amami Knoll is comparable to that of the Rainbow hydrothermal site on the Mid-Atlantic Ridge (1.3 × 10⁸; ref. 27). At the Wakamiko Crater, the CH₄/³He ratio is similar to the estimate of ~10⁹ in a previous report17. Additionally, the ratios of all three of these regions are close to the ratios of the Okinawa Trough, i.e., (7.6–14) × 10⁸ (refs 28 and 29). Based on the CH₄/³He ratio and ³He flux at Daiichi-Amami Knoll, we estimate a methane flux of 6.62 × 10¹³ molecules/cm²/s. The methane fluxes at Kotakara Shima and the Wakamiko Crater are approximately one tenth and one third of the flux at Daiichi-Amami Knoll, respectively.
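The paragraph that follows converts such a flux, integrated over an assumed degassing area, into tonnes of methane per year. As a rough cross-check of that unit conversion, here is a short sketch; the flux and area follow the values quoted in the text, while the seconds-per-year and molar-mass constants are standard, and the result should land in the same range as the tens of t/yr quoted below.

AVOGADRO = 6.022e23        # molecules per mole
CH4_MOLAR_MASS = 16.04     # grams per mole of CH4
SECONDS_PER_YEAR = 3.156e7

def ch4_flux_to_tonnes_per_year(flux_molecules_cm2_s, area_m2):
    """Convert a CH4 flux in molecules/cm^2/s over a given area (m^2) to tonnes per year."""
    area_cm2 = area_m2 * 1.0e4                        # 1 m^2 = 10^4 cm^2
    molecules_per_year = flux_molecules_cm2_s * area_cm2 * SECONDS_PER_YEAR
    grams_per_year = molecules_per_year / AVOGADRO * CH4_MOLAR_MASS
    return grams_per_year / 1.0e6                     # 10^6 g = 1 tonne

# Flux at Daiichi-Amami Knoll (6.62e13 molecules/cm^2/s) over the assumed 1e5 m^2 area
print(ch4_flux_to_tonnes_per_year(6.62e13, 1.0e5))    # roughly 56 t/yr, same order as the quoted ~66 t/yr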
Assuming a degassing area of 1 × 10⁵ m² (approximately 300 m × 300 m) based on the topographic highs, the total methane emissions of the Tokara Islands and the Wakamiko Crater are approximately 66 and 22 t/yr, respectively. These observed CH₄ fluxes are higher than those found in major geological methane emission regions (mud volcanoes, gas hydrate areas and methane-rich gas in Taiwan)30,31,32 (Supplementary Table). Thus, methane contributed from the geosphere, especially by shallow submarine systems, is an important component in the global natural methane budget.

The estimated end-member δ¹³C-CH₄ value of the Tokara Islands is more positive, i.e., more enriched in thermogenic methane, than the natural gases in brines from the gas fields in southwest Japan (−38.9 to −67.5‰ PDB)33, but more negative than the gases observed in the hydrothermal fluids from mid-ocean ridge regions (−8.6 to −20‰ PDB)5. However, the estimated end-member δ¹³C-CH₄ value at the Tokara Islands is similar to that of the hydrothermal fluids at Minami-Ensei and Yonaguni in the Okinawa Trough (−25 to −26.9‰ PDB)34. Together with the carbon isotopes of methane-rich gas in Japan35, we conclude that the origin of the methane is related to the geological setting. The South Kanto region represents a fore-arc region of the subduction zone characterized by biogenic methane with light δ¹³C-CH₄ values and low ³He/⁴He ratios, while the back-arc region (Akita and Niigata) features thermogenic methane with high ³He/⁴He ratios. In this study, the Tokara Islands are located between the fore-arc and back-arc systems at the volcanic front; therefore, the origin of the methane is a mixture of thermogenic and biogenic methane.

The plot of δ¹³C-CH₄ versus 1/CH₄ for the Wakamiko Crater is too scattered to identify the end member of the δ¹³C-CH₄ values. This implies that the methane of the hydrothermal plume is not affected only by the mixing of terrigenic fluids with seawater. The profiles at the Wakamiko Crater (Fig. 2) show a rapid decrease in CH₄ concentrations, especially in the interval of 150 to 200 m at station 5, but increasing trends in the δ¹³C-CH₄ values not only at station 5 but also at station 6 within the depth interval of 50 to 100 m. These phenomena suggest that additional microbial oxidation activity is present within the hydrothermal plume, as microbial oxidation processes tend to preferentially consume CH₄ molecules with lighter carbon isotopes (¹²C) rather than molecules with heavier carbon isotopes (¹³C)8,9,36.

Figure 4 shows the relationship between the δ¹³C-CH₄ values and CH₄/³He ratios measured in the samples. There can be four major end members for the origin of methane in shallow marine hydrothermal systems: (1) abiogenic methane produced by chemical reactions, as observed on the East Pacific Rise (EPR); (2) biogenic methane produced by microbial activity utilizing inorganic carbon; (3) thermogenic methane from the thermal decomposition of organic matter; and (4) oxidized methane with heavier carbon isotope values formed through microbial fractionation in old gas plumes9. Most of the data from the Tokara Islands (red circles in Fig. 4) indicate mixing between EPR-type abiogenic methane and thermogenic methane, similar to the data from the Okinawa Trough28,29. In these regions, the samples closest to the sea surface are characterized by relatively negative δ¹³C-CH₄ values and lower CH₄/³He ratios, implying that the end members of methane are not only affected by thermogenic methane and EPR-type methane but also by biogenic influences.
The hydrothermal vents in these regions are relatively shallow, suggesting that the presence of thermogenic CH4 in the plumes is caused by magmatic activity, as the methane is carried from the deep lithosphere towards the seafloor by vertical hydrothermal fluid migration. However, the data from the Wakamiko Crater (blue squares) appear to be characterized by a different end-member system from the data at the Tokara Islands in Fig. 4. Previous studies suggested that the origin of the methane at Wakamiko Crater is thermogenic due to volcanic heat interacting with organic matter in sediments and high ammonium concentrations in sediment pore water37. The Wakamiko Crater is located within Kagoshima Bay, where organic matter is supplied by surrounding lands and the seawater is affected by the basin morphology and somewhat stagnant currents. However, as mentioned above, in the hydrothermal plumes at the Wakamiko Crater, more complex processes seem to affect the dissolved gas species in the water column. Considering the geographical settings and the results of previous studies, we conclude that the thermogenic methane carried by the emitted hydrothermal fluids and the hydrothermal plumes experiences methane oxidation, resulting in microbial fractionation and isotopically heavy methane. In summary, we have carried out a research cruise on the R/V Shinsei Maru (expedition KS-14-10) in a region stretching from the Wakamiko Crater in Kagoshima Bay to the sea close to the Tokara Islands in SW Japan. The 3He and CH4 fluxes in the investigated shallow submarine hydrothermal systems are estimated to be (1.0–2.6) × 104 atoms/cm2/s and 6–60 t/yr, respectively. The methane fluxes are not negligibly small compared with major geological emissions. The mantle helium contribution is smaller at the Tokara Islands than at the Wakamiko Crater, which suggests that the volcanic front exhibits a mixing pattern between fore arc and volcanic front signatures. Sampling and on-site data acquisition The multi-beam echo sounder survey can provide in situ preliminary submarine plume information, such as water column imaging, which allows rapid identification of the ideal location for seawater sampling. The CTD-CMS system used in this work consists of a CTD (Conductivity Temperature Depth profiler), a CMS (Carousel Multiple Sampling system), 24 Niskin bottles, and a turbidity meter that allows detection of hydrothermal plumes based on turbidity anomalies in the water column. The water in the Niskin bottles was immediately transferred to approximately 60-cm-long copper tubes without exposure to the atmosphere. Both ends of the tubes were sealed airtight by steel clamps for storage. In the laboratory, we connected the tubes to a high-vacuum line with a lead glass container to extract the dissolved gases from the seawater samples based on the displacement method. The exsolved gases were transferred from the glass bottle into a purification line where helium was purified using hot titanium-zirconium getters and charcoal traps held at liquid nitrogen temperature. The 4He/20Ne ratios were measured by an online quadrupole mass spectrometer (QMS 100, Pfeiffer). Subsequently, helium was separated from neon (and other residual gas species) using a cryogenic charcoal trap held at an extremely low temperature, 40 K38,39. 
The ³He/⁴He ratios were determined using a conventional noble gas mass spectrometer (Helix-SFT, GV Instruments) and calibrated against the Helium Standard of Japan (HESJ)40 at the Atmosphere and Ocean Research Institute (AORI), The University of Tokyo. The experimental errors of the ³He/⁴He and ⁴He/²⁰Ne ratios were approximately 0.4 and 3%, respectively, at the one-sigma level41. The measured ³He/⁴He ratios are reported using the Ra-notation, where the determined He isotope ratios are normalized by the ³He/⁴He ratio of atmospheric air, Ra (i.e., 1.382 × 10⁻⁶)16.

Methane concentrations and stable carbon isotope (δ¹³C) analysis
Water samples were carefully introduced from the Niskin bottles into 125 mL glass vials using a Teflon tube and avoiding the presence of air bubbles. After overflowing by twice the volume, we removed the Teflon tubes slowly and added 0.6 mL of a saturated mercury chloride solution for sterilization. The samples were capped with gray butyl rubber, sealed with an aluminum cap, and kept in a dark refrigerator until analysis. CH₄ concentrations and δ¹³C values were simultaneously determined from one sample vial by using an isotope-ratio-monitoring gas chromatography mass spectrometer at Nagoya University. The system consisted of three parts: an extraction and purification line, a gas chromatograph (HP6850), and an isotope ratio mass spectrometer (Finnigan MAT252)8. The stable carbon isotopic ratios are reported using the common δ-notation in per-mil against the PDB standard42.

How to cite this article: Wen, H. et al. Helium and methane sources and fluxes of shallow submarine hydrothermal plumes near the Tokara Islands, Southern Japan. Sci. Rep. 6, 34126; doi: 10.1038/srep34126 (2016).

We would like to thank the captain and crew of the R/V Shinsei Maru for their kind collaboration during the cruise of KS-14-10 leg-2. Constructive comments and suggestions made by two anonymous reviewers were very helpful for improving an early version of this manuscript. H.W. thanks the National Science Council of Taiwan for the Graduate Students Study Abroad Program for a visiting fellowship (102-2917-I-002-110). This research was also partly supported by funding from the EU Seventh Framework Programme for Research and Technological Development (Marie Curie International Outgoing Fellowship to Y.T., Contract PIOF-GA-2012-332404) and a Grant-in-Aid for Scientific Research (15H05830) from MEXT, Japan.

This work is licensed under a Creative Commons Attribution 4.0 International License. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in the credit line; if the material is not included under the Creative Commons license, users will need to obtain permission from the license holder to reproduce the material. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/
Watch for the Recycle logo to find gems from the back issues! Feb 28, 2011 Three recent articles about amazing animals and fossils deserve entries of their own, but due to lack of time, will be corralled here lest, like strays, they wander off. Evolution was largely ignored in these stories. None of the popular articles on turtle navigation mentioned it, but the source paper in Current Biology1 only mentioned it in passing These results are consistent with the interpretation that birds, like turtles, have evolved a way to assess longitude that is independent of time-keeping. - Turtle navigation: Wired Science has a beautiful photo of a marine turtle in an article about how they achieve a difficult navigational skill: determining longitude from the earths magnetic field. By varying magnetic fields in research ponds with hatchlings, researchers at the University of North Carolina determined that, Against reasonable expectation, the turtles clearly sensed differences in geomagnetic angle. See also New Scientist and Human efforts to determine longitude required accurate clocks. The researchers didnt explain precisely how the turtles do it, other than to rule out biological clocks. They were clearly astonished by animal navigation in general: That turtles and other migratory animals could detect such a small change was considered unrealistic, but experiments on animals released in out-of-the-way locations repeatedly described them finding home with unerring accuracy and efficiency, explicable only as a product of both longitudinal and latitudinal awareness. - Cat bite: The BBC News reported on new findings about how sabretooth cats like Smilodon were able to close their mouths with those long, dagger-like teeth. Studies of the bones and neck muscle insertion points by a team at Aalborg University in Denmark revealed how the cats jaw muscles were aligned to pull its jaws closed, very directly and efficiently. The article ascribed all this efficiency to evolution. - Charismatic behemoth: A new sauropod species described on Science Daily had legs like bars of iron, by Job. Brontomerus mcintoshi, or thunder-thighs after its enormously powerful thigh muscles, found in Utah, may have kicked its attackers to kingdom come. Brontomerus mcintoshi is a charismatic dinosaur and an exciting discovery for us, said first author Mike Taylor, a researcher in the Department of Earth Sciences at University College London. Less emphasized in the article was the worry that this discovery undermines previously-held beliefs about sauropods i.e., that they were disappearing by the Cretaceous. It now seems that sauropods may have been every bit as diverse as they were during the Jurassic, but much less abundant and so much less likely to be found. Similarly, most of the dinosaur articles did not mention evolution. A researcher quoted in Live Science speculated, We think the most likely reason this evolved was over competition for mates, but once it evolved, it would be bizarre if it wasnt also used in predator defense. This makes it clear that the bones did not give a definitive answer to how thunder-thighs grew a thighs size to kick enemies asunder. The BBC article on sabretooth cats mentioned evolution the most. There, however, evolution was merely assumed: e.g., the cats jaw muscles evolved into a specialised pattern, which allowed them to open their mouths so wide. 
The article talked about how Smilodon evolved and how the researchers drew an evolutionary map to show how sabretooths evolved longer canine teeth under evolutionary pressure to kill prey with a deep and efficient stab to the throat.2 Even if there was variation among cats and their fangs, the fangs already existed in cats, and the cats and all their musculature already existed in the cat family. Even young-earth creationists would not disagree that variation in existing genetic information could lead to adaptations in particular environments. 1. Putnam et al, Longitude Perception and Bicoordinate Magnetic Maps in Sea Turtles, 24 February 2011, doi:10.1016/j.cub.2011.01.057. 2. Note: evolutionary pressure is a misnomer. Natural selection may constrain variation by preventing survival of mutants, but provides no pressure, impetus, guidance, force or direction (see 01/24/2008). Evolution was tacked-on to some of these stories like hot sauce on ice cream. It had nothing to do with the facts of the story and created a bad aftertaste to otherwise interesting stories about amazing creatures. Scrape off the hot sauce best you can. (Dont stir it.) Why your inner ear looks like a snail shell: from 02/28/2006. Next headline on: Darwin and Evolution Feb 27, 2011 Philosophy of science is a broad discipline incorporating many sub-disciplines such as intellectual history, sociology, ethics, rhetoric, logic, demarcation of science from pseudoscience, classification, discovery, verification, explanation and more. A dozen recent news stories discussed some of these topics. For a look at some of these issues from proponents of intelligent design, see an examination of Freeman Dysons article by Denyse O'Leary on Descent, another O'Leary article on about origin-of-life science, a treatment of Beddingtons outrage against pseudoscience on the blog Darwins God by Cornelius Hunter and In a subsequent post on Uncommon Descent, O'Leary quoted Frank Furedi who views Beddingtons intolerance as a fast-backward to the Middle Ages. - Medical ethics: PhysOrg reproduced an AP story about medical research on humans in the US in the 1940s to 1960s. The details are quite shocking and were unusually unethical, even at the time. They included giving diseases to prisoners and the disabled. The news media largely ignored these stories, the article said. This entry touches on the need to set ethical limitations on scientific inquiry. - Futurism, ethics, and health: Should genetic interventions be used to create healthy babies? This sensitive question, behind which lurks the ghost of positive eugenics, was discussed cheerfully in Science Magazine (25 February 2011: DOI: 10.1126/science.1204088) on the 10th anniversary of the Human Genome Project. Genetics is a way of thinking. Genomics is a set of tools, Mary-Claire King wrote, glossing over the potential for abuse of thinking and tools. If we think rigorously about genetics and use these tools well, she said, the resolution of inherited disorders on behalf of our patients will be bounded only by our imaginations. One healthy infant at a time is not a bad way to begin. But how will babies born without genetic intervention be treated by society? King assumed universal agreement on the meaning of well and spoke of rigor, good and bad as if bounded only by human imagination. A quick look back at the 20th century shows some not-so-cheerful ways our predecessors applied their imaginations using thinking and tools. 
- Philosophy of discovery: A story on PhysOrg exemplified how, in the philosophy of science, discovery is distinct from explanation. Some mathematicians at Emory University were on a nature hike when a Eureka! moment hit them. So what is an aha moment? the article asked. The way I see it, its not something that happens to you instantly, said Ken Ono. It just happens to be the moment that you realize the fruits of all your hard work. Article includes a video clip of Ono telling his story on the trail. - Paradigms and models: Some European philosophers have tried to put Thomas Kuhn on a chip. In Emergence and Decline of Scientific Paradigms described on PhysOrg, they produced a mathematical model showing how scientific paradigms rise and fall. Although many factors influence the emergence and decline of such scientific paradigms, the article said, a new model has captured how these ideas spread, providing a better understanding of paradigm shifts and the culture Like some meta-theory on theories, or observation of observers, their mathematical model had all the coldness of monitoring bacteria in a Petri dish. Paradigms mentioned included climate change, nanotechnology and chaos theory. Not apparent was how their model intersected any conception of validation, verification, or truth. - History of science: An article at PhysOrg might be enough to make a modern scientist scream. Dr. Lawrence Principe, historian of science at Johns Hopkins, is defending alchemy as legitimate research for its time. In Why many historians no longer see alchemy as an occult practice, Phillip Schewe wrote that the scholars who write the history of science and technology no longer lump alchemy in with witchcraft as a pseudo-science. Instead they view it as a precursor to chemistry. Alchemists, they said, should not be dismissed solely for failing their main mission to turn base metals into gold; Alchemists ... were active in assaying metals, refining salts, making dyes and pigments, making glass and ceramics, artificial fertilizers, perfumes, and cosmetics i.e., skills useful for the emerging science of chemistry. Famous practitioners of alchemy included Robert Boyle and Isaac Newton. - Design detection: How natures patterns form was the headline of a short article on PhysOrg. With an image of a Fibonacci spiral pattern leading the story, the article mentioned how many universal patterns, seen in sunflowers, galaxies, animal coloration or sand dunes are the result of some kind of stress, applied stress. Alan Newell at the University of Arizona was telling a meeting of the AAAS that biological forms are controlled more by the laws of physics than by evolution, i.e., the products of physical forces, rather than evolutionary ones. Further, Patterns arise when the symmetry of a system is broken, Newell said. The similarity in patterns from system to system occur when the systems have similar symmetry, rather than because the systems are made from the same materials. Newell believes patterns are impressed on nature mechanically, but as a consequence of biochemically and mechanically induced pattern-forming instabilities that can be described in mathematical models. The short article did not address why natural laws and instabilities should be symmetric, or finely tuned to reproduce a Fibonacci series, or why the human mind finds these patterns beautiful. 
Newell did end, though, on a poetic note: Mathematics is like a good poem, which separates the superfluous from the essentials and fuses the essentials into a kernel of truth. - Verification and falsification: Nature News reported that the Apex Chert in western Australia, thought to be evidence for the oldest life on the planet, may have formed by inorganic processes. This incident touches on several areas in philosophy of science: verification, interpretation of evidence, ethics, and history of science: Twenty years ago the palaeontological community gasped as geoscientists revealed evidence for the oldest bacterial fossils on the planet, the article said. Now, a report in Nature Geoscience shows that the filament structures that were so important in the fossil descriptions are not remnants of ancient life, but instead composed of inorganic material. This appears to be a case of scientists who wanted to find life so badly that they ignored the obvious, the article said. Olcutt Marshall opened some philosophical cans of worms with his remark, There is a willful blindness about these structures that sometimes has more to do with local politics than global truth. See also the PhysOrg write-up. - Paradigm backlash: As successful as Newtonian mechanical philosophy was in the 17th and 18th centuries, it produced a backlash, wrote George Rousseau in a book review in Nature (24 Feb 2011, doi:10.1038/470462a). Commenting on Stephen Gaukrogers new book The Collapse of Mechanism and the Rise of Sensibility: Science and the Shaping of Modernity 1680–1760, Rousseau noted that while most scientists are aware of Newtons achievement, Less familiar is the philosophical phase that followed sensibility, the view of humans as organic creatures, incapable of reduction to the sum of their mechanical parts, especially in the affective, moral and political realms. Accordingly, Stephen Gaukroger explains how the philosophies of mechanism collapsed over eight decades, to be replaced by a more sensory view of nature. The review warned of simplistic views of mechanical philosophy (sometimes abbreviated mechanism): Mechanism was never a single set of principles about machine-like systems, he said. It comprised an array of disparate beliefs, experiences and practices that were followed in far-flung places and presided over by its principal architects: René Descartes, Thomas Hobbes, John Locke and Newton. Sensibility, likewise, is a vague term, he said. According to Gaukroger, sensibility allows connections to be made between natural-philosophical and moral, political, and psychological theories in a new way, shaping a new field of the moral sciences. While a strict mechanist or 20th-century positivist might take issue with that phrase as an oxymoron, the definition points out the necessity of philosophical judgments on the nature of science. The 1760s, the review said, was a watershed decade and the start of the so-called Romantic era with roots in sensibility stretching back a century or more: Imaginative literature, later codified as Romantic, also drove nails into mechanisms coffin by postulating that matter was more complex than the mechanical natural philosophers thought. A human is not a mere machine; a fly is much harder to study than a pebble. By focusing on human nature rather than physical matter, the language of the new literature helped to alter the way scientists conceived their models, and enabled modernity to commence its work. 
It is ironic that the reviewer shares a surname with Jean-Jacques Rousseau (1712-1778), an icon of Romanticism. - Search for extra-terrestrial science: Can scientists justify their work based on what they expect to find, rather than what they have found? Rowan Hooper on New Scientist recouped the latest scoop on planet counts from the Kepler spacecraft, then launched into some philosophy: Exoplanet findings spark philosophical debate, he titled his article, noting that What were once speculative and philosophical questions are now being tackled with real data, generated by NASAs planet-hunting space telescope, Kepler. The word data is a philosophically-loaded question. To what extent does data about extrasolar planets apply to the question of extraterrestrial intelligence? Hooper heard two speakers at the recent AAAS meeting discuss how Christians and Muslims might respond positively to detection of aliens. Both their arguments amounted to the (to my mind) rather dubious claim that the discovery of extraterrestrial life would pose no challenge or crisis to terrestrial religion. Then he heard talks about the possibility of life detection by a pessimist, Howard Smith [Harvard-Smithsonian Institute for Astrophysics] and an optimist, Seth Shostak [SETI Institute]. Worried that it might take 100 generations to get in touch with aliens, Smith coined a new phrase: the misanthropic principle says that intelligent life is so unlikely to evolve that we might as well accept that well never know if we are unique or not. Hooper seemed to prefer Shostaks enthusiastic prediction of successful detection within 24 years, even though it was couched in a philosophical statement, Believing there arent ETs is believing in miracles. - Demarcation: According to Research Professional John Beddington, the Presidents science advisor, made waves by calling for scientists to be grossly intolerant of what he perceives as pseudoscience. As for what constitutes pseudoscience, Beddington referred to the building up of what purports to be science by the cherry-picking of the facts and the failure to use scientific evidence and the failure to use scientific method. Particularly, he had in mind politically or morally or religiously motivated nonsense. Beddington apparently does not realize that the demarcation problem and the scientific method are issues that loom large in philosophy of science. The assumption that science can be reduced to a bias-free method apparently motivated his sermon for scientists to be as grossly intolerant of that sort of thing as they are of racism or homophobia. He views religious or political influence as pernicious, but he left begging the question of whether secular consensus science itself is free of such influences. Sensing a little unease with his own moral plea, Beddington told his audience, Id urge you, and this is a kind of strange message to go out, but go out and be much more intolerant That is clearly a moral judgment, not a scientific finding. Beddington also did not distinguish morally ... motivated nonsense from his own moral judgments. Whether or not one agrees with his opinions, the story illustrates how science is inextricable from moral values. 
- Sociology of OOL: As a reporter at a recent conference of origin-of-life researchers, Dennis Overbye, writing for the New York Times, seemed amused by the curious sociology of his subjects: Two dozen chemists, geologists, biologists, planetary scientists and physicists gathered here recently to ponder where and what Eden might have been. Over a long weekend they plastered the screen in their conference room with intricate chemical diagrams through which electrons bounced in a series of interactions like marbles rattling up and down and over bridges through one of those childs toys, transferring energy, taking care of the business of nascent life. The names of elements and molecules tripped off chemists tongues as if they were the eccentric relatives who show up at Thanksgiving every year. While not unkind to their ramblings, Overbye found plenty of confusion, disagreement, and ignorance to showcase. His last quip was about Craig Venters intelligent design project to create synthetic life: And so his genome is now in the process of acquiring its first, non-Darwinian mutation. - Science and Meaning What does science mean? In the New York Review of Books, Freeman Dyson discussed information theory and the history of science under the headline, How We Know. In the body of his book review of The Information: A History, a Theory, a Flood by James Gleick, Dyson, while trying to clear up some misinformation, exposed some embarrassments in science that call into question not only how we know, but what we know: The public has a distorted view of science, because children are taught in school that science is a collection of firmly established truths. In fact, science is not a collection of truths. It is a continuing exploration of mysteries. Wherever we go exploring in the world around us, we find mysteries. Our planet is covered by continents and oceans whose origin we cannot explain. Our atmosphere is constantly stirred by poorly understood disturbances that we call weather and climate. The visible matter in the universe is outweighed by a much larger quantity of dark invisible matter that we do not understand at all. The origin of life is a total mystery, and so is the existence of human consciousness. We have no clear idea how the electrical discharges occurring in nerve cells in our brains are connected with our feelings and desires and actions. Scientists get a kick out of the endless quest: The vision of the future as an infinite playground, with an unending sequence of mysteries to be understood by an unending sequence of players exploring an unending supply of information, is a glorious vision for scientists, he said, but not to artists, writers, and ordinary people. Dyson worried about the flood of information around us being separated from meaning. Now we can pass a piece of human DNA through a machine and rapidly read out the genetic information, Dyson noted, but we cannot read out the meaning of the information. We shall not fully understand the information until we understand in detail the processes of embryonic development that the DNA orchestrated to make us what we are. Even physics, the most exact and most firmly established branch of science, is still full of mysteries.... Claude Shannon, who felt Meaning is irrelevant to his information theory, started a flood of information in which we are drowning, Dyson said. 
Is our fate to look out upon, as Jorge Luis Borges portrayed the universe in 1941, a library, with an infinite array of books and shelves and mirrors, never knowing what it all means? "It is our task as humans to bring meaning back into this wasteland," Dyson concluded. "As finite creatures who think and feel, we can create islands of meaning in the sea of information." While Dyson examined the definition of information in detail in his review, he left dangling an even more important definition: the meaning of meaning. Is meaning defined by the individual artist, writer, or ordinary person? Who decides when something is meaningful? Are islands of meaning grounded on a continent of truth, or are they adrift in an infinite sea of meaningless information?

There's a new anthology of essays by creationists that calls into question the objectivity of science. The description of Sacred Cows in Science: No Objectivity Allowed (Norbert Smith, ed.) reads: "Science was at one time defined by its method. Carefully controlled experiments, provisional conclusions, and considered debate once defined the field. But those days have passed. Today, science is defined by public policy statements, consensus, and a set of metaphysical assumptions that cannot be directly tested. Students are told that science is in conflict with faith or, worse yet, that faith operates in a different magisterial [sic] with no real application to the world we inhabit." Chapters include material on life sciences, physical sciences, and behavioral sciences. The first reviewer agreed: "Science should be a discipline based on dissent, but as more and more science becomes publicly funded, ideas become entrenched, and outside ideas are no longer heard." This is all interesting material with too much to comment on in each article. Readers are encouraged to become knowledgeable about these controversies with the Baloney Detector in good working order and refine their philosophy of science in light of these real-world issues. Science is what scientists do, unless they can defend aspiring to an unattainable goal.

A theme in all the above is how science and philosophy are both human enterprises, subject to all the biases, assumptions, limitations, mistakes, and changes of mind connected with any other human activity. One can hope to approach limitations with more clarity in a systematic way, but they are still limitations. One thing we need more than science or philosophy is wisdom. The writer of Psalm 119 offered a way up: "I have more understanding than all my teachers, for your testimonies are my meditation" (verse 99). Indeed, the fear of the Lord is the beginning of wisdom (Proverbs 9:10), and of knowledge (Proverbs 1:7). Why is the fear of the Lord essential? Why is it the beginning of wisdom and knowledge? Because without it, science is impossible. The Lord is the source of the morality, integrity, and wisdom needed to even hope for a clear scientific understanding about any subject or a philosophy of anything. Atheists may do science, but they cannot justify what they do. When they assume the world is rational, approachable, and understandable, they plagiarize Judeo-Christian presuppositions about the nature of reality and the moral need to seek the truth. As an exercise, try generating a philosophy of science from hydrogen coming out of the big bang. It cannot be done. It's impossible even in principle, because philosophy and science presuppose concepts that are not composed of particles and forces.
They refer to ideas that must be true, universal, necessary and certain. It's time science gets back to the beginning of wisdom. You can help by rapping a scientist's knuckles every time he steals from the Christian smorgasbord of presuppositions. While bandaging his knuckles, encourage him with the upside of a scientific revolution based on the Bible: it makes genuine scientific knowledge, if not exhaustive, at least possible. Next headline on: Philosophy of Science, Mind and Brain, Origin of Life, Politics and Ethics.

Habitable Zones Constrained by Tides
Feb 26, 2011: The idea of a circumstellar habitable zone (a radial range around a star where an earth-like planet could support life) may be too simplistic. Science Daily reported that "tides can render the so-called habitable zone around low-mass stars uninhabitable." Astronomers at the Astrophysical Institute Potsdam studied the effects of tides on planets around low-mass stars (the most numerous stars in the galaxy) and found that the lack of seasons, the increased heat (and volcanism) and synchronous rotation make them uncomfortable at best, and perhaps uninhabitable. "I think that the chances for life existing on exoplanets in the traditional habitable zone around low-mass stars are pretty bleak, when considering tidal effects," lead researcher Rene Heller remarked. "If you want to find a second Earth, it seems that you need to look for a second Sun."

So far we have narrowed the habitable zone to:
- Galactic Habitable Zone, where a star must be located (09/29/2009);
- Circumstellar Habitable Zone, the right radius from the star (10/08/2010);
- Continuously Habitable Zone, because too much variety can be lethal (07/21/2007);
- Temporal Habitable Zone, because habitable zones do not last forever (10/27/2008);
- Chemical and Thermodynamic Habitable Zone, where water can be liquid (12/30/2003);
- Ultraviolet Habitable Zone, free from deadly radiation (08/15/2006);
- Tidal Habitable Zone, which rules out most stars that are small (02/26/2011).
Other constraints are bound to be realized from time to time, emphasizing the rarity of the sweet spot we inhabit. This would be, of course, predicted from the Architect's message that he formed the Earth to be inhabited (Isaiah 45:18), but today's scientists have a bad habit of ruling out Architects from their master plan. Make a good habit of studying the Architect's plans whenever starting life construction on our habitable planet. Next headline on: Stars and Astronomy, Origin of Life.

Evolutionists Turn Misses into Wins
Feb 25, 2011: Evolutionists have evolved a skill by design: the ability to turn falsification into confirmation. It's a kind of philosophical judo, or parry, that can turn the energy of a criticism into a win for Darwin. Darwinians appear very adept at turning criticism into praise. Whether this neat trick justifies evolution as a scientific theory is a different question. Does it really lead to deeper understanding of evolution, or is it sophistry?

- Convergent turnarounds: A good example of an evolutionary parry can be seen in a post entitled "Homoplasy: A Good Thread to Pull to Understand the Evolutionary Ball of Yarn." Homoplasy is a jargon term for convergent evolution, the idea that unrelated organisms can converge on the same solution to a problem via evolution.
Three evolutionists funded by the National Science Foundation came up with these whoppers: "The authors provide many fascinating examples of homoplasy, including different species of salamanders that independently, through evolution, increased their body-length by increasing the lengths of individual vertebrae. By contrast, most species grow longer by adding vertebrae through evolution." If evolving eyes one time is spectacularly hard for a random process, it would seem that multiple independent cases would falsify evolution big time. Instead, these authors, with taxpayer funding, decided that the damaging evidence was really a triumph for Darwin: "These kinds of examples of genetic and developmental biology help scientists elucidate relationships between organisms, as well as develop a fuller picture of their evolutionary history. The authors also explain how petals in flowers have evolved on six separate occasions in different plants. A particularly striking example of homoplasy cited by the authors is the evolution of eyes, which evolved many times in different groups of organisms, from invertebrates to mammals, all of which share an identical genetic code for their eyes."

- Victory in defeat: Even when admitting mistakes, evolutionists are never ready to give up on their theory. An example of unfeigned faith is seen in Live Science, where reporter Natalie Wolchover told how two headline-making fossils touted by their discoverers as human ancestors have turned out to have nothing to do with humans: they're probably just non-hominin ape bones. Readers might recall how headlines blared in recent years that Orrorin tugenensis, Sahelanthropus tchadensis and Ardipithecus ramidus were shedding light on human evolution (03/05/2004, 10/02/2009, 11/25/2009). Now that Bernard Wood and Terry Harrison have debunked these claims (02/16/2011), is evolution in trouble? Not according to them: "Skepticism regarding these famous primate fossil finds seems to call into question the rigor of the scientific process within the field of paleoanthropology. Wood and Harrison's paper certainly makes one wonder: Are these isolated incidents of misinterpretation followed by media hype, or does the problem pervade the whole branch of science? Is the human evolutionary fossil record a crapshoot?" Harrison's firm response to Wolchover's worries recalls the cover story of National Geographic in Nov 2004, "Was Darwin Wrong?", with its confident NO inside (see 10/24/2004 and the resulting letters to the editor, 02/15/2005). "No," said Harrison. "There are reasons why this branch of science may seem messier than most," he said, but all things considered, "it is doing extremely well."

- Polygamy games: A particularly bizarre twist on evolutionary parrying was reported in PhysOrg about Mormon history: "Polygamy hurt 19th century Mormon wives' evolutionary fitness." After stating that fitness of sister wives decreased in polygamous households (measured by number of children produced), the researchers at Indiana University were left with the conundrum of why evolution would produce polygamy in the first place, whether among human beings or bacteria. Michael Wade was ready with a ring buoy for Darwin: "So if polygamy (or the female equivalent, polyandry) is disadvantageous to most of the sequestered sex and most of the mate-sequestering sex, why should such systems survive?" Aside from equating Mormons to fruit flies, Wade seems to have just said that natural selection can drag a species away from increased fitness. WWDD?
What would Darwin do about that idea? "The complete answer is still forthcoming," Wade said. "One thing we know now, based on rigorous studies in many species, particularly the fruit fly, is that selection can be so strong on males that it can drag the entire species off of a naturally selected viability optimum." It's sophistry. See? Our commentaries are not always verbose. How strong is natural selection? Darwinists may be deluding themselves, a Canadian evolutionist said in the 02/16/2005 entry; in fact, the AAAS president almost called some of his fellow scientists insane (02/11/2005). Aside: Apparently the irony of #1's headline was lost on the reporters: "...the Evolutionary Ball of Yarn." Apt description. Next headline on: Darwin and Evolution.

Racial Evolution Education Proposed
Feb 24, 2011: Skin color provides a handy tool for teaching evolution, says an anthropologist at Penn State. PhysOrg reported that professor Nina Jablonski believes "the mechanism of evolution can be completely understood from skin color." She proposes using the easily-observed trait in humans to teach evolution to students. "People are really socially aware of skin color, intensely self-conscious about it," she told the American Association for the Advancement of Science. "The nice thing about skin color is that we can teach the principles of evolution using an example on our own bodies and relieve a lot of social stress about personal skin color at the same time." PhysOrg did not elaborate on how evolutionary theory would relieve stress about skin color. It is typical of Darwinists to try to prove their theory with simple examples of horizontal variation that are not controversial, then extrapolate the examples to say brains evolved from a primordial soup. Perhaps professor Jablonski should take note of the fact that young-earth creationist Ken Ham uses Scripture and science to explain the human races (actually, just variations on the single human race) from a Biblical viewpoint (see AiG), and also shows the disastrous history of racial politics in the wake of Darwinian thought (AiG). Next headline on: Darwin and Evolution, Politics and Ethics.

Busted! Planet-Making Theories Don't Fit Extrasolar Planets
Feb 23, 2011: Famed planet-hunter Geoff Marcy is giving theorists headaches. The leading theories of planet formation won't stand up to observations of hundreds of planets we know. In National Geographic News, reporter Richard Lovett lamented, "The more new planets we find, the less we seem to know about how planetary systems are born, according to a leading planet hunter." We cannot apply theories that fit our solar system to other systems: "In theory, other stars with planets should have gotten similar starts." But according to Marcy, theory has implications not borne out in reality. Specifically, planetary orbits should be circular, but many extrasolar planets have elliptical orbits. Everything should orbit in the same plane and direction, but many have highly inclined or even retrograde orbits; "orbital inclinations are all over the map," Marcy said. And Neptune-sized planets should be rare, since models of our water giants require highly unusual starting conditions; there are too many out there, Marcy noted. "Theory has struck out," he told the American Astronomical Society last month. His critics complained that modeling is complicated and difficult. Hal Levison said that simplifying them leads to "crappy models."
Marcy thought that without taking into account planetary interactions, future discoveries, as they multiply, will give the theoreticians yet more reasons to tear out their hair. For more on Geoff Marcy, see 02/02/2011. Maier's Law says, "If the facts do not conform to the theory, they must be disposed of" (see corollaries, right sidebar). A science that cannot fit observations to theory does not win the honor of being called a science. It may be a job, a profession, an avocation, a hobby; but to be a science, there should be some concordance between theory and observations. Has planetary cosmogony done any better than alchemy yet? Let them play, but come back later when they have something. One theory they never consider is the top-down theory: that planets were created with stars, but have been fragmenting and interacting since then. This theory has the advantage of fitting the observations of providential fine-tuning for our own solar system.

New Cambrian Fossil: Missing Link?
Feb 23, 2011: A weird animal from Chinese Cambrian strata looks like a worm with legs, the whole body studded with spines. Was it on the way to becoming an arthropod? The authors think so, but other members of its group were already known from the Cambrian fossil record. The "walking cactus" with ten pairs of legs was named Diania cactiformis by the discoverers from China and Germany, publishing in Nature.1 Several science news outlets discussed it briefly, and National Geographic News included an artist's conception. Nature said it was already "derived" (advanced) on the arthropod lineage. The editors' summary stated, "The possession of what seem to be the beginnings of robust, jointed and spiny legs suggest that this bizarre animal might be very close to the origins of the arthropods." This was based on phylogenetic analysis, though, not on dating or genetics. It seems similar to other creatures known as Lobopodia, "a group of poorly understood animals" according to Wikipedia, which evolutionists feel might be ancestral to both onychophorans and arthropods; however, precise classification is still in flux. As for its place in the Cambrian explosion, National Geographic said, "It would have lived about 500 million years ago during a period of rapid evolution called the Cambrian explosion." It was not, therefore, a missing link leading up to the explosion. The authors in Nature said, "How close Cambrian lobopodians are to the ground plan of the arthropod common ancestor remains a point of debate," and as for its ancestry to arthropods, admitted, "Our new fossils cannot resolve this question in its entirety, but they do demonstrate that appendage morphology was more diverse among Cambrian lobopodians than is sometimes realized." They emphasized that, "to our knowledge, Diania has the most robust and arthropod-like limbs found in any lobopodian" until now, but added doubt by saying, "However, we should caution that dinocaridids, Diania and other potential stem-arthropods typically express mosaics of arthropod-like characters, which makes resolving a single, simple tree of arthropod origins problematic." In fact, their own phylogenetic analysis of Diania put it in a surprising position in the evolutionary tree. They entertained the option that it might represent a secondary reduction of more advanced animals like the large predator Anomalocaris; whatever it was, all could agree it was a highly unusual creature.

1. Liu, Steiner et al., "An armoured Cambrian lobopodian from China with arthropod-like appendages," Nature 470 (24 February 2011), pp. 526–530, doi:10.1038/nature09704.
It was a highly unusual creature among many highly unusual creatures, to the extent that the unusual was usual. Simultaneous diversity and morphological disparity is not evolution. Diania is no more advanced or primitive than any of the many other animal body plans from the Cambrian explosion, so this fossil is not going to help solve the evolutionists' magic act, especially with vertebrates already present in the early Cambrian (01/30/2003). Remember, it's what's inside that counts. This creature may have looked primitive through Darwinian eyes, but it had the ability to move its limbs, detect food, eat, digest, and reproduce its body. Such things do not happen without a body plan. How to get a genetic code by chance: Current Biology made it sound so simple in the 02/19/2004 entry. What could be simpler than a "frozen accident"? Next headline on: Darwin and Evolution.

Is Star Formation Understood?
Feb 22, 2011: Astronomers often speak with apparent confidence about regions of active star formation in nebulae or galaxies. A look at the fine print, however, shows plenty of wiggle room when observations don't quite match theory. We can't even understand our own universe, but some astronomers are talking about imaginary universes. Amanda Gefter at New Scientist, for instance, gave a positive review of Brian Greene's new book The Hidden Reality, which purports to give a tour of the multiverse. She described the book as "arcane yet exciting physics, wrapped up in effortless prose." The multiverse concept has become fashionable, she said, even though critics deride it as untestable metaphysics. Even Greene called it "a battleground for the very soul of science."

- Flocculent anomalies: Astronomers expected more star forming regions in one of the flocculent spiral galaxies (spirals without large arms), NGC 2841. But when the Hubble Space Telescope took its picture, Science Daily said it currently has "a relatively low star formation rate compared to other spirals." Several revelations in the next paragraphs indicated astronomers are not so confident about star formation: "Star formation is one of the most important processes in shaping the Universe; it plays a pivotal role in the evolution of galaxies and it is also in the earliest stages of star formation that planetary systems first appear. Yet there is still much that astronomers don't understand, such as how do the properties of stellar nurseries vary according to the composition and density of the gas present, and what triggers star formation in the first place? The driving force behind star formation is particularly unclear for a type of galaxy called a flocculent spiral, such as NGC 2841 shown here, which features short spiral arms rather than prominent and well-defined galactic limbs." It would seem that if astronomers don't understand what triggers star formation, or how it varies from place to place, they don't understand it very much at all.

- Dark matter anomalies: Dark matter sometimes appears like a kind of cosmic flubber, an unknown quantity that is useful in various amounts (sometimes none at all) when theories need fixing. Take this article from Science Daily about recent results from the Herschel Space Telescope as an example: "Most of the mass of any galaxy is expected to be dark matter, a hypothetical substance that has yet to be detected but which astronomers believe must exist to provide sufficient gravity to prevent galaxies ripping themselves apart as they rotate...."
And yet with these uncertainties, the article was confident that the star formation rate in this starburst galaxy had hit a "sweet spot" for star formation, even though the opening paragraph was puzzled that the region observed appeared too small for such luck: "The size challenges current theory that predicts a galaxy has to be more than ten times larger, 5000 billion solar masses, to be able form [sic] large numbers of stars. 'Herschel is showing us that we don't need quite so much dark matter as we thought to trigger a starburst,' says Asantha Cooray, University of California, Irvine.... Analysis of the brightness of the patches in the ... images has shown that the star-formation rate in the distant infrared galaxies is 3-5 times higher than previously inferred from visible-wavelength observations of similar, very young galaxies by the Hubble Space Telescope and other telescopes."

- Computers as alternate reality: Meanwhile, in the computer center at Heidelberg University, astronomers concluded that "the first stars in the universe were not as solitary as previously thought." According to PhysOrg, whatever they programmed into their computer models was a blockbuster: it cast "an entirely new light on the formation of the first stars after the Big Bang." The next sentences described in graphic detail how a star is born. Given the uncertainties in the first two entries, however, it appears their computer universe was a figment of the programmers' imagination rather than a finding about nature. The article contained at least six instances of "may have," "could have" and other speculations: e.g., "It is also conceivable that some of the first stars may have been catapulted out of their birth group through collisions with their neighbours before they were able to accumulate a great deal of mass."

Whoever wins the battle should be able to do the Macarena to make Gefter happy. That's what Brian Greene did in front of an audience, she described, as he pondered a hypothetical holographic universe that made him literally dance for joy at the thought. Gefter also was enraptured by the possibility that reality is not what it seems. While speculating about hidden realities that are not what they seem, might as well go all out: "Greene doesn't shy away from important nuances or profound philosophical questions," she ended, winking, "I suspect that this will be a hugely popular book in this universe and many, many others." See the 04/11/2009 entry for Gefter's previous reaction to cosmologists' speculations about imaginary universes.

OK, let's take stock. We've got star formation, about which we don't understand how it gets triggered or why it varies from place to place, but somehow dark matter flubber has something to do with it, an ingredient that uses Skinner's Constant.* However it works, star formation, which no one has watched, bursts forth in galaxies 1/10 the size they should be for it to burst in, producing sociable stars after the Big Bang, in computers at least, contrary to expectations. Are we still in the science lab? (cf. 04/13/2007). Astronomers are very smart people in terms of their ability to speak jargon and manipulate equations. Whether they have a grasp on reality is a very different matter (01/15/2008), a dark matter of a different sort. It's worth rereading Prophet Berman's sermons every once in a while to avoid being swept up into the cosmic euphoria that ensnared poor Amanda Gefter in Brian Greene's fantasy bladderwort (02/17/2011).
Next headline on: Stars and Astronomy, Philosophy of Science.

*Skinner's Constant: That quantity which, when added to, subtracted from, multiplied by or divided by the answer you got, gives you the answer you should have gotten.

A photo from Messenger (see PhysOrg; also APOD) shows the solar system from the inside out. It's a nice complement to the historic Voyager image from the outside in (APOD). These actual photos underscore how tiny the planets are relative to their distances from the sun.

Human Genome Project Supports Adam, Not Darwin
Feb 21, 2011: Science magazine last week had a special series of articles on the 10-year anniversary of the Human Genome Project. Most of the articles expanded on how different the findings were from predictions. The publication of the genome did not identify our evolution; it did not lead to miracle cures. What it did most of all was upset apple carts, and show just how complex the library of information behind our smiling faces really is. A couple of excerpts are characteristic. John Mattick of the University of Queensland commented about how "The Genomic Foundation is Shifting" in his brief essay for Science.1 "For me," he began, "the most important outcome of the human genome project has been to expose the fallacy that most genetic information is expressed as proteins." He spoke of the Central Dogma of genetics: the principle that DNA is the master controller of heredity, translating its information into proteins that create our bodies and brains. For one thing, the number of genes is far smaller than expected (only 1.5% of human DNA contains genes), and is overwhelmed by non-coding DNA (earlier assumed to be genetic junk) that generates RNA that regulates the expression of genes, especially during development. The histone code and other revelations have generated aftershocks to the initial tremor that undermined the Central Dogma. He concluded, "These observations suggest that we need to reassess the underlying genetic orthodoxy, which is deeply ingrained and has been given superficial reprieve by uncritically accepted assumptions about the nature and power of combinatorial control. As Nobel laureate Barbara McClintock wrote in 1950: Are we letting a philosophy of the [protein-coding] gene control [our] reasoning? What, then, is the philosophy of the gene? Is it a valid philosophy? … There is an alternative: Human complexity has been built on a massive expansion of genomic regulatory sequences, most of which are transacted by RNAs that use generic protein infrastructure and control the epigenetic mechanisms underpinning embryogenesis and brain function. I see the human genome not simply as providing detail, but more importantly, as the beginning of a conceptual enlightenment in biology." In another essay in the 18 February issue of Science,2 Maynard Olson [U of Washington, Seattle] asked, "What Does a Normal Human Genome Look Like?" Olson did not wish to get embroiled in old debates about nature vs. nurture, other than to acknowledge that they still exist despite the publication of the human genome. Instead, he asked what factors are minor players in human variation. One of them, he said, in a statement that might have raised Darwin's eyebrows, is balancing selection, "the evolutionary process that favors genetic diversification rather than the fixation of a single best variant"; instead, he continued, this "appears to play a minor role outside the immune system."
Another also-ran is the variation we most often notice in people: "Local adaptation, which accounts for variation in traits such as pigmentation, dietary specialization, and susceptibility to particular pathogens," is also a second-tier player. The primary factor is another eyebrow-raiser for Darwinists: "What is on the top tier? Increasingly, the answer appears to be mutations that are deleterious by biochemical or standard evolutionary criteria. These mutations, as has long been appreciated, overwhelmingly make up the most abundant form of nonneutral variation in all genomes." A model for human genetic individuality is emerging in which there actually is a wild-type human genome, one in which most genes exist in an evolutionarily optimized form. There just are no wild-type humans: "We each fall short of this Platonic ideal in our own distinctive ways."

1. John Mattick, "The Genomic Foundation is Shifting," Science, 18 February 2011: Vol. 331 no. 6019, p. 874, DOI: 10.1126/science.1203703.
2. Maynard V. Olson, "What Does a Normal Human Genome Look Like?", Science, 18 February 2011: Vol. 331 no. 6019, p. 872, DOI: 10.1126/science.1203236.

Did you catch that? These are phenomenal admissions in a secular science journal. Mattick showed how many ways the evolutionary geneticists were wrong. They expected to find the secret of our humanness in DNA, the master controller, honed by evolution, that made us what we are. Instead, they were astonished to find complexity in a vast array of regulatory sequences beyond the genes (epigenetic, "above the gene"), including codes upon codes. They appear to make DNA just a sideshow in a much more complex story that will require a "conceptual enlightenment" in biology. This implies that pre-Human Genome biology was unenlightened. By quoting McClintock's prescient questions, he declared that the philosophy of biology that has ruled the 19th and 20th centuries is invalid.

Olson's revelations are even more shocking, and, in a way, delightful for those who believe that the Bible, not Darwin, tells where man came from. Olson essentially said that Darwinists should pack up and go home, because the factors that they have counted on to explain human complexity are minor players. Then he said that most mutations are harmful, bad, deleterious, regressive, plaguing each individual person. For the coup de grâce, he said that there seems to be a Platonic ideal of the human makeup ("wild-type" referring to natural) from which we all fall short. This is the opposite of Darwinian evolutionary ascent from slime; it is descent with modification downward from an initial ideal state. Biblical creationists will shout Amen: we have all fallen from Adam! Paul the Apostle explained in the classic statement about Adam that the first man was the wild type after which things went terribly wrong when he sinned: "Therefore, just as sin came into the world through one man, and death through sin, and so death spread to all men because all sinned, for sin indeed was in the world before the law was given, but sin is not counted where there is no law. Yet death reigned from Adam to Moses, even over those whose sinning was not like the transgression of Adam, who was a type [i.e., wild type, Platonic ideal in real human flesh] of the one who was to come" (Romans 5:12-14). Isn't that exactly what we see around us?
Not to leave us in despair, Paul continued with the joyful good news about the second Adam, Jesus Christ, who by solving the sin problem through his death and resurrection, became the progenitor of all who could become righteous and inherit eternal life: "But the free gift is not like the trespass. For if many died through one man's trespass, much more have the grace of God and the free gift by the grace of that one man Jesus Christ abounded for many. And the free gift is not like the result of that one man's sin. For the judgment following one trespass brought condemnation, but the free gift following many trespasses brought justification. For if, because of one man's trespass, death reigned through that one man, much more will those who receive the abundance of grace and the free gift of righteousness reign in life through the one man Jesus Christ." To be sure, Mattick and Olson were probably not intending to agree with the Bible in their revelations about the Human Genome, but everything they said is consistent with Scriptural teaching, and is not consistent with what the Darwinists teach. Their expectations have been falsified; their philosophy has been found wanting. The Bible had it right all along! If you are fallen from the ideal of Adam, Jesus Christ (not Darwin, not Plato) provides the pathway to a return to the Maker's ideal. It is a gift, through faith, thanks to the grace of God in Jesus Christ. Paul, an early persecutor of Christians, who was transformed by seeing the risen Christ on the Damascus road, speaks to us all today: "We urge you on behalf of Christ, be reconciled to God" (II Corinthians 5:14-21). "Therefore, as one trespass led to condemnation for all men, so one act of righteousness leads to justification and life for all men. For as by the one man's disobedience the many were made sinners, so by the one man's obedience the many will be made righteous. Now the law came in to increase the trespass, but where sin increased, grace abounded all the more, so that, as sin reigned in death, grace also might reign through righteousness leading to eternal life through Jesus Christ our Lord." Next headline on: Philosophy of Science, Bible and Theology.

Scientists Are Studying Your Garden for Ideas
Feb 20, 2011: Your garden plants are visited by a butterfly and various insects as you sip tea in a lawn chair. Did you have any idea that inventors are watching the same things with an eye to making money? Or that military officers are getting ideas from the garden to use against the enemy? Biomimetics, the imitation of nature's designs, is on a roll, because some of the best design ideas are right in your yard. Chang Liu had one of the best recent summations of why biomimetics is such a hot trend: "Using a bio-inspired approach is really important," he said. "Nature has a lot of wonderful examples that can challenge us. No matter how good some of our technology is, we still can't do some of the basic things that nature can. Nature holds the secret for the next technology breakthrough and disruptive innovation. We are on a mission to find it."

- It's a bird; it's a plane: Watch the video clip of Matt Keennon's ornithopter. It's a robotic mimic of a real hummingbird: size, shape, wings and all. Like the real thing, it can hover and move in all directions. The military wants to use such devices as spybots to enter buildings with tiny cameras.
The hummingbird makes it look easy: Manager of the project, Matt Keennon, said it had been a challenge to design and build the spybot because it pushes the limitations of...

- Moving plants: Schoolkids are often delighted with touching the leaves of the sensitive plant, Mimosa, and watching how they instantly fold up. It turns out that University of Michigan researchers are sensitive, too: they are leading studies of moving plants that are inspiring "a new class of adaptive structures designed to twist, bend, stiffen and even heal themselves." Where could these efforts lead? When this technology matures, [Kon-Well] Wang said, it could enable "robots that change shape like elephant trunks or snakes to maneuver under a bridge or through a tunnel, but then turn rigid to grab a hold of something," the article ended. "It also could lead to morphing wings that would allow airplanes to behave more like birds, changing their wing shape and stiffness in response to their environment or the task at hand."

- Solar plants: What uses sunlight better than a leaf? Penn State researchers are trying to copy photosynthesis, reported PhysOrg, in order to make efficient fuels. "Inexpensive hydrogen for automotive or jet fuel may be possible by mimicking photosynthesis," the article said, "...but a number of problems need to be solved first." Thomas Mallouk at the university has only achieved 2-3% hydrogen so far. He needs to aim for 100%. His team is trying to figure out how to handle the "wrecking ball" of oxygen produced by his experimental solar cells, and how to channel electrons so they stop recombining. Plants make it look so easy.

- Hear thee: Chang Liu at Northwestern is fascinated by the hair cells of the inner ear. Like many researchers with the biomimetics bug, he is using insights from nature as inspiration for both touch and flow sensors, areas that currently lack good sensors for recording and communicating the senses. He's not all ears: "Hair cells provide a variety of sensing abilities for different animals: they help humans hear, they help insects detect vibration, and they form the lateral line system that allows fish to sense the flow of water around them." This multi-application potential of nature's design particularly impressed him: "The hair cell is interesting because biology uses this same fundamental structure to serve a variety of purposes," Liu said. "This differs from how engineers typically design sensors, which are often used for a specific task." Synthetic hair cells might be useful for anything from robots to heart catheters.

- Fly me a computer: Last week's Science (Feb 11) had an article by Jeffrey Kephart about "Learning from Nature" to build better computer networks.1 What, in nature, did he have in mind? Fruit flies. "Studying the development of a fruit fly's sensory bristles provided insight into developing a more practical algorithm for organizing networked computers," the caption said on a photo of the little bug's bristly head. Kephart explained that biomimetics has a long history. "The tradition of biologically inspired computing extends back more than half a century to the original musings of Alan Turing about artificial intelligence and John von Neumann's early work on self-replicating cellular automata in the 1940s," he noted. "Since then, computer scientists have frequently turned to biological processes for inspiration. Indeed, the names of major subfields of computer science, such as artificial neural networks, genetic algorithms, and evolutionary computation, attest to the influence of biological analogies."
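The fruit-fly result Kephart highlights concerns how a developing fly singles out well-spaced sensory bristle cells: each cell that commits to becoming a bristle suppresses its immediate neighbours. Purely as an illustration of that pick-well-spaced-leaders idea applied to a network of machines (a hypothetical sketch, not the algorithm from the Science paper; the function name, graph representation and probability parameter are invented here), the flavor can be captured in a few lines of Python:

```python
import random

def select_hubs(adjacency, p=0.3, seed=0):
    """Toy lateral-inhibition-style hub selection on an undirected graph.

    adjacency maps each node to the set of its neighbours. The returned
    set of 'hubs' contains no two adjacent nodes, and every non-hub ends
    up next to at least one hub, mimicking evenly spaced bristle cells.
    """
    rng = random.Random(seed)
    undecided = set(adjacency)
    hubs = set()
    while undecided:
        # Each undecided node nominates itself with probability p.
        firing = {n for n in undecided if rng.random() < p}
        for n in firing:
            # A nominee wins only if no neighbouring nominee fired this round.
            if not (firing & adjacency[n]):
                hubs.add(n)
        # Winners and their neighbours drop out (they are "inhibited").
        undecided -= hubs | {m for h in hubs for m in adjacency[h]}
    return hubs

# Example: a ring of six machines; the result is a spaced-out subset of hubs.
ring = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
print(select_hubs(ring))
```

Nothing in the sketch requires a central coordinator, which is the usual attraction of biologically inspired schemes of this kind.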
(Note: evolutionary computation, mentioned in the quote above, is a form of intelligent design, in which a scientist or computer selects outcomes from randomly varying inputs according to purpose-driven goals.)

- Crawl me a network: Speaking of IT, ants are inspiring new ideas for computer networks. According to PhysOrg, "ants are able to connect multiple sites in the shortest possible way, and in doing so, create efficient transport networks," scientists at the University of Sydney are finding. Even without leaders, they solve this complex problem by making many trails and pruning them back to the best ones. Ants are not the only inspiration for the next generation of networks: "The findings shed light on how other simple natural systems without leaders or even brains, such as fungi, slime molds and mammalian vascular systems, are able to form efficient networks, and can help humans design artificial networks in situations lacking central control," Dr. Tanya Latty said.

- Firefly probe: Science Daily told how scientists at Lawrence Berkeley National Labs have made a probe of hydrogen peroxide levels in mice based on the chemical that makes fireflies glow: luciferin. Their device seems reminiscent of Doc Bones' hovering probe that could detect problems non-invasively: the new probe enables researchers to "monitor hydrogen peroxide levels in mice and thereby track the progression of infectious diseases or cancerous tumors without harming the animals or even having to shave their fur." How did Christopher Chang come up with this neat idea? "The fact that in nature fireflies use the luciferin enzyme to communicate by light inspired us to adapt this same strategy for pre-clinical diagnostics," he said. Their PCL-1 probe has already passed a milestone and has found that hydrogen peroxide, nature's disinfectant, is continuously made even in a healthy body. Now they are working to improve the sensitivity of the probe.

- Roach model: Hopefully your garden experience was not interrupted by seeing a cockroach in the kitchen when getting your tea out of the refrigerator. Even so, Israeli scientists at Tel Aviv University are finding things to admire in the beasts. As one report said, "Ask anyone who has ever tried to squash a skittering cockroach: they're masters of quick and precise movement. That's why Tel Aviv University is using their maddening locomotive skills to improve robotic technology too." While we're getting grossed out with bugs, the article added, "Cockroaches are not the only insects that have captured the scientific imagination. Projects that highlight both the flight of the locust and the crawling of the soft-bodied caterpillar are also underway." Good. Get them out of the house and yard and give them to the scientists.

1. Jeffrey O. Kephart, "Computer science: Learning from Nature," Science 331 no. 6018 (11 February 2011), pp. 682-683, DOI: 10.1126/science.1201003.

As stated before, biomimetics provides a breakthrough that can bring scientists together. Evolutionists do not have to worry about how these things evolved, nor waste time and energy making up stories or building their shrines to the Bearded Buddha. Creationists do not have to mention God and risk alienating their colleagues who don't want to hear the design argument for God's existence. Everyone can agree that the designs in nature, however they arose, are fascinating, important, and worth imitating. The public will benefit from the inventions that result.
Last year at this time we presented eight candidates for master's degrees in physics: plants, beetles, human ears, and bacteria among them (02/10/2010). Follow the biomimetic research lead, and pretty soon Eugenie Scott will be out of a job, because all scientists will be marching together away from Darwinland and into the promised land of nature-inspired technology, talking design without any need for help from those who already knew intelligent design is the inference to the best explanation. The ranks of the Darwinists will shrink by attrition. Why? Nobody will be looking to them for answers (re: stories), when practical science based on design is winning the hearts and minds of everyone. Books and lectures on garden-variety intelligent design will, by then, seem perfectly natural.

Titan's Methane Lakes Shallow, Dynamic
Feb 19, 2011: Strange things are happening on Titan, Saturn's largest moon: lakes are appearing and disappearing. This can only mean that the lakes are shallow and the liquid hydrocarbons in them are moving around. Lakes were discovered a few years ago in the northern regions of the Mercury-size moon. They consist predominantly of methane (CH4) and ethane (C2H6). Another large lake called Ontario Lacus (Lake Ontario, because of its similarity to Earth's counterpart) was discovered near the south pole. Then, in Oct 2004, new dark areas appeared in Arrakis Planitia near the south after a presumed cloudburst of liquid methane; the lakes in this area have also shrunk considerably in 44 months between observations. A new paper in Icarus1 presented observations in visible light, infrared and radar covering the period 2004-2009. They indicate that Ontario has been shrinking rapidly between 2005 and 2009: the southwest shoreline has retreated by 9-11 kilometers (5.5 to 7 miles).2 Though estimates are difficult due to the distance and resolution of some measurements, the authors' best guess is that "The observed retreat represents a decrease in area of ~500 km2 over almost 4 years." Estimating volume loss is more difficult. While impossible to calculate Ontario's volume loss directly, they estimated how much Arrakis gained and lost as a proxy. Based on estimates of methane-carrying capacity of the 2004 cloud system (about a million square kilometers), the cloudburst must have dropped 2.4 to 14 cm of methane rain into the Arrakis basin (upper limit 4.2 m). This yields estimates that between 24 and 140 km3 of liquid was lost at Arrakis in 4 years from a combination of evaporation and infiltration; probably similar amounts at Ontario. There are clues that the lake bottoms might be impermeable. The northern lakes are in Titan's spring and have not shrunk between observations. Earlier estimates expected one meter of seepage into the interior per year. The rapid shrinkage at Arrakis and Ontario over a timescale of several months strongly suggests either a shallow impermeable layer or that the local methane table lies close to the surface. It will be interesting to watch what the methane cycle does to the southern and northern lakes as the seasons change and more sunlight hits the north. One other interesting observation was that the exposed lake bottom is not dark, as might be expected from sedimentation of hydrocarbons. Either wave action cleansed the bottom as the shoreline retreated, or any sediments are light colored. The authors favor the latter, saying that bright organic condensates may be deposited within the lakes and exposed as the liquid level drops (Barnes et al., 2009).
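As a quick arithmetic check of the figures just quoted (nothing below goes beyond the numbers in the paragraph above), the reported rainfall depths spread over the stated cloud area do reproduce the 24-140 km3 range:

```python
# Check: 2.4-14 cm of methane rain over ~1 million km^2 gives 24-140 km^3.
area_km2 = 1.0e6                  # methane-carrying cloud system, ~10^6 km^2
for depth_cm in (2.4, 14.0):      # reported rainfall depths
    depth_km = depth_cm / 1.0e5   # centimetres to kilometres
    volume_km3 = depth_km * area_km2
    print(f"{depth_cm:5.1f} cm of rain -> {volume_km3:.0f} km^3 of liquid")
```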
The bright-sediment interpretation is strengthened by the fact that the Cassini orbiter's cameras saw numerous dark features in the south in 2004-2005, but light material as Ontario retreated. It is not possible to know from albedo (reflected brightness) alone the composition of the bright sediments.

1. E.P. Turtle, J.E. Perry, A.G. Hayes, A.S. McEwen, "Shoreline Retreat at Titan's Ontario Lacus and Arrakis Planitia from Cassini Imaging Science Subsystem Observations" (accepted manuscript; final pending), Icarus, Feb 2011, S0019-1035(11)00054-6, DOI: 10.1016/j.icarus.2011.02.005.
2. Highest resolution was possible at the southern parts of the lake; more uncertainty exists at the northern boundaries. Radar altimetry suggests that the southwest shore has a gradual slope, while the eastern shore is steep.

The study of Titan is a work in progress, so any conclusions drawn at this time are subject to revision as more data come in. We can, however, step back and consider what planetary scientists expected to find and what they have found so far. In the decades after the Voyager visits (1981), when scientists realized an irreversible erosion of atmospheric methane was precipitating hydrocarbons onto the surface (especially ethane, which has no way to get back into the atmosphere), scientists expected to find, over the course of 4.5 billion years, an accumulation of half a kilometer or more of liquid ethane in a global ocean. That was a clear prediction that has been spectacularly falsified by Cassini observations (see list of previous articles). In fact, the Huygens probe was designed to float on that ocean that failed to materialize. Instead, we found Titan to have paltry accumulations of liquid in scattered lakes near the poles, while the equatorial regions are largely covered in icy sand dunes. Now we are learning that the polar lakes are probably shallow, could have impermeable bottoms, and move around so rapidly that they don't deposit sediment on the lake floors (or else they deposit bright sediments). But if the sediments are bright, which would be surprising in itself, is there enough sediment to account for 4.5 billion years of deposition? In addition, Titan, the largest moon with the greatest gravitational attraction, has few craters (three to five) after all that time. You have to ask yourself whether it is credible these processes have been going on for billions of years. Did 4.5 billion years ever exist? Is it a fiction? In order to save the blessed timescale so precious to planetary scientists (because Darwin depends on it), all kinds of evidence-free theory-rescue devices are being rigged: maybe the ethane seeped into the interior where no one can find it; maybe the interior has a methane reservoir that erupts through cryovolcanoes, replenishing the atmosphere; maybe this, maybe that. If scientists stuck to the observations and drew reasonable conclusions from data alone, they would have to conclude that there are severe upper limits on how long Titan has been acting this way. Let facts be submitted to a candid world.

But Is it Evolution?
Feb 18, 2011: Scientists have been noticing some things that seem contrary to Darwin's predictions, but they give Darwin credit anyway. Gene comparisons underlying tree-of-life stories may have suffered a setback. Nature News reported that "around a fifth of non-primate genome databases seem to be contaminated with human DNA sequences," according to a study. The finding represents a failure of the filter in software that was supposed to weed out contamination.
In a few cases, stretches of more than a thousand human bases were seen in assembled non-primate sequences. The article did not elaborate on what this means for previously-published phylogenetic studies.

- Not till us: The chambered nautilus is a living fossil that uses jet propulsion, New Scientist said, with origins way back in the Cambrian. Has its fitness improved over all that time? "Its movement is ungainly and slow, but it has survived virtually unchanged for at least 450 million years, so it must be doing something right," reporter Michael Marshall remarked. Its relatives the ammonoids dominated the oceans for millions of years before going extinct along with the dinosaurs 65 million years ago, but the nautilus came through that disaster and is still with us today, despite having much simpler brains than other cephalopods. They also have weaker eyes and take longer to mature, and are currently endangered by overfishing. Wouldn't evolution get rid of adaptations that are inferior? "Such jet power is a cumbersome way of getting around the seas, and most modern cephalopods have largely abandoned it," Marshall explained in a personification of negative selection. Despite its primitive way of getting around, however, the nautilus is no mental slouch.

- Debunking neo-Darwinist genetics: According to neo-Darwinism, beneficial genetic mutations become established by selective sweeps in a population. "The selective sweep model was introduced in 1974 and has pretty much been the central model ever since," Molly Przeworski [U of Chicago] said in an article posted by PhysOrg. "It is fair to say that it is the model behind almost every scan for selection done to date, in humans or in other organisms." Unfortunately, the model doesn't fit the DNA. Looking at the human genome in more detail, the article concluded, "The result suggests that classic selective sweeps could not have been the most common cause of these low diversity troughs," leaving the door open for other modes of evolution. Unfortunately again, no other mode was provided: "Phenotypic variation in humans isn't as simple as we thought it would be," [Ryan] Hernandez [UC San Francisco] said. "The idea that human adaptation might proceed by single changes at the amino acid level is quite a nice idea, and it's great that we have a few concrete examples of where that occurred, but it's too simplistic a view...." Przeworski said... "These findings call into question how much more there is to find using the selective sweep approach, and should also make us skeptical of how many of the findings to date will turn out to be validated."

- Mystery of Mysteries: What was Darwin's "mystery of mysteries"? Believe it or not, it was the thing his famous book set out to explain. "Although Charles Darwin titled his book On the Origin of Species, speciation was one thing he could not explain," wrote Bob Holmes. "He called it the mystery of mysteries, and even a century-and-a-half later the mechanism by which two groups of animals become genetically incompatible remains one of the greatest puzzles in biology." That is a surprising statement, because in popular understanding, it was Darwin's mechanism of natural selection that promoted evolution into a scientific theory over earlier speculations about common ancestry. Holmes went on to describe suggestions that a speciation gene named Prdm9 might be evolution's missing X factor to solve the mystery.
It's a rapidly evolving gene, he claimed, a virtual evolutionary sprinter, on the basis of sequence dissimilarities between humans and chimpanzees. After a convoluted tale about how this gene blinks on and off, and incompatible mutants make mice sterile, he proposed an intriguing idea that "variation in this gene could be driving a wedge between different parts of our human population." But alas, the evidence to date seems not to corroborate it. Evidence, however, should never be allowed to get in the way of a good story. One expert he quoted said, "We can speculate that this could be some sort of universal reproductive-isolation gene in animals, which would be beautiful," but, alas again, we shall have to wait, after waiting 150 years already since Darwin, to find out if that turns out to be the case.

- Feathery evolution: Ken Dial, the Montana man who watches the partridge family run up ramps (12/22/2003), got notoriety again in National Geographic's story on the evolution of feathers. Writer Carl Zimmer could never quite figure out if feathers arose for sexual display, or for insulation, or for flight, but they evolved somehow, he is sure. Dinosaurs with imaginary feathers also made the final cut of the Darwinian script. The origin of this wonderful mechanism is "one of evolution's most durable mysteries," Zimmer said. Whatever happened, or why, "there is one natural wonder that just about all of us can see, simply by stepping outside," he teased: "dinosaurs using their feathers to fly." From there he went on to describe the marvelous design: "Airplane wings exploit some of the same aerodynamic tricks. But a bird wing is vastly more sophisticated than anything composed of sheet metal and rivets. From a central feather shaft extends a series of slender barbs, each sprouting smaller barbules, like branches from a bough, lined with tiny hooks. When these grasp on to the hooklets of neighboring barbules, they create a structural network that's featherlight but remarkably strong. When a bird preens its feathers to clean them, the barbs effortlessly separate, then slip back into place." Most people believe airplane wings came from intelligent design, but all Zimmer could propose for the origin of those vastly more sophisticated feathers from simple scales were suggestive analogies. "The long, hollow filaments on theropods posed a puzzle," he said of the barbs on some dinosaur skins that lack the complex interlocking structures of flight feathers. "If they were early feathers, how had they evolved from flat scales? Fortunately, there are theropods with threadlike feathers alive today: baby birds." He then said that reptiles and birds both have tiny patches in their skins called placodes that produce bristles. Did reptile placodes evolve into feathers via a simple switch in the wiring of the genetic commands inside placodes? If so, "Once the first filaments had evolved, only minor modifications would have been required to produce increasingly elaborate feathers." Obviously. Stuff happens all the time in evolution. Voila, said the viola: "In other words, feathers were not merely a variation on a theme: They were using the same genetic instruments to play a whole new kind of music." Unmixing of metaphors is left as an exercise. Complete that exercise before tackling the more difficult assignment: understanding the evolutionary significance of another of Zimmer's evidence-challenged plot lines: "So perhaps the question to ask, say some scientists, is not how birds got their feathers, but how alligators lost theirs."
(Caution: do NOT visualize a magic dragon.) It would seem that if the ancestor of all these animals already had feathers, the origin of feathers has just been pushed back into the unknown. Ken Dial's partridge family (05/01/2006, 01/25/2008) got the final exit pun, complete with an apparition of Haeckel's friendly ghost: "Perhaps, says Dial, the path the chick takes in development retraces the one its lineage followed in evolution, winging it, so to speak, until it finally took wing." So to speak. We could go on and on. This borders on the criminal. Taking data that falsifies evolution and using it to praise Charlie is like election fraud. Unlike Dawkins, though, we will not stoop to calling our opponents ignorant, stupid, insane, or wicked, just deceived. So deceived, in fact, that they cannot even receive the sight to conceive their own deception. The only remedy for the self-deceived is truth given with tough love. Next headline on: Darwin and Evolution.

Young's Law confirmed: By accident, researchers at UCLA seem to have found a cure for baldness, at least in mice. Now they are seeking funding to study it further. See Young's Law.

Plant Accelerates 600 G's
Feb 17, 2011: Among the fastest organisms in the world is a plant. The bladderwort Utricularia, a carnivorous plant that lives in the water, sucks in its prey in a thousandth of a second with an acceleration 600 times the force of gravity. New Scientist and Science Daily reported on work by the University of Freiburg, where scientists filmed the action with a high-speed camera because the motion is too fast to observe with the naked eye. BBC News included a video clip showing the action in slow motion. The remarkable door that acts like a flexible valve "operates by glands in the plant that continually pump out water, creating a depression inside the tiny bladder," the BBC News explained. "When a passing creature stimulates microscopic, super-sensitive hairs, this trapdoor buckles inward and opens, allowing the bladderwort to suck in water and any unsuspecting creature it contains." Science Daily said there are four trigger hairs. The resulting response ranks among the fastest plant movements known so far. The BBC explained the scientists' reaction to this phenomenon: the plant's tiny suction trap was much faster and more efficient than the scientists had predicted. Dr. Philippe Marmottant exclaimed, "The same trap can fire hundreds of times. It is an amazing piece of mechanics." Science Daily explained, "Prey animals are sucked in with an acceleration of up to 600 times that of gravity, leaving them no chance to escape. The door deformation involves a complete inversion of curvature which runs in several distinguishable intermediate steps." Marmottant and the other researchers would like to reverse-engineer this marvel: the plant could provide a template to design miniature medical devices, such as a lab-on-a-chip, which samples tiny amounts of blood that could be used in diagnostic tests. None of the articles speculated on how this high-speed trap mechanism might have evolved, but Science Daily mentioned, "These so-called bladders have fascinated scientists since Darwin's early works on carnivorous plants." It also shielded the question of origin of the bladderwort's amazing design with the indirect, passive-voice statement, "This ultra-fast, complex and at the same time precise and highly repetitive movement is enabled by certain functional-morphological adaptations."
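For scale, the two reported numbers (roughly 600 g sustained for about a thousandth of a second) can be combined in a constant-acceleration idealization. This is only a back-of-the-envelope sketch, not the researchers' model, but it shows the speeds and distances involved are consistent with a millimetre-sized trap:

```python
# Constant-acceleration estimate from the reported figures.
g = 9.81                 # m/s^2
a = 600 * g              # ~600 times gravity
t = 1.0e-3               # ~one thousandth of a second

v = a * t                # speed reached
d = 0.5 * a * t ** 2     # distance travelled in that time

print(f"final speed : {v:.1f} m/s")        # about 5.9 m/s
print(f"distance    : {d * 1000:.1f} mm")  # about 2.9 mm
```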
The wonders of nature should inspire design and lead to appreciation of design not to storytelling about how stuff happens by mistake (01/26/2011). Logic quiz: what do you get when you add mistakes to mistakes, or multiply mistakes by mistakes? Mistakes. What do you get when you add or multiply mistakes to design? Broken designs. Where, then, do good designs come from? Design that minimizes mistakes i.e., intelligent design. Evolutionists hunger and thirst for righteousness at the empty market of natural selection (02/09/2009), and leave empty-handed. Next headline on: New Ediacaran Fossils: Do They Ignite the Cambrian Explosion? Feb 17, 2011 Well-preserved fossils of seaweed-like colonies have been reported from China. They are dated by the scientists at 600 million years old, from the Ediacaran period. Can these be missing links, lighting the fuse of biodiversity that culminated in the Cambrian explosion? summarized the findings published in Nature.1 In addition to perhaps ancient versions of algae and worms, the Lantian biota named for its location included macrofossils with complex and puzzling structures, the article said. In all, scientists identified about 15 different species at the site. Pictures of the seaweed-like fossils show fronds with a distinctive holdfast, like modern seaweed use to cling to the seafloor. The paper in Nature shows pictures of frond-like and tube-like organisms with uncertain phylogenetic affinities, but no clear Cambrian-like body plans. A couple of them, Guy Narbonne speculated in the same issue of are probable ancestors of radial and bilaterian animals. The discoverers mentioned a hopeful case: The axial structure in Types D and E is puzzling and it could represent the digestive structure of worm-like animals, they hinted; In an animal model, the holdfast and stalk of Type D would be alternatively interpreted as the proboscis of an early worm-like organism. The photos are not compelling. Both papers spent much of their space discussing what effects varying levels of oxygen in the oceans might have had on the evolution of life. For instance, To reconcile the conflicting geochemical and palaeontological indicators of palaeoredox conditions, we propose that the Lantian basin was largely anoxic but punctuated by brief oxic episodes, the original paper said, harking back to the jargon of punctuated equilibrium, if that somehow helps evolution. These oxic episodes were opportunistically capitalized on by benthic macroeukaryotes that were subsequently killed and preserved by frequent switch-backs to anoxic conditions. Evolution did not seem to be going in any particular direction toward complex animal life. The paper claims the Lantian biota is probably older than and taxonomically distinct from the Avalon biota, the previous record setter for earliest-known fossil assemblage with macroscopic and morphologically complex life forms at 579-560 million years old. That suggestion, however, does not create any evolutionary linkage between the two independent groups. Even so, neither fossil beds show any of the complex organs seen in Cambrian phyla. The paper calls them multicellular eukaryotes. The focus of the research was not so much on evolution upward and onward from these seaweed-like impressions, but suggests that morphological diversification of macroscopic eukaryotes may have occurred in the early Ediacaran Period, perhaps shortly after the Marinoan glaciation, and that the redox history of Ediacaran oceans was more complex than previously thought. 1. 
Yuan et al., An early Ediacaran assemblage of macroscopic and morphologically differentiated eukaryotes, Nature 470 (17 February 2011), pp. 390–393, doi:10.1038/nature09810. 2. Guy Narbonne, Evolutionary biology: When life got big, Nature 470 (17 February 2011), pp. 339–340, doi:10.1038/470339a. Even with the most generous concessions to Mr. Darwin, these fossils do not help explain the Cambrian Explosion. They are simple frond-like colonies of eukaryotic algae, with no clear differentiation or body plans typical of Cambrian animals. No amount of acquiescence to the evolution-incestuous dating methods can link these imprints with trilobites and vertebrates. That's why they changed the subject to talking about rising and falling oxygen levels in the oceans. It's a distraction and a red herring, intended to give a false impression that they are making some kind of progress solving this super-falsification of evolutionary predictions. Next headline on: Darwin and Evolution Anthropology: a Science in Crisis Feb 16, 2011 Students memorize the different -ologies of science (geology, biology, paleontology, and others) often without knowing the history of the fields. An impression is sometimes given that each branch of science has equal validity. Some recent articles indicate that anthropology (the study of man) is struggling with internal squabbles. Anthropology includes a number of subfields, such as paleoanthropology (fossil man), cultural anthropology, biological anthropology, linguistic anthropology, and archaeology, but it also overlaps with psychology, sociology, evolution, political science, economics, history, and more, making it distinct by having roots in both science and the humanities. Perhaps that is a source of its struggles. By including too much in its big tent, with varying degrees of epistemic support among its sub-branches, anthropology has always been poised for fragmentation. Man is undoubtedly a dauntingly complex subject of study. "To be sure, it is not easy to make general statements about human nature, or even to define it," Kuper and Marks said, especially when human biology has been co-evolving with technology for millions of years. The most fundamentally hard-wired human adaptations, walking and talking, are actively learned by every person, in each generation, they noticed. So whatever human nature may be, it clearly takes a variety of local forms, and is in constant flux. Maybe anthropologists should study fluid dynamics or chaos theory if they want to be scientists. - Inside out: "Too simple" and "not so fast" were complaints made about alleged human ancestor fossils by biological anthropologists from George Washington University and New York University. According to PhysOrg, the anthropologists question the claims that several prominent fossil discoveries made in the last decade are our human ancestors. Instead, the authors offer a more nuanced explanation of the fossils' place in the Tree of Life. They conclude that instead of being our ancestors, the fossils more likely belong to extinct distant cousins. Bernard Wood and Terry Harrison chided fellow paleoanthropologists for jumping to conclusions: "to simply assume that anything found in that time range has to be a human ancestor is naïve."
Their article is published in this weeks Nature.1 It should be kept in mind when evaluating the latest claim about human ancestry, such as the claim that a foot bone puts Prehuman Lucy on a Walking Path to humanity (e.g., Live Science), or that Lucy, a human ancestor, was no swinger but walked like us Geographic News). Even in the most favorable possible light (e.g., that Lucy did walk upright), Bernard Wood says it is naďve to jump to conclusions that Australopithecus afarensis had anything to with human ancestry an assertion the media invariably make (cf. 06/22/2010). - Upside down: Science Dailys coverage of the Nature article included a picture of an orangutan as an instance of false identification of human ancestry. Ramapithecus, a species of fossil ape from south Asia, was mistakenly assumed to be an early human ancestor in the 1960s and 1970s, but later found to be a close relative of the orangutan. A mistake like that could certainly not be made today... could it? The debunkers do not question human evolution itself, but their own more nuanced explanation requires believing that sister groups acquired human-like characteristics in parallel. The authors suggest there are a number of potential interpretations of these fossils and that being a human ancestor is by no means the simplest, or most parsimonious explanation. That would seem to leave a lot of room for speculation, to say nothing of upsetting textbook explanations that have been like gospel truth for decades. - In their own blurs: The paper in Nature1 behind the above two entries contains a strange mix of confidence in human evolution with diffidence about the details: The relationships among the living apes and modern humans have effectively been resolved, but it is much more difficult to locate fossil apes on the tree of life because shared skeletal morphology does not always mean shared recent evolutionary history. Sorting fossil taxa into those that belong on the branch of the tree of life that leads to modern humans from those that belong on other closely related branches is a considerable challenge. A gaping question, though, is how, if the fossils cannot easily be sorted into a tree-like pattern, that one could know that a tree of life exists, without assuming it. Subtitles in the paper indicative of trouble include Shared morphology need not mean shared history, Simplicity or complexity in phylogeny, Scale in phylogeny reconstruction, Cautionary tales from South Asia and Tuscany, and Implications for palaeoanthropology. Moreover, in the conclusion, they stated, There is no reason why higher primate evolution in Africa in the past ten million years should not mirror the complexity observed in the evolutionary histories of other mammals during the same time period, thus casting the same doubts on other evolutionary stories as well. - The Geico fallacy: Another PhysOrg had a paradigm-debunking headline, Earliest humans not so different from us, research suggests. The subtitle reads, That human evolution follows a progressive trajectory is one of the most deeply-entrenched assumptions about our species. This assumption is often expressed in popular media by showing cavemen speaking in grunts and monosyllables (the GEICO Cavemen being a notable exception). But is this assumption correct? Were the earliest humans significantly different from us? The rhetorical answer is: negative. 
Indeed, John Shea of Stony Brook University says his colleagues have all been wrong about the measurement of behavioral modernity, the assumed identifier of when Homo sapiens emerged from animal to thinking man. There are no such things as modern humans, Shea argues, just Homo sapiens populations with a wide range of behavioral variability, the article ended, casting doubt on the epistemic foundations of human evolution theories. Whether this range is significantly different from that of earlier and other hominin species remains to be discovered. - Demotion from science: In a kind of manifesto, Anthropologists, unite!, an appeal went out from Adam Kuper and Jonathan Marks to rescue anthropology as a science in last weeks Nature.2 They were responding to a change of mission announced in December: In December 2010, The New York Times reported that the term science had been dropped in a new long-range plan of the American Anthropological Association (AAA). Where once the association had dedicated itself to advance anthropology as the science that studies humankind in all its aspects, it now promised rather to advance public understanding of humankind in all its aspects. Clearly, Kuper and Marks did not like this development. Anthropology isnt in the crisis that parts of the media would have you believe, Nature assured readers in damage control mode, but it must do better. One internal memo stated, we evolutionary anthropologists are outnumbered by the new cultural or social anthropologists, many but not all of whom are postmodern, which seems to translate into antiscience. So it appears the evolutionary anthropologists are the most concerned about appearing to be scientific. Within the ranks, some are asking all over: What is anthropology? The authors observe that anthropology is a nineteenth-century discipline that fragmented, spawning a variety of specializations with relationships [that] are often distant. The evolutionary anthropologists are miffed at their postmodern cousins: Some do seem to feel that if only they could spare the time they would be able to knock some evolutionist sense into cultural anthropology, Kuper and Marks complained, But they are too busy. Busy doing what might be a good follow-up question: busy doing science? The authors roster of embarrassing studies, from Margaret Meads Coming of Age in Samoa (1928) to later questionable depictions of the Yanomami as sex tyrants, and ostensibly racist theories about intelligence, have marred the field. Recent interdisciplinary efforts, they said, have left anthropologists in a sadder but wiser default position, in a head-down posture, afraid to embarrass the field further. Human evolution suffers the most: Only a handful still try to understand the origins and possible connections between biological, social and cultural forms, or to debate the relative significance of history and microevolution in specific, well-documented instances. 1. Bernard Wood and Terry Harrison, The evolutionary context of the first hominins, 470 ( 17 February 2011), pp. 347–352, doi:10.1038/nature09709. 2. Adam Kuper and Jonathan Marks, Anthropologists, unite!, 470 (10 February 2011), pp. 166–168, doi:10.1038/470166a. Kuper and Marks made some pretty damaging admissions in their piece that was intended to shore up the scientific status of anthropology. They thought that interdisciplinary programs might help; but can shared ignorance rise above ignorance? 
Look at what they admit: Critical Thinking Needed in Science Education The obvious conclusion is that interdisciplinary research is imperative. Yet too few biological anthropologists attend to social or cultural or historical factors. A minority of cultural anthropologists and archaeologists do apply evolutionary theory, or cognitive science, or adopt an ecological perspective on cultural variation, or play about with the theory of games, but they feel that they are isolated, even marginalized. And they do not feature in the front line of current debates about cognition, altruism or, for that matter, economic behaviour or environmental degradation, even though these debates typically proceed on the basis of very limited reliable information about human variation. So where is the science in anthropology? Is there anything in the above articles that points to something objective, true, and credible? No; it is a hodgepodge of debunked ideas, ignorance masquerading as explanation, embarrassing episodes, and complex questions evading simplistic answers. It is clearly a fallible human activity prone to category errors and misplaced priorities. If anthropologists were consistent, they should study themselves as a cultural tribe in evolutionary terms. That would lead to a quick implosion of any pretences to being objective scientists on some higher plane than the rest of us. To gain credibility, they should ditch evolution, which tries to explain walking and language emerging by mistake (01/26/2011), and study the Anthropology chapter in a good text on systematic theology, as long as it is consistent with the Operations Manual that came from the Manufacturer. Next headline on: Darwin and Evolution Mind and Brain Philosophy of Science Feb 15, 2011 Several recent articles noted that students are being dumbed down in science education. Can this be applied to their learning about evolution? PhysOrg reported that critical thinking has been called into question at the university level of education. A post-secondary education wont necessarily guarantee students the critical thinking skills employers have come to expect from university grads, the article said of a recent study from New York University. Other academics were surprised at the findings; they said students are motivated and curious as ever, spending a great deal of time on their studies. But Richard Arum was not speaking of time spent or motivation, but of critical thinking ability. His book revealed 45 per cent of students made no significant improvement in critical thinking, reasoning or writing skills during the first two years, and 36 per cent showed no improvement after four years of schooling. Science educators sometimes conflate knowledge with acceptance. Jon Miller, from the University of Michigan, has tracked scientific literacy from 1998 to 2008, and found that it has actually improved, according to PhysOrg. Only 37 percent of American adults accepted the concept of biological evolution in 2008, the article noted, and the level of acceptance has declined over the last twenty years. It would seem, though, that understanding of evolution should be distinguished from acceptance of evolution if critical-thinking students are able to judge the evidence and accept or deny the theory on the basis of sound reasoning. According to his statistics, scientific literacy has grown while acceptance of evolution has declined. If applied uniformly, critical thinking should include evaluating claims of religion and science. 
Religion is already routinely criticized, of course, but two recent articles on philosophy and history of science show how it might be applied to the latter. Scientist posted a short eyebrow-raising article by Jonathon Keats about how a group of four Victorian Englishmen, John Herschel, Charles Babbage, William Whewell and Richard Jones invented modern science over eggs and bacon (and ale). Meeting at the Philosophical Breakfast Club, they reasoned how to take Sir Francis Bacons ideas on induction to create a new path to natural knowledge. While admiring their boundless curiosity, Keats recognized that their vision was visionary: they envisioned a future for science as visionary and elusive as Utopia. Whats more, their efforts led to a Big Science that became increasingly divorced from the humanities, he argued. Ken Conner, writing for Town Hall Magazine applied even more critical thinking to science. The paradigm of the brave scientist as unbiased seeker of the truth, using an objective method, with unimpeachable motives, personal integrity, and the best interests of mankind at heart, is beginning to crumble, he said. As it turns out, scientists are just as fallible and flawed as the rest of humanity, and this fallibility impacts their work. Conner argued against the false dichotomy of science and faith, pointing out with examples that one needs faith to do science. After centuries of hegemony in an increasingly secular world, it is ironic that faith faith in the right thing may be the only thing that can restore credibility to the world of science. Case Study: Having a BLAST An educational tool proposed in PLoS Biology can be evaluated for its effectiveness at teaching critical thinking. Cheryl A. Kerfeld (Joint Genome Institute, Walnut Creek, California) and Kathleen M. Scott (UC Berkeley) wrote on how to use software to teach evolution: Using BLAST to Teach E-value-tionary Concepts, they titled their paper.1 BLAST (Basic Local Alignment Search Tool) is a common genome-comparison tool used by geneticists and evolutionists. Kerfeld and Scott described ways students can learn evolutionary concepts, such as molecular evolution (e.g., gene duplication and divergence; orthologs versus paralogs) using the software. As for what E-value-tionary concepts are, BLAST makes use of E-values, defined as the number of subject sequences that can be expected to be retrieved from the database that have a bit score equal to or greater than the one calculated from the alignment of the query and subject sequence, based on chance alone. E-values calculated from program runs can help students see homologies as evidence of common ancestry, they argued. In addition, BLAST has a pedagogical benefit, they argued, by providing an opportunity to illustrate how mathematics functions as a language of biology. But does their teaching tool illustrate or obfuscate? Does their method teach students to be critical of the method? Apparently not, because when E-values show common ancestry, the authors assume it supports evolution, but when they do not, critical thinking must be suspended by tweaking the inputs: Sometimes it is helpful to mask parts of the query sequence to prevent them from being aligned with subject sequences. Masking is helpful when the query sequence has low-complexity regions, such as stretches of small hydrophobic amino acids that are commonly present in transmembrane helices of integral membrane proteins. 
Because these features arose from convergent evolution, and their inclusion in BLAST searches could result in spurious hits, it is best to set the BLAST search parameters to eliminate these sorts of regions from word generation, as well as alignment scoring. So in this case, because the authors somehow know that certain features are due to convergent evolution, the data have to be masked when they would otherwise falsify evolution. Evolution itself is protected from critical analysis; it must be assumed. E-values that seem to indicate divergent evolution, by contrast, are not masked; they are accepted at face value: "A meaningful alignment will facilitate the comparison of two sequences with a shared evolutionary history by maximizing the juxtaposition of similar and identical residues. Sequences with a recent shared ancestry will have a high degree of similarity; their alignments will have many identical residues, few substitutions and gaps, and tiny E-values. Conversely, sequences with an ancient common ancestor will be deeply divergent, with few shared sequence identities, many gaps, and larger E-values. Furthermore, an alignment of two sequences can clarify which portions are conserved (e.g., active sites), and which are divergent, which helps cultivate students' understanding of protein structure and function." What they seem to be saying is that sequence comparisons demonstrate common ancestry by evolution, except when they show convergent evolution or conservation. The search for homologies can therefore whiz right past the genetic evidence that might falsify evolution. These scientist-educators seemed oblivious to the fact that homology as evidence for common ancestry is a circular argument. Even creationists accept a hierarchical order of their created kinds, and would expect more divergent traits the more distant two organisms are within the hierarchy, without assuming those differences are due to common ancestry. Yet Kerfeld and Scott seemed to insist that students be guided against falsifying evolution in the data: "Students (and researchers as well) tend to draw an arbitrary line below which they consider E-values to provide convincing evidence that two sequences are homologs (e.g., E<0.00001). It is informative to scrutinize this assumption, and ask the students to consider whether and when more stringent E-values might be appropriate (e.g., to assist in sorting paralogs from orthologs), or when larger E-values do not provide definitive evidence of evolutionary independence (as is the case when two sequences share an ancient ancestor)." While the authors appear in this quote to support critical thinking, they have constructed their teaching method to guarantee that evolutionary theory itself is protected from criticism. Basically, they want students to have a more nuanced way of manipulating the data to ensure evolution wins. Their concluding paragraph raises disturbing questions about the power of mathematics to give the illusion of credibility (see statistics): "It is also informative for the students to discuss what their alignments mean, and whether the pairwise alignments between their query sequence and the subject sequences prove whether the sequences are homologs. Indeed, it can catalyze a larger discussion of whether it is possible to prove that two sequences are homologs, and what other approaches (e.g., protein structure, gene context) might be used to strengthen or refute such an assertion."
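For readers unfamiliar with the statistic being debated, the way a BLAST-style E-value follows from an alignment's bit score can be sketched in a few lines. This is a minimal illustration of the standard Karlin-Altschul relationship (E is roughly m times n times 2 raised to the negative bit score, with m and n the effective query and database lengths), not the BLAST implementation itself; the sequence lengths, scores, and the 1e-5 cutoff below are illustrative stand-ins, the cutoff being the same arbitrary line the authors mention.

# Minimal sketch (Python) of how an E-value relates to a bit score.
# E ~ m * n * 2**(-bit_score): the expected number of chance alignments in a
# database search scoring at least this well. All numbers are hypothetical.

def expected_chance_hits(bit_score, query_len, db_len):
    return query_len * db_len * 2.0 ** (-bit_score)

query_len = 300          # residues in a hypothetical query protein
db_len = 50_000_000      # residues in a hypothetical database

for bits in (30, 50, 80, 120):
    e = expected_chance_hits(bits, query_len, db_len)
    verdict = "treated as homologs" if e < 1e-5 else "could be chance"
    print(f"bit score {bits:>3}: E = {e:.2e} ({verdict})")

# The E-value falls exponentially with the bit score, so where one draws the
# "convincing evidence" line (here 1e-5) is a judgment call, which is precisely
# the point under dispute above.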
In summary, deconstructing the BLAST algorithm and manipulating parameters systematically and evaluating the results with students helps them understand not only what the scores mean but also how to manipulate parameters to optimize their searches.... Finally, explicating the algorithm in this way allows students to explore research databases thoughtfully and illustrates the critical connection between mathematics and science, showing how numbers can be used to quantify biological relationships from the level of gene to organism.... Dr. Cornelius Hunter, by contrast, had a good laugh over what he perceived as simplistic homology arguments being used to support evolutionary theory at another educational website, Understanding Evolution (produced by UC Berkeley, the same institution where Scott teaches). This Just In: Plants Have LeavesEvolution Must Be True. his headline quipped. As if evolution was not silly enough already evolutionists are now claiming that the fact that different plants all have leaves is a compelling evidence for their belief that all of nature just happened to spontaneously arise, all by itself, he said. I occasionally enjoy a good spoof, but this is no joke. Readers can compare Hunters view with that of the Understanding Evolution website. Students wanting a course on evolution that will teach both sides might look at summer seminars on intelligent design presented by the Discovery Institute. 1. Cheryl A. Kerfeld and Kathleen M. Scott, Using BLAST to Teach E-value-tionary Concepts, 9(2): e1001014. doi:10.1371/journal.pbio.1001014. Kerfeld and Scott provide another example of manipulating students impressionable heads with the illusion of scientific credibility in order to indoctrinate them into the cult of Charlie worship (cf. David Sloan Wilsons Evolution for Everyone curriculum, 12/21/2005). Evolution itself is never to be questioned; the objectivity of science itself is never under scrutiny. This entry provides an opportunity for you to hear several points of view and evaluate which are credible with the fewest fallacies. In the 02/11/2008 Darwin Day entry, we critically examined Kevin Padians 10 reasons why Darwin, but not Newton or Einstein, should be honored with a special celebration each year. Exercise: Apply your critical thinking skills to the following evolution articles: Next headline on: Philosophy of Science Darwin and Evolution - Science Daily: New Research Changes Understanding of C4 Plant Evolution. - Science Daily: Molecular Link Between Reproduction in Yeast and Humans. - Science Daily: New View of Human Evolution? 3.2 Million-Year-Old Fossil Foot Bone Supports Humanlike Bipedalism in Lucys Species. - Live Science: How Dinosaurs Handed Down Their Fingers to Birds. Bubble Life Could Have Had Armor Feb 14, 2011 A headline posted by Science Daily is self-explanatory: Clay-Armored Bubbles May Have Formed First Protocells: Minerals Could Have Played a Key Role in the Origins of Life. The operative words are may have and could have, which, being mere suggestions, are unfalsifiable. If it didnt happen here, it may have or could have happened on the planet Zorx in Sector 1906523-A. The claybubble theory of life is a new twist on Jack Szostaks old fatbubble theory (see (09/03/2004). That story also had plenty of mays and coulds. This version by Howard Stone and Anand Bala Subramaniam (Harvard) imagines air bubbles armored with montmorillonite, a clay mineral. 
The advantage of claybubbles is one-way osmosis, allowing small building block molecules to get in, but keeping the complex molecules evolving inside protected, assuming they could self-organize into life somehow (10/08/2010). If there is a benefit to being protected in a clay vesicle, this is a natural way to favor and select for molecules that can self-organize, Stone said. He did not explore whether selection can operate without accurate replication (see online book). He also did not speculate on how the building blocks became one-handed (see online book and 01/10/2011), or what might happen if a deadly toxin happened to grab the one-way key to the interior. Grad student Subramaniam hedged his bets with a few more could words: Whether clay vesicles could have played a significant role in the origins of life is of course unknown, but the fact that they are so robust, along with the well-known catalytic properties of clay, suggests that they may have had some part to play. It does not appear either of them speculated on whether sand grains, soap bubbles, or lava might also qualify for the suggestion that they may have had some part to play. Remember, these guys get paid for this. Lets sing verse 2 of the chorus introduced back in 09/03/2004 (read that whole commentary again, too): Tipping Point for Embryonic Stem Cells? Surrounding them with armor of clayNext headline on: Origin of Life Gets building blocks in trouble; Theyre stuck inside forever to stay, Flop goes the bubble. Feb 13, 2011 At any time, courts could rule on whether funding of embryonic stem cell research can continue or must be halted. Whichever way a decision is rendered, whether by Judge Lamberth on the legality of the NIH guidelines, or by the Court of Appeals for DC, the issue will probably wind up before the Supreme Court. Passions run high on both sides. A crusader for adult stem cells, profiled in Nature this past week,1 was surprised by how many scientists support her antagonism to the use of human embryos for research. More on that later; first, some news highlights: - Cooling the flame: Science Daily told how adult stem cell therapy can reduce inflammatory damage from stroke. We are seeing a paradigm shift in the way some types of stem cells may enhance recovery from stroke, an excited researcher at the University of Texas said. The adult stem cell therapy appears to dampen inflammation involving the spleen. This new treatment holds promise to improve clinical care, reduce long-term health care costs, and improve the quality of life for millions of people. - iPS momentum: PhysOrg reported that researchers at Harvard and Columbia have demonstrated that many iPS cells are the equal of hESCs in creating human motor neurons, the cells destroyed in a number of neurological diseases, including Parkinsons. Induced pluripotent stem cells (iPS) are a form of adult stem cell that does not involve the destruction of embryos (11/20/2007), as in human embryonic stem cells (hESC). The article says that iPS cells meet the gold standard of pluripotency. In addition, new methods are speeding the tests for pluripotency of iPS cells. - Hearty iPS: Another story on PhysOrg highlighted research at Stanford that shows iPS cells can generate beating heart cells that carry a genetic defect under study, allowing for the first time to examine and characterize the disorder at the cellular level. - ESC economics: PhysOrg also discussed the current disarray of patent laws surrounding stem cell lines, data, and treatments. 
Some scientists warn of a potential stifling effect of widespread patenting in stem cell field. Bioethicist Debra Matthews (Johns Hopkins) said, Pervasive taking of intellectual property rights has resulted in a complex and confusing patchwork of ownership and control in the field of stem cell science. Although the article was unclear whether the dispute includes adult stem cell research, it mentioned one recommendation being a centralized portal for access to existing databases, such as the UK Stem Cell Bank and the Human Embryonic Stem Cell Registry. - Mixed bag: Another article on PhysOrg discussed the new Massachusetts Medical School Human Stem Cell Bank, which opened with seven high-quality stem cell lines (5 embryonic, 2 iPS, with more to follow), and how they are being preserved in liquid nitrogen and made available to researchers around the world. The article mixed these two sources of stem cells with no mention of ethics: e.g., The Registry includes information on the derivation, availability and characteristics for more than 1,200 hESC and iPS cell lines developed in over 22 different countries, including more than 200 cell lines with genetic disorders. - Sex cells: Parthenogenetic stem cells are taken from reproductive cells (03/12/2005). Lacking the full complement of chromosome pairs, they might contain a good or bad copy of a gene implicated in a disease like tuberous sclerosis or Huntington's disease. Science Daily discussed how work at Nationwide Childrens Hospital is constructing good embryonic stem cells from parthenogenetic cells. These single-parent/patient-derived embryonic stem cells can theoretically be used for correction of a diverse number of diseases that occur when one copy of the gene is abnormal, a research at the hospital said. With the decision by Judge Lamberth last September prohibiting federal funding of embryonic stem cell research (09/03/2010) still under an injunction (09/26/2010), researchers and bioethicists are waiting to see what the next court ruling will bring. Nature published the story of The Crusader, Theresa Deisher, one of the two remaining plaintiffs who won in the September case.1 Reporter Meredith Wadman presented Deisher in a fairly positive light as an intelligent, confident, persistent, self-sacrificing, hard-working PhD in cell biology, respected by her enemies, a Roman Catholic who once shunned religion for science but regained her faith when realizing that fetuses were not just clumps of cells, but human beings (cf. 11/07/2002). Deishers politics in college were very left-wing, after she ditched her mothers religious faith. I was in science, and science was much more interesting than religion, she said. I encouraged a couple of friends to have abortions. Her return to faith came by degrees: first, the sight of an adult cadaver preserved in formalin made her realize that a fetus preserved in a jar only looks alien because of the preservation method. Second, she encountered first-hand the passions of those bent on researching human embryos; And the vehemence with which colleagues resisted made me open my eyes, Deisher says, to the very real and, she says, unscientific passions that can infect defenders of scientific orthodoxy, Wadman wrote. Science, she reasoned, was not so objective after all. Third, Deishers growing antipathy to embryonic stem cell research got an emotional kick when speaking to Republican state lawmakers in Washington state in 2007. 
One of the other speakers was a mother who had adopted a frozen embryo from a fertility clinic, Wadman continued. The resulting child, a girl then four years old, stood beside her. Deisher sold her house and used her retirement savings to start an institute for the advancement of adult stem cell therapies. She is not, thereby, antagonizing scientists by opposing them through the political process; when asked, she reluctantly signed on as a plaintiff in the lawsuit that resulted in Lamberths ruling: It is frightening to speak out, she said; I dont care for the notoriety. Instead, her AVM Biotechnology company seeks to provide positive alternatives: The companys mission, in part, is to eliminate the need for embryonic-stem-cell therapies and enable adult-stem-cell companies to succeed by developing, for instance, drugs that promote stem-cell retention in target organs, It is also working on alternatives to vaccines currently produced using cell lines derived from fetuses that had been aborted decades ago. Unlike the institutes in California that have $3 billion in taxpayer-approved bonds at their disposal, Deisher runs her company in a dormitory with five unpaid staff. A lot rides on the courts next move. If the court agrees with Deisher, Wadman ended, it will shut down hundreds of human-embryonic-stem-cell experiments once more possibly for good. One of the most interesting things Deisher learned from the lawsuit indeed, the biggest lesson, Wadman called it was, in Deishers words, how many scientists are against [human-embryonic-stem-cell research]. I did not know that. I did not expect the level of support and encouragement that I have received. 1. Meredith Wadman, The Crusader, Nature 470, 156-159 (Feb 9, 2011) | doi:10.1038/470156a. That Nature would print this story about Deisher is an encouraging sign that the momentum may be turning away from embryonic stem cell research. Nature used to wield its editorial pen against the opponents the way it does against creationists, calling them ignorant moralists standing in the way of progress Dr. Tracy Deisher certainly does not fit that description, nor does Dr. James Sherley, an adult stem cell researcher at Boston Biomedical Research Institute, the other remaining plaintiff in the lawsuit. For sure, Wadman snuck in enough jibes about Deisher to titillate Natures leftist readers (calling her a bundle of contradictions, pointing out that she never applied for a NIH grant, pointing out that she studies the pernicious and disproven hypothesis that autism might be triggered by vaccines, quoting people who call her polarizing, remarking in a callout box that shes kind of the Sarah Palin of stem cells,), but she gave Deisher a lot of room to respond, too. Chernobyl Mutation Experiment Fails to Support Darwinism What was not said may be more telling. Wadman did not point out any benefits of embryonic stem cells over adult stem cells. She did not quote any leading ES researchers making a good case for cutting up embryos. And she did not even attempt to defend ES research on ethical grounds. Instead, she gave Deisher space to make two striking blows: (1) that many scientists are opposed to human embryonic stem cell research, and (2) that hESC researchers are not driven primarily by concern for the sick. Researchers prefer to work on ES cells because they are convenient, Deisher argued; their science is not about helping patients and its not about advancing the common good. 
Instead, she argued, There is no commercial, clinical or research utility in working with human embryonic stem cells. That anecdote about the four-year-old girl born from a frozen embryo added emotional clout. Here was a darling human being obviously a great deal more than a clump of cells. These are signs that embryonic stem cell research is losing its hype-driven public mandate (cf. 01/02/2011). After all the promises, it has produced no cures (while adult stem cell research is on a roll; see 11/18/2010 starting from initial promise in 01/24/2002). It is superfluous, now that iPS technology is its equal, without the ethical qualms. Its credibility has been marred by fraud (12/16/2005), while others worry about future abuses (10/21/2004; cf. 04/22/2004 and 07/30/2001 on eugenics). Opponents within the scientific community are becoming more bold. And it is hanging by a thread, waiting for the next court ruling that might end its federal funding for good (double entendre intentional). But why should it get federal funding in the first place? If the promises were credible, commercial and charitable support would be overwhelming. That ES researchers have to lean on the government dole is a sign it is not commercially viable. Is this subject relevant for Creation-Evolution Headlines? Maybe not directly, but ones view of the origin of life and humanity has direct bearing on ethics. The stem cell controversy of the past decade has been a direct outgrowth of competing views on the significance of human life. If an embryo is just a clump of cells, then playing with those clumps because of their convenience or the temptation of a Nobel prize has no ethical consequences. But if human life was created by God, it never loses its sanctity from conception to burial. It will affect how we view a fetus in a jar, a plasticized body in an exhibit, an Alzheimers patient in a nursing home, a woman considering an abortion, the direction of scientific research. Its where the rubber of worldview meets the road of scientific practice. Next headline on: Politics and Ethics Feb 12, 2011 Bird brains are getting smaller in the region around Chernobyl. Organisms in the vicinity of the radiation from the nuclear disaster 25 years ago have not improved, but suffered under the onslaught of mutations. There is no evidence of any population increasing in fitness in any way; on the contrary, animals are struggling to survive. Yet according to neo-Darwinism, mutational change is the seedbed of evolutionary gains in fitness. Timothy Mousseau was a co-author of a paper in PLoS ONE studying bird populations in the affected area.1 They studied 550 birds belonging to 48 species and found an overall 5% decrease in brain size, especially among yearlings: Brain size was significantly smaller in yearlings than in older individuals, implying directional selection against small brain size. This means that the radiation was a drag, not a help, on the fitness of these birds: their bodies want to make the brains larger, but they cant: the directional selection is contrary to the mutational load. Mousseau explained in a press release on PhysOrg, These findings point to broad-scale neurological effects of chronic exposure to low-dose radiation. The fact that we see this pattern for a large portion of the bird community suggests a general phenomenon that may have significant long-term repercussions. 
The radiation affects other organisms, too: The study revealed that insect diversity and mammals were declining in the exclusion zone surrounding the nuclear power plant. The birds provide a test case of population response to a mutagen. Although the brains were the organs measured, the whole body suffers: Stressed birds often adapt by changing the size of some of their organs to survive difficult environment conditions, the article said. The brain is the last organ to be sacrificed this way, meaning the radiation could be having worse impacts on other organs of the birds. But isnt this a case of adaptation, then? Neo-Darwinists should not take comfort in the findings: Mousseau said not only are their brains smaller, but it seems they are not as capable at dealing with their environment as evidenced by their lower rates of survival. 1. Moller, Bonisol-Alquati, Rudolfsen and Mousseau, Chernobyl Birds Have Smaller Brains, Public Library of Science ONE 6(2): e16862. doi:10.1371/journal.pone.0016862. Oh, the Darwinist says, but you must give it millions of years. Dont fall for that. Evolution runs both fast and slow, dont they tell us? (01/15/2002, If Charlies mutation magic can turn a cow into a whale in six million years, it could surely produce a measurably fitter bird brain in 25 years. Lets expand the population and ask how many human CAT-scan patients have gotten smarter and produced genius kids. How many dental patients have grown new improved teeth or new organs after X-rays? Tumors, maybe, but not some new sense organ or function. If Darwins tree of life was toppled 4 years ago, why are evolutionists still teaching it? Time to re-read the 02/01/2007 entry to them. The Chernobyl bird populations have been under a steady dose of radiation for decades now, giving ample opportunity for mutations to help at least one chick get a lucky break. Evolution fails another real-world test. Dont go to Chernobyl hoping to get fit. Under mutational load (12/14/2006, 04/09/2007), you dont get a choice of Evolve or Perish; just the latter. Next headline on: Darwin and Evolution This Is Your Brain on Bytes Feb 11, 2011 Its mind-boggling time. Some recent articles have tried to quantify the information capacity of the eye, the brain, and the world. Ready? Think hard. You may now put an ice pack on your head and reboot. - Eye boggle: Your eyes contain about 120 million rods and 6 million cones each. If each receptor represents a pixel, that is 2 x 126 million pixels, or 252 megapixels. And remember these are moving pictures, not stills (talk about high-def). How can the brain transmit and process that much visual information? The answer is, apparently, it uses compression just like computers compress raw camera photos into more manageable JPEG images. Thats the title of an article on JPEG for the Mind: How the Brain Compresses Visual Information. The article begins, The brain does not have the transmission or memory capacity to deal with a lifetime of megapixel images. Instead, the brain must select out only the most vital information for understanding the visual world. Researchers at Johns Hopkins found that certain cells in the image transmission pathway apparently focus on highly curved edges that are the most informative, dropping flat edges resulting in an 8-fold compression ratio comparable to the JPEG algorithm. Eyesight compression, though, is done in-line, in real time, during the image transmission process (see also the 05/22/2003 entry). 
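The scale of the numbers above is easier to appreciate with a little arithmetic. The sketch below is illustrative only: one byte per receptor and ten updates per second are round assumptions rather than measured values, and the 8-fold ratio is the compression figure quoted in the article.

# Back-of-the-envelope scale of the retinal data stream (Python); assumptions
# are illustrative (1 byte per receptor, 10 crude "frames" per second).
receptors_per_eye = 120_000_000 + 6_000_000    # rods + cones quoted above
total_receptors = 2 * receptors_per_eye        # about 252 million "pixels"
raw_rate = total_receptors * 1 * 10            # bytes per second, both eyes
compressed_rate = raw_rate / 8                 # the ~8-fold pruning reported
print(f"receptors, both eyes: {total_receptors / 1e6:.0f} million")
print(f"raw stream: {raw_rate / 1e9:.2f} GB/s; after ~8x pruning: {compressed_rate / 1e9:.2f} GB/s")
# Even the pruned stream must be squeezed through roughly a million optic nerve
# fibres per eye, which is why aggressive selection of the most informative
# edges and curves happens long before the signal reaches the cortex.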
Geeks will enjoy the punch line: "Computers can beat us at math and chess," said [Ed] Connor [Johns Hopkins], "but they can't match our ability to distinguish, recognize, understand, remember, and manipulate the objects that make up our world. This core human ability depends in part on condensing visual information to a tractable level. For now, at least, the .brain format seems to be the best compression algorithm around." - Cerebellum boggle: Your cerebellum (a portion of the brain near the brain stem) is important for motor functions, emotions, and language. Live Science claims that wiring in the cerebellum starts with surprisingly bad wiring, because axons seeking connections to granule cells of the cerebellum sometimes link up incorrectly to Purkinje cells. But bad wiring may be in the eye of the beholder, because an international team found that a substance known as bone morphogenetic protein 4, which plays a role in bone development, helped correct these errors. One of the researchers publishing in PLoS Biology explained,1 "What we demonstrate here is that you have a negative system that repels axons from an inappropriate target, thereby steering them to the right target." If it works, can it be called bad? The authors said, "In summary, we show that the specificity of the synaptic connections in the ponto-cerebellar circuit emerges through extensive elimination of transient synapses." But that raises an interesting question: what regulates the regulators? - Memory boggle: Get ready for the punch line on this one. An article on Live Science discussed the tipping point of human information technology from analog to digital storage. In 2000, the article said, about 75% of the world's information was stored in analog form (e.g., paper, analog tape, analog sound recordings). By 2007, 93% of that information was stored digitally (computer files, digital tape, digital recordings). Digital information can be quantified in the familiar bits, bytes, megabytes, gigabytes, yotta yotta yotta....2 Now that information can be quantified digitally, it's possible to estimate all the human information in the world. As of 2007, that quantity was 295 trillion megabytes (295 x 10^18 bytes, or 295 exabytes), according to Martin Hilbert of USC. Before divulging the punch line, let's quote the article's comparisons: "Have a hard time imagining 295 trillion megabytes? Hilbert suggests thinking of it this way: If we would use a grain of sand to represent one bit each of the 295 trillion, we would require 315 times the amount of sand that is currently available on the world's beaches." Now the punch line: that incredibly huge amount of information represents still only enough for 0.33 percent of the information that can be stored in all DNA molecules of one human adult. Let's do the math: multiply all the values in the quote above by 300, and you get into the ballpark of the information storage inside your body: 94,500 times the grains of sand of all the world's beaches; 18,300 CDs for every person on Earth, enough to reach over halfway to Mars; 350 times the GDP of the US if printed in $1 newspapers; enough to cover the US in 3,900 layers of books. Now you know. For a better idea of what these numbers all mean, Hilbert and his colleague, Priscila López of the Open University of Catalonia, express the information through other analogies. 295 trillion megabytes is roughly: - Equivalent to 61 CD-ROMs per person on Earth.
- Piling up the imagined 404 billion CD-ROMs would create a stack that would reach the moon and a quarter of this distance beyond. - Enough that, if printed in newspapers that sold for $1 each, the United States' entire Gross Domestic Product would not be enough to buy them all. (The cost would be 17 percent beyond the GDP.) - Enough information to cover the entire area of the United States or China in 13 layers of books. - Brain boggle: If your mind is not sufficiently boggled yet, let's finish with a measurement posted on Wired Science. Author John Timmer of Ars Technica expanded on the work by Hilbert and López to estimate the processing power of the human brain. After several more mind-numbing analogies of the combined processing power of all the world's computers, storage, and memory, the article ended with another surprise. First, Hilbert and López estimated the combined processing power of all the world's computers at 6.4 x 10^18 operations per second. Then, Timmer wrote: "Lest we get too enamored with our technological prowess, however, the authors make some comparisons with biology. 'To put our findings in perspective, the 6.4*10^18 instructions per second that human kind can carry out on its general-purpose computers in 2007 are in the same ballpark area as the maximum number of nerve impulses executed by one human brain per second,' they write. Our total storage capacity is the same as an adult human's DNA. And there are several billion humans on the planet." 1. Kalinovsky et al., Development of Axon-Target Specificity of Ponto-Cerebellar Afferents, Public Library of Science Biology, 9(2): e1001013. doi:10.1371/journal.pbio.1001013. 2. A byte is 8 bits (in the ASCII encoding format). Kilobyte = 10^3 bytes. Megabyte = 10^6 bytes. Gigabyte = 10^9 bytes. Terabyte = 10^12 bytes. Petabyte = 10^15 bytes. Exabyte = 10^18 bytes. Each level represents 1000 times the prior category (10^3). Those wanting to boggle their brains further can consider zettabytes (1000 exabytes), yottabytes (1000 zettabytes), brontobytes (1000 yottabytes), geobytes (1000 brontobytes).... While reading this article, your brain just outperformed all the computers on the planet, and your body stored genetic information that, if stored on CDs, would reach over halfway to Mars. And who could forget the stunning analogy we published nine years ago about the information storage capacity of one cubic millimeter of DNA? (see 08/16/2002). Facts are powerful things. The information in this article could be taught with some clever presentation slides or posters. Nothing is more effective than facts like these to make people reconsider assumptions about how the human body and brain came to be (see 01/19/2011 commentary). Evolutionists want you to believe this all happened by chance, through mistakes, without purpose or guidance. How about asking for a time out in your local high school biology teacher's evolution spiel to write some of these facts on the board in front of the students? Then say (nicely), "According to your textbook, evolution teaches that your brain, but not computers, got here by mistake." You don't even have to put the school at risk of a lawsuit by setting off the alarms with the emotionally-charged phrase "intelligent design." Next headline on: Mind and Brain Evolution Running Backwards Feb 10, 2011 For Darwin's doctrine of universal common ancestry to be demonstrably true, there must have been a common ancestor of insects and humans.
That base of the family tree has just been discredited, leaving a gap in this important junction of Darwins tree of life. For decades, evolutionists have taught that acoelomorphs, a kind of marine worm, were at the base of the tree that branched one way toward insects and another way toward man. Now, however, as published in Nature,1 two large groups of marine worms are more closely related to us than are insects and mollusks, a new study shows According to a co-author quoted by Live Science, We can no longer consider the acoelomorphs as an intermediate between simple groups such as jellyfish and the rest of the animals, said researcher Max Telford of the Department of Genetics, Evolution and Environment, University College London. This means that we have no living representative of this stage of evolution: the missing link has gone missing. To explain the confusing genomes in evolutionary terms, the researchers are having to suppose that the last common ancestor, whatever it was, was even more complex than these worms and the living worms lost some of the genetic information contained in the ancestor: Being such simple creatures and yet still mixing and mingling on the family tree with us complex creatures suggests these marine worms were once complex themselves, Telford said. Commenting on this development in the same issue of Nature,1 Amy Maxmen titled her entry, Evolution: A can of worms and wrote: This is an interesting evolutionary question, Telford told LiveScience. Why do animals lose complex features, and how do they do it? What genes have they lost? The rearrangement has triggered protests from evolutionary biologists, who are alarmed that they may lose their key example of that crucial intermediate stage of animal evolution. Some researchers complain that the evidence is not strong enough to warrant such a dramatic rearrangement of the evolutionary tree, and claim that the report leaves out key data. In any case, the vehemence of the debate shows just how important these worms have become in evolutionary biology. But rather than bemoaning the loss of evidence, or teaching the controversy, some reporters are promoting this finding as a triumph for evolution. PhysOrg wrote its headline, Revisited human-worm relationships shed light on brain evolution, even though the source paper had nothing to say about brains. PhysOrg also buried the Telford quote about the missing link still being missing under a bold headline, Simple marine worms distantly related to humans.& Live Science announced, Lowly Worms Get Their Place in the Tree of Life, downplaying the confusion over where these organisms fit. As if doing penance, though, a later PhysOrg, read, New evolutionary research disproves living missing link theories. I will say, diplomatically, this is the most politically fraught paper Ive ever written, says Max Telford, a zoologist at University College London and last author on the paper. 1. Come back soon for reference. 2. Amy Maxmen, Evolution: A can of worms, 470, 161-162 (2011), Published online 9 February 2011, doi:10.1038/470161a. This is why you must read past the headlines and mute the Darwin Marching Band music and look at the data. Evolutionists thought they had a missing link at a critical juncture in Darwins tree of life, only to find, according to their own apologists, that the genetics dont fit. To keep their story going requires more speculation about less evidence. This is no happy ending; it turns Charlies bedtime story into a nightmare generator. 
Next headline on: Darwin and Evolution One demonstration of how science should be used. Watch the 90-second video at How Bacteria Use Their Flagella Feb 09, 2011 Do an imaginary mind-meld with a bacterium for a moment. Visualize yourself encased in a membrane, surrounded by fluid. You have no eyes, ears, or hands. You need to find where food is, and avoid danger, so you have organelles that can take in molecules that provide information about what is going on outside, where other bacteria can also communicate information to you. To get around, you have a powerful outboard motor, called a flagellum. Lacking eyes, how do you know where to go? How do you steer and make progress toward food or away from danger? These are the questions of chemotaxis the ability to move toward or away from chemicals. Two recent papers discuss how bacteria use their rotary motors to succeed in life. Some bacteria have only one flagellum (monotrichous, or one-haired, since the flagella look like hairs at low resolution). One such critter is Vibrio alginolyticus, an inhabitant of the coastal ocean. In a PNAS Commentary,1 Roman Stocker discussed how this microbe uses its single flagellum in a reverse and flick movement to explore its environment. This newly discovered mechanism for turning, he said, ....is part of an advanced chemotaxis system. The bacterium can actually make better progress toward or against a concentration gradient with this semi-random search method. How can a simple back-and-forth movement result in high-performance chemotaxis, rather than causing the bacterium to endlessly retrace its steps? Stocker asked. The answer is that the flick action, which involves a sudden kinking of the U-joint of the flagellum, combined with reversal of flagellar rotation, provides three times the chemotaxis efficiency of E. coli. He showed this with mathematical models. Stocker attributed this to evolution: Despite the limited morphological repertoire of the propulsive system, radically different movement strategies have evolved, likely reflecting the diversity of physicochemical conditions among bacterial habitats. But what he was really talking about was adaptation of different microbes to different habitats and conditions. He ended with praise, not for evolution, but for the cleverness of microbe transportation: the study he cited makes monotrichous marine bacteria an appealing model system to expand our knowledge of motility among the smallest life forms on our planet. Other bacteria have 2, 4, or 8 flagella (peritrichous), like Escherichia coli. When all 8 flagella begin turning in the same direction, they bundle into a kind of V8 engine that can propel the germ at around 30 micrometers per second (µm/s). To change direction, they reverse one or more flagella, causing the bundle to fall apart, stopping forward movement in a strategy called tumbling, after which unified motion begins in another direction. While not as efficient at chemotaxis as V. alginolyticus, it should be remembered that E. coli live in different environments and they have other tricks up their sleeve. Flagellum specialist Howard Berg and colleagues figured out how to watch fluid movement around swarms of bacteria. Reporting in PNAS,2 they discovered that bacteria, by rotating their flagella counterclockwise in swarms, create small rivers of fluid moving clockwise ahead of the swarm that help them move faster as a group than they could be swimming alone. 
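Before the swarm measurements quoted next, the logic of chemotaxis by a blind swimmer can be caricatured in a short simulation. This is a toy one-dimensional run-and-tumble model, not a model from either paper: the tumble probabilities, step sizes, and linear gradient are invented for illustration, and the real reverse-and-flick mechanics are three-dimensional and hydrodynamic.

import random

# Toy 1-D run-and-tumble chemotaxis (Python, illustration only). The cell
# compares the current attractant level with the previous one and re-orients
# (tumbles) less often when conditions are improving. Parameters are invented.

def concentration(x):
    return x                       # simple linear gradient: food lies to the right

def run_and_tumble(steps=10_000, p_tumble_improving=0.1, p_tumble_worsening=0.5):
    x, direction = 0.0, random.choice((-1, 1))
    previous = concentration(x)
    for _ in range(steps):
        x += direction
        current = concentration(x)
        p = p_tumble_improving if current > previous else p_tumble_worsening
        if random.random() < p:    # tumble: pick a new direction at random
            direction = random.choice((-1, 1))
        previous = current
    return x

random.seed(1)
runs = [run_and_tumble() for _ in range(50)]
print(f"mean drift up the gradient: {sum(runs) / len(runs):.0f} steps")
# Biasing *when* to re-orient, rather than steering toward a seen target, is
# enough to produce net drift toward food; strategies like reverse-and-flick
# simply change how the re-orientation is generated.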
They wrote, "we discovered an extensive stream (or river) of swarm fluid flowing clockwise along the leading edge of an Escherichia coli swarm, at speeds of order 10 µm/s, about three times faster than the swarm expansion. The flow is generated by the action of counterclockwise rotating flagella of cells stuck to the substratum, which drives fluid clockwise around isolated cells (when viewed from above), counterclockwise between cells in dilute arrays, and clockwise in front of cells at the swarm edge." The river provides an avenue for long-range communication in the swarming colony, ideally suited for secretory vesicles that diffuse poorly. The observations may have practical applications: "These findings," they wrote, "broaden our understanding of swarming dynamics and have implications for the engineering of bacterial-driven microfluidic devices."

1. Roman Stocker, "Reverse and flick: Hybrid locomotion in bacteria," Proceedings of the National Academy of Sciences, published online before print February 2, 2011, doi: 10.1073/pnas.1019199108.
2. Wu, Hosu, and Berg, "Microbubbles reveal chiral fluid flows in bacterial swarms," Proceedings of the National Academy of Sciences, published online before print February 7, 2011, doi: 10.1073/pnas.1016693108.

And these are the "simple" or "primitive" organisms that were the first to evolve, they tell us. The outboard motors alone are phenomenally complex, but when they work together with signal transduction mechanisms and group search strategies, it's overkill for Darwin, who was dead anyway. The precision of cells' quality control systems was described in the 02/03/2006 entry.

Next headline on:

Bizarre Fossils Raise Questions  Feb 08, 2011

For decades, students have been taught that the fossil record shows a long, slow, gradual progression of increasing complexity over millions of years. Scientific data are usually not so simple. Only the third entry below tried to tie the fossil to an evolutionary prediction, but even then, the story was not straightforward. It is not clear, for instance, that the loss of legs represents an increase in genetic information or in fitness. Flightless birds are adapted to their land-based habitats, but it would be a greater leap for birds to evolve from ground to air than the other way around. Same for snakes losing legs instead of evolving them de novo. In the first two entries, though, the discoveries were clearly unexpected, surprising, and contrary to conventional wisdom.

- Surprising youth in old fossil: When you see the word "unexpected" in a headline, expect the unexpected. "Unexpected exoskeleton remnants found in Paleozoic fossils," ran one headline, over a report about chitin-protein remains found in scorpion-like arthropod fossils alleged to be 310 million and 417 million years old. The previous record was 25 to 80 million years. The subtitle reads, "Surprising new research shows that, contrary to conventional belief, remains of chitin-protein complex structural materials containing protein and polysaccharide are present in abundance in fossils of arthropods from the Paleozoic era." George Cody of the Carnegie Institution speculates that the vestigial protein-chitin complex may play a critical role in organic fossil preservation by providing a substrate protected from total degradation by a coating waxy substances [sic] that protect the arthropods from desiccation. Is he claiming the proteins protected the rock impressions, and not the other way around?
Other than that, the article did not explain how proteins could last for over 400 million years. Prior to the discovery, it was unexpected, surprising, and contrary to conventional belief. - Antarctic forests: The caption of artwork in a BBC News piece reads, Dinosaurs once foraged beneath the Southern Lights in Antarctica. It shows young dinosaurs admiring the skylights while grazing around conifers in the long polar night. It may be hard to believe, but Antarctica was once covered in towering forests. Fossil trees in Antarctica have been known since Robert Falcon Scott explored the frozen wastes of the south polar regions, finding evidence of a subtropical climate where no trees grow today. Jane Francis (University of Leeds) has spent 10 seasons collecting samples. As she described her adventures, it was evident the surprise of fossil trees in ice has not worn off: I still find the idea that Antarctica was once forested absolutely mind-boggling, she told the BBC. The article says that this was not the only period of warmth. Fossil plants dated 100 million years old indicate the area must have resembled forested areas of New Zealand. We commonly find whole fossilised logs that must have come from really big trees. One of the specimens found is Ginkgo biloba, a well-known living fossil that was thought extinct from the age of dinosaurs till living trees were discovered in Japan (cf. NW Creation.net article with links). We take it for granted that Antarctica has always been a frozen wilderness, but the ice caps only appeared relatively recently in geological history. One of her most amazing fossil discoveries to date was made in the Transantarctic Mountains, not far from where Scott made his own finds. She recalled: We were high up on glaciated peaks when we found a sedimentary layer packed full of fragile leaves and twigs. These fossils proved to be remains of stunted bushes of beech. At only three to five million years old, they were some of the last plants to have lived on the continent before the deep freeze set in. How did the trees adapt to the polar light conditions, when long periods of darkness alternate with six months of light? Francis did experiments growing trees in simulated polar light conditions and found they adapted remarkably well. In addition to the trees, dinosaurs lived under these conditions. One kangaroo-size vegetarian dinosaur had large optic lobes, possibly suggesting adaptation to the low light of the long winters. The article tried to tie this evidence into the current debate over global warming, but clearly the climate changes of those prior times were not caused by humans. Visiting the frozen wasteland of Antarctica today, it is hard to believe that rainforests haunted by small dinosaurs once flourished where 3km thick ice-sheets now exist, the article ended. However, the geological record provides irrefutable evidence that dramatic climate fluctuations have occurred throughout our planets history. - Snakes alive and dead: Fossil snakes show remnant hind legs, reported MSNBC News. At first, this seems to support the belief that snakes descended from lizards, and lost their legs through evolution. The snake fossil studied by Alexandra Houssaye (National Museum of Natural History in Paris), named Eupodophis descouensi, has ultra tiny 0.8 inch legs with four anklebones but no foot or toe bones. It appears that calling these structures legs requires some interpretation; they were clearly not used for walking. 
Questions remain, however, about the evolution of snakes. The oldest snake remains are dated to 112 to 94 million years ago, and this snake is dated to around 90 million years ago, Houssaye said. Yet her evolutionary story seemed to allow opposite conclusions: If something is not useful it can regress without any impact on the (animals) survival, or regression can even be positive, as for here if the leg was disturbing a kind of locomotion, like for burrowing snakes or swimming snakes. But why would useless structures remain for 4 to 22 million years? It would seem millions of generations of snakes would have had to contend with useless structures getting in their way, if it took that long for legs to regress. Houssaye was not prepared to announce a victory for evolutionary theory: The question of snake origin should not be resolved in the next 10 years, the article quoted her saying, ending, She is, however, hopeful that all of the separate teams working on this puzzle can one day pinpoint what species was the common ancestor of all snakes. The lizard-like ancestor, if there was one, is not known from the fossil record. According to Live Science, which also reported the story, the bones suggests that evolution took snakes legs not by altering the way they grew. Instead, Houssaye said, it looks as though the limbs grew either slower or for a shorter period of time. PhysOrgs coverage includes an image of the very simple structures. According to this entry, Only three specimens exist of fossilised snakes with preserved leg bones. None of the articles mentioned whether the structures had a function, or might have been developmental anomalies, such as when babies are born with an enlarged coccyx (cf. CMI). What evolutionary stories could be told if a fossil two-headed snake were found? Conventional wisdom is not always wise. A better term might be conventional folly, or popular credulity. Enough reports like this, and a consistent theme emerges: evolutionists are clueless about not only their own theory of common ancestry, but about the millions-of-years scheme on which their theory is built. You cant just read one BBC News or PhysOrg article to get the whole picture. Individual articles present puzzles, but maintain the triumphal theme of the march of secular science toward Understanding Reality. That is a false picture. Sites like CEH help document the reality, that secular scientists sold on an evolutionary world view maintain their belief system by telling stories in spite of the evidence. And for you creation-bashing lurkers out there who lambaste CEH as anti-science, pay attention! This is not anti-science, because we clearly honor and support legitimate scientific discovery and analysis (see yesterdays entry, for instance). This is anti-storytelling anti- twisting evidence to support a belief system. An honest rationalist skeptic should join with us in that goal. Next headline on: Darwin and Evolution Feb 07, 2011 Imitating spider silk or gecko feet is one thing, but some researchers are going to extremes to try to do what living organisms do. Heres an update on an old biomimetics story: the imitation of nacre, or mother-of-pearl (see 07/06/2004; 09/18/2008, bullet 4; 12/06/2008, PhysOrg said that researchers at Northwestern University and McCormick School of Engineering are still trying to understand the molecular structure of this attractive material that is strong yet resistant to cracking. 
They created an interlocking composite material that, while not as good as nacre, achieved a remarkable improvement in energy dissipation.

- DNA railcar: Researchers at the University of Oxford have constructed a programable [sic] molecular transport system that travels like a railcar on DNA molecules. And that's not all: they would like to build synthetic ribosomes, the article said. "DNA origami techniques allow us to build nano- and meso-sized structures with great precision," said Prof. Hiroshi Sugiyama. "We already envision more complex track geometries of greater length and even including junctions." Autonomous molecular manufacturing robots are a possible outcome.

- DNA iPad: More DNA origami is at work creating smaller components for consumer and industrial electronics like iPods, iPads and similar devices, reported another article. Japanese researchers at Arizona State University, familiar with their culture's art of origami, have discovered a way to use DNA to effectively combine top-down lithography with chemical bonding involving bottom-up self-assembly.

- Turbo dragonflies: Imagine micro wind turbines that can withstand gale-force winds. Such marvels are being prepared with inspiration from dragonfly wings, according to one report. Who would have thought that the energy source for powering your cell phone might some day owe its design to the dragonfly?

- Flagella carnival: Nanoscopic inventions being built at Rice University look like "a carnival ride gone mad," said Science Daily. Researchers want to build arrays of programmable rotating machines modeled after the bacterial flagellum (07/12/2010) and ATP synthase. Such devices could be used for radio filters that would let only a very finely tuned signal pass, depending on the nanorotor's frequency. The computers used to model the molecular rotors are not yet capable of characterizing ATP synthase, found in all living things, but "as computers get more powerful and our methods improve," a team member said, "we may someday be able to analyze such long molecules."

- Plankton armor: Science Daily said that "The ability of some forms of plankton and bacteria to build an extra natural layer of nanoparticle-like armour has inspired chemists at the University of Warwick to devise a startlingly simple way to give drug bearing polymer vesicles (microscopic polymer based sacs of liquid) their own armoured protection." One goal is stealth armor that looks like water but can allow drugs to sneak past the immune system. What were they looking at for inspiration? "Organisms that particularly attracted our interest were those with a cell wall composed of an armour of colloidal objects, for instance bacteria coated with S-layer proteins, or phytoplankton, such as the coccolithophorids, which have their own CaCO3-based nano-patterned colloidal armour."

If these researchers succeed in getting DNA and rotating molecules to do the work of molecular machines already active in the living cell, will science finally admit that life shows evidence of intelligent design? Notice that they cannot yet come close to doing what ATP synthase, a flagellum, mother-of-pearl, a ribosome or a dragonfly wing has been doing for millennia. Ironic, is it not, that ATP synthase is powering their bodies and minds to imitate it.
Intelligent design is revolutionizing science via biomimetics, promising amazing benefits for human health and society, forcing thinking along engineering concepts, challenging our best scientific minds, inspiring awe at natural capabilities, and ignoring Darwin entirely.

Next headline on:

Shrinking Brains Prove Human Evolution  Feb 06, 2011

Ever since Darwin, brain size has been the measure of human nature. Except for some anomalies with Neanderthal and Cro-Magnon skull sizes, the iconic march of human evolution showed growing upright posture accompanied by increasing brain size, and brain size was used to discriminate between races on the presumption it was a measure of intelligence. It is not clear, therefore, what to make of a question on PhysOrg, "Are brains shrinking to make us smarter?" It seems evolutionists want to have it both ways. Larger brains are evidence of evolution; smaller brains are evidence of evolution. Does the new claim muddy the waters of brain size as the measure of increasing human intelligence?

The article tries to draw links between brain size as a function of body mass, or of population size, but it's not clear any trend is detectable. In fact, the article later admits that brain size is not well linked to intelligence. Brian Hare (Duke U) said, "But the downsizing does not mean modern humans are dumber than their ancestors; rather, they simply developed different, more sophisticated forms of intelligence." The article ended with Hare hoping that humans will express their inner bonobo. Chimps are more aggressive and violent. "Humans are both chimps and bobos [sic] in their nature and the question is how can we release more bonobo and less chimp," he said. "I hope bonobos win... it will be better for everyone."

This comedy show is brought to you for your Superbowl halftime entertainment. Imagine if the Steelers were able to set the rules so that no matter which goal the ball landed on, they would win, and they could get away with it because they bought off all the referees. The Packers can't complain, because if they don't play according to those rules, they are accused of practicing religion instead of Football. The cameramen aim the cameras to make the Steelers look good and the Packers bad, and the commentators have their talking points down to make it all look enlightened and progressive. Anyone who complains this is unfair gets Expelled from the stadium. Octopus arms have an optimal design: 02/09/2005.

Next headline on: Darwin and Evolution

Intelligence as a Cosmic Reality  Feb 06, 2011

The "I" in SETI takes "Intelligence" seriously. It requires that intelligence is a recognizable, quantifiable property of nature. The origin of intelligence, whether it is a fundamental or an emergent property, is a question that separates theists from materialists. Before engaging that question, it might be instructive to see how scientists who are not necessarily theists are regarding it.

Intelligence is a concept that overlaps the fringes of many sciences. Researchers in neuroscience, artificial intelligence, linguistics, information theory, cryptography, SETI and communications all assume intelligence is real but, like life, have a difficult time defining it (01/16/2011). While using the term as applied to birds, rats, machines or aliens, there is something about human intelligence that yearns to communicate, not just for food or sex, or as a response to a stimulus or program, but for understanding at a deep level. Is that just more of the same as observed in animals?
And can such longings, while making use of atoms (as in brain memory centers), be reduced to atoms?

- SETI protocol: The Arecibo Message beamed to the stars in 1974 was a binary-encoded stream of bits. Subsequent messages have included graphical depictions of humans, and catalogs of human science and art. PhysOrg recalled those attempts at communication with other intelligences and asked what would be the most likely protocol that aliens would recognize as intelligent on the receiving end. This is the study of METI: messaging to extraterrestrial intelligence. METI includes considerations of how to maximize communication effectively at the lowest cost. What good would an engraving of human forms be for aliens without eyes? An international team, PhysOrg reported, considered factors like signal encoding, message length, information content, anthropocentrism, transmission method, and transmission periodicity for an upcoming report in Space Policy. Their current recommendation is to concentrate on short, simple messages with minimal anthropocentrism, which rely on simple physical or mathematical principles. The scientists also emphasize that searching for and attempting to communicate with extraterrestrials is as much about understanding ourselves as it is about finding aliens, the press release continued. We need, in other words, to understand human intelligence. The only way we have to calibrate a test message, though, is to try it on other human beings with other cultures and languages. Whatever they decide to send for the next broadcast from Earth, they must assume intelligence is real at both the sending and receiving end.

- Universal intelligence: Science Daily expanded the concept with an article, "On the hunt for universal intelligence." The question is, how do you use a scientific method to measure the intelligence of a human being, an animal, a machine or an extra-terrestrial? To plumb that question, Spanish and Australian AI (artificial intelligence) researchers devised a new intelligence test to replace the historic Turing Test that Alan Turing developed in 1950 to demonstrate intelligence in machines. Their new Anytime Universal Intelligence test can be applied to any subject, whether biological or not, at any point in its development (child or adult, for example), for any system now or in the future, and with any level of intelligence or speed. Their model measures Kolmogorov complexity, the number of computational resources needed to describe an object or a piece of information, yet they admit this is a first step in an ongoing evaluation of intelligence.

- Language efficiency: Philip Ball at Nature News reported on a new proposal in linguistics at MIT theorizing that longer words carry more information. In contrast to a 1930s-era model by George Kingsley Zipf, that language speakers seek to minimize time and effort when speaking, Steven Piantadosi and colleagues propose that to convey a given amount of information, it is more efficient to shorten the least informative, and therefore the most predictable, words rather than the most frequent ones. While not speaking of intelligence directly, this article overlaps with the means of communication between intelligent agents. The words "informative" and "predictable" presuppose intelligences able to discriminate those factors using abstract reasoning.

- Mind matters: At the threshold of mind and matter, neuroscientists continue to probe how intelligence is mediated by the physical brain.
PhysOrg reported on experiments at the University of Sydneys Centre for the Mind that seemed to indicate electrical stimulation of the anterior temporal lobe produced flashes of insight that might lead to an electronic thinking cap some day. Neuroscientists at New York University found, according to Science Daily, that memory storage and reactivation is more complex than thought. Experiments on lab rats showed that different effects of specifically inhibiting the initiation of protein synthesis on memory consolidation and reconsolidation, making clear these two processes have greater variation than previously thought. Memory, however is a tool of mind, not mind itself if the distinction is more than academic. No SETI researcher, however, is expecting lab rats to attempt purposeful communication with alien civilizations. If memory is more complex than thought, thought is also more complex than memory. These are deep questions that have not been exhausted by philosophers despite millennia of trying. But when you use your intelligence to define intelligence, or think about thinking, who is acting? While intelligence is somewhat quantifiable in birds or dolphins or apes, our self-consciousness as beings, as persons, able to communicate and desiring communication with others, is unheard of in the animal kingdom. Unlike bird chirps and ape grunts, we speak with meaning (semantics) using complex syntax, referring to abstractions in the conceptual realm. We use codes and references. We write philosophy books and symphonies with no survival value. We can communicate the same message through entirely different physical media. Perhaps the better question is the search for extra-terrestrial personality. Like the fire triangle (heat, oxygen, fuel), the triad of personality intellect, emotions, and will lights the fire of communication as only intelligent persons experience it. It is doubtful todays human SETI staff would be particularly thrilled if future intelligent robots made contact with alien robots, intelligent as they might be. Even if emotions and will were programmed into the robots, we would recognize the robots to be just carrying out the program. Similarly, if our self-conscious intelligence is to be accepted as real as we know it to be deep in our souls, it cannot be just executing a genetic program. If intelligence were an epiphenomenon of matter in motion, no scientist could ever know that to be true. Truth implies morality (honesty). If morality is also an epiphenomenon of matter in motion, the materialist soon multiplies epiphenomena upon epiphenomena, reducing his explanation to ghost stories. The only self-consistent explanation for intelligence, personality, and truth is that they derive from a Creator who is intelligent, personal, and true: I AM. Next headline on: Mind and Brain Philosophy of Science Feb 05, 2011 Here is a quick list of headlines to scan for your next baloney detecting safari. Be discerning: theres great, good, bad, and ugly in the list, often within the same article. Scientists insert themselves into many subjects. Not all scientific research is equally valid. By standing for too much, does science spread its presumptive authority too thin? - Planets Why the moon is getting farther from Earth: BBC News. - Stars First stars were not born solitary: PhysOrg. - Zoology How do insects survive the cold winter? Answer at PhysOrg. - Health Exercising outdoors provides value added: PhysOrg. - Education Colleges not teaching critical thinking: PhysOrg. 
- Physics Information can be erased without energy: PhysOrg.
- Geology Dead Sea drill cores may illuminate Biblical history: PhysOrg.
- Astrophysics The waters above: measuring H2O in space: PhysOrg.
- Evolution Cancer as an evolutionary process: PhysOrg.
- Health Discoverer of induced pluripotent stem cells wins top award: PhysOrg.
- Planets The MESSENGER spacecraft approaches Mercury orbit on March 17: PhysOrg.
- Health Popeye was right: eat your spinach for efficient muscle: PhysOrg.
- Archaeology Zechariah's tomb found? PhysOrg; ask Todd Bolen.
- Evolution Lampreys provide clues on evolution of immune system: PhysOrg.
- Biology Why do fish sleep? PhysOrg.
- Birds Birds use right nostril to navigate: PhysOrg.
- Geology Finding gold by a new model: Science Daily.
- Physiology When the nose smells, the brain wants to know: Science Daily.
- Genetics/Evolution Jumping genes tangle Darwin's tree of life: Science Daily.
- Fossils Eleven-foot bear fossil found in Argentina: Live Science.
- Education Kids believe literally everything they read online: Live Science.
- History Vikings could have navigated on cloudy days: Live Science.
- Genetics Lessons learned from the Human Genome Project: Live Science.
- Anthropology Man's best friend, the fox: news at Live Science.
- Marine Biology Ocean in motion: how do squid hear? Live Science.
- Evolution Childhood diseases rooted in evolution: Live Science.
- Early man You could outrun Neanderthals in a race, says New Scientist.
- Mathematics Explore fractals in Google: New Scientist.
- Politics Climate forecasting is a form of soothsaying: New Scientist.
- Origin of Life Is this Dr Frankenstein creating life from scratch and bootstrapping evolution? New Scientist.

Although many subjects are touched on above, we only have space to categorize them as Amazing Facts or Dumb Ideas. Some include both, perhaps some neither. Readers are encouraged to analyze these articles with their critical thinking skills. Write in if there is one you would like reported in detail.

Next headline on:

Fossils by Faith  Feb 04, 2011

Fossils are real artifacts you can hold in your hand. The stories behind them are not. How does science connect the one with the other? Sometimes it requires faith in incredible stories. If paleontologists unfamiliar with the consensus views on age, origin, ancestry and evolutionary mechanisms were to examine these fossils, it's interesting to consider what stories they might come up with.

- Stay, sis: Darwin portrayed a world in flux, with natural selection continually sifting and amplifying minute changes over time. Why, then, did Science Daily title an article "Rare Insect Fossil Reveals 100 Million Years of Evolutionary Stasis"? Sure enough, the article claims that a certain splay-footed cricket in rock alleged to be 100 million years old has undergone very little evolutionary change since the Early Cretaceous Period, a time of dinosaurs just before the breakup of the supercontinent Gondwana. But is a phrase like "evolutionary stasis" an explanation, or just a term providing protection from ...?

- Goldilocks and the 3 Dinos: According to one report, computer models show that dinosaurs can only leave footprints in strata that are just right for the mass of the animal. "Now we can use this Goldilocks effect as a baseline for exploring more complicated factors such as the way dinosaurs moved their legs, or what happens to tracks when a mud is drying out."
But even if the model allows the scientist to tweak all the parameters in a computer, what happened to good old-fashioned field experiments?

- Titanoceratops the granddaddy: Analysis of a partial skeleton from New Mexico could be the new granddaddy of horned dinosaurs, National Geographic News teased. It's a big one, the biggest horned dinosaur found in North America, dated at 74 million years old; but hold on: they gave this bone a new name when they are not sure it isn't a member of a previously identified species called Pentaceratops. No sooner was it given a titanic name than paleontologists were describing its Darwinian pedigree: "If indeed a new species, Titanoceratops' discovery could also mean that triceratopsins (members of a family of giant horned dinosaurs) evolved their gigantic sizes evolved [sic] at least five million years earlier than previously thought, the study says." It's not clear why this specimen had anything to do with ancestry. Does the smaller evolve from the larger? Sometimes, perhaps, but clearly, much of Darwin's story had to get things bigger than the last universal common ancestor, a cell. A Yale paleontologist remarked, "It's pretty surprising; I would have not thought something this big and this advanced was living in this time period." But have faith: "I would like it to be real," a paleontologist at the Cleveland Museum of Natural History said, struggling with his doubts. Another brother helped his unbelief: "After all, Triceratops must have had ancestors in this earlier time, and this individual does show specialized traits that we see in the Triceratops complex."

Pardon, your assumptions are showing. Did you catch the slips? The specimen "must have had ancestors" in this earlier time: says who? Darwin, that's who. The evidence may not show it, and claiming it may require willing suspension of disbelief, but the Bearded Buddha asks for unfeigned faith. But then why not apply the same faith to Titanoceratops (if such a species even existed) that was applied to the splay-footed cricket, saying it showed incredible stasis for 100 million years?

Darwin Day, that annual non-event, is coming up on the 12th. It's pretty much blown itself out after the 2009 Bicentennial hoopla, but if you want to spice up your celebration, look at our 02/13/2004 entry for game ideas; more after the 02/13/2008 entry. Better yet, read Darwin Day in America by John West. Evolutionists have come up with the perfect crime. No evidence will ever convict Darwin, because he bought out the police, the researchers, the politicians, the teachers, and the judges. Will any magistrate in his totalitarian regime ever pay attention to a citizen's arrest of these scientist impersonators? (see 09/30/2007 commentary). If the foundations be destroyed, what can the righteous do? Tell the unvarnished truth to whoever will listen, that's what.

Next headline on: Darwin and Evolution

Feb 03, 2011

Recent news stories about Mars can be categorized into past, present, and future. Meanwhile, back on Earth, a volunteer crew of six have reached Mars orbit in a simulated experiment testing how humans might endure long-term space travel. PhysOrg reported on the milestone of the Mars500 project, now 244 days into their experiment living and working in hermetically sealed modules as if traveling to Mars and back. On February 14 (Valentine's Day) they get to emerge onto a simulated Martian surface. The arrival back on Earth (a place they never really left) occurs in early November.

- Mars past: How Mars formed is a convoluted story.
That is evident from a report on PhysOrg that might suggest Mars modelers are drinking too much to relieve stress: "Marstinis could help explain why the red planet is so small." Mars seems the right size for itself, but for modelers, it is too small for their theories. The article describes a kind of complex billiard game requiring Mars to migrate outward before it could grow to its expected size. As it went, it perturbed smaller planetesimals, objects the modelers dubbed Marstinis. Those, in turn, might have gotten perturbed by giant planets, which were also migrating at the time. This complex scenario has the benefit of simultaneously providing source material to explain another mystery: the Late Heavy Bombardment, needed to explain cratering on the moon.

Divining Mars' history in meteorites was discussed in another article. Studying wafer-thin slices of meteorites thought to have landed on Earth from Mars, scientists look for clues indicating large impacts on the red planet. From conclusions reached, they try to infer impact effects on subsurface water and the production of carbonates, serpentine, clay and methane. A scientist promised, "We are now starting to build a realistic model for how water-deposited minerals formed on Mars, showing that impact heating was an important process."

- Mars present: Several sources reported the surprise that Mars sand dunes can change quickly: Space.com, BBC News, and PhysOrg among them. Scientists had considered the dunes to be fairly static, shaped long ago when winds on the planet's surface were much stronger than seen today, according to analysis of images from the Mars Reconnaissance Orbiter HiRISE camera. Several sets of before-and-after images from HiRISE over a period covering two Martian years (four Earth years) tell a different story. Scientists are now seeking to understand how carbon dioxide sublimation, a process not occurring on Earth, contributes to the rapid changes. "There's lots of debate about whether features we see on Mars could be produced in the current Mars climate or whether they require different conditions," one scientist commented. "The numbers and magnitude of the changes have been really surprising."

Meanwhile, the THEMIS infrared camera on the Mars Odyssey orbiter, NASA's longest-running Mars mission, is studying Mars dust. Dull as that sounds, it is actually an important source of information on Mars, as PhysOrg explained. Principal investigator Philip Christensen has a puzzle: "There's a good question why Mars isn't a billiard-ball planet covered by a kilometer of dust," he said, considering that scientists believe it has been there 4.5 billion years. Explaining why the dust layer is thin required imagination: "Well, maybe throughout most of its history, Mars has had too thin an atmosphere to make dust or initiate saltation or wind abrasion," he said. "No dust devils, no storms." Mars seems poised on the brink of global dust storms that occasionally obscure the entire surface of the planet with dust as fine as talcum powder. Calculations show that 100 meters of dust should blanket the planet in 4.5 billion years given current estimated dust creation rates (a long-run average of only a couple of centimeters per million years). To wriggle out of that anomaly, Christensen imagined that the atmosphere cycles in and out, actively creating dust only 2% of the time. Even so, that would have produced 2 meters of dust on Mars (2% of 100 meters), which he says is about right, provided he be forgiven for tweaking an unseen history to match the observations.

- Mars future: Interesting missions are being planned for Mars.
The big one is Jet Propulsion Laboratory's long-awaited Mars Science Laboratory (MSL), nicknamed Curiosity, scheduled for launch in the fall. Science Daily talked about its Sample Analysis at Mars instrument (SAM), which, along with other instruments, will check for the ingredients of life. Ingredients is the operative word: MSL will be unable to detect life, but may be able to determine once and for all whether organic molecules are found on the Red Planet. An article about future Mars missions on PhysOrg has illustrations reminiscent of sci-fi comic books. The Center for Space Nuclear Research at Idaho National Laboratory is working on fleets of Mars hoppers they feel would be more efficient explorers of the Martian surface than rovers. CSNR's mission design includes a method for sample return, delivering Martian material back to Earth for analysis. A single rocket launch from Earth could deploy several hoppers at once, the article explained. "A few dozen hoppers could map the entire Martian surface in a few years.... Hoppers could also serve as a network of weather stations monitoring the Martian climate and could collect a trove of air, rock and soil samples to send back to Earth."

Mars exists in the present. We don't see Mars in the past, or in the future. We see effects produced by an unobserved history, and can extrapolate current processes a reasonable amount forward. When scientists tweak too many parameters in their imagined scenarios, moving the planet in and out, imagining lucky-strike impacts at certain times and places where needed for theory, turning the atmosphere on and off to maintain a belief in billions of years, we have good reason to doubt the infallibility of their science. Remember that scientific explanation is an entirely different enterprise than scientific discovery. Let's discover! Go forth and conquer with Curiosity and better instrumentation. Hop to it. Data clear the fog and put storytellers out of business. But always keep a wary eye on the opinions of scientists who practice divination, or who make reckless drafts on the bank of time.

Next headline on:

Planets a-Plenty, but Are They Lively?  Feb 02, 2011

The Kepler spacecraft has found over 1,235 planet candidates so far (Space.com), 54 in their stars' habitable zones, and some Earth-size or smaller. Science media are having a field day reporting the discoveries, portraying them with artists' imaginations, licking their chops at the possibility of life in outer space. What does this mean? Space.com is racking up the most headlines: tabulating the leading earthlike candidates, posting videos with expert prognosticators, posting a gallery of the strangest, keeping the tally current. So far, the number of habitable planets with life is: 1. (That's us, folks.) The number of Earth-size and Earthlike habitable planets confirmed to exist with intelligent life: we call this planet Earth. That's assuming we can agree on a definition of intelligent life.

Scientists were surprised to find a six-pack of planets around a star named Kepler 11, reported Space.com. The smallest in the system is 2.3 times the size of Earth; others are the size of Uranus or Neptune. The planets' orbits do not fit planetary evolution theories unless the planets migrated: "the close proximity of the inner planets is an indication that they probably did not form where they are now," one scientist commented. No sense looking for life on these planets; none are habitable by any measure. You can take a tour of the system on Space.com anyway.
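As a back-of-the-envelope check on why "habitable by any measure" fails for close-in worlds, here is a small sketch using the standard blackbody equilibrium-temperature estimate. It is not a calculation from the Kepler team or from the articles cited here; the 0.05 AU orbit and the 0.3 albedo are illustrative assumptions, and real surface temperatures also depend on atmosphere and tidal heating.

```python
import math

def t_eq(t_star_k, r_star_m, a_m, albedo=0.3):
    # Standard estimate: T_eq = T_star * sqrt(R_star / (2 a)) * (1 - A)^(1/4)
    return t_star_k * math.sqrt(r_star_m / (2.0 * a_m)) * (1.0 - albedo) ** 0.25

R_SUN = 6.957e8   # stellar radius of the Sun, meters
AU = 1.496e11     # astronomical unit, meters

# Sanity check with Earth-like values (Sun-like star, 1 AU):
print(round(t_eq(5772, R_SUN, 1.0 * AU)))   # ~255 K, sub-freezing before greenhouse warming

# Hypothetical rocky planet at 0.05 AU of the same star:
print(round(t_eq(5772, R_SUN, 0.05 * AU)))  # on the order of 1100 K
```

Since the temperature scales as one over the square root of the orbital distance, halving the distance raises it by roughly 40%; a rocky planet hugging its star, like the hot world discussed below, glows at furnace temperatures while a planet near 1 AU of a Sun-like star sits near the freezing point of water before any greenhouse warming.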
Another Space.com article described the 54 potentially habitable planets Kepler has found (see also the BBC News article by Jason Palmer). One of the leading contenders for Earthlike Planet, named Kepler 10-b (see Space.com gallery), was announced last month: the first rocky planet ever discovered outside our solar system according to David Tyler writing for ARN. Trouble is, its rocks are hot 1500°C because the planet is closer to its parent star than Mercury to our sun. What are the implications of Keplers unquestionably exciting finds? Before the latest Kepler tally was announced, one of the leading planet hunters gave his thoughts in an interview on (see also MSNBC News). Geoff Marcy had participated in finding more planets than anyone else. The first questions concerned technology and statistics, but then he admitted a scientific embarrassment: hot Jupiters. No one predicted gas giants close to the star. All the scientists expected extrasolar planetary systems to resemble ours, with the rocky planets close in and the gas giants farther out. It was silly reasoning, based on a sample size of one, he agreed: It would be like trying to characterize human psychology by going to one distant Indonesian island and interviewing one person, and thinking that that gave you the full range of human psychology. We also dont know how long planets last, he said, or how common Earth-like planets are. The existence of life is the big question. According to the UK Mail Online, Dr. Howard Smith (Harvard) has lost hope of finding intelligent life. Of the first 500 planets found, none are habitable; they are downright hostile. The new information we are getting suggests we could effectively be alone in the universe, he said. Geoff Marcy is mildly pessimistic, too: We might be rare, he remarked. Where are the SETI [search for extraterrestrial intelligence] signals? he asked. There is a non-detection thats like the elephant in the room. Forty years of searching has turned up empty. So theres an indication not definitive that maybe the Earth is more precious than we had thought. He was not considering intelligent design as an option. He said, after considering how comparatively young our solar system is in an ancient universe, maybe habitable planets that sustain Darwinian evolution for a billion years maybe theyre rare. Maybe. Asked if he has a gut feel about cosmic loneliness, he said, I do. If I had to bet and this is now beyond science I would say that intelligent, technological critters are rare in the Milky Way galaxy. The evidence mounts. We Homo sapiens didnt arise until some quirk of environment on the East African savannah so quirky that the hominid paleontologists still cant tell us why the australopithecines somehow evolved big brains and had dexterity that could play piano concertos, and things that make no real honest sense in terms of Darwinian evolution. He was speaking of the giant dinosaurs ruling the earth with chicken-size brains. He could not point to anything making sense in Darwinism, but he dismissed purposeful direction out of hand: Why the high chaparral on the East African savannah wouldve led to a Tchaikovsky piano concerto, never mind the ability to build rocket ships theres no evolutionary driver that the australopithecines suffered from that leads to rocket ships. And so that and the fact that we had to wait four billion years without humans. Four billion years? SPACE.com: Yes, it took four billion years to get there. 
Marcy: Since the Cambrian explosion, we had hundreds of millions of years of multi-cellular, advanced life in which, guess what happened with brain size? Nothing. We humans came across braininess because of something weird that happened on the East African savannah. And we can't imagine whether that's a common or rare thing.

From there, the interviewer and Marcy pepped themselves up with dreams of a souped-up SETI project. He implied it would be easy to separate an intelligently designed signal from a natural one: "We know what to look for," he said. "That would be the rat-a-tat-tat of a radio signal. We don't know exactly what the code would be, but we'd be looking for pulses in the radio, in the infrared maybe, in the X-ray or UV. We'd have to think broadly. But this is a great quest for humanity."

SPACE.com: People assume evolution is directed, and it's always leading toward higher complexity and greater intelligence, but it's not.

Marcy: It's not. Dinosaurs show this in spades.

David Tyler drew different conclusions from the same evidence for the uniqueness of our planet. In the ARN article, he said, "Based on evidence, some argue that the Earth is a Privileged Planet. The basic approach of that book is being vindicated as research discovers just how extraordinary the Earth is."

Are you sometimes undecided whether to laugh or weep for the SETI cultists? Both responses can make you shed tears. Marcy and his interviewer both admitted they are clueless, surprised, ignorant, and resigned to Stuff Happens as their scientific explanation for everything. Swallowing the whole Darwin baggage of billions of years of evolution, he could only say that something weird happened on the East African savannah: a hominid got a brain, and presto, a Tchaikovsky piano concerto. Now, while Dr. Marcy and the Kepler scientists deserve honors for collecting data with intelligently designed instruments, they're not likely to rank very high as philosophers or theologians. If the best philosophy they can invent is "stuff happens," they have flunked out. And if they cannot be convinced they are hopelessly lost via the evidence of the Privileged Planet, the SETI silence, the origin of life, the Cambrian explosion, and a Tchaikovsky piano concerto, is there any hope for today's secular scientists being rescued from self-deception? Remember, these are the same people who refuse to let criticisms of Darwinism be heard in the schools or research labs. Emperor Charlie is not only naked himself, he is surrounded by naked soldiers arresting the clothed little boy for indecent exposure.

Added to that, when you hear of communist and Muslim radicals calling for the complete overthrow of Western civilization and the brutal murder of Supreme Court justices, and the news media totally ignoring their hate speech while calling out peace-loving conservative Christians for alleged violent rhetoric, it is hard not to conclude that most of the world has gone completely crazy. Don't be surprised; it has gone crazy many times before. Escape the craziness with power, love, and a sound mind (II Timothy 1:7). Then rescue a neighbor.

Next headline on: Darwin and Evolution

Metaphors of Evolution  Feb 01, 2011

If Will Rogers never met a man he didn't like, science never met a metaphor it didn't force. The history of science is replete with examples of metaphors not only trying to explain phenomena, but actually driving scientific research.
Many times thoughtless metaphors have said more about current social values than science. So argued Mary Midgley, a freelance philosopher specialising in moral philosophy, in a recent article: "The trouble with metaphors is that they don't just mirror scientific beliefs, they also shape them. Our imagery is never just surface paint, it expresses, advertises and strengthens our preferred interpretations. It also usually carries unconscious bias from the age we live in and this can be tricky to ditch no matter how faulty, unless we ask ourselves how and why things go wrong, and start to talk publicly about how we should understand metaphor." The article was developed from her book, The Solitary Self. But did her own conclusion heed the lessons of history?

Here is a short list of metaphors she found in science over the centuries:

- Nature, the clock: Scientists in Newton's day envisioned the world as a mechanical clock wound up by God.
- Nature, the billiard game: Early atomists interpreted everything as colliding billiard-ball atoms. Rousseau applied this to social atomism.
- Nature, the war of all against all: Thomas Hobbes' metaphor of a war of individuals "accidentally launched a wider revolt against the notion of citizenship," Midgley said. The slogan made it possible to argue later that there is no such thing as society, that we owe one another nothing.
- Nature, the capitalist: Laissez-faire capitalism, Midgley argued, is an application of atomism to economics.
- Nature, the competitor: Spencer and Darwin used the metaphor of competition to interpret nature, although Midgley asserts that Charles Darwin actually hated much of it, flatly rejecting the crude, direct application of natural selection to social policies. Whether or not his emotions against competition were derived from science or from his cultural milieu is another question.
- Nature as selfish genes: "Evolution has been the most glaring example of the thoughtless use of metaphor over the past 30 years, with the selfish/war metaphors dominating and defining the landscape so completely it becomes hard to admit there are other ways of conceiving it," Midgley complained.
- Nature as self-organization: D'Arcy Thompson, Brian Goodwin, Steven Rose and Simon Conway Morris have worked on the metaphor of unfolding organic forms, a kind of self-organisation within each species, which has its own logic. Contrary to the long-held view of nature "red in tooth and claw," Goodwin has written that humans are "every bit as co-operative as we are competitive; as altruistic as we are selfish."

So did Midgley argue that we need to rid science of metaphors? No; she proposed new and better ones suitable for the 21st century: the language of integrated systems. Now the old metaphors of evolution need to give way to new ones founded on integrative thinking, reasoning based on systems thinking. This way, the work of evolution can be seen as intelligible and constructive, not as a gamble driven randomly by the forces of competition. And if non-competitive imagery is needed, systems biologist Denis Noble has a good go at it in The Music of Life, where he points out how natural development, not being a car, needs no single driver to direct it. Symphonies, he remarks, are not caused only by a single dominant instrument nor, indeed, solely by their composer. And developing organisms do not even need a composer: they grow, as wholes, out of vast and ancient systems which are themselves parts of nature.
She did not reveal whether she is an admirer of John Cages chance music, but his kind of music seems to be the only kind that emerges without a composer. All other symphonies are usually composed and performed by intelligent design. It could be argued, though, that even John Cage purposefully chose to produce his works in certain directed ways. He had to choose to sit at a piano, for instance, and decide not to play for 4 minutes and 33 seconds, turning pages at pre-designed movements. For the metaphor to work, Cage would have had to step aside and do absolutely nothing but even that would be a choice. Metaphors bewitch you (07/04/2003). If Mary Midgley wants to criticize earlier scientists for imposing their social values (like competition) on nature, then how can she avoid being criticized for imagining nature to be a self-organizing system? The next philosopher in future years could just as easily sneer at Midgleys own misguided conceptions of nature, just as she sneered at evolutionists for being guilty of the most thoughtless uses of metaphor. Is it even possible for humans to perceive nature without metaphors? If you look at the list, all of the suggested metaphors have presupposed intelligent agency: clocks, billiards, warfare, competition, selfish genes, symphonies. Intelligence in the atomistic view is a little harder to spot, until you recognize that colliding atoms presuppose natural laws: spherical shapes, and consistent physics of collisions. Theists draw on the metaphor of a Creator as Architect, Designer, Maker, and Overseer. That is how God describes himself. So if every other metaphor already presupposes intelligent agency, then theism must be the most accurate one. Metaphors, therefore, can be true. If metaphors are inescapable, the symphony one is a good one. God becomes the composer and conductor, His creatures the obedient yet skilled musicians, the instruments the capabilities, skills and talents he has endowed on his works. The music is extended in time, with moments of tension and relaxation, periods where the listener is uncertain where the work is headed, but all working toward a planned finale. Remove the sheet music and the conductor, though, and you get nothing but endless tuning exercises that all sound alike. Eventually the musicians leave and the music stops, having gone nowhere. John Cage might be happy, but not the rest of us, who know design when we see it and hear it. The fact that audiences vastly prefer Mozart to John Cage just might reveal something about reality. Next headline on: Darwin and Evolution Politics and Ethics Philosophy of Science Bible and Theology I love your blog [sic; science news site], and have been very blessed by your hard work of staying up to speed on current science news. In fact, I like your blog articles so much that I am constantly quoting you and using some of your material on Facebook discussion boards. (a paralegal in California) The things these scientists and priests of Darwin write are hysterical.... Thanks for the work of gathering it into a convenient location. (a maintenance tech in Colorado) I just found this site earlier today, and I can't stop reading! This seems like a great place to keep myself updated in this heated debate between Creation science and Darwinian nonsense. Though being an electrical engineer student, I may or may get involved with much biology in my future career; I still hope that my appreciation for Gods design may help me become a better designer myself. 
(an undergrad in California) Your site is great. I read it every day. Whoever is doing the writing and researching is a genius: witty, funny, insightful, informed and always right on....you make the darwinian farce look utterly laughable. Keep up the great work...I'll be tuned in. God Bless you! (a real estate investor in Texas) First, thank you for your site. It is truly wonderful. I spend a lot of time debating evos on web sites ... and your site is indispensable reading for me. I've been reading it for several years now and I would like to become a regular donor.... It wont be much, but Id like to do what I can. I feel this issue is the primary issue standing between humanism and revival in America and the world. (a reader in Florida) I have just recently started studying this science of God stuff with some degree of diligence and I am astonished at what I did not know and did not understand, despite my 58 years, and 4 Masters Degrees, to include from an Ivy League school. Thank you for your efforts to get the true word out. (a reader in Alabama) Just wanted to let you know how much I enjoy creation headlines. I check them every few days and constantly gain perspective. Without you, even I would possibly believe at least some of the baloney. Your baloney detector is more sensitive and I appreciate it!! (a pastor in Ontario, Canada) Crev.info is by far my favorite website. The information is helpful in understanding and dismantling Darwinian foundations. Some of my favorite articles are the biomimetics articles revealing the fantastic design that is obvious to anyone not blinded by the evolutionary goggles. The commentaries are priceless, and love the embedded humor and clever innuendoes... Keep up the tremendous work! (a database administrator in Texas) Creation Evolution Headlines is my favorite website on intelligent design and Darwinism. This is both due to your being always on the cutting edge of all the latest research relevant to the debate over evolution, as well as due to your soberness coupled with your uncomprimising [sic] stand against Darwinism. Your work is invaluable. Please keep it up all costs. (founder of the Danish Society for Intelligent Design, who is beginning to translate some of our articles into Danish at Thank you for your work. You show how the theory of macroevolution makes absolutely no contribution to scientific progress and in fact impedes scientific progress. Conversely you show how the design assumption points the way toward true progress in science. (an aerospace engineer in California) Your website is by far the best for getting the most up to date news on what is going on in the science realm, and then separating the useful information from the baloney. You have a knack for ripping the mask off of Darwin. Keep up the great work! (an electrical engineer in North Carolina) I love your work. I check in nearly every day. (an associate pastor in California who works with college grad students) Ive been a fan of your site for some time, since a friend in an intelligent design group I joined (quite clandestine for the sake of job security of those involved) clued me in to it. I check what you have to say every day. I cant say enough about your highly credible scientific arguments, your incisive dissection of the issues, and your practical format. Your references are always on target. (a physician, surgeon and writer in Georgia) Nice site. I enjoy reading the comments, and find it quite informative.... I am a frequent visitor to your headlines page. 
I am a former agnostic, and Creation Safaris was one of the first pieces of open evolution questioning read (The Baloney Detector). Great stuff. (a historian in Australia) You do a terrific job on snatching content from the headlines and filtering it for stupidity and lockstep paradigm thinking! Not only are you on top of things but you do garnish the dish well! (an IT security consultant in the midwest) I always thought that science and the Bible should not be at odds with each other and prayed that God would reveal the truth about evolution/creation through science to us. I wondered if there existed scientists who were believers and how they reconciled Genesis with science. Where were they when I was teaching? Now I understand that these Godly men and women had been silenced.... I am so thankful for your website containing your insightful and educational articles that reveal your understanding of science and Gods word. (a retired biology teacher in Ohio) It keeps getting better and better. Wonderful resources there. (a mechanical engineer and educational consultant in Texas) Just stopped by to say Hi; Thanks again for your posting--still the best web site on the net!! (a regular reader in Illinois) I accidentally came across your BRILLIANT website today.... your website is mesmerising and i sincerely thank you for it. Wishing you every success. (an author in Ireland) I appreciate your reviews more than I can tell. Being able to find the references enables me to share them with my colleagues and students. (a teacher in Virginia) Thank you for your site. I have thoroughly enjoyed it for a few years now and find it an awesome resource. (a pastor in the arctic circle) This is a lovely site, and I personally visit this often.... An interesting thing is also the creation scientist of the month .... just this information alone is enough to write a book from. (a reader in South Africa) What God has done through you and crev.info in the past 9 years is nothing less than miraculous. (an author, PhD in science, and head of a Christian apologetics organization) I thank God for you and your contribution to His Kingdom. Yours is my favorite site. May the Lord bless you this season as you get some rest. We really appreciate your work. (a consultant in Virginia responding to our Thanksgiving-week hiatus) Instead of criticising every piece of evidence for evolution how about presenting some evidence for creationism? Obviously there are holes in evolutionary theory we cant even define a species! But its a theory with a whole load of evidence and if taken at its definition is a mathmatical [sic] certainty. (a student in Leeds, UK, who must have reacted to one or a few articles, and appears to be philosophically and mathematically challenged) In the creation vs. evolution world, which oftentimes is filled with a strong negative vibe, your website is a breath of fresh air! Keep it up. (a business manager in Texas) The maple-seed helicopter (10/21/2009) is fascinating. Ill be spending some time surfing your encyclopedic collection of articles. (dean of the aerospace engineering department at a major university) I stumbled upon this web site more than once by following links from my usual creationist web sites but now I visit here quite often. I am glad to see that there are more and more creationist web sites but disappointed to find out that this one has been running for nearly 10 years and I never knew about it. (an electronics engineer in Sweden) I am a teacher ... 
For three years ive been learning from you at crev.info/... My wife, a teacher also, passes your website on to all interested. We are blessed by your gifts to the body of Christ through this site! Thank-you for ALL your efforts over the decade. (a teacher in California) I just want to thank you for these resources that go back 9 years. It has helped be tremendously when debating evolutionists. Just like in the Parable of the Talents, God will say to you, Well done, good and faithful servant! (an engineer in Maryland) There is no other place I can find the breadth of subjects covered, yet with the detailed insight you give. People actually think I am smarter than I really am after I read your summaries. (a business owner in Utah) I believe there is a middle ground between ID and Evolution that defines what goes on in the real world. It hasnt been labeled by humanity yet, and its probably better that it hasnt, for now. The problem is there is still so much that humanity doesnt know about the universe we live in and our learning progress is so uneven throughout our population. If there is an Intelligent Designer, and I believe there is, these problems too will be taken care of eventually. In the meantime, you do the best you can, the best that's humanly possible, to be objective and logical, while maintaining your faith. (a retired letter carrier in Pennsylvania) The information you have provided has been instrumental in completely resolving any lingering doubts I had when I became a Christian and being faced with the monolithic theory of evolution. Your website is unique in that it allows the evolutionists themselves to shoot them in the feet by quoting them in context. Bravo! (a retired surveyor in Australia) I really enjoy reading your posts and often send out links to various friends and family members to direct them to your site. You have an incredible gift and I truly appreciate how you use it.... I have been a satisfied reader of your headlines for the last 5 years at least... cant remember when I first stumbled on your site but it is now a daily must-stop for me. (a senior software engineer in Ohio) Thank you so much for your news. Ive fully enjoyed your articles and commentary for a while now and look forward to the future. (a doctor in North Carolina) I like your stuff. (a doctor in New York) Thank you and may God bless you all at CEH, for the wonderful work you do. (a retired surveyor in Australia) The information you put out there is absolutely superb. (a lawyer in Kansas) Your website is the best website on the web for keeping me current of fast developing crev material. (a medical doctor in California) I am a Christian & really appreciate the creation websites, I check your site every night. (a logger in New Zealand) I just found your website a day or so ago and am totally addicted. You dont know what that says, considering Im only now within the last few days, as a matter of fact a recovering old-earther ... Talk about going down internet rabbit trails. I could go deeper and deeper into each headline you post and never get anything else done... (a home school educator, graphic designer, painter, former geologist in Texas) I very much enjoy your web site. I have used it as a resource for debating evolutionist for about a year. I am impressed at the breadth of journals and quantity of articles you report on. I have recommended your site to several of my on line friends. I dont care if you publish this post but I wanted you to know how thankful I am for all the hard work you do. 
(an engineering recruiter in California) I pray that our Lord continue to give you strength to continue writing your articles on Creation-headlines. I have been really blessed to read it daily....Unlike all other creation sites I am familiar with, yours has such a high scientific quality and your discussions are great. (a scientist and university professor in Iceland, where 95% of the people believe in evolution) Thank you for the work you do ... I scratch my head sometimes, wondering how you have the time for it all. (a former atheist/evolutionist in aerospace engineering, now Biblical creationist) Im a regular (daily :) reader of your site. It is amazing the amount of work that you impart in such a project. Thank you very much. (an IT professional with a degree in mechanical engineering from Portugal) I find your site so helpful and you are so fast in putting up responses to current news. I have your site RSS feed on my toolbar and can easily see when you have new articles posted. (a geologist in Australia) I have been reading your website for several years now. Working in an environment where most people believe that there are only two absolutes, evolution and relativism, it has been wonderful to be able to get the facts and the explanations of the bluffs and false logic that blows around. I have posted your website in many places on my website, because you seem to have the ability to cut through the baloney and get to the truth--a rare quality in this century. Thank you for all that you do. (a business analyst in Wisconsin) ...this is one of the websites (I have like 4 or 5 on my favorites), and this is there. Its a remarkable clearinghouse of information; its very well written, its to the point... a broad range of topics. I have been alerted to more interesting pieces of information on [this] website than any other website I can think of. (a senior research scientist) I would assume that you, or anyone affiliated with your website is simply not qualified to answer any questions regarding that subject [evolution], because I can almost single-handedly refute all of your arguments with solid scientific arguments.... Also, just so you know, the modern theory of evolution does not refute the existence of a god, and it in no way says that humans are not special. Think about that before you go trying to discredit one of the most important and revolutionary scientific ideas of human history. It is very disrespectful to the people who have spent their entire lives trying to reveal some kind of truth in this otherwise crazy world. (a university senior studying geology and paleontology in Michigan) Hi guys, thanks for all that you do, your website is a great source of information: very comprehensive. (a medical student in California) You are really doing a good job commenting on the weaknesses of science, pointing out various faults. Please continue. (a priest in the Netherlands) I much enjoy the info AND the sarcasm. Isaiah was pretty sarcastic at times, too. I check in at your site nearly every day. Thanks for all your work. (a carpet layer in California) I just wanted to write in to express my personal view that everyone at Creation Evolution Headlines is doing an excellent job! I have confidences that in the future, Creation Evolution Headline will continue in doing such a great job! Anyone who has interest at where science, as a whole, is at in our current times, does not have to look very hard to see that science is on the verge of a new awakening.... 
It's not uncommon to find articles that are supplemented with assumptions and vagueness. A viewpoint that would rather keep knowledge in the dark ages. But when I read over the postings on CEH, I find a viewpoint that looks past the grayness. The whole team at CEH helps cut through the assumptions of weary influences. CEH helps illuminate the true picture that is shining in today's science. A bright, clear picture, full of intriguing details, independence and fascinating complexities. I know that Creation Evolution Headlines has a growing and informative future before them. I'm so glad to be along for the ride!! (a title insurance employee in Illinois, who called CEH The Best Web Site EVER!!) Thank you very much for your well presented and highly instructive blog [news service]. (a French IT migration analyst working in London) Please keep up the great work -- your website is simply amazing! Don't know how you do it. But it just eviscerates every evolutionary argument they weakly lob up there -- kind of like serving up a juicy fastball to Hank Aaron in his prime! (a creation group leader in California) I just want to thank you for your outstanding job. I am a regular reader of yours and even though language barrier and lack of deeper scientific insight play its role I still draw much from your articles and always look forward to them. (a financial manager and apologetics student in Prague, Czech Republic) You guys are doing a great job! ... I really appreciate the breadth of coverage and depth of analysis that you provide on this site. (a pathologist in Missouri) I have read many of your creation articles and have enjoyed and appreciated your website. I feel you are an outstanding witness for the Lord.... you are making a big difference, and you have a wonderful grasp of the issues. (a PhD geneticist, author and inventor) Thank you for your great creation section on your website. I come visit it every day, and I enjoy reading those news bits with your funny (but oh so true) commentaries. (a computer worker in France) I have been reading Creation Evolution Headlines for many years now with ever increasing astonishment.... I pray that God will bless your work for it has been a tremendous blessing for me and I thank you. (a retired surveyor in N.S.W. Australia) I totally enjoy the polemic and passionate style of CEH... it simply refreshes the heart with its wonderful venting of righteous anger against all the BS we're flooded with on a daily basis. The baloney detector is just unbelievably great. Thank you so much for your continued effort, keep up the good work. (an embedded Linux hacker in Switzerland) I love to read about science and intelligent design, I love your articles.... I will be reading your articles for the rest of my life. (an IT engineer and 3D animator in South Africa) I discovered your site about a year ago and found it to be very informative, but about two months back I decided to go back to the 2001 entries and read through the headlines of each month.... What a treasure house of information! ....you have been very balanced and thoughtful in your analysis, with no embarrassing predictions, or pronouncements or unwarranted statements, but a very straightforward and sometimes humorous analysis of the news relating to origins. (a database engineer in New York) I discovered your site several months ago.... I found your articles very informative and well written, so I subscribed to the RSS feed.
I just want to thank you for making these articles available and to encourage you to keep up the good work! (a software engineer in Texas) Your piece on Turing Test Stands (09/14/2008) was so enlightening. Thanks so much. And your piece on Cosmology at the Outer Limits (06/30/2008) was another marvel of revelation. But most of all your footnotes at the end are the most awe-inspiring. I refer to Come to the light and Psalm 139 and many others. Thanks so much for keeping us grounded in the TRUTH amidst the sea of scientific discoveries and controversy. Its so heartwarming and soul saving to read the accounts of the inspired writers testifying to the Master of the Universe. Thanks again. (a retired electrical engineer in Mississippi) I teach a college level course on the issue of evolution and creation. I am very grateful for your well-reasoned reports and analyses of the issues that confront us each day. In light of all the animosity that evolutionists express toward Intelligent Design or Creationism, it is good to see that we on the other side can maintain our civility even while correcting and informing a hostile audience. Keep up the good work and do not compromise your high standards. I rely on you for alerting me to whatever happens to be the news of the day. (a faculty member at a Bible college in Missouri) Congratulations on reaching 8 years of absolute success with crev.info.... Your knowledge and grasp of the issues are indeed matched by your character and desire for truth, and it shows on every web page you write.... I hope your work extends to the ends of the world, and is appreciated by all who read it. (a computer programmer from Southern California) Your website is one of the best, especially for news.... Keep up the great work. (a science writer in Texas) I appreciate the work youve been doing with the Creation-Evolution Headlines website. (an aerospace engineer for NASA) I appreciate your site tremendously.... I refer many people to your content frequently, both personally and via my little blog.... Thanks again for one of the most valuable websites anywhere. (a retired biology teacher in New Jersey, whose blog features beautiful plant and insect photographs) I dont remember exactly when I started reading your site but it was probably in the last year. Its now a staple for me. I appreciate the depth of background you bring to a wide variety of subject areas. (a software development team leader in Texas) I want to express my appreciation for what you are doing. I came across your website almost a year ago.... your blog [sic; news service] is one that I regularly read. When it comes to beneficial anti-evolutionist material, your blog has been the most helpful for me. (a Bible scholar and professor in Michigan) I enjoyed reading your site. I completely disagree with you on just about every point, but you do an excellent job of organizing information. (a software engineer in Virginia. His criticisms led to an engaging dialogue. He left off at one point, saying, You have given me much to think about.) I have learned so much since discovering your site about 3 years ago. I am a homeschooling mother of five and my children and I are just in wonder over some the discoveries in science that have been explored on creation-evolution headlines. The baloney detector will become a part of my curriculum during the next school year. 
EVERYONE I know needs to be well versed on the types of deceptive practices used by those opposed to truth, whether it be in science, politics, or whatever the subject. (a homeschooling mom in Mississippi) Just wanted to say how much I love your website. You present the truth in a very direct, comprehensive manner, while peeling away the layers of propaganda disguised as 'evidence' for the theory of evolution. (a health care worker in Canada) Ive been reading you daily for about a year now. Im extremely impressed with how many sources you keep tabs on and I rely on you to keep my finger on the pulse of the controversy now. (a web application programmer in Maryland) I would like to express my appreciation for your work exposing the Darwinist assumptions and speculation masquerading as science.... When I discovered your site through a link... I knew that I had struck gold! ....Your site has helped me to understand how the Darwinists use propaganda techniques to confuse the public. I never would have had so much insight otherwise... I check your site almost daily to keep informed of new developments. (a lumber mill employee in Florida) I have been reading your website for about the past year or so. You are [an] excellent resource. Your information and analysis is spot on, up to date and accurate. Keep up the good work. (an accountant in Illinois) This website redefines debunking. Thanks for wading through the obfuscation that passes for evolution science to expose the sartorial deficiencies of Emperor Charles and his minions. Simply the best site of its kind, an amazing resource. Keep up the great work! (an engineer in Michigan) I have been a fan of your daily news items for about two years, when a friend pointed me to it. I now visit every day (or almost every day)... A quick kudo: You are amazing, incredible, thorough, indispensable, and I could list another ten superlatives. Again, I just dont know how you manage to comb so widely, in so many technical journals, to come up with all this great news from science info. (a PhD professor of scientific rhetoric in Florida and author of two books, who added that he was awe-struck by this site) Although we are often in disagreement, I have the greatest respect and admiration for your writing. (an octogenarian agnostic in Palm Springs) your website is absolutely superb and unique. No other site out there provides an informed & insightful running critique of the current goings-on in the scientific establishment. Thanks for keeping us informed. (a mechanical designer in Indiana) I have been a fan of your site for some time now. I enjoy reading the No Spin of what is being discussed.... keep up the good work, the world needs to be shown just how little the scientist [sic] do know in regards to origins. (a network engineer in South Carolina) I am a young man and it is encouraging to find a scientific journal on the side of creationism and intelligent design.... Thank you for your very encouraging website. (a web designer and author in Maryland) GREAT site. Your ability to expose the clothesless emperor in clear language is indispensable to us non-science types who have a hard time seeing through the jargon and the hype. Your tireless efforts result in encouragement and are a great service to the faith community. Please keep it up! (a medical writer in Connecticut) I really love your site and check it everyday. I also recommend it to everyone I can, because there is no better website for current information about ID. 
(a product designer in Utah) Your site is a fantastic resource. By far, it is the most current, relevant and most frequently updated site keeping track of science news from a creationist perspective. One by one, articles challenging currently-held aspects of evolution do not amount to much. But when browsing the archives, its apparent youve caught bucketfulls of science articles and news items that devastate evolution. The links and references are wonderful tools for storming the gates of evolutionary paradise and ripping down their strongholds. The commentary is the icing on the cake. Thanks for all your hard work, and by all means, keep it up! (a business student in Kentucky) Thanks for your awesome work; it stimulates my mind and encourages my faith. (a family physician in Texas) I wanted to personally thank you for your outstanding website. I am intensely interested in any science news having to do with creation, especially regarding astronomy. Thanks again for your GREAT (an amateur astronomer in San Diego) What an absolutely brilliant website you have. Its hard to express how uplifting it is for me to stumble across something of such high quality. (a pharmacologist in Michigan) I want to make a brief commendation in passing of the outstanding job you did in rebutting the thinking on the article: Evolution of Electrical Engineering ... What a rebuttal to end all rebuttals, unanswerable, inspiring, and so noteworthy that was. Thanks for the effort and research you put into it. I wish this answer could be posted in every church, synagogue, secondary school, and college/university..., and needless to say scientific laboratories. (a reader in Florida) You provide a great service with your thorough coverage of news stories relating to the creation-evolution controversy. (an elder of a Christian church in Salt Lake City) I really enjoy your website and have made it my home page so I can check on your latest articles. I am amazed at the diversity of topics you address. I tell everyone I can about your site and encourage them to check it frequently. (a business owner in Salt Lake City) Ive been a regular reader of CEH for about nine month now, and I look forward to each new posting.... I enjoy the information CEH gleans from current events in science and hope you keep the service going. (a mechanical engineer in Utah) It took six years of constant study of evolution to overcome the indoctrination found in public schools of my youth. I now rely on your site; it helps me to see the work of God where I could not see it before and to find miracles where there was only mystery. Your site is a daily devotional that I go to once a day and recommend to everyone. I am still susceptible to the wiles of fake science and I need the fellowship of your site; such information is rarely found in a church. Now my eyes see the stars God made and the life He designed and I feel the rumblings of joy as promised. When I feel down or worried my solution is to praise God the Creator Of All That Is, and my concerns drain away while peace and joy fill the void. This is something I could not do when I did not know (know: a clear and accurate perception of truth) God as Creator. I could go on and on about the difference knowing our Creator has made, but I believe you understand. I tell everyone that gives me an opening about your site. God is working through you. Please dont stop telling us how to see the lies or leading us in celebrating the truth. Thank you. Thank you. Thank you. 
(a renowned artist in Wyoming) I discovered your site a few months ago and it has become essential reading via RSS to (a cartographer and GIS analyst in New Zealand) I love your site, and frequently visit to read both explanations of news reports, and your humor about Bonny Saint Charlie. (a nuclear safety engineer in Washington) Your site is wonderful. (a senior staff scientist, retired, from Arizona) Ive told many people about your site. Its a tremendous service to science news junkies not to mention students of both Christianity and (a meteorology research scientist in Alabama) ...let me thank you for your Creation-Evolution Headlines. Ive been an avid reader of it since I first discovered your website about five years ago. May I also express my admiration for the speed with which your articles appearoften within 24 hours of a particular news announcement or journal article being published. (a plant physiologist and prominent creation writer in Australia) How do you guys do it--reviewing so much relevant material every day and writing incisive, (a retired high school biology teacher in New Jersey) Your site is one of the best out there! I really love reading your articles on creation evolution headlines and visit this section almost daily. (a webmaster in the Netherlands) Keep it up! Ive been hitting your site daily (or more...). I sure hope you get a mountain of encouraging email, you deserve it. (a small business owner in Oregon) Great work! May your tribe increase!!! (a former Marxist, now ID speaker in Brazil) You are the best. Thank you.... The work you do is very important. Please dont ever give up. God bless the whole team. (an engineer and computer consultant in Virginia) I really appreciate your work in this topic, so you should never stop doing what you do, cause you have a lot of readers out there, even in small countries in Europe, like Slovenia is... I use crev.info for all my signatures on Internet forums etc., it really is fantastic site, the best site! You see, we(your pleased readers) exist all over the world, so you must be doing great work! Well i hope you have understand my bad english. (a biology student in Slovenia) Thanks for your time, effort, expertise, and humor. As a public school biology teacher I peruse your site constantly for new information that will challenge evolutionary belief and share much of what I learn with my students. Your site is pounding a huge dent in evolutions supposed solid exterior. Keep it up. (a biology teacher in the eastern USA) Several years ago, I became aware of your Creation-Evolution Headlines web site. For several years now, it has been one of my favorite internet sites. I many times check your website first, before going on to check the secular news and other creation web sites. I continue to be impressed with your writing and research skills, your humor, and your technical and scientific knowledge and understanding. Your ability to cut through the inconsequentials and zero in on the principle issues is one of the characteristics that is a valuable asset.... I commend you for the completeness and thoroughness with which you provide coverage of the issues. You obviously spend a great deal of time on this work. It is apparent in ever so many ways. Also, your background topics of logic and propaganda techniques have been useful as classroom aides, helping others to learn to use their baloney detectors. Through the years, I have directed many to your site. 
For their sake and mine, I hope you will be able to continue providing this very important, very much needed, educational, humorous, thought provoking work. (an engineer in Missouri) I am so glad I found your site. I love reading short blurbs about recent discoveries, etc, and your commentary often highlights that the discovery can be interpreted in two differing ways, and usually with the pro-God/Design viewpoint making more sense. Its such a refreshing difference from the usual media spin. Often youll have a story up along with comment before the masses even know about the story yet. (a system administrator in Texas, who calls CEH the UnSpin Zone) You are indeed the Rush Limbaugh Truth Detector of science falsely so-called. Keep up the excellent work. (a safety director in Michigan) I know of no better way to stay informed with current scientific research than to read your site everyday, which in turn has helped me understand many of the concepts not in my area (particle physics) and which I hear about in school or in the media. Also, I just love the commentaries and the baloney detecting!! (a grad student in particle physics) I thank you for your ministry. May God bless you! You are doing great job effectively exposing pagan lie of evolution. Among all known to me creation ministries [well-known organizations listed] Creationsafaris stands unique thanks to qualitative survey and analysis of scientific publications and news. I became permanent reader ever since discovered your site half a year ago. Moreover your ministry is effective tool for intensive and deep education for cristians. (a webmaster in Ukraine, seeking permission to translate CEH articles into Russian to reach countries across the former Soviet Union) The scholarship of the editors is unquestionable. The objectivity of the editors is admirable in face of all the unfounded claims of evolutionists and Darwinists. The amount of new data available each day on the site is phenomenal (I cant wait to see the next new article each time I log on). Most importantly, the TRUTH is always and forever the primary goal of the people who run this website. Thank you so very much for 6 years of consistent dedication to the TRUTH. (11 months earlier): I just completed reading each entry from each month. I found your site about 6 months ago and as soon as I understood the format, I just started at the very first entry and started reading.... Your work has blessed my education and determination to bold in showing the unscientific nature of evolution in general and Darwinism in particular. (a medical doctor in Oklahoma) Thanks for the showing courage in marching against a popular unproven unscientific belief system. I dont think I missed 1 article in the past couple of years. (a manufacturing engineer in Australia) I do not know and cannot imagine how much time you must spend to read, research and compile your analysis of current findings in almost every area of science. But I do know I thank you for it. (a practice administrator in Maryland) Since finding your insightful comments some 18 or more months ago, Ive visited your site daily.... You so very adeptly and adroitly undress the emperor daily; so much so one wonders if he might not soon catch cold and fall ill off his throne! .... To you I wish much continued success and many more years of fun and frolicking undoing the damage taxpayers are forced to fund through unending story spinning by ideologically biased scientists. (an investment advisor in Missouri) I really like your articles. 
You do a fabulous job of cutting through the double-talk and exposing the real issues. Thank you for your hard work and diligence. (an engineer in Texas) I love your site. Found it about maybe two years ago and I read it every day. I love the closing comments in green. You have a real knack for exposing the toothless claims of the evolutionists. Your comments are very helpful for many us who dont know enough to respond to their claims. Thanks for your good work and keep it (a missionary in Japan) I just thought Id write and tell you how much I appreciate your headline list and commentary. Its inspired a lot of thought and consideration. I check your listings every day! (a computer programmer in Tulsa) Just wanted to thank you for your creation/evolution news ... an outstanding educational (director of a consulting company in Australia) Your insights ... been some of the most helpful not surprising considering the caliber of your most-excellent website! Im serious, ..., your website has to be the best creation website out there.... (a biologist and science writer in southern California) I first learned of your web site on March 29.... Your site has far exceeded my expectations and is consulted daily for the latest. I join with other readers in praising your time and energy spent to educate, illuminate, expose errors.... The links are a great help in understanding the news items. The archival structure is marvelous.... Your site brings back dignity to Science conducted as it should be. Best regards for your continuing work and influence. Lives are being changed and sustained every day. (a manufacturing quality engineer in Mississippi) I wrote you over three years ago letting you know how much I enjoyed your Creation-Evolution headlines, as well as your Creation Safaris site. I stated then that I read your headlines and commentary every day, and that is still true! My interest in many sites has come and gone over the years, but your site is still at the top of my list! I am so thankful that you take the time to read and analyze some of the scientific journals out there; which I dont have the time to read myself. Your commentary is very, very much appreciated. (a hike leader and nature-lover in Ontario, Canada) ...just wanted to say how much I admire your site and your writing. Youre very insightful and have quite a broad range of knowledge. Anyway, just wanted to say that I am a big fan! (a PhD biochemist at a major university) I love your site and syndicate your content on my church website.... The stories you highlight show the irrelevancy of evolutionary theory and that evolutionists have perpetual foot and mouth disease; doing a great job of discrediting themselves. Keep up the good work. (a database administrator and CEH junkie in California) I cant tell you how much I enjoy your article reviews on your websiteits a HUGE asset! (a lawyer in Washington) Really, really, really a fantastic site. Your wit makes a razor appear dull!... A million thanks for your site. (a small business owner in Oregon and father of children who love your site too.) Thank God for ... Creation Evolution Headlines. This site is right at the cutting edge in the debate over bio-origins and is crucial in working to undermine the deceived mindset of naturalism. The arguments presented are unassailable (all articles having first been thoroughly baloney detected) and the narrative always lands just on the right side of the laymans comprehension limits... 
Very highly recommended to all, especially, of course, to those who have never thought to question the fact of evolution. (a business owner in Somerset, UK) I continue to note the difference between the dismal derogations of the darwinite devotees, opposed to the openness and humor of rigorous, follow-the-evidence scientists on the Truth side. Keep up the great work. (a math/science teacher with M.A. in anthropology) Your material is clearly among the best I have ever read on evolution problems! I hope a book is in the works! (a biology prof in Ohio) I have enjoyed reading the sardonic apologetics on the Creation/Evolution Headlines section of your web site. Keep up the good work! (an IT business owner in California) Your commentaries ... are always delightful. (president of a Canadian creation group) Im pleased to see... your amazing work on the Headlines. (secretary of a creation society in the UK) We appreciate all you do at crev.info. (a publisher of creation and ID materials) I was grateful for creationsafaris.com for help with baloney detecting. I had read about the fish-o-pod and wanted to see what you thought. Your comments were helpful and encouraged me that my own baloney detecting skill are improving. I also enjoyed reading your reaction to the article on evolution teachers doing battle with students.... I will ask my girls to read your comments on the proper way to question their teachers. (a home-schooling mom) I just want to express how dissapointed [sic] I am in your website. Instead of being objective, the website is entirely one sided, favoring creationism over evolution, as if the two are contradictory.... Did man and simien [sic] evovlve [sic] at random from a common ancestor? Or did God guide this evolution? I dont know. But all things, including the laws of nature, originate from God.... To deny evolution is to deny Gods creation. To embrace evolution is to not only embrace his creation, but to better appreciate it. (a student in Saginaw, Michigan) I immensely enjoy reading the Creation-Evolution Headlines. The way you use words exposes the bankruptcy of the evolutionary worldview. (a student at Northern Michigan U) ...standing O for crev.info. (a database programmer in California) Just wanted to say that I am thrilled to have found your website! Although I regularly visit numerous creation/evolution sites, Ive found that many of them do not stay current with relative information. I love the almost daily updates to your headlines section. Ive since made it my browser home page, and have recommended it to several of my friends. Absolutely great site! (a network engineer in Florida) After I heard about Creation-Evolution Headlines, it soon became my favorite Evolution resource site on the web. I visit several times a day cause I cant wait for the next update. Thats pathetic, I know ... but not nearly as pathetic as Evolution, something you make completely obvious with your snappy, intelligent commentary on scientific current events. It should be a textbook for science classrooms around the country. You rock! (an editor in Tennessee) One of the highlights of my day is checking your latest CreationSafaris creation-evolution news listing! Thanks so much for your great work -- and your wonderful humor. (a pastor in Virginia) Thanks!!! Your material is absolutely awesome. Ill be using it in our Adult Sunday School class. (a pastor in Wisconsin) Love your site & read it daily. (a family physician in Texas) I set it [crev.info] up as my homepage. 
That way I am less likely to miss some really interesting events.... I really appreciate what you are doing with Creation-Evolution Headlines. I tell everybody I think might be interested, to check it out. (a systems analyst in Tennessee) I would like to thank you for your service from which I stand to benefit a lot. (a Swiss astrophysicist) I enjoy very much reading your materials. (a law professor in Portugal) Thanks for your time and thanks for all the work on the site. It has been a valuable resource for me. (a medical student in Kansas) Creation-Evolution Headlines is a terrific resource. The articles are always current and the commentary is right on the mark. (a molecular biologist in Illinois) Creation-Evolution Headlines is my favorite anti-evolution website. With almost giddy anticipation, I check it several times a week for the latest postings. May God bless you and empower you to keep up this FANTASTIC work! (a financial analyst in New York) I read your pages on a daily basis and I would like to let you know that your hard work has been a great help in increasing my knowledge and growing in my faith. Besides the huge variety of scientific disciplines covered, I also enormously enjoy your great sense of humor and your creativity in wording your thoughts, which make reading your website even more enjoyable. (a software developer in Illinois) THANK YOU for all the work you do to make this wonderful resource! After being regular readers for a long time, this year weve incorporated your site into our home education for our four teenagers. The Baloney Detector is part of their Logic and Reasoning Skills course, and the Daily Headlines and Scientists of the Month features are a big part of our curriculum for an elective called Science Discovery Past and Present. What a wonderful goldmine for equipping future leaders and researchers with the tools of (a home school teacher in California) What can I say I LOVE YOU! I READ YOU ALMOST EVERY DAY I copy and send out to various folks. I love your sense of humor, including your politics and of course your faith. I appreciate and use your knowledge What can I say THANK YOU THANK YOU THANK YOU SO MUCH. (a biology major, former evolutionist, now father of college students) I came across your site while browsing through creation & science links. I love the work you do! (an attorney in Florida) Love your commentary and up to date reporting. Best site for evolution/design info. (a graphic designer in Oregon) I am an ardent reader of your site. I applaud your efforts and pass on your website to all I talk to. I have recently given your web site info to all my grandchildren to have them present it to their science teachers.... Your Supporter and fan..God bless you all... (a health services manager in Florida) Why your readership keeps doubling: I came across your website at a time when I was just getting to know what creation science is all about. A friend of mine was telling me about what he had been finding out. I was highly skeptical and sought to read as many pro/con articles as I could find and vowed to be open-minded toward his seemingly crazy claims. At first I had no idea of the magnitude of research and information thats been going on. Now, Im simply overwhelmed by the sophistication and availability of scientific research and information on what I now know to be the truth about creation. Your website was one of dozens that I found in my search. Now, there are only a handful of sites I check every day. Yours is at the top of my list... 
I find your news page to be the most insightful and well-written of the creation news blogs out there. The quick wit, baloney detector, in-depth scientific knowledge you bring to the table and the superb writing style on your site have kept me interested in the day-to-day happenings of what is clearly a growing movement. Your site ... has given me a place to point them toward to find out more and realize that they've been missing a huge volume of information when it comes to the creation-evolution issue. Another thing I really like about this site is the links to articles in science journals and news references. That helps me get a better picture of what you're talking about.... Keep it up and I promise to send as many people as will listen to this website and others. (an Air Force Academy graduate stationed in New Mexico) Like your site, especially the style of your comments.... Keep up the good work. (a retired engineer and amateur astronomer in Maryland)

Featured Creation Scientist for February: Edward Williams Morley (1838-1923). This is the Morley of the famous Michelson-Morley experiment, which failed to find an expected luminiferous ether that might serve as a medium for light waves. The result was vital to Einstein's theory of relativity, in which Einstein treated the constancy of the speed of light as a fundamental principle of the universe in the development of his revolutionary ideas. Here is what Dr. Don DeYoung wrote about Morley in his new book, Pioneers of Intelligent Design (BMH Books, 2006), p. 68: "His Congregational minister father home-schooled Morley. He later received training at Andover Theological Seminary in Massachusetts, and pastored a church in Ohio. Morley also had an unusual ability to make precise experimental measurements. He shared this talent with a generation of engineering students at Case Western Reserve Academy in Cleveland, Ohio. Morley's Christian testimony is shown in the creed that he wrote for his students at Case Western: 'I believe Jesus Christ shall come with the clouds of heaven to judge the world in righteousness and that those who have believed in Him shall inherit eternal life through the Grace of God.'" If you are enjoying this series, you can learn more about great Christians in science by reading our online book-in-progress, The World's Greatest Creation Scientists from Y1K to Y2K. Copies are also available from our online store.

A Concise Guide:
You can observe a lot by just watching.
First Law of Scientific Progress: The advance of science can be measured by the rate at which exceptions to previously held laws accumulate. 1. Exceptions always outnumber rules. 2. There are always exceptions to established exceptions. 3. By the time one masters the exceptions, no one recalls the rules to which they apply.
Nature will tell you a direct lie if she can. So will Darwinists.
Science is true. Don't be misled by facts.
Finagle's 2nd Law: No matter what the anticipated result, there will always be someone eager to (a) misinterpret it, (b) fake it, or (c) believe it happened according to his own pet theory.
3. Draw your curves, then plot your data. 4. In case of doubt, make it sound convincing. 6. Do not believe in miracles; rely on them.
Murphy's Law of Research: Enough research will tend to support your theory.
If the facts do not conform to the theory, they must be disposed of. 1. The bigger the theory, the better. 2. The experiments may be considered a success if no more than 50% of the observed measurements must be discarded to obtain a correspondence with the theory.
The number of different hypotheses erected to explain a given biological phenomenon is inversely proportional to the available knowledge.
All great discoveries are made by mistake. The greater the funding, the longer it takes to make the mistake.
The solution to a problem changes the nature of the problem.
Peter's Law of Evolution: Competence always contains the seed of incompetence.
An expert is a person who avoids the small errors while sweeping on to the grand fallacy.
Repetition does not establish validity.
What really matters is the name you succeed in imposing on the facts, not the facts themselves.
For every action, there is an equal and opposite criticism.
Thumb's Second Postulate: An easily-understood, workable falsehood is more useful than a complex, incomprehensible truth.
There is nothing so small that it can't be blown out of proportion.
Hawkins' Theory of Progress: Progress does not consist in replacing a theory that is wrong with one that is right. It consists in replacing a theory that is wrong with one that is more subtly wrong.
The best theory is not ipso facto a good theory.
Error is often more earnest than truth.

Advice from Paul: "Guard what was committed to your trust, avoiding the profane and idle babblings and contradictions of what is falsely called knowledge; by professing it some have strayed concerning the faith." (I Timothy 6:20-21)

Song of the True Scientist: "O Lord, how manifold are Your works! In wisdom You have made them all. The earth is full of Your possessions . . . . May the glory of the Lord endure forever. May the Lord rejoice in His works . . . . I will sing to the Lord as long as I live; I will sing praise to my God while I have my being. May my meditation be sweet to Him; I will be glad in the Lord. May sinners be consumed from the earth, and the wicked be no more. Bless the Lord, O my soul! Praise the Lord!" (from Psalm 104)

Through the creatures Thou hast made
Show the brightness of Thy glory.
Be eternal truth displayed
In their substance transitory.
Till green earth and ocean hoary,
Massy rock and tender blade,
Tell the same unending story:
We are truth in form arrayed.
Teach me thus Thy works to read,
That my faith, new strength accruing,
May from world to world proceed,
Wisdom's fruitful search pursuing;
Till, Thy truth my mind imbuing,
I proclaim the eternal Creed,
Oft the glorious theme renewing,
God our Lord is God indeed.
(James Clerk Maxwell, one of the greatest physicists of all time, and a creationist)

I really enjoy your website, the first I visit every day. I have a quote by Mark Twain which seems to me to describe the Darwinian philosophy of science perfectly: "There is something fascinating about science. One gets such wholesale returns of conjecture out of such a trifling investment of fact." Working as I do in the environmental field (I am a geologist doing groundwater contamination project management for a state agency), I see that kind of science a lot. Keep up the good work!! (a hydrogeologist in Alabama) I visit your website regularly and I commend you on your work. I applaud your effort to pull actual science from the mass of propaganda for Evolution you report on (at least on those rare occasions when there actually is any science in the propaganda). I also must say that I'm amazed at your capacity to continually plow through the propaganda day after day and provide cutting and amusing commentary....
I can only hope that youthful surfers will stop by your website for a fair and interesting critique of the dogma they have to imbibe in school. (a technical writer living in Jerusalem) I have enjoyed your site for several years now. Thanks for all the hard work you obviously put into this. I appreciate your insights, especially the biological oriented ones in which I'm far behind the nomenclature curve. It would be impossible for me to understand what's going on without some interpretation. Thanks again. (a manufacturing engineer in Vermont) Love your site and your enormous amount of intellectualism and candor regarding the evolution debate. Yours is one site I look forward to on a daily basis. Thank you for being a voice for the rest of us. (a graphic designer in Wisconsin) For sound, thoughtful commentary on creation-evolution hot topics go to (Access Research Network Your website is simply the best (and Id dare say one of the most important) web sites on the entire WWW. (an IT specialist at an Alabama university) Ive been reading the articles on this website for over a year, and Im guilty of not showing any appreciation. You provide a great service. Its one of the most informative and up-to-date resources on creation available anywhere. Thank you so much. Please keep up the great work. (a senior research scientist in Georgia) Just a note to thank you for your site. I am a regular visitor and I use your site to rebut evolutionary "just so" stories often seen in our local media. I know what you do is a lot of work but you make a difference and are appreciated. (a veterinarian in Minnesota) This is one of the best sites I have ever visited. Thanks. I have passed it on to several others... I am a retired grandmother. I have been studying the creation/evolution question for about 50 yrs.... Thanks for the info and enjoyable site. (a retiree in Florida) It is refreshing to know that there are valuable resources such as Creation-Evolution Headlines that can keep us updated on the latest scientific news that affect our view of the world, and more importantly to help us decipher through the rhetoric so carelessly disseminated by evolutionary scientists. I find it Intellectually Satisfying to know that I dont have to park my brain at the door to be a believer or at the very least, to not believe in Macroevolution. (a loan specialist in California) I have greatly benefitted from your efforts. I very much look forward to your latest posts. (an attorney in California) I must say your website provides an invaluable arsenal in this war for souls that is being fought. Your commentaries move me to laughter or sadness. I have been viewing your information for about 6 months and find it one of the best on the web. It is certainly effective against the nonsense published on Talkorigins.org. It great to see work that glorifies God and His creation. (a commercial manager in Australia) Visiting daily your site and really do love it. (a retiree from Finland who studied math and computer science) I am agnostic but I can never deny that organic life (except human) is doing a wonderful job at functioning at optimum capacity. Thank you for this ... site! (an evolutionary theorist from Australia) During the year I have looked at your site, I have gone through your archives and found them to be very helpful and informative. 
I am so impressed that I forward link to members of my congregation who I believe are interested in a higher level discussion of creationist issues than they will find at [a leading origins website]. (a minister in Virginia) I attended a public school in KS where evolution was taught. I have rejected evolution but have not always known the answers to some of the questions.... A friend told me about your site and I like it, I have it on my favorites, and I check it every day. (an auto technician in Missouri) Thanks for a great site! It has brilliant insights into the world of science and of the evolutionary dogma. One of the best sites I know of on (a programmer in Iceland) The site you run creation-evolution headlines is extremely useful to me. I get so tired of what passes for science Darwinism in particular and I find your site a refreshing antidote to the usual junk.... it is clear that your thinking and logic and willingness to look at the evidence for what the evidence says is much greater than what I read in what are now called science journals. Please keep up the good work. I appreciate what you are doing more than I can communicate in this e-mail. (a teacher in California) Im a small town newspaper editor in southwest Wyoming. Were pretty isolated, and finding your site was a great as finding a gold mine. I read it daily, and if theres nothing new, I re-read everything. I follow links. I read the Scientist of the Month. Its the best site Ive run across. Our local school board is all Darwinist and determined to remain that way. (a newspaper editor in Wyoming) have been reading your page for about 2 years or so.... I read it every day. I ...am well educated, with a BA in Applied Physics from Harvard and an MBA in Finance from Wharton. (a reader in Delaware) I came across your website by accident about 4 months ago and look at it every day.... About 8 months ago I was reading a letter to the editor of the Seattle Times that was written by a staunch anti-Creationist and it sparked my interest enough to research the topic and within a week I was yelling, my whole lifes education has been a lie!!! Ive put more study into Biblical Creation in the last 8 months than any other topic in my life. Past that, through resources like your website...Ive been able to convince my father (professional mathematician and amateur geologist), my best friend (mechanical engineer and fellow USAF Academy Grad/Creation Science nutcase), my pastor (he was the hardest to crack), and many others to realize the Truth of Creation.... Resources like your website help the rest of us at the grassroots level drum up interest in the subject. And regardless of what the major media says: Creationism is spreading like wildfire, so please keep your website going to help fan the flames. (an Air Force Academy graduate and officer) I love your site! I **really** enjoy reading it for several specific reasons: 1.It uses the latest (as in this month!) research as a launch pad for opinion; for years I have searched for this from a creation science viewpoint, and now, Ive found it. 2. You have balanced fun with this topic. This is hugely valuable! Smug Christianity is ugly, and I dont perceive that attitude in your comments. 3. I enjoy the expansive breadth of scientific news that you cover. 4. I am not a trained scientist but I know evolutionary bologna/(boloney) when I see it; you help me to see it. I really appreciate this. (a computer technology salesman in Virginia) I love your site. 
Thats why I was more than happy to mention it in the local paper.... I mentioned your site as the place where..... Every Darwin-cheering news article is reviewed on that site from an ID perspective. Then the huge holes of the evolution theory are exposed, and the bad science is shredded to bits, using real (a project manager in New Jersey) Ive been reading your site almost daily for about three years. I have never been more convinced of the truthfulness of Scripture and the faithfulness of God. (a system administrator and homeschooling father in Colorado) I use the internet a lot to catch up on news back home and also to read up on the creation-evolution controversy, one of my favourite topics. Your site is always my first port of call for the latest news and views and I really appreciate the work you put into keeping it up to date and all the helpful links you provide. You are a beacon of light for anyone who wants to hear frank, honest conclusions instead of the usual diluted garbage we are spoon-fed by the media.... Keep up the good work and know that youre changing lives. (a teacher in Spain) I am grateful to you for your site and look forward to reading new stories.... I particularly value it for being up to date with what is going on. (from the Isle of Wight, UK) [Creation-Evolution Headlines] is the place to go for late-breaking news [on origins]; it has the most information and the quickest turnaround. Its incredible I dont know how you do it. I cant believe all the articles you find. God bless you! (a radio producer in Riverside, CA) Just thought I let you know how much I enjoy reading your Headlines section. I really appreciate how you are keeping your ear to the ground in so many different areas. It seems that there is almost no scientific discipline that has been unaffected by Darwins Folly. (a programmer in aerospace from Gardena, CA) I enjoy reading the comments on news articles on your site very much. It is incredible how much refuse is being published in several scientific fields regarding evolution. It is good to notice that the efforts of true scientists have an increasing influence at schools, but also in the media.... May God bless your efforts and open the eyes of the blinded evolutionists and the general public that are being deceived by pseudo-scientists.... I enjoy the site very much and I highly respect the work you and the team are doing to spread the truth. (an ebusiness manager in the Netherlands) I discovered your site through a link at certain website... It has greatly helped me being updated with the latest development in science and with critical comments from you. I also love your baloney detector and in fact have translated some part of the baloney detector into our language (Indonesian). I plan to translate them all for my friends so as to empower them. (a staff member of a bilateral agency in West Timor, Indonesia) ...absolutely brilliant and inspiring. (a documentary film producer, remarking on the I found your site several months ago and within weeks had gone through your entire archives.... I check in several times a day for further information and am always excited to read the new articles. Your insight into the difference between what is actually known versus what is reported has given me the confidence to stand up for what I believe. I always felt there was more to the story, and your articles have given me the tools to read through the hype.... You are an invaluable help and I commend your efforts. Keep up the great work. 
(a sound technician in Alberta) I discovered your site (through a link from a blog) a few weeks ago and I cant stop reading it.... I also enjoy your insightful and humorous commentary at the end of each story. If the evolutionists blindness wasnt so sad, I would laugh harder. I have a masters degree in mechanical engineering from a leading University. When I read the descriptions, see the pictures, and watch the movies of the inner workings of the cell, Im absolutely amazed.... Thanks for bringing these amazing stories daily. Keep up the good work. (an engineer in Virginia) I stumbled across your site several months ago and have been reading it practically daily. I enjoy the inter-links to previous material as well as the links to the quoted research. Ive been in head-to-head debate with a materialist for over a year now. Evolution is just one of those debates. Your site is among others that have been a real help in expanding my understanding. (a software engineer in Pennsylvania) I was in the April 28, 2005 issue of Nature [see 04/27/2005 story] regarding the rise of intelligent design in the universities. It was through your website that I began my journey out of the crisis of faith which was mentioned in that article. It was an honor to see you all highlighting the article in Nature. Thank you for all you have done! (Salvador Cordova, George Mason University) I shudder to think of the many ways in which you mislead readers, encouraging them to build a faith based on misunderstanding and ignorance. Why dont you allow people to have a faith that is grounded in a fuller understanding of the world?... Your website is a sham. (a co-author of the paper reviewed in the 12/03/2003 entry who did not appreciate the unflattering commentary. This led to a cordial interchange, but he could not divorce his reasoning from the science vs. faith dichotomy, and resulted in an impasse over definitions but, at least, a more mutually respectful dialogue. He never did explain how his paper supported Darwinian macroevolution. He just claimed evolution is a fact.) I absolutely love creation-evolution news. As a Finnish university student very interested in science, I frequent your site to find out about all the new science stuff thats been happening you have such a knack for finding all this information! I have been able to stump evolutionists with knowledge gleaned from your site many times. (a student in Finland) I love your site and read it almost every day. I use it for my science class and 5th grade Sunday School class. I also challenge Middle Schoolers and High Schoolers to get on the site to check out articles against the baloney they are taught in school. (a teacher in Los Gatos, CA) I have spent quite a few hours at Creation Evolution Headlines in the past week or so going over every article in the archives. I thank you for such an informative and enjoyable site. I will be visiting often and will share this link with others. [Later] I am back to May 2004 in the archives. I figured I should be farther back, but there is a ton of information to digest. (a computer game designer in Colorado) The IDEA Center also highly recommends visiting Creation-Evolution Headlines... the most expansive and clearly written origins news website on the internet! (endorsement on Intelligent Design and Evolution Hey Friends, Check out this site: Creation-Evolution Headlines. This is a fantastic resource for the whole family.... 
a fantastic reference library with summaries, commentaries and great links that are added to dailyarchives go back five years. (a reader who found us in Georgia) I just wanted to drop you a note telling you that at www.BornAgainRadio.com, Ive added a link to your excellent Creation-Evolution news site. (a radio announcer) I cannot understand why anyone would invest so much time and effort to a website of sophistry and casuistry. Why twist Christian apology into an illogic pretzel to placate your intellect? Isnt it easier to admit that your faith has no basis -- hence, faith. It would be extricate [sic] yourself from intellectual dishonesty -- and from bearing false witness. Sincerely, Rev. [name withheld] (an ex-Catholic, apostate Christian Natural/Scientific pantheist) Just wanted to let you folks know that we are consistent readers and truly appreciate the job you are doing. God bless you all this coming New Year. (from two prominent creation researchers/writers in Oregon) Thanks so much for your site! It is brain candy! (a reader in North Carolina) I Love your site probably a little too much. I enjoy the commentary and the links to the original articles. (a civil engineer in New York) Ive had your Creation/Evolution Headlines site on my favourites list for 18 months now, and I can truthfully say that its one of the best on the Internet, and I check in several times a week. The constant stream of new information on such a variety of science issues should impress anyone, but the rigorous and humourous way that every thought is taken captive is inspiring. Im pleased that some Christians, and indeed, some webmasters, are devoting themselves to producing real content that leaves the reader in a better state than when they found him. (a community safety manager in England) I really appreciate the effort that you are making to provide the public with information about the problems with the General Theory of Evolution. It gives me ammunition when I discuss evolution in my classroom. I am tired of the evolutionary dogma. I wish that more people would stand up against such ridiculous beliefs. (a science teacher in Alabama) If you choose to hold an opinion that flies in the face of every piece of evidence collected so far, you cannot be suprised [sic] when people dismiss your views. (a former Christian software distributor, location not disclosed) ...the Creation Headlines is the best. Visiting your site... is a standard part of my startup procedures every morning. (a retired Air Force Chaplain) I LOVE your site and respect the time and work you put into it. I read the latest just about EVERY night before bed and send selection[s] out to others and tell others about it. I thank you very much and keep up the good work (and (a USF grad in biology) Answering your invitation for thoughts on your site is not difficult because of the excellent commentary I find. Because of the breadth and depth of erudition apparent in the commentaries, I hope Im not being presumptuous in suspecting the existence of contributions from a Truth Underground comprised of dissident college faculty, teachers, scientists, and engineers. If thats not the case, then it is surely a potential only waiting to be realized. Regardless, I remain in awe of the care taken in decomposing the evolutionary cant that bombards us from the specialist as well as popular press. (a mathematician/physicist in Arizona) Im from Quebec, Canada. I have studied in pure sciences and after in actuarial mathematics. 
Im visiting this site 3-4 times in a week. Im learning a lot and this site gives me the opportunity to realize that this is a good time to be a creationist! (a French Canadian reader) I LOVE your Creation Safari site, and the Baloney Detector material. (a reader in the Air Force) You have a unique position in the Origins community. Congratulations on the best current affairs news source on the origins net. You may be able to write fast but your logic is fun to work through. (a pediatrician in California) Visit your site almost daily and find it very informative, educational and inspiring. (a reader in western Canada) I wish to thank you for the information you extend every day on your site. It is truly a blessing! (a reader in North Carolina) I really appreciate your efforts in posting to this website. I find it an incredibly useful way to keep up with recent research (I also check science news daily) and also to research particular topics. (an IT consultant from Brisbane, Australia) I would just like to say very good job with the work done here, very comprehensive. I check your site every day. Its great to see real science directly on the front lines, toe to toe with the pseudoscience that's mindlessly spewed from the prestigious (a biology student in Illinois) Ive been checking in for a long time but thought Id leave you a note, this time. Your writing on these complex topics is insightful, informative with just the right amount of humor. I appreciate the hard work that goes into monitoring the research from so many sources and then writing intelligently about them. (an investment banker in California) Keep up the great work. You are giving a whole army of Christians plenty of ammunition to come out of the closet (everyone else has). Most of us are not scientists, but most of the people we talk to are not scientists either, just ordinary people who have been fed baloney for years and years. (a reader in Arizona) Keep up the outstanding work! You guys really ARE making a difference! (a reader in Texas) I wholeheartedly agree with you when you say that science is not hostile towards religion. It is the dogmatically religious that are unwaveringly hostile towards any kind of science which threatens their dearly-held precepts. Science (real, open-minded science) is not interested in theological navel-gazing. Note: Please supply your name and location when writing in. Anonymous attacks only make one look foolish and cowardly, and will not normally be printed. This one was shown to display a bad example. I appreciate reading your site every day. It is a great way to keep up on not just the new research being done, but to also keep abreast of the evolving debate about evolution (Pun intended).... I find it an incredibly useful way to keep up with recent research (I also check science news daily) and also to research particular topics. (an IT consultant in Brisbane, Australia) I love your website. (a student at a state university who used CEH when writing for the campus newsletter) ....when you claim great uncertainty for issues that are fairly well resolved you damage your already questionable credibility. Im sure your audience loves your ranting, but if you know as much about biochemistry, geology, astronomy, and the other fields you skewer, as you do about ornithology, you are spreading heat, not (a professor of ornithology at a state university, responding to the 09/10/2002 headline) I wanted to let you know I appreciate your headline news style of exposing the follies of evolutionism.... 
Your style gives us constant, up-to-date reminders that over and over again, the Bible creation account is vindicated and the evolutionary fables are refuted. (a reader, location unknown) You have a knack of extracting the gist of a technical paper, and digesting it into understandable terms. (a nuclear physicist from Lawrence Livermore Labs who worked on the Manhattan Project) After spending MORE time than I really had available going thru your MANY references I want to let you know how much I appreciate the effort you have put forth. The information is properly documented, and coming from recognized scientific sources is doubly valuable. Your explanatory comments and sidebar quotations also add GREATLY to your overall effectiveness as they 1) provide an immediate interpretive starting point and 2) maintaining the readers (a reader in Michigan) I am a huge fan of the site, and check daily for updates. (reader location and occupation unknown) I just wanted to take a minute to personally thank-you and let you know that you guys are providing an invaluable service! We check your Web site weekly (if not daily) to make sure we have the latest information in the creation/evolution controversy. Please know that your diligence and perseverance to teach the Truth have not gone unnoticed. Keep up the great work! (a PhD scientist involved in origins research) You've got a very useful and informative Web site going. The many readers who visit your site regularly realize that it requires considerable effort to maintain the quality level and to keep the reviews current.... I hope you can continue your excellent Web pages. I have recommended them highly to others. (a reader, location and occupation unknown) As an apprentice apologist, I can always find an article that will spark a spirited debate. Keep em coming! The Truth will prevail. (a reader, location and occupation unknown) Thanks for your web page and work. I try to drop by at least once a week and read what you have. Im a Christian that is interested in science (Im a mechanical engineer) and I find you topics interesting and helpful. I enjoy your lessons and insights on Baloney Detection. (a year later): I read your site 2 to 3 times a week; which Ive probably done for a couple of years. I enjoy it for the interesting content, the logical arguments, what I can learn about biology/science, and your pointed commentary. (a production designer in Kentucky) I look up CREV headlines every day. It is a wonderful source of information and encouragement to me.... Your gift of discerning the fallacies in evolutionists interpretation of scientific evidence is very helpful and educational for me. Please keep it up. Your website is the best I know of. (a Presbyterian minister in New South Wales, Australia) Ive written to you before, but just wanted to say again how much I appreciate your site and all the work you put into it. I check it almost every day and often share the contents (and web address) with lists on which I participate. I dont know how you do all that you do, but I am grateful for your energy and knowledge. (a prominent creationist author) I am new to your site, but I love it! Thanks for updating it with such cool information. (a home schooler) I love your site.... Visit every day hoping for another of your brilliant demolitions of the foolish just-so stories of those who think themselves wise. 
(a reader from Southern California) I visit your site daily for the latest news from science journals and other media, and enjoy your commentary immensely. I consider your web site to be the most valuable, timely and relevant creation-oriented site on the internet. (a reader from Ontario, Canada) Keep up the good work! I thoroughly enjoy your site. (a reader in Texas) Thanks for keeping this fantastic web site going. It is very informative and up-to-date with current news including incisive (a reader in North Carolina) Great site! For all the Baloney Detector is impressive and a great tool in debunking wishful thinking theories. (a reader in the Netherlands) Just wanted to let you know, your work is having quite an impact. For example, major postings on your site are being circulated among the Intelligent Design members.... (a PhD organic chemist) opening a can of worms ... I love to click all the related links and read your comments and the links to other websites, but this usually makes me late for something else. But its ALWAYS well worth it!! (a leader of a creation group) I am a regular visitor to your website ... I am impressed by the range of scientific disciplines your articles address. I appreciate your insightful dissection of the often unwarranted conclusions evolutionists infer from the data... Being a medical doctor, I particularly relish the technical detail you frequently include in the discussion living systems and processes. Your website continually reinforces my conviction that if an unbiased observer seeks a reason for the existence of life then Intelligent Design will be the unavoidable (a medical doctor) A church member asked me what I thought was the best creation web site. I told him CreationSafaris.com. (a PhD geologist) I love your site... I check it every day for interesting information. It was hard at first to believe in Genesis fully, but now I feel more confident about the mistakes of humankind and that all their reasoning amounts to nothing in light of a living God. (a college grad) Thank you so much for the interesting science links and comments on your creation evolution headlines page ... it is very (a reader from Scottsdale, AZ) visit your site almost every day, and really enjoy it. Great job!!! (I also recommend it to many, many students.) (an educational consultant) I like what I seevery much. I really appreciate a decent, calm and scholarly approach to the whole issue... Thanks ... for this fabulous It is refreshing to read your comments. You have a knack to get to the heart of (a reader in the Air Force). Love your website. It has well thought out structure and will help many through these complex issues. I especially love the I believe this is one of the best sites on the Internet. I really like your side-bar of truisms. Yogi [Berra] is absolutely correct. If I were a man of wealth, I would support you financially. (a registered nurse in Alabama, who found us on TruthCast.com.) WOW. Unbelievable.... My question is, do you sleep? ... Im utterly impressed by your page which represents untold amounts of time and energy as well as your faith. (a mountain man in Alaska). Just wanted to say that I recently ran across your web site featuring science headlines and your commentary and find it to be A++++, superb, a 10, a homerun I run out of superlatives to describe it! ... You can be sure I will visit your site often daily when possible to gain the latest information to use in my speaking engagements. 
I'll also do my part to help publicize your site among college students. Keep up the good work. Your material is appreciated and used. (a college campus minister)
Mosaic (web browser)

[Infobox: NCSA Mosaic 3.0 for Windows. Initial release 0.5, January 23, 1993; final release 3.0, January 7, 1997; supported platforms included Classic Mac OS.]

NCSA Mosaic, or simply Mosaic, is a discontinued early web browser. It has been credited with popularizing the World Wide Web. It was also a client for earlier protocols such as File Transfer Protocol, Network News Transfer Protocol, and Gopher. The browser was named for its support of multiple internet protocols. Its intuitive interface, reliability, Windows port and simple installation all contributed to its popularity on the web, as well as on Microsoft operating systems. Mosaic was also the first browser to display images inline with text instead of in a separate window. While often described as the first graphical web browser, Mosaic was preceded by WorldWideWeb, the lesser-known Erwise, and ViolaWWW.

Mosaic was developed at the National Center for Supercomputing Applications (NCSA) at the University of Illinois Urbana-Champaign beginning in late 1992. NCSA released the browser in 1993 and officially discontinued development and support on January 7, 1997. However, it can still be downloaded from NCSA. Netscape Navigator was later developed by Netscape, which employed many of the original Mosaic authors; however, it intentionally shared no code with Mosaic. Netscape Navigator's code descendant is Mozilla Firefox. Starting in 1995, Mosaic lost much of its share to Netscape Navigator, and by 1997 it retained only a tiny fraction of users, by which time the project had been discontinued. Microsoft licensed Mosaic to create Internet Explorer in 1995.

David Thompson tested ViolaWWW and showed the application to Marc Andreessen. Andreessen and Eric Bina originally designed and programmed NCSA Mosaic for Unix's X Window System, calling it xmosaic. In December 1991, the Gore Bill, created and introduced by then-Senator and future Vice President Al Gore, was passed, which provided the funding for the Mosaic project. Development began in December 1992. Marc Andreessen announced the project on January 23, 1993. The first alpha release (numbered 0.1a) was published in June 1993, and the first beta release (numbered 0.6b) followed quickly thereafter in September 1993. Version 1.0 for Windows was released on November 11, 1993. NCSA Mosaic for Unix (X Window System) version 2.0 was released on November 10, 1993. A port of Mosaic to the Commodore Amiga was available by October 1993; ports to Windows and Macintosh had already been released in September. From 1994 to 1997, the National Science Foundation supported the further development of Mosaic.

Marc Andreessen, the leader of the team that developed Mosaic, left NCSA and, with James H. Clark, one of the founders of Silicon Graphics, Inc. (SGI), and four other former students and staff of the University of Illinois, started Mosaic Communications Corporation. Mosaic Communications eventually became Netscape Communications Corporation, producing Netscape Navigator. Spyglass, Inc. licensed the technology and trademarks from NCSA to produce its own web browser, but never used any of the NCSA Mosaic source code. Microsoft licensed Spyglass Mosaic in 1995 for US$2 million, modified it, and renamed it Internet Explorer. After a later auditing dispute, Microsoft paid Spyglass $8 million.
The 1995 user guide The HTML Sourcebook: The Complete Guide to HTML, specifically states, in a section called Coming Attractions, that Internet Explorer "will be based on the Mosaic program".:331 Versions of Internet Explorer before version 7 stated "Based on NCSA Mosaic" in the About box. Internet Explorer 7 was audited by Microsoft to ensure that it contained no Mosaic code, and thus no longer credits Spyglass or Mosaic. The licensing terms for NCSA Mosaic were generous for a proprietary software program. In general, non-commercial use was free of charge for all versions (with certain limitations). Additionally, the X Window System/Unix version publicly provided source code (source code for the other versions was available after agreements were signed). Despite persistent rumors to the contrary, however, Mosaic was never released as open source software during its brief reign as a major browser; there were always constraints on permissible uses without payment. As of 1993, license holders included these: - Amdahl Corporation - Fujitsu Limited (Product: Infomosaic, a Japanese version of Mosaic. Price: Yen5,000 (approx US$50) - InfoSeek Corporation (Product: No commercial Mosaic. May use Mosaic as part of a commercial database effort) - Quadralay Corporation (Consumer version of Mosaic. Also using Mosaic in its online help and information product, GWHIS. Price: US$249) - Quarterdeck Office Systems Inc. - The Santa Cruz Operation Inc. (Product: Incorporating Mosaic into "SCO Global Access," a communications package for Unix machines that works with SCO's Open Server. Runs a graphical e-mail service and accesses newsgroups.) - SPRY Inc. (Products: A communication suite: Air Mail, Air News, Air Mosaic, etc. Also producing Internet In a Box with O'Reilly & Associates. Price: US$149–$399 for Air Series.) - Spyglass, Inc. (Product: Relicensing to other vendors. Signed deal with Digital Equipment Corp., which would ship Mosaic with all its machines.) In the October 1994 issue of Wired Magazine, Gary Wolfe notes in the article titled "The (Second Phase of the) Revolution Has Begun: Don't look now, but Prodigy, AOL, and CompuServe are all suddenly obsolete - and Mosaic is well on its way to becoming the world's standard interface": When it comes to smashing a paradigm, pleasure is not the most important thing. It is the only thing. If this sounds wrong, consider Mosaic. Mosaic is the celebrated graphical "browser" that allows users to travel through the world of electronic information using a point-and-click interface. Mosaic's charming appearance encourages users to load their own documents onto the Net, including color photos, sound bites, video clips, and hypertext "links" to other documents. By following the links - click, and the linked document appears - you can travel through the online world along paths of whim and intuition. Mosaic is not the most direct way to find online information. Nor is it the most powerful. It is merely the most pleasurable way, and in the 18 months since it was released, Mosaic has incited a rush of excitement and commercial energy unprecedented in the history of the Net. Importance of Mosaic Mosaic was the web browser that led to the Internet boom of the 1990s. Robert Reid underscores this importance stating, "while still an undergraduate, Marc wrote the Mosaic software ... that made the web popularly relevant and touched off the revolution".:xlii Reid notes that Andreessen's team hoped: ... 
to rectify many of the shortcomings of the very primitive prototypes then floating around the Internet. Most significantly, their work transformed the appeal of the Web from niche uses in the technical area to mass-market appeal. In particular, these University of Illinois students made two key changes to the Web browser, which hyper-boosted its appeal: they added graphics to what was otherwise boring text-based software, and, most importantly, they ported the software from so-called Unix computers that are popular only in technical and academic circles, to the Windows operating system, which is used on more than 80 percent of the computers in the world, especially personal and commercial computers. (p. xxv)

Mosaic was not the first web browser for Windows; that distinction belongs to Thomas R. Bruce's little-known Cello. And the Unix version of Mosaic was already famous before the Windows and Mac versions were released. Other than displaying images embedded in the text rather than in a separate window, Mosaic's original feature set was no greater than that of the browsers on which it was modeled, such as ViolaWWW. But Mosaic was the first browser written and supported by a team of full-time programmers, it was reliable and easy enough for novices to install, and the inline graphics reportedly proved immensely appealing. Mosaic is said to have made the Web accessible to the ordinary person for the first time, and it already held a 53% market share in 1995. Reid also refers to Matthew K. Gray's website, Internet Statistics: Growth and Usage of the Web and the Internet, which indicates a dramatic leap in web use around the time of Mosaic's introduction (p. xxv). In addition, David Hudson concurs with Reid, noting that:

Marc Andreessen's realization of Mosaic, based on the work of Berners-Lee and the hypertext theorists before him, is generally recognized as the beginning of the web as it is now known. Mosaic, the first web browser to win over the Net masses, was released in 1993 and made freely accessible to the public. The adjective phenomenal, so often overused in this industry, is genuinely applicable to the... 'explosion' in the growth of the web after Mosaic appeared on the scene. Starting with next to nothing, the rates of the web growth (quoted in the press) hovering around tens of thousands of percent over ridiculously short periods of time were no real surprise. (p. 42)

Ultimately, web browsers such as Mosaic became the killer applications of the 1990s. Web browsers were the first tools to bring a graphical interface to the Internet's burgeoning wealth of distributed information services. A mid-1994 guide lists Mosaic alongside the traditional, text-oriented information search tools of the time, Archie and Veronica, Gopher, and WAIS, but Mosaic quickly subsumed and displaced them all. Joseph Hardin, the director of the NCSA group within which Mosaic was developed, said downloads were up to 50,000 a month in mid-1994. In November 1992, there were twenty-six websites in the world and each one attracted attention. In its release year of 1993, Mosaic had a What's New page, and about one new link was being added per day. This was a time when access to the Internet was expanding rapidly outside its previous domain of academia and large industrial research institutions. Yet it was the availability of Mosaic and Mosaic-derived graphical browsers themselves that drove the explosive growth of the Web to over 10,000 sites by August 1995 and millions by 1998.
Metcalfe expressed the pivotal role of Mosaic this way:

In the Web's first generation, Tim Berners-Lee launched the Uniform Resource Locator (URL), Hypertext Transfer Protocol (HTTP), and HTML standards with prototype Unix-based servers and browsers. A few people noticed that the Web might be better than Gopher. In the second generation, Marc Andreessen and Eric Bina developed NCSA Mosaic at the University of Illinois. Several million then suddenly noticed that the Web might be better than sex. In the third generation, Andreessen and Bina left NCSA to found Netscape...

End of Mosaic

Mosaic's popularity as a separate browser began to lessen upon the release of Andreessen's Netscape Navigator in 1994. This was noted at the time in The HTML Sourcebook: The Complete Guide to HTML: "Netscape Communications has designed an all-new WWW browser Netscape, that has significant enhancements over the original Mosaic program." (p. 332) By 1998 its user base had almost completely evaporated, being replaced by other web browsers.

After NCSA stopped work on Mosaic, development of the NCSA Mosaic for the X Window System source code was continued by several independent groups. These independent development efforts include mMosaic (multicast Mosaic), which ceased development in early 2004, as well as Mosaic-CK and VMS Mosaic. VMS Mosaic, a version specifically targeting the OpenVMS operating system, was one of the longest-lived efforts to maintain Mosaic. Using the VMS support already built into the original version (Bjorn S. Nilsson ported Mosaic 1.2 to VMS in the summer of 1993), developers incorporated a substantial part of the HTML engine from mMosaic, another defunct flavor of the browser. As of 3 September 2003, VMS Mosaic supported HTML 4.0, OpenSSL, cookies, and various image formats including GIF, JPEG, PNG, BMP, TGA, TIFF and JPEG 2000. The browser works on VAX, Alpha, and Itanium platforms. Another long-lived version of Mosaic – Mosaic-CK, developed by Cameron Kaiser – saw its last release (version 2.7ck9) on July 11, 2010; a maintenance release with minor compatibility fixes (version 2.7ck10) was released on 9 January 2015, followed by another one (2.7ck11) in October 2015. The stated goal of the project is "Lynx with graphics", and it runs on Mac OS X, Power MachTen, Linux and other compatible Unix-like OSs. - Comparison of web browsers - History of the World Wide Web - Kevin Hughes (Internet pioneer) - List of web browsers - Stewart, William. "Mosaic -- The First Global Web Browser". Retrieved 22 February 2011. - "xmosaic 1.2 source code". NCSA. 1994-06-29. Retrieved 2009-06-02. - Andreessen, Marc. "Mosaic -- The First Global Web Browser". Retrieved 2006-12-16. - Berners-Lee, Tim. "What were the first WWW browsers?". World Wide Web Consortium. Retrieved 2010-06-15. - Holwerda, Thom (3 Mar 2009). "The World's First Graphical Browser: Erwise". OSNews. Retrieved 2009-06-02. - Vetter, Ronald J. (October 1994). "Mosaic and the World-Wide Web" (PDF). North Dakota State University. Archived from the original (PDF) on 24 August 2014. Retrieved 20 November 2010. - "Exhibits - Internet History - 1990's". Computer History Museum. 2006. Retrieved 2006-12-16. - "Mosaic FTP". NCSA. Retrieved 30 May 2010. - Clark, Jim (1999). Netscape Time. St. Martin's Press. - Berners-Lee, Tim. "A Brief History of the Web". World Wide Web Consortium. Retrieved 16 August 2010. - Andreessen, Marc; Bina, Eric (1994).
"NCSA Mosaic: A Global Hypermedia System". Internet Research. Bingley, U.K.: Emerald Group Publishing Limited. 4 (1): 7–17. ISSN 1066-2243. doi:10.1108/10662249410798803. - "NCSA X Mosaic 0.5 released". Retrieved 2013-07-06. - "The History of NCSA Mosaic". NCSA. - "About NCSA Mosaic". NCSA. Archived from the original on September 27, 2013. - "NCSA Mosaic for X 2.0 available". Retrieved 2013-07-06. - Mace, Scott (7 March 1994). "SCO brings Internet access to PCs". InfoWorld. p. 47. - Sink, Eric (2003-05-15). "Memoirs From the Browser Wars". Eric Sink's Weblog. Retrieved 2006-12-16. - Thurrott, Paul (22 January 1997). "Microsoft and Spyglass kiss and make up". Retrieved 9 February 2011. - Elstrom, Peter (22 January 1997). "MICROSOFT'S $8 MILLION GOODBYE TO SPYGLASS". Bloomberg Businessweek. Retrieved 9 February 2011. - Graham, Ian S. (1995). The HTML Sourcebook: The Complete Guide to HTML (First ed.). New York: John Wiley & Sons. ISBN 0-471-11849-4. - Wolfe, Gary (October 1994). "The (Second Phase of the) Revolution Has Begun". Wired Magazine. 2: 10. Retrieved January 7, 2015. - "A Little History of the World Wide Web From 1960s to 1995". CERN. 2001-05-05. Retrieved 2006-12-16. - Reid, Robert H. (1997). Architects of the Web: 1000 Days That Built the Future of Business. John Wiley and Sons. ISBN 0-471-17187-5. - Cockburn, Andy; Jones, Steve (6 December 2000). "Which Way Now? Analysing and Easing Inadequacies in WWW Navigation". CiteSeerX . - Hudson, David (1997). Rewired: A Brief and Opinionated Net History. Indianapolis: Macmillan Technical Publishing. ISBN 1-57870-003-5. - Lucey, Sean (9 May 1994). "Internet tools help navigate the busy virtual highway.". MacWeek: 51. - Levitt, Jason (9 May 1994). "A Matter of Attribution: Can't Forget to Give Credit for Mosaic Where Credit is Due". Open Systems Today: 71. - http://info.cern.ch/. Retrieved 2014-06-16. Missing or empty - Web Server Survey | Netcraft. News.netcraft.com. Retrieved on 2014-06-16. - "InfoWorld". 17 (34). August 21, 1995. - Roads and Crossroads of Internet History Chapter 4: Birth of the Web - dauphin, Gilles (1996). "W3C mMosaic". World Wide Web Consortium. Retrieved 2007-11-02. - Nilsson, Bjorn (1993). "README.VMS". National Center for Supercomputing Applications. Retrieved 2007-11-02. - NCSA and VMS Mosaic Version Information - "OpenVMS.org - OpenVMS Community Portal (VMS Mosaic V4.2)". OpenVMS.org. 2007. Retrieved 2007-11-02. - "Mosaic 4.0 freeware_readme.txt". Hewlett-Packard Development Company, L.P. 2006. Retrieved 2007-11-02. - "Official Mosaic-CK homepage". - Kahan, José (7 June 2002). "Change History of libwww". World Wide Web Consortium. Retrieved 30 May 2010. - Petrie, Charles; Cailliau, Robert (November 1997). "Interview Robert Cailliau on the WWW Proposal: "How It Really Happened."". Institute of Electrical and Electronics Engineers. Retrieved 18 August 2010. - Kahan, José (5 August 1999). "Why Libwww?". Retrieved 15 June 2010. - Tikka, Juha-Pekka (March 3, 2009). "The Greatest Internet Pioneers You Never Heard Of: The Story of Erwise and Four Finns Who Showed the Way to the Web Browser". Xconomy. - Welcome to Mosaic Communications Corporation! - NCSA Mosaic 1.0 home page at Déjà Vu (dejavu.org) - Beyond the Web: Excavating the Real World Via Mosaic - early application of Mosaic - NCSA Mosaic for modern Linux systems at GitHub - NCSA Mosaic Archive - In The Beginning... - A history of the Windows development effort. 
- Mosaic archive on evolt.org - Mosaic for OpenVMS systems - VMS Mosaic home page at the Wayback Machine (archived November 17, 2013) - Mosaic-CK home page
Identity development in adolescents with mental problems

© Jung et al.; licensee BioMed Central Ltd. 2013. Received: 23 January 2013. Accepted: 17 June 2013. Published: 31 July 2013.

In the revision of the Diagnostic and Statistical Manual (DSM-5), "Identity" is an essential diagnostic criterion for personality disorders (self-related personality functioning) in the alternative approach to the diagnosis of personality disorders in Section III of DSM-5. Integrating a broad range of established identity concepts, AIDA (Assessment of Identity Development in Adolescence) is a new questionnaire to assess pathology-related identity development in healthy and disturbed adolescents aged 12 to 18 years. The aim of the present study is to investigate differences in identity development between adolescents with different psychiatric diagnoses.

Participants were 86 adolescent psychiatric in- and outpatients aged 12 to 18 years. The test battery included the questionnaire AIDA and two semi-structured psychiatric interviews (SCID-II, K-DIPS). The patients were assigned to three diagnostic groups (personality disorders, internalizing disorders, externalizing disorders). Differences were analyzed by multivariate analysis of variance (MANOVA).

In line with our hypotheses, patients with personality disorders showed the highest scores on all AIDA scales, with T > 70. Patients with externalizing disorders showed scores in an average range compared to population norms, while patients with internalizing disorders lay in between with scores around T = 60. The group difference in the AIDA total score was highly significant, with a remarkable effect size of f = 0.44. Impairment of identity development thus differs between adolescent patients with different forms of mental disorders, and the AIDA questionnaire is able to discriminate between these groups. This may help to improve assessment and treatment of adolescents with severe psychiatric problems.

Keywords: Identity, Assessment, Personality disorder, Adolescence, Psychopathology

Identity is a broadly discussed construct and is linked to different psychodynamic [1, 2], social cognitive [3, 4], and philosophical theories (see Sollberger in this issue). Erikson defines identity as a hybrid concept providing a sense of continuity and a frame to differentiate between self and others, which enables a person to function autonomously. Ermann describes identity similarly as aligned in a transitional space between a given person and his or her community. On the one hand, a person has a sense of uniqueness regarding the past and the future; on the other hand, he or she sees differences as well as resemblances to others. "This sense of coherence and continuity in the context of social relatedness shapes life" (p. 139). Establishing a stable identity is one of the major developmental tasks of adolescence. These challenges of identity formation go along with identity crises that are normal and temporary phenomena in mastering age-related developmental tasks in adolescence. According to Kernberg, the transformation of the physical and psychological experiences of young people and the discrepancy between the sense of self and the others' view of the adolescent lead to identity crises. Erikson emphasizes the need for resolution of identity crises by synthesizing previous identifications and introjections into a consolidated identity.
In contrast to the non-pathological identity crisis, we use the concept of identity diffusion as a pathological identity development that is viewed as a psychiatric syndrome underlying all severe personality disorders [7, 8]. According to Kernberg’s theory of personality disorders , borderline personality organization is hallmarked by identity diffusion. Patients with identity diffusion have a non-integrated concept of the self and significant others so that a clinician cannot get a clear picture of the patient’s description of himself and of significant others in his life . There is often no commitment to jobs, goals and relationships as well as an avoidance of ambivalence associated with a painful sense of incoherence . Probably due to present changes in society with transitions in family and work, the number of patients with identity diffusion increases over time [5, 12, 13]. In contrast to the understanding outlined above, other authors (e.g. Marcia’s identity status paradigm ) view identity diffusion as a concept containing a broad range from adaptability to psychopathology like borderline personality disorders. From an optimistic point of view, identity diffused individuals are flexible (due to the lack of commitment) and seem to accommodate well to the fast-moving technological world . For other authors , post-modern life as a whole is hallmarked by a condition of diffusion. Whether one agrees with the post-modern view or not, the development of healthy and disturbed identity is a topic of high interest. In the following, new conceptualizations, methods of treatment, and diagnostic instruments of healthy and disturbed identity are discussed. Goth et al. presented an integrative understanding of healthy and disturbed identity and developed the self-report instrument AIDA (Assessment of Identity Development in Adolescence) to assess pathology-related identity development in adolescence. In the present study, the potential of AIDA is proved by investigating differences in identity development between adolescents with different psychiatric diagnoses. New conceptualizations: identity concepts in DSM-5 The DSM-IV includes identity disturbance as a criterion of borderline personality disorder and defines it as “markedly and persistently unstable self-image or sense of self” , p. 654. In the revision from DSM-IV to DSM-5[18, 19], the concept of identity is a central part of a new conceptualization of personality disorders in the alternative approach to the diagnosis of personality disorders in Section III of DSM-5 (see Schmeck et al. in this issue). The core criteria of personality disorders are composed of impairments in personality functioning in the two domains of self-functioning (self-direction and identity) and interpersonal functioning (empathy and intimacy). Identity is defined as the “experience of oneself as unique, with clear boundaries between self and others; stability of self-esteem and accuracy of self-appraisal; capacity for, and ability to regulate, a range of emotional experience” . The new model is placed in Section III of DSM-5 to stimulate further research in this field. New method of treatment: Adolescent Identity Treatment (AIT) Research of the last 15 years reveals increasing evidence that personality disorders are a prominent form of psychopathology in adolescence [21–24]. Personality disorders prior to age of 18 years can be reliably diagnosed [25, 26]. 
They have a good concurrent [24, 27] and predictive validity, with adequate internal consistency and similar stability to personality disorders in adulthood [27, 29, 30]. Thus, symptoms of personality disorders in adolescence can be diagnosed and targeted for treatment [11, 31, 32]. Paulina Kernberg described a model for understanding the impact of identity diffusion as a pathogenic mechanism in developing a personality disorder in adolescence and stressed the need to differentiate between normal identity crisis and pathological identity diffusion for a targeted therapeutic intervention. These ideas led to the development of the psychodynamic treatment approach "Adolescent Identity Treatment" (AIT). This treatment focuses on identity diffusion in adolescence and is designed to help young patients to establish satisfying relationships, gain self-esteem and clarify aims in life.

New diagnostic instrument: the questionnaire AIDA (Assessment of Identity Development in Adolescence)

Our research group developed the questionnaire AIDA (Assessment of Identity Development in Adolescence) to assess pathology-related identity development in healthy and disturbed adolescents aged 12 to 18 years in self-report for diagnostic and prognostic purposes. Thus, AIDA is well suited as a research tool to evaluate the therapy efficacy of AIT as well as of every therapy addressing improvement in self-related personality functioning related to the constructs described below.

Table 1: Theory-based suggestion for a meaningful substructure of the construct "Identity Integration vs. Identity Diffusion" and its operationalization into AIDA scales, subscales, and facets

Total score: Identity integration vs. identity diffusion
- Scale 1: Identity-Continuity vs. Discontinuity — Ego-Stability, the intuitive-emotional "I" ("changing while staying the same")
- Scale 2: Identity-Coherence vs. Incoherence — Ego-Strength, the defined "ME" ("non-fragmented self with clear boundaries")

Self-related, intrapersonal ("Me and I")
- Sub 1.1: Stability in attributes / goals vs. lack of perspective
  - F1: capacity to invest / stabilizing commitment to interests, talents, perspectives, life goals
  - F2: stable inner time-line, historical-biographical self, subjective self-sameness, sense of continuity
  - F3: stabilizing moral guidelines and inner rules
- Sub 2.1: Consistent self-image vs. contradictions
  - F1: same attributes and behaviors with different friends or situations, consistent appearance
  - F2: no extreme subjective contradictions / diversity of self-pictures, coherent self-concept
  - F3: awareness of a defined core and inner substance

Social-related, interpersonal ("Me and You")
- Sub 1.2: Stability in relations / roles vs. lack of affiliation
  - F1: capacity to invest / stabilizing commitment to lasting relationships
  - F2: positive identification with stabilizing roles (ethnic - cultural - family self)
  - F3: positive body-self
- Sub 2.2: Autonomy / ego-strength vs. over-identification, suggestibility
  - F1: assertiveness, ego-strength, no over-identification or over-matching
  - F2: independent intrinsic self-worth, no suggestibility
  - F3: autonomous self (affect) regulation

Mental representations: accessibility and complexity concerning own and others' emotions / motives
- Sub 1.3: Positive emotional self-reflection vs. distrust in stability of emotions
  - F1: understanding own feelings, good emotional accessibility
  - F2: understanding others' feelings, trust in stability of others' feelings
- Sub 2.3: Positive cognitive self-reflection vs. superficial, diffuse representations
  - F1: understanding motives and behavior, good cognitive accessibility
  - F2: differentiated and coherent mental representations

The construct "Continuity" represents the vital experience of "I" and subjective emotional self-sameness with an inner stable time line. High "Continuity" is associated with the stability of identity-giving goals, talents, commitments, roles, and relationships, and a good and stable access to emotions as well as trust in their stability. A lack of Continuity (i.e. high "Discontinuity") is associated with a missing self-related perspective, no feeling of belonging and affiliation, and a lack of access to emotional levels of reality and of trust in the durability of positive emotions. The construct "Coherence" stands for clarity of self-definition as a result of self-reflective awareness and elaboration of the "ME", accompanied by consistency in self-images, autonomy and Ego-strength, and differentiated mental representations. A lack of Coherence (i.e. high "Incoherence") is associated with being contradictory or ambivalent, suggestible and over-matching, and having poor access to cognitions and motives, accompanied by superficial and diffuse mental representations. The scales are coded towards psychopathology: high scores on the AIDA scales "Discontinuity" and "Incoherence" are indicators of identity diffusion.

The current study contrasts the identity development of personality-disordered adolescents with the identity development of adolescents suffering from internalizing or externalizing disorders. In child and adolescent psychiatric research such a procedure is often used to clarify whether discrepancies from a normal sample are specific to a particular diagnostic group or are a characteristic of mental disorders in general. As outlined above, identity problems are one of the core criteria of personality disorders, so we hypothesize that adolescents with personality disorders reach significantly higher scores in identity diffusion in comparison to other clinical groups. Up to now there are no studies on systematic differences in the level of identity problems in non-PD adolescent patients, so our second hypothesis is based on clinical experience. Patients with severe anxiety disorders and major depression experience a substantially reduced self-esteem, which could have an impact on identity development. In contrast, patients with externalizing disorders boost their self-esteem by externalizing their problems. Based on these observations we hypothesize elevated scores of identity diffusion in patients with internalizing disorders in comparison with patients with externalizing disorders.

Participants and procedures

Table 2 (tabulated values not reproduced here): Mean score (M) and standard deviation (SD) differences with associated significance level p and effect size f in the different diagnostic groups: personality disorder (PD), internalizing disorder (internal), and externalizing disorder (external). The compared scores include the AIDA total score (identity diffusion) and subscales such as 1.3 (emotional self-reflection), 2.1 (consistent self), and 2.3 (cognitive self-reflection).
N = 24 patients were assigned to the "PD" group according to the results of the SCID-II interview (15 borderline PD (F60.3), 5 other cluster-B PD, 3 cluster-C PD and 1 cluster-A PD). N = 22 were assigned to the "internal" group (15 depressive disorders (F33), 5 anxiety disorders (F40) and 2 emotional disorders (F93)). N = 10 patients were assigned to the "external" group (7 ADHD (F90, F90.1, F98.8) and 3 conduct disorder (F91)). N = 30 could not be assigned to one of the research groups because of comorbidities or non-target diagnoses. In this process we took special care to create "pure" diagnostic groups to enable valid interpretations of differences between these types of psychiatric disorders in terms of differences in identity development.

AIDA (Assessment of Identity Development in Adolescence) is a self-report questionnaire for adolescents from 12 to 18 years to assess pathology-related identity development. Its construction was based on a broad description of the field integrating classical approaches and constructs from psychodynamic and social-cognitive theories, focusing on a comprehensive and methodologically optimized assessment. The 58 items, answered in a 5-step response format, are coded towards pathology and add up to a total score ranging from "identity integration" to "identity diffusion". To facilitate scientific communication on the one hand, and research concerning possible specific relations to external variables on the other, the integrated subconstructs that together constitute "Identity Diffusion" are formulated as distinct scales and subscales. The differentiated scales and subscales refer to distinct psychosocial or functional constituents without treating them as statistically independent variables (see Table 1). In a mixed school (N = 305) and clinical sample (N = 52) AIDA showed excellent total score (Diffusion: α = .94), scale (Discontinuity: α = .86; Incoherence: α = .92) and subscale (α = .73-.86) reliabilities. Construct validity could be shown by high intercorrelations between the scales, supporting both the subdifferentiation and the subsumed total score. An exploratory factor analysis at item level confirmed a joint higher-order factor that alone explained 24.3% of the variance. High levels of Discontinuity and Incoherence were associated with low levels of Self-Directedness (JTCI 12–18 R [41, 42]), an indicator of maladaptive personality functioning. Criterion validity could be demonstrated with both AIDA scales differentiating between patients with a personality disorder (N = 20) and controls with remarkable effect sizes (d) of 2.17 and 1.94 standard deviations. Several translations of AIDA into different languages are in progress and show similarly promising results concerning psychometric properties (for the Mexican version of AIDA see Kassin & Goth, this issue).

SCID-II and K-DIPS

As the aim was to explore the thresholds between healthy development, identity crisis and identity diffusion, valid and broad measures of psychopathology were needed. We used the two well-established semi-structured diagnostic interviews SCID-II and K-DIPS. SCID-II (the Structured Clinical Interview for DSM-IV Axis II) is designed to assess personality disorders according to DSM-IV criteria; administration time is about 60–90 minutes. K-DIPS (Children – Diagnostic Interview for Psychiatric Diseases) is designed to assess axis I psychopathology in children and adolescents according to ICD-10 and DSM-IV criteria, and takes about 90–120 minutes to administer.
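The scoring and reliability figures reported above follow a standard psychometric recipe: items coded towards pathology are summed into subscale, scale, and total scores, and internal consistency is estimated with Cronbach's alpha. The following Python sketch illustrates that recipe on fabricated data; the item-to-scale assignment shown here is purely illustrative and does not reproduce the actual AIDA scoring key.

```python
# Illustrative sketch only: summing Likert-type items into scales and
# estimating internal consistency (Cronbach's alpha). The item groupings
# below are hypothetical, not the published AIDA scoring key.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, rows = respondents, columns = items of one scale."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)      # variance of each single item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the sum score
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Fake responses: 300 adolescents x 58 items, each rated 0-4 (5-step format).
# Real questionnaire data would be positively correlated across items.
rng = np.random.default_rng(0)
responses = rng.integers(0, 5, size=(300, 58))

# Hypothetical assignment of item columns to the two primary scales.
discontinuity_items = responses[:, :27]
incoherence_items = responses[:, 27:]

scores = {
    "Discontinuity": discontinuity_items.sum(axis=1),
    "Incoherence": incoherence_items.sum(axis=1),
    "Diffusion_total": responses.sum(axis=1),  # higher = more identity diffusion
}

for name, block in [("Discontinuity", discontinuity_items),
                    ("Incoherence", incoherence_items),
                    ("Diffusion_total", responses)]:
    print(f"alpha({name}) = {cronbach_alpha(block):.2f}")
```

On real AIDA data one would of course apply the published scoring key and obtain the reliabilities cited above; the sketch only shows the mechanics of forming scale scores and computing alpha.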
We used the Statistical Package for the Social Sciences (SPSS 19 for Windows) for data analyses. Differences between the three groups of psychiatric disorders in AIDA scores were analyzed by multivariate analysis of variance (MANOVA) with the factor "pathology" (PD, internal, external). "Sex" was included as a covariate, since systematic differences between boys and girls had been detected in the validation sample and different population norms had been suggested. An effect size f > .40 is considered large; an effect should be at least medium (f > .25) to avoid overinterpretation of significant group differences. The sample size is sufficient to test for large effect sizes at a significance level of p < .05.

In line with our hypotheses, the patients with personality disorders showed the highest scores on all AIDA scales and the patients with externalizing disorders the lowest, while the patients with internalizing disorders scored in between (see Table 2). For the AIDA total score "Identity Diffusion", the effect size of this highly significant group difference was large, with f = 0.44. The two primary scales "Discontinuity" and "Incoherence" differentiated between the groups with similar quality, both reaching nearly large effect sizes of f = 0.36. At the subscale level, distinct potential to differentiate between types of pathology was detected: while the identity component "Incoherence concerning consistent self-picture" differentiated between the groups with a large effect size of f = 0.43, the subscale "Discontinuity concerning attributes and goals" did not differentiate significantly between the groups. The other subscales all reached high significance and medium effect sizes.
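For readers who want to reproduce this kind of analysis outside SPSS, the sketch below shows one way to run a comparable model in Python: a multivariate test over the two primary scales, a univariate follow-up for the total score with sex as a covariate, and conversion of the explained variance into Cohen's f. The file name and column names are hypothetical; this is not the authors' original analysis script.

```python
# Illustrative sketch, not the authors' SPSS syntax: group differences in
# AIDA scores with "sex" as a covariate, plus Cohen's f from explained variance.
# File name and column names (group, sex, diffusion_T, ...) are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.multivariate.manova import MANOVA

# Expected columns: group (PD/internal/external), sex, and T-standardized scores.
df = pd.read_csv("aida_patients.csv")

# Multivariate test over both primary scales (analogous to the reported MANOVA).
mv = MANOVA.from_formula("discont_T + incoher_T ~ C(group) + C(sex)", data=df)
print(mv.mv_test())

# Univariate follow-up for the total score, with Cohen's f for the group effect.
model = ols("diffusion_T ~ C(group) + C(sex)", data=df).fit()
aov = sm.stats.anova_lm(model, typ=2)
ss_group = aov.loc["C(group)", "sum_sq"]
ss_resid = aov.loc["Residual", "sum_sq"]
eta_p2 = ss_group / (ss_group + ss_resid)     # partial eta squared
cohens_f = np.sqrt(eta_p2 / (1 - eta_p2))     # f > .25 medium, f > .40 large
print(aov)
print(f"partial eta^2 = {eta_p2:.3f}, Cohen's f = {cohens_f:.2f}")
```

As a reading aid for the reported group means: AIDA scores are expressed as T-values (T = 50 + 10·z relative to the norm sample), so the PD group's average of roughly T = 73 lies more than two standard deviations above the population mean, whereas T = 49 for the externalizing group is indistinguishable from the norm.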
While patients with PD (Diffusion total score ∅ T= 73) showed highly elevated scores, patients with internalizing disorders, mostly with clinically relevant depression, showed only slightly elevated scores concerning identity diffusion (Diffusion total score ∅ T= 61) and patients with externalizing disorders, mostly diagnosed with ADHD, did not differ from the school population in their identity development at all (Diffusion total score ∅ T= 49). One of the main aims of AIDA is to differentiate between healthy identity integration, current identity crises, and severe identity diffusion. Patients with internalizing disorders scored slightly above the population norm, which may be interpreted as the presence of a current identity crisis. We intended to build homogenous psychiatric groups to also find possible “typical profiles” of identity development and may detect distinct relations between AIDA subscales and type of pathology to help defining the threshold between “crisis” and “diffusion”. But most of the subscales did not differ in their characteristics compared to the primary scales. Thus, further research is needed in this field. Only in the “external” group noticeable differences seemed to occur: patients with externalizing behavior problems had higher levels of “good emotional access to own and others’ feelings” (sub 1.3) and of “autonomy and Ego-strength” (sub 2.2) compared to the healthy controls, while their “stabilizing commitments to interests and goals, subjective selfsameness” (sub 1.1) was nearly as impaired as in the patients of the “internal” group. It would be comprehensible, however, that patients with externalizing behavior problems (e.g. with conduct disorders) have a relatively consistent self-image (e.g. in terms of a stable criminal identity like “I am a bad guy and feel confident about that.”) and perceive themselves as autonomous (e.g. “I do whatever I want.”), but in our sample only 3 patients with conduct disorder are integrated, thus a separate examination is not possible (see “Limitations” below). With the limited number of patients in the “externalizing disorder” group it is far too early to draw far reaching conclusions from our results. It is essential to enlarge this group with much more patients to be able to differentiate between adolescents with pure ADHD and those with conduct disorder problems. In general, it is in line with the AIDA-definition of pathology-related identity development that only patients with a personality disorder show elevated scores. The frequently existing artificial overlap in assessing “contradictory behavior” (as part of all descriptions of identity diffusion) and “impulsive behavior” (as part of externalizing behavior), known from a lot of inventories assessing identity-related constructs, is avoided carefully in the questionnaire AIDA. Given this, AIDA might provide the possibility to differentiate those patients with ADHD from those with emerging antisocial personality disorder. The criteria for assignment to the three diagnostic groups were strict in order to build homogenous groups. In a classification conference, where we took the results of the diagnostic interviews and clinical experience into account, heterogeneity and comorbidity could be decreased at the cost of a large residual category. This residual category includes 30 of 86 patients which could not be assigned to one of the research groups. Therefore especially the number of patients in the externalizing group was quite low. 
Furthermore, the group of patients with internalizing problems remains heterogenic. Compared to the other diagnostic groups, the “internal” group shows relatively large standard deviations in their AIDA scores. We can’t exclude that there might be patients in this group who will develop manifest personality disorders in the future. In this study we used the semi-structured diagnostic interview SCID-II that has been developed to assess personality disorders in adults. Along with the ongoing revisions of DSM and ICD it would be very helpful if assessment instruments could be established that are focused on the symptomatology of adolescents with severe impairment of personality functioning. From a theoretical perspective, it is very useful to know that mean differences in the AIDA scores exist between diagnostic groups, but mean differences do not translate automatically into accurate diagnoses. For diagnostic purposes, we have to consider whether cut-off points regarding identity diffusion and/or crisis might be useful. Once those markers are established, we could determine false positive and false negative rates. Furthermore, when comparing groups, such as adolescents with differing diagnoses, it is important to establish the equivalence of the groups on as many potentially confounding variables as possible. Including more variables (e.g. socio-economic status, level of education, type of parenting received, relationship status of their parents, or arrest records) as well as in-group comparisons or symptom-oriented rearrangements of the sample could lead to new interesting results and show clearly that the differences in the observed identity functioning have more to do with the psychiatric condition than with other variables. All in all, further research with a bigger sample and even more homogenous groups is needed to highlight distinct profiles and to examine the thresholds between identity crisis and diffusion in detail to develop a more accurate conceptualization of the construct “Identity crisis”. For this aim, longitudinal studies would be of high interest to model the prognostic power of different levels of identity development on subscale level as well as possible changes over time. “Identity” is a construct of high interest and is discussed as an essential diagnostic criterion for personality disorders in the new DSM-5. For diagnostic purposes, AIDA seems to be a useful self-report questionnaire for adolescents from 12 to 18 years to assess pathology-related identity development in terms of this self-related personality function. As patients with personality disorders showed the highest AIDA scores compared to patients with other diagnoses and lied clearly above the population norm in their levels of identity diffusion, remarkable criterion validity can be assumed for this questionnaire and the use of AIDA can be recommended for several clinical tasks. The Article processing charge (APC) of this manuscript has been funded by the Deutsche Forschungsgemeinschaft (DFG). - Erikson EH: The theory of infantile sexuality. Childhood and Society. 1959, New York: W. W. Norton, 42-92.Google Scholar - Kernberg O: Object Relations Theory and Clinical Psychoanalysis. 2004, Oxford: Roman & Littlefield PublishersGoogle Scholar - Resch F: Zur Entwicklung von Identität. Klinische Psychotherapie des Jugendalters. Edited by: Du Bois R, Resch F. 
The IPv4 address space we have used up to now has been exhausted (it is only 32 bits wide). With new devices joining the network we need a bigger address space, so IPv6 comes with a number of extended capabilities:
- Expanded address space
- Expanded routing capabilities
- Simplified header format
- Support for authentication and privacy
- Quality of service (QoS) capabilities
The time taken by a packet to travel from source to destination is called "delay"; the variation of that delay is called "jitter". Jitter is much more harmful than delay, and we have to reduce it in order to provide QoS.
In the IPv6 packet header we find a number of fields, including:
Traffic Class – contains a number representing the packet's priority level; this is what makes QoS possible.
Flow Label – contains a small number that is easy to handle; this makes routing faster and easier.
An IPv6 address is 128 bits long. There are three kinds of addresses:
Unicast – an ID for a single interface; a packet with a unicast destination address is delivered to the single interface identified by that address.
Anycast – an ID for a group of interfaces, but the packet is delivered only to the nearest one.
Multicast – an ID for a group of interfaces; the packet goes to every interface identified by that address.
Loopback address (127.0.0.1 in IPv4) – a virtual address that returns to the same node.
- IPv6 addresses are assigned to interfaces, not nodes. All interfaces must have at least one link-local unicast address. (A single interface may have more than one IPv6 address, of different types and scopes.)
- The preferred text form is x:x:x:x:x:x:x:x, where each 'x' is the hexadecimal value of one of the eight 16-bit pieces of the address. Leading zeros may be omitted in each field, but there must be at least one character in every field, e.g. 23A7:00F3:0004:0000… = 23A7:F3:4:0…
- :: represents one or more consecutive groups of zeros (and may appear only once in an address), e.g. 23:0:0:0:0:3456:A987:8 = 23::3456:A987:8
- The mixed form is x:x:x:x:x:x:d.d.d.d, where each 'x' represents a 16-bit hexadecimal value and each 'd' an 8-bit decimal value.
- Prefixes use CIDR notation as in IPv4 (ipv6-address/prefix-length).
The address type is indicated by the leading bits:

| Address type | Leading bits (format prefix) |
| --- | --- |
| Aggregatable global unicast addresses | 001 |
| Link-local unicast addresses | 1111 1110 10 |
| Site-local unicast addresses | 1111 1110 11 |
| Multicast addresses | 1111 1111 |

Loopback address – ::1
Unspecified address – :: (has no scope and cannot be assigned to any interface)
The format prefixes 001 through 111 (i.e. 001, 011, 101 and 111, since the prefix must end in 1), except for multicast addresses, are all required to use interface identifiers in the following IEEE EUI-64 format. The EUI-64 identifier is generated from the MAC address, where 'c' denotes the company (vendor) bits, 'm' the device bits, and 'g' the global/local bit.
IPv6 addresses with embedded IPv4 addresses come in two forms: the IPv4-compatible IPv6 address and the IPv4-mapped IPv6 address.
In this situation the ISP was first given 184.108.40.206/20 and distributes it between 8 companies. To represent 8 companies we need 3 more bits, so the new subnet prefix length will be /23 (the arithmetic is repeated in the sketch below).
IPv6 with hierarchical Addressing
Transition from IPv4 to IPv6 (RFC 2893):
Dual-stack approach
- IPv6 nodes should also have a full implementation of the IPv4 stack.
- If either of the two ends is only IPv4-capable, then both ends must communicate in IPv4.
Tunneling
- Encapsulate the IPv6 datagram with an IPv4 header.
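As a quick illustration of the text representations and the subnet arithmetic described above, here is a minimal C sketch. It assumes a POSIX system (the standard inet_pton/inet_ntop functions) and reuses the example address from the notes; it is only a demonstration, not part of the original material.

```c
/* Minimal sketch (POSIX C): parse an IPv6 address, print its compressed and
 * fully expanded text forms, and repeat the /20 -> /23 subnet arithmetic. */
#include <stdio.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
    const char *text = "23:0:0:0:0:3456:A987:8";   /* example address from the notes */
    struct in6_addr addr;
    char compressed[INET6_ADDRSTRLEN];

    if (inet_pton(AF_INET6, text, &addr) != 1) {
        fprintf(stderr, "not a valid IPv6 address\n");
        return 1;
    }

    /* inet_ntop() emits the canonical compressed form (:: for the zero run). */
    inet_ntop(AF_INET6, &addr, compressed, sizeof compressed);
    printf("compressed: %s\n", compressed);

    /* Expanded form: print all eight 16-bit groups with leading zeros. */
    printf("expanded  : ");
    for (int i = 0; i < 8; i++) {
        unsigned group = (addr.s6_addr[2 * i] << 8) | addr.s6_addr[2 * i + 1];
        printf("%04x%s", group, i < 7 ? ":" : "\n");
    }

    /* Splitting a /20 prefix among 8 customers needs 3 extra bits -> /23. */
    int parent_len = 20, customers = 8, extra_bits = 0;
    while ((1 << extra_bits) < customers)
        extra_bits++;
    printf("new subnet prefix length: /%d\n", parent_len + extra_bits);
    return 0;
}
```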
The artificial artist. (personal computers)(includes related article) by Gregg Keizer They write; they paint; they make music. They pretend to be human. But they're not. They're PCs. And PCs are, with our help, masquerading as novelists, painters, musicians, poets, and sculptors. If they're good, they can fool the best of us. If they're bad, they still get a laugh. But as their mimicry improves, the lines between what they make and what we create begin to blur. Ethical, legal, and artistic questions dance around like balls in a pinball machine. They're not doing it by themselves, though. We're still pulling the strings, crafting the programs, and pushing the technology to fire our own creative juices--or to see if we can make these machines jump through the hoops. But computers of the future may not be willing to play second fiddle. Says John Grimes of the Institute of Design at the Illinois Institute of Technology, "I don't really think machines can replace what humans intrinsically do. Whatever computers become, we will define ourselves in contradistinction." Yet Grimes has spent years developing CameraWork, which makes images in ways no human could. A professional photographer, Grimes wanted to experiment with images, try countless variations--many of which he knew would fail--quickly and interactively. CameraWork was the result. By offering 30 fundamental processes and then letting users combine those artistic atoms in any number of ways, CameraWork can add, subtract, and metamorphose images in an almost infinite variety of ways. Painters have used it to transform charcoal drawings into sweeping pastels, and Grimes uses it to produce variations of his photographs. "What you end up with is something unimaginable," he says. "You create a new image that cannot be anticipated." While these synthetic photographs wouldn't be possible without the computer, Grimes dismisses the idea of computers as creators. "They don't make instant art, nor do they make anyone an instant artist. What the computer provides is a lever for the imagination." But the definitions blur. Harold Cohen, a Los Angeles-based painter, has spent the last 15 years perfecting a program that paints. Written in LISP, a computer language long associated with artificial intelligence research and development, Arron's works have appeared in several electronic art shows. AutoDesk's Chaos: The Software, though not styled as an art program certainly produces interesting images. Fire up the program, walk away, and when you come back, you'll find strange clouds, mountains, or abstracts on your monitor. To some, those images are as much art as any Jackson Pollock. Computerized self-animations--such as the MIT Media Lab's classic Cootie, in which an animated Cootie toy scuttles from place to place by its own set of rules--evoke images of the kind of electronic life software only now filtering down to the home computer. Maxis Software's SimAnt, a simulated ant colony, is a good example. "Who knows if that isn't an art form?" asks John Grimes. Total Eclipse of Art Dead women tell tales. So claims Scott French. This Foster City, California freelance writer brought Jacqueline Susann back from the grave. She was the flam-boyant author of such sultry 1960s novels as Valley of the Dolls. Using a Macintosh llcx and off-the-rack artificial intelligence software, French painstakingly re-created Susann's style, characters, and stories, and then collaborated with the Mac on Just This Once, a steamy pseudo-Susann novel updated for the 1990s. 
French picked apart Susann's writing and then, using the Al software, distilled her prose and plot lines into several hundred formulas. These told the computer how to write, shape the plot, and develop characters. After French made some suggestions, the computer gushed out copy that would make the late author proud--or ashamed. Just This Once is no fiction-by-silicon fiat. "It's really a collaboration," says French. "I like to think we did it together." French estimates that he wrote about 10 to 15 percent, while the computer cranked out anoter 25 percent on its own. The rest was a back-and-forth between authors, much as in any other writing tag team. How good is Just This Once? How good was Jacqueline Susann? Susann, in Valley of the Dolls: "She went into the house and grabbed a bottle of Scotch off the bar. Then she went into her bedroom, pulled the blinds to shut out the daylight, shut off her phone and swallowed five red pills. Five red ones hardly did anything now." Franch/Macintosh in Just This Once: "Lisa picked up the large propane torch and cracked the valve open a hair. The compressed air hissed out like an angry rattlesnake. She snapped the flint wheel on her lighter and the stream of invisible gas flashed into an iridescent blue streak." French has signed with a New York agent and hopes to see Just This Once in bookstores soon. If so, it will be the world's first fiction written primarily by a computer. But French isn't ready to rest on just one novel--or one writer. "It's possible to take two separate writers, in the same genre perhaps, and combine them to come out with a synergistic product. You're making a third person out of it." Other artificial writers are less ambitious. Headliner helps write advertising slogans but is more of a brainstorming tool than anything else. Give this PC program a word--say, art-and ask it to pull up some titles, proverbs, and idioms with rhyming words, and you'll end up with something like the headline at the beginning of this section. Corporate Voice, a souped-up grammar and style checker, tries to mold your text to a standard you set. For companies that want all outgoing material to reflect a single style, Voice can twist words to sound as if they came from Raymond Chandler or Mark Twain. Strangest of all, a computer went undercover on UseNet, an online network that links corporate, academic, and government research labs, and spewed out bizarre messages. Never challenged, Mark V. Shaney, the computer's nom de plume, sent back nonsensical ditties like "I am afraid of it becoming another island in a nice suit." No one suspected it was software. They thought it was just another electronic nut. Computers can help artists visualize 3-D works, but they can't put hands to clay to execute the dream. Not yet. Advances in desktop manufacturing foreshadow a future where artists sit at the screen create a sculpture with something akin to CAD (Computer Aided Design), and then build it on their desktop, all under computer control. Several competing technologies that range from solidfying liquid plastic with an ultraviolet laser to hardening a powder with a jet of silica, deliver small-sized replicas of computer-generated designs. The computer scans a design in superthin slices and then translates the image into just-as-thin cross sections of the object. 
The high cost of such desktop manufacturing machines--they go for up to half a million dollars--means that, for the moment, they'll stay in high-profit manufacturing where they're used to create ceramic molds and heart valve prototypes. But if prices drop, on-the-edge artists may grab the technology to build works of art at their desks without getting their hands dirty.
It's Pretty, but Is It Art?
"Is it possible for computers to be a great aid in expression?" asks Grimes. "Yes. Is it possible for the computer to be an integral part of that process? Yes. Can computers replace artists? No." No? Artificial intelligence is still in its infancy, even after years of research. Artists and nonartists will continue refining electronic efforts that ape our ability to express ourselves in words, paint, and music, if only to prove that it can be done. "The computer suggested changes that I couldn't see," claims French. "No human could do it; it's simply overwhelming." If computers can create something pretty, something art, it's our fault. We taught them everything they know.
The AVR ATmega16 is a low-power CMOS 8-bit microcontroller based on the AVR enhanced RISC architecture. The throughput of the ATmega16 is about 1 MIPS per MHz, using a single clock per instruction, which allows the system designer to optimize power consumption versus processing speed. The AVR has 32 general purpose working registers and a rich instruction set. The registers are directly connected to the ALU, allowing two independent registers to be accessed in one single instruction executed in one clock cycle. This getting-started tutorial for the AVR microcontroller helps you understand it through real-time examples. Atmel Studio 6 is used for writing the C code and generating the hex file.
ATmega16 Microcontroller Features
We used the ATmega16 because it provides the following features:
- Low-power CMOS 8-bit controller with the AVR RISC architecture
- Throughput of up to 16 MIPS at 16 MHz
- 32 general purpose registers directly connected to the ALU
- 16 Kbytes of in-system programmable flash memory
- 512 bytes of EEPROM, 1 Kbyte of SRAM, JTAG interface
- Three timer/counters with compare modes
- Internal and external interrupts
- Serial programmable USART and an I2C-compatible serial interface
ATmega16 Microcontroller Pin Descriptions
VCC – Digital supply voltage.
Port A (PA7-PA0) – Used for the analog inputs of the A/D converter. If the A/D converter is not enabled, it also serves as an 8-bit bi-directional input/output port. Port pins can provide internal pull-up resistors (selected for each bit).
Port B (PB7-PB0) – Port B is an 8-bit bi-directional input/output port with internal pull-up resistors. The output buffers of Port B have symmetrical drive characteristics with high sink and source capability. As inputs, Port B pins that are externally pulled low will source current if the pull-up resistors are activated.
Port C (PC7-PC0) – Port C's special feature is the JTAG interface. If the JTAG interface is enabled, the pull-up resistors on pins PC5 (TDI), PC3 (TMS) and PC2 (TCK) will be activated even if a reset occurs. Apart from this, Port C can also be used as an 8-bit bi-directional input/output port with internal pull-up resistors.
Port D (PD7-PD0) – Port D serves various special functions of the ATmega16 such as interrupt inputs, timer/counter outputs and the UART. In addition, Port D is an 8-bit bi-directional input/output port with internal pull-up resistors.
ATmega16 Microcontroller CLOCK SOURCES
The ATmega16 has the following clock source options, selectable by flash fuse bits as shown below. The clock from the selected source is input to the AVR clock generator and routed to the appropriate modules.

| Device clocking option | Fuse bits (CKSEL3..0) |
| --- | --- |
| External crystal or ceramic resonator | 1111 – 1010 |
| External low-frequency crystal | 1001 |
| External RC oscillator | 1000 – 0101 |
| Calibrated internal RC oscillator | 0100 – 0001 |
| External clock | 0000 |

A fuse bit value of "1" means un-programmed, while "0" means programmed.
ATmega16 Microcontroller USART
The main microcontroller feature used here is the USART, which provides serial communication between the controller and a GSM modem. The Universal Synchronous and Asynchronous serial Receiver and Transmitter (USART) is a highly flexible serial communication device. Commonly supported baud rates are 1200, 1800, 2400, 4800, 7200, 9600, 14400, 19200, 38400, 57600 and 115200. In our project we are using 9600.
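Before going into the USART registers, here is a minimal sketch of the general-purpose port access described in the pin descriptions above. It assumes avr-gcc with <avr/io.h>; the particular pins (PB0 as output, PA0 as input) are arbitrary example choices, not taken from the original project.

```c
/* Minimal port I/O sketch for the ATmega16 (avr-gcc).
 * PB0 is driven as an output and mirrors PA0, which is read as an
 * input with its internal pull-up enabled. */
#include <avr/io.h>

int main(void)
{
    DDRB  |= (1 << PB0);       /* PB0 as output                    */
    DDRA  &= ~(1 << PA0);      /* PA0 as input                     */
    PORTA |= (1 << PA0);       /* enable PA0 internal pull-up      */

    for (;;) {
        if (PINA & (1 << PA0))        /* pin reads high (not pulled low) */
            PORTB |= (1 << PB0);      /* drive PB0 high                  */
        else
            PORTB &= ~(1 << PB0);     /* drive PB0 low                   */
    }
}
```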
Serial USART hardware elements include:
- The USART clock generator, which provides the clock source and sets the baud rate using UBRR
- The USART transmitter, which sends a character through the TxD pin and handles start/stop bit framing, the parity bit and the shift register
- The USART receiver, which receives a character through the RxD pin and performs the reverse operation of the transmitter
- The USART registers, used to configure, control and monitor the serial USART
The USART Baud Rate Register comprises two registers, UBRRH and UBRRL, i.e. 16 bits in total. The UBRRH register shares the same I/O location as the UCSRC register. To handle this, bit 15 (URSEL, Register Select) is used. This bit selects between accessing the UBRRH or the UCSRC register. It is read as zero when reading UBRRH, and URSEL must be zero when writing UBRRH. Bits 14:12 are reserved for future use; for compatibility with future devices, these bits must be written to zero when UBRRH is written. Bits 11-0 contain the USART baud rate. UBRRH contains the four most significant bits, and UBRRL contains the 8 least significant bits of the USART baud rate. Ongoing transmissions by the transmitter and receiver will be corrupted if the baud rate is changed. Writing UBRRL triggers an immediate update of the baud rate prescaler.
The USART Control and Status Registers comprise three registers, UCSRA, UCSRB and UCSRC, which hold the flags and control bits for the USART. UCSRA contains the following flags: RXC (USART Receive Complete), TXC (USART Transmit Complete), UDRE (USART Data Register Empty), FE (Frame Error), DOR (Data Over Run), PE (Parity Error), MPCM (Multi-processor Communication Mode) and the control bit U2X (Double the USART Transmission Speed). UCSRB contains the following control bits: RXCIE (RX Complete Interrupt Enable), TXCIE (TX Complete Interrupt Enable), UDRIE (USART Data Register Empty Interrupt Enable), RXEN (Receiver Enable), TXEN (Transmitter Enable), UCSZ2 (Character Size), RXB8 (Receive Data Bit 8) and TXB8 (Transmit Data Bit 8). UCSRC contains the following control bits: URSEL (Register Select), UPM1:0 (Parity Mode), USBS (Stop Bit Select), UCSZ1:0 (Character Size) and UCPOL (Clock Polarity).
The USART Data Register (UDR) is the buffer for characters sent or received through the serial port. To start sending a character, write it to UDR; to check a received character, read it from UDR.
The main serial USART tasks are initializing the serial port, sending a character, receiving a character and sending/receiving formatted strings. Initializing the serial port includes setting the USART communication parameters (data bits, stop bits and parity), enabling the transmitter and receiver, selecting asynchronous mode and setting the baud rate. To send a character, wait until the UDRE flag is set to 1 and then write the character to the UDR register for transmission. To receive a character, wait until the RXC flag is set to 1 and then read the received character from the UDR register (see the sketch below).
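The steps just described can be put together into a short C routine. The following is a minimal sketch for the ATmega16 with avr-gcc, using 9600 baud as in the project and an 8N1 frame; the 8 MHz clock frequency is an assumed value, so adjust the UBRR calculation for the actual crystal used.

```c
/* Minimal USART sketch for the ATmega16: init at 9600 baud (8N1),
 * blocking send and receive, and a simple echo loop. */
#include <avr/io.h>

#define F_CPU     8000000UL                         /* assumed clock frequency */
#define BAUD      9600UL
#define UBRR_VAL  ((F_CPU / (16UL * BAUD)) - 1)     /* asynchronous normal mode */

static void usart_init(void)
{
    UBRRH = (unsigned char)(UBRR_VAL >> 8);         /* baud rate, high byte */
    UBRRL = (unsigned char)(UBRR_VAL & 0xFF);       /* baud rate, low byte  */
    UCSRB = (1 << RXEN) | (1 << TXEN);              /* enable receiver and transmitter */
    UCSRC = (1 << URSEL) | (1 << UCSZ1) | (1 << UCSZ0); /* URSEL selects UCSRC; 8 data bits, no parity, 1 stop bit */
}

static void usart_send(char c)
{
    while (!(UCSRA & (1 << UDRE)))                  /* wait until data register is empty */
        ;
    UDR = c;                                        /* start transmission */
}

static char usart_receive(void)
{
    while (!(UCSRA & (1 << RXC)))                   /* wait until a character is received */
        ;
    return UDR;                                     /* read the received character */
}

int main(void)
{
    usart_init();
    for (;;)
        usart_send(usart_receive());                /* echo everything back */
}
```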
Rajan Zaveri/Climate Central This flaw of treating bioenergy as carbon neutral is enmeshed in climate policies and models worldwide. Its impacts have become apparent in recent years as European power plants were encouraged to switch from coal to wood, and as the U.S. increased the amount of biofuel that must be blended into gasoline. Both. Woodworking projects for everyone count where pollution is heavily regulated and taxed under Europes carbon trading program, the approach disguises the climate impacts of wood power by spreading responsibility for pollution away from individual power plants, there, woodworking projects for everyone count in reality, to foreign nations forestry sectors. But that would require an extraordinary overhaul of global climate policy.pine plantations and natural forests is being turned into pellets woodworking projects for everyone count and shipped to European power plants, mostly to Drax power station in the U.K.keeping it out of the atmosphere. Those forests lock carbon on land, given the loose enforcement and targets for greenhouse gas reductions in many parts woodworking projects for everyone count of the world, reilly warned that incentives to avoid deforestation could remain insufficient. In general, The mills are often opposed by neighbors and environmental groups fine woodworking projects rocking worried about pollution, deforestation, and heavy 24-hour truck traffic. Public subsidies on both sides of the ocean enrich the industry. European countries subsidize the pellet purchases by energy companies while county and southern state governments in America subsidize pellet producers with tax breaks and free. Most of the measures were dismissed by environmental groups as inadequate. Those groups tepidly welcomed one proposed rule, however, which would impose efficiency restrictions on new wood-burning power plants. The legislation was welcomed by members of the burgeoning wood energy sector, which could quickly collapse if European subsidies are yanked. We look forward to continuing. Cutting down trees to produce so-called biomass energy also reduces a forests ability to absorb carbon dioxide. Producing and shipping the pellets worsens climate impacts and those are the only climate impacts from wood energy for which Drax is held accountable by European authorities. On Dec. 19, the European Commission announced it had completed an. Woodworking projects for everyone count: David Carr, an attorney with the Southern Environmental Law Center, which is one of the environmental groups in Europe and the U.S. uniting in a campaign to oppose most wood energy. They have been pushing for new rules that could halt the use of subsidized wood for electricity in Europe after 2020. Its a big. said John Reilly, some scientists are supportive of such an approach but only theoretically. A member of a panel that provides scientific advice to the EPA about measuring pollution from bioenergy. It basically provides a whitewash. Mark Harmon, this can all work as kid woodworking projects recipe long as there is comprehensive accounting, As American foresters ramp up logging to meet the growing demand for wood pellets by power plants on the other side of the Atlantic Ocean, a new European wood energy proposal would allow the power plants to continue claiming their operations are green for at least 13 more years, despite releasing more heat-trapping pollution than. 
The rules provide a regulatory path forward for ensuring the European Union meets its 2030 pledge under the Paris climate agreement. If they become law following votes in European Parliament, as currently drafted, that path forward would be paved with deceitful accounting practices. The proposed rules were released just a few months after the commission. (Rules covering the period after 2030 will be considered in future years.) The E.U. aims to reduce climate emissions to 20 percent below 1990 levels by 2020. With global warming nearing 2 degrees F since the early 1800s, fueling heat waves, floods, and coral die-offs, the E.U. pledged under last years U.N. Paris climate agreement. Is falling short of climate targets. It would also demand more ambitious and costly efforts to meet future ones. In August, the European Commission appeared to be leaning toward trying to close the loophole, with the release of a report detailing the heavy risks that wood energy posed to American forests and the climate. At. such double counting would occur if one country reported carbon pollution from deforestation when pellets were produced, reducing forest carbon, a spokesperson said its biomass energy proposal mirrors international rules and that it was crafted to avoid double counting. The European Commission declined interview requests for this story, woodworking projects for everyone count but in emails,and activists who point to wood energys risks and harms. American woodworking plans children 7 families forestry companies operate a blog that attacks the credibility of scientists, with industry groups touting woodworking projects for everyone count non-scientific assumptions to make false claims about greenhouse gas savings. Misinformation about the climate impacts of wood energy is rife, journalists, Cradle woodworking plans on youtube! Babies are comfortably introduced to the computer and learn cause-and-effect relationships. We encourage you to further tailor your baby's learning experience by discussing the animals. Platforms: Mac License: Freeware Size: 43.4 MB Download (51 Giggles Computer Funtime For Baby: My Animal Friends for Mac OS Download Baby Names 2011 Released: August 08, 2012 Added: August 08. Bonus pattern includes number 400 Plant Cart for Indoors and Out. Visit our FAQ page for a full definition. View the Larger Image Slideshow to see the actual item you are buying. Planter Box Downloadable Woodworking Plan PDF This planter box (or pot plant holder) is completely held together with wire. Construction is simply a. Consumer Product Safety Commission Playground Safety Tips By Safe Kids Worldwide International Playground Contractors Association National Program for Playground Safety (NPPS ) National Recreation and Park Association U.S. Consumer Product Safety Commission Age-Appropriate Equipment Choose toys and equipment that are safe and suitable for the ages and stages of children in your program. The Consumer. Continue the woodworking project by making the frame of the coffee table. As you can see in the plans, we recommend you to build the exterior frame out of 14 lumber, making sure the corners are right-angled. Top Tip: Drill pocket holes at both ends of the short components and insert 1 1/4 screws. Add. Displaying Page 4. Hope Chest. Each piece is a seperate component that can be manipulated and measure individually. This Hope Chest was based on the plans provided by American Woodworker. Link Type: free plans Wood Source: Google 3D Fix Link? 
Hope Chest This hope chest could be used for toys, blankets or extra storage. Here. Free Plans For Small Woodworking furniture plans to kick Projects. Free Plans For Small Woodworking Projects Fine Woodworking Roubo Workbench Plans Free Plans For Octagon Walk In. Free plans to help anyone build simple, stylish furniture at large discounts from retail furniture. All woodworking plans are step by step, and include table plans. Reader Showcase Two Toned Chaise Lounge for FFA Fair. Plans. Outdoor. Furniture. Seating. Chaise Lounges. Intermediate. Chesapeake Collection. Outdoor Chaise Lounge Plans. Order outdoor chaise lounge plans today and. Get free woodworking tutorials and project ideas fit for beginner and advanced skill sets. Learn about common tools, woodworking techniques and more. Woodworking Getting Started in Woodworking - Basics for Beginners Woodworking. Want to Build Your Own Cabinets? It's Easier Than You Might Think Woodworking The 7 Power Tools Every Woodworker Needs to Own. Woodworking Learn to Make Beautiful Louvered Doors and Window Shutters. Woodworking Safety Rules Every Woodworker Should Know. Handcrafted Round Dining Tables From Erik Organic Experience Erik Organic Beauty Do you need help finding the perfect dining table for your home? Give us a call at. Click here » Request a catalog and browse through our beautiful selection of tables. Click here » Erik Organic, 3125 East 78th Street, Inver Grove Heights MN, 55076, U.S.A. Copyright 2017 Erik Organic. All Rights Reserved. this small easy woodworking projects in san diego bandsaw was designed by Matthias Wandel, plans for this bandsaw are available here: This video I guess is really a construction diary of building a homemade or shop built bandsaw out woodworking projects for everyone count of wood, homemade Bandsaw - Pt.1. A slideshow and a couple of shots through building my own bandsaw.
A 3D scanner is a device that analyses a real-world object or environment to collect data on its shape and possibly its appearance (e.g. colour). The collected data can then be used to construct digital three-dimensional models.
Many different technologies can be used to build these 3D-scanning devices; each technology comes with its own limitations, advantages and costs. Many limitations in the kind of objects that can be digitised are still present, for example, optical technologies encounter many difficulties with shiny, mirroring or transparent objects. For example, industrial computed tomography scanning can be used to construct digital 3D models, applying non-destructive testing.
Collected 3D data is useful for a wide variety of applications. These devices are used extensively by the entertainment industry in the production of movies and video games. Other common applications of this technology include industrial design, orthotics and prosthetics, reverse engineering and prototyping, quality control/inspection and documentation of cultural artifacts.
The purpose of a 3D scanner is usually to create a point cloud of geometric samples on the surface of the subject. These points can then be used to extrapolate the shape of the subject (a process called reconstruction). If colour information is collected at each point, then the colours on the surface of the subject can also be determined.
3D scanners share several traits with cameras. Like most cameras, they have a cone-like field of view, and like cameras, they can only collect information about surfaces that are not obscured. While a camera collects colour information about surfaces within its field of view, a 3D scanner collects distance information about surfaces within its field of view. The "picture" produced by a 3D scanner describes the distance to a surface at each point in the picture. This allows the three dimensional position of each point in the picture to be identified.
For most situations, a single scan will not produce a complete model of the subject. Multiple scans, even hundreds, from many different directions are usually required to obtain information about all sides of the subject. These scans have to be brought into a common reference system, a process that is usually called alignment or registration, and then merged to create a complete model. This whole process, going from the single range map to the whole model, is usually known as the 3D scanning pipeline.
There are a variety of technologies for digitally acquiring the shape of a 3D object. A well established classification divides them into two types: contact and non-contact.
Non-contact solutions can be further divided into two main categories, active and passive. There are a variety of technologies that fall under each of these categories.
Contact 3D scanners probe the subject through physical touch, while the object is in contact with or resting on a precision flat surface plate, ground and polished to a specific maximum of surface roughness. Where the object to be scanned is not flat or cannot rest stably on a flat surface, it is supported and held firmly in place by a fixture. The scanner mechanism may have three different forms:
- A carriage system with rigid arms held tightly in perpendicular relationship and each axis gliding along a track. Such systems work best with flat profile shapes or simple convex curved surfaces.
- An articulated arm with rigid bones and high precision angular sensors. The location of the end of the arm involves complex math calculating the wrist rotation angle and hinge angle of each joint. This is ideal for probing into crevasses and interior spaces with a small mouth opening.
- A combination of both methods may be used, such as an articulated arm suspended from a traveling carriage, for mapping large objects with interior cavities or overlapping surfaces.
A CMM (coordinate measuring machine) is an example of a contact 3D scanner. It is used mostly in manufacturing and can be very precise. The disadvantage of CMMs, though, is that they require contact with the object being scanned. Thus, the act of scanning the object might modify or damage it. This fact is very significant when scanning delicate or valuable objects such as historical artifacts. The other disadvantage of CMMs is that they are relatively slow compared to the other scanning methods. Physically moving the arm that the probe is mounted on can be very slow, and the fastest CMMs can only operate at a few hundred hertz. In contrast, an optical system like a laser scanner can operate from 10 to 500 kHz. Other examples are the hand-driven touch probes used to digitise clay models in the computer animation industry.
Active scanners emit some kind of radiation or light and detect its reflection, or the radiation passing through the object, in order to probe an object or environment. Possible types of emissions used include light, ultrasound or x-ray.
The time-of-flight 3D laser scanner is an active scanner that uses laser light to probe the subject. At the heart of this type of scanner is a time-of-flight laser range finder. The laser range finder finds the distance of a surface by timing the round-trip time of a pulse of light. A laser is used to emit a pulse of light and the amount of time before the reflected light is seen by a detector is measured. Since the speed of light is known, the round-trip time determines the travel distance of the light, which is twice the distance between the scanner and the surface. If c is the speed of light and t is the round-trip time, then the distance is equal to c·t/2. The accuracy of a time-of-flight 3D laser scanner depends on how precisely we can measure the time: 3.3 picoseconds (approx.) is the time taken for light to travel 1 millimetre. The laser range finder only detects the distance of one point in its direction of view. Thus, the scanner scans its entire field of view one point at a time by changing the range finder's direction of view to scan different points. The view direction of the laser range finder can be changed either by rotating the range finder itself, or by using a system of rotating mirrors.
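As a small numeric illustration of the c·t/2 relation and the timing figure just given, the following C sketch computes the range for an example round-trip time (the 66.7 ns value is an arbitrary illustration, not from the source) and the time light takes to cross 1 mm.

```c
/* Time-of-flight arithmetic: distance = c * t / 2, and the time for
 * light to travel 1 mm (~3.3 ps), which sets the timing precision needed. */
#include <stdio.h>

int main(void)
{
    const double c = 299792458.0;            /* speed of light, m/s            */
    double round_trip = 66.7e-9;             /* example round-trip time, 66.7 ns */
    double distance = c * round_trip / 2.0;  /* about 10 m to the surface      */
    double t_one_mm = 1.0e-3 / c;            /* time for light to cross 1 mm   */

    printf("distance to surface : %.3f m\n", distance);
    printf("light crosses 1 mm in %.2f ps\n", t_one_mm * 1e12);
    return 0;
}
```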
The latter method is commonly used because mirrors are much lighter and can thus be rotated much faster and with greater accuracy. Typical time-of-flight 3D laser scanners can measure the distance of 10,000~100,000 points every second. Time-of-flight devices are also available in a 2D configuration. This is referred to as a time-of-flight camera. Triangulation based 3D laser scanners are also active scanners that use laser light to probe the environment. With respect to time-of-flight 3D laser scanner the triangulation laser shines a laser on the subject and exploits a camera to look for the location of the laser dot. Depending on how far away the laser strikes a surface, the laser dot appears at different places in the camera's field of view. This technique is called triangulation because the laser dot, the camera and the laser emitter form a triangle. The length of one side of the triangle, the distance between the camera and the laser emitter is known. The angle of the laser emitter corner is also known. The angle of the camera corner can be determined by looking at the location of the laser dot in the camera's field of view. These three pieces of information fully determine the shape and size of the triangle and give the location of the laser dot corner of the triangle. In most cases a laser stripe, instead of a single laser dot, is swept across the object to speed up the acquisition process. The National Research Council of Canada was among the first institutes to develop the triangulation based laser scanning technology in 1978. Strengths and weaknesses Time-of-flight and triangulation range finders each have strengths and weaknesses that make them suitable for different situations. The advantage of time-of-flight range finders is that they are capable of operating over very long distances, on the order of kilometres. These scanners are thus suitable for scanning large structures like buildings or geographic features. The disadvantage of time-of-flight range finders is their accuracy. Due to the high speed of light, timing the round-trip time is difficult and the accuracy of the distance measurement is relatively low, on the order of millimetres. Triangulation range finders are exactly the opposite. They have a limited range of some meters, but their accuracy is relatively high. The accuracy of triangulation range finders is on the order of tens of micrometers. Time-of-flight scanners' accuracy can be lost when the laser hits the edge of an object because the information that is sent back to the scanner is from two different locations for one laser pulse. The coordinate relative to the scanner's position for a point that has hit the edge of an object will be calculated based on an average and therefore will put the point in the wrong place. When using a high resolution scan on an object the chances of the beam hitting an edge are increased and the resulting data will show noise just behind the edges of the object. Scanners with a smaller beam width will help to solve this problem but will be limited by range as the beam width will increase over distance. Software can also help by determining that the first object to be hit by the laser beam should cancel out the second. At a rate of 10,000 sample points per second, low resolution scans can take less than a second, but high resolution scans, requiring millions of samples, can take minutes for some time-of-flight scanners. The problem this creates is distortion from motion. 
Since each point is sampled at a different time, any motion in the subject or the scanner will distort the collected data. Thus, it is usually necessary to mount both the subject and the scanner on stable platforms and minimise vibration. Using these scanners to scan objects in motion is very difficult. When scanning in one position for any length of time slight movement can occur in the scanner position due to changes in temperature. If the scanner is set on a tripod and there is strong sunlight on one side of the scanner then that side of the tripod will expand and slowly distort the scan data from one side to another. Some laser scanners have level compensators built into them to counteract any movement of the scanner during the scan process. In a conoscopic system, a laser beam is projected onto the surface and then the immediate reflection along the same ray-path are put through a conoscopic crystal and projected onto a CCD. The result is a diffraction pattern, that can be frequency analyzed to determine the distance to the measured surface. The main advantage with conoscopic holography is that only a single ray-path is needed for measuring, thus giving an opportunity to measure for instance the depth of a finely drilled hole. Hand-held laser scanners Hand-held laser scanners create a 3D image through the triangulation mechanism described above: a laser dot or line is projected onto an object from a hand-held device and a sensor (typically a charge-coupled device or position sensitive device) measures the distance to the surface. Data is collected in relation to an internal coordinate system and therefore to collect data where the scanner is in motion the position of the scanner must be determined. The position can be determined by the scanner using reference features on the surface being scanned (typically adhesive reflective tabs, but natural features have been also used in research work) or by using an external tracking method. External tracking often takes the form of a laser tracker (to provide the sensor position) with integrated camera (to determine the orientation of the scanner) or a photogrammetric solution using 3 or more cameras providing the complete six degrees of freedom of the scanner. Both techniques tend to use infra red light-emitting diodes attached to the scanner which are seen by the camera(s) through filters providing resilience to ambient lighting. Data is collected by a computer and recorded as data points within three-dimensional space, with processing this can be converted into a triangulated mesh and then a computer-aided design model, often as non-uniform rational B-spline surfaces. Hand-held laser scanners can combine this data with passive, visible-light sensors — which capture surface textures and colors — to build (or "reverse engineer") a full 3D model. Structured-light 3D scanners project a pattern of light on the subject and look at the deformation of the pattern on the subject. The pattern is projected onto the subject using either an LCD projector or other stable light source. A camera, offset slightly from the pattern projector, looks at the shape of the pattern and calculates the distance of every point in the field of view. Structured-light scanning is still a very active area of research with many research papers published each year. Perfect maps have also been proven useful as structured light patterns that solve the correspondence problem and allow for error detection and error correction. [See Morano, R., et al. 
"Structured Light Using Pseudorandom Codes," IEEE Transactions on Pattern Analysis and Machine Intelligence. The advantage of structured-light 3D scanners is speed and precision. Instead of scanning one point at a time, structured light scanners scan multiple points or the entire field of view at once. Scanning an entire field of view in a fraction of a second reduces or eliminates the problem of distortion from motion. Some existing systems are capable of scanning moving objects in real-time. VisionMaster creates a 3D scanning system with a 5-megapixel camera – 5 million data points are acquired in every frame. A real-time scanner using digital fringe projection and phase-shifting technique (certain kinds of structured light methods) was developed, to capture, reconstruct, and render high-density details of dynamically deformable objects (such as facial expressions) at 40 frames per second. Recently, another scanner has been developed. Different patterns can be applied to this system, and the frame rate for capturing and data processing achieves 120 frames per second. It can also scan isolated surfaces, for example two moving hands. By utilising the binary defocusing technique, speed breakthroughs have been made that could reach hundreds of to thousands of frames per second. Modulated light 3D scanners shine a continually changing light at the subject. Usually the light source simply cycles its amplitude in a sinusoidal pattern. A camera detects the reflected light and the amount the pattern is shifted by determines the distance the light travelled. Modulated light also allows the scanner to ignore light from sources other than a laser, so there is no interference. Computed tomography (CT) is a medical imaging method which generates a three-dimensional image of the inside of an object from a large series of two-dimensional X-ray images, similarly Magnetic resonance imaging is another medical imaging technique that provides much greater contrast between the different soft tissues of the body than computed tomography (CT) does, making it especially useful in neurological (brain), musculoskeletal, cardiovascular, and oncological (cancer) imaging. These techniques produce a discrete 3D volumetric representation that can be directly visualised, manipulated or converted to traditional 3D surface by mean of isosurface extraction algorithms. Although most common in medicine, Industrial computed tomography, Microtomography and MRI are also used in other fields for acquiring a digital representation of an object and its interior, such as non destructive materials testing, reverse engineering, or studying biological and paleontological specimens. Passive 3D imaging solutions do not emit any kind of radiation themselves, but instead rely on detecting reflected ambient radiation. Most solutions of this type detect visible light because it is a readily available ambient radiation. Other types of radiation, such as infra red could also be used. Passive methods can be very cheap, because in most cases they do not need particular hardware but simple digital cameras. - Stereoscopic systems usually employ two video cameras, slightly apart, looking at the same scene. By analysing the slight differences between the images seen by each camera, it is possible to determine the distance at each point in the images. This method is based on the same principles driving human stereoscopic vision. - Photometric systems usually use a single camera, but take multiple images under varying lighting conditions. 
These techniques attempt to invert the image formation model in order to recover the surface orientation at each pixel.
- Silhouette techniques use outlines created from a sequence of photographs around a three-dimensional object against a well contrasted background. These silhouettes are extruded and intersected to form the visual hull approximation of the object. With these approaches some concavities of an object (like the interior of a bowl) cannot be detected.
User assisted (image-based modelling)
There are other methods that, based on user-assisted detection and identification of some features and shapes in a set of different pictures of an object, are able to build an approximation of the object itself. These kinds of techniques are useful for building fast approximations of simply shaped objects like buildings. Various commercial packages are available, such as D-Sculptor, iModeller, Autodesk ImageModeler, 123DCatch or PhotoModeler. This sort of 3D imaging solution is based on the principles of photogrammetry. It is also somewhat similar in methodology to panoramic photography, except that the photos are taken of one object in a three-dimensional space in order to replicate it, instead of taking a series of photos from one point in a three-dimensional space in order to replicate the surrounding environment.
From point clouds
The point clouds produced by 3D scanners and 3D imaging can be used directly for measurement and visualisation in the architecture and construction world.
- Polygon mesh models: In a polygonal representation of a shape, a curved surface is modeled as many small faceted flat surfaces (think of a sphere modeled as a disco ball). Polygon models (also called mesh models) are useful for visualisation and for some CAM (i.e., machining), but are generally "heavy" (i.e., very large data sets), and are relatively un-editable in this form. Reconstruction to a polygonal model involves finding and connecting adjacent points with straight lines in order to create a continuous surface. Many applications, both free and nonfree, are available for this purpose (e.g. MeshLab, PointCab, kubit PointCloud for AutoCAD, JRC 3D Reconstructor, imagemodel, PolyWorks, Rapidform, Geomagic, Imageware, Rhino 3D etc.).
- Surface models: The next level of sophistication in modeling involves using a quilt of curved surface patches to model the shape. These might be NURBS, TSplines or other curved representations of curved topology. Using NURBS, the spherical shape becomes a true mathematical sphere. Some applications offer patch layout by hand but the best in class offer both automated patch layout and manual layout. These patches have the advantage of being lighter and more manipulable when exported to CAD. Surface models are somewhat editable, but only in a sculptural sense of pushing and pulling to deform the surface. This representation lends itself well to modelling organic and artistic shapes. Providers of surface modellers include Rapidform, Geomagic, Rhino 3D, Maya, T Splines etc.
- Solid CAD models: From an engineering/manufacturing perspective, the ultimate representation of a digitised shape is the editable, parametric CAD model. In CAD, the sphere is described by parametric features which are easily edited by changing a value (e.g., centre point and radius).
These CAD models describe not simply the envelope or shape of the object, but CAD models also embody the "design intent" (i.e., critical features and their relationship to other features). An example of design intent not evident in the shape alone might be a brake drum's lug bolts, which must be concentric with the hole in the centre of the drum. This knowledge would drive the sequence and method of creating the CAD model; a designer with an awareness of this relationship would not design the lug bolts referenced to the outside diameter, but instead, to the center. A modeler creating a CAD model will want to include both Shape and design intent in the complete CAD model. Vendors offer different approaches to getting to the parametric CAD model. Some export the NURBS surfaces and leave it to the CAD designer to complete the model in CAD (e.g., Geomagic, Imageware, Rhino 3D). Others use the scan data to create an editable and verifiable feature based model that is imported into CAD with full feature tree intact, yielding a complete, native CAD model, capturing both shape and design intent (e.g. Geomagic, Rapidform). Still other CAD applications are robust enough to manipulate limited points or polygon models within the CAD environment (e.g., CATIA, AutoCAD, Revit). From a set of 2D slices CT, industrial CT, MRI, or Micro-CT scanners do not produce point clouds but a set of 2D slices (each termed a "tomogram") which are then 'stacked together' to produce a 3D representation. There are several ways to do this depending on the output required: - Volume rendering: Different parts of an object usually have different threshold values or greyscale densities. From this, a 3-dimensional model can be constructed and displayed on screen. Multiple models can be constructed from various thresholds, allowing different colours to represent each component of the object. Volume rendering is usually only used for visualisation of the scanned object. - Image segmentation: Where different structures have similar threshold/greyscale values, it can become impossible to separate them simply by adjusting volume rendering parameters. The solution is called segmentation, a manual or automatic procedure that can remove the unwanted structures from the image. Image segmentation software usually allows export of the segmented structures in CAD or STL format for further manipulation. - Image-based meshing: When using 3D image data for computational analysis (e.g. CFD and FEA), simply segmenting the data and meshing from CAD can become time consuming, and virtually intractable for the complex topologies typical of image data. The solution is called image-based meshing, an automated process of generating an accurate and realistic geometrical description of the scan data. From laser scans Laser scanning describes the general method to sample or scan a surface using laser technology. Several areas of application exist that mainly differ in the power of the lasers that are used, and in the results of the scanning process. Low laser power is used when the scanned surface doesn't have to be influenced, e.g. when it only has to be digitised. Confocal or 3D laser scanning are methods to get information about the scanned surface. Another low-power application uses structured light projection systems for solar cell flatness metrology, enabling stress calculation throughout in excess of 2000 wafers per hour. The laser power used for laser scanning equipment in industrial applications is typically less than 1W. 
From laser scans
Laser scanning describes the general method to sample or scan a surface using laser technology. Several areas of application exist that mainly differ in the power of the lasers that are used, and in the results of the scanning process. Low laser power is used when the scanned surface doesn't have to be influenced, e.g. when it only has to be digitised. Confocal or 3D laser scanning are methods to get information about the scanned surface. Another low-power application uses structured light projection systems for solar cell flatness metrology, enabling stress calculation at throughput in excess of 2000 wafers per hour. The laser power used for laser scanning equipment in industrial applications is typically less than 1 W; the power level is usually on the order of 200 mW or less, but sometimes more.
Construction industry and civil engineering
- Robotic control: e.g. a laser scanner may function as the "eye" of a robot.
- As-built drawings of bridges, industrial plants, and monuments
- Documentation of historical sites
- Site modelling and layout
- Quality control
- Quantity surveys
- Payload monitoring
- Freeway redesign
- Establishing a benchmark of the pre-existing shape/state in order to detect structural changes resulting from exposure to extreme loadings such as earthquake, vessel/truck impact or fire.
- Creating GIS (geographic information system) maps and geomatics.
- Subsurface laser scanning in mines and karst voids.
- Forensic documentation
3D scanning also offers benefits such as:
- Increasing accuracy when working with complex parts and shapes,
- Coordinating product design using parts from multiple sources,
- Updating old CAD scans with those from more current technology,
- Replacing missing or older parts,
- Creating cost savings by allowing as-built design services, for example in automotive manufacturing plants,
- "Bringing the plant to the engineers" with web-shared scans, and
- Saving travel costs.
3D scanners are used by the entertainment industry to create digital 3D models for movies, video games and leisure purposes. They are heavily utilised in virtual cinematography. In cases where a real-world equivalent of a model exists, it is much faster to scan the real-world object than to manually create a model using 3D modeling software. Frequently, artists sculpt physical models of what they want and scan them into digital form rather than directly creating digital models on a computer. 3D scanners are also evolving to use cameras to represent 3D objects in an accurate manner. Since 2010, companies have emerged that create 3D portraits of people (3D figurines or 3D selfies).
3D laser scanning is used by law enforcement agencies around the world. 3D models are used for on-site documentation of:
- Crime scenes
- Bullet trajectories
- Bloodstain pattern analysis
- Accident reconstruction
- Plane crashes, and more
Reverse engineering of a mechanical component requires a precise digital model of the objects to be reproduced. Rather than a set of points, a precise digital model can be represented by a polygon mesh, a set of flat or curved NURBS surfaces, or, ideally for mechanical components, a CAD solid model. A 3D scanner can be used to digitise free-form or gradually changing shaped components as well as prismatic geometries, whereas a coordinate measuring machine is usually used only to determine simple dimensions of a highly prismatic model. These data points are then processed to create a usable digital model, usually using specialized reverse engineering software.
Land or buildings can be scanned into a 3D model, which allows buyers to tour and inspect the property remotely, anywhere, without having to be present at the property. There is already at least one company providing 3D-scanned virtual real estate tours. A typical virtual tour would consist of a dollhouse view, an inside view, as well as a floor plan.
The environment at a place of interest can be captured and converted into a 3D model. This model can then be explored by the public, either through a VR interface or a traditional "2D" interface. This allows the user to explore locations which are inconvenient for travel. There have been many research projects undertaken via the scanning of historical sites and artifacts, both for documentation and analysis purposes.
The combined use of 3D scanning and 3D printing technologies allows the replication of real objects without the use of traditional plaster casting techniques, which in many cases can be too invasive to perform on precious or delicate cultural heritage artifacts. In an example of a typical application scenario, a gargoyle model was digitally acquired using a 3D scanner and the produced 3D data was processed using MeshLab. The resulting digital 3D model was fed to a rapid prototyping machine to create a real resin replica of the original object. In 1999, two different research groups started scanning Michelangelo's statues. Stanford University, with a group led by Marc Levoy, used a custom laser triangulation scanner built by Cyberware to scan Michelangelo's statues in Florence, notably the David, the Prigioni and the four statues in The Medici Chapel. The scans produced a data point density of one sample per 0.25 mm, detailed enough to see Michelangelo's chisel marks. These detailed scans produced a large amount of data (up to 32 gigabytes), and processing the data from the scans took 5 months. In approximately the same period, a research group from IBM, led by H. Rushmeier and F. Bernardini, scanned the Pietà of Florence, acquiring both geometric and colour details. The digital model resulting from the Stanford scanning campaign was used extensively in the statue's subsequent restoration in 2004. In 2002, David Luebke, et al. scanned Thomas Jefferson's Monticello. A commercial time-of-flight laser scanner, the DeltaSphere 3000, was used. The scanner data was later combined with colour data from digital photographs to create the Virtual Monticello and the Jefferson's Cabinet exhibits in the New Orleans Museum of Art in 2003. The Virtual Monticello exhibit simulated a window looking into Jefferson's Library. The exhibit consisted of a rear projection display on a wall and a pair of stereo glasses for the viewer. The glasses, combined with polarised projectors, provided a 3D effect. Position tracking hardware on the glasses allowed the display to adapt as the viewer moved around, creating the illusion that the display is actually a hole in the wall looking into Jefferson's Library. The Jefferson's Cabinet exhibit was a barrier stereogram (essentially a non-active hologram that appears different from different angles) of Jefferson's Cabinet. In 2003, Subodh Kumar, et al. undertook the 3D scanning of ancient cuneiform tablets. Again, a laser triangulation scanner was used. The tablets were scanned on a regular grid pattern at a resolution of 0.025 mm (0.00098 in). A 2009 CyArk 3D scanning project at Uganda's historic Kasubi Tombs, a UNESCO World Heritage Site, using a Leica HDS 4500, produced detailed architectural models of Muzibu Azaala Mpanga, the main building at the complex and tomb of the Kabakas (Kings) of Uganda. A fire on March 16, 2010, burned down much of the Muzibu Azaala Mpanga structure, and reconstruction work is likely to lean heavily upon the dataset produced by the 3D scan mission.
"Plastico di Roma antica"
In 2005, Gabriele Guidi, et al. scanned the "Plastico di Roma antica", a model of Rome created in the last century. Neither the triangulation method nor the time-of-flight method satisfied the requirements of this project, because the item to be scanned was both large and contained small details. They found, though, that a modulated light scanner was able to provide both the ability to scan an object the size of the model and the accuracy that was needed.
The modulated light scanner was supplemented by a triangulation scanner, which was used to scan some parts of the model. The 3D Encounters Project at the Petrie Museum of Egyptian Archaeology aims to use 3D laser scanning to create a high quality 3D image library of artefacts and enable digital travelling exhibitions of fragile Egyptian artefacts. English Heritage has investigated the use of 3D laser scanning for a wide range of applications to gain archaeological and condition data, and the National Conservation Centre in Liverpool has also produced 3D laser scans on commission, including portable-object and in situ scans of archaeological sites. The Smithsonian Institution has a project called Smithsonian X 3D, notable for the breadth of types of 3D objects it is attempting to scan. These range from small objects such as insects and flowers, to human-sized objects such as Amelia Earhart's flight suit, to room-sized objects such as the gunboat Philadelphia, to historic sites such as Liang Bua in Indonesia. Also of note, the data from these scans is being made available to the public for free and is downloadable in several data formats. 3D scanners are used to capture the 3D shape of a patient in orthotics and dentistry, gradually supplanting the tedious plaster cast. CAD/CAM software is then used to design and manufacture the orthosis, prosthesis or dental implant. Many chairside dental CAD/CAM systems and dental laboratory CAD/CAM systems use 3D scanner technologies to capture the 3D surface of a dental preparation (either in vivo or in vitro), in order to produce a restoration digitally using CAD software and ultimately produce the final restoration using a CAM technology (such as a CNC milling machine, or 3D printer). The chairside systems are designed to facilitate the 3D scanning of a preparation in vivo and produce the restoration (such as a crown, onlay, inlay or veneer).
Quality assurance and industrial metrology
The digitalisation of real-world objects is of vital importance in various application domains. This method is especially applied in industrial quality assurance to measure geometric dimensional accuracy. Industrial processes such as assembly are complex, highly automated and typically based on CAD (Computer Aided Design) data. The problem is that the same degree of automation is also required for quality assurance. It is, for example, a very complex task to assemble a modern car, since it consists of many parts that must fit together at the very end of the production line. The optimal performance of this process is guaranteed by quality assurance systems. In particular, the geometry of the metal parts must be checked in order to ensure that they have the correct dimensions, fit together and ultimately work reliably. Within highly automated processes, the resulting geometric measures are transferred to machines that manufacture the desired objects. Due to mechanical uncertainties and abrasion, the result may differ from its digital nominal. In order to automatically capture and evaluate these deviations, the manufactured part must be digitised as well. For this purpose, 3D scanners are applied to generate point samples from the object's surface, which are finally compared against the nominal data.
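This scan-versus-nominal comparison can be sketched as a nearest-neighbour deviation check. The sketch below is an illustrative outline only, on synthetic data: real inspection tools first align the scan to the CAD model and compare against the true surface rather than a sampled one, and the tolerance and defect size used here are arbitrary assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def deviation_report(nominal_points, measured_points, tolerance):
    """Compare measured scan points against points sampled from the nominal surface.

    Returns per-point deviations (nearest-neighbour distances to the nominal
    samples) and the fraction of measured points lying within the tolerance.
    """
    tree = cKDTree(nominal_points)
    deviations, _ = tree.query(measured_points)   # distance to closest nominal point
    within = float(np.mean(deviations <= tolerance))
    return deviations, within

if __name__ == "__main__":
    # Nominal geometry: points sampled on the plane z = 0, standing in for a CAD face.
    xs, ys = np.meshgrid(np.linspace(0, 100, 200), np.linspace(0, 50, 100))
    nominal = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(xs.size)])

    # "Measured" part: the same face with measurement noise and a 0.4 mm bulge (a defect).
    measured = nominal + np.random.normal(scale=0.02, size=nominal.shape)
    bulge = (measured[:, 0] - 50) ** 2 + (measured[:, 1] - 25) ** 2 < 10 ** 2
    measured[bulge, 2] += 0.4

    dev, ok = deviation_report(nominal, measured, tolerance=0.1)  # 0.1 mm, assumed
    print(f"max deviation: {dev.max():.3f} mm; within tolerance: {100 * ok:.1f}%")
```

Colour-coded deviation maps of this kind are what the graphic comparison charts mentioned below present at the full-object level.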
The process of comparing 3D data against a CAD model is referred to as CAD-Compare, and can be a useful technique for applications such as determining wear patterns on moulds and tooling, determining accuracy of the final build, analysing gap and flush, or analysing highly complex sculpted surfaces. At present, laser triangulation scanners, structured light and contact scanning are the predominant technologies employed for industrial purposes, with contact scanning remaining the slowest, but overall most accurate, option. Nevertheless, 3D scanning technology offers distinct advantages compared to traditional touch probe measurements. White-light or laser scanners accurately digitise objects all around, capturing fine details and freeform surfaces without reference points or spray. The entire surface is covered at record speed without the risk of damaging the part. Graphic comparison charts illustrate geometric deviations at the full-object level, providing deeper insights into potential causes.
Related topics:
- 3D printing
- 3D reconstruction
- 3D computer graphics software
- Angle-sensitive pixel
- Depth map
- Epipolar geometry
- Full body scanner
- Light-field camera
- Range imaging
- Structured-light 3D scanner
"Fast in-line surface topography metrology enabling stress calculation for solar cell manufacturing allowing throughput in excess of 2000 wafers per hour". Meas. Sci. Technol. 19 (2): 025302. doi:10.1088/0957-0233/19/2/025302. - Larsson, Sören; Kjellander, J.A.P. (2006). "Motion control and data capturing for laser scanning with an industrial robot". Robotics and Autonomous Systems. 54 (6): 453–460. doi:10.1016/j.robot.2006.02.002. - Landmark detection by a rotary laser scanner for autonomous robot navigation in sewer pipes, Matthias Dorn et al., Proceedings of the ICMIT 2003, the second International Conference on Mechatronics and Information Technology, pp. 600- 604, Jecheon, Korea, Dec. 2003 - Bewley, A.; et al. "Real-time volume estimation of a dragline payload". IEEE International Conference on Robotics and Automation. 2011: 1571–1576. - Murphy, Liam. "Case Study: Old Mine Workings". Subsurface Laser Scanning Case Studies. Liam Murphy. Retrieved 11 January 2012. - Lamine Mahdjoubi; Cletus Moobela; Richard Laing (December 2013). "Providing real-estate services through the integration of 3D laser scanning and building information modelling". Science Direct. 64 (9). - "Matterport Surpasses 70 Million Global Visits and Celebrates Explosive Growth of 3D and Virtual Reality Spaces". Market Watch. Market Watch. Retrieved 19 December 2016. - "The VR Glossary". Retrieved 26 April 2017. - Daniel A. Guttentag (October 2010). "Virtual reality: Applications and implications for tourism". Science Direct. 31 (5). - Paolo Cignoni; Roberto Scopigno (June 2008). "Sampled 3D models for CH applications: A viable and enabling new medium or just a technological exercise?" (PDF). ACM Journal on Computing and Cultural Heritage. 1 (1): 1–23. doi:10.1145/1367080.1367082. - Scopigno, R.; Cignoni, P.; Pietroni, N.; Callieri, M.; Dellepiane, M. (November 2015). "Digital Fabrication Techniques for Cultural Heritage: A Survey". Computer Graphics Forum. doi:10.1111/cgf.12781. - Marc Levoy; Kari Pulli; Brian Curless; Szymon Rusinkiewicz; David Koller; Lucas Pereira; Matt Ginzton; Sean Anderson; James Davis; Jeremy Ginsberg; Jonathan Shade; Duane Fulk (2000). "The Digital Michelangelo Project: 3D Scanning of Large Statues" (PDF). Proceedings of the 27th annual conference on Computer graphics and interactive techniques. pp. 131–144. - Roberto Scopigno; Susanna Bracci; Falletti, Franca; Mauro Matteini (2004). Exploring David. Diagnostic Tests and State of Conservation. Gruppo Editoriale Giunti. ISBN 88-09-03325-6. - David Luebke; Christopher Lutz; Rui Wang; Cliff Woolley (2002). "Scanning Monticello". - Subodh Kumar; Dean Snyder; Donald Duncan; Jonathan Cohen; Jerry Cooper (6–10 October 2003). "Digital Preservation of Ancient Cuneiform Tablets Using 3D-Scanning". 4th International Conference on 3-D Digital Imaging and Modeling : 3DIM 2003, Banff, Alberta, Canada. Los Alamitos, CA: IEEE Computer Society. pp. 326–333. - Scott Cedarleaf (2010). "Royal Kasubi Tombs Destroyed in Fire". CyArk Blog. - Gabriele Guidi; Laura Micoli; Michele Russo; Bernard Frischer; Monica De Simone; Alessandro Spinetti; Luca Carosso (13–16 June 2005). "3D digitisation of a large model of imperial Rome". 5th international conference on 3-D digital imaging and modeling : 3DIM 2005, Ottawa, Ontario, Canada. Los Alamitos, CA: IEEE Computer Society. pp. 565–572. ISBN 0-7695-2327-7. - Payne, Emma Marie (2012). "Imaging Techniques in Conservation". Journal of Conservation and Museum Studies. Ubiquity Press. 10: 17–29. doi:10.5334/jcms.1021201. 
- Christian Teutsch (2007). Model-based Analysis and Evaluation of Point Sets from Optical 3D Laser Scanners (PhD thesis). - "3D scanning technologies.". Retrieved 2016-09-15.
As part of the Penal Laws which were enforced in 1695, the native Irish were forbidden to educate their children or to receive an education. Daniel Boyd, a Scottish Presbyterian, took up residence in Carrickfinn in the last years of the seventeenth century. There was always good will between the new families and their neighbours. It is possible that Carrickfinn, being on the far north-west coast of Ireland, would not be harshly affected, and together with their new neighbours, they may have had some sort of educational system. When these laws were relaxed toward the end of the eighteenth century the people were too poor to build a schoolhouse, so the teachers held classes in a barn or bothóg (a sod-built house). When the weather was good, the teachers held their classes in the open air, usually in a sheltered place beside a hedge. These schools were called "Hedge Schools". The lessons were given by travelling teachers and their usual payment was 2d per week from each scholar, but quite often they were paid in kind. In 1782 Templecrone Parish Vestry gave £15 to build three schools. One of these schools was built in Carnboy and is still in use as a barn. In 1835 Thomas Boyd was recorded as being the master at this school. He received £5 10s 9d from the Robertson Fund, a gratuity of £2 from the Parish and payments of 2s per quarter from each child. He taught reading, writing and arithmetic and had 34 pupils in 1835. With the population of Carnboy only 24 in 1841, a large proportion of the pupils were neighbouring Catholics and children of the local coastguards. The teachers in the neighbouring Ranafast and Belcruit Hedge Schools were then paid between one and two shillings per quarter by each child. Since these schools had an average of twenty pupils, the teacher's pay averaged £1 10s per quarter. Belcruit Hedge School was only opened during the winter. There is a ruin of a long thatched building in Carrickfinn which, according to local history, housed a school. This building is known as Coyles. Daniel Coyle is recorded as a tenant on this property until 1823. From 1822 a detachment of around thirty officers of the newly formed Coastguard Service were stationed on this property. Jack O'Donnell, born in Meenaleck around 1830, became a hedge school teacher in Carrickfinn sometime in the 1850s. He may have taught at Coyle's or at another site. Jack received his education from the landlord Mr Stewart of Horn Head, where his mother was a servant. Jack was later to become a licensee of two taverns in Meenaleck where he held a monthly fair. This village was then known as Jackstown. In 1968 musician Leo Brennan bought one of these taverns and started a singing lounge. His family are now known worldwide as Clannad and Enya. There were several societies during this period using the Gaelic Bible as a vehicle to educate Catholic children and, it was hoped, convert them to Protestantism. They also educated the local Protestant children. One of these societies was the Island and Coast Society. In April 1840 the acting curate of Templecrone Parish wrote a letter to the Island and Coast Society thanking them for the appointment of a Mr. Foley to the Guidore School based in Carrickfinn. It was called Guidore because it was located on the property of the Guidore Coastguard Station. The letter stated the conditions the new teacher was being afforded and that he would stay with a "decent respectable Protestant widow who lived half ways between Guidore and Rutland", possibly in Mullaghderg or Kincasslagh.
Cathal Ó Cearbhalláin taught in Carrickfinn from the 1840s, and he may have used the old watch house. Master Ó Cearbhalláin, or O'Carolan, a Catholic and a native Gaelic speaker, was employed by the Society. O'Carolan was at the seminary with Fr. Dan O'Donnell, who was Parish Priest in Kincasslagh until his death in 1879. O'Carolan dropped out and went into teaching, ending up in Carrickfinn, but he was recognized by Fr. Dan. Charles Boyle, born 1819 in Belcruit next to Sally Ned's pub, was a pupil of Charles O'Carolan. Cathal was married to Ann McDonnell, the only daughter of the then famous poet and schoolteacher Aodh Mac Domhnaill. Aodh was born in Co. Meath in 1802. A descendant of Mac Domhnaill chieftains of Co. Antrim, Aodh was one of the earliest Catholic-born schoolteachers employed by a proselytising society. He was at first employed as a schoolteacher by the London Hibernian Society and was later an inspector with the Presbyterian Home Mission in the Glens of Antrim. After a row between himself and Fr. Luke Walsh, a Parish Priest in North Antrim, he lost his position. Robert MacAdam, a respected antiquarian and Gaelic language revivalist, employed Aodh as a collector of folklore and as his chief assistant in compiling printed versions of some of Ireland's ancient manuscripts. In 1856 Aodh came to live with his daughter in Kincasslagh. He taught in Carrickfinn and was also an inspector for the Island and Coast Society. In 1863, while in Carrickfinn, he wrote a manuscript; this manuscript is now kept in Maynooth College. On a visit back to see his relations, he fell ill, and died in the workhouse in Cootehill, Co. Cavan in March 1867. His former employer Robert MacAdam paid for his funeral to Myrath Cemetery. His daughter Ann and her husband Cathal were subsequently buried in the same plot. While the greater number of pupils at this school were Catholics who received an education in their native tongue, it must be emphasized that Gaelic was the daily language of most members of the local Church of Ireland community. One of these pupils, James Boyd from Carnboy, educated there in 1853, spoke very little English. It is possible that Cathal Ó Cearbhalláin continued to teach up until the death of his wife and daughter in 1877. While we have no record of his birth, his wife was born before her mother died in 1836. In 1850 the Society gave a grant of £50 for the erection of a dual-purpose building. The former coastguard watch house was renovated and was used as a church and school. Cork native Rev. Thomas Wolfe, a superintendent with the Society, became the first Rector of the Church of Ireland community in Carrickfinn during the spring of 1858. Rev Wolfe resided with the Alcorn family in the former Coastguard Station. He succumbed to the fever on December 22nd that same year. He was just thirty-five years old and was buried near the ancient Cross of Columcille (this high cross had fallen forty years earlier) on Christmas Eve at Myrath cemetery near Falcarragh, a distance of some twenty miles. In 1868 the dual-purpose building became a Church of Ireland Chapel of Ease. With the aid of the Col. Robertson Fund a new schoolhouse was opened in 1868. Richard Given, a native of Ardara, became the teacher at this schoolhouse in 1882. My grandfather Jimmy Duffy, born in 1876, was a pupil. The pupils were now given Bible instruction in English, the language of the Established Church.
When my grandfather was old enough to make his confirmation, he and his Catholic classmates were taught catechism by an elderly neighbour, Nablá Ní Bhaoill, born around 1830, herself a former pupil of the Island and Coast Society. Many Carrickfinn natives received a good education from the Hedge, Robertson and Society schools. Duncan Boyle, born in 1822, became a captain of ocean-going sailing ships plying their trade between Canada and England. Charles and Patrick Boyle worked in the engineering section of Scotland's rail network. James P Sinnott, born in Carrickfinn in 1848, became a Monsignor in America; his brother Joseph owned the Moore and Sinnott distillery, once the largest rye distillery in America. Charles Boyle, born 1819 in Belcruit and a pupil of Charles O'Carolan, worked as a railway engineer on the new frontier in the US. My grandfather Jimmy Duffy was the captain of one of the first motor fishing boats on this coast from 1914. Together with his crew, most of whom were former pupils of Richard Given, he navigated the coasts of Ireland and Britain using primitive equipment such as a compass, sextant and charts. It was essential that they had knowledge of English and mathematics to succeed. This school was funded by the Erasmus Smith Fund from 1902 until its closure in 1937. It was the penultimate Erasmus Smith school in Donegal to close down. The school's emphasis was teaching English, Greek, Latin and Hebrew. In an examination carried out by the fund in 1935, Richard Given was still teaching and achieving high results at the advanced age of 81 years. Annie Alcorn (Tammy) from Carrickfinn was a monitor sometime during Master Given's career. The National Education System finally came to Carrickfinn in 1898. This system, which began in 1831, was based on the Robertson model. The building which housed the school was recorded in the Ordnance Survey of 1835 and was inhabited by Edward Sweeney at the time of the Griffiths Valuation of 1858. The first teacher was Mrs Bridget Diver, known locally as Biddy Durín. Biddy, née Durning, was born in Bunbeg in the neighbouring Parish of Gweedore, and upon her marriage to John Diver (Tharlaigh Mhicí) she taught in the National School on his native Gola Island. John died early in life, as did their two young daughters. Broken-hearted, Biddy left for a new life in Canada. When she arrived she quickly found that life there didn't suit her, so she returned home to Bunbeg. She got a post in the new national school which opened on May 5th 1898. She rowed her currach across the narrow but hazardous estuary that separates Carrickfinn and Gweedore. When the days were short and inclement she stayed in Carrickfinn, where she is recorded in the 1901 census in the Duffy household. She was a well qualified teacher and many children came from surrounding areas to avail of her tuition. Biddy taught only in the medium of English. One of her sayings was "who owns these rags" while holding a pupil's coat up with a stick. There were eleven boys and seven girls recorded on the first roll call on this historic day. The oldest pupil was Patrick Doherty (Padraig Airt), a fourteen-year-old from the neighbouring island of Inishinny, while five-year-old John Boyle (Michael) of Carrickfinn was the youngest. Maggie Forker, who died in 2007 aged 103 years, was the last surviving pupil of Biddy Durnín's era.
There are no records of the teacher's rate of pay, but it would have been less than the £17 13s 8d per quarter that the principal of the two-teacher Annagry National School received. On July 19th 1904 an indenture was signed by Victor George Henry Francis, Marquis of Conyngham, of Slane Castle; Connell Gallagher, tenant; and the Most Rev Patrick O'Donnell, Bishop of Raphoe. The contract was the start of a process which would see Carrickfinn Island getting its first purpose-built National School. The school was built on a site given by Connell Gallagher, a tenant of the local Conyngham Estate, and was supervised by Rev James Walker, Parish Priest of Lower Templecrone, of which Carrickfinn Island was a part. The cost of the building was £228 stg; a grant of £152 stg was given by Westminster to the Commissioners of Public Works, while the remainder was raised within the Parish. This was a very progressive year for the parish, with contracts being signed for the erection of a new National School in Annagry and of Annagry Carpet Factory, the latter by the Congested Districts Board. Hughie McCole from the Hills built the new school and it was opened on March 28th 1906. During World War 1, the teachers received a war bonus of £4 4s per annum because of rising inflation. When Mrs Diver retired in 1915, a young teacher who had just graduated temporarily filled the position. His name was Jimmy Greene from Ranafast, who was later to become the most famous Gaelic novelist of the twentieth century under the pen name Máire or Seamus Ó Grianna. Jimmy Fheilimidh, as he was locally known, travelled to school by currach. When the sea was rough he'd stay in the home of Lanty Gallagher, a serving gunner in the Royal Navy. Lanty, who joined the Navy in the 1890s, was home on leave during Jimmy's teaching term in Carrickfinn. Lanty, a past pupil of Richard Given, ended his naval career in 1919, having seen action as a gunner aboard the flagship HMS Lion during the Battle of Jutland. Many of the pupils recorded on the Carrickfinn Island N.S. rolls were attending while in service with neighbouring families. They came from Gweedore, Ranafast, Braade, Drumnacart and Innishinny Island, as well as some from other parts of Ireland and Scotland while visiting relatives. At the old school house, slates and chalk were used to write on, but with inkwells being part of the desks in the new school, the pupils started to use copybooks. There were many poems taught at this school over the years, with some being later recited by past pupils, poems such as "Casabianca" by Felicia Hemans and "Wee Hughie" by Elizabeth Shane. Elizabeth Shane was the pen name of Gertrude Hind, a regular visitor to Carrickfinn in the 1920s. In 1926 it became compulsory for every child under the age of 14 to attend National School every day. Mothers who in the past kept the most robust members of the family off school while their husbands were either at the harvest in Scotland or at sea now faced the full rigours of the law. It was compulsory that every pupil would have with them two sods of turf from home to keep the school fire going; failure to do so resulted in the offender getting a couple of slaps with a willow cane or "sally" rod. With the establishment of the Irish Free State there were some changes to the educational system. It was customary to send the children to herd cattle after school. This was done so that the unfenced areas could be grazed without destroying their own or the neighbours' crops.
It was common for these children not to have their homework done, which resulted in more slaps. The pupils got a pandy (a tin can made by a travelling tinker) of cocoa and a slice of loaf bread and butter. Two of the older children in the school were sent to Dunleavy's shop in Calhame, a five-mile round trip, to get the bread. On the way back hunger pangs would overcome them, so they would carefully open the heel of the loaf and eat the inside. When they landed back with the provisions, an inquisition would begin, which would result in... even more slaps! The Irish language was promoted in Co. Donegal by Crann Eithne, an organisation set up by the Bishop of Raphoe in 1909. The participating teachers availed of a bilingual grant in the region of 17s per year. In 1927 Gaelic became the teaching medium of the education system. The principal of the Carrickfinn Robertson School, Richard Given, died on February 2nd 1937 at the advanced age of 84 years. This marked the end of Protestant education in Carrickfinn. Without a teacher, it was decided to close the school and to amalgamate with the National School. Before the end of the school year of 1937, eight pupils from the local Church of Ireland community attended Carrickfinn Island National School for the first time. They were sisters Maggie and Lily Boyd (Mary Jane) and Susan Boyd (Johnny Richard Óg) of Carnboy, and May Boyd (Christy) of Carrickfinn. Also changing school were two sets of brothers, George and John Boyd (Johnny Richard Óg) and Joe and John Boyd (Mary Jane), all of Carnboy. May Boyd, later McElhinney, now resides in Dunfanaghy and is the last surviving pupil of Master Given. The school's religious instruction was now controlled by the Catholic Church; the Protestant pupils would be let out to the playground while the less fortunate Catholic children had to suffer on. It was decided by the Diocese of Raphoe and the Department of Education that the smaller schools in the Parish of Annagry should close and amalgamate with the larger school in the village close to the Parish Church. On July 1st 1968 ten girls and five boys attended Annagry National School for the first time. The school fell into disrepair after its closing. It was later renovated and is now a holiday cottage.
Air Aodh Mac Domhnaill
Gen ach bhfuil úr chórn le na luir a choimead go buan
No cloch na sgeala do choimhnach dhuin gach huair
Achd teas shuil na ndeor, sé shaibhleas se é
A ainm ulmhaitheas ó dhath uain na cré
Caranacht na ndeor mar bhuil sé na luidhe
Le gaoil spiridibh silteadh beith ro chaoimh
Ní thuitfuidh na tuilte gan tairbhe no mithórbhuil uabh
Achd foilseaidh dhuin an ait a bhfaisfaidh snuadh na huadh
Sud an ait a gcafaidh an tsamar a snuadh ghlas le sgeimh
Agus aibid úr an earaidh beith go bun do réir
Sud an ait a mbeith an dearg rós ag fáilte gnuis an lae
Agus drucht oidhche silt de dhealgibh gear
Ansin ní fhasan luibh neimhnach go lá an luan
No ni bheith athair neimhe a bhfogas do na leabhadh ro shuan
Dheandaid faire ar huairibh air a luir le brón
Agus banfaid an fomhnan frithe dhe na huadh gach nóin
Ansa t-sean Cill Mhoira shuamneas a chean go crom
Agus codlain go luidhchan ameasg na marbh go trom
(Written by his son-in-law Cathal after Aodh's death.)
Aodh Mac Domhnaill wrote two manuscripts, one in Belfast in 1858 and another, more than likely in Carrickfinn, in 1863 (this one was owned by the late Cardinal Ó Fiaich).
Additional information on teachers:
*Sean McColgan got a post in Dublin in September 1926.
*Annie Breslin transferred to Knockastollar N.S. in April 1927.
*Norah Boyle came from Inishirrer N.S. and spent 5 years in Carrickfinn.
*Kitty Bonner was in Carrickfinn when Master Given died in 1937.
*Ms McGarvey Boyle said she taught here from 1953/54 to 1957. Annie Carr from Gortahork taught there before her, and she thinks Máire McGinley replaced her.
*There were two teachers called Frances Boyle, one from Arranmore and the other, known as Nuala Boyle, from Meendernasloe; one of them appeared in the Annagry notes (Derry People) in March 1952.
©Jimmy Duffy November 2015
Over the past ten years, corneal transplantation surgical techniques have undergone revolutionary changes1,2. Since its inception, traditional full thickness corneal transplantation has been the treatment to restore sight in those limited by corneal disease. Some disadvantages to this approach include a high degree of post-operative astigmatism, lack of predictable refractive outcome, and disturbance to the ocular surface. The development of Descemet's stripping endothelial keratoplasty (DSEK), transplanting only the posterior corneal stroma, Descemet's membrane, and endothelium, has dramatically changed treatment of corneal endothelial disease. DSEK is performed through a smaller incision; this technique avoids 'open sky' surgery with its risk of hemorrhage or expulsion, decreases the incidence of postoperative wound dehiscence, reduces unpredictable refractive outcomes, and may decrease the rate of transplant rejection3-6. Initially, cornea donor posterior lamellar dissection for DSEK was performed manually1 resulting in variable graft thickness and damage to the delicate corneal endothelial tissue during tissue processing. Automated lamellar dissection (Descemet's stripping automated endothelial keratoplasty, DSAEK) was developed to address these issues. Automated dissection utilizes the same technology as LASIK corneal flap creation with a mechanical microkeratome blade that helps to create uniform and thin tissue grafts for DSAEK surgery with minimal corneal endothelial cell loss in tissue processing. Eye banks have been providing full thickness corneas for surgical transplantation for many years. In 2006, eye banks began to develop methodologies for supplying precut corneal tissue for endothelial keratoplasty. With the input of corneal surgeons, eye banks have developed thorough protocols to safely and effectively prepare posterior lamellar tissue for DSAEK surgery. This can be performed preoperatively at the eye bank. Research shows no significant difference in terms of the quality of the tissue7 or patient outcomes8,9 using eye bank precut tissue versus surgeon-prepared tissue for DSAEK surgery. For most corneal surgeons, the availability of precut DSAEK corneal tissue saves time and money10, and reduces the stress of performing the donor corneal dissection in the operating room. In part because of the ability of the eye banks to provide high quality posterior lamellar corneal in a timely manner, DSAEK has become the standard of care for surgical management of corneal endothelial disease. The procedure that we are describing is the preparation of the posterior lamellar cornea at the eye bank for transplantation in DSAEK surgery (Figure 1). 27 Related JoVE Articles! Pseudofracture: An Acute Peripheral Tissue Trauma Model Institutions: University of Pittsburgh, University of Aachen Medical Center. 
Following trauma there is an early hyper-reactive inflammatory response that can lead to multiple organ dysfunction and high mortality in trauma patients; this response is often accompanied by a delayed immunosuppression that adds the clinical complications of infection and can also increase mortality.1-9 Many studies have begun to assess these changes in the reactivity of the immune system following trauma.10-15 Immunologic studies are greatly supported through the wide variety of transgenic and knockout mice available for in vivo modeling; these strains aid in detailed investigations to assess the molecular pathways involved in the immunologic responses.16-21 The challenge in experimental murine trauma modeling is long term investigation, as fracture fixation techniques in mice, can be complex and not easily reproducible.22-30 This pseudofracture model, an easily reproduced trauma model, overcomes these difficulties by immunologically mimicking an extremity fracture environment, while allowing freedom of movement in the animals and long term survival without the continual, prolonged use of anaesthesia. The intent is to recreate the features of long bone fracture; injured muscle and soft tissue are exposed to damaged bone and bone marrow without breaking the native bone. The pseudofracture model consists of two parts: a bilateral muscle crush injury to the hindlimbs, followed by injection of a bone solution into these injured muscles. The bone solution is prepared by harvesting the long bones from both hindlimbs of an age- and weight-matched syngeneic donor. These bones are then crushed and resuspended in phosphate buffered saline to create the bone solution. Bilateral femur fracture is a commonly used and well-established model of extremity trauma, and was the comparative model during the development of the pseudofracture model. Among the variety of available fracture models, we chose to use a closed method of fracture with soft tissue injury as our comparison to the pseudofracture, as we wanted a sterile yet proportionally severe peripheral tissue trauma model. 31 Hemorrhagic shock is a common finding in the setting of severe trauma, and the global hypoperfusion adds a very relevant element to a trauma model. 32-36 The pseudofracture model can be easily combined with a hemorrhagic shock model for a multiple trauma model of high severity. 37 Medicine, Issue 50, Trauma, musculoskeletal, mouse, extremity, inflammation, immunosuppression, immune response. Thermal Ablation for the Treatment of Abdominal Tumors Institutions: University of Wisconsin-Madison, University of Wisconsin-Madison. Percutaneous thermal ablation is an emerging treatment option for many tumors of the abdomen not amenable to conventional treatments. During a thermal ablation procedure, a thin applicator is guided into the target tumor under imaging guidance. Energy is then applied to the tissue until temperatures rise to cytotoxic levels (50-60 °C). Various energy sources are available to heat biological tissues, including radiofrequency (RF) electrical current, microwaves, laser light and ultrasonic waves. Of these, RF and microwave ablation are most commonly used worldwide. During RF ablation, alternating electrical current (~500 kHz) produces resistive heating around the interstitial electrode. Skin surface electrodes (ground pads) are used to complete the electrical circuit. RF ablation has been in use for nearly 20 years, with good results for local tumor control, extended survival and low complication rates1,2 . 
Recent studies suggest RF ablation may be a first-line treatment option for small hepatocellular carcinoma and renal-cell carcinoma3-5 . However, RF heating is hampered by local blood flow and high electrical impedance tissues (eg, lung, bone, desiccated or charred tissue)6,7 . Microwaves may alleviate some of these problems by producing faster, volumetric heating8-10 . To create larger or conformal ablations, multiple microwave antennas can be used simultaneously while RF electrodes require sequential operation, which limits their efficiency. Early experiences with microwave systems suggest efficacy and safety similar to, or better than RF devices11-13 Alternatively, cryoablation freezes the target tissues to lethal levels (-20 to -40 °C). Percutaneous cryoablation has been shown to be effective against RCC and many metastatic tumors, particularly colorectal cancer, in the liver14-16 . Cryoablation may also be associated with less post-procedure pain and faster recovery for some indications17 . Cryoablation is often contraindicated for primary liver cancer due to underlying coagulopathy and associated bleeding risks frequently seen in cirrhotic patients. In addition, sudden release of tumor cellular contents when the frozen tissue thaws can lead to a potentially serious condition known as cryoshock 16 Thermal tumor ablation can be performed at open surgery, laparoscopy or using a percutaneous approach. When performed percutaneously, the ablation procedure relies on imaging for diagnosis, planning, applicator guidance, treatment monitoring and follow-up. Ultrasound is the most popular modality for guidance and treatment monitoring worldwide, but computed tomography (CT) and magnetic resonance imaging (MRI) are commonly used as well. Contrast-enhanced CT or MRI are typically employed for diagnosis and follow-up imaging. Medicine, Issue 49, Thermal ablation, interventional oncology, image-guided therapy, radiology, cancer Microvascular Decompression: Salient Surgical Principles and Technical Nuances Institutions: Vanderbilt University Medical Center, Vanderbilt University Medical Center. Trigeminal neuralgia is a disorder associated with severe episodes of lancinating pain in the distribution of the trigeminal nerve. Previous reports indicate that 80-90% of cases are related to compression of the trigeminal nerve by an adjacent vessel. The majority of patients with trigeminal neuralgia eventually require surgical management in order to achieve remission of symptoms. Surgical options for management include ablative procedures (e.g., radiosurgery, percutaneous radiofrequency lesioning, balloon compression, glycerol rhizolysis, etc.) and microvascular decompression. Ablative procedures fail to address the root cause of the disorder and are less effective at preventing recurrence of symptoms over the long term than microvascular decompression. However, microvascular decompression is inherently more invasive than ablative procedures and is associated with increased surgical risks. Previous studies have demonstrated a correlation between surgeon experience and patient outcome in microvascular decompression. In this series of 59 patients operated on by two neurosurgeons (JSN and PEK) since 2006, 93% of patients demonstrated substantial improvement in their trigeminal neuralgia following the procedure—with follow-up ranging from 6 weeks to 2 years. Moreover, 41 of 66 patients (approximately 64%) have been entirely pain-free following the operation. 
In this publication, video format is utilized to review the microsurgical pathology of this disorder. Steps of the operative procedure are reviewed and salient principles and technical nuances useful in minimizing complications and maximizing efficacy are discussed. Medicine, Issue 53, microvascular, decompression, trigeminal, neuralgia, operation, video Rapid and Low-cost Prototyping of Medical Devices Using 3D Printed Molds for Liquid Injection Molding Institutions: University of California, San Francisco, University of California, San Francisco, University of Southern California. Biologically inert elastomers such as silicone are favorable materials for medical device fabrication, but forming and curing these elastomers using traditional liquid injection molding processes can be an expensive process due to tooling and equipment costs. As a result, it has traditionally been impractical to use liquid injection molding for low-cost, rapid prototyping applications. We have devised a method for rapid and low-cost production of liquid elastomer injection molded devices that utilizes fused deposition modeling 3D printers for mold design and a modified desiccator as an injection system. Low costs and rapid turnaround time in this technique lower the barrier to iteratively designing and prototyping complex elastomer devices. Furthermore, CAD models developed in this process can be later adapted for metal mold tooling design, enabling an easy transition to a traditional injection molding process. We have used this technique to manufacture intravaginal probes involving complex geometries, as well as overmolding over metal parts, using tools commonly available within an academic research laboratory. However, this technique can be easily adapted to create liquid injection molded devices for many other applications. Bioengineering, Issue 88, liquid injection molding, reaction injection molding, molds, 3D printing, fused deposition modeling, rapid prototyping, medical devices, low cost, low volume, rapid turnaround time. An Affordable HIV-1 Drug Resistance Monitoring Method for Resource Limited Settings Institutions: University of KwaZulu-Natal, Durban, South Africa, Jembi Health Systems, University of Amsterdam, Stanford Medical School. HIV-1 drug resistance has the potential to seriously compromise the effectiveness and impact of antiretroviral therapy (ART). As ART programs in sub-Saharan Africa continue to expand, individuals on ART should be closely monitored for the emergence of drug resistance. Surveillance of transmitted drug resistance to track transmission of viral strains already resistant to ART is also critical. Unfortunately, drug resistance testing is still not readily accessible in resource limited settings, because genotyping is expensive and requires sophisticated laboratory and data management infrastructure. An open access genotypic drug resistance monitoring method to manage individuals and assess transmitted drug resistance is described. The method uses free open source software for the interpretation of drug resistance patterns and the generation of individual patient reports. The genotyping protocol has an amplification rate of greater than 95% for plasma samples with a viral load >1,000 HIV-1 RNA copies/ml. The sensitivity decreases significantly for viral loads <1,000 HIV-1 RNA copies/ml. The method described here was validated against a method of HIV-1 drug resistance testing approved by the United States Food and Drug Administration (FDA), the Viroseq genotyping method. 
Limitations of the method described here include the fact that it is not automated and that it also failed to amplify the circulating recombinant form CRF02_AG from a validation panel of samples, although it amplified subtypes A and B from the same panel. Medicine, Issue 85, Biomedical Technology, HIV-1, HIV Infections, Viremia, Nucleic Acids, genetics, antiretroviral therapy, drug resistance, genotyping, affordable In Situ Neutron Powder Diffraction Using Custom-made Lithium-ion Batteries Institutions: University of Sydney, University of Wollongong, Australian Synchrotron, Australian Nuclear Science and Technology Organisation, University of Wollongong, University of New South Wales. Li-ion batteries are widely used in portable electronic devices and are considered as promising candidates for higher-energy applications such as electric vehicles.1,2 However, many challenges, such as energy density and battery lifetimes, need to be overcome before this particular battery technology can be widely implemented in such applications.3 This research is challenging, and we outline a method to address these challenges using in situ NPD to probe the crystal structure of electrodes undergoing electrochemical cycling (charge/discharge) in a battery. NPD data help determine the underlying structural mechanism responsible for a range of electrode properties, and this information can direct the development of better electrodes and batteries. We briefly review six types of battery designs custom-made for NPD experiments and detail the method to construct the ‘roll-over’ cell that we have successfully used on the high-intensity NPD instrument, WOMBAT, at the Australian Nuclear Science and Technology Organisation (ANSTO). The design considerations and materials used for cell construction are discussed in conjunction with aspects of the actual in situ NPD experiment and initial directions are presented on how to analyze such complex in situ Physics, Issue 93, In operando, structure-property relationships, electrochemical cycling, electrochemical cells, crystallography, battery performance High Efficiency Differentiation of Human Pluripotent Stem Cells to Cardiomyocytes and Characterization by Flow Cytometry Institutions: Medical College of Wisconsin, Stanford University School of Medicine, Medical College of Wisconsin, Hong Kong University, Johns Hopkins University School of Medicine, Medical College of Wisconsin. There is an urgent need to develop approaches for repairing the damaged heart, discovering new therapeutic drugs that do not have toxic effects on the heart, and improving strategies to accurately model heart disease. The potential of exploiting human induced pluripotent stem cell (hiPSC) technology to generate cardiac muscle “in a dish” for these applications continues to generate high enthusiasm. In recent years, the ability to efficiently generate cardiomyogenic cells from human pluripotent stem cells (hPSCs) has greatly improved, offering us new opportunities to model very early stages of human cardiac development not otherwise accessible. 
In contrast to many previous methods, the cardiomyocyte differentiation protocol described here does not require cell aggregation or the addition of Activin A or BMP4 and robustly generates cultures of cells that are highly positive for cardiac troponin I and T (TNNI3, TNNT2), iroquois-class homeodomain protein IRX-4 (IRX4), myosin regulatory light chain 2, ventricular/cardiac muscle isoform (MLC2v) and myosin regulatory light chain 2, atrial isoform (MLC2a) by day 10 across all human embryonic stem cell (hESC) and hiPSC lines tested to date. Cells can be passaged and maintained for more than 90 days in culture. The strategy is technically simple to implement and cost-effective. Characterization of cardiomyocytes derived from pluripotent cells often includes the analysis of reference markers, both at the mRNA and protein level. For protein analysis, flow cytometry is a powerful analytical tool for assessing quality of cells in culture and determining subpopulation homogeneity. However, technical variation in sample preparation can significantly affect quality of flow cytometry data. Thus, standardization of staining protocols should facilitate comparisons among various differentiation strategies. Accordingly, optimized staining protocols for the analysis of IRX4, MLC2v, MLC2a, TNNI3, and TNNT2 by flow cytometry are described. Cellular Biology, Issue 91, human induced pluripotent stem cell, flow cytometry, directed differentiation, cardiomyocyte, IRX4, TNNI3, TNNT2, MCL2v, MLC2a A Murine Model of Myocardial Ischemia-reperfusion Injury through Ligation of the Left Anterior Descending Artery Institutions: The Ohio State University. Acute or chronic myocardial infarction (MI) are cardiovascular events resulting in high morbidity and mortality. Establishing the pathological mechanisms at work during MI and developing effective therapeutic approaches requires methodology to reproducibly simulate the clinical incidence and reflect the pathophysiological changes associated with MI. Here, we describe a surgical method to induce MI in mouse models that can be used for short-term ischemia-reperfusion (I/R) injury as well as permanent ligation. The major advantage of this method is to facilitate location of the left anterior descending artery (LAD) to allow for accurate ligation of this artery to induce ischemia in the left ventricle of the mouse heart. Accurate positioning of the ligature on the LAD increases reproducibility of infarct size and thus produces more reliable results. Greater precision in placement of the ligature will improve the standard surgical approaches to simulate MI in mice, thus reducing the number of experimental animals necessary for statistically relevant studies and improving our understanding of the mechanisms producing cardiac dysfunction following MI. This mouse model of MI is also useful for the preclinical testing of treatments targeting myocardial damage following MI. Medicine, Issue 86, Myocardial Ischemia/Reperfusion, permanent ligation, left anterior descending artery, myocardial infarction, LAD, ligation, Cardiac troponin I Propagation of Homalodisca coagulata virus-01 via Homalodisca vitripennis Cell Culture Institutions: University of Texas at Tyler, USDA ARS. The glassy-winged sharpshooter (Homalodisca vitripennis ) is a highly vagile and polyphagous insect found throughout the southwestern United States. These insects are the predominant vectors of Xylella fastidiosa (X. 
fastidiosa), a xylem-limited bacterium that is the causal agent of Pierce's disease (PD) of grapevine. Pierce’s disease is economically damaging; thus, H. vitripennis have become a target for pathogen management strategies. A dicistrovirus identified as Homalodisca coagulata virus-01 (HoCV-01) has been associated with an increased mortality in H. vitripennis populations. Because a host cell is required for HoCV-01 replication, cell culture provides a uniform environment for targeted replication that is logistically and economically valuable for biopesticide production. In this study, a system for large-scale propagation of H. vitripennis cells via tissue culture was developed, providing a viral replication mechanism. HoCV-01 was extracted from whole body insects and used to inoculate cultured H. vitripennis cells at varying levels. The culture medium was removed every 24 hr for 168 hr, RNA extracted and analyzed with qRT-PCR. Cells were stained with trypan blue and counted to quantify cell survivability using light microscopy. Whole virus particles were extracted up to 96 hr after infection, which was the time point determined to be before total cell culture collapse occurred. Cells were also subjected to fluorescent staining and viewed using confocal microscopy to investigate viral activity on F-actin attachment and nuclei integrity. The conclusion of this study is that H. vitripennis cells are capable of being cultured and used for mass production of HoCV-01 at a suitable level to allow production of a biopesticide. Infection, Issue 91, Homalodisca vitripennis, Homalodisca coagulata virus-01, cell culture, Pierce’s disease of grapevine, Xylella fastidiosa, Dicistroviridae Technique and Considerations in the Use of 4x1 Ring High-definition Transcranial Direct Current Stimulation (HD-tDCS) Institutions: Spaulding Rehabilitation Hospital and Massachusetts General Hospital, Harvard Medical School, Pontifical Catholic University of Ecuador, Charité University Medicine Berlin, The City College of The City University of New York, University of Michigan. High-definition transcranial direct current stimulation (HD-tDCS) has recently been developed as a noninvasive brain stimulation approach that increases the accuracy of current delivery to the brain by using arrays of smaller "high-definition" electrodes, instead of the larger pad-electrodes of conventional tDCS. Targeting is achieved by energizing electrodes placed in predetermined configurations. One of these is the 4x1-ring configuration. In this approach, a center ring electrode (anode or cathode) overlying the target cortical region is surrounded by four return electrodes, which help circumscribe the area of stimulation. Delivery of 4x1-ring HD-tDCS is capable of inducing significant neurophysiological and clinical effects in both healthy subjects and patients. Furthermore, its tolerability is supported by studies using intensities as high as 2.0 milliamperes for up to twenty minutes. Even though 4x1 HD-tDCS is simple to perform, correct electrode positioning is important in order to accurately stimulate target cortical regions and exert its neuromodulatory effects. The use of electrodes and hardware that have specifically been tested for HD-tDCS is critical for safety and tolerability. 
Given that most published studies on 4x1 HD-tDCS have targeted the primary motor cortex (M1), particularly for pain-related outcomes, the purpose of this article is to systematically describe its use for M1 stimulation, as well as the considerations to be taken for safe and effective stimulation. However, the methods outlined here can be adapted for other HD-tDCS configurations and cortical targets. Medicine, Issue 77, Neurobiology, Neuroscience, Physiology, Anatomy, Biomedical Engineering, Biophysics, Neurophysiology, Nervous System Diseases, Diagnosis, Therapeutics, Anesthesia and Analgesia, Investigative Techniques, Equipment and Supplies, Mental Disorders, Transcranial direct current stimulation, tDCS, High-definition transcranial direct current stimulation, HD-tDCS, Electrical brain stimulation, Transcranial electrical stimulation (tES), Noninvasive Brain Stimulation, Neuromodulation, non-invasive, brain, stimulation, clinical techniques

Feeder-free Derivation of Neural Crest Progenitor Cells from Human Pluripotent Stem Cells Institutions: Sloan-Kettering Institute for Cancer Research, The Rockefeller University. Human pluripotent stem cells (hPSCs) have great potential for studying human embryonic development, for modeling human diseases in the dish and as a source of transplantable cells for regenerative applications after disease or accidents. Neural crest (NC) cells are the precursors for a large variety of adult somatic cells, such as cells from the peripheral nervous system and glia, melanocytes and mesenchymal cells. They are a valuable source of cells to study aspects of human embryonic development, including cell fate specification and migration. Further differentiation of NC progenitor cells into terminally differentiated cell types offers the possibility to model human diseases in vitro, investigate disease mechanisms and generate cells for regenerative medicine. This article presents the adaptation of a currently available in vitro differentiation protocol for the derivation of NC cells from hPSCs. This new protocol requires 18 days of differentiation, is feeder-free, easily scalable and highly reproducible among human embryonic stem cell (hESC) lines as well as human induced pluripotent stem cell (hiPSC) lines. Both old and new protocols yield NC cells of equal identity. Neuroscience, Issue 87, Embryonic Stem Cells (ESCs), Pluripotent Stem Cells, Induced Pluripotent Stem Cells (iPSCs), Neural Crest, Peripheral Nervous System (PNS), pluripotent stem cells, neural crest cells, in vitro differentiation, disease modeling, differentiation protocol, human embryonic stem cells, human pluripotent stem cells

Small Bowel Transplantation In Mice Institutions: University of California, San Francisco - UCSF. Since 1990, the development of tacrolimus-based immunosuppression, improved surgical techniques, an increased array of potent immunosuppressive medications, infection prophylaxis, and suitable patient selection have helped improve actuarial graft and patient survival rates for all types of intestine transplantation. Patients with irreversible intestinal failure and complications of parenteral nutrition should now be routinely considered for small intestine transplantation. However, survival rates for small intestinal transplantation have been slow to improve compared with renal, liver, heart and lung transplantation, and outcomes for small bowel transplantation remain unsatisfactory relative to other organs.
Further progress may depend on better understanding of the immunology and physiology of the graft and can be greatly facilitated by animal models. Wider use of the mouse small bowel transplantation model is needed in the study of the immunology and physiology of the transplanted gut, as well as of efficient methods for diagnosing early rejection. However, use of this model has been limited because the techniques involved are extremely technically challenging. We have developed a modified technique. When making the anastomosis of the portal vein and inferior vena cava, two stay sutures are placed at the proximal apex and distal apex of the recipient's inferior vena cava and the donor's portal vein. The left wall of the inferior vena cava and the donor's portal vein are closed with continuous sutures from the inside of the inferior vena cava; after one knot with the proximal apex stay suture, the right wall of the inferior vena cava and the donor's portal vein are closed with continuous sutures outside the inferior vena cava using 10-0 sutures. This method is easier to perform because the anastomosis is made on just one side of the inferior vena cava, and 10-0 suture is the right size to avoid bleeding and thrombosis. In this article, we provide details of the technique to supplement the video. Issue 7, Immunology, Transplantation, Transplant Rejection, Small Bowel

The C-seal: A Biofragmentable Drain Protecting the Stapled Colorectal Anastomosis from Leakage Institutions: University Medical Center Groningen. Colorectal anastomotic leakage (AL) is a serious complication in colorectal surgery leading to high morbidity and mortality rates [1]. The incidence of AL varies between 2.5 and 20% [2-5]. Over the years, many strategies aimed at lowering the incidence of anastomotic leakage have been examined [6,7]. The cause of AL is probably multifactorial. Etiological factors include insufficient arterial blood supply, tension on the anastomosis, hematoma and/or infection at the anastomotic site, and co-morbid factors of the patient such as diabetes and atherosclerosis [8]. Furthermore, some anastomoses may be insufficient from the start due to technical failure. A new device aimed at protecting the colorectal anastomosis and lowering the incidence of AL has recently been developed at our institute. This so-called C-seal is a biofragmentable drain, which is stapled to the anastomosis with the circular stapler. It covers the luminal side of the colorectal anastomosis, thereby preventing leakage. The C-seal is a thin-walled tube-like drain, with an approximate diameter of 4 cm and an approximate length of 25 cm (Figure 1). It is a tubular device composed of biodegradable polyurethane. Two flaps with adhesive tape are found at one end of the tube. These flaps are used to attach the C-seal to the anvil of the circular stapler, so that after the anastomosis is made the C-seal can be pulled through the anus. The C-seal remains in situ for at least 10 days. Thereafter it loses strength and degrades, to be excreted from the body together with the natural gastrointestinal contents. The C-seal does not prevent the formation of dehiscences. However, it prevents extravasation of faeces into the peritoneal cavity. This means that a gap at the anastomotic site does not lead to leakage. Currently, a phase II study testing the C-seal in 35 patients undergoing (colo-)rectal resection with stapled anastomosis is recruiting. The C-seal can be used in both open and laparoscopic procedures.
The C-seal is only applied in stapled anastomoses within 15cm from the anal verge. In the video, application of the C-seal is shown in an open extended sigmoid resection in a patient suffering from diverticular disease with a stenotic colon. Medicine, Issue 45, Surgery, low anterior resection, colorectal anastomosis, anastomotic leakage, drain, rectal cancer, circular stapler Aseptic Laboratory Techniques: Plating Methods Institutions: University of California, Los Angeles . Microorganisms are present on all inanimate surfaces creating ubiquitous sources of possible contamination in the laboratory. Experimental success relies on the ability of a scientist to sterilize work surfaces and equipment as well as prevent contact of sterile instruments and solutions with non-sterile surfaces. Here we present the steps for several plating methods routinely used in the laboratory to isolate, propagate, or enumerate microorganisms such as bacteria and phage. All five methods incorporate aseptic technique, or procedures that maintain the sterility of experimental materials. Procedures described include (1) streak-plating bacterial cultures to isolate single colonies, (2) pour-plating and (3) spread-plating to enumerate viable bacterial colonies, (4) soft agar overlays to isolate phage and enumerate plaques, and (5) replica-plating to transfer cells from one plate to another in an identical spatial pattern. These procedures can be performed at the laboratory bench, provided they involve non-pathogenic strains of microorganisms (Biosafety Level 1, BSL-1). If working with BSL-2 organisms, then these manipulations must take place in a biosafety cabinet. Consult the most current edition of the Biosafety in Microbiological and Biomedical Laboratories (BMBL) as well as Material Safety Data Sheets (MSDS) for Infectious Substances to determine the biohazard classification as well as the safety precautions and containment facilities required for the microorganism in question. Bacterial strains and phage stocks can be obtained from research investigators, companies, and collections maintained by particular organizations such as the American Type Culture Collection (ATCC). It is recommended that non-pathogenic strains be used when learning the various plating methods. By following the procedures described in this protocol, students should be able to: ● Perform plating procedures without contaminating media. ● Isolate single bacterial colonies by the streak-plating method. ● Use pour-plating and spread-plating methods to determine the concentration of bacteria. ● Perform soft agar overlays when working with phage. ● Transfer bacterial cells from one plate to another using the replica-plating procedure. ● Given an experimental task, select the appropriate plating method. Basic Protocols, Issue 63, Streak plates, pour plates, soft agar overlays, spread plates, replica plates, bacteria, colonies, phage, plaques, dilutions The Specification of Telencephalic Glutamatergic Neurons from Human Pluripotent Stem Cells Institutions: The University of Connecticut Health Center, The University of Connecticut Health Center, The University of Connecticut Health Center. Here, a stepwise procedure for efficiently generating telencephalic glutamatergic neurons from human pluripotent stem cells (PSCs) has been described. The differentiation process is initiated by breaking the human PSCs into clumps which round up to form aggregates when the cells are placed in a suspension culture. 
The aggregates are then grown in hESC medium from days 1-4 to allow for spontaneous differentiation. During this time, the cells have the capacity to become any of the three germ layers. From days 5-8, the cells are placed in a neural induction medium to push them into the neural lineage. Around day 8, the cells are allowed to attach onto 6 well plates and differentiate during which time the neuroepithelial cells form. These neuroepithelial cells can be isolated at day 17. The cells can then be kept as neurospheres until they are ready to be plated onto coverslips. Using a basic medium without any caudalizing factors, neuroepithelial cells are specified into telencephalic precursors, which can then be further differentiated into dorsal telencephalic progenitors and glutamatergic neurons efficiently. Overall, our system provides a tool to generate human glutamatergic neurons for researchers to study the development of these neurons and the diseases which affect them. Stem Cell Biology, Issue 74, Neuroscience, Neurobiology, Developmental Biology, Cellular Biology, Molecular Biology, Stem Cells, Embryonic Stem Cells, ESCs, Pluripotent Stem Cells, Induced Pluripotent Stem Cells, iPSC, neural differentiation, forebrain, glutamatergic neuron, neural patterning, development, neurons Flexible Colonoscopy in Mice to Evaluate the Severity of Colitis and Colorectal Tumors Using a Validated Endoscopic Scoring System Institutions: Case Western Reserve University School of Medicine, Cleveland, Case Western Reserve University School of Medicine, Cleveland, Case Western Reserve University School of Medicine, Cleveland. The use of modern endoscopy for research purposes has greatly facilitated our understanding of gastrointestinal pathologies. In particular, experimental endoscopy has been highly useful for studies that require repeated assessments in a single laboratory animal, such as those evaluating mechanisms of chronic inflammatory bowel disease and the progression of colorectal cancer. However, the methods used across studies are highly variable. At least three endoscopic scoring systems have been published for murine colitis and published protocols for the assessment of colorectal tumors fail to address the presence of concomitant colonic inflammation. This study develops and validates a reproducible endoscopic scoring system that integrates evaluation of both inflammation and tumors simultaneously. This novel scoring system has three major components: 1) assessment of the extent and severity of colorectal inflammation (based on perianal findings, transparency of the wall, mucosal bleeding, and focal lesions), 2) quantitative recording of tumor lesions (grid map and bar graph), and 3) numerical sorting of clinical cases by their pathological and research relevance based on decimal units with assigned categories of observed lesions and endoscopic complications (decimal identifiers). The video and manuscript presented herein were prepared, following IACUC-approved protocols, to allow investigators to score their own experimental mice using a well-validated and highly reproducible endoscopic methodology, with the system option to differentiate distal from proximal endoscopic colitis (D-PECS). Medicine, Issue 80, Crohn's disease, ulcerative colitis, colon cancer, Clostridium difficile, SAMP mice, DSS/AOM-colitis, decimal scoring identifier Murine Ileocolic Bowel Resection with Primary Anastomosis Institutions: University of Alberta, University of Alberta. 
Intestinal resections are frequently required for treatment of diseases involving the gastrointestinal tract, with Crohn’s disease and colon cancer being two common examples. Despite the frequency of these procedures, a significant knowledge gap remains in describing the inherent effects of intestinal resection on host physiology and disease pathophysiology. This article provides detailed instructions for an ileocolic resection with primary end-to-end anastomosis in mice, as well as essential aspects of peri-operative care to maximize post-operative success. When followed closely, this procedure yields a 95% long-term survival rate, no failure to thrive, and minimizes post-operative complications of bowel obstruction and anastomotic leak. The technical challenges of performing the procedure in mice are a barrier to its wide spread use in research. The skills described in this article can be acquired without previous surgical experience. Once mastered, the murine ileocolic resection procedure will provide a reproducible tool for studying the effects of intestinal resection in models of human disease. Medicine, Issue 92, Ileocolic resection, anastomosis, Crohn's disease, mouse models, intestinal adaptation, short bowel syndrome Substernal Thyroid Biopsy Using Endobronchial Ultrasound-guided Transbronchial Needle Aspiration Institutions: State University of New York, Buffalo, Roswell Park Cancer Institute, State University of New York, Buffalo. Substernal thyroid goiter (STG) represents about 5.8% of all mediastinal lesions1 . There is a wide variation in the published incidence rates due to the lack of a standardized definition for STG. Biopsy is often required to differentiate benign from malignant lesions. Unlike cervical thyroid, the overlying sternum precludes ultrasound-guided percutaneous fine needle aspiration of STG. Consequently, surgical mediastinoscopy is performed in the majority of cases, causing significant procedure related morbidity and cost to healthcare. Endobronchial Ultrasound-guided Transbronchial Needle Aspiration (EBUS-TBNA) is a frequently used procedure for diagnosis and staging of non-small cell lung cancer (NSCLC). Minimally invasive needle biopsy for lesions adjacent to the airways can be performed under real-time ultrasound guidance using EBUS. Its safety and efficacy is well established with over 90% sensitivity and specificity. The ability to perform EBUS as an outpatient procedure with same-day discharges offers distinct morbidity and financial advantages over surgery. As physicians performing EBUS gained procedural expertise, they have attempted to diversify its role in the diagnosis of non-lymph node thoracic pathologies. We propose here a role for EBUS-TBNA in the diagnosis of substernal thyroid lesions, along with a step-by-step protocol for the procedure. Medicine, Issue 93, substernal thyroid, retrosternal thyroid, intra-thoracic thyroid, goiter, endobronchial ultrasound, EBUS, transbronchial needle aspiration, TBNA, biopsy, needle biopsy A Zebrafish Model of Diabetes Mellitus and Metabolic Memory Institutions: Rosalind Franklin University of Medicine and Science, Rosalind Franklin University of Medicine and Science. Diabetes mellitus currently affects 346 million individuals and this is projected to increase to 400 million by 2030. Evidence from both the laboratory and large scale clinical trials has revealed that diabetic complications progress unimpeded via the phenomenon of metabolic memory even when glycemic control is pharmaceutically achieved. 
Gene expression can be stably altered through epigenetic changes, which not only allow cells and organisms to quickly respond to changing environmental stimuli but also confer the ability of the cell to "memorize" these encounters once the stimulus is removed. As such, the roles that these mechanisms play in the metabolic memory phenomenon are currently being examined. We have recently reported the development of a zebrafish model of type I diabetes mellitus and characterized this model to show that diabetic zebrafish not only display the known secondary complications, including the changes associated with diabetic retinopathy, diabetic nephropathy and impaired wound healing, but also exhibit impaired caudal fin regeneration. This model is unique in that the zebrafish is capable of regenerating its damaged pancreas and restoring a euglycemic state similar to what would be expected in post-transplant human patients. Moreover, multiple rounds of caudal fin amputation allow for the separation and study of pure epigenetic effects in an in vivo system without potential complicating factors from the previous diabetic state. Although euglycemia is achieved following pancreatic regeneration, the diabetic secondary complications of impaired fin regeneration and skin wound healing persist indefinitely. In the case of impaired fin regeneration, this pathology is retained even after multiple rounds of fin regeneration in the daughter fin tissues. These observations point to an underlying epigenetic process existing in the metabolic memory state. Here we present the methods needed to successfully generate the diabetic and metabolic memory groups of fish and discuss the advantages of this model. Medicine, Issue 72, Genetics, Genomics, Physiology, Anatomy, Biomedical Engineering, Metabolomics, Zebrafish, diabetes, metabolic memory, tissue regeneration, streptozocin, epigenetics, Danio rerio, animal model, diabetes mellitus, diabetes, drug discovery, hyperglycemia

Direct Pressure Monitoring Accurately Predicts Pulmonary Vein Occlusion During Cryoballoon Ablation Institutions: Piedmont Heart Institute, Medtronic Inc. Cryoballoon ablation (CBA) is an established therapy for atrial fibrillation (AF). Pulmonary vein (PV) occlusion is essential for achieving antral contact and PV isolation and is typically assessed by contrast injection. We present a novel method of direct pressure monitoring for assessment of PV occlusion. Transcatheter pressure is monitored during balloon advancement to the PV antrum. Pressure is recorded via a single pressure transducer connected to the inner lumen of the cryoballoon. Pressure curve characteristics are used to assess occlusion in conjunction with fluoroscopic or intracardiac echocardiography (ICE) guidance. PV occlusion is confirmed when loss of the typical left atrial (LA) pressure waveform is observed, with recordings showing pulmonary arterial (PA) pressure characteristics (no A wave and rapid V wave upstroke). Complete pulmonary vein occlusion as assessed with this technique has been confirmed with concurrent contrast utilization during the initial testing of the technique and has been shown to be highly accurate and readily reproducible. We evaluated the efficacy of this novel technique in 35 patients. A total of 128 veins were assessed for occlusion with the cryoballoon utilizing the pressure monitoring technique; occlusive pressure was demonstrated in 113 veins, with resultant successful pulmonary vein isolation in 111 veins (98.2%).
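The occlusion criterion described above (loss of the typical LA waveform, with no A wave and a rapid V wave upstroke) is in essence a simple signal-classification rule. A minimal sketch of how such a rule could be automated is given below; this is not part of the published technique, and the function name, thresholds, sampling parameters and peak-detection settings are hypothetical assumptions chosen only for illustration.

```python
# Illustrative sketch only: classify a transcatheter pressure trace as "occlusive"
# (PA-like: one V wave per beat, steep upstroke) or "non-occlusive" (LA-like: A and V
# waves both present). All thresholds are assumptions and would need calibration.
import numpy as np
from scipy.signal import find_peaks

def classify_occlusion(pressure_mmhg, fs_hz, heart_rate_bpm,
                       max_peaks_per_beat=1.2, min_upstroke_mmhg_per_s=150.0):
    """Return True if the trace looks occlusive, False if it still resembles an LA waveform."""
    pressure = np.asarray(pressure_mmhg, dtype=float)
    n_beats = len(pressure) / fs_hz * heart_rate_bpm / 60.0  # expected cardiac cycles

    # An LA trace shows both A and V waves (roughly two peaks per beat);
    # an occluded, PA-like trace shows essentially one V wave per beat.
    peaks, _ = find_peaks(pressure, prominence=2.0, distance=max(1, int(0.2 * fs_hz)))
    peaks_per_beat = len(peaks) / max(n_beats, 1.0)

    # Steepest positive slope of the trace, used as a proxy for V-wave upstroke velocity.
    max_upstroke = np.max(np.gradient(pressure, 1.0 / fs_hz))

    return peaks_per_beat <= max_peaks_per_beat and max_upstroke >= min_upstroke_mmhg_per_s
```

In practice the published method relies on the operator reading the pressure curve in conjunction with fluoroscopic or ICE guidance; the sketch only illustrates how the stated waveform criteria could be expressed quantitatively.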
Occlusion was confirmed with subsequent contrast injection during the initial ten procedures, after which contrast utilization was rapidly reduced or eliminated given the highly accurate identification of the occlusive pressure waveform with limited initial training. Verification of PV occlusive pressure during CBA is a novel approach to assessing effective PV occlusion, and it accurately predicts electrical isolation. Utilization of this method results in a significant decrease in fluoroscopy time and volume of contrast. Medicine, Issue 72, Anatomy, Physiology, Cardiology, Biomedical Engineering, Surgery, Cardiovascular System, Cardiovascular Diseases, Surgical Procedures, Operative, Investigative Techniques, Atrial fibrillation, Cryoballoon Ablation, Pulmonary Vein Occlusion, Pulmonary Vein Isolation, electrophysiology, catheterization, heart, vein, clinical, surgical device, surgical techniques

Implantation of the Syncardia Total Artificial Heart Institutions: Virginia Commonwealth University, Virginia Commonwealth University. With advances in technology, the use of mechanical circulatory support devices for end stage heart failure has rapidly increased. The vast majority of such patients are generally well served by left ventricular assist devices (LVADs). However, a subset of patients with late stage biventricular failure or other significant anatomic lesions are not adequately treated by isolated left ventricular mechanical support. Examples of concomitant cardiac pathology that may be better treated by resection and TAH replacement include: post infarction ventricular septal defect, aortic root aneurysm / dissection, cardiac allograft failure, massive ventricular thrombus, refractory malignant arrhythmias (independent of filling pressures), hypertrophic / restrictive cardiomyopathy, and complex congenital heart disease. Patients often present with cardiogenic shock and multi-system organ dysfunction. Excision of both ventricles and orthotopic replacement with a total artificial heart (TAH) is an effective, albeit extreme, therapy for rapid restoration of blood flow and resuscitation. Perioperative management is focused on end organ resuscitation and physical rehabilitation. In addition to the usual concerns of infection, bleeding, and thromboembolism common to all mechanically supported patients, TAH patients face unique risks with regard to renal failure and anemia. Supplementation of the abrupt decrease in brain natriuretic peptide following ventriculectomy appears to have protective renal effects. Anemia following TAH implantation can be profound and persistent. Nonetheless, the anemia is generally well tolerated and transfusions are limited to avoid HLA sensitization. Until recently, TAH patients were confined as inpatients tethered to a 500 lb pneumatic console driver. The recent introduction of a backpack-sized portable driver (currently under clinical trial) has enabled patients to be discharged home and even return to work. Despite the profound presentation of these sick patients, there is a 79-87% success rate in bridge to transplantation.
Medicine, Issue 89, mechanical circulatory support, total artificial heart, biventricular failure, operative techniques

Modeling Astrocytoma Pathogenesis In Vitro and In Vivo Using Cortical Astrocytes or Neural Stem Cells from Conditional, Genetically Engineered Mice Institutions: University of North Carolina School of Medicine, University of North Carolina School of Medicine, University of North Carolina School of Medicine, University of North Carolina School of Medicine, University of North Carolina School of Medicine, Emory University School of Medicine, University of North Carolina School of Medicine. Current astrocytoma models are limited in their ability to define the roles of oncogenic mutations in specific brain cell types during disease pathogenesis and their utility for preclinical drug development. In order to design a better model system for these applications, phenotypically wild-type cortical astrocytes and neural stem cells (NSC) from conditional, genetically engineered mice (GEM) that harbor various combinations of floxed oncogenic alleles were harvested and grown in culture. Genetic recombination was induced in vitro using adenoviral Cre-mediated recombination, resulting in expression of mutated oncogenes and deletion of tumor suppressor genes. The phenotypic consequences of these mutations were defined by measuring proliferation, transformation, and drug response in vitro. Orthotopic allograft models, whereby transformed cells are stereotactically injected into the brains of immune-competent, syngeneic littermates, were developed to define the role of oncogenic mutations and cell type on tumorigenesis in vivo. Unlike most established human glioblastoma cell line xenografts, injection of transformed GEM-derived cortical astrocytes into the brains of immune-competent littermates produced astrocytomas, including the most aggressive subtype, glioblastoma, that recapitulated the histopathological hallmarks of human astrocytomas, including diffuse invasion of normal brain parenchyma. Bioluminescence imaging of orthotopic allografts from transformed astrocytes engineered to express luciferase was utilized to monitor in vivo tumor growth over time. Thus, astrocytoma models using astrocytes and NSC harvested from GEM with conditional oncogenic alleles provide an integrated system to study the genetics and cell biology of astrocytoma pathogenesis in vitro and in vivo and may be useful in preclinical drug development for these devastating diseases. Neuroscience, Issue 90, astrocytoma, cortical astrocytes, genetically engineered mice, glioblastoma, neural stem cells, orthotopic allograft

Laparoscopic Left Liver Sectoriectomy of Caroli's Disease Limited to Segment II and III Institutions: University of Insubria, University of Insubria. Caroli's disease is defined as an abnormal dilatation of the intra-hepatic bile ducts. Its incidence is extremely low (1 in 1,000,000 population); in most cases the whole liver is involved and liver transplantation is the treatment of choice. In cases of dilatation limited to the left or right lobe, liver resection can be performed. For many years the standard approach for liver resection has been a formal laparotomy, by means of a large abdominal incision, which is characterized by significant post-operative morbidity. More recently, a minimally invasive, laparoscopic approach has been proposed as a possible surgical technique for liver resection for both benign and malignant diseases.
The main benefit of the minimally invasive approach is a significant reduction in surgical trauma, which allows a faster recovery and fewer post-operative complications. This video shows a case of Caroli's disease in a 58-year-old male admitted to the gastroenterology department for sudden onset of abdominal pain associated with fever (>38 °C), nausea and shivering. Abdominal ultrasound demonstrated a significant dilatation of the left-sided intra-hepatic bile ducts, with no evidence of gallbladder or common bile duct stones. These findings were confirmed by abdominal high-resolution computed tomography. Laparoscopic left sectoriectomy was planned. Five trocars and a 30° optic were used; exploration of the abdominal cavity showed no adhesions or evidence of other diseases. In order to control blood inflow to the liver, a vascular clamp was placed on the hepatic pedicle (Pringle's manoeuvre). Parenchymal division was carried out with combined use of 5 mm bipolar forceps and a 5 mm ultrasonic dissector. A severely dilated left hepatic duct was isolated and divided using a 45 mm endoscopic vascular stapler. Liver dissection was continued up to isolation of the main left portal branch, which was then divided with a further cartridge of the 45 mm vascular stapler. At this point the left liver remained attached only by the left hepatic vein: division of the triangular ligament was performed using a monopolar hook, and the hepatic vein was isolated and then divided using the vascular stapler. Haemostasis was refined by application of argon beam coagulation, and no bleeding was observed even after removal of the vascular clamp (total Pringle's time 27 minutes). The postoperative course was uneventful; minimal elevation of the liver function tests was recorded on post-operative day 1 but had returned to normal at discharge on post-operative day 3. Medicine, Issue 24, Laparoscopy, Liver resection, Caroli's disease, Left sectoriectomy

Use of Human Perivascular Stem Cells for Bone Regeneration Institutions: School of Dentistry, UCLA, UCLA, UCLA, University of Edinburgh. Human perivascular stem cells (PSCs) can be isolated in sufficient numbers from multiple tissues for purposes of skeletal tissue engineering [1-3]. PSCs are a FACS-sorted population of 'pericytes' (CD146+CD34-CD45-) and 'adventitial cells' (CD146-CD34+CD45-), each of which we have previously reported to have properties of mesenchymal stem cells. PSCs, like MSCs, are able to undergo osteogenic differentiation, as well as secrete pro-osteogenic cytokines [1,2]. In the present protocol, we demonstrate the osteogenicity of PSCs in several animal models including a muscle pouch implantation in SCID (severe combined immunodeficient) mice, a SCID mouse calvarial defect and a femoral segmental defect (FSD) in athymic rats. The thigh muscle pouch model is used to assess ectopic bone formation. Calvarial defects are centered on the parietal bone and are standardly 4 mm in diameter (critically sized) [8]. FSDs are bicortical and are stabilized with a polyethylene bar and K-wires [4]. The FSD described is also a critical size defect, which does not significantly heal on its own [4]. In contrast, if stem cells or growth factors are added to the defect site, significant bone regeneration can be appreciated. The overall goal of PSC xenografting is to demonstrate the osteogenic capability of this cell type in both ectopic and orthotopic bone regeneration models.
Bioengineering, Issue 63, Biomedical Engineering, Stem Cell Biology, Pericyte, Stem Cell, Bone Defect, Tissue Engineering, Osteogenesis, femoral defect, calvarial defect

A New Single Chamber Implantable Defibrillator with Atrial Sensing: A Practical Demonstration of Sensing and Ease of Implantation Institutions: University Hospital of Rostock, Germany. Implantable cardioverter-defibrillators (ICDs) terminate ventricular tachycardia (VT) and ventricular fibrillation (VF) with high efficacy and can protect patients from sudden cardiac death (SCD). However, inappropriate shocks may occur if tachycardias are misdiagnosed. Inappropriate shocks are harmful and impair patient quality of life. The risk of inappropriate therapy increases with lower detection rates programmed in the ICD. Single-chamber detection poses greater risks for misdiagnosis when compared with dual-chamber devices that have the benefit of additional atrial information. However, using a dual-chamber device merely for the sake of detection is generally not accepted, since the risks associated with the second electrode may outweigh the benefits of detection. Therefore, BIOTRONIK developed a ventricular lead called the LinoxSMART S DX, which allows for the detection of atrial signals from two electrodes positioned at the atrial part of the ventricular electrode. This device contains two ring electrodes: one that contacts the atrial wall at the junction of the superior vena cava (SVC) and one positioned at the free-floating part of the electrode in the atrium. The excellent signal quality can only be achieved by a special filter setting in the ICD (Lumax 540 and 740 VR-T DX, BIOTRONIK). Here, the ease of implantation of the system will be demonstrated. Medicine, Issue 60, Implantable defibrillator, dual chamber, single chamber, tachycardia detection

Deep Neuromuscular Blockade Leads to a Larger Intraabdominal Volume During Laparoscopy Institutions: Aleris-Hamlet Hospitals, Soeborg, Denmark, Aleris-Hamlet Hospitals, Soeborg, Denmark. Shoulder pain is a commonly reported symptom following laparoscopic procedures such as myomectomy or hysterectomy, and recent studies have shown that lowering the insufflation pressure during surgery may reduce the risk of post-operative pain. In this pilot study, a method is presented for measuring the intra-abdominal space available to the surgeon during laparoscopy, in order to examine whether the relaxation produced by deep neuromuscular blockade can increase the working surgical space sufficiently to permit a reduction in the CO2 insufflation pressure. Using the laparoscopic grasper, the distance from the promontory to the skin is measured at two different insufflation pressures: 8 mm Hg and 12 mm Hg. After the initial measurements, a neuromuscular blocking agent (rocuronium) is administered to the patient and the intra-abdominal volume is measured again. Pilot data collected from 15 patients show that the intra-abdominal space at 8 mm Hg with blockade is comparable to the intra-abdominal space measured at 12 mm Hg without blockade. The impact of neuromuscular blockade was not correlated with patient height, weight, BMI, or age. Thus, using neuromuscular blockade to maintain a steady volume while reducing insufflation pressure may produce improved patient outcomes.
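As a purely illustrative aside, the pilot comparison described above (intra-abdominal space at 8 mm Hg with deep blockade versus 12 mm Hg without blockade) lends itself to a simple paired analysis. The sketch below uses invented example values and an assumed paired t-test; it is not the authors' analysis, and the variable names and numbers are placeholders only.

```python
# Hypothetical sketch of a paired comparison of promontory-to-skin distances.
import numpy as np
from scipy import stats

# Invented example data: one distance (cm) per patient under each condition.
# These are placeholders, not measurements from the study.
dist_8mmhg_blockade = np.array([11.8, 12.4, 10.9, 13.1, 12.0])
dist_12mmhg_no_blockade = np.array([12.0, 12.3, 11.2, 13.0, 12.2])

# Paired t-test on the per-patient differences between the two conditions.
t_stat, p_value = stats.ttest_rel(dist_8mmhg_blockade, dist_12mmhg_no_blockade)
mean_diff = np.mean(dist_8mmhg_blockade - dist_12mmhg_no_blockade)

print(f"mean difference = {mean_diff:.2f} cm, paired t = {t_stat:.2f}, p = {p_value:.3f}")
# A small mean difference with a non-significant p-value would be consistent with the
# claim that 8 mm Hg with blockade gives a working space comparable to 12 mm Hg without it.
```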
Medicine, Issue 76, Anatomy, Physiology, Neurobiology, Surgery, gynecology, laparoscopy, deep neuromuscular blockade, reversal, rocuronium, sugammadex, laparoscopic surgery, clinical techniques, surgical techniques

Surgical Induction of Endolymphatic Hydrops by Obliteration of the Endolymphatic Duct Institutions: Case Western Reserve University. Surgical induction of endolymphatic hydrops (ELH) in the guinea pig by obliteration and obstruction of the endolymphatic duct is a well-accepted animal model of the condition and an important correlate for human Meniere's disease. In 1965, Robert Kimura and Harold Schuknecht first described an intradural approach for obstruction of the endolymphatic duct (Kimura 1965). Although effective, this technique, which requires penetration of the brain's protective covering, incurred an undesirable level of morbidity and mortality in the animal subjects. Consequently, Andrews and Bohmer developed an extradural approach, which predictably produces fewer of the complications associated with central nervous system (CNS) penetration (Andrews and Bohmer 1989). The extradural approach described here first requires a midline incision in the region of the occiput to expose the underlying muscular layer. We operate only on the right side. After appropriate retraction of the overlying tissue, a horizontal incision is made into the musculature of the right occiput to expose the right temporo-occipital suture line. The bone immediately inferolateral to the suture line (Fig. 1) is then drilled with an otologic drill until the sigmoid sinus becomes visible. Medial retraction of the sigmoid sinus reveals the operculum of the endolymphatic duct, which houses the endolymphatic sac. Drilling medial to the operculum into the area of the endolymphatic sac reveals the endolymphatic duct, which is then packed with bone wax to produce obstruction and ultimately ELH. In the following weeks, the animal will demonstrate progressive, fluctuating hearing loss and histologic evidence of ELH. Medicine, Issue 35, Guinea Pig, Endolymphatic hydrops, Meniere's disease, surgical induction, endolymphatic duct
Limping in crutches, his broken leg shielded in plaster following a jogging accident, the distinguished biologist Edward O. Wilson made his way slowly toward the stage at a convention of the American Association for the Advancement of Science in 1978. Climbing the stairs, taking his seat, and shuffling his notes, a sudden burst of activity punctuated the silence as the entire front row of the audience leapt onto the stage hurling insults. They jostled Wilson and then poured iced water over his head. The protesters would turn out to be Marxists, incensed by the publication of Wilson’s book Sociobiology. This story has become a familiar feature of the nature/nurture debate, used to illustrate the vitriolic hostility expressed by ideological groups scrambling to silence what most people already take to be an incontrovertible fact: that humans, just like every other species on earth, have a nature. As crowds abandoned Wilson to evacuate the auditorium that day, one man at the back of the room tried to push his way forward against the multitude heading towards the exits. “It was the most hateful, frightening, and disgusting behavior I’ve ever witnessed at an academic assembly,” the famed anthropologist Napoleon Chagnon would later recall. He didn’t know it then, but the events of that day were an omen of things to come for Chagnon himself, whose Wilsonian worldview would help to bring about one of anthropology’s greatest controversies. Chagnon, who passed away last week, has been remembered as one of the last titans of anthropology, and perhaps the last ethnographer in the vein of Mead and Malinowski to go deep into a remote part of the world and live among a relatively un-acculturated and unstudied people. The research he conducted there would inspire millions to take an interest in the world’s traditional cultures and the field of cultural anthropology. Dripping in perspiration, his hands and face swollen by stinging insects, Chagnon disembarked at a remote Venezuelan village along a piranha-infested river deep in the Amazonian interior in 1964. Stepping out of his aluminum rowboat, he was immediately nauseated by the smell of decaying vegetable matter and feces. He pushed his way through a wall of leaves and stepped into the open: “I saw a dozen burly, naked, sweaty, hideous men nervously staring at us down the shaft of their drawn arrows!” The Yanomamö people lived with the ever-present threat of violence from raiding villages, and from his very first encounter with them, Chagnon understood the paranoia that consequently pervaded their everyday life. It took a long time for Chagnon to acclimatize to the deep interior of the Amazon Rainforest and its unique threats. The insects continued to plague him—not just the flying stinging ones, but termites that claimed unguarded shoes as nests, and spiders and scorpions drawn to warm clothes in the middle of the night. He would later find himself face to face with an anaconda. “I laid my double-barrel twelve-gauge shotgun on the bank next to me,” he recalled. Moments later, “the water exploded in front of me: a very large anaconda head shot out of the water and whizzed just inches from my face. I immediately went into a rage: this son-of-a-bitch of a snake was trying to kill me!” Chagnon began firing rounds into the snake, which violently twisted and turned as he reloaded, fired, and reloaded. But Chagnon was more worried about jaguars, which were known to kill groups of men in a single attack. 
From time to time, Chagnon and his companions would find themselves stalked by these predators, sometimes for hours on end. They could be heard at night, prowling around the makeshift camps he slept in as he travelled between villages. One night, he awoke to find a jaguar baring its teeth at him as he lay in his hammock. But the mosquito net and the yelling of villagers confused the animal, which darted back into the bush. In 1966, Chagnon began working with the geneticist James Neel. Neel had managed to convince the Atomic Energy Commission to fund a genetic study of an isolated population and was able to pay Chagnon a salary to assist his research there. Neel’s team took blood samples from the Yanomamö, and began administering the Edmonston B vaccine when they discovered that the Yanomamö had no antibodies to measles. In some ways, the Yanomamö sounded like something out of any anthropology textbook—they were patrilineal and polygamous (polygyny); like other cultures around the world, they carved a position for the levirate—whereby a man married his dead brother’s wife; they had ceremonial roles and practised ritual confinement with taboos on food and sex. But sometimes this exotic veneer would be punctured by their shared humanity, particularly their mischievous sense of humour. Early in Chagnon’s research, the Yanomamö pranked the anthropologist by providing him with vulgarities when he asked their names. He did not realise this until he began bragging to a group of Yanomamö about how well he now understood their genealogies. As he began, the Yanomamö erupted into laughter, tears streaming from their faces. They begged him to continue and, oblivious, Chagnon went on: “Hairy Cunt was married to the headman, Long Dong, their youngest son was Asshole, and so on.” When he discovered he’d been tricked, Chagnon was embarrassed and furious that five months of patient name gathering had yielded nothing but a litany of insults. From that day forward, he would cross-check all information between individual Yanomamö informants and villages. But for all their jocularity, Chagnon found that up to 30 percent of all Yanomamö males died a violent death. Warfare and violence were common, and duelling was a ritual practice, in which two men would take turns flogging each other over the head with a club, until one of the combatants succumbed. Chagnon was adamant that the primary causes of violence among the Yanomamö were revenge killings and women. The latter may not seem surprising to anyone aware of the ubiquity of ruthless male sexual competition in the animal kingdom, but anthropologists generally believed that human violence found its genesis in more immediate matters, such as disputes over resources. When Chagnon asked the Yanomamö shaman Dedeheiwa to explain the cause of violence, he replied, “Don’t ask such stupid questions! Women! Women! Women! Women! Women!” Such fights erupted over sexual jealousy, sexual impropriety, rape, and attempts at seduction, kidnap and failure to deliver a promised girl. Internecine raids and attacks often involved attempts by a man or group to abduct another’s women. “The victim is grabbed by her abductors by one arm, and her protectors grab the other arm. Then both groups pull in opposite directions,” Chagnon learned.
In one instance, a woman’s arms were reportedly pulled out of their sockets: “The victim invariably screams in agony, and the struggle can last several long minutes until one group takes control of her.” Although one in five Yanomamö women Chagnon interviewed had been kidnapped from another village, some of these women were grateful to find that their new husbands were less cruel than their former ones. The treatment of Yanomamö women could be particularly gruesome, and Chagnon had to wrestle with the ethical dilemmas that confront anthropologists under such circumstances—should he intervene or remain an observer? Men frequently beat their wives, mainly out of sexual jealousy, shot arrows into them, or even held burning sticks between their legs to discourage the possibility of infidelity. On one occasion, a man bludgeoned his wife in the head with firewood and in front of an impassive audience. “Her head bounced off the ground with each ruthless blow, as if he were pounding a soccer ball with a baseball bat. The head-man and I intervened at that point—he was killing her.” Chagnon stitched her head back up. The woman recovered but she subsequently dropped her infant into a fire as she slept, and was later killed by a venomous snake. Life in the Amazon could be nasty, brutish, and short. Chagnon would make more than 20 fieldwork visits to the Amazon, and in 1968 he published Yanomamö: The Fierce People, which became an instant international bestseller. The book immediately ignited controversy within the field of anthropology. Although it commanded immense respect and became the most commonly taught book in introductory anthropology courses, the very subtitle of the book annoyed those anthropologists, who preferred to give their monographs titles like The Gentle Tasaday, The Gentle People, The Harmless People, The Peaceful People, Never in Anger, and The Semai: A Nonviolent People of Malaya. The stubborn tendency within the discipline was to paint an unrealistic façade over such cultures—although 61 percent of Waorani men met a violent death, an anthropologist nevertheless described this Amazonian people as a “tribe where harmony rules,” on account of an “ethos that emphasized peacefulness.”1 Anthropologists who considered such a society harmonious were unlikely to be impressed by Chagnon’s description of the Yanomamö as “The Fierce People,” where “only” 30 percent of males died by violence. The same anthropologist who had ascribed a prevailing ethos of peace to the Waoroni later accused Chagnon, in the gobbledygook of anthropological jargon, of the “projection of traditional preconceptions of the Western construction of Otherness.”2 These anthropologists were made more squeamish still by Chagnon’s discovery that the unokai of the Yanomamö—men who had killed and assumed a ceremonial title—had about three times more children than others, owing to having twice as many wives. Drawing on this observation in his 1988 Science article “Life Histories, Blood Revenge, and Warfare in a Tribal Population,” Chagnon suggested that men who had demonstrated success at a cultural phenomenon, the military prowess of revenge killings, were held in higher esteem and considered more attractive mates. In some quarters outside of anthropology, Chagnon’s theory came as no surprise, but its implication for anthropology could be profound. 
In The Better Angels of Our Nature, Steven Pinker points out that if violent men turn out to be more evolutionarily fit, “This arithmetic, if it persisted over many generations, would favour a genetic tendency to be willing and able to kill.” The question of whether or not higher fitness for violent males is a universal phenomenon common to all humanity in prehistory remains contested. But Chagnon appears to have thought so: “Conflicts over the means of reproduction—women—dominated the political machinations of men during the vast span of human history and shaped human male psychology.” Chagnon’s detractors were appalled. Not only was he accusing a pristine Amazon society of rewarding its most violent males with reproductive success, he was also implying that mankind itself was stained with the blood of our ancestors. This hypothesis threatened to force an entirely new way of thinking about human behaviour, and promote a new paradigm of human behavioural ecology. Chagnon had tottered onto the unforgiving battlefield of the science wars, and anthropologists lined up to shower him with criticisms and derision. The contempt for Chagnon became so petty that some anthropologists refused to use his transliteration “Yanomamö,” opting for “Yanomami” instead. If they couldn’t agree on the name of the people, what else could they hope to agree about? Chagnon considered his most formidable critic to be the eminent anthropologist Marvin Harris. Harris had been crowned the unofficial historian of the field following the publication of his all-encompassing work The Rise of Anthropological Theory. He was the founder of the highly influential materialist school of anthropology, and argued that ethnographers should first seek material explanations for human behavior before considering alternatives, as “human social life is a response to the practical problems of earthly existence.”3 Harris held that the structure and “superstructure” of a society are largely epiphenomena of its “infrastructure,” meaning that the economic and social organization, beliefs, values, ideology, and symbolism of a culture evolve as a result of changes in the material circumstances of a particular society, and that apparently quaint cultural practices tend to reflect man’s relationship to his environment. For instance, the prohibition on beef consumption among Hindus in India is not primarily due to religious injunctions. These religious beliefs are themselves epiphenomenal to the real reasons: that cows are more valuable for pulling plows and producing fertilizers and dung for burning. Cultural materialism places an emphasis on “-etic” over “-emic” explanations, ignoring the opinions of people within a society and trying to uncover the hidden reality behind those opinions. Naturally, when the Yanomamö explained that warfare and fights were caused by women and blood feuds, Harris sought a material explanation that would draw upon immediate survival concerns. Chagnon’s data clearly confirmed that the larger a village, the more likely fighting, violence, and warfare were to occur. In his book Good to Eat: Riddles of Food and Culture, Harris argued that fighting occurs more often in larger Yanomamö villages because these villages deplete the local game levels in the rainforest faster than smaller villages, leaving the men no option but to fight with each other or to attack outside groups for meat to fulfil their protein macronutrient needs.
When Chagnon put Harris’s materialist theory to the Yanomamö they laughed and replied, “Even though we like meat, we like women a whole lot more.”4 Chagnon believed that smaller villages avoided violence because they were composed of tighter kin groups—those communities had just two or three extended families and had developed more stable systems of borrowing wives from each other. Despite the Yanomamö’s rebuke, it is evident from his popular book Cows, Pigs, War and Witches and his technical book Cultural Materialism: The Struggle for A Science of Culture that Harris saw himself as the world’s foremost anthropological theoretician. His mission was to take anthropology to new heights of knowledge by uncovering the material logic behind the world’s belief systems and social behavior. Harris threw down a challenge to Chagnon that would shape his Amazon research in 1975: prove that the Yanomamö get more daily protein than what is in a Big Mac and Harris would eat his own hat. While Chagnon and another anthropologist Raymond Hames did indeed find this to be the case in 1975, another anthropologist Kenneth Good found daily protein consumption to be just under that of a Big Mac, leaving the debate in limbo.5 Nonetheless, these findings probably left Harris uncomfortable, and his opposition to human behavioral ecology and sociobiology continued to escalate. One evening, Chagnon attended a debate about sociobiology between Edward Wilson and Harris at the Smithsonian Institute. At one point, Harris began describing the dangers of sociobiology, and then paused. “Did you know,” he asked, “that there is a certain anthropologist, a man who has become famous for his long-term studies of Amazon Indians, who claims, ladies and gentleman, that this tribe not only has a gene for warfare, but he claims they also have genes for infanticide!” This was such a caricature of Chagnon’s actual view that he challenged Harris to defend it during the Q&A that followed the debate. Questions were handed from the audience to the debaters written on cards, and Chagnon demanded that Harris “Identify the anthropologist who claimed that the people he studied had genes for warfare and infanticide.” Throughout the question time period Harris kept shuffling Chagnon’s question to the back of the pile until the moderator brought the event to a close and thanked everyone for their attendance. Unwilling to let Harris off the hook, Chagnon rose from his seat in the audience and again demanded that Harris identify this famous anthropologist who had spoken of genes for warfare and infanticide. The audience immediately recognized Chagnon from his documentaries and began shouting, “Let him speak! Let him speak!” Momentarily taken aback, Harris confessed that if he had misunderstood Chagnon that he was welcome to return to anthropology, to which Chagnon replied that he had never left anthropology. While Harris and other anthropologists in the United States continued to criticize Chagnon, his standing began to deteriorate on another front. From the moment he arrived in the Amazon, Chagnon maintained cordial relations with a missionary priest of the Salesians of Don Bosco. In fact, Chagnon and the priest became such good friends that the priest asked Chagnon to kill one of his fellow missionaries for him, a man who had broken his vows of celibacy by sleeping with a Yanomamö woman. The priest worried that this could bring shame to the Salesian order. Of course, Chagnon refused, and his refusal strained their relationship. 
Their relationship worsened when Chagnon discovered that the missionaries had been distributing shotguns to the natives and that these were being used in warfare. Furthermore, all of Chagnon’s recommendations for preventing measles outbreaks were ignored by the Salesians, who built missions and tried to have the Yanomamö concentrate around them, which helped the disease to spread rapidly. Their relationship finally collapsed altogether after Chagnon cooperated with a documentary that painted the Salesians in a less than flattering light. By the early 1990s, the missionaries were increasingly worried about Chagnon’s presence in the Amazon, especially when it came to light that the BBC and Nova would be producing a new documentary in the rainforest about his dispute with Marvin Harris. Around the same time, the Salesians were attempting to block his lifetime of fieldwork in the Amazon, and they successfully lobbied Maria Luisa Allais, the head of the Venezuelan Indian Commission, to refuse him a permit he required for re-entry. Then, in 1993, tragedy struck in the Amazon when gold miners crossed the border from Brazil and slaughtered a number of Yanomamö, including women and children. The explorer Charles Brewer-Carías was chosen to head a presidential commission into the massacre, and he wanted Chagnon on the commission as one of the few anthropologists in the world who spoke Yanomamö. When President Carlos Perez of Venezuela learned that Chagnon had been denied an entry permit, he telephoned the Ministry of Education and ordered them to issue Chagnon with one at once. A visibly nervous Maria Luisa Allais offered Chagnon his papers. That Chagnon went above the head of the Indian Commission and was now installed on the presidential commission investigating the massacre only infuriated the Salesians further. They believed that they ought to be the ones conducting the investigation. On the very first day of their investigation at the site of the massacre, a helicopter arrived bearing men armed with machine guns and a Salesian bishop, who ordered Brewer-Carías and Chagnon to leave. With the government on the brink of a coup and unwilling to enforce law and order in the deep interior of the Amazon, the commission to investigate the massacre quickly fell apart. Chagnon was left with lifelong regrets that there had been no justice for the dead. Notwithstanding their bitter intellectual rivalry, Marvin Harris would play no role in the sensational accusations that Chagnon had behaved unethically while conducting his research in the Amazon. These would be made by a coalition of less prominent anthropologists, some with official functions in activist organizations, which had been formed to oppose Chagnon in any way possible. David Maybury-Lewis, the head of the organization Cultural Survival, was an early critic of Chagnon, and one of the first anthropologists to complain about the subtitle of Yanomamö: The Fierce People. Maybury-Lewis’s student Terence Turner, president of Survival International USA, was an even more outspoken critic of Chagnon. Survival International, an organization that has more recently attacked Steven Pinker for The Better Angels of Our Nature, has long promoted the Rousseauian image of a traditional people who need to be preserved in all their natural wonder from the ravages of the modern world. Survival International does not welcome anthropological findings that complicate this harmonious picture, and Chagnon had wandered straight into their line of fire.
Their website still features a petition denouncing Chagnon’s characterization of the Yanomamö, signed by a handful of his critics, “We absolutely disagree with Napoleon Chagnon’s public characterization of the Yanomami as fierce, violent, and archaic people.” For years, Survival International’s Terence Turner had been assisting a self-described journalist, Patrick Tierney, as the latter investigated Chagnon for his book, Darkness in El Dorado: How Scientists and Journalists Devastated the Amazon. In 2000, as Tierney’s book was being readied for publication, Turner and his colleague Leslie Sponsel wrote to the president of the American Anthropological Association (AAA) and informed her that an unprecedented crisis was about to engulf the field of anthropology. This, they warned, would be a scandal that, “in its scale, ramifications, and sheer criminality and corruption, is unparalleled in the history of Anthropology.” Tierney alleged that Chagnon and Neel had spread measles among the Yanomamö in 1968 by using compromised vaccines, and that Chagnon’s documentaries depicting Yanomamö violence were faked by using Yanomamö to act out dangerous scenes, in which further lives were lost. Chagnon was blamed, inter alia, for inciting violence among the Yanomamö, cooking his data, starting wars, and aiding corrupt politicians. Neel was also accused of withholding vaccines from certain populations of natives as part of an experiment. The media were not slow to pick up on Tierney’s allegations, and the Guardian ran an article under an inflammatory headline accusing Neel and Chagnon of eugenics: “Scientists ‘killed Amazon Indians to test race theory.'” Turner claimed that Neel believed in a gene for “leadership” and that the human genetic stock could be upgraded by wiping out mediocre people. “The political implication of this fascistic eugenics,” Turner told the Guardian, “is clearly that society should be reorganised into small breeding isolates in which genetically superior males could emerge into dominance, eliminating or subordinating the male losers.” By the end of 2000, the American Anthropological Association announced a hearing on Tierney’s book. This was not entirely reassuring news to Chagnon, given their history with anthropologists who failed to toe the party line. During the Freeman-Mead controversy, in which New Zealand anthropologist Derek Freeman had critiqued Margaret Mead’s book Coming of Age in Samoa, the American Association for the Advancement of Science’s magazine Science had praised Freeman’s critique at the same time the American Anthropological Association had denounced it. Thereafter, the AAA denounced Science and the American Association for the Advancement of Science for not denouncing Freeman. Now, an Academies of Sciences investigation concluded that Tierney’s claims in Darkness in El Dorado were “demonstrably false,” and that his book represented “a grave disservice…to science itself.” The American Anthropological Association, on the other hand, stated that, “Darkness in El Dorado has contributed a valuable service to our discipline.” A taskforce was formally set up after this, not to “investigate” Chagnon, which would have violated the AAA’s Code of Ethics, but to “inquire” about Tierney’s allegations. Behind the closed doors of that inquiry, tensions developed. “The book is just a piece of sleaze, that’s all there is to it,” the head of AAA’s taskforce Jane Hill wrote to another anthropologist about Tierney’s book. 
“But I think the AAA had to do something,” Hill added, “because I really think that the future of work by anthropologists with indigenous peoples in Latin America—with a high potential to do good—was put seriously at risk by its accusations.”6 Tormented to learn that the anthropological community was actually taking the accusations of Darkness in El Dorado seriously, Chagnon was hospitalized after he collapsed from stress. Suspecting that the taskforce had been constituted to find Chagnon guilty of at least some of Tierney's accusations, the anthropologist Raymond Hames resigned from the panel. In 2002, the AAA accepted the taskforce's report. Although the taskforce was not an “investigation” concerned with any particular person, for all intents and purposes it blamed Chagnon for portraying the Yanomamö in a way that was harmful, and held him responsible for prioritizing his research over their interests. Nonetheless, the most serious claims Tierney made in Darkness in El Dorado collapsed like a house of cards. Elected Yanomamö leaders issued a statement in 2000 stating that Chagnon had arrived after the measles epidemic and saved lives: “Dr. Chagnon—known to us as Shaki—came into our communities with some physicians and he vaccinated us against the epidemic disease which was killing us. Thanks to this, hundreds of us survived and we are very thankful to Dr. Chagnon and his collaborators for help.”7 Investigations by the American Society of Human Genetics and the International Genetic Epidemiology Society both found Tierney's claims regarding the measles outbreak to be unfounded. The Society of Visual Anthropology reviewed the so-called faked documentaries, and determined that these allegations were also false. Then an independent preliminary report released by a team of anthropologists dissected Tierney's book claim by claim, concluding that all of Tierney's most important assertions were either deliberately fraudulent or, at the very least, misleading. The University of Michigan reached the same conclusion. “We are satisfied,” its Provost stated, “that Dr. Neel and Dr. Chagnon, both among the most distinguished scientists in their respective fields, acted with integrity in conducting their research… The serious factual errors we have found call into question the accuracy of the entire book [Darkness in El Dorado] as well as the interpretations of its author.”8 Academic journal articles began to proliferate, detailing the flawed inquiry and conclusions of the 2002 taskforce. By 2005, only three years later, the American Anthropological Association voted to withdraw the 2002 taskforce report, re-exonerating Chagnon. A 2000 statement by the leaders of the Yanomamö and their Ye'kwana neighbours called for Tierney's head: “We demand that our national government investigate the false statements of Tierney, which taint the humanitarian mission carried out by Shaki [Chagnon] with much tenderness and respect for our communities.”9 The investigation never occurred, but Tierney's public image lay in ruins and would suffer even more at the hands of historian of science Alice Dreger, who interviewed dozens of people involved in the controversy. Although Tierney had thanked a Venezuelan anthropologist for providing him with a dossier of information on Chagnon for his book, the anthropologist told Dreger that Tierney had actually written the dossier himself and then misrepresented it as an independent source of information.10 By 2012, Tierney had disappeared. 
He would not write or appear in public again. Chagnon, on the other hand, was elected to the National Academy of Sciences, the most prestigious accolade that can be bestowed on a scientist after the Nobel Prize. Chagnon considered this a vindication, but to this day, some anthropologists cling to Tierney's allegations, or some revised version of them. Turner abandoned many of Tierney's claims but spent years looking for further evidence against Chagnon. In 2013, the anthropologist David Price wrote an article for the radical left-wing outlet CounterPunch castigating the National Academy of Sciences for electing Chagnon to such a prestigious position, and cited Tierney's book without bothering to mention that its author and argument had since been discredited. Anthropologist Marshall Sahlins, who had also praised Tierney's book in earlier times, resigned from the National Academy of Sciences to protest Chagnon's election. Sahlins's protégé David Graeber explained that, “Sahlins is a man of genuine principle… He's never had a lot of patience for shirtless macho Americans who descend into jungles, declaring their inhabitants to be violent savages, and then use that as an excuse to start behaving like violent savages themselves.” The row between Chagnon's detractors and supporters continues to this day, in spite of the available evidence. As Alice Dreger told Graeber on social media in 2013, “If Sahlins can't face the facts about what Chagnon didn't do, then maybe he shouldn't be in the Nat Acad of Sci anyway.” Scientific American has described the controversy as “Anthropology's Darkest Hour,” and it raises troubling questions about the entire field. In 2013, Chagnon published his final book, Noble Savages: My Life Among Two Dangerous Tribes—The Yanomamö and the Anthropologists. Chagnon had long felt that anthropology was experiencing a schism more significant than any difference between research paradigms or schools of ethnography—a schism between those dedicated to the very science of mankind, anthropologists in the true sense of the word, and those opposed to science: either postmodernists, vaguely defined, or activists disguised as scientists who seek to place indigenous advocacy above the pursuit of objective truth. 
Chagnon identified Nancy Scheper-Hughes as a leader in the activist faction of anthropologists, citing her statement that we “need not entail a philosophical commitment to Enlightenment notions of reason and truth.”11 Whatever the rights and wrong of his debates with Marvin Harris across three decades, Harris’s materialist paradigm was a scientifically debatable hypothesis, which caused Chagnon to realize that he and his old rival shared more in common than they did with the activist forces emerging in the field: “Ironically, Harris and I both argued for a scientific view of human behavior at a time when increasing numbers of anthropologists were becoming skeptical of the scientific approach.” When Nancy Scheper-Hughes wrote that “if we cannot begin to think about social institutions and practices in moral or ethical terms, then anthropology strikes me as quite weak and useless,”12 Marvin Harris added that “if we cannot begin to think about social institutions and practices in scientific-objective terms then anthropology will be even weaker and more useless.”13 Both Chagnon and Harris agreed that anthropology’s move away from being a scientific enterprise was dangerous. And both believed that anthropologists, not to mention thinkers in other fields of social sciences, were disguising their increasingly anti-scientific activism as research by using obscurantist postmodern gibberish. Observers have remarked at how abstruse humanities research has become and even a world famous linguist like Noam Chomsky admits, “It seems to me to be some exercise by intellectuals who talk to each other in very obscure ways, and I can’t follow it, and I don’t think anybody else can.” Chagnon resigned his membership of the American Anthropological Association in the 1980s, stating that he no longer understood the “unintelligible mumbo jumbo of postmodern jargon” taught in the field. 14 In his last book, Theories of Culture in Postmodern Times, Harris virtually agreed with Chagnon. “Postmodernists,” he wrote, “have achieved the ability to write about their thoughts in a uniquely impenetrable manner. Their neo-baroque prose style with its inner clauses, bracketed syllables, metaphors and metonyms, verbal pirouettes, curlicues and figures is not a mere epiphenomenon; rather, it is a mocking rejoinder to anyone who would try to write simple intelligible sentences in the modernist tradition.” Harris was generally recognized as the most prolific and influential theorist of anthropology in recent decades. Chagnon was one of anthropology’s last great ethnographers in the vein of Mead and Malinowski. And, in their latter years, both men watched as the field became unrecognizable under the spell of, as Harris put it, the “mantra of Foucault” with its consequent suspicion of objective knowledge independent of the subjective person. By 2004, the cultural materialists who had long disputed Chagnon’s behavioral ecological views, but who shunned postmodernism, were among his greatest supporters. Writing in American Anthropologist, Daniel R. Gross, Marvin Harris’s former student and research collaborator on the protein-warfare theory, came to Chagnon’s defense by pointing out numerous instances where the American Anthropological Association had over-relied on the validity of subjective points of view in its report. 
Gross argued that the turn toward postmodernism in the field had irrevocably altered not only the content of anthropological research, but also how the American Anthropological Association, as a professional body, conducts investigations and handles allegations of misconduct. The AAA's suspicion of the authenticity of objective evidence, Gross wrote, “reflects a philosophical stance of postmodern scholarship, in which objective truth may be seen as unattainable and contingent.” It was Daniel R. Gross, along with Thomas A. Gregor, who forced the American Anthropological Association to a vote to rescind the 2002 report of the taskforce. The quest for knowledge of mankind has in many respects become unrecognizable in the field that now calls itself anthropology. According to Chagnon, we've entered a period of “darkness in cultural anthropology.” With his passing, anthropology has become darker still.
Matthew Blackwell is an Australian writer and graduate of the University of Queensland where he studied economics and anthropology. Follow him on Twitter @MBlackwell27. Feature photo courtesy of the Chagnon family. You can help preserve Chagnon's legacy by donating to the preservation of his film documentaries on the Yanomamö here.
1, 2 Albert, Bruce. “Yanomami ‘Violence’: Inclusive Fitness or Ethnographer's Representation?” (1989): 637–640.
3 Harris, Marvin. Cultural Materialism: The Struggle for a Science of Culture. AltaMira Press, 2001.
4 Kappeler, Peter M., and Joan B. Silk. Mind the Gap. New York, NY: Springer, 2010.
5 Chagnon, Napoleon A., and Raymond B. Hames. “Protein Deficiency and Tribal Warfare in Amazonia: New Data.” Science 203.4383 (1979): 910–913.
6 Dreger, Alice. “Darkness's Descent on the American Anthropological Association.” Human Nature 22.3 (2011): 225–246.
7, 9 Gregor, Thomas A., and Daniel R. Gross. “Guilt by Association: The Culture of Accusation and the American Anthropological Association's Investigation of Darkness in El Dorado.” American Anthropologist 106.4 (2004): 687–698.
8 Cantor, Nancy. The University of Michigan Statement on ‘Darkness in El Dorado.’ 2000. Available at: http://ns.umich.edu/Releases/2000/Nov00/r111300a.html
10 Dreger, Alice. Galileo's Middle Finger: Heretics, Activists, and One Scholar's Search for Justice. Penguin Books, 2016.
11, 12 Williams, Gareth. The Other Side of the Popular: Neoliberalism and Subalternity in Latin America. Duke University Press, 2002.
13 Harris, Marvin. Theories of Culture in Postmodern Times. Rowman Altamira, 1998.
14 Chagnon, Napoleon A. Noble Savages: My Life Among Two Dangerous Tribes—The Yanomamö and the Anthropologists. Simon and Schuster, 2013.
Check Point Full Disk Encryption gives you the highest level of Data Security. It combines boot protection, Preboot authentication, and strong encryption to ensure that only authorized users can access data stored in desktop and laptop computers. Overview of Computer Data Security With computer security becoming increasingly important, almost all focus has been on securing large, multi-user machines. This makes sense because mainframes and large servers are not only major repositories of data, they are also crucial to daily operations. However, there is an equally serious and growing risk of compromise to the many smaller, mostly single-user, machines, such as desktop, laptop, tablet PCs, as well as Mobile Hand-held devices, such as Smart Phones, Smart Pads (iOS/Android etc) etc. These computers frequently store an enterprise's most current and valuable information. Increasingly, portable computers also store passwords, logon scripts, and certificates used to access the enterprise network. The small size and portability of these computers mean that they are also much more vulnerable than large machines are to theft or illicit access. An additional and often unrecognized problem is that a PC is the most available and vulnerable starting point for access to a network. Studies of computer crime reveal that insiders pose the largest threat. Clearly, providing secure PCs is an essential component of establishing network security. Data Security Types There are two general types of protection for data at rest: file encryption and full disk encryption. This illustrates the difference between unprotected data, standard file encryption, and Full Disk Encryption protection. File encryption enables users to protect vital data on a file-by-file basis, which is a good solution when, for example, transferring files between users or computers. However, organizations often find file encryption insufficient since they then have to rely on the users' ability to secure the correct information and their willingness to consistently follow security procedures. Full Disk Encryption Unlike file encryption, which is not mandatory and therefore dependent on user discretion, Full Disk Encryption provides boot protection and sector-by-sector disk encryption. Boot protection means authenticating users before a computer is booted. Full Disk Encryption uses the user's credentials to derive a user key, which is used to encrypt the disk volume keys. The disk volume keys encrypt the PC disk volumes. This prevents unauthorized persons from accessing the operating system using authentication bypass tools at the operating system level, or alternative boot media to bypass boot protection. Disk encryption includes the system files, temp files, and even deleted files. Encryption is user-transparent and automatic, so there is no need for user intervention or user training. There is no user downtime because encryption occurs in the background without noticeable performance loss. This provides enforceable security that users cannot bypass. Because the data on the disk is encrypted, it is inaccessible to any unauthorized persons. Full Disk Encryption Features and Benefits Full Disk Encryption secures desktop and laptop computers from unauthorized physical access by using both boot protection and full disk encryption. Full Disk Encryption provides the following security functions: Strong Multi-User Authentication Support for Multi-type Authentication Methods using Smart Card, Password etc. 
Secure Remote Help for users who have forgotten their passwords Central management, deployment, configuration, monitoring and reporting. Single Sign-On, Password Synchronization within the OneCheck concept. Advanced Security features for Preboot Bypass using Network Authorization, TPM and Enhanced Network Location Awareness. Audit logging of events such as successful and failed logon attempts With Full Disk Encryption, all logical partitions/volumes are boot protected and encrypted, even if the disk is removed and loaded into a controlled machine. The integration of boot protection and automatic encryption provides a high degree of security with minimal impact on users. This allows an organization to determine the security level instead of leaving it up to the user to encrypt information. Boot protection prevents subversion of the operating system or the introduction of rogue programs, while sector-by-sector encryption makes it impossible to copy individual files for brute force attacks. Full Disk Encryption guarantees that unauthorized users cannot access or manipulate information on a protected computer, from available, erased, or temporary files. Full Disk Encryption safeguards the operating system and the important system files (which often contain clues to passwords), shared devices, and the network. The Full Disk Encryption installation on the user's PC contains all the necessary user account information, keys, and other data to protect the PC. This means there is no central user database or key repository to manage. Benefits for Administrators As a Full Disk Encryption administrator, you have centralized control of a decentralized system where you can very easily perform: Installation, modification and removal of Full Disk Encryption on users from computers in the network. Configuration and deployment of a wide range of security and policy settings on users PCs. Modification of security policy settings to suit the needs of the entire user population, selected groups of users, or individual users. The daily administration of the system. Deploying Full Disk Encryption to One or Many Computers Using just one installation policy, you can deploy Full Disk Encryption to anywhere from one to hundreds of thousands of users from a central management. This can be done either in Online Mode or Offline Mode. Operative System Support Check Point Endpoint Data Security Full Disk Encryption is available both on Microsoft and Apple Operating Systems, using the same Preboot Environment, Recovery Console and the majority of other features. This significantly lower the cost for keeping administrators and help desk personnel trained on a separate product per OS. It simplifies the user experience both for the administrator and user. All cryptographic functionality in FDE is implemented using the Check Point Crypto Core. Check Point Crypto Core is a 140-2 Level 1 cryptographic module for Microsoft Windows OS, Check Point Preboot Environment OS and Apple Mac OS X. The module provides cryptographic services accessible in preboot mode, kernel mode and user mode on the respective platforms through implementation of platform specific binaries. The FIPS certificate number is 2788. The certificate and the security policy are available from the NIST website here: Full Disk Encryption can be installed using the following algorithms and key lengths: XTS-AES-128 (available from E80.64 and higher on UEFI machines.) XTS builds on top of XEX and extends this by a tweak value and ciphertext stealing. 
The mode defined by IEEE uses an AES cipher. The implementation of AES supports 128 bit key lengths.
XTS-AES-256 (available from E80.64 and higher on UEFI machines.) XTS builds on top of XEX and extends this by a tweak value and ciphertext stealing. The mode defined by IEEE uses an AES cipher. The implementation of AES supports 256 bit key lengths.
AES-CBC-256: The implementation of AES supports 256 bit key lengths. It is implemented in CBC mode with a block size of 128 bits. On CPUs supporting the Intel AES-NI instructions, the AES-NI instructions are used automatically in order to speed up the execution. Note that the FIPS validation covers both modes: software or AES-NI (hybrid).
Blowfish: The implementation of Blowfish supports 256 bit key lengths. It uses the full 16 rounds. It is implemented in CBC mode with a block size of 64 bits.
3DES: The implementation of 3DES uses 168 bits of key length. It is implemented in CBC mode with a block size of 64 bits.
CAST: The implementation of CAST supports 128 bit keys. It is implemented in CBC mode with a block size of 64 bits.
Implementation of Software-Based Encryption
The software disk encryption uses one disk sector as the smallest block (512 bytes). When using Blowfish, the relative sector number within the logical volume is first encrypted, and the result is used as the initialization vector (IV) for the sector encryption. For all other algorithms, the relative sector number is used as the IV. Each sector is encrypted in 64-bit CBC or, for AES, 128-bit CBC mode, equal to the block size of the algorithm. Mac OS X note: The Mac version of FDE only supports AES, implemented as described above.
Implementation of Support for Self-Encrypting Drives (SED/OPAL)
Self-Encrypting Drives (SED/Opal) disks are only supported and implemented in the Windows version of Full Disk Encryption. When Check Point Full Disk Encryption is used together with Self-Encrypting Drives (SED), all of the key management, remote-help, authentication modes, and other features described in the following sections still apply. The only difference is that the disk handles the disk sector encryption. The key size and mode used are specific to the disk vendor; however, the OPAL standard that FDE implements mandates at least 128-bit AES. Unfortunately the standard omits the encryption mode, and consequently some vendors have chosen to use AES in ECB mode. For details on the mode used, see the specific disk vendor documentation. In order to lock or unlock the SED, the device key, described in the user authentication section, is used. The device key is encrypted, as described in the following sections, using encryption based on the authentication model deployed. Two modes exist for the SED (Opal) disks, depending on whether Windows has taken ownership of the disk or not. Setting up disks from manufactured-inactive state: This model enables locking on the global range and sets new, unlocked, ranges for the EFI system partition, the FDE system area partition, and the GPT tables. Unlocking at preboot, for example, unlocks the global range. Setting up disks that have been activated by Windows 8 and later: These disks are in manufactured state and probed with ioctls from the Windows ehstor "band management" API to find out the status of the disk. Enabling locking is done via these ioctls. The AUTH_KEY for the bands is set to the FDE device master key for all ranges, including the unlocked ranges. Locking is then enabled on all ranges except the EFI system partition and the FDE system area partition. 
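To make the software-based sector encryption described above under "Implementation of Software-Based Encryption" concrete, here is a minimal illustrative sketch in Python using the third-party cryptography package. This is not Check Point's code; the function names, the byte order of the IV, and the key handling are assumptions made purely for illustration. It only mirrors the documented idea for the AES case: every 512-byte sector is encrypted independently in CBC mode, with the sector's relative number serving as the initialization vector.

from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

SECTOR_SIZE = 512  # one disk sector is the smallest encryption unit

def encrypt_sector(volume_key: bytes, sector_number: int, plaintext: bytes) -> bytes:
    # The relative sector number is used as the IV, padded out to the 16-byte
    # AES block size (little-endian order is an arbitrary illustrative choice).
    iv = sector_number.to_bytes(16, "little")
    encryptor = Cipher(algorithms.AES(volume_key), modes.CBC(iv)).encryptor()
    # 512 bytes is an exact multiple of the 16-byte AES block, so no padding is needed.
    return encryptor.update(plaintext) + encryptor.finalize()

def decrypt_sector(volume_key: bytes, sector_number: int, ciphertext: bytes) -> bytes:
    iv = sector_number.to_bytes(16, "little")
    decryptor = Cipher(algorithms.AES(volume_key), modes.CBC(iv)).decryptor()
    return decryptor.update(ciphertext) + decryptor.finalize()

Because the IV is derived from the sector number rather than stored anywhere, any sector can be decrypted on its own, which is what makes transparent random access to an encrypted volume possible.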
Each encrypted partition has a single partition encryption key. The algorithm chosen for the partition encryption mandates the key type and length for the partition encryption functionality. Each partition key is encrypted with a device encryption key. The device encryption key is unique per device (PC). The device encryption key is always a 256 bit AES key, implying that the internal key encryption is 256-bit AES. The encrypted partition keys are stored on device level. The device encryption key will be encrypted using the authentication method chosen for the individual user. For example, if password authentication is used, the device encryption key will be encrypted and stored in the user database, per user, using a password derived key. For Smart Cards, the partition key encryption key will be encrypted using RSA public key cryptography. Password strength depends on the length, and the type of character set involved. Full Disk Encryption will calculate based on a password with a random use of all characters; i.e. Upper Case, Lower Case, Numeric, and Special Characters (94 characters total). Remote Help Security Architecture When performing remote help, the user gives a challenge to the helper. The helper then gives the user a response that unlocks the device. The login is performed using the helpers login info. Schematically the remote help algorithm is: Challenge + Shared Secret = Response, the response is used as a password to encrypt the device key and the shared secret in the same way as password based encryption described above. The encrypted key blobs are stored in the local device database. The challenge is a random number generated by the DRBG. The "+" is a hash algorithm and Shared Secret is a device unique 256-bit AES key created with the DRBG. Hashing is done using the key-encryption algorithm run in "one-way mode" by using the input data as key. The shared secret is encrypted with a 2048-bit RSA key received from the Management Server and transmitted to the server in the recovery data. RSA encryption is done. RSA encryption uses PKCS#1 v1.5 padding in E80.61 and earlier and OAEP padding starting with E80.62. The transmission to the server is done using TLS. Deployment Mode is required before encryption can start because an installed Full Disk Encryption client first needs to: Retrieve device policy information provided from the Endpoint Management Server Gather and deliver requested information back to the Endpoint Management Verify that the policy configuration is valid/complete for encryption to be initiated If enabled, the user acquisition will continue to acquire new users until the configured condition has been fulfilled: Number of accounts This setting specifies the number of accounts to acquired, if this number of acquired users is reached, then the user acquisition is considered complete. Limitation of acquisition period When enabled, this setting specifies that the acquisition should be considered complete after a certain number of days even though the number of acquired users hasn't been reached. Note: At least one user still has to be acquired Deployment Mode Step-by-Step Overview: First stage - Deployment Generation of device/master key. Status shown and reported: Init Wait for policy to be delivered. Status shown and reported: Wait for policy Wait for User Acquisition to finish. Status shown and reported: User Acquisition Verify that a user has the permission to log on. 
Status shown and reported: Verify Setup Activate the functionality for Preboot Bypass or Temporary Preboot Bypass, if enabled. Status shown and reported: Verify Setup Activate the functionality for Remote Help. Status shown and reported: Verify Setup Creating Preboot OS System Area. Status shown and reported: Setup Protection Wait for the recovery data to be delivered. Status shown and reported: Delivering Recovery Information Activate Preboot (update boot record on boot volume). Status shown and reported: Setup Protection Second stage Post-deployment Remove old restore points. Status shown and reported: Setup Protection Update boot records on all volumes. Status shown and reported: Setup Protection Remove intermediate files. Status shown and reported : Setup Protection Notify that reboot is required. Status shown and reported: Awaiting Reboot Third and final stage Enforcement Activate background encryption. Status shown and reported: Encrypting/Encrypted/Decrypted Preboot description, major difference depending on firmware type (BIOS/UEFI) etc. Password - Username and password. This is the default method. The password can be the same as the Windows password or created by the user or administrator. Smart Card - A physical card that you associate with a certificate. This is supported in E80.30 clients and higher. Users must have a physical card, an associated certificate, and Smart Card drivers installed. Dynamic Token - A physical device that generates a new password each time users start their computers. This is supported in E80.60 clients and higher on E80.60 and higher management. This can be configured for specified users and not as the global Pre-boot authentication method. Network Authorized Preboot Bypass Network Authorized Preboot Bypass (formerly called UOL / Unlock On LAN) uses preboot network capabilities to access RSA private keys stored on the Endpoint Management Server in order to achieve a secure boot sequence into the OS with no user interaction. The Management Server generates a 2048-bit RSA key pair and the internal Check Point CA on the Management Server certifies the public key. An X.509 certificate is generated. Security Overview Step-by-Step: As part of the client device policy, the X.509 certificate is provisioned to the clients when Unlock on LAN is enabled. This is sent under the TLS session that all FDE messages use for server communication (See Server Connection section). The public key, embedded in the certificate, is used to encrypt the disk keys. This is done with a 256-bit random AES-CBC key encryption key (KEK) that encrypts the disk unlock key (also called the device master key). The KEK is, in turn, encrypted with the public RSA key. E80 61 and earlier uses PKCS#1 1.5 padding. From E80.62, OAEP padding is used. The encryption creates an encrypted key blob which is stored in the FDE system area on the client machine. When in pre-boot, the pre-boot detects the Unlock on LAN policy and tries to connect to the Endpoint Server, or one of its Policy Servers (connection points). This is done in the same way as the DA, using TLS (SSL), with AES-256, to encrypt the traffic and a server certificate issued by the internal CA to strongly authenticate the server. The encrypted key-blob created in step 3 is sent, on the TLS/SSL encrypted connection, to the server. The server decrypts the device key using the corresponding RSA private key, only stored on the Management Server. 
The decrypted disk unlock key is sent back to the client, still within the same TLS/SSL connection. The client uses the disk unlock key to decrypt the local disk volume keys. Legacy Preboot Bypass The legacy preboot bypass (formerly known as WIL / Windows Integrated Logon) is only recommended, due to the feature's lowering of the overall security below what is considered encryption strength, to be used for computers that are in a physically secure environment, protected by concrete walls, alarms, lock systems and other security measures to ensure that the computer will not leave the compound. To additionally protect the computer from theft, there are several security features specifically designed to disable the legacy preboot bypass in the event that the physical security is in some way compromised either by theft, abuse or similar. Security Triggers Available to Disable Preboot Bypass If a physically secured computer with preboot bypass enabled (for example an ATM), still should fall into the wrong hands, the following features are available to detect being vulnerable and self-trigger the disabling of the preboot bypass and enabling the preboot environment access control and authentication requirement: Legacy Network Location Awareness To make sure that the client is connected to the correct network, the computer pings a defined number of IP addresses during the boot process. If none of the IP addresses replies in a timely manner, the computer might have been removed from the trusted network and Preboot Bypass is disabled. The computer reboots automatically and the user must authenticate in Preboot. If one IP address replies, Preboot Bypass remains enabled. Note: While this option is enabled, Windows cannot be started in Safe Mode. If selected, the client generates a hardware hash from identification data found in the BIOS and on the CPU. If the hard drive is stolen and put in a different computer, the hash will be incorrect and Preboot Bypass will be disabled. The computer reboots automatically, and the user must authenticate in Preboot. Warning: Disable Preboot Bypass before upgrading BIOS firmware or replacing hardware. After the upgrade, the hardware hash is automatically updated to match the new Max failed logon attempts If the number of failed logon attempts exceeds the number of tries specified, Preboot Bypass is disabled. The computer automatically reboots and the user must authenticate in Preboot. If the Maximum Failed Logon value is set to "1" and the end-user logs on incorrectly, Preboot Bypass is not disabled because the number of logon failures has not exceeded the number entered in this property. However, if the subsequent attempt to log onto Windows fails, Preboot Bypass is disabled. Introduction to Remote Help functionality Remote Help is a feature for administrators and help desk personnel to remotely help users to access computers protected with Full Disk Encryption, if the users password has been forgotten. The basic functionality is intended to: Help users that have forgotten their password to change their password Allow users to access a machine where Windows Integrated Logon has been turned off due to some event Help a user whose account has been locked to access and unlock the account Temporarily grant access to a computer (until reboot) to enable service operations on the computer User Interface User Experience There are several settings and procedures that need to be properly handled when doing Remote Help in the Preboot Environment. 
Previously described modes of Remote Help will in this section be shown from the User/Administrator perspective. Preboot User Experience: Click the lower right Remote Help button for below dialog to display. Once the user has called, or in another way contacted the administrator or help desk, the selected mode of Remote Help is selected, either Password Change or One-Time Logon. Remote Help configuration in Endpoint Management: Overview of Remote Help Infrastructure The Remote Help infrastructure is divided into "Legacy" and "E80", with the major difference in E80 being the PKI security enhancement of each device having a unique key, while in the legacy versions having a much simpler design with remote helper accounts created at deployment, but being the same on all systems installed with the same installation profile. There is still in E80 the possibility to use the legacy way of doing remote help, this setting is then called "User-bound Remote Help" and can be configured on the "Internal Account" user type. Setting up the secure E80.xx Remote Help PKI: When the UEPM server is installed, an asymmetric key pair is created and stored in the server database table named ASYM_KEY. When each FDE device policy is generated in XML, the public key is inserted into the device policy. The FDE device policy XML is stored in the server database. The FDE device policy XML is transferred to the client. A remote helper account is created on the endpoint with uniqe information for that device. The unique information is only generated once, at the first time the client receives the public key. New recovery information is generated on the client containing the device's unique information The recovery information is transferred to the server in a message named SET_FDE_RECOVERY_DATA. When received on the server, the recovery data is retrieved from the message. The recovery data is then stored in the server database in a table named RECOVERY_DATA. Change Password in Preboot If allowed, the user can at any time decide to change the preboot password from the preboot environment. The user enters the user name and password and instead of pressing enter or clicking on "OK", the user instead clicks on "Change Password" and the below dialog will be shown: Preboot Options Menu In the lower right corner of the Preboot Environment, the Options menu panel can be shown if selected The following features are available from the options menu: Enable HII Keyboard Layout The help dialog can either be used with the standard text shown below, or be changed to a specific text message chosen by the Endpoint administrator. When using a tablet computer or a regular laptop with touch screen, the user can select to use the touch screen to logon using the Virtual Keyboard instead of the build-in or external keyboard. When using a tablet computer there is often no availability of either build-in or external keyboard so using the Virtual Keyboard becomes the standard procedure. When selected the virtual keyboard and doing a preboot logon, the virtual keyboard will stay active and be automatically shown the next time the user starts the computer. At installation the language of the preboot environment is set based on the language used within Windows, if that language is supported. Still there is a need to be able to change the language if a non-native user of the language selected at installation would need to logon. The languages shown in the above displayed preboot dialog are currently available to be selected. 
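As a rough illustration of the challenge/response scheme described earlier under Remote Help Security Architecture, the sketch below shows the general flow in Python. It is a conceptual model only: the product derives its one-way function from the key-encryption algorithm run in "one-way mode" and takes its random numbers from an approved DRBG, whereas this sketch substitutes HMAC-SHA-256 and os.urandom, and every name in it is hypothetical.

import hashlib
import hmac
import os

def new_challenge() -> bytes:
    # In the product the challenge comes from the DRBG; os.urandom stands in here.
    return os.urandom(16)

def compute_response(challenge: bytes, shared_secret: bytes) -> bytes:
    # "Challenge + Shared Secret = Response"; HMAC-SHA-256 stands in for the
    # product's one-way use of the key-encryption algorithm.
    return hmac.new(shared_secret, challenge, hashlib.sha256).digest()

# At deployment: a device-unique 256-bit shared secret is created, and a copy is
# sent to the Management Server (RSA-encrypted) as part of the recovery data.
shared_secret = os.urandom(32)

# Locked-out user: the preboot screen displays a fresh challenge, which the user
# reads to the helper over the phone.
challenge = new_challenge()

# Helper: the help desk recovers the same shared secret from the server-side
# recovery data and computes the response for the user to type in.
response = compute_response(challenge, shared_secret)

# Client: the same computation is repeated locally, and the resulting value is
# used like a password to unwrap the locally stored device key blob.
assert hmac.compare_digest(response, compute_response(challenge, shared_secret))

In the real product the response is never compared directly; it acts as the key material that decrypts the device key blob, so a wrong response simply fails to unlock the disk.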
To find complex characters on a regular keyboard can sometimes be hard. The following standard characters maps are available: Enable HII Keyboard Layout By default, the keyboard layout used in preboot is the defined layout provided by Full Disk Encryption for the selected language. Even if found extremely rare, there is a potential chance of a hardware model that is not compatible with the built-in provided layouts. If this event should happen, the option is available to select Enable HII Keyboard Layout and instead use the keyboard layout provided in the UEFI firmware to hopefully resolve any experienced issue. Recovery Methods for Disaster Recovery Decryption / Data Access The most important process/information for an encrypted system is the security and accessibility of the recovery information/data. Disaster Recovery Console / Bootable Recovery Media The standard method of creating recovery data is by The recovery progress will be monitored in % finished, decryption speed in Megabyte per second, as well as showing the estimated remaining duration until the recovery decryption will finish. Drive Slaving Utility / Dynamic Mount Utility The fastest method to access data on the encrypted disk if the system cannot boot normally, is to use the Drive Slaving Utility. The Drive Slaving Utility can be used in two different modes: Secure Enterprise Setup using Preboot Authentication The preboot environment authentication has a fundamental importance in Full Disk Encryption to provide solid and secure access control. This in providing an external (not physically attached as for example the TPM-chip) secret (Password, Smart Card PKI etc) that ensures the cryptologic strength, as well as making sure that the data on the disk does not at any time, before access control via successful authentication is made, become readable and thereby provide exposure to potential memory attacks (Cold Boot exploits) or other OS-level attacks. The authentication method to use for preboot users needs to be set, based on security level needs. If the data at rest on a specific computer needs to be protected with maximum security, the authentication method with the highest level of provided security should be selected (in the defined example, a Smart Card with 4096-bit encryption.) For users that rarely handle any data that would require maximum-security precautions, a regular password with a good mix of upper and lower characters, numbers, and if needed special characters should be used. The next step is to define how many accounts need to be available for logon on each computer. Many times you may set the same number for all systems, however the more accounts present on a system the larger the risk is that a specific account would be assigned to the system with a low security password (if the defined password policy has allowed such to be set) and an attacker will in most cases attack the weakest link. In general, the default settings and value available for configuration in the Endpoint Management should be regarded as the best-practice recommendation. Secure Enterprise Setup using Network Authorized Preboot Bypass Before reading this section, it is important that you understand the previous section on why and when it is important to use preboot authentication, especially for mobile computers and devices that are not safeguarded by physical office security, such as walls, doors, locks and alarms. 
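To put the password-strength note from earlier (a 94-character set of upper case, lower case, numeric and special characters) into numbers, the short Python sketch below estimates the size of the key space and the equivalent bits of entropy for a truly random password of a given length. The calculation is illustrative only and assumes every character is chosen uniformly at random, which real users rarely achieve, so treat the results as an upper bound.

import math

CHARSET = 94  # upper case + lower case + digits + printable special characters

def password_bits(length: int, charset: int = CHARSET) -> float:
    # Entropy in bits of a uniformly random password: length * log2(charset)
    return length * math.log2(charset)

for length in (8, 10, 12, 16):
    combos = CHARSET ** length
    print(f"{length:2d} characters: about {password_bits(length):.0f} bits "
          f"({combos:.2e} combinations)")

An eight-character random password works out to roughly 52 bits, which is one way of seeing why longer passphrases, or Smart Card authentication, are the better choice where the data calls for maximum protection.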
A good example of an environment when Network Authorized Preboot Bypass (previously known as Unlock On LAN) could be the optimal choice is a helpdesk or call center type environment with desktop systems that are not allowed and never regularly leave the safeguarded office. By having the Network Authorized Preboot Bypass enabled, no personal user information is needed for each machine, and any employee could select any workspace/computer within the office to use during the day. At system startup, the networking functionality within the preboot environment will contact the management server via secure communication and unlock encryption keys needed to be able to authenticate the system and start to boot into the regular OS. If any system would be stolen, it would become powerless and disconnected from the network LAN and at next boot on the "outside", the LAN would no longer be present, so the preboot authentication would be shown as a Network Authorization attempt that would fail due to no network available. Lowered Security Enterprise Setup using Preboot Bypass Before reading this section, it is important that you understand the first section on why and when it is important to use preboot authentication. In general this feature is not recommended. It can be used for systems that do not have the possibility to use Network Authorized Preboot Bypass. However, in that case it is important to utilize built-in security features such as TPM, Hardware hash, Network Location Awareness etc to somewhat make the security situation a bit better. Still, with no proper authentication (manual or via the network) before the regular OS (Windows/macOS) starts loading, the system will be vulnerable for either attacking the system memory or breaking past the login within the OS (something that is quite easily achieved and new exploits are yearly announced on events such as BlackHat etc, here is an example: https://www.youtube.com/watch?v=eRuca6eAdFM. As described in earlier, there are two options for preboot bypass, naturally the most secure option should always be preferred. However, sometimes hardware and technology limitations may create obstacles and then require a mixed or restricted alternative. Since ATMs especially have a zero downtime requirement, in combination with remote location challenges, an auto-boot configuration is sometimes setup in the OS to ensure that the system does not get halted at any stage during startup. This makes it especially important to strengthen the bypass-mode with as many available security features as possible, from Network Authorized Preboot Bypass and TPM/Hardware-Hash to Network Location Awareness. The most basic step of troubleshooting the installation of Full Disk Encryption is to view the status shown in the Endpoint Security Client App. In the above example, the deployment has reached the final step "Encrypted". If a system would be stuck at any step, the thing to ensure is that the client has fully functioning network access, confirm that the client is connected, and that you can make a successful ping to the shown IP address. Once it has been established that the client is connected and the system confirmed running, the next step to understand the state will be to analyze the logs. See Debug Procedures for the deployment. When planning to upgrade, it is important to make sure that all prerequisites are met, both from supported client versions, supported management backend version, OS version etc. 
sk107255 will provide you with a matrix of supported client/server versions. The major challenge faced when upgrading the OS from one major version to another, since the hard drive is fully encrypted, is to get to a point in the upgrade process (often when loading into an alternative upgrade environment) when the FDE filter driver is not loaded, and thereby the data on the disk cannot be read/written. So our goal is to make the upgrade or upgrade image aware that the FDE filter need to be active/loaded at all times. All the way back to XP there have been scripted procedures available to perform an OS in-place upgrade, upgrading from one OS major version to another with the encryption and product fully working and enabled thru the transition. The most popular upgrade path right now is to Windows 10, see sk112246 for the process and steps to take for a successful OS in-place upgrade. Since E80.61, OS in-place upgrade has also been available for systems encrypted with FDE running on macOS platform. This was a major innovation for the product since in prior versions and the legacy series, a full uninstallation and decryption of the system had to take place before the system could be upgraded with a major OS version and then the system again had to be re-installed and encrypted again. See sk114213 for the latest process. The Full Disk Encryption Preboot Environment is a resilient micro OS that resides in-between the firmware (BIOS/UEFI) and Windows/macOS. In normal circumstances there will be no need to troubleshoot the preboot itself. However, if there are issues with an external keyboard or mouse functioning properly, try to remove any connected USB-switch/hub to see if it will resolve the experienced issue. In the event of the preboot not loading properly (black screen etc), open the firmware settings and try to change the mode of operation for the hard disk controller to see it will resolve the issue. See Preboot Options Menu if you need to change language. For troubleshooting touchscreen devices on UEFI-hardware, see sk93032. If the preboot environment is not loading, see Debug Procedures on gathering proper logs for R&D CFG analysis. The challenge/response remote help functionality is one of the oldest and most matured features in the product. Normally there is not much troubleshooting needed. However if issues are reported, make sure to go check that the features has been properly configured on the management side and that the correct mode (One-time logon or Remote Password Change) is selected both on the client side and helper side. Also, ensure that the correct device has been selected since every device is protected by a unique key. The correct device must be selected for the challenge/response procedure to be successful. Full recovery decryption is performed by creating a recovery media, based on the unique recovery information for a specific device. It is thereby crucial that the correct device is selected when creating the media, as well as selecting (or creating) the user that will authenticate in the recovery console for the decryption process to start. If there is an issue with booting on the recovery media, ensure that the firmware has been correctly configured to boot towards external media, as well as the media being physically in good shape (an old USB-media for example can have damaged sectors etc that could lead to the recovery image not being able to be written to the media correctly). 
For hard drives that have experienced a crash due to a physical failure, it is important to not stress the drive and make the issue worse by starting a recovery decryption. Instead start such a procedure first by taking a sector-by-sector image of the failing drive and place the image on a fully functional and healthy hard drive. Then start the decryption process on the healthy drive and decrypt the data for rescue. If the drive is in such a bad state that even the sector-by-sector imaging will not finish, consider sending the disk drive for repair to a company that is an expert on such procedures. On all systems installed with Endpoint Security there is a debug log collector accessible via the Endpoint Security Client App, the tool is called CPinfo, see sk90445. CPinfo can be executed in the below three modes, from FDE there is not much difference between the modes, as shown, However in other blades the difference is more significant and this will lead to an "Extended" being significantly larger than a "Basic" log collection. Basic: Same information as General is gathered for FDE. General: Same information as Basic is gathered for FDE. Extended: In addition to Basic/General, the MSinfo is gathered in this mode. This is needed during some investigations. The following information will be displayed during the collection of logs on the FDE blade: As shown in the example above, there is also information extracted from the Windows registry in regards to FDE. For more information, refer to sk105818. The CPinfoPreboot is simply a tool for collecting logs directly from preboot level in case something has happened to the system that will disable it from logon to Windows. The Bootable CPinfoPreboot tool is an application that installs the special bootable cpinfo preboot image onto a USB media.The tool automatically finds the USB devices inserted and will present found media in a drop-down menu to the right in the UI. Just select the correct device and select "write". This USB stick can be booted both on a computers with the legacy BIOS firmware or with the new UEFI firmware. Once the CPinfoPreboot is booted from the created media, you will see a brief copyright notice and then a few lines of instructions telling the user not to remove the USB media during the coming process. After that there will be some '*' that indicates progress (note that on some computers, especially older systems the process can be slow). At the end of the information gathering process, the application will display that the scan is done and that a file called preboot.zip is generated and placed on the USB media. The user is then prompted to remove the media and turn off the computer. The USB media will now contain the zip file with gathered preboot debug logs. The USB media can be inserted into a working computer. The file can be copied via Windows and placed into a mail or uploaded to an ftp, for further analysis. The preboot.zip file has the same layout as the .cab files generated from the regular cpinfopreboot tool. Regular CPinfopreboot (Needs to be placed on WinPE bootable media) Use an external USB device to collect the Pre-boot data. The device must have at least 128 MB of free space, and sufficient storage for the output cab file. CPinfoPreboot cannot run on boot media prepared with the Full Disk Encryption filter driver How to run the tool: Copy CPinfoPreboot.exe to an external USB device. Boot the client from the USB device. Note - Microsoft Windows does not automatically detect USB devices after boot up. 
Open the command prompt and type: <path to CPinfoPreboot>CPinfoPreboot.exe <output cab filename> <output folder name>. For example: C:\path\>CPinfoPreboot.exe SR1234 temp. CPinfoPreboot stores the output file to the designated folder. If no output name is specified, the output file has the same name as the output folder. If no output folder is specified, CPinfoPreboot saves the output file to the working directory on the external media. An output folder is required if the working directory is on read-only media.
MSI log location(s): Using Software Deployment, the MSI log(s) will be written to the following location: C:\ProgramData\CheckPoint\Endpoint Security\
The FDE debug logs (dlog) will be located in the following location on Windows systems: C:\ProgramData\CheckPoint\Endpoint Security\Full Disk Encryption\
Potential issues that can be encountered:
- Deployment already started. Resolution: Deployment of the system has already started; there is no need to try to deploy another package to the system.
- Uninstallation and decryption already in progress. Resolution: Wait until the current ongoing uninstallation and decryption process has finished.
- Leftovers from a prior installation due to reimage (not uninstall). Resolution: Remove and create a new partition before starting the reimage process.
- Waiting for User Acquisition: User Acquisition with no password set on the user. Resolution: Set a password for the user.
- Verify Setup Protection: No users assigned for preboot authentication. Resolution: Assign user(s) to the device from the Endpoint Management.
- Missing Smart Card drivers for an assigned Smart Card user.
KI7TU's Reference Page -- Tips
On this page, I want to share some hard-won wisdom that I've gained over the years. Basically, it is stuff that I hope will save you from making the same mistakes that I made, thus making your hobby a more pleasant and rewarding experience. It may seem a bit rambling and disjointed, and frankly, it is. Maybe sometime I'll find the time to get it a bit more organized, but for now, my main goal is just to get the info out there.
Since this is aimed at newcomers, and it was something I had a lot of confusion about when I was a newbie, I'll mention it right up front. The frequency specs in data sheets are typically the MAXIMUM frequency that the device is "guaranteed" to work at. They will (generally) quite happily work at lower frequencies. Just as a "for instance", suppose you have a circuit where you need to amplify an audio tone at around 400 Hz to go to a speaker. You dig through your "junk box" and find a transistor, and look it up and the spec sheet says it's a "200 MHz" transistor, but the other specs (power handling, voltage, gain, and so on) will work in your circuit. Will it work? You betcha (assuming that it hasn't been "blown"). If you were ordering a new transistor from somewhere, you might want to look at ones with a frequency rating closer to what you actually need, just because they're likely to be less expensive, but in general it's fine to use transistors and ICs several orders of magnitude below their rated maximum. The one exception to this is that there have been some microprocessor designs that have a minimum frequency. If this is true for a given part, the data sheet should clearly state the "minimum operating frequency". (I should mention that some will state the minimum frequency that the device is tested for, but that it should work at lower frequencies.)
For "linear" devices, like transistors and op-amps, on the high end of the frequency range performance can fall off quite rapidly as you go beyond the stated maximum. For things like microprocessors, it gets a bit chancy: oftentimes a manufacturer, through extensive testing, will find that there are one or two instructions in the microprocessor's instruction set that won't work beyond the maximum stated frequency. (They try to make the maximum stated frequency as high as possible, because the higher it is the higher the selling price for the parts.) Note, too, that the frequency rating is at the temperature extremes listed in the data sheet. They will often work at somewhat higher speeds if they are kept at "room temperature".
A voltage regulator basically allows you to "feed" a circuit with a higher, and possibly varying, voltage, while providing a nice, steady voltage for the remainder of the circuit to consume. Or at least, that's the goal. I can remember when the three-terminal regulators, such as the 7805, were a new thing. Before that, you had to use a bunch of circuitry to achieve a steady voltage, which is important to digital electronics. There are a few key parameters on these things (besides the output voltage), and a couple of things you need to know if you're going to put together your own circuit involving one and have it work. 
The key parameters are:
- Current capacity - how much current you can put through it
- Dropout voltage - tells you the minimum amount by which the input voltage must exceed the output voltage for the regulator to achieve the appointed voltage (and note that if you just rectify AC, you'll need some capacitance to maintain the minimum voltage when the AC is near zero volts)
- Power dissipation - essentially this is the heat that is generated by the difference between input voltage and output voltage times the current being drawn. It will determine whether or not you need to add a heat sink.

Also, a very important consideration is bypass capacitors. They can be easily overlooked, even though for most voltage regulators, the data sheet does mention them. A good rule of thumb is to provide a 0.01µF ceramic disc capacitor on each side of the regulator, placed physically very close to the regulator (like within a quarter inch), plus a larger, say 10µF, electrolytic on the input, also fairly close, and maybe another 10µF electrolytic on the output. At first glance, those little ceramic ones may seem redundant. Believe me, they are NOT redundant. The reason is that electrolytic caps have a fairly high internal inductance, and so can't react to high frequency spikes as well as a much smaller valued ceramic cap can. The small ceramic, though, doesn't store enough energy to be able to deal with the lower frequency ripple.

I found all this out the hard way, back in the 1970s, when I designed my first computer, and it was acting in a very bizarre manner. With the help of a (partially working) oscilloscope, I tracked down the problem to an AC signal, about 2 MHz and about 7 volts peak-to-peak, riding on the 5 volt lines. The addition of bypass capacitors eliminated the AC "noise". Note that there are a few voltage regulator chips around whose data sheets claim that they don't need the extra bypass, but even so, it doesn't hurt, and the caps aren't particularly expensive. (I tend to buy the 0.01µF caps 100 at a time, which makes them even less expensive.)

It can be very handy to have a few LM317 "adjustable" regulators around. They can be had from most Radio Shack stores, though for the price of one there you can get several at one of the mail order parts houses. The output voltage is set by the ratio of a couple of resistors on the output side. See the data sheet for details.
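As a rough worked example (the resistor values here are just ones picked for illustration, so check them against the actual data sheet): the LM317 regulates so that there is about 1.25V between its OUT and ADJ pins, which gives approximately

    Vout = 1.25V x (1 + R2/R1)

where R1 sits between OUT and ADJ and R2 goes from ADJ to ground. With R1 = 240Ω and R2 = 720Ω, that works out to Vout = 1.25V x (1 + 3) = 5V. The small current flowing out of the ADJ pin adds a little on top of this, which is one reason to keep R1 down in the low hundreds of ohms.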
I noted above (under regulators) putting a 0.01µF ceramic capacitor very close to both the input and output of a regulator. When I'm designing a digital circuit, I generally place one very close (again, within about a quarter inch, or for those who like metric, within about 6mm) to every "power" pin on every IC. (Some ICs have two or more power pins, and so get two or more capacitors.) Admittedly, for something like a Pentium® processor, with dozens of power pins (and a hundred or more ground pins), this would be overkill, but I have yet to design a circuit that actually uses one of these behemoths. The reason for all those bypass caps is that digital ICs actually "switch" at very high speeds, and that can put a lot of noise onto the power bus. The bypass caps can do a lot to smooth out these tiny power surges.

Virtually all of the so-called "passive" devices (resistors, capacitors, and inductors) have a tolerance specification. For instance, you may see a "5%" resistor, or a "2%" resistor (sometimes listed as "±5%" and "±2%", respectively). But what does this mean? What it means is that the manufacturer has tested the part to be within that tolerance of the marked value. For instance, a 5% 100Ω resistor can be as low as 95Ω, or as high as 105Ω, or anything in between. One of the little "gotchas" that sometimes bites people who haven't run into it before is thinking that "OK, I could take a batch of 5% resistors, and sort them out to find one that's within 2% of the specified value". The problem is that for a lot of parts, the manufacturer sorts the parts before marking them, and for our example of a 100Ω resistor, ones that measure in the range 99 to 101Ω will be marked as 1%, ones that are 98 or 102Ω will be marked 2%, ones that are 95 to 97 or 103 to 105Ω will be marked 5%, and ones that are actually 90 to 94 or 106 to 110Ω will be marked as 10%. (Today's manufacturing techniques have virtually eliminated the 20% classification for resistors.) This isn't always true, but it is done often enough that you need to be aware of it.

Most "car batteries" are conventional lead-acid batteries. Although there's a lot of wisdom about them, the first, and probably foremost, bit of wisdom is that the so-called 12 volt battery is actually around 13.8 volts. The reason for this is actually historical. These batteries have been around since the late 1800s. Individual cells for a lead-acid battery are a bit over 2 volts each. Back in those days, volt meters that could accurately measure to a fraction of a volt were very expensive, so they just rounded off to 2 volts. Cars in the early 20th century had batteries with 3 cells, and were called 6 volt batteries. In about the middle of the 20th century, they went to batteries with 6 cells, and these were called "12 volt" batteries. Near the end of the 20th century, volt meters that are accurate to tenths, and even hundredths, of a volt became common and affordable, but the name "12 volt battery" has stuck around.

There are two major classes of the conventional wet-cell lead-acid battery: "starting" batteries and "deep-cycle" batteries. The starting batteries are designed to supply a huge amount of power (often a couple hundred amps) for a few seconds to start a gasoline or diesel engine, and then be quickly recharged. They're also designed to be able to do this at fairly low temperatures. Starting batteries are not designed to be deeply discharged, that is, to have most of their power drawn out at a slower rate. Doing this can greatly shorten the useful life of the battery. Deep cycle batteries, on the other hand, are designed to have a fairly high percentage of the power they contain drawn out of them, albeit at a slower rate. They can be used for things like the "house" battery on an RV, operating an electric golf cart, or operating a trolling motor. They are not as affected by being deeply discharged as are starting batteries, but still, running them "flat" can affect the life expectancy.

Another aspect of a car's electrical system to be aware of is that there can be some substantial voltage spikes floating around at times, especially when starting the engine. The starter motor is a huge DC inductor, and when it is turned off, it still contains a huge amount of energy, and that can come out as a spike into the electrical system of the car. Any electronic device that is going to be connected to a car's electrical system should be designed, at a minimum, to be able to handle a spike of 80 or 90 volts, though that spike only lasts a fraction of a second. Heavy power consumers, such as ham radios, often have dedicated wires going to the car's battery.
It is customary to put fuses in both the positive and negative wires, at both the battery end and at the radio end. The fuses at the radio end protect the radio, and the ones at the battery end will (hopefully) prevent a fire if there's a short in the wiring. The reason for fusing the negative side is that there are things that can go wrong with the car's electrical system such that the power from the starter can find a "return path" through the antenna and radio to the battery -- better to blow a couple of fuses than to fry the radio and/or antenna. While we're on the topic, many, but not all, mobile radios have a diode to protect them against being connected to the power supply backwards. This diode will cause the fuse to blow, hopefully before the radio gets fried. While we're talking about wiring radios, I should also mention that many mobile radios don't disconnect the power amplifier when the front panel switch is "turned off". So it's best to wire the radios to the battery through an appropriately sized relay.

Unfortunately, math scares a lot of folks. (The big fancy word for this is "numerophobia".) To be successful around electronics, especially if you want to either design something from scratch, or even just modify an existing design, you're probably going to have to do some math. However, don't get too scared! For the hobbyist, a grasp of basic algebra is sufficient. A decent calculator is useful, though the calculator programs on most computers will suffice. Sometimes you'll need to know a few fractions (typically 1/2 and 1/4), and decimal numbers. Addition, subtraction, multiplication, division, and being able to understand sines and cosines (such as "sin(x)" or "cos(x)") should be enough. OK, knowing what is meant by "squared" and "square root" can also be helpful. Enough, that is, if you are willing to "take my word for it" when someone says "the more advanced math shows". (Often the more advanced math is some sort of calculus, which engineers need to know, though to be honest, in my career I had to resort to actually doing calculus only on rare occasions.)

Resistors come in many different types and many different power ratings, from the teeny 1/100 watt ones that take a magnifying glass to see to big multi-kilowatt things that take a fork lift to move. As a hobbyist, though, the most common ones you'll use are 1/4 and 1/2 watt, and sometimes either 1/8 or 1/10 watt. Common resistors in this range are typically color coded as to their value and tolerance. If you have a pile of resistors, and are going through them looking for a particular one, it's a wise idea to have that digital volt meter handy so that you can verify that the one you've found is actually the one you want. Even after about 45 years of looking at color bands, I still make mistakes.

Speaking of "piles" of resistors, Radio Shack and a few other suppliers sell assortments of resistors, usually 1/4 watt resistors, in the $10 to $15 range. Having one of these on hand can save a lot of trips to the store, and can allow you to quickly substitute a different value when you find your circuit doesn't quite work as predicted. Also remember that in most situations you can combine a few resistors to get a needed value (a couple of worked examples follow below). When you are ordering resistors from one of the mail-order places, be sure to look at the quantity pricing. If you happen to need, say, 15 of one value, it may be just a few pennies more (and sometimes a few pennies less!) to order 100. That's a good way to get a stash of them in your "junk box".
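To make the resistor-combining idea concrete, here are a couple of worked examples (the values are just ones I picked for illustration): resistors in series simply add, so a 100Ω and a 47Ω in series give 147Ω. For resistors in parallel, the reciprocals add, so for two resistors R1 and R2 the combination is (R1 x R2) / (R1 + R2); two 220Ω resistors in parallel give (220 x 220) / (220 + 220) = 110Ω. As a bonus, paralleled resistors share the current, so each one only has to dissipate part of the total power.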
While we're on the subject of resistors, it is worth mentioning that certain types of resistors do "drift" in value with age. I recall reading one author's comments in one of the professional magazines over 20 years ago that he'd gone through a batch of resistors that he'd had around for maybe 20 years, and that the carbon composition resistors had drifted so much that he referred to them as carbon decomposition resistors. The moral of the story is that if you have ones that are very old, or of unknown age, it's probably worth taking a few seconds and checking them with a meter.

Soldering skills can be an essential part of the electronics hobby. You might find that my comments have some validity, since they're based on roughly 45 years of experience. Getting a good solder joint is something that does take some practice. None of us got it right the first few times we tried, so don't get discouraged. Try to find some scraps to practice on before getting going on any sort of "major" project. Several vendors sell "learning to solder" kits if you don't have access to any "scraps" that you can practice on.

I have found that you're less likely to damage parts, and especially damage printed circuit boards, if you use a higher wattage iron (and thus a higher temperature) with a very small tip than if you use one with a lower wattage rating (and thus lower temperature). This is contrary to the common opinion that you should use a lower temperature iron on small parts. The basic reason is that with the higher wattage iron, you can heat up the contacts (and solder) far more quickly, thus transferring LESS total heat to the parts and board than if you were using a cooler iron which takes much longer to heat up the work.

I keep a one-pound spool of very fine gauge solder on the bench close to my soldering iron, and it gets used for everything from very fine work to very heavy work. One trick that I learned many years ago was to cut off a piece of solder about six to eight inches long, and wrap it around the end of my index finger on the hand that isn't going to be holding the iron while I'm working. I leave the last couple of inches straight (or slightly curved), then I can use the other fingers (and my thumb) to hold tweezers or small pliers to hold a part, and use my index finger to bring in the solder once the work has been heated with the soldering iron. True, it takes some practice, but it can allow you to work a little easier. (I unwrap the solder between soldering joints to maintain the length of solder between the work and my finger.) By the way, it does mean that you end up wasting a half inch or so at the end of each piece of solder, but when you buy it by the pound, it's not too expensive, and besides, you can save those pieces and use tweezers to feed them into the joint when you're working on something where you don't need the "extra hand". Also be sure to check out my comments on soldering irons and desoldering on the Tools page. It does bear repeating that having the surfaces to be soldered bright and shiny will make it a lot easier to get a good solder joint.

This next tip is not an original idea, and I don't recall exactly where I first encountered it, though it was likely in the directions for one of the many kits I've built over the years. One of the first steps that you should always do when building a kit is to check that you have all of the parts. For electronics kits, this means that you have to identify and count a lot of small parts. When they come loose in a bag, this can be a bit of an effort.
If you just plop them onto the workbench, you'll have to go through them again when you're actually putting the thing together and find the parts a second time. There's a better way, though. Take a piece of corrugated cardboard, and mark on it spaces for each of the values of the parts, or at least all of the small parts that have leads on them. Then, as you identify each of the parts, slip it into the corrugation corresponding to your markings. As an example, for a kit I recently built I just used a pocket knife to cut off part of the flap on a box that was headed for the recycle bin, and slipped the parts into the corrugations; for these parts I only used the "holes" on the side where I'd marked the values.

I recently unpacked a box that had been packed up several years ago when I moved, and had never been unpacked. One of the things that I found was a project that I'd built many years ago, and although I'd done a neat and careful job of building it (both inside and out), I'd neglected to put any labels on it. There's a power connector on the back, with no indication as to operating voltage, and on the front are two binding posts (one red and one black), two switches, two pots (probably multiturn), and 3 indicators (LEDs?), one red, one yellow, and one green, all carefully mounted in an aluminum project box. Inside is a relatively simple circuit, made with a combination of wire-wrapped and point-to-point wiring. If I'd even put a piece of masking tape on it and penned what it was for, it would probably jog my memory. When (and if) I ever remember what it was for, I'll make some nice labels. Hopefully, you won't duplicate my mistake!

If you go to the typical hardware store, you'll find a dizzying array of different sorts of tape. I want to discuss a few types that are often misused or at least misunderstood, and one that few people seem to know about.

- Electrical tape: This is the plastic stuff, usually vinyl, sometimes referred to as "electrician's tape". The primary use for the stuff is to cover or wrap electrical connections. It will stretch some, and so can be form fitting. By far the most common color is black, but you can get it in other colors. (Radio Shack carries a package that has a roll in each of several different colors.) Thus you can use it for color-coding things. Unrelated tip: Wrapping a strip of colored electrical tape a couple of times around the handles of your luggage can make it a lot easier to spot on the carousel at the airport. Having strips of a couple of different colors can make your bags even more identifiable, and if your bags have multiple handles, mark every handle the same way.

- Friction tape: This is often mistaken for electrical tape. It is made with a cloth backing, and does not have the stretch that electrical tape does. What it is intended for is places where electrical wires can be subject to abrasion (wear), such as where they have to cross a metal edge within a car. Although you might be able to get away with using friction tape to insulate something at 12V, it's a lot better to use electrical tape, and if needed, a separate layer or two of friction tape over the electrical tape.

- Duck tape: Often erroneously referred to as "duct tape", this fabric-backed tape was originally developed during the Second World War for sealing things, like cans, against moisture (thus the name). It is good for some uses, but is often used inappropriately.
For actually sealing duct work, you should use the type of tape that is metallic and has an adhesive that is designed to last for years.

- Gaffer's tape: At first glance, this stuff looks a lot like duck tape. It has a cloth backing, and although it can be had in many different colors, dark grey is the most common. There are two important differences between gaffer's tape and duck tape. The first is that gaffer's tape uses an adhesive that is designed to not leave a residue (duck tape is notorious for this) and generally does not remove paint when carefully pulled off. The second major difference is the price, with gaffer's tape usually being around $16 a roll while duck tape can often be had for $3 a roll. Gaffer's tape is often difficult to find, but it can be worth both the price and the effort to do so. If your city still has a photographic supply store, check with them, as gaffer's tape is very popular with professional photographers. If not, try one of the big mail-order photo supply houses, such as Adorama (www.adorama.com), though be aware that shipping can be expensive.

One thing to be aware of if you're working with anything more modern than vacuum tubes: there is the possibility that it can be damaged by static electricity (in the lingo of engineers, it's called "electro-static damage" or ESD for short). One of the basic rules of thumb is that if you can feel a static electric discharge, even if you are trying to feel it, it is several times what it takes to ruin a modern integrated circuit. Back in the 1970s the usual wisdom (actually "wisDUMB") was that only MOS (Metal Oxide Semiconductor) chips were sensitive to static discharges. Then in the early 80s, research started coming out indicating that static electric discharges that didn't instantly destroy TTL devices (and even discrete transistors) would still cause damage that would dramatically shorten their lifespan. The good news is that on the "bench", at least, static electricity is fairly easy to control. Have something that you KNOW is grounded, and touch it EVERY time you approach the bench. (On my bench, the oscilloscope has a three-prong power connector, and it is left plugged in all of the time. I've checked, and the outside of the BNC connectors for the probes is connected to ground, so I simply touch one of those before touching anything else.) There are a variety of anti-static wrist straps available on the market. Please see that topic on the "Tools" page for more details. Also be sure to hold onto some of the anti-static bags that some things (like computer boards and disk drives) come in. I keep a large one over any projects on the bench, in case kitty decides to investigate an area where she shouldn't be.

I recently had occasion to incorporate a small fan into a project to provide a mild airflow. I wanted a small amount of airflow over a temperature sensor so that it would be (close to) the ambient temperature outside of the case, and not be affected much by the heat generated inside the case. It also needed to have very little acoustic noise, due to where it was going to be located (in a bedroom). My first experiment with the fan in question revealed that the airflow was way too high, and it was also way too loud. Many years ago I had some experience with slowing down DC motors by effectively decreasing the voltage that they saw by including a resistor in series with them. These motors were all of the "brushed" type.
Today virtually all modern DC fans use brushless motors, meaning that they incorporate some electronics to control the magnetic fields so that the shaft rotates, rather than the old method of using a commutator and brushes to control the electromagnetic fields. These fans have a much longer life expectancy, as well as being more energy efficient. Also, they eliminate the sparking between the brushes and commutator, which greatly reduces both fire hazards and the amount of electrical noise produced by the fan. I was worried that the simple approach that I'd used years ago wouldn't work with the electronics inside the motor. After having asked a few fellow engineers about it, and getting a consistent "I don't know", I did some research on-line, and found a number of references that suggested that there are two ways of regulating the speed of a brushless DC fan: you can either adjust the voltage, or you can use pulse width modulation (PWM).

The advantage of adjusting the voltage is that in my case, at least, it meant just adding a series resistor. However, you have to be careful to not drop the voltage below a certain threshold where the fan won't start. The consensus was that this is typically in the range of 25% to 50% of the nominal voltage. The advantage of using PWM is that you can get a much lower speed from the fan, but at the cost of added complexity in the control of the fan. In my case, it would have meant using an additional GPIO pin on my microcontroller plus an additional transistor, both of which I'd prefer to avoid.

I decided to experiment with the actual fan that I had. Using an adjustable bench power supply and a cheap digital multimeter, I determined that the fan would start at just over 3V, and would run happily at that voltage, drawing roughly 60mA of current. At this speed, it was both quiet and giving about the amount of air movement that I wanted. Doing the math, it worked out that I should use a resistor of about 130Ω to drop the 12V of the supply down to 3V to run the fan. It would be dissipating a bit over 0.6W. I decided that a slightly better approach was to use three resistors in parallel, each being rated 390Ω at 1W. (Although they still dissipate the same amount of heat, they won't get nearly as hot, as they're running at about 1/5 of their rating, rather than a single resistor running at 60% of its power rating.) So far, it works just fine.
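For anyone wanting to redo this calculation for a different fan, the relations involved are just Ohm's law and the power formula (the symbols below are generic, not a re-measurement of my fan): the series resistor works out to roughly R = (Vsupply - Vfan) / Ifan, and it dissipates about P = (Vsupply - Vfan) x Ifan. Splitting the job across three equal resistors in parallel means each one needs three times the calculated resistance (three 390Ω resistors in parallel give 390 / 3 = 130Ω), and each one dissipates only a third of the total power.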
In this section I want to share a few tips specific to amateur radio. A lot of us start with a hand-held radio, referred to in amateur radio parlance as a "handi-talkie", or just "HT", largely because of the price. There are a few of them on the market that are in the $100 (U.S.) range, though they can run up to about five times that. Generally, HTs work in the VHF and/or UHF part of the spectrum. This means that they'll usually be used with repeaters, because in these frequency ranges, communications is more-or-less line-of-sight. They can penetrate a few walls or trees, but not many. By making use of a repeater, which is typically installed in a high location, such as a mountain top or atop a tall building, two people can talk to each other if they both can "see" the repeater, even if they can't "see" each other. (To improve things even further, repeaters are not infrequently linked together, so that anyone who can "see" one of the repeaters in the link can talk to anyone else who can "see" any of the repeaters in the link. Also, some repeaters are linked via the Internet, which can carry conversations well beyond the local area.)

HTs typically come equipped with a "rubber ducky" antenna. Folks who have been around ham radio for a while often refer to these as "rubber dummy loads", as they are such poor antennas. Fortunately, most hand held amateur radios have provision for connecting a better antenna.

If you decide to purchase an HT, there are a couple of accessories I highly recommend. The first is a "dry cell" battery pack. This device allows you to run your radio on disposable AA size alkaline batteries, which means that it is easy to find replacements. It turns any convenience store or corner drug store into an "instant recharge station" for your radio. If you get a dry cell pack that takes six cells, then it's also possible to use ordinary AA size NiCd or NiMH rechargeable batteries. (Because of the lower voltage, four cells, and sometimes five cells, typically won't "light up" the radio.) When I first got into ham radio, it was fairly common to see old HTs at bargain prices because the original rechargeable battery packs no longer worked and the owner hadn't bought a dry cell pack while they were still available. Today, there are companies, such as "Batteries Plus" and "The Battery Lady", who will rebuild old battery packs, and some of them also have after-market packs for a lot of radios.

The second item I strongly recommend buying is the original equipment manufacturer's soft case, especially if you are planning on using that nifty belt clip for its stated purpose. If you're not used to wearing a bulky radio on your hip, you ARE going to bump into things, and catch on corners, and that soft case can absorb at least some of the abuse (and is less expensive than the whole radio).

One of the things to be aware of is that most HTs will work on a range of voltages. If you get the "cigar lighter plug" for it, so that it is connected to the car battery, they typically will put out about 5 watts, though on the rechargeable batteries they will typically put out in the neighborhood of 2 watts. The aforementioned nifty belt clip often serves another purpose, namely it is also a heat sink. These radios tend to get hot when run for very long, especially at the higher power.

One other comment about HTs: The style in recent years has been to include a broad-band receiver that will supposedly work in frequency bands that are far outside the ham bands. The unfortunate thing about this is that it requires that the "front end" be very broad band, and therefore VERY susceptible to all kinds of interference. Some years ago, surprisingly, Radio Shack sold two HTs (the HTX-202 for 2 meters, and the HTX-404 for the 440MHz band) that had very narrow "front ends" and were therefore much less likely to have interference problems, especially in areas with a lot of strong signals nearby.

The next "step up" from HTs today are the mobile radios. These are designed for installation and operation in an automobile, though it's perfectly reasonable to use one as a "base station" (a radio in a fixed location, such as a home), and many hams do that. One thing to be aware of is that most mobile radios don't have the input voltage range that HTs do. A mobile radio will typically "drop out" if the input voltage drops more than a couple of volts. Most mobile radios will put out much more power than an HT -- often as much as 100 watts. So they need to have antennas that are capable of handling that.
Many hams who use mobile radios in their home "shack" use an AC power supply, though there are some hams who use a deep cycle battery (and some have solar panels set up to keep the battery charged). If you do decide to use an AC power supply, get one that can supply more power than the radio requires. Some power supplies are "linear", while others are "switching". The linear supplies are much less likely to inject noise into the radios, though they can weigh a lot, and are not tremendously efficient. The switching supplies are typically much more efficient, and a lot lighter weight, though they can produce a lot of noise that will find its way into the radio. There are "low noise" switching power supplies, but they can be very expensive, though both the noise levels of inexpensive switching power supplies and the price of low noise supplies are dropping. Be sure to read the stuff above about lead-acid batteries, which includes some comments about wiring them.

There are "base-station" amateur radios available, though these tend to be a lot pricier (think "year-old economy car" price range). They do tend to have a lot more features than the mobile radios, and also tend to have a "knob for each feature" rather than having the features buried several menu layers down.

In this section I want to point out a few tips about things that are purely mechanical in nature. They may seem like common sense, and it may even seem a bit silly to have to state some of it, but there may be folks that don't know it, so I'll put it in. This is certainly one of those things that may sound silly to have to mention to most of us, but if just one reader learns something from it, it's been worth my effort. Cross-threading when assembling nuts, bolts, and/or screws can be a big problem. There is a simple trick to help avoid doing it, though it does take a bit of practice: Once you've aligned the parts the best you can, holding the part (or the tool) lightly against the mating part, turn it some in the "loosening" direction (usually to the left). You should feel a slight "click" at one point, and if you were to make one full rotation, you'd feel it again. What this "click" is, is the two threads "dropping off" each other, and if you stop just after this click, and start turning in the "tightening" direction (usually to the right), you stand a very good chance of NOT cross-threading the parts.

When you're using self-tapping screws (also called self-threading screws), the first time one is put in its hole, there's nothing for it but to cut a new thread. However, if you are re-installing a screw that has gone into the hole before, it's important to use the above trick to try to avoid cutting a new thread. The reason for this is that every time a new thread is cut, it weakens the metal around the screw. If a self-tapping screw is put into a given hole several times and allowed to cut a new thread every time, it will soon strip out all of the metal around the screw, and there won't be anything left to hold the screw in the hole.
The only solutions are to use a machine screw and nut if you can get to the "back side" to install the nut, to use a larger screw (which may mean enlarging the hole on the part being held), or to use some form of "captive nut" (which can be difficult to obtain, as well as being a hassle to install).

When to tighten: When I was growing up, my father was an aircraft mechanic, and he was always adamant that you should "start" all the screws holding two parts together before tightening any of them. This works very well when the parts being assembled are fairly rigid and have accurately placed holes. However, I've seen a lot of electronics where the parts are made out of bent pieces of sheet metal, and sometimes the holes aren't located all that accurately. In these cases, it's sometimes best to tighten the first couple of screws so that you can get the other parts aligned.

Tighten the nut: When you have a choice, turn the nut rather than the bolt or screw. We don't often have the luxury of that choice in electronics, but when we do, the nut should be turned to tighten the assembly rather than turning the screw or bolt that it's on. The reason for this is that the nut usually has less friction than the screw or bolt, since the screw or bolt will probably rub against the hole.

Know how tight to make it: I've seen some folks who don't have much experience really bear down on screws or nuts, and then wonder why they have problems. Knowing how tight to make a screw or nut does take some experience, and even with more than 45 years at it I sometimes make mistakes on this (though my mistakes tend towards "not quite tight enough" and I find that something works loose). One thought I have on this is if you have a couple of small pieces of scrap steel (preferably totaling at least a quarter inch thick) that have small holes, go to the hardware store and get some small screws and nuts (say, #4 or #6) and "test them to destruction" -- that is, tighten them until the screw breaks. (If you have cheap tools, you may find out why cheap tools are more expensive than good ones -- the tools may fail.) Also, if you've got a scrap printed circuit board ("PC board" or "PCB"), try putting that up against the steel and tightening the screw until you crush it. These two experiments will give you something of a feel for "way too tight".

These days, including a small microcomputer in the design for an electronic device is more the rule than the exception. It seems like they're everywhere in our lives. Some of them are fairly easy to program, and can be used to do some pretty amazing things. Often times an MCU (micro-controller unit) with just 14 pins, with a small handful of other parts, can do things that just a few years ago would have required several large boards and been far beyond the typical hobbyist, both in price and complexity. The other neat thing is that you can sometimes dramatically change the capabilities of a "gadget" you've built while hardly lifting a soldering iron, just by installing new "firmware". However, including an MCU does imply programming (software, and since it's loaded into non-volatile memory to be executed directly, it's referred to as "firmware"), which is a world of its own. I've been programming, at various levels, for about 40 years now, including 23 years when it was my official job title. I have some tips to pass on to hobbyists that will probably make doing software easier and more enjoyable in the long run.
Virtually every computer language has some form of comments -- that is, a way to include text in the "source code" which is ignored by the compiler, interpreter, or assembler. Comments are supposed to make absolutely no difference in how the code behaves. (I have seen a couple of languages where this wasn't the case, but it was because of errors in the compiler itself.) I had one professor who said that he included comments so that a total idiot could understand what was going on, because he was usually the "total idiot" trying to modify or correct his code six months after he'd written it. Comments should clarify why things are happening. You should be able to see what is happening by reading the code itself.

After many years of engineering, I've developed the habit of putting a big long comment (or sometimes several comments) at the beginning of each program. The first thing is the name of the program (or module). I've seen times when a disk drive has gotten messed up but it is still possible to recover a file using this key clue about what the contents of the file are. The next thing is a simple explanation about what the program is attempting to do. This "header comment" should also include a copyright notice, and a brief history of major changes to the program. Since this is one of the places where an example is good, here's an example from one of my recent programs (in the "C" language):

    /* waverter.c - This program is intended to take a .wav file and
       convert the actual samples to a format that will be acceptable
       to the PIC 18.
       The description of the contents of a .wav file comes from
       Copyright 2010 by Clark Jones.
       History:
       28-Oct-2010 CJ  Began development.
       08-Nov-2010 CJ  Switched from "hardwired" file names to command
                       line control.
       10-Nov-2010 CJ  Added -v option, calculation of average
                       divergence (from 0).
    */

By the way, in the original of the above code, I had used tab characters to get the indentation in the "History" section. It's very easy to neglect to keep the comments up to date as the program evolves, but it is worth the effort.

It takes a while to develop a knack for coming up with meaningful, but easy-to-type, variable names. When you first get into programming, you'll see a lot of texts that use single letter variable names, such as "i" or "x". After many years of experience, I've learned that the only place that single letter variable names are appropriate is on a marker board (or, if you're as old as I am, a chalk board). Real world programs are a lot better off with longer variable names. For example, if I need a variable for a loop index (in a "for" loop in C or a "DO" loop in Fortran) I'll use idx (short for "index"). Commonly I'll need two or three nested loops, so I'll add jdx and kdx (rather than using j and k as you commonly see in books). One of the big reasons for this is that it's a lot easier to find these three letter strings in an editor than to have to skip over dozens (or even hundreds) of occurrences of a single letter.
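As a tiny, made-up illustration of that naming habit (the table, its dimensions, and the function name here are just placeholders, not from any real project):

    /* Sum the entries of a small table, using idx and jdx as
       easy-to-find loop index names. */
    #define NUM_ROWS 3
    #define NUM_COLS 4

    static long sum_table(const int table[NUM_ROWS][NUM_COLS])
    {
        long total = 0;
        int idx, jdx;

        for (idx = 0; idx < NUM_ROWS; idx++)
        {
            for (jdx = 0; jdx < NUM_COLS; jdx++)
            {
                total += table[idx][jdx];
            }
        }
        return total;
    }

Searching an editor for "idx" or "jdx" will take you straight to the loops, which is much harder to do with a bare "i" or "j".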
One of the problems with doing software is that it is incredibly easy to make mistakes. It's very frustrating to write several hundred lines of code, and try it for the first time, and it crashes and you have no clue as to why it crashed. It is a lot easier to find the mistakes in just a few lines of code than it is to find the mistakes in a long program. If you start testing early, you can feel some confidence that you have at least some code that is working. That can help isolate the problems. Also, if you run the same tests that were passing earlier, when they break you can feel pretty confident that what you did since the last time you ran the tests is likely what broke something.

There are some wonderful "integrated development environments" and "debugging programs" around, but many of these are commercial programs and have very big price tags attached. The hobbyist is hard pressed to be able to afford, say, a $1000 program to debug her code for the $5 microcomputer that is going to control a $50 project. Not to fear, though, there are techniques you can use. For programs that run entirely on a computer, learn to use the language's ability to "print" things to a terminal (or a file). For C, this is the "printf" function (or to send it to a file, "fprintf"). Printing even a silly message when the program gets to a key point will at least let you know that it got there. And printing out a key variable can let you look at what the computer thinks the value is, so that you can know whether or not that is what you think it should be. Many languages have a "conditional compile" feature, such as C's "#ifdef" feature, where you can put a line or two around the actual print statements to essentially turn them into comments, but yet be able to "turn them on" again if you need to later.

For "embedded code" (a.k.a. firmware), turning an LED on or off at key points can be a useful trick to try to see if part of your program is working. When I'm first starting with a processor I haven't used before, or if I'm using a clock mode I haven't used before on a familiar processor, my first program is just one that flashes an LED. It may sound simple, but it at least proves that I've done enough to get the thing to "look alive".

Also, look for areas of code that can be broken out into separate routines and/or files. True, this is a skill that takes a long while to develop, and the task of breaking stuff out into separate routines and/or files can seem like a waste of time to the beginner (I know it did when I first started). Breaking things out, though, gives you two big advantages. First is that you can re-use code to do the same task in some other part of the program (or even in another program later on), and second is that you can often test this code more thoroughly. Let's look at an example: A while back I was working on a project where I needed a small LCD that can display two rows of 16 characters each. I realized that I needed to be able to take an 8 bit value and display it as a decimal number. Very early on, I wrote a subroutine that would display a single ASCII character on the LCD. I then wrote some code that would translate the 8 bit value to three ASCII characters (since an 8 bit value is in the range 0 to 255), and then called the first subroutine to actually send the characters to the LCD. By putting all of the code to drive the LCD into a separate file, I was then able to build a "test harness" program that tested out the decimal display subroutine with some key values, such as 0, 9, 10, 99, 100, and 255. Later on I found that I needed to be able to display a 16 bit number, so I modified the original subroutine to be able to handle bigger numbers, and when I ran my original test harness, I found a couple of things I'd broken and was quickly able to fix, and then added some more tests, like displaying 65535. I could then call this subroutine from my main program, and when I got weird numbers, know that it wasn't because the LCD interface code was messing up.
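To give a flavor of what such a routine might look like, here is a minimal sketch of the idea (this is not the actual code from that project; lcd_putchar() is just a stand-in name for whatever routine sends one ASCII character to the LCD):

    #include <stdint.h>

    void lcd_putchar(char c);   /* assumed to exist elsewhere */

    /* Display an 8-bit value as three decimal digits, e.g. 7 -> "007". */
    void lcd_put_uint8(uint8_t value)
    {
        lcd_putchar((char)('0' + (value / 100)));        /* hundreds digit */
        lcd_putchar((char)('0' + ((value / 10) % 10)));  /* tens digit     */
        lcd_putchar((char)('0' + (value % 10)));         /* ones digit     */
    }

A test harness for something like this can be as simple as a desktop program that calls lcd_put_uint8() with the key values mentioned above and has lcd_putchar() just print to the screen.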
When I was wondering if some other hardware was working (a temperature sensor, to be exact), I was able to use the aforementioned subroutine to display the value that hardware was producing by just (temporarily) adding a few lines of code, rather than having to write a whole bunch of stuff to get the value to the display. One other advantage of breaking the LCD-related code out is that I've thought about another project that uses the same processor but a different interface to the LCD. It will be simple to change, as all of the "traffic" to the LCD goes through just a couple of subroutines (one being that subroutine that sends a single ASCII character to the LCD). Most of the subroutines won't have to change at all. I'll be able to use the software test harness to verify the new interface hardware and software and go from there.

This screen last updated: 07-Feb-2017
Copyright © 2010-2017 by Clark Jones
History of Ghana

The Republic of Ghana is named after the medieval West African Ghana Empire. The empire became known in Europe and Arabia as the Ghana Empire after the title of its emperor, the Ghana. The Empire appears to have broken up following the 1076 conquest by the Almoravid General Abu-Bakr Ibn-Umar. A reduced kingdom continued to exist after Almoravid rule ended, and the kingdom was later incorporated into subsequent Sahelian empires, such as the Mali Empire several centuries later. Geographically, the ancient Ghana Empire was approximately 500 miles (800 km) north and west of the modern state of Ghana, and controlled territories in the area of the Sénégal River and east towards the Niger rivers, in modern Senegal, Mauritania and Mali.

In central sub-Saharan Africa, agricultural expansion marked the period before 500 AD. Farming began earliest on the southern tips of the Sahara, eventually giving rise to village settlements. Toward the end of the classical era, larger regional kingdoms had formed in West Africa, one of which was the Kingdom of Ghana, north of what is today the nation of Ghana. Before its fall at the beginning of the 10th century, Ashanti migrants moved southward and founded several nation-states, including the first empire of Bono, founded in the 11th century, for which the Brong-Ahafo (Bono Ahafo) region is named. Later Akan ethnic groups such as the Ashanti empire-kingdom and Fante states are thought to possibly have roots in the original Bono settlement at Bono Manso. Much of the area was united under the Empire of Ashanti by the 16th century. The Ashanti government operated first as a loose network and eventually as a centralized empire-kingdom with an advanced, highly specialized bureaucracy centred on the capital Kumasi.

By the end of the 16th century, most of the ethnic groups constituting the modern Ghanaian population had settled in their present locations. Archaeological remains found in the coastal zone indicate that the area has been inhabited since the Bronze Age (ca. 2000 BC), but these societies, based on fishing in the extensive lagoons and rivers, have left few traces. Archaeological work also suggests that central Ghana north of the forest zone was inhabited as early as 3,000 to 4,000 years ago. These migrations resulted in part from the formation and disintegration of a series of large states in the western Sudan (the region north of modern Ghana drained by the Niger River). Strictly speaking, ghana was the title of the king, but the Arabs, who left records of the kingdom, applied the term to the king, the capital, and the state. The 9th-century Berber historian and geographer Al Yaqubi described ancient Ghana as one of the three most organized states in the region (the others being Gao and Kanem in the central Sudan). Its rulers were renowned for their wealth in gold, the opulence of their courts, and their warrior/hunting skills.
They were also masters of the trade in gold, which drew North African merchants to the western Sudan. The military achievements of these and later western Sudanic rulers, and their control over the region's gold mines, constituted the nexus of their historical relations with merchants and rulers in North Africa and the Mediterranean. Ghana succumbed to attacks by its neighbors in the 11th century, but its name and reputation endured. In 1957, when the leaders of the former British colony of the Gold Coast sought an appropriate name for their newly independent state—the first black African nation to gain its independence from colonial rule—they named their new country after ancient Ghana. The choice was more than merely symbolic, because modern Ghana, like its namesake, was equally famed for its wealth and trade in gold.

Although none of the states of the western Sudan controlled territories in the area that is modern Ghana, several small kingdoms that later developed, such as Bonoman, were ruled by nobles believed to have immigrated from that region. The trans-Saharan trade that contributed to the expansion of kingdoms in the western Sudan also led to the development of contacts with regions in northern modern Ghana, and in the forest to the south. The growth of trade stimulated the development of early Akan states located on the trade route to the goldfields, in the forest zone of the south. The forest itself was thinly populated, but Akan-speaking peoples began to move into it toward the end of the 15th century, with the arrival of crops from South-east Asia and the New World that could be adapted to forest conditions. These new crops included sorghum, bananas, and cassava. By the beginning of the 16th century, European sources noted the existence of the gold-rich states of Akan and Twifu in the Ofin River Valley.

According to oral traditions and archaeological evidence, the Dagomba states were the earliest kingdoms to emerge in present-day Ghana, as early as the 11th century, being well established by the close of the 16th century. Although the rulers of the Dagomba states were not usually Muslim, they brought with them, or welcomed, Muslims as scribes and medicine men. As a result of their presence, Islam influenced the north, and Muslim influence spread through the activities of merchants and clerics.

In the broad belt of rugged country between the northern boundaries of the Muslim-influenced state of Dagomba and the southernmost outposts of the Mossi Kingdoms (of present-day northern Ghana and southern Burkina Faso) were peoples who were not incorporated into the Dagomba entity. Among these peoples were the Kassena agriculturalists. They lived in a so-called segmented society, bound together by kinship ties and ruled by the head of their clan. Trade between Akan kingdoms and the Mossi kingdoms to the north flowed through their homeland, subjecting them to Islamic influence and to the depredations of these more powerful neighbors.

Under Chief Oti Akenten (r. ca. 1630–60), a series of successful military operations against neighboring Akan states brought a larger surrounding territory into alliance with Ashanti. At the end of the 17th century, Osei Tutu (died 1712 or 1717) became Asantehene (king of Ashanti). Under Osei Tutu's rule, the confederacy of Ashanti states was transformed into an empire with its capital at Kumasi. Political and military consolidation ensued, resulting in firmly established centralized authority.
Osei Tutu was strongly influenced by the high priest, Anokye, who, tradition asserts, caused a stool of gold to descend from the sky to seal the union of Ashanti states. Stools already functioned as traditional symbols of chieftainship, but the Golden Stool represented the united spirit of all the allied states and established a dual allegiance that superimposed the confederacy over the individual component states. The Golden Stool remains a respected national symbol of the traditional past and figures extensively in Ashanti ritual. Osei Tutu permitted newly conquered territories that joined the confederation to retain their own customs and chiefs, who were given seats on the Ashanti state council. Tutu's gesture made the process relatively easy and nondisruptive, because most of the earlier conquests had subjugated other Akan peoples. Within the Ashanti portions of the confederacy, each minor state continued to exercise internal self-rule, and its chief jealously guarded the state's prerogatives against encroachment by the central authority. A strong unity developed, however, as the various communities subordinated their individual interests to central authority in matters of national concern. By the mid-18th century, Ashanti was a highly organized state. The wars of expansion that brought the northern states of Dagomba, Mamprusi, and Gonja under Ashanti influence were won during the reign of Opoku Ware I (died 1750), successor to Osei Kofi Tutu I. By the 1820s, successive rulers had extended Ashanti boundaries southward. Although the northern expansions linked Ashanti with trade networks across the desert and in Hausaland to the east, movements into the south brought the Ashanti into contact, sometimes antagonistic, with the coastal Fante, as well as with the various European merchants whose fortresses dotted the Gold Coast. Early European contact and the slave trade When the first Europeans arrived in the late 15th century, many inhabitants of the Gold Coast area were striving to consolidate their newly acquired territories and to settle into a secure and permanent environment. Initially, the Gold Coast did not participate in the export slave trade, rather as Ivor Wilks, a leading historian of Ghana, noted, the Akan purchased slaves from Portuguese traders operating from other parts of Africa, including the Congo and Benin in order to augment the labour needed for the state formation that was characteristic of this period. The Portuguese were the first Europeans to arrive. By 1471, they had reached the area that was to become known as the Gold Coast. The Gold Coast was so-named because it was an important source of gold. The Portuguese interest in trading for gold, ivory, and pepper so increased that in 1482 the Portuguese built their first permanent trading post on the western coast of present-day Ghana. This fortress, a trade castle called São Jorge da Mina (later called Elmina Castle), was constructed to protect Portuguese trade from European competitors, and after frequent rebuildings and modifications, still stands. The Portuguese position on the Gold Coast remained secure for over a century. During that time, Lisbon sought to monopolize all trade in the region in royal hands, though appointed officials at São Jorge, and used force to prevent English, French, and Flemish efforts to trade on the coast. By 1598, the Dutch began trading on the Gold Coast. The Dutch built forts at Komenda and Kormantsi by 1612. 
In 1637 they captured Elmina Castle from the Portuguese, and Axim (Fort St Anthony) in 1642. Other European traders joined in by the mid-17th century, largely English, Danes, and Swedes. The coastline was dotted by more than 30 forts and castles built by Dutch, British, and Danish merchants primarily to protect their interests from other Europeans and pirates. The Gold Coast became the highest concentration of European military architecture outside of Europe. Sometimes the Europeans were also drawn into conflicts with local inhabitants as they developed commercial alliances with local political authorities. These alliances, often complicated, involved both Europeans attempting to enlist or persuade their closest allies to attack rival European ports and their African allies, or conversely, various African powers seeking to recruit Europeans as mercenaries in their inter-state wars, or as diplomats to resolve conflicts. Forts were built, abandoned, attacked, captured, sold, and exchanged, and many sites were selected at one time or another for fortified positions by contending European nations. The Dutch West India Company operated throughout most of the 18th century. The British African Company of Merchants, founded in 1750, was the successor to several earlier organizations of this type. These enterprises built and manned new installations as the companies pursued their trading activities and defended their respective jurisdictions with varying degrees of government backing. There were short-lived ventures by the Swedes and the Prussians. The Danes remained until 1850, when they withdrew from the Gold Coast. The British gained possession of all Dutch coastal forts by the last quarter of the 19th century, thus making them the dominant European power on the Gold Coast.

In the late 17th century, social changes within the polities of the Gold Coast led to transformations in warfare, and to the shift from being a gold exporting and slave importing economy to being a major local slave exporting economy. Some scholars have challenged the premise that rulers on the Gold Coast engaged in wars of expansion for the sole purpose of acquiring slaves for the export market. For example, the Ashanti waged war mainly to pacify territories that were under Ashanti control, to exact tribute payments from subordinate kingdoms, and to secure access to trade routes—particularly those that connected the interior with the coast. The supply of slaves to the Gold Coast was entirely in African hands. Most rulers, such as the kings of various Akan states, engaged in the slave trade, as did individual local merchants. A good number of the slaves were also brought from various countries in the region and sold to middlemen. The demographic impact of the slave trade on West Africa was probably substantially greater than the number actually enslaved because a significant number of Africans perished during wars and bandit attacks or while in captivity awaiting transshipment. All nations with an interest in West Africa participated in the slave trade. Relations between the Europeans and the local populations were often strained, and distrust led to frequent clashes. Disease caused high losses among the Europeans engaged in the slave trade, but the profits realized from the trade continued to attract them. The growth of anti-slavery sentiment among Europeans made slow progress against vested African and European interests that were reaping profits from the traffic.
Although individual clergymen condemned the slave trade as early as the 17th century, major Christian denominations did little to further early efforts at abolition. The Quakers, however, publicly declared themselves against slavery as early as 1727. Later in the century, the Danes stopped trading in slaves; Sweden and the Netherlands soon followed. In 1807, Britain used its naval power and its diplomatic muscle to outlaw trade in slaves by its citizens and to begin a campaign to stop the international trade in slaves. The British withdrawal helped to decrease the external slave trade. The importation of slaves into the United States was outlawed in 1808. These efforts, however, were not successful until the 1860s because of the continued demand for plantation labour in the New World. Because it took decades to end the trade in slaves, some historians doubt that the humanitarian impulse inspired the abolitionist movement. According to historian Eric Williams, for example, Europe abolished the trans-Atlantic slave trade only because its profitability was undermined by the Industrial Revolution. Williams argued that mass unemployment caused by the new industrial machinery, the need for new raw materials, and European competition for markets for finished goods are the real factors that brought an end to the trade in human cargo and the beginning of competition for colonial territories in Africa. Other scholars, however, disagree with Williams, arguing that humanitarian concerns as well as social and economic factors were instrumental in ending the African slave trade.

British Gold Coast

Britain and the Gold Coast: the early years

By the later part of the 19th century the Dutch and the British were the only traders left, and after the Dutch withdrew in 1874, Britain made the Gold Coast a protectorate—a British Crown Colony. During the previous few centuries parts of the area were controlled by British, Portuguese, and Scandinavian powers, with the British ultimately prevailing. These nation-states maintained varying alliances with the colonial powers and each other, which resulted in the 1806 Ashanti-Fante War, as well as an ongoing struggle by the Empire of Ashanti against the British, the four Anglo-Ashanti Wars. By the early 19th century the British had acquired most of the forts along the coast. About one tenth of the total slave trade that occurred took place on the Gold Coast.

Two major factors laid the foundations of British rule and the eventual establishment of a colony on the Gold Coast: British reaction to the Ashanti wars and the resulting instability and disruption of trade, and Britain's increasing preoccupation with the suppression and elimination of the slave trade. During most of the 19th century, Ashanti, the most powerful state of the Akan interior, sought to expand its rule and to promote and protect its trade. The first Ashanti invasion of the coastal regions took place in 1807; the Ashanti moved south again in 1811 and in 1814. These invasions, though not decisive, disrupted trade in such products as gold, timber, and palm oil, and threatened the security of the European forts. Local British, Dutch, and Danish authorities were all forced to come to terms with Ashanti, and in 1817 the African Company of Merchants signed a treaty of friendship that recognized Ashanti claims to sovereignty over large areas of the coast and its peoples.
The coastal people, primarily some of the Fante and the inhabitants of the new town of Accra came to rely on British protection against Ashanti incursions, but the ability of the merchant companies to provide this security was limited. The British Crown dissolved the company in 1821, giving authority over British forts on the Gold Coast to Governor Charles MacCarthy, governor of Sierra Leone. The British forts and Sierra Leone remained under common administration for the first half of the century. MacCarthy's mandate was to impose peace and to end the slave trade. He sought to do this by encouraging the coastal peoples to oppose Kumasi rule and by closing the great roads to the coast. Incidents and sporadic warfare continued, however. In 1823, the First Anglo-Ashanti War broke out and lasted until 1831. MacCarthy was killed, and most of his force was wiped out in a battle with Ashanti forces in 1824. When the English government allowed control of the Gold Coast settlements to revert to the British African Company of Merchants in the late 1820s, relations with the Ashanti were still problematic. From the Ashanti point of view, the British had failed to control the activities of their local coastal allies. Had this been done, Ashanti might not have found it necessary to attempt to impose peace on the coastal peoples. MacCarthy's encouragement of coastal opposition to Ashanti and the subsequent 1824 British military attack further indicated to the Ashanti authorities that the Europeans, especially the British, did not respect Ashanti. In 1830 a London committee of merchants chose Captain George Maclean to become president of a local council of merchants. Although his formal jurisdiction was limited, Maclean's achievements were substantial. For example, a peace treaty was arranged with the Ashanti in 1831. Maclean also supervised the coastal people by holding regular court in Cape Coast where he punished those found guilty of disturbing the peace. Between 1830 and 1843 while Maclean was in charge of affairs on the Gold Coast, no confrontations occurred with Ashanti, and the volume of trade reportedly increased threefold. Maclean's exercise of limited judicial power on the coast was so effective that a parliamentary committee recommended that the British government permanently administer its settlements and negotiate treaties with the coastal chiefs that would define Britain's relations with them. The government did so in 1843, the same year crown government was reinstated. Commander H. Worsley Hill was appointed first governor of the Gold Coast. Under Maclean's administration, several coastal tribes had submitted voluntarily to British protection. Hill proceeded to define the conditions and responsibilities of his jurisdiction over the protected areas. He negotiated a special treaty with a number of Fante and other local chiefs that became known as the Bond of 1844. This document obliged local leaders to submit serious crimes, such as murder and robbery, to British jurisdiction and laid the legal foundation for subsequent British colonization of the coastal area. Additional coastal states as well as other states farther inland eventually signed the Bond, and British influence was accepted, strengthened, and expanded. Under the terms of the 1844 arrangement, the British gave the impression that they would protect the coastal areas; thus, an informal protectorate came into being. 
As responsibilities for defending local allies and managing the affairs of the coastal protectorate increased, the administration of the Gold Coast was separated from that of Sierra Leone in 1850. At about the same time, growing acceptance of the advantages offered by the British presence led to the initiation of another important step. In April 1852, local chiefs and elders met at Cape Coast to consult with the governor on means of raising revenue. With the governor's approval, the council of chiefs constituted itself as a legislative assembly. In approving its resolutions, the governor indicated that the assembly of chiefs should become a permanent fixture of the protectorate's constitutional machinery, but the assembly was given no specific constitutional authority to pass laws or to levy taxes without the consent of the people. The Second Anglo-Ashanti War broke out in 1863 and lasted until 1864.

In 1872, British influence over the Gold Coast increased further when Britain purchased Elmina Castle, the last of the Dutch forts along the coast. The Ashanti, who for years had considered the Dutch at Elmina as their allies, thereby lost their last trade outlet to the sea. To prevent this loss and to ensure that revenue received from that post continued, the Ashanti staged their last invasion of the coast in 1873. After early successes, they finally came up against well-trained British forces who compelled them to retreat beyond the Pra River. Later attempts to negotiate a settlement of the conflict were rejected by the commander of the British forces, Major General Sir Garnet Wolseley. To settle the Ashanti problem permanently, the British invaded Ashanti with a sizable military force. This invasion initiated the Third Anglo-Ashanti War. The attack, which was launched in January 1874 by 2,500 British soldiers and large numbers of African auxiliaries, resulted in the occupation and burning of Kumasi, the Ashanti capital.

The subsequent peace treaty of 1875 required the Ashanti to renounce any claim to many southern territories. The Ashanti also had to keep the road to Kumasi open to trade. From this point on, Ashanti power steadily declined. The confederation slowly disintegrated as subject territories broke away and as protected regions defected to British rule. The warrior spirit of the nation was not entirely subdued, however, and enforcement of the treaty led to recurring difficulties and outbreaks of fighting. In 1896, the British dispatched another expedition that again occupied Kumasi and that forced Ashanti to become a protectorate of the British Crown. This became the Fourth Anglo-Ashanti War, which lasted from 1894 until 1896. The position of "Asantehene" was abolished and the incumbent, Prempeh I, was exiled. A British resident was installed at Kumasi. The core of the Ashanti federation accepted these terms grudgingly. In 1900 the Ashanti rebelled again (the War of the Golden Stool) but were defeated the next year, and in 1902 the British proclaimed Ashanti a colony under the jurisdiction of the governor of the Gold Coast. The annexation was made with misgivings and recriminations on both sides. With Ashanti and its gold-producing districts subdued and annexed, British colonization of the region became a reality.
The Protestant nations in Western Europe, including Britain, had a vigorous evangelical element in the 19th century that felt their nations had a duty to civilize slaves, sinners, and savages. Along with business opportunities and the quest for national glory, the evangelical mission to save souls for Christ was a powerful impulse to imperialism. Practically all of Western Africa consisted of slave societies, in which warfare to capture new slaves—and perhaps sell them to itinerant slave traders—was a well-established economic, social, and political institution. The missionaries first of all targeted the slave trade, but they insisted that both the slave trade and the practice of traditional slavery were morally abhorrent. They worked hard and organized to abolish the trade. The transoceanic slave ships were targeted by the Royal Navy, and the trade faded away. Overland slave trading, practiced especially by Muslim traders from northeastern Africa, continued apace until the early 20th century. The abolition of slavery did not end the forced labor of children, however.

The first missionaries to pre-colonial Ghana were a multiracial mixture of European, African, and Caribbean pietists employed by Switzerland's Basel Mission. Its policies were adopted by later missionary organizations. The Basel Mission had tight budgets and depended on child labor for many routine operations. The children were students in the mission schools who split their time between general education, religious studies, and unpaid labor. The Basel Mission made it a priority to alleviate the harsh conditions of child labor imposed by slavery and by the debt bondage of the children's parents.

British rule of the Gold Coast: the colonial era

Military confrontations between Ashanti and the Fante contributed to the growth of British influence on the Gold Coast, as the Fante states—concerned about Ashanti activities on the coast—signed the Bond of 1844 at Fomena-Adansi, which allowed the British to usurp judicial authority from African courts. As a result of the exercise of ever-expanding judicial powers on the coast and also to ensure that the coastal peoples remained firmly under control, the British proclaimed the existence of the Gold Coast Colony on July 24, 1874, which extended from the coast inland to the edge of Ashanti territory. Though the coastal peoples were unenthusiastic about this development, there was no popular resistance, likely because the British made no claim to any rights to the land.

In 1896, a British military force invaded Ashanti and overthrew the Asantehene, Prempeh I. The deposed Ashanti leader was replaced by a British resident at Kumasi. The British sphere of influence was thus extended to include Ashanti following their defeat in 1896. However, British Governor Hodgson went too far in his restrictions on the Ashanti when, in 1900, he demanded the "Golden Stool," the symbol of Ashanti rule and independence. This caused another Ashanti revolt against the British colonizers. The Ashanti were defeated again in 1901. Once the Asantehene and his council had been exiled, the British appointed a resident commissioner to Ashanti. Each Ashanti state was administered as a separate entity and was ultimately responsible to the governor of the Gold Coast. In the meantime, the British became interested in the Northern Territories north of Ashanti, control of which they believed would forestall the advances of the French and the Germans.
After 1896 protection was extended to northern areas whose trade with the coast had been controlled by Ashanti. In 1898 and 1899, European colonial powers amicably demarcated the boundaries between the Northern Territories and the surrounding French and German colonies. The Northern Territories were proclaimed a British protectorate in 1902. Like the Ashanti protectorate, the Northern Territories were placed under the authority of a resident commissioner who was responsible to the governor of the Gold Coast. The governor ruled both Ashanti and the Northern Territories by proclamations until 1946. With the north under British control, the three territories of the Gold Coast—the Colony (the coastal regions), Ashanti, and the Northern Territories—became, for all practical purposes, a single political unit, or crown colony, known as the Gold Coast. The borders of present-day Ghana were realized in May 1956, when the people of the Volta region, known as British Mandated Togoland, voted in a plebiscite on whether British Togoland should become part of modern Ghana; 58% of the votes were cast for integration, while the Togoland Congress and 42% of voters opposed it.

Beginning in 1850, the coastal regions increasingly came under control of the governor of the British fortresses, who was assisted by the Executive Council and the Legislative Council. The Executive Council was a small advisory body of European officials that recommended laws and voted taxes, subject to the governor's approval. The Legislative Council included the members of the Executive Council and unofficial members initially chosen from British commercial interests. After 1900 three chiefs and three other Africans were added to the Legislative Council, though the inclusion of Africans from Ashanti and the Northern Territories did not take place until much later. The gradual emergence of centralized colonial government brought about unified control over local services, although the actual administration of these services was still delegated to local authorities. Specific duties and responsibilities came to be clearly delineated, and the role of traditional states in local administration was also clarified.

The structure of local government had its roots in traditional patterns of government. Village councils of chiefs and elders were responsible for the immediate needs of individual localities, including traditional law and order and the general welfare. The councils ruled by consent rather than by right: though chosen by the ruling class, a chief continued to rule because he was accepted by his people. British authorities adopted a system of indirect rule for colonial administration, wherein traditional chiefs maintained power but took instructions from their European supervisors. Indirect rule was cost-effective (by reducing the number of European officials needed), minimized local opposition to European rule, and guaranteed law and order. Though theoretically decentralizing, indirect rule in practice caused chiefs to look to Accra (the capital) rather than to their people for decisions. Many chiefs, who were rewarded with honors, decorations, and knighthoods by government commissioners, came to regard themselves as a ruling aristocracy. In its preservation of traditional forms of power, indirect rule failed to provide opportunities for the country's growing population of educated young men.
Other groups were dissatisfied because there was insufficient cooperation between the councils and the central government and because some felt that the local authorities were too dominated by the British district commissioners. In 1925 provincial councils of chiefs were established in all three territories of the colony, partly to give the chiefs a colony-wide function. The 1927 Native Administration Ordinance clarified and regulated the powers and areas of jurisdiction of chiefs and councils. In 1935 the Native Authorities Ordinance combined the central colonial government and the local authorities into a single governing system. New native authorities, appointed by the governor, were given wide powers of local government under the supervision of the central government's provincial commissioners, who made sure that their policies would be those of the central government. The provincial councils and moves to strengthen them were not popular. Even by British standards, the chiefs were not given enough power to be effective instruments of indirect rule. Some Ghanaians believed that the reforms, by increasing the power of the chiefs at the expense of local initiative, permitted the colonial government to avoid movement toward any form of popular participation in the colony's government.

The years of British administration of the Gold Coast during the 20th century were an era of significant progress in social, economic, and educational development. Communications and railroads were greatly improved. New crops were introduced; one leading introduced crop was coffee. Most spectacular among these introduced crops, however, was the cacao tree, which was indigenous to the New World and had been introduced to Africa by the Spanish and Portuguese. Cacao had been introduced to the Gold Coast in 1879 by Tetteh Quashie, a blacksmith from the Gold Coast. Cacao tree raising and farming became widely accepted in the eastern part of the Gold Coast. In 1891, the Gold Coast exported 80 lbs of cacao worth no more than 4 pounds sterling. By the 1920s cacao exports had passed 200,000 tons and had reached a value of 4.7 million pounds sterling. By 1928, cacao exports had reached 11.7 million pounds sterling. From 1890 to 1911, the Gold Coast went from exporting no cocoa to being one of the largest cocoa exporters in the world. Thus, cacao production became a major part of the economy of the Gold Coast and later a major part of Ghana's economy. The colony's earnings increased further from the export of timber and gold. Revenue from export of the colony's natural resources financed internal improvements in infrastructure and social services. The foundation of an educational system more advanced than any other in West Africa also resulted from mineral export revenue. It was through British-style education that a new Ghanaian elite gained the means and the desire to strive for independence. From beginnings in missionary schools, the early part of the 20th century saw the opening of secondary schools and the country's first institute of higher learning.

Many of the economic and social improvements in the Gold Coast in the early part of the 20th century have been attributed to the Canadian-born Gordon Guggisberg, governor from 1919 to 1927. Within the first six weeks of his governorship, he presented a ten-year development programme to the Legislative Council. He suggested first the improvement of transportation.
Then, in order of priority, his prescribed improvements included water supply, drainage, hydroelectric projects, public buildings, town improvements, schools, hospitals, prisons, communication lines, and other services. Guggisberg also set a goal of filling half of the colony's technical positions with Africans as soon as they could be trained. His programme has been described as the most ambitious ever proposed in West Africa up to that time. The colony assisted Britain in both World War I and World War II. In the ensuing years, however, postwar inflation and instability severely hampered readjustment for returning veterans, who were in the forefront of growing discontent and unrest. Their war service and veterans' associations had broadened their horizons, making it difficult for them to return to the humble and circumscribed positions set aside for Africans by the colonial authorities.

The growth of nationalism and the end of colonial rule

As Ghana developed economically, education of the citizenry progressed apace. In 1890 there were only 5 government and 49 "assisted" mission schools in the whole of the Gold Coast, with a total enrollment of only 5,000. By 1920 there were 20 government schools, 188 "assisted" mission schools, and 309 "unassisted" mission schools, with a total enrollment of 43,000 pupils. By 1940, there were 91,000 children attending Gold Coast schools. By 1950, there were 279,000 children attending some 3,000 schools in the Gold Coast. This meant that, in 1950, 43.6% of the school-age children in the Gold Coast colony were attending school. Thus by the end of the Second World War, the Gold Coast was the richest and best-educated territory in West Africa. Within this educated environment, the focus of government power gradually shifted from the hands of the governor and his officials into those of Ghanaians themselves. The changes resulted from the gradual development of a strong spirit of nationalism and were to result eventually in independence. The development of national consciousness accelerated quickly in the post-World War II era, when, in addition to ex-servicemen, a substantial group of urban African workers and traders emerged to lend mass support to the aspirations of a small educated minority.

Early manifestations of nationalism in Ghana

By the late 19th century, a growing number of educated Africans increasingly found unacceptable an arbitrary political system that placed almost all power in the hands of the governor through his appointment of council members. In the 1890s, some members of the educated coastal elite organized themselves into the Aborigines' Rights Protection Society to protest a land bill that threatened traditional land tenure. This protest helped lay the foundation for political action that would ultimately lead to independence. In 1920, one of the African members of the Legislative Council, Joseph E. Casely-Hayford, convened the National Congress of British West Africa. The National Congress demanded a wide range of reforms and innovations for British West Africa. The National Congress sent a delegation to London to urge the Colonial Office to consider the principle of elected representation. The group, which claimed to speak for all British West African colonies, represented the first expression of political solidarity between intellectuals and nationalists of the area.
Though the delegation was not received in London (on the grounds that it represented only the interests of a small group of urbanized Africans), its actions aroused considerable support among the African elite at home. Notwithstanding their call for elected representation as opposed to a system whereby the governor appointed council members, these nationalists insisted that they were loyal to the British Crown and that they merely sought an extension of British political and social practices to Africans. Notable leaders included Africanus Horton, the writer John Mensah Sarbah, and S. R. B. Attah-Ahoma. Such men gave the nationalist movement a distinctly elitist flavour that was to last until the late 1940s. The constitution of April 8, 1925, promulgated by Guggisberg, created provincial councils of paramount chiefs for all but the northern provinces of the colony. These councils in turn elected six chiefs as unofficial members of the Legislative Council, which however had an inbuilt British majority and whose powers were in any case purely advisory. Although the new constitution appeared to recognize some African sentiments, Guggisberg was concerned primarily with protecting British interests. For example, he provided Africans with a limited voice in the central government; yet, by limiting nominations to chiefs, he drove a wedge between chiefs and their educated subjects. The intellectuals believed that the chiefs, in return for British support, had allowed the provincial councils to fall completely under control of the government. By the mid-1930s, however, a gradual rapprochement between chiefs and intellectuals had begun. Agitation for more adequate representation continued. Newspapers owned and managed by Africans played a major part in provoking this discontent—six were being published in the 1930s. As a result of the call for broader representation, two more unofficial African members were added to the Executive Council in 1943. Changes in the Legislative Council, however, had to await a different political climate in London, which came about only with the postwar election of a British Labour Party government. The new Gold Coast constitution of March 29, 1946 (also known as the Burns constitution after the governor of the time, Sir Alan Cuthbert Maxwell Burns) was a bold document. For the first time, the concept of an official majority was abandoned. The Legislative Council was now composed of six ex-officio members, six nominated members, and eighteen elected members, however the Legislative Council continued to have purely advisory powers – all executive power remained with the governor. The 1946 constitution also admitted representatives from Ashanti into the council for the first time. Even with a Labour Party government in power, however, the British continued to view the colonies as a source of raw materials that were needed to strengthen their crippled economy. Change that would place real power in African hands was not a priority among British leaders until after rioting and looting in Accra and other towns and cities in early 1948 over issues of pensions for ex-servicemen, the dominant role of foreigners in the economy, the shortage of housing, and other economic and political grievances. With elected members in a decisive majority, Ghana had reached a level of political maturity unequalled anywhere in colonial Africa. The constitution did not, however, grant full self-government. 
Executive power remained in the hands of the governor, to whom the Legislative Council was responsible. Hence, the constitution, although greeted with enthusiasm as a significant milestone, soon encountered trouble. World War II had just ended, and many Gold Coast veterans who had served in British overseas expeditions returned to a country beset with shortages, inflation, unemployment, and black-market practices. These veterans, along with discontented urban elements, formed a nucleus of malcontents ripe for disruptive action. They were now joined by farmers, who resented drastic governmental measures required to cut out diseased cacao trees in order to control an epidemic, and by many others who were unhappy that the end of the war had not been followed by economic improvements.

Politics of the independence movements

Although political organizations had existed in the British colony, the United Gold Coast Convention (UGCC), founded on 4 August 1947 by educated Ghanaians known as The Big Six, was the first nationalist movement with the aim of self-government "in the shortest possible time." It called for the replacement of chiefs on the Legislative Council with educated persons. They also demanded that, given their education, the colonial administration should respect them and accord them positions of responsibility. In particular, the UGCC leadership criticized the government for its failure to solve the problems of unemployment, inflation, and the disturbances that had come to characterize the society at the end of the war. Though they opposed the colonial administration, UGCC members did not seek drastic or revolutionary change.

Public dissatisfaction with the UGCC expressed itself on February 28, 1948, when a demonstration of ex-servicemen organized by the ex-servicemen's union paraded through Accra. To disperse the demonstrators, police fired on them, killing three ex-servicemen and wounding sixty. Five days of violent disorder followed in Accra in response to the shooting, and rioters broke into and looted the shops owned by Europeans and Syrians. Rioting also broke out in Kumasi and other towns across the Gold Coast. The Big Six, including Nkrumah, were imprisoned by the British authorities from 12 March to 12 April 1948. The police shooting and the resultant riots indicated that the gentlemanly manner in which politics had been conducted by the UGCC was irrelevant in the new post-war world. This change in the dynamics of politics of the Gold Coast was not lost on Kwame Nkrumah, who broke with the UGCC publicly during its Easter Convention in 1949, and created his Convention People's Party (CPP) on 12 June 1949.

After his brief tenure with the UGCC, the US- and British-educated Nkrumah broke with the organization over his frustration at the UGCC's weak attempts to solve the problems of the Gold Coast colony by negotiating another new conciliatory colonial constitution with the British colonial authority. Unlike the UGCC's call for self-government "in the shortest possible time," Nkrumah and the CPP asked for "self-government now." The party leadership identified itself more with ordinary working people than with the UGCC and its intelligentsia, and the movement found support among workers, farmers, youths, and market women. The politicized population consisted largely of ex-servicemen, literate persons, journalists, and elementary school teachers, all of whom had developed a taste for populist conceptions of democracy.
A growing number of uneducated but urbanized industrial workers also formed part of the support group. By June 1949, Nkrumah had a mass following. The constitution of January 1, 1951 resulted from the report of the Coussey Committee, created because of disturbances in Accra and other cities in 1948. In addition to giving the Executive Council a large majority of African ministers, it created an assembly, half the elected members of which were to come from the towns and rural districts and half from the traditional councils. Although it was an enormous step forward, the new constitution still fell far short of the CPP's call for full self-government. Executive power remained in British hands, and the legislature was tailored to permit control by traditionalist interests. With increasing popular backing, the CPP in early 1950 initiated a campaign of "Positive Action" intended to instigate widespread strikes and nonviolent resistance. When some violent disorders occurred on January 20, 1950 Nkrumah was arrested and imprisoned for sedition. This merely established him as a leader and hero, building popular support, and when the first elections were held for the Legislative Assembly under the new constitution from February 5–10, 1951, Nkrumah (still in jail) won a seat, and the CPP won a two-thirds majority of votes cast winning 34 of the 38 elected seats in the Assembly. Nkrumah was released from jail on 11 February 1951, and the following day accepted an invitation to form a government as "leader of government business," a position similar to that of prime minister. The start of Nkrumah's first term was marked by cooperation with the British governor. During the next few years, the government was gradually transformed into a full parliamentary system. The changes were opposed by the more traditionalist African elements, though opposition proved ineffective in the face of popular support for independence at an early date. On March 10, 1952 the new position of prime minister was created, and Nkrumah was elected to the post by the Assembly. At the same time the Executive Council became the cabinet. The new constitution of 5 May 1954 ended the election of assembly members by the tribal councils. The Legislative Assembly increased in size, and all members were chosen by direct election from equal, single-member constituencies. Only defense and foreign policy remained in the hands of the governor; the elected assembly was given control of virtually all internal affairs of the colony. The CPP won 71 of the 104 seats in the 15 June 1954 election. The CPP pursued a policy of political centralization, which encountered serious opposition. Shortly after the 15 June 1954 election, a new party, the Ashanti-based National Liberation Movement (NLM), was formed. The NLM advocated a federal form of government, with increased powers for the various regions. NLM leaders criticized the CPP for perceived dictatorial tendencies. The new party worked in cooperation with another regionalist group, the Northern People's Party. When these two regional parties walked out of discussions on a new constitution, the CPP feared that London might consider such disunity an indication that the colony was not yet ready for the next phase of self-government. The British constitutional adviser, however, backed the CPP position. The governor dissolved the assembly in order to test popular support for the CPP demand for immediate independence. 
On 11 May 1956 the British agreed to grant independence if so requested by a 'reasonable' majority of the new legislature. New elections were held on 17 July 1956. In keenly contested elections, the CPP won 57 percent of the votes cast, but the fragmentation of the opposition gave the CPP every seat in the south as well as enough seats in Ashanti, the Northern Territories, and the Trans-Volta Region to hold a two-thirds majority, winning 72 of the 104 seats. On May 9, 1956 a plebiscite was conducted under United Nations (UN) auspices to decide the future disposition of British Togoland and French Togoland. The British trusteeship, the western portion of the former German colony, had been linked to the Gold Coast since 1919 and was represented in its parliament. The dominant ethnic group, the Ewe people, were divided between the two Togos. A majority (58%) of British Togoland inhabitants voted in favour of union, and the area was absorbed into Ashantiland and Dagbon. There was, however, vocal opposition to the incorporation from the Ewe people (42%) in British Togoland.

Moving toward independence

In 1945 a conference (known as the 5th Pan-African Congress) was held in Manchester to promote pan-African ideas. This was attended by Nkrumah of Ghana, Nnamdi Azikiwe of Nigeria and I. T. A. Wallace-Johnson of Sierra Leone. The achievement of Indian and Pakistani independence catalysed this desire. There was also, to some extent, a rejection of African culture. Some external forces also contributed to this feeling. African-Americans such as W. E. B. Du Bois and Marcus Garvey (Afro-Jamaican) raised a strong Pan-African consciousness. Sir Alan Burns's constitution of 1946 provided a new legislative council made up of the Governor as President, six government officials, six nominated members, and eighteen elected members. The executive council was not responsible to the legislative council; members served only in an advisory capacity, and the governor was not obliged to heed their advice. These forces led Dr J. B. Danquah to form the United Gold Coast Convention (UGCC) in 1947, and Nkrumah was invited to be this party's General Secretary. Other officers were George Alfred Grant (Paa Grant), Edward Akufo-Addo, William Ofori Atta, Emmanuel Obetsebi-Lamptey, Ebenezer Ako-Adjei, and J. Tsiboe. Their aim was independence for Ghana, and they rejected the Burns constitution and demanded the amendment of a number of its clauses. It also granted a voice to chiefs and their tribal councils by providing for the creation of regional assemblies. No bill amending the entrenched clauses of the constitution or affecting the powers of the regional bodies or the privileges of the chiefs could become law except by a two-thirds vote of the National Assembly and by simple majority approval in two-thirds of the regional assemblies. When local CPP supporters gained control of enough regional assemblies, however, the Nkrumah government promptly secured passage of an act removing the special entrenchment protection clause in the constitution, a step that left the National Assembly with the power to effect any constitutional change the CPP deemed necessary.

The electoral victory of the CPP in 1951 ushered in five years of power-sharing with the British. The economy prospered, with a high global demand and rising prices for cocoa. The efficiency of the Cocoa Marketing Board enabled the large profits to be spent on development of the infrastructure. There was a major expansion of schooling and modernizing projects such as the new industrial city at Tema.
Projects favoured by Nkrumah included new organizations such as the Young Pioneers, for young people, and the Builders' Brigades for the mechanization of agriculture. There were uniforms, parades, new patriotic songs, and the presentation of an ideal citizenship in which all citizens learned that their primary duty was to the state.

Ghana's independence achieved in 1957

On August 3, 1956, the new assembly passed a motion authorizing the government to request independence within the British Commonwealth. The opposition did not attend the debate, and the vote was unanimous. The British government accepted this motion as clearly representing a reasonable majority, so on 18 September 1956 the British set 6 March 1957, the 113th anniversary of the Bond of 1844, as the date the former British colony of the Gold Coast was to become the independent state of Ghana, and the nation's Legislative Assembly was to become the National Assembly. Nkrumah continued as prime minister, and Queen Elizabeth II as monarch, represented in the former colony by a governor general, Sir Charles Noble Arden-Clarke. Dominion status would continue until 1960, when, after a national referendum, Ghana was declared a republic.

The Second Development Plan of 1959-1964 followed the Soviet model, and shifted away from expanding state services toward raising productivity in the key sectors. Nkrumah believed that colonialism had twisted personalities, imposing a competitive, individualistic and bourgeois mentality that had to be eliminated. Worldwide cocoa prices began to fall, budgets were cut, and workers were called upon for more and more self-sacrifice to overcome neocolonialism. Nkrumah drastically curtailed the independence of the labor unions, and when strikes resulted he cracked down through the Preventive Detention Act. On the domestic front, Nkrumah believed that rapid modernization of industries and communications was necessary and that it could be achieved if the workforce were completely Africanized and educated. Expansion of secondary schools became a high priority in 1959-1964, along with expansion of vocational programs and higher education. Even more important, however, Nkrumah believed that this domestic goal could be achieved faster if it were not hindered by reactionary politicians—elites in the opposition parties and traditional chiefs—who might compromise with Western imperialists. Indeed, the enemies could be anywhere and dissent was not tolerated. Nkrumah's regime enacted the Deportation Act of 1957, the Detention Acts of 1958, 1959 and 1962, and carried out parliamentary intimidation of CPP opponents, the recognition of his party as the sole political organization of the state, the creation of the Young Pioneer Movement for the ideological education of the nation's youth, and the party's control of the civil service. Government expenditure on road building projects, mass education of adults and children, and health services, as well as the construction of the Akosombo Dam, were all important if Ghana were to play its leading role in Africa's liberation from colonial and neo-colonial domination.

Nkrumah discussed his political views in his numerous writings, especially in Africa Must Unite (1963) and in Neo-Colonialism (1965). These writings show the impact of his stay in Britain in the mid-1940s. The pan-Africanist movement, which had held one of its annual conferences, attended by Nkrumah, at Manchester in 1945, was influenced by socialist ideologies.
The movement sought unity among people of African descent and also improvement in the lives of workers who, it was alleged, had been exploited by capitalist enterprises in Africa. Western countries with colonial histories were identified as the exploiters. According to the socialists, "oppressed" people ought to identify with the socialist countries and organizations that best represented their interests; however, all the dominant world powers in the immediate post-1945 period, except the Soviet Union and the United States, had colonial ties with Africa. Nkrumah asserted that even the United States, which had never colonized any part of Africa, was in an advantageous position to exploit independent Africa unless preventive efforts were taken. According to Nkrumah, his government, which represented the first black African nation to win independence, had an important role to play in the struggle against capitalist interests on the continent. As he put it, "the independence of Ghana would be meaningless unless it was tied to the total liberation of Africa." It was important, then, he said, for Ghanaians to "seek first the political kingdom." Economic benefits associated with independence were to be enjoyed later, proponents of Nkrumah's position argued. But Nkrumah needed strategies to pursue his goals. On the continental level, Nkrumah sought to unite Africa so that it could defend its international economic interests and stand up against the political pressures from East and West that were a result of the Cold War. His dream for Africa was a continuation of the pan-Africanist dream as expressed at the Manchester conference. The initial strategy was to encourage revolutionary political movements in Africa. The CIA believed that Nkrumah's government provided money and training for pro-socialist guerrillas in Ghana, aided after 1964 by the Chinese Communist government. Several hundred trainees passed through this program, administered by Nkrumah's Bureau of African Affairs, and were sent on to countries such as Rhodesia, Angola, Mozambique, Niger and Congo. Politically, Nkrumah believed that a Ghana, Guinea, and Mali union would serve as the psychological and political impetus for the formation of a United States of Africa. When Nkrumah was criticized for paying little attention to Ghana or for wasting national resources in supporting external programmes, he reversed the argument and accused his opponents of being short-sighted. The heavy financial burdens created by Nkrumah's development policies and pan-African adventures created new sources of opposition. With the presentation in July 1961 of the country's first austerity budget, Ghana's workers and farmers became aware of and critical of the cost to them of Nkrumah's programmes. Their reaction set the model for the protests over taxes and benefits that were to dominate Ghanaian political crises for the next thirty years. CPP backbenchers and UP representatives in the National Assembly sharply criticized the government's demand for increased taxes and, particularly, for a forced savings programme. Urban workers began a protest strike, the most serious of a number of public outcries against government measures during 1961. Nkrumah's public demands for an end to corruption in the government and the party further undermined popular faith in the national government. A drop in the price paid to cocoa farmers by the government marketing board aroused resentment among a segment of the population that had always been Nkrumah's major opponent. 
Growth of opposition to Nkrumah

Nkrumah's complete domination of political power had served to isolate lesser leaders, leaving each a real or imagined challenger to the ruler. After opposition parties were crushed, opponents came only from within the CPP hierarchy. Among its members was Tawia Adamafio, an Accra politician. Nkrumah had made him general secretary of the CPP for a brief time. Later, Adamafio was appointed minister of state for presidential affairs, the most important post in the president's staff at Flagstaff House, which gradually became the centre for all decision making and much of the real administrative machinery for both the CPP and the government. The other leader with an apparently autonomous base was John Tettegah, leader of the Trade Union Congress. Neither, however, proved to have any power other than that granted to them by the president. By 1961, however, the young and more radical members of the CPP leadership, led by Adamafio, had gained ascendancy over the original CPP leaders like Gbedemah.

After a bomb attempt on Nkrumah's life in August 1962, Adamafio, Ako Adjei (then minister of foreign affairs), and Cofie Crabbe (all members of the CPP) were jailed under the Preventive Detention Act. The first Ghanaian Commissioner of Police, E. R. T. Madjitey, from Asite in Manya-Krobo, was also relieved of his post. The CPP newspapers charged them with complicity in the assassination attempt, offering as evidence only the fact that they had all chosen to ride in cars far behind the president's when the bomb was thrown. For more than a year, the trial of the alleged plotters of the 1962 assassination attempt occupied centre stage. The accused were brought to trial before the three-judge court for state security, headed by the chief justice, Sir Arku Korsah. When the court acquitted the accused, Nkrumah used his constitutional prerogative to dismiss Korsah. Nkrumah then obtained a vote from the parliament that allowed retrial of Adamafio and his associates. A new court, with a jury chosen by Nkrumah, found all the accused guilty and sentenced them to death. These sentences, however, were commuted to twenty years' imprisonment.

Corruption had highly deleterious effects. It removed money from the active economy and put it in the hands of the political parties and of Nkrumah's friends and family, and so became an obstacle to economic growth. The new state companies that had been formed to implement growth became instruments of patronage and financial corruption; civil servants doubled their salaries and politicians purchased supporters. Politically, allegations and instances of corruption in the ruling party, and in Nkrumah's personal finances, undermined the very legitimacy of his regime and sharply decreased the ideological commitment needed to maintain the public welfare under Ghanaian socialism. Political scientist Herbert H. Werlin has examined the mounting economic disaster:

Nkrumah left Ghana with a serious balance-of-payments problem. Beginning with a substantial foreign reserve fund of over $500 million at the time of independence, Ghana, by 1966, had a public external debt of over $800 million ... there was no foreign exchange to buy the spare parts and raw materials required for the economy.
While inflation was rampant, causing the price-level to rise by 30 per cent in 1964-65, unemployment was also serious ... Whereas between 1955 and 1962 Ghana's GNP increased at an average annual rate of nearly 5 per cent, there was practically no growth at all by 1965 ... Since Ghana's estimated annual rate of population growth was 2.6 per cent, her economy was obviously retrogressing. While personal per capita consumption declined by some 15 per cent between 1960 and 1966, the real wage income of the minimum wage earner declined by some 45 per cent during this period.

Fall of Nkrumah: 1966

In early 1964, in order to prevent future challenges from the judiciary and after another national referendum, Nkrumah obtained a constitutional amendment allowing him to dismiss any judge. Ghana officially became a one-party state, and an act of parliament ensured that there would be only one candidate for president. Other parties having already been outlawed, no non-CPP candidates came forward to challenge the party slate in the general elections announced for June 1965. Nkrumah had been re-elected president of the country for less than a year when members of the National Liberation Council (NLC) overthrew the CPP government in a military coup on 24 February 1966. At the time, Nkrumah was in China. He was granted asylum in Guinea, where he remained until he died in 1972.

Leaders of the 1966 military coup justified their takeover by charging that the CPP administration was abusive and corrupt, that Nkrumah's involvement in African politics was overly aggressive, and that the nation lacked democratic practices. They claimed that the military coup of 1966 was a nationalist one because it liberated the nation from Nkrumah's dictatorship. All symbols and organizations linked to Nkrumah, such as the Young Pioneers, quickly vanished. Despite the vast political changes that were brought about by the overthrow of Kwame Nkrumah, many problems remained, including ethnic and regional divisions, the country's economic burdens, and mixed emotions about a resurgence of an overly strong central authority. A considerable portion of the population had become convinced that effective, honest government was incompatible with competitive political parties. Many Ghanaians remained committed to nonpolitical leadership for the nation, even in the form of military rule. The problems of the Busia administration, the country's first elected government after Nkrumah's fall, illustrated the problems Ghana would continue to face. It has been argued that the coup was supported by the U.S. Central Intelligence Agency.

The National Liberation Council (NLC), composed of four army officers and four police officers, assumed executive power. It appointed a cabinet of civil servants and promised to restore democratic government as quickly as possible. These moves culminated in the appointment of a representative assembly to draft a constitution for the Second Republic of Ghana. Political parties were allowed to operate beginning in late 1968. In Ghana's 1969 elections, the first competitive nationwide political contest since 1956, the major contenders were the Progress Party (PP), headed by Kofi Abrefa Busia, and the National Alliance of Liberals (NAL), led by Komla A. Gbedemah. The PP found much of its support among the old opponents of Nkrumah's CPP – the educated middle class and traditionalists of the Ashanti Region and the North. The NAL was seen as the successor of the CPP's right wing.
Overall, the PP gained 59 percent of the popular vote and 74 percent of the seats in the National Assembly. Gbedemah, who was soon barred from taking his National Assembly seat by a Supreme Court decision, retired from politics, leaving the NAL without a strong leader. In October 1970, the NAL absorbed the members of three other minor parties in the assembly to form the Justice Party (JP) under the leadership of Joseph Appiah. Their combined strength constituted what amounted to a southern bloc with a solid constituency among most of the Ewe and the peoples of the coastal cities. PP leader Busia became prime minister in September 1970. After a brief period under an interim three-member presidential commission, the electoral college chose as president Chief Justice Edward Akufo-Addo, one of the leading nationalist politicians of the UGCC era and one of the judges dismissed by Nkrumah in 1964. All attention, however, remained focused on Prime Minister Busia and his government. Much was expected of the Busia administration, because its parliamentarians were considered intellectuals and, therefore, more perceptive in their evaluations of what needed to be done. Many Ghanaians hoped that their decisions would be in the general interest of the nation, as compared with those made by the Nkrumah administration, which were judged to satisfy narrow party interests and, more important, Nkrumah's personal agenda. The NLC had given assurances that there would be more democracy, more political maturity, and more freedom in Ghana, because the politicians allowed to run for the 1969 elections were proponents of Western democracy. In fact, these were the same individuals who had suffered under the old regime and were, therefore, thought to understand the benefits of democracy. Two early measures initiated by the Busia government were the expulsion of large numbers of non-citizens from the country and a companion measure to limit foreign involvement in small businesses. The moves were aimed at relieving the unemployment created by the country's precarious economic situation. The policies were popular because they forced out of the retail sector of the economy those foreigners, especially Lebanese, Asians, and Nigerians, who were perceived as unfairly monopolizing trade to the disadvantage of Ghanaians. Many other Busia moves, however, were not popular. Busia's decision to introduce a loan programme for university students, who had hitherto received free education, was challenged because it was interpreted as introducing a class system into the country's highest institutions of learning. Some observers even saw Busia's devaluation of the national currency and his encouragement of foreign investment in the industrial sector of the economy as conservative ideas that could undermine Ghana's sovereignty. The opposition Justice Party's basic policies did not differ significantly from those of the Busia administration. Still, the party attempted to stress the importance of the central government rather than that of limited private enterprise in economic development, and it continued to emphasize programmes of primary interest to the urban work force. The ruling PP emphasized the need for development in rural areas, both to slow the movement of population to the cities and to redress regional imbalance in levels of development. The JP and a growing number of PP members favoured suspension of payment on some foreign debts of the Nkrumah era. 
This attitude grew more popular as debt payments became more difficult to meet. Both parties favoured creation of a West African economic community or an economic union with the neighboring West African states. Despite broad popular support garnered at its inception and strong foreign connections, the Busia government fell victim to an army coup within twenty-seven months. Neither ethnic nor class differences played a role in the overthrow of the PP government. The crucial causes were the country's continuing economic difficulties, both those stemming from the high foreign debts incurred by Nkrumah and those resulting from internal problems. The PP government had inherited US$580 million in medium- and long-term debts, an amount equal to 25 percent of the gross domestic product of 1969. By 1971 the US$580 million had been further inflated by US$72 million in accrued interest payments and US$296 million in short-term commercial credits. Within the country, an even larger internal debt fueled inflation.

Ghana's economy remained largely dependent upon the often difficult cultivation of and market for cocoa. Cocoa prices had always been volatile, but exports of this tropical crop normally provided about half of the country's foreign currency earnings. Beginning in the 1960s, however, a number of factors combined to limit severely this vital source of national income. These factors included foreign competition (particularly from neighboring Côte d'Ivoire), a lack of understanding of free-market forces (by the government in setting prices paid to farmers), accusations of bureaucratic incompetence in the Cocoa Marketing Board, and the smuggling of crops into Côte d'Ivoire. As a result, Ghana's income from cocoa exports continued to fall dramatically.

Austerity measures imposed by the Busia administration, although wise in the long run, alienated influential farmers, who until then had been PP supporters. These measures were part of Busia's economic structural adjustment efforts to put the country on a sounder financial base. The austerity programmes had been recommended by the International Monetary Fund. The recovery measures also severely affected the middle class and the salaried work force, both of which faced wage freezes, tax increases, currency devaluations, and rising import prices. These measures precipitated protests from the Trade Union Congress. In response, the government sent the army to occupy the trade union headquarters and to block strike actions—a situation that some perceived as negating the government's claim to be operating democratically. The army troops and officers upon whom Busia relied for support were themselves affected, both in their personal lives and in the tightening of the defense budget, by these same austerity measures. As the leader of the anti-Busia coup declared on January 13, 1972, even those amenities enjoyed by the army during the Nkrumah regime were no longer available. Knowing that austerity had alienated the officers, the Busia government began to change the leadership of the army's combat elements. This, however, was the last straw. Lieutenant Colonel Ignatius Kutu Acheampong, temporarily commanding the First Brigade around Accra, led a bloodless coup that ended the Second Republic.

National Redemption Council years, 1972–79

Despite its short existence, the Second Republic was significant in that the development problems the nation faced came clearly into focus.
These included uneven distribution of investment funds and favouritism toward certain groups and regions. Important questions about developmental priorities remained unanswered, and after the failure of both the Nkrumah and the Busia regimes (one a one-party state, and the other a multi-party parliamentary democracy), Ghana's path to political stability was obscure. Acheampong's National Redemption Council (NRC) claimed that it had to act to remove the ill effects of the currency devaluation of the previous government and thereby, at least in the short run, to improve living conditions for individual Ghanaians. To justify their takeover, coup leaders leveled charges of corruption against Busia and his ministers. The NRC sought to create a truly military government and did not outline any plan for the return of the nation to democratic rule.

In matters of economic policy, Busia's austerity measures were reversed, the Ghanaian currency was revalued upward, foreign debt was repudiated or unilaterally rescheduled, and all large foreign-owned companies were nationalized. The government also provided price supports for basic food imports, while seeking to encourage Ghanaians to become self-reliant in agriculture and the production of raw materials. These measures, while instantly popular, did nothing to solve the country's problems and in fact aggravated the problem of capital flow. Any economic successes were overridden by other basic economic factors. Industry and transportation suffered greatly as oil prices rose in 1974, and the lack of foreign exchange and credit left the country without fuel. Basic food production continued to decline even as the population grew. Disillusionment with the government developed, and accusations of corruption began to surface.

The reorganization of the NRC into the Supreme Military Council (SMC) in 1975 may have been part of a face-saving attempt. Little input from the civilian sector was allowed, and military officers were put in charge of all ministries and state enterprises down to the local level. During the NRC's early years, these administrative changes led many Ghanaians to hope that the soldiers in command would improve the efficiency of the country's bloated bureaucracies. Shortly after that time, the government sought to stifle opposition by issuing a decree forbidding the propagation of rumors and by banning a number of independent newspapers and detaining their journalists. Also, armed soldiers broke up student demonstrations, and the government repeatedly closed the universities, which had become important centres of opposition to NRC policies. The self-appointed Ashanti General I. K. Acheampong seemed to have more sympathy for women than for his ailing economic policies. As the Commissioner (Minister) of Finance, he signed government cheques to concubines and other women he barely knew. VW Golf cars were imported and given to women he came across. Import licenses were given out to friends and ethnic affiliates with impunity.

The SMC by 1977 found itself constrained by mounting non-violent opposition. To be sure, discussions about the nation's political future and its relationship to the SMC had begun in earnest. Although the various opposition groups (university students, lawyers, and other organized civilian groups) called for a return to civilian constitutional rule, Acheampong and the SMC favoured a union government—a mixture of elected civilian and appointed military leaders—but one in which party politics would be abolished.
University students and many intellectuals criticized the union government idea, but others, such as Justice Gustav Koranteng-Addow, who chaired the seventeen-member ad hoc committee appointed by the government to work out details of the plan, defended it as the solution to the nation's political problems. Supporters of the union government idea viewed multiparty political contests as the perpetrators of social tension and community conflict among classes, regions, and ethnic groups. Unionists argued that their plan had the potential to depoliticize public life and to allow the nation to concentrate its energies on economic problems. A national referendum was held in March 1978 to allow the people to accept or reject the union government concept. A rejection of the union government meant a continuation of military rule. Given this choice, it was surprising that so narrow a margin voted in favour of union government. Opponents of the idea organized demonstrations against the government, arguing that the referendum vote had not been free or fair. The Acheampong government reacted by banning several organizations and by jailing as many as 300 of its opponents. The agenda for change in the union government referendum called for the drafting of a new constitution by an SMC-appointed commission, the selection of a constituent assembly by November 1978, and general elections in June 1979. The ad hoc committee had recommended a nonparty election, an elected executive president, and a cabinet whose members would be drawn from outside a single-house National Assembly. The military council would then step down, although its members could run for office as individuals. In July 1978, in a sudden move, the other SMC officers forced Acheampong to resign, replacing him with Lieutenant General Frederick W. K. Akuffo. The SMC apparently acted in response to continuing pressure to find a solution to the country's economic dilemma. Inflation was estimated to be as high as 300 percent that year. There were shortages of basic commodities, and cocoa production fell to half its 1964 peak. The council was also motivated by Acheampong's failure to dampen rising political pressure for changes. Akuffo, the new SMC chairman, promised publicly to hand over political power to a new government to be elected by 1 July 1979. Despite Akuffo's assurances, opposition to the SMC persisted. The call for the formation of political parties intensified. In an effort to gain support in the face of continuing strikes over economic and political issues, the Akuffo government at length announced that the formation of political parties would be allowed after January 1979. Akuffo also granted amnesty to former members of both Nkrumah's CPP and Busia's PP, as well as to all those convicted of subversion under Acheampong. The decree lifting the ban on party politics went into effect on 1 January 1979, as planned. The constitutional assembly that had been working on a new constitution presented an approved draft and adjourned in May. All appeared set for a new attempt at constitutional government in July, when a group of young army officers overthrew the SMC government in June 1979. The Rawlings era On 15 May 1979, less than five weeks before constitutional elections were to be held, a group of junior officers led by Flight Lieutenant Jerry John Rawlings attempted a coup. Initially unsuccessful, the coup leaders were jailed and held for court-martial. 
On 4 June, however, sympathetic military officers overthrew the Akuffo regime and released Rawlings and his cohorts from prison fourteen days before the scheduled election. Although the SMC's pledge to return political power to civilian hands addressed the concerns of those who wanted civilian government, the young officers who had staged the June 4 coup insisted that issues critical to the image of the army and important for the stability of national politics had been ignored. Naomi Chazan, a leading analyst of Ghanaian politics, aptly assessed the significance of the 1979 coup in the following statement: Unlike the initial SMC II [the Akuffo period, 1978–1979] rehabilitation effort which focused on the power elite, this second attempt at reconstruction from a situation of disintegration was propelled by growing alienation. It strove, by reforming the guidelines of public behavior, to define anew the state power structure and to revise its inherent social obligations.... In retrospect the most irreversible outcome of this phase was the systematic eradication of the SMC leadership.... [Their] executions signaled not only the termination of the already fallacious myth of the nonviolence of Ghanaian politics, but, more to the point, the deadly serious determination of the new government to wipe the political slate clean. Rawlings and the young officers formed the Armed Forces Revolutionary Council (AFRC). The armed forces were purged of senior officers accused of corrupting the image of the military. In carrying out its goal, however, the AFRC was caught between two groups with conflicting interests, Chazan observed. These included the "soldier-supporters of the AFRC who were happy to lash out at all manifestations of the old regimes; and the now organized political parties who decried the undue violence and advocated change with restraint. Despite the coup and the subsequent executions of former heads of military governments (Afrifa of the NLC; Acheampong and some of his associates of the NRC; and Akuffo and leading members of the SMC), the planned elections took place, and Ghana had returned to constitutional rule by the end of September 1979. Before power was granted to the elected government, however, the AFRC sent the unambiguous message that "people dealing with the public, in whatever capacity, are subject to popular supervision, must abide by fundamental notions of probity, and have an obligation to put the good of the community above personal objective." The AFRC position was that the nation's political leaders, at least those from within the military, had not been accountable to the people. The administration of Hilla Limann, inaugurated on 24 September 1979, at the beginning of the Third Republic, was thus expected to measure up to the new standard advocated by the AFRC. Limann's People's National Party (PNP) began the Third Republic with control of only seventy-one of the 140 legislative seats. The opposition Popular Front Party (PFP) won forty-two seats, while twenty-six elective positions were distributed among three lesser parties. The percentage of the electorate that voted had fallen to 40 percent. Unlike the country's previous elected leaders, Limann was a former diplomat and a noncharismatic figure with no personal following. As Limann himself observed, the ruling PNP included people of conflicting ideological orientations. They sometimes disagreed strongly among themselves on national policies. 
Many observers, therefore, wondered whether the new government was equal to the task confronting the state. The most immediate threat to the Limann administration, however, was the AFRC, especially those officers who organized themselves into the "June 4 Movement" to monitor the civilian administration. In an effort to keep the AFRC from looking over its shoulder, the government ordered Rawlings and several other army and police officers associated with the AFRC into retirement; nevertheless, Rawlings and his associates remained a latent threat, particularly as the economy continued its decline. The first Limann budget, for fiscal year (FY—see Glossary) 1981, estimated the Ghanaian inflation rate at 70 percent for that year, with a budget deficit equal to 30 percent of the gross national product (GNP—see Glossary). The Trade Union Congress claimed that its workers were no longer earning enough to pay for food, let alone anything else. A rash of strikes, many considered illegal by the government, resulted, each one lowering productivity and therefore national income. In September the government announced that all striking public workers would be dismissed. These factors rapidly eroded the limited support the Limann government enjoyed among civilians and soldiers. The government fell on 31 December 1981, in another Rawlings-led coup. Rawlings and his colleagues suspended the 1979 constitution, dismissed the president and his cabinet, dissolved the parliament, and proscribed existing political parties. They established the Provisional National Defense Council (PNDC), initially composed of seven members with Rawlings as chairman, to exercise executive and legislative powers. The existing judicial system was preserved, but alongside it the PNDC created the National Investigation Committee to root out corruption and other economic offenses, the anonymous Citizens' Vetting Committee to punish tax evasion, and the Public Tribunals to try various crimes. The PNDC proclaimed its intent to allow the people to exercise political power through defense committees to be established in communities, workplaces, and in units of the armed forces and police. Under the PNDC, Ghana remained a unitary government. In December 1982, the PNDC announced a plan to decentralize government from Accra to the regions, the districts, and local communities, but it maintained overall control by appointing regional and district secretaries who exercised executive powers and also chaired regional and district councils. Local councils, however, were expected progressively to take over the payment of salaries, with regions and districts assuming more powers from the national government. In 1984, the PNDC created a National Appeals Tribunal to hear appeals from the public tribunals, changed the Citizens' Vetting Committee into the Office of Revenue Collection and replaced the system of defense committees with Committees for the Defense of the Revolution. In 1984, the PNDC also created a National Commission on Democracy to study ways to establish participatory democracy in Ghana. The commission issued a "Blue Book" in July 1987 outlining modalities for district-level elections, which were held in late 1988 and early 1989, for newly created district assemblies. One-third of the assembly members are appointed by the government. The second coming of Rawlings: the first six years, 1982–87 The new government that took power on 31 December 1981, was the eighth in the fifteen years since the fall of Nkrumah. 
Calling itself the Provisional National Defense Council (PNDC), its membership included Rawlings as chairman, Brigadier Joseph Nunoo-Mensah (whom Limann had dismissed as army commander), two other officers, and three civilians. Despite its military connections, the PNDC made it clear that it was unlike other soldier-led governments. This was immediately proved by the appointment of fifteen civilians to cabinet positions. In a radio broadcast on 5 January 1982, Rawlings presented a detailed statement explaining the factors that had necessitated termination of the Third Republic. The PNDC chairman assured the people that he had no intention of imposing himself on Ghanaians. Rather, he "wanted a chance for the people, farmers, workers, soldiers, the rich and the poor, to be part of the decision-making process." He described the two years since the AFRC had handed over power to a civilian government as a period of regression during which political parties attempted to divide the people in order to rule them. The ultimate purpose for the return of Rawlings was, therefore, to "restore human dignity to Ghanaians." In the chairman's words, the dedication of the PNDC to achieving its goals was different from any the country had ever known. It was for that reason that the takeover was not a military coup, but rather a "holy war" that would involve the people in the transformation of the socioeconomic structure of the society. The PNDC also served notice to friends and foes alike that any interference in the PNDC agenda would be "fiercely resisted." Opposition to the PNDC administration developed nonetheless in different sectors of the political spectrum. The most obvious groups opposing the government were former PNP and PFP members. They argued that the Third Republic had not been given time to prove itself and that the PNDC administration was unconstitutional. Further opposition came from the Ghana Bar Association (GBA), which criticized the government's use of people's tribunals in the administration of justice. Members of the Trade Union Congress were also angered when the PNDC ordered them to withdraw demands for increased wages. The National Union of Ghanaian Students (NUGS) went even farther, calling on the government to hand over power to the attorney general, who would supervise new elections. By the end of June 1982, an attempted coup had been discovered, and those implicated had been executed. Many who disagreed with the PNDC administration were driven into exile, where they began organizing their opposition. They accused the government of human rights abuses and political intimidation, which forced the country, especially the press, into a "culture of silence." Meanwhile, the PNDC was subjected to the influence of contrasting political philosophies and goals. Although the revolutionary leaders agreed on the need for radical change, they differed on the means of achieving it. For example, John Ndebugre, secretary for agriculture in the PNDC government, who was later appointed northern regional secretary (governor), belonged to the radical Kwame Nkrumah Revolutionary Guard, an extreme left-wing organization that advocated a Marxist–Leninist course for the PNDC. He was detained and jailed for most of the latter part of the 1980s. Other members of the PNDC, including Kojo Tsikata, P.V. Obeng, and Kwesi Botchwey, were believed to be united only by their determination either to uplift the country from its desperate conditions or to protect themselves from vocal opposition. 
In keeping with Rawlings's commitment to populism as a political principle, the PNDC began to form governing coalitions and institutions that would incorporate the populace at large into the machinery of the national government. Workers' Defence Committees (WDCs), People's Defence Committees (PDCs), Citizens' Vetting Committees (CVCs), Regional Defence Committees (RDCs), and National Defence Committees (NDCs) were all created to ensure that those at the bottom of society were given the opportunity to participate in the decision-making process. These committees were to be involved in community projects and community decisions, and individual members were expected to expose corruption and "anti-social activities". Public tribunals, which were established outside the normal legal system, were also created to try those accused of antigovernment acts. And a four-week workshop aimed at making these cadres morally and intellectually prepared for their part in the revolution was completed at the University of Ghana, Legon, in July and August 1983. Various opposition groups criticized the PDCs and WDCs, however. The aggressiveness of certain WDCs, it was argued, interfered with management's ability to make the bold decisions needed for the recovery of the national economy. In response to such criticisms, the PNDC announced on 1 December 1984, the dissolution of all PDCs, WDCs, and NDCs, and their replacement with Committees for the Defence of the Revolution (CDRs). With regard to public boards and statutory corporations, excluding banks and financial institutions, Joint Consultative Committees (JCCs) that acted as advisory bodies to managing directors were created. The public tribunals, however, despite their characterization as undemocratic by the GBA, were maintained. Although the tribunals had been established in 1982, the law providing for the creation of a national public tribunal to hear and determine appeals from, and decisions of, regional public tribunals was not passed until August 1984. Section 3 and Section 10 of the PNDC Establishment Proclamation limited public tribunals to cases of a political and an economic nature. The limitations placed on public tribunals by the government in 1984 may have been an attempt by the administration to redress certain weaknesses. The tribunals, however, were not abolished; rather, they were defended as "fundamental to a good legal system" that needed to be maintained in response to "growing legal consciousness on the part of the people." At the time when the foundations of these sociopolitical institutions were being laid, the PNDC was also engaged in a debate about how to finance the reconstruction of the national economy. The country had indeed suffered from what some described as the excessive and unwise, if not foolish, expenditures of the Nkrumah regime. The degree of decline under the NRC and the SMC had also been devastating. By December 1981, when the PNDC came to power, the inflation rate topped 200 percent, while real GDP had declined by 3 percent per annum for seven years. Not only cocoa production but even diamonds and timber exports had dropped dramatically. Gold production had also fallen to half its preindependence level. Ghana's sorry economic condition, according to the PNDC, had resulted in part from the absence of good political leadership. 
In fact, as early as the AFRC administration in 1979, Rawlings and his associates had accused three former military leaders (generals Afrifa, Acheampong, and Akuffo) of corruption and greed and of thereby contributing to the national crisis, and had executed them on the basis of this accusation. In other words, the AFRC in 1979 attributed the national crisis to internal, primarily political, causes. The overthrow of the Limann administration by the PNDC in 1981 was an attempt to prevent another inept administration from aggravating an already bad economic situation. By implication, the way to resolve some of the problems was to stabilize the political situation and to improve the economic conditions of the nation radically.

At the end of its first year in power, the PNDC announced a four-year programme of economic austerity and sacrifice that was to be the first phase of an Economic Recovery Programme (ERP). If the economy were to improve significantly, there was a need for a large injection of capital—a resource that could only be obtained from international financial institutions of the West. There were those on the PNDC's ideological left, however, who rejected consultation with such agencies because these institutions were blamed in part for the nation's predicament. Precisely because some members of the government also held such views, the PNDC secretary for finance and economic planning, Kwesi Botchwey, felt the need to justify World Bank assistance to Ghana in 1983: It would be naive and unrealistic for certain sections of the Ghanaian society to think that the request for economic assistance from the World Bank and its affiliates means a sell-out of the aims and objectives of the Ghanaian revolution to the international community.... It does not make sense for the country to become a member of the bank and the IMF and continue to pay its dues only to decline to utilize the resources of these two institutions.

The PNDC recognized that it could not depend on friendly nations such as Libya to address the economic problems of Ghana. The magnitude of the crisis—made worse by widespread bush fires that devastated crop production in 1983–1984 and by the return of more than one million Ghanaians who had been expelled from Nigeria in 1983, which had intensified the unemployment situation—called for monetary assistance from institutions with far deeper pockets.

Phase One of the ERP began in 1983. Its goal was economic stability. In broad terms, the government wanted to reduce inflation and to create confidence in the nation's ability to recover. By 1987 progress was clearly evident. The rate of inflation had dropped to 20 percent, and between 1983 and 1987, Ghana's economy reportedly grew at 6 percent per year. Official assistance from donor countries to Ghana's recovery programme averaged US$430 million in 1987, more than double that of the preceding years. The PNDC administration also made a remarkable payment of more than US$500 million in loan arrears dating to before 1966. In recognition of these achievements, international agencies had pledged more than US$575 million to the country's future programmes by May 1987. With these accomplishments in place, the PNDC inaugurated Phase Two of the ERP, which envisioned privatization of state-owned assets, currency devaluation, and increased savings and investment, and which was to continue until 1990. 
Notwithstanding the successes of Phase One of the ERP, many problems remained, and both friends and foes of the PNDC were quick to point them out. One commentator noted the high rate of Ghanaian unemployment as a result of the belt-tightening policies of the PNDC. In the absence of employment or redeployment policies to redress such problems, he wrote, the effects of the austerity programmes might create circumstances that could derail the PNDC recovery agenda. Unemployment was only one aspect of the political problems facing the PNDC government; another was the size and breadth of the PNDC's political base. The PNDC initially espoused a populist programme that appealed to a wide variety of rural and urban constituents. Even so, the PNDC was the object of significant criticism from various groups that in one way or another called for a return to constitutional government. Much of this criticism came from student organizations, the GBA, and opposition groups in self-imposed exile, who questioned the legitimacy of the military government and its declared intention of returning the country to constitutional rule. So vocal was the outcry against the PNDC that it appeared on the surface as if the PNDC enjoyed little support among those groups who had historically moulded and influenced Ghanaian public opinion. At a time when difficult policies were being implemented, the PNDC could ill afford the continued alienation and opposition of such prominent critics. By the mid-1980s, therefore, it had become essential that the PNDC demonstrate that it was actively considering steps towards constitutionalism and civilian rule. This was true notwithstanding the recognition of Rawlings as an honest leader and the perception that the situation he was trying to redress was not of his creation. To move in the desired direction, the PNDC needed to weaken the influence and credibility of all antagonistic groups while it created the necessary political structures that would bring more and more Ghanaians into the process of national reconstruction. The PNDC's solution to its dilemma was the proposal for district assemblies. Although the National Commission for Democracy (NCD) had existed as an agency of the PNDC since 1982, it was not until September 1984 that Justice Daniel F. Annan, himself a member of the ruling council, was appointed chairman. The official inauguration of the NCD in January 1985 signaled PNDC determination to move the nation in a new political direction. According to its mandate, the NCD was to devise a viable democratic system, utilizing public discussions. Annan explained the necessity for the commission's work by arguing that the political party system of the past lost track of the country's socio-economic development processes. There was the need, therefore, to search for a new political order that would be functionally democratic. Constitutional rules of the past were not acceptable to the new revolutionary spirit, Annan continued, which saw the old political order as using the ballot box "merely to ensure that politicians got elected into power, after which communication between the electorate and their elected representative completely broke down." After two years of deliberations and public hearings, the NCD recommended the formation of district assemblies as local governing institutions that would offer opportunities to the ordinary person to become involved in the political process. The PNDC scheduled elections of the proposed assemblies for the last quarter of 1988. 
If, as Rawlings said, the PNDC revolution was a "holy war," then the proposed assemblies were part of a PNDC policy intended to annihilate enemy forces or, at least, to reduce them to impotence. The strategy was to deny the opposition a legitimate political forum within which it could articulate its objections to the government. It was for this reason, as much as it was for those stated by Annan, that a five-member District Assembly Committee was created in each of the nation's 110 administrative districts and was charged by the NCD with ensuring that all candidates followed electoral rules. The district committees were to disqualify automatically any candidate who had a record of criminal activity, insanity, or imprisonment involving fraud or electoral offenses in the past, especially after 1979. Also barred from elections were all professionals accused of fraud, dishonesty, and malpractice. The ban on political parties, instituted at the time of the Rawlings coup, was to continue. By barring candidates associated with corruption and mismanagement of national resources from running for district assembly positions, the PNDC hoped to establish new values to govern political behaviour in Ghana. To do so effectively, the government also made it illegal for candidates to mount campaign platforms other than the one defined by the NCD. Every person qualified to vote in the district could propose candidates or be nominated as a candidate. Candidates could not be nominated by organizations and associations but had to run for district office on the basis of personal qualifications and service to their communities. Once in session, an assembly was to become the highest political authority in each district. Assembly members were to be responsible for deliberation, evaluation, coordination, and implementation of programmes accepted as appropriate for the district's economic development; however, district assemblies were to be subject to the general guidance and direction of the central government. To ensure that district developments were in line with national policies, one-third of assembly members were to be traditional authorities (chiefs) or their representatives; these members were to be approved by the PNDC in consultation with the traditional authorities and other "productive economic groups in the district." In other words, a degree of autonomy may have been granted to the assemblies in the determination of programmes most suited to the districts, but the PNDC left itself with the ultimate responsibility of making sure that such programmes were in line with the national economic recovery programme. District assemblies as outlined in PNDC documents were widely discussed by friends and foes of the government. Some hailed the proposal as compatible with the goal of granting the people opportunities to manage their own affairs, but others (especially those of the political right) accused the government of masking its intention to remain in power. If the government's desire for democracy were genuine, a timetable for national elections should have been its priority rather than the preoccupation with local government, they argued. Some questioned the wisdom of incorporating traditional chiefs and the degree to which these traditional leaders would be committed to the district assembly idea, while others attacked the election guidelines as undemocratic and, therefore, as contributing to a culture of silence in Ghana. 
To such critics, the district assemblies were nothing but a move by the PNDC to consolidate its position. Rawlings, however, responded to such criticism by restating the PNDC strategy and the rationale behind it: Steps towards more formal political participation are being taken through the district-level elections that we will be holding throughout the country as part of our decentralisation policy. As I said in my nationwide broadcast on December 31, if we are to see a sturdy tree of democracy grow, we need to learn from the past and nurture very carefully and deliberately political institutions that will become the pillars upon which the people's power will be erected. A new sense of responsibility must be created in each workplace, each village, each district; we already see elements of this in the work of the CDRs, the December 31 Women's Movement, the June 4 Movement, Town and Village Development Committees, and other organizations through which the voice of the people is being heard. As for the categorization of certain PNDC policies as "leftist" and "rightist," Rawlings dismissed such allegations as "remarkably simplistic … What is certain is that we are moving forward!" For the PNDC, therefore, the district elections constituted an obvious first step in a political process that was to culminate at the national level. Rawlings's explanation notwithstanding, various opposition groups continued to describe the PNDC-proposed district assemblies as a mere public relations ploy designed to give political legitimacy to a government that had come to power by unconstitutional means. Longtime observers of the Ghanaian political scene, however, identified two major issues at stake in the conflict between the government and its critics: the means by which political stability was to be achieved, and the problem of attaining sustained economic growth. Both had preoccupied the country since the era of Nkrumah. The economic recovery programmes implemented by the PNDC in 1983 and the proposal for district assemblies in 1987 were major elements in the government's strategy to address these fundamental and persistent problems. Both were very much part of the national debate in Ghana in the late 1980s. End of one-party state Under international and domestic pressure for a return to democracy, the PNDC allowed the establishment of a 258-member Consultative Assembly made up of members representing geographic districts as well as established civic or business organizations. The assembly was charged to draw up a draft constitution to establish a fourth republic, using PNDC proposals. The PNDC accepted the final product without revision, and it was put to a national referendum on 28 April 1992, in which it received 92% approval. On 18 May 1992, the ban on party politics was lifted in preparation for multi-party elections. The PNDC and its supporters formed a new party, the National Democratic Congress (NDC), to contest the elections. Presidential elections were held on 3 November and parliamentary elections on 29 December that year. Members of the opposition boycotted the parliamentary elections, however, which resulted in a 200-seat Parliament with only 17 opposition party members and two independents. The Fourth Republic (1993–present) The Constitution entered into force on 7 January 1993, to found the Fourth Republic. On that day, Rawlings was inaugurated as President and members of Parliament swore their oaths of office. 
In 1996, the opposition fully contested the presidential and parliamentary elections, which were described as peaceful, free, and transparent by domestic and international observers. Rawlings was re-elected with 57% of the popular vote. In addition, Rawlings' NDC party won 133 of the Parliament's 200 seats, just one seat short of the two-thirds majority needed to amend the Constitution, although the election returns of two parliamentary seats faced legal challenges.

In the presidential election of 2000, Jerry Rawlings endorsed his vice president, John Atta Mills, as the candidate for the ruling NDC. John Kufuor stood for the New Patriotic Party (NPP), won the election, and became president on 7 January 2001, with Aliu Mahama as vice president. The presidential election of 2000 was viewed as free and fair. Kufuor won a second term in the presidential election of 2004. Kufuor's presidency saw several social reforms, such as the introduction of the National Health Insurance Scheme in 2003. In 2005, the Ghana School Feeding Programme was launched, providing a free hot meal per day in public schools and kindergartens in the poorest areas. Although some projects were criticised as unfinished or unfunded, Ghana's progress was noted internationally.

Kufuor stepped down at the end of his second term, following the 2008 elections. The ruling New Patriotic Party chose Nana Akufo-Addo, son of Edward Akufo-Addo, as its candidate, while the National Democratic Congress's John Atta Mills stood for the third time. After a run-off, John Atta Mills won the election. On 24 July 2012, President John Atta Mills died in office. Power passed to his vice-president, John Dramani Mahama, who chose the then Governor of the Bank of Ghana, Kwesi Amissah-Arthur, as his vice-president. The National Democratic Congress won the 2012 election, and Mahama was elected to his first full term.

Religion

Portuguese Catholic missionaries arrived on the coast in the fifteenth century. It was the Basel/Presbyterian and Wesleyan/Methodist missionaries, however, who, in the nineteenth century, laid the foundation for the Christian church in Ghana. Beginning their conversions in the coastal area, these missionaries established schools as "nurseries of the church" in which an educated African class was trained. Many secondary schools today, especially single-sex boys' and girls' schools, remain mission- or church-related institutions. Church schools have been opened to all since the state assumed financial responsibility for formal instruction under the Education Act of 1960.

Various Christian denominations are represented in Ghana, including the Evangelical Presbyterian Church, the Roman Catholic Church, and The Church of Jesus Christ of Latter-day Saints (Mormons). The unifying organization for most Christians is the Ghana Christian Council, founded in 1929. Representing the Methodist, Anglican, Mennonite, Presbyterian, Evangelical Presbyterian, African Methodist Episcopal Zionist, Christian Methodist, Evangelical Lutheran, and Baptist churches, and the Society of Friends, the council serves as the link with the World Council of Churches and other ecumenical bodies. The Seventh-day Adventist Church, which is not a member of the Christian Council, has a strong presence in Ghana and opened the country's premier private Christian university.

Islam in Ghana is based in the north, brought in by the commercial activities of Arab Muslims. Islam made its entry into the northern territories of modern Ghana around the fifteenth century. 
Berber traders and clerics carried the religion into the area. Most Muslims in Ghana are Sunni, following the Maliki school of jurisprudence. Traditional religions in Ghana have retained their influence because of their intimate relation to family loyalties and local mores. Traditional cosmology expresses belief in a supreme being, referred to as Nyogmo (Ga), Mawu (Dangme and Ewe), or Nyame (Twi); the supreme being is usually thought of as remote from daily religious life and is, therefore, not directly worshipped.

See also
- Heads of government of Ghana
- List of Ghana governments
- List of heads of state of Ghana
- Politics of Ghana
- Accra history and timeline
- Trade & Pilgrimage Routes of Ghana
- Archaeology of Banda District (Ghana)
Note from author: I would like to acknowledge the assistance of Ed Chaney, Deputy Director of the MAC Lab, and Dr. Julia A. King, St. Mary's College of Maryland, in the preparation of this blog. Any errors are my own.

Figure 1. Tulip-shaped tobacco pipe from the Pine Bluff site. Tobacco had social and spiritual significance for native peoples, and in some cultures stone pipes were used in treaty ceremonies.

This week's Maryland artifact is a tobacco pipe recovered in the 1970s during an excavation at the Pine Bluff site (18WC20) near modern-day Salisbury in Wicomico County. The pipe, made from fired clay, is in a shape associated with the Susquehannock Indians and often described as a "tulip" pipe. Other materials found during the excavation, including gun parts, glass pharmaceutical bottle fragments and English ceramics, suggest that some components of this possible village site post-dated English contact (Marshall 1977). By the time of English colonization, the Eastern Shore had been home to Maryland's native peoples for at least 13,000 years (Rountree and Davidson 1997:20). Archaeological surveys have revealed evidence of short-term camps, villages and places where resources were procured and processed. The abundant natural resources of the Eastern Shore—fish, shellfish, wild game and wild plants—made this area a favorable place to live.

Returning home by air from a recent trip to Michigan, I was once again struck by the abundant waterways that bisect our little state. The Susquehanna, Potomac, Choptank, Patapsco and Patuxent are the major state rivers that empty into the Chesapeake Bay, the largest estuary in the United States. Overall, between Virginia and Maryland, more than 100,000 streams, creeks and rivers wind through the Chesapeake Bay watershed (Chesapeake Bay Program 2014). These waterways are the source of the fish and shellfish that have made the words "Maryland" and "seafood" all but synonymous. The thought of Maryland's fishing industry is likely to bring up images of commercial vessels with trawl nets or sports fishermen hauling in citation-weight rockfish from the back of a charter boat. But this week's artifact, a diminutive carved bone fish hook from the Everhart Rockshelter (18FR4) in Frederick County, reminds us that fishing has long been an important part of Maryland's past (Figure 1). This rockshelter, which was excavated by Spencer Geasey in the early 1950s (Geasey 1993), was occupied for thousands of years, all through the Archaic (7500 B.C. – 1000 B.C.) and Woodland periods (1000 B.C. – A.D. 1600). One of the rockshelter residents must have used this fish hook to catch dinner from nearby Catoctin Creek.

The SS Columbus paddle wheel underwent conservation treatment in Louisiana and arrived at the MAC Lab for curation when the lab opened in 1998. By far the largest artifact in the MAC Lab collections, weighing in at a whopping 15,000 pounds (give or take), is the paddle wheel shaft from the SS Columbus (International Artifact Conservation 1998). Built in Baltimore and launched in 1828, the Columbus plied the waters of the Chesapeake Bay, transporting cargo and passengers between Baltimore and Norfolk (Holly 1994). On November 28, 1850, a fire broke out onboard the steamship, resulting in nine fatalities and the sinking of the vessel near Smith Point, Virginia. Although the location of the wreck had been known since the 1970s, a decision was made to bring up the 22 ft. long paddle wheel shaft, as well as a number of other pieces of the vessel, after the Army Corps of Engineers dredged adjacent to the shipwreck in 1990 in order to deepen the shipping channel (Irion and Beard 1995).

In this photograph, some of the petroglyphs can be clearly seen outlined in white (probably chalk). Among the more enigmatic artifacts curated at the Maryland Archaeological Conservation Lab are fragments of prehistoric rock art. Archaeological evidence of art dates back tens of thousands of years and has been an endless source of fascination for scholars, as well as the general public. The carved Venus of Willendorf figures, the painted bison at Lascaux, Chinese bronzeworks and other early artistic endeavors captivate and excite the human imagination. The recent discovery of 40,800-year-old stenciled hands and painted dots in a Spanish cave is evidence that Neandertals may have been the first cave painters (Than 2012); it is almost certainly only a matter of time before future discoveries push the limits of early art even farther into the past.
Energy usage is arguably one of the most important factors in running any PC. At the same time, however, it's also one of the most overlooked. Without sufficient power, components won't run; without stable power, components get damaged. While manufacturers work ever harder on achieving record new efficiencies in low-end computing, enthusiasts spare no joule in trying to push their equipment to the limits. In either case, consumers rarely take the cost of the energy itself into account.

The desktop isn't the only thing on your desk that guzzles electricity. Monitors can draw a significant amount of power, especially when you consider the trend toward ever-increasing display sizes. Many manufacturers now ship 30 inch displays, and some users employ even larger HDTV units. With gaming and internet video becoming more popular, a good set of speakers is now an integral part of the multimedia experience. Add in a second monitor, external hard drive, external optical drive, printer, desk lamp and other accessories, and you'll find that the average home office suddenly balloons into an energy expenditure of significant proportions.

Sometimes it isn't enough to merely turn electronics off. With the introduction of "standby modes", manufacturers have offered quicker startup times at the cost of electrical efficiency. Power adapters left plugged in while a device is off or not charging will often continue to draw current and let it dissipate uselessly as waste heat. This "vampire power" is estimated to add up to ten percent of the total electrical costs of a home. The International Energy Agency published a free PDF on this phenomenon some years back, cleverly titled "Things that Go Blip in the Night."

The Kill A Watt

The question now becomes, "How do you figure out how much electricity your devices use?" Enter the Kill A Watt (KAW). The KAW is used to measure how much electricity is consumed by anything that plugs into a standard outlet. The KAW is plugged into an outlet, and the gadget is plugged into an outlet on the KAW. The device then measures how much current is being drawn by the gadget. What's interesting about the Kill A Watt is that it can give an answer in terms of kilowatt-hours (kWh), the non-SI unit used by utility companies. This makes it easy to quickly calculate how much an item costs to leave running for an hour, day, or even a year.

As a brief example, let's say that you have four 75W light bulbs lighting up a kitchen. 75 W × 1 hour × 4 bulbs = 300 watt-hours, or 0.3 kWh. Assuming they're left on for 3 hours a day, those four bulbs will consume 0.9 kWh of electricity per day. The national average rate for electricity in the United States is 10.4 cents per kWh. This would mean those four light bulbs cost a little over nine cents per day to run, or just less than three dollars each month. When you consider the fact that many people leave one or more computers running twenty-four hours a day, it's easy to see how costs can add up.

It's easy to ignore how much electricity is consumed by a device when it's being used. You watch some television, heat up a pizza in the microwave, charge your cell phone and turn up the air conditioner to make things a little cooler. When the bill comes at the end of the month, however, it's always a surprise. People wonder how they could ever have possibly used that much energy (the U.S. Department of Energy has an easy-to-read graphic that may illuminate some of the confusion here). 
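That arithmetic is easy to make repeatable. Below is a minimal sketch of the same watts-to-kWh-to-dollars conversion; the function name and the 10.4-cents default are simply the figures from the example above, not part of any real tool or API.

```python
def monthly_cost(watts, hours_per_day, cents_per_kwh=10.4, days=30):
    """Return (kWh, dollars) for a device drawing `watts` for `hours_per_day` hours a day."""
    kwh = watts * hours_per_day * days / 1000.0   # 1,000 watt-hours = 1 kWh
    dollars = kwh * cents_per_kwh / 100.0
    return kwh, dollars

# The kitchen example above: four 75 W bulbs, three hours a day, 10.4 cents/kWh.
kwh, dollars = monthly_cost(watts=4 * 75, hours_per_day=3)
print(f"{kwh:.1f} kWh per month, about ${dollars:.2f}")
```

Running it reproduces the kitchen-light example: roughly 27 kWh and a little under three dollars a month.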
This is why the Kill A Watt is so useful – it forces you to look at exactly how much electricity a machine uses, and from there, how much it costs you every month. It's harder to leave something turned on, whether it's the A/C or PC, when you have a concrete understanding of the associated costs.

To get an idea of the energy costs associated with various computer systems, readings were gathered from several machines. The systems fall along different points of the performance spectrum. This is not intended to be a comprehensive or even highly comparative analysis of computers, merely an example to show real-world use. Keep in mind that a computer's power supply rating and how much power it actually uses are two completely different things. The power supply in the Lenovo S10 is rated for 650W, but it doesn't come close to using that whole amount. For the following systems, power use was checked both at idle and when playing a game. All measurements are in watts.

There are obvious disparities in these numbers, but again, the systems are built from wildly different components. While the Lenovo S10 gives the user awe-inspiring amounts of processing power, it also draws its share of current, at almost twice the wattage of the PS3. Even when idling, the S10 draws over 160W, whereas the Xbox 360 uses a good bit less. If power consumption is a serious concern, and you're satisfied with the content available on consoles, you might think twice about investing in a heavy-hitting desktop system.

More and more these days, the computer is becoming the place to go to experience digital content. High definition media is playing a significant role in this growth. Home Theater PC (HTPC) adoption is also on the rise, and these computers are particularly well-suited for mention in this article, as many of these systems are left on 24 hours a day to provide quick access to media. The custom computer in this case has an Intel Core 2 Duo 1.86GHz processor, 3GB RAM, a 7200RPM hard drive and an NVIDIA 7600GT graphics card. wPrime, with its ability to use multiple processor cores, is an excellent way to force 100 percent use of a processor (thus simulating heavy activities and multitasking); a rough sketch of this kind of all-core load generator appears below.

Unsurprisingly, the Lenovo S10 uses a good deal more power than the other systems. Given the hardware in the machine, though, it's expected. The real surprise is that when processing 1080p content, it used less than an eleven percent electricity premium. This is almost twice the power used by the Gateway GT5670 – which struggled with some high definition video. Despite this 2:1 ratio, you may consider using a high-powered system (if you already have one) to pipe HD content around because of its multitasking capabilities. Instead of running a secondary system to function as a media controller, and then booting up a high-powered rig anyway, it could be worthwhile to just use one machine. The S10 would have the ability to run high definition media and still blow through all but the most demanding tasks.

As time goes on, technology advances and computers evolve. Few components, however, have seen such striking changes in both appearance and function as the monitor. From bulky CRTs that took up the majority of a desk to sleek LCDs with razor-sharp text, computer displays have experienced tremendous change, and their power requirements have changed along with them. LCDs are often touted as being much more energy-friendly than old CRTs, and to a certain extent, this is true. 
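The load readings above were taken while wPrime kept every core busy. For anyone who wants to reproduce that kind of sustained, all-core draw while watching a power meter, here is a rough, hypothetical stand-in. It is not wPrime itself, just a busy loop pinned to each core.

```python
import multiprocessing as mp
import time

def burn(seconds):
    """Busy-loop on one core for `seconds` so a watt meter can be read under load."""
    end = time.time() + seconds
    x = 0.0001
    while time.time() < end:
        x = (x * x + 1.0) % 97.0   # arbitrary arithmetic; the point is simply to stay busy
    return x

if __name__ == "__main__":
    cores = mp.cpu_count()
    with mp.Pool(cores) as pool:       # one worker per core, similar to wPrime's threads
        pool.map(burn, [60] * cores)   # hold roughly 100% CPU for about a minute
```

Take the meter reading once the draw stabilizes and compare it against the idle figure; the gap is what the computation itself costs you.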
More and more often, though, consumers are buying brighter and bigger monitors. Some even use large HDTVs as primary displays for their computer. LCDs might use less power, but how many times have you seen a 37" CRT? These big-screen devices can often use even more power than the computer itself. You can find out more about how much energy your monitor consumes (along with many other appliances and electronics) at the Energy Star website.

| Display | Off/Standby | Minimum Brightness | Maximum Brightness |
| Sony 40″ KDL-40V3000 | 0 | 71 | 225 |

The Lenovo L220x used less power than either of the other displays, as expected. What was surprising was that the power used by the G520 (21" CRT) had little variation, whether it was set at the lowest brightness setting or the highest. The LCD advantage with regard to power consumption dies when the HDTV is used. Again, it's not very surprising, but it's something to keep an eye on if you commonly hook your computer up to such a large display.

They clutter the desk, they tangle the cords, and they also drink up electricity. It's easy to forget that in addition to the computer and monitor, you might have an external hard drive for backing up files, an external optical drive for disc media, a printer, desk lamp, speakers … you get the idea. Most of the time, these devices carry their own power supplies and generate extra waste heat. They also get left plugged in even when you turn the computer off. All this idling adds up over time:

| Device | Off (W) | Idle (W) | Maximum (W) |
| Seagate 7200RPM hard drive | 0 | 8-9 | 10-13 |
| LaCie External DVD+/-RW | 30 | 6-10 | 12 reading, 21 burning |
| Samsung ML-2510 laser printer | 0 | 5-500 | 800 |
| Dell All-in-One 966 inkjet printer | 0 | 10 | 17 |
| Fluorescent desk lamp (13W) | 0 | — | 11 |
| Incandescent desk lamp (50W/100W/150W) | 0 | — | 47/100/148 |

One thing that's apparent from looking at these numbers is just how efficient fluorescent lighting is compared with traditional incandescent. The 13W bulb drew 11W and put out more light than the 50W bulb using 47 watts. What is almost astonishing, however, is the power draw of the laser printer. Even when idling, the power would occasionally spike into the 500W range, and it peaked at 825 or so. Given the respective printer technologies, the laser printer was certainly expected to consume more power than the inkjet, but an order of magnitude is an even larger difference than expected. In a heavy-use setting, such as a small business or home office, a simple laser printer can use more power than most other equipment. At its max, the inkjet printer only used 17W. Even though laser printers will print at a significantly higher speed than an inkjet, it looks like they'll still use more power by far.

As technology progresses, we find ourselves hopelessly entwined with device after device, and all of these appliances need power. We have iPods, cell phones, portable gaming systems. We have desktops in multiple bedrooms, in the living room, beneath the TV and sometimes even in the kitchen. Being aware of where we use electricity helps us to be aware of where we need to turn it off.

Power meters such as the Kill A Watt are useful tools. They quantify something nebulous – energy consumption – by putting it into concrete, easily understood terms. Having a better understanding of the power cost your daily habits incur makes it easy to find and trim areas where you might be carrying a little extra electric baggage. This is important knowledge when you might be paying upwards of 14¢ per kilowatt-hour.
The biggest and best tip that can be given is to TURN OFF YOUR COMPUTER. It's easy to shrug off the idea that turning off your unused desktop(s) every night can save power, but look at the facts: leaving a powerful computer like the Lenovo S10 on for a solid month, just idling, will cost an average of twelve dollars. It gets worse. Consider a family with children. They might have a computer in the family room, and two more in various bedrooms. Even if each used as little power as the Gateway GT5670, those three computers cost $17 a month just to sit there unused. This doesn't even take displays into account, many of which are left on with screen savers constantly redrawing. If it's hard to get into the habit of doing this, set the power options in your OS to shut the machine down for you after long periods of inactivity. If it's absolutely imperative to leave the computer on, consider cutting down on the time it takes for the desktop to go to sleep. Waking from sleep can take only a few seconds compared to a cold boot, and the energy savings over typical idle states can be more than ninety percent!

Here are a few more tips on conserving power around your computer:
- Consolidate your machines. There's little to be gained by having multiple systems each performing a single task. If one machine is capable of handling more than one job, let it.
- Turn off your displays when they're not being used. Screen savers are pretty, but all they do these days is waste electricity.
- Speaking of displays, a smaller monitor will tend to use noticeably less electricity than a bigger HDTV, even if the resolutions are the same.
- Just because something says it's off, don't believe it. Unplug unused and fully charged devices; the power adapters may still draw current. To make this easier, plug multiple chargers and adapters into a power strip. The main switch can be flipped to easily cut all power to the gathered devices. No more vampire power, no more electric waste.
- Replace your desk lamp with a more energy-efficient model. Fluorescents, and more recently LEDs, have come a long way. Swapping your old incandescent bulb for a new fluorescent one when it dies can save over 50W.
- A good set of headphones will draw less power and often sound better than the speakers that are sold with your computer.
- Replace some of your desktop's components with low-power or "green" counterparts, such as Western Digital's 2TB Caviar Green drive. Lots of space at half the energy cost.

While this guide is far from comprehensive, hopefully it'll help you find areas in your life where you can scale back and save a little money. Try calculating how much the energy your computers use costs. You might end up surprised. Have any more eco-friendly tips to share? Leave us comments below!
LED driver ICs

An LED driver is an electronic circuit which serves as an energy source for LEDs, converting AC (grid) voltage to DC while regulating the driving current for the LEDs. To drive modern luminaires with more and more added value (dimming, emergency units, presence sensors, remote control, etc.), more complex electronic circuitry is required. In some LED applications the term LED driver also covers part of the power management, such as the voltage conversion circuit used between the input voltage and the required output voltage. For this particular case a selection can be made according to the type of output, giving three basic groups of drivers (regulators): Constant Current Regulators (CCR), Constant Voltage Regulators (CVR) and a combination of both (CCR+CVR).

Linear regulators, also called constant current regulators (CCR), drive a controlled, constant current through the LEDs even if the supply voltage varies. However, linear regulators introduce efficiency and thermal drawbacks. They have the advantage of simplicity, low part count and very little electromagnetic interference (EMI). At currents of 350 mA and above, a linear solution may require a heatsink, adding cost and size to the design.

Grouped by output type:
- Constant Current (CC) – the LEDs are connected in series and the driver delivers a precise current value.
- Constant Voltage (CV) – the LEDs are connected in parallel, which is ideal for decorative LED strips; this topology is not recommended for dimming.
- Special drivers (CC+CV) – a somewhat more expensive solution that allows both series and parallel connections.

There are two ways to control an LED light source:
- Forward current control – since an LED emits light depending on the magnitude of its forward current, the easiest way to control the intensity of an LED light source is to change the bias current. The change in luminous flux depends nearly linearly on the change in forward current, so the control algorithm is very easy to implement.
- Pulse-width modulation (PWM) – the LED is biased with a constant nominal current which is periodically switched on and off. The ratio between the on-state and the off-state defines the resulting intensity of the LED. The switching frequency is high enough that the human eye perceives the light emitted by the LED as a continuous luminous flux whose intensity depends on the PWM duty cycle.

LED power management

While incandescent bulbs need a constant voltage to emit a constant amount of light, LED lights require a specific driving circuit. DC/DC converters, AC/DC converters and other switched-mode power supplies enable LEDs to be driven in a more efficient way, as their power losses are minimal. These driving circuits are more complex and require a higher number of external components. They operate at switching frequencies from the low 100 kHz range up to more than 1 MHz. Sometimes built-in diagnostic features allow for precise monitoring of the devices.

Depending on the application, different kinds of switched power supplies are used. For example, if the input voltage always exceeds the sum of the maximum forward voltages of every LED string, then buck converters are the right solution. This is the case when driving one LED from a 12 V supply. When the minimum forward voltage of all LEDs in a string exceeds the maximum input voltage, a step-up or boost regulator is needed. Driving five LEDs from a 5 V source would be a typical example. A small code sketch of this selection rule follows below.
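The buck/boost rule of thumb just stated (plus the overlapping-range case covered in the next paragraph) can be written down as a tiny decision helper. This is only an illustrative sketch: the per-LED forward-voltage range is an assumed placeholder rather than a datasheet value, and a real design would also weigh efficiency, current ripple and the dimming method.

```python
# Illustrative topology chooser based on the rules above:
#   buck       - the minimum input voltage always exceeds the string's maximum forward voltage
#   boost      - the string's minimum forward voltage exceeds the maximum input voltage
#   buck-boost - the two ranges overlap (the case described in the next paragraph)
# Per-LED forward-voltage numbers below are placeholders, not datasheet values.

def choose_topology(vin_min: float, vin_max: float,
                    vf_min_per_led: float, vf_max_per_led: float,
                    n_leds: int) -> str:
    string_vf_min = n_leds * vf_min_per_led
    string_vf_max = n_leds * vf_max_per_led
    if vin_min > string_vf_max:
        return "buck (step-down)"
    if string_vf_min > vin_max:
        return "boost (step-up)"
    return "buck-boost (input and LED voltage ranges overlap)"

# One LED (assumed 3.0-3.6 V) from a regulated 12 V rail -> buck
print(choose_topology(11.5, 12.5, 3.0, 3.6, n_leds=1))
# Five LEDs in series from a 5 V source -> boost
print(choose_topology(4.75, 5.25, 3.0, 3.6, n_leds=5))
```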
The inductive boost converter is the simplest regulator that can drive currents above 350 mA with a varying output voltage. As with linear and buck regulators, a boost converter with a feedback-divider network can be modified to become a constant current source. When the input voltage range overlaps the LED voltage range, a current regulator is needed that can both buck and boost, as required by the input and output conditions. As HB LEDs are adopted into more and more applications, situations will arise where the input voltage varies above and below the forward voltage of the LED string. This typically happens when four LEDs are driven from a 12 V car battery. Your EBV LightSpeed team will consult with you and provide advice about the most suitable driver circuit for your individual application, drawing on their supplier partner companies. All EBV LED driver manufacturers have created detailed brochures with their product offerings, including useful application notes.

LED control and sensing

The control unit brings flexibility to your lighting's desired feature set, in order to create intelligent lighting with highly integrated control of the amount of light and delivery of that light exactly where it is needed. For that purpose you will need digitally controlled lighting control units (based on MCUs) using DALI and DMX512, the most commonly used standards for the digital communication networks that control stage lighting and effects. With digital control, designers can scale and easily adjust designs to multiple applications, maximizing reuse and decreasing design time. A digital approach also allows many hardware features, such as soft startup, delay and PWM phase shifting, to be implemented in software, eliminating extra components, cost and complexity.

DALI is an international standard (IEC 62386) for the control of electronic ballasts, transformers, LEDs, emergency lights and exit signs in an easy-to-manage digital lighting control system. DMX512 employs EIA-485 differential signaling at its physical layer, in conjunction with a variable-size, packet-based communication protocol. It is unidirectional. DMX512 does not include automatic error checking and correction, and so is not an appropriate control for hazardous applications (a minimal sketch of a DMX512-style frame follows at the end of this section).

LED module communication

EBV provides the latest wired and wireless communication paths to LED module applications. Under LED communication and control paths we understand either wired or wireless lines; DALI and DMX512 are the most commonly used. EBV offers a variety of power line modems dedicated to data transmission on low- or medium-voltage power lines. These devices offer complete handling of the protocol layers from the physical layer up to the MAC, using different kinds of modulation based either on S-FSK or OFDM. In the LPRF wireless product portfolio we can offer ZigBee, RF4CE, Wireless M-Bus and other solutions. The half-duplex S-FSK modem is dedicated to data transmission on low- or medium-voltage power lines. The device offers complete handling of the protocol layers from the physical layer up to the MAC, and it complies with the EN 50065 CENELEC, IEC 1334-4-32 and IEC 1334-5-1 standards.
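To make the DMX512 description a bit more concrete, here is a minimal sketch of the data portion of a frame: a NULL start code followed by up to 512 one-byte channel levels. It deliberately ignores the RS-485 electrical layer, break/mark timing and the continuous refresh of real DMX, and the channel assignment in the example is purely hypothetical.

```python
# Minimal sketch of the data portion of a DMX512-style frame: a NULL start
# code followed by up to 512 one-byte channel levels. The RS-485 signaling,
# break/mark timing and continuous refresh of real DMX are not modeled here.

NULL_START_CODE = 0x00   # start code used for ordinary dimmer/level data
UNIVERSE_SIZE = 512      # maximum number of channel slots in one frame

def build_frame(levels: dict) -> bytes:
    """`levels` maps 1-based channel numbers to 0-255 output levels."""
    slots = bytearray(UNIVERSE_SIZE)               # unset channels stay at 0
    for channel, value in levels.items():
        if not 1 <= channel <= UNIVERSE_SIZE:
            raise ValueError(f"channel {channel} out of range")
        slots[channel - 1] = max(0, min(255, value))
    return bytes([NULL_START_CODE]) + bytes(slots)

# Hypothetical example: an LED dimmer listening on channel 10, set to half brightness.
frame = build_frame({10: 128})
print(len(frame), frame[0], frame[10])             # 513 bytes, start code 0, channel 10 = 128
```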
LIGHTDESK from Infineon

The Infineon Light Desk is an interactive, cloud-based design and verification environment that enables selection and configuration of LED driver ICs in a broad range of applications. Using the Infineon Light Desk, design engineers can identify the LED driver ICs that meet their specific system requirements. For the selected LED driver, the tool creates a custom design displayed in an interactive web schematic. Find out more at the following link: https://infineon.transim.com/lightdesk/pages/appfinder.aspx. With the offered redesign functionality, users can optimize the proposed LED driver design. Infineon Light Desk lets users instantly download the bill of materials for the final design as well as a comprehensive design summary report that includes design data, schematic and simulation results. Users can save designs for future reference as well as collaborate on designs with other users within a secure shared workspace. Additionally, the tool has an offline simulator, SimT, powered by SIMetrix/SIMPLIS: the simulator allows users to run their reference designs offline, which speeds up simulation. Users can download the simulator directly from the Infineon Light Desk. The Infineon Light Desk already supports custom designs for the new high-power DC/DC LED driver ICs (ILD series) as well as linear LED driver ICs (BCR3x and BCR4x series) for general lighting applications.

Power and lighting design tools from NXP

The tool generates the schematic, bill of materials and transformer parameters, calculates the design's efficiency, and presents an overview of losses. The isolated LED driver design tool focuses on creating LED drivers from 2 W up to 25 W. The applications include:
- SSL retro-fit lamps (e.g. GU10, E27)
- LED modules, separate power supplies, e.g. LED spots, down-lights
- LED strings, e.g. retail display
- LED ballasts
- Contour lighting
- Channel letter lighting
- Other lighting applications
Output power is user-defined and can be fixed, dimmed using pulse-width modulation (PWM), or designed for mains-dimmable solutions.

CompCalc from ON Semiconductor

ON Semiconductor provides the CompCalc circuit simulation and design tool for DC-DC power designs. CompCalc is an interactive schematic-based circuit simulator where each component's value can be dynamically adjusted, showing the effect on each of the key circuit characteristics, such as power stage gain and phase, compensation network gain and phase, loop gain and phase, output impedance, load transient response, ripple voltage or inductor current.
In most countries, including Switzerland, ABA is mistakenly defined as a treatment for children with autism. This is not a mistake per se; however, ABA, as the application of behavioural principles to human behaviour, covers much more than that (see also About ABA). Nevertheless, ABA is most frequently applied to the area of autism treatment, notably in early intervention programs. Dr. Ivar Lovaas' seminal article from 1987 and the subsequent book by Catherine Maurice, "Let Me Hear Your Voice: A Family's Triumph Over Autism", helped establish ABA as a prominent conceptual framework for the treatment of autism. The combination of Lovaas' pioneering work within scientific circles of experts and Maurice's book, which ripped into the hearts of so many parents of children with autism, brought ABA in the area of autism a degree of popularity that has not yet been achieved by ABA in other areas of application.

In comparison to the United States, where the DSM-5 (Diagnostic and Statistical Manual of Mental Disorders, 5th revision) is widely used, Autism Spectrum Disorders are currently diagnosed in Switzerland using the ICD-10, the 10th revision of the International Statistical Classification of Diseases and Related Health Problems, a medical classification list by the World Health Organisation (WHO). Within the pervasive developmental disorders, the most common forms of autism are: Childhood Autism (F84.0), Atypical Autism (F84.1) and Asperger's Syndrome (F84.5). A person on the autistic spectrum shows difficulties, in varying degrees, in the following areas (see ICD-10): A) reciprocal social interactions, B) patterns of communication, C) restricted, stereotyped, repetitive repertoire of interests and activities. The ICD-10 is currently under revision (ICD-11) and is expected to adopt Autism Spectrum Disorder (ASD), as the DSM-5 does.

On the internet one can find numerous interventions/treatments for ASD, not all of which are supported by research. For an overview of treatments and their scientific background, please check the Association for Science in Autism Treatment's (ASAT) webpage. Thanks to numerous studies published in peer-reviewed journals over the course of the last 50 years, treatments based on ABA are considered to have strong scientific evidence. In contrast to Switzerland, in the US ABA-based treatments are viewed as best practice and are frequently paid for by insurance companies or state organisations. The BACB (Behavior Analyst Certification Board) has released a document that describes in a comprehensive manner what ABA in the treatment of ASD looks like and what healthcare funders and managers need to know. It is to be found here.

In October 2018 the Swiss Federal Council released a report that stated the importance of better integration of people with ASD into society. The three main core areas to be targeted are: 1) early detection and diagnosis, 2) counselling and guidance, and 3) early intervention. The complete report can be downloaded here (German). Additionally, in October 2018 an interdisciplinary research group at ZHAW (Zurich University of Applied Sciences) published the results of their study that examined the effectiveness of the five early intensive intervention centers in Switzerland (not all based on ABA). This report can be downloaded here (German; summaries also in English, French and Italian). These recent national movements in the area of autism are promising, but it is important to emphasise that there are still big difficulties to be overcome regarding treatment.
One major issue is funding; the other is the recognition of ABA. The latter means that there is little interest in moving towards early intensive interventions that are behavioural or even based on ABA, and towards internationally recognised recommendations for EIBI (Early Intensive Behavioural Intervention) programs, as for example defined by the BACB (see above). Below you can find an example of what an ABA therapy session with a child with ASD might look like.

Emma is a 3-year-old girl on the autism spectrum. Her EIBI program is run with a BCBA (Board Certified Behavior Analyst) functioning as the supervisor, plus three RBTs (Registered Behavior Technicians) and her parents. Emma likes jumping and being spun around and loves the movie "Frozen" as well as all its characters. She is non-vocal and, at the beginning of treatment, had frequent tantrums when she was not able to express what she wanted. A typical morning session looks like this:

Yvonne, one of her RBTs, arrives at 8am and checks the folder with all the data that describe Emma's therapy sessions over the past few days. Based on the data and Emma's individual treatment plan, Yvonne plans the session and writes out the goals she will work on this morning. After this preparation she calls Emma and they start the session with a lot of fun and games: Yvonne tickles her and spins her around for a couple of minutes.

Because Emma is non-vocal, she was taught to use an alternative communication system to give her the opportunity to communicate in a way her family can understand and to reduce tantrums maintained by an inability to express her wants and needs. The communication system being implemented is PECS (Picture Exchange Communication System). In a folder there are several pictures of things Emma likes (e.g., favourite food, toys and activities). She has learned how to take the picture of the thing she wants and give it to a person who is able to provide it to her. This took her about 3 weeks to learn using intensive teaching, and it has reduced her tantrums significantly.

Yvonne stops her tickling and, because Emma begins to reach for Yvonne's hand indicating more tickling, Yvonne uses this as a teaching opportunity and points to Emma's PECS folder. This prompts Emma to get the tickling picture and give it to Yvonne. Yvonne comments on her request by saying "tickle!" and tickles her in a playful manner. Again, Yvonne interrupts her tickling to check if Emma is able to emit the mand (i.e., communicate her request) without requiring the prompt of pointing to the PECS folder, and she does so independently.

Then Yvonne presents Emma with some easy (and previously acquired) instructions to set her up for success for the more intensive teaching sequence at the table: she asks her to "give me five" and asks her to imitate some easy movements such as clapping and waving. She then smoothly progresses to some more difficult imitation goals: Emma needs to pay more attention to finer differences in fine motor movements and attend more closely to faces. Yvonne asks her to imitate pointing to specific fingers and practices the difference between touching the front versus the side of the nose. Yvonne wants Emma to become a very precise observer. For every small success Emma achieves, Yvonne smiles and giggles as a reward to motivate her to learn more. Sometimes Yvonne needs to help Emma to be successful, for example by moving Emma's hand to the correct position.
After a couple of correct imitations, Yvonne tickles Emma again. Emma's motivation to complete more tasks is high because she wants more of the tickles. That is when Yvonne transitions to another imitation task. At the table, Yvonne lays Elsa, a doll of the main character of Frozen, on the bed (play sleeping) and asks Emma to do the same. She lets her imitate other actions with other characters (e.g., Anna walks to the house, Olaf kisses Elsa, etc.) and provides help as needed. Emma has a hard time imitating even the easiest actions with objects and characters. That is why Emma is getting lots of praise and rewards for these types of activities. Now that she is improving at observing other people’s actions and that she is imitating novel actions at the table without direct or intensive teaching, they are working to transfer these play actions to more natural play settings (i.e., natural environment teaching, NET). Yvonne is asking Emma to sit on the floor where she set up a Frozen play station before and asks Emma to imitate the same play actions with the characters as before at the table. Because of all the distraction in the natural play setting, this is quite challenging for Emma and requires the prompting again, but she is staying motivated because of the frequent enthusiastic praise and tickling she is receiving as well as by being engaged with toys from her favourite movie. Yvonne writes down how much help Emma needs and if she can fade the prompts out. During the rest of the session she will pay attention to see if Emma demonstrates these play actions spontaneously. If she does, she will give her a high intensity of praise. During the Frozen play imitation tasks, Emma is getting thirsty. She walks to her PECS folder, takes the water picture and gives it to Yvonne who brings her a cup of water and gives her a 10 minute break. The session continues with a similar mixed schedule of structured teaching intervals at the table, natural play situations around various locations in the house, and breaks. Emma’s other goals are language comprehension (understanding the names of family members and Frozen characters), sorting different toys (with the functional goal of learning to clean up her room), and self-help skills (independence with using the toilet instead of diapers and washing her hands). After three hours (with breaks), Emma goes off to eat lunch with her mom while Yvonne finishes writing up the data she regularly collected during the session to let the next RBT know how far she has come with teaching and where the next RBT should continue in the afternoon. The data will be analysed by the BCBA who will make decisions how to proceed in agreement with the parents. „If a child can’t learn the way we teach, maybe we should teach the way they learn“ (Ignacio Estrada) BACB (2017). Applied Behavior Analysis Treatment of Autism Spectrum Disorder: Practice Guidelines for Healthcare Funders and Managers. https://www.bacb.com/wp-content/uploads/2017/09/ABA_Guidelines_for_ASD.pdf Eldevik, S., Hastings, R.P., Hughes, J. C., Jahr, E., Eikeseth, S., Cross, S. (2009). Meta-Analysis of Early intensive Behavioral Intervention for Children With Autism. Journal of Clinical Child and Adolescent Psychology, 38 (3), 439-450. Lovaas, O. I. (1987). Behavioral treatment and normal educational and intellectual functioning in young autistic children. Journal of Consulting and Clinical Psychology, 55, 3-9.
Soft tissue sarcomas are malignant tumours that may arise in any of the mesodermal tissues (muscles, tendons, vessels that carry blood or lymph, joints, and fat). Sarcomas are a diverse range of tumours; they are named after the type of soft tissue cell they arise from. Types of soft tissue sarcomas include: alveolar soft-part sarcoma, angiosarcoma, fibrosarcoma, leiomyosarcoma, liposarcoma, malignant fibrous histiocytoma, hemangiopericytoma, mesenchymoma, schwannoma, peripheral neuroectodermal tumours, rhabdomyosarcoma, synovial sarcoma, and other types. In terms of treatment, these different sub-types are usually treated in the same way using a uniform soft tissue protocol.

Founded in 2003, the initiative aims to improve the quality of life for people dealing with sarcomas around the world, raising awareness and research funds. It has an international panel of medical experts.

Newcastle upon Tyne Hospitals NHS Foundation Trust – one of the 5 specialist centres in England funded for the investigation and surgical treatment of primary bone tumours. Patients come from the North East of England, Cumbria, Yorkshire and beyond.

PubMed Central – search for free-access publications about Soft Tissue Sarcoma. MeSH term: Sarcoma. US National Library of Medicine. PubMed has over 22 million citations for biomedical literature from MEDLINE, life science journals, and online books. Constantly updated.

CTOS – an international group, founded in 1995, composed of physicians and scientists with a primary interest in tumors of connective tissues, whose aim is to advance the care of patients and increase knowledge through basic and clinical research.

BioMed Central – an open access, peer-reviewed journal that considers articles on all aspects of cancer research, including the pathophysiology, prevention, diagnosis and treatment of cancers. The journal welcomes submissions concerning molecular and cellular biology, genetics, epidemiology, and clinical trials.

START, European School of Oncology – a referenced statement including sections on epidemiology, pathology, diagnosis, staging, treatment and follow-up, produced by an editorial board of top European oncologists. Last updated 2004 (accessed June 2013).

This list of publications is regularly updated (Source: PubMed).

Lovasik BP, Wang VL, Point du Jour KS, et al. Visceral Kaposi Sarcoma Presenting as Small Bowel Intussusception: A Rare Presentation and Call to Action. Am Surg. 2019; 85(7):778-780 [PubMed] Related Publications
Surgical emergencies related to visceral involvement of Kaposi sarcoma (KS) are rare complications of the disease. In this report, we describe a case of visceral KS causing small bowel intussusception in a young, previously undiagnosed human immunodeficiency virus (HIV)-positive patient. Southern surgeons should be particularly attentive to HIV/AIDS-related disease as a cause of surgical pathology, particularly in the southeast, and can play a significant advocacy role for improved access to HIV/AIDS diagnostic and treatment services.

Tsuji K, Ito A, Kurokawa S, et al. Primary carcinosarcoma of the ureteropelvic junction associated with ureteral duplication: A case report. Medicine (Baltimore). 2019; 98(32):e16643 [PubMed] Related Publications
RATIONALE: Primary carcinosarcoma of the upper urinary tract is rare. Ureteral duplication is one of the most common urinary tract malformations. Additionally, the association between ureteral duplication and malignancy is unknown.
To the best of our knowledge, no cases of malignant tumors diagnosed as carcinosarcoma with ureteral duplication have been reported. We herein report the case of a patient with carcinosarcoma of the ureteropelvic junction associated with incomplete ureteral duplication. PATIENT CONCERNS: A 60-year-old Japanese woman presented with painless gross hematuria. She had a history of total hysterectomy and chemotherapy for endometrioid carcinoma 5 years before. She had no history of occupational chemical exposure. DIAGNOSES: Radiographic imaging revealed right incomplete ureteral duplication, hydronephrosis, and a polypoid tumor in the ureteropelvic junction of the lower moiety of the right kidney. Urine cytology showed a small amount of degenerated atypical epithelial and nonepithelial cells. The transureteral biopsy specimen showed dysplastic urothelial cells and atypical myoid spindle cells. These findings were indefinite for malignancy. INTERVENTIONS: The patient underwent right nephroureterectomy. Pathological examination of the resected tumor showed a biphasic neoplasm composed of carcinomatous and sarcomatous components. The sarcomatous component was immunohistochemically positive for vimentin, desmin, h-caldesmon, and α-SMA and negative for pancytokeratin (AE1/AE3), low molecular weight cytokeratin (CAM 5.2), EMA, E-cadherin, GATA3, uroplakin 2, and p63. Based on these findings, we diagnosed the tumor as carcinosarcoma. OUTCOMES: The postoperative course was uneventful. No additional therapy was administered. The patient has remained alive without recurrence for 21 months since surgery. LESSONS: Carcinosarcoma can arise from ureteral duplication. Although the majority of carcinosarcomas of the upper urinary tract are diagnosed at an advanced stage and have a poor prognosis, some can have a less aggressive course. Further studies are needed to determine the association between ureteral duplication and malignancy. Khwaja R, Mantilla E, Fink K, Pan E Adult Primary Peripheral PNET/Ewing's Sarcoma of the Cervical and Thoracic Spine. Anticancer Res. 2019; 39(8):4463-4465 [PubMed] Related Publications This case report describes a patient with a rare occurrence of primary spinal intramedullary Ewing's sarcoma (ES) in the cervical and thoracic spine. The older age of disease occurrence, uncommon location in the cervical and thoracic spine, and EWSR1 gene fusion as the basis of diagnosis are unique features of this case. There is no clear protocol for treatment of primary extraskeletal ES of the spine, with controversy between evidence for pursuing surgery versus a combination of radiation and chemotherapy. Our patient was treated with temozolomide chemotherapy for recurrent metastatic disease of primary ES of the spine. Giorgi C, Gasser UE, Lafont ME, et al. Inhibition of Chemoresistance in Primary Tumor Cells by Anticancer Res. 2019; 39(8):4101-4110 [PubMed] Related Publications BACKGROUND/AIM: Despite improvements in cancer therapy, life expectancy after tumor recurrence remains low. Relapsed cancer is characterized by drug resistance, often mediated through overexpression of multidrug resistance (MDR) genes. Camellia sinensis non fermentatum extract is known for its anticancer properties in several cancer cell lines and might improve cancer therapy outcome after tumor recurrence. 
MATERIALS AND METHODS: Embryonal rhabdomyosarcoma cell lines, alveolar rhabdomyosarcoma cell lines and primary rhabdomyosarcoma MAST139 cells were used to test NPE® effects on cell viability in combination with chemotherapeutic agents. Cell viability was measured by the WST-1 assay and CV staining. Gene expression levels of chemotherapy-induced efflux pumps and their activity was assessed upon NPE® treatment by measuring doxorubicin retention through evaluation of the autofluorescence signal. RESULTS: Administration of increasing doxorubicin concentrations triggered immediate adaptation to the drug, which was surprisingly overcome by the addition of NPE®. Investigating the mechanism of immediate adaptation, MDR1 gene overexpression was observed upon doxorubicin treatment. Although NPE® did not alter pump gene expression, it was able to reduce pump activity, thus allowing the chemotherapeutic agent to stay inside the cells to exert its full anticancer activity. CONCLUSION: NPE® might improve chemotherapeutic treatment by re-sensitizing relapsed tumors to anticancer drugs. Fighting MDR represents the key to overcome tumor relapse and improve the overall survival of cancer patients. Higuchi T, Sugisawa N, Miyake K, et al. Sorafenib and Palbociclib Combination Regresses a Cisplatinum-resistant Osteosarcoma in a PDOX Mouse Model. Anticancer Res. 2019; 39(8):4079-4084 [PubMed] Related Publications BACKGROUND/AIM: Recurrent osteosarcoma is a recalcitrant disease; therefore, an improved strategy is urgently needed to provide therapy. In order to develop a novel strategy for this disease, our lab has developed a patient-derived orthotopic xenograft (PDOX) mouse model for osteosarcoma. The combination of sorafenib (SFN) and palbociclib (PAL) was shown to be effective of hepatocellular carcinoma. However, whether this combination is efficacious on osteosarcoma has not been reported. The aim of this study was to determine the efficacy of the SFN and PAL combination on a cisplatinum (CDDP)-resistant osteosarcoma PDOX model. MATERIALS AND METHODS: Osteosarcoma PDOX models were randomly divided into five treatment groups: untreated-control, CDDP, SFN, PAL and the combination of SFN and PAL. RESULTS: Of these agents, the SFN-PAL combination significantly regressed tumor growth, and enhanced tumor necrosis with degenerative changes in the osteosarcoma PDOX. CONCLUSION: The SFN-PAL combination is an effective treatment strategy for osteosarcoma and therefore holds promise for clinical efficacy. Glorie N, Baert T, VAN DEN Bosch T, Coosemans AN Circulating Protein Biomarkers to Differentiate Uterine Sarcomas from Leiomyomas. Anticancer Res. 2019; 39(8):3981-3989 [PubMed] Related Publications Uterine sarcomas are rare but very aggressive. Uterine myomas, on the other hand, are the most common benign tumors of the uterus. Currently there is no diagnostic technique available to distinguish them with certainty. This study aimed to summarize the published literature concerning protein-based biomarkers in the peripheral blood that can assist in this difficult differential diagnosis. In total, 48 articles, published between 1990 and 2017, were included. Most studies (n=37) concerned soft tissue sarcomas, while 11 discussed uterine sarcomas specifically. 
Vascular endothelial growth factor (VEGF), basic fibroblast growth factor (bFGF), interleukins (IL), cancer antigen 125 (CA 125), lactate dehydrogenase, gangliosides (LDH) and growth differentiation factor 15 (GDF-15) are the most studied proteins in soft tissue sarcomas, including uterine sarcomas. Future research on improving sarcoma diagnosis should include these proteins. Fujiwara T, Medellin MR, Sambri A, et al. Preoperative surgical risk stratification in osteosarcoma based on the proximity to the major vessels. Bone Joint J. 2019; 101-B(8):1024-1031 [PubMed] Related Publications AIMS: The aim of this study was to determine the risk of local recurrence and survival in patients with osteosarcoma based on the proximity of the tumour to the major vessels. PATIENTS AND METHODS: A total of 226 patients with high-grade non-metastatic osteosarcoma in the limbs were investigated. Median age at diagnosis was 15 years (4 to 67) with the ratio of male to female patients being 1.5:1. The most common site of the tumour was the femur (n = 103) followed by tibia (n = 66). The vascular proximity was categorized based on the preoperative MRI after neoadjuvant chemotherapy into four types: type 1 > 5 mm; type 2 ≤ 5 mm, > 0 mm; type 3 attached; type 4 surrounded. RESULTS: Limb salvage rate based on the proximity type was 92%, 88%, 51%, and 0% for types 1 to 4, respectively, and the overall survival at five years was 82%, 77%, 57%, and 67%, respectively (p < 0.001). Local recurrence rate in patients with limb-salvage surgery was 7%, 8%, and 22% for the types 1 to 3, respectively (p = 0.041), and local recurrence at the perivascular area was observed in 1% and 4% for type 2 and 3, respectively. The mean microscopic margin to the major vessels was 6.9 mm, 3.0 mm, and 1.4 mm for types 1 to 3, respectively. In type 3, local recurrence-free survival with limb salvage was significantly poorer compared with amputation (p = 0.025), while the latter offered no overall survival benefit. In this group of patients, factors such as good response to chemotherapy or limited vascular attachment to less than half circumference or longitudinal 10 mm reduced the risk of local recurrence. CONCLUSION: The proximity of osteosarcoma to major blood vessels is a poor prognostic factor for local control and survival. Amputation offers better local control for tumours attached to the blood vessels but does not improve survival. Limb salvage surgery offers similar local control if the tumour attachment to blood vessels is limited. Cite this article: Background: In recent years, microRNA-211 (miR211) has been considered as a tumor suppressor in multiple malignancies. However, the function of miR211 in human osteosarcoma has not been explored intensively so far. In this study, the relationship between miR211 and EZRIN was analyzed in human osteosarcoma. Methods: The expression levels of miR211 and EZRIN were measured in both human osteosarcoma cells and tissues. The direct regulatory relationship between miR211 and EZRIN was evaluated using dual-luciferase assay. The effect of miR211 and EZRIN overexpression on cell proliferation, migration/invasion, and apoptosis was detected. Results: The expression of miR211 was obviously lower in osteosarcoma tissues than paracancerous tissues. EZRIN was identified as the direct target of miR211, and up-regulation of miR211 increased the percentage of cell apoptosis, and suppressed cell proliferation as well as cell migration/invasion via directly regulating EZRIN. 
Conclusions: Our study indicated that miR211 has an important role in the development and progress of osteosarcoma, and it might become a novel target in the diagnosis and treatment of human osteosarcoma. Introduction: odontogenic tumors originate from neoplastic transformation of the remnants of tooth forming apparatus. There are varying degrees of inductive interactions between odontogenic ectomesenchyme and epithelium during odontogenesis, leading to lesions that vary from benign to malignant. Malignant odontogenic tumours (MOTs) are very rare and are classified according to embryonic tissue of origin. Recently, there has been a few changes to the classification of MOTs according to the World Health Organization's (WHO) classification in 2017. This study aims to evaluate and reclassify MOTs, using a multi-centre approach in some major tertiary dental hospitals in Nigeria. Methods: this study reviewed the clinicopathological data on 63 cases of MOT diagnosed over 25 years in five major tertiary dental hospitals in Nigeria. All MOT cases were reclassified according to the recent revision to the 2017 WHO classification of odontogenic tumours. Results: from a total of 10,446 biopsies of oral and jaw lesions seen at the 5 study centres over the 25-year study period, 2199 (21.05%) cases were found to be odontogenic tumours (OTs), of which 63 were MOT. MOTs constituted 0.60% of the total biopsy cases and 2.86% of OTs. Odontogenic carcinomas presented with a mean age higher than odontogenic sarcomas. According to our 2017 WHO reclassification of MOTs, odontogenic carcinomas, ameloblastic carcinomas and primary intraosseous carcinomas were found to be the top three lesions, respectively. Carcinosarcomas were found to be extremely rare. Conclusion: using a multi-centre approach is a robust way to reduce diagnostic challenges associated with rare maxillofacial lesions such as MOTs. We are going to present a case of malignant fibrous histiocytoma in the right atrium, which is a very rare entity. The patient had a right atrial mass, which prolapsed through the tricuspid valve into the right ventricle, causing functional tricuspid valve stenosis. The tumor was completely resected and the patient had an uneventful postoperative period. Histopathological examination reported malignant fibrous histiocytoma. The patient presented to the emergency department five weeks after discharge with dyspnea and palpitation. Echocardiography and magnetic resonance imaging revealed recurrent right atrial tumor mass. His clinical status has worsened, with syncope and acute renal failure. On the repeated echocardiography, suspected tumor recurrence was observed in left atrium, which probably caused systemic embolization. Considering the aggressive nature of the tumor and systemic involvement, our Heart Council decided to provide palliative treatment by nonsurgical management. His status deteriorated for the next few days and the patient succumbed to a cardiac arrest on the 4th day. RATIONALE: Primary splenic angiosarcoma (PSA) is a rare mesenchymal malignancy of the splenic vascular origin often with a dismal prognosis. Genomic profile may provide evidence for the solution of therapy. PATIENT CONCERNS: We reported a case of a 51-year-old woman with splenectomy 4 years ago and the postoperative histopathology diagnosis revealed "splenic hemangioma" with spontaneous rupture. Two years after the operation, the patient's rechecked abdominal computed tomography (CT) showed multiple hepatic occupations. 
DIAGNOSES: Pathological test suggested PSA hepatic metastasis. INTERVENTIONS: The patient was treated with trans-catheter arterial chemoembolization (TACE) and a pathological diagnosis of PSA was highly suspected in the hepatic biopsy. Four somatic alterations, phosphatidylinositol-4,5-bisphosphate 3-kinase catalytic subunit alpha (PIK3CA), Fos proto-oncogene, AP-1 transcription factor subunit (FOS), MCL1 apoptosis regulator (MCL1), and phosphoinositide-3-kinase regulatory subunit 1 (PIK3R1) were detected in the tumor tissue using a Next generation sequencing (NGS) technology. The results prompted that the patient may get clinical benefit from using some agents for targeted therapy, Everolimus, Temsirolimus, or Copanlisib. OUTCOMES: The patient refused targeted therapy. As a result, the patient passed away within 51 months after splenectomy. LESSONS: PSA is an aggressive disease that often presented with a high propensity for metastasis and rupture hemorrhage. Some of these mutations were first discovered in PSA and these findings added new contents to the genomic mutation profile of PSA. Mian A, Singal AK, Bakhshi S, et al. Treatment of Pulmonary Embolism with Chemotherapy in a Case of Newly Diagnosed Osteosarcoma. J Assoc Physicians India. 2019; 67(4):76-78 [PubMed] Related Publications A 21-year old female, recently diagnosed with osteosarcoma of right humerus, presented to the emergency with history of fever, productive cough, chest pain and progressive respiratory distress for six days. Initial investigations suggested pneumonia but she did not respond to parenteral antibiotics. CT pulmonary angiogram revealed bilateral pulmonary artery embolism. Thrombolysis was performed using alteplase, which failed to improve the clinical condition. In view of underlying malignancy, a possibility of tumour-embolism was considered and she was started on chemotherapy for osteosarcoma. There was dramatic improvement in her respiratory symptoms after the first chemotherapy cycle, along with radiological resolution of the embolism. This case highlights the importance of suspecting tumour embolism in a known case of malignancy with respiratory distress. Kaposi sarcoma (KS) is an endothelial tumor etiologically related to Kaposi sarcoma herpesvirus (KSHV) infection. The aim of our study was to screen out candidate genes of KSHV infected endothelial cells and to elucidate the underlying molecular mechanisms by bioinformatics methods. Microarray datasets GSE16354 and GSE22522 were downloaded from Gene Expression Omnibus (GEO) database. the differentially expressed genes (DEGs) between endothelial cells and KSHV infected endothelial cells were identified. And then, functional enrichment analyses of gene ontology (GO) and Kyoto encyclopedia of genes and genomes (KEGG) pathway analysis were performed. After that, Search Tool for the Retrieval of Interacting Genes (STRING) was used to investigate the potential protein-protein interaction (PPI) network between DEGs, Cytoscape software was used to visualize the interaction network of DEGs and to screen out the hub genes. A total of 113 DEGs and 11 hub genes were identified from the 2 datasets. GO enrichment analysis revealed that most of the DEGs were enrichen in regulation of cell proliferation, extracellular region part and sequence-specific DNA binding; KEGG pathway enrichments analysis displayed that DEGs were mostly enrichen in cell cycle, Jak-STAT signaling pathway, pathways in cancer, and Insulin signaling pathway. 
In conclusion, the present study identified a host of DEGs and hub genes in KSHV infected endothelial cells which may serve as potential key biomarkers and therapeutic targets, helping us to have a better understanding of the molecular mechanism of KS. Song Y, Bruce AN, Fraker DL, Karakousis GC Isolated limb perfusion and infusion in the treatment of melanoma and soft tissue sarcoma in the era of modern systemic therapies. J Surg Oncol. 2019; 120(3):540-549 [PubMed] Related Publications BACKGROUND AND OBJECTIVES: Isolated limb perfusion (ILP) and infusion (ILI) are treatment modalities for unresectable melanoma in-transit metastases and extremity soft tissue sarcomas (STS). We sought to characterize the national trend in their utilization in the context of novel melanoma therapies introduced in 2011. METHODS: Using the National Inpatient Sample (2005-2014), patients with a primary diagnosis of limb melanoma or STS who underwent ILP/ILI were identified by diagnosis and procedure codes. Annual percent change (APC) in ILP/ILI procedures was determined. RESULTS: From 2005 through 2014, 670 and 130 ILP/ILI procedures were performed for melanoma and STS, respectively. Mean age was 64 (SD 15) years for melanoma and 59 (SD 18) years for STS. Over time, procedures for melanoma decreased with an APC of -17 (P = .019). Comparing 2005-2010 and 2011-2014, the mean number of procedures for melanoma decreased from 91 to 32 per year (P = .007). In contrast, there was no change for STS (APC 6.5, P = .39; mean 11 and 16 per year in 2005-2010 and 2011-2014, respectively, P = .46). CONCLUSIONS: ILI/ILP utilization has decreased for melanoma, but not for STS. Whether trends for ILP and ILI differed could not be determined. ILP/ILI remains an important option to consider for regional disease control. Lee YJ, Chung JG, Chien YT, et al. Suppression of ERK/NF-κB Activation Is Associated With Amentoflavone-Inhibited Osteosarcoma Progression Anticancer Res. 2019; 39(7):3669-3675 [PubMed] Related Publications BACKGROUND/AIM: Amentoflavone has been implicated in reducing the metastatic potential of osteosarcoma (OS) cells in vitro. The aim of the present study was to verify the antitumoral efficacy and the potential mechanism of amentoflavone osteosarcoma progression inhibition in vivo. MATERIALS AND METHODS: A U-2 OS osteosarcoma xenograft mouse model was used in this study. Mice were treated with a vehicle control or amentoflavone (100 mg/kg/day) for 15 days. Tumor growth, signal transduction, and expression of tumor progression-associated proteins were evaluated using a digital caliper, bioluminescence imaging (BLI), animal computed tomography (CT), and ex vivo western blotting assay. RESULTS: Amentoflavone significantly inhibits tumor growth and reduces protein levels of phospho-extracellular signal-regulated kinase (P-ERK), nuclear factor-kappaB (NF-κB) p65 (Ser536), vascular endothelial growth factor (VEGF), matrix metallopeptidase 9 (MMP-9), X-linked inhibitor of apoptosis protein (XIAP), and cyclin-D1 in osteosarcoma in vivo. CONCLUSION: The inhibition of ERK/NF-κB activation is associated with amentoflavone-inhibited osteosarcoma progression in vivo. Iwasaki J, Komori T, Nakagawa F, et al. Schlafen11 Expression Is Associated With the Antitumor Activity of Trabectedin in Human Sarcoma Cell Lines. Anticancer Res. 2019; 39(7):3553-3563 [PubMed] Related Publications BACKGROUND/AIM: Trabectedin is a DNA-damaging agent and has been approved for the treatment of patients with advanced soft tissue sarcoma. 
Schlafen 11 (SLFN11) was identified as a dominant determinant of the response to DNA-damaging agents. The aim of the study was to clarify the association between SLFN11 expression and the antitumor activity of trabectedin. MATERIALS AND METHODS: The antitumor activity of trabectedin was evaluated under different expression levels of SLFN11 regulated by RNA interference and CRISPR-Cas9 systems, and the combined antitumor activity of ataxia telangiectasia and Rad3-related protein kinase (ATR) inhibitor and trabectedin in sarcoma cell lines using in vitro a cell viability assay and in vivo xenograft models. RESULTS: SLFN11-knockdown cell lines had a lower sensitivity to trabectedin, compared to parental cells. ATR inhibitor enhanced the antitumor activity of trabectedin in SLFN11-knockdown cells and in a SLFN11-knockout xenograft model. CONCLUSION: SLFN11 expression might be a key factor in the antitumor activity of trabectedin. Natarajan SK, Venneti S Poly Combs the Immune System: PRC2 Loss in Malignant Peripheral Nerve Sheath Tumors Can Dampen Immune Responses. Cancer Res. 2019; 79(13):3172-3173 [PubMed] Related Publications Epigenetic modifications including altered DNA methylation and histone posttranslational modifications (PTM) are central to the biology of several cancers. These modifications can regulate DNA accessibility and consequently, gene expression. In this issue, Wojcik and colleagues explore epigenetic drivers of malignant peripheral nerve sheath tumors (MPNST) harboring loss-of-function polycomb-repressive complex 2 mutations. They demonstrate alterations in specific histone PTMs and a global increase in DNA methylation. Notably, epigenetic alterations related with aberrant upregulation of proteins involved in immune evasion, which informed identification of potential therapeutic vulnerabilities. This study helps understand the complex biology of MPNSTs and may enable future therapeutic development. The aim of this study was to develop nomograms to predict long-term overall survival and cancer-specific survival of patients with osteosarcoma.We carried out univariate and multivariate analyses and set up nomograms predicting survival outcome using osteosarcoma patient data collected from the Surveillance, Epidemiology and End Results (SEER) program of the National Cancer Institute (2004-2011, n = 1426). The patients were divided into a training cohort (2004-2008, n = 863) and a validation cohort (2009-2011, n = 563), and the mean follow-up was 55 months.In the training cohort, 304 patients (35.2%) died from osteosarcoma and 91 (10.5%) died from other causes. In the validation cohort, 155 patients (27.5%) died from osteosarcoma and (12.3%) died from other causes. Nomograms predicting overall survival (OS) and cancer-specific survival (CSS) were developed according to 6 clinicopathologic factors (age, tumor site, historic grade, surgery, AJCC T/N, and M), with concordance indexes (C-index) of 0.725 (OS) and 0.718 (CSS), respectively. The validation C-indexes were 0.775 and 0.742 for OS and CSS, respectively.Our results suggest that we have successfully developed highly accurate nomograms for predicting 5-year OS and CSS for osteosarcoma patients. These nomograms will help surgeons customize treatment and monitoring strategies for osteosarcoma patients. Xu F, Song Y, Guo A Anti-Apoptotic Effects of Docosahexaenoic Acid in IL-1β-Induced Human Chondrosarcoma Cell Death through Involvement of the MAPK Signaling Pathway. Cytogenet Genome Res. 
2019; 158(1):17-24 [PubMed] Related Publications Osteoarthritis (OA) is a degenerative disease characterized by progressive articular cartilage destruction and joint marginal osteophyte formation with different degrees of synovitis. Docosahexaenoic acid (DHA) is an unsaturated fatty acid with anti-inflammatory, antioxidant, and antiapoptotic functions. In this study, the human chondrosarcoma cell line SW1353 was cultured in vitro, and an OA cell model was constructed with inflammatory factor IL-1β stimulation. After cells were treated with DHA, cell apoptosis was measured. Western blot assay was used to detect protein expression of apoptosis-related factors (Bax, Bcl-2, and cleaved caspase-3) and mitogen-activated protein kinase (MAPK) signaling pathway family members, including extracellular signal-regulated kinase (ERK), c-JUN N-terminal kinase (JNK), and p38 MAPK. Our results show that IL-1β promotes the apoptosis of SW1353 cells, increases the expression of Bax and cleaved caspase-3, and activates the MAPK signaling pathway. In contrast, DHA inhibits the expression of IL-1β, inhibits IL-1β-induced cell apoptosis, and has a certain inhibitory effect on the activation of the MAPK signaling pathway. When the MAPK signaling pathway is inhibited by its inhibitors, the effects of DHA on SW1353 cells are weakened. Thus, DHA enhances the apoptosis of SW1353 cells through the MAPK signaling pathway. Chouliaras K, Senehi R, Ethun CG, et al. Recurrence patterns after resection of retroperitoneal sarcomas: An eight-institution study from the US Sarcoma Collaborative. J Surg Oncol. 2019; 120(3):340-347 [PubMed] Related Publications BACKGROUND AND OBJECTIVES: Resection of primary retroperitoneal sarcomas (RPS) has a high incidence of recurrence. This study aims to identify patterns of recurrence and its impact on overall survival. METHODS: Adult patients with primary retroperitoneal soft tissue sarcomas who underwent resection in 2000-2016 at eight institutions of the US Sarcoma Collaborative were evaluated. RESULTS: Four hundred and ninety-eight patients were analyzed, with 56.2% (280 of 498) having recurrences. There were 433 recurrences (1-8) in 280 patients with 126 (25.3%) being locoregional, 82 (16.5%) distant, and 72 (14.5%) both locoregional and distant. Multivariate analyses revealed the following: Patient age P = .0002), tumor grade (P = .02), local recurrence (P = .0003) and distant recurrence (P < .0001) were predictors of disease-specific survival. The 1-, 3-, and 5-year survival rate for patients who recurred vs not was 89.6% (standard error [SE] 1.9) vs 93.5% (1.8), 66.0% (3.2) vs 88.4% (2.6), and 51.8% (3.6) vs 83.9% (3.3), respectively, P < .0001. Median survival was 5.3 years for the recurrence vs 11.3+ years for the no recurrence group (P < .0001). Median survival from the time of recurrence was 2.5 years. CONCLUSIONS: Recurrence after resection of RPS occurs in more than half of patients independently of resection status or perioperative chemotherapy and is equally distributed between locoregional and distant sites. Recurrence is primarily related to tumor biology and is associated with a significant decrease in overall survival. Accurate diagnoses of sarcoma are sometimes challenging on conventional histomorphology and immunophenotype. Many specific genetic aberrations including chromosomal translocations have been identified in various sarcomas, which can be detected by fluorescence in situ hybridization and polymerase chain reaction analysis. 
Next-generation sequencing-based RNA sequencing can screen multiple sarcoma-specific chromosome translocations/fusion genes in one test, which is especially useful for sarcomas without obvious differentiation. In this report, we utilized RNA sequencing on formalin-fixed paraffin-embedded (FFPE) specimens to investigate the possibility of diagnosing sarcomas by identifying disease-specific fusion genes. Targeted RNA sequencing was performed on 6 sarcoma cases. The expected genetic alterations (clear cell sarcoma/EWSR1-ATF1, Ewing sarcoma/EWSR1-FLI1, myxoid liposarcoma/DDIT3-FUS) in four cases were detected and confirmed by secondary tests. Interestingly, three SS18 fusion genes (SS18-SSX2B, SS18-SSX2, and SS18-SSX4) were identified in a synovial sarcoma case. A rare fusion gene (EWSR1-PATZ1) was identified in a morphologically challenging case, which enabled us to establish the diagnosis of a low-grade glioneural tumor. In conclusion, RNA sequencing on FFPE specimens is a reliable method for establishing the diagnosis of sarcoma in daily practice. Chrisinger JSA, Al-Zaid T, Keung EZ, et al. The degree of sclerosis is associated with prognosis in well-differentiated liposarcoma of the retroperitoneum. J Surg Oncol. 2019; 120(3):382-388 [PubMed] Related Publications BACKGROUND AND OBJECTIVES: Well-differentiated liposarcomas (WDL) are often partly composed of sclerotic tissue; however, the amount varies widely between tumors, and its prognostic significance is unknown. We hypothesized that tumors with more sclerosis would behave more aggressively. METHODS: Primary retroperitoneal WDL from 29 patients resected at our institution with follow-up were histologically evaluated by soft tissue pathologists blinded to outcome. Tumors with ≥ 10% sclerosis were designated "sclerotic" while tumors with < 10% sclerosis were designated "minimally sclerotic". Cellular and dedifferentiated tumors were excluded. Clinical parameters and radiologic assessments on computed tomography (CT) were recorded. RESULTS: Histological evaluation identified 13 minimally sclerotic WDL and 16 sclerotic WDL. Median follow-up was 9 years (range, 3-20). Median recurrence-free survival (RFS) and median overall survival (OS) were 6.16 and 13.9 years, respectively. Compared with patients with sclerotic WDL, those with minimally sclerotic WDL had superior RFS (HR = 0.17 [95% CI, 0.06-0.53], P = .002) and OS (log-rank test, P = .002). Sclerotic WDL exhibited higher Hounsfield units than minimally sclerotic WDL (26 vs 1, P = .040). CONCLUSIONS: Minimally sclerotic WDL were associated with more favorable outcomes compared with sclerotic tumors. Assessment of sclerosis in WDL is likely a useful prognostic marker. RATIONALE: The occurrence of Ewing's sarcoma in the vertebral body of an elderly woman is extremely rare, and a case of spinal Ewing's sarcoma requiring secondary surgical repair after misdiagnosis and inappropriate treatment has not been reported. We report a case of primary Ewing's sarcoma of the vertebral body in an elderly female. Owing to its rarity and the controversial issues involved, we present this case to discuss its clinical features, treatment, and radiological and histological characteristics. PATIENT CONCERNS: The elderly female patient presented to us with total paralysis of both lower limbs. The patient, whose primary manifestation was a vertebral compression fracture, had been misdiagnosed at another hospital. 
The patient underwent inappropriate surgical treatment and was transferred to our hospital for diagnosis and second-stage surgery. DIAGNOSES: Postoperative pathological and immunohistochemical examination at our hospital confirmed Ewing's sarcoma; the surgical history from the other hospital indicated prior bone cement injection. INTERVENTIONS: The patient underwent a T6 and T8 laminectomy and T5/6-T9 pedicle screw fixation. OUTCOMES: Reexamination 1 month after the surgery showed that the tumor had been partially resected, the spinal cord compression was relieved, the tumor did not grow further, and the patient's lower limb motor function, tactile sense, pain sensation and temperature sense recovered slightly. LESSONS: For patients with Ewing's sarcoma in the spinal canal presenting with symptoms of spinal cord compression, even after a poor outcome from an ill-advised operation, active surgical decompression of the spinal cord is still necessary. The differential diagnosis of Ewing's sarcoma and compression fractures is very important. For patients with vertebral tumors, special attention should be paid during vertebroplasty to bone cement leakage caused by excessive bone cement injection and increased local pressure. We also share some experience with the imaging and laboratory findings. This study examines survival time in patients with small bowel tumors and determines its contributing factors. In this retrospective analytical study, the medical records of 106 patients with small bowel cancer (from 2006 to 2011) were investigated. The patients' data were extracted, including age, gender, clinical presentation, location of tumor, histological type, grade of tumor, site of metastasis, and type of treatment. The Kaplan-Meier method was used to estimate overall survival time and the log-rank test to compare survival curves. Cox regression was also used to evaluate the effect of confounding variables on survival time. This study was conducted on 106 patients with a median age of 60 years (Min: 7, Max: 87). The tumor types included adenocarcinoma (n=78, 73.6%), MALToma (n=22, 20.8%), neuroendocrine tumors (n=4, 3.8%), and sarcoma (n=2, 1.8%). Grade 3 adenocarcinomas had a significantly lower survival time (HR: 1.48, 95% CI: 0.46-2.86; P=.001). Combined therapy (chemotherapy and surgery) vs. single therapy (surgery only) had no significant effect on the survival of the patients with MALToma (5 vs. 3 months, 95% CI: 1.89-5.26; P=.06). There were no significant differences between the survival times in adenocarcinoma and MALToma (12 vs. 20 months, 95% CI: 6.24-24.76; P=.49). Tumor grade was the only independent prognostic factor that affected survival in adenocarcinoma. The patients diagnosed with MALToma in the study also had a poor prognosis, and the type of treatment had no significant effect on their survival. BACKGROUND: Retroperitoneal sarcomas (RPS) comprise a heterogeneous group of rare malignant tumours, and various treatment algorithms are still controversially discussed today. The present study aimed to examine postoperative and long-term outcomes after resection of primary RPS. PATIENTS AND METHODS: Clinicopathological data of patients who underwent resection of primary RPS between 2005 and 2015 were assessed, and predictors of overall survival (OS) and disease-free survival (DFS) were identified. RESULTS: Sixty-one patients underwent resection for primary RPS. Postoperative morbidity and mortality rates were 31% and 3%, respectively. 
After a median follow-up time of 74 months, 5-year OS and DFS rates were 58% and 34%, respectively. Histologic high grade (5-year OS: G1: 92% vs. G2: 54% vs. G3: 43%, P = 0.030) was significantly associated with diminished OS in univariate and multivariate analyses. When assessing DFS, histologic high grade (5-year DFS: G1: 63% vs. G2: 24% vs. G3: 22%, P = 0.013), positive surgical resection margins (5-year DFS: R0: 53% vs. R1: 10% vs. R2: 0%, P = 0.014), and vascular involvement (5-year DFS: yes: 33% vs no: 39%, P = 0.001) were significantly associated with inferior DFS in univariate and multivariate analyses. CONCLUSIONS: High-grade tumours indicated poor OS, while vascular involvement, positive surgical resection margins, and histologic grade are the most important predictors of DFS. Although multimodal treatment strategies are progressively being established, surgical resection remains the mainstay in the majority of patients with RPS, even in cases with vascular involvement. Taskin OC, Akkas G, Memis B, et al. Sarcomatoid carcinomas of the gallbladder: clinicopathologic characteristics. Virchows Arch. 2019; 475(1):59-66 [PubMed] Related Publications Sarcomatoid carcinomas recently came into the spotlight through genetic profiling studies and also as a distinct model of epithelial-mesenchymal transition. The literature on sarcomatoid carcinomas of the gallbladder is limited. In this study, 656 gallbladder carcinomas (GBC) were reviewed. Eleven (1.7%) with a sarcomatoid component were identified and analyzed in comparison with ordinary GBC (O-GBC). Patients included 9 females and 2 males (F/M = 4.5 vs. 3.9) with a mean age-at-diagnosis of 71 (vs. 64). The median tumor size was 4.6 cm (vs. 2.5; P = 0.01). Nine patients (84%) presented with advanced stage (pT3/4) tumors (vs. 48%). An adenocarcinoma component constituting 1-75% of the tumor was present in nine, and eight had surface dysplasia/CIS; either in situ or invasive carcinoma was present in all cases. An intracholecystic papillary-tubular neoplasm was identified in one. Seven showed a pleomorphic-sarcomatoid pattern, and four showed subtle/bland elongated spindle cells. Three had an angiosarcomatoid pattern. Two had heterologous elements. One showed a few osteoclast-like giant cells, only adjacent to osteoid. Immunohistochemically, vimentin was positive in six of six; P53 expression was > 60% in six of six, keratins in six of seven, and p63 in two of six. Actin, desmin, and S100 were negative. The median Ki67 index was 40%. In the follow-up, one patient died peri-operatively, eight died of disease within 3 to 8 months (vs. a 26-month median survival for O-GBC), and two were alive at 9 and 15 months. The behavior was overall worse than that of ordinary adenocarcinomas in general but was not different when grade and stage were matched. In summary, a sarcomatoid component is identified in < 2% of GBC. Unlike sarcomatoid carcinomas in the remainder of the pancreatobiliary tract, these are seldom of the "osteoclastic" type, and patients present with large/advanced-stage tumors. Limited data suggest that these tumors are aggressive, with rapid mortality, unlike pancreatic osteoclastic ones, which often have indolent behavior. Lenze F, Pohlig F, Knebel C, et al. Psychosocial Distress in Follow-up Care - Results of a Tablet-based Routine Screening in 202 Patients With Sarcoma. Anticancer Res. 2019; 39(6):3159-3165 [PubMed] Related Publications BACKGROUND: Patients with sarcoma are particularly vulnerable to psychosocial distress. 
The aim of this study was to collect preliminary data on the prevalence of psychosocial distress in such patients during follow-up care and identify risk factors associated with higher psycho-oncological stress levels. PATIENTS AND METHODS: The study retrospectively enrolled 202 patients with bone or soft-tissue sarcomas who underwent routine psychosocial distress screening during their follow-up care. All patients were screened using an electronic cancer-specific questionnaire. RESULTS: Females and patients who underwent radiotherapy were more distressed. Psychosocial distress levels were markedly higher in the early postoperative phase, but approximately one-third of patients showed high psychosocial distress levels even more than 2 years postoperatively. CONCLUSION: The results underscore the importance of routine psychosocial distress screenings in patients with sarcoma, which should be performed throughout the follow-up period. Desai KB, Mella D, Pan E An Adult Patient With Rare Primary Intracranial Alveolar Rhabdomyosarcoma. Anticancer Res. 2019; 39(6):3067-3070 [PubMed] Related Publications We report a rare case of primary intracranial alveolar rhabdomyosarcoma (ARMS) in the right temporal lobe of a 51-year-old male. ARMS is one of 3 histological subtypes of rhabdomyosarcoma that most commonly presents in older children and younger adults. To our knowledge, there have been no prior published reports of primary intracranial ARMS in adults. Known cases of intracranial ARMS in adults are due to central nervous system (CNS) metastases from the head and neck and extremities. Diagnostic workup did not reveal any primary source outside the CNS. Given that risk factors for ARMS have not been studied in adults, it is difficult to ascertain what aspects of this patient's clinical history may have contributed to his diagnosis. Interestingly, he had prior history of traumatic brain injury requiring evacuation of a right fronto-temporal intraparenchymal hematoma. Mühlhofer HML, Lenze U, Gersing A, et al. Prognostic Factors and Outcomes for Patients With Myxofibrosarcoma: A 13-Year Retrospective Evaluation. Anticancer Res. 2019; 39(6):2985-2992 [PubMed] Related Publications BACKGROUND: Soft-tissue sarcomas are rare entities that are divided into approximately 50 histological subtypes. Myxofibrosarcoma (MFS) represents approximately 20% of all soft-tissue sarcomas, especially in elderly patients in their sixth to eighth decades of life. The treatment for soft-tissue sarcomas varies from primary surgical resection to neoadjuvant or adjuvant radiotherapy or cytotoxic chemotherapy. The aim of this study was to evaluate the prognostic factors affecting survival of patients with MFS, taking into account gender, tumour grade, state of the resection margin, local recurrence, use of radiotherapy, presence of metastases and blood levels of haemoglobin and C-reactive protein in a retrospective, single-centre analysis with a minimum follow-up period of 60 months (range=60-156 months). PATIENTS AND METHODS: The study included 34 patients (male/female=20/14). Tumour localization, tumour grade, tumour margins, local recurrence, the use of radiotherapy, the presence of metastasis, and the blood levels of haemoglobin and C-reactive protein preoperatively and during follow-up were evaluated. RESULTS: MFS constituted the most common high-grade sarcoma (G2/G3, 79.4%) in our cohort and was generally located in a lower limb (73.6%). 
Negative margins (R0) were detected in 67.6% of patients after surgical resection, and local recurrence occurred in 23.5% of all patients after a mean disease-free period of 19.4 months. Both parameters exerted no significant influence on survival. Radiotherapy was performed in a neoadjuvant or an adjuvant setting in 50% of patients (eight neoadjuvant, nine adjuvant). Metastasis occurred after a mean of 20.4 months in 38.2% of the patients. Higher C-reactive protein levels showed a trend towards being associated with worse survival, but the association was not significant (p=0.084); haemoglobin level had no influence on the survival rate (p=0.426). Tumour grade and metastasis were significant prognostic factors of survival (log-rank test p=0.041 and p=0.00007). Ten patients (29.4%) died due to MFS during our follow-up period. CONCLUSION: The tumour grade and metastasis of MFS are independently associated with disease-specific survival, whereas negative surgical margins, local recurrence and blood levels of C-reactive protein and haemoglobin were not significant prognostic factors. The understanding of the molecular biological patterns that result in the metastasis of these tumours will help develop better treatment plans in the future. Jerez S, Araya H, Hevia D, et al. Extracellular vesicles from osteosarcoma cell lines contain miRNAs associated with cell adhesion and apoptosis. Gene. 2019; 710:246-257 [PubMed] Article available free on PMC after 20/08/2020 Related Publications Osteosarcoma is the most common primary bone tumor during childhood and adolescence. Several reports have presented data on serum biomarkers for osteosarcoma, but few reports have analyzed circulating microRNAs (miRNAs). In this study, we used next generation miRNA sequencing to examine miRNAs isolated from microvesicle-depleted extracellular vesicles (EVs) derived from six different human osteosarcoma or osteoblastic cell lines with different degrees of metastatic potential (i.e., SAOS2, MG63, HOS, 143B, U2OS and hFOB1.19). EVs from each cell line contain on average ~300 miRNAs, and ~70 of these miRNAs are present at very high levels (i.e., >1000 reads per million). The most prominent miRNAs are miR-21-5p, miR-143-3p, miR-148a-3p and 181a-5p, which are enriched between 3 and 100 fold and relatively abundant in EVs derived from metastatic SAOS2 cells compared to non-metastatic MG63 cells. Gene ontology analysis of predicted targets reveals that miRNAs present in EVs may regulate the metastatic potential of osteosarcoma cell lines by potentially inhibiting a network of genes (e.g., MAPK1, NRAS, FRS2, PRCKE, BCL2 and QKI) involved in apoptosis and/or cell adhesion. Our data indicate that osteosarcoma cell lines may selectively package miRNAs as molecular cargo of EVs that could function as paracrine agents to modulate the tumor micro-environment.
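For readers unfamiliar with the "reads per million" and fold-enrichment figures quoted in the extracellular-vesicle miRNA abstract above, the minimal sketch below illustrates how such values are typically derived from raw small-RNA read counts. It is a generic illustration of the arithmetic only, not the authors' actual sequencing pipeline; the function names and example counts are invented for this sketch.

```python
# Minimal sketch of reads-per-million (RPM) normalization and fold enrichment
# between two samples. All counts below are invented for illustration only.

def rpm(counts: dict[str, int]) -> dict[str, float]:
    """Normalize raw read counts to reads per million mapped reads."""
    total = sum(counts.values())
    return {mirna: n / total * 1_000_000 for mirna, n in counts.items()}

def fold_enrichment(sample_a: dict[str, float], sample_b: dict[str, float],
                    pseudocount: float = 1.0) -> dict[str, float]:
    """Fold change of sample_a over sample_b on RPM-normalized values."""
    return {m: (sample_a.get(m, 0.0) + pseudocount) / (sample_b.get(m, 0.0) + pseudocount)
            for m in set(sample_a) | set(sample_b)}

# Invented raw counts for two cell lines (e.g., a metastatic vs a non-metastatic line).
metastatic = {"miR-21-5p": 52_000, "miR-143-3p": 18_000, "other": 430_000}
non_metastatic = {"miR-21-5p": 6_000, "miR-143-3p": 2_500, "other": 491_500}

enrichment = fold_enrichment(rpm(metastatic), rpm(non_metastatic))
print(round(enrichment["miR-21-5p"], 1))  # roughly 8-9 fold in this toy example
```

In practice a pseudocount (here 1 RPM) is added before dividing so that miRNAs absent from one sample do not produce division by zero; published analyses use the same idea with library-specific normalization and statistical testing on top.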
Current Procedural Terminology Codes (CPT Codes) are the standard by which United States medical professionals such as physicians and healthcare providers, as well as medical facilities, insurance companies and other accreditation groups, report and document medical, surgical, anesthesiology, laboratory, radiology, and evaluation and management services. The more than 7,000 five-character CPT Codes are an important part of the billing process. They are used by insurers to help determine the amount of reimbursement the physician or healthcare provider will receive for services rendered. CPT Codes are copyrighted and maintained by the American Medical Association (AMA). Updated annually, these codes fall into three major categories. - Category I – The code range is 00100 to 99499. Each five-digit code has a corresponding description of the procedure or service. - Category II – These are supplemental alphanumeric tracking codes that describe clinical components of evaluation and management or other clinical services. - Category III – These provisional codes are for new and emerging technology, used for the collection of data and the assessment of new procedures and services. CPT Codes and ICD Codes CPT Codes work in conjunction with ICD Codes. ICD-9-CM is a list of codes that correspond to procedures and diagnoses recorded in conjunction with hospital care in the U.S. ICD-10-CM is the system employed by healthcare providers and physicians to code and classify all symptoms, diagnoses and procedures recorded in conjunction with hospital care in the U.S. For example, a patient's recorded symptoms are represented by the ICD code, and the procedure performed for treatment is represented by the CPT Code. When these are given to the payer or insurance company, a complete picture of the patient's medical process is presented. CPT codes are a big help in measuring performance and efficiency as well as tracking important health data. CPT codes help government agencies keep tabs on the value and prevalence of particular procedures, while hospitals may evaluate the efficiency of divisions and individuals in their facility using Current Procedural Terminology Codes. CPT Code Categories Category I concerns procedures and contemporary medical practices performed across the United States. This category comprises the familiar five-digit CPT Codes that identify a service or procedure sanctioned by the FDA and performed by a physician or healthcare professional. This category is broken down into six sections, and they are: - Evaluation and Management (99201-99499) – which includes hospital observation services, office and other outpatient services, consultations, hospital inpatient services, emergency department, critical care services, nursing facility services, custodial care services and so on. - Anesthesiology (00100–01999; 99100–99150) – which includes procedures of the head, neck, thorax, intrathoracic region, spine and spinal column, upper and lower abdomen, obstetrics and more. - Surgery (10000–69990) – which includes general surgery, the integumentary system, musculoskeletal system, respiratory system, cardiovascular system, digestive system, urinary system, eye and reproductive system, to name a few. - Radiology (70000-79999) – including ultrasound, mammography, bone/joint, oncology and nuclear medicine. 
- Pathology and Laboratory (80000–89398) – including organ- or disease-oriented panels, drug testing, therapeutic drug assays, evocative/suppression testing, consultations (clinical pathology), urinalysis, transfusion medicine, microbiology and more. - Medicine (90281–99099; 99151–99199; 99500–99607) – including vaccines, toxoids, psychiatry, biofeedback, dialysis, gastroenterology, ophthalmology, special otorhinolaryngologic services, cardiovascular, noninvasive vascular diagnostic studies, pulmonary, allergy and clinical immunology, endocrinology and more. Category II codes are supplemental tracking codes employed for collecting information regarding the quality of care rendered and for performance measurement. The use of these codes is not mandatory. The breakdown of Category II CPT Codes is: - Composite Measures (0001F-0015F) - Patient Management (0500F-0575F) - Patient History (1000F-1220F) - Physical Examination (2000F-2050F) - Diagnostic/Screening Processes or Results (3006F-3573F) - Therapeutic, Preventive or Other Interventions (4000F-4306F) - Follow-up or Other Outcomes (5005F-5100F) - Patient Safety (6005F-6045F) - Structural Measures (7010F-7025F) Category III is reserved for emerging technologies, with CPT codes in the range 0016T-0207T. These are temporary codes covering developing technologies, procedures and services. These codes identify services that are not generally performed by physicians and other healthcare professionals, may not be approved by the FDA, and may not yet have been tested and proven effective. CPT codes aid researchers in tracking such technologies and services. A medical coder is expected to know this information to be able to find the best possible code for the service or procedure.
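Because Category I codes are grouped purely by numeric range, a coder's first rough pass can be automated with a simple range lookup. The sketch below illustrates the idea in Python; the section boundaries simply follow the ranges listed above, and the table and function names are illustrative assumptions, not part of any official AMA tooling (real coding software uses the full licensed code set rather than coarse ranges).

```python
# Minimal sketch: map a Category I CPT code to its section by numeric range.
# The ranges mirror the section list in the text above.

CATEGORY_I_SECTIONS = [
    (100, 1999, "Anesthesiology"),
    (10000, 69990, "Surgery"),
    (70000, 79999, "Radiology"),
    (80000, 89398, "Pathology and Laboratory"),
    (90281, 99099, "Medicine"),
    (99100, 99150, "Anesthesiology"),
    (99151, 99199, "Medicine"),
    (99201, 99499, "Evaluation and Management"),
    (99500, 99607, "Medicine"),
]

def category_i_section(code: str) -> str:
    """Return the Category I section for a five-digit numeric CPT code."""
    if not (len(code) == 5 and code.isdigit()):
        raise ValueError("Category I CPT codes are five numeric digits")
    value = int(code)
    for low, high, section in CATEGORY_I_SECTIONS:
        if low <= value <= high:
            return section
    raise ValueError(f"Code {code} is outside the Category I ranges listed here")

if __name__ == "__main__":
    print(category_i_section("70553"))  # -> Radiology
    print(category_i_section("99213"))  # -> Evaluation and Management
```

A lookup like this only narrows the search to a section; selecting the exact code still requires the full code descriptions, which is why the coder's knowledge of the categories remains essential.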
Dublin (/ˈdʌblɪn/; Irish: Baile Átha Cliath) is the capital and largest city of Ireland. Dublin is in the province of Leinster on Ireland's east coast, at the mouth of the River Liffey. The city has an urban population of 1,345,402, while the population of the Greater Dublin Area in 2016 was 1,904,806. Founded as a Viking settlement, the Kingdom of Dublin became Ireland's main city after the Norman invasion. The city expanded rapidly from the 17th century and was briefly the second largest city in the British Empire before the Acts of Union in 1800. After the partition of Ireland in 1922, Dublin became the capital of the Irish Free State, later renamed Ireland. Dublin is administered by a city council. The city is listed by the Globalization and World Cities Research Network (GaWC) as a global city, with a ranking of "Alpha", placing it among the top thirty cities in the world. It is a historical and contemporary center of education, the arts, administration, finance and industry. Main article: History of Dublin See also: Timeline of Dublin Although the area of Dublin Bay has been inhabited by humans since prehistoric times, the writings of Ptolemy (the Greco-Roman astronomer and cartographer) in about 140 AD give perhaps the earliest reference to a settlement there. He called the settlement Eblana polis (Greek: Ἔβλανα πόλις). Dublin celebrated its "official" millennium in 1988, meaning that the Irish government recognized 988 AD as the year in which the city was settled, the first settlement that would later become the city of Dublin. The name Dublin comes from the Gaelic word Dublind, early Classical Irish Dubhlind/Duibhlind, from dubh meaning "black, dark" and lind meaning "pool", referring to a dark tidal pool where the River Poddle entered the Liffey on the site of the Castle Garden at the rear of Dublin Castle. In modern Irish the name is Duibhlinn, and Irish rhymes from County Dublin show that in the Leinster Irish of Dublin it was pronounced Duílinn. The original pronunciation is preserved in the names of the city in other languages, such as Old English Difelin, Old Norse Dyflin, modern Icelandic Dyflinn, modern Manx Divlyn and Welsh Dulyn. Other places in Ireland also bear the name Duibhlinn, variously anglicized as Devlin, Divlin and Difflin. Historically, scribes writing in the Gaelic script wrote bh with a dot over the b, rendering Duḃlinn or Duiḃlinn. Those without knowledge of Irish omitted the dot and spelled the name as Dublin. Variations of the name are also found in traditionally Gaelic-speaking areas of Scotland (the Gàidhealtachd, cognate with the Irish Gaeltacht), such as An Linne Dhubh ("the black pool"), which is part of Loch Linnhe. It is now thought that the Viking settlement was preceded by a Christian ecclesiastical settlement called Duibhlinn, from which Dyflin took its name. From the 9th and 10th centuries there were two settlements where the modern city now stands. The Viking settlement of about 841 was known as Dyflin, from the Irish Duibhlinn, while the Gaelic settlement, Áth Cliath ("ford of hurdles"), was further upriver, at today's Father Mathew Bridge (also known as Dublin Bridge), at the bottom of Church Street. Baile Átha Cliath, meaning "town of the hurdled ford", is the common name for the city in modern Irish. Áth Cliath is a place name referring to a fording point of the River Liffey near Father Mathew Bridge. 
Baile Átha Cliath was an early Christian monastery, believed to have been in the area of Aungier Street, currently occupied by Whitefriar Street Carmelite Church. There are other places with the same name, such as Àth Cliath in East Ayrshire, Scotland, which is anglicized as Hurlford. Father Mathew Bridge (also known as Dublin Bridge) is understood to be close to the old "Ford of the Hurdles" (Baile Átha Cliath), the original crossing point on the River Liffey. The subsequent Scandinavian settlement centered on the River Poddle, a tributary of the Liffey, in an area now known as Wood Quay. The Dubhlinn was a small lake used to moor ships; the Poddle connected the lake with the Liffey. This lake was covered over during the early 18th century as the city grew. The Dubhlinn lay where the Castle Garden is now located, opposite the Chester Beatty Library within Dublin Castle. The Táin Bó Cuailgne ("The Cattle Raid of Cooley") refers to Dublind rissa ratter Áth Cliath, meaning "Dublin, which is called Áth Cliath". Dublin was established as a Viking settlement in the 10th century and, despite a number of uprisings by the native Irish, remained largely under Viking control until the Norman invasion of Ireland was launched from Wales in 1169. It was upon the death of Muirchertach Mac Lochlainn in early 1166 that Ruaidrí Ua Conchobair, King of Connacht, proceeded to Dublin and was inaugurated King of Ireland without opposition. He was probably the first undisputed King of all Ireland, and also the only Gaelic one. The King of Leinster, Dermot MacMurrough, after his exile by Ruaidhrí, enlisted the help of Strongbow, the Earl of Pembroke, to conquer Dublin. Following Mac Murrough's death, Strongbow declared himself King of Leinster to gain control of the city. In response to Strongbow's successful invasion, King Henry II of England affirmed his sovereignty by mounting a larger invasion in 1171 and declared himself Lord of Ireland. At this time, the county of the City of Dublin was formed, along with certain liberties adjacent to the city proper. This continued down to 1840, when the barony of Dublin City was separated from the barony of Dublin. Since 2001, both baronies have been redesignated as the city of Dublin. Dublin Castle was the fortified seat of British rule in Ireland until 1922. Dublin Castle, which became the center of Norman power in Ireland, was founded in 1204 as a major defensive work on the orders of King John of England. Following the appointment of the first Mayor of Dublin in 1229, the city expanded and had a population of 8,000 by the end of the 13th century. Dublin flourished as a trading center, despite an attempt by King Robert I of Scotland to capture the city in 1317. It remained a relatively small walled medieval town during the 14th century and was under constant threat from the surrounding native clans. In 1348, the Black Death, a lethal plague that had ravaged Europe, took hold in Dublin and killed thousands over the following decade. Dublin was incorporated into the English crown as the Pale, a narrow strip of English settlement along the eastern coast. The Tudor conquest of Ireland in the 16th century spelled a new era for Dublin, with the city enjoying a renewed importance as the center of administrative rule in Ireland. Determined to make Dublin a Protestant city, Queen Elizabeth I of England established Trinity College in 1592 as an exclusively Protestant university and ordered that the Catholic St. Patrick's and Christ Church cathedrals be converted to Protestant use. 
The city had a population of 21,000 in 1640 before the plague of 1649-1651 wiped out nearly half of the city's residents. But the city prospered again soon afterwards as a result of the woollen and linen trade with England, and reached a population of over 50,000 in 1700. Henrietta Street, developed in the 1720s, is the earliest Georgian street in Dublin. As the city continued to flourish during the 18th century, Georgian Dublin was, for a short period, the second largest city of the British Empire and the fifth largest city in Europe, with a population exceeding 130,000. The vast majority of Dublin's most notable architecture dates from this period, such as the Four Courts and the Custom House. Temple Bar and Grafton Street are two of the few remaining areas that were not affected by the wave of Georgian reconstruction, and they retained their medieval character. Dublin grew even more dramatically during the 18th century, with the construction of many famous districts and buildings, such as Merrion Square, Parliament House and the Royal Exchange. The Wide Streets Commission was established in 1757 at the request of Dublin Corporation to govern architectural standards for the layout of streets, bridges and buildings. In 1759, the founding of the Guinness brewery resulted in a substantial economic gain for the city. For much of the time since its establishment, the brewery was Dublin's largest employer. Late modern and contemporary The GPO on O'Connell Street was at the center of the 1916 Easter Rising. Dublin suffered a period of political and economic decline during the 19th century following the Act of Union in 1800, under which the seat of government was transferred to the Westminster Parliament in London. The city played no major role in the Industrial Revolution, but remained the center of administration and a focal point for most of the island. Ireland had no significant sources of coal, the fuel of the time, and Dublin was not a center of ship manufacturing, the other main driver of industrial development in Britain and Ireland. Belfast developed faster than Dublin during this period on a mixture of international trade, factory-based linen cloth production and shipbuilding. The Easter Rising of 1916, the Irish War of Independence, and the subsequent Irish Civil War resulted in a significant amount of physical destruction in central Dublin. The Government of the Irish Free State rebuilt the city center and located the new parliament, the Oireachtas, in Leinster House. Since the beginning of Norman rule in the 12th century, the city has served as the capital of varying geopolitical entities: the Lordship of Ireland (1171-1541), the Kingdom of Ireland (1541-1800), the island as part of the United Kingdom of Great Britain and Ireland (1801-1922), and the Irish Republic (1919-1922). After the partition of Ireland in 1922, it became the capital of the Irish Free State (1922-1937) and is now the capital of Ireland. One of the memorials commemorating that time is the Garden of Remembrance. Dublin was also a victim of the Troubles in Northern Ireland; although during this 30-year conflict the violence mainly engulfed Northern Ireland, the Provisional IRA drew much support from the Republic, particularly in Dublin. This prompted the loyalist paramilitary group the Ulster Volunteer Force to bomb the city. The most notable atrocity committed by loyalists during this period was the Dublin and Monaghan bombings, in which 34 people died, mainly in Dublin itself. Since 1997, the landscape of Dublin has changed immensely. 
The city was at the forefront of Ireland's rapid economic expansion during the Celtic Tiger period, with enormous private and state development of housing, transport and business. Civic Offices of Dublin City Council. From 1842, the boundaries of the city comprised the baronies of Dublin City and Dublin. In 1930, the boundaries were extended by the Local Government (Dublin) Act. Later, in 1953, the boundaries were once again extended by the Local Government Provisional Order Confirmation Act. Dublin City Council is a unicameral assembly of 63 members elected every five years from local electoral areas. It is headed by the Lord Mayor, who is elected for a yearly term and resides in the Mansion House. Council meetings take place at Dublin City Hall, while most of its administrative activities are based in the Civic Offices at Wood Quay. The party or coalition of parties with a majority of seats assigns committee members, introduces policies and appoints the Lord Mayor. The council passes an annual budget for spending in areas such as housing, traffic management, refuse, drainage and planning. The Dublin City Manager is responsible for the implementation of council decisions. Leinster House on Kildare Street houses the Oireachtas. As the capital city, Dublin is the seat of the national parliament of Ireland, the Oireachtas. It consists of the President of Ireland, Seanad Éireann as the upper house, and Dáil Éireann as the lower house. The President resides in Áras an Uachtaráin in Phoenix Park, while both houses of the Oireachtas meet in Leinster House, a former ducal palace on Kildare Street. It has been the home of the Irish parliament since the creation of the Irish Free State in 1922. The old Irish Houses of Parliament of the Kingdom of Ireland are located in College Green. Government Buildings house the Office of the Taoiseach (Prime Minister), the Council Chamber, the Department of Finance and the Office of the Attorney General. They consist of a main building (completed in 1911) with two wings (completed in 1921), designed by Thomas Manley Dean and Sir Aston Webb as the Royal College of Science. The first Dáil originally met in the Mansion House in 1919. The Irish Free State government took over the two wings to serve as a temporary home for some ministries, while the central building became the College of Technology until 1989. Although both it and Leinster House were intended to be temporary, they then became the permanent houses of parliament. For elections to Dáil Éireann, the city is divided into five constituencies: Dublin Central (3 seats), Dublin Bay North (5 seats), Dublin North-West (3 seats), Dublin South-Central (4 seats) and Dublin Bay South (4 seats). Nineteen TDs are elected in total. In the past, Dublin was regarded as a stronghold of Fianna Fáil, but after the Irish local elections in 2004 the party was overshadowed by the center-left Labour Party. In the 2011 general election, the Dublin region elected 18 Labour Party, 17 Fine Gael, four Sinn Féin, two Socialist Party, two People Before Profit Alliance and three independent TDs. Fianna Fáil lost all but one of its TDs in the region. Satellite image showing the River Liffey entering the Irish Sea and dividing Dublin into the Northside and the Southside. Dublin is located at the mouth of the River Liffey and covers an area of approximately 44 square miles (115 km2) in east-central Ireland. It is bordered by a low mountain range to the south and surrounded by flat farmland to the north and west. The Liffey divides the city in two, between the Northside and the Southside. 
Each of these is further divided by two smaller rivers – the River Tolka, running south-east into Dublin Bay, and the River Dodder, running north-east to the mouth of the Liffey. Two further waterways – the Grand Canal on the Southside and the Royal Canal on the Northside – ring the inner city on their way from the west and the River Shannon. The River Liffey bends at Leixlip from a north-easterly route to a predominantly eastward direction, and this point also marks the transition from agricultural land use to urban development. A north-south division has traditionally existed, with the River Liffey as the divider. The Northside was widely seen as working class, while the Southside was seen as middle to upper middle class. The divide is punctuated by examples of Dublin "sub-culture" stereotypes, with upper-middle-class constituents seen as tending towards an accent and demeanor synonymous with the Southside, and working-class Dubliners seen as tending towards the characteristics associated with the Northside and inner-city areas. Dublin's economic divide was also formerly east-west as well as north-south. There were also clear divides between the coastal suburbs in the east of the city, including those on the Northside, and the newer developments further west. As with much of the rest of north-western Europe, Dublin experiences a maritime climate (Cfb), with cool summers, mild winters, and a lack of extreme temperatures. The average maximum January temperature is 8.8 °C (48 °F), while the average maximum July temperature is 20.2 °C (68 °F). On average, the sunniest months are May and June, while the wettest month is October, with 76 mm (3 inches) of rain, and the driest month is February, with 46 mm (2 inches). Rainfall is evenly distributed throughout the year. Dublin's sheltered location on the east coast makes it the driest place in Ireland, receiving only about half the rainfall of the west coast. Ringsend in the southern part of the city records the lowest rainfall in the country, with an average annual rainfall of 683 mm (27 inches); the average annual rainfall in the city center is 714 mm (28 inches). The main precipitation in winter is rain, although snow showers do occur between November and March. Hail is more common than snow. The city experiences long summer days and short winter days. Strong Atlantic winds are most common in autumn. These winds can affect Dublin, but because of its easterly location it is the least affected compared with other parts of the country. In winter, however, easterly winds make the city colder and more likely to see snow. Dublin is divided into several neighborhoods or districts. The oldest part of the city includes Dublin Castle, Christ Church and St Patrick's Cathedral, along with the old city walls. Once part of the Dubh Linn settlement, this area became home to the Vikings in Dublin. St Stephen's Green Dublin is known for its Georgian architecture, and some of the world's finest Georgian buildings are here. The Georgian quarter stretches from St Stephen's Green and Trinity College up to the canal. Merrion Square, St. Stephen's Green and Fitzwilliam Square are examples of this type of architecture. This area is the Dublin Docklands, containing the "Silicon Docks", Dublin's tech quarter located in the Grand Canal Dock area. Global giants such as Google, Facebook, Twitter and Accenture are based there. 
It used to be a deserted part of town, but has undergone revitalization with the development of offices and apartments. Temple Bar is located at the heart of Dublin's social and cultural life. It was once derelict but was revived in the 1990s. The newest district was created in 2012. It covers the area from South William Street to George's Street, and from Lower Stephen Street to Exchequer Street. It is a hub for design, creativity and innovation. Dublin skyline from the Guinness Storehouse Further information: List of public art in Dublin Dublin has many landmarks and monuments dating back hundreds of years. One of the oldest is Dublin Castle, which was first founded as a major defensive work on the orders of King John of England in 1204, shortly after the Norman invasion of Ireland in 1169, when it was commanded that a castle be built with strong walls and good ditches for the defense of the city, the administration of justice, and the protection of the King's treasure. Largely complete by 1230, the castle was of typical Norman courtyard design, with a central square without a keep, bounded on all sides by high defensive walls and protected at each corner by a round tower. Positioned to the south-east of Norman Dublin, the castle formed one corner of the outer perimeter of the city, with the River Poddle serving as a natural means of defense. The Spire of Dublin rises behind the statue of Jim Larkin. One of Dublin's newest monuments is the Spire of Dublin, officially entitled the "Monument of Light". It is a 121.2-meter (398 ft) conical spire made of stainless steel, located on O'Connell Street. It replaces Nelson's Pillar and is intended to mark Dublin's place in the 21st century. The spire was designed by Ian Ritchie Architects, who sought an "elegant and dynamic simplicity bridging art and technology". During the day it maintains its steel look, but at dusk the monument appears to merge into the sky. The base of the monument is lit and the top is illuminated to provide a beacon in the night sky over the city. Many people visit Trinity College to see the Book of Kells in the library there. The Book of Kells is an illuminated manuscript created by Irish monks circa 800 AD. The Ha'penny Bridge, an old iron footbridge over the River Liffey, is one of the most photographed sights in Dublin and is considered one of Dublin's most famous landmarks. Other popular sights and monuments include the Mansion House, the Anna Livia monument, the Molly Malone statue, Christ Church Cathedral, St Patrick's Cathedral, Saint Francis Xavier Church on Upper Gardiner Street near Mountjoy Square, the Custom House and Áras an Uachtaráin. The Poolbeg Towers are also iconic features of Dublin and are visible from many places around the city. Dublin has more green space per square kilometer than any other European capital, with 97% of city residents living within 300 meters of a park area. The City Council provides 2.96 hectares (7.3 acres) of public green space per 1,000 people and 255 playing fields. The Council also plants about 5,000 trees annually and manages over 1,500 hectares (3,700 acres) of parks. The Molly Malone Statue, Grafton Street. There are many parks around the city, including Phoenix Park, Herbert Park and St. Stephen's Green. Phoenix Park is about 3 km (2 miles) west of the city center, north of the River Liffey. Its 16 km (10 mi) perimeter wall encloses 707 hectares (1,750 acres), making it one of the largest walled city parks in Europe. 
It includes large areas of grassland and tree-lined avenues, and since the 17th century has been home to a herd of wild fallow deer. The residence of the President of Ireland (Áras an Uachtaráin), built in 1751, is located in the park. The park is also home to Dublin Zoo, the official residence of the US Ambassador, and Ashtown Castle. Concerts have also been performed in the park by many singers and musicians. St. Stephen's Green is adjacent to one of Dublin's main shopping streets, Grafton Street, and to a shopping center named after it, while the surrounding streets hold the offices of a number of public bodies and the city terminus of one of Dublin's Luas tram lines. Saint Anne's Park is a public park and recreational facility shared between Raheny and Clontarf, both suburbs on the north side of Dublin. The park, the second largest urban park in Dublin, is part of a former two-square-kilometer (0.8 sq mi; 500-acre) estate assembled by members of the Guinness family, beginning with Benjamin Lee Guinness in 1835 (the largest municipal park is in nearby North Bull Island, also shared between Clontarf and Raheny). Main article: Economy of Dublin Grafton Street is the main shopping street in Dublin city center The Dublin region is the economic center of Ireland, and was at the forefront of the country's rapid economic expansion during the Celtic Tiger period. In 2009, Dublin was listed as the fourth richest city in the world by purchasing power and the tenth richest by personal income. According to Mercer's 2011 Worldwide Cost of Living Survey, Dublin is the 13th most expensive city in the European Union (down from 10th in 2010) and the 58th most expensive place to live in the world (down from 42nd in 2010). As of 2005, there were about 800,000 people in employment in the Greater Dublin Area, of whom about 600,000 were employed in the service sector and 200,000 in the industrial sector. Many of Dublin's traditional industries, such as food processing, textile manufacturing, brewing and distilling, have gradually declined, although Guinness has been brewed at the St. James's Gate Brewery since 1759. Economic improvements in the 1990s attracted a large number of global pharmaceutical and information and communications technology companies to the city and the Greater Dublin Area. Companies such as Microsoft, Google, Amazon, eBay, PayPal, Yahoo, Facebook, Twitter, Accenture and Pfizer now have European headquarters and/or operational bases in the city. Ulster Bank on George's Quay Plaza. Financial services have also become important to the city since the establishment of Dublin's International Financial Services Centre in 1987, which is globally recognized as a leading location for a range of internationally traded financial services. More than 500 operations are approved to trade under the IFSC program. The center is host to half of the world's top 50 banks and half of the top 20 insurance companies. Many international companies have established major headquarters in the city, such as Citibank and Commerzbank. The Irish Stock Exchange (ISEQ), the Internet Neutral Exchange (INEX) and the Irish Enterprise Exchange (IEX) are also located in Dublin. The economic boom has led to a sharp rise in construction, with major redevelopment projects in the Dublin Docklands and Spencer Dock. Completed projects include the Convention Centre, the 3Arena, the Bord Gáis Energy Theatre and Silicon Docks. Main article: Transport in Dublin The M50 motorway around Dublin. The road network in Ireland is mainly focused on Dublin. 
The M50 motorway, a semi-ring road that runs around the south, west and north of the city, connects important national primary routes to the rest of the country. In 2008 the West-Link toll bridge was replaced by the eFlow barrier-free tolling system, with a three-tiered charging scheme based on electronic tags and vehicle pre-registration. The toll is currently €2.10 for vehicles with a prepaid tag, €2.60 for vehicles whose number plates have been registered with eFlow, and €3.10 for unregistered vehicles. The first phase of a proposed eastern bypass of the city is the Dublin Port Tunnel, which opened in 2006 and mainly caters for heavy vehicles. The tunnel connects Dublin Port with the M1 motorway near Dublin Airport. The city is also surrounded by an inner and an outer orbital route. The inner orbital route runs roughly around the heart of the Georgian city, and the outer orbital route runs mainly along the natural circle formed by Dublin's two canals, the Grand Canal and the Royal Canal, as well as the North and South Circular Roads. Dublin is served by an extensive network of nearly 200 bus routes serving all parts of the city and suburbs. Most of these are operated by Dublin Bus, but a number of smaller companies also operate. Fares are generally calculated on a stage system based on the distance traveled. There are several different levels of fares that apply to most services. A "Real Time Passenger Information" system was introduced at Dublin Bus stops in 2012. Electronic display signs convey the predicted time of the next bus's arrival based on its GPS-determined position. The National Transport Authority is responsible for the integration of bus and rail services in Dublin and has been involved in introducing a prepaid smart card, called the Leap Card, which can be used on Dublin's public transport. Railway and tram Luas trams at the Tallaght terminus. Heuston and Connolly stations are the two main railway stations in Dublin. Operated by Iarnród Éireann, the Dublin Suburban Rail network consists of five rail lines serving the Greater Dublin Area and commuter towns such as Drogheda and Dundalk in County Louth. One of these lines is the electrified Dublin Area Rapid Transit (DART) line, which runs mainly along the coast of Dublin, with a total of 31 stations, from Malahide and Howth southwards as far as Greystones in County Wicklow. Commuter trains operate on the other four lines using Irish Rail diesel multiple units. In 2013, passenger numbers on the DART and Dublin Suburban lines were 16 million and 11.7 million, respectively (about 75% of all Irish Rail passengers). The Luas is a light rail system operated by Veolia Transport; it has been running since 2004 and now carries over 30 million passengers per year. The network consists of two tram lines: the Red Line connects the Docklands and the city center with the south-western suburbs, while the Green Line connects the city center with the suburbs to the south of the city; together they comprise a total of 54 stations and 38.2 km (23.7 mi) of track. Construction of a 6 km extension of the Green Line, bringing it to the north of the city, began in June 2013. Proposed projects such as the Dublin Metro and DART Underground are also being considered. Rail and ferry Dublin Connolly is connected by bus to Dublin Port and the ferries operated by Irish Ferries and Stena Line to Holyhead, connecting with trains on the North Wales Coast Line to Chester, Crewe and London Euston. 
Dublin Port can be reached from Dublin Connolly on foot via Amiens Street and Store Street, or by Luas to Busáras, from which Dublin Bus operates a service to the ferry terminal; alternatives are Dublin Bus route 53 or a taxi. Dublin Airport is operated by the Dublin Airport Authority and is located north of Dublin city in the administrative county of Fingal. It is the headquarters of the Irish airline Aer Lingus, the budget airline Ryanair and the regional airlines Stobart Air and CityJet. The airport offers a comprehensive short- and medium-haul network, as well as domestic services to many regional airports in Ireland. There are also extensive long-haul services to the United States, Canada and the Middle East. Dublin Airport is the busiest airport in Ireland, followed by Cork and Shannon. Construction of a second terminal began in 2007, and the terminal was inaugurated on 19 November 2010. Dublin Airport is currently ranked as the 18th busiest airport in Europe, recording over 25 million passengers in 2015, and has shown very strong growth in passenger numbers in recent years, especially on long-haul routes. Dublin is now ranked 6th in Europe as a hub for transatlantic passengers, with 158 flights a week to the United States, ahead of much larger airports such as Istanbul and Rome. Dublin City Council began installing cycle paths and routes throughout the city in the 1990s, and as of 2012 the city had over 200 kilometers (120 mi) of dedicated on- and off-road routes for cyclists. In 2011, the city was ranked 9th among major world cities on the Copenhagenize Index of bicycle-friendly cities. Dublin Bikes terminal in the Docklands. Dublin Bikes is a self-service bicycle rental scheme which has been operating in Dublin since 2009. Sponsored by JCDecaux, the system consists of 550 French-made unisex bicycles stationed at 44 terminals throughout the city center. Users must take out a subscription, either an annual long-term hire card costing €20 or a three-day ticket costing €2. The first 30 minutes of use are free; after that, service charges apply according to the extra length of use. Dublin Bikes now has over 58,000 subscribers, and there are plans to drastically expand the service across the city and its suburbs, providing up to 5,000 bicycles and 300 terminals. The 2011 census showed that 5.9 percent of commuters in Dublin cycled. A 2013 report from Dublin City Council on traffic flows crossing the canals into and out of the city found that almost 10% of all traffic consisted of cyclists, representing an increase of 14.1% compared with 2012 and of 87.2% compared with 2006 levels, attributed to measures such as the Dublin Bikes rental scheme, the provision of cycle lanes, information campaigns promoting cycling and the introduction of a 30 km/h city-center speed limit. Dublin is the primary center of education in Ireland, home to three universities, the Dublin Institute of Technology and many other higher-education institutions. There are 20 third-level institutes in the city and in surrounding towns and suburbs. Dublin was European City of Science in 2012. Location of TCD in central Dublin The University of Dublin is the oldest university in Ireland, dating from the 16th century, and is located in the city center. Its sole constituent college, Trinity College (TCD), was established by royal charter in 1592 under Elizabeth I and was closed to Roman Catholics until Catholic Emancipation. The Catholic hierarchy then banned Roman Catholics from attending it until 1970. 
It is located in the city center, on College Green, and has 15,000 students. Dublin Institute of Technology on Cathal Brugha Street. Location of the DIT Grangegorman campus in central Dublin With a continuous history dating back to 1887, the Dublin Institute of Technology (DIT) is Dublin's and Ireland's institute of technical education and research, with over 23,000 students. The Dublin Institute of Technology has specialized in engineering, architecture, science, health, journalism, digital media, hospitality and business, but also offers many arts, design, music and humanities programs. DIT currently has campuses, buildings and research facilities at several sites, including large buildings on Kevin Street, Aungier Street, Bolton Street and Cathal Brugha Street in Dublin city center, and it has begun consolidating onto a new city-center campus at Grangegorman. Dublin City University (DCU), formerly known as the National Institute for Higher Education (NIHE), specializes in finance, technology, science, communications, languages and primary education. It has about 16,000 students, and its main campus, the Glasnevin Campus, is located about 7 km (4 mi) from the city center in the northern suburbs. It has two campuses on the north side of the river, the DCU Glasnevin Campus and the DCU Drumcondra Campus. The Drumcondra Campus includes students formerly of St. Patrick's College of Education and the nearby Mater Dei Institute and, from the beginning of the 2016/17 academic year, students from the Church of Ireland College of Education. These colleges were fully incorporated into DCU at the start of the 2016/17 academic year. The Royal College of Surgeons in Ireland (RCSI) is a medical school and a recognized college of the NUI; it is located at St. Stephen's Green in the city center. The Institute of European Affairs is also in Dublin. Dublin Business School (DBS) is Ireland's largest private third-level institution, with over 9,000 students, and is located on Aungier Street. The National College of Art and Design (NCAD) supports training and research in art, design and media. The National College of Ireland (NCI) is also based in Dublin. The Economic and Social Research Institute, a social science research institute, is based on Sir John Rogerson's Quay, Dublin 2. The National University of Ireland (NUI) has its headquarters in Dublin, which is also the location of its associated constituent university, University College Dublin (UCD), which has over 30,000 students. UCD's Belfield campus is about 5 km (3 mi) from the city center in the south-eastern suburbs. The National University of Ireland, Maynooth, another constituent of the NUI, is in neighboring County Kildare, about 25 km (16 mi) north-west of the city center. The Irish public administration and management training center based in Dublin, the Institute of Public Administration, provides a range of undergraduate and postgraduate awards through the National University of Ireland and, in some cases, Queen's University Belfast. There are also smaller specialized colleges, including Griffith College Dublin, the Gaiety School of Acting and the New Media Technology College. Outside the city, the towns of Tallaght (South Dublin) and Dún Laoghaire (Dún Laoghaire-Rathdown) have regional institutes: the Institute of Technology, Tallaght offers full- and part-time courses in a wide range of technical subjects, and the Dún Laoghaire Institute of Art, Design and Technology (IADT) supports training and research in art, design, business, psychology and media technology. 
The western suburb of Blanchardstown offers childcare and sports management courses, together with language and technical subjects, at the Institute of Technology, Blanchardstown.

The city of Dublin is the area administered by Dublin City Council, but the term "Dublin" usually refers to the contiguous urban area that includes parts of the neighboring municipalities of Dún Laoghaire-Rathdown, Fingal and South Dublin. Together the four areas form the traditional County Dublin. This area is sometimes called the Dublin Region. The population of the administrative area controlled by the City Council was 553,165 in the 2016 census, while the population of the urban area was 1,345,402. The County Dublin population was 1,273,069, and that of the Greater Dublin Area 1,904,806. The area's population is growing rapidly, and the Central Statistics Office estimates it will reach 2.1 million by 2020. Since the late 1990s, Dublin has experienced substantial net immigration, with the largest numbers coming from Europe, especially from Great Britain, Poland and Lithuania. There are also a large number of immigrants from outside Europe, especially from India, Pakistan, China and Nigeria. Dublin is home to a greater proportion of new arrivals than any other part of the country; sixty percent of Ireland's Asian population lives in Dublin. Over 15% of Dublin's population was foreign-born in 2006. The capital attracts the largest share of non-Catholic immigrants from other countries. Increasing secularization in Ireland has led to a decline in regular Catholic Church attendance in Dublin, from over 90 percent in the mid-1970s down to 14 percent in a 2011 survey.

Dublin has a world-famous literary history, having produced many prominent literary figures, including Nobel laureates William Butler Yeats, George Bernard Shaw and Samuel Beckett. Other influential writers and playwrights include Oscar Wilde, Jonathan Swift and the creator of Dracula, Bram Stoker. It is undoubtedly best known as the setting of the greatest works of James Joyce, including Ulysses, which takes place in Dublin and is full of topical detail. Dubliners is a collection of short stories by Joyce about incidents and characters typical of the city during the early 20th century. Other famous writers include J.M. Synge, Seán O'Casey, Brendan Behan, Maeve Binchy and Roddy Doyle. Ireland's biggest libraries and literary museums are located in Dublin, including the National Print Museum of Ireland and the National Library of Ireland. In July 2010, Dublin was named a UNESCO City of Literature, joining Edinburgh, Melbourne and Iowa City in holding the permanent title.

There are several theaters in the city center, and world-famous actors have emerged from the Dublin theatrical scene, including Noel Purcell, Sir Michael Gambon, Brendan Gleeson, Stephen Rea, Colin Farrell, Colm Meaney and Gabriel Byrne. The best-known theaters include the Gaiety, the Abbey, the Olympia, the Gate and the Grand Canal. The Gaiety specializes in musical and operatic productions and is popular for opening its doors after the evening theater production to host a variety of live music, dance and film. The Abbey was founded in 1904 by a group that included Yeats, with the aim of promoting indigenous literary talent. It went on to provide a breakthrough for some of the city's most famous writers, such as Synge, Yeats himself and George Bernard Shaw. The Gate was founded in 1928 to promote European and American avant-garde works.
The Grand Canal Theatre is a new 2,111-capacity theater that opened in March 2010 at Grand Canal Dock. Aside from being the focus of the country's literature and theater, Dublin is also the focus of much of Irish art and the Irish artistic scene. The Book of Kells, a world-famous manuscript produced by Celtic monks around AD 800 and an example of Insular art, is on display in Trinity College. The Chester Beatty Library houses the famous collection of manuscripts, miniature paintings, prints, drawings, rare books and decorative arts assembled by the American mining millionaire (and honorary Irish citizen) Sir Alfred Chester Beatty (1875-1968). The collections date from 2700 BC onwards and are drawn from Asia, the Middle East, North Africa and Europe. In addition, public art galleries are located throughout the city, including the Irish Museum of Modern Art, the National Gallery, the Hugh Lane Municipal Gallery, the Douglas Hyde Gallery, the Project Arts Centre and the Royal Hibernian Academy. In recent years, Dublin has become host to a thriving contemporary art scene. Some of the leading private galleries include Green on Red Gallery, Kerlin Gallery, Kevin Kavanagh Gallery and Mother's Tankstation, each of which focuses on facilitating innovative, challenging and engaging contemporary visual art practices. Three branches of the National Museum of Ireland are located in Dublin: Archaeology on Kildare Street, Decorative Arts and History at Collins Barracks, and Natural History on Merrion Street. The same area is also home to many smaller museums, such as Number 29 on Fitzwilliam Street and the Little Museum of Dublin on St Stephen's Green. Dublin is home to the National College of Art and Design, which dates from 1746, and the Dublin Institute of Design, which was founded in 1991. Dublinia is a living history attraction showcasing the Viking and medieval history of the city. Dublin has long been a city with a strong underground arts scene. Temple Bar was home to many artists in the 1980s, and venues such as the Project Arts Centre were hubs for new work and exhibitions. The Guardian noted that Dublin's independent and underground arts flourished during the recession around 2010. Dublin also has many acclaimed dramatic, musical and operatic companies, including Festival Productions, Lyric Opera Productions, the Pioneers' Musical & Dramatic Society, the Glasnevin Musical Society, Second Age Theatre Company, Opera Theatre Company and Opera Ireland. Ireland is known for its love of baroque music, which is highly acclaimed at Trinity College. Dublin was nominated to be World Design Capital 2014; Prime Minister Enda Kenny was quoted as saying that Dublin "would be a perfect candidate to host the World Design Capital 2014".

Dublin has a vibrant nightlife and is said to be one of Europe's most youthful cities, with an estimated 50% of its citizens younger than 25. There are many pubs across the city center, with the area around St Stephen's Green and Grafton Street, especially Harcourt Street, Camden Street, Wexford Street and Leeson Street, containing the most popular nightclubs and pubs. The best-known area for nightlife is Temple Bar, south of the River Liffey. The area has become popular with tourists, including stag and hen parties from the UK. It was developed as Dublin's cultural quarter and retains this spirit as a center for small arts productions, photography and artists' studios, and in the form of street performers and small music venues. However, it has been criticized as expensive, fake and dirty by Lonely Planet.
In 2014, Temple Bar was listed by the Huffington Post as one of the ten most disappointing destinations in the world. The areas around Leeson Street, Harcourt Street, South William Street and Camden Street/George's Street are popular nightlife venues for locals. Live music is played on streets and in venues throughout Dublin, and the city has produced several musicians and groups of international success, including The Dubliners, Thin Lizzy, The Boomtown Rats, U2, The Script, Sinéad O'Connor, Boyzone, Kodaline, Westlife and Jedward. The two most famous cinemas in the city center, the Savoy Cinema and the Cineworld Cinema, are both north of the Liffey. Alternative and special-interest cinema can be found at the Irish Film Institute in Temple Bar, at the cinema on D'Olier Street and at the Lighthouse Cinema in Smithfield, while large modern multiscreen cinemas are spread across suburban Dublin. The 3Arena, located in the Dublin Docklands, has hosted many world-famous performers.

Dublin is a popular shopping destination for both locals and tourists. The city has many shopping areas, especially around Grafton Street and Henry Street. The center is also the site of major department stores, notably Arnotts, Brown Thomas and Clerys (until June 2015, when it closed). The city retains a thriving market culture, despite new retail developments and the loss of some traditional markets. Among several historic locations, Moore Street remains one of the city's oldest trading districts. There has also been significant growth in local farmers' markets and other markets. In 2007, the Dublin Food Co-op moved to a larger warehouse in The Liberties area, where it hosts many markets and local events. Suburban Dublin has several modern retail centers, including Dundrum Town Centre, the Blanchardstown Centre, the Square in Tallaght, Liffey Valley Shopping Centre in Clondalkin, the Omni Shopping Centre in Santry, Nutgrove Shopping Centre in Rathfarnham and the Pavilions Shopping Centre in Swords.

Dublin is the center of both media and communications in Ireland, with many newspapers, radio stations, television stations and telecommunications companies based there. RTÉ is Ireland's national public broadcaster and has its headquarters in Donnybrook. Fair City is RTÉ's soap opera, set in the fictional Dublin suburb of Carrigstown. TV3 Media, UTV Ireland, Setanta Sports, MTV and Sky News Ireland are also based in the city. The headquarters of An Post and of the telecommunications company Eircom, as well as of the mobile operators Meteor, Vodafone and 3, are all there. Dublin is also the headquarters of important national newspapers such as The Irish Times and the Irish Independent, as well as local newspapers such as The Evening Herald. Besides being home to RTÉ Radio, Dublin also hosts the national radio networks Today FM and Newstalk, and many local stations. Commercial radio stations based in the city include 4FM (94.9 MHz), Dublin's 98FM (98.1 MHz), Radio Nova 100FM (100.3 MHz), Q102 (102.2 MHz), SPIN 1038 (103.8 MHz), FM104 (104.4 MHz), TXFM (105.2 MHz) and Sunshine 106.8 (106.8 MHz). There are also many community and special-interest stations, including Dublin City FM (103.2 MHz), Dublin South FM (93.9 MHz), Liffey Sound FM (96.4 MHz), Near FM (90.3 MHz), Phoenix FM (92.5 MHz), Raidió na Life (106.4 MHz) and West Dublin Access Radio (96.0 MHz).

Croke Park is the largest sports stadium in Ireland. The headquarters of the Gaelic Athletic Association, it has a capacity of 84,500.
It is the fourth largest stadium in Europe after the Nou Camp in Barcelona, Wembley Stadium in London and the Santiago Bernabéu Stadium in Madrid. It hosts the premier Gaelic football and hurling games, international rules football and, irregularly, other sporting and non-sporting events, including concerts. During the redevelopment of Lansdowne Road it played host to the Irish rugby union team and the Ireland national football team, and it hosted the 2008-09 Heineken Cup rugby semi-final between Munster and Leinster, which set a world record attendance for a club rugby match. The Dublin GAA team plays most of its home league hurling games at Parnell Park. The IRFU's Lansdowne Road Stadium was built in 1874. This was the venue for home games of both the Irish rugby union team and the Ireland national football team. A joint venture between the Irish Rugby Football Union, the FAI and the government saw it converted into the new state-of-the-art 50,000-seat Aviva Stadium, which opened in May 2010. The Aviva Stadium hosted the 2011 UEFA Europa League Final. The rugby union team Leinster Rugby play their competitive home matches at the RDS and the Aviva Stadium, while Donnybrook Stadium hosts their friendlies and A games, Ireland A and women's matches, Leinster Schools and Youths games, and the home club games of the All Ireland League clubs Old Wesley and Bective Rangers. Dublin is home to 13 of the leading rugby union clubs in Ireland, including five of the 10 sides in the top division, 1A.

County Dublin is home to six League of Ireland association football clubs: Bohemian FC, Shamrock Rovers, St Patrick's Athletic, University College Dublin, Shelbourne and the newly elected side Cabinteely. The current FAI Cup champions are St Patrick's Athletic. The first Irish side to reach the group stage of a European competition (the 2011-12 UEFA Europa League group stage) was Shamrock Rovers, who play at Tallaght Stadium in South Dublin. Bohemian FC play at Dalymount Park, the oldest football stadium in the country, which played host to the Ireland national football team from 1904 to 1990. St Patrick's Athletic play at Richmond Park, University College Dublin play their home games at the UCD Bowl in Dún Laoghaire-Rathdown, while Shelbourne are based at Tolka Park. Cabinteely will play at Stradbrook Road. Tolka Park, Dalymount Park, the UCD Bowl and Tallaght Stadium, along with the Carlisle Grounds in Bray, hosted all Group 3 games in the intermediate round of the 2011 UEFA Regions' Cup.

The Dublin Marathon has been run since 1980 on the last Monday of October. The Mini Marathon has been run since 1983 on the first Monday in June, which is also a public holiday in Ireland; it is said to be the largest all-female event of its kind in the world. The Dublin area has several racetracks, including Shelbourne Park and Leopardstown. The Dublin Horse Show takes place at the RDS, which hosted the Show Jumping World Cup in 1982. The national boxing arena is located at the National Stadium on the South Circular Road. The National Basketball Arena, located in Tallaght, is home to the Irish basketball team, is the venue for the basketball league finals and has also hosted boxing and wrestling events. The National Aquatic Centre in Blanchardstown is Ireland's largest indoor water leisure facility. Dublin has two ODI cricket grounds, at Castle Avenue in Clontarf and at Malahide Cricket Club, while College Park has Test status and played host to Ireland's only Test cricket match so far, a women's match against Pakistan in 2000.
There are also Gaelic handball, hockey and athletics arenas, most notably Morton Stadium in Santry, which held the athletics events of the 2003 Special Olympics.

There are 10,469 students in the Dublin region attending 31 gaelscoileanna (Irish-language primary schools) and 8 gaelcholáistí (Irish-language secondary schools). Dublin has the highest number of Irish-medium schools in the country. There may also be up to another 10,000 Gaeltacht Irish speakers living in Dublin. Two Irish-language radio stations, Raidió na Life and RTÉ Raidió na Gaeltachta, have studios in the city, and the online and DAB station Raidió Rí-Rá also broadcasts from studios in the city. Many other radio stations in the city broadcast at least one hour of Irish-language programming per week. Many Irish-language agencies are also located in the capital. Conradh na Gaeilge offers language courses, has a bookshop and is a regular meeting place for various groups. The nearest Gaeltacht to Dublin is the County Meath Gaeltacht of Ráth Cairn and Baile Ghib, 55 kilometers (34 mi) away.

Dublin is twinned with cities in the United States, the United Kingdom, Spain and China, and is also in talks to twin with Rio de Janeiro and the Mexican city of Guadalajara.
Recommendations for Abdominal Aortic Aneurysm Screening Ruptured abdominal aortic aneurysm (AAA) ranks as the 15th leading cause of death in the United States and the 10th leading cause of death in men older than 55 years. Abdominal aortic aneurysm screening has shown a measurable and significant reduction in the overall rate of aneurysm-related death. In this article, we'll review the U.S. Preventive Services Task Force's (USPSTF) recently published recommendations for abdominal aortic aneurysm screening and how to code for this potentially life-saving test. What Is an Abdominal Aortic Aneurysm? The aorta is the largest artery in the body. It carries oxygenated blood from the heart through the chest and torso to the rest of the body. An aneurysm is an abnormal enlargement of part of a blood vessel. Thus, an abdominal aortic aneurysm is a balloon-like bulge in the portion of the aorta that runs through the abdomen. A ruptured AAA can cause life-threatening bleeding. In an adult, the abdominal aorta is typically about two centimeters in diameter. The definition of AAA is a focal dilation of the abdominal aorta such that the diameter is greater than 3 cm or more than 50 percent larger than normal. Who's at Risk for AAA? A number of factors can play a role in the development of an aortic aneurysm, including: - Atherosclerosis (hardening of the arteries) – occurs when fat and other substances build up on the lining of a blood vessel. - Hypertension – High blood pressure can damage and weaken the walls of the aorta. - Blood vessel diseases – Conditions that cause the blood vessels to become inflamed. - Infection of the aorta – Rarely, bacterial or fungal infection causes AAA. Risk factors for AAA include being male, older, a smoker or former smoker, and having a first-degree relative with AAA. Other risk factors include a history of other vascular aneurysms, coronary artery disease, cerebrovascular disease, and hypercholesterolemia. Smoking is the strongest predictor of AAA prevalence, growth, and rupture rates. There is a dose-response relationship, as greater smoking exposure is associated with an increased risk for AAA. Most aortic aneurysms do not cause symptoms until they rupture, which is why they are so dangerous. AAAs progressively dilate over time. The biggest concern is that an AAA can rupture and cause significant internal bleeding, which can be fatal. The larger an AAA is, the higher the chance it will rupture. A ruptured AAA is a surgical emergency. Although the risk for rupture varies greatly by aneurysm size, the associated risk for death with rupture is as high as 81 percent. This is why it is imperative to screen those at risk, and once an AAA is diagnosed, its size should be monitored periodically. Large AAAs should be surgically repaired before they rupture. Screening for AAA and Strength of Recommendations The primary way of screening for AAA is with an abdominal ultrasound. This screening test is easy to perform, noninvasive, does not involve radiation, and is highly accurate in detecting AAA. The potential benefit of screening for AAA is detecting and repairing it before rupture, which requires emergency surgery and has a high mortality rate. The only potential harm of screening is related to the risks of surgical repair, such as bleeding complications and death. The U.S. Preventive Services Task Force recommendation applies to adults aged 50 years or older who do not have any signs or symptoms of AAA. Early detection of AAA can save lives.
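Before moving on to the recommendations, here is a minimal sketch that makes the size definition above concrete. The function name and the 2 cm default are illustrative assumptions based on the figures in this article; this is not a clinical decision tool.

```python
def meets_aaa_definition(diameter_cm: float, normal_diameter_cm: float = 2.0) -> bool:
    """Apply the definition given above: a focal dilation of the abdominal
    aorta with a diameter greater than 3 cm, or more than 50% larger than
    the normal diameter (about 2 cm in a typical adult)."""
    return diameter_cm > 3.0 or diameter_cm > 1.5 * normal_diameter_cm

print(meets_aaa_definition(3.4))  # True: exceeds the 3 cm threshold
print(meets_aaa_definition(2.4))  # False: within normal limits by both criteria
```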
Based on current evidence, the USPSTF concludes with moderate certainty that screening for AAA in men aged 65 to 75 years who have ever smoked is of moderate net benefit, even if they have no symptoms. For men aged 65 to 75 years who have never smoked, the USPSTF concludes with moderate certainty that screening is of small net benefit and should be offered selectively based on medical history and risk factors. There is sufficient evidence that there is no net benefit of screening women who have never smoked and have no family history of AAA. For women aged 65 to 75 years who have ever smoked or have a family history of AAA, there is not enough evidence to adequately assess the balance of benefits and harms of screening for AAA. Coding AAA Screening Medicare covers a one-time AAA screening for beneficiaries with certain risk factors for AAA who have received a referral from their provider. There is no deductible or coinsurance/co-payment for the AAA ultrasound screening test. A patient is considered at risk if they have a family history of abdominal aortic aneurysms, or they are a man aged 65 to 75 who has smoked at least 100 cigarettes in his lifetime. When filing claims for this screening test, use the following codes to ensure proper billing and reimbursement. - 76706 Ultrasound, abdominal aorta, real time with image documentation, screening study for abdominal aortic aneurysm (AAA) The ICD-10-CM code to support AAA screening is Z13.6 Encounter for screening for cardiovascular disorders [abdominal aortic aneurysm (AAA)].
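For illustration, the eligibility rules and codes described above can be paired in a simple sketch. The field names and the shape of the claim record are assumptions made for this example; they do not represent any payer's actual claim format or edit logic.

```python
def build_aaa_screening_claim(family_history: bool, is_male: bool,
                              age: int, smoked_100_cigarettes: bool) -> dict:
    """Pair CPT 76706 with ICD-10-CM Z13.6 for Medicare's one-time AAA
    screening, applying the risk criteria described in this article."""
    eligible = family_history or (is_male and 65 <= age <= 75 and smoked_100_cigarettes)
    if not eligible:
        raise ValueError("Patient does not meet the one-time AAA screening criteria")
    return {
        "cpt": "76706",           # screening ultrasound of the abdominal aorta
        "icd10": "Z13.6",         # encounter for screening for cardiovascular disorders
        "patient_cost_share": 0,  # no deductible or coinsurance for this screening
    }

print(build_aaa_screening_claim(family_history=False, is_male=True,
                                age=68, smoked_100_cigarettes=True))
```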
What Is an LED Grow Light? LED lights have in recent years evolved into the most advanced type of lighting technology and are steadily replacing conventional forms of lighting such as incandescent, fluorescent, and high-intensity discharge (HID) lights. LED lights have become globally popular and enjoy a high level of demand because they illuminate an outdoor, indoor or enclosed space as required without consuming excessive electrical energy, while dissipating little heat. Of late, LED lights are increasingly being used to stimulate the growth of plants and vegetation in greenhouses and nurseries, as well as for herbaria and aquariums. Those LED lights specifically used in orangeries, glasshouses, greenhouses and conservatories, as well as in hydroponics, to propagate plant growth are called grow lights. Grow lights are mostly used in climatic zones that do not receive direct sunlight during the winter months, where the total number of daylight hours is simply not sufficient for healthy plant growth. Though fluorescent, HID and incandescent lights can serve as grow lights, none of these lighting technologies can rival LED lights in terms of performance and efficient use of energy.

How Many Watts of LED Grow Light Is Best Suited for Plant Growth? LED grow lights cost more than fluorescent, incandescent or HID grow lights, but since they are more energy efficient and last longer, the investment is recouped over their lifetime. LED grow lights are available in the entire spectrum of colors, similar to the color band of natural sunlight. White LED plant lights have been found to be most suitable for all-round plant growth and development, as white light covers the full light spectrum, almost mimicking natural light. How much wattage of LED grow light will you need to boost plant growth per square foot of space? The actual LED grow light wattage needed will depend upon the total grow area, the plant species (flowering, fruit or vegetable), and the growth stage. Typically, for a 1 sq ft grow area, about 32 watts of LED lighting would be required; a quick sizing sketch follows the checklist below.

1. How to Choose the Best LED Grow Lights? The following checklist will serve as a ready reckoner if you're planning to set up a grow room of your own in the near future:
- The total grow area - The bigger your grow area, the greater the number of grow light units the space will need, and vice versa. Keep in mind that installing the lights side by side will enable you to cover more room.
- Plant types - Certain species of plants, like tomatoes, carrots or cannabis, require full spectrum (high-intensity) light, while leafy greens can do with low-intensity grow lights.
- The growth phase - Will you be needing grow lights only for a specific stage of growth (seedling, budding or flowering) or for the entire growth cycle?
- Dispersion angle of the LED light lens - LED grow light lenses with a greater dispersion or scattering angle will cover more area, while the illumination from ones with a lesser dispersion angle will be more focused.
- Your budget - You may have to invest heavily in LED grow lights, but you'll be able to recover your outlay over the lifespan of the LEDs.
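As a rough illustration of the 32-watts-per-square-foot guideline mentioned above, here is a minimal sizing sketch. The helper function and the 32 W/sq ft default are illustrative assumptions drawn from this article, not a horticultural standard; actual needs vary with plant type and growth stage, as the checklist notes.

```python
def estimated_led_wattage(grow_area_sqft: float, watts_per_sqft: float = 32.0) -> float:
    """Rough LED grow-light sizing using the ~32 W per sq ft rule of thumb
    quoted above; adjust the rate for plant species and growth stage."""
    return grow_area_sqft * watts_per_sqft

# Example: a 2 ft x 3 ft grow area
print(f"{estimated_led_wattage(2 * 3):.0f} W")  # ~192 W of LED lighting
```

By the same rule, a 4 x 4 ft area works out to roughly 512 W, which lines up loosely with the 400 W Apollo Horticulture unit reviewed later being rated for areas of up to 4 x 4 feet.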
2. Reasons to Select Full Spectrum LED Grow Lights
- Full spectrum LED grow lights encourage healthy growth and lead to better yields
- You'll not have to invest in accessories and components for installing the LED lights (no need to buy ballasts)
- Full spectrum LEDs are very energy efficient, which means you'll be saving upfront on utility bills
- LED lights seem to last forever: the lifecycle of LEDs varies from 50,000 to 90,000 hours
3. Important Elements of an LED Grow Tent (Grow Room Sunglasses, Grow Tent, Complete Tool Kit, Fertilizers for Cannabis, Nutrients, Thermometer & Hygrometer, and Light Meter)
- Grow Room Sunglasses - LED grow lights installed in the indoor garden give off high-intensity beams, and not wearing protective eyewear will in the long run adversely affect your eyesight and may even lead to blindness. Put on sunglasses before entering the greenhouse to safeguard your eyes from the damaging effects of UV rays.
- Complete Tool Kit - The following is a list of items that will be included in your grow room tool kit:
- 1. Grow lights
- 2. Indoor cultivation kits
- 3. Fertilizers and pesticides
- 4. Air treatments
- 5. Measuring tools
- 6. Growing media
- 7. Pots and trays
- 8. Gardening & irrigation
- 9. Dry-out and cure extracts
- 10. Seeds and saplings
- 11. Mylar box with high light reflectance
- 12. Hydroponics system with irrigation timer
4. How Does an LED Grow Light Work? LED grow lights are just a variant of the LED lighting systems now extensively used in homes and offices around the world; those used across greenhouses, nurseries, glasshouses and hydroponics rooms are specially designed for these settings. LED grow lights are made up of light-emitting diodes, like all other kinds of LEDs. The diodes are sheathed inside a case that comes with a built-in cooling fan and a heat sink. Installing LED lights is extremely convenient: unlike a fluorescent tube, they don't need a ballast or separate component and can be inserted straight into a socket. LEDs, or simply diodes, are semiconductor devices of the simplest type; a common semiconductor material is aluminum gallium arsenide. The light-emitting diode is composed of N- and P-type materials, and since it is a diode (or valve), the flow of electricity is unidirectional. LEDs emit light composed of photons, the smallest units of light. The amount of energy released by an LED in the form of light determines the brightness of the emitted light, which is indicated by its wattage: the higher the wattage of the LED grow light, the brighter the dispersed light. LED grow lights are ideal for growing plants indoors as they can emit a light spectrum similar to that given off by the sun, providing appropriate lighting conditions for plant growth at all stages.
5. Benefits of LED Grow Lights
- More light, less heat - The beauty of LED technology is that LED lights can keep glowing for hours on end without dissipating too much heat. Excessive heat can be detrimental to the balanced growth and healthy development of plants.
- Lasts long - Since LED lights use very little electrical energy, they tend to remain functional for years on end. Premium quality LEDs can keep emitting light for up to 100,000 hours.
- Energy savings - LED lights have become the most popular of all kinds of lighting systems chiefly because they are extremely energy efficient. For instance, replacing high pressure sodium vapor lamps, metal halide or ceramic metal halide lights with LED lights will reduce energy consumption by approximately 40%.
- Appropriate wavelengths - LED lights are capable of emitting light with wavelengths that promote photosynthesis in controlled environments.
- A wide choice of designs - The cultivator or horticulturist has the option of choosing from a wide variety of LED grow lights that are optimal for greenhouse lighting.
6. 10 Best LED Grow Lights Reviewed
1. Galaxyhydro LED Grow Light, 300W Indoor Plant Grow Lights Full Spectrum with UV&IR for Veg and Flower
The Galaxyhydro LED Grow Light has a wattage output of 300 watts and is guaranteed to remain in service for 50,000 hours. This grow light is what you should be opting for if you're on a budget. The unit is comprised of 100 Epiled LED chips (each chip is 3 watts) covering a 9-band light spectrum. Though it does not cover the full light spectrum, it furnishes the specific spectrums that are essential for plant growth and development at all stages. The spectrums covered include 430nm, 450-470nm, 620-630nm, 740nm, the white light spectrum, and also the UV and IR spectrums. These particular spectrums are ideal for stimulating growth right from the seedling stage and make the plants acclimatized for outdoor growth, in case you're planning to transplant them outside in due course. These LED lights can be used in place of high pressure sodium lamps or metal halide lights with ratings of 250 W and 400 W respectively. At the same time, you can look forward to a significant reduction in your energy bills, as the lights consume just 135 watts of electricity. The Galaxyhydro LED grow lights are ideal for hydroponics, indoor farming and gardening.
- Targeted wavelengths and spectrum suitable for plant growth at all stages
- Energy efficient: consumes just 135 W
- High wattage rating: 300 W for high-intensity luminosity
- Emits UV and IR spectrum: significant for promoting growth and sterilizing action
- Cooling fans for heat dispersal
2. Roleadro Upgrade and Newly Developed LED Grow Light Full Spectrum 2nd Generation Series 300W Plant Light
Plant growth under artificial light is optimal when full spectrum light is provided that mimics sunlight. The Roleadro Upgrade and Newly Developed LED Grow Light 2nd Generation Series emits full spectrum light covering the 420nm-780nm range. This spectrum band is ideal for accelerating the production of sugar exactly in the form required by indoor plants. The grow light gives off beams with a 3500 K color temperature, which is suitable for each and every stage of plant development. The light emitted by the Roleadro Upgrade is very similar to the spectrum of sunbeams, and amplifies the yellow and green light spectrums essential for cell growth and photosynthesis. Two distinct types of LED light lens have been used, with dispersal angles of 90° and 120° respectively, which not only help to cover the grow room area but also enable focusing on specific sections.
- LED grow light unit composed of 60 chips, each of 5 watts: total wattage 300 watts
- Covers a 3.9 sq meter area effectively
- Lifetime: 50,000 hrs
- Full spectrum grow light
- Can replace high-intensity discharge lights of 300 watts
- Provides 5,000 lumens in total
- Dual fans for keeping the lights cool
3. VIPARSPECTRA Reflector Series 600W LED Grow Light
If you're looking for a superlative quality of plant light for your grow room or indoor garden, then you don't need to look any further than the VIPARSPECTRA Reflector Series 600W LED Grow Light. This grow light has been specially designed to reflect the full spectrum of light so essential for promoting holistic growth of flowering and vegetable plants right from the sapling to the developing stages. The lights have been fabricated in a manner that maintains the balance between PAR/lumen output and coverage. The VIPARSPECTRA plant lights can be used in place of metal halide or high pressure sodium lamps with a 600 W rating. Installing these lights inside the greenhouse will also lead to a considerable reduction in your monthly electricity bills, as these lights consume only 276 W. The LED lights are suitable for use over an area measuring 3 x 3 feet when hung at a height of 24 inches. The casing houses large aluminum heat sinks and upgradable 4.72" fans.
- 120 single lights, each having a 5-watt chip
- Total wattage: 600 W
- Can cover an area of 3 x 3 feet
- Full spectrum lights
- Longevity of the lights: 100,000 hrs
- Can replace HPS or MH lights of the same wattage rating
- Energy efficient: 276 W power draw
- Powerful heat sinks and cooling fans for keeping the grow lights in working order
4. Kind LED Grow Lights K3 L300
If you have a large or mid-size indoor farm, then choosing the Kind LED Grow Lights K3 L300 will be perfect for you. Comprising a total of 90 individual LEDs, each with a 3 W chip, this grow light emits full spectrum light. The lights have been clubbed into 6 groups, each containing 15 LEDs, for better coverage and focusing. These LEDs emit the complete 12-band light spectrum, which also includes IR and UV light. UV and IR light spectrums are essential for sterilizing and preventing the growth of harmful bacteria that may impede the all-round and balanced growth of flora. Additionally, if you're looking to ultimately transplant your vegetation outdoors, IR and UV rays can help the plants become adapted to natural light conditions.
- Perfect grow light for promoting photosynthesis
- Secondary optical lenses that help in reaching the base of plants
- Ideal for covering 3x3 and 5x5 feet areas
- Full light spectrum: reflects the entire 12-band spectrum including UV and IR light
- Wattage: 300 W
- Lifespan: 50,000 hrs
- Can replace 300 W HIDs
5. Apollo Horticulture GL80X5LED 400W LED Grow Light
The Apollo Horticulture GL80X5LED plant lights come in supremely handy for encouraging the growth and development of plants in greenhouses and conservatories. The grow light has an output rating of 400 W and comprises eighty LED chips, each of 5 watts. The light spectrums covered are 430nm-440nm, 450-475nm, 620-630nm, 650-670nm, 730nm, and the 6500 K white light spectrum. Two built-in cooling fans in the casing keep the chips from getting overheated. The plant light is optimally effective for an area of up to 4 x 4 feet and has an active life of 50,000 hours. This grow light is suitable for a medium or small-sized indoor plot.
- 400 watts output
- Full spectrum light
- Cooling fans for preventing lights from getting overheated
- Can be an alternative to high pressure sodium and metal halide lights of 400 W
- Consumes 195 W: highly energy efficient
- Can be used in grow tents alongside hydroponics plant lights and even in open areas
6. CrxSunny 1000W COB LED Grow Light Full Spectrum for Hydroponic Indoor Plants and Greenhouse Growing Veg and Flower
The CrxSunny COB LED grow light, comprising a total of 5 integrated LED chips each of 200 W, works in a manner quite different from conventional plant light models. The pattern in which the LED chips have been installed inside the casing underlines the terminology "COB", which in expanded form means "chip-on-board". The purpose of the COB arrangement is to replicate the spectrum of natural light, which is extremely vital for fuelling the harmonious development of flora in an indoor setting. The CrxSunny grow light is one of the best plant lights you can buy for your indoor farm, as the five LED chips work in seamless harmony to emit full spectrum light. An on/off switch and a hanging kit are provided with the package.
- High wattage output of 1000 W: intense luminosity
- Low power consumption of 226 W: enables savings on the electricity bill
- COB (chip-on-board) pattern for excellent illumination, focusing, and long active life
- Mimics the natural light spectrum so vital for plant development and cultivation
- Can replace 1000 W HPS/MH lights and lamps
7. AeroGarden Ultra LED
In case you've taken to indoor farming of late and wish to furnish the best care possible to newly planted seedlings and saplings, then the AeroGarden Ultra LED grow light will come in supremely handy. This advanced and smart plant light not only allows you to grow up to a maximum of 7 flowering, vegetative and fruit plants but also lets you keep track of their development via an LCD screen. The user-friendly LCD screen updates you on adding nutrients and water, the age of the saplings, and many other aspects. This 30-watt LED grow light system emits full spectrum light that can be fine-tuned for boosting photosynthesis, causing your plantings to develop holistically and leading to a rich harvest. The AeroGarden Ultra LED comes with a pod kit containing seeds of basil, parsley, mint, dill, and chives, plus nutrients for nurturing the seedlings.
- Comes with seed pods of 7 herbs
- Nutrients provided for promoting healthy development
- Full spectrum light
- 30-watt LED plant light can be tweaked to offer the optimum light conditions for development
- Easy to install: just pop the supplied seeds into the garden pods and add nutrients and water from time to time
- The lights turn on/off automatically
8. HIGROW Optical Lens-Series 1000W Full Spectrum LED Grow Light for Indoor Plants Veg and Flower, Garden Greenhouse Hydroponic Plant Growing Lights (12-Band 5W/LED)
Raise, breed, nurture, cultivate, and harvest flowering, vegetable, and fruit plants of your choice in your balcony garden with the HIGROW Optical Lens-Series Full Spectrum Grow Light. A total of 200 5-watt LED chips are built into the casing, which hangs 24 inches over a grow area of up to 5 x 4 feet and focuses excellent illumination onto it. Also embedded in the casing is a powerful heat sink that, together with 3 cooling fans, keeps the grow light in perfect working order. The 1000-watt grow light emits rays covering the entire light spectrum in the 380nm-760nm range, which also comprises the IR and UV spectrums.
You can conveniently choose between the off, full-spectrum, vegetative, and bloom channels for expediting growth at different stages.
- Optimum light-lens angle of dispersion for better focusing
- Can replace 1000 W HPS/MH lights, resulting in energy savings
- Full spectrum light
- Easily switchable modes for stimulating growth at different stages
- Heat sink and fans for keeping the LED chips cool
9. KINGBO Reflector 45W LED Grow Light Panel 225 LEDs 6-Band Full Spectrum Include UV IR with Switch for Indoor Plants Seeding & Growing & Flowering
The newly upgraded KINGBO Reflector Grow Light Panel comes outfitted with 225 LED chips that emit a 6-band light spectrum ranging from 380nm to 740nm, including the UV and IR spectrums, ideal for supporting plant growth at all stages. The LED chips have a lifespan of 50,000 hours, implying that they'll last long after your plants have reached the end of their lifecycles. The plant light not only claims to replace high pressure sodium or metal halide lamps of 1000 watts but also helps you save on energy bills, as it consumes only 35 W of power. A robust hanging platform with all necessary installation accessories is supplied for getting the most out of this grow light.
- Full spectrum light
- Coverage area: 2 x 2 feet
- Lifespan: 50,000 hrs
- Replaces 1000 W HPS/MH lamps and lights
- Power draw: 35 W, resulting in huge energy savings
- Supplied with hanging kit
- Ideal for small grow rooms and container pot gardens
10. MEIZHI Reflector-Series 600W LED Grow Light Full Spectrum for Indoor Plants Veg and Flower – Dual Growth and Bloom Switch Daisy Chain
Accord the best possible care to your indoor conservatory with the MEIZHI Reflector Series LED Grow Light. The Reflector Series comes with LED light lenses inclined at an angle of 120° for focusing maximum illumination evenly over a 3 x 3 feet grow area. The grow light reflects the 12-band spectrum light ideal for hydroponics, sprouting, seedling, budding, flowering, and ripening. The 600 W plant light is also extremely energy efficient, consuming just 227 watts, and can replace HPS, MH, and HID lights of comparable wattage.
- Full spectrum 12-band light with switches for regulating growth and bloom
- Reflector panel with light lenses dispersing light at an angle of 120° for even and focused illumination
- 4 IR LEDs for promoting cell division and expediting ripening
- Hardy heat sinks and two fans for speedy heat dissipation
- Ideal for small greenhouses, glasshouses, and conservatories
A total of 10 grow lights were taken up for review, and almost all the models met the minimum criteria for a plant light. However, the grow lights from VIPARSPECTRA, CrxSunny, Apollo Horticulture, MEIZHI, and HIGROW stood out from the rest. The model from CrxSunny had the highest wattage (1000 W) but was remarkably energy efficient at 226 W, the KINGBO model was found to be the most affordable, and the Kind LED grow light emerged as the most performance-oriented.
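To put the efficiency claims in that conclusion into rough numbers, here is a small sketch comparing a 1000 W HID fixture with the 226 W draw quoted for the CrxSunny unit above. The 12-hour photoperiod and the $0.15/kWh electricity price are assumptions for illustration only.

```python
HID_WATTS = 1000        # rated HPS/MH fixture being replaced
LED_DRAW_WATTS = 226    # actual power draw quoted for the CrxSunny unit
HOURS_PER_DAY = 12      # assumed photoperiod
PRICE_PER_KWH = 0.15    # assumed electricity price, $/kWh

saved_kwh_per_year = (HID_WATTS - LED_DRAW_WATTS) / 1000 * HOURS_PER_DAY * 365
print(f"~{saved_kwh_per_year:.0f} kWh saved per year, "
      f"roughly ${saved_kwh_per_year * PRICE_PER_KWH:.0f} at $0.15/kWh")
```

Under those assumptions the swap saves on the order of 3,400 kWh, or about $500, per year; the exact figure obviously depends on local electricity rates and how long the lights run each day.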