Computer numeric control (CNC) is a computer-based system used to control tools such as milling machines, lathes, routers, lasers, punches, water jets, and 3D printers. For a long time, CNC equipment remained confined to the industrial shop. With the recent proliferation of personal computers, however, CNC has moved into the home environment, with an ever-increasing number of do-it-yourself (DIY) enthusiasts and hobbyists building their own CNC equipment. This technology enjoys such wide acceptance because it offers the accuracy and repeatability that only a computer-controlled system can provide. In this article, I detail an implementation of a set of electronics that can be used to control pretty much any small or midsized CNC machine.
The CNC motherboard has four stepper driver modules plugged into the respective slots.
(Source: Texas Instruments)
The CNC motherboard
I call this implementation the CNC motherboard, as it revolves around a backplane that accepts motor driver modules to drive the CNC machine axes. In this incarnation, the CNC motherboard can support up to four axes, which is more than enough for the great majority of CNC equipment topologies being built today. The motor driver modules typically are based on bipolar stepper motor drivers with a step/direction interface. In essence, any motor topology can be used, as long as it works by moving according to STEP commands. Since the number of stepper motor power stages with an inherent step/direction interface is always growing, we see the stepper motor ruling the great majority of DIY CNC equipment implementations.
The control of these four axes is supported by a series of blocks. The brain of a CNC machine is a computer called the CNC controller. It is in charge of sending a series of STEP pulses and setting the ENABLE and DIR control lines to the motor driver modules according to a command better known as G code. A G code command can be anything like "move in a line to coordinates X,Y,Z at speed F," "move in a curve," "drill a hole," and so forth. The CNC controller interprets this command and generates the respective combination of STEP/DIR pulses, at the right frequency, to achieve the required motion.
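As a rough illustration of what that interpretation involves, the Python sketch below converts a single-axis linear move into a STEP-pulse count and pulse frequency. The calibration constant and function are invented for the example; a real controller coordinates all axes simultaneously and applies acceleration ramps.

STEPS_PER_MM = 80.0   # hypothetical axis calibration: STEP pulses per mm of travel

def linear_move(dist_mm, feed_mm_per_min):
    # number of STEP pulses needed to cover the distance
    steps = round(abs(dist_mm) * STEPS_PER_MM)
    # level to set on the DIR line before pulsing
    direction = 1 if dist_mm >= 0 else 0
    # pulse frequency that yields the commanded feed rate F
    freq_hz = (feed_mm_per_min / 60.0) * STEPS_PER_MM
    return steps, direction, freq_hz

# "move 25 mm at F300" -> (2000, 1, 400.0): 2,000 pulses at 400 Hz
print(linear_move(25.0, 300.0))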
The CNC controller for our DIY CNC machine can be any personal computer (PC) with a parallel port. Although PCs are no longer being fabricated with parallel ports, adding this resource in the form of an expansion card is very simple. The PC connects to the CNC motherboard through this parallel port, granting us a total of 12 output functions and five inputs.
Before we start distributing the control signals in and out of the different control functions, we add an isolation block for two main reasons. First, this isolation stage protects the computer in case something goes very wrong. Since the motor drivers may employ high voltages that can damage the computer, it is best to make sure the computer side cannot come in contact with this higher form of energy. Second, the isolation helps to decrease noise on the control lines caused by ground bounce.
The control of four motor modules claims nine control signals: one ENABLE, which will be shared among the four modules; one STEP; and one DIR per module. The remaining outputs are distributed as follows. Two are used to control two 250V AC/30A relays. The other output runs a watchdog protection block, often referred to as the charge pump.
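As a quick sanity check on that allocation, here is a hypothetical tally in Python of the 12 parallel-port outputs (the names are invented for the illustration):

outputs = {
    "ENABLE": 1,        # shared by all four driver modules
    "STEP": 4,          # one per axis module
    "DIR": 4,           # one per axis module
    "RELAY": 2,         # two 250V AC/30A relays
    "CHARGE_PUMP": 1,   # watchdog (charge pump) line
}
assert sum(outputs.values()) == 12  # every parallel-port output accounted for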
Working with JAX-RPC
JAX-RPC Working Mechanism
Constituents of a JAX-RPC Server and Client
The javax.xml.rpc Package
The xrpcc Tool
The Types Supported by JAX-RPC
The CarPartRequest Web Service
The CarPartRequestClient Application
RPC stands for remote procedure calls. RPC is used for making procedure or function calls and receiving the responses, both over the network. During the days of procedural languages, this was the de facto method by which distributed applications communicated with each other. This often involved fairly complex socket programming to exchange data between the two remote applications.
However, the growth of object-oriented programming saw a steady decline in the use of procedural languages for business applications. It became evident that using object-oriented paradigms could result in a better return on investment (in terms of scalability, development time, and so on). So, the world of business applications moved from procedural languages to object-oriented languages, such as Java. However, this led to a new level of complexity in communication between distributed applications. Methods in object-oriented languages could return and take as parameters not only simple data types but also complex objects, such as vectors. With this increasing complexity, it became increasingly difficult to use RPC to represent complex objects. Another complexity stemmed from the fact that, more often than not, distributed applications run on heterogeneous systems. This means that the data exchanged between two applications written in different languages must be encoded so that it can be used by both applications.
These complexities led to the gradual decline of RPC for low-level socket communications and the development of a plethora of alternative techniques. In the Java world, the primary new approach is Remote Method Invocation (RMI). Although useful in its own way, RMI has its own set of issues.
RMI is very resource-intensive because it needs to use quite a few classes. Also, JRMP, the backbone of RMI, is poor on performance. Of course, you could go ahead and write your own remote protocol, but that is not necessarily the easiest of programming tasks. Additionally, as clients make RMI calls, sockets need to be opened and maintained; this also adds to the overhead. RMI also requires that the objects that can be called be bound to a server such as LDAP, an RMI registry, or a JNDI service. RMI is also Java-only, which means both communicating parties have to be on Java, something that is very difficult to ensure in a heterogeneous environment like the Internet. Finally, RMI-based applications require a lot of coding effort because quite a number of interfaces and classes need to be implemented. So, while RMI solved the problem of how to exchange objects, it still had its complexities in terms of performance and simplicity of use.
But how did RPC come back in fashion? The proliferation of the Internet and the emergence of XML gave rise to the possibility of using XML as an RPC mechanism. The fall of RPC was primarily for two reasons: the necessity of using socket programming to provide a transport mechanism for data, and the difficulty in representing complex objects. The ubiquity of the Internet meant that almost every imaginable system had support for HTTP, so it could be used as the transport mechanism of choice. The rise of XML ensured that it was possible to textually describe objects in a standard way that could be understood by all systems, regardless of the environments and languages. The XML-based SOAP specification provides the necessary standard mechanism by which to use RPC. In this chapter, you will learn about the JAX-RPC mechanism of using XML-RPC, as well as how to use APIs defined in the JAX-RPC specification to create a Web service and a client.
JAX-RPC Working Mechanism
JAX-RPC uses SOAP and HTTP to do RPCs over the network. The SOAP specification defines the necessary structure, encoding rules, a convention for doing RPCs, and its corresponding responses. The RPCs and responses are transmitted over the network using HTTP as the primary transport mechanism.
From an application developer's point of view, an RPC-based system has two aspects: the server side (the Web service) and the client side. The Web service exposes the procedures that can be executed, and the client does the actual RPC over the network.
As discussed in earlier chapters, a Web service environment is based on open standards such as SOAP, HTTP, and WSDL. It is therefore possible that a Web service or a client wasn't developed using the Java platform. However, JAX-RPC provides the mechanism that enables a non-Java client to connect to a Web service developed using the Java platform, and vice versa. This chapter will focus on the development of a Web service and a client using JAX-RPC.
Communication exchange between a JAX-RPC client program and a Web service.
Before we discuss the communication exchange process, you need to understand what stubs and ties are. Stubs are local objects that represent the remote procedures. Ties are classes that reside on the server and enable communication with the client.
It is assumed that the client is aware of the Web service and the remote procedure that it can execute on the Web service. This is what happens:
The client calls the method on the stub that represents the remote procedure.
The stub executes the necessary routines on the JAX-RPC runtime system.
The runtime system converts this method call into a SOAP message and transmits the message to the server as an HTTP request.
The server, upon receipt of the SOAP message, invokes the methods on the JAX-RPC runtime. The JAX-RPC runtime converts the SOAP request into a method call.
The JAX-RPC runtime then calls the method on the tie object.
Finally, the tie object calls the method on the implementation of the Web service.
The response to the RPC call is sent in a SOAP response message as an HTTP response.
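JAX-RPC itself is a Java API, but the role the stub plays in steps 1 through 3 can be sketched in a few lines of Python. The class, method, and envelope below are simplified assumptions meant only to show a local call being wrapped in a SOAP-style message and posted over HTTP; they are not the real JAX-RPC wire format or generated-stub code.

import urllib.request

class CarPartStub:
    """Toy stand-in for a generated stub class."""
    def __init__(self, endpoint):
        self.endpoint = endpoint

    def getPartPrice(self, partName):
        # wrap the call in a SOAP-like envelope
        body = ('<env:Envelope xmlns:env="http://schemas.xmlsoap.org/soap/envelope/">'
                '<env:Body><getPartPrice><partName>{0}</partName>'
                '</getPartPrice></env:Body></env:Envelope>').format(partName)
        request = urllib.request.Request(self.endpoint,
                                         data=body.encode("utf-8"),
                                         headers={"Content-Type": "text/xml"})
        # transmit the message as an HTTP request; the server-side tie
        # dispatches it, and the reply comes back as the HTTP response
        with urllib.request.urlopen(request) as reply:
            return reply.read()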
Now let's look at the physical implementation of a JAX-RPC-based server and client.
Constituents of a JAX-RPC Server and Client
Figure 11.2 shows the physical implementation of a JAX-RPC-based system.
Physical implementation of a JAX-RPC-based system.
A JAX-RPC-based system has the following constituents:
The Web service
The container in which the Web service is located
The WSDL file
The client application
The JAX-RPC runtime classes
The Web service is represented by two files: the service definition interface, and the corresponding implementation class. The service definition interface and the implementation class are collectively known as the service endpoint. An application developer has to write the interface and the implementation class constituting the service endpoint. As you will see later, the service definition interface extends the Remote interface of the rmi package and declares the methods that can be executed remotely.
The service endpoint is deployed in a container-based JAX-RPC runtime system. Typically, this is either a servlet or a stateless session bean deployed on an EJB container. In this chapter, we will use the Tomcat Web server and a servlet as the container for the service endpoint.
The ties are lower-level classes that are used by the server to communicate with the client. These are created by a tool called xrpcc.bat (or xrpcc.sh, if you are a Unix user).
The xrpcc tool also creates the WSDL file. A WSDL file is an XML document that describes the remote procedure call in a platform-independent way. This file enables the Web service to be accessible by clients, even if they are not on the Java platform. As an application developer, you need to run the xrpcc tool to generate the ties and the WSDL document. The WSDL document defines the parameters that a method can take, as well as the method's return value, which is expressed as an XML schema definition (XSD) data type. The JAX-RPC mechanism provides a direct mapping of these XSD data types to Java language types. For example, if a procedure returns a String value, then the WSDL document will describe the return type as xsd:string. Similarly, if a WSDL document declares that a method can take an xsd:dateTime type as parameter, then the JAX-RPC mechanism maps that to the Calendar data type in the Web service. As a developer, you won't need to worry about these mappings, because they are handled automatically by the xrpcc tool.
The client application is the application that makes the remote procedure call on the Web service. As an application developer, you need to create the application program.
The stubs are the local objects that represent the remote service. You can create the stubs by using the xrpcc tool.
In some scenarios, it's possible that the client is not aware of the signature of the remote procedure or the name of the service until runtime. To handle such cases, the client can use the dynamic invocation interface (DII). A client that uses the DII does not use static stubs, and it is more complex to code and implement than a client that does.
The JAX-RPC runtime classes are provided with the reference implementation.
Now let's look at the packages that make up the JAX-RPC environment.
The JAX-RPC specification defines several packages.
Of all these packages, the one that you need to be concerned about as an application developer is the javax.xml.rpc package. It contains the interfaces that you will be using while developing the clients that use JAX-RPC mechanisms to perform remote procedure calls.
The classes of the other packages are used by the JAX-RPC runtime, or by xrpcc to generate stubs, ties, and the WSDL document.
The javax.xml.rpc Package
The javax.xml.rpc package contains the three interfaces, three classes, and two exceptions that you will use for developing JAX-RPC-based clients.
The javax.xml.rpc Interfaces
The interfaces in the javax.xml.rpc package are as follows:
Call: The Call interface is used by a client that uses the DII to call a remote procedure. The Call interface provides a number of methods with which a client can access the remote procedures.
Service: The Service interface is essentially used as a factory to create instances of the Call interface.
Stub: The Stub interface is the base interface for the stub classes. Instances of the stub classes represent the procedures at the server. A client calls the method on a stub, and it is up to the stub to take up the remote procedure call from then on.
The javax.xml.rpc Classes
The classes in the javax.xml.rpc package are as follows:
ServiceFactory: The ServiceFactory is an abstract factory class that enables you to create instances of the Service interface.
NamespaceConstants: The NamespaceConstants class contains the constants that are used in JAX-RPC for namespace prefixes and URIs. For example, NSPREFIX_SCHEMA_XSD represents the namespace prefix for the XML schema XSD.
ParameterMode: The ParameterMode class is an enumeration for parameter modes. It defines three constants that determine whether a parameter is of IN, OUT, or INOUT type. This class is used in the addParameter(...) methods of the Call interface to determine the type of parameter that is being added.
The javax.xml.rpc Exceptions
The exceptions in the javax.xml.rpc package are as follows:
JAXRPCException: The JAXRPCException is thrown when the JAX-RPC runtime fails to operate as expected.
ServiceException: The ServiceException is thrown when the methods of the Service interface or the ServiceFactory class fail to execute as expected.
Now let's take a detailed look at the xrpcc tool.
The MS-DOS Boot Process
The System Boot Sequence consists of a series of events that the system performs when it is turned on (or rebooted with the reset switch). This always starts with the special boot program software that is in the system BIOS ROM on the motherboard. The BIOS performs several steps to test the system and make it ready before an operating system can be loaded. These steps are explained in detail here: The Master Boot Record and the System Boot Sequence.
Once the BIOS has completed its testing and system configuration, it begins the process of loading the operating system. It does this by searching the installed drives for a Master Boot Record, which contains boot code. Once found, the boot code is executed and the system boots into the operating system. When looking for this Master Boot Record, the BIOS searches the boot devices (drives) in the order specified in the BIOS configuration settings controlling this aspect of the boot sequence. If it cannot find a boot device, it terminates with an error.
If the operating system is MS-DOS®, or any variant of Windows® other than Windows® NT or Windows® 2000, that starts out by booting the equivalent of DOS, then a specific operating system load sequence commences, which is referred to as the DOS Boot Process. If you are booting into Windows, additional steps are performed after the underlying MS-DOS® operating system has loaded.
The steps below take you through the boot process from the hard disk. If you were to boot from a floppy disk, the process would differ only slightly in the first few steps, as the floppy disk structures are slightly different. Floppies cannot be partitioned, and hence have no master boot record or partitions. This means that the master boot record steps are skipped.
MS-DOS boot process:
- The BIOS, having completed its test and setup functions, loads the boot code found in the master boot record and then transfers control of the system to it. At that point, the master boot record code is executed. If the boot device is a floppy disk, the process skips to step 7 below.
- The next step in the process is the master boot code examining the master partition table. It first must determine if there is an extended DOS partition, then it must determine if there is a bootable partition specified in the partition table.
- If the master boot code locates an extended partition on the disk, it loads the extended partition table that describes the first logical volume in the extended partition. This extended partition table is examined to see if it points to another extended partition table. If it does, this second table is examined for information about the second logical volume in the extended partition. Logical volumes in the extended partition have their extended partition table chained together one to the next. This process continues until all of the extended partitions have been loaded and recognized by the system.
- Once the extended partition information (if any) has been loaded, the boot code attempts to start the primary partition that is marked active, referred to as the boot partition. If no primary partition is marked active, the boot process will terminate with an error. The error message is often the same as that which occurs if the BIOS could not locate a boot device, generally shown on screen as "No boot device", but it can also show up as "NO ROM BASIC - SYSTEM HALTED". If there is a primary partition marked active and there is an installed operating system, the boot code will boot it. The rest of the steps presume this example is of an MS-DOS primary partition.
- At this stage, the master or volume boot sector is loaded into memory and tested, and the boot code that it contains is given control of the remainder of the boot process.
- The boot code examines the disk structures to ensure that everything is correct. If not, the boot process will end in an error here.
- During the next step, the boot code searches the root directory of the device being booted for the operating system files that contain the operating system. For MS-DOS, these are the files "IO.SYS", "MSDOS.SYS" and "COMMAND.COM".
- If no operating system files are found, the boot program will display an error message similar to "Non-system disk or disk error - Replace and press any key when ready". Keep in mind that this message does not mean that the system was never booted. It means that the BIOS examined the floppy disk, for example, and rejected it because it couldn't boot an operating system. The volume boot code was indeed loaded and executed, as that is what posts the message when it can't find the operating system files.
- In the final stages of the boot process, presuming that the operating system files are found, the boot program will load those operating system files into memory and transfer control to them. In MS-DOS, the first is IO.SYS and its code is executed. IO.SYS will then execute MSDOS.SYS. Then the more complete operating system code loads and initializes the rest of the operating system structures beginning with the command interpreter COMMAND.COM and then the execution of the CONFIG.SYS and AUTOEXEC.BAT files. At this point the operating system code itself has control of the computer.
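The master boot record structures examined in steps 2 through 4 follow a fixed layout: 446 bytes of boot code, four 16-byte partition entries, and a two-byte 0x55AA signature. The Python sketch below parses the four primary entries from the first sector of a raw disk image; the image filename is hypothetical.

import struct

def read_partition_table(sector0):
    # the last two bytes of a valid MBR hold the 0x55AA boot signature
    if sector0[510:512] != b"\x55\xaa":
        raise ValueError("not a valid master boot record")
    entries = []
    for slot in range(4):  # four primary partition entries
        offset = 446 + slot * 16
        flag = sector0[offset]        # 0x80 marks the active (boot) partition
        ptype = sector0[offset + 4]   # partition type (e.g., extended DOS)
        start, size = struct.unpack_from("<II", sector0, offset + 8)
        entries.append({"active": flag == 0x80, "type": ptype,
                        "first_sector": start, "sector_count": size})
    return entries

with open("disk.img", "rb") as disk:  # hypothetical raw disk image
    for part in read_partition_table(disk.read(512)):
        print(part)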
If any of the Windows 95/98/ME versions were being started, the above would be only the beginning of the startup process. When MS-DOS starts in anticipation of loading these Windows versions, many more routines are loaded and executed as part of the boot process, which includes tasks such as reading the system registry, initializing hardware devices, and starting the graphical user interface or operating system shell. We hope this has given you a better understanding of what occurs during the boot or startup process of your computer.
The Path as Participatory
Life obviously must be lived. How does repeating the Three Refuges help one to live this life? One way to participate in the living expression of what it means to take refuge in the Three Treasures is through the ritual service. Participation in ritual, however, only becomes meaningful when it can help the individual to understand or see the Truth one is participating in. Because of this, ritual is created from and founded on the doctrinal understanding of truth of a particular faith or tradition. Through seeing how the transmission of doctrine can be expressed in life, the meaning of that transmission becomes clearer: it becomes a path that can help one to not only understand one’s life, but to live and share that life as well.
Because of this, descriptions of the ritual service (especially as practiced by the Jodo Shinshu Buddhist tradition) will be an integral part of this introductory course. This introduction to Buddhism begins with the assumption that life, like religion, must be participated in and not just studied if it is to have a deeper meaning.
You, as the reader, may approach these writings as a way to explore the richness of human expression, to gain a greater understanding of your life through the lens of another tradition, or to seek or deepen your commitment to a personal path. It is my hope as the writer, that through sharing the triple perspective of Buddha, Dharma and Sangha, we can all gain a greater appreciation of the uniqueness of life and be motivated to share our joy in the ability to participate in the rarest of rare things: your own life.
In physiology, constrictive pericarditis is due to a thickened, fibrotic pericardium, which prevents the heart from expanding during diastole.
In many cases, constrictive pericarditis is a late sequela of an inflammatory condition of the pericardium. The inflammatory condition is usually an infection that involves the pericardium, but it may also follow a heart attack or heart surgery.
Almost half the cases of constrictive pericarditis in the developed world are idiopathic in origin. In regions where tuberculosis is common, it is the cause in a large portion of cases.
Causes of constrictive pericarditis include:
- Post Viral Pericarditis
- Prior Mediastinal Radiation Therapy
- Chronic Renal Failure
- Connective Tissue Disorders
- Neoplastic Pericardial Infiltration
- Incomplete Drainage of Purulent pericarditis
- Fungal and Parasitic Infections
- Following Pericarditis Associated with Acute Myocardial Infarction
- Following Postmyocardial Infarction (Dressler) Syndrome
- In Association with Pulmonary Asbestosis
Constrictive pericarditis is due to a thickened, fibrotic pericardium that forms a non-compliant shell around the heart. This shell prevents the heart from expanding when blood enters it, which results in significant respiratory variation in blood flow in the chambers of the heart.
During inspiration, the negative pressure in the thoracic cavity will cause increased blood flow into the right ventricle. This increased volume in the right ventricle will cause the interventricular septum to bulge towards the left ventricle, leading to decreased filling of the left ventricle. Due to the Frank-Starling law, this will cause decreased pressure generated by the left ventricle during systole.
During expiration, the amount of blood entering the right ventricle will decrease, allowing the interventricular septum to bulge towards the right ventricle, leading to increased filling of the left ventricle and subsequently increased pressure generated by the left ventricle during systole.

This is known as ventricular interdependence, since the amount of blood flow into one ventricle is dependent on the amount of blood flow into the other ventricle.
Symptoms and signs
Right-sided heart failure symptoms predominate, with progressive shortness of breath, palpitations, fatigue, lower-limb edema, and ascites.

Clinical signs on examination include tachycardia, Kussmaul's sign, and jugular venous distension with a characteristic rapid y-descent. Auscultation may reveal a "pericardial knock". The abdomen may show ascites and congestive hepatosplenomegaly.
The diagnosis of constrictive pericarditis is often difficult to make. In particular, restrictive cardiomyopathy has many clinical features in common with constrictive pericarditis, and differentiating them in a particular individual is often a diagnostic dilemma.
1. Chest x-ray may show pericardial calcification.
2. EKG may show low QRS voltage.
3. Echocardiography may show wall thickening.
4. CT or MRI are more sensitive than echocardiography.
5. Cardiac catheterization to measure pressure changes in the left and right sides of the heart.
1. Pericardial stripping
The definitive treatment for constrictive pericarditis is pericardial stripping, which is a surgical procedure where the entire pericardium is peeled away from the heart. This may be effective in up to 50% of patients. The procedure carries significant risk, since the thickened pericardium is often adherent to the myocardium and coronary arteries. In patients who have undergone coronary artery bypass surgery with pericardial sparing, there is danger of tearing a bypass graft while removing the pericardium.

If any pericardium is not removed, it is possible for bands of pericardium to cause localized constriction, which may produce symptoms and signs consistent with constriction.
2. Medical management

Due to the significant risks involved with pericardial stripping, many patients are treated medically, with judicious use of diuretics.
The modern high-speed network is not perfect. Errors can creep into message data during transmission or reception, altering or erasing one or more message bytes. Sometimes, errors are introduced deliberately to sow disinformation or to corrupt data. Several algorithms have been developed to guard against message errors. One such algorithm is Reed-Solomon.
This article takes a close, concise look at the Reed-Solomon algorithm. I discuss the benefits offered by Reed-Solomon, as well as the notable issues it presents. I examine the basic arithmetic behind Reed-Solomon, how it encodes and decodes message data, and how it handles byte errors and erasures. Readers should have a working knowledge of Python. The Python class ReedSolomon is available for download.
Overview of Reed-Solomon
The Reed-Solomon algorithm is widely used in telecommunications and in data storage. It is part of all CD and DVD readers, RAID 6 implementations, and even most barcodes, where it provides error correction and data recovery. It also protects telemetry data sent by deep-space probes such as Voyagers I and II. And it is employed by ADSL and DTV hardware to ensure data fidelity during transmission and reception.
The algorithm is the brainchild of Irving Reed and Gustave Solomon, both engineers at MIT's Lincoln Labs. Its public introduction was through the 1960 paper "Polynomial Codes over Certain Finite Fields." Interestingly enough, that paper did not provide an efficient way to decode the error codes presented. A better decoding scheme was developed in 1969 by Elwyn Berlekamp and James Massey.
Reed-Solomon belongs to a family of error-correction algorithms known as BCH. BCH algorithms use finite fields to process message data. They use polynomial structures, called "syndromes," to detect errors in the encoded data. They add check symbols to the data block, from which they can determine the presence of errors and compute the correct values. BCH algorithms offer precise, customizable control over the number of check symbols. They have simplified code structures, making them attractive for hardware implementations.
Reed-Solomon is also a linear algorithm because it processes message data as discrete blocks. And it is a polynomial algorithm because of its use of modular polynomials in the encoding and decoding processes.
But Reed-Solomon is not without its issues. It performs poorly with large message blocks: doubling the block size quadruples the time it takes to encode or decode the message data. You can minimize this limitation by keeping blocks small and uniform. Reed-Solomon also relies heavily on modular operations, and modular operations, especially multiplication, consume more clock cycles than non-modular ones.
The Mathematics of Reed-Solomon
As stated, Reed-Solomon uses a finite field in its encoding and decoding processes. A finite (or Galois) field is a square matrix whose elements are the possible byte values for both message and error data. Its size (2^m) is always a power of two, with m being the symbol width in bits. It is also commutative: combining two of its elements with a primitive modular operator (addition, subtraction, and so on) will return another element.

Figure 1 shows a simple finite field of size 4 (m = 2). Notice its largest element is 3 (0b11), which is less than the matrix size. If you take two field elements (for example, 0b11 and 0b11) and add them, you get 0b00, which is also a field element.
Listing One shows how the Python class ReedSolomon prepares its finite fields. Here, it declares two list properties. The property __GFEXP (line 4) is the actual field, with 256 possible byte values. The property __GFLOG (line 7) is the complement field; its 256 elements are the binary weights of each field element in __GFEXP. Together, these properties help simplify modular multiplication and division, as you shall see later.
class ReedSolomon:
    # Galois fields
    # -- exponents (anti-logarithms)
    __GFEXP = [0] * 512

    # -- logarithms
    __GFLOG = [0] * 256

    # INITIALIZATION CONSTRUCTOR
    def __init__(self):
        # prepare the exponential and logarithmic fields
        self.__GFEXP[0] = 1
        byteVal = 1
        for bytePos in range(1, 255):
            byteVal <<= 1
            if (byteVal & 0x100):
                byteVal ^= 0x11d

            # update the field elements
            self.__GFEXP[bytePos] = byteVal
            self.__GFLOG[byteVal] = bytePos

        # finalize the exponential field
        for bytePos in range(255, 512):
            self.__GFEXP[bytePos] = self.__GFEXP[bytePos - 255]

    # ...continued
Next, the constructor method __init__() starts by setting element __GFEXP[0] to 1 (line 13). Then it populates the first 255 elements of both __GFEXP and __GFLOG (lines 15-22). Finally, it populates the remaining elements of __GFEXP (lines 25-26) by copying the first 256 elements of that same property.
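To see the relationship between the two tables, you can check that taking the logarithm of a field element and then the anti-logarithm returns the original element. This snippet is purely illustrative; because the properties are private, it reaches them through Python's name mangling:

rs = ReedSolomon()
gfexp = rs._ReedSolomon__GFEXP   # private fields, exposed here for demonstration
gflog = rs._ReedSolomon__GFLOG
for byteVal in (0x02, 0x1d, 0x8e):
    # the anti-logarithm of the logarithm returns the original element
    assert gfexp[gflog[byteVal]] == byteVal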
Listing Two shows how the class handles modular multiplication and division. The private methods __gfMult() and __gfDivi() get the same two arguments: argX and argY. The __gfMult() method first checks whether either argument is zero and, if so, returns a zero result (lines 9-10). Otherwise, it reads an element from __GFLOG, using argX as the list index (line 13). Then it reads another element from __GFLOG, using argY as the index, and adds that element to byteValu (line 14). Finally, it reads an element from __GFEXP, using byteValu as the index (line 15), and returns that element as the product.
class ReedSolomon:
    # ...previous listings
    #
    # Galois multiplication
    # argX, argY: multiplicand, multiplier
    # byteValu: product
    def __gfMult(self, argX, argY):
        # parameter checks
        if ((argX == 0) or (argY == 0)):
            byteValu = 0
        else:
            # perform the operation
            byteValu = self.__GFLOG[argX]
            byteValu += self.__GFLOG[argY]
            byteValu = self.__GFEXP[byteValu]

        # return the product result
        return (byteValu)

    # Galois division
    # argX, argY: dividend, divisor
    # byteValu: quotient
    def __gfDivi(self, argX, argY):
        # validate the divisor
        if (argY == 0):
            raise ZeroDivisionError()

        # validate the dividend
        if (argX == 0):
            byteValu = 0
        else:
            # perform the division
            byteValu = self.__GFLOG[argX] - self.__GFLOG[argY]
            byteValu += 255
            byteValu = self.__GFEXP[byteValu]

        # return the division result
        return (byteValu)

    # ...continued
The __gfDivi() method also checks for zero arguments. It returns a zero if argX is zero, and it raises a ZeroDivisionError if argY is zero (lines 25-30). Otherwise, it uses argX and argY to read two elements from __GFLOG. It subtracts the second element from the first (storing the difference in byteValu) and adds 255 to the difference (lines 33-34). Then it uses byteValu to access __GFEXP and returns that element as the quotient (line 35).
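Because division reverses multiplication in the field, a quick round trip shows the two methods agreeing. Again, the private methods are reached through name mangling purely for demonstration:

rs = ReedSolomon()
product = rs._ReedSolomon__gfMult(0x57, 0x13)
# dividing the product by one factor recovers the other factor
assert rs._ReedSolomon__gfDivi(product, 0x13) == 0x57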
Next, Reed-Solomon uses polynomials in its encoding and decoding processes. In Python, you can represent a polynomial as a list object (Figure 2). Each element in the list corresponds to a coefficient, and each index to a term power. Notice that the coefficient of each polynomial term is a hexadecimal number.
One important polynomial is the generator polynomial (Figure 3). This is a normalized (monic) polynomial: its leading term has a coefficient of 1. It is built as a product of simple linear factors, one for each error symbol, as the next listing shows.
Listing Three shows how the class ReedSolomon prepares a generator polynomial. The private method _rsGenPoly() gets one argument: the number of error symbols (errSize). It assigns the local polyValu a single list element of 1 (line 8). Then it builds the generator polynomial by repeatedly creating a two-term list object (polyTemp) from __GFEXP (line 11) and combining polyTemp with polyValu through _gfPolyMult() (line 12). The final value of polyValu then becomes the generator polynomial (line 15).
class ReedSolomon:
    # ...previous listings
    #
    # Prepare the generator polynomial
    # errSize: number of error symbols
    # polyValu: generator polynomial
    def _rsGenPoly(self, errSize):
        polyValu = [1]

        for polyPos in range(0, errSize):
            polyTemp = [1, self.__GFEXP[polyPos]]
            polyValu = self._gfPolyMult(polyValu, polyTemp)

        # return the polynomial result
        return (polyValu)

    # ...continued
Listing Four shows the four private methods that ReedSolomon uses to process its polynomial list objects. The method _gfPolyAdd() (lines 7-20) combines its two arguments, polyA and polyB, through modular addition. The method _gfPolyMult() (lines 25-36) combines its two arguments through modular multiplication. Notice that addition is done with exclusive-or, while multiplication is done with __gfMult().
class ReedSolomon:
    # ...previous listings
    #
    # Polynomial addition
    # polyA, polyB: polynomial addends
    # polySum: polynomial sum
    def _gfPolyAdd(self, polyA, polyB):
        # initialise the polynomial sum
        polySum = [0] * max(len(polyA), len(polyB))

        # process the first addend
        for polyPos in range(0, len(polyA)):
            polySum[polyPos + len(polySum) - len(polyA)] = polyA[polyPos]

        # add the second addend
        for polyPos in range(0, len(polyB)):
            polySum[polyPos + len(polySum) - len(polyB)] ^= polyB[polyPos]

        # return the sum
        return (polySum)

    # Polynomial multiplication
    # polyA, polyB: polynomial factors
    # polyProd: polynomial product
    def _gfPolyMult(self, polyA, polyB):
        # initialise the product
        polyProd = len(polyA) + len(polyB) - 1
        polyProd = [0] * polyProd

        # start multiplying
        for posB in range(0, len(polyB)):
            for posA in range(0, len(polyA)):
                polyProd[posA + posB] ^= self.__gfMult(polyA[posA], polyB[posB])

        # return the product result
        return (polyProd)

    # Polynomial scaling
    # argPoly: polynomial argument
    # argX: scaling factor
    # polyVal: scaled polynomial
    def _gfPolyScale(self, argPoly, argX):
        # initialise the scaled polynomial
        polyVal = [0] * len(argPoly)

        # start scaling
        for polyPos in range(0, len(argPoly)):
            polyVal[polyPos] = self.__gfMult(argPoly[polyPos], argX)

        # return the scaled polynomial
        return (polyVal)

    # Polynomial evaluation
    # argPoly: polynomial argument
    # argX: independent variable
    # byteValu: dependent variable
    def _gfPolyEval(self, argPoly, argX):
        # initialise the polynomial result
        byteValu = argPoly[0]

        # evaluate the polynomial argument
        for polyPos in range(1, len(argPoly)):
            tempValu = self.__gfMult(byteValu, argX)
            tempValu = tempValu ^ argPoly[polyPos]
            byteValu = tempValu

        # return the evaluated result
        return (byteValu)

    # ...continued
The next method, _gfPolyScale(), takes two arguments: a polynomial (argPoly) and an integer (argX). It multiplies each polynomial term by argX, using __gfMult() (lines 47-48). The method _gfPolyEval() also gets argPoly and argX as its arguments. However, it solves the polynomial for argX using Horner's scheme: it calls __gfMult() to combine the running result with argX, folds in each term with exclusive-or, and returns the tally (lines 62-65).
Encoding with Reed-Solomon
To encode a message block with Reed-Solomon, first you need to set the number of error symbols (errSize). These symbols are generated by Reed-Solomon and appended to the message block. They are later used to correct any erasures or errors found in the block.
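The encoder itself appears later in this series, but the systematic encoding idea can be sketched now: pad the message with errSize zeros, divide the padded polynomial by the generator polynomial, and append the remainder as check symbols. The method below follows the style of the earlier listings but is a hypothetical sketch, not the article's actual code.

class ReedSolomon:
    # ...previous listings
    #
    # Hypothetical systematic encoder sketch
    # argMesg: message bytes
    # errSize: number of error symbols
    def _rsEncode(self, argMesg, errSize):
        # prepare the generator polynomial
        polyGen = self._rsGenPoly(errSize)

        # pad the message with room for the check symbols
        polyTemp = list(argMesg) + [0] * errSize

        # divide the padded message by the generator polynomial;
        # the generator is monic, so its leading term is skipped
        for mesgPos in range(0, len(argMesg)):
            coef = polyTemp[mesgPos]
            if (coef != 0):
                for polyPos in range(1, len(polyGen)):
                    polyTemp[mesgPos + polyPos] ^= self.__gfMult(polyGen[polyPos], coef)

        # the remainder becomes the check symbols
        return (list(argMesg) + polyTemp[len(argMesg):])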
June 2014 - This month all classes focus on working and short term memory. Studies show that students with Down syndrome have verbal short term memory challenges and relative visual short term memory strengths. Understanding memory and interventions to improve memory can lead to improvements in targeted memory tasks and, hopefully, improved academic achievement We wrote a brief article about memory that was published in the National Down Syndrome Congress' Newsletter. You can download it here. Please let us know if you have any questions! | <urn:uuid:ff401dd6-4522-489f-b26b-2c72f91ce2ee> | CC-MAIN-2016-26 | http://www.dsfoc.org/article/junes-lp-focus | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.955241 | 93 | 3.296875 | 3 |
IFs keeps track of separate domestic, ENPRI, and world, WEP, energy price indices, that apply to all forms of energy. These are initialized to a value of 100 in the first year. It also tracks the world energy price in terms of dollars per BBOE, WEPBYEAR, which is initialized as a global parameter.
A number of pieces are needed for the calculation of energy prices. These include a world stock base, wstbase, world energy stocks, wenst, world energy production by energy type, WENP, world energy capital, WorldKen, and a global capital output ratio, wkenenpr. These are calculated as follows:
- ENSHO is domestic energy shortage (described below)
- ken is capital for each energy type
- lke is the average lifetime of capital for each energy type
In cases when at least one country has an exogenous restriction on the production of oil, i.e., enpm(oil) < 1 for at least one country, a few additional variables are calculated:
Otherwise these three variables all take on a value of 0.
These values are used to calculate an adjustment factor driven by global energy stocks that affects domestic energy prices. The effect in the current year, wmul, is calculated using the ADJSTR function, which looks at the difference between world energy stocks, wenstks and the desired level, given by dstlen * wstbase, and the change in world energy stocks from the previous year. The presence of an exogenous restriction on the production of oil has two effects on the calculation of wmul. First, the value of ShortFallSub affects the two differences that feed into the ADJSTR function. Second, the elasticities applied in the ADJSTR function are tripled.
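The section does not reproduce the ADJSTR function itself, but its described behavior can be sketched in Python: a multiplier driven by the gap between stocks and their desired level, and by the year-on-year change in stocks, each weighted by an elasticity. The functional form below is an illustrative assumption, not the model's actual code.

def adjstr(stock, desired, stock_prev, el_level, el_change):
    # term driven by how far stocks sit from their desired level
    level_term = el_level * (desired - stock) / desired
    # term driven by how stocks changed since the previous year
    change_term = el_change * (stock_prev - stock) / desired
    # multiplier applied in the price calculation (1.0 = no adjustment)
    return 1.0 + level_term + change_term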
The adjustment factor calculated in the current year is not applied directly to the calculation of domestic energy prices. Rather, a cumulative value, cumwmul, is calculated as:
Other factors affect the domestic energy price index: domestic energy stocks, possible cartel price premiums (encartpp), the first-year value of the world energy price index (IWEP), changes in the global capital-output ratio from the first year, whether the user has set a global energy price override (enprixi), and whether there are any restrictions on oil production.
The domestic energy stocks affect a country-specific “markup” factor, MarkUpEn. This starts at a value of 1 and changes as a function of the value of mul, which is calculated using the ADJSTR function. Here the differences are those between domestic energy stocks and desired stocks, given as dstlen * StBase, and the changes in energy stocks from the previous year. Shortages from the previous year are also taken into account. The user can also control the elasticities used in the ADJSTR function with the parameters epra and eprafs . This markup evolves over time as
The domestic energy price index, ENPRI, is first calculated as:
- X = enprixi, when this parameter is set to a value greater than 1 and IWEP otherwise
It is then recomputed as:
- X is 100 when there is a restriction on oil production in at least one country and 20 otherwise
Furthermore, ENPRI is not allowed to fall by more than 10 in a given year.
It is possible for the user to override this price calculation altogether. Any positive value of the exogenous country-specific energy price specification ( enprix ) will do so.
It is only now that a country’s energy stocks and shortages are finalized for the current year. If ENST is less than 0, then a shortage is recorded as ENSHO = -ENST and ENST is set to 0. In addition, for countries that have a low propensity for exports, XKAVE < 0.2, a share of any global shortfall is added to their shortage, with the share determined by the country’s share of moving average energy demand among those countries:
The energy shortage enters the Economic model in the calculation of gross sectoral production.
The same differences in domestic stock from their target level and their change since the previous year, taking into account shortages from the previous year, are used to update the value of capacity utilization in energy, CPUTF, which was introduced earlier. The multiplier affecting CPUTF, Mul, is calculated using the ADJSTR function, with elasticities given by elenpst and elenpst2 . In addition, the capacity utilization is smoothed over time.
This value is further assumed to converge to a value of 1 over a period of 100 years and is bound to always have a value between 0.2 and 2.
This still leaves the need to calculate the world energy price. IFs actually tracks a world price including carbon taxes, WEP, and a world price ignoring carbon taxes, WEPNoTax. Carbon taxes are ignored in cases where the energy price is set exogenously using enprix .
In both cases, the world energy price is a weighted average of domestic energy prices:
- WEP and WEPBYEAR convert CarTaxEnPriAdd from $/BBOE to an index value
- the term with CarTaxEnPriAdd is ignored in countries with exogenous energy prices in a given year
- CarTaxEnPriAdd is
Finally, the value of WEPBYEAR is computed as:
Nati Harnik/Associated Press
NEW YORK – Cornflakes won’t necessarily be more expensive as a result of rising corn prices, but the milk you pour over them might be.
A drought covering two-thirds of the country has damaged much of the country’s corn crops and pushed grain prices to record levels, triggering fears that a spike in food prices will soon follow.
But there are many factors that determine the price of goods on supermarket shelves. A diminished corn supply doesn’t mean that all food prices will be affected the same way.
In fact, you’re more likely to see higher prices for milk and meat than corn on the cob. That’s because the sweet corn that shoppers buy at a grocery store is grown differently and not as vulnerable to drought conditions. As for the corn that’s used as grain feed for cows, however, farmers are paying more as the drought persists.
“The financial stress is starting to mount because the bills (to feed the cows) are bigger than they were six months ago,” says Chris Galen, a spokesman for the National Milk Producers Federation. “What will consumers see as a result? That’s where it gets a little murkier.”
One major factor that complicates the equation is the amount that supermarkets decide to mark up the foods they sell to shoppers. Because supermarkets are facing stiffer competition from big-box retailers and drug stores, they’re being much more judicious about how much of their rising costs they pass on to customers.
Nevertheless, the Agriculture Department said recently that it expects grocery prices to rise between 3 percent and 4 percent next year, which is slightly higher than normal.
Here’s a look at how different foods will be affected:
Meat and dairy
In addition to paying more to feed their cows, farmers are dealing with grazing pastures that have been baked dry. The combination is resulting in farmers selling off the animals they can’t afford to feed in recent weeks, particularly since cattle supplies are already limited and beef prices have been climbing steadily in recent years.
Beef from those animals streaming into auction yards is expected to start showing up in grocery stores in November and December, temporarily driving down meat prices.
“The irony is that we could start seeing some price reductions in the short run,” says Bruce Jones, a professor of agricultural economics at the University of Wisconsin.
By early next year, however, prices are expected to spike as a result of the smaller livestock herds and dwindling meat supplies. Already, the number of cattle in the U.S. has been dropping for years, and the USDA said this month that the nation’s cattle inventory was the smallest since the agency began an annual count in 1973.
Next year, the USDA says beef prices are expected to jump 4 percent to 5 percent, making it among the biggest price increases for food. Dairy product prices are expected to climb 3.5 percent to 4.5 percent, poultry and egg prices up by 3 percent to 4 percent, and pork prices up by 2.5 percent to 3.5 percent.
Fruits and vegetables
So why isn’t anyone talking about a shortage of fruits and vegetables in light of the drought? Unlike the corn that’s grown to make animal feed and oil, produce sold in supermarkets typically is irrigated by farms and not as affected when there’s a lack of rain.
In addition, supermarkets import many of their fruits and vegetables from other countries, such as bell peppers from Holland, so that they can keep supplies and prices in check even if one source isn't producing a large amount.
Fruits and vegetables also are a loss leader for supermarkets. That means they’re often sold at a loss in hopes of attracting shoppers who will spend on other items, says Lisa Schacht, president of the Ohio Produce Growers and Marketers Association.
At farmers markets where consumers buy directly from growers, a spike in prices might be more pronounced. That’s because the relentless heat is making it harder to grow certain fruits and vegetables.
“Even if you irrigate your peppers, you’re seeing a 30 percent reduction from the heat,” says Bryn Bird, whose family owns Bird’s Haven Farms, a farm outside of Granville, Ohio. “They just don’t want to grow.”
The result is that Bird’s Haven is selling tomatoes at $3.25 per pound, compared with the $1.99 per pound that’s more typical this time of year. The types of available produce might differ, too. At Bird’s Haven, the okra and eggplant are growing fine in the heat, but the family has given up on cucumbers.
“They’re coming up, but they’re just not fruiting,” Bird says. “There are a lot of vines with nothing on them.”
As for the ears of corn sold at supermarkets, there shouldn’t be a huge spike in prices. The sweet corn that people eat typically is irrigated like other fruits and vegetables. And although the drought is pressuring farmers, it’s not to the same severity as the corn fields that produce animal feed.
Overall, the USDA projects an overall 2 percent to 3 percent price increase for fruits and vegetables next year. That’s in line with this year’s increase.
Another worry is that the price of many packaged foods that contain corn or corn ingredients will climb. High-fructose corn syrup, for example, is used in a wide variety of foods such as cookies, yogurt, cereals and spaghetti sauces. A can of regular soda contains 40 grams of the sweetener.
The corn ingredients that are used in packaged foods mostly aren’t irrigated, either, meaning they’re also vulnerable to the vagaries of weather and the price fluctuations.
But keep in mind that such ingredients are often a tiny fraction of the costs that go into packaged foods. Among the many expenses food makers such as Kellogg Co. and Kraft Foods Inc. also have to foot: packaging material, labor, advertising and fuel for trucks to get their products in stores.
Based even on today’s high corn prices, a 12-ounce box of cornflakes would have only about 8 cents worth of corn, says Paul Bertels, vice president of production and utilization at the National Corn Growers Association. That’s a very small portion of the $4 or so that consumers might pay for that box of cereal.
“When you look at final food products, the more processing there is, the less significant the price of the raw materials,” Bertels says. “A lot of it is advertising and marketing.”
Food makers also have other ways of managing their costs, such as cutting back on how much they put in a package. Even before the drought, PepsiCo Inc. says earlier this year that it put fewer chips in its Frito-Lay bags as a way to offset higher ingredient costs.
And for consumers watching their budgets, a few less chips per bag might be preferable to paying more anyway.
In a series of pithy, amusing vignettes, Aesop created a vivid cast of characters to demonstrate different aspects of human nature. Here we see a wily fox outwitted by a quick-thinking cicada, a tortoise triumphing over a self-confident hare and a fable-teller named Aesop silencing those who mock him. Each jewel-like fable provides a warning about the consequences of wrong-doing, as well as offering a glimpse into the everyday lives of the Ancient Greeks. This definitive edition is the first translation into English of the entire corpus of 358 unbowdlerized fables. It is fully annotated, with an introduction that rescues the fables from a tradition of moralistic interpretation.
This lesson will help students become good consumers and producers by taking turns buying and selling things in a classroom-created market. Students will establish prices for items and observe what happens during the sale of those items.
Students will understand what businesses are, that a marketplace exists whenever buyers and sellers exchange goods and services, and that there is competition in the market place if you have more than one seller of the same item or similar items.
It's December 16, 1773 and many of the citizens of Boston are furious with King George's new tax on tea. Young Ethan, a printer's errand boy, has been given the task of conveying information concerning an upcoming protest meeting. As he makes his rounds through the city the reader is introduced to the goods and services provided by colonial merchants. [NOTE: These lessons are based on the book "Colonial Voices Hear Them Speak" by Kay Winters. However, it is not necessary for the students to have read the book to successfully complete the activities.]
ENN Weekly: February 20th - 24th
The Week's Top Ten Articles
In the news February 20th - 24th: Logging and landslides, cultured gorillas, a new coral reef, making the best of dog waste, and much more.
1. Heavy Rains, Illegal Logging Blamed for Philippines Landslides
2. Supreme Court Raises Concern over Limiting Coverage of Clean Water Act
3. Greenland Starts Quota to Save Polar Bears
4. Study Finds Cultures Affect Captive Gorillas
5. Development May Spread Old Pesticides
6. Coral Reef Discovered off Thailand Coast
7. China Warns Officials against Covering up Pollution
8. Rising Energy Costs Illuminate Surging Fluorescent Bulb Market
9. Alaska's Mount McKinley Climbers Capped at 1,500
10. San Francisco to Test Turning Dog Waste into Power
Guest Commentary: Two Lost Worlds Give Us Hope
Dr. David Suzuki, David Suzuki Foundation
Two lost worlds were in the news last week. One was discovered halfway around the world, but the other is right here at home.
The first was a never-before examined patch of tropical rainforest deep in the heart of New Guinea. It's likely one of the most biologically diverse areas on Earth and it shows how little we really know about life on this planet.
An international team of scientists recently returned from the Foja Mountains of New Guinea having discovered 40 extremely rare mammals (including the golden-mantled tree kangaroo which was thought to have been hunted to near extinction), four new butterfly species, a new bird species, 20 new frog species and many previously unknown plant species. Having never encountered humans, some of the creatures were so unafraid of people that researchers could simply pick them up off the ground.
That places such as this still exist is cause for hope. With well over six billion people on the planet and an insatiable appetite for resources, pristine places are becoming increasingly rare and species are disappearing at an alarming rate. Yet scientists have only studied a small percentage of life on Earth. Researchers estimate that there are literally millions of species out there that we have never examined and have no clue what they do in an ecosystem. As Oxford entomologist George McGavin points out: in a tropical rainforest, every second or third insect you pick up is probably unknown to science.
The other lost world in the news last week is also a remote and incredibly diverse rainforest - but this one is in Canada. British Columbia's north and central coast, known as the Great Bear Rainforest, is unique, it is special and it contains creatures found nowhere else in the world. Most people know about the Kermode bears that live on this coast. They're a white version of the black bear, found only in this area. And their differences extend to more than just fur colour: researchers are finding that they behave differently too.
Wolves of the Great Bear are also different - smaller, more agile and specially adapted to forage for the bounty of sea life found along the shore. Then there are the salmon, which researchers have found are vital to the health of the forests and many land-dwelling creatures. Hundreds of unique runs of salmon find their way back to the Great Bear every year to spawn; their bodies providing nourishment to the wildlife, the trees and the soil.
The Great Bear Rainforest made international news last week because the B.C. government, along with First Nations, environmental groups and the forest industry, have drafted a plan to protect a portion of it. That's good news for science and good news for the people who depend on the health of this ecosystem for their livelihoods.
The story is only partially complete, however, as discussions are still underway as to what kind of logging will take place in the parts of the Great Bear outside the protected areas. This is critical because unprotected areas make up more than 70 per cent of the land base and contain the majority of salmon streams and much of the best wildlife habitat.
Scientists have only just begun to understand this magnificent region and all the life within it. The recent agreement, if combined with truly sustainable logging practices outside the protected areas, could keep this ecosystem functioning, allow economic activities such as tourism and logging to co-exist and give scientists a chance to understand more about Canada's own lost world.
It's an opportunity we would be foolish to pass up.
Take the Nature Challenge and learn more at www.davidsuzuki.org.
Source: David Suzuki Foundation
ENN welcomes a wide range of perspectives in its Commentary Series. To find out more or to submit a commentary for consideration please contact ENN's editor, Carrie Schluter: [email protected].
Photo: A boy cools off at a fountain in Hong Kong Park. Photo credit: © dwho photography, Courtesy of Photoshare. | <urn:uuid:6dd41e9f-53f4-464f-8ddb-d52db46f6f81> | CC-MAIN-2016-26 | http://www.enn.com/top_stories/article/3753 | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.949019 | 1,007 | 2.625 | 3 |
Urban Greening May Reduce Crime Rates in Cities
Urban planning is important not only for the strategic design behind a city's infrastructure: one new study finds that the landscaping itself, when it emphasizes urban greening and the introduction of well-maintained vegetation, can lower the rates of certain types of crime in cities, such as aggravated assault, robbery and burglary.
According to a Temple University study, "Does vegetation encourage or suppress urban crime? Evidence from Philadelphia, PA," researchers found that the presence of grass, trees and shrubs is associated with lower crime rates in Philadelphia.
"There is a longstanding principle, particularly in urban planning, that you don't want a high level of vegetation, because it abets crime by either shielding the criminal activity or allowing the criminal to escape," said Jeremy Mennis, associate professor of geography and urban studies at Temple. "Well-maintained greenery, however, can have a suppressive effect on crime."
Researchers established controls for other key socioeconomic factors related to crime, such as poverty, educational attainment and population density, and examined socioeconomic, crime and vegetation data, the latter from satellite imagery.
The authors conclude that this deterrent effect is rooted in the fact that maintained greenery encourages social interaction and community supervision of public spaces, as well as the calming effect that vegetated landscapes may impart, thus reducing psychological precursors to violent acts. They offer their findings and related work as evidence for urban planners to use when designing crime prevention strategies, which is especially important in an age when sustainability is valued.
Mennis said rather than decreasing vegetation as a crime deterrent, their study provides evidence that cities should be exploring increasing maintained green spaces.
Increasing vegetation not only supports sustainability, but improves the aesthetics in a community creating a certain pride and respect for the neighborhood which is a nice complement to many city initiatives, explains Mennis.
By adding vegetation, stormwater runoff is also reduced which is a major source of pollution for city sewer systems and surface waters. "Reducing stormwater runoff, improving quality of life, reducing crime — all of these objectives are furthered by increasing well-managed vegetation within the city."
"If you see well-maintained window boxes, gardens, lawns and community spaces it gives the impression of a stable, healthy community - people are watching out for that neighborhood," said Eva Monheim, instructor in Landscape Architecture and Horticulture at Temple University. "Broken window panes, unmaintained vegetation - weeds and tall grass - give the opposite impression: a neighborhood in decline."
The study was published in the journal, Landscape and Urban Planning.
Read more at Temple University.
City model image via Shutterstock. | <urn:uuid:95b98c24-ffef-4f4c-839b-7d8f12c8b32f> | CC-MAIN-2016-26 | http://www.enn.com/top_stories/article/45768/print | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.946223 | 547 | 2.65625 | 3 |
The executive branch of the United States government is a massive organization. The vast majority of all government workers are part of the executive branch.
The executive branch as a whole is under the supervision of the president. The president is the head of the entire branch. Under him (or someday her) are the cabinet secretaries. They are the heads of all the departments of the government.
Under the cabinet secretaries are all of the rest of the executive branch of the government. For example, all of the people who work for the FBI are part of the Department of Justice. Therefore, they are all members of the executive branch. As another example, everyone who works for the Internal Revenue Service (the agency in charge of taxation) is a member of the Department of the Treasury. The people who work in national parks are all part of the Department of the Interior. Even the military is part of the executive branch. They are part of the Department of Defense.
Thus, everyone whose job is in some way connected to carrying out federal laws is a part of the executive branch. That branch is the massive organization that is charged with executing all of our laws.
In Chapter Four, Steinbeck gives Crooks a thorough introduction, particularly in describing Crooks himself and his living quarters. Since Crooks is black, he is forced to live apart from the other white workers; clearly, racism was a part of this culture and era. Crooks lives/sleeps in a shed attached to the barn. Crooks was also excluded from other things at the ranch. While the rest of the workers are in town, Crooks stays behind claiming he isn't wanted. Having become accustomed to being excluded, Crooks has become a loner himself as if to accept his isolation or to have some control over it.
This room was swept and fairly neat, for Crooks was a proud, aloof man. He kept his distance and demanded that other people keep theirs.
It is fitting then that Steinbeck gives Crooks his own introduction, thereby presenting him as he is in the novel: somewhat isolated from the others.
Crooks has a back ailment, a crooked spine, and he has to rub ointment on it every night. He must deal with being a social outcast in addition to dealing with a physical ailment. Crooks is one who suffers yet perseveres. And being that he is often isolated, he suffers alone, seemingly with no hope.
So it is unlike Crooks to allow Lennie (and then Candy) into his bunk. And although Crooks initially criticizes Lennie and the dream of owning a farm, Crooks eventually opens up a little bit. Even Crooks, in his solitude has not lost all hope of having a better life. When Candy and Lennie talk more about the farm, Crooks reluctantly offers to help:
He hesitated. ". . . If you . . . guys would want a hand to work for nothing--just his keep, why I'd come an' lend a hand. I ain't so crippled I can't work like a son-of-a-bitch if I want to."
Unfortunately, Curley's wife interrupts this moment and after she threatens Crooks, he retreats back to his original demeanor and attitude of being aloof and keeping his distance.
It is fitting that in this chapter, Crooks is befriended by Lennie and Candy. Lennie is a social outcast because he is socially awkward. In his innocence and mental disability, he often gets into trouble, often violent trouble on account of not understanding his own strength. Candy is the aging ranch hand, fearful that he will be too old to work (in the eyes of others) and therefore he feels like a potential outcast (being fired) in the future. All three are treated as different, "other," or unwanted in some way. Crooks is black, Candy is old, and Lennie is mentally challenged. And yet they come together in a fitting place, Crooks' isolated bunk, to discuss one last dream.
Curley's wife interrupts talk of this dream, but her presence is somewhat fitting as well. She later reveals to Lennie that she had dreams herself but married Curley and now finds herself stuck at a ranch with nothing to do but flirt and talk with the other ranchers.
The chapter ends with Crooks rubbing ointment on his back and this symbolizes his reluctant acceptance of his role as the isolated, ailing worker on the ranch. It is more melancholy knowing that Crooks had at least entertained the idea and hope of following George, Lennie, and Candy to a better life on a new farm, a place where he would probably not be persecuted the way he is in his current situation.
For Ellis, the idea of calling the new leaders of the new nation "brothers" helps to convey the sense of connection and shared experiences that these individuals shared. Fatherhood does not convey this connection as much as brotherhood does. Ellis believes that this particular group was Revolutionary because they held a certain esteem for one another and revered the bond between one another, an experience that transcended political reality. It is for this reason that Ellis argues that they were perfectly equipped to handle the challenges of rebelling against England and forming a new nation. These "brothers" understood political differences not to be personal ones. Ellis suggests that such connection between one another helped to enhance the idea that character was essential in order for the public to accept the legitimacy of the new government. They were "brothers" because they understood that while intensity of political issues can divide them, it cannot divide the nation. When Jefferson invites a feuding Hamilton and Madison to his home for dinner to work out disagreements, it is akin to an older brother moving to bring consensus to arguing siblings. It is in examples such as this one where Ellis' term of "brother" becomes poignant and emotionally heroic to describe the men who are credited with founding the nation.
Europe reaches new frontier – Huygens lands on Titan
ESA PR 03-2005. Today, after its seven-year journey through the Solar System on board the Cassini spacecraft, ESA’s Huygens probe has successfully descended through the atmosphere of Titan, Saturn’s largest moon, and safely landed on its surface.
The first scientific data arrived at the European Space Operations Centre (ESOC) in Darmstadt, Germany, this afternoon at 17:19 CET. Huygens is mankind’s first successful attempt to land a probe on another world in the outer Solar System. “This is a great achievement for Europe and its US partners in this ambitious international endeavour to explore the Saturnian system,” said Jean-Jacques Dordain, ESA’s Director General.
Following its release from the Cassini mothership on 25 December, Huygens reached Titan’s outer atmosphere after 20 days and a 4 million km cruise. The probe started its descent through Titan’s hazy cloud layers from an altitude of about 1270 km at 11:13 CET. During the following three minutes Huygens had to decelerate from 18 000 to 1400 km per hour.
A sequence of parachutes then slowed it down to less than 300 km per hour. At a height of about 160 km the probe’s scientific instruments were exposed to Titan’s atmosphere. At about 120 km, the main parachute was replaced by a smaller one to complete the descent, with an expected touchdown at 13:34 CET. Preliminary data indicate that the probe landed safely, likely on a solid surface.
The probe began transmitting data to Cassini four minutes into its descent and continued to transmit data after landing for at least as long as Cassini was above Titan’s horizon. Certainty that Huygens was alive came as early as 11:25 CET today, when the Green Bank radio telescope in West Virginia, USA, picked up a faint but unmistakable radio signal from the probe. Radio telescopes on Earth continued to receive this signal well past the expected lifetime of Huygens.
Huygens data, relayed by Cassini, were picked up by NASA’s Deep Space Network and delivered immediately to ESA’s European Space Operation Centre in Darmstadt, Germany, where the scientific analysis is currently taking place.
“Titan was always the target in the Saturn system where the need for ‘ground truth’ from a probe was critical. It is a fascinating world and we are now eagerly awaiting the scientific results,” says Professor David Southwood, Director of ESA’s scientific programmme.
“The Huygens scientists are all delighted. This was worth the long wait,” says Dr Jean-Pierre Lebreton, ESA Huygens Mission Manager. Huygens is expected to provide the first direct and detailed sampling of Titan’s atmospheric chemistry and the first photographs of its hidden surface, and will supply a detailed ‘weather report’.
One of the main reasons for sending Huygens to Titan is that its nitrogen atmosphere, rich in methane, and its surface may contain many chemicals of the kind that existed on the young Earth. Combined with the Cassini observations, Huygens will afford an unprecedented view of Saturn’s mysterious moon.
“Descending through Titan was a once-in-a-lifetime opportunity and today’s achievement proves that our partnership with ESA was an excellent one,” says Alphonso Diaz, NASA Associate Administrator of Science.
The Cassini-Huygens mission is a cooperation between NASA, the European Space Agency and ASI, the Italian space agency. The Jet Propulsion Laboratory (JPL), a division of the California Institute of Technology in Pasadena, is managing the mission for NASA’s Office of Space Science, Washington. JPL designed, developed and assembled the Cassini orbiter.
“The teamwork in Europe and the USA, between scientists, industry and agencies has been extraordinary and has set the foundation for today’s enormous success,” concludes Jean-Jacques Dordain. | <urn:uuid:ebeefdcc-6e5f-449f-aaf4-170df83c570e> | CC-MAIN-2016-26 | http://www.esa.int/ESA_in_your_country/Ireland/Europe_reaches_new_frontier_Huygens_lands_on_Titan/(print) | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.923405 | 870 | 3.3125 | 3 |
Envisat concludes a busy second year in orbit
On the last night of February 2002 ESA's Envisat - the largest and most sophisticated Earth Observation spacecraft ever built - swapped the tropical atmosphere of French Guiana for orbital vacuum, as it was shot 800 km into the sky by Ariane 5 launcher.
Two years on this lorry-sized spacecraft operates nominally in the extreme environment of space; it circles the world every hundred minutes at a speed of seven kilometres per second while its ten onboard instruments gather a mass of data about the terrestrial environment.
More than 70 different types of information products derived from Envisat data are now available. Fresh products will add to this number during the coming year, while existing products undergo a process of continuous improvement. The products represent valuable tools for researchers and service providers requiring knowledge about the current state of our planet.
Seeing more of the world
One major development since Envisat's first year occurred last spring when ESA's Artemis telecommunication satellite finally reached its assigned station, in geostationary orbit 36,000 km above the Congo Basin. During its launch in July 2001 Artemis was placed in the wrong orbit, and mission controllers had to spend the next 18 months nursing the spacecraft up to its intended position.
Envisat had always been designed to work in conjunction with Artemis. Without it the amount of data Envisat could return to Earth was constrained by onboard storage limits as well as bottlenecking at the overworked Kiruna ground station in Sweden - even after a back-up downlink capability was added to a neighbouring ground station at Svalbard in the Norwegian Arctic.
By last summer Envisat could relay data via Artemis to ESRIN in Frascati, where it is processed and distributed to users. The result is that Envisat now returns sufficient data to image the entire surface of the world during each and every orbit.
"We have seen the data returned by the Medium Resolution Imaging Spectrometer (MERIS) in its full resolution mode increase threefold since the summer," says Envisat Mission Manager Henri Laur. "And the data yield from the Advanced Synthetic Aperture Radar (ASAR) instrument has doubled.
"Envisat's ASAR returns four times more data than its equivalent instrument on ERS-2, and in up to 37 different sub-modes. One of the most used modes is called 'ERS-like' because of its ability to extend the ERS SAR archive, and in this mode alone ASAR produces more data than its predecessor instrument."
Overall a total of 140 gigabytes of information products are derived from Envisat data daily, equivalent to 50 terabytes a year – sufficient in paper form to fill the largest library on Earth, the United States Library of Congress, more than two times over. Depending on their size, processed products are disseminated to users either via internet, DVD or CD-Roms, or relayed via communication satellite.
Data use doubled
During the last 12 months the number of projects worldwide making use of this data has more than doubled to 450.
"This number encompasses many different activities and requirements," explains Laur. "Projects range from small scientific teams making use of limited scientific data up to the heavy data demands and dedicated dissemination of systems like those making up the Global Environment for Environment and Security (GMES) Services Element."
The GMES Services Element is the first programme of a joint ESA-European Union initiative intended to enhance global monitoring capabilities to support European policy goals.
In the longer term the initiative will have its own dedicated missions, but for now the multiple-sensor capability of Envisat plays an important role in supplying GMES services coming on stream with the data they require: from the advance of ice floes in the Canadian Arctic to the damage done by forest fires and subsidence events in city streets.
Mapping the atmosphere in near-real time
In addition, during each and every orbit Envisat instruments provide continuously updated measurements of many of the fastest-changing components of the Earth system. Delivered rapidly to end users, these measurements serve as the basis for a variety of near-real time services.
The swiftest changes of all occur in the atmosphere, but data from Envisat enables regularly updated mapping of the concentration of otherwise invisible trace chemicals in the air. These maps are available to users worldwide over the internet.
As part of the Netherlands-hosted Tropospheric Emission Monitoring Internet Service (TEMIS) Envisat's German-backed SCIAMACHY (SCanning Imaging Absorption Spectrometer for Atmospheric Cartography) sensor routinely supplies data for a daily map of global ozone thickness. Because stratospheric ozone blocks harmful ultraviolet radiation, a four-day ultraviolet radiation forecast can in turn be derived from these results.
The website of the Florence-based Institute for Applied Physics (IFAC) enables researchers to browse atmospheric maps derived from near-real time results from the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS). Users can specify the time, chemical species and atmospheric altitude they wish to see.
"The site enables us to visualise our analysis of MIPAS data in different ways as needed," explains Bruno Carli of IFAC. "It started as an internal tool for our monitoring work as part of the quality working group for MIPAS level 2 data, but it is now accessible to everyone."
Another near-real time website service, the Belgian Assimilation System of Chemical Observations from Envisat (BASCOE), also assimilates MIPAS data, forecasting the concentrations of ozone and 56 other chemical species.
Watching ocean waters
Envisat instruments also monitor the ever-changing features of the ocean surface. These observations help to make other near-real time services possible, including a French system called Mercator Ocean, which provides analysis and forecasts of ocean circulation around Europe up to a fortnight ahead.
The service has a variety of users, from oceanographers to competitive yachtsmen, and is the French contribution to the international Global Ocean Data Assimilation Experiment (GODAE) initiative, intended to create a real time four-dimensional planetary ocean current model.
Mercator makes use of near real time results from Envisat's Radar Altimetry-2 (RA-2), which bounces thousands of radar pulses each second off the sea surface in order to detect ocean eddies and currents.
"Our model for forecasting has a maximum five km mesh size, which is demanding in terms of computing power, but which we need to resolve surface features like major currents and see their meanders," says Pierre Bahurel, Director of Mercator Ocean. "It is also demanding in terms of observations, to keep the model corresponding to actual conditions.
"That is why space-based altimeters are so important, because they give us a continuous supply of data to assimilate. The different satellites we rely on compliment each other, so, while the French-American Jason-1 has a more frequent revisit time, Envisat has a higher spatial resolution.
"We are also working to assimilate further Envisat products into our model, such as near-real time sea surface temperature measurements taken by the Advanced Along Track Scanning Radiometer (AATSR)."
RA-2 ocean wave height data, plus Envisat ozone measurements, gets assimilated operationally into the ten-day forecast model of the European Centre for Medium-Range Weather Forecasts (ECMWF), with plans to make additional use of ASAR wave mode in future.
ASAR homes in on earthquake zones
The improved amount of data returned by Envisat has improved the global reach of the spacecraft's imaging instruments. Near real time ASAR products are currently put to numerous uses including iceberg monitoring and ship tracking.
And the archive of ASAR imagery has grown greatly and can be 'mined' for many other applications. For example, part of Envisat's 'background mission' has involved prioritising the acquisition of images of the 15% or so of the planet's land surface classed as seismically active.
The policy paid off in the aftermath of the tragic Bam earthquake that took place in Iran in December 2003. Before and after images of the quake zone were successfully merged using a technique called radar interferometry to identify very tiny ground movements occurring between acquisitions, allowing geologists to work out the location and extent of the subsurface fault responsible.
MERIS tracks algae blooms
Meanwhile regularly updated MERIS images of the Southern Ocean off the coast of Chile are being used together with AATSR sea surface temperature products to identify and forecast the distribution of potentially hazardous algae blooms.
MERIS is an ocean colour sensor optimised to detect chlorophyll pigments in the water – an indication of the presence of microscopic organisms called phytoplankton. In certain conditions the growth of these organisms 'blooms' out of control.
Interest in bloom tracking goes beyond the purely scientific, because some species of algae contain toxins that can poison marine life as well as any humans that eat them. Alternatively algae blooms can exhaust water of oxygen, suffocating larger fish.
Fish farms are particularly vulnerable to algae blooms because the fish cannot flee affected areas. In the past algae blooms have caused millions of Euros worth of annual losses to the 360 fish farms found in the southern region of Chile.
The country's national fish farming association, Salmon Chile, has co-founded a pilot scheme with oceanography firm Mariscope Chilena to investigate the feasibility of an operational satellite early warning service for algae blooms.
"Since the end of last year, high concentrations of phytoplankton have been measured from in situ water samples and online detectors, as well as with remote sensing," says Dr Cristina Rodríguez-Benito of Mariscope Chilena. "MERIS images demonstrated the presence of blooms from the middle of December.
"Some show intense patches in several areas, indicating the blooms do not only originate locally but may be influenced by meso-scale phenomena. The blooms are already affecting some of the main aquaculture companies and losses of fish have started."
Salmon were found starved of oxygen within their cages off the big island of Chiloé. In addition, more than four hundred people were reported poisoned by contaminated shellfish.
"Beyond our current scientific study, the next step would be to secure delivery of near real time products within 24 or 48 hours after satellite acquisition for integration into an operational and commercial service."
Images of the year
Throughout the last year Envisat has been returning a steady stream of striking images and scientific data showing the ever-changing face of our planet. Some of the most memorable results of the year include:
The splitting of the gigantic B-15 iceberg off the Antarctic coast, captured by a sequence of ASAR images as it was ripped in two by winter storms.
In September MERIS acquired Hurricane Isabel looming up on the United States coast, while sister satellite ERS-2 mapped the wind fields powering the storm.
During Europe's long hot summer of 2003 Envisat's Advanced Along Track Scanning Radiometer (AATSR) showed the Mediterranean 'hot flush' that occurred as a result.
In the aftermath of the worst Portuguese forest fires for two decades, MERIS images mapped burn scar areas, quickly quantifying the overall scale of damage done.
MERIS imagery showed the current state of the fast-shrinking Aral Sea in Central Asia, as well as producing a spectacular composite image of the entire planet Earth.
It is quite common to find maps on which a hillshaded surface is overlain with a transparent colored thematic layer. The thematic layer could be soils, land use, vegetation, or other types of phenomena, but it is often elevation (figure 1). Using what is called an elevation tint, ranges of elevation are assigned different colors that mimic what you might see on the ground. Greens in the low-lying valleys transition smoothly to light browns at the lower rocky elevations, which blend into darker browns in the higher treeless areas, and finally change to white on the snowcapped peaks.
This is a great way to display elevation over hillshade, but a couple of serious problems often arise: the gray in the hillshade mutes the colors of the elevation tint, and the detail in the hillshade becomes obscured by the overlaid theme (figure 2).
By using a set of functions for displaying raster data in ArcGIS 10 for Desktop, you can easily display colored tints on hillshades without losing the original colors and the hillshade details (figure 3).
Combining the layer tint and hillshade to create the desired result involves specifying a set of functions for a mosaic dataset. You can specify these functions once you have created and added data to a mosaic dataset. When defining functions for a mosaic dataset, you will add new functions to the previously defined ones. They will appear in the Mosaic Dataset Properties dialog box on the Functions tab. The most recently defined function will appear at the top. The final function chain for the method described here for displaying an elevation tint and hillshade will look like the one in figure 4.
Step 1. Make sure that the data is all positive values. If you already know your data contains no negative values, go to step 2. Otherwise, in ArcCatalog, right-click the mosaic dataset and click Properties. On the Functions tab, right-click Mosaic Function > Insert > Arithmetic Function. Set Operation to Plus, set Raster to Raster 2, and set the constant to the absolute value of the lowest number in your dataset (for example, if the lowest value is 12,000, set the constant to 12000). In ArcGIS 10.1, you will also be able to use data with negative values, so when you upgrade to the new version, you can skip this step.
Step 2. To use the color map function in step 3, you must make sure that the data is defined as 16-bit unsigned. In ArcCatalog, right-click Mosaic Function (or, if you performed step 1, right-click Arithmetic Function) and click Properties. On the General tab of the Raster Function Properties dialog box, change Output Pixel Type to 16 Bit Unsigned if it is not already set to this. Click OK.
Step 3. Apply the color map function. A color map is a text file that contains a color specification for each elevation value. On the Functions tab, right-click Mosaic Function > Insert > Colormap Function. Browse to the color map file. (You can download a color map file for ETOPO1 and other elevation data from the Esri Mapping Center. Note that all values in the color map file must be positive. If they are not, edit the file by adding the same constant used in step 1 to all values.) Click OK.
Step 4. Convert the single-band raster to a three-band raster so that you can use the pan-sharpening function later in this process (step 8). Right-click Colormap Function > Insert > Colormap To RGB Function. Keep the default value.
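Conceptually, steps 3 and 4 amount to a table lookup followed by stacking the looked-up colors into three bands. Here is a minimal numpy sketch of that idea (not Esri's implementation); the elevation breakpoints and colors are invented for illustration, whereas a real color map file, such as the ETOPO1 one mentioned above, supplies the actual entries.

```python
import numpy as np

# Illustrative breakpoints only -- not values from a real ETOPO1 color map file.
colormap = {
    0: (113, 171, 216),      # deep water blue
    8000: (41, 98, 19),      # lowland green
    9500: (181, 154, 110),   # foothill brown
    11000: (255, 255, 255),  # snowcapped white
}

def apply_colormap(dem, colormap):
    """Return a 3-band RGB raster: each cell gets the color of the
    nearest breakpoint at or below its elevation value."""
    keys = np.array(sorted(colormap))
    colors = np.array([colormap[k] for k in keys], dtype=np.uint8)
    idx = np.clip(np.searchsorted(keys, dem, side="right") - 1,
                  0, len(keys) - 1)
    return colors[idx]  # shape (rows, cols, 3)

dem = np.array([[500, 8200], [9600, 12000]], dtype=np.uint16)  # step 2: positive, 16-bit
rgb = apply_colormap(dem, colormap)  # steps 3-4: color map, then three RGB bands
```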
Step 5. Make sure the data is defined as 8 bit unsigned so that you can use the pan-sharpening function in the next step. On the General tab, change Output Pixel Type to 8 Bit Unsigned. Click OK.
Step 6. Apply the pan-sharpening function. On the Functions tab, right-click Colormap To RGB Function > Insert > Pansharpening Function. For panchromatic, select the hillshade, if you have one. If you do not, select the DEM. Change Method to Simple Mean. Keep the rest of the defaults and click OK.
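It is the pan-sharpening step that re-introduces the hillshade detail into the tinted image. A hedged numpy sketch of what a Simple Mean blend does, assuming the method is a plain per-band average and ignoring the resampling a real pan-sharpen would perform:

```python
import numpy as np

def pansharpen_simple_mean(rgb, pan):
    """Blend an 8-bit RGB raster with an 8-bit grayscale raster
    (hillshade or DEM) by averaging each color band with the gray band."""
    blend = (rgb.astype(np.float32) + pan.astype(np.float32)[..., np.newaxis]) / 2.0
    return np.clip(blend, 0, 255).astype(np.uint8)
```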
Step 7. If you did not select a hillshade in the previous step, right-click DEM > Insert > Hillshade. Keep the defaults and click OK. If the data is in a geographic coordinate system, change the Z Factor. (Refer to the Esri Mapping Center blog "Setting the Z Factor Parameter Correctly" for a list of Z factor values.)
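If you need to generate the hillshade yourself, the standard analytical formula is straightforward. The sketch below is a common textbook variant, not Esri's code; the 315/45 degree sun position and the gradient sign conventions are assumptions:

```python
import numpy as np

def hillshade(dem, cellsize=1.0, z_factor=1.0, azimuth=315.0, altitude=45.0):
    """Analytical hillshade of a DEM for a sun at (azimuth, altitude)."""
    az = np.radians(360.0 - azimuth + 90.0)  # compass azimuth -> math angle
    zen = np.radians(90.0 - altitude)        # sun altitude -> zenith angle
    dy, dx = np.gradient(dem.astype(np.float32) * z_factor, cellsize)
    slope = np.arctan(np.hypot(dx, dy))
    aspect = np.arctan2(dy, -dx)
    shade = (np.cos(zen) * np.cos(slope) +
             np.sin(zen) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shade * 255.0, 0, 255).astype(np.uint8)
```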
Step 8. Apply the stretch function. Right-click Pansharpening Function > Insert > Stretch Function. Change Type to Minimum-Maximum. Check the Use Gamma option. Input 0.5 as the Gamma value for bands 1, 2, and 3. Input 10 and 220, respectively, as the Min and Max Statistics values for each of the three bands. (After you have checked the results, feel free to experiment with the Gamma, Min, and Max values in the stretch function.) Click OK.
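The stretch compensates for the darkening introduced by averaging in the grayscale band. Below is a hedged sketch of a minimum-maximum stretch followed by gamma correction; it assumes the convention out = in**gamma, under which the gamma of 0.5 used above brightens midtones (if the software instead uses out = in**(1/gamma), the parameter's effect is inverted):

```python
import numpy as np

def minmax_gamma_stretch(band, lo=10.0, hi=220.0, gamma=0.5):
    """Rescale [lo, hi] to [0, 1], apply gamma, and return 8-bit output.
    lo/hi mirror the Min/Max statistics entered in step 8."""
    x = (band.astype(np.float32) - lo) / (hi - lo)
    x = np.clip(x, 0.0, 1.0) ** gamma  # under this convention, gamma < 1 brightens
    return (x * 255.0).astype(np.uint8)
```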
Step 9. To see the results, click OK. If you do not see any results it is probably because the pixel sizes are not within the minimum-maximum that are automatically set for the mosaic dataset. To correct this in ArcMap, right-click Footprint in the table of contents > Open Attribute Table, right-click the MaxPS field, and click Field Calculator. For "MaxPS =", input [MaxPS] * 10000 and click OK. (For more information about what scalar value to use in this equation, read the online help topic Cell size ranges in a mosaic dataset.) Close the attribute table.
Using the steps above, you will see results that are much closer to your original data than using the more common transparent overlay method. With this new technique, your original colors will be sharp and brilliant, and your hillshade will convey the detail you originally wanted to show. Even though this method requires a few additional steps, the outcome is worth it for this reason alone. But you will also save time by not having to adjust the transparency values to find a good result with the overlay method. Now you don't have to imagine how to compensate for those shortcomings—you know that the colored raster and the hillshade that you input will be shown exactly the same way in the output.
While the example discussed here is shown with a hillshade and an elevation tint, you can use the same method to display any colored raster along with any grayscale raster. Consider that this method could also be used with other grayscale rasters, such as panchromatic imagery and hillshades of surfaces other than elevation (for example, a density surface). When you overlay your colored layer on them, you will get the same results—the detail in your grayscale raster and the colors you originally selected.
Try this yourself and see how easy and useful this is! | <urn:uuid:6e85118b-c70e-4ebf-b516-69dc84b0fe05> | CC-MAIN-2016-26 | http://www.esri.com/news/arcwatch/0312/learn-a-new-method-for-displaying-hillshades-and-elevation-tints.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.839469 | 1,458 | 3.453125 | 3 |
- chain (n.)
- c. 1300, from Old French chaeine "chain" (12c., Modern French chaîne), from Latin catena "chain" (source also of Spanish cadena, Italian catena), which is of unknown origin, perhaps from PIE root *kat- "to twist, twine" (source also of Latin cassis "hunting net, snare").
Figurative use from c. 1600. As a type of ornament worn about the neck, from late 14c. Chain of stores is American English, 1846. Chain gang is from 1834; chain reaction is from 1916 in physics, specific nuclear physics sense is from 1938; chain mail first recorded 1822, in Scott, from mail (n.2). Before that, mail alone sufficed. Chain letter recorded from 1892; usually to raise money at first; decried from the start as a nuisance.
Nine out of every ten givers are reluctant and unwilling, and are coerced into giving through the awful fear of "breaking the chain," so that the spirit of charity is woefully absent. ["St. Nicholas" magazine, vol. XXVI, April 1899]
Chain smoker is attested from 1886, originally of Bismarck (who smoked cigars), thus probably a loan-translation of German Kettenraucher. Chain-smoking is from 1930.
- chain (v.)
- late 14c., "to bar with a chain; to put (someone) in chains," also "to link things together," from chain (n.). Related: Chained; chaining. | <urn:uuid:fe4b7154-6cfd-425a-b5f7-8c6326903e8b> | CC-MAIN-2016-26 | http://www.etymonline.com/index.php?term=chain&allowed_in_frame=0 | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.94859 | 334 | 3.140625 | 3 |
Cyclone Haruna strengthened into a cyclone and quickly developed an eye that became apparent on visible and infrared imagery from NASA's Aqua satellite. NASA's TRMM satellite analyzed Haruna's heavy rainfall, and NASA and NOAA's Suomi NPP satellite captured a night-time image that verified the strongest areas of the storm.
On Feb. 20 at 1111 UTC (6:11 a.m. EST/U.S.) the AIRS instrument aboard NASA's Aqua satellite captured this infrared image of Tropical Cyclone Storm Haruna. The area of strongest thunderstorms circled the eye and had cloud top temperatures colder than -63F (-52C). Those cold cloud top temperatures indicated strong storms with heavy rainfall, which was verified by NASA's Tropical Rainfall Measuring Mission (TRMM) satellite.
The Moderate Resolution Imaging Spectroradiometer (MODIS) instrument that flies with AIRS aboard NASA's Aqua satellite captured a visible image of Tropical Storm Haruna on Feb. 20 at 1115 UTC (6:15 a.m. EST) that revealed its large eye.
NASA's TRMM satellite flew above intensifying tropical storm Haruna on February 20, 2013 at 0717 UTC (2:17 a.m. EST). A rainfall analysis was created using data from TRMM's Microwave Imager (TMI) and Precipitation Radar (PR) instruments overlaid on a combination visible/infrared image from the Visible and InfraRed Scanner (VIRS). The analysis showed that Haruna had become much better organized since Feb. 19 and developed intense bands of rainfall circling the cyclone's center. Some rain in powerful storms on the northern edge of Haruna's center was found by TRMM PR to be falling at a rate of over 108 mm (~4.25 inches) per hour.
NASA's TRMM Precipitation Radar (PR) was used to create a 3-D image that sliced through tropical storm Haruna's center. TRMM data showed that towering thunderstorms on the northern edge of Haruna's center were over 14.25 km (~8.85 miles) high.
NASA-NOAA's Suomi NPP satellite captured infrared night-time data of Cyclone Haruna on Feb. 19 at 2303 UTC (2 a.m. local time Madagascar on Feb. 20). The data was false-colored at the University of Wisconsin Madison and showed the coldest cloud top temperatures and heaviest rainfall north of the center of circulation, verifying NASA's TRMM satellite data.
At 1500 UTC (10 a.m. EST) on Feb. 20, Haruna reached hurricane (or cyclone)-force with maximum sustained winds near 70 knots (80 mph/129.6 kph). Haruna is centered near 22.1 south latitude and 40.7 east longitude, about 400 nautical miles (460 miles/741 km) west-southwest of Antananarivo, Madagascar. Haruna is moving to the west at 4 knots (4.6 mph/7.4 kph) and generating 25-foot-high (7.6 meter-high) waves.
Forecasters at the Joint Typhoon Warning Center expect Haruna to make a brief landfall near Androka in the southwestern part of Madagascar as the storm heads southeast into the open waters of the southern Indian Ocean. | <urn:uuid:f1056661-f1ae-49c4-b854-6557018ea991> | CC-MAIN-2016-26 | http://www.eurekalert.org/pub_releases/2013-02/nsfc-tns022013.php | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.897845 | 685 | 2.984375 | 3 |
Smoking during pregnancy - particularly among economically-disadvantaged women - leads to a host of poor pregnancy outcomes, including miscarriage, preterm birth, SIDS, and additional adverse effects later in life. Without a formal treatment intervention, women in this population continue to smoke, and their babies suffer. Vermont Center on Behavior and Health Director Stephen Higgins, Ph.D., and colleagues, have developed an effective behavioral economic approach that offers women financial incentives for quitting.
The groups' most recent findings, published online this month in Preventive Medicine, demonstrated that providing incentives more than doubled smoking abstinence rates during pregnancy and increased fetal growth. They also examined whether altering the way the incentives were offered might get still more women to quit, without increasing costs, but that strategy was not successful.
About 23 percent of women of childbearing age are regular smokers, says Higgins, a professor of psychiatry and psychology at the University of Vermont, who adds that smoking prevalence varies by socioeconomic status - particularly in terms of educational attainment.
"More than 40 percent of women with a high school GED report regular smoking versus eight percent of college graduates and six percent of those with graduate degrees," Higgins says. While about 20 percent of smokers quit without formal treatment soon after learning of a pregnancy, the vast majority of the largest segment of this group smoke through the pregnancy if there is no formal intervention.
Participants in the study, which took place between 2006 and 2012, included 118 pregnant women smokers from the greater Burlington, Vt., area. For this project, Higgins and his team adapted a previously-proven voucher-based incentive intervention to decrease smoking during pregnancy and increase fetal growth, without increasing the cost of the intervention. Study participants were randomized to receive either of two incentive interventions or to a control condition that received comparable incentives even if they were unable to quit. Two serial ultrasound examinations were performed at approximately 30- and 34-week gestation to estimate fetal growth.
"This trial provides further evidence that providing financial incentives for quitting can increase smoking abstinence in economically-disadvantaged pregnant smokers and increase the healthy growth of their fetuses," says Higgins. "We still need to find a way to get a larger percentage of women/infants to benefit from the intervention, but these study results show we're on the right track."
Colleagues across the U.S. and beyond are paying attention to this research, says Higgins, who notes evaluations underway in Oregon and Wisconsin exploring the use of incentives among women insured by Medicaid. The United Kingdom began exploring adaptations of this voucher-based incentive approach several years ago. | <urn:uuid:9489455e-5059-4d12-9d58-bb46eaa0b5c7> | CC-MAIN-2016-26 | http://www.eurekalert.org/pub_releases/2014-04/uov-fih041914.php | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.965299 | 521 | 3.046875 | 3 |
MONDAY, June 11, 2012 (MedPage Today) — The two leading causes of death in the U.S. — cancer and heart disease — accounted for nearly half of all deaths in 2008, the latest year for which data are available, according to a CDC report.
Heart disease accounted for 25 percent of all deaths, while cancer came in second at 23 percent, according to research by Melonie Heron, PhD, of the Department of Health and Human Services.
The finalized statistics for 2008 show that, all together, the top 10 causes of death in the U.S. accounted for 76 percent of all U.S. mortalities, Heron reported in the National Vital Statistics Reports.
Among those top 10 killers, seven saw relative increases in number from 2007 to 2008, with the largest increases seen in Alzheimer's disease, chronic lower respiratory diseases (CLRD), influenza and pneumonia, suicide, and kidney diseases.
Decreases in three of the top 10 causes — death by unintentional injury, stroke, and diabetes — were reported from 2007 to 2008. However, as a result of the increases in seven categories, the total number of deaths for all top 10 causes increased, from 2.42 million in 2007 to 2.47 million in 2008.
Data were collected through all death certificates filed in 50 states and the District of Columbia, with causes of death classified by the International Classification of Diseases, Tenth Revision.
Ranking from most to least common remained unchanged between years, except for a switch between stroke — formerly the third-most-common killer — and CLRD — formerly fourth-most-common.
Heart disease and cancer were the most common killers for both men and women, though women had lower percentages of each, while CLRD was fourth for both at roughly the same rate. Unintentional injuries accounted for men's third greatest cause of mortality, while stroke was women's third highest killer.
By race, heart disease was the most common killer for whites, blacks, and Hispanics. Cancer came in second among those groups.
Among American Indians/Alaska Natives and Asian/Pacific Islanders, those two causes were reversed: Cancer was most common while heart disease was second.
The third most-common cause of death also varied between races. For whites, CLRD was third, while stroke was third for both blacks and Asians. Diabetes accounted for 5.3 percent of American Indian/Alaska Native deaths, making it the third-most-common killer. Hispanics, on the other hand, saw an 8 percent rate of death due to unintentional injury, which was their number three killer.
The most common causes of infant mortality were congenital malformations, unclassified short gestation/low birth weight disorders, sudden infant death syndrome (SIDS), maternal complications in pregnancy, and unintentional injury.
In those younger than 28 days, unclassified short gestation/low birth weight disorders were the leading cause of death, while patients 28 days- to less than 12 months-old most-commonly died due to SIDS. Congenital malformations were the second-most-common cause of death.
Heron noted that data interpretation can be limited by age distribution — where some causes of death in younger populations may be more common than in older populations, and vice versa — and random variation in cause-of-death rankings. She also noted that some changes in mortality rankings were due to different qualifications for certain cause-of-death categories.
Last Updated: 6/11/2012 | <urn:uuid:c84ecff4-3526-4945-8e80-a2593868444a> | CC-MAIN-2016-26 | http://www.everydayhealth.com/heart-health/0611/heart-disease-cancer-top-killers-in-us.aspx | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.969961 | 713 | 2.671875 | 3 |
Berkeley's Mesh Network: Dust in the RFID Wind
UC/Berkeley researchers have created tiny wireless "motes" (aka network sensors) that use radio signals to communicate where they are located in physical space.
The end goal: an RFID network that could revolutionize the industry with its ability to locate tagged items without the aid of readers.
"What we showed in the university was that you could network together a lot of sensors," said Kristofer Pister, a professor of Electrical Engineering and Computer Science at UC/Berkeley who made a name for himself with his 1997 development of technology called Smart Dusta self-organizing network of tiny wireless "motes."
"There was a lot of industry demand, so we started a company and now were shipping products that let you network [sensors]," Pister said. "The next thing is that these sensors can figure out where they are in 3-D and measure their location."
In January 2003 Smart Dust was commercialized when Pister co-founded Dust Networks.
Now Pister is back at Berkeley full time, working with graduate student Steven Lanzisera on this next phase of sensor network innovation, dubbed RF Time of Flight.
It could have a huge impact on the ubiquitous use of RFID, a technology that has struggled to gain widespread adoption despite backing by Wal-Mart, the world's largest retailer, and the U.S. Department of Defense.
Pister believes that location estimation using RF Time of Flight will finally enable RFID to live up to its promise of tracking items in real time.
"The sound bite on RFID is that most people think its going to tell them where their stuff is at all the time. In fact, what RFID does is tell you where your stuff was the last time it went through a reader successfully," said Pister.
"Contrast having to put in readers everywhere you want [information] to just having the tags know where they are and having that broadcasted every few feet."
To understand RF Time of Flight, one has to go back to the basics of Smart Dust. Smart Dust refers to motes laid out in a mesh network that search and find one another, form a network, then communicate information back and forth.
To set up a mesh network in a hospital or warehouse, for example, three access points are placed at random points and connected wirelessly to an interrogator, "like a little USB thing that you plug into your computer," said Pister.
Then a dozen (or 100,000) motes come into the network, set up a multi-hop mesh, and start to communicate and report their nearest range. Measuring the distance from one mote to the next provides a reading of where tagged items are.
"If you know a range to a bunch of different points then you know where you are," said Pister. "Even if that range is moving you can still do the math and figure out where you are at that point."
RF Time of Flight adds radio communication and ranging capability.
Intended to be about the size of a grain of sand or a piece of dust (the motes from Dust Networks are currently about the size of a quarter), the motes contain sensors, computing circuits, bi-directional wireless technology, and an antenna and very low-power battery supply that are external to the chip. The motes can detect light, temperature or vibrations.
"About an inch on the side is the size of most commercial motes out there today," said Pister. "We all use the same antenna so thats not a differentiator. The key question is who can use the smallest battery, and that has to do with how much power you burn."
RFID Glitter Falling from the Sky
Pister said Dust Networks has an advantage over its competitors (Crossbow, Millennial, Ember) in that its components burn less power, providing a decade of life for a D-cell battery, for example.
He hopes to develop a "truly single-chip mote" within the next three years that will put a solar cell into the silicon chip itself, obviating the need for an external battery and shrinking the mote down to an ever-smaller size.
Adding a power source directly to a mote would "really be a breakthrough," according to Marlene Bourne, president and principal analyst at Bourne Research.
"I dont think we are ready to take a handful of sensorslike a handful of glitterand drop them out of an airplane and they read information," said Bourne, in Scottsdale, Ariz.
"Right now, theyre limited by the size of the battery. There are some solar-based approaches, and some completely different approaches, that could feasibly allow Smart Dust to be just that."
Though it's taken a while for the Smart Dust concept to prove out, Emerson Electric Co. announced in October that it would use Dust Networks' Time Synchronized Mesh Protocol (the underlying system for the motes) as the communications technology used in its Smart Wireless field networks and software.
"Self-organizing mesh networking is one of the most exciting innovations to come along in the process industry in over 30 years," said Steve Sonnenberg, president of Emersons Rosemount division.
"We have tested a number of wireless sensor networking technologies in real-world industrial environments over the last three years and have found that Dust Networks TSMP technology best meets the reliability, security, long battery life and ease of use requirements demanded by our end users."
Pister sees asset tracking as the biggest application area for RF Time of Flight, from tracking patients and doctors in a hospital to tracking assets in a theater of war (particularly since early Smart Dust funding came from the Department of Defense). But he also sees many other potential areas of use.
"Imagine if we put this capability into cell phones or Palm Pilots and kids start using it to find each other at the mall, or on campus," said Pister.
"In addition, they can leave little notes that are triggered by proximity to a restaurant or make-out spot. Somebody with a good consumer application could turn the corner in a matter of a year."
The time frame for an actual product could be relatively soon. Pister expects to have RF capability on a square millimeter silicon chip by the summer of 2007.
"We will show that even little chips running on not very much power can measure their distance with three points of reference," said Pister.
"[Whats needed] is someone finding an application for that technology. I do think RF Time of Flight is going to be a big deal and [will] give people the localization capability that delivers on the promise of RFID. But it will take creative people getting the applications right. The technologys almost ready."
Benito Mussolini was Italy's first Fascist dictator. He had many influences in his life which contributed to his political success as a fascist dictator. Mussolini and his fascist political party believed in ruling Italy using a system of extreme right-wing dictatorial government. The Italian people were annoyed with the current government so it was a good time for Mussolini to promote his political views. Hitler and Mussolini supported each other, however as the years went by, Benito Mussolini became more and more influenced by Hitler. The rise of Benito Mussolini and his political party played a critical role in the growth of fascism as well as Italy's decision to side with Germany in World War 2.
Before Benito Mussolini and his fascist regime came to power, Italy was in a horrible state. Italy had suffered badly during the First World War, having lost hundreds of thousands of soldiers, and the country was heavily in debt. Britain and France promised extra land to the Italian government during the war, but when World War 1 ended this land was not given to them. This made the Italian government feel like they were being ignored. Unemployment rose and led to unrest in many cities in Italy. People were ready for a change. When Benito Mussolini formed his fascist party and promoted himself as a strong man who could resolve Italy's problems, the Italian people threw their support behind him. Mussolini promised to rebuild Italy. He organized armed gangs called the Blackshirts who dealt with troublemakers and criminals.
Mussolini was the son of a poor blacksmith and his mother was a teacher. In Mussolini's early years, his father influenced him with his socialist beliefs. He became a teacher and taught for 1 year before moving to Switzerland where he g
LD₅₀ is a term used in toxicology that identifies the median lethal dose of a toxin, or how much is required to kill 50% of a given population. LD₅₀s are usually measured in g/kg, as the amount of toxin needed to kill something usually scales linearly with its mass. The lower the LD₅₀, the more lethal the toxin. An LD₅₀ can be determined for almost any substance: for example, the LD₅₀ for sugar (in rats) is 29.7 g/kg. However, Botulinum toxin (commercially known as Botox in the beauty industry), the most acutely toxic substance known, has an LD₅₀ of roughly 1 ng/kg, or 0.000000001 g/kg, a vanishingly small amount.
The comic is making the joke that the LD₅₀ of papers on toxicology is 2 kg/kg, so it takes 2 kilograms of papers on toxicology to kill a person for each kg he/she weighs. The worldwide average weight of an adult is 62 kg (137 lb), so the lethal dose would be 124 kg (273 pounds) of toxicology papers. Death is apparently caused by compression or smothering.
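Since the "dose" scales linearly with body mass, the arithmetic is easy to check. A throwaway Python sketch (the 2 kg/kg figure is the comic's joke value; 62 kg is the average adult mass cited above):

```python
def lethal_dose_kg(ld50_kg_per_kg, body_mass_kg):
    """Median lethal 'dose', assuming the usual linear per-kg scaling."""
    return ld50_kg_per_kg * body_mass_kg

print(lethal_dose_kg(2, 62))  # -> 124 kg of toxicology papers
```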
The title text says it will take less paper to kill a person if the paper is shoved down their throat instead of dropped on them, either by suffocation or by bursting the subject's stomach. A third method of delivering a toxin is subcutaneous injection, which is highly effective for administering vaccines and medications, but that number is omitted since they couldn't figure out how to do it. If they could, the amount of paper required to trigger a fatal blood vessel blockage would probably be fairly small.
- [A figure in a white coat lies on the floor, crushed beneath a giant pile of binders & paper. Megan and Cueball in white coats stand next to him, looking on. Megan is holding a clipboard.]
- The LD₅₀ of toxicity data is 2 kilograms per kilogram.
There's one toxicology paper that's facing us instead of laying flat. Is it just me, or is there a funny "concerned" face on it? --Druid816 (talk) 05:58, 4 September 2013 (UTC)
- I think that's just pareidolia at work 188.8.131.52 20:56, 3 August 2014 (UTC)
Oh my, when I checked the comic this morning I didn't even see Cueball lying underneath the stack of toxicology papers... --Buggz (talk) 06:17, 4 September 2013 (UTC)
Is there any way to move this page from LD50 to LD₅₀? 184.108.40.206 06:31, 4 September 2013 (UTC)
- We try to stay as faithful to the main xkcd comics as possible when referencing xkcd materials. If the comic title on xkcd.com is LD50, it's LD50 here too. Davidy²²[talk] 06:51, 4 September 2013 (UTC)
Wait, that's just one scientist out of three that died of toxicity data. Doesn't that mean, that they've only determined LD33? Is there any way to estimate LD50 from LD33? Imho the exact distribution of death rate / dose would have to be known up to one free parameter for such an estimate... -- Xorg (talk) 10:56, 4 September 2013 (UTC)
- I tried to address this with an edit. Betwixt the ultimate and penultimate ("...he/she weighs.") sentences I started to add:
- Presumably, for every recorded death a statistically matched second person survived the same load. In this case perhaps this is the Cueball scientist behind the Megan scientist, although he is now obviously unencumbered.
- But what do we know, maybe Cueball is only half dead. 220.127.116.11 (talk) (please sign your comments with ~~~~)
- ...although it started to run away with me. Was also going to say something about saving paper by re-using the 'test dose', or something, but it's already getting too long. But someone might be able to edit it (and even re-arrange it) better than I. 18.104.22.168 11:42, 4 September 2013 (UTC)
- Second-thoughts edit! The person beneath the documentation isn't necessarily the dead one (in any given pair)! He lacks any obvious signs of being deceased (e.g. "a cross for an eye", by common cartoon standards, albeit that cueballs generally don't have eyes, or signs of bodily breakage or presumably vital fluids slowly seeping across the floor, or...). Thus maybe this is one of the (uncomfortable!) survivors from the cohort of testees, being observed. If only Randall would have added a sign of death (or life, like a "groan") then we could get on with our lives! (Unlike fully half of those tested upon.) 22.214.171.124 11:51, 4 September 2013 (UTC)
A third edit from me: Regular printing paper's density (according to Wiki) is 800kg/m³, with the human body being slightly less than 1000kg/m³ as a ready reckoner (oh, go on then... wiki says... oh, it doesn't, obviously at least... well, given how we float in water, I'd estimate it at 850-950kg/m³). Doesn't that pile of literature (even assuming air gaps, and possibly some lamination/plastic covering of perhaps even less dense nature) look a little more than than twice-and-a-bit the volume of the typical Cueball beneath, even unflattened and unstickified? Right, that was my last edit. Honest. 126.96.36.199 12:02, 4 September 2013 (UTC)
Could it be that by "administered orally", Randall means "verbally" (i.e. read out loud)? I think that could be quite a funny interpretation... :-) Gregatar (talk) 18:56, 4 September 2013 (UTC)
- Aurally? 188.8.131.52 16:55, 5 September 2013 (UTC)APB
- YA RLY!!! 184.108.40.206 14:06, 1 April 2014 (UTC)BK201
I totally agree with this 'verbally' thing, I was thinking the same, that a too large set of data read out loud would be fatal after a few 100 pages :) Include in explanation?
Flekkie (talk) 23:50, 5 September 2013 (UTC)
Laughing my ass off!!! But nothing to contribute other than laughter. :¬D ExternalMonolog (talk) 20:14, 4 September 2013 (UTC)ExternalMonolog
Something is very wrong here. The LD50 is the dose required to kill HALF of the test population, but here we see only one guy, and he's presumably either dead or not-dead. The "2kg/kg" figure suggests that if you drop 2x each person's weight in paper on an entire population, *half* of them will die. 220.127.116.11 (talk) (please sign your comments with ~~~~)
- I don't think it's wrong, Randall just didn't draw all the experiments (like in Significant), but just the last one. The humour is just to show how the experiment is performed, not how many people it kills. The LD50 term just adds fun by using toxicology jargon.--18.104.22.168 14:25, 5 September 2013 (UTC)
When I first read this, I thought it was referring to a fork bomb, saying that the data was toxic to the computer and that the data's mass is twice its own mass, i.e. its size doubles before you know it. The toxicology explanation does seem more convincing though. 22.214.171.124 01:37, 6 September 2013 (UTC) edited 126.96.36.199 01:42, 6 September 2013 (UTC)
The explanation of subcutaneous injection mentions blood clots in vessels. Subcutaneous injection is used in the medical field to refer to injections under the skin, but not inside muscle (intramuscular) or inside the veins (intravenous). IV would clearly be more lethal at a lower dose than subcutaneous, and I would imagine Randall's intent was to describe an IV injection. I would expect the cause of death from paper particles injected under the skin to be infection if a small to moderate amount of paper was used, or hemorrhage due to mechanical tearing of the skin and underlying tissues in a high dose. 188.8.131.52 05:25, 7 September 2013 (UTC)
Why is the image missing?--184.108.40.206 10:32, 5 December 2013 (UTC) | <urn:uuid:2e3ef7a2-dcb0-468b-a81a-7ead3e5bdaa0> | CC-MAIN-2016-26 | http://www.explainxkcd.com/wiki/index.php/1260:_LD50 | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.968597 | 1,942 | 2.96875 | 3 |
Full clinical trials to test the impact of the cholesterol-busting drugs on the number of breast cancer cases could now happen within 10 years.
The breakthrough comes after scientists discovered a link between having high cholesterol levels and developing the disease that kills nearly 12,000 women a year in the UK.
It means the cheap and simple heart pills could become a powerful new treatment.
A study of one million British women found high cholesterol increased the risk of developing breast cancer by 1.64 times.
Lead researcher Dr Rahul Potluri, from Aston University in Birmingham, said: “Statins are cheap, widely available and relatively safe. We are potentially heading towards a clinical trial in 10 to 15 years.
“If such a trial is successful, statins may have a role in the prevention of breast cancer especially in high-risk groups.”
Recent studies have suggested a link between obesity and breast cancer.
Dr Potluri’s team conducted an analysis of more than a million patients across the UK between 2000 and 2013 from the Algorithm for Comorbidities, Associations, Length of stay and Mortality clinical database.
He said: “We found that women with high cholesterol had a significantly greater chance of developing breast cancer.
“This was an observational study so we can’t conclude that high cholesterol causes breast cancer but the strength of this association warrants further investigation.”
The findings were presented yesterday at the Frontiers in CardioVascular Biology meeting in Barcelona, Spain.
Breakthrough Breast Cancer said: “This study is promising. It may point us to an important development.” | <urn:uuid:a385dc99-16de-4502-b0a2-2691da47c42c> | CC-MAIN-2016-26 | http://www.express.co.uk/news/uk/486893/Daily-statin-pill-could-cut-breast-cancer-risk | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.946724 | 356 | 2.625 | 3 |
The Massacre of the Armenians, 1915
The Armenian people trace their history back to the Bronze Age. At the height of its power in the first century BC, the Armenian kingdom controlled the territory between the Caspian Sea and the Mediterranean Sea in an area roughly equivalent to modern-day Turkey. Christianity spread into the region soon after the death of Christ with the establishment of numerous Christian communities. In 301, Armenia became the first state to proclaim Christianity its official religion.
[Image: Ambassador Henry Morgenthau]
By the beginning of the 20th century, Armenia's glory days were a distant memory. They had been absorbed by the Ottoman Empire and occupied a reduced territory located in what is now north-eastern Turkey. Still strongly Christian, they enjoyed relative tolerance within the Muslim Ottoman Empire. However, during the period between 1892 and 1894 the Armenians suffered a series of massacres instigated by the reigning Sultan Abdul Hamid II in which between 80,000 and 300,000 lost their lives.
With the outbreak of World War I and the threat of Russian invasion, the Ottomans began to suspect the loyalty of the Armenians and feared that they might actively support the Russians if an invasion occurred. To prevent this, the Ottomans devised a plan to eliminate the Armenians from their territory that resulted in one of the bloodiest, systematic massacres of the 20th century.
The atrocities began in the spring of 1915 with the methodical killing of as many young, able-bodied Armenian men as possible. This was followed by the forced evacuation of thousands of Armenians from their homeland to the desert area in what today is Syria. Compelled to make the journey on foot and continually attacked throughout their trek, thousands died. It is estimated that between 600,000 and 1.5 million Armenians lost their lives during this period.
Much of what we know about this period of carnage comes to us through the writings of Henry Morgenthau, the American Ambassador to the Ottoman Empire from 1913 to 1916. The Ambassador supplemented his personal observations of the treatment of the Armenians with eyewitness reports from informants throughout the country.
Ambassador Morgenthau describes the forced evacuation of one group of Armenians from their homeland to the Syrian desert:
"All through the spring and summer of 1915 the deportations took place. . . Scarcely a single Armenian, whatever his education or wealth, or whatever the social class to which he belonged, was exempted from the order. In some villages placards were posted ordering the whole Armenian population to present itself in a public place at an appointed time-usually a day or two ahead, and in other places the town crier would go through the streets delivering the order vocally. In still others not the slightest warning was given.
The gendarmes would appear before an Armenian house and order all the inmates to follow them. They would take women engaged in their domestic tasks without giving them the chance to change their clothes. The police fell upon them just as the eruption of Vesuvius fell upon Pompeii; women were taken from the washtubs, children were snatched out of bed, the bread was left half baked in the oven, the family meal was abandoned partly eaten, the children were taken from the schoolroom, leaving their books open at the daily task, and the men were forced to abandon their ploughs in the fields and their cattle on the mountain side. Even women who had just given birth to children would be forced to leave their beds and join the panic-stricken throng, their sleeping babies in their arms. Such things as they hurriedly snatched up - a shawl, a blanket, perhaps a few scraps of food - were all that they could take of their household belongings. To their frantic questions 'Where are we going?' the gendarmes would vouchsafe only one reply: 'To the interior.'
. . . 'Pray for us,' they would say as they left their homes - the homes in which their ancestors had lived for 2,500 years. 'We shall not see you in this world again, but sometime we shall meet. Pray for us!'
[Image: Armenian victims gather in the city of Van]
The Armenians had hardly left their native villages when the persecutions began. The roads over which they travelled were little more than donkey paths; and what had started a few hours before as an orderly procession soon became a dishevelled and scrambling mob. Women were separated from their children and husbands from their wives. The old people soon lost contact with their families and became exhausted and footsore. The Turkish drivers of the ox-carts, after extorting the last coin from their charges, would suddenly dump them and their belongings into the road, turn around, and return to the village for other victims.
Thus in a short time practically everybody, young and old, was compelled to travel on foot. The gendarmes whom the Government had sent, supposedly to protect the exiles, in a very few hours became their tormentors. They followed their charges with fixed bayonets, prodding any one who showed any tendency to slacken the pace. Those who attempted to stop for rest, or who fell exhausted on the road, were compelled, with the utmost brutality, to rejoin the moving throng. They even prodded pregnant women with bayonets; if one, as frequently happened, gave birth along the road, she was immediately forced to get up and rejoin the marchers. The whole course of the journey became a perpetual struggle with the Moslem inhabitants.
When the victims had travelled a few hours from their starting place, the Kurds would sweep down from their mountain homes. Rushing up to the young girls, they would lift their veils and carry the pretty ones off to the hills. They would steal such children as pleased their fancy and mercilessly rob all the rest of the throng. If the exiles had started with any money or food, their assailants would appropriate it, thus leaving them a hopeless prey to starvation. They would steal their clothing, and sometimes even leave both men and women in a state of complete nudity. All the time that they were committing these depradations the Kurds would freely massacre, and the screams of women and old men would add to the general horror.
And thus, as the exiles moved, they left behind them another caravan - that of dead and unburied bodies, of old men and of women dying in the last stages of typhus, dysentery, and cholera, of little children lying on their backs and setting up their last piteous wails for food and water. There were women who held up their babies to strangers, begging them to take them and save them from their tormentors, and failing this, they would throw them into wells or leave them behind bushes, that at least they might die undisturbed. Behind was left a small army of girls who had been sold as slaves - frequently for a medjidie, or about eighty cents - and who, after serving the brutal purposes of their purchasers, were forced to lead lives of prostitution.
[Image: Armenian victims left by the roadside]
A string of encampments, filled by the sick and the dying, mingled with the unburied or half-buried bodies of the dead, marked the course of the advancing throngs. Flocks of vultures followed them in the air, and ravenous dogs, fighting one another for the bodies of the dead, constantly pursued them. The most terrible scenes took place at the rivers, especially the Euphrates. Sometimes, when crossing this stream, the gendarmes would push the women into the water, shooting all who attempted to save themselves by swimming. Frequently the women themselves would save their honour by jumping into the river, their children in their arms.
. . . All the way to Ras-ul-Ain, the first station on the Bagdad line, the existence of these wretched travellers was one prolonged horror. The gendarmes went ahead, informing the half-savage tribes of the mountains that several thousand Armenian women and girls were approaching. The Arabs and Kurds began to carry off the girls, the mountaineers fell upon them repeatedly, violating and killing the women, and the gendarmes themselves joined in the orgy. One by one the few men who accompanied the convoy were killed. The women had succeeded in secreting money from their persecutors, keeping it in their mouths and hair; with this they would buy horses, only to have them repeatedly stolen by the Kurdish tribesmen. Finally the gendarmes, having robbed and beaten and violated and killed their charges for thirteen days, abandoned them altogether.
Two days afterward the Kurds went through the party and rounded up all the males who still remained alive. They found about 150, their ages varying from 15 to 90 years, and these they promptly took away and butchered to the last man. But that same day another convoy from Sivas joined this one from Harpoot, increasing the numbers of the whole caravan to 18,000 people.
. . . On the seventieth day a few creatures reached Aleppo. Out of the combined convoy of 18,000 souls just 150 women and children reached their destination. A few of the rest, the most attractive, were still living as captives of the Kurds and Turks; all the rest were dead.'"
This eyewitness account appears in: Morgenthau, Henry, Ambassador Morgenthau's Story (1918); Miller, William, The Ottoman Empire and its Successors 1801-1927 (1936).
How To Cite This Article:
"The Massacre of the Armenians, 1915 " EyeWitness to History, www.eyewitnesstohistory.com | <urn:uuid:c1ec0148-c43a-467a-8659-2efe14560a28> | CC-MAIN-2016-26 | http://www.eyewitnesstohistory.com/armenianmassacre.htm | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.981592 | 2,072 | 3.65625 | 4 |
bagworm, common name for the larva of small moths of the family Psychidae. The larva spins a silken cocoon as it travels, hence the term bagworm. When fully grown, the bagworm fastens its covering to a twig and pupates within it. Some species weave bits of leaves or twigs into their bags. During mating season the wingless, footless adult female perforates the lower end of the bag, protrudes her abdomen for breeding, and soon after laying about a thousand overwintering eggs in the bag, dies. The larvae develop slowly, requiring several months to reach maturity. Bagworms prefer arborvitae and juniper trees, but practically all trees are attacked. The best known of these small moths is Thyridopteryx ephemeraeformis, occurring throughout the E United States and regions adjacent to the Gulf of Mexico. Control of the pests is through use of insecticides or by handpicking the cocoons before the eggs hatch at the end of May. Bagworms are classified in the phylum Arthropoda, class Insecta, order Lepidoptera, family Psychidae.
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved. | <urn:uuid:838d2a73-b31d-456b-9e16-e6143d2df28b> | CC-MAIN-2016-26 | http://www.factmonster.com/encyclopedia/science/bagworm.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.936239 | 260 | 3.71875 | 4 |
Objectives of the meeting

The main objectives of the meeting are:

1. To reassess and confirm the lists of key issues and states of global change affecting the terrestrial coastal interface and their associated variables. These were identified at the first workshop and evaluated during the inter-session.
2. To identify organizations and agencies that make regular observations of the variables and to register the organizations in the Terrestrial Ecosystem Monitoring Sites database (TEMS).
3. To continue to incorporate the Driver, Pressure, State, Impact and Response (DPSIR) framework into the observing system strategy.
4. To continue to prepare an implementation plan for the Coastal Module of GTOS.
5. To evaluate and contribute to the proposal for an Integrated Global Observing Strategy (IGOS) theme for the coast.
Topics of discussion will also include:

1. Data and information;
2. Education, capacity building, training, communications;
3. Strengthen links with other groups and partnerships;
4. Better define collaborating ...;
5. Need to make more progress on product definition;
6. Define linkages, users and products and priorities;
7. Who are the players to develop products?
8. Integration of ocean ...;
9. Shoreline characterization and habitat products could help establish early C-GTOS credibility.
Agenda (doc file, 43k) and documents (new documents will be added):

- ... to GTOS (pps, 1,141 kb), Tschirley
- Terrestrial Ecosystem Monitoring Sites (TEMS), 614 kb, Servin
- Terrestrial Observations Panel for Climate (TOPC), 838 kb, Belward
- Land use and population change in the coastal ecosystem (3,131 kb / 96 kb), Clark
- Sediment delivery report, 221 kb, Christian and Paul DiGiacomo
- Lagoon observational network LAGUNET, 1,065 kb
- ... information needs, 31 kb
Preparation for the meeting

1. Please confirm your interest in participating in the meeting by 17 January 2003 (contact details below).
2. Check the list of indicators for your states to determine if you still agree with it. Your group leader should take responsibility for exchanges concerning agreement on a final list of indicators. As a group, make any additions or deletions that are appropriate.
3. As a group activity, please compose a short report concerning the state and its indicators. The format for the report should be as follows:
   - A brief (one to two paragraph) summary of why the state is important to Coastal GTOS.
   - One paragraph on the boundary conditions of the coast and ...
   - The table of indicators plus text on how indicators were chosen, the ease with which they might be measured, scales of measurement (in time and space), and relative importance to assessing ...
   - A summary of organizations, networks, etc. that might be responsible for the measurements.
   - General conclusions and recommendations.
4. The text should be made available for the group by the Ispra meeting on March 3, 2003.
Results of the Greenville meeting

The following are the "states" considered at the last meeting, identified with participants who contributed to the list of indicators associated with the state. The states are in the context of the DPSIR framework. An outline summary of the meeting and the group presentations are available on the GTOS website. The person indicated in bold is asked to take responsibility for organizing the inter-session activities associated with the state.

1. Land use and population change (Bowen, Jimenez, Clark, Burbridge)
2. Water cycle and matrix quality (DeMora, Viaroli)
3. Coastal habitat (McManus, ...)
4. Sediment delivery (DiGiacomo, ...)
5. Sea level change (ECU ...)

Indicators identified during the Greenville meeting were placed in a ...; this was supplemented with additional appropriate indicators gathered from the internet. Indicators were then divided into a primary group that directly addresses the state and a secondary group that relates to factors that affect the state or are affected by the state.
Related links:

- IGOS Coastal Theme Development Workshop (Washington, Jan. 2003)
- ... the Coastal Modules of GOOS and GTOS
- Details of the first C-GTOS meeting (Greenville, Oct. 2002)
- Coastal Global Terrestrial Observing Network (GT-NET)
- Workshop on "The Role of Indicators in Integrated Coastal Management"
- Tool for the Ranking of Common Variables
- ... Framework and Environmental Indicators
- Change of Infoterra to UNEP-Infoterra
- Environment Information and Observation Network (EIONET)
- Monitoring and managing sustainability in Indian coastal areas
- Development indicators generated by the UK government
- Working list of "indicators of sustainable development"
- Indicators and integrated coastal management
- State of the Nation's Ecosystems
- World Resources 2000-2001 -- People and ecosystems
Contacts:

East Carolina University, Greenville, NC 27858, United States of America

FAO, Viale delle Terme di Caracalla, I-00100 Rome, Italy
FAO :: Global Terrestrial Observing System - GTOS :: 4 April 2003
WASHINGTON, D.C. — In an ongoing effort to protect bees and other pollinators, the U.S. Environmental Protection Agency has developed new pesticide labels that prohibit use of some neonicotinoid pesticide products where bees are present.
“Multiple factors play a role in bee colony declines, including pesticides. The Environmental Protection Agency is taking action to protect bees from pesticide exposure and these label changes will further our efforts,” said Jim Jones, assistant administrator for the Office of Chemical Safety and Pollution Prevention.
The new labels will have a bee advisory box and icon with information on routes of exposure and spray drift precautions.
This announcement affects products containing the neonicotinoids imidacloprid, dinotefuran, clothianidin and thiamethoxam.
The EPA will work with pesticide manufacturers to change labels so that they will meet the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA) safety standard.
In May, the U.S. Department of Agriculture and EPA released a comprehensive scientific report on honey bee health, showing scientific consensus that there are a complex set of stressors associated with honey bee declines, including loss of habitat, parasites and disease, genetics, poor nutrition and pesticide exposure.
The agency continues to work with beekeepers, growers, pesticide applicators, pesticide and seed companies, and federal and state agencies to reduce pesticide dust drift and advance best management practices.
The EPA recently released new enforcement guidance to federal, state and tribal enforcement officials to enhance investigations of bee-kill incidents. | <urn:uuid:47682fe5-563e-4589-86d7-372c29d90893> | CC-MAIN-2016-26 | http://www.farmanddairy.com/top-stories/new-pesticide-labels-will-better-protect-bees-and-other-pollinators/152019.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.903605 | 320 | 2.890625 | 3 |
The first 26 volumes in FAO's Better Farming Series were based on the Cours d'apprentissage agricole prepared in Côte d'Ivoire by the Institut africain de développement économique et social for use by extension workers. Later volumes, beginning with No. 27, have been prepared by FAO for use in agricultural development at the farm and family level. The approach has deliberately been a general one, the intention being to create a basic model that can be modified or expanded according to local conditions of agriculture.
Many of the booklets deal with specific crops and techniques, while others are intended to give farmers more general information that can help them to understand why they do what they do, so that they will be able to do it better.
Adaptations of the series, or individual volumes in it, have been published in Amharic, Arabic, Armenian, Bengali, Creole, Ewe, Gipende, Hindi, Igala, Indonesian, Kiswahili, Malagasy, Malaysian, Nepali, Oriya, SiSwati, Thai, Tschiluba, Turkish, Urdu and Vietnamese.
Requests for permission to issue this booklet in other languages and to adapt it according to local climatic and ecological conditions are welcomed. They should be addressed to the Director, Publications Division, Food and Agriculture Organization of the United Nations, Viale delle Terme di Caracalla, 00100 Rome, Italy. | <urn:uuid:e2eb44a8-8179-4005-af29-32d571c58033> | CC-MAIN-2016-26 | http://www.fastonline.org/CD3WD_40/CD3WD/AGRIC/FB43FE/EN/B106_2.HTM | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.921656 | 304 | 2.8125 | 3 |
AUDIENCE: Consumer, Pediatrics, Family Practice
ISSUE: FDA notified health professionals, their provider organizations and caregivers for infants, that prescription oral viscous lidocaine 2% solution should not be used to treat infants and children with teething pain. FDA is requiring a Boxed Warning to be added to the prescribing information (label) to highlight this information. Oral viscous lidocaine solution is not approved to treat teething pain, and use in infants and young children can cause serious harm, including death.
Topical pain relievers and medications that are rubbed on the gums are not necessary or even useful because they wash out of the baby’s mouth within minutes. When too much viscous lidocaine is given to infants and young children or they accidentally swallow too much, it can result in seizures, severe brain injury, and problems with the heart. Cases of overdose due to wrong dosing or accidental ingestion have resulted in infants and children being hospitalized or dying.
BACKGROUND: In 2014, FDA reviewed 22 case reports of serious adverse reactions, including deaths, in infants and young children 5 months to 3.5 years of age who were given oral viscous lidocaine 2 percent solution for the treatment of mouth pain, including teething and stomatitis, or who had accidental ingestions. See further details in the FDA Drug Safety Communication.
RECOMMENDATION: Health care professionals should not prescribe or recommend this product for teething pain. Parents and caregivers should follow the American Academy of Pediatrics’ recommendations for treating teething pain.
- Use a teething ring chilled in the refrigerator (not frozen).
- Gently rub or massage the child’s gums with your finger to relieve the symptoms.
FDA is also encouraging parents and caregivers not to use topical medications for teething pain that are available over the counter (OTC) because some of them can be harmful. FDA recommends following the American Academy of Pediatrics’ recommendations to help lessen teething pain.
For additional information for health professionals and patients, including the full data summary, see the FDA Drug Safety Communication. | <urn:uuid:c858ad77-4e01-4fbc-8cef-e98e19582500> | CC-MAIN-2016-26 | http://www.fda.gov/Safety/MedWatch/SafetyInformation/SafetyAlertsforHumanMedicalProducts/ucm402790.htm?source=govdelivery | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.902678 | 441 | 2.53125 | 3 |
On this day in economic and business history...
An artificial moon -- that is, a human-launched satellite -- twinkled in the sky above the Earth for the first time on Oct. 4, 1957. That day, the Russian-built Sputnik 1 entered orbit, announcing to the world a new age of space travel with each faint beep of its radio signals.
American reactions were diverse. The Washington Post reported that one top scientist was "elated that it is up there." Rear Admiral Rawson Bennett responded with an entirely different tone, dismissing the satellite to The New York Times as a "hunk of iron almost anybody could launch." Sputnik 1's orbit disintegrated, and it burned up in the atmosphere three months later, far exceeding the three-week time period most had expected it to remain aloft.
The successful Russian launch lit a fire under the American space program, and the U.S.-built Explorer 1 reached orbit a mere four months later. NASA was created a year after Sputnik's launch to put the U.S. in the lead in the space race. It was only 12 years after Sputnik's first beep that two American astronauts touched down on the surface of the Moon -- a remarkably short time by any sensible measure.
Sputnik wasn't the only craft to set a spaceflight milestone on Oct. 4. Burt Rutan's SpaceShipOne won the Ansari X Prize race to create a reusable private spacecraft on Oct. 4, 2004, when it completed its second successful flight beyond Earth's atmosphere.
Birth of the PC
The first "personal computer" was offered to the public on Oct. 4, 1968. Hewlett-Packard (NYSE: HPQ ) was the first company to use the term when it advertised its HP 9100A in that week's issue of Science, a leading scientific journal. The 40-pound, $4,900 machine offered professionals their own dedicated computing platform, freeing them from a reliance on the massive mainframes that dominated the era. The HP 9100A could perform trigonometry in a mere 330 milliseconds per function, which was impressive at the time for something released a decade before the first true PCs hit the market.
Barbarians at the gate
The corporate raiding of the 1980s was on full display on Oct. 4, 1988, when British liquor giant Grand Metropolitan made a hostile offer to buy Pillsbury for $5.12 billion. Grand Met, which you now know as Diageo (NYSE: DEO ) , offered Pillsbury a 53% premium over its previous closing price of $39 per share. Many analysts thought the offer was more than fair, and one estimated Pillsbury's "break-up value" to be roughly $50 to $54 per share.
Grand Met eventually succeeded in its efforts later that year despite Pillsbury executives' attempt to implement a poison pill to prevent the takeover. The final deal, worth $5.7 billion, went through after a judge in Delaware rejected both the poison-pill strategy and Pillsbury's desperate attempt to spin off fast-food subsidiary Burger King to reduce its appeal to the British firm.
Grand Met, which merged with Guinness in 1997 to become Diageo, never made the most of Pillsbury's diverse operations. Burger King, neglected by its corporate parent for more than a decade, was eventually sold off to a group of investment firms for a modest $1.5 billion in 2002. Pillsbury's manufacturing and distribution assets were quickly divested after the Grand Met acquisition, and even its brand assets wound up being sold after the start of the new millennium. A $10.5 billion sale, completed in 2000, sent Pillsbury to General Mills, which kept the brand's refrigerated lineup but sold Pillsbury's Doughboy-fronted baking products division to J.M. Smucker.
The Pillsbury and Burger King divestitures work out to an annualized return of just more than 6% for Diageo over the life of its acquisition. This period of ownership took place during the greatest bull market since the Roaring 20s -- the Dow Jones Industrial Average (DJINDICES: ^DJI ) produced annualized gains of 14.6% over the same 12 years following Diageo's hostile takeover -- so Pillsbury turned out to be a rather disappointing investment for Diageo.
The whole world is social
Facebook (NASDAQ: FB ) founder Mark Zuckerberg announced that his social-networking site had surpassed 1 billion active users on Oct. 4, 2012. His announcement, posted to the official Facebook blog, read:
Helping a billion people connect is amazing, humbling and by far the thing I am most proud of in my life. I am committed to working every day to make Facebook better for you, and hopefully together one day we will be able to connect the rest of the world too.
Facebook, launched in early 2004 at Harvard University, reached a million users before that year was over despite restricting itself only to university students. MySpace, which had launched that same year, was already up to 5 million members. A little over three years later, MySpace peaked at over 150 million members, but Facebook had barely passed 25 million users. But the tables were turned from then on: Facebook crossed the 250 million-user mark in mid-2009, shot up to 500 million users in mid-2010, and doubled its user base again to a billion users little more than two years later. Facebook's meteoric rise made it by far the most valuable property in the world of Web 2.0. Five months before crossing the one-billion-user mark, Facebook went public in the largest IPO (by the size of its first-day market cap) in history.
Growth has slowed somewhat for Facebook, which had signed up more than 40% of the world's Internet users -- and roughly 54% of the world if you exclude users in China, which blocks Facebook -- by the time it crossed the billion-user barrier. Roughly nine months after its billion-user announcement, Facebook's total user base had increased by another 150 million people. How many people will sign up before growth stops -- if it ever does?
A new study of the fluids used in the controversial practice of hydraulic fracturing, or fracking, shows that several of them may not be as safe as the energy industry says they are, and some are downright toxic.
A team of researchers at the Lawrence Berkeley National Laboratory and University of the Pacific looked at more than just the process of fracking – which involves injecting water mixed with chemicals into underground rock formations to extract gas and crude oil. In their report, the researchers list the chemicals that are most often used, based on industry reports and databases. Among them were "gelling agents to thicken the fluids, biocides to keep microbes from growing, sand to prop open tiny cracks in the rocks and compounds to prevent pipe corrosion."
"The industrial side was saying, 'We're just using food additives, basically making ice cream here,'" said team leader William Stringfellow. "On the other side, there's talk about the injection of thousands of toxic chemicals. ...we looked at the debate and asked,' What's the real story?'"
The story so far has been that fracking is an environmentally safe way to extract oil and gas from underground deposits trapped in shale. Its rapid growth has helped the United States dramatically increase oil production. In April, Texas generated more oil than Iraq, OPEC's second-largest producer.
Yet fracking has also been met with opposition because of reports of contaminated well water and increased air pollution around drilling sites. Further, the injection of wastewater into disposal wells at fracking sites has been linked to earthquakes.
Stringfellow's team found that fracking fluid is, in fact, mixed with plenty of food-grade and other non-toxic materials, but some of them may not be safe. At the recent 248th National Meeting & Exposition of the American Chemical Society (ACS), the team reported that eight of the compounds are toxic to mammals.
"There are a number of chemicals, like corrosion inhibitors and biocides in particular, that are being used in reasonably high concentrations that potentially could have adverse effects," Stringfellow said. "Biocides, for example, are designed to kill bacteria. It's not a benign material."
As for food-grade materials, disposal after use isn't always a simple matter. "You can't take a truckload of ice cream and dump it down the storm drain," Stringfellow said. "Even ice cream manufacturers have to treat dairy wastes, which are natural and biodegradable. They must break them down rather than releasing them directly into the environment."
Beyond that, the researchers could find very little information about the safety of fully one-third of the chemicals added to water used in fracking. "It should be a priority to try to close that data gap," Stringfellow said.
Written by Andy Tully of Oilprice.com | <urn:uuid:255bb8d1-dc7e-40d3-a23d-445216279e8f> | CC-MAIN-2016-26 | http://www.fool.com/investing/general/2014/08/18/fracking-fluids-more-toxic-than-previously-thought.aspx | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.965346 | 712 | 3.375 | 3 |
Student Learning Outcomes
- Upon completion students should be able to perform exercises in and out of the water to achieve improved cardiovascular fitness, muscular strength, endurance, and flexibility.
- Upon completion students should be able to identify weight management principles, basic physiology of exercise and the benefits of regular exercise.

Description
This course will provide advanced training and instruction in the use of weights for the sport of water polo.

Course Objectives
The student will be able to:
1. Participate in a structured and comprehensive program of advanced weight training for the sport of water polo.
2. Develop and apply personal and performance goals.
3. Employ correct lifting techniques in a variety of advanced resistance exercise techniques for the sport of water polo.
4. Demonstrate the differences between a variety of advanced resistance exercise techniques for performance in the sport of water polo.

Special Facilities and/or Equipment
- Free weights
- Squat racks
- Olympic lifting platforms

Course Content (Body of knowledge)
1. Establish performance goals which students are encouraged to work towards.
2. Develop knowledge and understanding of various advanced strength training techniques:
   - Super sets
   - Isometric and Super Slow training
   - Olympic style lifts
3. Develop strength through participation in various advanced strength training techniques.
4. Develop individualized performance goals which encourage specialization in the sport of water polo.
5. Explain physiological and anatomical relationships of weight training effects on the body consistent with the performance goals for the sport of water polo.

Methods of Evaluation
The student will demonstrate proficiency by:
1. Strength development, assessed and measured by certain lifts such as the bench press, squats, and military press.
2. Demonstrating the correct form in the Olympic lifts used for performance in the sport of water polo.

Representative Text(s)
None required.

Disciplines
Physical Education

Method of Instruction
- Active participation by students and instructor to facilitate an effective learning environment.
- Lecture and/or demonstration

Lab Content
Use of pin-set machines, free weights and functional fitness strengthening exercises such as lifting, stretching, balancing, and squatting, e.g. with medicine balls, BOSU, and TRX.

Types and/or Examples of Required Reading, Writing and Outside of Class Assignments
Optional reading and writing assignments as recommended by instructor.
FAT
FAT, or file allocation table, is a file system that is designed to keep track of the allocation status of clusters on a hard drive. Developed in 1977 by Microsoft Corporation, FAT was originally intended to be a file system for the Microsoft Disk BASIC interpreter. FAT was quickly incorporated into an early version of Tim Paterson's QDOS, which was a moniker for "Quick and Dirty Operating System". Microsoft later purchased the rights to QDOS and released it under Microsoft branding as PC-DOS and later, MS-DOS.
File Allocation Table Structure
The FAT file system is composed of several areas:
- Boot Record or Boot Sector
- Root Directory or Root Folder
- Data Area
- Wasted Sectors
When a computer is powered on, a POST (power-on self test) is performed, and control is then transferred to the MBR (Master Boot Record). The MBR is present no matter what file system is in use, and contains information about how the storage device is logically partitioned. When using a FAT file system, the MBR hands off control of the computer to the Boot Record, which is the first sector on the partition. The Boot Record, which occupies a reserved area on the partition, contains executable code, in addition to information such as an OEM identifier, number of FATs, media descriptor (type of storage device), and information about the operating system to be booted. Once the Boot Record code executes, control is handed off to the operating system installed on that partition.
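To make the layout concrete, here is a minimal Python sketch of reading a few Boot Record fields from the first sector of a partition. The byte offsets follow the standard FAT BIOS Parameter Block layout; the image file name is hypothetical:

```python
import struct

def parse_boot_sector(sector: bytes) -> dict:
    """Pull basic BIOS Parameter Block fields out of a FAT boot sector."""
    return {
        "oem_id":              sector[3:11].decode("ascii", "replace").strip(),
        "bytes_per_sector":    struct.unpack_from("<H", sector, 11)[0],
        "sectors_per_cluster": sector[13],
        "reserved_sectors":    struct.unpack_from("<H", sector, 14)[0],
        "number_of_fats":      sector[16],
        "root_dir_entries":    struct.unpack_from("<H", sector, 17)[0],  # 0 on FAT32
        "media_descriptor":    hex(sector[21]),
    }

with open("partition.img", "rb") as image:  # hypothetical raw partition image
    print(parse_boot_sector(image.read(512)))
```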
The primary task of the FATs is to keep track of the allocation status of clusters, or logical groupings of sectors, on the disk drive. There are four different possible FAT entries: allocated (along with the address of the next cluster associated with the file), unallocated, end of file, and bad sector.
In order to provide redundancy in case of data corruption, two FATs, FAT1 and FAT2, are stored in the file system. FAT2 is typically a duplicate of FAT1. However, FAT mirroring can be disabled on a FAT32 drive, thus enabling any of the FATs to become the primary FAT. This possibly leaves FAT1 empty, which can be deceiving.
The Root Directory, sometimes referred to as the Root Folder, contains an entry for each file and directory stored in the file system. This information includes the file name, starting cluster number, and file size. This information is changed whenever a file is created or subsequently modified. The root directory has a fixed size of 512 entries on a hard disk; on a floppy disk its size depends on the disk format. With FAT32 it can be stored anywhere within the partition, although in previous versions it is always located immediately following the FAT region.
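Each directory entry is a fixed 32-byte record. A sketch of decoding one classic 8.3 entry (standard offsets; long-file-name entries are ignored here):

```python
import struct

def parse_dir_entry(entry: bytes) -> dict:
    """Decode one 32-byte FAT directory entry (short 8.3 name format)."""
    base = entry[0:8].decode("ascii", "replace").rstrip()
    ext = entry[8:11].decode("ascii", "replace").rstrip()
    return {
        "name": base + ("." + ext if ext else ""),
        "attributes": entry[11],
        # Low word of the starting cluster; FAT32 stores the high word at offset 20.
        "first_cluster": struct.unpack_from("<H", entry, 26)[0],
        "file_size": struct.unpack_from("<I", entry, 28)[0],
    }
```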
The Boot Record, FATs, and Root Directory are collectively referred to as the System Area. The remaining space on the logical drive is called the Data Area, which is where files are actually stored. It should be noted that when a file is deleted by the operating system, the data stored in the Data Area remains intact until it is overwritten.
In order for FAT to manage files with satisfactory efficiency, it groups sectors into larger blocks referred to as clusters. A cluster is the smallest unit of disk space that can be allocated to a file, which is why clusters are often called allocation units. Only the "data area" is divided into clusters; the rest of the partition is simply sectors. Cluster size is determined by the size of the disk volume, and every file is allocated a whole number of clusters. Cluster sizing has a significant impact on performance and disk utilization. Larger cluster sizes result in more wasted space because files rarely fill their last cluster completely.

The size of one cluster is specified in the Boot Record and can range from a single sector (512 bytes) to 128 sectors (65536 bytes). The sectors in a cluster are continuous, therefore each cluster is a continuous block of space on the disk. Note that only one file can be allocated to a cluster. Therefore if a 1KB file is placed within a 32KB cluster there are 31KB of wasted space. The formula for determining the number of clusters in a partition is: ((# of Sectors in Partition) - (# of Sectors per FAT * 2) - (# of Reserved Sectors)) / (# of Sectors per Cluster).
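Translated directly into code (a sketch of the formula as given; for FAT12/16 volumes one would normally also subtract the sectors occupied by the fixed root directory):

```python
def clusters_in_partition(total_sectors, sectors_per_fat,
                          reserved_sectors, sectors_per_cluster,
                          number_of_fats=2):
    data_sectors = (total_sectors
                    - sectors_per_fat * number_of_fats
                    - reserved_sectors)
    return data_sectors // sectors_per_cluster  # a trailing partial cluster is unusable

# Example: 1 GiB partition, 512-byte sectors, 4 KiB (8-sector) clusters
print(clusters_in_partition(2_097_152, 2_048, 32, 8))  # -> 261628
```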
Wasted Sectors result when the number of data sectors is not evenly divisible by the cluster size. They are made up of unused bytes left at the end of a file. Also, if the partition as declared in the partition table is larger than what is claimed in the Boot Record, the volume can be said to have wasted sectors. Small files on a hard drive are the main cause of wasted space, and the bigger the hard drive, the more wasted space there is.
FAT Entry Values

FAT12:
0x000 (Free Cluster)
0x001 (Reserved Cluster)
0x002 - 0xFEF (Used cluster; value points to next cluster)
0xFF0 - 0xFF6 (Reserved values)
0xFF7 (Bad cluster)
0xFF8 - 0xFFF (Last cluster in file)

FAT16:
0x0000 (Free Cluster)
0x0001 (Reserved Cluster)
0x0002 - 0xFFEF (Used cluster; value points to next cluster)
0xFFF0 - 0xFFF6 (Reserved values)
0xFFF7 (Bad cluster)
0xFFF8 - 0xFFFF (Last cluster in file)

FAT32:
0x?0000000 (Free Cluster)
0x?0000001 (Reserved Cluster)
0x?0000002 - 0x?FFFFFEF (Used cluster; value points to next cluster)
0x?FFFFFF0 - 0x?FFFFFF6 (Reserved values)
0x?FFFFFF7 (Bad cluster)
0x?FFFFFF8 - 0x?FFFFFFF (Last cluster in file)
Note: FAT32 uses only 28 of 32 possible bits, the upper 4 bits should be left alone. Typically these bits are zero, and are represented above by a question mark (?).
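A sketch of applying these ranges, including the 28-bit mask described in the note above:

```python
FAT32_MASK = 0x0FFFFFFF  # only the low 28 bits of a FAT32 entry are meaningful

def classify_fat32_entry(raw_entry: int) -> str:
    value = raw_entry & FAT32_MASK
    if value == 0x0000000:
        return "free"
    if value == 0x0000001:
        return "reserved"
    if value <= 0xFFFFFEF:
        return "used (points to next cluster)"
    if value <= 0xFFFFFF6:
        return "reserved"
    if value == 0xFFFFFF7:
        return "bad cluster"
    return "last cluster in file"  # 0xFFFFFF8 - 0xFFFFFFF

def cluster_chain(fat, first_cluster):
    """Follow one file's linked list of clusters through the FAT."""
    chain, cluster = [], first_cluster
    while True:
        chain.append(cluster)
        value = fat[cluster] & FAT32_MASK
        if value >= 0xFFFFFF8:  # end-of-file marker
            return chain
        if not (0x0000002 <= value <= 0xFFFFFEF):
            raise ValueError("broken chain: free, reserved or bad entry")
        cluster = value
```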
There are three variants of FAT in existence: FAT12, FAT16, and FAT32.
FAT12
- FAT12 is the oldest type of FAT that uses a 12 bit file allocation table entry.
- FAT12 can hold a max of 4,086 clusters (which is 2^12 clusters minus a few values that are reserved for use in the FAT).
- It is used for floppy disks and hard drive partitions that are smaller than 16 MB.
- All 1.44 MB 3.5" floppy disks are formatted using FAT12.
- Cluster size that is used is between 0.5 KB to 4 KB.
FAT16
- It is called FAT16 because all entries are 16 bit.
- FAT16 can hold a max of 65,536 addressable units (2^16).
- It is used for small and moderate sized hard disk volumes.
- The actual capacity is 65,525 due to some reserved values
FAT32 is the enhanced version of the FAT system implemented beginning with Windows 95 OSR2, Windows 98, and Windows Me. Features include:
- Drives of up to 2 terabytes are supported (Windows 2000 only supports up to 32 gigabytes)
- Since FAT32 uses smaller clusters (of 4 kilobytes each), it uses hard drive space more efficiently. This is a 10 to 15 percent improvement over FAT or FAT16.
- The limitations of FAT or FAT 16 on the number of root folder entries have been eliminated. In FAT32, the root folder is an ordinary cluster chain, and can be located anywhere on the drive.
- File allocation mirroring can be disabled in FAT32. This allows a different copy of the file allocation table than the default to be active.
FAT32 Limitations with Windows 2000
- Clusters cannot be 64KB or larger.
- Cannot decrease the cluster size such that the FAT ends up larger than 16 MB minus 64 KB in size.
- Must contain at least 65,527 clusters.
- Maximum of 32KB per cluster.
Comparison of FAT Versions
Table adapted from: http://en.wikipedia.org/wiki/File_Allocation_Table
|Full Name||File Allocation Table|
|(12-bit version)||(16-bit version)||(32-bit version)|
|Introduced||1977 (Microsoft Disk BASIC)||July 1988 (MS-DOS 4.0)||August 1996 (Windows 95 OSR2)|
|Partition identifier||0x01 (MBR)||0x04, 0x06, 0x0E (MBR)||0x0B, 0x0C (MBR)
|File allocation||Linked List|
|Bad blocks||Linked List|
|Max file size||32 MiB||2 GiB||4 GiB|
|Max number of files||4,077||65,517||268,435,437|
|Max filename size||8.3 or 255 characters when using LFNs|
|Max volume size||16 MiB||2 GiB for all (4 GiB for some)||32 GiB for all OS (2 TiB for some)|
|Dates recorded||Creation, modified, access|
|Date range||January 1, 1980 - December 31, 2107|
|Unicode File Names||System Character Set|
|Attributes||Read-only, hidden, system, volume label, subdirectory, archive|
|Transparent compression||Per-volume, Stacker, DoubleSpace, DriveSpace||No|
|Transparent encryption||Per-volume only with DR-DOS||No|
|Disk Space Economy||Average||Minimal on large volumes||Max|
Applications of FAT
Due to its low cost, mobility, and non-volatile nature, flash memory has quickly become the choice medium for storing and transferring data in consumer electronic devices. The majority of flash memory storage is formatted using the FAT file system. In addition, FAT is also frequently used in electronic devices with miniature hard drives.
Examples of devices in which FAT is utilized include:
- USB thumb drives
- Digital cameras
- Digital camcorders
- Portable audio and video players
- Multifunction printers
- Electronic photo frames
- Electronic musical instruments
- Standard televisions
Recovering directory entries from FAT filesystems as part of recovering deleted data can be accomplished by looking for entries that begin with the byte 0xE5 (rendered as the Greek letter sigma in some code pages). When a file or directory is deleted under a FAT filesystem, the first character of its name is changed to 0xE5. The remainder of the directory entry information remains intact.
The FAT chain entries for each cluster used by the file are also changed to zero. Recovery tools look at the directory entry for the file: the location of the starting cluster is still there, neither deleted nor modified, so a tool can go straight to that cluster and try to recover the file using the file size as a guide. Some tools simply go to the starting cluster and recover the next "X" number of clusters needed for the specific file size. However, this approach is not ideal. A better tool will locate the next "X" number of available (unallocated) clusters; since files are most often fragmented, this is a more precise way to recover the file.
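As a rough illustration of the directory-entry scan described above, the following sketch walks a raw directory region in 32-byte steps and reports entries whose first byte is 0xE5. The field offsets (start-cluster words at 20 and 26, the 32-bit size at 28) follow the standard FAT directory-entry layout; the helper name and output format are illustrative assumptions:

```python
import struct

def find_deleted_entries(dir_bytes):
    """Scan a raw FAT directory region for deleted entries.

    Directory entries are 32 bytes long; a first byte of 0xE5 marks
    a deleted entry, but the start-cluster and file-size fields
    survive, which is what recovery tools rely on.
    """
    results = []
    for off in range(0, len(dir_bytes) - 31, 32):
        entry = dir_bytes[off:off + 32]
        if entry[0] != 0xE5:
            continue
        name = entry[1:8].decode("ascii", "replace").rstrip()
        ext = entry[8:11].decode("ascii", "replace").rstrip()
        cluster_hi, = struct.unpack_from("<H", entry, 20)  # used by FAT32 only
        cluster_lo, = struct.unpack_from("<H", entry, 26)
        size, = struct.unpack_from("<I", entry, 28)
        results.append({
            "name": "?" + name + "." + ext,   # the first character is lost
            "start_cluster": (cluster_hi << 16) | cluster_lo,
            "size": size,
        })
    return results
```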
An issue arises when two files in the same run of clusters are deleted. If the clusters are not in sequential order, the tool will simply recover the next "X" number of clusters; because the file was fragmented, it is likely that not all of the clusters obtained will contain data for that file. If two deleted files occupied the same run of clusters, it is highly unlikely that either file can be fully recovered.
File slack is the data that starts at the end of the written file and continues to the end of the sectors designated to the file. There are two types of file slack: RAM slack and residual slack. RAM slack starts at the end of the file and runs to the end of that sector. Residual slack then starts at the next sector and runs to the end of the cluster allocated for the file. File slack is a helpful tool when analyzing a hard drive because old data that has not been overwritten by the new file is still intact. See http://www.pcguide.com/ref/hdd/file/partSizes-c.html for examples.
The examples at the link above demonstrate that the larger the cluster size used, the more disk space is wasted due to slack, which suggests it is better to use smaller cluster sizes whenever possible.
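Assuming the conventional 512-byte sector size, the two kinds of slack can be separated with a little arithmetic. This sketch (the names are illustrative) splits a file's total slack into its RAM and residual components:

```python
SECTOR = 512

def slack_breakdown(file_size, cluster_size=4096):
    """Split a file's slack into RAM slack (filling out the last
    sector written) and residual slack (the remaining sectors of
    the cluster), per the definitions above."""
    last = file_size % cluster_size
    if last == 0:
        return {"ram_slack": 0, "residual_slack": 0}
    ram = (SECTOR - last % SECTOR) % SECTOR
    residual = cluster_size - last - ram
    return {"ram_slack": ram, "residual_slack": residual}

# A 1,000-byte file in a 4 KB cluster: 24 bytes of RAM slack fill
# out the second sector, and 3,072 bytes of residual slack remain.
print(slack_breakdown(1_000))
```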
William Henry Fox Talbot published installments of The Pencil of Nature between 1844 and 1846. The book contained 24 salted paper prints, mostly of architecture, still-life arrangements, and works of art. To Talbot they were “specimens” made to showcase the potential of the “new Art.” This image is plate number 14. In his notes accompanying the plate, Talbot wrote, “Portraits of living persons and groups of figures form one of the most attractive subjects of photography... Groups of figures take no longer time to obtain than single figures would require, since the camera depicts them all at once, however numerous they may be...” (Talbot, The Pencil of Nature, 1844.) This is the only portrait in Talbot’s book. Exposures were long, except in bright sunlight. This meant that inanimate subjects were easier to photograph. | <urn:uuid:aeb0f1b2-621e-4597-a26c-3843ca662480> | CC-MAIN-2016-26 | http://www.gallery.ca/cybermuse/teachers/plans/works_theme_e.jsp?mkey=29176&lessonid=183 | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.97274 | 187 | 2.875 | 3 |
Abstraction is utilized by every game ever made. Not only is perfectly mimicking reality impossible – it shouldn’t even be the goal. Interactive entertainment sets itself apart by offering players interesting decisions, testing their skills or immersing them in a unique world. Games fail to achieve these goals when they prioritize realism above all else.
What does abstraction do for a game? How can it be used for both good and ill? What does a designer need to consider? We’ll examine these topics and more!
Along the way we’ll also look at some features of the Civilization series, as it presents an excellent case study for how abstraction not only improves a game but can also frustrate a subset of players.
What is Abstraction?
Whether in games, graphic design or fine art, the use of abstraction is always a means to the same goal: emphasizing a few important elements over the larger, more complex picture.
As with most design knobs, abstraction isn’t black-and-white, where you either have it or you don’t. The spectrum ranges from offerings which try to model every factor, such as military sims (the “game” status of which is debatable) to the venerable Go, where the incredibly nuanced concept of warfare is simplified all the way down to two types of stones on a grid.
The amount of abstraction a game embraces is one of the key elements in determining what type of audience it will appeal to. As such, developers concerned even a little bit about sales numbers are wise to give more than just a passing thought to the degree of abstraction utilized in their products. Sales figures aside though, virtually everything is on the table – an opportunity which designers should find both exciting and daunting.
Now that we know what abstraction does, let’s look at some concrete examples of how it can actually be applied.
Abstraction in Action
The best opportunity to utilize abstraction is when dealing with detailed subject matter. Economics, present in some form in a large number of games, is a great example. Not abstracting economics isn't really an option – hell, not a single person alive has a complete understanding of how it works. Even the models professional economists use are heavily simplified from reality. Needless to say, we have neither the brains nor the computing power to properly simulate the economic impact of every human alive.
Because economics is such a vast and detailed subject matter, designers can pull out any number of facets to emphasize. What is the key to a robust economy - having the most people? An advanced industry? A large trade network? Buying and selling at the most profitable opportunities? These concepts are just the tip of the iceberg.
In the Civ series, the ingredients for economic success always include one part good geography, one part technological advancement and a pinch of "population is power." This approach is very much an extension of the overarching theme of Civ - human history as the story of continual upward progress. Even this is an abstraction, as countless kingdoms have not just risen but also fallen throughout the ages. Despite this fact, over the course of civilization the needle has generally pointed upwards, and this was the feel Sid Meier was trying to emphasize in the first Civ. Most folks find it more fun to play a game where you're building ever upward, rather than dealing with the risk of being toppled by your own people. It would have been equally valid for Civ to prominently feature internal strife and famine, but these were elements the designer simply didn't care to represent.
Combat is another real-world concept that goes through a bit of a filter before showing up in games. Accurately simulating the nuances of single combat is far beyond what most titles want to offer, and those which present full-scale battles have no choice but abstraction – the only question is what form it should take. As with economics, the specifics mainly come down to the team's preferences. Is success in battle primarily based on good timing? Proper positioning? Wielding the best weapons? It's possible to incorporate all of these elements, but odds are the developers have a few pet ideas which they believe are especially interesting or pertinent.
Winning wars in Civ usually just comes down to bringing the biggest, most advanced army to the party. Tactics also factor in, but to a much lesser extent. This abstraction was no accident, and applied largely because Civ is an economic game at its core. Modeling the intricacies of battlefield strategy is no small challenge, and it’s very easy to simply “round down” and accurately claim that wars are typically won long before soldiers take to the field.
A Double-Edged Sword
Like all powerful tools, abstraction has the capacity to do as much damage as it does good. An abstracted icon can be hard to identify. Abstract art is only understood by a dozen people worldwide. Maybe. Too much abstraction in a game can cause the final product to bear very little resemblance to the source material. Abstract games that draw on topics only loosely can certainly be fun, but creating one means the designers are inviting a much greater challenge upon themselves.
In game design, the downside of too much abstraction is the potential for players to refuse to buy what you're trying to sell. One famous (or infamous) example from the Civ series is the eternal struggle between tank and spearman. For those of you unfamiliar with this "meme," in the combat system there are rare times when a vastly superior unit can get extremely unlucky and lose a fight. Ultimately this is a consequence of the abstracted form combat assumes in the series. For some people this is nothing more than a humorous quirk, but for others it's an unforgivable sin.
While I certainly wasn’t one of this “feature’s” largest detractors, I did find such occurrences to be a little out of place in a game that, while not a simulation by any means, at least tries to tie itself to world history. As a result, I modified the combat resolution model in Civ 5 so that the possible outcomes for a battle live inside a much narrower band.
One of the other design changes I made in Civ 5 was to abstract naval transport. Instead of needing to ferry armies around on actual boat units, the idea was that once your civilization is capable of launching seaworthy vessels, transport ships are simply "available" for your forces as soon as they enter the water. This made crossing the seas much less of a hassle, and as a result there are many players who vastly prefer this model. But it wasn't universally beloved. A number of other people found this design too abstract, believing it ridiculous that a group of spearmen could "magically transform" into a boat.
Although it’s second-nature for many of us at this point, even the basic abstraction of “turns” (when some agents can act and others cannot) out of our real-time experience of living is a fairly unnatural translation. Outside of artificially-created constructs, how much of the world could really be considered “turn-based?” Turns are a crucial component of many games, and they help focus attention in ways real-time titles struggle to or simply cannot offer. The way turns “freeze” time in Civ opens up a better opportunity to craft both short-term and long-term strategies compared with a real-time cousin with similar subject matter, such as Age of Empires.
Civ also applies another abstraction relating to turns. In order to inject a sense of progressing through history the turns are labeled with very specific dates, such as 3950 BC or 1492 AD. Few people have taken issue with this, but there are those who dislike the unrealistic holes this opens up – after all, there’s no way it would take 50 years to get from New York to Boston no matter how slow you walked.
Fans of Realism
While most people are fine with a fair helping of abstraction in the games they play, some are not quite as forgiving. Given that all games are entertainment, there’s no objective “good” or “bad,” and as such their opinions are just as valid, even if their numbers are few.
But even hardcore realism fans are looking for some form of abstraction. Civ players who ask for more realism simply want to slide over a few notches on the abstraction spectrum – they don't actually want to live out the life of a historical leader with all that entails. Even sports sims aren't perfect replications, if only because you're giving commands through a controller or mouse. I can't imagine anyone really wanting their gaming experience to include bundling up to ward off sub-freezing temperatures while dealing with insubordinate players who refuse to follow through on the play you called! Anyone interested in this would actually, you know, coach football in some capacity.
The only time I can see the goal including a perfect duplication of reality is when the target audience is comprised of individuals who simply lack the capacity to perform the represented activities outside of a game. Projects of this type are part of a very rare breed.
As I noted earlier in this article, abstraction is simply one part of the designer’s toolbox. While true realism is never the objective, nearly everyone at least expects the games they play to be believable. As long as a title is consistent with itself and its subject matter then you as a designer have done your job. | <urn:uuid:f4f6990b-c39e-4ebd-85eb-b54fbc136cf9> | CC-MAIN-2016-26 | http://www.gamasutra.com/blogs/JonShafer/20121207/183125/Abstraction__Civilization.php | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.962066 | 1,976 | 2.75 | 3 |
Staghorn ferns (Platycerium spp.) are tropical plants with low susceptibility to pests and diseases; however, neglecting the particular care requirements of staghorn ferns will lead to diminished health and vigor and can leave the fern vulnerable to attack from pests and diseases such as snails, slugs and Rhizoctonia fungal infection.
A common problem with staghorn ferns is light deficiency. Staghorn ferns thrive in full to partial shade; it is important to remember that even in shade, plants receive sunlight. According to the University of Florida Cooperative Extension Service, staghorn ferns that receive extremely low light will grow at a slow rate and are more susceptible to attack by pests and diseases. Make sure your fern receives adequate sunlight.
The University of Florida Cooperative Extension Service asserts that the highest occurrence of problems with staghorn ferns is due to incorrect watering. Staghorn ferns are negatively affected when the soil is not allowed to dry out completely between waterings. Even though the soil may appear to be dry, deeper layers may still be completely wet. Water once or twice a week depending on the size of your plant (during warm weather); in cooler weather, decrease watering. Watering at the appearance of a slight wilt is an option often employed in commercial growth of staghorn ferns.
The fungus Rhizoctonia may infect the susceptible staghorn fern, causing black spots on the fern's fronds. If left untreated, the disease may spread to the entire plant, leading to death. Generally caused by excessive watering, the only treatment necessary may be a decrease in watering. For a more severe problem, the University of Florida Cooperative Extension Service recommends contacting a local county extension agent to determine a course of action through chemical control.
Though the variety of insect pests is low in staghorn ferns, an infestation can quickly injure your fern. Mealy bugs, slugs and snails can be problematic. Insect pests feed on staghorn ferns by devouring plant tissue or sucking the plant's juices. If left untreated, the fern may lose vigor, become more susceptible to fungal infections, or may eventually die. For control of insects, the University of Florida Cooperative Extension Service suggests applying non-oil-based insecticides, as oil-based insecticides can burn or injure staghorn ferns. | <urn:uuid:71229cd4-4804-4ef4-83f8-6a1649118d7b> | CC-MAIN-2016-26 | http://www.gardenguides.com/111920-problems-staghorn-ferns.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.925892 | 506 | 3.328125 | 3 |
One thing you will want to do is write down questions as they come to you. These questions will help focus your research later on or may give you clues as to what you need to be searching at that time to solve your current problem. Along with the research question, you will want to also record the date and the specific person or family the question refers to so you don't waste time trying to figure it out later. These research questions become a to-do list that you will want to complete sometime in the future if you do not spend time on them at that point.
Another good idea is to document your research well, especially those facts that may be questionable, such as gaps between children that you have found explanations for. You can include the documentation in notes on the computer or in the research log. You can tell there is sufficient documentation when another researcher can pick up your information and see exactly what still needs to be worked on and what you have solved.
It is also wise to take time directly after a research session to record what you have found and not found and where you can look next time. This will keep you from duplicating research. Printing out a new family group sheet each time new information is added in this process will also help avoid duplicate searches, as you will then have all the information from recent finds.
You may also want to keep track of your research in a calendar type format. You can record the date, which individuals you worked on, and what you would like to do during the next session. This will help you stay focused and prevent having to go back to the records you have collected to analyze information time after time trying to decide what needs to be done.
In addition to recording your research results and keeping a calendar each time you do research, you may also want to write out the fruits of your labors in a report. It doesn't have to be fancy, but when you write a report, your mind is able to analyze information and you may see things in a new way. These reports may be easier for you to read than the names and dates on a group sheet or pedigree. As you read through reports written after different sessions, it is also easier for your mind to realize something you missed earlier and solve a problem that your one search alone couldn't have bridged, but correlated information from different searches was able to fill in.
These are just a few things that can help you as you do research to make your time more meaningful and your searches more enjoyable. | <urn:uuid:3ac629e0-60c4-4b7d-9881-1c330d13c8fc> | CC-MAIN-2016-26 | http://www.genealogytoday.com/articles/reader.mv?ID=652 | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.975203 | 507 | 2.765625 | 3 |
How can the census help in locating my ancestor's marriage record?
After 1850 certain marriage-specific questions were asked in the U.S. Federal Census that can help you learn more about your ancestor's marriage and when that marriage might have taken place. Of course, the best method is to follow your ancestor through all census years during his or her lifetime, beginning with the last census before death to the first census after birth to compare the changes from one census year to the next, and to evaluate the ages of children.
Following are the marriage-specific questions asked in the census, along with the years that question was asked. From this you can determine what marriage questions were asked during your ancestor's lifetime. Some questions were unique to a single census year; others were asked across several censuses. And while this information does not give you precise dates, it certainly does allow you to narrow the field in your search for records.
It is important to note that, depending on the year, not all questions were asked of all people: race did play a role. Information presented here applies to questionnaires given to the general population during a particular census year.
- Was the person married within the last year? (1850, 1860, 1870)
Keep in mind, this does not refer to the calendar year, but to the 12 months preceding the official enumeration date for that year, which for the years listed was June 1st.
- If married within the past year, which month? (1870)
The 1870 census was the first and only year to ask this question. So if you ancestor was married in the time period between 1 June 1869 and 31 May 1870, this census year allows you to pinpoint the month of marriage.
- Is the person single, married, widowed or divorced? (1880-1940)
This standard question on marital status may be the most familiar, asked every year since 1880. In 1910 and 1920, as part of the marital status question, respondents were asked to identify their present marriage by number, indicating a subsequent marriage.
- How many years has the person been married? (1900, 1910, 1920)
This question refers to the number of years married to the present spouse, allowing you to count back the number of years given to estimate the marriage year. For example, a couple recorded in the 1900 census as married for 23 years was likely married around 1877.
- Age at first marriage (1930, and for a sample group only in 1940)
This question is important, allowing you to look back and determine where your ancestor may have been living at the time of first marriage. In 1940, certain questions were asked of a sample group only, but it is possible that your ancestor was among that sample group.
Overall, these marriage-specific questions in the census, taken together over time, can help you create a timeline of your ancestor's married life, and in turn provide important clues in your search for records.
1850 Meyer Map Argentina, Chile, Bolivia, Uruguay and Paraguay
Sudlichster Theil von America enthaltend Bolivia, das sudliche Brasilien, Paraguay, Chile, La Plata, Cisplatina und Patagonien.
1850 (dated) 9 x 10.5 in (22.86 x 26.67 cm)
1 : 17000000
This is a fine 1850 map by Joseph Meyer depicting Chile, Argentina (La Plata, including parts of Patagonia), Paraguay, Uruguay (Cisplatina), Bolivia, and parts of Peru and Brazil. Bolivia's claims to the Atacama Desert and Peru's claims to Moquegua, both of which are today part of Chile, are noted.
As this map was being drawn, the region was gaining its independence from Spain. Following General Jose de San Martin's defeat of the Royalist forces and the subsequent liberation of Argentina, European and other Latin American settlers flocked to the region with dreams of rich farm lands and other natural wealth.
This map was issued as plate no. 81 in Meyer's Zeitung Atlas. Although the maps in this atlas are not all individually dated, the title page and maps were often updated while the imprint with the date was not, causing confusion about the exact date of some of the maps. Moreover, some maps in the atlas were taped in at a later date as an update to the atlas. We have dated the maps in this collection to the best of our ability.
Meyer, J., Meyer's Zeitung Atlas, 1852.
Meyer's Zeitung Atlas, formally titled Neuster Zeitungs-Atlas Fuer Alte und Neue Erdkunde was a popular German hand-atlas published in Heidelberg by Joseph Meyer between, roughly, 1848 and 1859. The atlas is well engraved in the German style with exceptionally dense detail and minimal decoration. Meyer's Atlas, and its constituent maps, are typically very difficult to date as later editions often contain earlier maps and earlier editions later paste-in updates. That said, the atlas' frequent updates and publication run during a turbulent decade provide a noteworthy cartographic record of the period.
Very good. Minor overall toning and some foxing at places. Minor dampstain top left margin. Map of New York and Vicinity pasted on verso. | <urn:uuid:fbaab48d-e486-477f-959b-9fab95228a7c> | CC-MAIN-2016-26 | http://www.geographicus.com/P/AntiqueMap/SudlichsterAmerica-meyer-1850 | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.913744 | 498 | 2.71875 | 3 |
Lymph Nodes in the Male Retroperitoneum and Pelvis
Lymph nodes found in the male retroperitoneum and pelvis are part of the lymphatic system, which carries fluid, nutrients, and waste material between the body tissues and the bloodstream. Lymph nodes are connected by a network of channels that run throughout the body. Cancer may spread through lymph nodes to distant parts of the body.
By Healthwise Staff. Primary Medical Reviewer: E. Gregory Thompson, MD - Internal Medicine. Specialist Medical Reviewer: Christopher G. Wood, MD, FACS - Urology, Oncology.
Gio Ponti’s most famous work in the field of furniture design may arguably have been his chairs, but he actually created entire lines of home and office furnishings. These collections included lamps, desks, coffee tables, side tables, chandeliers, bedroom furniture, and even accessories such as vases. Frequently Ponti found himself so inspired by an architectural project that he conceived a vision for the entire structure, inside and out, and this often included furnishings. His work took on masterpiece proportions, as one could literally visit a hotel or home designed by Ponti, and admire his architectural skill while sitting in a Ponti chair and later sleeping in a Ponti bed.
It is this attention to even the smallest detail that made Ponti both a highly accomplished architect as well as an artist. Early in his career, when Ponti first began designing furniture, he dedicated himself to proving factory-made furniture could also be stylish. It is important to remember that prior to the Industrial Revolution, furniture was always handmade by a craftsman. During the 18th and 19th centuries, however, developments in technology meant many products could now be made by machine. While there were certainly benefits to be found in mass production, the resulting products often lacked the quality and unique style found in handcrafted items. During the 1800s, in particular, the ease with which cast iron could be manufactured made it a common choice of material for many items of furniture. This type of furniture was originally very popular, but the obvious drawbacks of its weight and tendency toward clunky design eventually impacted its popularity.
Luckily, in the early 1900s more lightweight and practical materials were being developed. This was the perfect time for an innovative artist such as Ponti to come along and revolutionize the way the world viewed factory-produced furniture. Ponti sought to reclaim the stylishness and artistic appeal of crafted furniture, while combining it with the ease of factory production. He utilized new materials to create beautiful but practical pieces designed with everyday use in mind.
As an architect, Ponti understood the value of space and functionality. As an artist, he naturally appreciated the beauty that could be found in even the simplest things. A Ponti lamp, for example, did not simply light the room. It could be an object of art, appreciated for its form as much as its function. Ponti’s style is said to be eclectic; at first glimpse his designs may seem sleek and modern, but closer inspection often reveals unexpected elements. For example, one Venini table lamp actually drew inspiration from old kerosene lamps, an unexpected surprise in a modern electric appliance. His chandeliers even appear to have drawn inspiration from Victorian-era lighting fixtures, but were manufactured in modern materials with pops of bright color.
Gio Ponti was so successful at blending aesthetics with functionality, that his designs became timeless classics which have never gone out of style. To this day, collectors clamor over his antique pieces at auction, where they command a respectable price. Furniture manufacturers frequently re-release Ponti’s designs to eager consumers, keeping the products true to their original form and even utilizing Ponti’s original fabric designs as upholstery. It could be said that if Gio Ponti wanted to create a style of his own, one which would be beloved for generations to come, then he certainly succeeded. | <urn:uuid:7684233d-9af6-475d-8e93-e996a0c0d492> | CC-MAIN-2016-26 | http://www.gioponti.com/furniture/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.984331 | 687 | 2.8125 | 3 |
Without water, humans cannot live. Since time began, we have lived by the water and vast tracts of waterless land have been abandoned as too difficult to inhabit. A new machine which extracts water from air could change that … One evening 20 years ago, James J Reidy checked on his new dehumidifier and as he poured the contents down the drain, he reflected on how pure it looked. Two decades on, the idea which was spawned from that moment could influence where and how people live on Planet Earth. Reidy’s idea was simple – it is possible to extract drinking water from the air and there is a market for machines which can do it.
Reidy’s technology is now becoming commercially available and the AirWater machines will be sold in many sizes, producing from 20 litres (AUD$1300 inc GST) to 5,000 litres per day (AUD$160,000 inc), with the option to run machines greater than 50 litres a day capacity from solar power. The 5,000 litre machine with solar power costs AUD$250,000 but the only things it requires are sun and air, and they are both free, so running costs amount to maintenance and capital expenses.
Obtaining water from the atmosphere is nothing new - since the beginning of time, nature’s continuous cycle of evaporation and condensation in the form of rain or snow (the Hydrologic Cycle) has been the sole source and means of regenerating wholesome water for all forms of life on earth.
At any given moment, the earth’s atmosphere contains 4,000 cubic miles of water, which is just .000012% of the 344 million cubic miles of water on earth. Nature maintains this ratio via evaporation and condensation, irrespective of the activities of man.
The availability of drinking water is a global problem - there is a global US$15 billion bottled water market, a US$100 billion point-of-use water treatment industry, and wherever practical, expensive desalination plants with huge infrastructures and severe geographical restrictions. All of these methods require traditional sources of water and each has inherent weaknesses and disadvantages.
In spite of the above there exists a pent-up, insatiable, world-wide need for new sources of drinking water. AirWater machines could be the answer as they offer an inexhaustible source of safe sterilized drinking water.
Basically, the AirWater System, regardless of the model size, sterilises each drop of water within 5-6 seconds of its formation by exposure to ultra-violet light. UV light waves fracture the DNA strands within bacteria, viruses, and other micro-organisms, which kills them instantly.
This sterilised water is then passed through a unique patented 1-micron activated carbon water filter. (The average size of bacteria is 5 microns.) This filter removes any possible solid particles, toxic chemicals, volatile organics, and other contaminants, as well as any odors, taste, or discoloration. This filtration is followed by a second UV exposure and sterilization.
The same bulb bathes the exit port, also patented, in UV light creating a sterile exit. The AirWater System maintains an enclosed sterile environment throughout its water treatment, from the first drop in to the last drop out -- into a water tank or removable container.
The system is particularly effective in areas often regarded as arid, but where there is actually a lot of moisture in the air. In those climates the machine can charge all day in the sun, and produce water all night when the air is moist. The production of AirWater machines will initially be done in Brazil, Israel and China, with a distinct possibility that Australia could also become one of the manufacturing hubs.
For further information see Airwater.com.au
Brent Lobel [email protected] Len MacElvey [email protected] 0415 585616 07 5529 3666 | <urn:uuid:58f8a658-0457-4755-9fba-ce9f456e3581> | CC-MAIN-2016-26 | http://www.gizmag.com/extracting-water-from-the-air/2796/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.937475 | 817 | 2.6875 | 3 |
Governor: Climate change intensifies threat of wildfire
ASPEN — Removal of beetle-killed trees and defensible-space improvements by homeowners can help reduce Colorado’s wildfire problems, but the contributing factor of climate change shouldn’t be ignored either, Gov. John Hickenlooper said Thursday.
A changing climate is generating enormous costs both in Colorado’s forests and in coastal areas elsewhere in the United States, Hickenlooper said in an appearance at the Aspen Ideas Festival.
“The drought and temperature disruption has huge potential consequences,” Hickenlooper said.
His lunchtime interview with Derek Thompson, an editor with The Atlantic, was billed as focusing on innovation, entrepreneurship and other aspects of economic development. But Thompson and audience members from across the country asked him about a wide range of topics that recently have put Colorado and Hickenlooper in the national news, from gun control to his political future to the fires that have besieged sizable swaths of the state.
“This morning I flew down over Pagosa Springs and South Fork and it’s grim,” Hickenlooper said in reference to the burning forests there. “… They haven’t lost a structure yet but the smoke down there, it’s just miserable. They’ve got a lot of people that have been out of their homes for I think five days now.”
He said one lesson from fires a year ago in the state was for local communities to quickly ask for state and federal help when needed. As a result, that help is now coming in a matter of hours rather than days.
While agencies can do more to trim beetle-killed forests, there are millions of acres of such forests in the state and it would take hundreds of millions of dollars to deal with them all, he said. He also noted that last year's destructive Waldo Canyon and High Park fires weren't fueled by big stands of dead timber.
He said people shouldn’t have wood shake roofs or wooden decks against their houses and should clear vegetation near their homes.
“Do what we call defensible space because that will dramatically reduce the losses,” he said.
But he added, “It does not reduce the fires.”
He said he thinks climate change is occurring, and even if its occurrence is a matter of high probability rather than fact, it’s worth spending tens of millions of dollars to combat it. Fighting this year’s destructive Black Forest Fire in Colorado cost $10 million, he noted.
Hickenlooper acknowledged that it has been an eventful six months or so for the state and him, thanks to high-profile issues such as gun control, marijuana legalization, capital punishment and hydraulic fracturing. Thompson noted recent polling suggesting it would be a close race if Republican Tom Tancredo challenges Hickenlooper in next year’s gubernatorial race.
Hickenlooper said he thinks the last half year shows how he tries to be apolitical and to focus on the facts.
Hickenlooper elaborated on a number of issues:
■ He defended his decision to grant a death penalty reprieve, but not full clemency, to killer Nathan Dunlap. He questioned using the death penalty against someone with bipolar disorder, cited the high cost of pursuing the death penalty, and said its use doesn't reduce murder rates in states that regularly use it. Saying his view on the death penalty has changed over the years, he said, “I didn't think it was appropriate for me to overrule the people of Colorado because they haven't come around on this.”
■ Explaining his signing of Colorado’s law requiring background checks on gun sales, he said history shows criminals try to buy guns in Colorado.
“You can’t argue that this doesn’t work, right, that this isn’t a significant benefit, that the cost, for 10 bucks per transaction, doesn’t dramatically reduce the number of guns in people’s hands that shouldn’t have them.”
■ Asked about water used in hydraulic fracturing, he noted that it’s less than half a percent of the state’s total water use, but added that suburban water use for things such as green lawns is a far bigger challenge.
■ He said the state will be applying heavy restrictions on the recreational marijuana industry just as it has medical pot.
■ He hailed Colorado’s attractiveness to young adults, many of whom he said are “nerds” and “geeks” who are involved in entrepreneurship. | <urn:uuid:1472e037-be70-4b7a-b680-5d2027d7f90b> | CC-MAIN-2016-26 | http://www.gjsentinel.com/sports/articles/governor-climate-change-intensifies-threat-of-wild | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.959032 | 975 | 2.515625 | 3 |
Whey protein is quick to digest and provides all of the protein building blocks-the amino acids. Our bodies cannot make some amino acids, and whey is ideal for meeting essential amino acid needs. Whey also supplies branched-chain amino acids (BCAAs), and some research supports that they aid muscle recovery after hard workouts.
Compared with whey, casein is slower to digest, and results in a lower, yet more prolonged rise in blood amino acid levels, which may provide a particular advantage for body builders. At least one study supports that casein outshines whey in terms of promoting strength and lean body mass gains in people following a structured weight-training plan.
Rice protein is less likely to create allergic reactions than other proteins, and it comes from a plant, making it appropriate for vegetarians. Another potential advantage is that rice protein contains a high proportion of arginine, an amino acid that can dilate blood vessels, possibly enhancing blood flow to muscles. Rice is not a "complete" protein however; it doesn't supply all of the essential amino acids. Some products combine rice protein with proteins from sources like soy or milk to make it complete.
Egg protein is ideal for people who are looking to build new muscle. It has a very high protein efficiency ratio (PER), which is one measure of how well our bodies can use any particular form of protein. The higher the PER, the more efficiently our bodies can use that protein when we eat it. Egg is off the charts in terms of PER. Egg protein also is a complete protein, and is a good source of essential and branched-chain amino acids. Egg protein powder is made from egg whites, and comes in a convenient, pasteurized powder form.
Soy protein is a high-quality plant protein that provides all essential amino acids, making it a good option for vegetarians. For the body to best utilize soy protein, vegetarians should also eat grain or dairy within a few days. Soy protein comes in two basic forms: soy protein isolate and soy protein concentrate. Soy protein isolate is the most highly purified form, and has a minimum protein content of 90%. Soy protein concentrate contains more carbohydrates, and has a protein content of approximately 70%. Concentrates tend to cost a little less, but if you find soy protein concentrate doesn't agree with you, try isolate, which is easier for some people to digest.
With protein, as with many nutrients, more is not always better. According to Dr. Doug Paddon-Jones, Associate Professor at the University of Texas Medical Branch and Director of Exercise Studies, "30 grams of protein appears to stimulate maximum muscle synthesis. For athletes, each meal and snack is a chance to hit the 30 gram mark, giving your body several opportunities each day to maximize muscle growth and repair."
Another reason to spread protein evenly through the day is simple efficiency. "Given that your body won't use much beyond 30 grams of protein at a time, it doesn't make sense to load up with more than this," says Paddon-Jones.
- During breakfast. To support muscle building first thing in the morning, try trading traditional carb-heavy breakfast foods for more protein-rich options, such as a powder protein supplement mixed with milk and cereal.
- Prior to a long strength-training session. Sipping a casein-based protein supplement prior to and during your workout will give muscles sustained access to amino acids. If taken in the evening, casein can provide a steady supply of amino acids while you're sleeping.
- Immediately pre- or post-workout. Especially if your workout includes aerobic or circuit training, protein manufacturers recommend a whey protein-based liquid or powder supplement. | <urn:uuid:02529dc9-e680-47fb-88fb-813e6e68d24f> | CC-MAIN-2016-26 | http://www.gnc.com/product/index.jsp?productId=2446690&lmdn=Brand&cp=4046473.13355460.3703232 | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.952046 | 760 | 2.59375 | 3 |
14.9.2 Constructors and Accessors for Ports
The procedures in this section provide means for constructing ports,
accessing the type of a port, and manipulating the state of a port.
— procedure: make-port port-type state
Returns a new port with type port-type and the given
state. The port will be an input, output, or I/O port
according to port-type.
— procedure: port/type port
Returns the port type of port.
— procedure: port/state port
Returns the state component of port.
— procedure: set-port/state! port object
Changes the state component of port to be object.
Returns an unspecified value.
— procedure: port/operation port symbol
Equivalent to (port-type/operation (port/type port) symbol).
— procedure: port/operation-names port
Equivalent to (port-type/operation-names (port/type port)).
— procedure: make-eof-object input-port
Returns an object that satisfies the predicate eof-object?. This
is sometimes useful when building input ports.
Meteorologists need sensors that are on the ground directly measuring local weather conditions, as well as in orbit high above Earth’s atmosphere observing the "big picture" remotely. The United States has a network of ground stations for measuring surface and upper-air weather conditions at particular locations and times. However, this network leaves gaps in the information about the geographical extent of weather phenomena, their speed and direction of movement, and their duration. Satellite data are also needed to provide a complete and continuous picture of atmospheric conditions. Forecasting the approach of severe storms since 1975, GOES are a cornerstone of weather observing and forecasting.
Geostationary satellites rotate with Earth from west to east directly over the equator at an altitude of 35,800 km (22,300 statute miles). Because the satellite orbits in the same direction as Earth turns on its axis and matches the speed of Earth’s rotation at the equator, the satellite always has the same view of the Earth’s surface. The U.S. has two Geostationary Operational Environmental Satellites (GOES) in service, one positioned to view the west coast and the Pacific Ocean and one to view the east coast and the Atlantic. Geostationary satellites are in position to maintain a constant vigil over nearly half the planet.
Geostationary weather satellites work by sensing electromagnetic radiation to indicate the presence of clouds, water vapor, and surface features. Unlike ground-based radar systems and some other types of satellites, these satellites do not send energy waves into the atmosphere and analyze returning signals. Rather, the GOES work by passively sensing energy. The GOES sense visible (reflected sunlight) and infrared (for example, heat energy), from the Earth’s surface, clouds, and atmosphere. The Earth and atmosphere emit infrared energy 24 hours a day, and satellites can sense this energy continuously. In contrast, visible imagery is available only during daylight hours when sunlight is reflected.
The instruments on the GOES that measure electromagnetic energy are called radiometers. GOES carries two types of imagers: One measures the amount of visible light from the sun that Earth’s surface or clouds reflect back into space. The second measures the infrared energy that Earth’s surface and clouds radiate back to space. Because the GOES can sense infrared radiation, they can operate at night.
Most visible light passes right through the atmosphere, but no so much through the clouds. Clouds reflect some of the visible light back into space. How much depends upon the thickness and height of the cloud. Earth’s surface absorbs the visible light energy, gets warmer, and re-radiates the energy as infrared radiation. Clouds also absorb some of the visible light energy, as well as the infrared energy re-radiated from Earth. Satellite sensors are particularly sensitive to those wavelengths of infrared energy re-radiated up through to the atmosphere to space. Scientists can measure the height, temperature, moisture content (and more) of nearly every feature of the Earth’s atmosphere, ocean, and land surface, with and without vegetation.
Communications, transportation, and electrical power systems can be disrupted and damaged by space weather storms. Exposure to radiation can threaten astronauts and commercial air travelers alike, and has affected the evolution of life on Earth. Geomagnetic storms and other space weather phenomena pose a serious threat to all space operations, and can result in total mission failure.
Beginning with GOES-I, the Search and Rescue subsystem has been carried on each of the GOES. Distress signals are broadcast by Emergency Locator Transmitters carried on general aviation aircraft, aboard some marine vessels, and by individuals, such as hikers and climbers. A dedicated transponder on each GOES detects and relays signals to a Search and Rescue Satellite-Aided Tracking (SARSAT) ground station. Through a rescue coordination center, help is dispatched to the aircraft, ship, or individual in distress. SARSAT is an international program, with many satellites making up a worldwide network of emergency beacon transponders. Since 1982, SARSAT has helped save more than 39,000 lives worldwide. Learn more.
GOES-R is the next generation of NOAA geostationary Earth-observing systems. The satellite’s advanced spacecraft and instrument technology will support expanded detection of environmental phenomena, resulting in more timely and accurate forecasts and warnings. » More
GOES-R advanced spacecraft and instrument technology will support expanded detection of environmental phenomena, resulting in more timely and accurate forecasts and warnings. The Advanced Baseline Imager (ABI) will collect three times more data and provide four times better resolution and more than five times faster coverage than current GOES. The GOES-R series satellites will also carry the first lightning mapper flown from geostationary orbit. The Geostationary Lightning Mapper, or GLM, will map total lightning (in-cloud and cloud-to-ground) continuously over the Americas and adjacent ocean regions. Research has shown that lightning flash rate increases can be a predictor of impending severe weather and total lightning data from GLM has great potential to increase lead time for severe thunderstorm storm warnings. The satellites will also host a suite of instruments that will provide significantly improved detection of space weather for more accurate monitoring of energetic particles responsible for radiation hazards, improved power blackout forecasts, increased warning of communications and navigation disruptions, and more.
It is the consolidated backup facility from which NOAA will command and control the GOES-R satellites in the event the primary satellite operating locations (NOAA Satellite Operations Facility and Wallops Command and Data Acquisition Station) become disabled. It includes the components, antennas and ground infrastructure needed to communicate with the GOES-R series satellites to control the spacecraft and capture the telemetry and science data from the instruments. This site will be able to perform all of the operational functions in case of a failover scenario at the NSOF and/or WCDAS ground segment facilities. The CBU is located at the I-79 Technology Park Research Center in Fairmont, West Virginia. » Learn More
The GOES-R Series Program is engaging users early in the process through Proving Ground and NOAA testbed activities, simulated data sets, scientific and user conferences, and other communication and outreach efforts. » Learn more about user readiness efforts here.
The Proving Ground is a collaborative effort between the GOES-R Program Office, NOAA Cooperative Institutes, a NASA center, NWS Weather Forecast Offices, NCEP National Centers, and NOAA testbeds across the country. The Proving Ground is a project in which simulated GOES-R products can be tested and evaluated before the GOES-R satellite is launched. The simulated GOES-R products are generated using combinations of currently available GOES data, data provided by instruments on polar-orbiting satellites, and model synthetic satellite data.
» Learn More
GOES Rebroadcast is the primary space relay of Level 1b products from GOES-R and will replace the GOES VARiable (GVAR) service. GRB will provide full resolution, calibrated, navigated, near-real-time direct broadcast data. » Learn more about GRB.
Users must either acquire new systems to receive GRB or upgrade components of their existing GVAR systems. At a minimum, GVAR systems will require new receive antenna hardware, signal demodulation hardware, and computer hardware/software system resources to ingest the extended magnitude of GOES-R GRB data. See the GRB Downlink Specifications and GRB Product Users Guide (PUG) documents for more information.
The GRB Simulators allow for on-site testing of user ingest and data handling systems, aka GRB field terminal sites. Each unit simulates GRB downlink functionality by generating Consultative Committee for Space Data Systems (CCSDS) formatted GRB output data based on user-defined scenarios, test patterns, and proxy data files. Four GRB simulators have been designated for loan to borrowers who manufacture GRB receivers and other users interested in testing their receive systems. The objective is to allow borrower access to simulators to verify GRB receive system compatibility with the GRB transmission. Information about requesting a simulator for loan can be found at http://go.usa.gov/WvXY. For a complete list of frequently asked questions about the GRB Simulator, see the GRB Simulator FAQs document.
When GOES-R is launched in November 2016 it will be placed in the 89.5° checkout orbit. It has not yet been determined where GOES-R will be placed in operational orbit. The final decision will be based on the health/safety/performance of the GOES constellation. NOAA's Office of Satellite and Product Operations will be responsible for determining the operational orbit for GOES-R. | <urn:uuid:4528dd06-bf5c-40ca-bed2-914ed4381afd> | CC-MAIN-2016-26 | http://www.goes-r.gov/resources/faqs.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.914362 | 1,817 | 4.0625 | 4 |
As recently as two decades ago pain in our pets was largely overlooked. In fact, some veterinarians considered pain to be a good thing, as they thought it would limit a post-operative pet from having too much activity. My, how our outlook has changed. As our pets become more and more a part of the family, when they hurt, we hurt.
If you’ve had a pet in the clinic recently for a surgical procedure, you know how much pain management protocols have changed. Veterinarians, now more than ever, are acutely aware of managing pain both during and after surgeries, and your pet will probably be sent home with one or more medications to give post-operatively for pain.
Obviously, when your pet has had surgery, you assume there will be some discomfort. But what about pain from illness or injury? Signs of pain in pets can be easy to miss. Because animals still rely heavily on the “survival of the fittest” lifestyle, it benefits them greatly to hide signs of pain to ensure that they do not fall victim to predators. This is especially true in cats and other smaller pets such as rabbits and hamsters.
Pay careful attention to your pet and look for classic signs of pain, such as:
- Limping or abnormal gait
- Listlessness or restlessness. Often pets shift and seem to have a hard time getting comfortable
- Whining, whimpering, or meowing
- Licking specific joints
- Trouble sleeping
- Antisocial behavior. Painful pets will often retreat away from the family.
- Nipping or biting when touched. Pets often are protective of the painful areas.
Some of these signs are attributed to “old age” but often pain is an underlying cause. Older dogs are more prone to pain from osteoarthritis, so pay attention to the way your dog gets up from lying down. Is she having a difficult time rising from a sleeping position, or trouble on the steps? Arthritis can be to blame, and your veterinarian can prescribe pain relievers to alleviate some of her discomfort.
Cats can get arthritis, too, so these warning signs also apply to our feline friends. Cats are also prone to dental disease, and oral pain can certainly arise from tooth and gum infections. Inappetence, weight loss and chattering teeth can all be symptoms of mouth pain. Your veterinarian can easily tell if your pets have dental disease during their routine exams.
Studies have proven that effective pain management improves the recovery process (whether it is from surgery or illness) and reduces stress, allowing the immune system to function more effectively. If you think your pet may be in pain, either from illness or injury, please seek the advice of your veterinarian. Never give over-the-counter human pain relievers to your pets without first checking with your vet. Several of them are toxic to our pets, even at low doses. | <urn:uuid:80de6d37-7596-42a6-ade1-72e2f17ee6f8> | CC-MAIN-2016-26 | http://www.gopetplan.com/blogpost/no-pain-no-gain-petplan-pet-insurance-takes-a-look-at-pain-in-pets | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.9609 | 599 | 2.609375 | 3 |
The flu season has reached epidemic levels and has yet to reach its peak, despite a drop-off in cases in some states. Nationwide, nearly four dozen children, about half of them under the age of five, have already died this flu season, which began in November and typically runs into March. New York City doctors, hospitals and pharmacists report quickly rising numbers of flu cases, particularly among children and the city has asked doctors to prioritize vaccines for those with the highest risk. That means the elderly, people with compromised immune systems, pregnant women and children, particularly those under two years of age.
Doctors are calling this the most intense flu season for children since the Hong Kong flu of 1968-69. New York City is reporting widespread activity and the number of reported flu-like illnesses, such as pneumonia, continues to rise, as do deaths from these illnesses.
People with chronic conditions such as asthma and diabetes, from which many New York City children suffer, are at increased risk for complications from the flu. Children with the flu are also at particular risk for ear and sinus infections; it's the bacterial illnesses that can lead to hospitalization and death, although death from influenza is rare for children, an average of 92 annually. This season is worse than usual because two of the three strains currently spreading are particularly harsh, and A/Fujian, one of the more virulent strains, was not included in this year's vaccine. Fujian has the strongest link to severe illness and deaths of children and teens.
ABUSE CHARGED AT THE EDWIN GOULD ACADEMY
Six current and former students of the Edwin Gould Academy in Rockland County have filed a civil suit against the school, charging that the staff beat them and slammed them into walls, broke bones, smoked marijuana in front of them and came to work intoxicated. At least one teen was being injected with Thorazine, an anti-psychosis drug, without parental consent or knowledge, says the suit.
The residential school for troubled, abandoned and abused children, located in Spring Valley, is publicly funded as a special school district of the state. It educates up to 200 children ages 12 to 20, most of whom are from New York City. The school's mission is to provide a safe environment in which children can get an education and a new start. The lawsuit was filed in December in the State Supreme Court in the Bronx; the students in the suit reside in Brooklyn. The suit asks unspecified damages.
The school's superintendent said that state officials had investigated abuse allegations and that some of the named staffers had been fired.
CITY COUNCIL PASSES LEAD PAINT BILL
The New York City Council passed a controversial lead paint bill that makes landlords responsible for lead paint and requires them to remove under a stringent schedule.While many in health and social services are in favor of the bill, others are opposed, including non-profit housing developers. They say lead paint is only one of many problems in the housing stock and this bill be result in the reduction of housing renovation in low and moderate-income neighborhoods.
The mayor vetoed the bill on December 20, saying that it could actually harm families with children because landlords will be reluctant to rent apartments to families with small children. The City Council is expected to override his veto.
Sasha Nyary, formerly on the staff of Life Magazine, edits the newsletter of The Fifth Avenue Committee, a community-based organization. | <urn:uuid:2817b407-5f79-45ff-8e78-dde22f0f11e9> | CC-MAIN-2016-26 | http://www.gothamgazette.com/index.php/topics/health/2271-the-flu-abuse-lead-paint | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.978941 | 702 | 2.59375 | 3 |
Cartago is drenched in unique culture, stunning architecture and a storied history that is still prevalent today. Because of this, the landscape of Cartago is unparalleled to that of most other regions in the country, even though it encompasses the fewest square miles. Travelers can explore pre-Columbian times and dive deep into the past in this truly unique province.
Map of Cartago
Before 1823, Cartago city was the capital of Costa Rica, but due to its location along the foothills of the Volcano Irazu, locals decided to place their capital in a more secure location away from a possible eruption. In 1563, Juan Vasquez de Coronado founded the city as one of the first Spanish settlements in the country. History buffs will love strolling down the streets of Cartago, as it acts as a gateway into the past of the region. The Basilica de Nuestra Senora de Los Angeles
is a must-see in Cartago. Noted as the largest church in the city, it contains La Negrita or the Black Madonna
, which is fabled to have healing powers.
Sitting in the Central Highlands of Costa Rica, the still active Volcano Irazu
is one of the highest peaks in the province. Irazu made a name for itself in 1963 when U.S. President John F. Kennedy visited the country and the volcano erupted, which smothered area crops in inches of dust. Irazu has several craters, two of which are the most popular attractions. Visitors can reach nearby viewing stations, and on a clear day be able to see both the Pacific and Caribbean waters. Irazu Volcano National Park sits on 5,705 acres of lush montane forest, where travelers can witness many varieties of winged, furry and scaly wildlife. Although the area is fairly sparse in terms of large animals because of the volcanic activity, it is comprised of primary, secondary and cloud forest. The park is just 70 kilometers away from the city of Cartago, on the border of Cartago and San Jose provinces.
Cartago is one of the least visited regions in Costa Rica but it is full of culture and nature alike.
As the largest and most important archeological site in the country, Guayabo is a mysterious ancient city that was home to more than 10,000 people before the Spanish arrived in the country in the 16th century. Old inhabitants of the ruins are unknown by scientists, and currently only 10 percent of the 218 hectares have been excavated. Whoever these previous residents were, they built a very advanced culture: There are trails, bridges, water systems and homes that date back 3,000 years here.
The small town of Orosi boasts a beautiful landscape that includes a sweeping view of the valley, mountains and plantations. But one of the most visited attractions in this tiny town is the Iglesia de San Jose, which is the oldest church in Costa Rica this is still in use today. Constructed in 1743, this church was built in colonial style and has a wooden altar with traditional Mexican paintings, and a roof made with ceramic tiling and thatched cane. Throughout the entire town, travelers will find historic gems and ancient ruins that date back to 1,000 B.C.
Often called Orosi National Park, Tapanti sits in the middle of the Macho River Forest Reserve, which features the meandering El Rio Grande de Orosi. Because of the unique location and diverse landscape, this park is one of the rainiest places in Costa Rica – the annual rainfall averages between 635 and 760 centimeters. Travelers can hike through the densely wooded forests along one of several trails that weave through the park. | <urn:uuid:f27bf81c-2ac6-49c8-aa63-024e78b8c07c> | CC-MAIN-2016-26 | http://www.govisitcostarica.com/region/cartago/cartago.asp | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.954184 | 760 | 3.09375 | 3 |
Tesla Model XEnlarge Photo
The rear-view mirror has now been with us exactly 100 years, having first been introduced in 1914, and it hasn't changed much in all that time.
It's a small area of mirrored glass--three of them now, actually--for the purpose of seeing through the rear window or along each side of the car without the driver having to turn his or her head more than slightly.
For decades, concept cars have shown one or more of these mirrors replaced by video cameras that show real-time images on a small display screen the same size as the mirror.
Tesla taking lead
Now Tesla Motors [NSDQ:TSLA] appears to be one of the companies taking the lead in getting government regulators to update Federal vehicle standards to permit such a replacement on production vehicles sold to the public.
The goal for the electric car-maker is to eliminate a large source of aerodynamic drag from its future cars.
The mandated mirrors on the driver's and front passenger's doors increase the car's frontal area. Aerodynamics engineers spend a great deal of time in the wind tunnel to make air flow over door mirrors as smoothly as possible, to reduce the energy needed to overcome that resistance.
Concept drawings and some prototype versions of Tesla's 2015 Model X electric crossover utility vehicle do not have door mirrors, but use a tiny camera in their place with the images displayed inside the passenger compartment.
Tesla Model S Designer Franz von Holzhausen
3 to 6 percent of total drag
On Friday, journalist Allison von Diggelen wrote that Tesla design chief Franz von Holzhausen told her the company is talking with "authorities" about getting the necessary permissions to replace door mirrors with cameras and electronic displays.
Tesla is presumably in discussion with the National Highway Traffic Safety Administration (NHTSA), which issues the Federal Motor Vehicle Safety Standards (FMVSS) document, first issued in 1967 and regularly updated since then.
While door mirrors may not seem like much of a big deal, both of them together--with a total area of up to 2 square feet between them--create 3 to 6 percent of a typical vehicle's total aerodynamic drag.
Eliminating them could reduce energy consumption as much as lowering the car's roof by almost half an inch.
The Volkswagen XL1 ultra-high-mileage two-seater uses video-camera technology instead of door mirrors, as von Diggelen notes--but it is not designed to comply with all parts of the FMVSS because it's not intended to be sold in the United States.
Cumbersome volume of rules
Von Holzhausen "bemoaned the cumbersome amount of regulations that prevent or delay innovative car design," the author writes, linking to an earlier interview she did with Tesla CEO Elon Musk in January.
Tesla Model X at 2013 Detroit Auto ShowEnlarge Photo
In that interview and a related video, Musk complained about the regulations and their specificity, including some points that cover headlamp design and elements of the dashboard user interface that he views as "completely anachronistic."
The discussion with von Holzhausen was published on Friday in Boldly Going, a portion of her "Fresh Dialogues" blog that covers her family's trip in their Tesla Model S from Silicon Valley to Los Angeles using the company's Supercharger network of quick-charging stations.
Hand mirrors for women drivers
Automotive historians note that the earliest known reference to a rear-view mirror being used in a car comes from 1906, when author Dorothy Levitt wrote in her book, The Woman and the Car, that women were advised to "carry a little hand-mirror in a convenient place when driving"
Doing so, Levitt suggested, permitted women drivers to "hold the mirror aloft from time to time in order to see behind while driving in traffic."
It would take eight more years for the first fixed mirror to show up on a production car. Vibration-free mounts and auto-dimming aside, not a great deal has changed in rear-view mirrors since then. | <urn:uuid:008ced57-17a7-420d-8118-99c2f82986e5> | CC-MAIN-2016-26 | http://www.greencarreports.com/news/1086180_tesla-takes-the-lead-on-dumping-door-mirrors-for-video-cameras | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.948695 | 843 | 2.84375 | 3 |
24366 Grand River Avenue Suite 111
Detroit, MI 48219
Our team of dental specialists and staff strive to improve the overall health of our patients by focusing on preventing, diagnosing and treating conditions associated with your teeth and gums. Please use our dental library to learn more about dental problems and treatments available. If you have questions or need to schedule an appointment, contact us.
Gingivitis is the medical term for early gum disease, or periodontal disease. In general, gum disease can be caused by long-term exposure to plaque, the sticky but colorless film on teeth that forms after eating or sleeping.
Gum disease originates in the gums, where infections form from harmful bacteria and other materials left behind from eating. Early warning signs include chronic bad breath, tender or painful swollen gums and minor bleeding after brushing or flossing. In many cases, however, gingivitis can go unnoticed. The infections can eventually cause the gums to separate from the teeth, creating even greater opportunities for infection and decay.
Although gum disease is the major cause of tooth loss in adults, in many cases it is avoidable.
If gingivitis goes untreated, more serious problems such as abscesses, bone loss or periodontitis can occur.
Periodontitis is treated in a number of ways. One method, called root planing, involved cleaning and scraping below the gum line to smooth the roots. If effective, this procedure helps the gums reattach themselves to the tooth structure. However, not all instances of scaling and root planing successfully reattach the tooth to the gums. Additional measures may be needed if the periodontal pockets persist after scaling and root planing
Pregnancy has also been known to cause a form of gingivitis. This has been linked to hormonal changes in the woman's body that promote plaque production.
Search through our library of dental topics, including articles, fun facts, celebrity interviews and more. | <urn:uuid:4ac6548f-7e69-40a6-814b-afc66e8cde88> | CC-MAIN-2016-26 | http://www.grfamilydentistry.com/library/35/GumDisease(Gingivitis).html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.92756 | 407 | 2.59375 | 3 |
Bruce Tap. The Fort Pillow Massacre: North, South, and the Status of African Americans in the Civil War Era. London and New York: Routledge, 2014. 216 pp. ISBN 978-0-415-80864-4; $130.00 (cloth), ISBN 978-0-415-80863-7.
Reviewed by John Cimprich (Thomas More College)
Published on H-CivWar (May, 2014)
Commissioned by Christopher Childers
A Reader on the Fort Pillow Incident
Most American Civil War specialists now hold that General Nathan Bedford Forrest’s Confederate force that attacked Fort Pillow, Tennessee, on April 12, 1864, ended up massacring many of the black and white unionist soldiers there. Illustrating that view and just in time for the incident’s sesquicentennial, Bruce Tap’s The Fort Pillow Massacre: North, South, and the Status of African Americans in the Civil War Era (2014), was published within Routledge Press’s Critical Moments in American History series. These short books, primarily intended for classroom use, contain primary sources and extensive introductions.
As such, The Fort Pillow Massacre has a number of educational elements. A map helps readers understand the main battle site, but markings incorrectly locate the starting point of the attack and the outer earthworks, both of which lay beyond the area shown. The book usefully includes a timeline, bibliography, and annotations for the documents. Its prints and political cartoons would serve a class discussion well. Numerous short, boxed features are interesting, but many are rather tangential.
The book series aims to present “engaging primary sources” (p. vii) and does so. The documents include the best contemporary Confederate description of the massacre, some of Forrest’s correspondence, samples of Federal survivor testimony, and representative editorials. Yet, most items come from the Federal side, especially the investigations by the army and Congress. Good additions would be contemporary letters by Charlie Robinson, “Marion,” and “Memphis”; an April 28, 1864, Richmond Examiner editorial; Forrest’s two reports; and more of his correspondence with superiors. The work sometimes neglects to note the date of a document.
Most of the book consists of Tap’s logically organized chapters. He had previously written about the congressional investigation in his Over Lincoln’s Shoulder: The Committee on the Conduct of the War (1998). The new book has a background chapter and a later segment on the postwar period which effectively establish a very broad context for the era’s race relations. The author also contributes an overview of the incident and its aftermath. A chapter on Fort Pillow’s historiography mostly provides excellent coverage. But, it attributes allegations about rapes by black Federals to Thomas Jordan and J.P. Pryor’s The Campaigns of Lt. Gen. N.B. Forrest (1868), when they originated much later in James Dinkins, Personal Recollections and Experiences in the Confederate Army (1897).
Tap synthesizes recent historical studies about the massacre. He finds racial prejudice common, though not universal, in the nineteenth-century United States. By the fall of 1862 the Republican Party’s wartime policies threatened slavery and white supremacy. The beginning in 1863 of extensive Federal enlistment of black troops “implied parity between former slaves and Confederate soldiers that many Confederates could not stomach” (p. 117). This insult to their honor enraged Confederates and provoked a spontaneous slaughter at Fort Pillow. He concludes that “A preponderance of evidence makes it obvious that a brutal massacre took place at Fort Pillow on April 12, 1864. None of the reasons put forth by points of view sympathetic to the Confederacy provides a plausible alternative explanation” (p. 117). He particularly rejects Confederate claims that Federals were drunk, that no Federals tried to surrender, and that Federals tried to lure Confederates within their gunboat’s range. He does accept the Confederates’ claim that the unionist Federals infuriatingly included a number of Confederate deserters, something that, while plausible, has never been proven from service records. After carefully reviewing the inconclusive evidence about Forrest’s role, the author thoughtfully contends that the general likely knew a massacre might occur but “did not move immediately to stop it” (p. 59). Both the impracticalities of revenge and continuing Northern prejudice prevented retaliation. He observes that “it is difficult to maintain that racial relations were revolutionized as a result of the Civil War” (p. 124). There is a tension between the forcefulness of Tap’s interpretations and the series’ aim to have students “reach their own conclusions” (p. vii). His findings might best be presented as one typical current view.
Tap’s chapters are mostly well written in a flowing style. Key points reappear at different points in the text, something helpful for undergraduates. However, numerous typos, misspellings, and wrong words mar communication at times. Some statements have misleading wording. For example, the claim that “the Lincoln Administration warned Southerners in rebellion that it ... was inching toward a position of social and political equality between the two races” (p. 2) should have been qualified as the Confederates’ perception. One supposed quotation on p. 41 actually consists partly of an inaccurate summary cited to the wrong page in the source.
The Fort Pillow incident is such a challenging subject to study that it is easy to make factual errors. Despite extensive research, Tap made some. Examples are the claims that Federal general Stephen Hurlbut never evacuated the fort (in early 1864 the fort he closed it for seventeen days), that the last garrison included the entire 6th United States Colored Heavy Artillery (actually just one battalion), and that Lincoln issued a retaliation order for Fort Pillow (Roy Basler’s The Collected Works of Abraham Lincoln clarifies that it did not get past a draft). Fortunately, most of the other mistakes involve minor matters.
To this reviewer, it looks like the press rushed the book to print before the incident’s 150th anniversary. If flaws could be cleaned up in a second edition, a valuable product would result.
If there is additional discussion of this review, you may access it through the network, at: https://networks.h-net.org/h-civwar.
John Cimprich. Review of Tap, Bruce, The Fort Pillow Massacre: North, South, and the Status of African Americans in the Civil War Era.
H-CivWar, H-Net Reviews.
|This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License.| | <urn:uuid:865d50cb-60ea-4328-a79e-cbf329f7d575> | CC-MAIN-2016-26 | http://www.h-net.org/reviews/showrev.php?id=40985 | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.932143 | 1,439 | 2.859375 | 3 |
Rainwater harvesting systems range from the simple and inexpensive to the elaborate and costly. If you are considering a rainwater harvesting system it is important to calculate the total amount of water than could be captured, stored, and used in a year with typical rainfall patterns. Using your local water and wastewater rates you can calculate the value of the saved water and determine if the system is cost effective.
On the simple and inexpensive end of the spectrum are rain barrels. A 57 gallon rain barrel with a cover, overflow hose, and hose bib connection costs about $100. If this barrel can be filled and used 20 times during the irrigation season approximately 1,100 gallons of water can be saved. At a value of $3 per 1000 gallons you can see that these devices would not be particularly cost effective, but they can save some water.
A mid-sized rainwater harvesting system may employ a series of rain barrels or a small storage tank. These systems have about 500 gallons of storage capacity and cost $350 - $500. Polyethylene storage tanks are available for between $0.30 and $0.60 per gallon. If a 500 gallon system can be filled and used 20 times during an irrigation season then 10,000 gallons of water can be saved valued at approximately $30 (depending upon local rates).
A larger residential rainwater harvesting system might have storage capacity of 2500 gallons. At a price of $0.50 per gallon, a 2,500 gallon polyethylene storage tank costs $1,250. Total installed cost for the system could be as much as $2,000. With one inch of rainfall, a 3,000 square foot roof catchment area with 80% efficiency can provide approximately 1,440 gallons of water to the storage tank. In an area that receives 30 inches of rain per year, approximately 43,000 gallons of water could theoretically be captured, stored, and used. The value of the saved water would be approximately $130 (at $3 per 1000 gallons). | <urn:uuid:66039786-826a-4c5c-bba4-666b71a26862> | CC-MAIN-2016-26 | http://www.h2ouse.org/action/details/action_element_contents.cfm?actionID=7BFC2969-7BDB-4000-93556F98AAB057A4&elementID=0C06CF60-D95F-437F-B092225D4A55024D | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.937539 | 405 | 2.84375 | 3 |
Basic description of a harlequin ladybird:
• Size and shape :
large (7-8 mm or about 1/4 inch), round
• Elytra (wing
case) ground colour: pale yellow-orange, orange-red,
red or black; highly variable
• Elytra pattern:
0-21 orange-red or black spots, or grid pattern; highly
• Most common forms in UK : orange
with 15-21 black spots: black with two or four orange
or red spots
• Pronotum pattern:
white or cream with up to 5 spots or fused lateral spots
forming 2 curved lines, M-shaped mark or solid trapezoid
• Other characteristics: elytra
with wide keel at base; legs almost always brown
Distinguishing the harlequin ladybird from other British
its less than 5 mm (1/5 inch) in length, it is definitely not a
• If its red with precisely 7 black spots, it is a 7-spot ladybird.
• If it has white or cream spots, it is a striped ladybird,
an orange ladybird or a cream-spot ladybird.
• If it is large, burgundy coloured and has 15 black spots,
it is an eyed ladybird
• If it has an orange pronotum, and fine hairs all over
it is a bryony ladybird.
• If it is black with four or six red spots, two of which
are right at the front of the outside margin of the elytra,
it is a melanic form
of the 2-spot ladybird. | <urn:uuid:259b0f01-f7c6-40cf-87df-0179456cc934> | CC-MAIN-2016-26 | http://www.harlequin-survey.org/recognition_and_distinction.htm | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.837323 | 344 | 2.71875 | 3 |
An eel fossil alongside another fossilized fish. Eels usually live in shallow water and belong to the order Anguilliformes. Some eels live in deep water (4000 meters [13123 feet]).They may vary in size between 10 centimeters (4 in) and 3 meters (9.8 feet) and may weigh up to 65 kilograms (143.3 pounds). This fossilized eel is no different from eels living today. They have not undergone any changes in 95 million years, which proves that these creatures did not go through a process of evolution. | <urn:uuid:6d324910-6612-4d45-8b4e-18f4f3394f8e> | CC-MAIN-2016-26 | http://www.harunyahya.com/en/Creation-Museum%0A/123512/EEL | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.940772 | 116 | 3.546875 | 4 |
Lying Is Forbidden
One of the greatest errors is for people to act according to their own logic, or to the value judgments widespread in their society, which often is far from Islam, and not to the logic prescribed by Allah in the Qur'an. In other words, they approve of, ignore, or implement, comfortably and without thinking, the very behavior of which Allah disapproves and thus will punish in the Hereafter. Lying is the most prominent behavior of this type. Even though most people know that lying is a bad moral characteristic, some people merely pay lip service to this knowledge because so many people have turned this serious character defect into a habit. Allah points out this fact in the following verse of the Qur'an:
If you obeyed most of those on Earth, they would misguide you from Allah's Way. They follow nothing but conjecture. They are only guessing. (Surat al-An`am, 6:116)
Interestingly, most people who come into contact with a liar know when he or she is lying, but do not bother to expose the lies. In other words, they allow the liar to continue spreading his or her lies. Lying is a secret language among people, one about which everybody remains silent.
When something valuable is broken, for instance, the person who broke it may lie and deny having done so, thereby saving the day according to his own mentality. In fact, he puts himself in a very bad position, because if it is revealed that he is lying, he will greatly damage the very pride that he is trying to protect. Even more important, he has earned Allah's disapproval. To the same extent that a Muslim avoids eating pork and makes sure to pray five times a day, he is scrupulous about not lying.
However, people who do not consider that lying is forbidden immediately resort to lies to protect themselves whenever they find themselves in a difficult position. Maybe at that moment they rescue themselves from what really is a difficult position, or believe that they have done so, but, as unrepentant and dishonest people, they will be held responsible for their lie in the afterlife.
Allah tells us in the following verse that those who do not believe in the Qur'an's verses are liars:
Those who do not believe in Allah's Signs are merely inventing lies. It is they who are the liars. (Surat an-Nahl, 16:105)
Some people lie with great ease because they do not think about the Hereafter or believe that lying causes any harm. An example of this is the expression "white lies," which signifies small untruths that are believed to be innocent and harmless, or that rescue the person from a particular situation. However, any type of lying indicates insincerity, hypocrisy, and falsity under any circumstances, for those who engage in it are deceiving and disrespecting others. For this reason, "white" lies are the same as "black" lies, and have their own harmful effects.Allah has forbidden lying, as has our Prophet (saas), as seen below:
"Shall I not inform you of a great sin? Beware, it is to speak falsehood..." 1
"False witness has been made equivalent to attributing a partner to Allah. Avoid the abomination of idols and speaking falsehood as people pure of faith to Allah, not associating anything with Him" 2
"Be careful of falsehood as it is the companion of the sinners and both will be in Hell." 3 | <urn:uuid:dabaa9d4-2c3c-462c-a974-54f749366141> | CC-MAIN-2016-26 | http://www.harunyahya.com/en/books/2162/What_The_Quran_Says_About_Liars_And_Their_Methods/chapter/1003/Lying_is_forbidden | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.975925 | 719 | 2.578125 | 3 |
Doctor of Pharmacy
Pharmacists today play a major role in health care...
Many of us are familiar with a pharmacist in the local drugstore, and may have even had important questions about our medications answered. But few realize how the role of the pharmacist has evolved and how important they have become in the health care system. A pharmacist is more than just a prescription provider. They are highly educated and carefully trained. They not only carefully fill prescriptions written by your physician, but are able to knowledgably answer questions about medications.
Become a Pharm D
- Career Plan: How to Become a Pharmacist
- Schools Offering: Pharmacy Programs
Pharm D Resources...
The role of the pharmacist is extremely important. They ensure that the correct medication in the correct amount is dispensed, are on hand to answer questions, provide drug information, and work as a liaison between you and the physician. They can monitor a patientís medication tolerance and effectiveness, and can initiate medication changes. Additionally, pharmacists can actually reduce unnecessary costs in health care by ensuring that medications are properly dispensed and renewed.
Options and requirements for education
How do you become a pharmacist? All pharmacists have at minimum a Bachelor of Science degree in Pharmacy. However, requirements have changed in recent years to require a full six years of study post high school. In 1996, most pharmacy schools began to change over to a Doctor of Pharmacy degree (PharmD), which requires four years of study and then an additional two years of pharmacy study. This is now the professional degree for pharmacists. The first two years are generally considered the prerequisite courses, and the remaining four years are part of the professional program. Students who complete the program will be earning a Bachelor of Science in Pharmaceutical Sciences and a PharmD degree simultaneously.
As a pharmacy student, you will take courses in biology, chemistry, organic chemistry, physics, anatomy, physiology, microbiology, biochemistry, as well as in calculus and statistics. Admissions are often competitive and a high GPA is often required. Professional courses include pharmacology (the study of drugs), pharmacognosy (the study of drugs that are naturally occurring), pharmaceutics (includes calculations, preparations, and dispensing), and clinical pharmacy. Further study is also possible. You can also earn a Master of Science in Pharmacy, or a Ph.D. in Pharmaceutical Sciences.
State laws require all pharmacists to be licensed
In addition to a pharmacy degree, all pharmacists require licensure. Specific requirements vary from state to state, so check with your State Board of Pharmacy. However, requirements generally require graduating from an accredited college pharmacy program, gaining internship or hands-on experience, and passing the NAPLEX (North American Pharmacist Licensure Examination). Passing this results in becoming a registered pharmacist (RPh), which is equivalent to a licensure.
Pharmacy career outlook and statistics
The American Pharmacy Association (APhA) is a professional organization that can provide insight to this profession as well as career development opportunities. There are currently approximately 250,000 pharmacists in the country. As a country we spend over $75 billion annually on prescription and over-the-counter drugs. Many states also legally allow pharmacists to provide immunizations, or to be the site immunization clinics and provide public education. This profession will continue to grow as it becomes an important part of drug management in health care As a pharmacist, you can be part of an improved system that works to deliver Americans health care in the most efficient and cost-effective manner. If you are strong in science and pharmacology and interested in benefiting from the booming health care field, consider becoming a Doctor of Pharmacy. But you will need a strong education in science and to pursue six years of study, so get started now in finding the right pharmacy program.
To learn more about becoming an pharmacist, you can contact schools that offer related training programs or learn more by reading the career plan discussion on becoming an Pharm D. If you are still trying to determine the right career choice, take some time to explore additional careers in health care. | <urn:uuid:8021527f-441d-41da-a2d8-6d3fa389a3a5> | CC-MAIN-2016-26 | http://www.healthcarepathway.com/Health-Care-Careers/pharm-d.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.956532 | 853 | 2.765625 | 3 |
Health and Social Behaviour: Dietary Reference Values (DRVs), current dietary goals, recommendations, guidelines and the evidence for them
Three main types of dietary recommendations may be produced by public health agencies: dietary allowances (DRVs), dietary goals, and dietary guidelines.
Dietary allowances are quantitative guidelines for different population subgroups for the essential macro- and micro-nutrients to prevent nutritional deficiencies.
Dietary goals are quantitative national targets for selected macronutrients and micronutrients aimed at preventing long-term chronic disease e.g. coronary heart disease, stroke and cancer. They are usually aimed at the national population level
rather than the individual level.
Dietary guidelines are broad targets aimed at the individual to promote nutritional well-being. They were initially introduced for macronutrients but are now being used for micronutrients. Dietary guidelines can be expressed as quantitative
targets (e.g. five servings of fruit and vegetables/day) or as qualitative guidelines (e.g. eat more fruit and vegetables).
- The human body needs a variety of nutrients and the amount of each nutrient needed is called the nutrient requirement.
- In the UK, estimated requirements for various groups within the UK population were examined and published by the Committee on Medical Aspects of Food and Nutrition Policy (COMA) in the 1991 report Dietary Reference Values for Food Energy and
Nutrients for the United Kingdom. COMA has now been replaced by the Scientific Advisory Committee on Nutrition (SACN) who are likely to review the UK nutritional requirements in the near future.
- DRVs are a series of estimates of the amount of energy and nutrients needed by different groups of healthy people in the UK population; they are not recommendations or goals for individuals.
- DRVs have been set for following groups:
Boys and girls aged 0-3 months; 4-6 months; 7-9 months; 10-12 months; 1-3 years; 4-6 years; 7-10 years Males aged 11-14 years; 15-18 years; 19-50 years; 50+ years Females aged 11-14 years; 15-18 years; 19-50 years; 50+ years; pregnancy and breastfeeding
- In order to take account of the distribution of nutritional requirements within the population, COMA used four Dietary Reference Values (DRVs):
- Estimated Average Requirements (EARs)
- Reference Nutrient Intakes (RNIs)
- Lower Reference Nutrient Intakes (LRNIs)
- Safe Intake
Source: Food and Agriculture Organization of the United Nations
- EAR is an estimate of the average requirement of energy or a nutrient needed by a group of people i.e. approximately 50% of people will require less, and 50% will require more.
- RNI is the amount of a nutrient that is enough to ensure that the needs of nearly all a group (97.5%) are being met i.e. the majority will need less.
- LRNI is the amount of a nutrient that is enough for only a small number of people in a group who have low requirements (2.5%) i.e. the majority need more.
- Safe intake is used where there is insufficient evidence to set an EAR, RNI or LRNI. The safe intake is the amount judged to be enough for almost everyone, but below a level that could have undesirable effects.
- The amount of each nutrient needed differs between individuals and at different life stages. Individual requirements of each nutrient are related to a person’s age, gender, level of physical activity and health status.
- The changes in estimated nutritional requirements at different life-stages are outlined in table 1 below:
Table 1: Nutritional Requirements at Different Life-Stages
First 4-6 months of life (period of rapid growth and development) breast milk (or infant formula) contains all the nutrients required.
Between 6-12 months - requirements for iron, protein, thiamin, niacin, vitamin B6, vitamin B12, magnesium, zinc, sodium and chloride increase.
Department of Health advice recommends exclusive breastfeeding until 6 months of age with weaning introduced at 6 months.
Energy requirements increase (children are active and growing rapidly). Protein requirements increase slightly. Vitamins requirements increase (except vitamin D). Mineral requirements decrease for calcium, phosphorus and iron and
increase for the remaining minerals (except for Zinc).
Requirements for energy, protein, all the vitamins and minerals increase except C and D and iron.
Requirements for energy, protein, all vitamins and minerals increase except thiamin, vitamin C and A.
Requirements for energy continue to increase and protein requirements increase by approximately 50%.
By the age of 11, the vitamin and mineral requirements for boys and girls start to differ.
Boys: increased requirement for all the vitamins and minerals.
Girls: no change in the requirement for thiamin, niacin, vitamin B6, but there is an increased requirement for all the minerals. Girls have a much higher iron requirement than boys (once menstruation starts).
Boys: requirements for energy and protein continue to increase as do the requirements for a number of vitamins and minerals (thiamin, riboflavin, niacin, vitamins B6, B12, C and A, magnesium,
potassium, zinc, copper, selenium and iodine). Calcium requirements remain high as skeletal development is rapid.
Girls: requirements for energy, protein, thiamin, niacin, vitamins B6, B12 and C, phosphorus, magnesium, potassium, copper, selenium and iodine all increase.
Boys and girls have the same requirement for vitamin B12, folate, vitamin C, magnesium, sodium, potassium, chloride and copper. Girls have a higher requirement than boys for iron (due to menstrual losses) but a lower requirement
for zinc and calcium.
Requirements for energy, calcium and phosphorus are lower for both men and women than adolescents and a reduced requirement in women for magnesium, and in men for iron. The requirements for protein and most of the vitamins and minerals
remain virtually unchanged in comparison to adolescents (except for selenium in men which increases slightly).
Increased requirements for some nutrients. Women intending to become pregnant and for the first 12 weeks of pregnancy are advised to take supplements of folic acid. Additional energy and thiamin are required only during the last three
months of pregnancy. Mineral requirements do not increase.
Increased requirement for energy, protein, all the vitamins (except B6), calcium, phosphorus, magnesium, zinc, copper and selenium.
Energy requirements decrease gradually after the age of 50 in women and age 60 in men as people typically become less active and the basal metabolic rate is reduced. Protein requirements decrease for men but continue to increase
slightly in women. The requirements for vitamins and minerals remain virtually unchanged for both men and women.
After the menopause, women’s requirement for iron is reduced to the same level as that for men.
After the age of 65 there is a reduction in energy needs but vitamins and minerals requirements remain unchanged. This means that the nutrient density of the diet is even more important.
DRVs are estimates of energy and nutrient intakes and should therefore be used as guidance but should not be considered as exact recommendations. They show the amount of energy/nutrient that a group of people of a certain age range (and sometimes
sex) needs for good health and they only apply for healthy people.
Current dietary goals, recommendations, guidelines and the evidence for them
The UK Food Standards Agency issues guidance on dietary recommendations on behalf of the Department of Health for the general public. The current government recommendations are outlined in table 2 below.
Table 2: Government Dietary Recommendations
|Total Fat||Reduce to no more than 35% of food energy (currently at 35.3%)|
|Saturated Fat||Reduce to no more than 11% of food energy (currently at 13.3%)|
|Total Carbohydrate||Increase to more than 50% of food energy (currently at 48.1%)|
|Sugars (added)||No more than 11% of food energy (currently at 12.7%)|
|Dietary Fibre (NSP)||Increase the average intake of dietary fibre to 18g per day (currently 13.8g per day). Children’s intakes should be less|
|Fruit & Vegetables||Increase to at least 5 portions (400g) of a variety of fruit and vegetables per day (currently 2.8 portions per day)|
|Alcohol||Should not provide more than 5% of energy in the diet.
Women – should not regularly drink more than 2-3 units of alcohol/day
Men – should not regularly drink more than 3-4 units of alcohol/day
|Salt||Adults – no more than 6g salt a day (2.4g sodium)
1 to 3 years - 2 g salt a day (0.8g sodium)
4 to 6 years - 3g salt a day (1.2g sodium)
7 to 10 years - 5g salt a day (2g sodium)
11 and over - 6g salt a day (2.4g sodium)
The evidence for nutritional recommendations comes from a range of sources but particular emphasis is placed on COMA reports:
- 1991, COMA report on energy and nutrients provided evidence for the dietary recommendations for total fat, saturated fat, total carbohydrate, sugars, and dietary fibre.
- 1994, COMA recommended reducing the average salt intake of the population to 6g a day based on evidence of a link between high salt intake and high blood pressure. In 2003, the SACN reviewed the evidence (e.g. Intersalt study and Dietary
Approaches to Stop Hypertension (DASH) sodium trial) since 1994 and concluded the strength for the association between high salt intake and hypertension had increased. High blood pressure increases the risk of stroke and cardiovascular disease.
SACN confirmed that reducing salt intake to 6g per day would benefit the whole population.
Evidence for increasing the consumption of fruit and vegetables to 5 a day is provided by a number of sources. The Department of Health estimated that eating at least 5 portions of a variety of fruit and vegetables can reduce the risk of
deaths from chronic diseases (heart disease, stroke and cancer) by up to 20%, delay the development of cataracts, reduce the symptoms of asthma, improve bowel function and help to manage diabetes.
- British Nutrition Foundation http://www.nutrition.org.uk [accessed 01.08.08]
- Department of Health. ‘Dietary Reference Values for Food and Energy and Nutrients for the United Kingdom.’ 1991. Report of the Panel on Dietary Reference Values of the Committee on Medical Aspects of Food Policy.
- Department of Health http://www.dh.gov.uk/en/Publichealth [accessed 01.08.08]
- Food Standards Agency http://www.food.gov.uk [accessed 01.08.08]
- Gibney M., Margetts B., Kearney J., Arab L. Public Health Nutrition. The Nutrition Society. Blackwell Publishing.
- Lewis, G. Sheringham, J. Kalim, K. Crayford, T. Mastering Public Health: A postgraduate guide to examinations and revalidation. The Royal Society of Medicine Press Limited.
© Hannah Pheasant 2008 | <urn:uuid:4bb17913-cb08-4f50-b306-a9b181b54531> | CC-MAIN-2016-26 | http://www.healthknowledge.org.uk/public-health-textbook/disease-causation-diagnostic/2e-health-social-behaviour/drvs | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.906175 | 2,397 | 3.5625 | 4 |
- Subacute thyroiditis is a form of thyroiditis that causes pain in the thyroid gland.
- Common symptoms of subacute thyroiditis include fatigue, weakness, and fever.
- Subacute thyroiditis is treated with medication and often goes away within 12 to 18 months.
Thyroiditis refers to the inflammation of the thyroid. Your thyroid is a gland in the front of your neck that releases a variety of hormones. These hormones help regulate metabolism, the process that converts food into energy. They also play a crucial role in your physical and emotional responses, such as fear, excitement, and pleasure.
Thyroiditis includes a group of disorders that cause the thyroid to become inflamed. Most types of thyroiditis typically lead to either hyperthyroidism or hypothyroidism. Hyperthyroidism is a disorder in which the thyroid is overactive and produces too many hormones. Hypothyroidism is a disease in which the thyroid is underactive and doesn’t make enough hormones. Both of these conditions cause weight changes, anxiety, and fatigue.
Subacute thyroiditis is a rare type of thyroiditis that causes pain and discomfort in the thyroid. Individuals with this condition will also have symptoms of hyperthyroidism and later develop symptoms of hypothyroidism.
Subacute thyroiditis is slightly more common in women aged 40 to 50 than it is in men of the same age. It generally occurs after an upper respiratory infection, such as the flu or the mumps.
Unlike other forms of thyroiditis, subacute thyroiditis causes pain in the thyroid gland. In some cases, this pain might also spread to other parts of your neck, your ears, or your jaw. Your thyroid may be swollen and tender to the touch.
Other symptoms of subacute thyroiditis include:
- a fever
- difficulty swallowing
Most people typically develop hyperthyroidism in the initial stages of subacute thyroiditis. The symptoms during this stage of the disease may include:
- trouble concentrating
- sudden weight loss
- a fast or irregular heartbeat
- an increased body temperature that often leads to excessive sweating
As the disease progresses, hypothyroidism generally replaces hyperthyroidism. You’ll likely develop a new set of symptoms, including:
- sudden weight gain
- heavy menstrual periods
The first stage of subacute thyroiditis usually lasts for less than three months. The second stage may last for an additional nine to 15 months.
Your doctor will feel and examine your neck to see if the thyroid gland is enlarged or inflamed. They’ll also ask you about your symptoms and your recent medical history. Your doctor will be more likely to check for subacute thyroiditis if you’ve recently had a viral infection in the upper respiratory tract.
Your doctor will order a blood test to confirm a subacute thyroiditis diagnosis. This test will check the levels of certain hormones in your blood. Specifically, the blood test will measure your thyroid hormone, or free T4, and thyroid stimulating hormone (TSH) levels. The free T4 and TSH levels are part of what’s called an “internal feedback loop.” When one level is high, the other level is low, and vice versa.
The results of the blood test will vary depending on the stage of the disease. In the initial stages, your free T4 levels will be high while your TSH levels will be low. In the later stages, your TSH levels will be high while your T4 levels will be low. An abnormal level of either hormone indicates subacute thyroiditis.
Your doctor will give you medications to help reduce the pain and control inflammation. In some cases, this is the only treatment required for subacute thyroiditis. Possible medications include steroids, aspirin, and ibuprofen. Your doctor may also prescribe beta-blockers if hyperthyroidism is present in the early stages. These medications lower blood pressure and relieve certain symptoms, including anxiety and an irregular heartbeat.
Treatment for hyperthyroidism is important at the beginning of the disease. However, it will not be helpful once your condition progresses into the second phase. During the later stages of the disease, you’ll develop hypothyroidism. You’ll probably need to take hormones to replace the ones that your body isn’t producing.
Treatment for subacute thyroiditis is usually temporary. Your doctor will eventually wean you off any medications that have been prescribed to treat the condition.
The symptoms of subacute thyroiditis usually go away within 12 to 18 months. In some cases, however, hypothyroidism may end up being permanent. The American Thyroid Association estimates that approximately 5 percent of people with subacute thyroiditis develop permanent hypothyroidism.
Call your doctor if you suspect you have subacute thyroiditis. Early diagnosis and treatment can help prevent complications from occurring. | <urn:uuid:09d38bcd-5b01-474f-8a9a-7555295fe034> | CC-MAIN-2016-26 | http://www.healthline.com/health/subacute-thyroiditis | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.916424 | 1,025 | 3.65625 | 4 |
In the spectacular landscape of the Peruvian Andes, very little grows. Heifer alpaca projects address nutrition and climate change vulnerability in a precarious environment.
By Brooke Edwards, Heifer International writer
Photos by Dave Anderson
In the Andean highlands of Peru, 13,000 feet above sea level, very little can grow. The air is thin, the water scarce, the earth rocky and infertile. Against a backdrop of glacier-capped mountains, however, indigenous families survived since ancient Incan times by raising hearty alpacas. Climate change, severe deterioration of water sources and pasture, low incomes, low market value of alpaca varieties and little diversity of food in more recent times have made life here even more tenuous. Heifer International's Alpaca Biodiversity in High Andean Communities Project works in 22 indigenous, small- farming communities to reduce vulnerability to climate change and food insecurity of 4,333 alpaca-raising families.
Alpacas are gentle not only to their human caretakers but also on the land. They eat scrub vegetation other livestock won't eat, and their padded feet don't damage the fragile terrain. Their droppings help fertilize the topsoil, improving crops and reducing erosion. The exceptionally soft wool is collected without injuring the animal, providing Heifer families with fine material to make blankets, ponchos, hats and carpets.
Three key factors prevent alpaca-raising families from earning higher incomes from the sale of their products: low genetic quality of the alpaca herds, low quality of the alpaca fiber and textiles due to poor quality control during shearing and processing and the use of intermediaries to sell their products.
As the alpaca wool industry flourished in Peru over the last century, alpaca farmers turned their focus away from the natural variability of alpaca colors to produce only white stock. White fiber is easier to dye, making it more valued on the commercial market. However, this practice resulted in a gradual loss of biodiversity and richness of the species and, therefore, a level of vulnerability to external commercial interests these indigenous families cannot afford.
A more recent shift in smaller, local markets' preference for natural colors of wool has led to a resurgence in breeding alpacas in a variety of hues, which are in turn genetically more resilient and resistant to climate change.
Artisan goods made with natural colors of wool sell for high prices to tourists. For a farmer to get a good price for the wool, however, the color must be uniform. Although adorable, this alpaca (at right) is of lower quality than desired, according to alpaca farmer Lucio Mandura Crispin. The reason? She has three distinct shades of fiber.
Crispin learned techniques to improve the genetics of his livestock through training from Heifer International's Peru country program. Now, instead of depending on the sale of his alpacas' wool, his income comes largely from selling high-quality breeding stock, which can bring in as much as $5,000 each.
Crispin, posing with one of his award- winning, solid-brown alpacas in his community of Fundo Tumpata, Pacchanta, says he loves his alpacas almost as much as he loves his wife.
So far, Crispin has won 150 awards for his alpacas' fiber color, quality and conformity. In addition to recognition, Crispin also received tools, irrigation supplies and veterinary products to improve his farm.
The successes of farmers like Crispin who raise quality breeding stock trickle down to the rest of the community. High-quality breeders yield offspring with high-quality wool, which can be sold raw at higher prices than before or woven into beautiful products for sale. | <urn:uuid:dc7fc716-291e-4bfe-b4a1-b2dc5f5dbbba> | CC-MAIN-2016-26 | http://www.heifer.org/join-the-conversation/magazine/2012/Fall-2012/alpaca-country.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.951248 | 779 | 3.1875 | 3 |
Herong's Notes on Astrology and Horoscope - Version 2.10, by Dr. Herong Yang
Horoscope Native and Ascendant (Rising Sign)
This section provides introduction of some basic horoscope elements: Native and Ascendant (Rising Sign).
In horoscopic astrology, "Native" is defined as the reference point for the place and time of the event (individual's birth) to be charted. Native is always located at the center of a horoscope chart.
"Ascendant (Rising Sign)" is defined as the zodiac sign (zodiac constellation) that is rising on the east horizon at the time and the place of the event.
If we use a person's birth horoscope chart as an example:
On opposite side of the Ascendant point is the Descendant point, where the opposite zodiac sign (zodiac constellation) is descending on the west horizon at the time and the person's birth event.
Now, let's use a simplified calculation to explain the concepts of Ascendant and Descendant. Assuming that John is born on May 30, 2000 at noon time in the city of Jakarta, here how we can draw a simplified version of John's birth horoscope chart:
1. Draw a big circle to represent the ecliptic circle. Draw a dot at the center of the ecliptic circle to represent the John's Native point. Draw a horizontal, diagonal, line on the ecliptic circle to represent the horizontal plane.
2. At John's birth time, noon time, the Sun is at the top of the sky, the Zenith point in the observer's relative celestial coordinate system. So, mark the Sun's location at 12 o'clock position on the ecliptic circle.
3. Based on the sidereal zodiac sign table, on John's birth date, the Sun is travelling in the middle of the Taurus, the Bull, sign, 15 degrees of Taurus. In other word, John's sidereal Sun Sign is 15 degrees of Taurus, the Bull. This means that the middle point of the Taurus sign meets the Sun's position on John's birth chart. So mark the Taurus sign section on the ecliptic circle from 11:30 position to 12:30 position.
4. Based on the position of the Taurus sign, the Rising Sign or Ascendant should be Taurus sign plus 90 degrees, which will be the Leo, the Lion, sign. In other word, John's Rising Sign is Leo, the Lion. More precisely, the Ascendant, the Rising Sign, is at 15 degrees of Leo on John's birth chart. So mark the Leo sign section on the ecliptic circle from 8:30 position to 9:30 position.
5. Based on the Ascendant, calculating the Descendant is easy. The Descendant should be the opposite sign of Leo, Aquarius (the Water Bearer). More precisely, John's Descendant sign is 15 degrees of Aquarius. So mark the Aquarius sign section on the ecliptic circle from 2:30 position to 3:30 position.
The picture below shows a simplified version of John's birth horoscope chart:
However, the above birth chart needs to be revised if you are using the tropical zodiac coordinate system, as in Western astrology. We know that the different between tropical coordinates and the sidereal coordinates is about 24 days, about 24 degrees.
So John's birth chart with the tropical coordinate system will have:
The first thing to consider when furnishing an entire lab is the work surface options. Workstations and tables are both great options. Ultimately, the right choice depends on your needs. Lab tables are more versatile because they can be moved around when you want to rearrange a classroom. Workstations, however, have the added convenience of built in gas and water fixtures. If your science curriculum includes many experiments that utilize an open flame, having enough gas fixtures in the room is very important.
No matter which configuration you select, you will need to choose the ideal work surface material for your environment. Lab tables are much tougher than regular tables so they can withstand the chemicals and other harsh materials that are used for experiments. Surface materials come in varying degrees of toughness including chemical-resistant and epoxy resin options.
Laboratory tables generally seat two students side by side. This is an ideal arrangement because many teachers like students to work in pairs. When students work together they develop important communication skills and use less supplies per person.
Workstations come in forward-facing arrangements, or with students seated on all sides. If you have many instructional periods in which students must face the board and take notes, it is important for everyone to be facing forward so they can see clearly. Lab workstations facilitate work in pairs or larger groups. While you are making sure that budding scientists have a place to work, don't forget about their teachers. Be sure to include a well-equipped instructor's desk as well.
Every room in a school needs to have appropriate seating, and the science classroom is no exception. No matter how work spaces are configured, our lab stools provide the support that is needed during classes. They come in a variety of materials and styles, from leading manufacturers including National Public Seating, Academia and OFM.
Choose hardboard, hard plastic or padded seats. We even have swivel stools with anti-bacterial, anti-microbial vinyl upholstery. If you prefer mesh or wooden stools, we have those too. Models are available with and without backrests, with square or round seats. To make sure that people of all sizes can stay positioned properly, adjustable-height options are available as well.
Every experiment requires a unique set of materials. Rather than having to run back and forth to a storage closet or spend a considerable amount of time setting up the laboratory in advance you can keep all necessary materials handy with lab storage. Lab cabinets come with either wood or glass doors, in freestanding, mobile and wall-mounted models. If you are working with extremely harsh chemicals, flammable liquids safety cabinets from Edsal will keep everything securely stowed away when not in use. Microscope storage cabinets are available as well.
Ensure that every experiment is a safe one with precautionary equipment. Every operational lab needs to have a lab shower and eye wash station, in case of the unlikely event that they are needed. You can get both of these items installed in one unit or order them separately. We even have eyewash stations that can be installed on a regular sink.
Demonstrate proper technique and regard for the learning process with lab furniture from Hertz Furniture. All of our items are safe and functional so students and teachers can enjoy a proper worry-free experiment experience. If you need help arranging your space and choosing the ideal furnishings for it, contact the experts at the Hertz Design Center for free project-planning advice. | <urn:uuid:b172ccae-b4e8-4a33-a807-11dfa970a8cd> | CC-MAIN-2016-26 | http://www.hertzfurniture.com/Lab-Furniture--72--no.html?pcid=category_root&cn=Lab%20Furniture&cat=72&r=1&history=nx9hiqt7%7C%7Cpcid~category_root%5Erefine_options_to_show~* | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.950103 | 712 | 2.765625 | 3 |
Motor Vehicle Accidents
Every year millions of people are injured in motor vehicle accidents—many very seriously. In fact, according to the National Highway Traffic Safety Administration, every 10 seconds someone in the United States is involved in a car accident. In 2009, there were an estimated 5,505,000 police-reported traffic crashes, in which 33,308 people were killed and 2,217,000 were injured.
Motor vehicle accidents cause the loss of time, property, health and even life. Such accidents occur because of elements including driver error, negligence, manufacturing defects and dangerous weather. No matter what the specific cause or result, a crash can turn a normal day into a prolonged struggle. Speaking with a lawyer can help you sort out rights, your options and your future. Contact an attorney to find out more.
When you have been in an auto accident, you may have a sense of who caused it. Issues of fault, however, can be complicated by who acted when and which laws governed the situation. If the other driver was negligent, you may have to prove that the driver breached a duty of care to you and that the breach caused your damages. The assistance of an attorney can be immensely valuable at this time, whether you are battling an insurance company or seeking compensation for your injuries.
Motor Vehicle Accidents Resource Links
National Highway Traffic Safety Administration (NHTSA)
The NHTSA is a government agency dedicated to making American roads safer for travelers.
National Safety Council
The National Safety Council, a nonprofit organization, provides links and articles on topics like seat belt use, safe driving for teenagers and reducing motor vehicle crashes.
U.S. Department of Transportation (DOT)
The Department of Transportation is a federal agency focusing on policy and lawmaking to ensure safer U.S. travel.
MedlinePlus: Motor Vehicle Safety
The website, from the National Library of Medicine and National Institutes of Health, offers information on preventing motor vehicle crashes, stating that every 12 minutes, someone in America dies from a motor vehicle accident. | <urn:uuid:e8ed802f-3003-44e9-b1e0-22a96beec040> | CC-MAIN-2016-26 | http://www.hgdlawfirm.com/blog/practice_area/motor-vehicle-accidents/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.955786 | 415 | 2.546875 | 3 |
======= Understanding Hinduism =======
The teachings of Sri Ramana Maharshi
[Note: By David Godman: The original texts from which these
At first sight Sri Ramana Maharshi's statements on God appear to be riddled with contradictions: on one occasion he might say that God never does anything, on another that nothing happens except by God's will. Sometimes he would say that God is just an idea in the mind, while at other times he would say that God is the only existing reality.
The contradictory statements are largely a reflection of the differing levels of understanding he encountered in his questioners. Those who worshipped personal Gods would often be given anthropomorphic explanations. They would be told that God created the world, that he sustains it by his divine power, that he looks after the needs of all its inhabitants and that nothing happens that is contrary to God's will. On the other hand, those who were not attracted to such a theory would be told that all such ideas about God and his power were mental creations, which only obscured the real experience of God, which is inherent in everyone.
At the highest level of his teachings the terms "God" and "Self" are synonyms for the immanent reality which is discovered by Self-realisation. Thus realisation of the Self is realisation of God; it is not an experience of God, rather it is an understanding that one is God. Speaking from this ultimate level, Sri Ramana's statements on God can be summarised in the following way:
On a lower level Sri Ramana Maharshi spoke about Iswara, the Hindu name for the supreme personal God. He said that Iswara exists as a real entity only so long as one imagines that one is an individual person. When individuality persists there is a God who supervises the activities of the universe; in the absence of individuality Iswara is non-existent.
Besides Iswara, Hinduism has many deities which resemble the gods and demons of Norse and Greek mythology. Such deities are a central feature of popular Hinduism and their reality is still widely accepted. Sri Ramana surprised many people by saying that such beings were as real as the people who believed in them. He admitted that after realisation they shared the same fate as Iswara, but prior to that, he seemed to regard them as senior officials in a cosmological hierarchy which looked after the affairs of the world.
Question: God is described as manifest and unmanifest. As the former he is said to include the world as a part of his being. If that is so, we as part of that world should have easily known him in the manifest form.
Sri Ramana Maharshi: Know your self before you seek to decide about the nature of God and the world.
Question: Does knowing myself imply knowing God?
Sri Ramana Maharshi: Yes, God is within you.
Question: Then, what stands in the way of my knowing myself or God?
Sri Ramana Maharshi: Your wandering mind and perverted ways.
Question: Is God personal?
Sri Ramana Maharshi: Yes, he is always the first person, the "I", ever standing before you. Because you give precedence to worldly things, God appears to have receded to the background. If you give up all else and seek him alone, he alone will remain as the "I", the Self.
Question: Is God apart from the Self?
Sri Ramana Maharshi: The Self is God. "I am" is God. This question arises because you are holding on to the ego self. It will not arise if you hold on to the true Self. For the real Self will not and cannot ask anything. If God be apart from the Self he must be a self-less God, which is absurd. God, who seems to be non-existent, alone truly exists. Whereas the individual, who seems to be existing, is ever non-existent. Sages say that the state in which one thus knows one's own non-existence (sunya) alone is the glorious supreme knowledge.
You now think that you are an individual, that there is the universe and that God is beyond the cosmos. So there is the idea of separateness. This idea must go. For God is not separate from you or the cosmos. The Gita also says:
The Self am I, O Lord of sleep,
Thus God is not only in the heart of all, he is the prop of all, he is the source of all, their abiding place and their end. All proceed from him, have their stay in him, and finally resolve into him. Therefore, he is not separate.
Question: How are we to understand this passage in the Gita: "This whole cosmos forms a particle of me"?
Sri Ramana Maharshi: It does not mean that a small particle of God separates from him and forms the universe. His sakti (power) is acting. As a result of one phase of such activity the cosmos has become manifest. Similarly, the statement in the Purusha Sukta, "All the beings form his one foot", does not mean that Brahman is in several parts.
Questioner: I understand that. Brahman is certainly not divisible.
Sri Ramana Maharshi: So the fact is that Brahman is all and remains indivisible. It is ever realised but man is not aware of this. He must come to know this. Knowledge means the overcoming of obstacles, which obstruct the revelation of the eternal truth that the Self is the same as Brahman. The obstacles taken together form your idea of separateness as an individual.
Question: Is God the same as Self?
Sri Ramana Maharshi: The Self is known to everyone, but not clearly. You always exist. The be-ing is the Self. "I am" is the name of God. Of all the definitions of God, none is indeed so well put as the Biblical statement "I am that I am" in Exodus 3. There are other statements, such as Brahmaivaham ("Brahman am I"), Aham Brahmasmi ("I am Brahman") and Soham ("I am he"). But none is so direct as the name Jehovah, which means "I am". The absolute being is what is. It is the Self. It is God. Knowing the Self, God is known. In fact God is none other than the Self.
Question: God seems to be known by many different names. Are any of them justified?
Sri Ramana Maharshi: Among the many thousands of names of God, no name suits God, who abides in the Heart, devoid of thought, so truly, aptly, and beautifully as the name "I" or "I am". Of all the known names of God, the name of God "I-I" alone will resound triumphantly when the ego is destroyed, rising as the silent supreme word (mouna-para-vak) in the Heart-space of those whose attention is Selfward-facing. Even if one unceasingly meditates upon that name "I-I" with one's attention on the feeling "I", it will take one and plunge one into the source from which thought rises, destroying the ego, the embryo, which is joined to the body.
Question: What is the relationship between God and the world? Is he the creator or sustainer of it?
Sri Ramana Maharshi: Sentient and insentient beings of all kinds are performing actions only by the mere presence of the sun, which rises in the sky without any volition. Similarly all actions are done by the Lord without any volition or desire on his part. In the mere presence of the sun, the magnifying lens emits fire, the lotus-bud blossoms, the water-lily closes and all the countless creatures perform actions and rest.
The order of the great multitude of worlds is maintained by the mere presence of God in the same manner as the needle moves in front of a magnet, and as the moonstone emits water, the water-lily blossoms and the lotus closes in front of the moon.
In the mere presence of God, who does not have even the least volition, the living beings, who are engaged in innumerable activities, are embarking upon many paths to which they are drawn according to the course determined by their own Karmas, finally realise the futility of action, turn back to Self and attain liberation.
The actions of living beings certainly do not go and affect God, who transcends the mind, in the same manner as the activities of the world do not affect that sun and as the qualities of the conspicuous four elements (earth, water, fire and air) do not affect the limitless space.
Question: Why is samsara (creation and manifestation as finitised) so full of sorrow and evil?
Sri Ramana Maharshi: God's will!
Question: Why does God will it so?
Sri Ramana Maharshi: It is inscrutable. No motive can be attributed to that power; no desire, no end to achieve can be asserted of that one infinite, all-wise and all-powerful being. God is untouched by activities, which take place in his presence. Compare the sun and the world activities. There is no meaning in attributing responsibility and motive to the one before it becomes many.
Question: Does everything happen by the will of God?
Sri Ramana Maharshi: It is not possible for anyone to do anything opposed to the ordinance of God, who has the ability to do everything. Therefore to remain silent at the feet of God, having given up all the anxieties of the wicked, defective mind, is best.
Question: Is there a separate being Iswara (personal God) who is the rewarder of virtue and punisher of sins? Is there a God?
Sri Ramana Maharshi: Yes.
Question: What is he like?
Sri Ramana Maharshi: Iswara has individuality in mind and body, which are perishable, but at the same time he has also the transcendental consciousness and liberation inwardly.
Iswara, the personal God, the supreme creator of the universe really does exist. But this is true only from the relative standpoint of those who have not realised the truth, those people who believe in the reality of individual souls. From the absolute standpoint the sage cannot accept any other existence than the impersonal Self, one and formless.
Iswara has a physical body, a form and a name, but it is not so gross as this material body. It can be seen in visions in the form created by the devotee. The forms and names of God are many and various and differ with each religion. His essence is the same as ours, the real Self being only one and without form. Hence the forms he assumes are only creations or appearances.
Iswara is immanent in every person and every object throughout the universe. The totality of all things and beings constitutes God. There is a power out of which a small fraction has become all this universe, and the remainder is in reserve. Both this reserve power plus the manifested power as material world together constitute Iswara.
Question: So ultimately Iswara is not real?
Sri Ramana Maharshi: Existence of Iswara follows from our conception of Iswara. Let us first know whose concept he is. The concept will be only according to the one who conceives. Find out who you are and the other problems will solve themselves.
Iswara, God, the creator, the personal God, is the last of the unreal forms to go. Only the absolute being is real. Hence, not only the world, not only the ego, but also the personal God, are unreal. We must find the absolute: nothing else.
Question: You say that even the highest God is still only an idea. Does that mean that there is no God?
Sri Ramana Maharshi: No, there is an Iswara.
Question: Does he exist in any particular place or form?
Sri Ramana Maharshi: If the individual is a form, even Self, the source, who is the Lord, will also appear to be form. If one is not a form, since there then cannot be knowledge of other things, will that statement that God has a form be correct? God assumes any form imagined by the devotees through repeated thinking in prolonged meditation. Though he thus assumes endless names, the real formless consciousness alone is God.
With regard to his location, God does not reside in any place other than the Heart. It is due to illusion, caused by the ego, the "I am the body" idea, that the kingdom of God is conceived to be elsewhere. Be sure that the Heart is the kingdom of God.
Know that you are perfect, shining light, which not only makes the existence of God's kingdom possible, but also allows it to be seen as some wonderful heaven. To know this is alone jnana. Therefore, the kingdom of God is within you. The unlimited space of Turiyatita (beyond the four states, i.e. the Self), which shines suddenly, in all its fullness, within the Heart of a highly mature aspirant during the state of complete absorption of mind, as if a fresh and previously unknown experience, is the rarely attained and true Siva-loka (the kingdom of God), which shines by the light of Self.
Question: They say that the jiva (individual soul) is subject to the evil effects of illusion such as limited vision and knowledge, whereas Iswara has all-pervading vision and knowledge. It is also said that jiva and Iswara become identical if the individual discards his limited vision and knowledge. Should not Iswara also discard his particular characteristics such as all-pervading vision and knowledge? They too are illusions, aren't they?
Sri Ramana Maharshi: Is that your doubt? First discard your own limited vision and then there will be time enough to think of Iswara's all-pervading vision and knowledge. First get rid of your own limited knowledge. Why do you worry about Iswara? He will look after himself. Has he not got as much capacity as we have? Why should we worry about whether he possesses all-pervading vision and knowledge or not? It is indeed a great thing if we can take care of ourselves.
Question: But does God know everything?
Sri Ramana Maharshi: The Vedas declare God to be omniscient only to those who ignorantly think themselves to be people of little knowledge. But if one attains and knows him as he really is, it will be found that God does not know anything, because his nature is the ever-real whole, other than which nothing exists to be known.
Question: Why do religions speak of Gods, heaven, hell, etc.?
Sri Ramana Maharshi: Only to make the people realise that they are on a par with this world and that the Self alone is real. The religions are according to the view-point of the seeker.
Question: Do Vishnu, Siva, etc., exist?
Sri Ramana Maharshi: Individual human souls are not the only beings known.
Question: And their sacred regions Kailasa or Vaikuntha, are they real?
Sri Ramana Maharshi: As real as you are in this body.
Question: Do they possess a phenomenal existence, like my body? Or are they fictions like the horn of a hare?
Sri Ramana Maharshi: They do exist.
Question: If so, they must be somewhere. Where are they?
Sri Ramana Maharshi: Persons who have seen them say that they exist somewhere. So we must accept their statement.
Question: Where do they exist?
Sri Ramana Maharshi: In you.
Question: Then it is only an idea, which I can create and control?
Sri Ramana Maharshi: Everything is like that.
Question: But I can create pure fictions, for example, a hares horn, or only part truths, for example a mirage, while there are also facts irrespective of my imagination. Do the Gods Iswara or Vishnu exist like that?
Sri Ramana Maharshi: Yes.
Question: Is God subject to Pralaya (cosmic dissolution)?
Sri Ramana Maharshi: Why? Man becoming aware of the Self transcends cosmic dissolution and becomes liberated. Why not Iswara who is infinitely wiser and abler?
Question: Do devas (angels) and pisachas (devils) exist similarly?
Sri Ramana Maharshi: Yes.
Question: These deities, what is their status relative to the Self?
Sri Ramana Maharshi: Siva, Ganapati and other deities like Brahma exist from a human standpoint; that is to say, if you consider your personal self as real, then they also exist. Just as a government has its high executive officers to carry on the government, so has the creator. But from the standpoint of the Self all these gods are illusory and must themselves merge into the one reality.
Questioner: Whenever I worship God with name and form, I feel tempted to think whether I am not wrong in doing so, as that would be limiting the limitless, giving form to the formless. At the same time I feel I am not constant in my adherence to worship God without form.
Sri Ramana Maharshi: As long as you respond to a name, what objection could there be to your worshipping a God with name or form? Worship God with or without form till you know who you are.
Question: I find it difficult to believe in a personal God. In fact I find it impossible. But I can believe in an impersonal God, a divine force which rules and guides the world, and it would be a great help to me, even in my work of healing, if this faith were increased. May I know how to increase this faith?
Sri Ramana Maharshi: Faith is in things unknown, but the Self is self-evident. Even the greatest egotist cannot deny his own existence, that is to say, cannot deny the Self. You can call the ultimate reality by whatever name you like and say that you have faith in it or love for it, but who is there who will not have faith in his own existence or love for himself? That is because faith and love are our real nature.
Question: Should I not have any idea about God?
Sri Ramana Maharshi: Only so long as there are other thoughts in the Heart can there be a thought of God conceived by one's mind. The destruction of even that thought of God, due to the destruction of all other thoughts, alone is the unthought thought, which is the true thought of God.
Description of Historic Place
La Corne Nursing Station National Historic Site of Canada is located in La Corne, in the Abitibi-Témiscamingue region of Québec. It was built in 1940 and comprises two wooden buildings painted white: the dispensary-residence, a two-storey building with a front veranda, and a garage. A summer kitchen is located at the south-west corner of the premises. The nursing station has a rear courtyard where the tree line marks the western limit of the property. The nursing station has also retained its furniture and houses an ethnological collection directly associated with the site's history. The official recognition refers to the nursing station's legal property.
The La Corne Nursing Station was designated a national historic site of Canada in 2004 because:
- it is the best extant example of the network of dispensary-residences established by the Service médical aux colons;
- it symbolizes the contribution of several hundred outpost nurses to the implementation of public health policies and the development of rural social life;
- it represents the support of outpost nurses to the colonization of rural Quebec.
The La Corne Nursing Station is one of the best preserved of the network of 174 nursing stations created in Québec between 1932 and 1975. For 50 years, from 1940 to 1990, the same nurse, Gertrude Duchemin, who retired in 1976, lived at the La Corne Nursing Station. Her long residence in the building helped to preserve the site's integrity and enabled the conservation of the furniture as well as an ethnological collection directly associated with the site's history. Because of its physical integrity, the La Corne Nursing Station is an excellent example of a nursing station residence built in Québec and implemented in newly colonized regions by the Service médical aux colons (SMC) during the Great Depression. Three models of nursing station residences were built; the La Corne Nursing Station follows the first and most common model, used in the years 1930-1949, comprising a two-storey wooden building with an attached garage and an adjacent summer kitchen.
The nursing station also symbolizes the contribution of the network created by the SMC to the development of healthcare services in Québec’s remote areas. These nursing stations contributed to the genesis of the socio-sanitary infrastructure of many rural regions in Québec and nurses played a key role. The nursing station served both as the nurse’s workplace and residence consisting of a nurse's office, waiting room, kitchen, living room, bathroom and bedrooms on the second floor, hence the term “dispensary-residence.” Nevertheless, nurses had to travel great distances to serve the settlers. They took on many responsibilities, including promoting public health, monitoring the outbreak of contagious diseases, caring for the poor, delivering babies, and extracting teeth.
The La Corne Nursing Station illustrates the fundamental role played by these nursing stations in the development of communities and in the colonization process of the Abitibi region. Nurses like Gertrude Duchemin played an essential role in the development of Québec’s regions, particularly in Abitibi-Témiscamingue.
Source: Historic Sites and Monuments Board of Canada, Minutes, July 2003.
Key elements contributing to the heritage value of this site include:
- its location on a main road, next to the church and the cemetery;
- the visible remains representing the nursing station architecture of the first model, including its wood construction, two storeys, veranda, garage annex and adjacent summer kitchen;
- the interior evidence reflecting the 1940-1976 period when the house was a dispensary, such as the original floor plan and intact finishing, and particularly the original arrangement of private and professional spaces, such as the waiting room, the nurse’s office, the consulting room, the equipment, the counter and the series of cabinets;
- the ethnological collection, which demonstrates the nurse’s role in the social development of remote communities and the promotion of health policies of the period such as reference books and work instruments. | <urn:uuid:30a6cbec-b779-40aa-90be-892a969d0a50> | CC-MAIN-2016-26 | http://www.historicplaces.ca/en/rep-reg/place-lieu.aspx?id=12958 | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.960284 | 860 | 2.84375 | 3 |
Soviet leader Mikhail Gorbachev arrives in Washington, D.C., for three days of talks with President George Bush. The summit meeting centered on the issue of Germany and its place in a changing Europe.
When Gorbachev arrived for this second summit meeting with President Bush, his situation in the Soviet Union was perilous. The Soviet economy, despite Gorbachev's many attempts at reform, was rapidly reaching a crisis point. Moscow's control over its satellites in Eastern Europe was quickly eroding, and even Soviet republics such as Lithuania were pursuing paths of independence. Some U.S. observers believed that in an effort to save his struggling regime, Gorbachev might try to curry favor with hard-line elements in the Soviet Communist Party. That prediction seemed to be borne out by Gorbachev's behavior at the May 1990 summit. The main issue at the summit was Germany.
By late 1989, the Communist Party in East Germany was rapidly losing its grip on power; the Berlin Wall had come down and calls for democracy and reunification with West Germany abounded. By the time Gorbachev and Bush met in May 1990, leaders in East and West Germany were making plans for reunification. This brought about the question of a unified Germany’s role in Europe. U.S. officials argued that Germany should become a member of the North Atlantic Treaty Organization (NATO). The Soviets adamantly opposed this, fearful that a reunified and pro-western Germany might be a threat to Russian security. Gorbachev indicated his impatience with the U.S. argument when he declared shortly before the summit that, “The West hasn’t done much thinking,” and complained that the argument concerning German membership in NATO was “an old record that keeps playing the same note again and again.”
The Gorbachev-Bush summit ended after three days with no clear agreement on the future of Germany. Russia’s pressing economic needs, however, soon led to a breakthrough. In July 1990, Bush promised Gorbachev a large economic aid package and vowed that the German army would remain relatively small. The Soviet leader dropped his opposition to German membership in NATO. In October 1990, East and West Germany formally reunified and shortly thereafter joined NATO. | <urn:uuid:6c708c51-d426-4fc3-a383-87392b4e36d9> | CC-MAIN-2016-26 | http://www.history.com/this-day-in-history/gorbachev-arrives-in-washington-for-summit | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.980087 | 468 | 3.421875 | 3 |
It can be frustrating to try a new recipe and not understand how much of an ingredient to purchase at the store. Sometimes a recipe gives you a quantity of strawberries by volume (2 cups or 1 pint), sometimes as a weight (4 ounces), and at still other times as a fruit description (1 pound of fresh large strawberries). But what are they really talking about? How many strawberries in a pound? In order to help make cooking easier we did some experiments to help tell you exactly how many strawberries you need to buy.
To answer "How many strawberries in a cup?" we went to the local market to check out the fruit section. After surveying the options we found that a 1 pint container that holds about 12 strawberries is considered large berries. The count for medium-sized strawberries is 24, and if they are small there will be about 36 berries in the 1 pint container. However, in the grocery store most 1 pint cartons have a mix of sizes. We selected 1 pound of fresh strawberries with the green leaves still attached for our "how many strawberries in a pint" testing samples.
Strawberries are normally trimmed of the upper section where the stem enters the berry and then are served whole, sliced or chopped. In addition, these berries are made into refreshing and tasty jellies and jams. Many basic jelly and jam recipes use 3 to 4 quarts of fresh strawberries. This would give you 3 to 4 cups of juice for jelly or 6 to 8 cups of mashed berries for making jam.
Once we hit the kitchen we started our measuring. Our 1 pound of strawberries yielded 3.5 to 4 cups of whole berries; these were mixed sizes but predominantly medium to large. One cup of these whole strawberries weighed 4 to 5 ounces. Next we cut our pound of strawberries into ¼ inch thick slices and ended up with 2.7 cups, or 1.3 pints. If you purchase a 1 pint plastic container of whole strawberries it will yield about 2 cups of sliced berries that weigh about ¾ of a pound.
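If you like to cook by the numbers, the measurements above boil down to a couple of simple conversion factors. Here is a minimal Python sketch (the constants are our rounded kitchen results from above; the function name is just for illustration):

```python
# Approximate equivalences measured above (mixed medium/large berries).
CUPS_WHOLE_PER_LB = 3.75   # 1 lb yielded 3.5 to 4 cups of whole berries
CUPS_SLICED_PER_LB = 2.7   # 1 lb yielded 2.7 cups of 1/4-inch slices

def pounds_to_cups(pounds, sliced=False):
    """Rough conversion from pounds of fresh strawberries to cups."""
    factor = CUPS_SLICED_PER_LB if sliced else CUPS_WHOLE_PER_LB
    return pounds * factor

print(round(pounds_to_cups(1.0), 1))               # ~3.8 cups whole per pound
print(round(pounds_to_cups(0.75, sliced=True), 1)) # ~2.0 cups sliced from a 1 pint carton
```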
Did you know that technically a strawberry is not a berry? It is a member of the rose family. It is the only fruit with the seeds on the outside; each strawberry has about 200 seeds. Unlike some other fruits, they don't continue to ripen after being picked. Le Musée de la Fraise in Belgium is a museum dedicated to strawberries.
Next time your recipe calls for a cup of strawberries you’ll feel confident knowing what to purchase. You can also use our conversion tool below for any custom how many strawberries in a... measurements you need. If you are hulling a lot of fresh strawberries you should definitely consider getting a good huller. I would highly recommend the Joie Stainless Steel Strawberry Huller. Besides being very inexpensive and fast to clean, it’s quick and easy to use.
One of the biggest hassles when cooking and working in the kitchen is when a recipe calls for "the juice of 1 lime" or a similar measurement. Often times when cooking people use bottled juices, pre-sliced vegetables and other convenient cooking time savers. Produce Converter will help you convert the "juice of 1 lime" and other similar recipe instructions into tablespoons, cups and other concrete measurements.
Produce Converter can also be used to figure out how many vegetables to buy when you need, for instance, "A cup of diced onion." You can use our easy conversion tool to figure out exactly how many onions you need to buy at the store in order to end up with the amount you need for your cooking.
We hope you enjoy Produce Converter and if you have any suggestions for how we can improve it and make your cooking easier please let us know. | <urn:uuid:32606d65-8a53-4dc2-b675-95e1d4e97e2a> | CC-MAIN-2016-26 | http://www.howmuchisin.com/produce_converters/strawberries | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.948821 | 761 | 3.078125 | 3 |
Good news - the Sun doesn't cause earthquakes.
That might seem obvious, but it's been suggested for a while, although not with much evidence, that earthquakes can be strengthened or even triggered by solar flares or other powerful activity on the Sun.
But despite the various attempts to show otherwise, there remain those dedicated to the theory.
Fortunately, a new study appears to have put that to rest.
Researchers Dr Jeffrey Love (US Geological Survey) and Dr Jeremy Thomas (Northwest Research Associates) looked at the history of earthquakes around the world and tried to find a correlation with instances of abnormally large solar flares, coronal mass ejections and other bouts of radiation.
"Across a range of earthquake magnitude thresholds, we find no consistent and statistically significant distributional differences. We also introduce time lags between the solar-terrestrial variables and the number of earthquakes, but again no statistically significant distributional difference is found."
According to the team, there was no direct correlation in the data.
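To get a feel for the kind of distributional comparison the researchers describe, here is a toy Python sketch (synthetic data and our own variable names, not the study's actual catalogs or code) that compares earthquake counts on high-solar-activity days against all other days:

```python
# Toy illustration of a distributional comparison like the one described above,
# using synthetic data rather than real earthquake or solar catalogs.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
days = 10_000
quakes = rng.poisson(lam=3.0, size=days)   # simulated daily earthquake counts
solar_active = rng.random(days) < 0.05     # flag ~5% of days as high solar activity

stat, p = ks_2samp(quakes[solar_active], quakes[~solar_active])
print(f"KS statistic = {stat:.3f}, p-value = {p:.3f}")
# A large p-value means no statistically significant distributional difference.
```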
So while solar flares can have an impact on Earth - including radio blackouts, problems with infrastructure and possibly even a marginal rise in the risk of cancer - earthquakes aren't in that list.
"It's natural for scientists to want to see relationships between things," said Love, according to Universe Today. "Of course, that doesn't mean that a relationship actually exists!" | <urn:uuid:7267388b-e37a-4c8f-8815-92504cc9868a> | CC-MAIN-2016-26 | http://www.huffingtonpost.co.uk/2013/04/12/solar-flares-dont-cause-earthquakes_n_3067827.html?ir=Technology | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.95912 | 280 | 3.625 | 4 |
Is the federal government preparing to impose strict new standards on the food industry and how it markets junk food to kids?
On Wednesday, BNET pointed to an interagency document (embedded below), from the FTC, FDA, CDC, and USDA proposing new nutritional standards for food marketed to children ages 2-17. Sugary fruit juices and fatty foods would be off limits, and could not be aimed at children. According to the new guidelines, foods marketed to kids must actually include food.
While the USDA did help to write the guidelines, it is the only agency that hasn't signed off on the proposal, reports BNET:
So what's the status of these standards? Nobody knows. They were presented at a meeting in December 2009 and were supposed to be finalized by February or March. Both the FTC and the FDA have reportedly signed off on them, but the USDA has not, leading some watchdog groups to speculate that the food industry has unleashed a lobbying effort aimed at its friends in the Agriculture Department. No one from the food industry was present at the meeting in December.
A section titled "Meaningful Contribution to a Healthful Diet" explains some of the new standards that the agencies came up with:
Foods marketed to children must provide a meaningful contribution to a healthful diet.
Food must contain at least 50% by weight of one or more of the following: fruit; vegetable; whole grain; fat-free or low-fat milk or yogurt; fish; extra lean meat or poultry; eggs; nuts and seeds; or beans.

Food must contain one or more of the following per RACC (reference amount customarily consumed):
- 0.5 cups fruit or fruit juice
- 0.6 cups vegetables or vegetable juice
- 0.75 oz. equivalent of 100% whole grain
- 0.75 cups milk or yogurt; 1 oz. natural cheese; 1.5 oz.
- 1.4 oz. meat equivalent of fish or extra lean meat or poultry
- 0.3 cups cooked dry beans
- 0.7 oz. nuts or seeds
- 1 egg or egg equivalent
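As a rough illustration of how criteria like these could be applied, the sketch below encodes the per-RACC minimums listed above into a simple Python checker (the data structure and function are our own illustration, not part of the actual proposal):

```python
# Per-RACC minimum contributions transcribed from the draft standards above
# (units vary by food group; values are as listed in the document).
MIN_PER_RACC = {
    "fruit_cups": 0.5,
    "vegetable_cups": 0.6,
    "whole_grain_oz": 0.75,
    "milk_yogurt_cups": 0.75,
    "meat_fish_poultry_oz": 1.4,
    "dry_beans_cups": 0.3,
    "nuts_seeds_oz": 0.7,
    "eggs": 1.0,
}

def meets_contribution_standard(contributions_per_racc):
    """True if the food supplies at least one food group at its minimum level."""
    return any(contributions_per_racc.get(group, 0.0) >= minimum
               for group, minimum in MIN_PER_RACC.items())

# Example: a snack providing 0.6 cups of fruit per RACC would qualify.
print(meets_contribution_standard({"fruit_cups": 0.6}))  # True
```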
Last month, the Federal Trade Commission publicly admonished cereal maker Kellogg's for the second in time in as many years for making unsupported health claims regarding children.
Last year the company reached a settlement with the FTC, which criticized its unfounded claims that Frosted Mini Wheats cereal was "clinically shown to improve kids' attentiveness by nearly 20%." Yet shortly after coming to an agreement on that previous marketing campaign, Kellogg launched another effort for Rice Krispies, claiming that the cereal could boost children's immunity, according to a statement released yesterday by the FTC.
Under the previous settlement, Kellogg was banned from making any claims about their products' benefits to cognitive health or function unless such claims could be backed up by scientific evidence. In light of the subsequent claims -- that Rice Krispies can boost children's immunity -- the original order has been expanded, and now more broadly bars the company from "making claims about any health benefit of any food unless the claims are backed by scientific evidence and not misleading."
READ THE NEW STANDARDS:
Inside the humidor, cigars should be stored at a relative humidity of approximately 68-74%. Generally it is assumed that the typical cigar flavors can best evolve in such a climate, and this level of humidity supports an even burning of the cigar. At a relative humidity of 70% and a temperature of 64°F (18°C), the air contains approximately 10 grams of water per cubic meter. In such an environment, the cigar should settle at an ideal moisture content of about 14% of its weight.
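That "10 grams per cubic meter" figure can be checked with a short calculation. Here is a minimal Python sketch using the standard Magnus approximation for saturation vapor pressure (the function name and rounding are ours):

```python
import math

def absolute_humidity(temp_c, rel_humidity_pct):
    """Grams of water per cubic meter of air, via the Magnus approximation."""
    # Saturation vapor pressure in hPa (Magnus formula).
    e_sat = 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))
    e = e_sat * rel_humidity_pct / 100.0
    # Ideal-gas conversion: vapor pressure in hPa -> g/m^3 at temperature T in K.
    return 216.7 * e / (temp_c + 273.15)

print(round(absolute_humidity(18.0, 70.0), 1))  # ~10.7 g/m^3, close to the 10 g cited
```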
Dry cigars become fragile and burn faster since their burning is not slowed down by the cigar’s natural humidity. The cigar takes on an aggressive and slightly bitter taste.
Damp cigars, on the contrary, burn unevenly and take on a heavy and acidic flavor.
A few aficionados appreciate these modifications to the cigar flavor and therefore intentionally store their cigars in a drier or damper environment. In the 65-75% range cigars can be stored for long periods of time without any concern. Caution is required, however, if the humidity level should exceed 80%. In this case the cigar can begin to rot and mold. | <urn:uuid:8369164a-a9a5-4904-8a83-ca73d3a108ba> | CC-MAIN-2016-26 | http://www.humidor-guide.com/humidifier/optimum-humidity-level | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.912157 | 234 | 2.515625 | 3 |
Prior to 1922, the development of three important German catalytic processes had shown the potential impact that catalysis could have on the process industry. One was the so-called contact process for producing sulfuric acid catalytically from the sulfur dioxide generated by smelting operations. Another was the catalytic method for synthetic production of the valuable dyestuff indigo. The third was the catalytic combination of nitrogen and hydrogen for the production of ammonia, the Haber-Bosch process for nitrogen fixation.
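For orientation, the overall stoichiometry of two of these landmark processes can be written out (standard textbook equations; the catalysts noted are the ones historically used):

```latex
% Contact process for sulfuric acid (platinum, later vanadium pentoxide, catalyst):
2\,\mathrm{SO_2} + \mathrm{O_2} \;\rightleftharpoons\; 2\,\mathrm{SO_3},
\qquad \mathrm{SO_3} + \mathrm{H_2O} \;\rightarrow\; \mathrm{H_2SO_4}

% Haber-Bosch ammonia synthesis (promoted iron catalyst):
\mathrm{N_2} + 3\,\mathrm{H_2} \;\rightleftharpoons\; 2\,\mathrm{NH_3}
```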
Present day value
The impact of catalysis is substantial, with over 90% of all industrial chemicals produced aided by catalysts. Annually speaking, process catalysts have become a $13 billion business worldwide. The value-added products dependent on process catalysts include petroleum-based products, chemicals, pharmaceuticals, synthetic rubber, plastics and many others. The annual value of catalyst-aided products is estimated at $500-600 billion.
The definition of a catalyst that was coined in the 19th century is still used today: a substance that alters the velocity of a chemical reaction without itself being consumed. Although that is theoretically true, in practice, catalysts decrease in activity with use and suffer losses in material handling, thus requiring periodic replacement. These factors, together with economic growth and discoveries of new applications contribute to the continued growth of the catalyst business.
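The kinetic picture behind this definition is worth a line of math: a catalyst provides a reaction pathway with a lower activation energy Ea, and because the rate constant depends exponentially on Ea through the Arrhenius equation (a standard result, shown here for orientation), even a modest reduction in Ea multiplies the reaction rate many times over at a given temperature, without shifting the reaction's equilibrium:

```latex
k = A\,e^{-E_a/RT}
```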
The other side of the picture is the drive to find more efficient, longer-lived, more active and more selective catalyst systems. Economic and practical considerations provide incentives to develop new catalysts, along with a greater understanding of catalysis systems in general. Development is further driven by the need for new sources of energy and chemicals, concern over environmental pollution, desire and demand for new products, and the cost and potential restrictions on the availability of the noble metals used in many catalysts.
CATALYSTS TAKE OFF
The rapid growth of catalysis began around the time of World War II (WWII) with the development of catalytic cracking of crude oil. The process enabled the breaking of large hydrocarbon molecules into smaller compounds needed to process transportation fuels and petrochemicals. An important process breakthrough was the Houdry process that coupled the endothermic cracking reaction with the exothermic reaction (heat is released) of catalyst regeneration in a cyclic, continuous operation. The wartime need for toluene feedstock for trinitrotoluene (TNT) production supported the development of catalytic reforming processes: the dehydrogenation, cyclization and isomerization of aliphatic hydrocarbons obtained from crude oil to form aromatic compounds. Owing chiefly to this process, toluene production increased tenfold from 1940 to 1944, to 1 billion liters.
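A representative dehydrocyclization reaction of the kind used to make toluene from aliphatic feedstock (a standard textbook example, not a specific wartime flowsheet) is:

```latex
\mathrm{n\text{-}C_7H_{16}} \;\longrightarrow\; \mathrm{C_6H_5CH_3} + 4\,\mathrm{H_2}
\qquad \text{(n-heptane to toluene plus hydrogen)}
```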
Significant developments since the 1940s have made catalytic processes even more important to the modern petroleum refining and petrochemical/chemical processing industries. These have included the emerging metallocene and other single-site catalysts (SSCs) for the polymerization of olefins, the Ziegler-Natta titanium (Ti) halide-aluminum alkyl catalysts, zeolite catalysts for petroleum refining and petrochemicals production, catalysts for the oxo reaction to convert olefins to aldehydes, and catalysts for the reaction of diisocyanates with polyols to produce polyurethanes.
Petroleum refining, for example, is the source for the largest share of industrial products. Upgrading crude oil technology consists almost entirely of catalytic processes. In 2009, catalysts for the refining market were a $3.2 billion business worldwide. The largest catalyst segment in terms of value is catalytic cracking, while the largest-volume products are alkylation catalysts. Other major refinery catalyst market sectors, in terms of value, include hydrotreating, reforming and hydrocracking.
Worldwide environmental regulations now mandate the production of cleaner fuels. Consequently, refiners are experiencing severe pressures from market forces that demand a change in product mix as well as in product quality. On the regulatory side, stringent product specifications limit sulfur content along with changes in gasoline and diesel composition. Major technological challenges to refining operations include achieving zero or heavily reduced sulfur content in all fuel for almost all countries around the world. The phase-out of methyl tertiary butyl ether (MTBE) in reformulated gasoline in the US and other nations has forced operating changes for reformer operations to achieve the required high octane number of gasoline-blending components. Environmental pressures have become the major driving forces in catalysis and process design, as modifications and/or new technologies are required to facilitate compliance with the regulations, while still allowing the hydrocarbon processing industry to economically provide hydrocarbon-based products without interruption and meet the increasing needs of the growing global population.
Fig. 1. New UOP catalytic cracking unit installed at the Rock Island Refinery. Petroleum Refiner.
Polymerization catalyst sales in 2009 were estimated at $4.3 billion. Major market segments include polyethylene (PE), polypropylene (PP), polyethylene terephthalate (PET), polyvinyl chloride (PVC) and polystyrene (PS). Polyolefin catalysts are the largest single market sector, with about a 50%-60% market share of the total polymerization market, equivalent to about $2.2-2.6 billion. Significant new technical developments that were introduced commercially since the 1990s are:
- SSCs for polymerization offer tremendous opportunities for polyolefins development
- Polymers with closely controlled molecular-weight distributions allow greater control over properties and facilitate new product applications.
Metallocenes, the initial class of SSCs developed, are very expensive. Less complicated ligands are used in metallocene catalysts for PE than for PP, facilitating PE catalyst development. Technical improvements have reduced the cost of metallocene-produced polymers to levels that are more competitive with those produced with conventional Ziegler-Natta polymerization catalysts. Polymers based on SSCs have unique properties and are creating new markets. Even in the existing market, some metallocene-based polymers can be competitive with conventional polymers, which has added a new dynamic to some applications.
Advanced Ziegler-Natta catalysts have been developed; these catalyst systems can produce polyolefins with properties similar to those produced by metallocenes. We expect that Ziegler-Natta catalysts will remain the dominating technology due to cost benefits.
We will summarize some of the major developments in catalysis that have occurred over the last 90 years. The advances, and their implications for the process industry, are too numerous to list in full; those below represent some of the most noteworthy.
Fig. 2. The arrival of a 96-ton cat-cracker reactor is part of an expansion at the Anglo-Iranian Oil Co.'s Grangemouth, Scotland, refinery. On the left is the topping unit; the catalytic cracker is under construction, as shown on the right of the photo. Petroleum Refiner, May 1952.
PETROLEUM REFINING CATALYSTS
The major processes involved in petroleum refining are distillation, catalytic hydrotreating, catalytic reforming, isomerization, catalytic cracking, catalytic hydrocracking, alkylation and thermal operations. Only distillation and thermal operations involve no catalysts. The utilization of each refining process depends on the quality of the crude oil and the demand for the various product streams and products. Many of the advances in refining process technology were possible due to catalyst developments. Much of this work was driven by the need to increase production of the refined products needed to support the war efforts in the mid-1900s. These developments provided the basis for many processes that are common processing practices in the present-day refining industry.
Fig. 3. View of the new catalytic cracking and thermal refining unit for West Germany's newest refinery, constructed by Esso A.G., a German affiliate of Standard Oil Co. (New Jersey). The $12 million expansion is located in Hamburg, Germany, and replaces the older refinery destroyed during WWII; it is the most modern refinery in Europe. Petroleum Refiner.
The first full-scale commercial catalytic cracker for the selective conversion of crude petroleum to gasoline went on stream at the Marcus Hook refinery in 1937. Pioneered by Eugene Jules Houdry (1892-1962), the catalytic cracking of petroleum revolutionized the industry. The Houdry process conserved crude oil by doubling the amount of gasoline produced by other processes. It also greatly improved the gasoline octane rating, making possible today's efficient, high-compression automobile engines. During WWII, the high-octane fuel shipped from Houdry plants played a critical role in the Allied victory.
The most dramatic benefit of the earliest Houdry units was in the production of 100-octane aviation gasoline, just before the outbreak of WWII. The Houdry plants provided a better gasoline for blending with scarce high-octane components, as well as byproducts that could be converted by other processes to make more high-octane fractions. The increased performance gave Allied planes some advantage over the Axis. In the first six months of 1940, at the time of the Battle of Britain, 1.1 million bbl/month of 100-octane aviation gasoline was shipped to the Allies. Houdry plants produced 90% of this catalytically cracked gasoline during the first two years of the war.
The original fixed-bed Houdry process units have been outmoded by engineering advances that transformed the fixed bed into more economical fluidized-bed systems, and introduced the use of crystalline aluminosilicate catalysts to provide higher gasoline yields. Yet, it is remarkable that, 70 years after Houdry's discovery, the same fundamental principles are still the primary platform for manufacturing gasoline worldwide.
Donald Campbell, Homer Martin, Eger Murphree and Charles Tyson were known for their development of a process still used today to produce more than half of the world's gasoline. These four horsemen were part of the Exxon Research Co. They began thinking of a design that would allow for a moving catalyst to ensure a steady and continuous cracking operation. The four ultimately invented a fluidized-solids reactor bed and a pipe-transfer system between the reactor and regenerator unit in which the catalyst is processed for re-use. The fluid catalytic cracking (FCC) process revolutionized the petroleum industry by more efficiently transforming heavier oil fractions into lighter, usable products. Catalysts for this process have evolved significantly over the past 30 years from the original amorphous silica/alumina products. Essentially all commercial gasoline refining processes now use zeolite catalysts, and FCC is the largest market for zeolites.
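The heart of the fluidized design is its heat balance: coke burned off the catalyst in the regenerator heats the catalyst, which then carries that heat back to drive the endothermic cracking in the riser. A back-of-the-envelope Python sketch (all numbers are illustrative round assumptions, not design data):

```python
# Illustrative FCC reactor/regenerator heat balance. All values are
# assumed round numbers for demonstration, not design data.
def catalyst_circulation(feed_kg_s, heat_demand_kj_kg, cp_cat_kj_kg_k, delta_t_k):
    """Catalyst circulation rate (kg/s) needed to supply the riser heat demand."""
    duty_kw = feed_kg_s * heat_demand_kj_kg        # heat for vaporization + cracking
    return duty_kw / (cp_cat_kj_kg_k * delta_t_k)  # heat carried per kg of catalyst

feed = 50.0     # kg/s of gas oil feed
demand = 600.0  # kJ per kg of feed (vaporization plus endothermic cracking)
cp_cat = 1.1    # kJ/(kg.K) for the solid catalyst
dt = 150.0      # K cooling of catalyst between regenerator and riser outlet

circ = catalyst_circulation(feed, demand, cp_cat, dt)
print(f"circulation ~ {circ:.0f} kg/s, cat-to-oil ratio ~ {circ / feed:.1f}")
```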
Reactions involving catalytic hydrogenation of organic substances were known prior to 1897. The property of finely divided nickel to catalyze the fixation of hydrogen on hydrocarbon (ethylene and benzene) double bonds was discovered by the French chemist Paul Sabatier who found that unsaturated hydrocarbons in the vapor phase could be converted into saturated hydrocarbons by using hydrogen and a catalytic metal. His work was the foundation of the modern catalytic hydrogenation process.
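Sabatier's basic vapor-phase transformations can be written as (standard stoichiometry; Ni denotes the finely divided nickel catalyst):

```latex
\mathrm{C_2H_4} + \mathrm{H_2} \;\xrightarrow{\mathrm{Ni}}\; \mathrm{C_2H_6}
\qquad\qquad
\mathrm{C_6H_6} + 3\,\mathrm{H_2} \;\xrightarrow{\mathrm{Ni}}\; \mathrm{C_6H_{12}}
```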
Soon after Sabatier's work, a German chemist, Wilhelm Normann, found that catalytic hydrogenation could be used to convert unsaturated fatty acids or glycerides in the liquid phase into saturated ones. He was awarded a patent in Germany in 1902 and in Britain in 1903, which was the beginning of what is now a worldwide industry.
In the mid-1950s, the first noble metal catalytic reforming process (UOP's Platformer process) was commercialized. At the same time, the catalytic hydrodesulfurization (HDS) of the naphtha feed to such reformers was also commercialized. In the decades that followed, various proprietary catalytic HDS processes were commercialized. Most refineries have one or more HDS units.
Hydrocracking was first developed in Germany as early as 1915 to provide coal-based liquid fuels from domestic coal deposits. The first plant that might be considered as a commercial hydrocracking unit began operation in Leuna, Germany, in 1927. Similar efforts to convert coal to liquid fuels took place in Great Britain, France and other countries. Between 1925 and 1930, Standard Oil of New Jersey collaborated with I.G. Farbenindustrie of Germany to develop a hydrocracking technology capable of converting heavy petroleum oils into fuels. Such processes required pressures of 200 bar-300 bar and temperatures of over 375°C, and they were very expensive.
In 1939, Imperial Chemical Industries (ICI) of Great Britain developed a two-stage hydrocracking process. During WWII, this two-stage hydrocracking process helped refiners in Germany, Great Britain and the US to supply the needed volumes of aviation gasoline. After WWII, hydrocracking technology became less important, as increased availability of petroleum crude oil from the Middle East removed the motivation and the economics to convert coal into liquid fuels. Newly developed FCC processes were more economical than hydrocracking to convert high-boiling petroleum oils to fuels.
In the early 1960s, hydrocracking became economical in part due to the introduction of zeolite-based catalysts, during the period from about 1964 to 1966. Zeolite-based catalysts performed much better than the earlier catalysts, and these catalysts permitted operation at lower pressures. The combination of higher performance and lower operating pressures significantly reduced the cost of building and operating hydrocrackers.
The alkylation process started with an observation that puzzled Herman Pines in 1930 when he was working in the lab of Universal Oil Products (UOP). While vigorously shaking petroleum fractions with concentrated sulfuric acid in a calibrated glass cylinder to determine how much of the oil dissolved in the aqueous acid phase, Pines observed that, after a few hours, the phase boundary between oil and acid had shifted again. Apparently, paraffins had formed from the olefins. Pines concluded that this process required the simultaneous formation of a highly unsaturated coproduct, which remained dissolved in the aqueous phase in a process called conjunct polymerization.
Alkylation was commercialized in 1938, and experienced tremendous growth during the 1940s stemming from demand for high-octane aviation fuel during WWII. After the war, refiners' interests shifted from producing aviation fuels to using alkylate as a blending component in gasoline motor fuels. Alkylation capacity remained relatively flat through the 1960s due to the lower cost of other blending components. When the US Environmental Protection Agency's lead phase-down program began in the 1970s and completed in the 1980s, alkylate demand sharply increased. Alkylate was sought as a blending component to compensate for lead removal from gasoline. As additional environmental regulations were imposed worldwide, the importance of alkylate as a blending component for motor fuel increased.
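The underlying chemistry combines isobutane with light olefins over a strong liquid acid; the textbook example is:

```latex
\mathrm{i\text{-}C_4H_{10}} + \mathrm{C_4H_8}
\;\xrightarrow{\mathrm{H_2SO_4}\ \text{or}\ \mathrm{HF}}\;
\mathrm{C_8H_{18}} \quad \text{(e.g., 2,2,4-trimethylpentane, ``isooctane'')}
```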
In the 1940s, Vladimir Haensel, while working for UOP, developed a platinum-based catalytic reforming process, known as the UOP Platforming process, for producing high-octane gasoline from low-octane naphthas. Haensel's process was commercialized by UOP in 1949, when the first Platforming unit was built by the Old Dutch Refining Co. in Muskegon, Michigan.
Dr. Sinfelt, at Standard Oil Co., was researching alternate petroleum conversion chemistries and developed the application of novel, highly active and selective bimetallic-cluster catalyst systems to produce high-octane motor gasoline without lead additives. Earlier work on metal alloys had demonstrated the relation between the catalytic performance of a metal and its electron band structure. However, the possibility of using this to catalytically influence the selectivity of chemical transformations (product selectivities) had not been considered. Dr. Sinfelt, through in-depth studies on bimetallic catalysts, discovered how to influence chemical reaction selectivity. He discovered that it is possible to catalyze one type of chemical reaction in preference to others that are thermodynamically favorable, and he showed that bimetallic catalysts could be used to effectively reduce undesirable competing reactions. This made possible the economic conversion of low-octane-number molecules to high-octane-number molecules.
Many versions of this process have been developed by the major oil companies and other organizations. In 1971, UOP commercialized a fully regenerative reforming process called continuous catalyst regeneration (CCR). The Institut Français du Pétrole (IFP) also offers a CCR process. This process stacks the reactors so that the catalyst may be withdrawn from the bottom reactor, regenerated and fed back to the top reactor without interrupting operations. The process uses lower operating pressures, thereby increasing the yield of hydrogen and aromatics and improving the octane rating.
German Karl Ziegler, for his discovery of the first titanium-based catalysts, and Italian Giulio Natta, for using them to prepare stereo-regular polymers from propylene, were awarded the Nobel Prize in Chemistry in 1963. Ziegler discovered the basic catalyst systems for polymerizing ethylene to linear high polymers. Ziegler's research had started with propylene but was unsuccessful, and he then shifted his focus to ethylene. Natta was a professor at the Institute of Industrial Chemistry at Milan Polytechnic and a consultant for Montecatini. Natta learned of Ziegler's success with ethylene polymerization and pursued propylene polymerization, determining the crystal structure of isotactic polypropylene in 1954; it was this body of work for which Ziegler and Natta shared the Nobel Prize.
In the early 1950s, workers at Phillips Petroleum discovered that chromium (Cr) catalysts are highly effective for the low-temperature polymerization of ethylene. A few years later, Ziegler discovered that a combination of TiCl4 and Al(C2H5)2Cl gave comparable activities for PE production. Natta used crystalline α-TiCl3 in combination with Al(C2H5)3 to first produce isotactic PP; the reduced atacticity was key to PP market development. Usually, Ziegler catalysts refer to Ti-based systems for conversions of ethylene, and Ziegler-Natta catalysts refer to systems for conversions of propylene. In the 1970s, magnesium chloride (MgCl2) was discovered to greatly enhance the activity of the Ti-based catalysts. These catalysts were so active that the small amount of residual Ti no longer had to be removed from the product. They enabled commercialization of linear low-density PE (LLDPE) resins and allowed the development of noncrystalline copolymers.
Fig. 4. Catalytic cracking unit No. 3, the largest cat cracker in Amoco Oil's Texas City refinery, will be among the units modified to process high-sulfur crude oils. Hydrocarbon Processing.
Ziegler-Natta catalysts have been used in the commercial manufacture of various polyolefins since 1956. In 2010, the total volume of plastics, elastomers and rubbers produced from alkenes with these and related catalysts worldwide exceeded 100 million metric tons. Together, these polymers represent the largest-volume commodity plastics, as well as the largest-volume commodity petrochemicals in the world.
One of the most exciting developments in chemical-process catalysts is the new class of single-site catalysts (SSCs): metallocene and nonmetallocene. Polymers based on SSCs have unique properties and are creating new markets. Even in the current market, some metallocene-based polymers, especially LLDPE, are replacing conventional polymers.
Metallocene catalysts are just as old as the Ziegler-Natta systems, but the first systems using them were found to have low activity. It wasn't until 1980, when metallocene catalysts were put together with a methyl aluminoxane cocatalyst, that their full potential was realized. Their big advantage over the Ziegler-Natta systems is that they catalyze the reaction of olefins through only one reactive site. Due to this single-site reaction, the polymerization proceeds in a far more controllable fashion, leading to polymers with narrow molecular weight ranges and, more importantly, predictable and desirable properties. Also, it has been found that changing the ligands (functional groups attached to the metal) within the metallocene molecule can controllably affect the properties of the polymer. This is very attractive to petrochemical companies trying to keep up with the demand for engineered plastics.
Fig. 5. Qatofin's Ras Laffan, Qatar, complex is one of the latest world-scale olefins and polyethylene manufacturing sites. Hydrocarbon Processing, April 2011.
Research and development
Following the lead of the pharmaceutical industry, oil, petrochemical and catalyst companies are turning to high-throughput screening (HTS), including combinatorial chemistry, to compress catalyst development to as little as two years and, therefore, shorten the time-to-market of new products. For example, UOP is developing HTS expertise to create new catalysts and adsorbents, which it considers the basis of its competitive advantage. Other companies developing their own HTS capabilities include BASF and Johnson Matthey. Research using these methods, as well as banks of microreactors, continues at the R&D centers of the major energy and chemical companies.
In the last two decades, catalyst development has been transitioning from an art form into a science based on advances in physical and chemical instrumentation plus computer-based modeling tools. New initiatives in catalytic processes are focused on reducing cycle time for catalyst discovery and process development from five to ten years down to three to five years. New approaches are designed to integrate and validate catalyst design methodologies along with HTS techniques and process modeling.
Combinatorial chemistry is speeding up innovation and accelerating availability of improved catalytic materials for the chemical industry. HTE refers to high-throughput experimentation. The term combinatorial catalysis is really a misnomer because, although this concept may be used to visualize libraries of catalysts to be tested, it is actually the HTE techniques that are the key to decreasing catalyst development time. The application of HTE to catalyst research requires developing new methods for catalyst preparation, reactors and instrumentation, along with new methods for rapid analysis and information systems capable of handling the large quantities of generated data.
The rapidly growing field of biotechnology brings with it opportunities in the field of enzyme-catalyzed reactions. The role of genetically engineered microorganisms in synthesizing rare and valuable peptides used in human therapeutics is now well established. The same techniques of molecular biology can also be used to enhance the properties of enzymes as catalysts for industrial processes.
This approach can potentially revolutionize the applications of biological systems in catalysis. Enzymes and other biological systems work well in dilute aqueous solutions at moderate temperature, pressure and pH. The reactions catalyzed by these systems are typically environmentally friendly in that few byproducts or waste products are generated. The reactions are typically selective with extremely high yields. Enzymes can be used to catalyze a whole sequence of reactions in a single reactor, resulting in vastly improved overall yields with high positional specificity and 100% chiral synthesis in most cases. The improved use of enzyme-based catalyst technology with whole-cell catalysis, reactions catalyzed by single enzymes, and mixed enzymatic and chemical syntheses are all important for fostering new catalyst technology.
Whole cells of various microorganisms are being used more frequently in the catalytic synthesis of complex molecules from simple starting materials. The use of whole microbial cells as biosynthetic catalysts takes advantage of one of the unique properties of enzymes: They were designed by nature to function together in complex synthetic or degradative pathways. Because of this property, whole cells and microorganisms can be used as catalytic entities that carry out multiple reactions for the complete synthesis of complex chiral molecules. A number of specialty chemicals with complex synthetic schemes can be produced most efficiently by intact microorganisms utilizing a series of enzyme-catalyzed reactions designed by nature to work together.
The biotechnology field has a growing number of examples of reactions of industrial significance catalyzed by isolated enzymes. The enzymatic conversion of acrylonitrile to acrylamide was recently commercialized in Japan. Japanese companies and researchers have been very diligent in developing enzymatic processes for the synthesis of fine chemicals. The stereospecificity of enzyme-catalyzed reactions has been used to advantage in polymer synthesis, as well. Workers at ICI have developed a combined enzymatic and chemical process for the synthesis of polyphenylene from benzene. These are only a few of the developments that demonstrate the potential for the process industry to utilize breakthroughs in other areas to improve the range of products that can be produced economically. While catalysis has made great advances over the last 90 years, the application of new technologies developed in other areas offers great promise for future breakthroughs. HP
Russell Heinen is the director of technology services for IHS Chemical and manages the Process Economics Program (PEP) and the Carbon Footprint Initiative. He has 30 years of experience in energy and chemical consulting. He joined IHS in 2010 when SRI Consulting (SRIC), which he had been with for more than 13 years, was acquired by IHS. Based in The Woodlands, Texas, his specific expertise covers natural gas, refining, and chemicals market analysis and technology evaluations. This experience has recently been focused on helping clients to identify new opportunities in the downstream chemical markets and assisting companies with technology evaluations and selections. In addition to these studies, he is also responsible for the Carbon Footprint Initiative, which helps companies understand and manage their strategy related to carbon emissions. Mr. Heinen holds a BS degree in engineering from Rice University in Houston, and received an MBA from the Jesse H. Jones Graduate School of Administration at Rice University in 1982. He is a registered engineer in the state of Texas, and is a member of the American Institute of Chemical Engineers.
Census of Population and Housing, 1970 [United States]: Extract Data (ICPSR 9694)
Principal Investigator(s): Adams, Terry K.
This extraction of data from 1970 decennial Census files (CENSUS OF POPULATION AND HOUSING, 1970 [UNITED STATES]: SUMMARY TAPE FILES 4A, 4B, 4C [ICPSR 9014, 8127, 8107] and STF 5A, 5B, and 5C) was designed to provide a set of contextual variables to be matched to any survey dataset that has been coded for the geographic location of respondents, such as the PANEL STUDY OF INCOME DYNAMICS, 1968-1988 (ICPSR 7439). This geographic area data can also be analyzed independently with neighborhoods, labor market areas, etc., as the units of analysis. Over 120 variables were selected from the original Census sources, and more than 100 variables were derived from those component variables. The variables characterize geographic areas in terms of population counts, ethnicity, family structure, income and poverty, education, residential mobility, labor force activity, and housing. The geographic areas range from neighborhoods, through intermediate levels of geography, through large economic areas, and beyond to large regions. These variables were selected from the Census data for their relevance to problems associated with poverty and income determination, and 80 percent were present in comparable form in both the 1970 and 1980 Census datasets.
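The matching workflow described above, attaching these area-level contextual variables to a survey file keyed on respondent geography, amounts to a simple table join. Below is a minimal sketch in Python/pandas; the file names and the columns `county_fips`, `poverty_rate`, and `median_income` are hypothetical stand-ins, since the extract uses its own variable names and geography codes.

```python
import pandas as pd

# Hypothetical file and column names, for illustration only; the actual
# extract uses its own variable names and geography codes.
census = pd.read_csv("census1970_extract.csv")    # one row per geographic area
survey = pd.read_csv("survey_respondents.csv")    # one row per respondent

# Left-join area-level context onto each respondent via the shared
# geography code, keeping every respondent.
merged = survey.merge(
    census[["county_fips", "poverty_rate", "median_income"]],
    on="county_fips",
    how="left",
)

# Respondents whose geography code has no census match get missing values.
print(merged["poverty_rate"].isna().sum(), "respondents lacked a match")
```

A left join keeps every respondent, so unmatched geography codes surface as missing context values rather than silently dropped cases.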
Adams, Terry K. Census of Population and Housing, 1970 [United States]: Extract Data. ICPSR09694-v3. Ann Arbor, MI: Inter-university Consortium for Political and Social Research [distributor], 2007-12-21. http://doi.org/10.3886/ICPSR09694.v3
Persistent URL: http://doi.org/10.3886/ICPSR09694.v3
This study was funded by:
- United States Department of Health and Human Services. Office of the Assistant Secretary for Planning and Evaluation (88ASPE203A)
- Ford Foundation (890-0047, 905-0790)
- Rockefeller Foundation (GA-EO-8743)
- Social Science Research Council
Scope of Study
Subject Terms: census data, citizenship, counties, demographic characteristics, education, employment, ethnicity, families, household composition, housing, housing units, housing conditions, income, labor markets, Metropolitan Statistical Areas, neighborhoods, occupational status, occupations, population, population characteristics, poverty
Geographic Coverage: United States
Date of Collection:
Universe: Population of the United States (which includes all 50 states and Washington, DC).
Data Types: aggregate data
United States Bureau of the Census, CENSUS OF POPULATION AND HOUSING, 1970 [UNITED STATES]: SUMMARY TAPE FILES 4A, 4B, 4C (ICPSR 9014, 8127, and 8107) and STF 5A, 5B, and 5C
Original ICPSR Release: 1992-03-10
- 2007-12-21 SAS and SPSS setup files were added and flagged as study-level files, so that they will accompany all downloads.
- 2006-01-12 All files were removed from dataset 15 and flagged as study-level files, so that they will accompany all downloads.
Curie (kürēˈ), family of French scientists. Pierre Curie, 1859–1906, scientist, and his wife, Marie Sklodowska Curie, 1867–1934, chemist and physicist, b. Warsaw, are known for their work on radioactivity and on radium. The Curies' daughter Irène (see under Joliot-Curie, family) was also a scientist.
Pierre Curie's early work dealt with crystallography and with the effects of temperature on magnetism; he discovered (1883) and, with his brother Jacques Curie, investigated piezoelectricity (a form of electric polarity) in crystals. Marie Sklodowska's interest in science was stimulated by her father, a professor of physics in Warsaw. In 1891 she went to Paris to continue her studies at the Sorbonne. In 1895 she married Pierre Curie and engaged in independent research in his laboratory at the municipal school of physics and chemistry where Pierre was director of laboratories (from 1882) and professor (from 1895).
Following A. H. Becquerel's discovery of radioactivity, Mme Curie began to investigate uranium, a radioactive element found in pitchblende. In 1898 she reported a probable new element in pitchblende, and Pierre Curie joined in her research. They discovered (1898) both polonium and radium, laboriously isolated one gram of radium salts from about eight tons of pitchblende, and determined the atomic weights and properties of radium and polonium. The Curies refused to patent their processes or otherwise to profit from the commercial exploitation of radium. For their work on radioactivity they shared with Becquerel the 1903 Nobel Prize in Physics.
The Sorbonne created (1904) a special chair of physics for Pierre Curie; Marie Curie was appointed his successor after his death in a street accident. She also retained her professorship (assumed in 1900) at the normal school at Sèvres and continued her research. In 1910 she isolated (with André Debierne) metallic radium. As the recipient of the 1911 Nobel Prize in Chemistry, she was the first person to be awarded a second Nobel Prize. She was made director of the laboratory of radioactivity at the Curie Institute of Radium, established jointly by the Univ. of Paris and the Pasteur Institute, for research on radioactivity and for radium therapy.
During World War I, Mme Curie devoted her energies to providing radiological services for hospitals. In 1921 a gram of radium, a gift from American women, was presented to her by President Harding; this she accepted in behalf of the Curie Institute. A second gram, presented in 1929, was given by Mme Curie to the newly founded Curie Institute in Warsaw. Five years later she died from the effects of radioactivity. In 1995 Marie and Pierre Curie's ashes were enshrined in the Panthéon, Paris; she was the first woman to be honored so in her own right.
Among the numerous and valuable writings of the Curies are Marie Curie's doctoral dissertation, Radioactive Substances (2 vol., 1902; tr. 1961); Traité de radioactivité (1910); Radioactivité (1935); and her biography of Pierre Curie (1923, tr. 1923). Pierre Curie's collected works appeared in 1908. A biography of Marie Curie was written by her daughter Ève Curie (tr. 1937). See also biographies by R. W. Reid (1974), F. Giroud (tr. 1986), S. Quinn (1995), and B. Goldsmith (2004); S. Emling, Marie Curie and Her Daughters (2012).
The satellite images displayed are infrared (IR) images. Warmest (lowest) clouds are shown in white; coldest (highest) clouds are displayed in shades of yellow, red, and purple. Imagery is obtained from the GOES and METEOSAT geostationary satellites, and the two US Polar Orbiter (POES) satellites. POES satellites orbit the earth 14 times each day at an altitude of approximately 520 miles (870 km). As each orbit is made the satellite can view a 1,600 mile (2,700 km) wide area of the earth. Due to the rotation of the earth the satellite is able to view every spot on earth twice each day. Data from multiple orbits are mosaicked together to provide wide scale global and full earth views in a single image. Occasional dark triangular areas that occur on POES images are a result of gaps in data transmitted from the orbiters.
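The color ramp described above is, at heart, a lookup from cloud-top brightness temperature to a display color. Here is a minimal sketch of that idea in Python; the `ir_enhancement` function and its temperature thresholds are invented for illustration, since actual enhancement curves vary by satellite and product.

```python
def ir_enhancement(brightness_temp_k: float) -> str:
    """Map an IR brightness temperature (kelvin) to a display color class.

    Colder cloud tops sit higher in the atmosphere. The thresholds below
    are invented for illustration; real enhancement curves differ by
    satellite and product.
    """
    if brightness_temp_k > 260:       # warm, low cloud tops
        return "white"
    elif brightness_temp_k > 230:     # mid-level tops
        return "yellow"
    elif brightness_temp_k > 210:     # cold tops, deep convection
        return "red"
    else:                             # coldest, highest tops
        return "purple"

print(ir_enhancement(245))  # -> yellow
```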
A weather satellite is a type of satellite that is primarily used to monitor the weather of the Earth. These meteorological satellites, however, see more than clouds and cloud systems. City lights, fires, effects of pollution, auroras, sand and dust storms, snow cover, ice mapping, boundaries of ocean currents, energy flows, etc., are other types of environmental information collected using weather satellites.

Weather satellite images helped in monitoring the volcanic ash cloud from Mount St. Helens and activity from other volcanoes such as Mount Etna. Smoke from fires in the western United States, such as in Colorado and Utah, has also been monitored.

Other environmental satellites can detect changes in the Earth's vegetation, sea color, and ice fields. For example, the 2002 oil spill off the northwest coast of Spain was watched carefully by the European ENVISAT, which, though not a weather satellite, flies an instrument (ASAR) that can see changes in the sea surface.

El Niño and its effects on weather are monitored daily from satellite images. The Antarctic ozone hole is mapped from weather satellite data. Collectively, weather satellites flown by the U.S., Europe, India, China, Russia, and Japan provide nearly continuous observations for a global weather watch.
The Current Surface Analysis map shows current weather conditions, including frontal and high/low pressure positions, satellite infrared (IR) cloud cover, and areas of precipitation. A surface weather analysis is a special type of weather map that provides a view of weather elements over a geographical area at a specified time based on information from ground-based weather stations. Weather maps are created by plotting or tracing the values of relevant quantities such as sea level pressure, temperature, and cloud cover onto a geographical map to help find synoptic-scale features such as weather fronts.

The first weather maps in the 19th century were drawn well after the fact to help devise a theory on storm systems. After the advent of the telegraph, simultaneous surface weather observations became possible for the first time, and beginning in the late 1840s, the Smithsonian Institution became the first organization to draw real-time surface analyses. Use of surface analyses began first in the United States, spreading worldwide during the 1870s. Use of the Norwegian cyclone model for frontal analysis began in the late 1910s across Europe, with its use finally spreading to the United States during World War II.

Surface weather analyses have special symbols which show frontal systems, cloud cover, or other important information. For example, an H may represent high pressure, implying good and fair weather. An L, on the other hand, may represent low pressure, which frequently accompanies precipitation. Various symbols are used not just for frontal zones and other surface boundaries on weather maps, but also to depict the present weather at various locations on the weather map. Areas of precipitation help determine the frontal type and location.
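To make the station-plotting idea concrete, here is a toy sketch in Python that packs several observed quantities into one compact marker per station. Real charts use the graphical station model; the `station_plot` function, its thresholds, and its output format are invented purely for illustration.

```python
def station_plot(temp_f: float, pressure_mb: float, sky_octas: int) -> str:
    """Render one ground station's observations as a compact text 'plot'.

    A text stand-in for the graphical station model: it encodes several
    quantities at one map point. Thresholds and format are invented.
    """
    sky = "clear" if sky_octas <= 2 else "partly" if sky_octas <= 5 else "overcast"
    tag = "H" if pressure_mb >= 1020 else "L" if pressure_mb <= 1000 else " "
    return f"{tag} {temp_f:5.1f}F {pressure_mb:6.1f}mb {sky}"

print(station_plot(71.2, 1022.4, 1))  # -> "H  71.2F 1022.4mb clear"
```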
Commodities markets, both historically and in modern times, have had tremendous economic impact on nations and people. The impact of commodity markets throughout history is still not fully known, but it has been suggested that rice futures may have been traded in China as long ago as 6,000 years. Shortages on critical commodities have sparked wars throughout history (such as in World War II, when Japan ventured into foreign lands to secure oil and rubber), while oversupply can have a devastating impact on a region by devaluing the prices of core commodities.
Energy commodities such as crude are closely watched by countries, corporations and consumers alike. The average Western consumer can become significantly impacted by high crude prices. Alternatively, oil-producing countries in the Middle East (that are largely dependent on petrodollars as their source of income) can become adversely affected by low crude prices. Unusual disruptions caused by weather or natural disasters can not only be an impetus for price volatility, but can also cause regional food shortages. Read on to find out about the role that various commodities play in the global economy and how investors can turn economic events into opportunities.
The four categories of trading commodities include:
- Energy (including crude oil, heating oil, natural gas and gasoline)
- Metals (including gold, silver, platinum and copper)
- Livestock and Meat (including lean hogs, pork bellies, live cattle and feeder cattle)
- Agricultural (including corn, soybeans, wheat, rice, cocoa, coffee, cotton and sugar)
Ancient civilizations traded a wide array of commodities, including livestock, seashells, spices and gold. Although the quality of product, date of delivery and transportation methods were often unreliable, commodity trading was an essential business. The might of empires can be viewed as somewhat proportionate to their ability to create and manage complex trading systems and facilitate commodity trades, as these served as the wheels of commerce, economic development and taxation for the kingdom's treasuries. Reputation and reliability were critical underpinnings to secure the trust of ancient investors, traders and suppliers.
Commodity trading in the exchanges can require agreed-upon standards so that trades can be executed (without visual inspection). You don't want to buy 100 units of cattle only to find out that the cattle are sick, or discover that the sugar purchased is of inferior or unacceptable quality.
There are other ways in which trading and investing in commodities can be very different from investing in traditional securities such as stocks and bonds. Global economic development, technological advances and market demands for commodities influence the prices of staples such as oil, aluminum, copper, sugar and corn. For instance, the emergence of China and India as significant economic players has contributed to the declining availability of industrial metals, such as steel, for the rest of the world.
Commodities markets typically follow basic economic principles: lower supply equals higher prices. For instance, investors can follow livestock patterns and statistics. Major disruptions in supply, such as widespread health scares and diseases, can lead to investing plays, given that the long-term demand for livestock is generally stable and predictable.
The Gold Standard
There is some call for caution, as investing directly in specific commodities can be a risky proposition, if not downright speculative without the requisite diligence and rationale involved. Some plays are more popular and sensible in nature. Volatile or bearish markets typically find scared investors scrambling to transfer money to precious metals such as gold, which has historically been viewed as a reliable, dependable metal with conveyable value. Investors losing money in the stock market can create nice returns by trading precious metals. Precious metals can also be used as a hedge against high inflation or periods of currency devaluation.
Energizing the Market
Energy plays are also common for commodities. Global economic developments and reduced oil outputs from wells around the world can lead to upward surges in oil prices, as investors weigh and assess limited oil supplies with ever-increasing energy demands. However, optimistic outlooks regarding the price of oil should be tempered with certain considerations. Economic downturns, production changes by the Organization of the Petroleum Exporting Countries (OPEC) and emerging technological advances (such as wind, solar and biofuel) that aim to supplant (or complement) crude oil as an energy purveyor should also be considered.
Commodities can quickly become risky investment propositions because they can be affected by eventualities that are difficult, if not impossible, to predict. These include unusual weather patterns, natural disasters, epidemics and man-made disasters. For example, grains have a very active trading market and can be volatile during summer months or periods of weather transitions. Therefore, it may be a good idea to not allocate more than 10% of a portfolio to commodities (unless genuine insights indicate specific trends or events).
With commodities playing a major and critical role in the global economic markets and affecting the lives of most people on the planet, there are multitudes of commodity and futures exchanges around the world. Each exchange carries a few commodities or specializes in a single commodity. For instance, the U.S. Futures Exchange is an important exchange that only carries energy commodities.
The most popular exchanges include the CME Group, which resulted after the Chicago Mercantile Exchange and Chicago Board of Trade merged in 2006, Intercontinental Exchange, Kansas City Board of Trade and the London Metal Exchange.
Futures and Hedging
Futures, forward contracts and hedging are prevalent practices with commodities. The airline sector is an example of a large industry that must secure massive amounts of fuel at stable prices for planning purposes. Because of this need, airline companies engage in hedging and purchase fuel at fixed rates (for a period of time) to avoid the market volatility of crude and gasoline, which would make their financial statements more volatile and riskier for investors. Farming cooperatives also utilize this mechanism. Without futures and hedging, volatility in commodities could cause bankruptcies for businesses that require predictability in managing their expenses. Thus, commodity exchanges are used by manufacturers and service providers as part of their budgeting process – and the ability to normalize expenses through the use of forward contracts reduces a lot of cash flow-related headaches.
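As a back-of-the-envelope illustration of that budgeting effect, consider this sketch in Python with invented numbers (the volume and prices are hypothetical, not market data). Whatever the spot price turns out to be, the hedged bill equals the locked-in futures price times the volume.

```python
# Hypothetical numbers, for illustration only.
fuel_needed_gal = 1_000_000           # fuel an airline must buy next quarter
futures_price = 2.10                  # $/gal locked in today with futures
spot_scenarios = [1.80, 2.10, 2.60]   # possible spot prices at delivery

for spot in spot_scenarios:
    unhedged_cost = fuel_needed_gal * spot
    futures_pnl = (spot - futures_price) * fuel_needed_gal  # long-futures gain/loss
    hedged_cost = unhedged_cost - futures_pnl               # always futures_price * volume
    print(f"spot ${spot:.2f}/gal: unhedged ${unhedged_cost:,.0f}, "
          f"hedged ${hedged_cost:,.0f}")
```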
The Bottom Line
Investing in commodities can quickly degenerate into gambling or speculation when a trader makes uninformed decisions. However, by using commodity futures or hedging, investors and business planners can secure insurance against volatile prices. Population growth, combined with limited agricultural supply, can provide opportunities to ride agricultural price increases. Demands for industrial metals can also lead to opportunities to make money by betting on future price increases. When markets are unusually volatile or bearish, commodities can also increase in price and become a (temporary) place to park cash. | <urn:uuid:1cdc3f69-011e-4703-ac39-8c79b046e460> | CC-MAIN-2016-26 | http://www.investopedia.com/articles/optioninvestor/09/commodity-trading.asp | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.955357 | 1,372 | 3.4375 | 3 |
I began writing a eulogy for Berta Isabel Cáceres Flores years ago, though she died only this month.
Berta was assassinated by Honduran government-backed death squads on March 3. Like many who knew and worked with her, I was aware that this fighter for indigenous people wasn’t destined to die of old age. She spoke too much truth to power — for indigenous rights, for women’s and LGBTQ rights, for authentic democracy, for the well-being of the earth, and for an end to tyranny by transnational capital and U.S. empire.
Berta cut her teeth on revolution. She was strongly impacted by the broadcasts from Cuba and Sandinista-led Nicaragua that her family listened to clandestinely, gathered around a radio — with the volume turned very low, since those stations were outlawed in Honduras. Always a committed leftist, Berta’s mother raised her many children to believe in justice. Doña Bertha — the mother made her youngest child her namesake — was mayor of her town and governor of her state back when women were neither, in addition to being a midwife. She was Berta’s life-long inspiration. As a young adult, like so many others from the region who shared her convictions, Berta went on to support the Salvadoran revolution.
In 1993, Berta — a Lenca Native — cofounded the Civic Council of Popular and Indigenous Organizations of Honduras, or COPINH. At that time in Honduras, there was little pride and even less power in being indigenous. Berta created COPINH to build the political strength of Lencas, campesinos, and other grassroots sectors to transform one of the most corrupt, anti-democratic, and unequal societies in the hemisphere.
A Political Force
Berta loved to say, “They fear us because we’re fearless.”
The fearlessness paid off over the years. COPINH has successfully reclaimed ancestral lands, winning unheard-of communal land titles. They’ve stalled or stopped dams, logging operations, and mining exploration — and not to mention free-trade agreements. They’ve prevented many precious and sacred places from being plundered and destroyed.
In addition to Berta’s remarkable leadership, COPINH’s victories have come through their size, strength, unity, and fierce commitment. Communities have participated in hundreds of protests, from their local mayors’ office to the steps of the national congress. They’ve occupied public spaces, including several of the six U.S. military bases in their country, and refused to leave. They’ve shut down the road to Tegucigalpa, strategically blocking goods from moving to the city. They’ve declared a boycott of all international financial institutions on their lands. They’ve helped coordinate 150 local referendums to raise the stakes on democracy.
Here’s one of many tales that Berta told of strategies and actions.
The backstory is that Honduran farmers — which most COPINH members are — wear thick work boots made of unventilated rubber. From containing sweaty feet, the boots come to smell so horrendous that their owners often refer to them as bombas, or bombs.
Early in COPINH’s history, a team went from La Esperanza to Tegucigalpa to negotiate with the government on a land titling law. The discussions went on for days. Berta recalled that, during lunchtime, the government received lavish, catered meals; the COPINH members had no money, and so their side of the table stayed empty. Far less connected in those days, the group had nowhere to sleep or shower, and spent the nights in the streets.
At one point, the negotiations were tense and the members of COPINH’s team were shaky on their strategy. They asked for a recess, but the government refused. So someone on the COPINH side gave a discrete signal, and altogether the farmer-activists pulled off their bombas. The smell was so toxic that the government officials fled the room. COPINH was able to regroup and develop a stunning strategy. The indigenous radicals won the law.
The most recent campaign — which yielded a partial victory — was likely the proximate cause of Berta’s death: stopping the Agua Zarca dam project on a sacred Lenca river.
The COPINH community of Rio Blanco — everyone: men, women, elders, toddlers, nursing mothers — formed a human barricade and blocked construction of the dam. Meanwhile, Berta, other members of COPINH, and national and international friends pressured the World Bank and the largest dam company in the world, Chinese state-owned Sinohydro, to pull out. Rio Blanco didn’t just blockade the construction for an hour or for a day, or even for a week. They did it for more than a year. They did it until they won. They got the most powerful financial interests in the world to abandon the project.
Tragically, because other financial interests are always waiting in the wings to plunder for profit, the dam is still under construction. Forty-eight more are either planned or underway on their lands.
Berta's belief in participatory democracy extended profoundly to her everyday practice. As the unparalleled leader of COPINH, and with a large gap between her own level of education and political experience and that of all but a few in the group, it would have been easy for her to act on her own. Yet she always made herself accountable to the communities she worked for.
I saw the degree of this commitment in action one night when Berta called in to Utopia, COPINH’s rural community meeting center, and asked to speak to everyone. Fifteen or so people quickly gathered around the cell phone on the shaky wooden table next to the only light, that of a candle. Berta explained a fairly pro forma request that had come to her from a government office, and proposed a response. When she was finished, she asked the almost exclusively illiterate group, “¿Cheque sí, o cheque no?” All raised their thumbs toward the little cell phone and called out, “Sí!” No joint decision had been required, and yet she’d sought consensus.
The Woman Behind the Myth
Berta was unflappable. She was calm in the face of chaos and strategic in the face of disaster. She got right in the face of soldiers and goons when they showed aggression toward her or others, and told them what was what.
Berta was indefatigable, working around the clock with no complaint. When not traveling around Honduras or the world to raise support for the struggle, she would wake early and go straight to her desk to receive updates, often on the most recent attacks on COPINH members, and in those cases to write condemnations — all even before a cup of coffee. She would then jump into her yellow beater truck to pick up other members of COPINH and head off to wherever action or investigation was needed.
I was amazed that Berta drove that noteworthy truck everywhere without protection, and that she lived in a house secured by only a small bolt and a couple of friendly dogs. Then I realized that it made no difference how much security she had. Though Berta also spent periods in deep hiding, the government and the companies she opposed almost always knew where she was — and how to get her when they were ready to kill her.
Berta took two small breaks in her life. The first was a two-week vacation with a friend in a neighboring country, the second a three-month semi-repose at my house in Albuquerque — though even then, she spent most of her days building a continent-wide boycott of the World Bank and Inter-American Development Bank.
Even as she served her community, Berta rose in the past decade to become an international people’s diplomat. She was a heroine to many global movements, a critical player in many struggles, a keynote speaker at many venues. She was someone consulted by government officials, by international networks, and even, a few months ago, by Pope Francis.
As we watched Berta’s rise as a global leader, our close friend and colleague Gustavo Castro commented to me, “I hope she never loses her humility.” She never did.
I once asked Berta how to say “integrity” in Spanish. She translated it coherencia — coherence between one’s stated principles and actions, coherence amongst all parts of one’s life. Berta had coherence.
She was highly critical of U.S. Americans for our lack of that coherence. She once led an anti-oppression training for an organization I was running, in which she asked us to examine whether we were Caesars or artisans. She meant whether our practice — and not just our statements — aligned us with the oppressors or with the oppressed, and whether we were promoting the grassroots or ourselves as leaders. For a long time after, the refrigerator that Berta and I shared held her line drawing of a thonged Roman sandal.
She commented to me once that the problem with U.S. Americans is our attachment to comfort. Berta herself eschewed comfort. She lived in the modest house in which she was raised, where she cared for her elderly mother. She slept in a bare cement room, more than half of which had been converted to her office, housing her desk with its mountain range of documents and small computer table. Her trademark style — regardless of with whom she was meeting — was jeans, sneakers, and a cotton shirt. She didn’t shop, go to fancy restaurants, or take a plane when a bus was available.
Besides COPINH and the struggle for justice, Berta had another profound commitment: to her mother and her four children. I recall watching the deep pride on Berta’s face when one of her daughters, then only 7 or so, recited a poem, “Las Margaritas” (The Daisies), for a group of foreign visitors; it was a very different expression from any other I’d ever seen from her. She grew prouder as her three daughters and son grew older, all of them holding the flame for justice.
Following Berta’s murder, her children and mother issued a statement in which they said, “We know with complete certainty that the motivation for her vile assassination was her struggle against the exploitation of nature’s commonwealth and the defense of the Lenca people. Her murder is an attempt to put an end to the struggle of the Lenca people against all forms of exploitation and expulsion. It is an attempt to halt the construction of a new world.
“Berta’s struggle was not only for the environment, it was for system change, in opposition to capitalism, racism, and patriarchy.”
Back in 2013, after the Honduran government dropped sedition charges against Berta — one of its countless attempts to silence her — someone asked her mother if she was scared for her daughter. Laughing, Berta quoted her mother’s response: “Absolutely not. She’s doing exactly what she should be doing.”
Berta’s humor was legend. A joke from her, and her soft up-and-down-the-scales laugh, punctuated the most tense of moments and kept many of us going, even as she never strayed from the gravity of the situation. One of her jokes was recirculated recently by radical Honduran Jesuit priest Ismael “Melo” Moreno. He once accompanied her to Rio Blanco, where someone snapped a photo of them together. As she peered at the picture, Berta laughed and said to Melo, “Let’s see which of the two of us goes first.”
When Berta saw a performance of the Raging Grannies, a group of elder women who dress up in outrageous skirts and joyously sing protest songs at rallies and events in Albuquerque, she told me, “I never wanted to live to be an old woman. Now I do.”
That chance was just taken from her.
Repression in the Wake of Berta’s Death
One person witnessed Berta’s assassination: Gustavo Castro Soto, coordinator of Otros Mundos Chiapas/Friends of the Earth Mexico, coordinator of the Mesoamerican Movement against the Extractive Mining Model (M4), and co-founder and board member of Other Worlds.
A close friend and ally of Berta, Gustavo slept in her house on the last night of her life to provide accompaniment in the hope of deterring violence — something dozens of us have had to do for her over the years. Gustavo was shot twice and feigned death. Berta died in his arms.
Gustavo was immediately detained in physically and psychologically inhumane conditions by the Honduran government, and held for several days for “questioning.” The subsequent days have resembled a bad spy movie, with Gustavo finally given permission to leave the country, only to be seized at the migration checkpoint at the airport by Honduran authorities, then placed into protective custody in the Mexican Embassy — only to be handed back to the Hondurans once more, who took him back to the town of La Esperanza for more “questioning.” The Honduran government has declared that Gustavo must stay in Honduras for 30 days. He is being “protected” by the Tigers, vicious U.S.-funded and trained “special forces.”
Chillingly, according to the State Department, the United States is cooperating with the Honduran investigators. A note from a close colleague, from outside Gustavo’s place of detention, said that a team of U.S. “FBI types” were actually in the interrogation room. The role of the U.S. government in the attempted destruction of social movements in Honduras is vast.
One can also draw a straight line from Washington to Berta's death. But that's the topic of another article.
Gustavo continues to be in terrible danger in Honduran custody, as what he witnessed is an impediment to the government’s deceptive attempts to pin Berta’s murder on COPINH itself. In a note to some friends on March 6, Gustavo wrote, “The death squads know that they did not kill me, and I am certain that they want to accomplish their task.”
The Honduran government also imprisoned COPINH leader Aureliano “Lito” Molina Villanueva for two days just after Berta’s murder, on “suspicion in a crime of passion.” Authorities are interrogating COPINH leaders Tomas Gomez and Sotero Chaverria as well, while denying them lawyers. This is part of an effort to criminalize COPINH members.
Now, COPINH needs more than ever to be protected, to be supported, and to carry on the legacy that Berta helped to build.
Berta touched everyone she met, and even countless ones she didn’t.
My young daughter is one of those. The morning of Berta’s death, she wrote this: “I was shocked, because how can somebody kill someone who was only trying to do what’s right? Then I remembered they killed Martin Luther King and Malcolm X. If I die for doing the right thing, that would let me know that I did my part in this world. Just like Berta.”
When Berta received the 2015 Goldman Prize, the most prestigious environmental award in the world, she dedicated the prize to rebellion, to her mother, to the Lenca people, to Rio Blanco, to COPINH — and “to the martyrs who gave their lives in defense of the riches of nature.”
Now Berta is one of these martyrs.
Berta, Gustavo, and I co-founded the grassroots network Convergence of Movements of the Peoples of the Americas (COMPA) in 1999. Early on the horrific morning of March 3, a COMPA listserv note blasted the news of Berta’s assassination. Reading that message, I spotted the posting just prior, dated February 24. It was from Berta. It read simply, “Aqui!” I am here!
She is here. Long may Berta live, in the hearts, minds, passions, and actions of all of us. May all women and men commit themselves to realizing the vision of transformation, dignity, and justice for which Berta lived, and for which she died.
¡Berta Cáceres, presente! | <urn:uuid:792653a7-0a4e-496c-bf61-02b9a8ccb962> | CC-MAIN-2016-26 | http://www.ips-dc.org/t/blog/page/4/?since=10%2F01%2F2013&until=10%2F31%2F2013&start=5 | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.974077 | 3,567 | 2.546875 | 3 |
Indigenous rights groups are applauding U.S. President Barack Obama's creation of a new high-level council aimed at coordinating government actions relating to Native American communities, a move that advocates have been urging since early in the president's first term.
The new White House Council on Native American Affairs will consist of top officials from all agencies and departments, including the budget office, that implement policies affecting Native American tribes.
Critically, these tribes are considered sovereign nations under U.S. law, so that much of the council’s mandate has to do with strengthening this government-to-government context.
The new body will be tasked with improving the atrocious track record of consultation between tribes and the government. That history, coupled with the sometimes bewilderingly complex bureaucracy governing this relationship, has long exacerbated the anger and suspicion already felt among Native American (also known as American Indian) communities towards the U.S. government.
“President Obama’s Executive Order represents a very strong step forward to strengthen our nation-to-nation relationship,” Jefferson Keel, president of the National Congress of American Indians, a six-decade-old advocacy group that has been at the forefront of pushing for the creation of such a high-level body, said Wednesday.
“The Council has been a top priority of tribal leaders from the earliest days of the Obama administration. It will increase respect for the trust responsibility and facilitate the efficient delivery of government services.”
Legal decisions, official treaties and agreements have repeatedly confirmed the sovereignty of the country’s more than 560 officially recognised tribes. Yet that understanding has been regularly violated on the ground, resulting in centuries of oppression and marginalisation.
Official relations with Native American communities have come under increased legal scrutiny in recent years. Last year, courts found federal mismanagement of native funds to have been so egregious that they awarded tribes more than a billion dollars in settlements.
Perhaps more than any past president, Obama appears to have made concerted efforts to strengthen these relationships and begin addressing past wrongs. Advocates say creating the new council is the latest of these steps, aimed at ensuring that these relations extend into subsequent administrations.
“Honouring these relationships and respecting the sovereignty of tribal nations is critical to advancing tribal self-determination and prosperity,” Obama stated in the executive order creating the council, released Wednesday. “We cannot ignore a history of mistreatment and destructive policies that have hurt tribal communities.”
The order also noted that restoring historically tribal-owned lands taken from Native American control – a particularly contentious issue for both indigenous and non-indigenous communities – “helps foster tribal self-determination”.
The White House Council on Native American Affairs will be required to meet at least three times a year, with the first session this summer. The body builds upon an annual conference that Obama began in 2009, which marked the first time that Native American leaders were regularly brought together with high-ranking government officials.
An important part of the council’s responsibilities will also be in educating, or reminding, government officials of the federal government’s roles and responsibilities regarding Native American tribes.
“I never fully understood these ideas of self-determination and governance, and I would expect many colleagues will also not be steeped in those issues,” Sally Jewell, the recently appointed secretary of the interior, who will chair the new council, told reporters Thursday.
“This council will bring [high-ranking officials] together to understand these issues more deeply and to make sure that as we fulfil our relationships and obligations, that we do that at the right government-to-government level.”
Proponents are hoping the new council indicates the consolidation of improved coordination between the U.S. government’s many departments and the concerns of Native American communities throughout the country.
According to both advocates and the government, this shift will require better communication both between agencies and engagement with community leaders.
“All areas, agencies and policies of the federal government impact American Indian citizens in almost every single aspect of our lives, more than other American citizens, from education, health services, natural resources issues and land management, to tribal criminal and civil jurisdiction,” Helen B. Padilla, director of the American Indian Law Centre, told IPS.
“These matters are complex and require that federal agencies become knowledgeable about the federal trust responsibility and their role in carrying out the current policy of Indian self-determination.”
Padilla noted that tribal governments have an “arduous and sometimes insurmountable task” in “providing for their people while navigating…comprehensive federal laws, rules, regulations and policies impacting their ability to provide those services”.
Even with this new step, relations between the federal government and Native American communities have traditionally been so poor and one-sided that it will take years of such regular contact before substantive impact can be gauged.
“Tribal governments do have much more direct access to the administration, and Obama has directed key agencies to engage in far more extensive consultations with tribal agencies,” Ruth Flower, legislative director with the Friends Committee on National Legislation, an advocacy group, told IPS.
“There’s still a lot to do,” she added, “and the relationship between the tribal governments and the U.S. government is still very rocky in a number of places.”
She cited continued complaints regarding land use, with widespread instances in which Native American lands are taken or used by the federal government without thought given to legal status.
The currently debated immigration reform bill, for example, includes a provision that would allow the Department of Homeland Security to place personnel or infrastructure anywhere within 100 miles of the U.S.-Mexico border, including on sovereign tribal land.
“There’s no sense of consultation in these instances, just an assumption that the lands there are open to the use of the U.S. government – there’s no sense that these lands are being reserved for the tribes,” Flower said.
“I’ll be looking very closely at the recommendations guiding the direction in which this new council is expecting to go. If it’s just going to be filing more reports, that’ll be a good indication that it will be pretty ineffectual.” | <urn:uuid:78e62337-4a60-4d65-aecb-11fcd641d3c6> | CC-MAIN-2016-26 | http://www.ipsnews.net/2013/06/govt-council-raises-hopes-for-improved-u-s-tribal-relations/ | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.95361 | 1,346 | 2.578125 | 3 |
Saturday, May 5, 2012
Explain why a line in three-space cannot be represented by a scalar equation.
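The page carries no answer; the standard argument, sketched below in LaTeX as an assumed answer rather than anything from the original post, counts degrees of freedom.

```latex
% A single linear scalar equation in R^3 removes one degree of freedom
% from three, so it describes a plane (a 2-parameter set), not a line:
\[
ax + by + cz = d \quad\Longrightarrow\quad \text{a plane in } \mathbb{R}^3 .
\]
% A line is 1-dimensional, so it requires two independent scalar equations
% (the intersection of two planes) or a vector/parametric description:
\[
\vec{r} = \vec{r}_0 + t\,\vec{m}, \qquad t \in \mathbb{R} .
\]
% In 2-space, by contrast, ax + by = c removes one degree of freedom from
% two, leaving exactly a 1-dimensional line; that is why a single scalar
% equation works in the plane but not in three-space.
```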
This set of images from NASA's Cassini mission shows how the gravitational pull of Saturn affects the amount of spray coming from jets at the active moon Enceladus. Enceladus has the most spray when it is farthest away from Saturn in its orbit (inset image on the left) and the least spray when it is closest to Saturn (inset image on the right).
Water ice and organic particles gush out of fissures known as "tiger stripes" at Enceladus' south pole. Scientists think the fissures are squeezed shut when the moon is feeling the greatest force of Saturn's gravity. They theorize the reduction of that gravity allows the fissures to open and release the spray. Enceladus' orbit is slightly closer to Saturn on one side than the other. A simplified version of that orbit is shown as a white oval.
Scientists correlate the brightness of the Enceladus plume to the amount of solid material being ejected because the fine grains of water ice in the plume are very bright when lit from behind. Between the dimmest and brightest images, they detected a change of about three to four times in brightness, approximately the same as moving from a dim hallway to a brightly lit office.
This analysis is the first clear finding that shows the jets at Enceladus vary in a predictable manner. The background image is a mosaic made from data obtained by Cassini's imaging science subsystem in 2006. The inset image on the left was obtained on Oct. 1, 2011. The inset image on the right was obtained on Jan. 30, 2011.
A related image, PIA17039, shows just the Enceladus images. The Saturn system mosaic was created from data obtained by Cassini's imaging cameras in 2006.
The Cassini-Huygens mission is a cooperative project of NASA, the European Space Agency and the Italian Space Agency. NASA's Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the mission for NASA's Science Mission Directorate, Washington, DC. The Cassini orbiter was designed, developed and assembled at JPL. The visual and infrared mapping spectrometer was built by JPL, with a major contribution by the Italian Space Agency. The visual and infrared mapping spectrometer science team is based at the University of Arizona, Tucson. | <urn:uuid:a26350be-8892-4fa5-bdeb-75ad7e277862> | CC-MAIN-2016-26 | http://www.jpl.nasa.gov/spaceimages/details.php?id=PIA17040 | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.923349 | 488 | 3.90625 | 4 |
NEW YORK, July 8 (JTA) - Many people are familiar with Jan Gross' recent book about the 1941 massacre of 1,600 Jews in the Polish town of Jedwabne, but few are aware of the person who inspired it.
Early one Sunday morning, Polish filmmaker Agnieszka Arnold sat at her cluttered kitchen table in southern Warsaw and sipped coffee as she talked about her film "Neighbors." Arnold lent Gross the use of the film's title for his book. Visibly exhausted, Arnold is a commanding woman in her late 40s, with a tough exterior and blue eyes that easily dampen.
She has made more than 20 documentaries about Polish-Jewish relations, but nothing prepared her for what she discovered in Jedwabne.
“For the past five years,” Arnold, a Lutheran, says softly, “I lived in that barn where the 1,600 Jews were rounded up and killed. Every testimony I collected, every witness I spoke to, put me back into that barn. I haven’t slept well since.”
After the Polish responsibility for the massacre was publicized, the Polish government decided to commemorate the 60th anniversary of the event with a memorial Tuesday.
The Jewish community marked the event with its own ceremony last Friday.
On July 10, 1941, some 1,600 Polish Jews were rounded up by their Polish neighbors, forced into a barn and burned to death. A plaque that stood in Jedwabne since the war falsely claimed that the Nazis committed the atrocity.
Even though Gross’ book has received all the attention, Arnold was instrumental in placing the massacre before the public eye.
This spring, Arnold’s film, broadcast in two prime-time slots, was viewed by 2 million Poles.
Viewers have reacted strongly to Arnold’s film.
Some critics say that Arnold is anti-Polish; others embrace her work.
Boguslaw Majewski, minister at the Polish Embassy in Washington, is sympathetic: “Arnold’s film enables us to come to terms with the crimes that some of our own committed because others stood on the side in silence. It shows us that indifference can be lethal.”
Arnold’s concern about Polish-Jewish relations stems from growing up as a member of the only Lutheran family in the Catholic town of Lowicz.
She believes that minorities in Poland have an advantage when it comes to examining society.
“We feel and see more. Being an outsider allows us to look in,” Arnold says.
After graduating from the University of Warsaw in 1972, Arnold worked for Communist-run state television but was forced to quit because her pieces were considered too political.
Eventually, her independent streak motivated her to direct her own films.
How Arnold came to discover the truth about Jedwabne has more to do with the striking absence of information rather than its existence.
After communism fell and the Polish government set up a commission to investigate Nazi war crimes, an alarm bell sounded for Arnold when she read a falsified report that claimed the Nazis had committed the massacre in Jedwabne.
In 1997, Arnold went to Jedwabne, a small town about 90 miles northeast of Warsaw.
“Literally, in the course of two hours, I learned the truth,” Arnold says.
She and her cameraman, without equipment, entered a bar and began buying drinks for the patrons.
“We might have been tourists, although how many tourists go to Jedwabne?” Arnold asks ironically.
The patrons in the bar accused one another, loudly, of having relatives who were murderers or living on stolen land.
The tension was palpable; they seemed to be competing for Arnold’s attention.
Why they were so anxious to talk about such a taboo topic is something that Arnold easily answers.
“No one had wanted to listen to them until me,” she says.
To make “Neighbors,” Arnold traveled as far as Costa Rica and the United States to interview survivors, rescuers, perpetrators and witnesses.
The film uses no narration, and craftily unravels the hidden horrors through intimate interviews. Some faces are hidden and others boldly face the camera.
The current priest of Jedwabne, who claims on camera that no anti-Semitism exists, warned his congregants not to speak openly to Arnold, but many volunteered.
Arnold points out that for 60 years, no one - neither the Roman Catholic Church, the schools nor the Communist Party - ever attempted to talk with them.
Arnold admits that only four years ago she thought the broadcast of her film would be postponed indefinitely.
Poland was still emerging from 50 years of communism, a period during which all open dialogue had been frozen.
Poles were still reluctant to openly face controversy about the past.
It was against this political backdrop that Arnold began to show her unfinished film to everyone she could. Its chilling firsthand testimonies captured the imagination of Gross, a Polish emigre and historian, and inspired him to expand his own research and publish his internationally acclaimed book.
In addition to sharing her film's title, Arnold was happy to provide the transcripts of her interviews.
"I left my ego out of it. I knew that once Janek" - as she calls Gross - "released his book, the public would know the story and my film would be broadcast," says Arnold.
She was right. Her producers are now negotiating to bring the film to the United States this fall.
Pouring herself and her guests a second cup of coffee, Arnold glances at the rows of drab Communist-bloc buildings outside her kitchen window.
She’s quick to point out that interspersed between the ugly gray blocks stood some prewar buildings reminiscent of a once-elegant Warsaw.
But, not surprisingly, Arnold refuses to romanticize Poland’s past.
“People tell me that I’m a woman who has changed the course of history. Yet I am not satisfied. I am emotionally spent,” Arnold sighs and continues.
“I am a mother with a teen-age son who I want to bring up in a country that has faced the demons of its past. My work is not done until Poles come to grips with the truth. I hope that my film restored some order and things are clearer now.”
Deborah Sklar is the assistant director of the Community Services Department of the American Jewish Committee and has developed a project with the Polish government for the “Next Generation” of young Poles and Jews to address the past. | <urn:uuid:c963e53d-5b9c-49ca-b49c-f2af66d14d42> | CC-MAIN-2016-26 | http://www.jta.org/2001/09/18/life-religion/features/polish-filmmaker-put-wwii-massacre-on-the-map-2 | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.976454 | 1,390 | 2.859375 | 3 |
An upper gastrointestinal endoscopy (a camera examination of the stomach) can pick up GERD, also called acid reflux. In GERD, food and stomach acid may rise into the food pipe or even the throat, and acid in the throat can cause sinusitis. The sinuses are hollow structures in the bones behind the nose that help aerate the nose; infection and inflammation of a sinus is called sinusitis. GERD and sinusitis are easily treatable. GERD is treated as follows:
1) Antacids: Maalox, Digene.
2) Acid blockers: Prilosec (omeprazole).
3) Losing weight, if overweight.
4) Avoiding alcohol, citrus fruits and juices, chocolate, and tomato-based products.
5) Avoiding large meals. Eat 5 small meals in a day.
6) Waiting three hours after a meal before you sleep.
7) Elevating the head end of the bed by 8 inches.
A coronal CT of the sinuses and/or fiberoptic nasal endoscopy is diagnostic for sinusitis. Some physicians may also recommend a nasal smear to differentiate allergy from infection. The aims of treatment are to relieve the obstruction, treat any infection present, thin the mucus, and open the sinuses. The treatment protocol would be as follows:
1) Nasal steroids: decrease inflammation and open up the obstruction.
2) Oral decongestants such as pseudoephedrine are often helpful.
3) Topical decongestants, for a short duration only.
4) Steam vaporizer.
5) Eucalyptus oil may help.
6) Nasal saline irrigation.
7) Vitamin C.
For treating it naturally: steam inhalation, nasal saline (salt water) irrigation (which can be done with a Neti pot), vitamin C, and eucalyptus oil inhalation can all be used. The following can boost the body's immunity and so help in overcoming the sinus infection:
1) Omega-3: fish oil capsules and flaxseed oil capsules.
2) Ginkgo Biloba
3) Zinc 30 mg.
GERD has increasingly been implicated in causing or exacerbating chronic sinusitis. The exact relationships and mechanisms are presently a matter of speculation, though.
Consulting an ENT (ear, nose and throat) specialist would be prudent.
It is a privilege assisting you.
Shots for school? How to calm a child's fears. TODAY
Many parents know that a needle is in their kid’s future -- either shots or a blood draw. And kids know it, too.
Don’t lie. If you say it won't hurt and it does, they certainly won't believe you next time.
Do stay calm and composed (no matter how upset you really are). Children of all ages will look to his or her parents for cues on how to react.
Don’t threaten kids with shots as a means of discipline. A line often overheard from a parent: “If you don't behave, the doctor’s going to give you a needle.”
Do prepare them. “When kids know what’s going on and what to expect, they generally do better,” says Smith. “Truthfully answering any questions they have can be very helpful.” | <urn:uuid:f1d32703-daa1-4d86-bc83-af07eb54ff88> | CC-MAIN-2016-26 | http://www.klove.com/blog/scottandkelli/?q=30secondvideoclipforcontest&page=332 | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.972537 | 197 | 2.578125 | 3 |
Image credit: Tracy Collins/WUSTL Creative Services
The idea that people make calculated decisions that allow them to obtain the most goods with the smallest amount of effort — a complex hypothesis called ‘economic man’ for short — often has been challenged. People sometimes make irrational decisions, they rarely possess sufficient information to make the best decision, and they sometimes act against their own economic self-interest, critics say.
But none of these critiques is as radical as the one advanced in the Jan. 13 online issue of Proceedings of the National Academy of Sciences (PNAS). Written by an international team of researchers, it was inspired by a workshop on biological markets (transactions in which partners, typically animals, exchange commodities for their mutual benefit) held at the Lorentz Center of the University of Leiden in The Netherlands in January.
The scientists asked themselves how far biological market theory, which has been used successfully to explain cooperative behavior in many species, could be extended. Could it be used, for example, to describe the exchange of commodities between organisms without any cognitive ability, such as microbes?
They could think of instances where single-celled organisms had been shown to avoid bad trading partners, build local business ties, diversify or specialize in a particular commodity, save for a rainy day, eliminate the competition and otherwise behave in ways that seem to follow market-based principles.
They concluded not only that microbes are economic actors, but also that microbial markets can be useful systems for testing questions about biological markets in general, such as the evolution of partner choice, responses to price fluctuations and the identification of market conditions that drive diversification or specialization.
They even foresee practical applications of the work. It might be possible, for example, to manipulate ‘market conditions’ in crop fields to drive nitrogen-fixing bacteria to trade more of their commodity (a biologically available form of nitrogen) with crop plants.
“Creative insights are often easier when theories from one field are explored in a different system as we do here, applying economic concepts to microbial interactions,” said Joan Strassmann, PhD, the Charles Rebstock Professor of Biology in Arts & Sciences at Washington University in St. Louis, who participated in the workshop and helped write the PNAS paper.
“The microscopic nature of microbial systems means it is easy to misunderstand their interactions; an economic framework helps us focus on what is important,” said David Queller, PhD, the Spencer T. Olin Professor of Biology, another of the brainstorming scientists.
Microbial business practices
The idea of biological markets is not new. Biological market theory was first formulated in 1994, by two of the workshop participants, Ronald Noë, PhD, of Université de Strasbourg in France and Peter Hammerstein, PhD, of the Institute for Theoretical Biology, Homboldt-Universität zu Berlin, in Germany.
Scientists have long been aware that trades among a wide range of organisms are not blind exchanges but instead ones shaped by ‘market conditions’ such as price, quality and competition.
For example, cleaner fish, small fish that pick dead cells and mucus off of larger fish, provide a higher-quality cleaning service when competing cleaner fish are around.
What is new is the suggestion that single-celled organisms might participate in markets as well. Gijsbert Werner, a doctoral candidate at Vrije Universiteit in Amsterdam, The Netherlands, and the first author of the PNAS paper, said that the requirements for the emergence of market behavior are fairly minimal.
“For biological markets to evolve, you actually only need that individuals can detect co-operators and respond by rewarding them with more resources,” Werner said. “This can work through automatic responses. Organisms without cognition, like microbes, are also capable of automatic economic responses.”
Impressively complex behavior can emerge from those automatic responses. Toby Kiers, PhD, professor of evolutionary interactions at Vrije Universiteit and the senior author on the paper, studies complex underground networks between plant roots and fungi associated with those roots, called mycorrhizal fungi. The plants supply the fungi with sugar in exchange for mineral nutrients such as phosphorus.
Kiers and her colleagues found that the fungi compare the resources on offer by different plants, and adjust their resource allocations accordingly.
Some fungi even hoard resources until they get a better deal. “We now see that such ‘playing of the market’ happens in microbes. Microbial traders can be ruthless, even using chemicals to actively elbow competitors out of the marketplace,” Kiers said.
The scientists expect that studying microbial exchange systems as miniature markets will give them insight into the many collaborative behaviors of microbes, helping to generate new hypotheses and approaches in the field of social microbiology.
And by comparing microbial markets and animal markets they will then be able to determine “which, if any, market features are specific to cognitive agents,” the scientists write.
So much for the egos of Wall Street traders. | <urn:uuid:b6c85092-7b5f-40a5-a704-fa9f450cc5e4> | CC-MAIN-2016-26 | http://www.labmanager.com/news/2014/01/microbes-buy-low-and-sell-high | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.940537 | 1,047 | 2.9375 | 3 |
White Blood Cell Count and Differential Count
(WBC and Diff)
- WBC Count
- Leukocyte Count
- White Count
- WBC and Differential
WBCs are produced in the bone marrow, and are necessary to fight infection and for normal immune response. A WBC count measures the total amount of WBCs in a specified amount of blood, while a Differential count measures specific WBCs as a percentage of the total WBC count, and allows for more specific diagnoses. The Absolute Count is the product of a mathematical calculation done in the lab, and is done as a further diagnostic tool. WBC and Diff are always done as part of a Complete Blood Count, which gives your doctor a lot of information concerning your general health, and is used to help diagnose illness, and to monitor how well you are responding to treatment.
There are five different WBCs and each has a different function in normal immunity. Neutrophils fight and eat bacteria and only live for a few hours. Basophils and eosinophils are released as a result of allergic reactions or the presence of parasites. Lymphocytes are divided into T cells and B cells; T cells are involved in immune reactions; B cells are involved in antibody production. Monocytes are similar to neutrophils, but they are produced very quickly and live longer. Low blood levels are seen in people with severe infections, inherited bone marrow disease, autoimmune disease, poor nutrition, non-functioning bone marrow, chemotherapy, or excessively high levels of some medications. High levels are seen in people with inflammation or infection, stress, trauma, or leukemia. There are many medications that can affect WBC levels.
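As an illustration of that absolute-count calculation, here is a hedged sketch in C. The example values (a total count of 8,000 cells/uL and 60% neutrophils) are assumptions for demonstration only, not reference values:

#include <stdio.h>

int main(void)
{
    /* Hypothetical example values, not reference ranges */
    double total_wbc = 8000.0;       /* total WBC count, cells per microliter */
    double pct_neutrophils = 60.0;   /* neutrophil percentage from the differential */

    /* The absolute count is the product of the total count
       and the differential percentage */
    double absolute_neutrophils = total_wbc * (pct_neutrophils / 100.0);

    printf("Absolute neutrophil count: %.0f cells/uL\n", absolute_neutrophils);
    return 0;
}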
In a medical emergency, step away from this web site and call for emergency help. Remember, we're not doctors and we don't claim to be able to diagnose your condition. The information and services we provide or display here are merely intended to make you a more knowledgeable patient so that you can have smarter conversations with your actual health care providers. | <urn:uuid:24f3ed25-5d84-405e-a156-0e148466f4cf> | CC-MAIN-2016-26 | http://www.labtesthelp.com/test/White_Blood_Cell_Count_and_Differential_Count | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.955752 | 427 | 3.234375 | 3 |
There is only one goalie on a lacrosse field. The goalie's job is to save the ball. The goalie uses a head specifically designed for saving shots. Goalie heads are taller and a lot wider than the heads used at all the other positions. Goalies also use a goalie shaft (a 40" shaft). The goalie's pocket is also deeper than everyone else's pocket, to keep the ball from bouncing out of the stick when saving a shot. The goalie does not wear any elbow pads, to allow maximum mobility to save tough shots. All goalies are required to wear a throat guard, a chest protector and a protective cup.
11 years ago on this day, September 11th, 2001, a series of four coordinated suicide attacks was committed in the United States. On that Tuesday morning, 19 terrorists from the Islamist militant group al-Qaeda hijacked four passenger jets. The hijackers intentionally piloted two of those planes, American Airlines Flight 11 and United Airlines Flight 175, into the North and South towers of the World Trade Center complex in New York City; both towers collapsed within two hours. The hijackers also intentionally crashed American Airlines Flight 77 into the Pentagon in Arlington, Virginia, and intended to pilot the fourth hijacked jet, United Airlines Flight 93, into the United States Capitol Building in Washington, D.C.; however, the plane crashed into a field near Shanksville, Pennsylvania, after its passengers attempted to take control of the jet from the hijackers. Nearly 3,000 people died in the attacks, including the 246 civilians and 19 hijackers aboard the four planes.
Today is not only a day of remembrance for those that were lost, but also a day to honor the brave police, firefighters, service members, and citizens who were there on the frontlines. That day marked all who remember it in very different ways. Please take a moment today to stop and REMEMBER 9/11.
Lasalle General Hospital
187 9th Street Jena, Louisiana 71342 Phone: 318.992.9200 | <urn:uuid:16753b58-96c3-4ae4-bf67-ed612da8d823> | CC-MAIN-2016-26 | http://www.lasallegeneralhospital.com/articles/2012/09/memory-9-11 | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.944156 | 281 | 2.640625 | 3 |
A Guide to the Herbert Hoover Newspaper Clipping Collection, 1928-1954
President of the United States Herbert Clark Hoover (1874-1964) was born to Jessie and Hulda Minthorn Hoover in West Branch, Iowa. After the death of his parents, Hoover moved to Oregon and later graduated from Stanford University (1895). He married Lou Henry in 1899 and worked as a mining engineer in Australia and China. At the onset of World War I, Hoover managed a food relief effort as chairman of the Commission for Relief in Belgium and was appointed head of the U.S. Food Administration in 1917. Following the war, Hoover served on the Supreme Economic Council and was head of the American Relief Administration, which oversaw the shipment of foodstuffs to Europe.
After serving as the Secretary of Commerce under Presidents Harding and Coolidge, Hoover was elected to the presidency on the Republican ticket in 1928. During the stock market crash of 1929 and the Great Depression, Hoover sought a balanced Federal budget and was resistant to increasing Federal welfare programs. After the Great Depression spread throughout the world, he was defeated in 1932. He was a critic of the New Deal in the 1930s, and later chaired commissions to increase the efficacy of the executive department in 1947 and 1953.
Source: “Herbert Hoover.” White House Biography. Accessed on May 5, 2011. http://www.whitehouse.gov/about/presidents/herberthoover/
Original newspaper clippings and printed material comprise the Herbert Hoover Newspaper Clipping Collection, 1928-1954, documenting the life and political career of President Herbert Hoover. Newspaper clippings from around the United States chronicle Hoover’s 80th birthday in summer of 1954. Miscellaneous news clippings detail his political activities during the 1930s and 1940s. Additionally, the collection includes printed material related to Hoover, including speeches and a biography.
This collection is open for research use.
Herbert Hoover Newspaper Clipping Collection, 1928-1954, Dolph Briscoe Center for American History, The University of Texas at Austin.
This collection was processed by Evan Usler, May 2011. | <urn:uuid:00f9f23a-2023-476f-903a-deed2ad07412> | CC-MAIN-2016-26 | http://www.lib.utexas.edu/taro/utcah/02693/cah-02693.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.959023 | 445 | 2.859375 | 3 |
What is family practice?
A family practitioner is a primary care doctor who provides health services to adults, children and adolescents. Family practitioners have a very broad scope of practice, but are usually the patient’s first consultation before being referred to other specialists, if necessary. The family practitioner performs annual physical examinations; ensures up-to-date immunization status; counsels patients on healthy lifestyles; monitors patients’ blood pressure, cholesterol and glucose levels; and ensures other baseline tests are within normal limits for the patient’s age and gender.
When do I see a family practitioner?
In the United States, about one in four visits to a doctor are to a family practice doctor. Family care physicians care for the poor, indigent and underserved in the community more than any other physician specialty.
When a patient is an infant or child, he or she might see a family practitioner specializing in pediatric care. When the young patient transitions from childhood to adulthood during adolescence, he or she can see a family practice doctor specializing in adolescent medicine or a regular family practitioner, also known as an adult-care physician. Around the ages of 18-21, patients typically transition to an adult-care physician who is better suited to their health-care needs.
What should I expect when I visit a family practitioner?
A family practitioner’s scope of practice varies, but these specialists typically provide basic diagnoses and non-surgical treatment of common medical conditions and illnesses.
To arrive at a diagnosis, family doctors will interview and examine the patient. This requires discussing the history of the present illness, including a review of the patient’s body systems, medication history, allergies, family history, surgical history and social history. Then the physician will perform a physical examination and possibly order basic medical tests, such as blood tests, electrocardiograms or X-rays. Ultimately, all this information is combined to arrive at a diagnosis and possible treatment. Tests of a more complex and lengthy nature may be referred to a specialist.
Together with the patient, the family practitioner forms a plan of care that can include additional testing if needed, a referral to see a specialist, medication prescriptions, therapies, changes to diet or lifestyle, additional patient education, or follow-up treatment. Patients also may receive advice or education on improving health behaviors, self-care and treatment, screening tests and immunizations.
What are the most common conditions family practitioners treat?
- Common cold
- Gastrointestinal complaint
- Gynecological complaint
- High blood pressure
- Infectious diseases
- Musculoskeletal complaint
- Psychiatric disease
- Sexually transmitted disease
- Skin complaint
- Urinary tract complaint | <urn:uuid:6bba2703-b791-45d7-95bf-b0d08396ae93> | CC-MAIN-2016-26 | http://www.lifescript.com/Doctor-directory/family-practice/chicago-illinois-il-ghousia-j-khan-jr-md.aspx | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.924043 | 578 | 3.0625 | 3 |
What is psychiatry?
Psychiatry is a medical specialty that focuses on the study and treatment of mental health disorders. It is a highly specialized field that, despite the lack of direct diagnostic testing available, plays an extremely important role in overall wellbeing.
What is a psychiatrist and what does a psychiatrist do?
Psychiatrists are board-certified medical doctors who diagnose and treat a variety of mental health disorders, which range from anxiety attacks and depression to schizophrenia and other psychoses. As a medical doctor, a psychiatrist is licensed to prescribe medication, and often works in conjunction with psychotherapists (talk therapists) and other mental health professionals.
Psychiatrists typically diagnose mental health disorders based on the criteria listed in manuals such as the Diagnostic and Statistical Manual of Mental Disorders (DSM), published by the American Psychiatric Association. Before starting any kind of psychiatric treatment, a psychiatrist will typically give the patient a medical and psychiatric evaluation, gather patient history (medical history, symptoms of problems) and conduct physical exams, laboratory tests and brain imaging scans, such as magnetic resonance imaging (MRI) or computed tomography (CT) scans.
Psychiatrists typically work directly with patients, either in a clinical (hospital) or counseling situation; however, many devote their practice to research, teaching in academic settings, or are employed in a corporate sector (industrial, media, law, etc.). Psychiatrists must graduate from medical school and complete 3-4 years of psychiatric residency, followed by psychiatric fellowship training. They are certified by the American Board of Psychiatry and Neurology (ABPN).
Psychiatry is a complex field with a range of subspecialties including:
- Addiction psychiatry – Evaluation and treatment of individuals with alcohol, drug, or other substance-related disorders as pertaining to mental processes
- Child and adolescent psychiatry – Psychiatric issues in children, teenagers, and families
- Cross-cultural psychiatry – Cultural and ethnic issues related to mental disorders and psychiatric services
- Geriatric psychiatry – The understanding, prevention and treatment of mental disorders in elderly persons
What conditions does a psychiatrist commonly treat?
Psychiatrists treat a wide range of conditions in children and adults, some of which can be harmful or life-threatening to the patient and others if not treated. Examples of various conditions psychiatrists treat include:
- Autism and Asperger disorder
- Alcoholism, substance abuse
- Addictions and compulsions
- Abuse, bullying and violence
- Attention deficit/hyperactivity disorder
- Eating disorders (Anorexia nervosa, bulimia nervosa)
- Mood and personality disorders (bipolar disorder, avoidant personality disorder, antisocial personality disorder, borderline personality disorder, obsessive-compulsive personality disorder)
- Anxiety and depression
- Dementia, hallucinations, delusions
- Gender identity disorder
- Post-traumatic stress disorder
- Sexual dysfunction and paraphilia (exhibitionism, frotteurism, voyeurism)
- Suicide attempts and self-harm
- Sleep disorders (sleep terror disorder, sleepwalking disorder) | <urn:uuid:ebfa27cc-8363-4662-843c-d28817ea0610> | CC-MAIN-2016-26 | http://www.lifescript.com/doctor-directory/psychiatry/san-francisco-california-ca-william-byerley-md.aspx | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.90885 | 642 | 3.875 | 4 |
Dehydration results from excessive loss of fluids from the body.
To work properly, the body requires a certain amount of water and other elements, called electrolytes. Drinking and eating help to replace fluids that have been lost through the body's functions. Fluids are normally lost through sweat, urine, bowel movements, and breathing. If you lose a lot of fluids and do not replace them, you can become dehydrated.
Factors that may increase the risk of dehydration include:
Other risk factors include:
Symptoms vary depending on the degree of dehydration. Symptoms may include:
Figure: Soft Spot in Infant Skull (Copyright © Nucleus Medical Media, Inc.)
Dehydration can be extremely serious and life threatening. It may require immediate medical care.
The doctor will ask about your symptoms and medical history. A physical exam will also be done. This will include measuring your vital signs. To help provide information for the doctor, keep a diary of:
Tests may include:
Therapy aims to rehydrate the body, replace lost electrolytes, and prevent complications. If you have an underlying condition, your doctor will treat that, as well.
Treatment may include:
If you have minimal or moderate dehydration, you doctor may have you replace fluids by mouth. You may need to:
If you are severely dehydrated, intravenous fluids (given through a vein in your arm) will be given to rapidly replace fluids.
Your doctor may recommend that you take medicine, such as:
If you are diagnosed with dehydration, follow your doctor's instructions.
To prevent dehydration:
American Academy of Family Physicianshttp://www.aafp.org
American Academy of Pediatricshttp://www.aap.org
About Kids Healthhttp://www.aboutkidshealth.ca/
Web Servers and Dynamic Content
When web servers first appeared their primary purpose was to serve up selected information from the machine on which they ran. The idea was to simply take the contents of a file and transmit them over a TCP connection in HTTP format. The inherent limitation discovered early on was that dynamic content could not be delivered, and the CGI interface was defined and added to web servers to address this.
The Common Gateway Interface (CGI) provides a way for the web server to execute a process whose output is determined by the code that it executes, and then take this output and pass it back to the client browser as if it were the contents of a static file. Since that time many variations that combine scripting engines and CGI have evolved that simplify the job of the programmer and make the execution of multiple external threads more efficient (Perl, Python, PHP, etc.). However, for the most part, these scripting languages have the drawback of needing to be interpreted. Also, they overlook the fact that enormous bodies of code already exist in such languages as C and Fortran (remember it?) to solve complex, computationally intensive applications. While building dynamic content based on the results of queries is very useful, it is not realistic to apply image data transformations or calculate Fourier Transforms using these scripting languages, as the time required to complete such calculations is very long with respect to the response time which the user expects.
Therefore, because of the existing code base and the existence of dynamic content that has to be generated by some algorithms that cannot be efficiently implemented in scripting languages, there is a definite need for developing programs that use the CGI, written in a traditional language like C or Fortran and compiled down to native code.
There are two ways in which data in the HTTP protocol is passed from the browser to the web server: the GET data (specified as part of the URL, commonly that part that appears after the “?” in the URL) and the POST data (the collected name-value pairs of all of the fields in the form that is being submitted by the web browser). To figure out the GET data, just look at the URL—http://www.mydomain.com/pages/external.cgi?additional-data.
GET data portion: additional-data
To figure out the POST data, look at the source of the document generating the request, and find the form that is being submitted:
<FORM>
<INPUT TYPE="text" NAME="fld01" VALUE="val01">
<INPUT TYPE="hidden" NAME="fld02" VALUE="val02">
<INPUT TYPE="checkbox" NAME="fld03" CHECKED>
<SELECT NAME="fld04">
<OPTION VALUE="val03"> Val-03
<OPTION SELECTED VALUE="val04"> Val-04
<OPTION VALUE="val05"> Val-05
</SELECT>
</FORM>

POST data: fld01=val01&fld02=val02&fld03=on&fld04=val04

As you can see, POST data is submitted in the form of a continuous string identifying each field/value pair separated by an equal sign ("="). Each field/value entity is separated by an ampersand ("&").
There are actually other complexities involved in the format for passing in POST data. Many characters need to be “escaped” in order not to confuse the web server with control characters or separator breaks. This is solved by inserting plus signs (“+”) for spaces and escape sequences of the format “%[0-9,A-F][0-9,A-F]” in the place of non-printable characters, ampersands, pluses and equal signs. (For the purist, yes, it is true that spaces can be represented as either the “+” symbol or as the escape sequence “%20”. All versions of Apache and IIS with which I have worked accept both.)
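A minimal sketch of such a decoding routine follows (an illustrative helper of my own, not code from the article; the name url_decode is assumed). It turns "+" back into spaces and expands "%XX" hex escapes:

#include <stdlib.h>

/* Decodes src into dst; dst must be at least as large as src. */
void url_decode(const char *src, char *dst)
{
    while (*src) {
        if (*src == '+') {
            *dst++ = ' ';           /* '+' stands for a space */
            src++;
        } else if (*src == '%' && src[1] && src[2]) {
            char hex[3] = { src[1], src[2], '\0' };
            *dst++ = (char) strtol(hex, NULL, 16);  /* "%XX" -> one byte */
            src += 3;
        } else {
            *dst++ = *src++;
        }
    }
    *dst = '\0';
}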
The web browser passes GET and POST data to the external thread by different mechanisms. GET data is placed in an environment variable visible to the context local to that thread. This environment variable is “QUERY_STRING”. Therefore, gaining access to GET data in C/C++ is a simple matter of this command:
char *pszGetData = getenv("QUERY_STRING");
(This should work in all UNIX and in all Microsoft development environments.)
On the other hand, POST data is passed to the external thread on the standard input stream. For those of you not familiar with streams, it is the same data producer as the keyboard, so whatever it is you've been doing to read input from the keyboard, that is the mechanism you will use to access POST data. However, an inherent liability with streams is that you have no way of knowing how much data is waiting for you. The obvious solution is to keep reading byte by byte from that stream until there is no more data and keep resizing a dynamically allocated buffer accordingly. However, web browsers provide the developer with another piece of information that saves them the trouble of having to grow buffers, incur the added overhead and deal with handling the exceptions that occur when one of multiple dynamic memory allocations decides to fail.
When a web browser passes POST data to the standard input of an external thread, it places all of the POST data there in one shot; therefore, there is never any chance that additional data will be added to the stream after you have first encountered a data portion on that stream.
The web browser also tells the process just how much data it has placed in the standard input by writing (as ASCII text) the number of bytes waiting to be read from the standard input in an environment variable visible to the context local to that thread. This environment variable is “CONTENT_LENGTH”. Therefore, gaining access to POST data in C/C++ requires a three-step process that will work in all UNIX and Microsoft development environments:
long iContentLength = atol(getenv("CONTENT_LENGTH"));
/* allocate one extra byte so the buffer is always NUL-terminated */
char *szFormData = (char *) malloc((iContentLength + 1) * sizeof(char));
bzero(szFormData, (iContentLength + 1) * sizeof(char));
fread(szFormData, sizeof(char), iContentLength, stdin);
A given browser document can contain both GET and POST data; therefore, both mechanisms may be used at the same time. In the <FORM> tag, a target may be specified which contains GET data, and the data contained in the form will be passed to the target as POST data.
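Putting both mechanisms together, a minimal CGI program might look like the following sketch (my own illustration, with NULL checks added; it simply echoes whatever the browser sent):

#include <stdio.h>
#include <stdlib.h>
#include <strings.h>

int main(void)
{
    /* GET data arrives in the QUERY_STRING environment variable */
    char *pszGetData = getenv("QUERY_STRING");

    /* POST data arrives on standard input; its size is announced
       (as ASCII text) in CONTENT_LENGTH */
    char *pszLength = getenv("CONTENT_LENGTH");
    long iContentLength = pszLength ? atol(pszLength) : 0;

    char *szFormData = (char *) malloc((iContentLength + 1) * sizeof(char));
    bzero(szFormData, (iContentLength + 1) * sizeof(char));
    if (iContentLength > 0)
        fread(szFormData, sizeof(char), iContentLength, stdin);

    /* every CGI response starts with a header block and a blank line */
    printf("Content-type: text/plain\r\n\r\n");
    printf("GET data: %s\n", pszGetData ? pszGetData : "(none)");
    printf("POST data: %s\n", szFormData);

    free(szFormData);
    return 0;
}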
On this day in 1939, World War II began as Nazi Germany invaded Poland. Each Monday, this column turns a page in history to explore the discoveries, events and people that continue to affect the history being made today.
Aside from the Bible, few books over time have stirred up such controversy as one composed from the cell of a German prison.
It is a poorly-written mess, according to literary critics, but the ideas contained within Adolf Hitler's 1925 tome "Mein Kampf" (or My Struggle) sadly would resonate well beyond the book's quality of prose.
Mein Kampf was the manifesto from which all of Hitler's atrocities stemmed, a tinderbox of a book that may have disappeared from the annals of history had the author not actually gone on to carry out the ideas presented in his tirade against all things non-German.
Because he did, however, the notorious book remains banned in some parts of the world, more than 80 years later, and has sparked ongoing debates about literary freedoms.
Hitler passes the time behind bars
Hitler rose through the ranks of Germany's small but powerful National Socialist (Nazi) political party to become its bombastic leader in the early 1920s. Believing that Germany's central Weimar government had let the country be ridiculed by a series of post-World War I punishments handed down by the victorious Allies, the Nazis attempted a coup d'etat in 1923. The famous "Munich Beer Hall Putsch" failed and sent Hitler to jail for high treason.
While imprisoned, Hitler read piles of books on history and philosophy, consolidating his own set of beliefs all the while. He didn't consider putting his political ideas down on paper, however, until his business manager suggested that an autobiography might be a fruitful way to pass the time in prison. At his manager's urging, Hitler's original title for the work, "Four Years of Struggle Against Lies, Stupidity and Cowardice," was also changed to the more marketable "My Struggle." Hitler did not, in fact, write the book himself, but dictated it to his friend Rudolf Hess, who was imprisoned alongside him.
Part autobiography, part political manifesto, the patchwork pages of "Mein Kampf" were put to print in 1925. By that time, Hitler had already been released from prison upon growing pressure from members of the Nazi party.
A gift for the newly-married couple
Despite being repetitive, long-winded and difficult to read, "Mein Kampf" had become a popular read throughout Germany by the late 1920s, disseminating Hitler's main theories to a large audience.
In "Mein Kampf," the future chancellor of the Third Reich goes on at length about his youth and the early days of the Nazi Party, but it was his visions for a Germany of the future that resonated most with its readers.
Chief among his ideas was the absolute, innate superiority of the Germanic race, which Hitler called Aryan, over every group of people. "Mein Kampf" singled out Jews as the source of many of Germany's ills and a threat to Aryan dominance. The Aryans had a duty to restore Germany's former glory and enlarge its territory by winning back the land it had during World War I and gaining new area by expanding into Russia.
"Mein Kampf" gained enormous readership in the early 1930s and once Hitler gained power as Chancellor of Germany in 1933, became the de facto Nazi bible. Every newly married couple received a free copy on their wedding day, and every soldier had one included as part of his gear. At the outset of World War II, the book had been translated into 11 languages and sold 5 million copies.
The debate goes on today
"Mein Kampf" was a clear-cut warning to the world of Hitler's intentions for war and genocide, which may have been recognized and prevented had more people read it outside of Germany, some historians say. Publishers in the United States and the U.K. did produce copies in English prior to the War, but were held up by copyright lawsuits by Hitler's publishers.
Since the war, the book has remained a flashpoint of controversy, especially in Germany and the former Axis nations.
Worried over its use as propaganda by neo-Nazi groups, Germany and Austria have banned the possession and selling of "Mein Kampf" outright, while some countries restrict its possession to people using the book for academic purposes only. Opponents of the ban argue that the book is a valuable historical document, and that keeping it unavailable only makes it more desirable to right-wing groups.
The debate over the book's ban has flared up again in Germany in recent months, with some groups calling for an edition carefully annotated by academics to be produced before 2015, when the book's copyright officially expires in Germany and it will become available to anyone in the general public to own and reprint. | <urn:uuid:dc43c733-5ba2-40bf-b13e-55db4b4e6bb4> | CC-MAIN-2016-26 | http://www.livescience.com/2821-mein-kampf-changed-world.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.983385 | 1,016 | 3.28125 | 3 |
Alaska's Columbia Glacier, a majestic "river" of ice, is one of the most rapidly changing glaciers on Earth. Thirty years ago, the glacier began to shrink with unprecedented speed — and satellite images reveal the scale of the changes that have continued ever since.
The Columbia Glacier originates high in the Chugach Mountains, at an elevation of 10,000 feet (3,050 meters), and flows to the sea, emptying out into Prince William Sound in southeastern Alaska.
When British explorers first surveyed the glacier in 1794, it reached as far as Heather Island, a small island near the mouth of Columbia Bay.
The glacier didn't budge for almost a century, but in 1980, the glacier began a sudden retreat.
A 1986 satellite image, shown in false colors above, shows that the front edge of the glacier had begun to splinter and fall apart, giving birth to flotillas of icebergs. In the photograph, the glacier has retreated several kilometers north of Heather Island.
By 2011, the glacier had retreated more than 12 miles, as revealed in the image below.
As the glacier has retreated, it has also thinned substantially — as shown by the expansion of brown bedrock areas on either side of the glacier. Since the 1980s, the glacier has lost about half of its total thickness and volume.
The glacier's rapid retreat has also changed the way it flows. In the 1980s image, you can see the area where two branches of the glacier meet — it's called the medial moraine.
In the 2011 image, the glacier has shrunk so much that its two branches no longer meet; it is essentially two different glaciers, which is causing the splintering and melting to happen on two different fronts, instead of one. | <urn:uuid:b968e38c-544e-4045-b050-994a832476c0> | CC-MAIN-2016-26 | http://www.livescience.com/31441-alaska-glacier-image.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.965265 | 361 | 4.25 | 4 |
Loyola health services experts offer tips for staying healthy during holiday travel
Loyola press release - November 23, 2009
The holidays often mean bringing people close together for family gatherings and get-togethers, but it’s also an ideal way for illness to spread. Before and during any holiday travel, practicing everyday healthy habits and getting vaccinated can help prevent the spread of flu, says Alicia Bourque, Loyola University New Orleans director of counseling and health services.
“Thanksgiving and Christmas are two of the busiest times of the year to travel. Not only will people be exposed to family members they haven’t seen for a while, but it also means close contact with others during travel,” said Bourque. “In order to minimize exposure to the H1N1 flu and the seasonal flu, practicing prevention is the most important thing you can do to stay healthy. It’s also very important to get vaccinated against both types of flu as soon as possible.”
The Centers for Disease Control expects that the 2009 H1N1 flu and seasonal flu activity will remain high during the 2009-10 flu season. So far, the CDC says most flu cases this year have been attributed to the 2009 H1N1 flu or "swine flu." However, officials expect both 2009 H1N1 flu and seasonal flu to cause illness, hospital stays and deaths this season.
Bourque offers the following advice to stay healthy during the holidays:
- Get vaccinated. Vaccination is the best protection we have against flu. The seasonal flu vaccine and the 2009 H1N1 vaccine are now readily available in several locations throughout Louisiana. Go to http://apps14.dhh.louisiana.gov/h1n1/default.aspx and enter your zip code or parish to get a list of sites that are administering the H1N1 vaccine. Seasonal flu vaccine is also available from many sources including your own doctor or health care provider, or retail outlets in or near your neighborhood.
- Avoid close contact with sick people. The flu is thought to spread mainly from person-to-person through coughing or sneezing of infected people. Keep your distance when possible.
- Wash your hands often. Wash your hands with warm water and soap, especially after coughing or sneezing. If soap and water are not available, use an alcohol-based hand rub, even when your hands are not visibly dirty. Other hand sanitizers that do not contain alcohol also may be useful.
- Cover your mouth and nose with a tissue when you cough or sneeze. After coughing or sneezing, throw your used tissue in the trash. If you don't have a tissue, cough or sneeze into your elbow or upper sleeve, not into your hands.
- Avoid touching your eyes, nose, or mouth. Germs are spread this way.
- Stay informed. Visit http://www.cdc.gov/h1n1flu/ and http://blogs.loyno.edu/emergency-updates/ for more information and the latest flu updates.
To arrange an interview with Bourque, contact Meredith Hartley with Loyola’s Office of Public Affairs at 504-861-5883.
LoyNews is an e-newswire produced by the Loyola University New Orleans Office of Public Affairs. LoyNews is distributed weekly to local, regional and national news media outlets, communicating the latest news and accomplishments of the university and its community. | <urn:uuid:1f601083-26f4-4a21-a8c8-ab21a1d2183b> | CC-MAIN-2016-26 | http://www.loyno.edu/news/loynews/20091123/1983/print | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00000-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.95101 | 732 | 2.640625 | 3 |
the creation of Internet-based audio programmes which can be automatically downloaded from the Internet onto a device such as an iPod or MP3 player
'Podcasting will shift much of our time away from an old medium where we wait for what we might want to hear to a new medium where we choose what we want to hear, when we want to hear it …' (CRM Daily, 4th April 2005)
'Podcasts have caught on like wildfire since they first emerged only nine months ago. Listeners can pick from roughly 10,000 shows on topics ranging from religion to wine to technology …' (The New Zealand Herald, 11th April 2005)
'Podcasters create radio-like programs of commentary, music or humor, which are saved in MP3 audio format and posted online.' (Reuters, 3rd April 2005)
'Grand Forks City Attorney Howard Swanson has determined that Mayor Mike Brown's "podcasted" radio shows are not an illegal use of city funds for political purposes …' (Grand Forks Herald, 17th March 2005)
As millions of mourners streamed into Rome to witness the funeral of Pope John Paul II on 8th April, Dutch priest Father Roderick Vonhogen used cutting edge technology to give an intimate audio tour to interested listeners throughout the world. The priest's Catholic Insider programme, featuring interviews with students on St Peter's Square and descriptions of the Pope lying in state in the basilica, exploited a new technique called podcasting.
Podcasting involves the creation of radio-style programmes on a wide range of topics, including music and audio commentary, which are posted on the Internet for downloading to a listener's own iPod or MP3 player. The basic idea is that instead of listening to radio shows over the airwaves, a listener can download the shows that they are really interested in, and listen to them when they want.
The noun podcast has already been coined to refer to such downloadable broadcasts, with websites like podcast.com offering access to hundreds of podcasts covering a wide range of topics and interests. Anyone who owns a microphone, and has access to the Internet and some simple software can also produce their own podcasts. Such DIY radio enthusiasts have been described as podcasters, and amateur podcasts are often alternatively referred to as audioblogs, a new take on the now extremely popular activity of blogging (writing online journals). Podcast is also used as both an intransitive and transitive verb on the model of the verb broadcast, with some evidence for structures such as podcast about something. Podcasted is a participle adjective in regular use, as in podcasted audio/content/shows.
Aficionados of this emerging technology predict that podcasting will revolutionize the world of radio. Recent research in the US suggests that over 6 million people are already regularly tuning in to podcasts, and the number is rising daily.
The noun podcasting and its derivatives are formed from a blend of the term iPod (a portable digital audio player manufactured by Apple Computers) and the verb broadcast.
The new technology of podcasting first came into the public eye in August 2004, its development and promotion mainly associated with Adam Curry, a former presenter on the music video channel MTV. The first recorded use of the term podcasting occurred earlier in the same year, however, when, along with audioblogging, it was aired in a Guardian newspaper article discussing the growing popularity of amateur online radio.
This article was first published on 16th May 2005.